Google recently dropped Gemini CLI, a new natural-language AI assistant for your terminal that marks a real shift in how developers interact with their tools.
You can use natural language prompts to chat with the AI and speed up bread-and-butter tasks like shell commands, bug fixes, scripting, and producing docs. If you’ve ever used a terminal to run a build script, restart a local server, poke around in logs, or wrangle some test data, you’ll realize how handy these features are.
But is Gemini CLI as good as it’s cracked up to be? After trying it out, we’ve put together a few observations, which we’ll explore in more detail in this article.
You open your terminal, and instead of typing commands like npm install or git pull, you get a virtual assistant that you can direct at any time.
You chat with it using natural language prompts, such as “Read this file and tell me what’s going on,” or “Write a quick script to fetch some data from this API.” As long as the prompt is within its scope of understanding, the AI agent will do it for you.
Gemini CLI runs on Node.js, so if you’re already using JavaScript or other web-based tools, installing it via npm is easy enough.
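At the time of writing, getting started looks roughly like this; it assumes a recent Node.js release and the package name published in Google’s repo, so check the official docs if anything has moved:

```bash
# Install the CLI globally via npm (package name at the time of writing)
npm install -g @google/gemini-cli

# Launch the interactive session; on first run it prompts you to sign in,
# for example with a personal Google account
gemini
```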
The agent is powered by Gemini 2.5 Pro, a robust AI model with a massive one million token context window. This means it can understand big chunks of code or even whole projects, keeping track of what’s going on across files.
It can also carry out live Google searches while working on a task. If the AI needs more context or doesn’t quite understand what you’re asking for, it can look it up in real time.
It also reasons and acts, like an agent. It plans what to do, runs tasks like reading files, fetching stuff from the web, or executing commands, and will adjust its approach based on the results.
Let’s say you’re working on a project, and someone logs a bug on GitHub. Normally, you’d read through the issue, track down the offending file and line, write a fix, add a test, and push the change yourself.
With Gemini CLI, you can pretty much talk through the whole thing. You might start by saying something like, “Read this GitHub issue and check where the bug is happening.”
Gemini will read it, reason through it, and come back with a response like, “The problem looks like it could be in auth.js, line 42.” You can then check the relevant line and, if it makes sense, follow up with something like, “Fix it and write a test for it.” The AI will go ahead and apply the fix, write the test, and run it to confirm the change works.
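A session for that kind of fix might look roughly like this; the prompts are the ones above, and the bracketed lines are a paraphrase of the agent’s behavior rather than verbatim output:

```text
> Read this GitHub issue and check where the bug is happening.
  [the agent fetches the issue, scans the repo, and points at auth.js, line 42]

> Fix it and write a test for it.
  [the agent edits auth.js, adds a test, runs the test suite, and reports the result]
```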
It’ll explain what it’s doing as it goes, so you can check the quality of the output. As with all natural language models and agents, it isn’t perfect.
Sometimes it guesses wrong or misses the mark on context or logic, but for simple tasks it can save you a lot of clicking around, especially when you’d otherwise be reading through files and switching context. It’s like having a junior dev on standby who’s super-fast and fairly reliable, but you’ll have to check their work as you go to be sure it’s right.
There’s a tendency among some developers to dismiss natural language AI tools as little more than a clever autocomplete. Although there’s some truth to this, it’s also a bit unfair, especially when it comes to tools that run on an agent model.
With an AI agent, you do more than just type commands and get responses. The model reasons in steps, similar to the way a human does when trying to solve a problem. It does this using something called a “ReAct loop”, which follows a sequence like this: it reasons about what to do next, acts by running a command, reading a file, or searching the web, observes the result, and then repeats until the task is done.
Instead of giving you a single answer straight away, it breaks the task into stages and adjusts what it does based on what it discovers. It’s a bit like pair programming, with the tool “thinking” alongside you and speeding things up.
You’re sure to have heard the term “vibe coding” by now. It’s a move away from traditional coding: instead of mapping out the problem and then typing out the code to solve it, you talk the problem through with an AI agent and build the code together.
This conversational development approach means your thought processes and interaction with the AI agent matter more than the nuts and bolts of coding. You’re less hands-on in terms of writing the code, but you provide the creativity that shapes how it all comes together.
You guide the AI, pointing it in the right direction to achieve the solution you have in mind. Then you review the output in the same way a system designer would. It’s more about direction and supervision than syntax.
To make sure the AI responds in a way that suits your workflow and project, you can use GEMINI.md files to define specific behavior, approach, tool usage, and tone.
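As a rough illustration, a project-level GEMINI.md is just a plain markdown file of instructions. The contents below are entirely hypothetical, but they show the kind of guidance you might put in one:

```markdown
# Project notes for Gemini CLI

- This is a TypeScript monorepo; prefer TypeScript for any new code.
- Run `npm test` after making changes and report the results.
- Keep explanations short and skip the pleasantries.
- Ask before touching anything under `infra/`.
```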
The beauty of vibe coding is that it saves you a lot of time on “donkey work”. You can pipe files, generate tests, write markdown docs, clean up old code, and get it to explain confusing functions, all by simply chatting with the AI agent through natural prompts.
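In practice, a lot of that donkey work boils down to a one-liner. The examples below are hypothetical, and the exact flags may vary between versions (check `gemini --help`), but the pattern is to pipe content in and ask for what you want back:

```bash
# Hypothetical examples; flag names may differ between versions.
# Ask for a plain-English summary of a noisy log file
cat server.log | gemini -p "Summarize the errors in this log and suggest likely causes"

# Draft a first-pass README for the current project
gemini -p "Write a README.md for this project, covering setup, usage, and testing"
```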
Gemini CLI takes context window size to a whole new level. Its one-million-token window means it can look at far more code at once than most tools: enough to hold a whole mid-sized project, or a sizeable chunk of a much larger codebase, in a single session.
This means it can spot links and dependencies across files and generate coherent test plans that actually tell you what might happen if you tweak something in one spot.
However, it doesn’t mean you can stop thinking. Just because Gemini can “see” your whole project doesn’t mean it actually understands it the way a dev would. It’s great at tracking patterns and catching things across files, but you’ve still got to steer the ship to stay out of choppy waters.
For instance, the AI agent might suggest changes that look legit but miss subtle side effects, especially in older, messier codebases where the logic is tangled.
There’s no denying that the AI is an incredibly powerful co-pilot, but only if you use it with caution. You still need to double-check its work, especially when it comes to things like edge cases, security-sensitive code, and changes that ripple through tangled legacy logic.
It’s useful for getting unstuck and seeing the big picture, but it’s best to treat its output like a rough first draft, i.e. helpful but not final.
On the surface, Gemini CLI looks friendly. It’s open source, so you can plug in your own tools and use the Model Context Protocol, which makes it compatible with different systems. There’s also a very generous free tier that doesn’t ask you to register a credit card. You can just sign up with your personal Google account.
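For instance, MCP servers are wired in through Gemini CLI’s settings file. The exact schema is described in the project’s docs; the entry below is a hypothetical sketch, and the server name, package, and token are placeholders:

```bash
# Hypothetical sketch: register an MCP server in the project-level settings file.
# Check the Gemini CLI docs for the exact schema before relying on this.
mkdir -p .gemini
cat > .gemini/settings.json << 'EOF'
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token-here" }
    }
  }
}
EOF
```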
However, once you’re in, you’ll start to notice how closely it’s tied to the rest of Google’s AI stack, including Vertex AI and Code Assist. What looks like a free and open CLI tool is quietly pulling you into the wider Google ecosystem, which gradually becomes the backbone of your workflow.
This isn’t necessarily a bad thing, as Google’s AI tools are up there with the industry leaders. But it’s undeniably a Trojan horse move: everything looks open, yet the longer you use it, the more you’re building on Google’s stack.
It boils down to whether you’re okay with that trade-off or you’d like to keep things more portable.
Reasons To Go All-In with Google

- Google’s models are up there with the industry leaders, and the one-million-token context window is hard to match.
- The free tier is genuinely generous, with no credit card required.
- Tight integration with Vertex AI and Code Assist pays off if you’re already on Google’s stack.

Reasons To Stay Portable

- The deeper you build on Google’s stack, the harder it is to switch later.
- Rate limits and silent downgrades from Pro to Flash can disrupt your workflow.
- Open standards like MCP keep your tooling options open, and rivals such as Claude Code still edge ahead on complex tasks.
When Gemini CLI is working properly, it’s scary good at times. You ask for a script or a markdown summary, and it’s done in seconds; you review it and find only a couple of mistakes.
But it’s not always such smooth sailing. Sometimes it makes weird mistakes out of the blue. Or it rambles on with long, over-explained “reasoning steps” that sound smart but aren’t helpful on closer inspection.
Sometimes it downgrades you from the Gemini Pro model to Flash without warning, and you feel the drop in quality straight away. Or you hit a rate limit (those annoying 429 errors) and your flow grinds to a halt.
When you throw more complex tasks at it, like refactoring a UI for different screen sizes or doing tricky code optimization, it wobbles even more. Claude Code still feels a bit more “on it” for those kinds of tasks.
Overall, Gemini CLI is a beast for streamlining lightweight tasks, but you can’t throw the kitchen sink at it yet. At least for now, it’s more of a promising junior assistant than a clever senior dev, so don’t expect miracles.
Google Gemini CLI and similar tools represent a change in how developers use the terminal. What was once a solitary affair is now shared with a fast, yet flawed, AI agent.
That raises questions. We’re still in the driving seat for now, but where will the future take us? Will we keep full control, or will we just tap the brakes when things go wrong and steer the AI back onto the right track?
One thing’s for sure: the tools are getting smarter and show no signs of slowing down. Whether that turns out to be a productivity win for developers or a risky loss of control remains to be seen.