Claude Code is a powerful AI coding tool that runs right in your terminal. While it provides a lot of impressive utility, it also suffers from the same issues that plague similar AI tooling, with the addition of a pricing model that can get expensive.
Context is Important
To me, the main selling point of Claude Code is its ability to read through your entire codebase. A big shortcoming of many AI workflows is that the model only partially understands an issue because it doesn't have enough of the project's context and conventions to be effective. Claude Code can access all of the files in the directory where you initiated the session, and it can even ask for permission to search through directories outside the project. While potentially helpful, this is also a little sketchy when you consider that everything Claude processes gets sent over the wire to Anthropic's servers. According to their privacy policy, data sent for processing will not be used for LLM training unless the user specifically opts in or the content has been flagged for a Trust & Safety review. It's also worth knowing that Claude's data retention policy has changed in recent months and will likely continue to change.
Step by Step
When you start a Claude Code session and give it a task, it will usually break the work down into steps. Upon starting a step, Claude shows you what it wants to do and asks for permission to do it. In my experience, I always had three options when presented with Claude's suggestion: accept the suggested action, always accept that kind of action, or reject it and tell Claude to do something different. I would urge any developer to stick to the first and last options. Take the time to look at Claude's suggestion critically and decide whether it seems like the right way to go. Claude is usually logical, but it is also susceptible to hallucinations.

A big advantage of Claude Code living in the terminal is that it can run bash commands and verify its own work in lots of ways. It can actually run commands to see if the code it wrote compiles, if tests pass, or if a server spun up correctly. This was surprisingly useful when I tried it on my own projects: Claude raised a lot of the concerns I also held while working through a problem, and it highlighted potential mistakes and optimizations I had yet to consider. Claude is certainly not foolproof, though; sometimes it confidently tries to run a command that makes no sense, or writes code that is completely off track.
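Still, that self-verification loop is the part I find most valuable about the terminal placement. Conceptually it boils down to something like the sketch below. To be clear, this is not Claude Code's actual implementation, just an illustration of the kind of check an agent with shell access can perform, and the test command is an assumption (a Rails project using RSpec).

```python
# Conceptual sketch only: not Claude Code's internals, just the kind of
# "run it and check the result" loop that terminal access makes possible.
# The test command below is an assumption (a Rails project using RSpec).
import subprocess


def verify_change(test_command=("bundle", "exec", "rspec")):
    """Run the project's test suite and report whether it passed."""
    result = subprocess.run(test_command, capture_output=True, text=True)
    passed = result.returncode == 0
    # An agent would feed this output back into its next step
    # (fix the failure, or move on to the next task).
    print("tests passed" if passed else result.stdout[-2000:])
    return passed


if __name__ == "__main__":
    verify_change()
```

That ability to close the loop, write the change, run the suite, and react to the output, is what separates it from pasting snippets into a chat window.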
The Cost Question
Claude Code isn't cheap, and it charges you based on token usage. Each session burns through API tokens at a rate that can add up fast if you're using it regularly. If Claude is going through one of its episodes, it can end up repeating itself in logical thought loops that burn tokens quickly. For hobbyists or developers at smaller companies, this can create a weird calculus where you constantly ask whether the task you're about to delegate is worth it. That friction fundamentally changes how you interact with the tool: instead of treating it as a seamless assistant, you're rationing its use (which is probably a smart thing to do with AI anyway 😬).
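To make that "is it worth it" math concrete, here's a rough back-of-envelope sketch. The per-token prices and token counts are placeholder assumptions for illustration, not Anthropic's actual rates; the point is just that context-heavy sessions multiply quickly.

```python
# Back-of-envelope math for the "is this task worth delegating?" calculus.
# All numbers are illustrative placeholders, NOT Anthropic's actual pricing.
INPUT_PRICE_PER_MTOK = 3.00    # hypothetical $ per million input tokens
OUTPUT_PRICE_PER_MTOK = 15.00  # hypothetical $ per million output tokens


def session_cost(input_tokens, output_tokens):
    """Estimate the dollar cost of one session from token counts."""
    return (
        (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK
        + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK
    )


# A session that reads a lot of project context and writes a modest diff:
print(f"${session_cost(400_000, 30_000):.2f}")  # ~$1.65 at these placeholder rates
```

A dollar or two per session sounds trivial until it's dozens of sessions a day, every day, and the context-heavy input side dominates.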
When It Works (And When It Doesn't)
Claude Code is pretty impressive when working on well-defined problems in codebases with obvious patterns and structure (like a nice Ruby on Rails app). Need to refactor some logic in a Rails model or service object? Want to add error handling across several files? Need to write specs for new features? These are the tasks where I found Claude Code to have the most success. It typically understands the scope, stays consistent, and navigates the files without losing its train of thought.
But it struggles with ambiguity. Vague requests like "make this faster" or "fix the bugs" often lead to Claude making assumptions that don't align with what you actually meant. It also tends to be overly conservative sometimes and overly aggressive at other times, with no middle ground: it might refuse to modify a file because it's "not sure" what you want, or it might rewrite an entire module when you just wanted a small tweak.
My Verdict
Claude Code is a pretty cool tool that improves certain workflows, but it's not some flawless leap forward in the AI revolution. It's expensive, it requires diligent supervision, and it can make some pretty baffling mistakes. For the right tasks, particularly refactoring, boilerplate generation, or getting unstuck on a bug, it can save a significant amount of time. But don't expect to hand Claude Code your project directory and walk away. It's more of a supplement to your workflow than a replacement for it. And for me, that's probably how it should be. The day we can fully trust AI to write production code unsupervised isn't here yet, and it may be further off than the hype suggests.
Use Claude Code with your eyes open. Review everything. Don't hit "always accept." And maybe check your API bill more often than you'd like to.