I checked out Codex after the glowing reviews here around September/October, and it was, all in all, a letdown (this was writing greenfield modules in a larger existing codebase).
Codex was very context-efficient, but also slow (even though I used the highest thinking effort), and it barely adapted to the wider codebase at all (even when I pointed it at files to reference or take inspiration from). Lots of defensive programming, hacky implementations, and no adapting to the codebase's style and patterns.
With Claude Code, starting each conversation by referencing a couple of existing files, I am able to get it to write code mostly the way I would have written it. It adapts to existing patterns, adjusts to the code style, etc. I can steer it very well.
And now with the new cheaper, faster Opus it's also quite an improvement. If you kicked off Sonnet with a long list of constraints (e.g. 20), it would often ignore many of them. Opus is much better at “keeping more in mind” while writing the code.
Note: yes, I do also have an agent.md / claude.md. But I also rely heavily on warming up the context by dumping some relevant context at the start of each conversation.
All Codex conversations need to be caveated with the model used, because it varies significantly. Codex requires very little tweaking, but you do need to select the highest-thinking model if you're writing code, and I recommend the highest-thinking NON-code model for planning. That's really it; it takes task time up to 5-20 minutes, but it's usually great.
Then I ask Opus to take a pass and clean up to match the codebase specs, and it's usually sufficient. Most of what I do now is writing detailed briefs for Codex, which is… fine.
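For reference, the only Codex tweaking I carry around is a couple of lines in ~/.codex/config.toml, roughly like the sketch below (key names from memory, so double-check them against the current Codex docs before copying; I swap the model line depending on whether I'm planning or implementing):

    # ~/.codex/config.toml -- sketch from memory, verify key names against the Codex docs
    model = "gpt-5.2-codex"          # the code model I use for implementation sessions
    model_reasoning_effort = "high"  # slower, but this is the "highest thinking" setting I mean

For planning I point the same model key at the non-code reasoning model instead.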
This blog post lacks almost any form of substance.
It could've been shortened to: Codex is more hands-off; I personally prefer that over Claude's more hands-on approach. Neither is bad. I won't bring you proof or examples, this is just my opinion based on my experience.
> Codex is more hands-off; I personally prefer that over Claude's more hands-on approach
Agree, and it's a nice reflection of the individual companies' goals. OpenAI is about AGI, and they have insane pressure from investors to show that that is still the goal; hence, when Codex works they can say "look, it worked for 5 hours!", discarding that 90% of the time it's just pure trash.
Anthropic/Boris, meanwhile, is more about value now, more grounded/realistic, providing a more consistent and hence more trustable/intuitive experience that you can steer (even if Dario says the opposite). The ceiling/best-case scenario of a Claude Code session is maybe a bit lower than Codex's, but with less variance.
Well, if you had tried using GPT/Codex for development, you would know that the output from those 5 hours would not be 90% trash; it would be close to 100% pure magic. I'm not kidding. It's incredible as long as you use a proper analyze-plan-implement-test-document process.
I don't think the comparison to programming languages holds, or maybe only very tenuously. Coding assistants evolve constantly; you can't even talk about "Codex" without specifying the time range (i.e., Codex 2025-10), because it's different from quarter to quarter. Same with CC.
I believe this is the main source of disagreement / disappointment when people read opinions / reviews and then proceed to have an experience very different from what they expected.
Ironically, this constant improvement/evolution erodes product loyalty -- personally, I'm a creature of habit and will stay with a tool past its expiry date; with coding assistants / SOTA LLMs, I cancel and switch subscriptions all the time.
A lot of (carefully hedged) pro Codex posts on HN read suspect to me. I've had mixed results with both CC and Codex and these kinds of glowing reviews have the air of marketing rather than substance.
If only fair comparisons weren't so costly, in both time and money.
For example, I have a ChatGPT and a Gemini subscription, and thus could somewhat quickly check out their products, and I have looked at a lot of Google's various AI dev ventures, but I have not yet found the energy/will to get into Gemini CLI specifically. Antigravity with Gemini 3 Pro did some really wonky stuff when I tried it.
I also have a Windsurf subscription, which lets me try just about any frontier model for coding (well, most of the time, unless there's some sort of company beef going on). I have often used this to check out Anthropic models, with much less success than Codex with GPT-5.1 or newer – but of course, that's without using Claude Code (which I subscribed to for a month, idk, 6 months ago; it seemed fine back then, but not mind-blowingly so).
Idk! Codex (mostly via the VS Code extension) works really well for me right now, but I would assume this is simply true across the board: everything has gotten so much better. If I had to put my finger on what feels best about Codex right now, specifically: the fewest oversights and mistakes when working on gnarly backend code, with the amount of steering I am willing to put into it, mostly working off of 3-4 paragraph prompts.
You can check my history to confirm I criticize sama far too much to be an OpenAI shill.
I've been using frontier Claude and GPT models for a loooong time (all of 2025 ;)) and I can say, anecdotally, the post is 100% correct. GPT Codex, given good enough context and a good harness, will just go. Claude is better at interactive develop-test-iterate because it's much faster to get a useful response, but it isn't as thorough and/or fills in its context gaps too eagerly, so it needs more guidance. Both are great tools and complement each other.
The usage limits on Claude have been making it too hard to experiment with. Lately, I get about an hour a day before hitting session/weekly limits. With Codex, the limits are higher than my own usage so I never see them.
Because of that, everyone who is new to this will be focused on Codex and write their glowing reviews of the current state of AI tools in that context.
I've been using Claude Code most of the year, and Codex since soon after it was released:
It's important to separate vibe coding from vibe engineering here. For production coding, I create fairly strict plans -- not details, but sequences, step requirements, and documented updating of the plan as it goes (I sketch the shape below). I can run the same plan in both, and it's clear that Codex is poor at instruction following, because I see it go off-plan most of the time. At the same time, it can go pretty far on its own in an undirected way.
The result is that when I'm doing serious planned work aimed at production PRs, I have to use Claude. When it's experimental and I care about speed and distance rather than quality, such as for prototyping or debugging, Codex is great.
Edit: I don't think Codex being poor at instruction following is inherent, just where they are today.
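To make "fairly strict plan" concrete, here is the shape I mean -- everything in it is a made-up example; the point is the sequencing, the per-step requirements, and the agent updating the status lines as it goes:

    ## Plan: extract billing into its own module (hypothetical)
    Status: step 2 in progress -- update this line after every step.
    1. Inventory every call site of BillingService and write the list to notes/billing-callsites.md. No code changes in this step.
    2. Introduce the new module behind the existing interface. All tests must pass before moving on; record the test run here.
    3. Migrate call sites one package at a time; after each package, run the full suite and append the result here.
    4. Remove the old code path only after step 3 is marked complete above.
    Rules: never reorder or skip steps; if a step cannot be done as written, stop and update this plan instead of improvising.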
Respectfully, I don't think the author appreciates that the configurability of Claude Code is its performance advantage. I would much rather just tell it what to do and have it go do it, but I am much more able to do that with a highly configured Claude Code than with Codex, which is pretty much stuck at its out-of-the-box quality level.
I spend most of my engineering time these days not on writing code or even thinking about my product, but on Claude Code configuration (which is portable so should another solution arise I can move it). Whenever Claude Code doesn’t oneshot something, that is an opportunity for improvement.
Is this just things like skills and MCPs, or something else?
Skills, MCPs, /commands, agents, hooks, plugins, etc. I package https://charleswiltgen.github.io/Axiom/ as an easily installable Claude Code plugin, and AFAICT I'm not able to do that for any other AI coding environment.
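To make the /commands part concrete: a custom command is just a markdown file in .claude/commands/ whose body becomes the prompt, and the filename becomes the command name. The example below is hypothetical (.claude/commands/conventions-review.md; if I remember the placeholder right, $ARGUMENTS is replaced by whatever you type after the command):

    Review the changes in $ARGUMENTS against our codebase conventions:
    - match the error-handling and logging patterns of the surrounding package
    - call out any new dependency explicitly
    - list every deviation as file:line plus the minimal fix

Invoked as "/conventions-review src/billing", it saves retyping the same review criteria in every session, and the file travels with the repo.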
The process you have described for Codex is scary to me personally.
It takes only one extra line of code in my world (finance) to have catastrophic consequences.
Even though I am using tools like Claude/Cursor, I make sure to review every small bit they generate, to the point where I ask for a plan with steps and then have it perform each step and ask me for feedback; only when I give approval/feedback does it either proceed to the next step or iterate on the previous one. On top of that, I manually test everything I send for PR.
There is no value in just sending a PR versus sending a verified/tested PR.
With that said, I am not sure how much of your code is getting checked in without supervision, as it's very difficult for people to review weeks' worth of work at a time.
Just my 2 cents.
I think the author glosses over the real reason why tons of people use Codex over CC: limits. If you want to use CC properly you must use Opus 4.5 which is not even included in the Claude Pro plan. Meanwhile you can use Codex with gpt-5.2-codex on the ChatGPT Plus plan for some seriously long sessions.
Looks like Gemini plans have even more generous limits on the equivalently priced plans (Google AI Pro). I'd be interested in the experiences of people who used Google Antigravity/Gemini CLI/Gemini Code Assist for nontrivial tasks.
Thanks for the correction, looks like I misremembered. But limits are low enough with Sonnet that, I imagine, you can barely do anything serious with Opus on the Pro plan.
It's hard to compare the two tools because they change so much and so fast.
Right now, as an example, Claude Code with Opus 4.5 is a beast, but before that, with Sonnet 4.0, Codex was much better.
Gemini CLI, on the other hand, with gemini-flash-3.0 (which is strangely good for a "small and fast" model), is very good (but the CLI and the user experience are not on par with Codex or Claude yet).
So we need to keep those tools under constant observation. Currently (after gemini-flash-3.0 came out), I tend to submit the same task to Claude (with Opus) and Gemini to understand the behaviour. Gemini is surprising me.
This is an interesting opinion but I would like to see some proof or at least more details.
What plans are you using, what did you build, what was the output from both on similar inputs, what's an example of a prompt that took you two hours to write, what was the output, etc?
I've noticed a lot of these posts tend to go Codex vs Claude, but since the author is someone who does AI workshops, I'm curious why Cursor is left out of this post (and of posts like this more generally).
From my personal experience, I find Cursor to be much more robust because rather than "either/or" it's both, and I can switch depending on the time, the task, or whatever the newest model is.
It feels like the same vendor lock-in people often try to avoid in the software world, and Cursor gives you freedom from that. But maybe I'm on my own here, as I don't see it come up naturally in posts like these very often.
Speaking from personal experience and from talking to other users: the vendors' own agents/harnesses are just better, and they are customized for their own models.
What kinds of tasks do you find this to be true for? For a while I was using Claude Code inside the Cursor terminal, but I found it to be basically the same as just using the same Claude model there.
Presumably the harness can't be doing THAT much differently, right? Or rather, which of the tasks that are the harness's responsibility could differentiate one harness from another?
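Quite a lot, actually: the model only ever sees what the harness decides to show it. Here's a deliberately toy sketch of the loop (pure illustration, not any vendor's actual code; call_model and run_shell are stand-ins):

    def call_model(messages, tools):
        """Stand-in for a real LLM API call: returns a tool request or a final answer."""
        return {"type": "final", "content": "done"}  # a real call would go to the provider

    def run_shell(cmd):
        """Stand-in tool; a real harness sandboxes this and filters what output it keeps."""
        return f"(pretend output of: {cmd})"

    def harness(task, max_turns=20, context_budget=50_000):
        # System prompt: house rules, plan-following behaviour, tool descriptions.
        messages = [{"role": "system", "content": "Follow the plan. Prefer small diffs."},
                    {"role": "user", "content": task}]
        tools = {"run_shell": run_shell}
        for _ in range(max_turns):
            # Context management: what gets dropped or summarized is a harness decision.
            while sum(len(m["content"]) for m in messages) > context_budget and len(messages) > 2:
                messages.pop(1)
            reply = call_model(messages, tools)
            if reply["type"] == "final":
                return reply["content"]
            # Tool dispatch, approval policy, and how results are fed back: also the harness.
            result = tools[reply["name"]](reply["args"])
            messages.append({"role": "tool", "content": result})
        return "stopped after max_turns"

    print(harness("rename FooService to BarService"))

The system prompt, context pruning, tool schemas, approval/sandboxing policy, and retry behaviour all live in that loop, and they differ a lot between Claude Code and Codex even when the underlying model is identical -- which I assume is why each vendor's harness, tuned for its own model, tends to do best.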
GitHub Copilot also lets you use both models, Codex and Claude, with Gemini on top.
Cursor has this "tool for kids" vibe, and it's also more about the past ("tab, tab, enter" low-level coding) versus the future ("implement task 21" high-level delegating).
I do feel like the Codex CLI is quite a bit behind CC. If I recall correctly, it took months for Codex to get the nice todo tool Claude Code uses to structure a task into substeps in memory. I also really miss the ability to have the main agent invoke subagents.
All of this can of course be added using MCPs, but it's still friction. The Claude Code SDK is also way better than OpenAI Agents; there's almost no comparison.
Also, in general, when I hit bugs with Codex I was almost always sure to find an open GitHub issue with people already asking about a fix for months.
Still, I like GPT-5.2 very much for coding and general agent tasks, and there is EveryCode [1], a nice fork of Codex that mitigates a lot of its shortcomings.
[1] https://github.com/just-every/code
Seems like you wrote that at the same time I made my edit -- yes, Every Code is great; however, Ctrl+T is important to get terminal rendering, otherwise it has performance problems for me.
OpenAI needs to get access to Claude Code to fix them :)
Spec dev can certainly be effective, but having used Claude Code since its release, I’ve found the pattern of continuous refactoring of design and code produces amazing results.
And I’ll never use OpenAI dev tools because the company insists on a complete absence of ethical standards.