I’m voting with my dollars: I cancelled my ChatGPT subscription and subscribed to Claude instead.
Google needs stiff competition and OpenAI isn’t the camp I’m willing to trust. Neither is Grok.
I’m glad Anthropic’s work is at the forefront and they appear, at least in my estimation, to have the strongest ethics.
An Anthropic safety researcher just recently quit with very cryptic messages, saying "the world is in peril"... [1]
Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.
Anthropic just raised 30bn... OpenAI wants to raise 100bn+.
Thinking any of them will actually be restrained by ethics is foolish.
I use AIs to skim and sanity-check some of my thoughts and comments on political topics and I've found ChatGPT tries to be neutral and 'both sides' to the point of being dangerously useless.
Where Gemini or Claude will look up the info I'm citing and weigh the arguments made, ChatGPT will sometimes omit parts of my statement or modify it if it wants to advocate for a more "neutral" understanding of reality. It's almost farcical how it tries to avoid inference on political topics, even where inference is necessary to understand the topic.
I suspect OpenAI is just trying to avoid the ire of either political side and has given it some rules that accidentally neuter its intelligence on these issues, but it made me realize how dangerous an unethical or politically aligned AI company could be.
Anthropic was the first to spam Reddit with fake users and posts, flooding and controlling their subreddit to turn it into a giant sycophant.
They nuked the internet by themselves. Basically they are the willing and happy instigators of the dead internet as long as they profit from it.
They are by no means ethical; they are a for-profit company.
The funny thing is that Anthropic is the only lab without an open-source model.
I’m going the other way to OpenAI due to Anthropic’s Claude Code restrictions designed to kill OpenCode et al. I also find Altman way less obnoxious than Amodei.
Anthropic (for the Super Bowl) made ads about not having ads. They cannot be trusted either.
You "agentic coders" say you're switching back and forth every other week. Like everything else in this trend, its very giving of 2021 crypto shill dynamics. Ya'll sound like the NFT people that said they were transforming art back then, and also like how they'd switch between their favorite "chain" every other month. Can't wait for this to blow up just like all that did.
Grok usage is the most mystifying to me. Their model isn't in the top 3 and they have bad ethics. Why would anyone bother with it for work tasks?
I dropped ChatGPT as soon as they went to an ad supported model. Claude Opus 4.6 seems noticeably better than GPT 5.2 Thinking so far.
I did this a couple months ago and haven't looked back. I sometimes miss the "personality" of the GPT model I had chats with, but since 99% of the time I'm just using Claude for eng-related stuff, it wasn't worth keeping ChatGPT as well.
I pay multiple camps. Competition is a good thing.
Same. I'm all in on Claude at the moment.
> I’m glad Anthropic’s work is at the forefront and they appear, at least in my estimation, to have the strongest ethics.
Damning with faint praise.
> in my estimation [Anthropic has] the strongest ethics
Anthropic are the only ones who emptied all the money from my account "due to inactivity" after 12 months.
Which plan did you choose? I am subscribed to both and would love to stick with Claude only, but Claude's usage limits are so tiny compared to ChatGPT's that it often feels like a rip-off.
Here I am, thinking they will all betray us if the incentives are there and the institutional and political environment allows it, so I consume based on simpler criteria, like which service is cheaper (which is OpenAI). But reading this, I feel like I need to dust off the old Peter Singer tomes and try harder to consume more ethically.
Trust is an interesting thing. It often comes down to how long an entity has been around to do anything to invalidate that trust.
Oddly enough, I feel pretty good about Google here with Sergey more involved.
This sounds suspiciously like the #WalkAway fake-grassroots stuff.
It definitely feels like Claude is pulling ahead right now. ChatGPT is much more generous with their tokens but Claude's responses are consistently better when using models of the same generation.
Same, and honestly I haven't really missed my ChatGPT subscription since I canceled. I also have access to both ChatGPT and Claude enterprise tools at work, and I rarely feel like I want to use ChatGPT in that setting either.
Jesus people aren't actually falling for their "we're ethical" marketing, are they?
This is just you verifying that their branding is working. It signals nothing about their actual ethics.
I use Claude at work, Codex for personal development.
Claude is marginally better. Both are moderately useful depending on the context.
I don't trust any of them (I also have no trust in Google nor in X). Those are all evil companies and the world would be better if they disappeared.
Their "ethics" is literally saying China is an adversary country and lobbying to ban it from the AI race, because open models are a threat to their business model.
idk, Codex 5.3 frankly kicks Opus 4.6's ass IMO... Opus I can use for about 30 minutes; Codex I can run almost without any break.
uhh..why? I subbed just 1 month to Claude, and then never used it again.
• Can't pay with iOS In-App-Purchases
• Can't Sign in with Apple on website (can on iOS but only Sign in with Google is supported on web??)
• Can't remove payment info from account
• Can't get support from a human
• Copy-pasting text from Notes etc gets mangled
• Months and no fixes
Codex and its Mac app are a much better UX, and seem better with Swift and Godot than Claude was.
Ethics often folds in the face of commercial pressure.
The Pentagon is thinking about severing ties with Anthropic because of its terms of use [1], and in every prior case we've reviewed (I'm the Chief Investment Officer of Ethical Capital), the ethics policy was deleted or rolled back under that kind of pressure.
Corporate strategy is (by definition) a set of tradeoffs: things you do, and things you don't do. When Google (or Microsoft, or whoever) rolls back an ethics policy under pressure like this, what they reveal is that ethical governance was a nice-to-have, not a core part of their strategy.
We're happy users of Claude for similar reasons (the perception that Anthropic has a better handle on ethics), but companies always find new and exciting ways to disappoint you. I really hope that Anthropic holds fast, and can serve in future as a case in point that the Public Benefit Corporation is not a purely aesthetic form.
But you know, we'll see.
[1] https://thehill.com/policy/defense/5740369-pentagon-anthropi...