Hacker News

Joeri · yesterday at 8:18 AM · 13 replies

I already switched to Claude a while ago. I didn't bring along any context, just switched subscriptions, walked away from ChatGPT, and haven't touched it since. It turned out to be a non-event; there really is no moat.

I didn't switch because I thought Claude was better at the things I want to do. I switched because I have come to believe OpenAI is a bad actor and I do not want to support them in any way. I'm pretty sure they would allow AGI to be used for truly evil purposes, and the events of this week have only convinced me further.


Replies

kdheiwns · yesterday at 10:34 AM

Yesterday was my first time trying it. One thing that felt a bit strange was that I asked it something and the response was just one paragraph. Which isn't bad or anything, but it felt... strange? With ChatGPT/Gemini/whatever I always need to preface a question with "Briefly, what is..." or it gives me enough fluff to fill a five-page high school essay. But here I didn't need to do that and just got an answer that was to the point, without loads of shit that's barely related.

And the weirdest thing I noticed: instead of skimming the response to find what was relevant, I just straight up read it. It kind of felt like I got a slight amount of focus ability back.

Accuracy is something I can't really compare yet (all chatbots feel generally the same for non-pro level queries), but so far, I'm fairly satisfied.

KellyCriterion · yesterday at 8:39 AM

> there really is no moat.

For ChatGPT and Gemini, yes.

But Claude has a very deep and big one: it's the only model that produces production-ready output on the first detailed prompt. Yesterday I used up my tokens by noon, so I tried some output from Gemini & Co. I gave them a working piece of code that is already in production:

1. It silently changed things like "Touple.First.Date.Created" and "Touple.Second.Date.Created" to "Touple.FirstDate" and "Touple.SecondDate", which rendered the code unworking.

2. There was a const list of 12 definitions for a given context; when told to rewrite the function, it just cut 6 of those 12 definitions, so the code no longer compiled. I asked why they were cut: "Sorry, I was just too lazy typing" ?? LOL

3. There is a list holding some items, "_allGlobalItems" - inside the function it simply renamed it to "_items", and the code didn't compile.

As said, a working version of a similar function was given upfront.

With Claude, I never have such issues.
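The rename failure in point 3 can be sketched in a few lines. This is a hypothetical reconstruction, not the original codebase: only the `_allGlobalItems` name comes from the comment, everything else (the function, the item values) is illustrative.

```typescript
// Sketch of failure mode 3: a model silently renames an identifier
// that the rest of the code depends on. `_allGlobalItems` is the name
// from the comment above; the rest is made up for illustration.
const _allGlobalItems: string[] = ["a", "b", "c"];

// Original, working function: refers to the list by its real name.
function countGlobalItems(): number {
  return _allGlobalItems.length;
}

// The model's rewrite switched the reference to `_items`. Since no
// `_items` binding exists in scope, the file fails to compile:
//
//   function countGlobalItems(): number {
//     return _items.length; // error TS2304: Cannot find name '_items'.
//   }

console.log(countGlobalItems()); // 3
```

The failing variant is kept in a comment because, as the commenter describes, the whole point is that it does not compile.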

crossroadsguy · yesterday at 10:25 AM

I wrote off ChatGPT/OpenAI because of Sam Altman and those eyeball-scan things - so sort of even before all this was a rage and centre stage. Sometimes it's just gut feeling, and while that may not always be accurate, if something doesn't "feel" right, maybe it is not right. No one else is all good either, but what I mean is that there are some entities/people who repeatedly don't feel right, who have things attached to them that never felt right, and you end up with a combined "gut feeling". At least that's how it was for me.

Buttons840 · yesterday at 3:43 PM

I love no moat!

One day I'd like to set up a server in my basement that just runs a few really, really nice models, and then get some friends and co-workers to pay me $10 a month for unlimited access.

All with the understanding that if you hog the entire server I'm going to kick you off, and if you generate content that makes the feds knock on my door I'm turning over the server logs and your information. Don't be an idiot, and this can be a good thing between us friends.

It would be like running a private Minecraft server. Trust means people can usually just do what they want in an unlimited way, but "unlimited" doesn't necessarily mean you can start building an x86 processor out of redstone and lagging the whole server. And you can't make weird naked statues everywhere either.

Usually these things aren't issues among a small group. Usually the private server just means more privacy and less restriction.
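The "kick you off if you hog the server" policy above is basically per-user rate limiting. A minimal sketch of one common approach, a token bucket per friend - all names and limits here are hypothetical, not any real server's API:

```typescript
// Per-user token bucket: each request spends a token; tokens refill
// over time, so bursts are allowed but sustained hogging is refused.
class TokenBucket {
  private tokens: number;

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity; // start full
  }

  // Call once per second (e.g. from a timer) to refill.
  tick(): void {
    this.tokens = Math.min(this.capacity, this.tokens + this.refillPerSec);
  }

  // Returns true if the request is allowed, false if the user ran dry.
  tryRequest(): boolean {
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Hypothetical limits: 5-request burst, 1 request/sec sustained.
const perFriend = new TokenBucket(5, 1);
const results = Array.from({ length: 7 }, () => perFriend.tryRequest());
console.log(results); // first 5 allowed, the last 2 refused
```

A real setup would keep one bucket per API key in front of the model server, but the fairness idea is the same.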

jacquesm · yesterday at 10:03 AM

> I’m pretty sure they would allow AGI to be used for truly evil purposes

It's perfectly possible that 'truly evil purposes' were the goal all along. Slogans and ethics departments are mere speed bumps on the way to generational wealth.

rustyhancock · yesterday at 8:42 AM

I know this is going to be a very unpopular opinion here.

I think HN in particular, as a crowd, is very vulnerable to the halo effect and groupthink when it comes to Anthropic.

Even being generous they are only very minimally a "better actor" than OpenAI.

However, we are so enthralled by their product that we tend to let that view bleed over into their ethics.

Saying you want your tools used in line with the US constitution, within the US, on one particular point is hardly a high moral bar; it's self-preservation.

All Anthropic have said is:

1. No mass domestic surveillance of Americans.

2. No fully autonomous lethal weapons yet.

My goodness, is that what passes for a high moral standard? Is anything that doesn't hit those very carefully worded points really not "evil"?

bko · yesterday at 3:16 PM

I never understood the point of this kind of comment. It doesn't add any value to the discussion. It's basically two paragraphs with a presupposition (OpenAI bad) and a claim that the author is virtuous for canceling his subscription. No explanation, argument, or nuance - it's just virtue signaling. Actually... I guess I do know the point of this kind of comment. I just don't know why these comments get upvoted, even if you do agree that OpenAI is bad.

bossyTeacher · yesterday at 10:56 AM

I tried Claude recently (after they dropped the nonsensical requirement to give them your phone number) and I was surprised by how significantly less sycophantic it was. ChatGPT, unless you are talking hard science, tends to be overly agreeable. Claude questions you a lot (you ask for x and it asks things like: why are you interested in x; or, based on our previous convo, x might not be suitable for you; or, I see your point, but based on our previous convo, y is better than x). ChatGPT rarely does that.

Of course, there's also OpenAI being run by openly questionable people, while Dario so far doesn't seem anywhere near as bad, even if none of them are angels.

samiv · yesterday at 11:26 AM

I did the same thing and cancelled my OpenAI plan today. Besides boycotting it over their latest grifting, I also found it didn't really produce much value in my use cases.

Moving back to doing this archaic thing called using my own brain to do my work. Shocking.

mannanj · yesterday at 2:10 PM

Yes, though they have a great marketing team and a powerful astroturfing presence, especially with the recent "Claude beat up OpenClaw! OpenAI is supporting the community by buying it!" nonsense.

Though tbh I hardly feel Claude is innocent either. When their safety engineer/leader left, I didn't see any statement from the Anthropic team addressing his legitimate points about why he left. Instead we got an eager over-push in the media cycle of "Anthropic standing up to DOD! Here's why you can trust us!"

It all sounds too similar to propaganda and astroturfing to me.

Gooblebrai · yesterday at 9:42 AM

Claude still doesn't have image generation?
