Hacker News

solenoid0937 · yesterday at 1:11 AM · 12 replies

> whether the company that branded itself as the ethical AI lab actually is one

FWIW, I have two (!!) close friends working at Anthropic, one for nearly two years and one for about four months.

Both of them tell me that this is not just marketing: the company actually is ethical and safety-conscious everywhere, and that this was the most surprising part of joining Anthropic for them. They insist the culture is genuinely like that, which is practically unicorn-level rarity in corporate America.

We have all worked at FAANG companies, so I know where they're coming from; this got me to drop my cynicism for once, and I plan on interviewing with them soon. Hopefully I can answer this question for myself.


Replies

root_axis · yesterday at 2:03 AM

Yeah, every engineer in the Bay Area has a way of framing the business they work for as a benign force for good... until they find themselves working somewhere else, at which point they suddenly have a lot to say about the unacceptable things going on there.

From the outside, I find Anthropic's hyperbolic marketing to be an indication that they are basically the same as every other Bay Area tech startup: more or less nice folks who are primarily concerned with money and status. That's not a condemnation, but I reject all the "do no evil" fanfare as conveniently self-serving.

Bolwin · yesterday at 4:33 AM

I find it bizarre that even the public image of Anthropic is seen as ethical after the Department of War debacle, in which they themselves admitted they had basically no qualms about their tech being used for war and slaughter, with only two very thin lines drawn: mass surveillance of American citizens and fully automated weaponry with their current models.

It only showed they were marginally more ethical than OpenAI and xAI, which isn't saying much.

__alexs · yesterday at 8:42 AM

If you know even the basics of ethics, then such claims are clearly nonsense. There is no stable, context-independent ethical behaviour. This is a great example of the dangers of motivated reasoning.

MichaelDickens · yesterday at 2:41 PM

Maybe people inside the company think Anthropic behaves ethically, which says something scary about either their ethical standards or their general awareness, considering how much documented unethical behavior we've seen from Anthropic leadership.[1]

[1] "Unless Its Governance Changes, Anthropic Is Untrustworthy" https://anthropic.ml/

DirkH · yesterday at 7:04 AM

I have multiple friends at Anthropic. I can second this. One thing I notice about Anthropic culture is that it is unusually kind.

So much so that I worry they won't be Machiavellian enough to survive. Hope I am wrong.

foolswisdom · yesterday at 1:56 AM

I think cynicism is deserved just from observing Dario's remarks.

victorevector · yesterday at 2:21 PM

I'm curious: how do ethics and safety consciousness manifest themselves there? Is it more cultural or process-driven? Do you have any examples?

jarek-foksa · yesterday at 10:01 AM

> the company actually is ethical and safety conscious everywhere

I wonder what Anthropic tries to achieve by spreading such blatant lies with their bot accounts. I'm definitely not buying anything from a company so morally corrupt as to smear the competition while claiming to be somehow "ethical". And I'm not talking just about this thread; it's a recurring pattern on Reddit.

hollerith · yesterday at 4:08 PM

>the company actually is ethical and safety conscious everywhere

Anthropic is emphatically not safe. None of the AI labs with customers (i.e., excluding a few small nonprofits whose revenue comes from donations) are anything like safe, because of extinction risk. The famous positive regard that Anthropic employees have for their organization's mission means almost nothing, because there have been hundreds of quite destructive cults and political parties whose members believed that theirs was the most ethical and benign organization ever.

The best thing you can say about Anthropic is that if you have to support some AI lab by becoming a customer, investor, or employee, it is slightly less dangerous for the world to support Anthropic than OpenAI, although IMHO (and I admit I am in a minority on this among extinction-risk activists) it is slightly less dangerous to support Google DeepMind or Mistral than Anthropic.

All four organizations I mentioned should be shut down tomorrow with their assets returned to shareholders.

The current crop of services provided by the leading AI labs is IMHO positive on net in its effect on people and society. But the leading AI labs are spending a large fraction of the hundreds of billions of dollars they've received from investors on creating more powerful models, and they might succeed in their goal of creating models much more powerful than the ones they have now, which is when most of the danger would manifest.

The leaders of all of the leading AI labs have the ambition of completely transforming society and the world through AI.

keybored · yesterday at 11:48 AM

Are your friends also credited in Silicon Valley (2014)?

hypersoar · yesterday at 2:48 AM

[flagged]
