Hacker News

crazygringo · yesterday at 11:39 PM

> “We have high confidence that the actor likely leveraged an A.I. model to support the discovery and weaponization of this vulnerability,” the report said.

I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

I'm not being snarky or critical, I'm genuinely wondering what about an attack could possibly indicate it was discovered with LLM assistance?

Like, unless the attackers' computers have been seized and they've been able to recover the actual LLM transcript history? But nothing in the article indicates that the hackers have been caught, just that a patch was developed.


Replies

bigp3t3 · today at 6:23 AM

From Google's GTIG report: https://cloud.google.com/blog/topics/threat-intelligence/ai-...

"Although we do not believe Gemini was used, based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability. For example, the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLMs training data (e.g., detailed help menus and the clean _C ANSI color class) "

chromacity · today at 1:07 AM

> I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

Google, Cloudflare, and Microsoft are a trio of companies that get to see most of what goes on on the internet. I imagine that if they see you attacking them, they can work back from that and get remarkably far, even against sophisticated actors. If it's their LLM, they presumably keep transcripts. If you searched for the affected API function via a search engine, they almost certainly know. Even if you used a competing search product, you probably went to a site that has Google Analytics. Oh, and one of these companies probably has your DNS lookups. And a good chunk of the world's email traffic. And telemetry from your workstation. And auto-uploaded crash reports... And if it's bad, they can work together behind the scenes to get to the bottom of it.

So, when their threat intel orgs say they have high confidence in something, I'd be inclined to believe it.

DrewADesign · today at 12:43 AM

Well, it’s great marketing for LLM products at the enterprise level. Even if they weren’t sure, they have every incentive to run with it now, and then issue a “whoopsie daisy” apology later, after the tech media stops paying attention.

_alternator_ · today at 2:21 AM

The article strongly implies they have the (Python) source code, and that it looks LLM-generated. I don't know about you, but I can usually tell LLM code from a mile away.

neya · today at 6:56 AM

We are going to be seeing a lot of these going forward. It's the easy way out. If you've worked with Google, you will know that it's an environment where accountability doesn't thrive. You will find people who know nothing about Google's product portfolio holding advisory roles around those products. They don't care; there's no one to even question them. They just know to make colourful graphs with the most useless metrics to justify that they "add value" to the company. Expecting them to take accountability is like trying to mix oil and water.

HlessClaudesman · today at 6:14 AM

Humans can sometimes find a needle in a haystack, but it's impossible for us to find multiple needles in multiple haystacks and chain them together into an attack. AIs can work through a complex search space much more efficiently; that's the tell.

glenstein · today at 12:29 AM

The article says it included excessive explainer text. And I'm almost positive an earlier version of the article mentioned hallucinated library references, though I don't see it in the present version.

eatsyourtacos · yesterday at 11:46 PM

Maybe after they realized how they were vulnerable, they asked an LLM to find the exploit through similar means, to try to replicate it. That still doesn't prove it, but maybe it gives them confidence that this weird thing can only really be found that way.

slater · today at 12:42 AM

> I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

Excessive use of em-dashes, and emoji bullet points in the README

yacthing · today at 12:16 AM

Maybe they saw traffic that looked like AI prodding an API and quickly adapting to find the bug?

But at this point I feel like odds are everyone looking for vulnerabilities is using AI to some extent. Why wouldn't they? It'd be stranger if they didn't.

nullc · yesterday at 11:56 PM

Presumably the attacker used Google's own LLM, and Google searched the history of all user chats to find the transcript.

I say this only slightly in jest, as that's about the only thing I can think of which would legitimately give them 'high confidence'.
