Hacker News

gck1 · today at 1:26 AM · 8 replies

I don't understand how this can be enforced without ridiculous levels of false positives. I'm truly baffled. Same with the Claude Code situation.

gemini-cli, claude-code, codex, etc. ALL have a -p flag or equivalent: a non-interactive IO interface to their LLM inference.

If I wire my tooling (or openclaw) to use the -p flag (or equivalents), is that allowed?

Okay, maybe they get rid of the -p flag and I have to use an interactive session. I can then just use OS-level IO tooling to wire OpenClaw up to their CLI. Is that allowed?
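To be concrete, wiring tooling to a -p-style flag is just a subprocess call. A rough sketch, using `cat` as a stand-in for the actual CLI binary (the real invocation would be the vendor's binary with its non-interactive flag, which I'm deliberately not hardcoding here):

```python
import subprocess

def ask(prompt: str, binary: str = "cat") -> str:
    # `cat` just echoes stdin back, standing in for a real CLI that
    # would read the prompt on stdin and print the model's reply.
    result = subprocess.run(
        [binary],
        input=prompt,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

print(ask("hello"))  # with the `cat` stand-in this echoes "hello" back
```

Swap `binary` for the vendor CLI and its flag, and this is all the "third-party tooling" in question amounts to.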

How does sending requests directly to the endpoints their CLI is communicating with suddenly make their subsidized plans expensive? Is it because now I can actually use 100% of my quota? If so, does it mean their profitability stands on people not using their products?

What is even going on?


Replies

rustyhancock · today at 1:48 AM

The direct answer is that their clients play extra nice with their backend.

Specifically, they all optimize for caching.

The indirect answer is that for every person using third-party tools to play about, there are 10x as many using them for spam or malicious use cases, hammering their backend far more cheaply than they could through the API.

These people are the false positives in this situation, but it's unlikely Google or Anthropic care. They're happy to ban you and expect you to sign up for the API.

This has always been a worry when you use a service like Google.

merlindru · today at 1:39 AM

claude -p is allowed as far as I'm aware.

if i understand correctly, they even have a wrapper around it to make it easier to use: the Claude Agent SDK

the thing that's disallowed is pretending you're the claude binary and logging in through its OAuth flow

in other words, if you use some product that's not Claude Code, and your browser opens asking you to "give Claude Code access to your account", you're in hot water

as for how they detect it: they say they use heuristics and usage patterns. if something falls wildly out of the distribution it's a ban.
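to make "wildly out of the distribution" concrete, here's a crude z-score sketch — the numbers and cutoff are made up by me, not anything they've published:

```python
from statistics import mean, stdev

def out_of_distribution(history: list[float], latest: float,
                        z_cut: float = 4.0) -> bool:
    # Flag a usage sample that sits far outside this account's own history.
    # `history` is e.g. past requests-per-hour; `z_cut` is an invented threshold.
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_cut

print(out_of_distribution([10, 12, 11, 13, 12], 500))  # True: huge spike
print(out_of_distribution([10, 12, 11, 13, 12], 12))   # False: normal
```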

my take is that the problem is not the means of detection. that's fine and seems to work well. the problem is that it's an instant outright ban. they should send a couple of warning emails, then a timeout, etc.

googinsider123 · today at 3:52 AM

Haha, no. I can tell you that it is so obvious and there are basically no false positives. Can't share more details though.

If it makes you feel any better, some Google employees have had their personal accounts banned too (only Gemini access, not the whole account) for running openclaw, and they also have a hard time getting their accounts reinstated.

lelanthran · today at 9:44 AM

> I don't understand how this can be enforced without ridiculous levels of false positives.

It's embarrassingly trivial, IMO - compare what antigravity reports for token usage to what the backend reports for that user.
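As a toy illustration of that comparison (the field names and tolerance are invented for the sketch):

```python
def looks_spoofed(client_reported: int, backend_counted: int,
                  tolerance: float = 0.05) -> bool:
    # If the client claims far fewer tokens than the backend actually
    # served for that user, something other than the official client
    # is probably talking to the endpoint.
    if backend_counted == 0:
        return client_reported != 0
    return abs(client_reported - backend_counted) / backend_counted > tolerance

print(looks_spoofed(100, 100))   # counters agree: False
print(looks_spoofed(100, 5000))  # wildly off: True
```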

joshribakoff · today at 2:14 AM

There are certainly examples of labs banning these use cases, and their terms and conditions allow them to ban you for merely "competing" with them. If you're building on this, it could be worth locking in a contract first.

hendersoon · today at 1:39 AM

The -p flag should be fine, so long as you don't use their oauth in a third-party tool. Gemini also supports A2A for this sort of thing.

mannanj · today at 3:03 AM

I feel like it's about data quality. They want humans using the tools because that data is valuable and helps them improve the product. AIs like OpenClaw using their product make their training mission harder. And even if you opt out of training, they still use your data for non-training purposes (you can't opt out of that), and that human data is valuable.

dev1ycan · today at 1:28 AM

Every subscription's profitability stands on people forgetting to unsubscribe, how is this surprising?
