Hacker News

Claude Code Routines

611 points by matthieu_bl yesterday at 4:54 PM | 355 comments

Comments

joshstrange yesterday at 6:32 PM

LLMs and LLM providers are massive black boxes. I get a lot of value from them and so I can put up with that to a certain extent, but these new "products"/features that Anthropic are shipping are very unappealing to me. Not because I can't see a use-case for them, but because I have 0 trust in them:

- No trust that they won't nerf the tool/model behind the feature

- No trust they won't sunset the feature (the graveyard of LLM-features is vast and growing quickly while they throw stuff at the wall to see what sticks)

- No trust in the company long-term. Both in them being around at all and them not rug-pulling. I don't want to build on their "platform". I'll use their harness and their models but I don't want more lock-in than that.

If Anthropic goes "bad" I want to pick up and move to another harness and/or model with minimal fuss. Buying in to things like this would make that much harder.

I'm not going to build my business or my development flows on things I can't replicate myself. Also, I imagine debugging any of this would be maddening. The value add is just not there IMHO.

EDIT: Put another way, LLM companies are trying to climb the ladder to be a platform, I have zero interest in that, I was a "dumb pipe", I want a commodity, I want a provider, not a platform. Claude Code is as far into the dragon's lair that I want to venture and I'm only okay with that because I know I can jump to OpenCode/Codex/etc if/when Anthropic "goes bad".

show 26 replies
andai yesterday at 5:47 PM

I'm a little confused on the ToS here. From what I gathered, running `claude -p <prompt>` on cron is fine, but putting it in my Telegram bot is a ToS violation (unless I use per-token billing) because it's a 3rd party harness, right? (`claude -p` being a trivial workaround for the "no 3rd party stuff on the subscription" rule)

This Routines feature notably works with the subscription, and it also has API callbacks. So if my Telegram bot calls that API... do I get my Anthropic account nuked or not?

show 4 replies
comboy yesterday at 7:33 PM

Unrelated, but Claude was performing so tragically the last few days (maybe weeks, but mostly days) that I had to reluctantly switch. Reluctantly because I enjoy it. Even the most basic stuff: most Python scripts it has to rerun because of some syntax error.

The new reality of coding took away one of the best things for me: that the computer always just does what it is told to do. If the results are wrong, it means I'm wrong; I made a bug and I can debug it. Here... I'm not a hater, it's a powerful tool, but... it's different.

show 3 replies
Eldodi yesterday at 6:11 PM

Anthropic is really good at releasing features that are almost the same but not exactly the same as other features they released the week before

show 6 replies
minimaxir yesterday at 5:26 PM

Given the alleged recent extreme reduction in Claude Code usage limits (https://news.ycombinator.com/item?id=47739260), how do these more autonomous tools work within that constraint? Are they effectively only usable with a 20x Max plan?

EDIT: This comment is apparently [dead] and idk why.

show 4 replies
ctoth yesterday at 5:31 PM

You'd think that if they were compute-limited and trying to get people to use it less, the rational thing to do would be not to ship features that use more compute automatedly? Or does this use extra usage?

show 5 replies
brandensilva today at 2:20 AM

Anthropic is burning their good will faster than the tokens we use these days. It is hard to be excited about these new features when the core product has been neutered into oblivion.

mellosouls yesterday at 5:49 PM

> Put Claude Code on autopilot. Define routines that run on a schedule, trigger on API calls, or react to GitHub events...

We ought to come up with a term for this new discipline, eg "software engineering" or "programming"

show 6 replies
oxag3n yesterday at 7:56 PM

Are they going to mirror every tool software engineers have used for decades, but in a mangled/proprietary form?

I think to become really efficient they'll have to invent a new programming language to eliminate all the ambiguity and non-determinism. Call it a "prompt language", with ai-subroutines, ai-labels and ai-goto.

eranation yesterday at 6:15 PM

I've been using it for a while (it was just called "Scheduled", so I assume this is an attempt to rebrand it?)

It was a bit buggy, but it seems to work better now. Some use cases that worked for me:

1. Go over a slack channel used for feedback for an internal tool, triage, open issues, fix obvious ones, reply with the PR link. Some devs liked it, some freaked out. I kept it.

2. Surprisingly, something non-code-related: give me a daily rundown (GitHub activity, Slack messages, emails). I tried it with non-Claude-Code scheduled tasks (Cowork); not as good, as it seems the GitHub connector only works in Claude Code. Really good correlation between threads that start on Slack and related email (Outlook), or even my personal Gmail.

I can share the markdowns if anyone is interested, but it's pretty basic.

Very useful (when it works).

cedws yesterday at 8:53 PM

This is the beginning of AI clouds, in my estimation. Cloud services provide needed lock-in and support the push to provide higher-level services on top of the models. It just makes sense; they'll never recoup the costs on inference alone.

holografix today at 6:12 AM

Anthropic is putting a lot of eggs into the same Claude Code basket.

If the Lovable clone is real that’s going to piss off many model consumers out there.

Is Sierra next?

vfalbor today at 8:31 AM

Two things about my experience. First, you only get one at a time per subscription; when I needed to run two at the same time, I couldn't. Second, you can do the same thing with a well-configured cron.
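
For reference, the cron equivalent being described is just a crontab entry invoking headless Claude Code. A minimal sketch, where the schedule, project path, prompt, and log file are all hypothetical:

```shell
# Hypothetical crontab entry: every weekday at 07:00, run a headless
# Claude Code prompt in a project checkout and append the output to a log.
# `claude -p` runs non-interactively and prints the result to stdout.
0 7 * * 1-5  cd /home/me/project && claude -p "Summarize yesterday's commits" >> /home/me/claude-routine.log 2>&1
```

Unlike the hosted Routines, this runs on your own machine with your own environment, but you also give up the managed triggers (API calls, GitHub events).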

summarity yesterday at 5:31 PM

If you’re trying this for automating things on GitHub, also take a look at Agentic Workflows: https://github.github.com/gh-aw/

They support many of the same triggers and come with many additional security controls out of the box.

show 3 replies
bryanhogan today at 4:13 AM

So do I understand correctly that this is a competitor to something like n8n, but instead entirely vibe-coded?

n8n: https://n8n.io/

richardw yesterday at 10:29 PM

I’m moving away from Claude for anything complicated. It’s got such nice DX but I can’t take the confident flaky results. Finding Codex on the high plan more thorough, and for any complicated project that’s what I need.

Still using Claude for UX (playgrounds) and language. OpenAI has always been a little more cerebral and stern, which doesn’t suit those areas. When it tries to be friendly it comes off as someone my age trying to be a 20-something.

haukem yesterday at 11:43 PM

I used the claude-code-action GitHub Action to review PRs before, but it is pretty buggy (e.g. PRs from forked repositories do not work), and I had to fix it myself. This should work better with Claude Code Routines. claude-code-action only works with the API and is therefore pretty expensive compared to the subscription.

I think LLM reviews on PRs are helpful and will reduce the load on maintainers. I am working on OpenWrt and was approved for the Claude Code Max Open Source Program today. The cap of 15 automatic Claude Code Routines runs per day is a bit low. We get 5 to 20 new PRs per day and I would like to run it on all of them. I would also like to re-run it when authors make changes, in that case it should be sufficient to just check if the problems were addressed.

Is it possible to get more runs per day, or to carry over unused ones from the last 7 days? Maybe 30 on Sonnet and 15 on Opus?

When I was editing a routine, the window closed and showed an error message twice. Looks like there are still some bugs.

lherron today at 3:36 AM

It’s interesting to watch Anthropic try to ship every value-add product feature they can while they still have the SOTA model for agentic work. When an open-weights equivalent to Opus 4.5’s agentic capabilities comes out, I expect massive shifts of workloads away from Claude.

Don’t get me wrong, I think their business model is still solid and they will be able to sell every token they can generate for the next couple years. They just won’t be critical path for AI diffusion anymore, which will be good for all sides.

whh today at 6:53 AM

I don’t think LLMs should be trying to replace what essentially should be well-tested heuristics.

It’s fine as a stopgap. But it’s too inconsistent to ever be reliable.

airstrike yesterday at 5:38 PM

Still no moat.

The reason someone would use this vs. third-party alternatives is still the fact that the $200/mo subscription is markedly cheaper than per-token API billing.

Not sure how this works out in the long term when switching costs are virtually zero.

show 2 replies
mercurialsolo today at 5:43 AM

Why not just do event-based triggers, e.g. register (web)hooks, instead of scheduled time-based triggers? Have a mechanism to listen for an event and then run some flow: analyze, plan, execute, feedback.
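
The event-driven pattern this comment asks for can be approximated with a tiny webhook receiver that shells out to a headless agent run. A minimal sketch, assuming a hypothetical `claude -p`-style runner command (`AGENT_CMD`), an arbitrary port, and no signature verification (a real webhook endpoint should verify signatures, e.g. GitHub's `X-Hub-Signature-256`):

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical runner command; swap in whatever flow runner you actually use.
AGENT_CMD = ["claude", "-p"]

class Hook(BaseHTTPRequestHandler):
    """Accepts one webhook POST per event and kicks off an agent run for it."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length).decode("utf-8", "replace")
        # Acknowledge immediately; the analyze/plan/execute flow runs asynchronously.
        self.send_response(202)
        self.end_headers()
        subprocess.Popen(AGENT_CMD + [f"Handle this event, then report back: {payload[:500]}"])

if __name__ == "__main__":
    HTTPServer(("", 8080), Hook).serve_forever()  # port is arbitrary for the sketch
```

In practice you would validate the payload and queue the work rather than spawning one process per event, but the shape (listen, acknowledge, hand off to a flow) is the same.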

mkagenius today at 6:55 AM

I also felt the need for cloud-based, cron-like automations, so I decided to build it myself: https://cronbox.sh with:

  1. an ephemeral Linux sandbox for each task
  2. the ability to fetch any URL
  3. tools like ffmpeg to fulfill your scheduled task
kylegalbraith today at 6:13 AM

Having used the Cowork version of this (scheduled automations), I have very little confidence in this from Anthropic. 90% of the time the automation never even runs.

twobitshifter yesterday at 10:02 PM

It seems OpenClaw is just Pi with cron and hooks, and this is just Claude Code with cron and hooks. Given the superiority of Pi, I would not expect this to attract anyone from OpenClaw, but it will increase token usage in Claude Code.

sminchev yesterday at 7:20 PM

Everything is a big race! Each company is trying to do as much as possible, to provide as many tools as possible, to catch the wave and beat the competition. I remember Anthropic and OpenAI making releases just 10-15 minutes apart, trying to compete and gain momentum.

And because they use AI heavily, they produce a new product every week. So fast that I have no time to check whether it's worth it or not.

This one looks interesting. I have some custom commands that I execute manually every week for monitoring, audits, summaries, and reports. If it can send reports by email, or generate something I can read in the morning with my coffee (or after I finish it ;) ), it might be a good tool.

The question is, do I really want to be that much more productive? I already perform much better with AI compared with the 'old school' way...

Everything is just getting too much for me.

yohamta today at 2:31 AM

Claude and OpenAI seem to be trying not to be "just a model", but this is intrinsically problematic, because models can be degraded and prices only go up once they lock in customers. It is increasingly important for anyone responsible for managing "AI workflows" to keep sovereignty over how they use AI models. This is why I'm super excited to be building a local-first workflow orchestration tool called "Dagu", which lets you own your harness. It's not only more cost-effective; the outcome is better as well, because you have full control. I think it's only a matter of time before people notice they need to own their workflow orchestration themselves instead of relying on Anthropic, OpenAI, or Google.

show 1 reply
sublimefire today at 4:12 AM

I built this sort of thing in my own macOS app, which can run routines with a cron schedule, custom configs, and chains of prompts. There is also more, like custom VMs and models that can be used for different tasks. Interesting to see larger providers trying to do the same.

But their failing is that there is only a limited way to configure it with other models (think 3D modelling and integrating 3D apps on a VM to work with). I believe an OSS solution is needed here, which is not too hard to do either.

netdur yesterday at 5:40 PM

Didn’t we have several antitrust cases where a vendor used its monopoly to disadvantage rivals? And didn’t Anthropic block OpenClaw?

show 3 replies
kennywinker today at 4:32 AM

Am I crazy in thinking an LLM doing any kind of serious workload is risky as hell?

Like, say it works today, but tomorrow they update the model and instead of emailing you an update it emails your API keys to all your contacts? Or it works 999 times out of 1000, but then commits code to master that makes all your products free?

Idk man… call me Adama, but i do not trust long-running networked ai one bit

vessenes yesterday at 5:36 PM

This is one of the best features of OpenClaw; it makes sense to swipe it into Claude Code directly. I wonder if Anthropic wants to make Claude a full stand-in replacement for OpenClaw, or just chip away at what they think the best features are, now that OpenAI has acquired it.

show 1 reply
rahimnathwani yesterday at 11:57 PM

The docs list the GitHub events that can be used as triggers. This is included in the list:

  Push: Commits are pushed to a branch
But when I try to create a routine, the only GitHub events available in the dropdown relate to pull requests and releases. Nothing is available related to pushes/commits or issues. Am I holding it wrong?
tills13 yesterday at 7:27 PM

> react to GitHub events from Anthropic-managed cloud infrastructure

Oh cool! vendor lock-in.

dispencer yesterday at 6:45 PM

This is wild; it's one of the pieces I was lacking for a very OpenClaw-esque future. Now I think I have all the MCP tools I need (GitHub, Linear, Slack, Gmail, querybear), all the skills I need, and I can run these on a loop.

Am I needed anymore?

show 2 replies
hackermeows today at 3:52 AM

Is Claude at AWS's old "throw sh*t at the wall and see what sticks" phase of their business already? That did not take very long.

tallesborges92 yesterday at 10:58 PM

Anthropic, I don’t care for your tools; just ship good and stable models so we can build the tools we need.

thegdsks today at 3:25 AM

Looks like they are slowly getting the OpenClaw features here in Cowork. Already seeing the 5-per-day limit in the usage bar now...

yalogin today at 1:29 AM

I am beginning to fear Claude is going to massively raise prices, or at the very least severely restrict its $20/month plan. Hope it doesn’t happen, but it feels inevitable.

srid yesterday at 6:47 PM

I just used this to summarize HN posts in last 24 hours, including AI summaries.

This PR was created by the Claude Code Routine:

https://github.com/srid/claude-dump/pull/5

The original prompt: https://i.imgur.com/mWmkw5e.png

taw1285 yesterday at 6:53 PM

I have a small team of 4 engineers; each of us is on the personal Max subscription plan, and we prefer to stay that way to save cost. Does anyone know how I can overcome the challenge of setting up Routines or Scheduled Tasks on Anthropic infra in a collaborative manner, i.e. so all teammates can contribute to these nightly jobs of cleaning up the docs and cleaning up vibe-coding slop?

show 1 reply
watermelon0 yesterday at 5:50 PM

Seems like it only supports x86_64. It would be nice if they offered a way to bring your own compute, to be able to work on projects targeting arm64.

causal yesterday at 8:05 PM

Haven't Github-triggered LLMs already been the source of multiple prompt injection attacks? Seems bad.

theodorewiles yesterday at 6:08 PM

How does this deal with stop hooks? Can it run https://github.com/anthropics/claude-code/blob/main/plugins/...

cryptonector today at 5:15 AM

Oof, running Claude Code automatically on PRs is scary.

woeirua yesterday at 9:00 PM

I don't get the use case for these... Their primary customers are enterprises. Are most enterprises happy running daily tasks on a third-party cloud outside of their ecosystem? I think not.

So who are they building these for?

show 2 replies
egamirorrim yesterday at 6:48 PM

I wish they'd release more stuff that didn't rely on me routing all my data through their cloud to work. Obviously the LLM is cloud based but I don't want any more lock-in than that. Plus not everyone has their repositories in GitHub.

nojvek today at 9:48 AM

I could understand the portal before. Now it’s a gazillion things bolted on.

Enshittification is well in force.

I’d trust the hyperscalers a lot more, with their workers/lambda-like infra, to run routine jobs calling LLM APIs or deterministic code, rather than Anthropic.

Anthropic makes a phenomenal paid model, but they have a poor reliability record.

I don’t care much if Claude Code hiccups when generating code. But after the code is generated, I want it to run with multiple 9s, under certain latencies, every single time.

amebahead today at 4:47 AM

Anthropic's update cycle is too fast..

eranation yesterday at 9:55 PM

If anyone from Anthropic reads this: I love this feature very much, when it works. And it mostly doesn't.

The main bugs / missing features are:

1. It loses connection to its connectors, mostly the Slack connector. It does all the work, then says it can't connect to Slack. Then, when you show it a screenshot of itself with the Slack connector, it says "oh, yeah, the tools are now loaded" and does the rest of the routine.

2. The ability to connect it to GitHub Packages / Artifactory (private packages), or the dangerous route of allowing access to some sort of vault (with non-critical, dev-only secrets... although it's always a risk. But Cursor has it...).

3. The GitHub MCP not being able to do simple things such as updating release markdown (a super simple use case of creating automated release notes, for example).

You are so close, yet so far...

show 1 reply
compounding_it today at 7:16 AM

A year ago everyone was so hyped about LLMs, even on HN. A year later I see frustration and disappointment on HN. It’s very interesting, because this is the case with every new technology and every "next thing".

View 36 more comments