Looks like they are slowly getting the OpenClaw features here in Cowork. Already seeing the 5-per-day limit in the usage bar now.
If anyone from Anthropic reads this: I love this feature very much, when it works. And it mostly doesn't.
The main bugs / missing features are:
1. It loses the connection to its connectors, mostly the Slack connector. It does all the work, then says it can't connect to Slack. Then when you show it a screenshot of itself with the Slack connector, it says, oh yeah, the tools are loaded now, and does the rest of the routine.
2. No ability to connect it to GitHub Packages / Artifactory (private packages) - or the dangerous route of allowing access to some sort of vault (with non-critical, dev-only secrets... although it's always a risk. But Cursor has it...)
3. The GitHub MCP can't do simple things such as updating release markdown (a super simple use case: generating automated release notes, for example).
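For what it's worth, editing a release's notes is a single `PATCH` call against GitHub's REST API, so it can be scripted around the MCP for now. A minimal sketch with Python's stdlib; the owner, repo, release id, and token are placeholders:

```python
# Sketch: update a GitHub release's body (its markdown notes) via
# PATCH /repos/{owner}/{repo}/releases/{release_id}.
# "acme", "widget", 1234, and the token are placeholder values.
import json
import urllib.request

def build_release_update(owner, repo, release_id, notes):
    """Build the PATCH request GitHub expects for editing a release."""
    url = f"https://api.github.com/repos/{owner}/{repo}/releases/{release_id}"
    payload = json.dumps({"body": notes}).encode()
    req = urllib.request.Request(url, data=payload, method="PATCH")
    req.add_header("Accept", "application/vnd.github+json")
    req.add_header("Authorization", "Bearer <YOUR_TOKEN>")  # placeholder
    return req

req = build_release_update("acme", "widget", 1234, "## v1.2.3\n- fixes")
print(req.full_url)
# urllib.request.urlopen(req)  # would actually send the PATCH
```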
You are so close, yet so far...
The obvious functionality that seems to be missing here is any way to organize and control these at an organization rather than individual level.
So basically OpenClaw but better and safer I presume haha!
Why would you use it if you don't know whether the model will be nerfed at that run?
Is there a consensus on whether or not we've reached Zawinski's Law?
Nice, could this enable n8n-style workflows that run fully automatically then?
Oof, running Claude Code automatically on PRs is scary.
Can someone tell me what this does that n8n doesn't?
Anthropic's update cycle is too fast..
My only real disappointment with Claude is its flakiness with scheduled tasks. I have several Slack-related tasks that I’ve pretty much given up trying to automate - I’ve tried Cowork and Claude Code remote agents, only to hit various bugs with plugins and connectors. I guess I’ll give this a try, but I don’t have high hopes.
how did they not call this OpenClaude?
This is massive. Arguably will be the start of the move to openclaw-style AI.
I bet anthropic wants to be there already but doesn't have the compute to support it yet.
I couldn’t agree more.
So MCP servers all over again? I mean at the end of the day this is yet another way of injecting data into a prompt that’s fed to a model and returned back to you.
Could be a start
A year ago everyone was so hyped on LLMs, even on HN. A year later I see frustration and disappointment on HN. It’s very interesting, because this is the case with every new technology and ‘next thing’.
AI companies act like pelicans. They want to gobble everything.
For the love of god fix bugs and write some fricken tests instead of dropping new shiny things
It is absolutely wild to me you guys broke `--continue` from `-p` TWO WEEKS AGO and it is still not fixed.
Seems like more vendor lock-in tactics.
Not saying it doesn’t look useful, but it’s something that keeps you from ever switching off Claude.
Next year, if Claude raises rates after getting bought by Google… what then?
And what happens when Claude goes down and misses events that were supposed to trigger Routines? I’m not at the point where I trust them to have business-dependable uptime.
All these new offers try to kill fire with fire. You don’t make the codebase better with more agents. You introduce more complicated issues.
It’s a trap.
please, no more features. just fix context bloat.
Can we use it for free, or is there a fee? If free, how many free routines per day?
hehe imagine 10 years ago releasing a library where the functions may or may not do what you expect 100 percent of the time. And paying lots of money to use it.
What a time to be alive.
meta:
Sorry, but I just have to ask. Why is u/minimaxir's comment dead? Is this somehow an error, an attack, or what?
This is a respected user, with a sane question, no?
I vouched, but not enough.
edit: His comment has arisen now. Leaving this up for reference.
“Scheduled tasks and actions invoked by callback urls”
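That quoted model - routines that fire either on a schedule or when something hits a callback URL - can be sketched in a few lines. Everything here is invented for illustration (the routine name, the `/hooks/...` path, the registry); it is not Anthropic's implementation:

```python
# Hedged sketch of "scheduled tasks and actions invoked by callback URLs":
# a routine runs either on a timer or when a callback path is dispatched.
import sched
import time

ROUTINES = {}  # callback path -> function (hypothetical registry)

def routine(path):
    """Register a function as a callback-triggered routine."""
    def register(fn):
        ROUTINES[path] = fn
        return fn
    return register

@routine("/hooks/daily-report")
def daily_report():
    return "report generated"

def handle_callback(path):
    """Dispatch an incoming callback URL to its routine, if any."""
    fn = ROUTINES.get(path)
    return fn() if fn else None

# The same routine can also run on a timer:
scheduler = sched.scheduler(time.time, time.sleep)
scheduler.enter(0, 1, daily_report)  # zero delay, just for the demo
scheduler.run()

print(handle_callback("/hooks/daily-report"))  # prints "report generated"
```

In a real service the dispatcher would sit behind an HTTP endpoint, but the split is the same: one registry, two triggers.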
I could understand the portal before. Now it’s a gazillion things bolted on.
Enshittification is well in force.
I’d trust the hyperscalers a lot more, with their Workers/Lambda-like infra, to run routine jobs calling LLM APIs or deterministic code, instead of Anthropic.
Anthropic is a phenomenal paid model but they have a poor reliability record.
I don’t care much if Claude Code hiccups while generating code. But once the code is generated, I want it to run with multiple nines, under certain latencies, every single time.
Is Claude at AWS's old "throw sh*t at the wall and see what sticks" phase of the business already? That did not take very long.