So I guess I should just give up on my dream of having a useful AI assistant for day-to-day "human" tasks. We're just hell-bent on replacing humans in jobs.
Anthropic’s roadmap is widely underappreciated; it is the company defining AI productivity today.
Hmm… I don't like having to prompt my way to fix pixel-level decisions.
Fed my Figma file to it to create a design system. And yeah, it looks like it, but it's still wayyy off at the same time.
Will give it a try, but my experience with Claude and browser use so far is that it's extremely lazy: it rarely notices or cares when something doesn't look right, and it needs a lot of pointing out ("hey, you ignored that broken render", etc.).
To people saying it's replacing Lovable: it's still missing the ability to deploy right from the app with a backend ready to go. I'm sure it will get there, though.
Wonder how Canva was roped into this; from what I read, Claude Design will just replace them?
Why doesn't it have an "Export to Figma" button?
This AI will help vibe coders, but it won't be better than a UI/UX designer with experience. We saw a huge difference in our app when we started working with one.
Who's the end user for this? I struggle to relate, but then again I also don't use Figma, so I may not be the target demographic.
I have been doing fine just instructing Claude Code to use Tailwind and reference design documents.
We used to have RAD-based IDEs, and now with AI I guess we're going full circle. I'm curious how their mobile thing works: is it just React Native-based, or what?
One thing I am wondering: how much logic (in the programming sense) is there in this product, and how much of it is "just" a lean wrapper around the LLM?
I thought Polymet did a pretty good job of creating mobile app designs and component libraries when I tested it. Wonder how this will compare.
Reminder that Claude's own AI design skill (which is probably incorporated directly into this product) says things like
>NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character.
> Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics. NEVER converge on common choices (Space Grotesk, for example) across generations.
[1] https://github.com/anthropics/claude-code/blob/main/plugins/...
Start designing at claude.ai/design.
That link is redirecting me to https://claude.ai/404, anyone else?
Did anyone see what happened to Figma's stock? It's crazy that just an announcement from Anthropic can move the market.
Every day is Christmas. Not enough time to play with our new toys before the next batch drops. Insane.
Will be taking this for a spin imminently. What seems unique here vs other canvas tools are the generative UI controls.
If you're confused about why everyone is shipping their own canvas tool, this is what I wrote when I reviewed Stitch from Google in my newsletter:
“every SOTA (State-of-the-Art) model can already do this. Give it a prompt, it'll spit out an HTML design. Ask for 10 responsive variations, you'll get them. Stitch is a UI and context harness on top of Gemini, in the same way Claude Code is a harness on top of Claude's models. This means every AI lab will likely ship their own version, and they'll all interoperate because at the end of the day it's all HTML and markdown.”
More generally, this is a competition of where the product development work starts and lives. The business value will accrue to those who become a destination, not a plugin.
AI can now power a new sort of tool that supports the entire process (not just coding or just design). So there’s no reason for Anthropic or any other lab to give that up to another tool.
Ultimately, if it's so close to the finished product, you may as well just do it in Cursor rather than have an extra step.
The design problem to solve post-AI isn't this; it's how the space for thinking fits into all of it: getting to the end result more slowly so human ideation can play out. This is just optimized for the first generic output plus tweaks.
Is there a way one can do DTP using LLMs? InDesign only integrates image generation, if I'm not mistaken.
This stuff is the antithesis of what I want to see AI used for.
Deriving a bland average of creativity is the saddest thing you could do. I don’t even enjoy design and I find this offensive.
Anything but solve real problems
Anything but that
Does anyone know how they turn HTML into PowerPoints so seamlessly?
Had some fun with the wireframe mode; very useful for sketching some interaction concepts.
Such cheerful background music to celebrate the death of Lovable, bolt.new, and Figma LOL
Sorry, I hit my session limit by just reading the title
This thing is... not great...
I just started using Claude and it's amazing. I can't believe I've waited so long. Looking forward to designing.
This might be a game changer (for show, don't tell) and for fast iterations in design meetings, to show what you're thinking of.
Design industry is shaking right now.
oh. i was expecting this to be like claude's brand guidelines
I'm not sure if I'm holding it wrong, but at these usage rates I can hardly see this being useful for designers in their daily work. In two prompts on the Max 20x plan, it consumed 11% of my weekly limit for Claude Design (which is separate from the normal limits). A day of work would exhaust over four weeks of usage. Is this meant for intermittent use only? Lately I've been getting the feeling that Anthropic is forgetting how absurdly much we are already paying for these tools compared to conventional development tools, or even competing inference providers.
Nice video, but the expectations-vs-reality gap is brutal: provide it with an actual PDF containing a design system, wait 30 minutes, and the entire thing is stuck.
You talk to it, it consumes "usage", and no design system gets created.
Had a similar experience with their online code sessions.
All this slop generated code is so freaking bad. We'll get drowned in so much slop code that at some point only these garbage machines will be able to "work" with it.
Rate exceeded.
This seems a bit fiddly... I gave it the ChatGPT iframe HTML for my (not yet released) Liveclip MCP server that's like tables [https://x.com/firasd/status/2039926453408776649]. And I had this chat in the sidebar, and it popped up some questions in a form:
> do you see our current html
Hi-fi design Interactive prototype html
● Claude: Yes — I can see your html/ folder. It has 7 files:
files-list.html, math-result.html, rows-view.html, table-view.html, text-editor-doc-view.html, trending-topics.html, unique-result.html
Want me to open them up and take a look at what's in there? And what would you like to do — polish them, turn them into an interactive prototype, build something new on top, or something else?
> yeah in table view i'm thinking about how to deal with cells that may be big. i guess we want to do something different than excel or sheets in that auto expand rows with cells that contain big values. we can show like the first -- what does twitter show, 280? so we can show 280 chars and then [...]
● Let me take a look at the current table view to understand what we're working with.
Got it — I see the current table view. Right now every cell is clamped to max-width:220px with white-space:nowrap and text-overflow:ellipsis, so long values just get cut off at whatever ~30 chars fits the column. You're thinking about the opposite end: what if a cell genuinely holds 2,000 chars of text? Neither pure clip nor Excel-style row expansion is quite right.
Before I start sketching, a few quick questions so I don't guess wrong:
> Questions answered:
> - truncation_threshold: ~280 chars (new tweet)
> - expansion_ux: nothing right now -- it's not interactive
> - row_height_policy: Grow freely like Notion/Airtable
> - content_types: Not sure yet
> - indicator_style: [...] at the end of the text
> - newlines: Render them (row grows)
> - copy_affordance: no interaction right now
> - column_width: Fixed width, all columns equal
> - variation_count: 2 variations
> - aesthetic_direction: Same bones, but let you push it a bit
● Got it — so: non-interactive, rows can grow freely, 280-char cap with [...], newlines render, same bones but push it a bit, 2 variations. Let me sketch this out.
Set up variations file with two table treatments:
- V1: Notion/Airtable-style free-grow with [...] cap
- V2: pushed aesthetic variant
- Wire both to the same sample data w/ long cells
Let me peek at a couple of the sibling files so the variations stay in-family with the rest of the widget set.
Good — now I have the visual family. The shared vocabulary across widgets:
- #f5f5f3 page, #fff card, #d0d0cc borders, #f0f0ec headers
- System font, 12–14px
- Subtle alternating row tints, no drop shadows, flat corners
- Color used sparingly; #666 for secondary text

My plan for the two variations (side-by-side in a canvas so you can compare on the same rows):
V1 — "Notion-ish, stay in family"
Keep current palette, borders, zebra striping Rows grow vertically as needed; white-space: pre-wrap so newlines render Cap each cell at 280 chars; if longer, slice to 280 + […] inline Same fixed column widths, everything else identical V2 — "Same bones, pushed"
Tightens the grid (hairline borders instead of panels, more whitespace, denser typography) Column rail with sticky row numbers Soft fade on the bottom of ultra-tall cells instead of mid-text […] when content is really long (hybrid: char cap + visual) Monospaced tabular digits for row numbers; type column ⌾ / ▤ markers subtly hinting content type; column width slightly wider for breathing room Let me build it.
Now let me build the two variations. I'll set up realistic sample data with a mix of short values, tweet-length values, long-long values, and ones with newlines, so the behaviors are visible.
Now the two table components.
Clean. Two variations on the canvas, both rendering the same 7 rows so you can compare directly:
V1 — in-family. Your current widget, with white-space: pre-wrap so rows grow, newlines render, and a soft grey […] appended when a cell goes past 280 chars. Row 3's summary (double-long prose) and row 4's URL show the cap.

V2 — same bones, pushed. Hairline borders instead of filled header panels, a monospaced tabular row gutter, column letters as uppercased small caps above the header-names row, tighter typographic rhythm, and a "cells > 280 → […]" legend in the footer. Row 7 shows how newlines render naturally.

Both slice at the nearest word boundary when one is within ~40 chars of the 280 cap, otherwise hard-cut — so […] never lands mid-word.
A couple of things worth calling out, not changed:

- 280 is pretty generous when a row has multiple long cells side-by-side — row 3 is ~2 cells × 280 = a very tall row. If that ends up being painful in real usage, dropping to ~180 or adding a per-row max-height with fade might be worth a follow-up.
- Neither variation is interactive per your answer; there's no click-to-expand, hover, or copy. Easy to bolt on later.

Both variations are live on the canvas. Let me know which direction feels right and I'll fold it back into html/table-view.html." -----
It's basically similar to their Artifacts system in a harness, I guess.
How dangerous is this eh?
No info regarding privacy and data if you connect your repo?
Am I the only one who gets annoyed at "design" being used to mean specifically UI design without qualification or warning?
Isn't this just going to be AI slop but on a design level?
It's not X, it's Y! all over again?
Until the large model output variety/mode collapse is solved, tools like this one will be generating monotonous slop. Very few people using it will be specifying each detail of what they want.
lmao at the idea of figma integrations, categorically. I don't ever want to look at figma again in my life. figma is what people used to ideate quickly because they couldn't build directly in the browser. we can now ideate quickly while working directly in the browser. figma's entire raison d'être is dinosaur-tier over.
With that said, this seems like it's just Claude Code + simplified devtools in a container? Will need to spend more time with it to see how e.g. commenting influences things, but 5 minutes in, this seems like needless formalization on top of how I already used Claude for design.
Anthropic is making the exact same mistake OpenAI did: lack of focus. Release, release, release, all good apps, all full of bugs. This is a massive mess and people aren't talking enough about how bad all of this is.
Just started messing around with this but I like it. It produces better results than just using Claude Code on its own. The initial output has a lot of junk that needs to be removed (just like anything LLMs generate). I suspect it's only good at reproducing content that is relatively cookie-cutter and prominent in the training data. But still, as a non-designer this produces better results than I can and in line with the level of quality of many paid templates.
I think we will have maybe 5 to 10 years of all this crap, devaluing human made art and human made products, vibe coding everything. But eventually it will all fall apart. Long term only a minority will be happy seeing AI generated crap everywhere and most people will pay for quality human made goods.
Unfortunately it's going to be a tough few years until that happens, where it really does feel like the idiots are winning.
THE IDIOTS ARE WINNING
Interestingly, Claude Design has its own completely separate usage bar.
Well, after having high expectations from watching the intro, the actual result of the simple prompt "Bear on a bicycle" is very underwhelming.
Maybe AI is not good at everything, yet.
nice!
This, but targeting a 3D printer.