Hi HN, I'm Zach, one of the co-founders of Adam (https://adam.new).
We've been on HN twice before with text-to-CAD/3D experiments [1][2]. The honest takeaway from those threads: prompt-to-3D model web apps are fun, but serious mechanical engineers don't want a black box that spits out an STL. They want help inside the CAD tool they already use, with full visibility and control over the feature tree.
So we built that. Adam is now a harness that integrates directly with your CAD tool. It reads your parts, understands the existing feature tree, and edits it agentically on your behalf. We're now live in beta on Onshape and Fusion! [3]:
Install link Autodesk Fusion: https://fusion.adam.new/install
Install link PTC Onshape: https://cad.onshape.com/appstore/apps/Design%20&%20Documenta...
Things people are using it for today: - "Merge redundant features and clean up my tree" - "Rename every feature so the tree is actually readable" - "Round all internal edges with a 2mm fillet" - "Parametrize my model" - And, of course, using Adam to generate CAD end-to-end!
A few things we care about that aren't obvious from the listing:
1. From the start, we've believed that CAD-as-code is the right abstraction. Our harness leans heavily on Onshape's FeatureScript and on Python in Fusion.
2. We run an internal CAD benchmark across frontier models. There has been a massive jump in the spatial reasoning capabilities of recent models, particularly GPT 5.5 and Opus 4.7 [4][5].
3. We open-sourced our earlier text-to-CAD work [6]
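Point 1 above can be illustrated with a minimal sketch (not Adam's actual harness; a toy Python function emitting OpenSCAD-style source, with made-up part dimensions): when a model is plain parametric code, an LLM can read and edit it the same way it edits any other text.

```python
def plate_with_holes(w, d, t, hole_r, margin):
    """Emit OpenSCAD source for a rectangular plate with a hole in each corner."""
    centers = [(x, y)
               for x in (margin, w - margin)
               for y in (margin, d - margin)]
    holes = "\n".join(
        f"    translate([{x}, {y}, -1]) cylinder(h={t + 2}, r={hole_r}, $fn=32);"
        for x, y in centers)
    return (f"difference() {{\n"
            f"    cube([{w}, {d}, {t}]);\n"
            f"{holes}\n"
            f"}}\n")

# Example: an 80x40x5 mm plate with four 5 mm-diameter corner holes.
src = plate_with_holes(w=80, d=40, t=5, hole_r=2.5, margin=6)
print(src)
```

Changing a dimension or adding a feature is then a one-line text edit rather than a GUI operation, which is what makes code such a natural interface for an agent.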
A note on the Anthropic Autodesk connector that shipped a couple days ago [7]: We think it's great for the space and validates the direction.
Where Adam is different: - Model-agnostic. We pick whichever frontier model is winning on each task type in our internal benchmark, instead of being tied to one lab. - We live natively in your CAD apps and are actively building integrations across all major programs.
What would you want an in-CAD agent to do that nothing does today?
[1] https://news.ycombinator.com/item?id=44182206
[2] https://news.ycombinator.com/item?id=45140921
[3] https://x.com/adamdotnew/status/2050264512230719980?s=20
[4] https://x.com/adamdotnew/status/2044859329329893376?s=20
[5] https://x.com/adamdotnew/status/2047795078912172122?s=20
Looks cool.
One task that is always time-consuming for a mech design team is generating library parts. McMaster-Carr and other vendor model downloads are one thing, but they never have everything you need, and I don't want a separate model for each and every size/configuration of a part. There are still plenty of parts you can't get a model for, so you have to generate it from scratch using data and pictures from a PDF catalogue.
I want a single model containing all the available configurations of that product.
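That ask could be sketched as a configuration table driving a single parametric script (hypothetical sizes, not real catalogue data; a Python toy emitting crude OpenSCAD, threads omitted):

```python
# Hypothetical catalogue data: (large_thread, small_thread) -> (hex_af, length) in mm.
# Real BSPT dimensions would come from the vendor's PDF catalogue.
CONFIGS = {
    ("1/2", "1/4"): (27.0, 35.0),
    ("3/4", "1/2"): (32.0, 40.0),
    ("1",   "3/4"): (41.0, 45.0),
}

def hex_nipple(hex_af, length):
    """Emit a crude OpenSCAD hex-nipple body: a hexagonal prism (threads omitted)."""
    # A hexagon with across-flats af has circumradius af / sqrt(3);
    # cylinder() with $fn=6 renders a hex prism.
    r = hex_af / 3 ** 0.5
    return f"cylinder(h={length}, r={r:.3f}, $fn=6);\n"

# One script, every configuration: a single "model" covering the whole product range.
models = {cfg: hex_nipple(*dims) for cfg, dims in CONFIGS.items()}
```

The point is that the configuration table, not the geometry, is the part library; adding a size is one new row.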
I just had a go with the Onshape connector, generating such a model of a simple BSPT hex reducing nipple for piping. It looks promising but didn't quite get there, and I hit the daily token limit while trying to get it to fix the model.
FYI when I hit the token limit and click the 'See plans' link, I get "Application error: a client-side exception has occurred."
If Anthropic starts entering the engineering space, OpenAI and others may follow.
The key question is: why would your tool or harness perform better than the frontier model providers’ own native tools, such as Claude for Creative Work, if your product is only a thin layer on top of their model or their agentic system?
Similarly, why would your tool work better than a CAD company’s own agentic tool? For example, it would not be very difficult for PTC to add an Onshape co-pilot that calls the Claude Agent SDK, while PTC can also build more powerful internal tools/MCP servers for their own use without exposing them to external API users.
What is the best way for someone without a licence to get this working as quickly as possible? I have used CAD before, but would like to have Claude Code do it all locally from the CLI.
Mechanical Engineer here, stop using AI to deal with the most enjoyable part of design PLEASE
An automated drafting tool where I can describe design intent and requirements would be a million times better, especially if it is CAD-context aware.
I would say around 5-20% of mech eng work is not actually modelling; the endless pursuit of text-to-CAD and other AI work is neither helpful nor enjoyable.
(PS: The feature tree renaming does look very useful)
Been following you guys a while; seems like you've been gaining some traction recently. Let's goo, and congrats!
I have been working on GrandpaCAD[0] for a while, a very similar product. I thought of you as my biggest competitors but noticed recently you are focusing more and more on professionals while I am focusing on total noobs in modeling who just want to whip out a quick model. So I guess we are not competitors anymore?
My evals[1] show that Opus 4.7 and GPT 5.5 are very comparable in terms of generation quality, but GPT 5.5 is slower and costs sooo much more in my harness. And the original breakthrough model was Gemini 3.1. I'm curious: do you have more written about your benchmark setup?
If you want to chat email is in my profile. Btw, just met "your"(?) neighbour on a plane a couple of days ago. World is small.
[1]: https://grandpacad.com/en/blog/public-benchmarks-misled-me-o...
Obligatory mention of https://zoo.dev/ the leader in this space.
I will say I explored this reasonably deeply and came away with the conclusion that even though we have OpenSCAD and all these examples, LLMs are still very weak at spatial reasoning compared to diffusion models.
You can do all sorts of tricks to get around this, like using a parts library and doing physics checks. But another inconvenient truth is that whenever you design a complex assembly, every change to a part needs to be aware of the other parts in the design -- so you need a global, part-aware editing capability from diffusion.
That's already being solved in China's leading labs; it's bottlenecked by the lack of good training data, which China is addressing with mass labor.
This will be solved overseas before it is in the US.
https://zoo.dev/ allows you to re-iterate on the same model over and over with prompts, without resorting to creating a new model from scratch every time.
I built https://is.gd/X1KScw for this exact gap — an AI specifically trained on off-grid and survival knowledge rather than a general LLM. Curious what this community thinks.
This looks interesting and promising! But I'm confused about your business model and pricing, which mentions "creative generations"? I'd like to understand it better before investing time into this.
From the OnShape demo videos in the tweets, it looks like sketches are unconstrained. Can this create constraints or other parametric relationships between entities?
And does this use your OnShape API quota? If it's making a new API call for each individual feature, I could see this blowing through the annual quota very quickly. What does this look like in practice?
Is the internal data model of fusion structured enough to be understood with a text-based LLM? Or do you need to basically screenshot the render to understand what is happening?
Would a more CAD-as-code based approach to CAD design be more suitable?
Just like LLMs have an easier time building a presentation with LaTeX than with PowerPoint...
Most of the work in vertical agent tooling ends up in shaping the domain APIs, not the model layer. How are you handling errors the model can't recover from? Surfaced as tool errors for retry?
Mechanical engineer here. The idea of having to sift through every intricate detail this thing spits out, just to guard against one hallucinated miscalculation making its way into the real world, is enough to keep me up at night. This AI shit is getting ridiculous.
Any plans to make this available for Autodesk Revit? Congrats on the launch.
Next: a PCB harness. Just describe the board and its function, and it will design it for you, selecting the best-matching components, with an MCP to submit it to a PCB manufacturer automatically!
There are more elegant solutions to this problem. Why try to get an LLM to work with bloated, archaic tools that you have to rent from a feudal lord in the cloud, when there are free, open-source alternatives like OpenSCAD?
This is just one example of a superior tool that's natively easy for LLMs to interact with, because the source files are just composable scripts containing lists of shapes and then lists of tools and parameters to apply to the shapes.
I wrote a simple set of system prompts you can use in any repo to show any LLM how to make SCAD files, with a whole bunch of cool examples. This is just another case where walking away from the bloated, inferior feudal system of SaaS and cloud models leads to simpler processes and superior outcomes, in less time, for free.
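The "composable scripts" point is easy to demonstrate (a generic toy helper, not from the prompts mentioned above): OpenSCAD models are literally nested function calls, so any program, or LLM, that can manipulate text can build them up compositionally.

```python
def scad_call(name, *args, children=()):
    """Compose one OpenSCAD call: name(args); or name(args) { children }."""
    arg_s = ", ".join(str(a) for a in args)
    if not children:
        return f"{name}({arg_s});"
    body = "\n".join("    " + line
                     for c in children
                     for line in c.splitlines())
    return f"{name}({arg_s}) {{\n{body}\n}}"

# A washer: the difference of two cylinders, lifted 2 mm off the build plate.
washer = scad_call("translate", [0, 0, 2], children=[
    scad_call("difference", children=[
        scad_call("cylinder", "h=3", "r=10"),
        scad_call("cylinder", "h=5", "r=4"),
    ]),
])
print(washer)
```

Every shape and boolean operation is just another node in a tree of plain text, which is exactly the representation LLMs handle best.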
> Adam is now a harness that integrates directly with your CAD
It does not integrate with "my" CAD, which happens to be none of the two closed-source, closed-ecosystem, commercial products you built your tool for.
Text-to-CAD? No please, sounds like a really bad idea.
My friend is an electrical engineer. He designs circuit boards for a living. We were having dinner the other night, and when the topic of AI came up he told me rather confidently that he didn't think AI was coming for his job anytime soon.
I kind of cautiously disagreed. He told me that the applications he used had no tooling for AI.
I basically said "give it six months". Judging from my googling just now, it's already here.
As someone with a background in mechanical engineering, I'd love to be able to automate CAD design as it's quite tedious and only fun like 5% of the time, but I've tried these tools and I really don't think text-to-CAD is the right approach. It usually takes longer for me to come up with an accurate written prompt to fully dimension what I need than to just grab my space mouse and do it.