Hacker News

lewtun, yesterday at 4:14 PM

Shameless plug: https://huggingface.co/spaces/smolagents/ml-intern

It’s a simple harness around Opus, but with tight integration with Hugging Face infra, so the agent can read papers, test code, and launch experiments.


Replies

westurner, yesterday at 7:06 PM

What are the benchmarks for this, in terms of computation cost, error rate, and cost to converge?

Re: hyperparameter tuning and autoresearch: https://news.ycombinator.com/item?id=47444581
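"Cost to converge" for a hyperparameter tuner can be made concrete by counting trials until a target metric is reached. A minimal sketch, assuming a toy invented objective standing in for a real training run (the loss surface, search space, and budget here are all illustrative, not from the linked thread):

```python
import random

def evaluate(lr, batch_size):
    # Toy stand-in for a training run's validation loss (invented objective):
    # minimized near lr=0.01, batch_size=64.
    return (lr - 0.01) ** 2 + (batch_size - 64) ** 2 / 1e4

def random_search(target_loss=0.001, budget=500, seed=0):
    """Count evaluations (a proxy for compute cost) until loss <= target."""
    rng = random.Random(seed)
    best = float("inf")
    for trials in range(1, budget + 1):
        lr = 10 ** rng.uniform(-4, -1)            # log-uniform learning rate
        bs = rng.choice([16, 32, 64, 128, 256])   # discrete batch sizes
        best = min(best, evaluate(lr, bs))
        if best <= target_loss:
            return trials, best                   # cost to converge, in trials
    return budget, best                           # budget exhausted

trials, loss = random_search()
print(f"converged after {trials} trials, best loss {loss:.5f}")
```

Benchmarking a tuner then reduces to reporting the trial count (times per-trial compute cost) across seeds, which is the kind of number the question above is asking for.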

Parameter-free LLMs would be cool.