Harrison Chase

@hwchase17
Loved this from @karpathy. Over the weekend I built "autoresearch but for agents". Same idea: give an AI coding agent your agent code plus an eval dataset, and let it experiment autonomously overnight. It modifies the code, runs evals via LangSmith, keeps improvements, and discards regressions. You wake up to a better agent. Bring your own agent (any framework or none), dataset, and eval metrics. https://github.com/hwchase17/autoresearch-agents

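The loop the tweet describes (propose a change, run evals, keep improvements, discard regressions) can be sketched as a simple hill climb over the agent's source. Everything below is hypothetical: `propose_patch` stands in for an AI coding agent and `run_evals` for an eval harness such as LangSmith; neither is a real API from those projects.

```python
"""Minimal sketch of an 'overnight agent improvement' loop.

Stand-ins, not real APIs:
- run_evals(): a toy scoring function (a real harness would run the
  agent against an eval dataset and aggregate metrics)
- propose_patch(): a random edit (a real system would ask a coding
  agent to modify the source)
"""
import random


def run_evals(code: str) -> float:
    # Toy metric: score peaks when the source is exactly 30 chars long.
    return 1.0 / (1 + abs(len(code) - 30))


def propose_patch(code: str) -> str:
    # Stand-in for the coding agent editing the agent's source.
    edits = [code + "#", code[:-1] or code, code.replace("#", "", 1)]
    return random.choice(edits)


def improve_overnight(code: str, iterations: int = 200) -> tuple[str, float]:
    best_score = run_evals(code)
    for _ in range(iterations):
        candidate = propose_patch(code)
        score = run_evals(candidate)
        if score > best_score:   # keep improvements
            code, best_score = candidate, score
        # else: discard the regression and try again
    return code, best_score
```

The key property is that the final score can never be worse than the starting score, since regressions are always discarded; the overnight run is a one-way ratchet on whatever metric the evals measure.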

Andrej Karpathy

@karpathy

· Mar 7
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:

- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.

Part code, part sci-fi, and a pinch of psychosis :)
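The loop above (fixed-budget training runs on a feature branch, committing only when validation loss drops) can be sketched in miniature. All names here are hypothetical stand-ins: `train_for_budget` fakes a 5-minute run with a toy loss surface, `agent_edit` fakes the agent's code changes as hyperparameter tweaks, and commits are a plain list rather than real git history.

```python
"""Sketch of an autonomous research loop: each iteration is one
fixed-budget training run, and a 'commit' is recorded only when
validation loss improves. Toy stand-ins throughout."""
import random


def train_for_budget(cfg: dict) -> float:
    # Toy validation loss, minimized at lr=3e-4 and width=512.
    return abs(cfg["lr"] - 3e-4) * 1e3 + abs(cfg["width"] - 512) / 512


def agent_edit(cfg: dict) -> dict:
    # Stand-in for the agent editing hyperparameters in the training .py.
    new = dict(cfg)
    if random.random() < 0.5:
        new["lr"] = cfg["lr"] * random.choice([0.5, 2.0])
    else:
        new["width"] = max(64, cfg["width"] + random.choice([-128, 128]))
    return new


def research_loop(cfg: dict, runs: int = 100):
    commits = []  # stand-in for git commits accumulating on a branch
    best = train_for_budget(cfg)
    for step in range(runs):
        candidate = agent_edit(cfg)
        loss = train_for_budget(candidate)
        if loss < best:  # keep only changes that lower validation loss
            cfg, best = candidate, loss
            commits.append((step, best))
    return cfg, best, commits
```

Because a commit is recorded only on improvement, the commit log is strictly monotone in validation loss, which is what makes branches from different prompts or agents directly comparable.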

[quoted tweet image: scatter of training runs]

5:41 PM · Mar 9, 2026