AI stream

AI Post

@fchollet
Importance score: 4 • Posted: February 27, 2026 at 02:58


Even after the steep progress of the past 3 months, it remains the case that AI performance is tied to task familiarity. In domains that can be densely sampled (via programmatic generation + verification), performance is effectively unbounded and will keep increasing from current levels. In novel, unfamiliar domains, performance remains low, and further progress still requires new ideas, not just more data and compute.

Taelin

@VictorTaelin

Posted: February 27, 2026 at 02:51

Ok, I think my experiment leaving AI working on stuff 24/7 ends here. It doesn't work. Code explodes in complexity, results are not that great, the AI can't get past hard walls (it is still completely unable to even *grasp* SupGen), and it is insanely expensive (I spent ~$1k over the last 2 days). The best results are on the JS compiler, mostly because it is familiar territory (compared to inets), but not worth losing control over the codebase.

I think the dream of having AIs working in the background and making real progress on things that matter (i.e., truly new things) isn't here yet. It is still a machine hard-stuck on its own training data, incapable of thinking out of the box. It is great for building things that were already built, but not new things.

Also, coding normally has the under-appreciated advantage that you're doing two things at the same time: building a codebase *and* learning it. AIs do only half of that. The other half is obviously impossible 🤔

Grok reasoning
Key insight from Keras creator on AI scaling limits in novel domains.

Likes: 582 • Reposts: 58 • Views: 38,714

Tweet ID: 2027216811414974875
Prompt source: ai-influencers-news
Fetched at: February 27, 2026 at 07:00