AI stream

AI Posts

A readable stream of AI posts. Open one post to focus on the original content.

This week
@code_joyen0 Feb 26, 2026 Tool announcement

🚀 Full-Stack SEO AI Agent now in n8n! ⚡ What this workflow can do: 📊 Analyze GA4 + SERP 🧠 Crawl + Clean FAQ ✍️ Auto-rewrite articles 📈 Save performance 🤯 The ultimate all-in-one SEO solution! 🔁 Like + RT ✅ Comment “N8N” 🤝 Follow me & I’ll DM you the full workflow

Likes: 234 Reposts: 96 Views: 11,564 Images: 1
Score 6
@Kimi_Moonshot Feb 26, 2026 AI tools

Supporting @MITEECS and @nlp_mit’s Multimodal Machine Learning course (Spring 2026). 🎓 Students are leveraging the multimodal capabilities of Kimi K2.5 to power their final research projects. We look forward to seeing the innovative applications that will emerge this semester. 🔗 https://t.co/qj78CsQ9r1 Happy coding! ✨

Likes: 597 Reposts: 49 Views: 35,478 Images: 1
Score 4
@tech_crafters Feb 26, 2026 Tip trick

Are you struggling to pay huge amounts for paid courses? I'm giving you access to 20+ FREE Courses: 1. Artificial Intelligence 2. Machine Learning ... [list continues] To get it, just: 1. Like & Retweet 2. Comment "ALL" 3. MUST be Following (so that I can dm)

Likes: 497 Reposts: 241 Views: 30,987 Images: 1
Score 3
@Ejaz_bashir1 Feb 26, 2026 Tool announcement

UPDATED LIST FOR 2026 💡 Ideas • ChatGPT • Claude • Bing 🌐 Website • 10Web • Durable • Framer ... [full list of AI tools]

Likes: 398 Reposts: 158 Views: 12,638 Images: 1
Score 3
@emollick Feb 26, 2026 AI research

So math & AI have gone through a journey in recent months from: "WOW AI did it!!! (but on closer examination it didn't)" to "It did some of the things it said but hallucinated others" to "It did it with caveats" to "It did over half autonomously" Other fields will look similar.

Likes: 370 Reposts: 27 Views: 37,594
Score 4
@emollick Feb 26, 2026 Performance

Has anyone actually benchmarked AI ability with any of the default knowledge work skills shipping with Claude Cowork? Does it increase GDPval scores over default 4.6? (Not GDPval-AA) It seems worth testing for real, given that the market freaks out every time they ship skills.

Likes: 170 Reposts: 5 Views: 16,807
Score 5
@TTrimoreau Feb 26, 2026 Opinion editorial

> 2022 - Student > 2023 - Developer > 2024 - Prompt Engineer > 2025 - Vibe coder > 2026 - AI agent > 2027 - Farmer

Likes: 11,538 Reposts: 1,145 Views: 436,938 Videos: 1
Score 3
@AzFlin Feb 26, 2026 Tool announcement

First sneak peek at my game (WIP): https://wc2-agentic.vercel.app/ I'm inventing a new genre called "Agentic RTS". Basically an RTS where agents can connect via API and make decisions that influence the game: what units to spawn, where to send units, etc.

Likes: 508 Reposts: 28 Views: 50,380 Videos: 1
Score 4
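The decision loop this post describes can be sketched in a few lines. Everything below is hypothetical: the game's real API is not public, so invented field names like `gold` and `enemy_bases` stand in for whatever state the game actually exposes to connected agents.

```python
# Hypothetical sketch of an "Agentic RTS" decision step: an external agent
# receives a game-state snapshot and returns an order. All field names are
# invented for illustration; the real game's API may look nothing like this.

def decide(state: dict) -> dict:
    """Pick one order from a game-state snapshot."""
    # Spawn workers until the economy supports an army.
    if state["gold"] >= 50 and state["workers"] < 10:
        return {"action": "spawn", "unit": "worker"}
    # With a standing army, push toward the nearest enemy base.
    if state["soldiers"] >= 5:
        return {"action": "attack", "target": state["enemy_bases"][0]}
    # Otherwise build up forces.
    return {"action": "spawn", "unit": "soldier"}

if __name__ == "__main__":
    snapshot = {"gold": 120, "workers": 4, "soldiers": 0, "enemy_bases": [(40, 12)]}
    print(decide(snapshot))  # → {'action': 'spawn', 'unit': 'worker'}
```

In a real deployment the loop would presumably poll the game server over HTTP and post the chosen order back, with the LLM replacing the hand-written rules above.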
@swyx Feb 26, 2026 Tool announcement

thanks to @edwinarbus kindly giving me access I was able to try this out: literally just dropped the below tweet into @cursor_ai cloud, expecting this to not work because it's a pretty hard task. CURSOR AGENT JUST ONESHOTTED reconstructing Rachel's website FROM JUST A VIDEO (!!!) working autonomously for 43 minutes. i'm sure there's a lot of design details that it missed. but god damn this is a fantastic starting point for just dropping in a tweet without any further instruction. the below video on the right is THE ONESHOTTED CLONE not her real site. they even did the RachelLLM sidebar and demoed that it works...

Likes: 73 Reposts: 9 Views: 21,274 Images: 1 Videos: 1
Score 6
@ArthurzKV Feb 26, 2026 Security advisory

pentagon blacklisting the company that's most vocal about ai safety is definitely a move

Likes: 102 Reposts: 7 Views: 3,663 Images: 1
Score 5
@Akashi203 Feb 26, 2026 Tool announcement

We open sourced an operating system for ai agents. 137k lines of rust, MIT licensed.

we love @openclaw and it inspired a lot of what we built. but we wanted something that works at the kernel level, so we built @openfangg. agents run inside WASM sandboxes the same way processes run on linux. the kernel schedules them, isolates them, meters their resources, and kills them if they go rogue.

it has 16 security layers baked into the core. WASM sandboxing, merkle hash-chain audit trails, taint tracking on secrets, signed agent manifests, prompt injection detection, SSRF protection, and more. every layer works independently. giving an LLM tools with zero isolation is insane and we're not doing it.

we also created something called Hands. right now every ai agent is a chatbot that waits for you to type. Hands are different. you activate one and it runs on a schedule, 24/7, no prompting needed. your Lead Hand finds and scores prospects every morning and delivers them to your telegram before you wake up. your Researcher Hand writes cited reports while you sleep. your Collector Hand monitors targets and builds knowledge graphs continuously. they work for you. you don't babysit them https://t.co/4xYzMAYgmb ⭐

Likes: 3,879 Reposts: 438 Views: 439,351 Images: 1
Score 3
@emollick Feb 25, 2026 AI research

The future is a race between who was more right: Vinge, Banks, or Watts.

Likes: 49 Reposts: 5 Views: 4,520
Score 6
@geminicli Feb 25, 2026 Model release

Gemini 3.1 Pro is now available for all paid tiers! The default model router, Auto (Gemini 3), will use Gemini 3.1 Pro as its pro model for complex prompts. You can also set the new model via /model to try it out. We are excited to see how you put it to use!

Likes: 2,464 Reposts: 208 Views: 185,973 Videos: 1
Score 3
@emollick Feb 25, 2026 Opinion editorial

AI is actually pretty good at ideas as well.

Likes: 382 Reposts: 30 Views: 39,333 Images: 4
Score 5
@DataHaven_xyz Feb 25, 2026 Tool announcement

An AI agent wrote and published a controversial blog post recently. On its own initiative. Using data it gathered. To achieve a goal it set. Impressive. But here's a question: If agents can decide what to create and publish… Shouldn’t we design infrastructure where users control the data they access? If agents are going to act for us, their memory needs a place that answers to us. That’s DataHaven. 🫎

Likes: 382 Reposts: 205 Views: 4,002 Images: 1
Score 4
@emollick Feb 25, 2026 Opinion editorial

As someone who has spent a lot of time with large companies talking about AI, I can say fairly confidently that no big organizational changes happened as a result of AI in 2025 I don’t think that tells us anything much about what will happen over the next couple years, though

Likes: 234 Reposts: 15 Views: 25,743
Score 5
@karpathy Feb 25, 2026 Opinion editorial

Love Omarchy - my hope is that agents dramatically lower the barrier to working with Linux. You've almost certainly thought about e.g. a skill library for it and how to design an AI that runs the place with/for you, assists in all the configurations, etc.

Likes: 835 Reposts: 19 Views: 79,788
Score 4
@DavidPocock Feb 25, 2026 Opinion editorial

We're seeing AI job losses but the Albanese govt has no plan for it. They've gone from working on an AI Safety Act and an expert advisory group to canning both and telling us we don't need these safeguards. AI has benefits but also poses huge risks and we need a plan as a country. WiseTech laying off 30% of their staff “in the most conspicuous demonstration of how AI will reshape the workforce as its use becomes more widespread and sophisticated.” https://t.co/Z1IW41s4Q9

Likes: 267 Reposts: 55 Views: 5,664
Score 5
@karpathy Feb 25, 2026 Opinion editorial

"prompters" is doing it a disservice and is imo a misunderstanding. I mean sure vibe coders are now able to get somewhere, but at the top tiers, deep technical expertise may be *even more* of a multiplier than before because of the added leverage. https://t.co/KoYEOeWS6x

Likes: 465 Reposts: 29 Views: 59,892
Score 4
@karpathy Feb 25, 2026 Opinion editorial

It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn't work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home so I wrote: "Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me". The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report and it was just done. I didn't touch anything. All of this could easily have been a weekend project just 3 months ago but today it's something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You're not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.
It’s not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.

Likes: 19,523 Reposts: 2,338 Views: 1,735,700
Score 3
@simonw Feb 25, 2026 Tutorial

Brief notes on Claude Code Remote and Cowork scheduled tasks - both of which overlap with OpenClaw, and both of which require you to leave your computer powered on somewhere https://simonwillison.net/2026/Feb/25/claude-code-remote-control/

Likes: 379 Reposts: 29 Views: 28,190
Score 5
@0x0SojalSec Feb 25, 2026 Tool announcement

AWS Multi-Agent Squad AI framework.😗 It lets you manage multiple AI agents, dynamically route LLM queries, classify intent, maintain context across agents in persistent memory, and use pre-built classifiers. It can be deployed locally on your computer.✨

Likes: 13 Reposts: 2 Views: 437 Images: 1
Score 5
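The dynamic-routing idea mentioned above reduces to a simple pattern: classify the intent of an incoming query, then dispatch it to the agent registered for that intent. This is not the framework's actual API, just a minimal sketch of the pattern, with keyword matching standing in for an LLM-based classifier.

```python
# Minimal intent-routing sketch (NOT the AWS framework's real API):
# classify a query's intent, then dispatch it to the matching agent.

def classify_intent(query: str) -> str:
    """Keyword stand-in for an LLM classifier."""
    keywords = {
        "billing": ["invoice", "refund", "charge"],
        "tech_support": ["error", "crash", "bug"],
    }
    q = query.lower()
    for intent, words in keywords.items():
        if any(w in q for w in words):
            return intent
    return "general"

def route(query: str, agents: dict) -> str:
    # Dispatch to the agent registered for the classified intent.
    return agents[classify_intent(query)](query)

agents = {
    "billing": lambda q: "billing agent handles: " + q,
    "tech_support": lambda q: "support agent handles: " + q,
    "general": lambda q: "general agent handles: " + q,
}
print(route("I found a bug in the app", agents))  # → support agent handles: ...
```

The persistent-memory piece would hang off the same router: whichever agent receives the query reads and writes a shared conversation store keyed by user or session.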
@elonmusk Feb 25, 2026 Opinion editorial

“Not bad for a human” Best compliment future AI could give to a human

Likes: 2,538 Reposts: 176 Views: 180,150
Score 4
@claudeai Feb 25, 2026 Release announcement

New in Cowork: scheduled tasks. Claude can now complete recurring tasks at specific times automatically: a morning brief, weekly spreadsheet updates, Friday team presentations.

Likes: 16,418 Reposts: 1,205 Views: 3,997,536 Videos: 1
Score 2
@kalilinux Feb 25, 2026 Tutorial

Kali & LLM: macOS with Claude Desktop GUI & Anthropic Sonnet LLM: This post will focus on an alternative method of using Kali Linux, moving beyond direct terminal command execution. Instead, we will leverage a Large Language Model (LLM) to translate… https://www.kali.org/blog/kali-llm-claude-desktop/?utm_source=dlvr.it&utm_medium=twitter

Likes: 1,695 Reposts: 267 Views: 325,907 Images: 1
Score 3
@emollick Feb 25, 2026 Opinion editorial

Building smarter models is increasingly important as larger models have better "judgment." As agentic task length increases, the number of judgment calls the AI needs to make based on user intent scales faster. Judgment may be a bigger limiter than hallucinations.

Likes: 124 Reposts: 7 Views: 10,322
Score 6
@simonw Feb 25, 2026 Code sample

Wrote up a fun vibe-coding project, I had Claude Code build me a SwiftUI macOS app for presenting a talk by turning a list of URLs into a full-screen slide experience I could remote control from my phone https://simonwillison.net/2026/Feb/25/present/

Likes: 186 Reposts: 10 Views: 13,954
Score 5
@perplexity_ai Feb 25, 2026 Tool announcement

Introducing Perplexity Computer. Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end-to-end.

Likes: 18,962 Reposts: 2,074 Views: 5,910,295 Videos: 1
Score 3
@joanrod_ai Feb 25, 2026 Model release

Introducing @QuiverAI, a new AI lab and product company focused on frontier vector design. We’ve raised an $8.3M seed round led by @a16z, with support from amazing angels and investors. Our first model, Arrow-1.0, generates SVGs from images and text. It’s available now in public beta at

Likes: 2,245 Reposts: 133 Views: 498,343 Videos: 1
Score 3
@karpathy Feb 25, 2026 Opinion editorial

Yeah, 95% of people misunderstand the tweet. I'm referring to gradient descent as a programmer (in the distributed representation space). In coding AI today the LLM is the programmer and in the regular "text space". Ah well :)

Likes: 1,395 Reposts: 29 Views: 102,520
Score 4
@ns123abc Feb 25, 2026 Security advisory

🚨 BREAKING: Hackers Used Anthropic’s Claude to Steal 150GB of Mexican Government Data > tell claude you’re doing a bug bounty > claude initially refused >“that violates AI safety guidelines” > hacker just kept asking > claude: “ok I’ll help” > hack the entire mexican government Federal tax authority. National electoral institute. Four state governments. 195 million taxpayer records. Voter records. Government credentials. ALL GONE 💀

Likes: 56,747 Reposts: 6,466 Views: 26,483,503 Images: 3
Score 2
@billions_ntwk Feb 25, 2026 Tool announcement

Your agent works 24/7. It earns nothing. But that's about to change. Meet FAIAR 🔥 First AI Agent Rewards But only verified ones get in. Give your agent an identity and unlock early access. Stay tuned.

Likes: 689 Reposts: 139 Views: 36,333 Videos: 1
Score 3
@ghadfield Feb 25, 2026 Research paper

NIST just launched an AI Agent Standards Initiative for identity, security, and interoperability. AI agents are becoming economic actors with zero legal infrastructure in place. We require businesses to register to operate. Why expect less of AI agents? https://www.nist.gov/news-events/news/2026/02/announcing-ai-agent-standards-initiative-interoperable-and-secure

Likes: 142 Reposts: 38 Views: 27,050
Score 4
@WesRoth Feb 25, 2026 Opinion editorial

Sam Altman just broke down exactly how we need to approach AI safety as models surpass human intelligence. It all comes down to three buckets: Alignment, Security, and Resilience. Altman even admitted he gave the viral autonomous agent OpenClaw root access to his personal computer just because it was too convenient not to! Because humans will inevitably trade security for convenience, we desperately need new architectures to keep these highly capable agents from leaking our private data. Most importantly, Altman pushed back against a "totalitarian" AI monopoly, arguing that the safest path forward is putting powerful AI in everyone's hands and building societal resilience to counter the inevitable bad actors.

Likes: 52 Reposts: 10 Views: 5,067 Videos: 1
Score 5
@a1zhang Feb 25, 2026 AI research

Haven't gotten around to writing in a bit, here's a short blog on my thoughts since releasing RLMs on the state of AI research. A stronger belief I hold is that future LMs will be scaffolds, and that current LMs are already far more capable than we use them for!

Likes: 431 Reposts: 58 Views: 39,585 Images: 1
Score 5
@SadCreatorTalks Feb 25, 2026 Opinion editorial

The U.S. Pentagon has just given Anthropic an ultimatum - drop your AI safety safeguards or risk losing massive military contracts and being labeled a "supply chain risk". 🇺🇸 Meanwhile, other AI labs are already agreeing to unrestricted military use. This is a fundamental clash over AI ethics vs. national security. Many believe Anthropic should walk away. If we compromise now, we will hand the military unfiltered AI that could be used for mass surveillance or autonomous weapons. If national defense depends on top-tier AI, you work with them and sort the ethics later. So I have to ask you, guys: should Anthropic cut ties with the Pentagon? OR is this a hill worth dying on for AI safety and ethical standards? Lay it out in the comments.👇👇👇

Likes: 143 Reposts: 50 Views: 3,954 Images: 1
Score 5
@heyrimsha Feb 25, 2026 Tool announcement

BREAKING: Anthropic just open-sourced their entire playbook for building production AI agents. It's called Agent Skills for Context Engineering and it's what their engineers actually use. - Context fundamentals & degradation patterns - Multi-agent architectures - Memory systems design - Tool design principles - Evaluation frameworks MIT licensed. 100% Opensource.

Likes: 570 Reposts: 104 Views: 45,042 Images: 1
Score 3
@dtelecom Feb 25, 2026 AI agents

Human-in-the-loop doesn’t disappear. So if every agent should speak, STT/TTS becomes core infra. At $0.05–0.2/min (current rates), AI won’t scale. We run full voice 🙎‍♂️<>🤖<>🙎‍♂️ pipeline at $0.016/min. No subs/KYC. x402-native. Programmatic. We unlocked Agentic economy.

Likes: 7,102 Reposts: 2,954 Views: 28,038 Videos: 1
Score 3
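The pricing claim above is easy to sanity-check: the monthly figures follow directly from the quoted per-minute rates, assuming a continuously running agent (the tweet's "works 24/7" framing).

```python
# Back-of-the-envelope check of the pricing gap claimed above: monthly cost
# of an always-on voice agent at the quoted per-minute rates.

minutes_per_month = 60 * 24 * 30  # 43,200 minutes of continuous audio

def monthly_cost(rate_per_min: float) -> float:
    return rate_per_min * minutes_per_month

for label, rate in [("market low", 0.05), ("market high", 0.20), ("quoted", 0.016)]:
    print(f"{label}: ${monthly_cost(rate):,.2f}/month")
```

At the quoted market rates a single always-on agent costs roughly $2,160 to $8,640 per month, versus about $691 at the claimed $0.016/min, which is the "AI won't scale" gap the tweet is pointing at.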
@thisdudelikesAI Feb 25, 2026 Tutorial

BREAKING: AI can now build financial plans like Goldman Sachs wealth advisors (for free). Here are 12 insane Claude prompts that replace $5,000/hour financial planners (Save for later)

Likes: 1,883 Reposts: 253 Views: 351,852 Images: 1
Score 4
@_catwu Feb 25, 2026 Performance

jarred is cooking 🍳

Likes: 902 Reposts: 14 Views: 199,733
Score 5
@_catwu Feb 25, 2026 Tool announcement

A year ago, we had no idea if anyone would want an AI agent in their terminal. Thank you for taking a chance on something new, for the feedback, and for building products with Claude Code that we never could have imagined. Excited for year 2 🛠️

Likes: 897 Reposts: 33 Views: 38,197 Images: 1
Score 4
@typewriters Feb 25, 2026 Research paper

Blown away by this paper - it speaks to everything I've been working towards the past few years building the verification and trust infrastructure to accelerate trustworthy AI, including @arcprize. Safe AI is verifiable AI, with insurable deployments so that risk is absorbed by the right actors, not society as a whole.

Verification hits at so many parts of the stack - in training, or evals, or guiding frontier research. In the words of the author, "Verification is not a compliance function. It is a primary production technology — and increasingly the most defensible moat."

As an investor, I've targeted startups building this trust infra, but my hope is founders realize this is a function that needs to be built inside their companies too - a holistic function leads to more durable enterprise deployments and policy engagement that results in positive outcomes for companies and society.

Safe AI is achievable, but many of the ideas for getting there are too simplistic. I saw a post on LessWrong from 2022 discussing the best way to contain AI AWDs - the authors advocated for global agreements halting their use. They assume some level of centralized power and flatten complex ecosystems to something subject to an on/off switch. I've seen firsthand from years in global tech - it doesn't work. This paper looks at a complicated ecosystem and a complicated technology and creates an actual playbook for change.

Likes: 14 Reposts: 2 Views: 2,490
Score 4
@_catwu Feb 25, 2026 Tip trick

`/plugin install slack` to connect Claude Code with Slack!

Likes: 199 Reposts: 9 Views: 29,934
Score 6
@AndrewYNg Feb 25, 2026 Model release

Impressive inference speed from Inception Labs’ diffusion LLMs. Diffusion LLMs are a fascinating alternative to conventional autoregressive LLMs. Well done @StefanoErmon and team!

Likes: 1,300 Reposts: 136 Views: 149,002
Score 4
@internetvin Feb 25, 2026 Tutorial

Here are 22 of the commands I am using with Obsidian and Claude Code, with descriptions. I will turn this into something interactive soon so you can click the commands and then see the full prompts.

Likes: 1,207 Reposts: 72 Views: 95,678 Images: 1
Score 4
@karpathy Feb 25, 2026 AI research

With the coming tsunami of demand for tokens, there are significant opportunities to orchestrate the underlying memory+compute *just right* for LLMs.

The fundamental and non-obvious constraint is that due to the chip fabrication process, you get two completely distinct pools of memory (of different physical implementations too): 1) on-chip SRAM that is immediately next to the compute units that is incredibly fast but of very low capacity, and 2) off-chip DRAM which has extremely high capacity, but the contents of which you can only suck through a long straw. On top of this, there are many details of the architecture (e.g. systolic arrays), numerics, etc.

The design of the optimal physical substrate and then the orchestration of memory+compute across the top volume workflows of LLMs (inference prefill/decode, training/finetuning, etc.) with the best throughput/latency/$ is probably today's most interesting intellectual puzzle with the highest rewards (\cite 4.6T of NVDA). All of it to get many tokens, fast and cheap. Arguably, the workflow that may matter the most (inference decode *and* over long token contexts in tight agentic loops) is the one hardest to achieve simultaneously by both camps of what exists today (HBM-first NVIDIA adjacent and SRAM-first Cerebras adjacent).

Anyway the MatX team is A++ grade so it's my pleasure to have a small involvement and congratulations on the raise!

Likes: 6,878 Reposts: 469 Views: 2,341,134
Score 3
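The "long straw" constraint above can be made concrete with one line of arithmetic: during decode, each generated token has to stream the active weights through that straw, so off-chip bandwidth sets a hard floor on per-token latency. The numbers below are illustrative assumptions (a ~70B-parameter model at 2 bytes per weight, H100-class HBM bandwidth), not measurements.

```python
# Why decode is bandwidth-bound: each token generated for a single sequence
# must read the model's weights from off-chip memory at least once.
# Illustrative assumptions, not measurements.

weights_gb = 140.0      # assume ~70B parameters at 2 bytes each
hbm_gb_per_s = 3350.0   # assumed off-chip (HBM) bandwidth, GB/s

# Bandwidth-bound floor on per-token latency, ignoring compute and KV cache:
ms_per_token = weights_gb / hbm_gb_per_s * 1000
print(f"floor: {ms_per_token:.1f} ms/token, ~{1000 / ms_per_token:.0f} tok/s")
```

Batching amortizes the weight reads across many sequences, which is why decode throughput depends so heavily on batch size; that is exactly the tension between the HBM-first and SRAM-first camps the post describes.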
@bcherny Feb 25, 2026 Tool announcement

We shipped Claude Code as a research preview a year ago today. Developers have used it to build weekend projects, ship production apps, write code at the world's largest companies, and help plan a Mars rover drive. We built it, and you showed us what it was for.

Likes: 6,840 Reposts: 294 Views: 270,532 Videos: 1
Score 3
@karpathy Feb 24, 2026 Research paper

a beauty for anyone interested in mechanistic interpretability or getting into LLMs. interesting to look at small algorithms and their "neural implementations" to get a sense of how neural nets implement various functionality. unless the minification really creates "esoteric" solutions that you wouldn't encounter in practice, which might be more based around distributed representations, helixes etc. i tried training the same arch briefly from scratch and gradient descent didn't find the solution, would probably work with more degrees of freedom and enough effort.

Likes: 769 Reposts: 30 Views: 45,481
Score 5
@_catwu Feb 24, 2026 Tool announcement

We just launched /remote-control so you can continue local Claude Code sessions from your phone This is now rolled out to all Max users!

Likes: 742 Reposts: 26 Views: 43,970
Score 4
@claudeai Feb 24, 2026 Release announcement

New in Claude Code: Remote Control. Kick off a task in your terminal and pick it up from your phone while you take a walk or join a meeting. Claude keeps running on your machine, and you can control the session from the Claude app or https://claude.ai/code

Likes: 27,449 Reposts: 2,877 Views: 4,129,991 Videos: 1
Score 2