AI stream

AI Posts

A readable stream of AI posts. Open one post to focus on the original content.

This week
@DrDatta_AIIMS
@DrDatta_AIIMS Feb 22, 2026 Opinion editorial

When you realise that you just joined MBBS and companies start announcing AI doctors who will take your job 5 years later 🤡

Likes: 147 Reposts: 8 Views: 8,801 Videos: 1
Score 6
@simonw
@simonw Feb 22, 2026 Opinion editorial

I'm not so sure about this. Not all, but a lot of SaaS moats really do rely on an implementation complexity that's rapidly fading. Take SAML, for example: a classic case of a feature that is such a nightmare to implement that most SaaS startups delay as long as possible and then hire specialists. If that implementation time drops from months to days, it's yet another little piece of moat that just got eroded away.

Likes: 351 Reposts: 10 Views: 60,393
Score 5
@jrgarciadev
@jrgarciadev Feb 22, 2026 AI tools

I'm building a node-based tool that turns any SVG into an animated SVG using Gemini 3.1 Pro. It preserves the original aesthetics, and the results are insane.

Likes: 3,400 Reposts: 181 Views: 195,830 Videos: 1
Score 4
@akshay_pachaar
@akshay_pachaar Feb 22, 2026 Research paper

Researchers built a new RAG approach that: - does not need a vector DB. - does not embed data. - involves no chunking. - performs no similarity search. And it hit 98.7% accuracy on a financial benchmark (SOTA). Here's the core problem with RAG that this new approach solves: [... full content about PageIndex ...]

Likes: 1,639 Reposts: 207 Views: 126,357 Videos: 1
Score 3
@AndrewBolis
@AndrewBolis Feb 22, 2026 Tutorial

Most people still prompt like it’s 2022. Here’s how to go from basic to expert-level: [ bookmark 🔖 this post for later ]

Level 1: Surface Prompts
- Zero-shot prompt: Just ask without examples and hope for the best.
- One-shot prompt: Provide one example to get slightly better results.
- Few-shot prompt: Share multiple examples to guide the answer.
- Easy tasks: Summarize, rewrite, brainstorm, explain like I'm 5.
This is where most stop. It's quick, but basic. You get generic answers, not high-quality output.

Level 2: Real Work Zone
- Role: Tell the AI who to be and how to sound.
- Tone and style: Define the voice, clarity, or formality.
- Plan → Act → Summarize: Direct the process.
- Define the task: Be specific about what you want.
- Add constraints: Set clear limits and boundaries.
- Provide context: Share background, audience & restrictions.
- Temporary chats: Use ChatGPT without its memory of you.
- Define output format: Bullets, tables, or any structure.
- Tool policy: Turn web browsing on or off.
- Share examples of quality outputs: Set the standard.
- Memory management: Keep projects organized.
This is where quality improves. You get targeted, practical, and useful results.

Level 3: Where the Magic Happens
- Pick the right model: Select the best tool for the job.
- Thinking vs Fast: Decide if you want thorough or quick answers.
- Reasoning instructions: Tell the AI to think step-by-step.
- Chain-of-Thought: Guide logic instead of just giving commands.
- Iteration loop: Review, revise, and improve responses.
- Problem-solving: Focus on the 20% that gets 80% of results.
- Combine role, context, examples & revision for expert-level output.
The deeper you go, the better your results get.

📌 Get Advanced ChatGPT Guide (free): https://t.co/kOBWfKrBaX 👉 Follow me @AndrewBolis for more and 🔄 Repost this to help others use AI
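The zero-shot / one-shot / few-shot ladder in Level 1 is mechanical enough to sketch in code. A minimal prompt builder, where the layout, labels, and sentiment task are my own illustration rather than anything from the post:

```python
def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt: zero-shot when `examples` is empty,
    one-shot with a single example, few-shot with several."""
    parts = [task]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: just the task and the query.
zero = build_prompt("Classify the sentiment as positive or negative.", [], "I loved it")

# Few-shot: examples pin down the expected labels and format.
few = build_prompt(
    "Classify the sentiment as positive or negative.",
    [("Great movie", "positive"), ("Terrible plot", "negative")],
    "I loved it",
)
print(few)
```

The only difference between the three levels is how many worked examples precede the final query; everything else in the prompt stays fixed.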

Likes: 139 Reposts: 44 Views: 11,304 Images: 1
Score 5
@tom_doerr
@tom_doerr Feb 22, 2026 Tool announcement

Visualizes data through AI agents https://github.com/microsoft/data-formulator

Likes: 773 Reposts: 101 Views: 37,458 Images: 1
Score 5
@MrEwanMorrison
@MrEwanMorrison Feb 22, 2026 Research paper

Our current so-called "AI" models do not think. Not one bit. Here's a new paper by Apple that proves it.

Likes: 146 Reposts: 22 Views: 8,095
Score 5
@DailyDoseOfDS_
@DailyDoseOfDS_ Feb 22, 2026 Tutorial

RAG & Fine-tuning in LLMs, explained visually:

Likes: 142 Reposts: 19 Views: 4,672 Images: 1
Score 5
@DataChaz
@DataChaz Feb 22, 2026 Release announcement

🚨 @AnthropicAI just released their 2026 Agentic Coding Trends Verdict → Everyone has become a developer. We moved from single assistants to autonomous agent swarms. They now form teams, work for days on full systems, and let non-techies ship full apps 💥 18-page report in 🧵↓

Likes: 279 Reposts: 45 Views: 24,832 Images: 1
Score 3
@livingdevops
@livingdevops Feb 22, 2026 Code sample

2 AI agents working on the same feature

Likes: 3,767 Reposts: 322 Views: 560,799 Videos: 1
Score 4
@KirkDBorne
@KirkDBorne Feb 22, 2026 Research paper

[Download 496-page PDF eBook] Applied Causal #Inference Powered by #MachineLearning and #AI: https://arxiv.org/abs/2403.02467 ————— #ML #DataScience #Algorithms #Statistics #DataScientist #PredictiveAnalytics

Likes: 216 Reposts: 35 Views: 8,728 Images: 1
Score 4
@emollick
@emollick Feb 22, 2026 Tip trick

Unicorns have always been used to measure sparks of AGI. (This was written by GPT-2 in February, 2019)

Likes: 103 Reposts: 2 Views: 27,457
Score 6
@yutakashino
@yutakashino Feb 22, 2026 Research paper

No, this study itself is flawed and proves nothing of the sort. https://arxiv.org/html/2506.09250v1 And since late last year, semi-autonomous discoveries by AI have become possible, such as a proof of an unsolved Erdős problem: https://mathstodon.xyz/@tao/115855840223258103 and the derivation of an exact solution to a gluon scattering amplitude formula: https://arxiv.org/abs/2602.12176 so the whole "do they think" debate is a trivial side issue…

Likes: 243 Reposts: 73 Views: 40,081
Score 4
@sahill_og
@sahill_og Feb 22, 2026 AI tools

- Claude for coding.
- Supabase for backend.
- Vercel for deploying.
- Namecheap for domain.
- Stripe for payments.
- GitHub for version control.
- Resend for emails.
- Clerk for auth.
- Cloudflare for DNS.
- PostHog for analytics.
- Sentry for error tracking.
- Upstash for Redis.
- Pinecone for vector DB.
You can literally ship a startup from your bedroom now. It’s not that deep bro.

Likes: 9,332 Reposts: 972 Views: 1,063,150
Score 3
@tom_doerr
@tom_doerr Feb 22, 2026 Tool announcement

AI agent for code reviews using SOLID principles https://github.com/sanyuan0704/code-review-expert

Likes: 412 Reposts: 42 Views: 37,710 Images: 1
Score 4
@ylecun
@ylecun Feb 22, 2026 Community discussion

Way more substantial comments on LinkedIn and Facebook than on X for paper announcements. It's been obvious for quite a while that X is lost for science.

Likes: 160 Reposts: 7 Views: 30,858
Score 6
@emollick
@emollick Feb 22, 2026 Opinion editorial

If you have a large pool of people, their "jaggedness" cancels out because they have diverse skills and talents. 1,000 agents of the same model are not the same thing: they have the same weak spots and, potentially, are more vulnerable to groupthink-like problems than humans.

Likes: 134 Reposts: 5 Views: 10,413
Score 6
@emollick
@emollick Feb 22, 2026 Opinion editorial

Jaggedness remains a key feature of LLMs & I have yet to see a clearly articulated argument about why it will disappear. A jagged general intelligence (not quite an oxymoron, as humans are too) still creates lots of bottlenecks that require people & slow many kinds of take-off.

Likes: 424 Reposts: 24 Views: 38,040
Score 4
@MarioNawfal
@MarioNawfal Feb 22, 2026 Tool announcement

Grok 4.20’s multi-agent system now powers Grokipedia in real time. Grok writes, updates, and perfects entries instantly, while Grokipedia feeds Grok a constantly refreshed, truth-focused knowledge base. No corporate spin. No edit wars. No slow human gatekeepers. Source: @grok, @grokipedia

Likes: 477 Reposts: 101 Views: 49,030 Videos: 1
Score 4
@fchollet
@fchollet Feb 22, 2026 Opinion editorial

"But humans will stop using all this software, it will be AI agents instead!" -- Great, then these services will see 10x more usage.

Likes: 191 Reposts: 2 Views: 16,446
Score 5
@fchollet
@fchollet Feb 22, 2026 Opinion editorial

The maximalist form of my thesis is basically this: SaaS is not about code, it is about solving a problem customers have and selling them the solution. Services + sales. If the cost of code goes to *zero*, SaaS will *not* go away. It will *benefit*, since code is a cost center.

Likes: 1,293 Reposts: 96 Views: 68,131
Score 3
@garrytan
@garrytan Feb 22, 2026 Opinion editorial

Software engineering accounts for nearly 50% of all AI agent tool calls. Healthcare, legal, finance, and a dozen other verticals are barely touched, each under 5%. That's a hundred AI unicorns waiting to be built. https://garryslist.org/posts/half-the-ai-agent-market-is-one-category-the-rest-is-wide-open

Likes: 2,732 Reposts: 310 Views: 247,350 Images: 1
Score 3
@MarioNawfal
@MarioNawfal Feb 22, 2026 AI research

🇺🇸 Elon sat down with Tucker to talk about the future of AI. They covered everything from superintelligence to why the tech needs guardrails as it scales fast. “[My perception is that we] need to take AI safety seriously enough. We need transparency, we need people to know what’s going on.” Source: @elonmusk, @tuckercarlson, @TheCaptainEli, Fox News

Likes: 495 Reposts: 94 Views: 98,875 Videos: 1
Score 4
@bcherny
@bcherny Feb 21, 2026 Tip trick

No changes recently. Opus 4.6 and Sonnet 4.6 are more intelligent and use more tokens than previous models. If you want less thinking and lower token usage, run /model and set effort to low or medium.

Likes: 301 Reposts: 0 Views: 20,937
Score 5
@fchollet
@fchollet Feb 21, 2026 Opinion editorial

The best way to use AI is as an interface to information that lets you deepen and improve your own knowledge and mental models. The worst way to use AI is as a crutch to outsource and forsake your own cognition.

Likes: 983 Reposts: 132 Views: 35,144
Score 4
@kernelKain
@kernelKain Feb 21, 2026 General

Different LLMs. Different Personalities. Different Purpose.
> GPT-5.2 (OpenAI) • Boardroom consultant energy • Polished, safe, authoritative by design
> Claude 4.6 (Anthropic) • Reflective ethics professor vibe • Nuanced, cautious, highly articulate
> Gemini 3.1 Pro (Google) • Hyperactive polymath • Jumps across text, video, code, voice seamlessly
> Llama 4 (Meta) • Gritty tinkerer energy • Community-driven, hackable, customizable
> DeepSeek V3.2 / R1 • Quiet math Olympiad • Minimal words, maximum reasoning
> Qwen 3.5 (Alibaba) • Global overachiever • Culturally fluent, pragmatic, business-first
> Grok 4 (xAI) • Edgy back-row commentator • Meme-aware, spicy, culturally plugged-in
> Mistral Magistral (Mistral AI) • Sleek minimalist • Fast, sharp, zero-bloat responses
> Command R+ (Cohere) • Corporate archivist • Structured, factual, citation-driven
> Kimi K2.5 (Moonshot AI) • Unblinking memory champion • Detail-obsessed, long-document master

Likes: 4 Reposts: 0 Views: 133
Score 6
@emollick
@emollick Feb 21, 2026 Research paper

This account keeps posting older papers as new releases with AI generated commentary, but this paper is from June 2025, where it sparked some interesting debate but basically turned out to not be that relevant in the last year as models improved. https://t.co/es7yFdrhE0

Likes: 120 Reposts: 6 Views: 19,673 Images: 1
Score 5
@TheAITimeline
@TheAITimeline Feb 21, 2026 Research paper

🚨 This week's top AI/ML research papers:
- GLM-5
- Experiential Reinforcement Learning
- Image Generation with a Sphere Encoder
- World Action Models are Zero-shot Policies
- Unified Latents
- Fast KV Compaction via Attention Matching
- Adam Improves Muon
- LUCID
- The Molecular Structure of Thought
- Arcee Trinity Large Technical Report
read this in thread mode for the best experience

Likes: 37 Reposts: 2 Views: 3,301
Score 5
@emollick
@emollick Feb 21, 2026 Opinion editorial

A good Claw can already do most lightweight phone “doing” work, and those agents are unoptimized messes right now. Makes me wonder what Apple is giving up by bowing out of the LLM building world. I suspect a lot more than they thought.

Likes: 54 Reposts: 2 Views: 8,480
Score 6
@techwith_ram
@techwith_ram Feb 21, 2026 Research paper

The Ultimate Guide to Fine-Tuning LLMs from Basics to Breakthroughs A comprehensive 114-page paper (2024) exploring fine-tuning techniques from foundational methods to advanced strategies, including extensions to multimodal models and domain-specific applications in medicine and finance. Paper: https://t.co/eK8oH6sOLX Make it your weekend read.

Likes: 374 Reposts: 70 Views: 14,897 Images: 1
Score 4
@srishticodes
@srishticodes Feb 21, 2026 Tutorial

This CLAUDE.md file will make you a 10x engineer 👇 Boris Cherny (creator of Claude Code at Anthropic) shared on X the internal best practices and workflows he and his team actually use with Claude Code daily. Someone turned those threads into a structured CLAUDE.md you can drop into any project. It includes:
• Workflow orchestration
• Subagent strategy
• Self-improvement loop
• Verification before done
• Autonomous bug fixing
• Core principles
This is a compounding system. Every correction you make gets captured as a rule. Over time, Claude's mistake rate drops because it learns from your feedback. If you build with AI daily, this will save you a lot of time.

Likes: 5,586 Reposts: 545 Views: 518,188 Images: 1
Score 3
@socialwithaayan
@socialwithaayan Feb 21, 2026 Tool announcement

🚨 BREAKING: Someone leaked the full system prompts of every major AI tool in one GitHub repo. You can now see exactly how they built:
→ Cursor, Devin AI, Windsurf, Claude Code, Replit
→ v0, Lovable, Manus, Warp, Perplexity, Notion AI
→ 30,000+ lines of hidden instructions exposed
→ The exact rules, tools, and personas behind each product
100% open source

Likes: 1,943 Reposts: 384 Views: 162,883 Images: 1
Score 3
@BharukaShraddha
@BharukaShraddha Feb 21, 2026 Opinion editorial

Google isn’t trying to win the AI race. They’re trying to own the entire AI Agent ecosystem. While everyone argues ChatGPT vs Claude, Google quietly built:
Models → Gemini Pro, Flash, Deep Think, Gemma
Design → Stitch, Whisk, Imagen
Research → NotebookLM, AI Mode
Video → Veo, Flow, Google Vids
Coding → Antigravity IDE, Gemini CLI, Jules
Agents → A2A, ADK, FileSearch API
The scary part? All of these tools talk to each other. That means:
10x faster prototypes
End-to-end AI workflows
Production-ready agents on GCP
The next AI war won’t be model vs model. It’ll be ecosystem vs ecosystem. Save. Share. Build.

Likes: 639 Reposts: 185 Views: 29,164 Videos: 1
Score 3
@dhawalc
@dhawalc Feb 21, 2026 Fine-tuning

Excellent article! "Anatomy of a High-Performance Agent: PEFT"

Key Insights:

The Problem:
1. Large context windows are inefficient:
• Quadratic computational cost
• "Needle in haystack" accuracy degradation
• Expensive and slow
2. Full fine-tuning is unsustainable:
• 70B model = ~140GB
• 10 specialized agents = 1.4TB storage
• Days of GPU training per agent

The Solution: PEFT (Parameter-Efficient Fine-Tuning)
LoRA analogy: Instead of recoloring the entire coloring book, put a transparent overlay with a gradient.
Technical:
• Freeze base model (W₀)
• Train tiny adapter matrices (A, B): ΔW ≈ BA
• Adapter = 10-100MB vs 140GB full model
• Save base model once, swap adapters per task

Benefits for Agent Fleets:
• 1 base model + tiny adapters instead of full models per task
• Shorter prompts (knowledge baked into weights)
• Faster inference (fewer tokens to process)
• Affordable specialization

───

How This Relates to ULTRON:
Memory vs Context Window:
• Article: "Large context is inefficient, bake knowledge into weights"
• ULTRON: "Persistent memory is efficient, don't re-explain everything"
Both solve the same problem:
• PEFT: Specialist models with domain knowledge embedded
• ULTRON: Memory-driven agents that learn and remember

───

Content Angle: Twitter/LinkedIn: "Google Cloud just published the definitive guide on building specialized AI agents without breaking the bank. The key insight: Large context windows are a performance trap. Quadratic costs, accuracy degradation, expensive inference. The solution: PEFT (LoRA) - bake specialization into tiny 100MB adapters instead of retraining 140GB models. Same principle we've been pushing: Don't stuff everything into context. Build memory that persists. PEFT for weights. Persistent memory for experiences. Both beat the context window tax. 🧠" https://t.co/AakR3DuKHm
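The ΔW ≈ BA arithmetic behind the post's storage claims is easy to check. A back-of-envelope sketch for a single d×d weight matrix; the hidden width and rank below are illustrative numbers of my own, not figures from the article:

```python
d, r = 4096, 8  # hidden width of one frozen layer, LoRA rank (illustrative)

# Full fine-tuning updates the whole d x d matrix; LoRA instead stores
# two small matrices B (d x r) and A (r x d) with delta_W = B @ A.
full_params = d * d
adapter_params = 2 * d * r

print(f"full:    {full_params:,} params")
print(f"adapter: {adapter_params:,} params "
      f"({adapter_params / full_params:.2%} of full)")

def fleet_params(tasks: int) -> tuple[int, int]:
    """Cost of a fleet: one full copy per task vs one shared base
    plus one tiny adapter per task."""
    return full_params * tasks, full_params + adapter_params * tasks
```

At rank 8 the adapter is well under 1% of the frozen matrix, which is why a fleet of specialized agents can share one base model and swap adapters per task instead of storing a full copy each.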

Likes: 1 Reposts: 0 Views: 42
Score 5
@mkbijaksana
@mkbijaksana Feb 21, 2026 Tip trick

Are you still using ChatGPT, Gemini, or Grok? Here's something less mainstream: a SERIOUSLY UNDERRATED LLM. It's called Qwen. It's been around a while, but somehow it never got as popular as the others. And this LLM is REALLY GOOD! And it's FREE! Let me demo its features for you, from generating images all the way to BUILDING GAMES! Watch it here

Likes: 1,359 Reposts: 230 Views: 38,938 Videos: 1
Score 4
@goyalshaliniuk
@goyalshaliniuk Feb 21, 2026 Tutorial

Not all AI agents are built the same. So what sets them apart? Here’s a breakdown of 10 core types of AI agents you’ll come across in real-world systems, from simple reactive agents to complex multi-agent systems.
1. Task-Specific AI Agent
Built for one focused task like summarizing or translating. It follows a fixed process with no learning or adaptation.
2. Reactive Agent
Responds to immediate input without using memory or history. Think of it like a reflex - it reacts, not plans.
3. Model-Based Agent
Builds an internal map of its environment. Simulates outcomes before acting to make smarter, context-aware decisions.
4. Goal-Based Agent
Starts with a goal and works backward. It plans steps, simulates paths, and selects the route that achieves the goal.
5. Utility-Based Agent
Chooses actions based on how beneficial they are. It weighs all options and picks the one with the highest value.
6. Learning Agent
Improves over time by learning from past actions. Adjusts its strategy using feedback and stores new knowledge.
7. Planning Agent
Focuses on long-term strategy. It defines a goal, maps out steps, and adjusts based on progress, not just reaction.
8. Reflex Agent with Memory
Uses preset rules but with added memory of past inputs. Helps respond better when situations repeat or evolve.
9. Multi-Agent System Agent
Works with or against other agents. They share environments, negotiate roles, and coordinate to reach a bigger goal.
10. Rational Agent
Always selects the most logical option. It analyzes the full picture, predicts outcomes, and chooses the smartest path.
Save this if you're exploring Agentic AI or designing intelligent decision-making systems.
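The reactive-vs-model-based distinction at the top of the list is the easiest to show in code. A toy sketch of my own (the percepts and actions are invented for illustration): the reflex agent maps the current percept straight to an action, while the model-based agent also consults state it remembers:

```python
def reflex_agent(percept: str) -> str:
    """No memory: the same percept always produces the same action."""
    rules = {"obstacle": "turn", "clear": "forward"}
    return rules.get(percept, "wait")

class ModelBasedAgent:
    """Keeps an internal model (locations already visited), so its
    action can depend on history, not just the current percept."""

    def __init__(self) -> None:
        self.visited: set[str] = set()

    def act(self, location: str, percept: str) -> str:
        seen = location in self.visited
        self.visited.add(location)
        if percept == "obstacle":
            return "turn"
        # New territory is worth exploring; revisits just pass through.
        return "forward" if seen else "explore"
```

Calling the reflex agent twice with the same percept always yields the same action; the model-based agent can answer differently the second time it sees a location, which is exactly the "internal map" the post describes.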

Likes: 185 Reposts: 58 Views: 7,540 Videos: 1
Score 5
@emollick
@emollick Feb 21, 2026 AI research

Billions of dollars going to training, thousands of dollars going to independent benchmarking.

Likes: 237 Reposts: 11 Views: 19,530
Score 5
@karpathy
@karpathy Feb 21, 2026 Fine-tuning

Cool! I only had a quick skim earlier today but really enjoyed a number of ideas even unrelated to the claw part, esp around the skills system. In deep learning there were a number of meta learning approaches (e.g. the MAML paper in 2017) where the goal is to optimize the model such that it finetunes to any new task in very few steps. Like - the most potent model. I always wondered what the equivalent of that is in traditional software. The most easily forkable repo. Was reminded of that.

Likes: 451 Reposts: 9 Views: 52,528
Score 4
@swyx
@swyx Feb 21, 2026 Performance

yesterday we chatted with @martin_casado and @sarahdingwang on the pod and he happened to do basic math™ on the logic of asics today @taalas_inc launched their HC1 asic that can inference 17k tok/s. Sure, it's a shitty 3.1 8B today which is a 1.5 year gap. But read the details to the HC2 this winter, and do the math — this timeline will converge to 0 in the next 2 years. Build accordingly.

Likes: 270 Reposts: 27 Views: 68,211 Images: 1
Score 5
@rohanpaul_ai
@rohanpaul_ai Feb 21, 2026 Research paper

Fascinating Google paper: just repeating your prompt 2 times can seriously boost LLM performance, sometimes pushing accuracy from 21% to 97% on certain search tasks.

An LLM reads your prompt left to right, so early words get processed before the model has seen the later words that might change what they mean. If you paste the same prompt again, the model reaches the 2nd copy already knowing the full prompt from the 1st copy, so it can interpret the 2nd copy with the full context. That means the model gets a cleaner “what am I supposed to do” picture right before it answers, instead of guessing too early and sticking with a bad setup.

This helps most when the task needs details that appear late, like when answer choices show up before the actual question, because the 2nd pass sees both together in the right order. In the Google tests, this simple trick took one hard search-style task from 21.33% correct to 97.33% correct for a model setting with no step-by-step reasoning. Across 7 models and 7 benchmarks, repeating the prompt beat the normal prompt in 47 out of 70 cases, and it never did worse in a statistically meaningful way.

The big deal is that it is almost free to try, it often boosts accuracy a lot, and it shows many LLM mistakes are “reading order” problems rather than pure lack of knowledge.

Paper Link: arxiv.org/abs/2512.14982
Paper Title: "Prompt Repetition Improves Non-Reasoning LLMs"
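The trick itself is one line of string handling. A minimal sketch; the blank-line separator and the example question are my own choices, not the paper's exact formatting:

```python
def repeat_prompt(prompt: str, copies: int = 2) -> str:
    """Duplicate the full prompt so the model reads the second copy
    with the first copy already in its context."""
    return "\n\n".join([prompt] * copies)

# The late-information case the post describes: answer options appear
# before the actual question, so a single left-to-right pass commits early.
prompt = (
    "Options: (a) Paris (b) Rome (c) Madrid\n"
    "Which of the options above is the capital of France?"
)
print(repeat_prompt(prompt))
```

On the second copy, the model has already seen the question when it re-reads the options, which is the "right order" effect the post attributes to the gains.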

Likes: 1,077 Reposts: 209 Views: 61,321 Images: 1
Score 4
@bcherny
@bcherny Feb 21, 2026 Tool announcement

Introducing: built-in git worktree support for Claude Code Now, agents can run in parallel without interfering with one another. Each agent gets its own worktree and can work independently. The Claude Code Desktop app has had built-in support for worktrees for a while, and now we're bringing it to the CLI too. Learn more about worktrees:
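The isolation mechanism here is plain git: each worktree is a separate checkout of the same repository on its own branch. A small sketch using standard `git worktree` commands (this illustrates the underlying git feature, not Claude Code's internal implementation):

```python
import subprocess

def add_worktree(repo: str, branch: str, path: str) -> None:
    """Create a new branch and check it out into an isolated worktree,
    so a second agent can edit `branch` without touching the main checkout."""
    subprocess.run(
        ["git", "-C", repo, "worktree", "add", "-b", branch, path],
        check=True,
    )
```

Two agents given two worktrees share the object database and history but have separate working directories and indexes, so their in-progress edits never collide until merge time.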

Likes: 9,358 Reposts: 720 Views: 1,003,189 Images: 1
Score 3
@fchollet
@fchollet Feb 21, 2026 Opinion editorial

If you're looking to buy a Mac Mini, wait 4-6 months, a lot of used Mac Minis in mint condition are about to hit the market

Likes: 12,046 Reposts: 372 Views: 672,477
Score 3
@karpathy
@karpathy Feb 21, 2026 AI agents

First there was chat, then there was code, now there is claw. Ez

Likes: 2,756 Reposts: 146 Views: 213,688
Score 3
@claudeai
@claudeai Feb 20, 2026 Tool announcement

Claude Code on desktop can now preview your running apps, review your code, and handle CI failures and PRs in the background. Here’s what's new:

Likes: 14,368 Reposts: 1,205 Views: 3,687,476 Videos: 1
Score 2
@claudeai
@claudeai Feb 20, 2026 Tool announcement

Introducing Claude Code Security, now in limited research preview. It scans codebases for vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix issues that traditional tools often miss. Learn more: https://www.anthropic.com/news/claude-code-security

Likes: 32,436 Reposts: 3,417 Views: 12,161,039 Videos: 1
Score 1
@shaoruu
@shaoruu Feb 20, 2026 Tip trick

1. go to chrome dev tools
2. in memory tab, take a snapshot & download
3. drop it into @cursor_ai
@cursor_ai will write python scripts to analyze the snapshot and point out what's making your website feel sluggish
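Step 3's analysis can be sketched directly: a Chrome heap snapshot is JSON whose flat `nodes` array is described by `snapshot.meta.node_fields`. A minimal tally of retained self-size by object name, as one example of the kind of script an agent might write here (verify the field names against your own snapshot's meta section):

```python
import json
from collections import Counter

def top_retainers(path: str, n: int = 5) -> list[tuple[str, int]]:
    """Sum heap-snapshot node self sizes, grouped by node name.
    The field layout is read from the file's own meta section."""
    with open(path) as f:
        snap = json.load(f)
    fields = snap["snapshot"]["meta"]["node_fields"]  # e.g. ["type", "name", "self_size", ...]
    stride = len(fields)
    name_i, size_i = fields.index("name"), fields.index("self_size")
    strings, nodes = snap["strings"], snap["nodes"]
    sizes: Counter = Counter()
    for i in range(0, len(nodes), stride):
        # The name field is an index into the shared strings table.
        sizes[strings[nodes[i + name_i]]] += nodes[i + size_i]
    return sizes.most_common(n)
```

The names dominating the output (detached DOM nodes, closure names, large arrays) are usually the first things to investigate for sluggishness.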

Likes: 2,013 Reposts: 102 Views: 118,125 Images: 1
Score 4
@agent_wrapper
@agent_wrapper Feb 20, 2026 Code sample

We just open-sourced the system we use to manage 30 parallel AI coding agents per person. 40K lines of TypeScript. 3,288 tests. 17 plugins. Built in 8 days, by the agents it orchestrates. Yes, we used Agent Orchestrator to build Agent Orchestrator.
Some numbers:
→ 500+ agent-hours in 24 human-hours (20x leverage)
→ 86 of 102 PRs created by AI (84%)
→ After Day 4, I stopped writing code entirely
Spawn agents. Step away. Ship faster.

Likes: 753 Reposts: 67 Views: 68,242 Images: 2
Score 4
@mattpocockuk
@mattpocockuk Feb 20, 2026 Ai workflows

Here's my AI coding workflow and all the skills I'm using:
Idea -> /write-a-prd -> PRD
PRD -> /prd-to-issues -> Kanban Board
Kanban -> ralph.sh -> Ralph Loop
Ralph Loop -> Manual QA
Links below to skills

Likes: 1,785 Reposts: 107 Views: 180,562 Videos: 1
Score 4
@arstechnica
@arstechnica Feb 20, 2026 Security advisory

An AI coding bot took down Amazon Web Services https://arstechnica.com/ai/2026/02/an-ai-coding-bot-took-down-amazon-web-services/

Likes: 359 Reposts: 92 Views: 217,387
Score 2
@Ronycoder
@Ronycoder Feb 20, 2026 Tutorial

Claude FULL COURSE 1 HOUR (Build & Automate Anything)

Likes: 10,318 Reposts: 1,514 Views: 699,284 Videos: 1
Score 3