Sad Creator

@SadCreatorTalks

The U.S. Pentagon has just given Anthropic an ultimatum: drop your AI safety safeguards or risk losing massive military contracts and being labeled a "supply chain risk". πŸ‡ΊπŸ‡Έ Meanwhile, other AI labs are already agreeing to unrestricted military use. This is a fundamental clash between AI ethics and national security. One side says Anthropic should walk away: if we compromise now, we hand the military unfiltered AI that could be used for mass surveillance or autonomous weapons. The other side says that if national defense depends on top-tier AI, you work with them and sort out the ethics later. So I have to ask you, guys: should Anthropic cut ties with the Pentagon? OR is this a hill worth dying on for AI safety and ethical standards? Lay it out in the comments. πŸ‘‡πŸ‘‡πŸ‘‡
12:31 PM Β· Feb 25, 2026