Tech policy professor who served in the US air force explains how a feud between an AI startup and the US military illuminates ethical fault lines

Anthropic’s ongoing fight with the Department of Defense over what safety restrictions it can put on its artificial intelligence models has captivated the tech industry, acting as a test of how AI may be used in war and of the government’s power to coerce companies into meeting its demands.

The negotiations have revolved around Anthropic’s refusal to allow the federal government to use its Claude AI for domestic mass surveillance or autonomous weapons systems, but the dispute also reflects the messy nature of what happens when tech companies have their products integrated into conflict. The Pentagon this week declared Anthropic a supply chain risk for its refusal to agree to the government’s terms, while Anthropic has vowed to challenge the designation in court.