Anthropic is issuing a call to action against AI "distillation attacks" after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot and MiniMax have been conducting "industrial-scale campaigns…to illicitly extract Claude’s capabilities to improve their own models."

In the AI world, distillation refers to the practice of training a less capable model on the responses of a more powerful one. While distillation isn't inherently harmful, Anthropic said these kinds of attacks can be put to more nefarious use. According to the company, the three Chinese AI firms were responsible for more than "16 million exchanges with Claude through approximately 24,000 fraudulent accounts." From Anthropic's perspective, these competitors were using Claude as a shortcut to develop more advanced AI models, which could also let them circumvent certain safeguards.