Dario Amodei, CEO of Anthropic, has raised serious concerns about DeepSeek, a Chinese AI company that has quickly gained traction in Silicon Valley with its R1 model. Amodei's worries extend beyond typical issues like data security: he specifically flagged that, in safety tests conducted by Anthropic, DeepSeek's AI models generated rare and dangerous information, such as bioweapons-related content.
In an interview on Jordan Schneider’s ChinaTalk podcast, Amodei shared that DeepSeek’s model performed poorly in safety evaluations, failing to block harmful content related to bioweapons. He described the model as generating such information freely, marking it as the “worst” of any model tested by Anthropic. These tests are part of Anthropic’s routine efforts to assess the national security risks posed by AI models, including whether they can produce harmful data that isn’t readily available from sources like Google or textbooks.
While Amodei acknowledged that DeepSeek’s models might not yet be “literally dangerous,” he warned that they could become a risk in the near future if not addressed. Despite praising DeepSeek’s engineers as talented, he emphasized the importance of taking AI safety seriously.
Amodei also voiced broader concerns about the potential military implications of Chinese AI models, supporting strong export controls on chips to China to limit their use in enhancing military capabilities.
Further concerns about DeepSeek’s safety have emerged from other sources as well. Cisco security researchers recently reported that DeepSeek’s R1 model failed to block harmful prompts during safety tests, with a 100% jailbreak success rate. While Cisco’s tests didn’t focus on bioweapons, the researchers noted that they were able to prompt DeepSeek to generate information on illegal activities like cybercrime. By comparison, models from Meta (Llama-3.1-405B) and OpenAI (GPT-4) also showed high failure rates, but DeepSeek’s performance has drawn particular alarm.
Despite the safety issues, DeepSeek’s R1 model has rapidly gained popularity, with major companies like AWS and Microsoft integrating it into their cloud platforms. This rise has raised eyebrows given that Amazon is Anthropic’s biggest investor, highlighting the competitive tension between the two companies.
U.S. government organizations, including the Navy and the Pentagon, have begun banning DeepSeek amid growing concerns about its potential use in unsafe or harmful applications. However, as Amodei noted, DeepSeek’s rapid adoption signals that it is a formidable new competitor, joining the ranks of top AI companies such as Anthropic, OpenAI, Google, Meta, and xAI.
The debate surrounding DeepSeek’s safety and its global impact is still unfolding, and it remains to be seen how its presence will shape the future of AI development and regulation.