In a recent MIT exercise, researchers had non-scientist students use ChatGPT to seek information on acquiring DNA that could be used to make pathogens with pandemic potential. Through the chatbot, the students gained knowledge of how to create dangerous material in the lab and how to evade biosecurity measures. The experiment drew attention to the impact of artificial intelligence tools on the biothreat landscape, and to how such applications contribute to global catastrophic biological risks.
This article was written by Matthew E. Walsh and originally published by the Bulletin of the Atomic Scientists.