OpenAI, the organization behind ChatGPT, is taking proactive measures to address the risk that artificial intelligence could aid in the creation of biological weapons. The company is developing an early warning system designed to alert humans if an AI model shows the capability to help create biological threats. OpenAI recently conducted tests on large language models, including those powering ChatGPT, to assess the risk of AI contributing to the development of such threats.
The preliminary findings indicate that AI offers only a small advantage to humans attempting to create biological weapons. OpenAI emphasizes that these results are not conclusive and that further studies are essential to fully understand the extent of the threat posed by artificial intelligence in this context.
This research is part of OpenAI’s broader initiative known as the “Preparedness Framework,” which aims to evaluate the security risks associated with AI. The company is actively seeking input and collaboration to improve its understanding of these potential risks.
The study investigated how individuals could exploit artificial intelligence to gather information useful for creating biological weapons. Participants were given tasks designed to test their ability to assemble a “biological threat” and were evaluated on how proficiently they completed them.
While individuals with access to AI systems demonstrated slightly greater success, the researchers concluded that the difference was not statistically significant. They noted that acquiring information about biothreats is relatively easy without relying on artificial intelligence, given the abundance of information available on the internet.
Additionally, the researchers noted that carrying out tasks related to biological threats is costly, and they emphasized the need for further research to better understand the intricacies of biosecurity, including how much information is actually required to pose a real danger.