OpenAI Partners With Los Alamos Lab to Save Us From AI

Los Alamos warns that ChatGPT-4 can provide information "that could lead to the creation of biological threats."

OpenAI is partnering with Los Alamos National Laboratory to study how artificial intelligence can be used to counter biological threats that non-experts could create with AI tools, according to announcements Wednesday by both organizations. The Los Alamos lab, established in New Mexico during World War II to develop the atomic bomb, called the effort a "first of its kind" study on AI biosecurity and the ways that AI can be used in a lab setting.

The difference between the two statements released Wednesday by OpenAI and the Los Alamos lab is pretty striking. OpenAI's statement tries to paint the partnership as simply a study on how AI "can be used safely by scientists in laboratory settings to advance bioscientific research." And yet the Los Alamos lab puts much more emphasis on the fact that previous research "found that ChatGPT-4 provided a mild uplift in providing information that could lead to the creation of biological threats."

Much of the public discussion around threats posed by AI has centered on the creation of a self-aware entity that could conceivably develop a mind of its own and harm humanity in some way. Some worry that achieving AGI (artificial general intelligence, where the AI can perform advanced reasoning and logic rather than acting as a fancy auto-complete word generator) may lead to a Skynet-style situation. And while many AI boosters like Elon Musk and OpenAI CEO Sam Altman have leaned into this characterization, the more urgent threat appears to be making sure people don't use tools like ChatGPT to create bioweapons.

"AI-enabled biological threats could pose a significant risk, but existing work has not assessed how multimodal, frontier models could lower the barrier of entry for non-experts to create a biological threat," the Los Alamos lab said in a statement published on its website.

The different positioning of the two organizations' messages likely comes down to the fact that OpenAI may be uncomfortable acknowledging the national security implication: that its product could be used by terrorists. To put an even finer point on it, the Los Alamos statement uses the terms "threat" or "threats" five times, while the OpenAI statement uses them just once.

"The potential upside to growing AI capabilities is endless," Erick LeBrun, a research scientist at Los Alamos, said in a statement Wednesday. "However, measuring and understanding any potential dangers or misuse of advanced AI related to biological threats remain largely unexplored. This work with OpenAI is an important step towards establishing a framework for evaluating current and future models, ensuring the responsible development and deployment of AI technologies."

Los Alamos sent Gizmodo a statement that was generally optimistic about the future of the technology, even with the potential risks:

AI technology is exciting because it has become a powerful engine of discovery and progress in science and technology. While this will largely lead to positive benefits for society, it is conceivable that the same models in the hands of a bad actor might be used to synthesize information, creating the possibility of a "how-to guide" for biological threats. It is important to consider that the AI itself is not a threat; rather, it is how it can be misused that is the threat.

Previous evaluations have mostly focused on understanding whether such AI technologies could provide accurate "how-to guides." However, even if a bad actor has access to an accurate guide for doing something nefarious, it does not mean they will be able to follow it. For example, you may know that you need to maintain sterility while cultivating cells or how to use a mass spectrometer, but if you have no prior experience doing so, it may be very difficult to accomplish.

Zooming out, we are more broadly trying to understand where and how these AI technologies add value to a workflow. Information access (e.g., generating an accurate protocol) is one area where they can, but it is less clear how well these AI technologies can help you learn to carry out a protocol successfully in a lab (or other real-world activities, such as kicking a soccer ball or painting a picture). Our first pilot technology evaluation will look to understand how AI enables individuals to learn to perform protocols in the real world, which will give us a better understanding not only of how it can help enable science but also of whether it would enable a bad actor to execute a nefarious activity in the lab.

The Los Alamos lab's effort is being coordinated by the AI Risks Technical Assessment Group.

Correction: An earlier version of this post quoted a statement from Los Alamos as being from OpenAI. Gizmodo regrets the error.
