
Lenient Rules for Biotech Research Put the World at Risk. Will AI Do the Same?

To avoid the mistakes of biotech, the AI community needs to start thinking right now about the safety of generative artificial intelligence.

Recently, journalist Kevin Roose was evaluating Microsoft Bing's artificial intelligence chatbot, named Sydney, when it told him in conversation, "I just want to love you and be loved by you." As amusing and inconsequential as it may seem, this episode should serve as a wake-up call to the AI community. It needs to start thinking right now about the safety of generative artificial intelligence—the technology behind Sydney, OpenAI's ChatGPT, and Google's Bard—and the appropriate guardrails to put in place before bigger problems arise.

I offer this advice from personal experience. Two decades ago, I was a top official in the George W. Bush White House when another world-changing technology—genetic manipulation—transformed our future. Like generative AI, this new biotechnology was a once-in-a-generation advance that inspired both excitement and fear. In the years since, biotechnology has delivered many benefits, but it has also put the world at great risk—in part because of insufficient oversight.

All innovations have the potential to provide benefits and to cause harm—what has been called "dual use" technology. For some people, generative artificial intelligence is an exciting and pivotal moment in technology, with far-reaching implications. For others, it portends a future of dangerous silicon-based sentient life forms making decisions over our lives that we can't control or stop.

More than 20 years ago, the world was similarly divided over the dual use of dangerous biologic agents, especially those that were genetically engineered. The incentives to move forward with genetic manipulation far outweighed the incentives for moving carefully and cautiously. These incentives included not only the creation of economically and socially valuable new vaccines and drugs, but also advances in basic science and, not insignificantly, the careers of academic scientists.
Photo: The ChatGPT logo at an office in Washington, D.C., on March 15, 2023. Stefani Reynolds/AFP/Getty Images
In 1996, the U.S. Congress enacted rules for controlling access to, and transfer of, selected infectious agents and toxins. These rules didn't go far enough. After the 2001 anthrax attacks in the U.S., which resulted in five deaths, Congress passed legislation to increase oversight of a specific set of select agents and toxins with the potential to pose a severe threat to public health or agriculture. Those rules were, regrettably, not sufficient.

In 2003, a National Research Council report, "Biotechnology Research in an Age of Terrorism," recommended changes in research oversight rules, regulations, and best practices to reduce the threat of gene manipulation being used for malevolent purposes. In response, I wrote the charter that created the National Science Advisory Board for Biosecurity (NSABB). Since 2004, the NSABB (I remain an active member) has tried to advance the research recommendations in the NRC report. It has increased oversight of potentially dangerous biologic research while concurrently guarding against draconian regulations that can stifle innovation. But implementation has been slow and problematic, in part because of objections from many scientists, who complained that excessive, bureaucratic oversight was constraining critical research.

Since then, controversies have erupted over the safety of some biotechnology research. The most noteworthy was a series of "gain of function" experiments on the H5N1 influenza (bird flu) virus, in which scientists made the virus more transmissible to mammals in order to learn how to block transmission. We in the biosecurity community were alarmed by the risks scientists were taking in those experiments—among them, the risk that a lab accident could cause a pandemic.

These concerns prompted scientists to recommend that the U.S. government enact a three-year pause in funding for a list of potentially dangerous gain-of-function studies, during which the government would establish new and more aggressive oversight mechanisms for research. Unfortunately, the increased oversight was restricted to a small number of known respiratory viruses that could potentially cause a pandemic.
Photo: Microbiologist Anne Vandenburg-Carroll tests poultry samples, collected from a farm in a control area, for avian influenza (bird flu) at the Wisconsin Veterinary Diagnostic Laboratory at the University of Wisconsin-Madison. Scott Olson/Getty Images
The risk of accidentally starting a pandemic must have seemed rather abstract to most Americans back then. No more. Today, polls show that many people believe that laboratory manipulation of coronaviruses resulted in the creation of SARS-CoV-2, the virus that causes COVID-19.

The origin of COVID-19 is also a contentious issue among scientists and politicians. One faction loudly contends the pandemic coronavirus resulted from intentional genetic manipulation in a laboratory in Wuhan, China. While a few in the lab-origin camp claim the virus was deliberately altered to make it more dangerous, most believe a virus created by risky research, conducted without appropriate oversight, was unintentionally released. The opposing camp insists just as vehemently that COVID-19 resulted from natural mutations of a bat virus transmitted to a wild (as yet unidentified) animal, and then to human patrons of a market in Wuhan.

Did COVID-19 come from a lab or a food market? I get asked this question every day. My answer? We honestly don't yet know. What we do know is that the science has come far enough that the virus could have come from genetic manipulation of a coronavirus genome, whether or not it actually did in this case.

That threat exists in part because the current oversight mechanisms for research on potentially dangerous pathogens are not sufficient to assure us that we are safe from intentional or unintentional genetic manipulation of organisms with pandemic potential. Despite good intentions, for the past 20 years the U.S. government has approached oversight in piecemeal fashion, failing to provide early, consistent, and timely guidance to researchers and institutions on evaluating and mitigating the risks inherent in this kind of research. As a result, government, commercial, and academic institutions instituted variable and inconsistent research review and oversight. And that oversight focused mostly on anthrax, Lassa fever, smallpox, bird flu, and other select agents specified by the government on a list of dangerous potential pathogens.

That narrow focus is not sufficient to protect us from broader, potentially risky biologic research. Threats, as we know well, can also come from mutations in much more common bacteria and viruses, such as coronaviruses.

The NSABB has recently submitted recommendations to the Biden administration for significantly increasing the oversight of genetic research that could be used for both good and harm—whether through laboratory accidents or deliberate misuse. These recommendations would expand research review and oversight to include experiments on any agent likely to be capable of uncontrolled spread or of causing significant morbidity or mortality, and that, in addition, is likely to pose a severe threat to public health or national security. Several Congressional committees are also actively concerned that the biotech community must do better at considering safety when performing genetic research of concern.

But we are playing catch-up. The community of AI researchers, entrepreneurs, and regulators would do well to heed the lessons of biotech. My plea to them is to start thinking about safety now. Don't wait for the first big problem of malfeasance or unintended consequences. Get ahead by planning for coordinated oversight, and continue to modify those plans as the technology moves forward.
Illustration: A human-AI brain interface. monsitj/Getty
Here are some concrete steps:

1. Stakeholders from the technology sector, government, academia, and the public should agree that generative AI poses dangers and requires an oversight process that is clear, inclusive, and transparent, with the goal of building whole-of-society responsibility for both the successes and the risks of the new technology.

2. Stakeholders must act without delay to create specific and uniform guidance for ensuring that new generative AI platforms and programs maximize public good and minimize misuse. As with biotechnology, malevolent misuse is a distinct possibility. Waiting to implement ethical guidelines until there is demonstrated harm could be disastrous.

3. A national expert advisory committee should be established immediately, with members drawn not only from technology developers and government but also from the academic, ethics, and public sectors, to review and give guidance on the creation and evolution of AI programs that could be used, intentionally or inadvertently, to harm or control others.

As Kevin Roose's story shows, we can be both pleasantly surprised and uncomfortably disturbed by what new technology can create. Let's not let that surprise come at a terrible personal and social cost.

Kenneth Bernard, MD, is a former Special Assistant to the President for Biosecurity under Presidents George W. Bush and Bill Clinton. He is a former Assistant Surgeon General and retired Rear Admiral in the U.S. Public Health Service.