
Forget robots taking jobs, these researchers compare AI to fire. Here’s how we need to tend it

Researchers at Georgetown who now serve the government say that if we deploy AI too quickly and without adequate foresight, it will burn in ways we cannot control.

[Source photo: Tobias Rademacher; Pixabay]

Ben Buchanan and Andrew Imbrie are both researchers at Georgetown’s Center for Security and Emerging Technology but are currently on leave to serve the United States government. Buchanan is acting as assistant director of the White House Office of Science and Technology Policy for the Biden-Harris Administration, while Imbrie is serving at the State Department.

Below, Ben and Andrew share five key insights from their new book, The New Fire: War, Peace, and Democracy in the Age of AI. Listen to the audio version—read by MIT Press Editorial Director Gita Manaktala—in the Next Big Idea App.


It is common to analogize AI to electricity: ubiquitous, beneficial, and safe. This is far too rosy. Today we encounter AI as our distant ancestors once encountered fire. If we manage this technology well, it will become a tremendous force for global good, lighting the way to transformative inventions. If we deploy it too quickly and without adequate foresight, AI will burn in ways we cannot control. If we harness it to destroy, it will enable more powerful weapons for the strongest governments as they engage in a combustible geopolitical competition. That the frequent analogy to electricity denies this wide range of possible outcomes only makes us less prepared.

AI is akin to fire in another way: its power comes from its accelerating force. Just as an unchecked wildfire burns more each second than in the second before, so too does accelerating growth in AI’s underlying components yield rapidly increasing capabilities. Ever-larger data sets representing vast stores of human knowledge power today’s AI systems. Increasingly capable and efficient algorithms push machines to new heights, reaching milestones that not long ago seemed decades or even centuries away. And ever more powerful computer chips (some of the most remarkable and intricate inventions ever devised) work together in huge numbers to do all the math that makes these new capabilities possible.


Recent breakthroughs leave no doubt that AI can make our lives better. This capability goes far beyond playing games. AI is better than humans not just at vastly complex games like Go, StarCraft, and poker, but also at fighter pilot dogfights, managing nuclear reactions, and even at some of the most foundational tasks in science itself. Consider something called the protein folding problem. Proteins are one of the most fundamental building blocks of life. Each protein is made up of a sequence of amino acids that, when the protein “folds,” arrange themselves into a complex 3D shape. For decades, predicting the shape of a protein from its sequence was one of the tallest orders in science—a PhD student might dedicate years of research to determine the structure of a single protein. But because knowledge of protein shapes is so valuable for medicine (including drug discovery), this painstaking manual effort is often worth it.

But AI scientists at a company called DeepMind thought there had to be a better way. In 2016, DeepMind began working on AlphaFold, an AI system that predicts a protein’s shape when given its sequence. By 2018, AlphaFold was the best automated system in the world at the task. By 2020, it was capable of solving the protein folding problem entirely. By the end of 2022, DeepMind will have determined and made public the structure of more than 130 million proteins, hundreds of times more than what all of humanity had collectively determined in the manual work prior to AlphaFold’s invention. As one leading biologist said, “This will change medicine. It will change research. It will change bioengineering. It will change everything.”


Despite its extraordinary power, AI is far from perfect. Bias insidiously sneaks into AI systems, especially when they learn from data sets of human decisions. The real-world consequences can be severe. Amazon had to scrap a resume screening tool after it learned to systematically discriminate against women. Another algorithm regularly denied healthcare to people of color. Similarly, facial recognition technologies perform far worse on darker-skinned faces; in the United States, police have arrested innocent Black Americans solely on the basis of an incorrect facial recognition match.

Nor can AI explain how it reaches its conclusions. Like a lazy middle school student, even when the machine gets the right answer, it rarely shows its work, making it harder for humans to trust its methods. Worse still, this opacity can hide the instances when AI systems optimize for a goal that is not quite what their human creators had in mind. For example, one system designed to detect pneumonia in chest X-rays discovered that X-rays from one hospital were more likely than others to exhibit pneumonia because that hospital usually had sicker patients. The machine learned to identify the X-ray’s hospital of origin rather than to examine the X-ray itself. Another system was designed to identify cancerous skin lesions. It trained on a set of images from dermatologists who often used a ruler to measure lesions they thought might be cancerous. The AI system recognized that the presence of a ruler correlated with the presence of cancer, so it started checking whether a ruler was present rather than focusing on the characteristics of the lesion.

In both of these cases, alert human operators noticed the failures before the systems were deployed, but it is impossible to know how many cases like these have gone undetected and how many more will go undetected in the future.
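The shortcut failure in the skin-lesion example can be sketched in a few lines of code. This is a minimal, illustrative toy (a simple logistic-regression classifier on synthetic data, not any of the systems described above): in the training data, a “ruler” flag perfectly tracks the cancer label, so the model learns to lean on the ruler far more than on the weak lesion signal, and its accuracy collapses once it sees data where the ruler is uninformative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shortcut_correlates):
    """Synthetic 'skin lesion' data: a weak real signal plus a ruler flag."""
    y = rng.integers(0, 2, n)                        # 1 = cancerous lesion
    lesion = y + rng.normal(0, 1.5, n)               # weak, noisy real signal
    if shortcut_correlates:
        ruler = y.astype(float)                      # ruler appears iff cancerous
    else:
        ruler = rng.integers(0, 2, n).astype(float)  # ruler is uninformative
    # Columns: intercept, lesion signal, ruler flag
    return np.column_stack([np.ones(n), lesion, ruler]), y

def train_logreg(X, y, lr=0.1, steps=2000):
    """Plain gradient descent on the logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))             # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)             # gradient of log loss
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == y))

X_train, y_train = make_data(2000, shortcut_correlates=True)
X_test, y_test = make_data(2000, shortcut_correlates=False)
w = train_logreg(X_train, y_train)

# The model weights the ruler flag (w[2]) far more than the lesion signal (w[1]),
# so accuracy collapses once the shortcut no longer tracks the label.
print(f"lesion weight {w[1]:.2f}, ruler weight {w[2]:.2f}")
print(f"train accuracy {accuracy(w, X_train, y_train):.2f}, "
      f"test accuracy {accuracy(w, X_test, y_test):.2f}")
```

The same dynamic applies regardless of model size: a learner rewarded only for predictive accuracy has no reason to prefer the “right” feature over a convenient proxy that happens to correlate with the label in its training data.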


Militaries and intelligence agencies are ushering in an era of lethal autonomous weapons, including not just drones that loiter above the battlefield, but missiles capable of selecting their own targets. Top military thinkers, including in the United States, conceive of machine-driven warfare that is faster than ever before. In this future combat, operating at human speed is a surefire way to lose. Some strategists even propose giving AI the capability to launch nuclear weapons—a decision currently reserved for the President, and one that holds the fate of civilization in the balance.

But AI will transform more than warfare. Many of the most powerful cyberattacks in history—including some that have done tens of billions of dollars of damage—have been largely automated, and new AI techniques could take this trend further. Russian hackers have built malicious code that autonomously targets power systems, while the Pentagon has run gigantic tests in which AI systems hack and defend one another at rapid speed.

In addition, AI is adept at writing disinformation. One test in 2021 showed that AI systems could write targeted propaganda messages that preyed upon racial, religious, and political differences in the United States and successfully changed the opinions of their targets. Perhaps worse still, theorists worry that deepfake videos are poised to undermine the notion of truth itself.


Against this backdrop of automated warfare, rampant bias, and widespread disinformation campaigns, a worrying and common proposition emerges: AI will benefit autocracies at the expense of democracies. At a time when dictators seem empowered all over the world, it is easy to assume that this new technology will favor tyranny. Unencumbered by ethics, autocrats will crush dissent with automated surveillance systems at home and race ahead with AI-enabled warfare abroad.

But this is too fatalistic. The age of AI is still young, and its outcome is far from preordained. Democracies have the opportunity to develop shared norms for the technology’s use at home and abroad, unlocking its extraordinary potential while still guarding against bias and preserving civil liberties. They have the capacity to integrate it into militaries and intelligence agencies in ways that preserve and enhance democratic values while putting autocracies on the defensive. Most importantly, democracies offer an innovative ecosystem that can determine where the technology goes next. AI will shape statecraft, but so will statecraft shape AI. If AI is the new fire, what matters most is how we tend it.

This article originally appeared in Next Big Idea Club magazine and is reprinted with permission.
