
Bulletin of the Atomic Scientists

ISSN: 0096-3402 (Print) 1938-3282 (Online) Journal homepage: http://www.tandfonline.com/loi/rbul20

Artificial intelligence, cyberattack, and nuclear weapons—A dangerous combination

Pavel Sharikov

To cite this article: Pavel Sharikov (2018) Artificial intelligence, cyberattack, and nuclear
weapons—A dangerous combination, Bulletin of the Atomic Scientists, 74:6, 368-373, DOI:
10.1080/00963402.2018.1533185

To link to this article: https://doi.org/10.1080/00963402.2018.1533185

Published online: 22 Oct 2018.



ABSTRACT: The military use of artificial intelligence can increase the possibility of war in a number of ways – by allowing more targets for computer hacking, for example, or threatening critical infrastructure. How does it change the classic model of stability in a nuclear world?

KEYWORDS: Artificial intelligence; AI; cyberattack; machine learning; destabilizing technology; national security; cyberoffense; cyberdefense

Since the earliest days, there has been a close alliance between the military and computing. For example, during the 1940s, the US government sponsored research into what became ENIAC – one of the first programmable, general-purpose electronic digital computers, used to tackle massive numerical problems containing multiple variables, such as what occurs when trying to devise accurate artillery firing tables, which were of vital importance to the Allies in World War II. (These tables tell the user the necessary angle of elevation for a particular-sized gun barrel in order to strike a target at a given distance, factoring in projectiles of different weights, using different amounts of different propellants providing different amounts of force, and allowing for complications such as temperature, wind, and sometimes even the Coriolis effect.) Such calculations could be done by hand, but were immensely time-consuming and prone to inaccuracies.

The alliance that created this urgently needed machine – part of the "military-industrial complex" that Eisenhower later warned about – was to continue for decades, and became key to the early research and development of semiconductors, global positioning satellites, and the Internet itself, to name just a few of the more well-known fruits of this collaboration.

Arguably, however, computing and technological progress really began to accelerate only after the 1960s, as computer memory radically improved and algorithms became more complex, among other factors. Today, the development of computers has reached such a sophisticated level that some computers and applications have to teach themselves to some extent, an approach that is called machine learning, or artificial intelligence (AI). Through this so-called "smart approach," self-learning machines may be taught to execute different functions, including those related to military purposes – on strategic, tactical, and operational levels. (AI technologies may also have purely civilian uses, or a combination of military and civilian purposes, in what is known as dual-use.)

Today artificial intelligence is a very specific technology. While commercial use of AI is very wide, only three countries are reported to be developing serious military AI technologies: the United States, China, and Russia. As has always been true of computing technology, the use of AI does provide a significant advantage to a nation's military.

But what is different this time around is that military AI may be used as part of an offensive cybercapability – an especially alarming development, as it has a strong potential to be a destabilizing element in the balance of power. (I use the term cyberattack to refer to an attempt to damage or destroy an opponent's computer network.) With the advent of machine learning, more targets have become available for computer hacking, meaning that critical infrastructure – banking systems, airport flight tracking, hospital records, the programs that run the nation's nuclear power reactors – can be vulnerable. But one of the most pressing problems lies in AI's destabilizing effects on the nuclear balance of power.

A new Cold War, with a cyber twist

Different cybertechnologies have been used by the military for a long time. Nowadays, however, the means of conducting cyberattacks are much more easily accessible than before, and they may be used to inflict moderate and significant damage – and may not
necessarily be used by the military. Instead, non-state actors, or even certain individuals (including hackers), may take advantage of the inherent near-invisibility of the perpetrators of a cyberattack; it can be (almost) impossible to attribute the origins of such an attack.

The cyberthreat problem is of special concern in Russia-US relations. Cybersecurity – protecting a nation's hardware, software, and data against attack – became another issue in bilateral relations on a par with the problems of nuclear arms control, the possibility of accidental military actions, and others.

It should be noted, however, that there are certain differences in Russian and American understandings of cybersecurity. While the American approach focuses mostly on the technological side of the problem, the Russian approach includes the content of the information. In fact, the term "cybersecurity" is not recognized in Russia; instead, the phrase "information security" (the closest translation) is used – although the term inevitably takes on a different meaning. For example, in the West, fake news is technically not considered to be a security problem – after all, the hardware, software, and raw data remain secure, even if the messages distributed via that technology are false. Meanwhile in Russia, the content is considered part and parcel of information security. Hence, the Russian approach focuses more on making sure Russian citizens get access only to proper content – as the government defines the word "proper."

Today, the owners of the largest stockpiles of nuclear weapons, Russia and the United States, are living through a crisis in which many of the traditional contacts between the two countries (including military ones) have ceased, making for one more element upsetting the model of mutual assured destruction (MAD) that had provided so much of whatever stability and predictability existed in the past. Indeed, a confrontation similar to that of the Cold War is coming back, destabilizing the entire international system. And due to the existence of new technologies (including cyber and AI), this new form of standoff becomes much less predictable, much more dangerous, and different in many ways from what we have known before.

In fact, this new arms race may have already begun, but in the technological field rather than the nuclear one. Artificial intelligence may be especially dangerous in combination with other information technologies – cloud computing, big data, the internet of things, and others.

In my view, this cyberthreat has at least four elements; each of them may be explored separately and should be considered in working out cybersecurity strategies. The four elements are: the source of attack; the means of attack; the target of attack; and the defenses against attack.

It is worth exploring the possible AI implications within each of these cyberthreat elements.

The source of attack

Because the United States, China, and Russia are the countries most actively pursuing AI military capabilities, it is logical to assume that the most likely attack would originate from within these countries or their close allies. (It should be noted, however, that just because a country has a vibrant commercial AI research and development community, it does not necessarily follow that its artificial intelligence is applicable to offensive military uses.)

According to the information available, AI is a priority in the national security strategies of all three countries, which are each trying to develop their own comprehensive approach to using AI for military purposes – to take advantage of all possible applications of AI on all levels: strategic, tactical, and operational.

Much of the information about what each country is doing in this area is closely held. Nevertheless, some hints can be discerned.

In the United States, AI is considered a strategic priority for the Defense Department, and one "that could transform the way the department operates," said Brendan McCord of Defense Innovation Unit Experimental – one of the Defense Department's offices, established in 2015, responsible for Defense's innovation activities (including those related to AI). Artificial intelligence may be used in supporting all the strategic goals outlined in the National Defense Strategy, in particular the last objective outlined in the Strategy: "Establishing an unmatched twenty-first century National Security Innovation Base that effectively supports Department operations and sustains security and solvency" (National Defense Strategy 2018). This objective clearly demonstrates that AI technologies are a top priority in national security, and that these technologies are likely to become a part of a new arms race. That is why, in 2018, the Defense Department announced the creation of the Joint Artificial Intelligence Center (JAIC), and why it currently conducts 592 projects that involve some form of AI (Mehta 2018).

The Chinese approach to AI capabilities in the military is more operational, focusing less on basic fundamental research and development in comparison to the United States. And it seems that the Chinese approach is more focused on dealing with
tactical rather than strategic problems. Elsa Kania, an analyst from an Aspen-based think tank known as the Long Term Strategy Group, explained it as: "The PLA's sophisticated unmanned weapons systems will increase its anti-access/area-denial capabilities, while its progress in multiple military applications of artificial intelligence could enable a disruptive operational advantage" (Kania 2017). In other words, the Chinese military is pursuing relatively low-tech approaches in its use of AI, such as the use of large numbers of unmanned aerial vehicles on the battlefield in large swarms, in order to offset the high-tech prowess of its rivals. This observation is buttressed by a report prepared on behalf of the US-China Economic and Security Review Commission in October 2016, which found that China is also pursuing smart weapons and autonomous robotic soldiers (Ray et al. 2016).

Meanwhile in Russia, AI development is apparently not as advanced, though it is hard to be certain. Little is known beyond the official pronouncement in the Russian Military Encyclopedia that the priorities of AI in the Russian military include "creation of knowledge systems, neuro-systems and systems of heuristic search" – though what that statement is supposed to mean in a boots-on-the-ground sense is unclear (Russian Military Encyclopedia, n.d.). (For a literal English-language translation of the full statement, see endnote 1.)

The most concerning element of AI use in the military is how it integrates artificial intelligence into the decision-making process. In a crisis situation, when the decision should be made quickly, AI may make a decision that would lead to escalation of a conflict, rather than its resolution.

Means of attack

As noted earlier, there are a variety of ways in which AI can be used, as both a strategic and a tactical offensive weapon. Expensive, physically large, high-performance computers (also known as "supercomputers," or more colloquially as "big iron") can be used to devote huge amounts of computing resources for short periods of time, in order to attack and disrupt strategic military targets as well as critical infrastructure. For obvious reasons, such research is highly classified. It is unlikely, however, that today any of these countries possesses a doomsday device – although it should be noted that for many years, there was a semi-automated defensive system informally called the "Dead Hand" based in a deep underground bunker in the Ural Mountains, which theoretically could have allowed the Russian military to strike back with nuclear weapons if the Kremlin leadership was decapitated. [One of the Russian communications specialists who oversaw the system's installation, Col. Valery Yarynich, later revealed its existence to the Western press; in 2003, Yarynich published a book about some of its workings, titled C3: Nuclear Command, Control, Cooperation, published with the help of Bruce Blair of the Center for Defense Information (Yarynich 2003). When asked by a Washington Post reporter why he chose to speak so openly to the West, Yarynich replied "…it was utter stupidity to keep the Dead Hand secret; such a retaliatory system was useful as a deterrent only if your adversary knew about it" (Hoffman 2012).]

But while we do not know the details about how today's automated systems could be used as a means of attack, we can make some general observations about the field as a whole.

While the nuclear arms race was about quantity and quality of warheads, the cyberweapons arms race will be about maintaining informational (machine learning) superiority over the enemy.

In a report for the Congressional Research Service, a nonpartisan governmental agency, researchers Daniel Hoadley and Nathan Lucas outlined a comprehensive list of areas in which this advantage could be applied for military purposes: intelligence, logistics, cyberspace, command and control, autonomous vehicles, and lethal autonomous weapon systems (LAWS). Nearly all of them may be used to enhance the scale and speed of a cyberattack (Hoadley and Lucas 2018).

To this list another two possible AI cyberoffensive technologies can be added. One is infiltration – that is, autonomous agents capable of "remembering" the information gathered during reconnaissance and using it to plan an infiltration path. And the other is the aforementioned use of swarms – sets of decentralized, cooperating, autonomous agents that can form a whole new kind of botnet, in which there is no need for centralized command and control (Guarino 2013).

In a sense, these technologies show how human hackers can be replaced with software that can heal its own bugs and vulnerabilities, while simultaneously searching for and exploiting bugs in adversary systems. Commenting upon the results of DARPA's recently conducted Cyber Grand Challenge, a virtual war game, the head of US Army Cyber Command, Lt. Gen. Paul Nakasone, stated that "artificial intelligence will change both offensive and defensive operations" (Tucker 2018).

It is notable that the applicability of existing international laws to AI weapons was discussed at a recent meeting of the UN Group of Governmental Experts
(UNOG 2018). While there was no consensus about lethal autonomous weapons systems, there was a general understanding that human intelligence should remain the key decision maker on the battlefield. At the same time, this is not a cure-all: Even if AI lethal weapons systems are not autonomous but controlled by humans, there still remains the chance that the adversary may hack and misuse these technologies. The bottom line is that it is important to work out new rules of engagement, in order not to make the new arms race uncontrollable.

Targets of the attack

The most dangerous feature of an AI cyberattack is that it may target civilian infrastructure as well as military targets. The problem is that because this infrastructure is most likely to be commercial – and hence not subject to government control – it is likely to be less secure and more vulnerable.

It is easier for potential attackers to find these vulnerabilities, and harder for the government to guarantee security. Artificial intelligence together with big data analysis may be used to figure out the most vulnerable infrastructure, or to choose the timing that would make an attack most catastrophic.

To get around this problem, after the 2001 terrorist attacks, the George W. Bush administration's national security strategy outlined 13 specific infrastructure objects that were declared critical for national security and became subject to special government regulations (National Strategy for the Physical Protection of Critical Infrastructures and Key Assets 2003).

But of all the possible targets for a cyberattack, possibly the most worrisome is the use of AI to target nuclear command and control systems. As of today, there is no evidence that nuclear military facilities have been attacked by cyberweapons, but one should never exclude such a possibility. Besides, the problem may not only be about unauthorized access, but about compromising the system, deceiving the computers, or sending false signals.

Such actions interfere with the underlying principles of mutual assured destruction, and as such could provoke either Russia or the United States to launch a nuclear attack based on false data. Logically, neither Russia nor the United States would have such intentions – but a third party may. And a third party is not necessarily a nation, but could be a non-state actor, such as Al Qaeda.

To guard against this possibility, cyber incidents and AI technologies should be on the agenda of strategic arms control negotiations. The parties should agree not to attack certain critical infrastructure. If such an attack takes place, military commanders and political leaders will have at least some guarantee that the attack is not official, and hence may retaliate against the perpetrators – to the appropriate degree, and following the rules of war, without existential consequences for the entire world.

Defense against cyberattacks

The whole process of ensuring cybersecurity is very dynamic. Unlike conventional and even nuclear weapons, cyberoffense is constantly changing, and hence the defense against it also must be constantly developing. A robust cyberdefense is one that can adjust to the constantly changing environment.

The line between cyberoffense and cyberdefense is very obscure. It does not correspond to the logic of a sword and a shield.

Defense against an AI cyberattack, especially from an autonomous system like a bot, often requires novel solutions; consequently, the defense has to be self-learning, so it can learn the specifics of an offensive technology.

Nevertheless, certain elements of deterrence doctrine may be used in preventing the use of AI in a military attack.

Deterring a cyberthreat is very different from deterring a nuclear attack. First of all, cyberthreats as well as cybervulnerabilities are asymmetrical, which means that no parity can be reached. Nuclear weapons could be counted by the number of warheads, launchers, and delivery vehicles, while there are hardly any indicators that allow comparing AI capabilities. Second, the threat must be credible, so that the enemy is sure that either its attack would be ineffective or retaliation would be destructive. In order to make sure that the threat is credible, a weapon must be either tested or used. But in the cyber world, if a cyberweapon is actually demonstrated, then the enemy may create a remedy and the offender would lose superiority. Hence, the idea of cyberdeterrence is nearly unfeasible, as RAND Corporation expert Martin Libicki noted in 2009 (Libicki 2009).

So, rather than being about cyberdeterrence, the artificial intelligence arms race is about achieving and maintaining superiority in gathering and analyzing information, and making decisions.

Our cyberfuture?

Even though it is impossible to reverse the advances of AI development, it is still possible to work out the rules of an arms race involving this technology. In this
regard, it is worth thinking about the experience of Soviet-American agreements when antiballistic missile technologies were introduced in the 1970s. Instead of getting into an arms race centered around this new technology, the Soviet Union and the United States agreed to accept mutual vulnerabilities, thus preventing both parties from launching a nuclear attack. This logic was used in the ABM Treaty and remained an element of strategic stability until 2002.

Policy makers together with technical experts may think about similar vulnerabilities to prevent a cyber AI attack. It is critical to work out similar rules of engagement, taking into very thorough consideration the specifics of artificial intelligence. The self-learning machines are working within the framework designed by humans, and the input of offensive AI technologies should be discussed on the international level. The problem of verification is nearly impossible to solve. AI technology threats should be explored as a part of the national security strategy.

It is worth noting that AI technologies have even attracted Henry Kissinger's attention. Kissinger – who had a hand in much of US diplomacy in the '70s – expressed concern recently in an essay in The Atlantic, saying that AI "makes strategic judgments about the future, some based on data received as code (for example, the rules of a game), and some based on data it gathers itself (for example, by playing 1 million iterations of a game)" (Kissinger 2018). In this article, this master of realpolitik noted that many ethical and philosophical concerns appear, and Kissinger asks more questions than he provides answers.

The international community should be interested in working out the international norms of using AI technologies. Deploying artificial intelligence technologies in the armed forces requires strong public-private partnership, thus allowing the outside world to somewhat see what is going on in the secretive world of military AI – and it is notable that Google was recently pressured to stop its work on Project Maven, a military program applying AI to drone surveillance footage.

Consequently, Google published a set of AI principles, which remind me of Isaac Asimov's Three Laws of Robotics. Though they sprang from a science fiction story, these rules set the course for much of the thinking behind today's very real research into the field of artificial intelligence. (This is not the first time such a thing has happened. The thought experiment known as Schrödinger's cat was first proposed as a joke by Austrian physicist Erwin Schrödinger; now it is often used by physicists to explain a basic concept of quantum mechanics.) It is worth repeating Asimov's Three Laws of Robotics here: "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

With these three precepts in mind, we can see one useful tool in determining the moral guidelines for research into artificial intelligence.

It is also important to note that military AI research and development must remain under the government's total control in order to exclude unauthorized attacks, and it must also remain transparent to civil society to the extent possible.

And we must be sure to think the unthinkable and look at the possibility of the use of AI in attacks against the command and control systems of nuclear weapons. Doing so is vital to raising the level of mutual trust between Russia and the United States, and excluding the possibility that third parties might exploit opportunities to compromise nuclear command and control.

Note

1. Russian military language can be very complicated, but a literal English-language translation of the article regarding artificial intelligence in the Russian Military Encyclopedia (n.d.) would be the following: "[Artificial Intelligence is a] field of research that includes models, systems, and devices that imitate intellectual activities of a human (apprehension of information, logical thinking) in combat. The AI studies are conducted in three major directions: creation of knowledge systems, neurosystems, and systems of heuristic search. In Russia's Strategic Missile Forces, AI achievements are used for creating one of the most advanced classes of automated information military systems – a system of official decision-making support and so-called intellectual systems and weapon systems of different purposes (in particular, airborne control systems). In addition, an important direction of AI military use along with automation of professional activities in Russia's Strategic Missile Forces is the creation and exploitation of expert systems. Especially important are diagnostic expert systems. That is a complex of software which implements the methods of artificial intelligence based on knowledge. The Expert System allows one to accumulate knowledge from one field within a model of knowledge (production model of knowledge, networking, frame, and other) and make new knowledge based on these models, solve intellectual practical problems, and explain the solution. Expert Systems include: knowledge databases; linguistic processors, which enable user interface with the expert system and which execute the mechanism of logical conclusion with the support of working memory; components of knowledge acquisition; and explanation of the solution and the result of the problem."
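The expert-system architecture the encyclopedia entry in endnote 1 describes (a knowledge base of production rules, working memory, a logical-inference mechanism, and an explanation component) can be illustrated with a minimal sketch. This is an illustration only, not a description of any actual military system: the rules, facts, and function names below are all invented.

```python
# Minimal forward-chaining expert system, in the spirit of the
# architecture described in endnote 1: production rules (knowledge base),
# a working memory of facts, and an explanation trace for each conclusion.
# All rules and facts are hypothetical, for illustration only.

RULES = [
    # (rule name, premises that must all hold, conclusion to assert)
    ("R1", {"voltage_low", "telemetry_lost"}, "power_fault"),
    ("R2", {"power_fault"}, "switch_to_backup"),
]

def infer(initial_facts):
    """Apply rules until no new facts appear; return (facts, trace)."""
    facts = set(initial_facts)   # working memory
    trace = []                   # explanation component
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            # Fire a rule when all its premises are in working memory
            # and its conclusion is not yet known.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{name}: {sorted(premises)} -> {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"voltage_low", "telemetry_lost"})
for step in trace:
    print(step)
```

A diagnostic system of the kind the entry mentions would differ mainly in scale: thousands of rules and a richer knowledge model, but the same fire-rules-until-quiescent loop and the same ability to explain how each conclusion was reached.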

Acknowledgments

This research has been done for the author's primary affiliation, the Institute for the U.S. and Canada Studies, Russian Academy of Sciences.

Disclosure statement

No potential conflict of interest was reported by the author.

Funding

The research has been funded by the author's primary affiliation. Grant [007-00314-18-00].

Notes on contributor

Pavel Sharikov is a senior research fellow at the Institute for USA and Canada Studies at the Russian Academy of Sciences (ISKRAN). In 2015 he authored the book "Information security in a multipolar world" (in Russian). He also teaches a number of courses as an associate professor at Moscow State University. Sharikov is a member of the Russian International Affairs Council, the Valdai Discussion Club, and the Younger Generation Leadership Network for Euro-Atlantic Security.

References

Guarino, A. 2013. "Autonomous Intelligent Agents in Cyber Offence." 5th International Conference on Cyber Conflict. http://www.ccdcoe.org/cycon/2013/proceedings/d1r1s9_guarino.pdf

Hoadley, D., and N. Lucas. 2018. "Artificial Intelligence and National Security." Congressional Research Service, April 26. https://fas.org/sgp/crs/natsec/R45178.pdf

Hoffman, D. 2012. "Valery Yarynich, the Man Who Told of the Soviets' Doomsday Machine." Washington Post, December 20. https://www.washingtonpost.com/opinions/valery-varynich-the-man-who-told-of-the-soviets-doomsday-machine/2012/12/20/147f3644-4613-11e2-8061-253bccfc7532_story.html

Kania, E. 2017. "Chinese Advances in Unmanned Systems and the Military Applications of Artificial Intelligence—The PLA's Trajectory Towards Unmanned, 'Intelligentized' Warfare." Testimony before the US-China Economic and Security Review Commission, February 23. https://www.uscc.gov/sites/default/files/Kania_Testimony.pdf

Kissinger, H. 2018. "How the Enlightenment Ends." The Atlantic, June. https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/

Libicki, M. 2009. "Cyberdeterrence and Cyberwar." RAND Corporation. https://www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG877.pdf

Mehta, A. 2018. "Pentagon AI Center Progressing, but Hypersonics and Lasers May Not Get Same Treatment." Defense News, April 24. https://www.defensenews.com/pentagon/2018/04/24/pentagon-ai-center-progressing-but-hypersonics-and-lasers-may-not-get-same-treatment/

National Defense Strategy. 2018. https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf

National Strategy for the Physical Protection of Critical Infrastructures and Key Assets. 2003. https://www.dhs.gov/xlibrary/assets/Physical_Strategy.pdf

Ray, J., et al. 2016. "China's Industrial and Military Robotics Development." Research report prepared on behalf of the US-China Economic and Security Review Commission, October, p. 85. https://www.uscc.gov/sites/default/files/Research/DGI_China%27s%20Industrial%20and%20Military%20Robotics%20Development.pdf

Russian Military Encyclopedia. n.d. http://encyclopedia.mil.ru/encyclopedia/dictionary/details_rvsn.htm?id=13200@morfDictionary

Tucker, P. 2018. "Trump's Pick for NSA/CyberCom Chief Wants to Enlist AI for Cyber Offense." Defense One, January 9. https://www.defenseone.com/technology/2018/01/how-likely-next-nsacybercom-chief-wants-enlist-ai/145085/

UNOG. 2018. Chair's Summary of the Discussion. United Nations Group of Governmental Experts, April 10–13. https://www.unog.ch/80256EDD006B8954/httpAssets/DF486EE2B556C8A6C125827A00488B9E/$file/Summary+of+the+discussions+during+GGE+on+LAWS+April+2018.pdf

Yarynich, V. 2003. C3: Nuclear Command, Control, Cooperation. Edited by B. Blair. Washington, DC: Center for Defense Information. https://www.scribd.com/doc/282622838/C3-Nuclear-Command-Control-Cooperation
