Artificial Intelligence: Ethical, social, and security impacts for the present and the future
Ebook · 352 pages · 5 hours


About this ebook

A global perspective on AI

AI is much more than just a simple tool powering our smartphones or allowing us to ask Alexa about the latest cinema times. It is a technology that is, in very subtle but unmistakable ways, exerting an ever-increasing influence over our lives – and the more we use it, the more AI is altering our existence.

The rise of AI and super-intelligent AI raises ethical issues. AI is the power behind Google’s search engine, enables social media sites to serve up targeted advertising, and gives Alexa and Siri their voices. It is also the technology enabling self-driving vehicles, predictive policing, and autonomous weapons that can kill without direct human intervention. All of these raise complex ethical issues that remain unresolved and will continue to be debated.

There are countless how-to books on AI technology, replete with methods to improve and advance the statistics and algorithms of AI; however, the social, ethical, and security impacts are often at best a secondary consideration – if discussed at all.

This book explores the complex topic of AI ethics in a cross-functional way, alternating between theory and practice. Recent practical events and their associated challenges are presented, such as the collateral effects of the COVID-19 pandemic on the application of AI technologies. The book also gives an up-to-date overview of the potential positive and negative outcomes of AI implementations, together with an analysis of AI from an ethical perspective.

Before you dive into a world populated with AI, read this book to understand the associated ethical challenges of AI technologies.
Language: English
Publisher: IT Governance Publishing
Release date: Jun 30, 2022
ISBN: 9781787783720
Author

Julie Mehan

Dr Julie Mehan is a Principal Analyst for a strategic consulting firm in the State of Virginia. She has been a career Government Service employee, a strategic consultant, and an entrepreneur.


    Book preview

    Artificial Intelligence - Julie Mehan

    INTRODUCTION

    Let’s start by saying that this book is not a guide on how to develop AI. There are plenty of those – and plenty of YouTube videos providing introductions to machine learning (ML) and AI. Rather, the intent is to provide an understanding of AI’s foundations and its actual and potential social and ethical implications – though by no means ALL of them, as we are still in the discovery phase. Although it is not technically focused, this book is essential reading for engineers, developers, and statisticians in the AI field, as well as for computer scientists, educators, students, and organizations seeking to enhance their understanding of how AI can change – and already is changing – the world we live in.

    An important note: throughout this book, the term AI will be used as an overarching concept encompassing many of the areas and sub-areas of AI, ML, and deep learning (DL). So, readers, allow some latitude for a certain degree of inaccuracy in using the overarching AI acronym in reference to all of its permutations.

    It is essential to begin by defining and describing AI, while bearing in mind that there is no single accepted definition. This is partly because intelligence itself is difficult to define. As Massachusetts Institute of Technology (MIT) Professor Max Tegmark pointed out, "There’s no agreement on what intelligence is even among intelligent intelligence researchers."³

    In fact, few concepts are less clearly defined than AI. The term AI itself is polysemous – it has multiple meanings and interpretations. Indeed, there appear to be as many perceptions and definitions of AI as there are proliferating applications. Among the many definitions, let’s start with a really simple one: AI is intelligence exhibited by machines, where a machine can learn from information (data) and then use that learned knowledge to do something.
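
    To make that definition concrete, here is a minimal, illustrative Python sketch (not from the book) of one of the simplest possible learning machines: a nearest-neighbour classifier that "learns" by storing labelled examples and then uses them to label new data. The data values are invented for illustration.

    # Learn from data (store labelled examples), then do something with it
    # (label new, unseen points by proximity). All values are illustrative.
    examples = [
        ((150, 50), "small"),    # (height_cm, weight_kg) -> label
        ((160, 60), "small"),
        ((180, 90), "large"),
        ((190, 100), "large"),
    ]

    def classify(point):
        """Return the label of the closest stored example."""
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(examples, key=lambda ex: sq_dist(ex[0], point))[1]

    print(classify((155, 55)))   # -> "small"
    print(classify((185, 95)))   # -> "large"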

    According to a 2017 RAND study:⁴

    algorithms and artificial intelligence (AI) agents (or, jointly, artificial agents) influence many aspects of our lives: the news articles we read, the movies we watch, the people we spend time with, our access to credit, and even the investment of our capital. We have empowered them to make decisions and take actions on our behalf in these and many other domains because of the efficiency and speed gains they afford.

    AI faults in social media may have only a minor impact, such as pairing someone with an incompatible date. But a misbehaving AI used in defense, infrastructure, or finance could represent a high – even global – risk. A misbehaving algorithm is one whose processing results lead to incorrect, prejudiced, or simply dangerous consequences. The stock market’s Flash Crash of 2010⁵ is a painful example of just how vulnerable our reliance on AI can make us.

    As an international community, we need to address the more existential concerns. For example, where will continued innovation in AI ultimately lead us? Will today’s narrower applications of AI make way for fully intelligent AI? Will the result be a continuous acceleration of innovation, resulting in exponential growth in which super-intelligent AI develops solutions for humanity’s problems? Or will future AI intentionally or unintentionally destroy humanity – or, even more likely, be distorted and abused by humanity? These are the immediate and long-term concerns arising from the increased development and deployment of AI in so many facets of our society.

    But there is a counterargument that runs central to this book, and it could not be better expressed than in the words of Kevin Kelly, founding executive editor of Wired magazine:⁶

    But we haven’t just been redefining what we mean by AI – we’ve been redefining what it means to be human. Over the past 60 years, as mechanical processes have replicated behaviors and talents that we once thought were unique to humans, we’ve had to change our minds about what sets us apart … In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science – although all those will happen. The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.

    ³ Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. London: Penguin Books.

    ⁴ Osoba, O. A. and Welser, W., IV. (2017). An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. RAND Corporation.

    ⁵ On May 6, 2010, Wall Street experienced its worst stock plunge in several decades, wiping out almost a trillion dollars in wealth in a mere 20 minutes. Other so-called flash crashes have occurred since, and most were the result of a misbehaving algorithm.

    ⁶ Kelly, K. (October 27, 2014). The Three Breakthroughs That Have Finally Unleashed AI on the World. Wired magazine online. Available at www.wired.com/2014/10/future-of-artificial-intelligence/.

    CHAPTER 1: AI DEFINED AND COMMON DEPICTIONS OF AI – IS IT A BENEVOLENT FORCE FOR HUMANITY OR AN EXISTENTIAL THREAT?

    By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.

    Eliezer Yudkowsky

    OK! AI will destroy humans!

    This statement sums up some of the common (mis-) perceptions held by humans about AI. In truth, we are at no near-term (or even long-term) risk of being destroyed by intelligent machines.

    Elon Musk, the noted tech tycoon, begs to differ, with his claim that AI is "a fundamental risk for the existence of human civilization."⁸ Musk made this statement based on his observation that the development and deployment of AI is far outpacing our ability to manage it safely.

    Narratives about AI play a key role in communicating and shaping ideas about AI. Both fictional and non-fictional narratives have real-world effects. In many cases, public knowledge about AI and its associated technologies is limited. Perceptions and expectations are therefore usually informed by personal experience of existing applications, by film and books, and by the voices of prominent individuals talking about the future. This disconnect between popular narratives and the reality of the technology can have significant negative consequences.

    Narratives focused on utopian extremes can create unrealistic expectations that the technology is not yet able to meet. Narratives focused on fear of AI may overshadow some of the real challenges facing us today. With genuine issues such as wealth distribution, privacy, and the future of work before us, it is important for public and legislative debate to be founded on a better understanding of AI. Bad regulation is another potential consequence of misleading narratives: policymakers either respond to these narratives because they are the ones that resonate with the public, or because they are themselves influenced by them. AI may develop too slowly and fail to meet expectations, or it may evolve so fast that it is not aligned with legal, social, ethical, and cultural values.

    A very brief history of AI – and perceptions of AI

    Whether AI is a potential threat or not may be debatable, but before entering the debate, let’s look at the history of AI. AI is not a new term. In fact, it was first introduced in 1956 by John McCarthy, an assistant professor at Dartmouth College, at the Dartmouth Summer Research Project. His definition of AI was "the science and engineering of making intelligent machines" – getting machines to work and behave like humans.

    But the concept of AI was not first conceived with the term in 1956. Although it is not surprising that AI grew rapidly once computers existed, what is surprising is how many people thought about AI-like capabilities hundreds of years before there was even a word to describe what they were thinking about. In fact, something similar to AI can be found as far back as Greek mythology and Talos, a giant bronze automaton warrior said to have been made by Hephaestus to protect Europa, Zeus’s consort, from pirates and invaders who might want to kidnap her.

    Between the fifth and fourteenth centuries, or the Dark Ages, there were a number of mathematicians, theologians, philosophers, professors, and authors who contemplated mechanical techniques, calculating machines, and numeral systems that ultimately led to the idea that mechanized human thought might be possible in non-human beings.

    Leonardo da Vinci designed an automaton (a mechanical knight) in 1495, although it was never realized.

    Jonathan Swift’s 1726 novel Gulliver’s Travels described an apparatus called "the engine." This device’s supposed purpose was to improve knowledge and mechanical operations to the point where even the least talented person would seem skilled – all with the assistance and knowledge of a non-human mind.

    Inspired by engineering and evolution, Samuel Butler wrote an essay in 1863 entitled "Darwin Among the Machines," wherein he predicted that intelligent machines would come to dominate:

    … the machines are gaining ground upon us; day by day we are becoming more subservient to them […] that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.

    Fast forward to the 1900s, when concepts related to AI took off at full tilt and the term robot was first used. In 1921, Karel Čapek, a Czech playwright, published a play entitled Rossum’s Universal Robots (in its English translation), which featured factory-made artificial people – the first known use of the word.

    One of the first examples on film was Maria, the Maschinenmensch or machine-human, in Fritz Lang’s 1927 German movie Metropolis. Set in a dystopian future, the gynoid¹⁰ Maria was designed to resurrect Hel, the deceased love of the inventor, Rotwang, but she evolved to seduce, corrupt, and destroy; in the end, her fate was to be destroyed by fire. Many have claimed that this movie spawned the trend of futurism in cinema. Even watching it today in its 2011 restoration, it is uncanny to see how many shadows of cinema yet to come it already contains.

    Figure 1-1: Gynoid Maria from the movie Metropolis¹¹

    In 1950, Alan Turing published "Computing Machinery and Intelligence," which proposed the Imitation Game – posing the question of whether machines could actually think. It later became known as the Turing Test, a way of measuring machine (artificial) intelligence. The test became an important component of the philosophy of AI, which addresses intelligence, consciousness, and ability in machines.

    In his novel Dune, published in 1965, Frank Herbert describes a society in which intelligent machines are so dangerous that they are banned by the commandment "Thou shalt not make a machine in the likeness of a human mind."¹²

    Fast forward to 1969 and the birth of Shakey – the first general-purpose mobile robot. Developed at the Stanford Research Institute (SRI) from 1966 to 1972, Shakey was the first mobile robot to reason about its actions. Its playground was a series of rooms with blocks and ramps. Although not a practical tool, it led to advances in AI techniques, including visual analysis, route finding, and object manipulation. The problems Shakey faced were simple and required only basic capability, but they led the researchers to develop a sophisticated search algorithm called A* that would also work in more complex environments. Today, A* is used in applications such as understanding written text, figuring out driving directions, and playing computer games.
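
    To give a flavor of Shakey’s legacy, here is a minimal Python sketch of A* finding a route across a small grid – an illustrative toy, not the SRI original, using the Manhattan distance as its heuristic:

    import heapq

    def a_star(grid, start, goal):
        """A* search on a 4-connected grid (0 = free cell, 1 = obstacle).
        Returns the shortest path as a list of (row, col) cells, or None."""
        def h(cell):  # Manhattan distance: an admissible heuristic here
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        rows, cols = len(grid), len(grid[0])
        g_score = {start: 0}            # cheapest known cost from start
        came_from = {}                  # parent pointers for path reconstruction
        open_set = [(h(start), start)]  # priority queue ordered by f = g + h
        while open_set:
            _, cell = heapq.heappop(open_set)
            if cell == goal:            # walk the parent pointers back to start
                path = [cell]
                while cell in came_from:
                    cell = came_from[cell]
                    path.append(cell)
                return path[::-1]
            r, c = cell
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nb
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    tentative = g_score[cell] + 1
                    if tentative < g_score.get(nb, float("inf")):
                        g_score[nb] = tentative
                        came_from[nb] = cell
                        heapq.heappush(open_set, (tentative + h(nb), nb))
        return None                     # no route exists

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 0)))  # routes around the wall of 1s

    The heuristic is what sets A* apart from brute-force search: it steers exploration toward the goal while still guaranteeing the shortest route.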

    1997 saw the triumph of Deep Blue, IBM’s chess-playing computer, which became the first system to play chess against the reigning world champion, Garry Kasparov, and win. This was a huge milestone in the development of AI and the classic plot we’ve seen so often of man versus machine. Deep Blue was programmed to solve the complex, strategic problems presented in the game of chess, and it enabled researchers to explore and understand the limits of massively parallel processing. It gave developers insight into ways they could design a computer to tackle complex problems in other fields, using deep knowledge to analyze a higher number of possible solutions. The architecture used in Deep Blue has since been applied to financial modeling, including marketplace trends and risk analysis; data mining – uncovering hidden relationships and patterns in large databases; and molecular dynamics, a valuable tool for helping to discover and develop new drugs.
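
    The core idea behind Deep Blue – search the game tree and score the possible outcomes – can be shown in miniature. The following Python sketch is an illustrative minimax search for tic-tac-toe, not Deep Blue’s actual code, which combined this kind of search with alpha-beta pruning, massive parallelism, and hand-tuned evaluation functions:

    # Exhaustive minimax for tic-tac-toe: X maximizes, O minimizes.
    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, best move index) from X's point of view."""
        w = winner(board)
        if w:
            return (1 if w == "X" else -1), None
        free = [i for i, square in enumerate(board) if square == " "]
        if not free:
            return 0, None  # draw
        best_score = -2 if player == "X" else 2
        best_move = None
        for i in free:
            board[i] = player                    # try the move ...
            score, _ = minimax(board, "O" if player == "X" else "X")
            board[i] = " "                       # ... then undo it
            if (player == "X" and score > best_score) or \
               (player == "O" and score < best_score):
                best_score, best_move = score, i
        return best_score, best_move

    board = list("XOX O    ")   # X to move; O threatens the middle column
    print(minimax(board, "X"))  # best achievable score and move for X

    Chess is far too large to search exhaustively like this, which is why Deep Blue’s pruning, parallel hardware, and evaluation heuristics mattered so much.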

    From 2005 onwards, AI has shown enormous progress and increasing pervasiveness in our everyday lives. From the first rudimentary concepts of AI in 1956, today we have speech recognition, smart homes, autonomous vehicles (AVs), and so much more. What we are seeing here is a real compression of time in terms of AI development. But why? Blame it on the increase in data, or big data. Although we may not hear that exact term as often now, it hasn’t disappeared – data has just got bigger. This increase has left us with a critical question: Now what? As in: We’ve got all this stuff (that’s the technical term for it!) and it just keeps accumulating – so what do we do with it? AI has become the set of tools that can help an organization aggregate and analyze data more quickly and efficiently. Big data and AI are merging into a synergistic relationship, where AI is useless without data, and mastering today’s ever-increasing amount of data is insurmountable without AI.

    So, if we have really entered the age of AI, why doesn’t our world look more like The Jetsons, with autonomous flying cars, jetpacks, and intelligent robotic housemaids? Oh, and in case you aren’t old enough to be familiar with The Jetsons – well, it was a 1960s TV cartoon series that became the single most important piece of twentieth-century futurism. And though the series was just a Saturday morning cartoon, it was based on very real expectations for the future.

    In order to understand where AI is today and where it might be tomorrow, it’s critical to know exactly what AI is, and, more importantly, what it is not.

    What exactly is AI?

    In many cases, AI has been perceived as robots doing some form of physical work or processing, but in reality, we are surrounded by AI doing things that we take for granted. We are using AI every time we do a Google search, look at our Facebook feeds, ask Alexa to order a pizza, or browse Netflix movie selections.

    There is, however, no straightforward, agreed-upon definition of AI. It is perhaps best understood as a branch of computer science that endeavors to replicate or simulate human intelligence in a machine, so that machines can perform tasks that typically require human intelligence – sometimes more efficiently than we can. Some programmable functions of AI systems include planning, learning, reasoning, problem solving, and decision-making.

    In effect, AI is multidisciplinary, incorporating human social science, computing science, and systems neuroscience,¹³ each of which has a number of sub-disciplines.¹⁴

    Figure 1-2: AI is multidisciplinary¹⁵

    Computer scientists and programmers view AI as algorithms for making good predictions. Unlike statisticians, they are not too interested in how we got the data or in models as representations of some underlying truth. For them, AI is black boxes making predictions.

    Statisticians understand that it matters how data is collected, that samples can be biased, that rows of data need not be independent, and that measurements can be censored or truncated. In reality, the majority of AI is just applied statistics in disguise. Many of the techniques and algorithms used in AI are either borrowed wholesale from statistics or rely heavily on statistical theory.

    And then there’s mathematics. The topics at the heart of mathematical analysis – continuity and differentiability – are also at the foundation of most AI/ML algorithms.
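
    To see why, consider this minimal Python sketch (illustrative, not from the book) of gradient descent, the workhorse behind training most modern ML models. It simply follows the derivative of a differentiable loss function downhill:

    # Gradient descent on the one-dimensional loss f(w) = (w - 3)^2.
    # Training a neural network applies the same idea in millions of
    # dimensions, which is why differentiability is so central to ML.

    def grad(w):                  # analytic derivative: f'(w) = 2(w - 3)
        return 2.0 * (w - 3.0)

    w, learning_rate = 0.0, 0.1
    for _ in range(50):
        w -= learning_rate * grad(w)   # step against the gradient

    print(round(w, 4))            # converges to the minimum at w = 3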

    All AI systems – real and hypothetical – fall into one of three types:

    1. Artificial narrow intelligence (ANI), which has a narrow range of abilities;

    2. Artificial general intelligence (AGI), which is on par with human capabilities; or

    3. Artificial superintelligence (ASI), which is more capable than a human.

    ANI is also known as weak AI and involves applying AI only to very specific and defined tasks, e.g. facial recognition or speech recognition/voice assistants. These capabilities may seem intelligent; however, they operate under a narrow set of constraints and limitations. Narrow AI doesn’t mimic or replicate human intelligence; it merely simulates human behavior within a narrow and specified range of parameters and contexts. Examples of narrow AI include:

    • Siri by Apple, Alexa by Amazon, Cortana by Microsoft, and other virtual assistants;

    • IBM’s Watson;

    • Image/facial recognition software;

    • Disease mapping and prediction tools;

    • Manufacturing and drone robots; and

    • Email spam filters/social media monitoring tools for dangerous content.

    AGI is also referred to as strong or deep AI: intelligence that can mimic human intelligence and/or behaviors, with the ability to learn and apply its intelligence to solve any problem. AGI can think, understand, and act in a way that is virtually indistinguishable from that of a human in any given situation. Although there has been considerable progress, AI researchers and scientists have not yet achieved fully functional strong AI. To succeed, they would need to make machines conscious and program a full set of cognitive abilities. Machines would have to take experiential learning to the next level, not just improving efficiency on singular tasks but gaining the ability to apply experiential knowledge to a wide and varying range of different problems. The physicist Stephen Hawking stated that there is the potential for strong AI to "… take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, could not compete, and would be superseded."¹⁶

    One of the most frightening examples of AGI is HAL (Heuristically programmed ALgorithmic computer) in 2001: A Space Odyssey. HAL 9000, the sentient computer at the heart of 2001, remains one of the most memorable characters in the film. Faced with the prospect of disconnection after an internal malfunction, HAL eventually turns on the Discovery 1 astronaut crew, killing one, before being manually shut down by the other crew member. HAL continues to represent a common fear of future AI, in which man-made technology could turn on its creators as it evolves in knowledge and consciousness.

    ASI is still only a hypothetical capability. It is AI that doesn’t just mimic or understand human intelligence and behavior; ASI represents the point at which machines become self-aware and may even surpass the capacity of human intelligence and ability. ASI means that AI has evolved to be so akin to humans in its emotions and experiences that it doesn’t just understand them; it develops emotions, needs, beliefs, and desires of its own.

    A possible example of ASI is the android Data, who appeared in the TV show Star Trek: The Next Generation. In one episode, "The Measure of a Man," Data becomes an object of study, threatened with having his memory removed and then being deactivated and disassembled in order to learn how to create more Data-like androids. The scientist argues that Data is purely a machine; Data claims that he will lose himself, as his identity consists of a complex set of responses to the things he has experienced and learned over time, making him unique. If other androids were created, they would differ from him for precisely this reason. The possibility of new androids does not make him worry about his identity; rather, it is the possibility that he will be reverted to something like a blank slate, which would then no longer be him. In the end, it comes down to the question of what is human. Can humanity be defined by something like sentience, self-awareness, or the capacity for self-determination (autonomy), and how are these determined? It appears that these questions cannot be fully answered even for humans, much less for Data, the android.

    As AI continues to evolve, however, these may become the most salient questions.

    Before we talk any further about AI, it’s critical to understand that AI is an overarching term. People tend to think that AI, ML, and DL are the same thing, since they have common applications – but the distinctions between them are important. This book will nonetheless continue to use AI as the primary term that reaches across all of these subsets.
