How machines and humans will coexist in the future

Opinion
Oct 26, 2015 | 5 mins
Augmented Reality, Technology Industry

A look at the benefits of transparency in the evolving world of artificial intelligence.


Lately, discussions have run rampant about the impact of intelligent systems on the nature of work, jobs and the economy. Whether it is self-driving cars, automated warehouses, intelligent advisory systems, or interactive systems supported by deep learning, these technologies are rumored to first take our jobs and eventually run the world.

There are many points of view on this issue, all aimed at defining our role in a world of highly intelligent machines, even as they aggressively deny the truth of the world to come. Below are a few popular arguments about how we’ll coexist with machines in the future.

1. Machines take our jobs, new jobs are created

Some arguments are driven by the historical observation that every new piece of technology has both destroyed and created jobs. The cotton gin automated the cleaning of cotton, so people no longer had to do that work by hand; at the same time, the machine enabled massive growth in cotton production, which shifted the labor to cotton picking. For nearly every piece of technology, from the steam engine to the word processor, the argument is that as some jobs were destroyed, others were created.

2. Machines only take some of our jobs

A variant of the first argument is that even if new jobs are not created, people will shift their focus to those aspects of work that intelligent systems are not equipped to handle. This includes areas requiring the creativity, insight and personal communication that are hallmarks of human ability and that machines simply do not possess. The driving logic is that there are certain human skills that a machine will never be able to master.

A similar, but more nuanced argument portrays a vision of man-machine partnerships in which the analytical power of a machine augments the more intuitive and emotional skills of the human. Or, depending on how much you value one over the other, human intuition will augment a machine’s cold calculations.

3. Machines take our jobs, we design new machines

Finally, there is the view that as intelligent machines do more and more of the work, we will need more and more people to develop the next generation of those machines. Supported by historical parallels (e.g., cars created the need for mechanics and automobile designers), the argument is that we will always need someone working on the next generation of technology. This is a particularly presumptuous position, as it is essentially technologists arguing that while machines will do many things, they will never be able to do what technologists do.

These are all reasonable arguments and each one has its merits. But they are all based on the same assumption: Machines will never be able to do everything that people can do, because there will always be gaps in a machine’s ability to reason, be creative or intuitive. Machines will never have empathy or emotion, nor have the ability to make decisions or be consciously aware of themselves in a way that could drive introspection.

These assumptions have existed since the earliest days of A.I. They tend to go unquestioned simply because we prefer to live in a world in which machines cannot be our equals, and we maintain control over those aspects of cognition that, to this point at least, make us unique.

But the reality is that, from consciousness to intuition to emotion, there is no reason to believe that any of these gaps will hold. A full discussion of this point is beyond the scope of this posting, but I will note a comment made by Maggie Boden, the godmother of A.I. and Cognitive Science: the only alternative to the belief that human thought can be modeled on a machine is to believe that our minds are the product of “magic.” Either we are part of the world of causation or we are not. If we are, A.I. is possible.

So what happens to our world and our work when an intelligent system exists that can do everything we do and do it better? That is an argument for another day.

Putting hypothetical futures to the side, we do need to consider our current relationship with machines that are growing smarter every day, and how we want that relationship to unfold.

As machines get smarter, it is imperative that we enable intelligent machines to communicate and explain themselves to us. If we don’t, as I have argued earlier, we will find ourselves in a less than ideal place. We will follow the dictates of systems that may be exceedingly good at what they do but are unwilling and/or unable to relay the reasons behind their actions.

However, in a world in which communication and transparency rule, we will have machine partners that we can understand and work with, even when we reach a day when that work is simply unnecessary. As our systems and machines become more capable and smarter, they also need to have the ability to explain both their results and their processes. If not, we will build ourselves a world of black boxes that provide us with answers but no insight.

As Chief Scientist and co-founder, Kris Hammond focuses on R&D at Narrative Science. His main priority is to define the future of Advanced NLG, the democratization of data-rich information, and how language will drive both interactive communications and access to the Internet of Things (IoT).

In addition to being Chief Scientist, Kris is a professor of Computer Science at Northwestern University. Prior to Northwestern, Kris founded the University of Chicago’s Artificial Intelligence Laboratory. His research has always been focused on artificial intelligence, machine-generated content and context-driven information systems.

Kris previously sat on a United Nations policy committee run by the United Nations Institute for Disarmament Research (UNIDIR). Kris received his PhD from Yale.

The opinions expressed in this blog are those of Kris Hammond and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
