What is artificial intelligence?

opinion
Apr 10, 2015 | 6 mins

What is artificial intelligence (AI), and what is the difference between general AI and narrow AI?

There seems to be a lot of disagreement and confusion around artificial intelligence right now.

We’re seeing ongoing discussion around evaluating AI systems with the Turing Test, warnings that hyper-intelligent machines are going to slaughter us, and equally frightening, if less dire, warnings that AI and robots are going to take all of our jobs.

In parallel we have also seen the emergence of systems such as IBM Watson, Google’s Deep Learning, and conversational assistants such as Apple’s Siri, Google Now and Microsoft’s Cortana. Mixed into all this has been crosstalk about whether building truly intelligent systems is even possible.

A lot of noise.

To get to the signal, we need to understand the answer to a simple question: What is AI?

AI: A textbook definition

The starting point is easy.  Simply put, artificial intelligence is a sub-field of computer science. Its goal is to enable the development of computers that are able to do things normally done by people — in particular, things associated with people acting intelligently.

John McCarthy, who would later found Stanford’s AI lab, coined the term in 1955 in his proposal for what is now called the Dartmouth Conference, the 1956 workshop where the core mission of the AI field was defined.

If we start with this definition, any program can be considered AI if it does something that we would normally think of as intelligent in humans. How the program does it is not the issue, just that it is able to do it at all. That is, it is AI if it is smart, but it doesn’t have to be smart like us.

Strong AI, weak AI and everything in between

It turns out that people have very different goals with regard to building AI systems, and they tend to fall into three camps, based on how closely the machines they are building line up with how people work.

For some, the goal is to build systems that think exactly the same way that people do. Others just want to get the job done and don’t care if the computation has anything to do with human thought. And some are in-between, using human reasoning as a model that can inform and inspire but not as the final target for imitation.

The work aimed at genuinely simulating human reasoning tends to be called “strong AI,” in that any result can be used not only to build systems that think but also to explain how humans think. However, we have yet to see a real model of strong AI or systems that are actual simulations of human cognition, as this is a very difficult problem to solve. When that time comes, the researchers involved will certainly pop some champagne, toast the future and call it a day.

The work in the second camp, aimed at just getting systems to work, is usually called “weak AI” in that while we might be able to build systems that can behave like humans, the results will tell us nothing about how humans think. One of the prime examples of this is IBM’s Deep Blue, a system that was a master chess player, but certainly did not play in the same way that humans do.

Somewhere in the middle of strong and weak AI is a third camp (the “in-between”): systems that are informed or inspired by human reasoning. This tends to be where most of the more powerful work is happening today. These systems use human reasoning as a guide, but they are not driven by the goal to perfectly model it.

A good example of this is IBM Watson. Watson builds up evidence for the answers it finds by looking at thousands of pieces of text that give it a level of confidence in its conclusion. It combines the ability to recognize patterns in text with the very different ability to weigh the evidence that matching those patterns provides. Its development was guided by the observation that people are able to come to conclusions without having hard and fast rules and can, instead, build up collections of evidence. Just like people, Watson is able to notice patterns in text that provide a little bit of evidence and then add all that evidence up to get to an answer.
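
To make the idea of adding up evidence concrete, here is a toy sketch in Python. It is not Watson’s actual algorithm; the candidate answers, text patterns and weights below are invented purely for illustration. The point is only that many small, individually weak pieces of evidence can be combined into a ranked answer with a rough confidence.

    # Toy sketch of evidence accumulation (not Watson's real pipeline).
    # Each matched text pattern contributes a small, weighted piece of evidence;
    # the candidate whose evidence adds up highest wins, and the total acts as a
    # rough confidence score.
    from collections import defaultdict

    # Hypothetical evidence tuples: (candidate answer, pattern matched in text, weight)
    evidence = [
        ("Toronto", "city with a major hub airport", 0.3),
        ("Chicago", "largest city on Lake Michigan", 0.6),
        ("Chicago", "home of O'Hare airport", 0.5),
        ("Toronto", "home of Pearson airport", 0.4),
    ]

    def score_candidates(evidence):
        scores = defaultdict(float)
        for candidate, pattern, weight in evidence:
            scores[candidate] += weight  # each match adds a little more evidence
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    best, confidence = score_candidates(evidence)[0]
    print(f"Best answer: {best} (total evidence {confidence:.2f})")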

Likewise, Google’s work in Deep Learning has a similar feel in that it is inspired by the actual structure of the brain. Informed by the behavior of neurons, Deep Learning systems function by learning layers of representations for tasks such as image and speech recognition. Not exactly like the brain, but inspired by it.
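
As a rough sketch of what “layers of representations” means, here is a tiny feed-forward pass in Python with NumPy. It is not Google’s system, and the weights are random rather than learned, so the output is meaningless; the point is only the layered structure, where each layer transforms the previous layer’s output into a slightly more abstract representation.

    # Minimal sketch of stacked layers of representation (illustrative only).
    # A real deep learning system learns these weights from data; here they are
    # random, so only the shape of the computation is meaningful.
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, in_dim, out_dim):
        """One layer: a linear transform of the previous representation, then a nonlinearity."""
        W = rng.normal(size=(in_dim, out_dim))
        b = np.zeros(out_dim)
        return np.maximum(0.0, x @ W + b)  # ReLU nonlinearity

    x = rng.normal(size=(1, 64))             # stand-in for raw input (e.g. pixel values)
    h1 = layer(x, 64, 32)                    # first layer: low-level features
    h2 = layer(h1, 32, 16)                   # second layer: more abstract features
    scores = h2 @ rng.normal(size=(16, 10))  # final scores over 10 hypothetical classes
    print(scores.shape)                      # (1, 10)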

The important takeaway here is that in order for a system to be considered AI, it doesn’t have to work in the same way we do. It just needs to be smart.

Narrow AI vs. general AI

There is another distinction to be made here: the difference between AI systems designed for specific tasks (often called “narrow AI”) and those few systems that are designed for the ability to reason in general (referred to as “general AI”). People sometimes get confused by this distinction, and consequently, mistakenly interpret results in a specific area as somehow applying across all of intelligent behavior.

Systems that can recommend things to you based on your past behavior will be different from systems that can learn to recognize images from examples, which will also be different from systems that can make decisions based on the synthesis of evidence. They may all be examples of narrow AI in practice, but may not be generalizable to address all of the issues that an intelligent machine will have to deal with on its own. For example, I may not want the system that is brilliant at figuring out where the nearest gas station is to also perform my medical diagnostics.

The next step is to look at how these ideas play out in the different capabilities we expect to see in intelligent systems and how they interact in the emerging AI ecosystem of today. That is, what they do and how they can play together. So stay tuned – there’s more to come.

As Chief Scientist and co-founder, Kris Hammond focuses on R&D at Narrative Science. His main priority is to define the future of Advanced NLG, the democratization of data-rich information, and how language will drive both interactive communications and access to the Internet of Things (IoT).

In addition to being Chief Scientist, Kris is a professor of Computer Science at Northwestern University. Prior to Northwestern, Kris founded the University of Chicago’s Artificial Intelligence Laboratory. His research has always been focused on artificial intelligence, machine-generated content and context-driven information systems.

Kris previously sat on a United Nations policy committee run by the United Nations Institute for Disarmament Research (UNIDIR). Kris received his PhD from Yale.

The opinions expressed in this blog are those of Kris Hammond and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
