Artificial Intelligence: Transparency isn’t just a trend

opinion
Jan 12, 2017 | 7 mins

There is a growing realization that we cannot start deploying and using A.I. systems if their reasoning is opaque. We need to know what they are thinking.

I straddle three very different worlds. First, I am a technologist living in an academic environment as a professor of computer science at Northwestern University. But I am also a co-founder of Narrative Science, an advanced natural language generation company that is a thriving business. And my work in both of these worlds has led to a substantial amount of time spent with different government organizations that are considering how and when A.I. should be utilized.

Each world has its own concerns, and it is no secret that they occasionally look at each other with some skepticism. Academics and businesses each see the other as completely missing the point. The government tends to see businesses as too short-sighted while stereotyping academics as having their heads in the clouds. And everyone sees the government as hampered by bureaucracy. All things considered, these three communities rarely agree.

You can imagine how my ears perked up when I recently heard the same topic surfacing over and over again — transparency. All three worlds — academia, business and government — weren’t just in agreement but actually in aggressive alignment. In particular, there is a growing realization that we cannot start deploying and using intelligent systems, machine learning solutions or cognitive computing platforms if their reasoning is opaque. We need to know what they are thinking.

Without explanation, we are blindly trusting the output of systems that many of us don’t understand at an algorithmic level. That blind trust is somewhat tolerable for systems that identify faces on Facebook, but it is unacceptable for systems that integrate highly sensitive and highly valuable business logic, goals and priorities into their reasoning. Nothing much rests on the former, while our livelihoods are driven by the latter.

The idea of machines explaining both themselves and the world that surrounds them has always been a personal driver for my work. While readers may perceive this as a comment driven by my substantial stake in a company whose business is machine-driven explanation, the causality is reversed. I don’t believe in the need for machines to explain themselves because it serves my business. Rather, Narrative Science exists as a business because we believe that transparency is essential for humans who work with data, analytics and intelligent systems.

For some time, it has felt like the issue of transparency had fallen by the wayside, with people (mostly vendors) arguing that performance alone should support acceptance. I am happy to say, however, that over the past few months there has been an upsurge of interest in transparency.

On the academic side, I have seen three vastly different technology talks exploring the science of models used to unpack the results of deep learning systems and the threads of thought associated with evidence-based approaches to reasoning such as IBM Watson. On the business side, I keep finding myself in conversations with CIOs who are asking how they can deploy learning and reasoning systems that are completely opaque. In effect, they are asking: how do we use a piece of software that no one can understand? Finally, the topper was a recent announcement from DARPA requesting proposals for work in “explainable A.I.”

So clearly, the awareness and demand for transparency is growing, but the question is now — how do we actually make these systems more transparent? Some technologies, such as deep learning models, are opaque to the point of disagreement among practitioners as to what they are actually doing once you step above the specifics of the algorithms. So what can we do?
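In the near term, one partial answer (offered here only as an illustration of the direction, not as a complete solution) is model-agnostic inspection: probing an opaque model from the outside to see which inputs it actually relies on. The sketch below assumes a scikit-learn workflow and uses permutation importance; the dataset and model are stand-ins chosen only to make the example runnable.

```python
# Illustrative only: inspect an otherwise opaque model by shuffling each input
# feature and measuring how much held-out accuracy drops. The dataset and model
# here are stand-ins, not anything from the systems discussed in this piece.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Large drops in score flag the features the model actually depends on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
for name, drop in ranked[:5]:
    print(f"{name}: mean accuracy drop {drop:.3f}")
```

This kind of after-the-fact probing tells you which features matter, but not why; it is a stopgap, not the explanation the rest of this piece argues for.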

In the long run, we need to focus on the design of systems that don’t just think, but can think and explain. As we wait for this to happen, there are a few rules of the road to follow for evaluating current systems.

Above all else — do not deploy an intelligent system if you cannot explain its reasoning. You need to understand what a system is doing even if you don’t understand how it’s doing it at a detailed algorithmic level. This is table stakes because it supports your understanding of the data the system requires, the decisions it is going to make and the reasoning it uses to support those decisions.

Beyond this essential issue, there are three levels of capability that matter; a brief sketch in code follows the list.

  • Explanation and consideration — Optimally, systems should be able to explain and consider. We need a system to provide clear and coherent explanations of how it has come to a decision, along with the alternatives. For example, a system that is designed to identify possible vendor fraud needs to be able to not only list the features that participated in an alert but also explain why each of those features is indicative of fraud. Since some indicators may be outside of the data set or not included in a system’s model, the ability to present those features to a system and assess their impact is also essential. The ability to ask, “What about X?” is crucial when working with people and just as crucial as we begin to work with intelligent systems.

  • Articulation — Even those systems that are not open to end-user manipulation of their models need to be able to at least articulate the features participating and the framework of the reasoning itself. This does not mean simply presenting a picture of the ten thousand pieces of evidence that led to a conclusion. Systems need to be able to extract the truly relevant features and show how they interact to support the reasoning. If a system alerts a user that it has recognized an instance of fraudulent behavior, it should be able to indicate the set of untoward transactions that set off the alert.

  • Auditability — If a system is not designed to provide real-time or user-facing explanations or articulation of a piece of reasoning, it needs to at least be auditable after the fact. There has to be a logic trace available so that any problems can be unpacked and inspected. Even if an end user cannot get access to the trace of a system, the analysts who designed the back end need to be able to see it.
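To make these three capabilities concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration: the FraudAlert class, its feature names and its rationales are not Narrative Science’s or any vendor’s actual API. It simply shows what it looks like for a system to explain its indicators, consider a feature outside its model and keep an audit trace of every interaction.

```python
# Hypothetical sketch of the three capabilities: explanation/consideration,
# articulation and auditability. Class names, features and rationales are
# invented for illustration; this is not any real product's interface.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class Indicator:
    feature: str    # e.g. "duplicate_invoice_number" (hypothetical feature name)
    value: float    # observed value for this vendor
    rationale: str  # why this feature is indicative of fraud


@dataclass
class FraudAlert:
    vendor_id: str
    indicators: List[Indicator]
    audit_log: List[str] = field(default_factory=list)

    def explain(self) -> str:
        """Explanation and articulation: list each contributing feature and why it matters."""
        self._log("explain() called")
        lines = [f"Alert for vendor {self.vendor_id}:"]
        for ind in self.indicators:
            lines.append(f"  - {ind.feature} = {ind.value}: {ind.rationale}")
        return "\n".join(lines)

    def consider(self, feature: str, value: float) -> str:
        """Consideration: accept a 'What about X?' feature that sits outside the model."""
        self._log(f"consider({feature}={value}) called")
        # A real system would re-score the case with the new feature; this sketch only acknowledges it.
        return f"{feature} = {value} is outside the current model; flagged for analyst review."

    def _log(self, event: str) -> None:
        """Auditability: keep a timestamped trace that analysts can inspect after the fact."""
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")


alert = FraudAlert(
    vendor_id="V-1042",
    indicators=[
        Indicator("duplicate_invoice_number", 3,
                  "repeated invoice numbers often signal resubmitted bills"),
        Indicator("round_dollar_share", 0.92,
                  "an unusually high share of round-dollar amounts suggests fabricated figures"),
    ],
)
print(alert.explain())
print(alert.consider("off_hours_submissions", 14))
print("\n".join(alert.audit_log))
```

The particular class design is beside the point; what matters is that explanation, consideration and auditability are treated as interface requirements and designed in from the start rather than bolted on later.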

Given the early stage of A.I. technologies, many systems simply cannot yet support explanation, consideration, articulation or audits. These systems remain powerful, but they should only be used in areas where an explanation of their reasoning is unnecessary — like facial recognition photo tagging on Facebook. That same system would be inappropriate for evaluating the creditworthiness of someone applying for a mortgage loan, because while it might perform accurately, it will not provide a useful explanation when the applicant questions the “yes” or “no.”

As with humans, we want to be able to work with rather than for A.I. systems in our homes and workplaces. To support this, these systems need to possess the ability to explain themselves to us. Otherwise, we will be in a position where all we can do is listen and obey. As always with A.I., we have a choice between developing systems that will act as partners and those that will simply tell us what to do.

While transparency may seem like a technical issue, it has widespread societal and economic implications. Without transparency, users will be hard-pressed to fully trust and respect A.I. systems. Without trust and respect, the adoption of A.I. systems will stall, and the vast and positive returns that these technologies could bring to our world will wither.

As Chief Scientist and co-founder, Kris Hammond focuses on R&D at Narrative Science. His main priority is to define the future of Advanced NLG, the democratization of data-rich information and how language will drive both interactive communications and access to the Internet of Things (IoT).

In addition to being Chief Scientist, Kris is a professor of Computer Science at Northwestern University. Prior to Northwestern, Kris founded the University of Chicago’s Artificial Intelligence Laboratory. His research has always been focused on artificial intelligence, machine-generated content and context-driven information systems.

Kris previously sat on a United Nations policy committee run by the United Nations Institute for Disarmament Research (UNIDIR). Kris received his PhD from Yale.

The opinions expressed in this blog are those of Kris Hammond and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
