Artificial Intelligence — Human Augmentation is what’s here and now

The Artificial Intelligence (“AI”) vs. Intelligence Augmentation (“IA”) debate has been around for over half a century. IA classically refers to the effective use of information technology to augment human capabilities, an idea that has been around since the 1950s. AI is today increasingly used as a broad term for machines that can mimic human functions such as learning and problem solving, but the field was originally founded on the premise that human intelligence can be precisely described, and machines made to simulate it. The term Artificial General Intelligence (AGI) is often used to represent only the latter, stricter definition. There is unprecedented hype today around AI: its remarkable recent growth trajectory, its myriad potential applications, and the emergent threats it may pose to society.

The broader definition of AI creates confusion, especially for those who may not be closely following the technology. Recent highly visible advances driven by the application of machine learning are sometimes misinterpreted and extrapolated to imply that we are at the threshold of an imminent advance toward AGI, and all that it would entail for the societal order.

There is arguably a continuum of progress between IA and AGI technologies. I posit in this article that the rapid advances we are seeing represent an acceleration of IA technologies driven by machine learning, and that moving toward the original premise of AI (and AGI) would require significant additional technical breakthroughs beyond recent advances. IA technologies advance human potential by increasing worker productivity, alleviating mundane tasks, and enhancing convenience in our lives. What we are seeing today is an acceleration in machines’ ability to perform tasks they have already been better than humans at for decades. And over the next decade, we will see further rapid advances along these fronts as machine learning technology penetrates more industries and spheres of life.

What’s old is new

Many AI and machine learning algorithms used today have been around for decades. Advanced robots, autonomous vehicles, and UAVs have been used by defense agencies for nearly half a century. The first virtual reality prototypes were developed in the 1960s. Yet, as of late 2016, not a day goes by without a mainstream publication pontificating on the upcoming societal impacts of AI. According to CB Insights data, funding for startups leveraging AI will reach $4.2B in 2016, up over 8x from just four years earlier.

What has changed?

There are many factors at work, but there is consensus that many recent developments, such as the massive recent improvements in Google Translate, Google DeepMind’s victory at the game of Go, the natural conversational interface of Amazon’s Alexa, and Tesla’s Autopilot feature, have all been propelled by advances in machine learning, more specifically Deep Learning neural networks, a subfield of AI. The theory behind Deep Learning has existed for decades, but it saw renewed focus and a significantly accelerated rate of progress starting around 2010. What we are seeing today are the beginnings of a snowball effect in the impact of Deep Learning across use cases and industries.

Exponentially greater availability of data, cloud economics at scale, sustained advances in hardware capabilities (including GPUs running machine learning workloads), omnipresent connectivity and low-power device capabilities, along with iterative improvements in learning algorithms, have all played a part in making Deep Learning practicable and effective in many day-to-day situations. Deep Learning, along with related techniques in statistical analysis, predictive analytics, and natural language processing, is already beginning to be seamlessly embedded in our day-to-day lives and across the enterprise.

Machines and Humans

Machines have long been significantly better than the human brain at several types of tasks, especially those that relate to scale and speed of computation. Three academic economists (Ajay Agrawal, Joshua Gans, and Avi Goldfarb) posit in a recent paper and HBR article that the recent advances in machine learning can be classified as advances in machine “prediction”:

“What is happening under the hood is that the machine uses information from past images of apples to predict whether the current image contains an apple. Why use the word “predict”? Prediction uses information you have to generate information you do not have. Machine learning uses data collected from sensors, images, videos, typed notes, or anything else that can be represented in bits. This is information you have. It uses this information to fill in missing information, to recognize objects, and to predict what will happen next. This is information you do not have. In other words, machine learning is a prediction technology.”

Completing any major task involves several components: data gathering, prediction, judgement, and action. Humans still significantly outpace machines at judgement-oriented tasks (widely defined), and Agrawal et al. postulate that the value of these tasks will increase as the cost of prediction goes down with machine learning.
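
To make the “prediction” framing concrete, here is a minimal sketch in Python (the dataset, model choice, and variable names are illustrative assumptions, not from the paper): a model is fit on labeled examples you already have, then used to fill in labels you do not have.

```python
# Minimal sketch: machine learning as "prediction" -- using information
# you have (labeled examples) to generate information you do not have
# (labels for examples the model has never seen).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # 8x8 images of handwritten digits
X_have, X_missing, y_have, y_missing = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_have, y_have)              # learn from information we have

predicted = model.predict(X_missing)   # fill in information we don't have
print("accuracy on unseen examples:", (predicted == y_missing).mean())
```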

While there are purpose-built machines that can demonstrate selected human-like soft skills, machine capabilities in those areas have not progressed nearly as much over the past few years as they have in the “prediction” function driven by Deep Learning. Here are some areas in which humans excel, and where new technological breakthroughs may be required for machines to catch up:

  • Learning to learn: Most recent spectacular results from machine learning have been generated by machines that observe how humans act across many instantiations of the task at hand (or large sets of inputs and outputs to the problem at hand) and “learn” using the deep neural network approach. Such machines are deliberately designed by a team of humans for a specific task, carefully generating the requisite training data and fine-tuning the learning algorithms over a period of time. Machine learning techniques today can require orders of magnitude more training data than a human would for a similar task: a toddler can recognize an elephant after seeing a picture of one a few times, but a machine requires a much larger (and possibly primed) data set (see the sketch after this list). Humans are good at learning to learn: they can learn a new skill completely unrelated to their current skill set, can decide what to learn and find and gather data accordingly, can learn implicitly or subconsciously, can learn from a variety of instruction formats, and can ask relevant questions to enhance their learning. Machines today are only beginning to learn to learn.
  • Common sense: Humans are good at exercising “common sense”, i.e. judgement in universal ways without thinking expansively or requiring large data sets. Machines are in their relative infancy in this field, in spite of rapid strides in natural language processing using deep learning. Scientists working on common-sense reasoning reckon that additional new advances are required for machines to exhibit it. We (or our kids) have all faced this issue while trying to chat with Alexa or Siri.
  • Intuition and zeroing in: The human brain is good at exercising intuition and zeroing in, i.e. finding a fact, idea, or course of action within a very large, complex, ambiguous set of options. There are ongoing academic efforts making headway on bringing intuition to machines, but machine intelligence is generally very early on this dimension.
  • Creativity: While there are now machines that have generated works of music or art indistinguishable to lay persons from the work of masters, these have largely been based on learning the creativity patterns of those masters. True creativity would entail the ability to generate novel solutions to problems not previously seen, or to create truly innovative works of art.
  • Empathy: The ability to understand emotions, along with value systems, vision setting, leadership, and other soft capabilities, remains a uniquely human skill set.
  • Versatility: The same person can perform reasonably well at many tasks, e.g. pick up a box, drive a car to work, console a kid, and give a speech. Machines and robots are still purpose-built for specific tasks.
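
As a rough illustration of the data appetite noted in the “learning to learn” bullet above (the dataset and model here are assumptions chosen purely for illustration), the sketch below trains the same classifier on progressively larger training sets and reports how its accuracy grows with data:

```python
# Illustrative sketch: model accuracy as a function of training set size.
# A human may need a handful of examples; a learner like this needs many.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in (10, 50, 250, 1000):  # increasingly large training sets
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000,
                        random_state=0)
    clf.fit(X_train[:n], y_train[:n])
    print(f"{n:5d} training examples -> "
          f"test accuracy {clf.score(X_test, y_test):.2f}")
```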

IA and AI

Summarizing the above: while machines have made rapid strides at learning or “prediction” skills, they are in the early days of attempting to emulate truly “human” skills. We propose classifying prediction, first-order machine learning, and human-in-the-loop automation capabilities as “IA” technologies. These technologies typically use the unique power of machines (the ability to process huge data sets) to effectively augment human capabilities, and the final output of the system typically depends on the humans that bring complementary skills, the humans that train them, and the humans that design them.
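
As a hedged illustration of what a human-in-the-loop (IA) system can look like in practice (the confidence threshold and routing policy below are assumptions invented for the sketch, not a prescription from this article), the machine acts on its high-confidence predictions and defers the ambiguous cases to a person:

```python
# Sketch of a human-in-the-loop (IA) pattern: the model auto-handles
# confident predictions and routes uncertain ones to a human reviewer.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
X_train, X_new, y_train, _ = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

CONFIDENCE_THRESHOLD = 0.90  # assumed policy knob, tuned per application

auto_handled, sent_to_human = 0, 0
for probs in model.predict_proba(X_new):
    if probs.max() >= CONFIDENCE_THRESHOLD:
        auto_handled += 1    # machine acts on its own prediction
    else:
        sent_to_human += 1   # ambiguous case deferred to a person
print(f"machine: {auto_handled}, human review: {sent_to_human}")
```

In such a design the threshold is a business decision, trading machine throughput against human workload, which is precisely why the human remains part of the system’s output.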

Given its original roots, as well as the potential for confusion with AGI, the term AI should be reserved for machines that additionally demonstrate the aforementioned human capabilities of judgement, common sense, innate creativity, the ability to learn to learn, and empathy. This may still fall short of an omnipotent AGI, but Full Automation of complex workflows would require machines that have most of these skills.

With this lens, here is how one could classify some current and upcoming technologies expected to impact our daily lives and work:

Impact of IA and AI

It is a well-known adage that we tend to overestimate the impact of technology in the short term but underestimate it in the long term. This is known as “Amara’s Law”, and is often represented by the illustration below.

It is noteworthy that this curve has no scale on either axis, and that at any point near the beginning of the curve we do not accurately know how far out the inflexion point is. But it illustrates an important tendency well: the impact of a new technology breakthrough is initially slow to take off, then accelerates significantly as the technology develops and reaches mass-market adoption, and eventually saturates. Human and market expectations tend to overlook this trend. This feeds into the well-known market hype cycles, where expectations initially get well ahead of the technology’s immediate impact, creating the milieu for the dip into the trough of disillusionment. As the technology’s impact then continues to expand and reach broad scale, it arrives at the plateau of productivity.

I believe that what we are seeing today, in the form of significant developments and awareness around the term “AI”, is really an acceleration of a prior “IA” curve, where Deep Learning using artificial neural networks (along with the aforementioned drivers around hardware, data, cloud economics, connectivity, and other algorithmic advances) is driving us toward an inflexion point on that curve. In many scenarios, Deep Learning provides an upgrade over prior “prediction” methods that used regression and other statistical tools, rules-based systems, and/or hand-coded logic. Machine learning provides significant acceleration here by improving model accuracy, increasing the ability to process data, and being more adaptive to inputs.
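
To illustrate the kind of upgrade described above with a deliberately simplified sketch (the rule, the toy data, and the cutoff are all invented for the example), compare a hand-coded rule with a model that learns its decision boundary from the data itself:

```python
# Contrast: a hand-coded rule vs. a model that learns from data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy data: one feature (e.g. a sensor reading), noisy binary outcome.
X = rng.uniform(0, 10, size=(500, 1))
y = ((X[:, 0] + rng.normal(0, 1.5, 500)) > 5).astype(int)

def rule_based(x):
    return 1 if x > 5.0 else 0          # fixed, hand-coded cutoff

model = LogisticRegression().fit(X, y)  # boundary learned from data

rule_preds = np.array([rule_based(x) for x in X[:, 0]])
print("hand-coded rule accuracy:", round(np.mean(rule_preds == y), 2))
print("learned model accuracy:  ", round(model.score(X, y), 2))
```

The learned model also yields calibrated confidence scores rather than a bare yes/no, which is part of what makes it more adaptive to its inputs.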

In light of the aforementioned limitations that machine intelligence still has, I think the curve for Full Automation technologies should be seen as an entirely new one. And I believe we are still in the very early days of the Full Automation curve.

There have been multiple prior hype cycles and inaccurate predictions around AI and the singularity. Many AI pioneers in the field’s early days in the 1950s believed that machines with full human-like capabilities would exist within ten or twenty years. The reason we didn’t get there was not that compute power advanced too slowly, but that scientific breakthroughs along multiple new dimensions were required, and the timing of such fundamental breakthroughs is hard to predict or coordinate. According to Stephen Hawking, as of 2015, “There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond”.

We are likely close to a local maximum in the hype around “AI”. However, IA (as defined above) presents a very large investible opportunity set over a 5-to-10-year horizon. Humans and machines currently complement each other, excelling at different types of skills. This will enable humans to focus on skills that they uniquely possess and enjoy exercising, while machines handle most of the mundane tasks that do not require human judgement, creativity, and empathy. A lot has been written about the changes IA technologies will bring to the nature of work and jobs, and the change will not be easy for everyone.

This Wired article summarized it well:

“You’ll be paid in the future based on how well you work with robots.” 

Where are the immediate innovation and investment opportunities?

I believe that the medium-term impact of IA (human-enhancing automation using Deep Learning and other machine learning techniques) will be bigger than most think, while Full Automation is further away than some recent reporting might indicate.

The point of this post is not to speculate on whether AGI is 10 years away or 100, or whether or not it is a threat to society. The point I am making is that if you are investing in or starting a company or project today with up to a 5-10 year horizon to achieving mass scale, there is significant immediate value in IA, or human augmentation, technologies propelled by machine learning, and these present a lower-resistance path to commercial and societal success.

As always, for B2B models, we look for offerings that address a clear existing pain point, demonstrate strong ROI, fit neatly into existing workflows, align the interests of the buyer, user, and facilitator in the enterprise, and are a painkiller rather than a vitamin. In this space, I would bet on human-in-the-loop (IA) technologies that help increase overall productivity, optimize costs, personalize solutions, or help businesses offer new products to their customers.

Machine learning technologies are being applied across many vertical industries. This is a topic for an entire investment thesis (or at least a separate post of its own), but here is a short list of sectors where companies are leveraging advances in machine learning to augment human capabilities, enhance productivity, or optimize the use of resources:

  • Enterprise – Robotic assistants that automate mundane, repetitive tasks are emerging across a variety of functions, and will penetrate deeper across the enterprise fabric over the next decade. Wearables using augmented reality will help smoothly accomplish tasks that are unsafe or expensive today.
  • Manufacturing – Collaborative, intelligent robots that can safely work alongside humans and handle tasks that are hard, unsafe, or repetitive will help increase productivity.
  • Transportation and Logistics – While there is a fierce public race among technology companies and OEMs toward fully autonomous vehicles, there is significant nearer-term opportunity in reducing driver workload in mundane driving situations such as highways, reducing error rates and accidents in human-driven vehicles, and improving traffic flow and fuel efficiency. Over time, the fully autonomous ecosystem promises to change the fabric of urban life and will create many concomitant opportunities.
  • Healthcare – Machine learning technologies can help medical professionals make more accurate, personalized diagnoses based on wider data sets.
  • Agriculture – A variety of farming robots, crop optimization techniques, automated irrigation systems, and pest warning systems will help increase agricultural productivity at high rates.

The One Hundred Year Study on Artificial Intelligence, conducted by a preeminent academic panel, recently published a balanced report with a good overview of opportunities and progress in machine learning technologies across many of these sectors. CB Insights has a nice compendium of how Deep Learning is being applied by innovative startups across a broad set of industries.

Let’s use the term AI solely to describe Full Automation technologies, which, as argued above, are not necessarily a direct extrapolation of the curve we are on today. And in the meantime, let’s capitalize on the opportunities arising from the rapid recent strides in IA technologies.

(Anupam is a growth-stage technology investor at NGP, and focuses on the intersection of IoT, Data, and Machine Learning in the Enterprise)

(A version of this article was published by CB Insights in Dec 2016)
