
Emerging Technologies and Trends Impact Radar: 2022
Published 15 November 2021 - ID G00749318 - 81 min read
By Analyst(s): Tuong Nguyen, Danielle Casey, Eric Goodness, Alys Woodward, Annette
Jump

Initiatives: Emerging Technologies and Trends Impact on Products and Services

The technologies with the most potential to disrupt a broad cross section of markets show four themes: the smart world, the productivity revolution, ubiquitous and transparent security, and critical enablers. Product leaders must explore these technologies now to capitalize on market opportunities.

Overview
Key Findings
■ Smart spaces and multimodal UI will revolutionize how users and workers interact
with the world around them by adding multiple dimensions of contextual awareness
to create natural, seamless, automated interactions with the world.

■ Synthetic data and self-supervised learning will rapidly accelerate AI capabilities and
unlock unprecedented levels of business efficiency, effectiveness and growth
through the application of advanced AI techniques.

■ Protecting the privacy of individuals, organizations and data will require robust and
user-friendly technologies, such as passwordless authentication and homomorphic
encryption.

■ Graph technologies act as a glue and multiplier by uncovering connections across all data, delivering value in areas such as healthcare management, clinical research and the healthcare supply chain.

Recommendations
For product leaders assessing the impact of emerging technologies and trends on
products and services:



■ Invest in user experience technologies, such as advanced virtual assistants and
multimodal UI that improve productivity and provide more natural and dynamic
interactions.

■ Unlock the potential of AI tools, such as AI-augmented software engineering (AIASE), to deliver products with clear value to customers, faster, at lower cost, and with higher quality by automating high-frequency software engineering tasks. Apply AIASE to use cases such as automated peer review.

■ Use homomorphic encryption to ensure data privacy while delivering compliant, safe
operation and ethical application of user-experience-friendly security technologies,
such as passwordless authentication.

■ Add business value to your solution by using emerging technologies and trends such
as graph technologies to store, manipulate and analyze relationships between
entities.

Analysis
Overview of the Emerging Technologies and Trends Impact Radar
The Emerging Technologies and Trends Impact Radar highlights the technologies and
trends that have the most potential to disrupt a broad cross section of markets. In this
document, we have identified 20 of the highest-impact emerging technologies and trends
(see Figure 1) that are critical for product leaders to evaluate as part of their competitive
strategy, summarized by four key themes.

This radar summarizes (but is not limited to) the technologies and trends found in this
year’s Impact Radars and most closely aligned with (or most influential to) these themes.

The Smart World


By 2026, key rapidly advancing technologies, such as digital twin, Internet of Things (IoT)
platforms, smart spaces, multimodal UI and advanced virtual assistants, will transform
how people interpret and interact with the world.

Digital twin is a design pattern, but it also represents one way the physical world and its
accompanying processes are being digitized. IoT platforms underscore the
importance of captured data to drive business decision improvement. They further
underscore the value of sensor and sensing data for contextual relevance and awareness
— two aspects that are essential to expanding and improving people’s ability to interact
with the world.



Smart spaces represent the epitome of contextually aware environments and the
convergence of multiple, independently evolving trends to create highly personalized
experiences — for example, through the lens of augmented or virtual reality experiences.
Multimodal UI and advanced virtual assistants (VAs) are changing the way customers
interact with IoT devices and engage with digital platforms, improving employee
productivity and driving business productivity gains and cost optimization. (See
Emerging Technologies: Top Use Cases for Advanced Virtual Assistants in Enterprise
Operations.) In total, these technologies will change the way people experience the world.

The growing intersection of the physical and digital world will require flexibility of
interaction modalities. New experiences will require a combination of interfaces,
depending on the person, device, application and context. Multimodal UI will be required to
facilitate the interactions between humans and machines.

The Productivity Revolution


Within the next decade, AI and computing will see a second revolution, bringing
breakthroughs in capability and speed. These breakthroughs will be a force multiplier for
business and technology by unlocking further potential for meaningful innovation and
making foundational artificial intelligence (AI) technologies more useful. (See Emerging
Technologies and Trends Impact Radar: Artificial Intelligence, 2021.)

Generative AI will add a new dimension to productivity by producing totally novel media
content (including text, image, video and audio), synthetic data and models of physical
objects based on the original data. For example, generative models can be used in drug
discovery or for the inverse design of materials having specific properties.

Synthetic data will help train AI models where sufficient data is not available. There are
already numerous areas that are taking advantage of synthetic data, including
automotive, healthcare, finance, computer vision, data monetization, external analytics
support, platform evaluation and the development of test data. Furthermore, synthetic
data that is produced using generative AI techniques supports the accuracy and speed of
AI delivery.

Self-supervised learning will take us to the next phase of AI by enabling data labels to be
created from the data itself, without having to rely on external (human) supervisors that
provide labels or feedback. This will overcome one of the fundamental problems with
current AI — the need for large amounts of data and the time and energy required to
label the data.
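As a concrete illustration of labels coming from the data itself, the following minimal Python sketch turns an unlabeled token sequence into supervised (input, label) pairs by masking one token at a time; the data and masking scheme are invented for the example.

```python
# Minimal sketch of the self-supervised idea described above: the training
# label is derived from the data itself (a masked value in a sequence),
# so no human annotation is needed. Data and mask scheme are illustrative.
sentences = [["the", "pump", "is", "running"], ["the", "valve", "is", "open"]]

def make_examples(tokens, mask_token="[MASK]"):
    """Turn one unlabeled sequence into (input, label) pairs for free."""
    examples = []
    for i, target in enumerate(tokens):
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        examples.append((masked, target))  # the hidden token is the label
    return examples

for inp, label in make_examples(sentences[0]):
    print(inp, "->", label)
# A model trained to fill the mask learns structure without human labels.
```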



Ubiquitous and Transparent Security
As humans and technology become interwoven, security will play an increasingly crucial
role in addressing threats. Access to a growing suite of devices, systems, applications and
accounts will partly be secured by passwords. But the number and complexity of
passwords required is already causing poor user experience (UX) and, in turn, potentially
heightening security risks as users circumvent password best practices. Passwordless
authentication is meant to minimize the use of passwords and improve UX while
removing the known vulnerabilities associated with centrally stored passwords.

Furthermore, the co-evolution of the physical and digital world will be determined by the
systems of values and moral principles for the conduct of electronic interactions among
people, organizations and things (digital ethics). Technology such as homomorphic
encryption will be an important way to ensure the protection and privacy of data between
third-party data processing and analytics providers. The importance of security
technologies such as homomorphic encryption will grow as privacy and data protection
mandates continue to expand globally.
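To make the homomorphic property concrete, the following sketch uses the open-source python-paillier (phe) library, whose Paillier scheme is additively homomorphic (fully homomorphic encryption generalizes the same idea to arbitrary computation); the salary figures are invented.

```python
# Illustrative sketch: a third party computes on encrypted values without
# ever seeing the plaintext. Paillier is additively homomorphic; fully
# homomorphic schemes extend this to arbitrary computation.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Data owner encrypts sensitive values before sharing them.
enc_salary_a = public_key.encrypt(52000)
enc_salary_b = public_key.encrypt(61000)

# The analytics provider sums the ciphertexts without decrypting anything.
enc_total = enc_salary_a + enc_salary_b

# Only the data owner, holding the private key, can read the result.
print(private_key.decrypt(enc_total))  # 113000
```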

Critical Enablers
Critical technology enablers will disrupt markets where they are applied by reshaping
business practices, processes, methods, models and functions. Organizations require
products that improve business outcomes, which will often involve capabilities spanning several
products. Collaborative ecosystem product development (CEPD) is one way product
leaders can deliver on this need — by partnering with several, sometimes competing,
vendors to develop new solutions. Further efficiency and flexibility will be enabled by the
next era of composable enterprise — AI-generated composite applications. This will
enable dynamic personalized experiences seamlessly across channels — without
requiring a human application developer.

The demands of spatial computing, novel interconnected network paradigms and real-time analysis of interfaces and experiences will require a shift from centralized cloud
computing models to a distributed model. Hyperscale edge computing (HEC) is one
example in which data storage and processing are placed close to the things or people
that produce and/or consume that information.

Understanding the dynamics between and within the physical and digital world will
uncover new opportunities and yield additional business value. Graph technologies will
help make sense of relationships between entities such as organizations, people or
transactions. This will allow organizations to store, manipulate and analyze widely varied
perspectives.



The Impact Radar
Figure 1 shows the highest-impact emerging technologies and trends based on time to
adoption.

Figure 1: Impact Radar for 2022

The objective of this research is to guide product leaders on how emerging technologies
and trends are evolving and impacting areas of interest. Providers can leverage this
knowledge to determine which technologies or trends are most important to the success
of their business and when it makes sense to advance their products and services by
investing in them. Refer to the How to Use the Impact Radar section for more information.

Emerging Technologies or Trend Profiles


Table 1 lists emerging technologies in 2022 according to their time to adoption. Click on a
technology name in the table to jump to a profile of the technology.



Table 1. Most Impactful Emerging Technologies and Trends in 2022 Based on Time to
Adoption

Now: Passwordless Authentication; Edge AI; Low-Code Application Platforms (LCAPs)

1 to 3 Years: Digital Twins; Hyperscale Edge Computing; Multimodal UI; Advanced Virtual Assistants

3 to 6 Years: IoT Platforms; Smart Spaces; Graph Technologies; AI-Augmented Software Engineering; AI-Generated Composite Applications; Synthetic Data; Digital Ethics; Collaborative Ecosystem Product Development; Generative AI; Homomorphic Encryption

6 to 8 Years: Self-Supervised Learning; AR Cloud; 6G

The technology profiles highlighted in Table 1 are ranked by order of impact mass. For an
explanation of Gartner’s methodology for assessing Impact Radar technologies, please
see Note 1.

In addition to the technologies in Table 1, there are several longer-range technologies that product leaders should track and be prepared to invest in early, so they are ready to utilize them when they mature. These include:

■ Metaverse

■ Quantum Computing

■ Photonic Computing



Now Range
Passwordless Authentication

Analysis by: Swati Rakheja

Description: At a basic level, passwordless authentication is the means of authenticating users without using passwords. Passwords are one of the most commonly used
authentication methods in both workforce and customer use cases primarily due to the
simple deployment mechanism, without any requirement for additional software or
hardware on the user side. While passwords have long been proven on the internet, people
tend to find passwords hard to remember. This has led to poor password hygiene, making
passwords vulnerable to phishing, social engineering and brute-force attacks. On the
service provider side as well, significant cost and expertise are required to secure
usernames and passwords. Further, the centralized password storage mechanisms used
act as a honeypot for attackers, making them vulnerable to data leakage. This, in turn,
increases the success of attacks such as credential stuffing or password spraying. Thus,
passwordless approaches are of benefit to both users and providers.

Any authentication method basically uses one or more of the following factors:

■ Something only the user knows (such as a password, pattern or PIN)

■ Something only the user holds (such as a smart card, one-time password [OTP]
hardware token or mobile phone)

■ Something only the user is or does (a morphological or behavioral biometric trait)

Passwordless authentication approaches leverage any permutation of one or more of these factors, except a centrally stored password. Passwordless authentication methods
improve user experience (UX) while removing the known vulnerabilities associated with
centrally stored passwords. Some approaches focus on removing the password from the
user authentication flow, but the password remains part of the infrastructure.

Sample Vendors: HYPR, Microsoft, Secret Double Octopus, Transmit Security, Trusona,
TruU, Yubico and Veridium

Range: 0 to 1 Year



Gartner considers passwordless authentication to be in the “now” range, with estimated
adoption around 30% to 40% of the way toward the early majority target — based on the
installed base of customers and implementation benefits of eliminating passwords.
Gartner sees increased client interest in passwordless authentication, especially among
employee use cases. The Fast IDentity Online (FIDO) Alliance has introduced protocols to
enable passwordless authentication via a combination of public-key credentials and a
local gesture (endpoint PIN or a biometric) within a software authenticator on a user’s
device or on an external authenticator (e.g., a hardware security key). FIDO2 also enables
the use of a FIDO2-enabled smartphone as an external authenticator with other devices.
(See Innovation Insight for Many Flavors of Authentication Token.) Initial adoption of
FIDO for passwordless authentication in the first edition of FIDO protocols was focused
on B2C use cases and has been relatively low. However, the overall support for FIDO2 is
growing among large ecosystem vendors, such as Microsoft, Apple and Google, as well as
among access management vendors. Online applications using web authentication
(WebAuthn) are increasing. Microsoft’s support for FIDO2 external authenticators, in
addition to Windows Hello for Business in Windows 10 and Azure Active Directory (AD), is
driving FIDO2 adoption in enterprises. Interest in FIDO2-based authentication for customer
identity and access management use cases is relatively lower due to the cost and poor UX
of using security keys, as well as the unwillingness of customers to download apps for
authentication. Adoption of non-FIDO passwordless methods is also expanding as the
proliferation of mobile multifactor authentication (MFA) or biometric-authentication-based
vendors increases (although mobile MFA may include FIDO2 options).
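For illustration, the following Python sketch shows the public-key challenge-response pattern that underpins FIDO2 passwordless authentication, using the cryptography library. It is a simplified stand-in for the real WebAuthn message flow, and holding both keys in one process is for demonstration only; in practice the private key never leaves the authenticator.

```python
# Minimal sketch of the public-key challenge-response pattern behind
# FIDO2/WebAuthn passwordless authentication. Real deployments use a
# browser or platform authenticator and the WebAuthn API.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the authenticator generates a key pair; only the public key
# is registered with the service (no shared secret or password to leak).
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()

# Authentication: the service issues a random challenge ...
challenge = os.urandom(32)

# ... the device signs it after a local gesture (PIN or biometric) unlocks
# the private key, which never leaves the device.
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The service verifies the signature against the stored public key;
# verify() raises InvalidSignature on failure.
registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("Authenticated without a centrally stored password")
```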

Industries most interested in passwordless options are banking, finance, insurance, services and retail.

Mass: Very High

Gartner rates passwordless authentication (which includes all varieties of passwordless methods and flows) as very high mass (a measure of overall breadth of market impact). Demand is driven by the need of potentially every organization to minimize the use of passwords in order to reduce account takeover risk and improve employee and customer
experience.

Recommended Actions:

■ Transition to passwordless authentication by offering either proprietary or open standard methods in your applications.



■ Add FIDO2 as an option in authentication products to prepare for future
compatibility.

■ Work with product marketing managers to properly articulate the value of open
standards support and interoperability.

■ Work with product marketing to clearly highlight the value proposition of passwordless authentication solutions for both users and service providers — simpler authentication for users and stronger security for service providers.

Recommended Reading:

■ Hype Cycle for Identity and Access Management Technologies, 2021

■ Innovation Insight for Many Flavors of Authentication Token

Edge AI

Analysis by: Eric Goodness

Description: Edge AI is the use of AI techniques embedded within IoT endpoints, gateways
and other edge devices. Use cases range from autonomous vehicles to streaming
analytics. While predominantly focused on AI inference, many systems also use statistical
techniques to adapt to and accommodate local conditions.

Sample Vendors: Braincube, Crosser, FogHorn, Phizzle, Octonion, Swim

Range: 0 to 1 Year

Edge AI is within one year of crossing the chasm: widespread demand and revenue opportunities are rapidly moving it out of the early adopter stage. Edge AI is a platform for value implemented across a multitude of “edges” and environments. The only gating factors for edge AI are the design constraints of the hardware and software deployed, such as power, processing and connectivity. Immediate revenue opportunities place edge AI at the 90% to 100% point of the early majority stage, as outlined below.



AI embedded in an IoT endpoint is a leading revenue opportunity for technology and
service providers (TSPs) and is driving early majority adoption. Manufacturers and their
service provider channels seek market differentiation and revenue uplift by “servitizing”
assets. Here, the IoT endpoint (asset) runs AI models to interpret captured or external data
and drives the endpoint’s functions (automation and actuation). In this case, the AI model
is trained (and updated) on a central system and deployed to the IoT endpoint. An
example is a camera offered as a software-defined asset, meaning detectors are trained
locally, and cloud services are used only for software change management (such as
updates, reconfigurations and patches). Additionally, whole classes of industrial heavy
equipment are able to function autonomously within environments such as mining and
agriculture, with local resources on board the asset. Over the next few years, IoT endpoints
will increasingly leverage techniques such as federated machine learning (ML) and self-training and learning algorithms.
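A minimal sketch of this pattern — a centrally trained model executed on the endpoint — using the TensorFlow Lite runtime; the model file, input shape and threshold are hypothetical.

```python
# Illustrative sketch: running a centrally trained, quantized model on an
# IoT endpoint with the TensorFlow Lite interpreter. Assumes the
# tflite_runtime package is installed; model name and shapes are invented.
import numpy as np
import tflite_runtime.interpreter as tflite  # lightweight edge runtime

interpreter = tflite.Interpreter(model_path="anomaly_detector.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A window of locally captured sensor readings (shape is model-specific).
window = np.random.rand(1, 128).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], window)
interpreter.invoke()  # inference happens on-device; no cloud round trip
score = interpreter.get_tensor(output_details[0]["index"])

# Only act (or alert the cloud) when the local decision requires it.
if score[0][0] > 0.9:
    print("Anomaly detected locally; raising alert")
```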

Adoption of edge AI for analytics is accelerating, particularly in industrial settings. In these use cases, data is captured at an IoT endpoint and transferred to an AI system hosted
within an edge computer, gateway or other aggregation point. This edge AI model is used
for many industrial enterprises in scenarios on a factory or plant floor, where sensor data
from various assets is normalized and analyzed, and/or integrated within various
business planning and logistics applications. Application examples include
manufacturing execution systems, asset performance management and enterprise asset
management.

Increasingly, edge AI is a catalyst for the adoption of broader IoT solutions because of its
ability to reduce solution costs. In connected but geographically remote environments,
such as wind and solar farms, predictive and preventive maintenance solutions are cost-
prohibitive because assets are directly tethered to intermittently or expensively reached
cloud systems for intelligence. In these scenarios, edge AI allows for local, context-based
decisions on devices, with cloud services for alert generation, resource-intensive
computing or additional analytical output.

IoT adoption by process industries (such as oil and gas, manufacturing, utilities, retail,
and transportation and logistics) is driving significant interest in innovative edge analytics
and investment in edge AI. The eventual emergence and commercialization of 5G is also
increasing interest and investment in edge solutions, with 5G base stations being able to
serve as edge computing nodes. Early 5G deployments tend to leverage a hybrid of near-edge (on-premises) and far-edge (multiaccess edge computing [MEC]) solutions.



Common use cases include computer vision for video surveillance and monitoring, and
audio event detection, along with anomaly detection and data normalization in industrial
data acquisition and analyses. Common challenges met when using edge AI include
managing the rebalancing of processing capacity and energy consumption, storing the
parameters of learning and inferencing on resource-constrained devices, and allowing the
release or execution of models on distributed devices.

Mass: Very High

Edge AI will have a very high impact because of its potential to disrupt numerous use
cases across almost all industries. While edge solutions have existed as “edge-in”
solutions, the development of “cloud-out” edge AI solutions by cloud service providers
invigorated interest from industrial enterprises and investors. Most edge AI solutions will need to connect to cloud services or a remote/local data center at some point for
acceleration or data transfer; however, edge AI is not relegated to another market that will
be dominated by the hyperscalers. In fact, Gartner believes that edge-in architectures will
provide a platform for innovation for users and providers. As such, edge AI is a platform
for revenue and margin growth for many different TSP market segments, such as:

■ Hardware OEMs offering products “as a service” with embedded edge AI

■ Integrators and managed service providers offering integrated DevOps and AIOps as
managed services

■ Telecommunications providers leveraging their “edge-centric” expertise to offer


multiedge solutions spanning AI embedded in IoT endpoints, gateways and the far
edge of the MEC

Edge AI is a platform for innovation based on local context and efficient AI models and
deep learning. Ultimately, edge AI offers product leaders a new platform to create new
value, new products and new business models.

Recommended Actions:

■ Partner with IoT solution providers by supplying edge AI focused on use cases with
high communications costs that are sensitive to latency or ingest high volumes of
data at the edge.

■ Focus edge AI solutions on manufacturers seeking to servitize their products.



Recommended Reading:

■ Deploy Leaner AI at the Edge: Comparing Three Architecture Patterns to Enable Edge
AI

■ Tech Providers 2025: The Future of AI Is on the Edge

Low-Code Application Platforms (LCAPs)



Analysis by: Fabrizio Biscotti, Paul Vincent, Jason Wong, Laurie Wurster

Description: A low-code application platform (LCAP) supports rapid application development, one-step deployment, execution and management using declarative high-level programming abstractions, such as model-driven and graphical development approaches. LCAPs support the development of UIs, business logic and data services; improve productivity at the expense of runtime portability and openness, compared with high-control application platforms; and are typically delivered as cloud services.

Sample Vendors: Appian, Creatio, Mendix, Microsoft, Oracle, OutSystems, Pegasystems, Quickbase, Salesforce, ServiceNow

Range: 0 to 1 Year

The movement of LCAPs to the early majority will happen within a year, as LCAPs cover a large and increasing subset of enterprise application requirements, and some enterprises are starting to choose them as their strategic application platform. Indeed, their application scope is evolving to cover more digital business scenarios and advanced use cases, such as consumer-facing applications.

LCAP offerings are all multifunction and combine development tools, runtime platforms
for test and production, embedded databases, and integration/composition capabilities.
The short development time for applications built on LCAPs facilitates agile practices in
conjunction with business users and encourages collaboration and innovation.
Furthermore, LCAPs’ raised abstraction levels for application development and process
automation reduce the skill sets required for building basic business applications, and
they support generic application functions, such as data collection, workflow and
reporting.
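The following hypothetical Python sketch illustrates the declarative, model-driven abstraction at the heart of an LCAP: the "developer" supplies only a model, and a generic engine supplies validation and workflow. All names and fields are invented; real LCAPs express such models graphically and generate far richer applications.

```python
# Hypothetical sketch of an LCAP's declarative abstraction: the app is a
# model (data), and a generic engine does the work. Names are illustrative.
APP_MODEL = {
    "entity": "ExpenseClaim",
    "fields": [
        {"name": "amount", "type": float, "required": True},
        {"name": "description", "type": str, "required": False},
    ],
    "workflow": ["submitted", "approved", "paid"],
}

def create_record(model, **values):
    """Generic engine: validates input against the model, no hand coding."""
    record = {}
    for field in model["fields"]:
        value = values.get(field["name"])
        if value is None and field["required"]:
            raise ValueError(f"{field['name']} is required")
        if value is not None and not isinstance(value, field["type"]):
            raise TypeError(f"{field['name']} must be {field['type'].__name__}")
        record[field["name"]] = value
    record["state"] = model["workflow"][0]  # enter the first workflow state
    return record

claim = create_record(APP_MODEL, amount=42.50, description="Taxi")
print(claim)  # {'amount': 42.5, 'description': 'Taxi', 'state': 'submitted'}
```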



Mass: Very High

With many enterprises successfully adopting multiple LCAP solutions, vendor numbers growing, innovation continuing to advance, and few issues beyond vendor lock-in and pricing model transparency, this technology is approaching mainstream adoption. Such adoption is prevalent across multiple industries and impacts most business functions and markets, replacing existing capabilities, so much so that we predict that by 2024, well over half of midsize to large enterprises will have adopted LCAPs as a strategic application platform. LCAPs both democratize application development beyond central IT and enable increased automation of business services. Their multifunction support for data, user experience (some extending to multiexperience touchpoints), intuitive developer experience and integration makes them a potent best-of-breed application delivery tool for mainstream business use cases. They can entirely remove the need for high-control frameworks and platforms in some organizations.

Recommended Actions:

■ Emphasize that, for many use cases, LCAPs are lower-risk, require fewer coding skills and less training, and are faster to deploy than traditional third-generation-language styles of application development, while providing more flexibility and customization than SaaS alternatives. Develop a product strategy directed at citizen development, departmental application or enterprise application modernization use cases.

■ Mitigate architecture constraints by giving preference to LCAPs that provide for composition of external services, SaaS APIs or other packaged business capabilities, or that fit well with your existing integration strategy and technologies.

■ Help customers navigate the trade-offs of low-code application development and their fears of vendor lock-in (language, metadata and models are generally proprietary). The models supported in model-driven design usually focus on classic application components (database, form or page, and process). Also aim to support multicloud and hybrid cloud deployments with LCAPs that run on container architectures.

Recommended Reading:

■ Forecast Analysis: Low-Code Development Technologies

■ Magic Quadrant for Enterprise Low-Code Application Platforms



■ Quick Answer: What Is the Difference Between No-Code and Low-Code Development Tools?

■ Market Share Analysis: Application Infrastructure and Middleware Software, Worldwide, 2020

■ Emerging Technologies: High-Velocity Demands Accelerate Low-Code Application Platforms



1 to 3 Years
Digital Twins

Analysis by: Al Velosa

Description: A digital twin is a dynamic virtual representation of an entity such as an asset, person or process. It is developed to support business objectives. Digital twin elements include the model, data, unique one-to-one association and monitorability. The three taxonomy levels of digital twins are discrete, composite and organizational. These digital twin elements are built, used and shared in enabling technologies such as analytics software, IoT platforms or simulation tools.
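A minimal sketch, in Python, of how the elements above map onto a discrete digital twin — a unique one-to-one association (asset ID), data (state synchronized from telemetry), a model (a derived health metric) and monitorability; the asset, fields and thresholds are invented.

```python
# Minimal sketch of a discrete digital twin mapping the elements above.
# All field names and thresholds are illustrative.
class PumpTwin:
    def __init__(self, asset_id: str):
        self.asset_id = asset_id   # one-to-one link to the physical pump
        self.state: dict = {}      # latest telemetry (data)

    def ingest(self, telemetry: dict) -> None:
        """Synchronize the twin's state with readings from the asset."""
        self.state.update(telemetry)

    def health(self) -> float:
        """Model: a simple derived metric; real twins embed physics/ML models."""
        vibration = self.state.get("vibration_mm_s", 0.0)
        return max(0.0, 1.0 - vibration / 10.0)

    def needs_maintenance(self) -> bool:
        """Monitorability: expose a business-relevant signal."""
        return self.health() < 0.5

twin = PumpTwin(asset_id="pump-017")
twin.ingest({"vibration_mm_s": 6.2, "temperature_c": 71})
print(twin.asset_id, twin.health(), twin.needs_maintenance())
```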

Sample Vendors: Ansys, Arrayworks, AVEVA, Braincube, Cognite, Cosmo Tech, COVACSIS
Technologies, Esri, Flutura, GE Digital, Gematica, Hitachi, Microsoft Azure IoT, NTT Group,
Quidgest, ROOTCLOUD, ScaleOut Software, SEKAI, Siemens, Slingshot Simulations,
Thynkli and Tuya Smart

Range: 1 to 3 Years

Although enterprises are interested in digital twins, their understanding of the full potential of digital twin impact and opportunity remains immature from a business, technology and governance perspective. In part, this is due to the immaturity of digital twins themselves. This challenges product leaders to build strong value propositions and to invest in business and technology marketing and education, in addition to clear sales strategies.
Some TSPs are beginning to effectively link digital twin approaches and technologies to
well-defined business outcomes. However, most TSPs in this crowded vendor landscape
still lack good messaging or go-to-market strategies. Most vendors still overemphasize
technical capabilities while lacking industry-specific business solutions. Some software
product leaders are waking up to the business differentiation reality and pointing the
direction for the future by building business solutions and portfolios of digital twin
templates.

Software product leaders are starting to shift from considering digital twins as just an R&D area toward thinking of them as a parallel product arena with revenue potential. In part, this reflects a shift in product leaders’ understanding of digital twins’ life cycles and their implications for revenue strategy.



For example, digital twins in the short term enable platform and app sales — such
as the IoT or business process management (BPM) or analytics platform — as well as a
design and implementation revenue element. The longer-term revenue opportunity for
digital twins includes marketplace, managed services, contract renewal and digital twin
refurbishment opportunities. Most enterprise digital twins are custom business
engagements that can last a decade or two if the vendor shows clear value.

Product leader challenges include a lack of standards, deceptively challenging end-to-end solution integration requirements that increase project cost, unclear licensing for digital
twins’ intellectual property, and a lack of governance processes and budget pools at most
enterprises.

Mass: Very High

Interest in and demand for digital twins remains high. Gartner’s 2020 IoT survey data shows that 88% of enterprises implementing IoT projects had already deployed digital twins or planned to deploy them over the next 12 months. Note
that this does not imply that the entire enterprise is using digital twins, but rather that they
are using them in these IoT projects (see Survey Analysis: Companies Heavily Use Digital
Twins to Optimize Operations).

While digital twins are extensively being deployed in asset-intensive industries — such as
oil and gas, mining, and manufacturing — interest is increasing in other sectors.
Enterprises such as airports and real estate management organizations that have a
critical need to monitor citizens or employees for health and safety purposes, and conduct
COVID-19-related compliance reporting, are starting to use digital twins. A variety of
medical institutions are going beyond patient health records to develop digital twins of
patients. OEMs are investing in digital twins to drive digital transformation and
monetization strategies. This level of interest is also reflected in the rise of standards
organizations and consortia focused on digital twins, such as the Digital Twin Consortium
and the National Digital Twin programme (NDTp) at the Centre for Digital Built Britain
(CDBB).

Recommended Actions:

■ Incorporate digital twins into your product roadmap, especially if you deal with
asset-intensive industries or IoT-based solutions.

■ Build a strategy and a revenue map for how digital twins can contribute to your
short- and long-term revenue opportunities.



■ Develop a digital twin go-to-market ecosystem map to align key partners that
complement your strengths, including possibly partnering with horizontal digital twin
TSPs to fill offering gaps.

Recommended Reading:

■ Tool: 50-Plus Digital Twin and IoT Cost Optimization Examples

■ Strengthen 4 Elements for Successful Management and Governance of Digital Twins

■ Survey Analysis: Companies Heavily Use Digital Twins to Optimize Operations

■ What Should I Do to Ensure Digital Twin Success?

Hyperscale Edge Computing



Analysis by: Sid Nag

Description: Hyperscale edge computing (HEC) describes a distributed computing topology, managed and controlled through a hyperscale public cloud service, in which data storage and processing are placed close to the things or people that produce and/or consume that information. Drawing from the concepts of mesh networking and distributed computing, edge computing strives to keep traffic and processing local and off the center of the network. Edge balances an application’s latency and bandwidth requirements, allows for autonomous operation, and enables the placement of workloads and data to satisfy regulatory/security demands.
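As an illustration of the trade-off this topology manages, the following sketch decides workload placement from latency, bandwidth and autonomy requirements; the thresholds are invented, and real platforms make this decision through orchestration policy rather than application code.

```python
# Illustrative sketch of the placement decision hyperscale edge computing
# automates: keep latency-sensitive or bandwidth-heavy workloads at the
# edge and send the rest to a central cloud region. Thresholds are invented.
CLOUD_ROUND_TRIP_MS = 80   # assumed round trip to a central cloud region
BACKHAUL_LIMIT_MBPS = 50   # assumed cost-effective uplink capacity

def place_workload(latency_budget_ms: float, data_rate_mbps: float,
                   needs_local_autonomy: bool) -> str:
    if needs_local_autonomy or latency_budget_ms < CLOUD_ROUND_TRIP_MS:
        return "edge"   # a cloud round trip would blow the latency budget
    if data_rate_mbps > BACKHAUL_LIMIT_MBPS:
        return "edge"   # too expensive to ship the raw data upstream
    return "cloud"      # the centralized region is fine (and simpler to run)

print(place_workload(latency_budget_ms=20, data_rate_mbps=5, needs_local_autonomy=False))   # edge
print(place_workload(latency_budget_ms=500, data_rate_mbps=2, needs_local_autonomy=False))  # cloud
```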

Sample Vendors: Alibaba Cloud, Amazon Web Services (AWS), Google Cloud, IBM,
Microsoft Azure, Oracle, Tencent

Range: 1 to 3 Years

Gartner, Inc. | G00749318 Page 17 of 59

This research note is restricted to the personal use of [email protected].


We believe the range for HEC is one to three years because HEC has quickly become the decentralized complement to the largely centralized implementation of hyperscale public cloud. Edge computing solves many pressing issues, such as unacceptable latency and bandwidth requirements, given the massive increase in edge-located data. The edge computing topology addresses the specific needs of IoT, digital business and distributed IT solutions as a foundational element of next-generation applications (see Hype Cycle for Cloud Computing, 2021). HEC benefits from the centralized management and control of a hyperscale public cloud service.

Gartner has also confirmed that market interest in edge computing is growing rapidly. Unlike cloud-neutral edge computing solutions, the cloud-out approach has
the potential to greatly simplify cloud-edge integration. Gartner predicts that 20% of
installed edge computing platforms will be delivered and managed by hyperscale cloud
providers by 2023, compared with less than 1% in 2020 (see Predicts 2021: Cloud and
Edge Infrastructure).

Although it is still in an early stage, more organizations are interested in using the same
programming models, APIs and management systems as public cloud in edge computing
systems, particularly for infrastructure and platform layers. Therefore, we assess the range of hyperscale edge computing to be one to three years.

Mass: Very High

We believe the mass for HEC is very high because HEC complements the distributed cloud computing style in digital business use cases serving distinct markets, addressing latency, bandwidth, autonomy and privacy requirements that centralized cloud computing cannot meet well. Edge computing is expected to have a major impact on a wide range of markets beyond existing major use cases and verticals, such as IoT and retail/manufacturing. One large potential area in the future is 5G mobile networks, and many network carriers are forming alliances with cloud service providers to develop edge computing solutions (see Market Trends: How TSPs Are Preparing 5G Solutions With Cloud Edge Providers). Gartner expects most business value in edge computing to come from software and services rather than hardware infrastructure (see Leading the Edge: Gartner’s Initial Edge Hardware Infrastructure Forecast).

Recommended Actions:



■ Balance short-term opportunities in existing major use cases and vertical solutions against emerging use cases and vertical solutions, in which the vendor can keep its core competency while using other technologies, such as cloud, 5G and AI/ML, in a combinatorial manner.

■ Differentiate your edge computing software and service products through an ecosystem by partnering with 5G network carriers and edge infrastructure vendors.

■ Address customer needs that leverage the combination of cloud, edge and 5G
technologies to solve complex industry use cases, such as smart cities, gaming and
high-performance computing.

Recommended Reading:

■ Competitive Landscape: Hyperscale Edge Solution Providers

■ Leading the Edge: Gartner’s Initial Edge Hardware Infrastructure Forecast

Multimodal UI

Analysis by: Annette Jump

Description: Multimodal user interface (UI) is a high-level design model in which user and machine interactions can occur simultaneously via a combination of modalities, such as user-spoken or -written natural language and touch (on a screen). Data can be processed from various sources beyond text, including images, video, tables, maps, audio, gesture, motion, myoelectric signals, brain-computer interfaces and eye movement.

Sample Vendors: Amelia, Google Multitask Unified Model (MUM), Kore.ai, NVIDIA Riva

Range: 1 to 3 Years

Gartner, Inc. | G00749318 Page 19 of 59

This research note is restricted to the personal use of [email protected].


The one- to three-year range for multimodal UI is driven by the technology evolving from a niche, emerging field to enabling multiexperience interactions for users across various touchpoints on their digital journey. Multimodal UI is the next evolutionary stage of conversational UI (CUI) and can occur across enterprise applications, VAs, devices and IoT. Fusing vision, audio, voice and other inputs can support multiuser, multicontext conversations in various applications and dramatically change how humans perform complex tasks. The individual modalities enabling multimodal UI are quickly maturing, so multimodal UI adoption is expected to accelerate very quickly in the next 12 to 18 months. By 2025, multimodal interaction will be a standard VA feature, up from an adoption rate of less than 2% in 2021. This technology will be enabled by an amalgamation of various language technologies, computer vision, video, emotion AI and gesture recognition.
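One common way such fusion works is late fusion: each modality’s recognizer emits intent probabilities, and a confidence-weighted combination picks the final intent. The following sketch is illustrative only; the intents, weights and modalities are invented.

```python
# Illustrative sketch of late fusion across modalities: combine per-modality
# intent probabilities, weighted by each recognizer's confidence.
def fuse_intents(modality_outputs):
    """modality_outputs maps modality -> (intent probabilities, confidence)."""
    combined = {}
    for probs, confidence in modality_outputs.values():
        for intent, p in probs.items():
            combined[intent] = combined.get(intent, 0.0) + confidence * p
    return max(combined, key=combined.get)

final_intent = fuse_intents({
    "speech": ({"open_door": 0.6, "lights_on": 0.4}, 0.9),   # noisy room
    "gesture": ({"open_door": 0.9, "lights_on": 0.1}, 0.7),  # pointing at the door
})
print(final_intent)  # open_door
```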

Common use cases for multimodal UI include:

■ Multiperson interaction environments, like in a meeting room or in a car (when you need to recognize different speakers)

■ Remote support/assistance use cases in noisy environments, such as an oil rig or busy train station

■ VAs for autonomous vehicles

■ Support for human-machine interactions for people with disabilities via gaze
detection

Common challenges faced when using multimodal UI include:

■ Conversational accuracy

■ Ability to converge and process all or specific modalities’ data (like speech and voice,
but not audio)

■ Making different modalities work together with open ecosystem solutions (while
avoiding individual vendor lock-in solutions)

■ Multiturn and multi-intent recognition

■ Availability of context-relevant training data

■ The need for specific hardware sensors

■ Cultural inertia



Mass: Very High

The long-term impact of multimodal UI will be very high, as it will transform all types of
interactions between humans and machines, as well as enable more natural search and
assist capabilities. The flexibility of combining various interaction modes within
multimodal UI will enable the technology to be integrated into a wide range of enterprise
applications, advanced VAs, mobile apps and human-machine interfaces for myriad
devices, consumer electronics, IoT and experiences. The ultimate potential of multimodal UIs is therefore vast, transformative and broadly impactful.

The availability of frameworks from NVIDIA and Google will help accelerate and democratize the development of multimodal-UI-enabled experiences and applications across a broad spectrum of developers. Examples will include multiperson interactions in social venues or autonomous cars, and remote guidance based on voice authentication and visual feedback.

Advancements in multimodal UI will result in new capabilities replacing many existing capabilities, supporting the emergence of new technology providers and driving market consolidation among VA and natural language technology solution providers in the next two to three years. Multimodal UI will coexist as part of multiple UI interactions for many applications in the next three to five years.

Recommended Actions:

■ Enable more natural communications with your software, devices and IoT by
incorporating selected adjacent technologies, such as computer vision, video
support, emotion AI and computer-generated imagery (CGI).

■ Improve product differentiation and stickiness for your software products by


identifying where multimodal UI can provide better guidance and experience.

■ Aggregate information from various data sources, and improve the intelligence of
your VAs or risk losing competitiveness in the next 18 months.

Recommended Reading:

■ Emerging Technologies: Current and Future Capabilities of Advanced Virtual Assistants



■ Emerging Technology Analysis: Differentiate Your User Experience With Human-Machine Interfaces

■ Magic Quadrant for Multiexperience Development Platforms

Advanced Virtual Assistants



Analysis by: Annette Jump

Description: Advanced virtual assistants (VAs) assist people by processing human inputs
to execute tasks, deliver predictions and offer decisions. They are powered by a
combination of:

■ More advanced user interfaces (like 3D and multimodal)

■ Natural language processing (NLP; multi-intent recognition, syntactic- and semantic-based methods, neural real-time machine translation, and synthetic voices)

■ Semantic and deep learning techniques (such as deep neural networks [DNNs]),
enabling decision support and personalization

■ Contextual and domain-specific knowledge

In this manner, advanced VAs assist people with more humanlike multiturn conversations
and automate more complex tasks.

VxAs are a type of advanced VA designed to perform skilled, domain-specific tasks (in domains such as healthcare, banking, retail or legal). VxAs incorporate customizable, pretrained language
models (by task and industry) and integrate with enterprise applications and domain-
specific systems. This enables VxAs to automate more complex high-value tasks and
proactively engage with skilled professionals by offering some advisory capabilities — an
expert system.
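To make the multiturn behavior concrete, the following hypothetical sketch tracks dialog state across turns and executes a task only once every required slot is filled; the intent and slot extraction are trivial keyword stand-ins for real NLP.

```python
# Hypothetical sketch of multiturn dialog: the VA tracks state across turns
# and only acts once every required slot is filled. Keyword matching here
# is a stand-in for real intent recognition and entity extraction.
REQUIRED_SLOTS = {"transfer_money": ["amount", "recipient"]}

def handle_turn(state, user_utterance):
    if "transfer" in user_utterance:
        state["intent"] = "transfer_money"
    for word in user_utterance.split():
        if word.startswith("$"):
            state["amount"] = word
        elif word.istitle() and len(word) > 1:
            state["recipient"] = word
    missing = [slot for slot in REQUIRED_SLOTS.get(state.get("intent", ""), [])
               if slot not in state]
    if missing:
        return f"Which {missing[0]}?"            # ask a follow-up turn
    return f"Transferring {state['amount']} to {state['recipient']}."

state = {}
print(handle_turn(state, "I want to transfer money"))  # -> Which amount?
print(handle_turn(state, "$50 to Alice"))              # -> Transferring $50 to Alice.
```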

Sample Vendors and/or Products:

■ Virtual enterprise assistants: Amelia, Artificial Solutions, boost.ai, IBM Watson Assistant, Kore.ai, Omelia, OneReach.ai, Oracle Digital Assistant



■ Virtual customer assistants: Amelia, boost.ai, DAVI, Nuance, Omelia, Soul Machines,
yellow.ai

■ VxA: DAVI, Baidu’s Melody, Clinc’s Finie, Conversica, Paradox, SKAEL, Soul Machines

Range: 1 to 3 Years

Advanced VAs are one to three years away from early majority adoption because of complexities in advancing conversational language capabilities, developing domain knowledge and integrating with enterprise applications. VAs are already being adopted by many organizations to support customer- and employee-facing interactions, and the COVID-19 pandemic has accelerated this adoption and led to the emergence of new use cases. Many of these use cases are served by advanced VAs that offer more advanced conversational capabilities, hybrid intent recognition, integration with enterprise applications and multimodal support. These capabilities have enabled VAs to develop specialized, domain-specific skills, giving rise to VxAs. This enables VxAs to deliver a higher level of intelligence and automation and higher containment rates, as well as provide proactive outreach and some end-user advisory capabilities.

VxAs support both business and consumer business value outcomes. Examples of business-domain-specific VxAs will be virtual sales assistants or virtual IT assistants, while examples of consumer-domain-specific VxAs will be virtual sales assistants or virtual brand ambassadors.

Advanced VAs are starting to be adopted to support customer-facing use cases. However, enterprise-facing adoption has significantly increased in importance and occurrence in the last 12 months. These applications tend to be more bleeding-edge, and their capabilities are more disruptive than those of customer-facing applications. For example, VAs for sales can provide
significant efficiency and revenue generation benefits. Fraud prevention and voice
monitoring is emerging as an advanced use case for call center virtual agents, helping to
further automate various customer interactions and deliver business value around
operational efficiency and cost savings.



The advancement of future capabilities of advanced VAs will be enabled by amalgamation with adjacent technologies and will center on domain-specific intelligence, support for hyperautomation initiatives and the development of multimodal interfaces for VAs. By
2025, advancements in VAs will automate up to 75% of call center agent tasks, up from
30% in 2021. Advanced VAs will also play advisory roles for knowledge workers but will
automate less than 10% of tasks in the next three years. Those developments will propel
advanced VAs into every sphere of consumer lives, business interactions and operations.

Mass: Very High

Advanced VAs have very high mass because they will be adopted by organizations across
many verticals. Based on Gartner client inquiries, the mind share for advanced VAs
increased by 33% in 2020, with advanced VA and language technologies delivering
business value across various industries. Finance and the communications, media and services sectors benefited the most from VA solutions. In the last 12 months, the adoption of VAs by retail has also dramatically increased. Business value outcomes for retail are concentrated on customer satisfaction and operational efficiency, enabling revenue growth as well. Telecommunications companies are expanding the business value of VAs for
sales and marketing enablement. Other industries are exploring VA business outcomes
with experimental approaches in education, healthcare and manufacturing. Future
business opportunities for advanced VAs will be around virtual learning, virtual
recruitment, virtual shopping and virtual healthcare advisors.

Advanced VAs also have the potential to transform the nature of how employees interact
with enterprise applications via conversational front ends and with advanced VAs
identifying patterns in relevant business data, providing insights and alerts/notifications
based on real-time changes. This will improve employee productivity, enhance consumer
experience, and increase engagement with IoT and devices. Common challenges faced by organizations in adopting advanced VA solutions are a lack of domain knowledge capabilities, integration issues with relevant enterprise applications and data stores, as well as organizational acceptance issues and overhyped or disappointed expectations.

Recommended Actions:

■ Explore advanced VA technology for software solutions by introducing voice interfaces and domain-specific advanced VA capabilities for use cases where they provide significant user value.



■ Expand advisory and proactive capabilities of your software by leveraging advanced
VAs and developing prebuilt integrations with relevant enterprise applications for
faster adoption.

■ Develop a technology roadmap and partner strategy by incorporating differentiated capabilities enabled by advanced VA technologies into solutions, but be aware of imminent market consolidation.

■ Train partners to support more vertical-specific deployments by enabling them to scale their solution after initial entry with a customer.

Recommended Reading:

■ Emerging Technologies: Top Use Cases for Customer-Facing Advanced Virtual Assistants

■ Emerging Technologies: Top Use Cases for Advanced Virtual Assistants in Enterprise Operations

■ Emerging Technologies: Top Business Value Patterns in Advanced Virtual Assistant Adoption

■ Emerging Technologies: Vendor Differentiation Patterns in Virtual Assistant Technologies

■ Emerging Technologies Venture Capital Growth Insights: Natural Language Technologies



3 to 6 Years
IoT Platforms

Analysis by: Eric Goodness

Description: An IoT platform is software that enables development, deployment and management of solutions that connect to and capture data from IoT endpoints to drive improved business decisions. Functional capabilities include IoT edge device management, integration tools and management, data management, analytics, application enablement and management, and security.

IoT platforms may be deployed on-premises, as a cloud-based IoT platform as a service (PaaS), or as a hybrid of edge software and cloud-based IoT PaaS.
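The following sketch illustrates two of the capabilities listed above — device management (a registry that gates ingestion) and data management (normalizing telemetry before analytics); device IDs, fields and units are invented.

```python
# Illustrative sketch of two core IoT platform capabilities: a device
# registry (management/security gate) and telemetry normalization (data
# management) before analytics. Names and units are invented.
DEVICE_REGISTRY = {
    "sensor-42": {"type": "temperature", "unit": "fahrenheit", "site": "plant-1"},
}

def ingest(device_id, payload):
    device = DEVICE_REGISTRY.get(device_id)
    if device is None:
        raise KeyError(f"Unregistered device: {device_id}")  # reject unknowns
    value = payload["value"]
    if device["unit"] == "fahrenheit":  # normalize to a canonical unit
        value = (value - 32) * 5 / 9
    return {"device_id": device_id, "site": device["site"],
            "celsius": round(value, 2), "ts": payload["ts"]}

print(ingest("sensor-42", {"value": 98.6, "ts": "2021-11-15T12:00:00Z"}))
# {'device_id': 'sensor-42', 'site': 'plant-1', 'celsius': 37.0, 'ts': ...}
```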

Sample Vendors: Amazon, Ayla Networks, GE Digital, Hitachi Vantara, Huawei, Microsoft,
myDevices, Particle, PTC and Software AG

Range: 3 to 6 Years

Enterprise adoption remains relatively strong as businesses add IoT capabilities to their physical plant and assets and IoT-enable their finished goods and services. However, continued vendor hype, along with culture, schedule and security concerns, will likely push mainstream adoption out three to six years. Additionally, the speed of adoption will vary across the consumer, commercial and industrial verticals, with consumer and commercial adoption reaching the mainstream in three and five years, respectively.

The IoT platform market is inhibited by the crowded marketplace of vendors and a lack of
investment by service providers to create robust, competitive service practices in order to
plan and build a broad continuum of platforms. Additionally, most IoT platform providers
are not profitable, which has slowed the pace of reinvestment and innovation in the
market. Although previously IoT platforms were the principal or lead products for many
vendors, they are now the technology that underpins the implementation. They have given
way to IoT-enabled applications and solutions as the new center of value.

Mass: Very High



IoT has broad appeal across all sectors in terms of value derived from connecting and
analyzing assets and processes. While consumer IoT remains very visible, the economic
impact of commercial and industrial enterprise IoT deployments has proven to be the
larger market. This is because enterprises use IoT for cost optimization and process
improvement and to augment and replace the functions of OT systems.

What has distinguished the IoT platform market over the past few years is the impact of
non-IT, nontraditional buying centers that drive increasing demand for IoT solutions. This
trend will increase as IoT becomes more entwined with digital business. As providers combine cloud and traditional analytics with innovative AI/ML techniques, the investments required to be competitive are rising.

Recommended Actions:

■ Establish a go-to-market focus by determining which IoT market segment — consumer, commercial, industrial — is best aligned with the common use cases and outcomes supported by your IoT catalog.

■ Extend the IoT platform into digital business initiatives by developing value-added
IoT applications and technology alliances to expand the impact of outcomes.

Recommended Reading:

■ Emerging Technologies: IoT Platforms for Digital Optimization and Transformation

■ Forecast: Enterprise and Automotive IoT Platforms, Worldwide, 2019-2025

■ Competitive Landscape: IoT Platform Vendors

Smart Spaces

Analysis by: Eric Goodness and Danielle Casey

Description: A smart space is a physical or digital environment in which humans and technology-enabled systems interact in increasingly open, connected, coordinated and intelligent ecosystems. The design patterns to create smart spaces are referred to with various names, including “smart city,” “digital workspaces,” “smart venues” and “ambient intelligence.”



Sample Vendors: Adappt, Budderfly, CBRE, GoSpace AI, ICONICS, Smarten Spaces,
SmartSpace Software, Spacewell, Verdigris

Range: 3 to 6 Years

Smart spaces have advanced closer to early majority adoption within the three- to six-year
range in the last 12 months. COVID-19 has accelerated market adoption of smart spaces,
as worker safety and social distancing capabilities have become de facto standards
within this emerging market. Based on observed investment, development and marketing
of new solutions by technology providers, Gartner believes the market is entering a period
of highly competitive offerings and accelerated delivery. Opportunities are increasing to
drive more connected, coordinated and intelligent solutions across target environments.
This is the result of smart spaces offering combinatorial value spanning legacy building
management systems, IoT, computer vision, NLP, edge AI and broader deep learning
techniques.

Common use cases for smart spaces include preventive maintenance for building
infrastructure, precision agriculture solutions for animal husbandry, and automated tolls
and billing in public and private spaces.

Common challenges faced when creating smart spaces include the technical debt and
costs for the integration of AI with operational technologies (for example, traffic
management and building management).

Mass: Very High

Smart spaces have enjoyed early adoption from vertical sectors such as commercial real
estate and public-sector multifamily housing. However, the potential mass is very high, as
this emerging market offers broad, pansector appeal wherever people and mobile traffic
require observation and management.

A major catalyst to the emergence of smart spaces is the requirement to refresh legacy
solutions such as building and traffic management systems. Current systems are ill-
equipped to integrate large volumes of sensor data and offer corresponding analytics.
Additionally, the emergence of new classes of sensors (such as cameras and natural
language inputs) is changing the day-to-day monitoring and management of spaces and
the breadth and depth of potential outcomes and value to owners and occupiers alike.
Together, legacy solution upgrades and broad sensor adoption deliver significant
disruption potential.



The growth in availability of IoT and AI will create more flexible and autonomous
coordination among various IT and legacy operational technology (OT) systems. This will
optimize system operations (like building management) at enterprises that have relied on
OTs, which are typically closed and isolated from one another. Beyond system
optimizations, smart spaces are changing how people interact with one another and influence decision support systems within various spaces (e.g., buildings, factories and venues). IoT platforms leveraging strong AI capabilities promise to disintermediate legacy OT-centric markets, displacing multiple billions of dollars of market value in favor of new, innovative products and services.

Recommended Actions:

■ Determine which smart space solutions to develop by aligning new investments with
legacy market sector coverage.

■ Qualify and identify specific areas (such as worker spaces, physical plants, customer
engagement and experience) where AI can add material value.

Recommended Reading:

■ Competitive Landscape: IoT-Enabled Smart Building Management Platforms

■ Infographic: Artificial Intelligence Use-Case Prism for Smart Cities

Graph Technologies

Analysis by: Alys Woodward

Description: The term “graph technologies” refers to graph data management and
analytics techniques, which enable the exploration of relationships between entities such
as organizations, people or transactions. Analyzing relationship data can require storing and processing large volumes of heterogeneous data — workloads to which relational databases are not well-suited.

Graph analytics consists of models that determine the “connectedness” across data
points. Graph analytics is typically portrayed via multicontext visualizations for business
users.

Graph DBMSs store data elements and their relationships as first-class objects, optimizing for the connections among elements. Most graph DBMSs use basic graph theory and are suitable for multiple types of interactions, ranging from simple node and edge traversal and triple pattern matching for transactional uses, to complex multihop queries, reasoning and inference, and algorithms for analytical workloads.
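
As a minimal illustration of the node and edge traversal these systems optimize for, the following Python sketch stores relationships in an in-memory adjacency list and answers a multihop reachability query. The entities and relationships are hypothetical, and a production graph DBMS adds persistence, indexing and query languages far beyond this.

# Toy in-memory graph: entities as nodes, relationships as edges.
from collections import deque

edges = {
    "Alice": ["AcmeCorp"],
    "AcmeCorp": ["Txn-42"],
    "Txn-42": ["Bob"],
    "Bob": [],
}

def within_hops(start, max_hops):
    """Breadth-first traversal returning every node reachable from
    `start` in at most `max_hops` edge traversals."""
    seen, frontier, reachable = {start}, deque([(start, 0)]), []
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for neighbor in edges.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                reachable.append(neighbor)
                frontier.append((neighbor, depth + 1))
    return reachable

print(within_hops("Alice", 2))  # -> ['AcmeCorp', 'Txn-42']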

Sample Vendors: Amazon, Cambridge Semantics, DataStax, MarkLogic, Neo4j, Redis (RedisGraph), Stardog, TIBCO Software (Graph Database), TigerGraph

Range: 3 to 6 Years

Graph is of great interest to end users, and inquiry volume is growing rapidly. However,
due to the wide range of possible applications for graph, it will take three to six years to
reach early majority adoption across the total addressable market. A significant
proportion of graph technologies will be sold as integrated components of existing data
platforms as multimodel data platforms add graph capabilities to support additional use
cases. These components will be either developed in-house or integrated via resale
agreements from specialist vendors. There is a healthy market of startups developing
graph data capabilities. Graph technologies will also be a strong component of metadata
management systems, thus supporting wider use of data and analytics across the board.

Despite the rise in graph analytics solutions that make it possible to query graph data using SQL, there is still demand for new skills related to graph-specific knowledge, which currently restricts growth in adoption. The new skills required include knowledge of and experience with the Resource Description Framework (RDF), property graphs, SPARQL Protocol and RDF Query Language (SPARQL), as well as executing graph analysis in Python and R.

Mass: High

Gartner inquiry volume and interest in graphs rose by 280% from October 2018 through October 2020. Graph technologies are showing increased demand globally,
focused on specific industries. Established AI techniques (such as Bayesian networks) are
increasing the power of knowledge graphs and the usefulness of graph analytics through
further nuance in representational power. Graph databases are ideal for storing,
manipulating and analyzing the widely varied perspectives in the graph model due to their
graph-specific processing languages and capabilities, scalability, and computational
power.

Healthcare management, clinical research and healthcare supply chain use cases have dramatically increased.

Recommended Actions:

■ Incorporate graph technologies into your solutions where evaluating relationships between entities adds to their business value.

■ Embed graph analytics capabilities from other providers rather than building the
capability yourself if you are not an expert in this technology. Consider pure players
along with established database providers with graph capabilities.

Recommended Reading:

■ Market Guide for Graph Database Management Solutions

■ Hype Cycle for Data Management, 2021

AI-Augmented Software Engineering

Analysis by: Mark Driver and Arun Batchu

Description: AI-augmented software engineering (AIASE) is the use of AI technologies, such as ML and NLP, to aid software engineering teams in creating and delivering applications faster, at lower effort and cost, and with higher quality. AIASE commonly integrates with an engineer’s existing tools to provide them with real-time intelligent feedback and suggestions.

Unlike previous AI technologies, which were brittle and static, today’s AI technologies are general-purpose and adaptive. They are transformative, just as steam and electric technologies were in their era. However, unlike steam and electric technologies, today’s AI technologies increase in capability in proportion to the amount of data and computing capacity available to them.

Sample Vendors: Amazon, Microsoft, OutSystems, Diffblue, Functionize, IBM, Kite, Mendix, ScopeMaster

Range: 3 to 6 Years

While they have emerged among early adopters today, AIASE technologies are expected to reach mainstream adoption within three to six years. Propelled by the rapid growth of software code, the data generated by digital applications and cloud computing, these AI systems will gain capabilities that will transform the software development life cycle. We expect the technology to pass through three stages. In the first and current stage, the AI helps as an apprentice, suggesting code fragments. In the next stage, the AI becomes smart enough to act as a peer to the developer. The third stage is the lead-expert stage, in which the AI generates entire applications, with the designer, developer and tester tweaking as necessary.

Mass: High

We assess the mass impact of AIASE to be high in coming years because various AIASE innovations will emerge across the entirety of the software development life cycle; in some areas, this will happen faster and in more depth than in others. For example, today, several AIASE innovations are emerging that show strong potential to disrupt modern application development. AIASE is enabling creative business problem-solving by automating boilerplate software engineering tasks. It is increasing developer velocity by producing highly relevant code and library recommendations in a fraction of the time it would take otherwise. It is augmenting quality and testing engineers by allowing tests to self-heal and by automatically creating tests. In addition, market-leading and innovative intelligent process management platforms (such as intelligent business process management suites), business rule management systems, and decision management suites incorporate AI capabilities to support decision management and integrate with predictive analytics technologies.

In particular, the use of AI to build other AI models is increasing the ability of enterprise
employees to create models that add value to applications and data in the business. We
see this in the popularity of low-code tools that aim to increase productivity by reducing or
avoiding the need for specialist “code” by scarce data scientists and developers. For
example, in Microsoft’s announcement of the integration of the AI model GPT-3 into its
Power Apps low-code development tools, we see the convergence of AIASE with other
developer productivity improvements. Finally, development and quality assurance teams are leveraging ML combined with NLP to provide a set of services based on large-scale source code analysis. “ML on source code” innovations have applications in several areas, including intelligent code completion, automated peer review, automated coding convention compliance and source code conversions.
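
As a heavily simplified sketch of the intelligent code completion idea, the following Python snippet builds a bigram frequency model over a tiny code corpus and suggests a likely next token. Real AIASE tools use large trained models; the corpus and tokens here are illustrative assumptions.

# Toy "ML on source code" sketch: suggest the next token from bigram counts.
from collections import Counter, defaultdict

corpus = [
    "for item in items :",
    "for key in mapping :",
    "for row in rows :",
]

follows = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1

def suggest(prev_token):
    """Return the token most frequently seen after `prev_token`."""
    candidates = follows.get(prev_token)
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest("in"))  # -> 'items' (ties broken by first occurrence)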

Recommended Actions:

■ Explore the AIASE features available today that are applicable to your products/services, focusing first on developer “quality of life” enhancements as a starting point.

■ Account for significant changes and advances to the breadth and depth of AIASE
capabilities over the next three to six years and proactively plan to make
product/service “course corrections” to align with customer expectations.

■ Incorporate AIASE into a broader software hyperautomation development strategy to fully exploit the potential of the technology with the most long-term industry impact.

■ Build a roadmap for advancing AI-augmented product development strategies over time, leveraging both team practices (such as test-driven development and behavior-driven development techniques) and cloud services (automated ML).

Recommended Reading:

■ Innovation Insight for AI-Augmented Development

■ Hype Cycle for Software Engineering, 2021

■ Infographic: Artificial Intelligence Use-Case Prism for Software Development and Testing

■ Emerging Technologies: Critical Insights Into AI-Augmented Software Development

Synthetic Data

Analysis by: Alys Woodward

Description: Synthetic data is a class of data that is artificially generated, that is, not
obtained from direct observations of the real world. Data can be generated using different
methods such as statistically rigorous sampling from real data, semantic approaches,
generative adversarial networks, or by creating simulation scenarios where models and
processes interact to create completely new datasets of events. Synthetic data is one
solution to the problem of a lack of sufficient data to train AI models. It also enables the anonymization of personally identifiable information for sharing and analysis.
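
As a minimal sketch of the simplest generation method mentioned above, statistical sampling from real data, the following Python snippet fits per-column Gaussians to a small "real" dataset and draws synthetic records from them. The columns, values and normality assumption are illustrative; production tools model joint distributions (for example, via generative adversarial networks) to preserve correlations.

# Minimal synthetic-data sketch: fit per-column Gaussians, then sample.
import random
from statistics import mean, stdev

real_ages = [34, 41, 29, 50, 38, 45, 33, 47]
real_incomes = [52000, 61000, 48000, 75000, 58000, 69000, 51000, 72000]

def synthesize(column, n):
    """Draw n synthetic values from a Gaussian fitted to the column."""
    mu, sigma = mean(column), stdev(column)
    return [random.gauss(mu, sigma) for _ in range(n)]

# Note: sampling columns independently discards cross-column correlations.
for age, income in zip(synthesize(real_ages, 5), synthesize(real_incomes, 5)):
    print(f"age={age:5.1f}  income={income:9.0f}")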

Sample Vendors: Accelario, AI.Reverie, Hazy, MOSTLY AI, Neuromation, Tonic

Range: 3 to 6 Years

Synthetic data will ultimately apply to a wide range of data types and across different
usage styles — data annotation, data anonymization, data enhancement and data
generation. Because it’s currently early days, there is a wide opportunity, but it will take
three to six years to achieve early majority adoption. To meet increasing demand for
synthetic data for natural language automation training, especially for chatbots and
speech applications, new and existing vendors are bringing offerings to market. This is
expanding the vendor landscape and driving synthetic data adoption. Wider use of simulation techniques is also accelerating the growth of synthetic data.

Synthetic data can be generated for a wide range of data types. While row/record,
image/video, text and speech applications are common, R&D labs are expanding the
concept of synthetic data to graphs. Synthetically generated graphs will resemble but not
overlap the original. As organizations begin to use graph technology more, we expect this
method to mature and drive adoption.

In some situations, synthetic data will always be a lower-quality substitute for real data,
but in other areas, synthetic data will be a critical component of delivering high-value,
high-quality AI models. Synthetic data can add domain knowledge to AI models, complete
incomplete datasets, enable testing of AI models to improve robustness, and solve issues
of model portfolios like portfolio optimization and sequencing of models.

It is fairly early days for synthetic data, and it still has significant flaws. It can have bias
problems, miss natural anomalies, be complicated to develop or may not contribute any
new information to existing, real-world data. Buyers are still confused over when and how
to use the technology with other data pipeline tools. As the number of techniques in data and model pipelines increases, buyers struggle to determine which techniques to use to
achieve their aims (e.g., synthetic data, federated learning, differential privacy) and how to
use them together.

Mass: High

■ Early applications of synthetic data focused on automotive and computer vision use cases, but now, synthetic data increasingly supports use cases around data monetization, external analytics support, platform evaluation and the development of test data. In healthcare and finance, buyers’ interest is growing as synthetic data can be used to preserve privacy in AI training data.

Recommended Actions:

■ Use synthetic data to widen the application of your solutions to additional use cases
or to increase the business value of your solutions.

■ Position synthetic data as a differentiator while the market understanding is maturing.

■ Allay customer concerns about quality, accuracy and bias of synthetic data by
applying the data specifically to the individual use case.

Recommended Reading:

■ Maverick* Research: Forget About Your Real Data — Synthetic Data Is the Future of
AI

■ Top Trends in Data and Analytics for 2021: From Big to Small and Wide Data

■ Will 2020 Be the Year of Synthetic Data?

Collaborative Ecosystem Product Development

Analysis by: Balaji Abbabatulla

Description: Collaborative ecosystem product development (CEPD) involves partnering with several, sometimes competing, vendors to develop new solutions. Such collaboration is strategic and aims to impact business outcomes over a period of time by delivering a series of solutions. CEPD enables product managers to both develop and deliver innovative solutions cost-effectively and frequently.

Sample Vendors: Microsoft, Oracle, IBM, SAP, Salesforce, Blue Yonder, Adobe, Genesys

Range: 3 to 6 Years

We estimate that the early majority will use CEPD within a three- to six-year period
because of the widespread support by application software vendors.

The top five application software vendors worldwide — Microsoft, Oracle, SAP, IBM and Salesforce — support product partnerships. Leading vendors in fast-growing application
markets have also been using product development ecosystems to develop and deliver
innovative solutions. However, current product ecosystem partnerships are largely
opportunistic, tactical relationships. A lack of robust governance models and transparent
commercial frameworks discourages vendors from using CEPD to develop strategic
solutions.

Changing buyer expectations about rapid impact on business outcomes will require
vendors to develop solutions using capabilities beyond their internal resources. This will
lead to an increase in the strategic solutions developed using CEPD, resulting in adoption
by the early majority within three to six years.

Mass: High

We estimate that a number of industries, markets and business functions will be impacted
by CEPD solutions replacing existing product development methodologies over time.

The underlying architecture of a CEPD solution enables a set of modules delivered by one combination of vendors to be easily replaced by another combination. This improves the extensibility of solutions across multiple industries by replacing the vendor combination with one that is more appropriate for the new industry.

Enterprise software product leaders can improve business agility by quickly packaging the
most appropriate product capabilities that are required to support a customer’s new
business strategy. The solution architecture includes several product capabilities required
to fulfill a specific business outcome. The solution is designed to create a higher impact
than the sum of the modules contributed by the ecosystem participants. The pace and impact of CEPD solutions will lead them to replace and transform existing product development methodologies.
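
A minimal sketch of the module-replacement idea: each capability sits behind a shared interface, so one vendor's module can be swapped for another's without changing the composed solution. All class and method names here are hypothetical.

# Illustrative CEPD sketch: vendor modules behind a common interface.
from typing import Protocol

class PaymentsModule(Protocol):
    def charge(self, amount: float) -> str: ...

class VendorAPayments:
    def charge(self, amount: float) -> str:
        return f"VendorA charged {amount:.2f}"

class VendorBPayments:
    def charge(self, amount: float) -> str:
        return f"VendorB charged {amount:.2f}"

def compose_solution(payments: PaymentsModule) -> str:
    # The solution depends only on the interface, not on the vendor.
    return payments.charge(99.00)

print(compose_solution(VendorAPayments()))  # one vendor combination
print(compose_solution(VendorBPayments()))  # swapped for another industry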

Recommended Actions:

■ Use CEPD to develop strategic solutions as an alternative to traditional product


development methodologies to deliver innovative solutions frequently.

■ Improve trust among partners by defining a robust governance model, and align
distribution of customer revenue to participants based on their roles and
responsibilities.

Recommended Reading:

■ Rebound Quickly From the Current Downturn by Using Collaborative Ecosystem Product Development

■ Market Insight: How Product Managers Can Leverage Application Software Provider
Ecosystems to Deliver Rapid Innovation

■ Emerging Technology Analysis: Application Ecosystems Accelerate Software Product Innovation and Value

■ Market Insight: How to Prioritize Your Product Roadmap Features by Using a Value
Map to Gain Competitive Advantage

Homomorphic Encryption

Analysis by: Shawn Eftink

Description: Homomorphic encryption (HE) is a cryptographic method that enables third parties to process encrypted data and return an encrypted result to the data owner, while having no knowledge of the data or the results.

There are three categories of HE technologies:

■ Partially homomorphic encryption (PHE) allows only one operation on the encrypted data (i.e., either addition or multiplication, but not both); see the toy sketch after this list.

■ Somewhat homomorphic encryption (SWHE) allows a limited number of both addition and multiplication operations on the data.

■ Fully homomorphic encryption (FHE) allows an unlimited number of both addition and multiplication operations.
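
The multiplicative case of PHE can be demonstrated with unpadded "textbook" RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields the encryption of the product. The tiny primes below make this a toy for illustration only; unpadded RSA is insecure and is not how commercial HE products work.

# Toy PHE demo: E(a) * E(b) mod n decrypts to a * b.
p, q = 61, 53
n = p * q                          # RSA modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 6
product_of_ciphertexts = (encrypt(a) * encrypt(b)) % n
print(decrypt(product_of_ciphertexts))  # -> 42, computed on encrypted data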

In practice today, FHE is not fast enough for most business implementations. As such,
PHE is the most practical implementation, but we have seen the emergence of FHE being
offered by some vendors for specific use cases in the healthcare, financial and public sectors.
Furthermore, HE protects data in use but does not address data at rest or in transit, which
must be addressed separately.

Wisely, most HE pioneers understand that the future of this technology is tied to the
increasing role of open innovation investments, which runs counter to the secrecy and silo
mentality of traditional corporate research labs. A shortlist of prominent HE open-source
projects includes:

■ HElib

■ Homomorphic Encryption for Arithmetic of Approximate Numbers (HEAAN)

■ Microsoft SEAL

■ PALISADE

■ Torus-FHE (TFHE)

Sample Vendors: Cryptolab, DataFleets, Duality, Enveil, IBM, Inpher, Microsoft

Range: 3 to 6 Years

Homomorphic encryption is three to six years out because several factors are inhibiting adoption in the near term. Performance issues, lack of standardization and complexity are expected to slow progress to the early majority stage.

Performance: While massive improvements have been made in recent years, general FHE-based processing remains 1,000 to 1,000,000 times slower than equivalent plain-text efforts, although some commercial use cases are achieving overheads of only 10 to 100 times.
Consequently, the computational overhead remains too heavy for FHE in most general
computing scenarios. However, as we discussed previously, there are plenty of
opportunities to exploit FHE potential, even in discrete use cases. Nevertheless, much work
remains to optimize FHE software infrastructures to broaden the scope of practical
applications. Moreover, FHE will benefit from continued advances in hardware
performance in years to come.

Lack of standardization: Like any early technology, HE efforts remain diverse and fragmented. The lack of a standard inhibits the consistency that technology and service providers (TSPs) and potential customers could rally around to create economies of scope and scale. For example, the HE community must continue to work to simplify and standardize APIs and software development kits (SDKs).

Too difficult for typical IT developers: The overwhelming majority of historical HE research has come from elite corporate and academic cryptographic experts. As a result, most HE libraries
remain too difficult for mainstream IT providers and customers to leverage without
intensive training. To succeed among mainstream end-user and TSP organizations, HE
technology must be abstracted and simplified by incorporating it into familiar developer
languages, frameworks and platforms. Some of the aforementioned sample vendors are
working to deliver SDKs to address this challenge. Moreover, the IT industry must work
with HE experts to train application developers on ways to incorporate HE within real-
world practical solutions.

Mass: High

Gartner rates HE as high. Gartner believes that HE will be a core technology for many
future SaaS offerings to ensure the protection and privacy of data between third-party
data processing and analytics providers. The dominant use case will be for employing HE
to eliminate the current need to exchange and store data between business partners, third-
party analysis firms or other extended data analytics solutions. Privacy and data security
mandates continue to emerge globally, with examples such as the EU’s General Data
Protection Regulation (GDPR), PCI standards, the California Consumer Privacy Act,
Australia’s Privacy Act and the Data Security Law of the People’s Republic of China. All
these mandates are expected to oblige providers and customers to evaluate their use and
exchange of data between third-party entities. Where possible, data-sharing arrangements will benefit from technologies such as HE by avoiding the direct sharing of that data.

Recommended Actions:

■ Assess the customer and regulatory benefits of using HE as an alternative to other quantum-safe and privacy-preserving computation techniques, and apply it in conjunction with a broader data security governance strategy.

■ Evaluate the speed and performance trade-offs between FHE, SWHE and PHE
compared to the mathematical operations required for the desired outcome.

■ Leverage opportunities for encrypted data in use by integrating HE into encryption, messaging and third-party data analytics services, considering the implications of using HE in those solutions.

■ Differentiate your solutions against competitors by working with product marketing managers on articulating the core benefits of HE for customer privacy.

■ Position and message HE solutions aligned to verticals and use cases.

Recommended Reading:

■ Emerging Technologies: Homomorphic Encryption Technology Spending, 2020 Survey Trends

■ Emerging Technologies: Adoption Growth Insights for Network Detection and Response

6 to 8 Years
Self-Supervised Learning

Analysis by: Danielle Casey and Pieter den Hamer

Description: Self-supervised learning is an approach to machine learning in which labeled data is created from the data itself, without having to rely on external (human) supervisors that provide labels or feedback. This is achieved by masking elements in the available data (e.g., a part of an image, a sensor reading in a time series, a frame in a video or a word in a sentence) and then training a model to “predict” the missing element. Thus, the model learns how information relates to other information, for example, how situations typically precede or follow one another and which words often go together.
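
A minimal sketch of the masking idea, using a numeric series: the middle reading of each window is hidden and becomes a free training label, so no human annotation is needed. The sine-wave "sensor" data and linear model are illustrative assumptions; real systems use deep networks and far richer masking tasks.

# Self-supervised sketch: labels are derived from the data itself.
import numpy as np

series = np.sin(np.linspace(0, 6 * np.pi, 200))  # unlabeled "sensor" data

X, y = [], []
for i in range(1, len(series) - 1):
    X.append([series[i - 1], series[i + 1]])  # visible context
    y.append(series[i])                       # masked element = free label
X, y = np.array(X), np.array(y)

weights, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit a linear predictor
print(f"predicted {X[50] @ weights:.4f}, actual {y[50]:.4f}")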

Sample Vendors: craftworks, Facebook, Google, Microsoft

Range: 6 to 8 Years

Self-supervised learning has recently emerged from academia and is currently only
practiced by a limited number of innovative AI companies. It is worth considering when
available data volumes are limited or when the benefits of the ML solution do not
outweigh the costs of manual labeling or annotating of data. However, self-supervised
learning currently depends on the creativity of highly experienced ML experts to design a
self-supervised learning task, based on masking available data, allowing a model to build
up knowledge and representations that are meaningful to the business problem at hand.
Tool support is still virtually absent, making implementation a knowledge-intensive and
low-level coding exercise.

Mass: High

Self-supervised learning will have a high impact because it aims to overcome one of the
biggest drawbacks of supervised learning: the need for large amounts of labeled data.
This is not just a practical problem in many organizations with limited relevant data or
where manual labeling is prohibitively expensive. It is also a fundamental problem in
current AI, in which the learning of even simple tasks requires a huge amount of data, time
and energy. In self-supervised learning, labels can be generated from relatively limited
data. Self-supervised learning is an important enabler for a next main phase in AI,
overcoming the limitations and going beyond the current dominance of supervised
learning.

Self-supervised learning enables models to represent concepts and their spatial, temporal
or other relations in a particular domain. These models can be fine-tuned using “transfer
learning” for one or more specific tasks with practical relevance. In addition to supporting
specialized model development, it may also shorten training time and improve the
robustness and accuracy of models. This is because self-supervised learning gains
general knowledge through abstractions and then uses this knowledge as a foundation
for new learning tasks. Thus, self-supervised learning is continuously and incrementally
building up knowledge.

The potential impact and benefits of self-supervised learning are very large, as it will extend the applicability of machine learning to organizations that do not have large datasets available. Its relevance is most prominent in AI applications that
typically rely on unlabeled data, such as computer vision, natural language processing,
IoT analytics/continuous intelligence and robotics.

Recommended Actions:

■ Develop a self-supervised learning solution that focuses on building models for a specific data type, such as video and images, voice, or text.

■ Develop use cases by identifying industries reliant on large, labeled datasets, as well
as highly regulated industries where using existing datasets may be untenable.

Recommended Reading:

■ Hype Cycle for Data Science and Machine Learning, 2021

■ Tech Providers 2025: Why Small Data Is the Future of AI

■ Data Science and Machine Learning Trends You Can’t Ignore

AR Cloud

Analysis by: Tuong Nguyen

Description: AR Cloud enables the unification of physical and digital worlds by delivering persistent, collaborative and contextual digital content overlaid on people, objects and locations, providing people with information and services directly tied to every aspect of their physical surroundings. For example, an individual can receive fare, route and schedule information about public transit based on their context (personal status, geolocation, calendar appointment, travel preferences, etc.), simply by “looking at” a bus or bus station with their phone, tablet or head-mounted display (HMD). Further information can be crowdsourced, such as users noting how often the bus has been late in recent weeks.
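
A toy sketch of the underlying building block: persistent content anchored to physical locations and retrieved by a device's position. The coarse latitude/longitude bucketing below stands in for real spatial indexing, and all locations and content strings are hypothetical.

# Toy AR Cloud sketch: publish and retrieve content anchored to a place.
anchors = {}  # (lat_bucket, lon_bucket) -> list of content entries

def bucket(lat, lon, precision=3):
    return (round(lat, precision), round(lon, precision))

def publish(lat, lon, content):
    anchors.setdefault(bucket(lat, lon), []).append(content)

def look_at(lat, lon):
    """What a device 'sees' when pointed at this location."""
    return anchors.get(bucket(lat, lon), [])

# A transit agency and a user crowdsource content for one bus stop.
publish(40.758, -73.985, "Route M5: next bus in 4 minutes")
publish(40.758, -73.985, "User note: bus ran late 3 of the last 5 days")
print(look_at(40.758, -73.985))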

Sample Vendors: Apple, Facebook, Google, Inpixon, Microsoft, 8th Wall, Mapbox, Niantic,
SLAMcore, Magic Leap

Range: 6 to 8 Years

AR Cloud will take six to eight years to reach the early majority because it requires
numerous, underlying elements, such as edge networking, high bandwidth and low-latency
communications, standardized tools and content types for publishing into the AR Cloud,
management and delivery of content, and interoperability to ensure seamless and
ubiquitous — rather than siloed — experiences. All these elements will need to be created
and operated in concert to enable this shift in how we organize and interact with digital
content. Some of these infrastructure requirements will be ushered in by the arrival of low-latency wireless networking (5G will serve as an enabling technology), while others are still being developed (spatial registries, graph technologies). Furthermore, demand for spatial
computing experiences enabled by AR Cloud is weak because users have yet to realize, let
alone understand, the potential for these experiences. Meanwhile, vendors are still
discovering the value and future applications.

Mass: Very High

AR Cloud has a very high mass because it will transform how people interact with the world around them. AR Cloud will provide a digital abstraction layer for people, places and things, and will span business and consumer applications and impact every industry regardless of geography. This will enable new experiences and, in turn, new business models and ways to interact with and monetize the physical world. The AR Cloud will change the way that enterprises think of physical assets, how they interact with customers and the associated risks. As a new experience type, AR Cloud is expected to introduce new security risks and violate privacy in yet-to-be-discovered ways.

Recommended Actions:

■ Evaluate potential areas of impact of AR Cloud on business outcomes by creating a roadmap that extends the functionality of current offerings (for example, visual configuration tools, remote support offerings and navigation applications).
■ Assess AR Cloud security and privacy impacts by establishing security hierarchies for data capture and protection.

■ Align compliance initiatives to specific objects by creating digital ethics guidelines and identifying regulatory frameworks such as GDPR.

Recommended Reading:

■ Emerging Technologies: Tech Innovators in Augmented Reality — Augmentation and Spatial Interaction Layer

■ Emerging Technologies: Tech Innovators in Augmented Reality — AR Cloud

■ Emerging Technologies: Tech Innovators in Augmented Reality — Spatial Web

■ Emerging Technologies: AR Cloud Will Create a Multilayered Crowdsourced Canvas of the World

6G

Analysis by: Kosei Takiishi

Description: 6G is the generic name for the next-generation cellular wireless that is expected to be next in line after 5G-Advanced. In 2021, the features and timetable for 6G are not clearly defined, although it is expected to be commercialized in 2028 by some CSP pioneers. 6G will enhance recent 5G capabilities and will be able to provide a higher theoretical peak data rate (e.g., 100 Gbps to 1 Tbps), lower latency (e.g., 0.1 msec latency), and higher connection density and energy efficiency (e.g., 10 times more efficient).

Sample Vendors: Ericsson, Huawei, Nokia, NTT DOCOMO, SK Telecom

Range: 6 to 8 Years

Gartner rates the range of 6G as six to eight years because:

■ Although there is no clear 6G definition and the telecom industry is just trying to add
one generation every 10 years (same as before), many technologies and concepts
from 6G research will find their way into various wireless systems (cellular and
otherwise). 5G networks will also be modernized and democratized to become software-based networks in response to client demands, and 6G will benefit from this.

■ Unlike 4G and current 5G, 6G will become a sort of national network, supported or impacted by countries and national policies. Some leading countries have started their initiatives: In August 2020, the South Korean government announced that the country plans to launch a pilot project for 6G in 2026. In October 2020, the Alliance for Telecommunications Industry Solutions (ATIS) in the U.S. launched the Next G Alliance to advance North American leadership in 6G. In November 2020, the South Korean Ministry of Science and ICT hosted the first 6G Global 2020 in Seoul. In April 2021, the U.S. and Japan agreed to jointly invest $4.5 billion for the development of next-generation communications known as 6G.

Mass: High

The impact of 6G is expected to be high because:

■ The United Nations’ 2030 Agenda, including its 17 Sustainable Development Goals, is heavily impacted by mobile technologies, including 6G. Many of these social issues and ambitious goals will drive technologies, such as edge computing and AI, to become part of 5G or 6G cellular deployments. Design and research for 6G have already begun among many industry associations and academic and commercial organizations. 5G can solve some of these challenges, but 6G is essential for continuous growth and problem solving in the 2030s.

While the telecommunications industry has formulated its own specifications and standardization (such as 2G, 3G, 4G and 5G), it is more open to collaborating with vertical industries on 6G, aiming to realize agile innovation and industry digitalization.

Recommended Actions:

■ Avoid deploying 6G commercially until 2028. However, there could be deliverables of the 6G research projects that emerge in other wireless areas before 6G. For example, THz wireless systems could well emerge before 2030.

■ Support your regulators and government in creating new national policies for 5G evolution and 6G.

■ Watch for fragmentation, as geopolitics influences the standards-setting process to
promote domestic vendors and patent holders.

Recommended Reading:

■ Predicts 2021: CSP Technology and Operations Strategy

■ Emerging Technologies: Communications Technology Spending — 2021 Survey Trends

AI-Generated Composite Applications

Analysis by: Jim Hare

Description: AI-generated composite applications reflect the future generation of the composable business. Composable business is a concept where leaders can quickly build new business capabilities by assembling digital assets in an organization that is architected for real-time adaptability and resilience in the face of uncertainty. Today, composite applications are custom-built by human application developers. In the future, composite applications will automatically be built and deployed using AI, enabling more dynamic, personalized experiences seamlessly across channels. AI is context-aware and can detect a specific need based on a user action or a business situation, and it can automatically build and orchestrate the application using packaged business capabilities (PBCs) as the building blocks. The application may exist permanently or temporarily until no longer needed.
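
Because no such product exists yet, the following Python sketch is purely hypothetical: PBCs are tagged by capability, and a naive rule-based "composer" stands in for the context-aware AI, assembling a temporary application from the PBCs matching a detected need.

# Hypothetical sketch of composing an application from tagged PBCs.
pbc_catalog = {
    "quote":    {"tags": {"pricing", "sales"}, "run": lambda: "quote built"},
    "checkout": {"tags": {"payment", "sales"}, "run": lambda: "checkout done"},
    "support":  {"tags": {"service"},          "run": lambda: "ticket opened"},
}

def detect_need(user_action):
    # Placeholder for context-aware AI detecting a business situation.
    return {"buy": {"sales"}, "complain": {"service"}}[user_action]

def compose_and_run(user_action):
    needed = detect_need(user_action)
    app = [pbc for pbc in pbc_catalog.values() if pbc["tags"] & needed]
    return [pbc["run"]() for pbc in app]  # temporary, on-demand application

print(compose_and_run("buy"))       # -> ['quote built', 'checkout done']
print(compose_and_run("complain"))  # -> ['ticket opened']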

Sample Vendors: No vendors so far

Range: 6 to 8 Years

The technology to enable applications to be composed from building blocks exists in the
form of APIs. But most business applications today are static and monolithic and need to
be decomposed into PBCs to achieve real reusability — both from inside and outside
organizations. The ability to self-integrate is beginning to emerge from some integration
vendors for specific vendor application suites, but no vendor has yet combined all the
elements successfully. The lack of standards and PBC cataloging capabilities are
additional inhibitors. Until these challenges are addressed, AI-generated composite applications that can be automatically created and deployed are still some way off.

Mass: High

AI-generated composite applications will have an impact on nearly every industry and
business function, especially in consumer-focused industry verticals. Organizations need
to deliver innovation and adapt more quickly to respond to the accelerating pace of
business change and market dynamics. Customers and employees increasingly expect
more contextualized and personalized application experiences. To deliver on digital
transformation, organizations will need applications that can be assembled, reassembled
and extended. This will require a seismic shift in organizations deploying applications to
build business capabilities and application experiences. AI-generated composite
applications will help address this shift by making the composable experience more
scalable and dynamic than manually composing applications using humans.

Gartner is not aware of any vendor offering AI-generated composite applications in the
market. Vendors that start planning offerings that move from static, monolithic
applications to packaged business capabilities and use AI to dynamically compose
applications will have the first-mover advantage.

Recommended Actions:

■ Refactor your software into discrete packaged business components and APIs that
make it faster to create new applications and user experiences.

■ Assess application user needs and what packaged capabilities are required to build
and orchestrate application services using AI.

■ Look for use cases where significant time savings or contextualized experiences can
be delivered to users by automatically generating applications on-demand.

■ Incorporate AI capabilities that can determine user needs and context to automatically build and deploy a custom application. Use AI to determine when the user no longer needs the application and remove user access rights.

Recommended Reading:

■ Future of Applications: Delivering the Composable Enterprise

■ Innovation Insight for Composable Modularity of Packaged Business Capabilities

■ How to Design Enterprise Applications That Are Composable by Default

Digital Ethics

Analysis by: Elizabeth Kim

Description: Digital ethics comprises the systems of values and moral principles for the
conduct of electronic interactions among people, organizations and things. Key areas
where digital ethics should be applied include social and mobile technologies, social
interactions, cloud and security, data and analytics and privacy, autonomous technologies
and freedom, AI/robotization and the value of work, and predictive algorithms and
decision making.

Range: 6 to 8 Years

Early majority adoption is still distant; digital ethics is estimated to be 5% to 20% of the way to the early majority target. There are indications that digital ethics has moved beyond a mere concept to a practice that
organizations are implementing. Over the past few years, a growing number of
organizations have declared their AI ethics principles, frameworks and guidelines, and
some organizations already have digital ethics practices.

Digital ethics remains a growing concern for individuals, organizations and governments.
Consumers are increasingly aware that their personal information is valuable and are
frustrated by the lack of transparency and continuing misuses and breaches. Board
members and other executives are sharing concerns about the unintended consequences
that the innovative use of technology can have. Government commissions and industry
consortia are actively developing guidelines for ethical use of AI. Examples include the
Ethical Framework for Artificial Intelligence In Colombia, a new AI regulation in the EU, and
the U.S. FTC’s Using Artificial Intelligence and Algorithms.

Regardless, digital ethics still requires societal, economic, political and strategic debate;
new types of governance; and new processes and technologies to control new
technologies. Despite the hype around digital ethics, many organizations are still ignoring
it. Additionally, there is still a lack of clear guidelines and regulations organizations need
to comply with around the ethical use of innovations such as IoT, 3D printing, cloud,
mobile, social and AI. There is also a lack of guidance from providers of these emerging
technologies to their customers. These opposing forces are why the majority adoption of
digital ethics is still six to eight years away.

Mass: High

The impact mass is high because digital ethics will augment, not displace, existing
technology, but it will impact every industry. While there are tools in the market for
compliance and ethics, digital ethics is mostly a business practice discipline — therefore, the impact on existing technology markets is indirect. Rather, emerging technologies
should evolve to address digital ethics either:

■ Natively (for example, AI and the use of ML models to make autonomous decisions
is driving the need for explainable AI); or

■ Through professional services (for the implementation of frameworks, guidelines or best practices around digital ethics) delivered by the provider of emerging technologies or by third-party consultants or system integrators

At the same time, the reach of digital ethics is very broad because it is relevant to many (if not all) industries. It is applicable to practically all organizations and consumers using emerging technologies, so technology providers need to consider ethical impacts during product design and development for transparency and adherence to design principles. Additionally, the probability that unintended consequences will occur is high, as the use of technology creates distance between morals and actions.

Recommended Actions:

■ Develop a repeatable practice to identify and assess digital ethics issues arising
from adopting emerging technologies by leveraging Tool: Assess How You Are Doing
With Your Digital Ethics.

■ Define a digital ethics code of conduct that reflects the organization’s values related
to the safety, privacy and commitment to transparency linked to product
development and the services provided, as well as creating accountability with an
obligation to report a violation without retaliation.

■ Communicate digital ethics to senior stakeholders (such as the board of directors) as a source of business value instead of simply a regulatory compliance issue by linking digital ethics to concrete business performance metrics. This will drive better awareness at the board and executive level.

Recommended Reading:

■ Tool: How Technology Teams Can Be Trained in Digital Ethics

■ Tool: Assess How You Are Doing With Your Digital Ethics

Generative AI

Analysis by: Svetlana Sicular

Description: Generative AI refers to AI techniques that learn a representation of artifacts from the data and use it to generate brand-new, completely original artifacts that preserve a likeness to the original data. Generative AI can produce totally novel media content (including text, image, video and audio), synthetic data and models of physical objects. Generative models can also be used in drug discovery or for the inverse design of materials having specific properties.
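
As a deliberately simple illustration of "learn a representation, then generate novel artifacts," the following Python sketch fits a bigram Markov chain to a snippet of text and samples a new sentence that resembles, but does not copy, the source. Production generative AI relies on far more capable models, such as transformers and GANs.

# Toy generative model: sample new text from learned word transitions.
import random
from collections import defaultdict

text = ("the model learns the data and the model generates new data "
        "that resembles the original data")
words = text.split()

transitions = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    transitions[prev].append(nxt)

random.seed(7)
word, output = "the", ["the"]
for _ in range(10):
    if word not in transitions:  # stop at a word with no known successor
        break
    word = random.choice(transitions[word])
    output.append(word)
print(" ".join(output))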

Sample Vendors: Adobe Sensei, Bitext, Dessa, Diveplane, DeepMind, IBM, Landing AI,
MOSTLY AI, OpenAI, Phrasee, Rosebud AI, Spectrm, Tanjo, Textio

Range: 6 to 8 Years

Generative AI is an emerging technology that has only begun to be exploited commercially. Most use cases have less than 1% of target market adoption, with some exceptions. The field of generative AI will progress rapidly in both scientific discovery and technology commercialization. While it is currently as futuristic as it gets, we already see successes in a wide range of applications, from creating new materials to preserving data privacy. The fast progress of transformers capable of generating novel artifacts is top of mind in the AI community. Notably, GPT-3 by OpenAI and AlphaFold 2 by Google’s DeepMind, both of which use transformers, recently dominated AI news.

While generative AI is becoming more accessible, many generative techniques are new,
and more are coming to the market. Reproducibility of generative AI results will be
challenging in the near term. Fragmented and specialized technology offerings (such as
generating only images or only text) currently require a combination of tools rather than a
single solution. The compute resources for training large generative models are costly and not affordable for most vendors. Generative adversarial networks (GANs), variational
autoencoders, autoregressive models and zero/one/few-shot learning have been rapidly
improving generative modeling while reducing training data requirements.

Safety concerns and negative uses of generative AI, such as deepfakes, might slow down adoption in some industries. Technologies that provide AI trust and transparency will become an important complement to generative AI solutions.

Mass: High

The mass is high because generative AI methods are being explored and proving themselves in a wide range of industries, including the life sciences, healthcare, manufacturing, material science, media, entertainment, automotive, aerospace, defense and energy industries. For example, a growing number of life sciences companies are examining
generative AI to accelerate drug development. The interest in generative AI for creative
work is increasing in marketing, design, architecture and creative media content. A
combination of generative techniques, like audio-to-video generation, inspires new creative
and business applications.

Synthetic data that is produced using generative AI techniques supports the accuracy and
speed of AI delivery. Synthetic data draws customer and partner attention by helping them
augment scarce data, mitigate bias or preserve data privacy. Gartner expects synthetic
data to be available as part of most AI platforms. We predict that by 2024, 60% of the
data used for the development of AI and analytics solutions will be synthetically
generated. Generative AI will disrupt software coding. When combined with existing
development automation techniques, it has the potential to automate up to 70% of the
work done by programmers. Machine learning and NLP platforms are introducing
generative AI capabilities, along with transfer learning for reusability of generative models,
making them accessible to customers.

Recommended Actions:

■ Track the progress of generative AI techniques, as we expect their rapid adoption.

■ Determine the impact of generative AI on specific industries and domains by assessing the potential use cases, and prioritize specialized industry and domain solutions when incorporating generative AI in your products.

■ Determine how synthetically generated data could benefit your existing product
offerings, for example, to accelerate the AI development cycle, lessen regulatory
concerns and lower the cost of data acquisition. Generative AI has limitations —
ensure you do not overuse synthetic data, for example when you need a real “ground
truth.”

Recommended Reading:

■ Innovation Insight for Generative AI

■ Predicts 2021: Artificial Intelligence and Its Impact on People and Society

■ How to Benefit From Creative AI — Assisted and Generative Content Creation

■ Emerging Technologies: Critical Insights Into AI-Augmented Software Development

Emerging Technologies or Trends Watchlist
Emerging Technologies or Trends Outside Eight Years but Meriting Awareness
Metaverse
Analysis by: Tuong Nguyen, Adrian Lee, Anushree Verma


Description: The metaverse is a persistent and immersive digital environment of independent, yet interconnected networks that will use yet-to-be-determined protocols for communications. It enables persistent, decentralized, collaborative, interoperable digital content that intersects with the physical world’s real-time, spatially oriented and indexed content. Access is currently device-dependent and includes experiences spanning the immersive (augmented, mixed and virtual reality) spectrum.

The metaverse is in the early stage of its evolution. Emergent metaverse adoption is
strictly limited to a niche, small segment of early adopters — for example, gaming, virtual
collaboration, navigation apps, social media, and fungible and nonfungible tokens. It is an
example of a combinatorial trend in which a number of individually important, discrete
and independently evolving trends and technologies interact with one another to give rise
to another trend. Solutions currently being positioned as the metaverse are potentially metaverse-capable or -compatible, but do not yet meet the definition of a metaverse. Early solutions may contain one or more of the required attributes (persistence, decentralization, collaborativeness and interoperability), but not all of them. The
upside is that investment is strong. This includes technologies to enable spatial
orientation and indexing, as well as persistent and decentralized content, 5G, distributed
ledger, IoT, and DNNs and AI applications.

While the benefits and opportunities from the metaverse are not yet immediately realizable, emergent metaverse solutions give an indicator of potential use cases. We expect the transition toward the metaverse to be as significant as the one from analog to digital. In physical world interactions, the metaverse will supply real-time, interesting, actionable information across scenarios. Examples include wayfinding for both enterprise and consumer use, guidance for an industrial repair task, interactive demonstrations at a museum, dynamic information overlays for knowledge workers, and augmented social networking filters. In digital interactions, this includes the ability to traverse different virtual realms — for example, “teleporting” from an office meeting into a social gathering, into a video game, or into an underwater tour — all within a single interface/application. Although metaverse experiences will not completely replace current digital interactions (via apps, websites, and so on), they are likely to displace many of them, while opening up new types of interactions and business models optimized for these new use cases.

Sample Vendors: No vendors so far

Recommended Reading:

■ Emerging Technologies: Tech Innovators in Augmented Reality — AR Cloud

■ Emerging Technologies: Tech Innovators in Augmented Reality — Spatial Web

■ Emerging Technologies: AR Cloud Will Create a Multilayered Crowdsourced Canvas of the World

Photonic Computing
Analysis by: Anushree Verma


Description: Photonic computing uses photons for data transmission, instead of the
electrons used in traditional digital logic. These computing systems will utilize lasers to
generate the photons and combine electronics, silicon photonics and algorithms to build a
compute platform.

Current optical switches are 1,000 to 10,000 times the size of silicon transistors. This is not a problem for simple circuits, but it is challenging for complex systems. In addition, optical channels and switches do not scale with Moore’s Law, which will limit the development of photonic computing systems. Consequently, we expect photonic computing will take more than eight years to reach an early majority.

While photonic computing is at a very early stage of development and still unproven on a
large commercial scale, these systems promise a significant increase in processing
bandwidth. At the same time, they are very energy-efficient when compared to today’s
high-performance data center systems. Initial adoption will be in use cases with high throughput requirements, such as deep-learning AI workloads like image and video processing, natural language understanding, and robotics.

Sample Vendors: Lightmatter, CogniFiber, Lightelligence, Luminous Computing

Recommended Reading:

■ Expert Insight Video: Invest in Silicon Photonics Now

■ Cool Vendors in Silicon Photonics

Quantum Computing
Analysis by: Alan Priestley and Martin Reynolds


Description: Quantum computing is a type of nonclassical computing that operates on the quantum state of subatomic particles. The particles represent information as elements denoted as quantum bits (qubits). A qubit can represent all possible values of its two dimensions (superposition) until read. Qubits can be linked with other qubits, a property known as entanglement. Quantum algorithms manipulate linked qubits in their entangled state, enabling future system designs that can potentially address a set of use cases that classical systems cannot handle.
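
Superposition and entanglement can be illustrated with a minimal state-vector simulation: applying a Hadamard gate and then a CNOT to two qubits produces a Bell state, in which the two qubits' measured values are perfectly correlated. This classical simulation only demonstrates the math; it is not quantum hardware.

# Minimal two-qubit simulation: H on qubit 1, then CNOT, gives a Bell state.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1.0, 0.0, 0.0, 0.0])  # both qubits start in |00>
state = np.kron(H, I) @ state           # superpose the first qubit
state = CNOT @ state                    # entangle the pair

print(np.round(state, 3))     # amplitudes of |00>, |01>, |10>, |11>
print(np.round(state**2, 3))  # measurement probabilities: 0.5, 0, 0, 0.5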

Quantum computers are not general-purpose computers. Rather, they are accelerators for
a limited number of algorithms with the potential of orders of magnitude of speedup over
conventional computers. They do, however, require a complex hybrid ecosystem of
physical technologies, often involving extremely low temperatures, vacuum environments
and lasers, combined with high-performance general-purpose computer systems to
control and manage the quantum elements. Current quantum computing systems are
good enough to demonstrate the potential of quantum computing. However, quantum
systems face challenges in scale, noise and connectivity that require as-yet unknown
breakthroughs to offer business value above and beyond what classical approaches can
deliver today. There are several different approaches to the design of quantum computers,
and there are significant differences in capabilities and algorithms enabled across the
approaches.

Today, it is not clear what benefits and opportunities quantum computing will bring to
business or when quantum computing will deliver business value (see Emerging
Technologies: Quantum Computing Planning for Product Leaders). However, quantum
computing could have a significant impact, especially in areas such as optimization,
machine learning, cryptography, drug discovery, organic chemistry and the finance
industry. While the disruptive impact of quantum computing is more than a decade away,
product leaders at technology and service providers must start planning to engage with
quantum computing developments in the next six to eight years. This will be necessary to
be prepared to intercept the technology when it becomes commercially viable.

Sample Vendors: IBM, D-Wave, Google, Alibaba Cloud, Amazon, Honeywell, IonQ,
Microsoft, QC Ware, QinetiQ, Rigetti Computing, Zapata Computing, PsiQuantum, Xanadu,
1QBit

Recommended Reading:

■ Emerging Technologies: Quantum Computing Planning for Product Leaders

■ Predicts 2021: Disruptive Potential During the Next Decade of Quantum Computing

■ Strategy Guide to Navigating the Quantum Computing Hype

How to Use the Impact Radar
This Emerging Technologies and Trends Impact Radar analyzes and illustrates two
aspects of impact: when we expect a technology or trend to have a significant impact on
the market (range), and how big an impact it will have on relevant markets (mass). Each
emerging technology or trend profile analyzes both aspects. See Note 1 for a complete
description of our approach to this research.

In this document, profiles are organized by range, starting with the center and moving to
the outer rings of the radar. The center of the impact radar represents when the emerging
technology will cross the chasm from early adopter to early majority. The rings represent
one to three years, three to six years and six to eight years from crossing the chasm.

The objective of this research is to guide product leaders on how emerging technologies
and trends are evolving and impacting areas of interest. Providers can leverage this
knowledge to determine which technologies or trends are most important to the success
of their business and when it makes sense to advance their products and services by
investing in them. Technology vendors should use this Emerging Technologies and Trends
Impact Radar to:

1. Identify emerging technologies and trends that are important to the success of their
business

2. Determine when to act upon those trends and technologies based on business
strategy

3. Begin formulating a response to the technology or trend’s evolution

Note 1: Research and Methodology for the Emerging Technologies and Trends Impact Radar
The Emerging Technologies and Trends Impact Radar content analyzes and illustrates
two significant aspects of impact:

1. When we expect it to have a significant impact on the market (specifically, range)

2. How big an impact it will have on relevant markets (namely, mass)

Analysts evaluate range and mass independently and score them each on a 1 to 5 Likert-
type scale:



■ For range, the score determines in which radar ring the emerging technology or trend
will appear.

■ For mass, the score determines the size of the radar point.

In the Emerging Technologies and Trends Impact Radar, the range estimates the distance
(in years) that the technology, technique or trend is from crossing over from early adopter
status to early majority adoption. This indicates that the technology is prepared for and
progressing toward mass adoption. So at its core, range is an estimation of the rate at
which successful customer implementations will accelerate. That acceleration is scored
on a five-point scale with one being very distant (beyond eight years) and five being very
near (within a year). Each of the five scoring points corresponds to a ring of the Emerging
Technologies and Trends Impact Radar graphic (see Figure 1). Those Emerging
Technologies and Trends with a score of one (beyond eight years) do not qualify for
inclusion on the radar. When formulating scores for range, Gartner analysts consider
many factors, including:

■ The volume of current successful implementations

■ The rate of new successful implementations

■ The number of implementations required to move from early adopter to early majority

■ The growth of the vendor community

■ The growth in venture investment

Mass in the Emerging Technologies and Trends Impact Radar estimates how substantial
an impact the technology or trend will have on existing products and markets. Mass is
also scored on a five-point scale — with one being very low impact and five being very
high impact. Emerging Technologies and Trends with a score of one are not included in
the radar. When evaluating mass, Gartner analysts examine the breadth of impact across
existing products (specifically, sectors affected) and the extent of the disruption to
existing product capabilities. It should be noted that an emerging technology or trend may
be expressed in different positions on different Emerging Technologies and Trends Impact
Radars. This occurs when the maturity of Emerging Technologies and Trends varies
based on the scope of radar coverage.
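
As an illustration only, the scoring rules above can be expressed in a short sketch. The ring labels below mirror Figure 1, but the function and mapping are an assumption made for illustration, not actual Gartner tooling.

```python
# Hypothetical sketch of the scoring rules described above (not Gartner tooling):
# the range score picks the radar ring, the mass score sets the point size,
# and a score of one on either axis excludes the item from the radar.
RANGE_RINGS = {
    5: "Now (within 1 year)",  # very near
    4: "1 to 3 Years",
    3: "3 to 6 Years",
    2: "6 to 8 Years",
    # 1 (beyond eight years) does not qualify for inclusion
}

def radar_position(range_score: int, mass_score: int):
    """Return (ring_label, point_size), or None if the item is excluded."""
    if range_score not in RANGE_RINGS or mass_score <= 1:
        return None  # beyond eight years, or very low impact
    return RANGE_RINGS[range_score], mass_score

print(radar_position(4, 3))  # ('1 to 3 Years', 3)
print(radar_position(1, 5))  # None: beyond eight years is off the radar
```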

Document Revision History



Emerging Technologies and Trends Impact Radar: 2021 - 21 November 2020

Emerging Technologies and Trends Impact Radar - 5 November 2019

Recommended by the Authors


Some documents may not be available as part of your current Gartner subscription.

Emerging Technologies and Trends Impact Radar: Autonomous Vehicles


Emerging Technologies and Trends Impact Radar: Artificial Intelligence, 2021

Emerging Technologies and Trends Impact Radar: Cloud Computing

Emerging Technologies and Trends Impact Radar: Communications

Emerging Technologies and Trends Impact Radar: Display Technologies


Emerging Technologies and Trends Impact Radar: Drones and Mobile Robots

Emerging Technologies and Trends Impact Radar: Enterprise Software

Emerging Technologies and Trends Impact Radar: Electrified Vehicles

Emerging Technologies and Trends Impact Radar: Internet of Things

Emerging Technologies and Trends Impact Radar: Personal Technologies


Emerging Technologies and Trends Impact Radar: Security

Emerging Technologies and Trends Impact Radar: Semiconductor and Electronics Technologies

Emerging Technologies and Trends Impact Radar: Semiconductor Manufacturing Technology

Emerging Technologies and Trends Impact Radar: Sensing Technologies and Applications



© 2022 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of
Gartner, Inc. and its affiliates. This publication may not be reproduced or distributed in any form
without Gartner's prior written permission. It consists of the opinions of Gartner's research
organization, which should not be construed as statements of fact. While the information contained in
this publication has been obtained from sources believed to be reliable, Gartner disclaims all warranties
as to the accuracy, completeness or adequacy of such information. Although Gartner research may
address legal and financial issues, Gartner does not provide legal or investment advice and its research
should not be construed or used as such. Your access and use of this publication are governed by
Gartner’s Usage Policy. Gartner prides itself on its reputation for independence and objectivity. Its
research is produced independently by its research organization without input or influence from any
third party. For further information, see "Guiding Principles on Independence and Objectivity."



Figure 1: Emerging Technologies and Trends Impact Radar: 2022 (summary of rings)

■ Now: Passwordless Authentication; Edge AI; LCAPs

■ 1 to 3 Years: Digital Twins; Hyperscale Edge Computing; Multimodal UI; Advanced
Virtual Assistants; Synthetic Data; Collaborative Ecosystem Product Development;
Homomorphic Encryption

■ 3 to 6 Years: IoT Platforms; Smart Spaces; Graph Technologies; AI-Augmented
Software Engineering; Digital Ethics; Generative AI

■ 6 to 8 Years: Self-Supervised Learning; AR Cloud; 6G; AI-Generated Composite
Applications
