
AI and Cities

Risks, Applications
and Governance
Acknowledgements

Lead Authors
Shin Koseki (Université de Montréal)
Shazade Jameson (Tilburg University, Mila)
Golnoosh Farnadi (HEC, Mila)
David Rolnick (McGill, Mila)
Catherine Régis (Université de Montréal, Mila)
Jean-Louis Denis (Université de Montréal)

Authors
Amanda Leal, Cecile de Bezenac, Giulia Occhini,
Hugo Lefebvre, Jose Gallego-Posada, Khaoula
Chehbouni, Maryam Molamohammadi, Raesetje
Sefala, Rebecca Salganik, Safiya Yahaya, Shoghig
Téhinian

UN-Habitat Reference Group


Cerin Kizhakkethottam, Christelle Lahoud, Herman
Jean Pienaar, Ivan Thung, Judith Owigar, Katja
Schaefer, Kerstin Sommer, Leandry Nkuidje, Melissa
Permezel, Paula Pennanen-Rebeiro-Hargrave,
Pontus Westerberg, Remy Sietchiping, Salma Yousry

Mila Core Team


Benjamin Prud’homme (co-lead)
Rose Landry (co-lead), Anna Jahn (reviewer)

UN-Habitat Core Team


Abdinassir Shale Sagar (lead)
Melissa Permezel (reviewer)

Editing
Andrea Zanin, Daly-Dallaire, Services de traduction

Design, Layout and Illustrations


Simon L’Archevêque
Contents

Foreword from UN-Habitat’s executive director p. 5
Foreword from Valérie Pisano, President and CEO of Mila p. 6

1. INTRODUCTION p. 7
1.1. How to read this report p. 9
1.2. Guiding frameworks p. 10

2. AI AND CITIES p. 11
2.1. What is AI? p. 12
2.1.1. What is responsible AI? p. 12
2.1.2. What are different types of AI? p. 12
2.1.3. Deep learning applications p. 14
2.1.4. Opportunities for AI in cities p. 14
2.1.5. Limitations of AI p. 15
2.2. Governance p. 16
2.2.1. Governance overview p. 16
2.2.2. Why governance matters p. 16
2.2.3. AI in the city p. 17
2.2.4. Key challenges for AI governance p. 17

3. APPLICATIONS p. 19
3.1. Applications overview p. 20
3.2. Energy p. 21
3.2.1. Forecasting electricity generation p. 21
3.2.2. Predictive maintenance of existing infrastructure p. 22
3.2.3. Accelerating experimentation p. 22
3.2.4. Transmission and distribution p. 23
3.2.5. System optimisation and control p. 23
3.2.6. Forecasting supply and demand p. 24
3.2.7. Predictive maintenance for the transmission and distribution network p. 25
3.2.8. Energy use and efficiency p. 25
3.3. Mobility p. 27
3.3.1. Public transport p. 27
3.3.2. Private transport p. 29
3.3.3. Transportation infrastructure p. 30
3.4. Public safety p. 32
3.4.1. Environmental safety p. 32
3.4.2. Population risks p. 34
3.5. Water and waste management p. 35
3.5.1. Water p. 35
3.5.2. Waste p. 36
3.6. Healthcare p. 38
3.6.1. Health promotion and disease prevention p. 38
3.6.2. Health system and healthcare organisation p. 39
3.6.3. Public health surveillance of disease outbreaks p. 40
3.7. Urban planning p. 42
3.7.1. Planning and management p. 42
3.7.2. Neighbourhoods p. 44
3.7.3. Buildings, public spaces and infrastructure p. 46
3.8. City governance p. 49
3.8.1. Enhanced government efficiency p. 49
3.8.2. Engagement with the public p. 49
3.8.3. Informed policymaking p. 51

4. RISK FRAMEWORK p. 52
4.1. Risks overview p. 53
4.2. The AI life cycle p. 54
4.2.1. Phase 1: Framing p. 56
4.2.2. Phase 2: Design p. 65
4.2.3. Phase 3: Implementation p. 70
4.2.4. Phase 4: Deployment p. 85
4.2.5. Phase 5: Maintenance p. 90

5. URBAN AI STRATEGY p. 96
5.1. Urban AI strategy overview p. 97
5.2. Start from the local context p. 98
5.3. Prioritise capacity-building p. 101
5.4. Develop innovative regulatory tools for AI p. 102
5.5. Foster cross-sectoral collaborations p. 105
5.6. Build horizontal integration p. 107

6. CONCLUSION p. 109

7. REFERENCES p. 111

FOREWORD FROM UN-HABITAT’S EXECUTIVE DIRECTOR

Artificial intelligence has already started to have an impact on urban settings at an unprecedented pace, with sophisticated solutions being deployed in the streets, at airports and in other city installations. In fact, cities are becoming experimental sites for new forms of artificial intelligence and automation technologies that are applied across a wide variety of sectors and places.

These developments, and emerging practices such as predictive policing, are dramatically changing cities and our societies at a time when the world is experiencing rapid urbanisation and a range of changes and challenges: climate change, the ongoing impact of the COVID-19 pandemic, access to basic urban services, infrastructure, housing, livelihoods, health and education. At the same time, AI and AI-enabled solutions are opening up new opportunities for cities while also being deemed to pose significant risks and challenges, such as potential bias and discrimination, privacy violations and other human rights violations, including surveillance schemes.

To support cities in their efforts to appropriately apply artificial intelligence, UN-Habitat has partnered with Mila – Quebec Artificial Intelligence Institute to provide reflections and guidance on AI and its responsible use in cities. The paper, which is part of our strategy to promote a people-centred approach to digital transformation, covers urban applications of AI and their risks, sets out specific approaches and tools for urban AI governance, and provides a set of key recommendations for urban leaders implementing AI in local governments.

It is important for local (and national) governments to recognise the risks associated with the use of artificial intelligence that can arise as a result of flawed AI data, tools and recognition systems. Our preferred approach to AI in urban environments is anchored in the UN’s Universal Declaration of Human Rights and UN-Habitat’s specific mandate to promote inclusive, safe, resilient and sustainable cities (SDG 11). Our approach is aligned with our people-centred and climate-sensitive approaches to innovation and smart cities, which seek to make urban digital transformation work for the benefit of all, driving sustainability, inclusivity, prosperity and the realization of human rights in cities and human settlements. Our approach also focuses on addressing safety and urban planning concerns to ensure people-centred, safe and appropriate deployment of artificial intelligence within cities.

Within this approach, we emphasise the important role that governments, particularly local authorities, play in stewarding the necessary frameworks, infrastructure and capacity development to manage and govern the responsible deployment and use of AI-powered solutions.

Thanks to our partners at Mila for their collaboration. I hope you find this report insightful and useful.

Ms. Maimunah Mohd Sharif
Under-Secretary-General and Executive Director
United Nations Human Settlements Programme (UN-Habitat)


FOREWORD FROM VALÉRIE PISANO,
PRESIDENT AND CEO OF MILA

Technological innovation, including artificial intelligence (AI), is (re)shaping how we approach almost every sphere of life. Urban environments are no exception to this transformation. AI systems can already be applied to key areas of urban intervention ranging from waste management, energy and transportation to public safety, healthcare and city governance. As AI continues to evolve, exciting opportunities that were once unimaginable will become available for cities and settlements to help them become more efficient and resilient in the face of today’s challenges.

Like any other transformative opportunity, integrating AI into urban environments comes with challenges and risks that must be taken seriously and addressed for AI to benefit societies. Therefore, as we push the boundaries of AI integration in cities and settlements, these efforts must be rooted in and supportive of the human rights framework, as well as being sustainable, inclusive and aligned with local contexts. In other words, urban environments must adopt responsible AI technologies to succeed.

The development of socially responsible AI for the benefit of all is at the heart of Mila’s mission. As a global leader in the field, Mila aims to contribute to the development of responsible AI and the fostering of social dialogue and engagement on this question, which is why we are proud to collaborate on this report with UN-Habitat. We hope this effort can support and inform civil society and public authorities as they navigate both the extraordinary benefits and the significant risks of AI-enabled technologies.

Through this interdisciplinary initiative that brings together experts in AI development, governance, public health, ethics, sustainability and urban development, we can start paving the way together for vibrant AI-powered cities that are climate-conscious, socially just and designed for all.

Valérie Pisano
President and CEO
Mila – Quebec Artificial Intelligence Institute


SECTION 1

Introduction
1.1. How to read this report p. 9
1.2. Guiding frameworks p. 10


AI is a disruptive technology that offers a plethora of opportunities. This report


presents an ambitious overview of some of the strategic applications of AI. Taking
a risk-based approach, it also raises awareness of the risks of using AI, regardless
of the application. The aim is to provide local authorities with the tools to assess
where, and whether, AI could be valuable and appropriate, rather than instructing
on what is or is not the right opportunity for a given context.

Cities and local authorities provide crucial areas for AI applications and policymaking because they regularly make day-to-day decisions about AI and how it
affects people’s lives. The emergence of AI technologies offers new ways to better
manage and equip cities (UN-Habitat 2020, 180). However, the reshaping of cities
through technology and innovation needs to reflect citizens’ needs and, where
possible, be used as a tool to foster more equal prosperity and sustainability.
Cities, towns and settlements may have less policy and risk assessment capacity
than nation states.

This report is part of UN-Habitat’s strategy for guiding local authorities in supporting
people-centred digital transformation processes in their cities or settlements. It
is a collaboration with Mila–Quebec Artificial Intelligence Institute, a community of
more than 1,000 researchers dedicated to scientific excellence and the development of responsible AI for the benefit of all. UN-Habitat helps build safe, resilient,
inclusive and sustainable communities in over 90 countries by working directly
with partners to respond to the UN Sustainable Development Goals (SDGs), and
SDG 11 in particular. Together, this Mila–UN-Habitat collaboration offers a vision and
understanding of how responsible AI systems could support the development of
socially and environmentally sustainable cities and human settlements through
knowledge, policy advice, technical assistance and collaborative action.


1.1

HOW TO READ THIS REPORT

The report is structured in five main parts: an introduction to AI, guidelines for AI governance, an overview of urban applications of AI, a risk assessment framework and a guide for AI strategy. Each section has a short overview summary.

While the report is lengthy, it is a tool for both policymakers and technical experts. This is a vertical document that works across an organisation. It can be used by local authorities, by city managers and directors, and by the technical people involved in the coding or maintenance of an AI system.

First, the report situates the discussion by describing AI: what it is, its different types, the opportunities it offers for cities, and its current limitations. The report then briefly discusses the importance of AI governance—which should be context-sensitive, anchored in and respectful of human rights, and centred on the public interest—as well as some of its key challenges.

The applications section follows; it identifies key sectors for intervention for cities, along with examples of AI applications within each of those pillars. Each application area is linked to the Sustainable Development Goal it supports, and a series of tags indicates high-impact, locally relevant and long-term endeavours.

The risk assessment framework presents an overview of the risks along the different phases of AI. There are many interrelations between risks. Each listed risk is accompanied by a set of reflective guiding questions. Defining success and evaluating the risks of an AI system should be done holistically, including both technical and contextual issues. Technical specifications alone will not determine an AI system’s success; the social, political and structural contexts are crucial. The framework is designed for holistic reflection.

Finally, the report offers a guide of recommendations and areas of action to consider when building an AI strategy. While this guide is not exhaustive, its intention is to support local authorities by suggesting areas of action that can help ensure the development of responsible AI for cities and settlements.


1.2

GUIDING FRAMEWORKS

This report builds on existing frameworks that direct principles, values and policy actions in relation to artificial intelligence.

HUMAN RIGHTS

Human rights are the universal and inalienable rights of every human being, and they form the basis of all UN development approaches. They were institutionalised in the Universal Declaration of Human Rights, adopted by the United Nations General Assembly in 1948. The use of AI should be guided by these rights to ensure that no one is left behind, excluded or negatively impacted by its use.

SDGS

The UN’s 17 Sustainable Development Goals (SDGs) are an integrated set of goals and targets for inclusive global development by 2030. SDG 11 calls for the safety, resilience, inclusivity and sustainability of cities and for enhancing participatory and integrated human settlement planning and management. The SDGs emphasise the importance of integrated approaches to using AI, considering a range of sectors and stakeholders.

NEW URBAN AGENDA

The New Urban Agenda (NUA) represents UN-Habitat’s shared vision of how to achieve a more sustainable future for urbanisation (UN, 2017). It focuses on creating synergies across the mandates and strategic plans of different UN entities to maximise impact. The NUA encourages cities to develop frameworks for technologies such as AI to guide their development.

PEOPLE-CENTRED SMART CITIES

UN-Habitat’s flagship programme, People-Centred Smart Cities, provides strategic and technical support to governments on digital transformation. It promotes the deployment of technological innovations to realise sustainability, inclusivity, prosperity and human rights, and to make urban digital transformation work for the benefit of all. It leverages digital technologies for inclusive and sustainable development while preventing cities from having to constantly catch up (UN-Habitat, 2020a).

ETHICS OF AI

In November 2021, UNESCO adopted the first global instrument on the ethics of AI, which is strongly grounded in the recognition of the importance of promoting and protecting human rights. See the Recommendation on the Ethics of Artificial Intelligence (UNESCO, 2021).


SECTION 2

AI and Cities
2.1. What is AI? p. 12
2.2. Governance p. 16


2.1.
WHAT IS AI?

Artificial intelligence as a technology is continuously expanding, so it is challenging to define AI as a whole. This continuous growth has led to a wide range of definitions of AI (Russell and Norvig, 2010; McCarthy, 2007). Colloquially, AI is often used as an umbrella term to cover a range of different types and technical subcategories.

At the core, AI is a system that produces outcomes based on a predefined objective (OECD, 2019). The objective for an AI is the translation of a human-defined goal into a mathematical one. Outcomes can be predictions, recommendations or decisions. For example, the human goal of winning a chess game can be translated into the objective of choosing a sequence of moves that maximises the probability of winning.

The terms algorithm, AI system, AI ecosystem and AI indicate different levels of scale. An algorithm is the most specific; it is a process which generates an output from an input. An AI system is usually a single application, using a unique input and producing its own output. An AI ecosystem is a network of AI systems which interact with each other. AI is the broadest term, encompassing all the different methods and systems; it usually refers to the field as a whole.

While AI systems are complex, they tend to follow three major steps: engaging with data, abstracting the perceptions in the data, and formulating outcomes. Within the AI community, these steps are referred to as the AI pipeline.

2.1.1.
WHAT IS RESPONSIBLE AI?

Digital innovation can be an inclusive force for good only if implemented with a firm commitment to improving people’s lives and well-being, as well as to building city systems that truly serve their communities (UN-Habitat, 2021, p. 6). However, good intentions alone are not sufficient to ensure AI systems are built responsibly. In fact, it is possible—and unfortunately not uncommon—even for an “AI for good” project or system to unintentionally replicate or compound social inequalities and biases. This is why responsible AI must be a central part of any discussion about AI development.

There is not yet consensus regarding a definition of responsible AI. In this report, it is referred to as an approach whereby the life cycle of an AI system must be designed to uphold—if not enhance—a set of foundational values and principles, including the internationally agreed-upon human rights framework and SDGs, as well as ethical principles such as fairness, privacy and accountability. In this context, the objective of an AI system—for example, whether it aims to automate administrative tasks or to support the fight against the climate crisis (“AI for good”)—is relevant yet secondary. Rather, responsible AI emphasises the importance of holistically and carefully thinking about the design of any AI system, regardless of its field of application or objective. It is therefore the collection of all choices—implicit and explicit—made in the design of the life cycle of an AI system that can make it either responsible or irresponsible. In summary, “ensuring responsible, ethical AI is more than designing systems whose result can be trusted. It is about the way we design them, why we design them, and who is involved in designing them” (Dignum, 2022).

This report is premised on the belief that AI applications should be responsible in their design, without exception, and provides a general framework on how to deploy AI responsibly in the context of cities or settlements.

2.1.2.
WHAT ARE DIFFERENT TYPES OF AI?

Understanding what AI is and isn’t is important for non-technical users, because this knowledge allows them to reflect more critically about AI and enables them to participate in the necessary public conversations about its use, its governance and, ultimately, the place it should take within our societies, cities and settlements.

There are two main categories of AI systems: symbolic methods rely on a series of predefined logical rules, while statistical methods identify patterns in a dataset to shape the outcomes (Clutton-Brock et al., 2021). For example, in a chess game, a symbolic AI system could choose moves based on a series of rules, such as “when having to choose between losing the queen or a pawn, sacrifice the pawn.” In contrast, a statistical AI system can “learn” which moves are desirable based on a dataset of previous games.
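To make this contrast concrete, the toy Python sketch below sets a hand-written rule beside a model trained on examples. Everything in it, including the rule, the features and the data, is invented for illustration and is not drawn from any system discussed in this report.

```python
# Toy contrast between symbolic and statistical AI. All rules and data
# below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Symbolic: behaviour comes from hand-written rules.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def symbolic_choice(threatened_pieces):
    """Apply a fixed rule: sacrifice the least valuable threatened piece."""
    return min(threatened_pieces, key=PIECE_VALUES.get)

print(symbolic_choice(["queen", "pawn"]))  # -> pawn

# Statistical: behaviour is learned from labelled examples. Here each past
# position is reduced to two toy features: [own material, opponent material].
X = [[9, 1], [1, 9], [5, 5], [2, 8], [8, 2], [6, 4]]
y = [1, 0, 1, 0, 1, 1]  # 1 = that game was eventually won
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[7, 3]])[0, 1])  # estimated win probability
```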


In statistical methods, machine learning and deep learning algorithms are the best known. There are three broad approaches within machine learning: supervised learning, unsupervised learning and reinforcement learning. The difference reflects the type of information the algorithm uses and whether or not it interacts with its environment. These differences are relevant because certain challenges arise out of this interaction.

In supervised learning, the algorithm is trained with a labelled dataset of examples to learn a rule that can predict the label for a new input. For example, a training dataset may be images of pedestrian crossings that have been labelled for whether there is a person crossing or not. Because data labelling is usually done manually, it requires significant labour.

In unsupervised learning, algorithms use unlabelled datasets. These discover “hidden” structures in the dataset (e.g., clustering and visualisation (Murphy, 2012)) or are used as an extra step for other algorithms (e.g., representation learning (Bengio et al., 2013)).

In reinforcement learning, an algorithm interacts with its environment by choosing a sequence of actions to accomplish an objective. The goal is to learn the best sequence of actions through rewards or feedback from the environment. For example, autonomous navigation often uses reinforcement learning (Kiran et al., 2021); when the algorithm makes decisions about trajectory, safely controlling the vehicle provides larger rewards.

Deep learning algorithms are rapidly gaining popularity. According to Goodfellow et al. (2016, p. 5), “deep learning enables the computer to build complex concepts out of simpler concepts.” This differs from traditional approaches to AI, which require large amounts of feature engineering, in which human experts extract attributes from raw data. Deep learning is compositional, combining a large hierarchy of learned concepts.
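The following minimal sketch shows the supervised-learning workflow described above: train on labelled examples, then predict labels for held-out inputs. The features are random stand-ins for, say, pedestrian-crossing images; nothing here comes from a real dataset.

```python
# Minimal supervised-learning sketch with synthetic, labelled data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # 200 examples, 5 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy label: person crossing or not

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```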

[Figure: A taxonomy of artificial intelligence. Symbolic AI (e.g., expert systems and knowledge graphs) and machine learning are the two main branches, with hybrid AI combining both. Machine learning comprises supervised, unsupervised and reinforcement learning, and deep learning sits within machine learning.]


2.1.3.
DEEP LEARNING APPLICATIONS

Advancements in deep learning have led to significant innovations in AI applications. Many of these innovations power the types of AI cities will encounter and use.

For example, speech recognition is used in auto-transcription devices. In traditional machine learning, datasets used to contain labelled features such as volume or pitch. In contrast, in deep learning, an algorithm is provided with sound files and, in the training stage, conducts mathematical processing of the audio to infer its own attributes.

In the case of image classification, a deep learning algorithm aims to build higher-level graphic concepts such as “person” and “house” from lower-level concepts such as “corner” and “texture,” and those are in turn constructed from “edges” or pixel values.

Computer vision has benefited greatly from deep learning innovations. For example, a computer vision task can have the objective of labelling the items in a set of images. In this way, when the algorithm is given a dataset of labelled images, it learns to distinguish between two items such as a cat and a car.

Lastly, natural language processing can be seen in chatbots. Chatbots can be trained to generate a conversation and automatically respond to people’s text messages online. To do so, developers provide the chatbot with a dataset containing, for instance, all of the public written content on some social media platform.

2.1.4.
OPPORTUNITIES FOR AI IN CITIES

Cities and settlements of all sizes and locations may benefit from using AI systems to address key urban challenges. Because AI is not specific to one domain or even to one technology, it has numerous applications for more sustainable and inclusive development. This report focuses on key areas of intervention for cities: energy, mobility, public safety, water and waste management, healthcare, urban planning and city governance.

Integrating AI systems could be a key to addressing social, economic and ecological challenges at a global scale. While every city is different, cities are the centre of societal transformations, and digital transformation in particular. They are where people, jobs, research, wealth and leisure concentrate. They concentrate access to opportunities for a greater number of people, as well as concentrating societal and environmental issues. Because cities play a role in global networks, the benefits and the risks of integrating AI systems also extend well beyond their borders.

As cities continue to experience significant challenges relating to resource demands, governance complexity, socioeconomic inequality and environmental threats, innovation is necessary for tackling emerging problems (Yigitcanlar et al., 2021). To take full advantage of the potential of AI for cities, local governments can create the enabling conditions for sustainable and inclusive development. Guiding the development of these conditions, together with a careful balancing of the opportunities and risks, is the purpose of AI governance.
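As a minimal counterpart to the image-classification discussion in section 2.1.3, the sketch below trains a small neural network on scikit-learn’s built-in 8x8 handwritten-digit images. Real deep learning systems are far larger, but the workflow is the same in outline: labelled images in, learned classifier out.

```python
# Illustrative image classification with a small neural network.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 images, each flattened to 64 pixel values
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```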


2.1.5.
LIMITATIONS OF AI

Applying AI responsibly requires understanding the key limitations of AI systems. Three in particular will always be present:

AI systems reinforce the assumptions in their data and design. In order for an algorithm to reason, it must gain an understanding of its environment. This understanding is provided by the data. Whatever assumptions and biases are represented in the dataset will be reproduced in how the algorithm reasons and what output it produces. Similarly, design choices are made all along the AI life cycle, and each of these decisions affects the way an algorithm functions. Because negative societal assumptions may be reflected in the dataset and design choices, algorithms are not immune to the discriminatory biases embedded in society.

Algorithms can reproduce gender stereotypes based on their dataset. Chatbots, for example, include algorithms trained on large datasets of text to learn existing relations between words (Garg et al., 2018). The result is that women are associated with words such as “nurse” and “receptionist,” while men are associated with the words “doctor” and “engineer” (Bolukbasi et al., 2016).

AI systems cannot evaluate their own performance. AI systems measure their performance against pre-defined optimisation goals, but these are isolated from the greater context. The term artificial intelligence is somewhat of a misnomer; human reasoning can judge the relevance of a type of knowledge to a specific situation, but algorithms cannot. While it may be tempting to see algorithms as neutral “thinkers,” they are neither neutral nor thinkers. This means that an AI system’s objective must be carefully aligned with human goals, and considerable attention is required for monitoring and evaluation by humans.

AI systems are mathematical and cannot integrate nuance. Defining an AI system’s objective requires translating a human goal into a mathematical formula. This creates strict constraints on what types of knowledge and information can be integrated into an algorithm’s reasoning. Because everything needs to be concretely defined, algorithms are unable to comprehend a whole range of subjective, qualitative and nuanced information.


2.2.
GOVERNANCE

2.2.1.
GOVERNANCE OVERVIEW

AI is not neutral, and context matters. Understanding how structures embed values offers local authorities the possibility to direct the development of AI towards key values for inclusive and sustainable development. This requires an understanding of AI governance, which is the sum of AI regulations, ethics, norms, administrative procedures and social processes.

Readers should keep these issues in mind while reading the rest of this report in order to better anchor the applications and risks in the local context and in local definitions of the public interest.

This section is short. It presents an overview of what AI governance is, why it matters and what challenges are particular to AI in an urban setting. Section 5 then makes specific recommendations on how to design and implement a strategy.

AI governance, like digital governance more broadly, combines regulations, ethics, norms and social practices. Governance is greater than the sum of its parts; it also includes the process of how to make decisions about these aspects and the social relations that shape these decisions (Floridi, 2018; Jameson et al., 2021). This definition is intentionally broad, as the meaning of “governance” can vary depending on the discipline and context in which it is used. For example, many definitions focus only on administrative rules and tools to fulfil legal and ethical requirements (Mäntymäki et al., 2022). For a city directing inclusive urban development, self-regulation and ethics processes must be complemented with building capacities and hard laws (Larsson and Heintz, 2020; Ala-Pietilä and Smuha, 2021; Wirtz et al., 2020).

2.2.2.
WHY GOVERNANCE MATTERS

Governance is a tool to direct the development of AI towards a set of values, such as inclusive and sustainable development. Directing urban development towards the public interest and respect for human rights is a conscious choice. If this choice is not made, the structures and processes of AI and its governance will embed values unconsciously, causing significant risks (see section 4.2.1). AI is not neutral; both formal structures, such as the design of algorithms, and informal arrangements, such as social norms, embed and propagate values.

The context matters for AI adoption. The technical choices around developing AI are important, and much of this report outlines these. However, the success of an AI system is often dependent on what happens once the algorithm moves out of the laboratory and into the real world, where there are people (see section 4.2.4). AI technologies are co-created within a society, especially when AI uses citizens’ data and shapes their lives. Each urban settlement will have its own unique context, with its own interrelationship of social norms, values and ways of working. Governance is a tool that local authorities can use to balance opportunities and risks of AI in a manner most appropriate to their local context.


2.2.3.
AI IN THE CITY

AI affects the city, and it changes the environment that local authorities are working in. As AI systems evolve to change urban socioeconomic development, they will also shift social organisation in the city. For example, a major dynamic in cities is platform urbanism, where the increasing use of platforms, with their predictive tools and need for data, reshapes relationships of labour, mobility, consumption and governance (Leszczynski, 2020).

These are systemic issues. As a result, implementing AI in a city combines with and compounds existing dynamics so that new systemic issues emerge. For example, AI can have adverse consequences on city residents’ rights by disproportionately shifting power dynamics in a negative direction or maintaining existing negative power dynamics (Rodriguez, 2021). An AI system may also intensify inequality between those who have access to its services and those who don’t, or between those who benefit from it and those who don’t (see the “digital divide” subsection under section 4.2.2.1). The combination of different data sources and predictive tools creates new knowledge about city residents, which in certain contexts may support existing instances of rights abuses (Reuter, 2020; Aitken, 2017).

To address this changing environment, local authorities are in a unique position to shape the city’s context. They can create an enabling environment for the development of AI that in turn enables sustainable and inclusive development. Through the regulatory environment and with each contract awarded, local authorities shape what services are implemented in the city, thus creating the conditions for future development. They can set conditions for investment in technology and infrastructure, enable a bustling civil society, and foster an innovative environment to advance the public interest. In order for the city to be proactive rather than reactive, digital innovation requires a clear city-level strategy. Specific recommendations for how to create such a strategy are presented in section 5.

2.2.4.
KEY CHALLENGES FOR AI GOVERNANCE

MULTI-LEVEL GOVERNANCE

While many countries have already released national guidelines on AI (Schmitt, 2021), local governments still face various challenges regarding the development, implementation and evaluation of regulatory frameworks for policy capacity (Taeihagh, 2021). A city is itself an actor within a larger territorial entity, an independent system within larger systems, and a node within a global network of cities. The nested geographic aspect of a city’s territory is reflected in the dispersion of governance capacities and responsibilities across multiple jurisdictions. The different levels of technological governance involve interdependent actors from all levels of government operating on the city’s territory, as well as non-state actors, including the public, private and social sectors (Enderlein et al., 2010).

The resulting multi-level organisation poses a genuine challenge as it may imply limited capacity and resources for local authorities. However, multi-level governance approaches may support cities in benefiting from these jurisdictional structures by attributing each level of governance with a key role in decision-making.

The role of national governments in particular is to consolidate specific principles, policy frames and values that are active in national discourses on AI governance and to cultivate a national ecosystem for AI innovation and development. While cities will refer to these national guidelines when they exist, cities can still actively participate in establishing priorities and bringing forward their interests and their vision of AI (UN-Habitat, 2021).


ACCOUNTABILITY

AI comes with three major accountability challenges: political accountability, adapting to changes over time, and the responsibilities of automated systems. Overall, accountability shapes much of how the city balances risks and opportunities in developing urban AI. This makes accountability structures especially important to consider in AI governance. Here, we focus on the latter two challenges.

It is particularly important to consider who is accountable after an AI system is delivered or procured. Algorithmic systems will change over time, and their impacts are not always predictable. AI systems will shape and be shaped by the environment in which they are deployed, and a change of purpose over time may challenge agreements that were made in the early stages of the AI life cycle. One example is mission creep, a relatively common occurrence when technologies are intentionally repurposed for surveillance practices (see section 4.2.1).

There are also ever-increasing concerns regarding the responsibilities of automated systems. Algorithms act on the world without transparent human intention, which challenges our existing human-centred accountability frameworks. While autonomous systems may act relatively independently, they are still designed, funded and owned by human actors. As a result, allocating responsibility among the actors designing, funding or, in some cases, using the algorithm is important. For instance, if a self-driving car crashes, who is liable?

The design and implementation of an algorithm in a city’s critical services shifts existing responsibilities amongst actors. For instance, a public-private partnership created for public transport services may rely on a multitude of subcontractors that interact with AI systems and process sensitive data. The accountability challenge is to acknowledge responsibility throughout an AI system’s life cycle and create governance mechanisms to respond to issues if and when something goes wrong.

LIMITED CAPACITY

Limited capacity is one of the biggest challenges for AI governance in cities across the world. While the lack of AI skills among professionals may be exacerbated by a city’s socio-economic situation and the global digital divide, it is a challenge for rich and poor cities alike.

The global move towards AI has spurred an increasing demand for IT professionals. In the race for IT talent and more niche AI skills, the private sector outpaces governments in its ability to attract these professionals. Notably, cities lack sufficient funds to hire specialised human resources and drive cutting-edge development work in-house. This limited capacity puts cities in a position where the technical expertise needed for the proper governance of AI is almost always outsourced and procured.

Furthermore, the skill gap between the decision-makers who are responsible for funding AI solutions and those who will provide the technology renders system monitoring very difficult. Through cross-sectoral collaborations (short to medium term) and local capacity-building (long term), the city may overcome this limitation while centring its values. These two levers form a significant part of the following governance recommendations (see section 5.3).


SECTION 3

Applications
3.1. Applications overview p. 20
3.2. Energy p. 21
3.3. Mobility p. 27
3.4. Public safety p. 32
3.5. Water and waste management p. 35
3.6. Healthcare p. 38
3.7. Urban planning p. 42
3.8. City governance p. 49


3.1.

Applications overview

AI is evolving at such a rapid pace that the potential number of applications in the urban context is now tremendous. In order to make this conversation digestible, this section identifies key sectors for intervention for cities, along with examples of AI applications within each of those pillars.

The key sectors outlined include energy, mobility, public safety, water and
waste management, healthcare, urban planning and city governance. This
outline is not exhaustive.

Each application presented is a concrete example of an existing technology, not of futuristic innovations. The examples also explicitly support sustainable, low-carbon, inclusive development.

Each application area is linked to the Sustainable Development Goal it


supports. A series of tags is used to indicate high-impact, locally relevant
and long-term endeavours.

Lastly, the applications are linked to the Risk Framework which follows. Particular applications are linked to specific risks from the Risk Framework where appropriate, though these links are for illustrative purposes. For a full view, readers are encouraged to refer to section 4.


3.2.
ENERGY

SDGs: 7.1, 7.2, 7.3, 7.a, 7.b, 9.1, 9.4, 9.5, 9.b, 11.b, 13.2

AI can be a useful tool across electricity systems and in accelerating the transition to a low-carbon society. Indeed, AI can both reduce emissions from existing power plants and enable the transition to carbon-free systems, while also improving the efficiency of systems. In particular, AI can contribute to power generation by forecasting supply from variable sources and through predictive maintenance. Working in this area requires close collaborations with electricity system decision-makers and practitioners in fields such as electrical engineering.

RISK: Mission creep

Policymakers and stakeholders considering applications of AI in power generation must be mindful not to impede or delay the transition to a low-carbon electricity system. For example, using AI to prolong the usable lifetime of a high-emissions coal power plant, or to accelerate the extraction of fossil fuels, could run counter to climate and air quality goals. Ideally, projects should be preceded by system impact analyses that consider effects on society and the environment. Such early impact assessment can ensure that projects do not enable or perpetuate unsustainable behaviours.

3.2.1.
FORECASTING ELECTRICITY GENERATION

Tags: High impact · Locally relevant
RISK: Mission creep

With the transition to renewable energy, electricity generation will become both more intermittent and better distributed. This is because the output of such generation will be determined by local environmental conditions (e.g., wind speeds and cloud cover in the case of wind turbines and solar panels, respectively) that can vary significantly. Importantly, AI models can employ various types of data, such as satellite images and video feeds, to create forecasts and to understand the emissions from different sources. Here, AI offers additional opportunities by:

• Enabling forecasts of wind- and weather-generated power quantities by analysing patterns in historical data. These can provide much-needed foresight in contexts such as power system optimisation, infrastructure planning and disaster management (Mathe et al., 2019; Das et al., 2018; Voyant et al., 2017; Wan et al., 2015; Foley et al., 2012).

• Providing predictions that serve as information and evidence to help determine the location of, and need for, new plants, supporting operators and investors.
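The sketch below gives a hedged, toy version of generation forecasting from weather features. The data are synthetic and the feature names are assumptions; this is not one of the systems cited above.

```python
# Toy short-term solar-generation forecast from synthetic weather features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
cloud_cover = rng.uniform(0, 1, n)         # fraction of sky covered
sun_elevation = rng.uniform(0, 90, n)      # degrees above the horizon
temperature = rng.normal(20, 8, n)         # degrees Celsius

# Toy "ground truth": output rises with sun elevation, falls with cloud cover.
output_mw = np.maximum(
    0.0, 0.1 * sun_elevation * (1 - 0.8 * cloud_cover) + rng.normal(0, 0.5, n))

X = np.column_stack([cloud_cover, sun_elevation, temperature])
X_train, X_test, y_train, y_test = train_test_split(X, output_mw, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```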


Forecasting with hybrid physical models

Many AI techniques are “domain-agnostic,” meaning they can easily be applied to different domains. In an
optimal scenario, however, the AIs of the future will improve predictions by incorporating domain-specific
insights. This is particularly important for forecasting electricity generation: since weather drives both electricity
generation and demand, AI algorithms forecasting these quantities should not disregard established techniques
for climate and weather forecasting. Such hybrid models involving both physics and AI can not only help with
getting more reliable forecasts in the long term, but also help with the prediction of catastrophic weather
events (see section 3.4).

3.2.2.
PREDICTIVE MAINTENANCE OF EXISTING INFRASTRUCTURE

Tags: High impact · Locally relevant

Optimising maintenance on existing power generation systems has multiple advantages over building new infrastructure, as it can help minimise emissions and reduce the need for costly financial investments in new infrastructure (Sun and You, 2021). In this context, AI has been successfully used to operate the diagnostics and maintenance of existing systems through sensor data and satellite imagery. In particular, AI has helped with the following (a minimal code sketch follows after section 3.2.3):

• Detecting leaks in natural gas pipelines (Wan et al., 2011; Southwest Research Institute, 2016)

• Detecting faults in rooftop solar panels (Bhattarcharya and Sinha, 2017)

• Detecting cracks and anomalies from image and video data (Nguyen et al., 2018)

• Preemptively identifying faults from sensor and simulation data (Caliva et al., 2018).

3.2.3.
ACCELERATING EXPERIMENTATION

Tags: Long-term

AI can be used to accelerate scientific discovery in areas such as materials science. In such cases, AI is not a replacement for scientific experimentation, but it can learn from past experiments in order to suggest future experiments that are more likely to be successful (Clutton-Brock et al., 2021).

• AI is being used to accelerate the development of new materials that can better store and harness energy from variable low-carbon sources (e.g., batteries and photovoltaic cells) (Butler et al., 2018; Liu et al., 2017; Gómez-Bombarelli et al., 2018).

• AI has also been used to understand innovation processes in order to inform policy for accelerating materials science (Venugopalan and Rai, 2015).
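The toy sketch below illustrates the kind of sensor-based fault detection described in section 3.2.2: flag unusual readings with an unsupervised anomaly detector. The sensor streams are synthetic placeholders, not data from the cited systems.

```python
# Toy fault detection: flag anomalous [temperature, vibration] readings.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# Mostly normal operation...
normal = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.1], size=(500, 2))
# ...plus a few readings from an overheating, shaking component.
faulty = rng.normal(loc=[68.0, 2.5], scale=[2.0, 0.2], size=(5, 2))
readings = np.vstack([normal, faulty])

detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = detector.predict(readings)  # -1 = anomaly, 1 = normal
print("indices flagged for inspection:", np.where(flags == -1)[0])
```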


3.2.4.
TRANSMISSION AND DISTRIBUTION

SDGs: 7.1, 7.2, 7.3, 7.a, 7.b, 11.a, 11.b, 13.2

Power grids require balance between the supply and demand of energy. This balance can be affected by multiple factors, including unexpected fluctuations in supply or demand, the algorithms used to control grid infrastructure, and failures or weaknesses in that infrastructure. In most countries, the electricity grid has changed very little since it was first installed. Moreover, existing grids were designed based on the idea that electricity is produced by a relatively small number of large power stations that burn fossil fuels and is delivered to a much larger number of customers, often some distance from these generators, on demand (Ramchurn et al., 2012). Aggravating this, the grid itself relies on aging infrastructure plagued by poor information flow (for example, most domestic electricity meters are read at intervals of several months) and has significant inefficiencies arising from loss of electricity within the transmission networks (on a national level) and distribution networks (on a local level).

3.2.5.
SYSTEM OPTIMISATION AND CONTROL

Tags: High impact · Locally relevant
RISKS: Explainability, privacy and privacy attacks

When balancing electricity systems, operators need to determine how much power every controllable generator should produce in a process called scheduling and dispatch. With the aim to achieve optimal power flow, this process must also be coordinated at different time scales. The balancing process becomes even more complex as electricity systems include more storage, variable energy sources and flexible demands. Traditional power system monitoring, optimisation and intelligent control methods are proving inadequate for these purposes, pushing cities towards the use of AI to optimise and balance power grids in real time (smart grids) (Ramchurn et al., 2012; Perera et al., 2014; Victor, 2019).

• In this context, AI has been used to improve scheduling and dispatch processes by improving the quality of flow optimisation solutions (Borgs et al., 2014; Dobbe et al., 2017; Dobbe et al., 2018).

• AI is also used to learn from the actions of human engineers working with power-system control (Donnot et al., 2017).

• AI techniques have been used to ensure that the distribution system runs smoothly by estimating the state of the system even when only few sensors are available (Donti et al., 2018; Jiang and Zhang, 2016; Pertl et al., 2016).

• Image processing, clustering and optimisation techniques have also been used on satellite imagery to inform electrification initiatives (Ellman, 2015). Traditionally, figuring out which clean electrification methods are best for different areas can require slow and intensive survey work, but AI can help scale this work.

AI models can also help operate rural microgrids—that is to say, localised, self-sufficient energy grids—through accurate forecasts of demand and power production, since small microgrids are even harder to balance than country-scale electric grids (Cenek et al., 2018; Otieno et al., 2018). As these new local sources of energy emerge, decentralised energy generation will be increasingly important to future energy systems and will actively contribute to the optimisation of the grid. For example, homes equipped with smart meters and fitted with clean energy sources, as well as newly developed energy storage, can plug into the grid and supply energy into distributed energy networks.

AI can also help integrate rooftop solar panels into the electric grid (Malof et al., 2016; Yu et al., 2018). In the United States and Europe, for instance, rooftop solar panels are connected to a part of the electric grid called the distribution grid, which traditionally did not have many sensors because it was only used to deliver electricity one way, from centralised power plants to consumers. However, rooftop solar and other distributed energy resources have created a two-way flow of electricity on distribution grids.

Definition: Microgrids

A microgrid is a self-sufficient energy system that serves a discrete geographic area, such as a college campus, a hospital complex or a neighbourhood. Within a microgrid are one or more kinds of distributed energy (solar panels, wind turbines, combined heat and power, generators) that produce its power. Microgrids are an important step towards energy equity. They are uniquely suited to help empower disadvantaged communities. This is because of their robustness; they can provide energy both with the centralised grid and independently from it, providing power even when the centralised grid is unavailable.

RISK: Explainability

When AI helps control electrical grids, system developers may require technical details about how algorithms function, regulators may require assurance about how data is processed, and households may want their smart meters to provide accessible information through an intuitive user interface. Explainability should be carefully considered: the more critical a system is and the greater the cost of failure, the more accurate the required explanation must be. While the risk of an insufficient explanation is low in the case of an app helping a user understand household energy consumption, it is much greater when it comes to digital interfaces informing grid operators. Indeed, incorrect interpretation of outputs and model usage could lead to grid failure, which is often a catastrophic event.

RISK: Privacy and privacy attacks

Relevant stakeholders should be aware that sharing data about critical infrastructure such as energy systems without adequate protection may pose a risk to cybersecurity and system resilience.

3.2.6.
FORECASTING SUPPLY AND DEMAND

Tags: Locally relevant

Since energy generation and demand both fluctuate, real-time electricity scheduling and longer-term system planning require forecasting ahead of time. Better short-term forecasts can also reduce the reliance on standby plants, which often rely on fossil fuels, and help proactively manage increasing amounts of low-carbon variable generation. In this context, hybrid AI-physical models can contribute to modelling the availability of different energy sources and to forecasting supply and demand across the system (see section 3.2.1).

• Hybrid AI-physical models have been used for modelling precise energy demand within buildings (Robinson et al., 2017).

• Through hybrid AI-physical models, it has also been possible to model energy dynamics in an urban microclimate, such as a campus or a neighbourhood (Nutkiewicz et al., 2018).

• AI has also been used to understand specific categories of demand, for instance by clustering households into groups with similar patterns of energy use (Zhang et al., 2018).
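As a toy illustration of the last bullet above, the sketch below clusters households by their daily load profiles. The 24-hour profiles are synthetic and the cluster structure is invented for the example.

```python
# Toy clustering of households by synthetic daily load profile.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
hours = np.arange(24)
morning = np.exp(-0.5 * ((hours - 7) / 2.0) ** 2)    # breakfast-time peak
evening = np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)   # dinner-time peak

# 300 households: half morning-heavy, half evening-heavy, plus noise.
profiles = np.vstack([base + rng.normal(0, 0.05, 24)
                      for base in [morning] * 150 + [evening] * 150])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
for label in (0, 1):
    peak = profiles[kmeans.labels_ == label].mean(axis=0).argmax()
    print(f"cluster {label}: typical demand peak at {peak}:00")
```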


3.2.7.
PREDICTIVE MAINTENANCE FOR THE TRANSMISSION AND DISTRIBUTION NETWORK

Tags: High impact
RISK: Digital divides

Predictive maintenance is a key strategy for AI to contribute to decreasing emissions while increasing infrastructure safety, driving down costs and increasing energy efficiency. During transmission and distribution, predictive maintenance can help prevent avoidable losses.

• AI can suggest proactive electricity grid upgrades. AI can analyse the grid information at any given time and determine the health of the grid, avoiding wastage of generated power (see sources cited in section 3.2.2).

• Similar techniques also help reduce the amount of energy consumed during transmission (Muhammad et al., 2019).

While some of these losses are unavoidable, others can be significantly mitigated to reduce waste and emissions.

3.2.8.
ENERGY USE AND EFFICIENCY

SDGs: 7.1, 7.2, 7.3, 7.a, 7.b, 11.6, 11.a, 11.b, 13.2

3.2.8.1.
Forecasting energy use and improving efficiency

Tags: Locally relevant
RISKS: Explainability, privacy and privacy attacks

Following the transition to variable low-carbon energy, the supply and price of electricity will vary over time. Thus, energy flexibility in buildings will be required in order to schedule consumption when supply is high. For this, automated demand-side forecasts can respond to electricity prices, smart meter signals or learned user preferences and help efficiently schedule energy use.

AI can enable more flexible industrial electrical loading by optimising a firm’s demand response to electricity prices. There are several promising ways to enhance the operating performance of heavy-consumption energy systems using AI, for instance:

• Demand response optimisation algorithms can help adjust the timing of energy-intensive processes such as cement crushing and powder coating to take advantage of electricity price fluctuations (Zhang et al., 2016), as sketched after this list.

• AI is also used to make sense of the data produced by meters and home energy monitors.
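The following toy scheduler illustrates the demand-response idea in the first bullet above: run a flexible industrial process during the cheapest forecast hours. The price forecast and the required run time are invented numbers, and real schedulers would also handle operational constraints such as contiguity and ramp-up.

```python
# Toy demand-response scheduling against an hourly price forecast.
import numpy as np

price_forecast = np.array([  # $/MWh for the next 24 hours (invented)
    42, 40, 38, 35, 33, 34, 45, 60, 72, 70, 65, 58,
    55, 52, 50, 54, 62, 75, 80, 68, 55, 48, 44, 43])
hours_needed = 6  # the process must run 6 hours, not necessarily contiguous

cheapest = np.argsort(price_forecast)[:hours_needed]
schedule = np.zeros(24, dtype=bool)
schedule[cheapest] = True

print("run during hours:", sorted(cheapest.tolist()))
print("average price paid:", price_forecast[schedule].mean(), "$/MWh")
```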


3.2.8.2.
Controlling energy usage

Tags: Locally relevant
RISKS: Explainability, privacy and privacy attacks, sustainability

There is significant scope for AI to help in reducing energy use and increasing efficiency in industrial, residential and commercial settings. Such applications can both reduce energy bills and lower associated carbon emissions.

• In this context, AI can be employed to analyse real-time building data and then provide insights on building performance.

• As building-related applications must transmit high volumes of data in real time, AI is also key to pre-processing large amounts of data in large sensor networks, allowing only what is relevant to be transmitted instead of all the raw data that is being collected.

• AI can be used to forecast what temperatures are needed throughout a system, improve control to achieve those temperatures, and provide fault detection.

• AI can also be employed to adjust how systems within buildings, such as lights and heating, operate based on whether a building or room is occupied, thereby improving both occupant comfort and energy use (a minimal sketch of such occupancy-based control follows at the end of this subsection).

Definition: Remote sensing

RISK: Sustainability

Remote sensing is the process of detecting and monitoring the physical characteristics of an area by measuring the radiation reflected and emitted at a distance (typically from cameras on satellites or aircraft). Images are collected remotely from either normal or specialised cameras (such as infrared ones), helping researchers “sense” things about an object or a phenomenon.

RISK: Sustainability, mission creep and privacy violations

In the future, sensors could be everywhere, monitoring every source of emissions as well as every person’s livelihood. While this might start from a good intention, it could go too far. Moreover, the swarm of new sensing devices could add layers of embodied emissions and material waste into the environment, and the processing and storage of the vast amount of data they generate might come with a cost in emissions too. Finally, data collected through remote sensing could end up being maliciously used for surveillance and privacy-breaching applications. (For the risks relating to these applications, see the “AI in surveillance” box in section 3.4.)

Definition: Digital twins

Data obtained from remote sensing can be used to model a digital twin. A digital twin is a digital representation of a physical object, process or service. It can be a digital replica of an object in the physical world, such as a jet engine or a wind farm, or even larger items such as buildings. A digital twin is, in essence, a computer program that uses real-world data to create simulations that can predict how a product or process will perform. While digital twins are a powerful tool to simulate contained systems such as industrial processes, they are not equally well suited to simulating social phenomena. For a more in-depth discussion of how digital twins might not be the best solution when modelling social processes, see Takahashi (2020).


3.3.
Mobility

3.3.1.
PUBLIC TRANSPORT

SDGs: 11.2, 11.3, 11.6

3.3.1.1.
Transportation demand and forecasting

HIGH IMPACT
RISK: Privacy

AI can improve estimates of transportation usage, as well as modelling demand for public transportation and infrastructure. In turn, modelling demand can help in planning new infrastructure. Data-enabled mobility platforms can enable users to access, pay for and retrieve real-time information on a range of transport options. This can promote public transport usage and make it easier for individuals to complete their journeys. In this context, AI is particularly apt for processing information from diverse sources of data:

• AI is being used to learn about public transport usage through smart-card data. Moreover, AI modelling in combination with mobile phone sensor data can provide new means to understand personal travel demand and the urban topology, such as walking route choices (Manley et al., 2018; Ghaemi et al., 2017; Tribby et al., 2017).

• AI has been used to improve the short-term forecasting of public transit ridership (Dai et al., 2018; Noursalehi et al., 2018) (see the sketch after this list).

• AI has also been used to reveal the preferences of customers traveling by high-speed rail (Sun et al., 2018).

• AI can be applied to make public transportation faster and easier to use. For example, AI methods have been used to predict bus arrival times and their uncertainty (Mazloumi et al., 2011).

• Similarly, AI can improve the operational efficiency of aviation, predicting runway demand and aircraft taxi time in order to reduce the excess fuel burned in the air and on the ground due to congestion in airports (Jacquillat and Odoni, 2018; Lee, Malik et al., 2015).
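As a toy version of such short-term ridership forecasting, the sketch below (assuming NumPy and scikit-learn are available) trains a gradient-boosted regressor on lagged boarding counts. The synthetic series stands in for real smart-card or passenger-count data.

```python
# Short-term ridership forecasting from lagged hourly boarding counts.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)  # sixty days of hourly observations
# Synthetic boardings: a daily cycle plus noise.
boardings = (500 + 300 * np.sin(2 * np.pi * (hours % 24) / 24)
             + rng.normal(0, 30, hours.size))

# Features: boardings 1, 2 and 24 hours earlier; target: current boardings.
X = np.column_stack([boardings[23:-1], boardings[22:-2], boardings[:-24]])
y = boardings[24:]

# Hold out the final day to check generalisation.
model = GradientBoostingRegressor().fit(X[:-24], y[:-24])
print("Held-out R^2:", round(model.score(X[-24:], y[-24:]), 3))
```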
Case study: City of Los Angeles Mobility Data Specification

Data is often proprietary. To obtain this data, the city of Los Angeles now requires all mobility-as-a-service providers, i.e., vehicle-sharing companies, to use an open-source API (Application Programming Interface). This way, data on the location, use and condition of all those vehicles are transmitted to the city, which can then use that data for their own services as well as guiding regulation (Open Mobility Foundation, 2021).

3.3.1.2.
Facilitating ride-sharing

Recent years have seen the advent of "as a service" business models, where companies provide a service rather than selling a product or commodity. In mobility, "mobility as a service" refers to a shift away from personally owned modes of transport towards a mobility service provided to a pool of users. In many cities, companies offer people a smartphone app to locate and rent a bike for a short period. Mobility as a service may also potentially reduce congestion and greenhouse gas emissions by increasing public transport usage.


FORECASTING

AI can be used for predicting demand in ride-sharing systems. Data on mobility as a service may also be useful for municipalities, for example in helping to understand how a rideshare service affects urban geography and transport patterns. In cases where such services are run by private entities, data-sharing agreements may be valuable to municipal entities.

SYSTEM OPTIMISATION

AI techniques make it possible to optimise the use of existing physical ride-sharing infrastructure in multiple ways. Business and operating models that offer mobility as a service can leverage digital technologies to support more fundamental changes to how individuals access transport services, reducing the number of personal vehicles on the road.

• Applying AI to survey data has helped relevant stakeholders understand public opinion on ride-sharing systems, such as, for instance, dockless bike-sharing (Rahim Taleqani et al., 2019).

• One challenge is the bike-sharing rebalancing problem, where shared bikes accumulate in one location and are lacking in other locations. In this context, AI can help by improving forecasts of bike demand and inventory. Moreover, AI can help to understand how usage patterns for bike stations depend on their immediate urban surroundings (Regue and Recker, 2014).
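The following minimal sketch illustrates the forecasting step of rebalancing with made-up numbers: each station's expected net flow for the next hour is estimated from historical averages, and stations at risk of emptying or overflowing are flagged. It is not drawn from the cited study.

```python
# Forecast each bike station's net flow (arrivals minus departures) from
# historical averages and flag stations that need rebalancing.

from collections import defaultdict
from statistics import mean

# Hypothetical log entries: (station, hour_of_day, net_flow_that_hour).
history = [
    ("station_a", 8, -6), ("station_a", 8, -4), ("station_a", 8, -5),
    ("station_b", 8, +5), ("station_b", 8, +7), ("station_b", 8, +6),
]

flows = defaultdict(list)
for station, hour, net in history:
    flows[(station, hour)].append(net)

def forecast(station: str, hour: int) -> float:
    """Mean historical net flow for this station and hour."""
    return mean(flows[(station, hour)])

capacity = {"station_a": 20, "station_b": 20}
stock = {"station_a": 4, "station_b": 16}

for station in stock:
    projected = stock[station] + forecast(station, 8)
    if projected <= 2:
        print(f"{station}: projected {projected:.0f} bikes -> send bikes")
    elif projected >= capacity[station] - 2:
        print(f"{station}: projected {projected:.0f} bikes -> remove bikes")
```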
OPTIMISING INTERMODALITY

When traveling by train, the trip to and from the station will often be by car, taxi, bus or bike. There are many opportunities for AI to facilitate a better integration of modes in both the passenger and freight sectors. As an example, bike-sharing and electric scooter services can offer alternatives for urban mobility that do not require ownership and integrate well with public transportation. AI has been used to help integrate bike shares with other modes of transportation by producing accurate travel estimates (Ghanem et al., 2017).

In the freight sector, AI has been applied to analyse modal trade-offs, that is, exchanges of shipments from different types of modalities (e.g., boat to truck, truck to delivery van) (Samimi et al., 2011).

Case study: Citymapper

Making transportation data openly available has supported the development of apps such as Citymapper, which provides access to information about traffic in real time so individuals can choose less congested routes, thereby improving individual journeys as well as reducing congestion and emissions.


3.3.2.
PRIVATE TRANSPORT

SDGs: 7.3, 7.a, 9.b

3.3.2.1.
Improving electric vehicle technology

LONG-TERM ENDEAVOUR

To advance the adoption of electric vehicles (EVs), AI is a decisive technology for EV costs and usability. Work in this area has focused on predicting battery state, degradation and remaining lifetime. With the aim of accelerating the technology behind EVs:

• AI can optimise battery charging by suggesting where users should position themselves during wireless charging and can provide users with better battery charge estimation (Hansen and Wang, 2005; Tavakoli and Pantic, 2017).

• Battery electric vehicles are typically not used for more than a fraction of the day, potentially allowing them to act as energy storage for the grid at other times. AI can help with optimising when energy should be transmitted from vehicle to grid or vice versa (see the sketch after this list).

• AI can also inform the design of batteries and next-generation fuels to optimise the energy consumption of EVs (Fujimura et al., 2013).
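The vehicle-to-grid idea in the list above can be illustrated with a deliberately simple heuristic: while parked, the car charges in the cheapest hours and discharges in the dearest ones, subject to the owner's charging needs. The prices, parking window and required charge hours below are all hypothetical.

```python
# Toy vehicle-to-grid (V2G) scheduler: charge in the cheapest parked hours,
# discharge to the grid in the most expensive ones.

def v2g_schedule(prices, hours_parked, charge_hours_needed):
    """Return one action per parked hour: charge, discharge or idle."""
    by_price = sorted(range(hours_parked), key=lambda h: prices[h])
    charge = set(by_price[:charge_hours_needed])   # cheapest hours
    discharge = set(by_price[-2:]) - charge        # two dearest hours
    return ["charge" if h in charge else
            "discharge" if h in discharge else "idle"
            for h in range(hours_parked)]

# Hypothetical overnight prices (cents/kWh) for 8 parked hours.
prices = [14, 9, 6, 5, 6, 8, 18, 22]
print(v2g_schedule(prices, hours_parked=8, charge_hours_needed=3))
# ['idle', 'idle', 'charge', 'charge', 'charge', 'idle', 'discharge', 'discharge']
```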

3.3.2.2.
Facilitating autonomous vehicles' safety and adoption

RISKS: Societal harm, distress in the local labour market

AI is essential to many aspects of the development of autonomous vehicles (AVs).

• AI is involved in many tasks at the core of autonomous vehicles' functioning, including following the road, detecting obstacles, interpreting other vehicles' trajectories, managing speed, understanding driving styles and more.

• Further, AI can help to develop AV technologies specifically aimed at reducing congestion as well as fuel consumption. For example, AV controllers can be used to smooth out traffic involving non-autonomous vehicles, reducing congestion-related energy consumption (Wu et al., 2017; Wu, Raghavendra et al., 2022).

RISK: Societal harm

While autonomous buses could decrease greenhouse gas emissions, self-driving personal vehicles, on the other hand, may increase emissions by making driving easier, as well as attracting people towards private vehicle ownership and thereby augmenting the industrial production of vehicles. In the long run, this could cause harm to the environment and to society in general.

RISK: Distress in the local labour market

Vehicle automation could lead to the replacement of multiple types of human resources, such as bus, train and truck drivers. Even in situations in which automated vehicles do not explicitly replace human workers, their functions could be reduced to precarious unpaid or low-paid labour.


3.3.3.
TRANSPORTATION INFRASTRUCTURE

SDGs: 11.2, 11.3, 11.6

3.3.3.1.
Optimising electric vehicle infrastructure

In the context of building the appropriate infrastructure for electric vehicles to coexist with traditional mobility options within the city, AI can help in multiple ways.

• In-vehicle sensors and communication data are increasingly becoming available and offer an opportunity to understand the travel and charging behaviour of EV owners, which can, for example, inform the placement of charging stations. AI can also inform the positioning of battery charging towers in the city to facilitate usage and consumer adoption of EVs (Tao et al., 2018) (see the sketch after this list).

• Moreover, AI can help model EV users' charging behaviour, which in turn can inform the positioning of battery charging towers. This will be equally useful for grid operators looking to predict electric load (see section 3.2) (Wang, Li et al., 2019).
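A minimal sketch of data-driven station placement, assuming scikit-learn is available: cluster observed EV trip-end points and treat the cluster centres as candidate charging sites. The coordinates are synthetic, and real siting would add grid-capacity and land-use constraints.

```python
# Candidate charging-station siting via clustering of EV trip-end points.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical trip-end locations (x, y in km) around three activity hubs.
hubs = np.array([[2, 3], [8, 1], [6, 7]])
trip_ends = np.vstack([hub + rng.normal(0, 0.5, (50, 2)) for hub in hubs])

# One candidate charging site per demand cluster.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(trip_ends)
for i, centre in enumerate(kmeans.cluster_centers_):
    print(f"Candidate site {i}: ({centre[0]:.2f}, {centre[1]:.2f}) km")
```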
3.3.3.2.
Optimising traffic flow and control

HIGH IMPACT

AI can be used for both vehicular and pedestrian traffic forecasting based on data obtained from dedicated sensors, such as traffic cameras, and from soft-sensing, such as mobile devices. Moreover, AI is being applied to understand how vehicles are moving around city centres, and in places has helped improve congestion prediction by changing street design and controlling traffic lights. The information can be used to ease traffic as well as reduce emissions. Traditionally, traffic is monitored with ground-based counters that are installed on selected roads and sometimes with video systems, in particular when counting pedestrians and cyclists.

• AI can help with this by automating traffic monitoring through computer vision.

• AI methods have made it easier to classify roads with similar traffic patterns. In this context, remote sensing is key to inferring infrastructure data, as satellite data present a source of information that is globally available and largely consistent worldwide (Krile et al., 2015; Tsapakis and Schneider, 2015; Gastaldi et al., 2013).

• As vehicles can be detected from high-resolution satellite images with high accuracy, AI and vehicle image counts can serve to estimate average vehicle traffic (Kaack et al., 2019).

• AI systems have been used for traffic light (signal) control.

• AI has been used to create platforms for interactive data manipulation to monitor and predict traffic behaviour, while potentially testing out planning scenarios at the same time.


• AI has also been used to prevent the escalation of traffic problems by finding mechanisms for fleet operators and cities to work together, for example by sharing data about congestion or pollution hotspots and rerouting around the problem before it becomes serious.

• The same sensors used for traffic prediction can also be used by an AI to determine how many pedestrians are waiting at the light and how much time it might require them to cross a street (Zaki and Sayed, 2016).

• Smart parking through AI has also been piloted, deploying sensors in parking spaces and communicating the information to road users through apps, with the potential to halve congestion in busy city areas (BT, n. d.).

AI can provide information about mobility patterns, which is directly necessary for agent-based travel demand models, one of the main transport planning tools.

• For example, AI makes it possible to estimate origin-destination demand from traffic counts, and it offers new methods for spatio-temporal road traffic forecasting.

• AI has been used to improve our understanding about passengers' travel mode choices, which in turn informs transportation planning, such as where public transit should be built (Omrani, 2015; Nam et al., 2017; Hagenauer and Helbich, 2017).

• Using AI on survey data can also help with understanding passengers' reasons for choosing a certain mode of transport (Seo et al., 2019).

3.3.3.3.
Predictive maintenance for roads and rails

HIGH IMPACT

• In road networks, it is possible to incorporate flood hazard and traffic information in order to uncover vulnerable stretches of road, especially those with few alternative routes. If traffic data are not directly available, it is possible to construct proxies from mobile phone usage and city-wide CCTV streams; these are promising in rapidly developing urban centres.

• AI can help to improve and optimise transportation infrastructure, for example by reducing the operations and maintenance costs of road surfaces and rail. Tools for efficiently managing limited resources for maintenance include predictive maintenance and anomaly detection. In predictive maintenance, operations are prioritised according to the predicted probability of a near-term breakdown (see the sketch after this list).

• Remote sensing can be used to predict road and track degradation (Soleimanmeigouni et al., 2018).

• For anomaly detection, failures are discovered as soon as they occur, without having to wait for inspectors to show up or complaints to stream in.
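The prioritisation logic of predictive maintenance can be sketched as follows, assuming scikit-learn is available: a classifier estimates each road segment's probability of near-term failure from simple condition features, and crews are dispatched to the riskiest segments first. The features, training data and example segments are all synthetic.

```python
# Predictive maintenance: rank road segments by predicted failure risk.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical features: [age_years, daily_traffic_thousands, crack_index]
X = rng.uniform([0, 1, 0], [40, 50, 10], size=(500, 3))
# Synthetic labels: older, busier, more cracked segments fail more often.
risk = 0.02 * X[:, 0] + 0.01 * X[:, 1] + 0.05 * X[:, 2]
y = (risk + rng.normal(0, 0.2, 500) > 0.8).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

segments = np.array([[35, 40, 8], [5, 10, 1], [20, 30, 6]])
p_fail = model.predict_proba(segments)[:, 1]
for rank, i in enumerate(np.argsort(-p_fail), start=1):
    print(f"Priority {rank}: segment {i} (failure probability {p_fail[i]:.2f})")
```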


3.4.
Public safety

RISKS: Explainability (see section 3.2), mission creep, societal harm

Cities are prime victims of many disastrous events, from extreme storms to earthquakes, and, as a result, they are at the forefront of disaster management. This section reviews various applications through which AI can help mitigate disasters, aid relief and support affected populations. For more ways in which AI applications can effectively support and complement urban governance in preventing disasters, please see section 3.7.

DANGER: AI in policing

Around the world, many law enforcement agencies have turned to AI as a tool for detecting and prosecuting crimes (Almeida et al., 2021). AI applications for policing include both predictive policing tools (as in the use of AI to identify potential criminal activities) and facial recognition technology. All these technologies have been shown to be biased in multiple ways and to lead to harsher impacts on vulnerable communities. For example, the COMPAS algorithm, used to predict the likely recidivism rate of a defendant, was twice as likely to classify Black defendants as being at a higher risk of recidivism than they were while predicting white defendants to be less risky than they were (Larson et al., 2016). By using the past to predict the future, predictive policing tools reproduce discriminatory patterns and often result in negative feedback loops, leading the police to focus on the same neighbourhoods repeatedly, and therefore leading to more arrests in those neighbourhoods. For example, the Strategic Subject List algorithm used data from previous police records to predict how likely people were to be involved in violence, without making any distinction between the victims and the perpetrators (Asaro, 2019). The Chicago police department then used the algorithm to create a "heat list," using it as a suspect list and surveillance tool; the people on it were therefore more likely to be arrested and detained (Asaro, 2019). Similarly, facial recognition technology that shows poor accuracy for certain demographics has been widely adopted by law enforcement agencies, resulting in wrongful arrests and prosecutions. AI systems tend to perpetuate and accentuate existing biases under the guise of mathematical neutrality. Such systems are all the more dangerous when used for detecting and preventing crime, as law enforcement agencies often have a history of discrimination and prosecution of vulnerable communities. Therefore, even with the proper transparency and governance practices in place, AI systems should never be used to make decisions impacting human lives and human rights in such a sensitive context.

3.4.1.
ENVIRONMENTAL SAFETY

3.4.1.1.
Extreme weather event forecasting

AI can predict localised flooding patterns from past data, which could inform individuals buying insurance or homes. Since AI systems are effective at predicting local flooding during extreme weather events, these could be used to update local flood risk estimates to benefit individuals.
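As a minimal illustration of learning flood risk from past data (assuming scikit-learn is available), the sketch below fits a logistic model relating rainfall and terrain features to whether a city block flooded. All numbers are synthetic placeholders.

```python
# Learn localised flood risk from past events with a logistic model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features per block: [rainfall_mm, elevation_m, impervious_frac]
X = rng.uniform([0, 0, 0], [150, 60, 1], size=(400, 3))
# Synthetic outcome: heavy rain on low, heavily paved ground floods more often.
logit = 0.03 * X[:, 0] - 0.05 * X[:, 1] + 2 * X[:, 2] - 2
y = (logit + rng.logistic(0, 1, 400) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
block = [[120, 5, 0.9]]  # a low-lying, heavily paved block in a downpour
print("Flood probability:", round(model.predict_proba(block)[0, 1], 2))
```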


DANGER: AI in surveillance

From tracking individuals to smart video surveillance, AI applications for security have proliferated in recent years. AI surveillance tools (as in, computer vision systems used in order to recognise humans, vehicles, objects and so on) are built into certain platforms for smart cities, remote sensing and smart policing (see the box on AI in policing). The use of these technologies to track and monitor citizens' movements and connections almost invariably results in huge privacy breaches and violation of human rights. For example, an investigation found that Clearview AI, a technology company, has illegally practiced mass surveillance of Canadians since it used billions of images of people collected, without these individuals' knowledge or consent, to train a facial recognition tool which was then marketed to law enforcement agencies around the world (Office of the Privacy Commissioner of Canada, 2021). AI systems have been shown to be exponential accelerants of preexisting surveillance practices, allowing for the questionable usage of data and unprecedented levels of control on citizens (Sahin, 2020). For example, the use of facial recognition technologies in the Alicem app, a national identity tool for online government services in France, raised an uproar from human rights organisations (Draetta and Fernandez, 2021). Furthermore, since technological evolution typically outpaces legislative changes and incoming regulations, it is becoming increasingly difficult to monitor such systems and fully understand the societal impacts they can have. AI surveillance tools can easily give way to harmful and oppressive practices such as the gathering of biometric information without consent, the manipulation of citizens' behaviour or the repression of ethnic minorities. Therefore, AI technologies should never be used in such contexts.

3.4.1.2.
Supporting disaster response

HIGH IMPACT
LOCALLY RELEVANT
RISKS: Digital divides, geographic misalignment

Within disaster management, the response phase happens during and just after the phenomenon has happened. Depending on the situation, responses could involve evacuating threatened areas, firefighting, search and rescue efforts, shelter management or humanitarian assistance (Sun et al., 2020).

• AI has been proven useful for creating maps of areas affected by disaster events through remote sensing, which can help with situational awareness, to inform evacuation planning as well as the delivery of relief (Doshi et al., 2018; Bastani et al., 2018).

• By comparing maps and images pre-event and post-event, AI has been used to understand feature discrepancies and, in turn, to assess damage to structures and infrastructure for prioritising response efforts (Voigt et al., 2007; Gupta et al., 2019) (see the sketch after this list).

• AI has been used to estimate the number of people affected after a disaster in order to provide efficient humanitarian assistance. Indeed, AI systems utilising location and density data for affected areas have been demonstrated to be a helpful alternative source of information, especially for precarious informal settlements.

• AI and satellite images combine to create new approaches for accurately spotting and differentiating structures in urban settings. This is especially useful in places that are informal, inaccessible, isolated, temporary or refugee settlements, or where buildings made of natural materials might blend into their surroundings (UNITAC, n. d.).

• For the same purposes, AI has been used for information retrieval on social media data. As an example, Twitter data can be used to gather information on populations affected by disasters as well as to geolocalise them.

• AI applications have also been used to identify the earliest warning signs of earthquakes, enabling emergency response teams to evacuate people faster.
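The pre-event/post-event comparison can be illustrated with a toy example: difference two small synthetic rasters of the same district and flag cells whose change exceeds a threshold as candidate damage. Real systems use learned features rather than a fixed threshold.

```python
# Toy change detection between pre- and post-event rasters of a district.

import numpy as np

rng = np.random.default_rng(0)
pre = rng.uniform(0, 1, (6, 6))          # pre-event reflectance grid
post = pre.copy()
post[2:4, 3:5] -= 0.6                    # a damaged block changes strongly
post += rng.normal(0, 0.02, post.shape)  # sensor noise

change = np.abs(post - pre)
damaged = np.argwhere(change > 0.3)      # hypothetical damage threshold
print("Candidate damaged cells (row, col):")
print(damaged)
```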


RISK: Digital divides

During Hurricane Sandy in New York City (2012), Twitter served as a lifeline of information, helping affected citizens to spread information and facilitate search and rescue operations. A study following the event, however, found that most tweets spreading useful information about the disaster were geolocated in Manhattan, the richest region of the city and the least affected by the catastrophe (Wang, Lam et al., 2019).

3.4.2.
POPULATION RISKS

3.4.2.1.
Assessing and mitigating health risks

HIGH IMPACT

Industrialisation and climate change are already having a concrete impact on the world's population exposure to health hazards. This is particularly evident when considering the ever-increasing number of heat waves in cities around the world, as well as the deterioration of air quality in highly industrialised countries. Such phenomena produce detrimental impacts on cities' populations, as prolonged extreme heat and pollution episodes can trigger chronic and acute respiratory diseases. AI can contribute to informing citizens and cities about health hazards in various ways:

• By utilising data collected through remote sensing, AI systems can provide insights on urban heat islands, water quality and air pollution at a highly granular geographical scale (Clinton and Gong, 2013; Ho et al., 2014).

• AI methods and demographic data can be used to assess which parts of the population are most impacted by climate change induced health hazards. Such information can help local healthcare authorities to drive outreach (Watts et al., 2017).

3.4.2.2.
Monitoring and ensuring food security

HIGH IMPACT

Extreme weather phenomena caused by climate change, such as droughts, as well as geo-political events, such as wars, are already heavily impacting crop yields all around the planet. This poses a threat to food security, especially within communities depending on such resources. In this context, AI offers multiple monitoring and mitigation solutions:

• AI can be used to distil information on food shortages from mobile phone, credit card and social media data. Such systems represent a valuable alternative to high-cost, slow manual surveys and can be used for real-time forecasting of near-term shortages (Decuyper et al., 2014; Kim et al., 2017).

• AI can also help with long-term localised crop yield predictions. These can be generated through aerial images or meteorological data (You et al., 2017; Wang et al., 2018).

• AI can also contribute to monitoring crop diseases by allowing their identification through computer vision techniques and informing agricultural inspectors of possible outbreaks (Chakraborty and Newton, 2011; Rosenzweig et al., 2014).

3.4.2.3.
Managing epidemics

RISKS: Robustness, transparency, privacy

AI has been demonstrated to be a valuable tool for disease surveillance and outbreak forecasting. Some of these tools have important implications for equity. Indeed, AI-based tools can help healthcare professionals make diagnoses when specialised lab equipment is not accessible. For further discussion on this topic, see section 3.6.
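As a minimal, textbook-style example of the epidemic modelling this topic builds on, the sketch below projects a compartmental SIR model forward in time. The parameters are illustrative, not calibrated to any real outbreak.

```python
# Discrete-time SIR epidemic projection for a city-sized population.

def sir(population, infected0, beta, gamma, days):
    """beta = transmission rate per day, gamma = recovery rate per day."""
    s, i, r = population - infected0, infected0, 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(i)
    return history

curve = sir(population=1_000_000, infected0=100, beta=0.3, gamma=0.1, days=180)
peak_day = curve.index(max(curve))
print(f"Projected infection peak on day {peak_day}: {max(curve):,.0f} cases")
```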


3.5.
Water and waste management

3.5.1.
WATER

3.5.1.1.
Classifying consumption patterns and demand forecasting

Water utilities use long-range water demand forecast modelling to design their facilities and plan for future water needs. As water supply systems become stressed because of population growth, industrialisation and other socioeconomic factors, water utilities must optimise the operation and management of their existing water supply systems (Jain and Ormsbee, 2002). In addition, water utilities need to improve their predictions of peak water demands to avoid costly overdesign of facilities. One critical aspect for optimising water supply system operation and management is the accurate prediction of short-term water demands.

• AI can be used for forecasting water demand through the data collected by digital water meters (DWMs) (see the sketch below).

Irrigated agriculture is one of the key factors responsible for decreasing freshwater availability in recent years. Thus, the development of new tools to help irrigation district managers in their daily decision-making about the use of water and energy is essential.

• In this context, AI can also be used for the forecasting of daily irrigation water demand, including in data-limited settings (González Perea et al., 2019).
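A minimal sketch of short-term demand forecasting from metered data, assuming NumPy and scikit-learn are available: a linear model relates daily demand to temperature and weekend effects. The series is synthetic and stands in for DWM records.

```python
# Short-term water demand forecasting from simple weather/calendar features.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
days = np.arange(365)
temp = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, 365)
weekend = (days % 7 >= 5).astype(float)
# Synthetic metered demand (megalitres/day): warm days and weekends use more.
demand = 200 + 3 * temp + 15 * weekend + rng.normal(0, 5, 365)

model = LinearRegression().fit(np.column_stack([temp, weekend]), demand)
tomorrow = [[27.0, 1.0]]  # a hot Saturday (hypothetical forecast inputs)
print(f"Forecast demand: {model.predict(tomorrow)[0]:.0f} ML/day")
```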
3.5.1.2.
Water quality prediction and wastewater management

HIGH IMPACT
LOCALLY RELEVANT

AI can support water management in response to sudden pollution events and seasonal changes and in modelling complex pollutants. For instance, algae recognition technologies can help in modelling algal occurrence patterns and understanding the presence of associated toxins. Water quality is not only related to biological or physical-chemical factors, but also to the continuity of the supply with adequate levels of pressure and flow. Many water utility companies are beginning to amass large volumes of data by means of remote sensing of flow, pressure and other variables.

• AI can be leveraged to detect the amount and composition of toxic contaminants, which can increase the efficiency of waste management systems (see section 3.5.2.2) (Alam et al., 2022).

Access to clean drinking water is a major challenge of the modern era and one of the UN's Sustainable Development Goals. Water pollution caused by rapid industrialisation and population growth has emerged as a significant environmental challenge in recent years. The treatment and reuse of wastewater through the aid of AI offer a unique opportunity to address both these challenges.

• AI can be used for the modelling and optimisation of the water treatment process, such as removing pollutants from water. In particular, AI can be used to predict and validate the adsorption performance of various adsorbents for the removal of dyes (Tanhaei et al., 2016), heavy metals (El Hanandeh et al., 2021), organic compounds, nutrients, pharmaceuticals, drugs and pesticides (Bouhedda et al., 2019; Gar Alalm and Nasr, 2018).


3.5.1.3.
Water level monitoring

Events at the polar opposites of the water cycle, such as floods of varying intensities on the one hand and droughts on the other, can have devastating effects on society. AI can offer substantial help in the monitoring of such events. (For ways in which AI has been used to mitigate the effects of extreme water-related events, see section 3.4.) Ideally, the data obtained by the remote observation of water-related phenomena should be used by decision-makers to assess potential interventions. These types of AI applications have also been used to inform farmers and in turn optimise irrigation at field scale.

• AI algorithms (specifically, computer vision) together with data from satellites have been used to identify trends in precipitation, evapotranspiration, snow and ice cover, melting, runoff and storage, including groundwater levels. In this context, the use of physical sensors is not suggested, as they could be easily affected by environmental changes (Chandra et al., 2020).

3.5.1.4.
Predictive maintenance

AI algorithms can provide spatial information on the amount and type of water losses.

• AI algorithms can be used to perform a continuous calibration of the network, including analysing the structure of the errors (difference between the measurements and the model predictions) at each control point, and extracting information from the error patterns. Different types of water losses can also be distinguished (for instance, pipe leaks versus unauthorised consumption).

• For district meter area monitoring there has been increasing interest in using sensor data for abnormality detection, such as the real-time detection of bursts.

3.5.2.
WASTE

SDGs: 12.4, 12.5

3.5.2.1.
Forecasting waste generation

Municipal solid waste (MSW) management is a major concern for local governments working to protect human health and the environment and to preserve natural resources. The design and operation of an effective MSW management system requires accurate estimation of future waste generation quantities, which are affected by numerous factors.

• AI has been used to forecast municipal solid waste generation. Traditional solid waste generation forecasts use data on population growth and average per-capita waste generation. Such historical data, however, are non-linear and highly variable; AI has been demonstrated to be a good tool to handle this uncertainty (Abbasi and El Hanadeh, 2016).

3.5.2.2.
Optimising waste collection, transportation and classification

HIGH IMPACT
LOCALLY RELEVANT

Municipal waste collection is a costly and complex process. Trucks often visit waste bins that are only partially full, which is an inefficient use of resources. Moreover, the cost of waste collection and transportation accounts for 60% to 80% of the total waste management system costs; hence, optimising the route of vehicles for waste collection and transportation can save time, reduce the running distance, vehicle maintenance cost and fuel cost, and effectively arrange vehicles and allocate human resources (Abdallah et al., 2020). Many factors can affect waste accumulation, which makes it difficult to predict the fill levels of waste bins. In this context, AI can help in multiple ways:

• To improve the efficiency of waste collection, AI can be used to detect the fullness of a waste bin by real-time monitoring of waste levels within bins. Intelligent detection of waste bin levels can reduce the driving distance of trucks, reducing both cost and greenhouse gas emissions (see the sketch after this list).

• AI has also been used to forecast the collection frequency for each location, reducing unnecessary visits to locations with empty bins.

• AI can also be used to analyse the effects of changes in waste composition and density on truck route optimisation.
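The fill-aware collection idea can be sketched with a simple heuristic: skip bins predicted to be below a fill threshold, then visit the remaining bins with a nearest-neighbour route. Bin locations, fill predictions and the threshold are hypothetical, and production systems would use a proper vehicle-routing solver.

```python
# Fill-aware waste collection: filter bins by predicted fill level, then
# build a greedy nearest-neighbour route from the depot.

import math

# Hypothetical bins: id -> ((x, y) in km, predicted fill fraction).
bins = {
    "b1": ((0.5, 1.0), 0.92), "b2": ((1.5, 0.2), 0.35),
    "b3": ((2.0, 1.8), 0.80), "b4": ((0.2, 2.2), 0.15),
}
FILL_THRESHOLD = 0.6  # only collect bins predicted to be over 60% full

to_visit = {k: p for k, (p, fill) in bins.items() if fill >= FILL_THRESHOLD}
route, pos = [], (0.0, 0.0)  # truck starts at the depot
while to_visit:
    nearest = min(to_visit, key=lambda k: math.dist(pos, to_visit[k]))
    route.append(nearest)
    pos = to_visit.pop(nearest)

print("Collection route:", " -> ".join(route))  # b1 -> b3
```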
Traditional waste classification mainly relies on manual selection, which is both inaccurate and inefficient. With the development of AI, many approaches have been proposed to improve the accuracy of recyclable waste identification through various techniques. Computer vision has been particularly helpful:

• AI has been applied on image data to recognise various types of cardboard and paper. Similarly, AI has been demonstrated to be efficient in recognising different types of plastics.

3.5.2.3.
Optimising and controlling treatment and disposal

Quantifying useful by-products of municipal waste such as biogas and energy, as well as harmful by-products such as leachate and fugitive emissions, is essential for optimal waste management. In this context, AI can be used to predict the quantity and composition of different by-products generated from waste management processes within landfills, during incineration and in composting.

LANDFILL

Within landfills, AI can help with various types of prediction tasks, which in turn are needed for proper design and operation and can reduce environmental impacts.

• AI has been used to estimate landfill areas and monitor landfill gas, and it can predict landfill leachate generation using information on factors such as temperature and rainfall.

• AI has been used to predict biogas generation from bioreactor landfills. In this context, AI can help with the prediction and optimisation of the energy produced from solid waste fractions by using the physical and chemical composition of the waste.

INCINERATION

Determining the status of the municipal solid waste incineration process is a difficult task. This is due to varying waste composition, operational differences of incineration plants, and maintenance uncertainties of the incineration devices (Kalogirou, 2003). In this context, AI has been used for:

• Predicting and automating heating values during the incineration process.

• Monitoring and minimising the emission of pollutants in the air (Glielmo et al., 1999).

COMPOSTING

After composting, waste becomes a hygienic and odourless humus, realising the key aspects of harmlessness, waste reduction and recycling. AI can help model the complex processes that occur during composting, such as:

• Understanding waste maturity: AI has been used to understand the morphology, texture and colour characteristics of compost images and establish compost maturity.


3.6.
Healthcare

Rapid urbanisation and environmental changes have had major health implications for city populations as global health challenges become more pronounced than ever. Furthermore, the double burden of infectious and non-communicable diseases can be exacerbated in the city environment (Galea and Vlahov, 2005). Many strategies have been implemented by local health authorities and their medical staff to improve population health. AI solutions for health have recently gained in popularity as part of a wide array of digital health technologies (WHO and UN-Habitat, 2016).

3.6.1.
HEALTH PROMOTION AND DISEASE PREVENTION

HIGH IMPACT
LONG-TERM ENDEAVOUR

SDGs: 3.4, 10, 5
RISKS: Privacy, data quality, historical bias, accountability uncertainty, reliability and robustness

AI can help support actions to enhance communities' health by improving care on two levels: efficient patient monitoring and management, and relevant support for practitioners.

3.6.1.1.
Monitoring patients

Monitoring patients requires following up on patients' health condition and well-being regularly. This implies frequent exams and fluid communication between practitioners and patients. However, with the increasing pressure on the healthcare system and the inadequate coverage of territory, appropriate care cannot always be provided. In particular, marginalised areas of large cities and smaller cities with aging populations are those primarily affected by the phenomenon of medical deserts. AI represents a very promising tool to better engage with patients and enable remote monitoring of health conditions.

• Health monitoring systems (i.e., "remote health monitoring") may be used for continuous healthcare monitoring of users' body parameters: heart rate, body temperature, blood pressure, pulse rate, respiration rate, blood oxygen saturation, posture and physical activities.

• Self-reported health data can be collected with mobile devices and body sensors, such as wearable devices integrated into an IoHT (Internet of Health Things) or IoMT (Internet of Medical Things). AI techniques can be used to directly analyse this data and communicate results to the patient, who may adapt their behaviour (Chui et al., 2021; Nahavandi et al., 2022; Sujith et al., 2022).

• Real-time overviews of patient health status can be communicated to practitioners via an electronic health record, enabling them to act in a timely manner in the case of out-of-the-ordinary readings of vital signs (Chui et al., 2021; Santos et al., 2020).
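A minimal sketch of remote-monitoring alerts, using a simple z-score rule rather than a trained model: flag vital-sign readings that deviate strongly from a patient's own baseline. The readings below are invented.

```python
# Flag vital-sign readings that deviate strongly from a patient's baseline.

from statistics import mean, stdev

# Hypothetical resting heart-rate history for one patient (beats/minute).
baseline = [62, 64, 61, 63, 65, 62, 60, 63, 64, 61]
mu, sigma = mean(baseline), stdev(baseline)

def check(reading: float, threshold: float = 3.0) -> str:
    """Alert when a reading is more than `threshold` SDs from baseline."""
    z = (reading - mu) / sigma
    return f"{reading} bpm -> {'ALERT' if abs(z) > threshold else 'ok'} (z={z:+.1f})"

for reading in (63, 58, 91):
    print(check(reading))
```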


3.6.1.2.
Supporting practitioners

Medical practitioners face many challenges. A changing health landscape and increased pressure in medical fields can result in more complex responsibilities and lower professional well-being. Many practitioners are required to push past the limits of their skills in order to integrate new forms of information: multimodal, complex data such as images, genotypic data and numerical information. Furthermore, even the most experienced practitioners can be biased by their subjectivity. Both the quality of care and the well-being of the medical community can be improved through the support of various AI systems.

• Public health practitioners can be assisted by intelligent web-based applications and online smart devices which use AI methods to extract health and non-health data at different levels of granularity.

• Epidemiological information can be processed by AI techniques to provide evidence-based strategies for the control of chronic diseases (Shaban-Nejad et al., 2018).

• Other areas of health intervention have been supported by smart systems. These areas include nutrition and diet, fitness and physical activity, sleep, sexual and reproductive health, mental health, health-related behaviours, environmental determinants of health and screening tools for pain (Cho et al., 2021).

• AI can be used for disease diagnosis and prediction: AI has successfully been integrated with other medical technologies such as medical imaging, including X-rays, CT scans and MRIs (Chui et al., 2021), for computer-aided diagnosis. These systems can identify areas of concern for further evaluation as well as provide information on the probability of diagnosis (Santosh et al., 2019; Zhou and Sordo, 2021). These techniques have been used effectively in relation to COVID-19, voice disorders, cardiovascular diseases, diabetes and especially Alzheimer's disease (Chui et al., 2021).

The nature of the application and the data integrated in the AI systems raise a series of issues pertaining to privacy and protection. Electronic health records represent extremely sensitive and confidential information on patients and their peers. Furthermore, the quality of the data may be questioned, as well as its alignment with and representativity of vulnerable populations and across genders. Accountability structures regarding AI algorithms are also central to the implementation of computer-aided systems in the health sector, as poor technical performance may be wrongly attributed to health workers.

3.6.2.
HEALTH SYSTEM AND HEALTHCARE ORGANISATION

HIGH IMPACT
LONG-TERM ENDEAVOUR

SDGs: 3, 11, 12, 16
RISKS: Skill shortage, financial burden, misalignment between AI and human values

High costs, workflow inefficiencies and administrative complexities are significant challenges the health sector faces which AI can help to improve. In particular, AI systems can support governments and agencies in the strengthening of the health system and the operation of relevant organisations (UN Committee on Economic, Social and Cultural Rights (CESCR), 2000).

• AI can improve data entry and retrieval procedures through the automation of administrative and documentation tasks, including coding and billing, medication refills and reconciliations. Automated systems may also lead to a decrease in administrative complexities by identifying and eliminating fraud or waste, as well as assisting in the optimised allocation of human resources.

• AI can also be useful in predicting key clinical outcomes: prior insight on patient trajectories, mortality and readmission probability may improve the overall logistics and the management of hospital resources.


• AI could also be appropriate in the improvement of clinical decision-making by identifying optimal treatment choices. In some oncology applications, AI could be used to understand and address challenges such as poor clinical trial enrolment, disparities in oncological care, rising costs and non-uniform access to knowledge and multidisciplinary expertise (Lin et al., 2021; World Health Organization, 2021).

• AI has also been shown to be useful for complex decision-making, programme policy and planning. For example, AI has been used to predict, from administrative data, the length of stay of the health workforce in marginalised communities of South Africa. In Brazil, AI has proven to be advantageous in the allocation of resources across the country (World Health Organization, 2021).

• AI may also be used to measure the impact of health-related policies aimed at increasing workforce efficiency (Schwalbe and Wahl, 2020).

The success of AI integration in health administration and governance bodies will depend on the existing infrastructure and the financial resources a city has available for health investments. Without sufficient capacity to design, implement and use appropriate AI systems, a balance between their benefits and risks may never be found. In particular, implementing AI tools in an already pressurised sector may appear to increase employee workload because of the adaptation phase and the digital training required.

3.6.3.
PUBLIC HEALTH SURVEILLANCE OF DISEASE OUTBREAKS

HIGH IMPACT

SDGs: 3, 11, 14
RISKS: Optimising the non-optimisable, data misalignment, geographic misalignment, mission creep

AI is instrumental in the prevention and management of public health threats, including the mitigation of, preparedness for and response to emergencies such as epidemics. The identification of early, accurate and reliable health indicators is crucial for the needs of health surveillance. Thanks to the generation of large amounts of health-related and population data, AI presents significant potential for the enhancement of health surveillance capabilities.

AI systems are able to source data to perform data analytics such as outbreak detection, early warning spatio-temporal analytics, risk estimation and analytics, and context-rich trend prediction (see the sketch below).

Additionally, AI techniques can improve the quality of epidemic modelling, simulation and response assessments while considering complex interactions and constraints in the environment (Zeng et al., 2021). These prospective insights can inform authorities while comparing appropriate options for prevention and control strategies (Wong et al., 2019).

AI has already been used in the tracking of health behaviours during disease outbreaks and the spread of diseases. AI can also be instrumental in the detection of outbreaks (Daughton and Paul, 2019; Gunasekeran et al., 2021; Schwalbe and Wahl, 2020; Xu et al., 2021).
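As a toy version of outbreak detection, the sketch below flags days on which reported cases rise well above a recent moving baseline. The counts and the three-sigma rule are illustrative; operational systems use far richer spatio-temporal models.

```python
# Flag days where reported cases exceed a moving baseline by three SDs.

from statistics import mean, stdev

daily_cases = [12, 9, 14, 11, 10, 13, 12, 11, 15, 13, 12, 29, 41, 55]
WINDOW = 7  # baseline window (days)

for day in range(WINDOW, len(daily_cases)):
    baseline = daily_cases[day - WINDOW:day]
    threshold = mean(baseline) + 3 * stdev(baseline)
    if daily_cases[day] > threshold:
        print(f"Day {day}: {daily_cases[day]} cases exceeds "
              f"threshold {threshold:.1f} -> possible outbreak")
```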


However, although AI has evidently led to great advances in the field of health surveillance, reducing such serious issues as public health to technical solutions may lead to an overestimation of what AI can actually achieve on its own. The limitations of AI-powered models were highlighted, for example, during the COVID-19 pandemic, as the predictive performances of many AI systems were questioned (Wynants et al., 2020). Furthermore, the quality of the predictions will depend on the quality of input data. The risks of poor demographic representation and geographic misalignment are prominent in a context of limited data and poor digital coverage. These risks are all the more important as these AI systems will be used to support policy decisions that have serious implications.

Case study: Kashiwanoha, Japan

In Kashiwanoha, smart health strategies stretched the smart city agenda beyond technological innovations to address localised social issues. A mix of private and public actors, universities and citizens worked to implement AI experiments in monitoring and visualisation, educational initiatives and a variety of incentives for behavioural change. Thus, active pursuit of improved public health in Kashiwanoha has become a key part of the city's identity. These smart city strategies show how technology can create a link between the three sides of the sustainability triangle (i.e., environment, economy and society) (Trencher and Karvonen, 2017).

Case study: Khon Kaen, Thailand

In Khon Kaen, the ninth largest municipality in Thailand, the aging population is putting pressure on the municipality's public health capacity. To tackle this issue, the municipality developed the Khon Kaen Smart Health model, which incorporates three components: preventive healthcare services; smart ambulances and a smart ambulance operation centre; and health information exchange. Through the use of data gathered from IoT devices measuring various health factors such as heart rate and blood pressure, the project aims to give accurate predictions of vulnerable citizens' health conditions and personalised suggestions for better health outcomes. These personalised suggestions range from dietary changes to increased sleep and increased physical activity. This made it possible to build an awareness of risk factors for non-communicable diseases among the general population, as well as provide advice and suggestions for proper behaviours. This initiative won first prize in the Public Health and Social Services category of the IDC Smart City Asia Pacific Awards 2018 (Godoy et al., 2021).


3.7.
Urban planning

3.7.1.
PLANNING AND MANAGEMENT

3.7.1.1.
Population assessment

LOCALLY RELEVANT

SDGs: 1, 10, 11, 16, 17
RISKS: Privacy and data protection, historical bias, data quality, data misalignment, presence of sensitive data

Understanding the social and demographic context when planning urban projects is crucial in order to produce sustainable designs that are not simply implemented in a top-down manner. Population assessment is therefore a requirement for the planning and management of cities. Machine learning can be used to measure, process and analyse a population's characteristics and behaviours for various objectives.

• Traditional machine learning methods are used to analyse official datasets from censuses or surveys carried out at the country or city level. These methods paint an aggregate picture of the socio-demographic characteristics of neighbourhoods to inform long-term policies and planning.

• The development of computer vision has provided the means of estimating population count and socio-economic conditions using remote sensing and GIS technology. This can be especially useful in contexts of very limited data and the absence of global data collection strategies (Xie et al., 2015) (see the sketch after this list).

• Population mapping is possible using new forms of data such as telecommunication data, credit card data and social media for real-time population estimation. The interactions between individuals and communities as well as their mobility behaviour can also be inferred. Information can be retrieved in order to identify meaningful neighbourhoods.

• Population mapping initiatives can be leveraged by AI techniques to evaluate the state of economic and spatial inequalities at the urban scale.

• AI methods can be adapted to include non-residents in the analysis of city behaviour for better planning. In particular, outside commuters or tourists can be included to provide richer information on the city.
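A minimal sketch of the remote-sensing approach, assuming NumPy and scikit-learn are available: fit a regression from grid cells that have census labels, then apply it to cells that lack them. The features (stand-ins for night-light intensity and building-footprint area) and populations are synthetic.

```python
# Estimate population in unmapped grid cells from satellite-derived proxies.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Features per 1 km^2 cell: [night_light_index, building_area_ha]
X_census = rng.uniform([0, 0], [100, 40], size=(300, 2))
pop_census = 50 * X_census[:, 1] + 8 * X_census[:, 0] + rng.normal(0, 100, 300)

model = LinearRegression().fit(X_census, pop_census)

# Cells lacking census data, described only by satellite-derived features.
X_unmapped = np.array([[80, 30], [10, 2]])
for features, estimate in zip(X_unmapped, model.predict(X_unmapped)):
    print(f"Cell {features} -> estimated population {estimate:,.0f}")
```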
However, because population data are very often of a sensitive nature, carrying out population assessment for the benefit of urban planning is not risk-free. Concerns over privacy and accuracy should be considered. Data collected through individual phone signals, credit card purchases or social media activities can reveal a person's location at all times, be explicit about their financial situation and reveal much about their identity. Very often the second-hand processing of this sort of data is not explicitly authorised by users. Many sensitive aspects of someone's personal life can be deduced from such data, especially when cross-referenced. This may lead to the marginalisation of or discrimination against population groups. Furthermore, the potential inaccuracy of those data and the resulting analyses should be accounted for, especially when leading to planning projects or policymaking. The digital divide in particular may often lead to a misalignment between the processed data and the affected population.


Case study: AI building tracker

UN-Habitat has piloted the use of AI to map the growth of informal settlements in eThekwini, South Africa, by detecting informal settlements on satellite imagery. Initially, there was no accurate data on the fast-changing informal settlements, but with the production of up-to-date maps through the use of AI, the local government is able to better plan and prioritise urban upgrading interventions. As a result, they are able to improve the delivery of basic urban services (UNITAC, n. d.).

3.7.1.2.
Smart urban management

SDGs: 6, 7, 8, 9, 11
RISKS: Misalignment of AI and human values, societal harm, lack of mission transparency, shift in the labour market

The responsibilities of urban managers include the operation of many intertwined urban services. AI technology can be used to coordinate these different services in a more efficient way. Smart management refers to the use of AI by corporations to support local governments in planning and managing cities.

• Smart management centralises the management of various city-led services and infrastructure into a single virtual space, with some corporations providing additional data to help decision-making.

• Smart management tools serve to alleviate coordination and communication efforts between services, thus reducing the costs, time and risks associated with such complex tasks.

• The digital platforms used for smart management can present data visualisation and analysis to human agents, delivering key information on the status of the system in real time.

• Online "control rooms" can be implemented to oversee and combine larger sets of urban data.

However, this combination of expertise, strategies, services and data into what is defined as a "platform ecosystem" raises a series of concerns as to the risks of urban planning. Smart management tools have raised ethical issues regarding the influence of private companies on the objectives set by governments, while also questioning the reduced role of citizens in decision-making processes. Moreover, by replacing the role of clerks and other city employees, smart management heightens the risks associated with an increase in the automation of tasks previously performed by governmental officials.

3.7.1.3.
Risk assessment and management

SDGs: 8, 9, 10, 11
RISKS: Misalignment of AI and human values, societal harm, lack of mission transparency, geographic misalignment

The population growth in cities and the increased risk of extreme natural events combine to produce very high negative impacts in urban areas. There is now a need to redesign the built environment to mitigate disasters (Caparros-Midwood and Dawson, 2015). The design of a resilient city requires proactively planning according to future risks and possible mitigation solutions. While AI tools are often focused on responding to adverse events (Sun et al., 2020), they can effectively support and complement preventive approaches as well.

In order to lay out a responsible and sustainable urban plan, city leaders can lean on AI techniques in the different phases of risk mitigation. City planners will first need to acquire a profound understanding of the natural context. This means knowing exactly what kinds of risks the city may be exposed to. Knowledge of the existing urban context is then crucial for a design adapted to actual needs. This implies linking the built environment and the population information to the specific risks identified. Finally, in light of the natural risks and the local environment, the spatial plan of the city can be optimised for more resilient urban forms. The design of infrastructure, regulations for land use and construction standards can be properly adapted for disaster mitigation. AI-driven systems can provide very interesting insights at these key phases.
phases.


• Provide accurate maps of risk: Local knowledge and historical data can be digitalised into a Geographic Information System (GIS) and coupled with remote sensing data for AI methods to better map out the potential geographic risks (Wang, 2018; Huang et al., 2021); for a comprehensive review of risk maps, see Sun et al. (2020). (See the sketch below.)

• Create digital twins of buildings: Related to both the idea of a digital twin of the city and building information management (BIM), various types of data can be used by AI to recreate models of buildings and ultimately building inventories (Wang, Lam et al., 2019). They can prove very useful when combined with population assessments in the next mitigation step.

• Simulations using population assessment, digital twins and risk maps: Relying on the previous models of the city (risk maps and digital twins), AI-based simulations can evaluate intervention scenarios in order to optimise the urban plan for specific objectives such as disaster mitigation.

The risks relating to these uses of AI will reflect those more generally encountered when integrating AI for public safety. More specifically in this context, the quality of the spatial information may not be sufficient to develop useful tools, or may favour specific neighbourhoods for which data does exist. Furthermore, the rareness of extreme events makes for scarce historical data. To circumvent such limitations, learning from other contexts may be tempting but carries the risk of geographic misalignment.
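The risk-mapping step can be illustrated with two toy raster layers: a hazard grid and a population grid are multiplied to rank city cells by expected impact. Real risk maps combine many calibrated layers; everything here is synthetic.

```python
# Toy risk map: risk = hazard intensity x exposed population, per grid cell.

import numpy as np

rng = np.random.default_rng(0)
hazard = rng.uniform(0, 1, (4, 4))           # normalised hazard per cell
population = rng.integers(0, 5000, (4, 4))   # residents per cell

risk = hazard * population
top3 = np.unravel_index(np.argsort(risk, axis=None)[::-1][:3], risk.shape)
for row, col in zip(*top3):
    print(f"Cell ({row},{col}): risk score {risk[row, col]:,.0f}")
```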

3.7.2.
NEIGHBOURHOODS

3.7.2.1.
Real estate values

LOCALLY RELEVANT

SDGs: 10, 11
RISKS: Negative feedback loops, data misalignment, data quality, mission creep, historical bias

The rate of innovation in property technology through AI techniques such as virtual and augmented reality has been rapidly escalating in the last few years. Algorithms are being adapted as part of city management tools and have been useful for estimating the market value of properties and forecasting future price trajectories of neighbourhoods (see the sketch after this list). Very diverse information can be used to monitor the evolution of prices: property market data; property attributes such as size, quality and history; and area-level information such as mobility metrics, crime rates and public amenities. This allows managers to identify, intervene, control or predict the evolution of neighbourhoods and the advent of deterioration.

• Virtual immersive tours provide 3D views of real estate properties. They allow for zoom-in, audio commentary and 360° viewing without physically being in the property area, which is useful for projects that are still under construction.

• Renters and landlords are finding AI monitoring and interaction dashboards to be a useful tool. These dashboards can monitor and offer everything from rent prices (informing landlords whether to raise or drop rates), to chat boxes that allow them to communicate with tenants about repairs and maintenance, to automatically scheduled appointments based on the information provided by tenants (Cameron, 2018).
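A minimal sketch of automated property valuation, assuming NumPy and scikit-learn are available: a random forest learns prices from property and area-level attributes. The features, coefficients and example listing are entirely synthetic.

```python
# Automated property valuation from property and area-level attributes.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Features: [floor_area_m2, rooms, dist_to_transit_km, crime_index]
X = rng.uniform([30, 1, 0.1, 0], [200, 6, 10, 10], size=(800, 4))
price = (3000 * X[:, 0] + 20000 * X[:, 1]
         - 15000 * X[:, 2] - 8000 * X[:, 3]
         + rng.normal(0, 20000, 800))

model = RandomForestRegressor(random_state=0).fit(X, price)
listing = [[85, 3, 1.2, 4]]  # a hypothetical flat near transit
print(f"Estimated value: {model.predict(listing)[0]:,.0f}")
```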


• AI algorithms are able to detect and identify potential fraud in real estate and insurance by allowing appropriate authorities to automatically read data from documents (i.e., statements, credit reports and proof of income) and efficiently allowing them to make a first round of decisions before they are passed on to underwriters.

However, the negative externalities of such systems on the fluctuating property market may not always be considered in their implementation. Therefore, acting upon the insights provided by AI may result in outcomes that deviate from expectations. Checkpoints must be included to keep track of the ongoing ethical issues to prevent the adverse effects of housing segregation, for example.

3.7.2.2.
Noise and comfort

NOISE

[Badge: Locally relevant]

SDGs: 9, 11, 13, 14, 15
RISKS: Data misalignment, data quality, concept drift

Noise is a leading source of discomfort for city residents and, while urban noise seems unavoidable, techniques relying on AI exist to limit it. AI can help keep noise from transportation, construction, entertainment and human activity to a minimum by providing soundscape insight for planners.

• Noise barrier optimisation in urban planning, for example, uses parameters such as absorption rate of barrier material, barrier height and other information on noise maps to simulate what measures need to be in place during the urban planning stage to reduce noise pollution (Trombetta et al., 2018).

• AI-powered measures have been used to address vehicle-based noise pollution and capture environmental noise anomalies. Data collection devices measure excessive noise levels on commercial or high-risk streets. Quantified notifications are sent to local authorities when a sound exceeds the decibel range determined on the street level (i.e., gunshot detection, number plate scanning) (Nokia, 2021); a minimal sketch of this pattern appears at the end of this subsection.

• AI can be used in order to detect underlying patterns in the noise data that may represent early signs of natural threats. Researchers have developed a deep learning application that determines what ground noise is natural (i.e., what is human-made and what is not) and filters the data. Such AI systems can provide early warnings of earthquakes, for instance, as soon as the first tremors are detected. Seismometers—instruments that respond to ground noise—are placed in earthquake-prone areas (Yang et al., 2022).

Accuracy and discrimination should be a major concern when using AI in noise detection scenarios. Inaccurate or incomplete training data could result in either ineffective applications or negative bias against certain neighbourhoods. A deliberate effort should be made to make sure noise control and evaluation is as accurate as possible.
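As a minimal sketch of the threshold-based notification pattern in the second bullet above (the sensor identifiers, the 70 dB limit and the alert function are invented for the example):

```python
# Hedged sketch: flag noise readings above a street-level decibel limit
# and notify the local authority. All values are illustrative.
READINGS = [
    ("sensor-01", 62.4), ("sensor-02", 78.9),
    ("sensor-03", 70.1), ("sensor-01", 85.3),
]
LIMIT_DB = 70.0  # hypothetical decibel threshold set for this street

def notify_authority(sensor_id: str, level_db: float) -> None:
    # Stand-in for an integration with a municipal alerting system.
    print(f"ALERT: {sensor_id} measured {level_db:.1f} dB (limit {LIMIT_DB} dB)")

for sensor_id, level_db in READINGS:
    if level_db > LIMIT_DB:
        notify_authority(sensor_id, level_db)
```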


COMFORT

[Badge: Locally relevant]

SDGs: 3, 11
RISKS: Geographic misalignment, outcome misinterpretation, negative feedback loops

Low-quality environments contribute to negative city settings in the form of a decreased sense of safety, vibrancy and liveliness. These aspects contribute to what can be defined as the comfort of an area. Urban comfort is the collective adaptation of a group of people in an area to certain microclimatic variables. Residents become acclimated to their outdoor urban space and define a range of parameters (i.e., thermal, visual, acoustic and air quality) at which comfort is achieved. AI techniques can encourage the inclusion of individual experience in the design of the city.

• Models are now able to use geospatial data to quantify and assess urban comfort levels via active monitoring devices that provide data to cloud servers. This type of monitoring system enables the generation of thematic maps. Indices of environmental comfort are then produced and can be used by residents (cyclists or pedestrians) when route planning or by local authorities for policymaking (Salamone et al., 2017).

• AI is able to predict and simulate the thermal comfort of urban plans before they are even executed. The thermal comfort of different urban landscapes can be determined so that planners can assess whether factors such as sunlight exposure, views and tree positioning affect comfort levels.

Urban comfort is part of the subjective experience of the city and, as such, is very much a relative concept: the way comfort is defined is context-dependent. Misalignment is a risk planners face if the system is not properly framed for the specific area and population. Furthermore, in order to limit the high energy consumption of these systems, there is a need to coordinate the management of all city sensors and urban services.

3.7.3.
BUILDINGS, PUBLIC SPACES AND INFRASTRUCTURE

3.7.3.1.
Smart buildings

[Badges: High impact, long-term]

SDGs: 11, 9, 12, 7, 3
RISKS: Digital divide, violation of privacy in data collection, concept drift

The comfort, security and energy consumption of homes and workplaces can be optimised when they are what we call “smart buildings.” When smart devices powered by AI are included in the technical systems during the construction or renovation of the buildings, users are able to better interact with these systems and adapt them to their needs.

• Smart buildings, through embedded technological devices or smart objects inside or outside of buildings, can produce, process and transfer data and metadata about the property, its technical system and its users.

• Users can control and manage the technical systems of smart buildings remotely through commands or sensing.

• The building’s systems can interact and self-calibrate using the IoT principle and optimisation algorithms (a toy sketch of such a feedback loop follows this list).
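As a toy illustration of the self-calibration idea in the last bullet (the sensor stand-in, setpoints and gain are invented for the example), a building control loop can repeatedly compare a comfort target against sensor feedback and correct itself:

```python
# Hedged sketch: a proportional feedback loop nudging a heating setpoint
# towards a comfort target. All values are illustrative assumptions.
target_temp = 21.0  # occupant comfort target (deg C)
setpoint = 18.0     # current heating setpoint (deg C)
gain = 0.5          # how aggressively the controller corrects errors

def read_room_temperature(current_setpoint: float) -> float:
    # Stand-in for an IoT sensor; here the room simply lags the setpoint.
    return current_setpoint - 0.5

for step in range(10):
    measured = read_room_temperature(setpoint)
    error = target_temp - measured
    setpoint += gain * error  # proportional correction
    print(f"step {step}: measured={measured:.2f} C, setpoint={setpoint:.2f} C")
```

In practice, the correction could come from learned models balancing comfort against energy use, which is precisely where the privacy and energy concerns discussed next come into play.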


Because smart building AI technology deals with domestic and work spaces, serious concerns exist regarding the transfer of information and data to third parties that own and design the technology or those that could access the data it produces. Audio and image recordings provide personal knowledge about the user’s habits, characteristics and identity. Metadata and logs provide both precise and estimated information about occupants’ routines, including their daily or long-term presence and absence. Additionally, individuals who have access to the data, such as co-occupants, can use such a system to monitor or watch other individuals. Furthermore, a key concern pertains to the centralisation of data and the concentration of information by dominant digital platform corporations that may influence urban planning decisions. The construction of smart buildings can also contribute to increasing inequalities between new and old built, digitalised or marginalised communities.

3.7.3.2.
Design

[Badges: High impact, long-term]

SDGs: 11, 12, 9, 7, 13
RISKS: Shift in labour market, lack of mission transparency, misalignment of AI and human values

Design has been integrating machine learning, deep learning and generative adversarial networks to test programmatic compositions and layout, develop building form and concepts, ease the production of technical drawings and 3D modelling, conduct historical surveys, optimise material usage and deepen our understanding of existing design heritage. The integration of AI systems in design, especially architectural design, is relatively recent and remains explorative (Chaillou, 2022). While it is often associated with a parametric approach in architecture, the use of AI systems encompasses a much wider array of applications.

Many aspects of the design process could integrate AI systems. For example:

• Defining the implantation, bulk and shape of buildings

• Automating iterations of building program and floor plan layouts (As et al., 2018)

• Generating 3D models from 2D drawings or point cloud data

• Creating stylistic or ornamental pattern variations

• Defining the optimal distribution of green and blue infrastructure

• Evaluating the social response to building or open space design

While the use of AI would not replace the need for professional designers, it might greatly impact the labour market and work ethos of the design and construction industry. Designers and engineers might be required to validate their design or calculation using AI, creating factitious standards and expectations from clients. Such new requirements might put undue pressure on university programs in design or architecture to teach programming, with the risk of decreasing the perceived quality of schools that are unable to hire design professors with a deep understanding of AI. The use of AI might offer a false guarantee for design projects that do not fulfil other quality standards of aesthetics, integration and innovation.


3.7.3.3.
Construction and structure assessment

[Badges: High impact, long-term]

SDGs: 9, 11, 12, 13, 17
RISKS: Skill shortage, shift in labour market, stacking of faulty AI, AI system expiration

Construction uses various AI systems, including machine learning, computer vision, knowledge-based systems and natural language processing (Abioye et al., 2021). Key applications include generating digital copies of existing or historical buildings and sites, streamlining project and budget management and automating construction work. The aim of AI applications is to deliver safer and more sustainable infrastructure faster and with reduced costs and risk. AI may also provide clues on past historical sites of cultural importance. Hence, the use of AI in construction relies on and helps produce many types of data such as 2D and 3D scans of buildings or building information management (BIM).

All phases of a building life cycle can benefit from AI—feasibility study, design, planning, construction, maintenance, renovation and demolition—for example by:

• Estimating construction costs and risks associated with types of ground

• Optimising the distribution of technical systems

• Predicting cost overrun from data about the design, the material and the workforce involved

• Automating the fabrication or the prefabrication of complex facade work

• Increasing the reliability of structural risk assessment

• Extrapolating on the lost patterns and artwork of historical constructions

• Predicting demolition waste (Akanbi et al., 2020)

Current use of AI in construction faces many limitations. The lack of reliable and complete data sets along with the complexity of combining material and human factors can lead to inaccurate or unreliable results. Construction sites and buildings are complex environments, grounded in traditional knowledge and know-how, where human and material systems interact. Changes in practices and job loss due to AI can lead to backlash but also lower the capacity of construction workers to maintain, train and transmit craftsmanship. Significant differences between buildings remain a barrier to the adoption of scalable standards in applying AI to construction.


3.8.
City governance

The different applications presented in section 3.7 highlight the opportunity that adapted algorithms can represent to help governments operate more efficient public services. Moreover, the administration of these urban services can be improved through AI as well. More generally, AI can be used to effectively support the management or governance of the city and better inform the decision-making process. This section presents AI applications that can be deployed by city managers on three different levels: enhancing local government capacity, engaging with the public and informing policymaking.

3.8.1.
ENHANCED GOVERNMENT EFFICIENCY

[Badge: Long-term]

SDGs: 12, 16
RISKS: Skill shortage, shift in labour market, outcome misinterpretation

A city’s operation can gain in efficiency by integrating AI techniques for automating basic, time-consuming tasks. This can ultimately enable a better allocation of resources within the administration while decreasing the risk of human error (Berryhill et al., 2019).

• The integration of AI systems within the different public sectors has been used to encourage the standardisation of information systems and data-sharing processes in order to better coordinate operations. AI tools have also been used to design new operating models that foster interactions between services and build on their synergies (Anttiroiko et al., 2014).

• Software based on natural language processing or other automation processes has been used to generate reports, draft government documents, fill in forms and create visual communications (Lindgren, 2020).

• Better coordination of services through AI has enabled the data-driven optimisation of resource allocation by task automation and the identification of duplicated efforts (Zheng et al., 2018).

While the opportunities for reducing costs and human error are apparent, a combination of ethical and technical risks can arise when considering digital transformations on this scale. Notably, implementing these tools requires digital literacy on the part of civil servants and collaborators. Digital transformation within traditional administrations is a long process that goes well beyond the integration of AI tools. Not only can untrained employees feel marginalised and discarded, but being overwhelmed may lead them to misuse otherwise performant systems. Building the capacity of the government is necessary before undertaking internal AI transformation projects (see section 5.3).

3.8.2.
ENGAGEMENT WITH THE PUBLIC

The increased digital capacity of local governments integrating AI systems for service coordination or task automation can also positively impact the relationship between administrators and citizens. Efficient urban governance requires constant engagement with the public in order to ensure adequate consideration of their interests in the decision-making process. Interacting AI systems can be leveraged for meaningful engagement.


3.8.2.1.
Administrative processes

SDGs: 8, 10, 16, 17
RISKS: Violation of privacy, digital divide, geographic misalignment

Operational efficiency within government bodies can be applied to the different administrative processes that citizens must engage in when living in the city. In a digitalised setting, AI systems can support the redundant tasks of receiving or making payments and retrieving personal information or documents for application processes (Martinho-Truswell, 2018).

• AI-based systems have participated in creating secure entry points that connect people with all local government services and provide recommendations tailored to individuals’ needs. Applications for government schemes can be automated based on a person’s information shared across the different sectors.

• AI has been used to support the automation of administrative forms by using information from citizens’ personal archives or historical data to generate typical responses (Mehr, 2017).

• AI is being used to support the application processes for social services by recommending resources that then need to be approved by human case workers (Barcelona Digital City, 2021).

• AI has been used around the world to provide a “digital persona” for citizens to facilitate online identification and access to services (Basu, 2017).

3.8.2.2.
Communication

SDGs: 10, 11, 17
RISKS: Data misalignment, violation of privacy, digital divide

The urban administrative procedures required of citizens are very much dependent on efficient communication between the local government and the public. AI provides tools that can accelerate information transmission and improve the quality of interactions.

• More personalised online communication has been enabled by AI systems such as virtual assistants and chatbots that can be used to direct users to the right services or to the right information (Safir, 2019).

• AI techniques have been used to assist citizens in their search for relevant documents, archives or material made available by the municipality.

• By linking personal and professional schedules, AI-powered tools hosted by local governments can suggest administrative appointment schedules.

• Real-time translation enabled by AI has been used to help city officials communicate with the different communities that live in the city.

However, integrating AI in the interaction process between governments and citizens can entail the transmission of very sensitive, often confidential information through the system. Accountability issues may therefore arise if responsibilities and ownership of the tools are not clearly defined, in the event of data breaches, for instance. Furthermore, transferring a majority of administrative procedures online can lead to poorer management of offline channels of communication. Negative consequences may appear where access to digital devices and digital literacy are unequally distributed across the population.

Building civic engagement and public trust

For a deeper dive into the use of AI and how it can affect civic engagement and public trust, we refer to a collection of essays curated by Brandusescu and Reia, written in parallel to the writing of this report in 2022. Drawing from a variety of perspectives and across continents, the collection presents both civic and scholarly perspectives on engagement, partnership, law-making and new directions for urban governance. The essays, along with video recordings of the original symposium, are available in open access format (Brandusescu and Reia, 2022).


3.8.3.
INFORMED POLICYMAKING

While local governments should lay out a clear strategy for the governance of AI (see section 5), AI solutions may be harnessed for the purpose of designing effective urban policies. In particular, the OECD advocates for the responsible integration of AI into the decision-making process to ensure that decisions align with the needs of the people and to anticipate their impact (Berryhill et al., 2019).

3.8.3.1.
Identifying local needs

[Badge: High impact]

SDGs: 1, 2, 3, 4, 5, 10
RISKS: Outcome misinterpretation, data quality, geographic misalignment

Understanding the behaviour and interaction of agents in the city is the first step in designing policies. Research-led AI tools can be used by local governments to improve their knowledge of the context and to identify the urban issues that need addressing or will need addressing in the future.

• The data collected from the different and coordinated urban services can be effectively used to carry out event correlation and causal analysis for better-informed decisions (Shibasaki et al., 2020).

• Data-driven risk assessments and risk measurement devices can be improved through AI techniques to better identify and prioritise issues.

3.8.3.2.
Formulating and evaluating policy

[Badge: High impact]

SDGs: 10, 11, 16
RISKS: Outcome misinterpretation, lack of robustness and reliability, lack of explainability, high energy consumption, geographic misalignment

Adapted solutions need to be formulated and integrated into local policy. AI techniques can help shape people-centred policies and support governments throughout their implementation. Similar tools to those developed for urban planners in the design of a resilient city can be developed for a systematic prospective analysis of policies.

• Digital twins of the urban form and the various urban networks have been used by policymakers and urban planners to predict evolutions in the city and measure the impact of infrastructure policies on the different areas (see the “digital twins” definition box in section 3.2.8.2).

• AI can be used by cities to evaluate the relevance of potential policies. By harnessing historical population data and research-led artificial models of society, simulations that mimic the local social communities can be combined with digital twins of the built environment to carry out economic and social impact assessments and optimise local policies or social schemes (a toy scenario comparison follows this list).
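As a toy illustration of the scenario comparison in the second bullet (the commute model, service frequencies and figures are invented; a real digital twin would be calibrated on observed city data):

```python
# Hedged sketch: compare a baseline and a policy scenario on a toy model
# of commute times. Everything here is an illustrative assumption.
import random

random.seed(0)

def simulate_average_commute(buses_per_hour: int) -> float:
    # Toy model: average wait is half the headway, plus a fixed ride time
    # with random day-to-day variation.
    headway_min = 60 / buses_per_hour
    samples = [headway_min / 2 + 25 + random.gauss(0, 2) for _ in range(1000)]
    return sum(samples) / len(samples)

baseline = simulate_average_commute(buses_per_hour=4)
with_policy = simulate_average_commute(buses_per_hour=6)
print(f"Baseline: {baseline:.1f} min; with increased service: {with_policy:.1f} min")
```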


SECTION 4

Risk
Framework
4.1. Risks overview p. 53
4.2. The AI life cycle p. 54
4.2.1. Phase 1: Framing p. 56
4.2.2. Phase 2: Design p. 65
4.2.3. Phase 3: Implementation p. 70
4.2.4. Phase 4: Deployment p. 85
4.2.5. Phase 5: Maintenance p. 90


4.1.

Risks overview

The risk framework provides an overview of the risks of AI, along with evaluation questions to assess them. The framework can be skimmed for a general understanding, but also provides the necessary detail to support more technical teams in starting a broad assessment of AI systems. The risks presented are not exhaustive; the framework focuses on raising awareness about common issues at the intersection of AI’s technical and societal implications. The aim is to enable cities to draft their own strategies for a responsible use of AI for sustainable urban development.

The risk framework highlights the different risks throughout an AI system’s life cycle, which is divided into five phases: framing, design, implementation, deployment and maintenance. Each risk is presented with a simple definition and real-world examples from different geographical locations. Graphics and links show the relationships between the risks.

Each risk is accompanied by a series of guiding reflexive questions which function as an evaluation toolkit. By focusing on awareness-raising rather than on “techno-solutionism,” the questions provide direction for locally appropriate mitigation strategies. The questions highlight places for possible intervention in order to mitigate risks while still being based in the specific context.

The assessment of potential risks in relation to AI systems must be done from a holistic perspective, encompassing both technical and societal considerations. Only when stakeholders clearly understand the structure and the limitations of an AI system will they be able to take full advantage of it and optimise the system’s functioning in each particular context.


4.2.

The AI Life Cycle


The AI life cycle reflects the five major phases of an AI system and how it interacts with its
environment. It is a tool that articulates the structure and processes of developing AI to
help guide the reader’s thinking. The phases are cyclical, and each one is intertwined with
the others. For instance, the way an algorithm is designed and implemented defines its
eventual deployment.

[Figure: The AI life cycle — a circular diagram of five phases: Phase 1 Framing, Phase 2 Design, Phase 3 Implementation, Phase 4 Deployment and Phase 5 Maintenance, with input entering and output leaving the cycle.]


FRAMING PHASE
The framing phase is first. It focuses on problem definition and lays the foundation for later phases, which all point back to questions raised in the framing phase. This phase centres on important reflections and risks around the context of an AI system’s deployment.

DESIGN PHASE
The design phase focuses more on building the algorithm itself, before any coding. It builds on
the parameters identified in the framing phase. The risks in this phase include questions about
the team and the consequences to be considered, such as power and economic shifts.

IMPLEMENTATION PHASE
The implementation phase is the most technical one and is structured around the AI pipeline. The issues in this phase concern the algorithm itself and the specific technical decisions made in building it.

DEPLOYMENT PHASE
The deployment phase begins once an algorithm is fully developed. The risks from this phase
arise when an algorithm is taken from a controlled and predictable laboratory setting to a
real-world environment.

MAINTENANCE PHASE
The maintenance phase begins after an algorithm has been deployed and concludes when it is retired. This phase encompasses the long-term life of an AI system, including the considerations necessary to keep an algorithm functional and up to date.

In an urban context, multiple AI systems are combined and interact within an AI ecosystem. The complexity of the overarching system means early risk detection and intervention are recommended. For practical purposes, the different phases can also be connected to local project management processes to design concrete intervention points.


4.2.1.

PHASE 1

Framing
The first decision for local authorities to make when considering the implementation of an AI system is whether or not to engage with AI at all. While the discourse surrounding AI promises increased efficiency, that may not always be the case. To determine suitability, one must first reflect on the challenge at hand, how it is currently being addressed, and how it may be handled better in that particular local context. Then, one should assess whether AI can in fact optimise a part or the entirety of the process (see section 2.2).

This stage is essential because when decision-makers engage in the framing process, they define the discourse and targets that will govern the creation and integration of an AI system. For an AI system to solve a problem, it must be given clearly defined, quantifiable instructions. As such, framing is important because the decisions made during this phase will contextualise and shape all the decisions that follow through later phases. Risk assessment in this phase depends on a detailed articulation of the problem for AI to solve, based on a series of social and economic factors specific to the local context. Without this, any system which is developed can have serious flaws that impair its ability to improve the conditions defined within its objective.

4.2.1.1. Initial considerations


• Is AI the best tool to tackle this challenge? If so, why? What are the pros and cons?
• Have existing AI systems been applied in a similar context? What are the lessons learned?
• What is inadequate about current approaches to dealing with this challenge?
• How does the given mission relate to the challenge? How will it help solve the issue and to what extent?
• Do any of the system tasks involved require creative reasoning, such as complex human interaction? If so, an AI tool is not likely to optimise the solution to the problem.
• How many system tasks involve gathering subtle cues or other information that can’t be properly quantified?
If there is a significant part of the mission that can’t be quantified, AI won’t optimise the solution to the
challenge at hand.
• How does the AI system respect the best practices recommended by the responsible AI community
(e.g., the Montreal Declaration for Responsible AI and the UNESCO Recommendations on AI Ethics)?


4.2.1.2. Framing risks

LACK OF MISSION TRANSPARENCY

RELATED RISKS: Uncertain accountability, Mission creep, Regulatory breach, Misalignment between AI and human values, Lack of transparency and interpretability, Outcome misinterpretation, Violations of privacy in data collection, Algorithmic aversion, Unaudited algorithm purchase, Concept drift, Societal harm

The risk related to a lack of mission transparency arises when there is a lack of public communication regarding the objectives of an AI solution. In AI, transparency is considered a broad concept that stretches throughout different stages: it is related to algorithmic transparency, explainability, interpretability and trust (Larsson and Heintz, 2020). In the framing phase of an AI system, transparency mainly refers to the disclosure of information about the system (Li et al., 2021).

Municipalities must inform the public about why and how AI is able to optimise the solution to a public problem, how it is going to be applied, what the intended outcomes are, and what steps will be taken to achieve them (OECD and UN ESCWA, 2021). For instance, if police enforcement makes use of a face recognition technology for surveillance of public venues, it is important that it disclose how the technology is going to be applied, how it will collect data, how this data will be used, and whether and with whom it will be shared. In terms of the policy objectives, decision-makers should inform the public about what they are trying to achieve in the context of the mission or policy at hand and who is accountable for its application (Almeida et al., 2021).

Regularly engaging with the public throughout the life of the system encourages transparency and public acceptance. This implies providing access to information, conducting public consultations or other forms of citizen engagement from an early stage, and publishing the iterative process that led to the adoption of AI. By striving for social acceptance, city managers are ultimately working towards mitigating the risks of a fallout in the AI application, thus avoiding significant financial, reputational and social consequences.

QUESTIONS

• What are the public policy challenges at hand? What sector and which stakeholders do they involve? Which communities are affected by them?
• What are the short-, medium- and long-term objectives of applying the AI system? How are those objectives
aligned with the public policy challenges identified?
• Can the mission the AI will be given be explained in clear terms in a way that an algorithm will understand?
• Does the envisioned AI solution comply with transparency, interpretability and accountability best practices? (See the “lack of transparency and interpretability” portion of section 4.2.3.2 and the “lack of explainability” portion of section 4.2.3.3.)
• Where, how and for how long will the AI system be used? Is this information disclosed to the public?
• How can citizens easily access information about the AI-driven policy at hand? If such information is not
yet disclosed to the public, how and when will it be?


SKILLS SHORTAGE

RELATED RISKS: Distress in the local labour market, Inadequate infrastructure, Financial burden, Digital divide, Lack of team diversity and inclusivity, Geographic misalignment, Negative feedback loops, Unaudited algorithm purchase

In the context of framing what AI is used for and how it is used, the risk of skills shortage refers to the lack of human capacities. There are two common, significant limitations: the size of the workforce that is necessary to build and manage the AI system (human capacity) and the ability of this workforce to interact with and exercise oversight of the AI system (AI literacy). It is important to keep in mind the need for skilled professionals throughout the project, regardless of promises of automation. Technical support for maintenance at the local level creates an ongoing need for locally available skills.

Currently, there is a limited pool of AI talent due in part to economic or gender-related digital divides on the global and local scale, affecting primarily the Global South (WEF, 2020; Aguilar et al., 2020). Beyond AI training, professionals must have cross-functional skills which allow for a proper optimisation of AI tools in the local context. Interdisciplinary and cross-functional competencies may also help avoid techno-solutionism by enabling a “human in the loop” (HITL) approach to algorithmic solutions. The HITL approach allows for an AI design that integrates human agency into certain critical decision-making steps of the system (see section 2.2 and recommendation #2 in section 5.2). There are instances for which the integration of human judgment improves performance, such as the balancing of fundamental rights. In fact, in high-risk applications, regulations may specifically require an HITL (Middleton et al., 2022; Mazzolin, 2020).

If these issues remain unaddressed, the AI infrastructure to be built and maintained risks replicating and perpetuating the inequalities represented by the skills shortage. Also, the lack of HITL risks leaving affected populations more vulnerable to a faulty AI system that is not optimised for the context in which it is deployed. Coupled with a lack of AI literacy, these effects can cascade into a generalised lack of trust in AI systems. Decision-makers should consider the extent of the human resources available to design, implement, deploy and oversee an AI system.
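A minimal sketch of the HITL pattern described above (the confidence threshold and the review queue are illustrative assumptions, not a prescribed design):

```python
# Hedged sketch: release automated decisions only above a confidence
# threshold; route uncertain cases to a human reviewer.
CONFIDENCE_THRESHOLD = 0.85  # hypothetical policy choice
human_review_queue: list[str] = []

def decide(case_id: str, prediction: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"{case_id}: auto-processed as '{prediction}'"
    human_review_queue.append(case_id)
    return f"{case_id}: routed to human review (confidence {confidence:.2f})"

print(decide("case-001", "eligible", 0.93))
print(decide("case-002", "eligible", 0.61))
print("Awaiting human judgment:", human_review_queue)
```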

QUESTIONS

• What kinds of skilled professionals are needed for the mission at hand? Are these currently available in the region?
• Considering the human resources available, is there a skill shortage? If so, in what stage of the mission: data
collection? Decision oversight? Model development?
• How will a lack of skilled professionals affect the design, deployment and maintenance of the AI solution envisioned?
• Can a potential skills shortage be overcome through partnerships that allow for a context-informed approach?
If so, how?
• How important is it that the AI system envisioned be subjected to a human in the loop? What are the
associated risks of fully automated decision-making and how significant are they?
• What is the worst potential outcome of the system? Will the system potentially lead to life-or-death or fairly
complex situations to which a human should respond?
• Does the legislation demand a human in the loop? If yes, can the human oversight requirement be fulfilled?


DISTRESS IN THE LOCAL LABOUR MARKET

RELATED RISKS: Skills shortage, Regulatory breach, Digital divide, Misalignment between AI and human values, Algorithmic aversion, Societal harm

The risk relating to distress in the local labour market arises when the adoption of an AI system creates structural changes in the labour market that yield negative effects on local populations. AI is not fully autonomous; its design, deployment and maintenance depend on human resources. While AI solutions may create new services and new forms of qualified or unqualified labour, they can also induce displacements in the labour market, as well as increasing the precarity or rendering obsolete certain trades.

In situations where workers are not replaced, their functions are often reduced to precarious unpaid or low-paid labour that requires them to execute microtasks that machines can’t do efficiently. For instance, humans are needed to label images, as well as to translate and to transcribe texts, all of which are essential to support supervised learning techniques (Moreschi et al., 2020; Crawford, 2021, ch. 3). As another example, large-scale structural changes can be seen with how algorithmic platforms impact the mobility sector’s dynamics by facilitating access to rides and food delivery services (Lee, Kusbit et al., 2015; Raval, 2019; Rosenblat and Stark, 2016; Kassens-Noor and Hintze, 2020).

Decision-makers should conduct a comprehensive analysis of how an AI system can impact present and future dynamics with regard to job opportunities, social assistance, economics and human development. Ultimately, AI systems only serve the population if they create a better environment for all. Although change may also imply positive spillover effects, municipalities must carefully weigh their ability to mitigate potential negative outcomes. It is important that these shifts be considered in the framing phase, as the effects of an AI system on the population may not be straightforward.

QUESTIONS

• How does the AI system impact the job market? How does it impact different
groups of workers and the quality of services provided?
• Are certain groups disproportionately more impacted by the AI system than
others? If so, how?
• Does the AI system impact labour relations? Are these impacts determined by
algorithmic decision-making? If so, how?
• What mitigation strategies can be applied to prevent distress in the labour
market? Are they sufficient to balance potentially disruptive outcomes?


INADEQUATE INFRASTRUCTURE

RELATED RISKS: Skills shortage, Financial burden, Digital divide, High energy consumption, Insecure data and algorithm storage, Insufficient system security, Unaudited algorithm purchase, AI system expiration

The risk relating to inadequate infrastructure arises when the adoption of an AI system is not backed up by the level of technological infrastructure it requires for a safe and sustainable functioning. This risk stems from the idea that “it takes technology to make technology” (Bughin and Van Zeebroeck, 2018): AI systems depend on other layers of infrastructure, from ICT (information and communications technology) infrastructure to energy systems and hardware equipment. Designing and deploying AI requires the physical presence of broadband infrastructure with fast and reliable bandwidth. Furthermore, in order to do a great many calculations at high speeds, AI necessitates powerful computing resources, such as modern CPUs and GPUs (central processing units and graphic processing units) (Wu, Raghavendra et al., 2022). However, these processors can be expensive, with potentially volatile prices due to supply chain dynamics (J. P. Morgan Research, 2021; Rothrock, 2021). While the computing power needed for AI can be outsourced through the usage of third parties’ data centres, it is advisable for an AI to be developed and deployed in-house. For example, data processing centres located on a different continent could cause problems in terms of latency (delays in the amount of time taken by data to travel from a designated point to another).

As another example, consider a situation where the AI deployer ignores a lack of data storage capacity to support an AI system’s application in cities. This deficit can directly impact citizens’ data protection (see the “insufficient privacy protection” portion of section 4.2.3.2) and result in security issues such as data theft and other forms of adversarial attacks (see the “insufficient system security” portion of section 4.2.4.1).

A “core” digital technology substructure must already be in place before the implementation of AI can realistically occur. This will most likely require previous investments in tangible hardware infrastructure, and more generally in the city’s digital capacity. Municipalities must then consider what infrastructure they have to safely implement AI solutions, as well as whether the possible AI solution requires further investment in infrastructure. Hence, infrastructure needs should be weighed during the framing phase, and investments should already be planned during the design of an AI strategy (see section 5). It is not advisable to implement AI systems on a promise of future investments and upgrades in the infrastructure.

QUESTIONS

• Can the AI-related task be carried out with on-premises infrastructure?


• Is the available on-premises infrastructure safe and reliable? Does it meet current security standards and best
practices?
• Do additional infrastructure needs have to be met before adopting the AI system?
• Who owns the infrastructure? Is it public or private? Are there any ownership constraints?
• If a public-private partnership is pursued, how is it going to work? Can this process be subjected to a risk
assessment?
• Are data and data management protocols available for the envisioned AI system?


FINANCIAL BURDEN

RELATED RISKS: Skills shortage, Inadequate infrastructure, Digital divide, High energy consumption, Unaudited algorithm purchase, Insufficient system security, AI system expiration

Risks relating to the financial burden resulting from an AI solution arise when decision-makers choose to deploy an algorithm they cannot afford to properly implement and later maintain. As suggested in previous sections, physical and human capital is needed throughout the AI life cycle, and these often come at very high costs (Davenport and Patil, 2012). Integrating an AI solution within an existing system and ensuring its sustainability through cyber-security will incur long-run costs for the public owners (Heemstra, 1992; Leung and Fan, 2002).

Should this constant investment in the system not happen, actors risk deploying dangerous systems into society. For example, if an AI system predicting catastrophic natural events such as landslides in the city did not benefit from the necessary financial resources to continuously maintain it, the lives of the people affected by the system’s prediction would be at concrete risk because the prediction could be wrong; the accuracy of a deteriorating system will drastically decrease. Furthermore, the costs relating to managing the natural and social impact of failing systems may be significant.

Therefore, the investments will need to be sustained throughout the lifespan of the system. In essence, the financial capacity of a city should be balanced with the estimated costs of technological solutions before embarking on any procurement or design process. In particular, decision-makers should consider whether they will be able to sustain this investment in the long term and bear unforeseen costs.

QUESTIONS

• What is the estimated cost of the development and maintenance of the AI


application?
• Will it be possible to minimise the financial investment needed without elevating
other risks?
• Are there any resources available to address unforeseen costs? Is there an
emergency fund dedicated to the AI system in place?


REGULATORY BREACH

RELATED RISKS: Lack of mission transparency, Uncertain accountability, Mission creep, Presence of sensitive or proxy information, Violations of privacy in data collection, Lack of transparency and interpretability, Lack of explainability, Unaudited algorithm purchase, Concept drift

Risks related to regulatory breach arise when an AI system is incompatible with certain regulations of the jurisdiction in which it will be designed or deployed. Regulations are not limited to laws, but rather are a broad range of “instruments through which governments set requirements for enterprises and citizens,” including “laws, formal and informal orders, subordinate rules, administrative formalities and rules issued by non-governmental or self-regulatory bodies to whom governments have delegated regulatory powers” (OECD, 2018). In the context of developing, purchasing and deploying AI systems, a panoply of legal and administrative requirements might come into play. Ultimately, such breaches can jeopardise AI development and deployment, cut its life cycle short and have important financial impacts on local governments.

As such, the impacts of rules pertaining to each phase of the AI life cycle must inform decision-makers’ analysis of the advantages and drawbacks of adopting an AI solution. Before engaging with AI systems at any level—but especially at the framing phase—cities must undertake a thorough assessment of human rights and constitutional provisions, public-sector regulations, AI-focused regulations, privacy and data protection regulations, sector-specific regulations and different country-specific regulations in procurements of technologies designed in a different jurisdiction.

QUESTIONS

• What regulations apply to the context of the mission and to the AI system’s application?
• How is the AI system being designed in accordance with the current legislation?
• How can regulatory risks be mitigated throughout the following phases?
• Does the AI system have a proven record of reliability and compliance?
• Is it possible to run an impact assessment and identify potential red flags for regulatory requirements and legal provisions?


UNCERTAIN ACCOUNTABILITY

RELATED RISKS: Lack of mission transparency, Distress in the local labour market, Regulatory breach, Mission creep, Presence of sensitive or proxy information, Lack of transparency and interpretability, Unaudited algorithm purchase

Risks related to accountability arise when there is a lack of transparency around the parties responsible for an AI system’s deployment and maintenance. Within the scope of the framing phase, accountability refers to “the ability to determine whether a decision was made in accordance with procedural and substantive standards and to hold someone responsible if those standards are not met” (Doshi-Velez et al., 2017, p. 2).

It is important to note that accountability issues might arise whether the AI system is designed by public actors, co-designed across sectors or procured. For instance, governments may resort to public procurement to offer certain services to the public, or they may license an AI system application from a vendor. In addition to the public authority responsible for the AI design or purchase, various government agencies and even civil society organisations running social services on behalf of the city might be involved in some stage of the AI life cycle.

It is important to analyse the whole ecosystem around the technology at hand. Although the roles of the different stakeholders may be intertwined in a complex manner, it is crucial to be able to identify and map these in order to guarantee accountability. This is a necessary step for mitigating many other risks throughout the AI life cycle. Understanding the distribution of responsibility will be particularly useful to carry out the risk assessment for a system (Data Society, 2021). Governments and municipalities must take the lead in monitoring and enforcing measures for compliance. Ultimately, there should be a shared commitment by all actors to respect these measures.

QUESTIONS

• Who are the stakeholders in the AI system’s life cycle? What are their roles?
• Is accountability assigned in each phase of the AI life cycle? If so, how?
• Are all stakeholders able to justify their actions and the outcomes of their
actions or omissions regarding their role in the AI pipeline?
• How are stakeholders throughout the pipeline able to redress any harms
potentially caused by their actions or inactions?
• What public administration frameworks, mechanisms and resources (material
and human) address accountability risks? Do these need to be adapted?
• Does the administration have a framework and best practices in place to
respond to any event where it might be accountable for harms related to
the design or deployment of the AI system?


MISSION CREEP

RELATED RISKS: Lack of mission transparency, Regulatory breach, Uncertain accountability, Manipulation and abuse through AI, Misalignment between AI and human values, Presence of sensitive or proxy information, Lack of transparency and interpretability, Negative feedback loops, Unaudited algorithm purchase, Algorithmic aversion, Societal harm, Concept drift

The risk of mission creep encapsulates the practice of deviating AI systems from their original purposes in a way that jeopardises their efficacy or underlying value system (Kornweitz, 2021). Mission creep is especially dangerous when a system initially meant for positive purposes is induced into serving other purposes that are likely to interfere with fundamental rights. The effects of such misuse can range from short-term unintended outcomes, such as discrimination and privacy violations, to long-term consequences related to the societal impacts of such practices. Overall, mission creep can be detrimental not only to the population, but also to the legitimacy of institutions and of technological development itself by increasing social distrust (Dwork and Minow, 2022).

An example of mission creep is the repurposing of a system initially invented to detect earthquake aftershocks for predictive policing. Evidence shows that the use of this predictive system to identify “hot spots” for criminal activity led police patrols to disproportionately target poorer areas, unveiling discriminatory outcomes (Mehrotra et al., 2021). Another example is the repurposing of a satellite imagery analysis system created for weather forecasting and prevention of landslides. When the system is used to map poor communities and conduct forced evictions of vulnerable families instead, the fundamental rights of those impacted are jeopardised (Greenfield, 2013).

As similar AI techniques can be used for vastly more applications than what they may be initially designed for, it is important to take into consideration a system’s original application before assigning it other tasks. Furthermore, future repurposing options should be discussed when framing an original AI solution in order to limit the conditions in which irresponsible deviations may arise.

QUESTIONS

• To what purpose was the AI system at hand designed? In which context?
• What is the scope of the public policy challenge in which the AI system will be applied? (See the “lack of mission transparency” portion of section 4.2.1.2.)
• Is the AI system being used for the same purpose it was designed for? Are the contexts and purposes of the AI system compatible with the context and purposes of the intended application?
• Are there any differences between the scope of creation and the scope of application of the system?
• Can any such differences be mitigated? How so?
• Are the mitigation strategies sufficient to fill the gap between the AI system’s purpose and the policy context it will be applied to? If not, the risks of designing and deploying the envisioned AI system are likely to outweigh its benefits.


4.2.2.

PHASE 2

Design
The design phase encompasses the theoretical foundations of an AI system. During this phase, decisions about key aspects of the system are made, such as the data collection, choice of algorithm and outputs of the solution (Ugwudike, 2022).

Human beings are involved in all stages of the design process, from the problem formulation and outcome definition to the model construction. Their involvement is not neutral: assumptions are always made in the process of encoding human objectives into mathematical ones. These assumptions come in many different forms, often mirroring the social contexts in which the humans designing the algorithm find themselves situated. As such, the design of an AI system is heavily influenced by the designers’ ideologies, values, theoretical assumptions and understanding of the task at hand (Ugwudike, 2022). The process of ideation is therefore a crucial phase in which human biases may be embedded in the system: at each step, choices that can lead to discriminatory outcomes are made (Leslie, 2019).

Biases against racial minorities, women and other groups that suffer historical discrimination can in part be explained by the designers’ choices regarding which attributes to include or exclude in the algorithm. For example, the use of standardised test scores as attributes in enrollment and scholarship distribution algorithms can aggravate economic and racial disparities (Engler, 2021). In the United States, the use of such test scores as metrics for student success has been proven to perpetuate socio-economic inequalities, since students from less-privileged backgrounds score systematically lower (Smith and Reeves, 2020). Ultimately, the design choices may favour access to education for certain population groups over others.

More generally, decisions made during the design phase can lead to the creation of AI systems that replicate existing power structures, excluding minorities and widening socio-economic inequalities. Therefore, when designing AI systems, stakeholders should take into account not only the direct impact that their technology will have on the users, but also the indirect impact it could have on the surrounding socio-economic environment.


4.2.2.1. Design risks

LACK OF TEAM DIVERSITY AND INCLUSIVITY

RELATED RISKS: Skills shortage, Misalignment between AI and human values, Historical bias, Inadequate demographic representation, Geographic misalignment, Unintentional breach of safety, Outcome misinterpretation, Unaudited algorithm purchase, Societal harm

The risk related to a lack of team diversity is that certain assumptions or decisions made in the design phase may not reflect the needs of all impacted communities. Lack of team diversity refers to a homogeneity of backgrounds—ethnic, educational, cultural, religious and so on—among the professionals designing the AI system, whereas lack of inclusivity refers to a lack of decision-making power from various stakeholders throughout the AI life cycle. Together, the lack of team diversity and inclusivity can lead to negative impacts on society once an algorithm is deployed, most notably by perpetuating historically discriminatory practices.

This lack of diversity is regularly highlighted in the technology industry, where design teams tend to be small and essentially composed of men of middle to high social status (World Economic Forum, 2020). This recurrent scheme can result in unadapted technologies, because they are not explicitly designed for particular groups. For example, the supposedly comprehensive health tracker launched by Apple in 2014 did not even contain a period tracker (Criado-Perez, 2019).

To avoid such risks, local governments should ensure effort is being made to create a diverse team of designers. Moreover, to better understand the target population’s needs, the different stakeholders that will eventually interact with or be impacted by the AI system should be included in the design phase. Very often, the people who are most affected by design decisions are the ones who have the least influence on the design process (Costanza-Chock, 2018). Furthermore, requesting the transparent disclosure of the team composition and of the design process is crucial to identify the underlying assumptions governing an AI system. Overall, decision-makers should ensure that the design of an AI system reflects and includes the same diversity as that of the population it will impact.

QUESTIONS

• What diversity and inclusion practices are in place within the organisation
designing the proposed AI system?
• How are the needs and perspectives of the target population addressed within
the design phase?
• Has the team responsible for design identified the population subgroups which
could be more at risk of discriminatory outcomes created by the AI system?
What are the risks towards these subgroups?
• What are the principles guiding the design process of the new system? Have inclusion and diversity been integrated into the development of the system?


MISALIGNMENT BETWEEN AI AND HUMAN VALUES

RELATED RISKS: Lack of mission transparency, Mission creep, Lack of team diversity and inclusivity, Digital divide, Manipulation and abuse through AI, Historical bias, Inadequate demographic representation, Geographic misalignment, High energy consumption, Lack of transparency and interpretability, Lack of explainability, Negative feedback loops, Inadequate user feedback integration, Societal harm, Data drift, Concept drift

The risk of misalignment between AI and human objectives arises when values guiding the mission of an AI system are not reflected in the outcomes of the algorithm once it has been implemented. As mentioned earlier (see section 2.1.5), AI systems perform tasks with respect to concrete objectives. Therefore, in order for an AI system to engage with our world, a translation must occur between human and mathematical perspectives on the world (Korteling et al., 2021). Alignment means ensuring that AI systems capture the norms and values that guide human reasoning and motivate the use of AI. Misalignment can be broken up into two challenges: which norms to encode and how to encode them in the AI.

There are many values one may wish to encode into an algorithm’s reasoning processes: privacy, safety, accountability and so on. A difficult concept to encode is fairness, which is used to distinguish beneficial from detrimental applications of AI. Multiple visions of what fairness is and how it should be measured exist and confront one another (Mehrabi et al., 2019). For example, demographic parity ensures that minority and majority groups are equally represented in the outcomes. Meanwhile, individual fairness ensures that two individuals with similar characteristics have similar outcomes. As such, although fairness can be translated into various mathematical concepts, it must be defined in a context-specific manner.

The technical difficulties associated with how to formally encode values, norms and human rights concerns into the AI (Gabriel, 2020) are not negligible. In 2018, a pedestrian was killed by an autonomous car while pushing a bicycle across the road. Investigations of the accident revealed that the AI system was not trained to respond to a person crossing the road at an unmarked location (Marr, 2022). Beyond the technical issues, there is a broader discussion surrounding which values should be encoded in an AI system in the first place. Many AI designs embody utilitarian principles, though this perspective is not universal and specifically contradicts many African cultural norms (Metz, 2021).

As our society becomes more entangled with various ecosystems of algorithms, the challenge of aligning human and algorithmic goals becomes increasingly complex, and it cannot be solved exclusively on a technical level. Decision-makers should therefore foster dialogues between computer scientists, ethicists, social scientists, jurists, policy experts and other domain experts to allow for an alignment between AI and human values.

QUESTIONS
Data drift
• What are the subgoals of the AI system’s main goal? How are these related to
each other? Concept drift

• How are the goals of the AI system reflected in the design choices?
• Which goals are being prioritised when designing the AI system?
• Which values are being adopted when creating the AI system?
• Are the objectives guiding the AI system reflective of societal values?
• Are notions of fairness integrated into the AI objectives? Which ones?
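
To make the contrast between the two fairness notions mentioned above concrete, the following minimal sketch applies both checks to the same set of invented decisions. The groups, scores and the 0.5 threshold are illustrative assumptions, not recommended values.

```python
import numpy as np

# Invented decisions for eight applicants: group 0 = majority, group 1 = minority.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
score = np.array([0.90, 0.80, 0.70, 0.40, 0.85, 0.45, 0.40, 0.35])
decision = (score >= 0.5).astype(int)  # the system's yes/no outcomes

# Demographic parity: positive-outcome rates should match across groups.
gap = abs(decision[group == 0].mean() - decision[group == 1].mean())
print(f"demographic parity gap: {gap:.2f}")  # 0.00 would be perfect parity

# Individual fairness: near-identical individuals should get the same outcome.
# Applicants 3 and 5 have similar scores (0.40 vs 0.45) but different groups.
print("similar individuals treated alike:", decision[3] == decision[5])
```

On this toy data the two notions already diverge: the similar pair is treated alike, yet the group-level outcome rates differ by 50 percentage points. This is one reason fairness must be defined in a context-specific manner.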


DIGITAL DIVIDE

Digital divides refer to substantial gaps in the accessibility of new technologies (Dijk, 2006). When they design an AI, relevant stakeholders should be aware of the risk that the envisioned AI system exacerbates existing inequalities. Digital divides can arise from a lack of access to the physical infrastructure that is needed for AI systems, a lack of digital skills, difficulties in accessing or using hardware, or an inability to obtain an economic return from the technology.

Beyond city dynamics, digital divides can be concretely noticed at a global level. The high sunk costs and large amounts of data required by AI innovation lead to the creation of monopolies: a small number of global frontier firms located in a few powerful countries serve the entire global economy (Korinek et al., 2021). The AI innovation race can therefore lead to a winner-takes-all dynamic, advancing countries that are early adopters and leaving behind most emerging economies due to a lack of adequate infrastructure, skilled labour and available data (Korinek et al., 2021). The existence of digital divides is the cause of multiple phenomena that perpetuate systemic discrimination and reinforce cycles of poverty. The lack of real-time access to information or the disparities in access to data and opportunities can prevent many developing countries from creating AI solutions adapted to the local conditions (University of Pretoria, 2018).

There are countless examples of how digital divides can materialise in individuals' everyday lives. For example, while a school might have computers for pupils to use, these computers might be obsolete and incompatible with the latest updates of relevant educational software, thereby impairing students' ability to benefit from the technology. In fact, schools in remote areas might not even have access to computers and broadband connection in the first place.

When designing an AI, stakeholders should be concerned with ways of closing the digital divide in order to ensure the newly created technology does not contribute to replicating existing inequalities (Eastin and LaRose, 2000). Moreover, municipalities should take specific actions to help those most affected by digital divides, for instance by investing in broadband infrastructure and digital literacy programs (Chakravorti, 2021).

RELATED RISKS: Skills shortage, Distress in the local labour market, Financial burden, Inadequate infrastructure, Misalignment between AI and human values, Inadequate demographic representation, Geographic misalignment, High energy consumption, Societal harm

QUESTIONS

• Is specific hardware or software needed to use this AI system? Is it widely available? Who does or does not
have access to this technology?
• How easy is this AI system to use, regardless of users’ technological literacy?
• Could every municipality have access to and use this AI system, notwithstanding their geographical location?
• If the AI system’s design directly impacts the population, how does it impact different geographical spaces
in the city?
• Are specific socio-demographic groups disproportionately affected by the AI system?
• If this AI system aims to promote economic development, can everyone benefit equally, regardless of
geographical location, socio-demographic group or disability status?


MANIPULATION AND
ABUSE THROUGH AI

The risks related to technological mani­ against harmful or unethical products Manipulation and abuse
pulation arise when design choices are difficult. The same targeting practices through AI
intentionally meant to cause behavioural could distract individuals from essential
(or cognitive) changes in users’ interac- informational content such as healthcare
R E L AT E D R I S KS
tions with an AI system. announcements (Milano et al., 2021).
Mission creep
The massive quantities of personal Given the innovative nature of the
data gathered for the purpose of pre- integration of AI systems into channels Misalignment between AI
diction can enable AI systems to make of communication, there remains a and human values
predictions related to users’ behaviour significant lack of public literacy on Violations of privacy in
and digital consumption patterns. For the ways in which these systems can data collection
instance, records of behaviour on social impact users without their knowledge
Lack of transparency
media platforms (e.g., Facebook likes) and consent. Furthermore, the accessi- and interpretability
can be used to infer sensitive personal bility of these systems exacerbates their
attributes such as religious views or reach and manipulative power. As such, Negative feedback loops
sexual orientation, thus invading users’ decision-makers must carefully consider
privacy (Kosinski et al., 2013). Many how the underlying design mechanism of Algorithmic aversion
companies use predatory advertising an AI system might manipulate users into
and deceptive design tactics, also known certain patterns of behaviour by exploiting Societal harm
as dark patterns, that enable them to vulnerabilities in their decision-making.
influence users’ choices and interactions Although these tactics can improve
with their AI system (Petropoulos, 2022). engagement, they can also be harmful
For example, targeted advertising isolates to the societies they should be serving.
consumers and renders collective action

QUESTIONS

• Does the AI system aim to influence people into behaving in a certain way, or does it do so regardless of its aim? How can this influence be examined?
• Does the technology facilitate the tracking, monitoring or influencing of people?
• Are the users of the AI system aware of the strategies in place to influence them?
• Have the impacted users provided consent before engaging with the system?

4.2.3.
PHASE 3

Implementation

The implementation phase corresponds to the process of building an algorithm. It can be divided into three building blocks: data input, the deductive algorithm itself and the algorithm's outcome. During data input, an algorithm is given training data which will inform its perception of the world. The risks within this process involve concerns related to the quality and the source of the data. During algorithm design, the algorithm itself is structured. The choices made in this process can define the complexity, interpretability, functionality and cost of the entire AI system. Finally, during outcome generation, the algorithm is given the input data and generates intended outcomes.

During the implementation phase of an AI system life cycle, the risks associated with the algorithmic process itself arise. Similar to the previous phases, one of the crucial elements of the implementation of an AI system is the alignment of objectives among the actors engaging with AI systems. In the context of urban development, the values informing AI-driven strategies must coincide with human values (such as privacy, transparency, safety and fairness) in order to result in a trustworthy system (Li et al., 2021).

It cannot be overstated how crucial the implementation phase is to the successful engagement with algorithmic solutions. Although these risks are of a technical nature, they are directly linked to the earlier and later phases of the life cycle and can often be mitigated through appropriate framing and design choices. As the technical possibilities of AI are in constant progression, it is essential that stakeholders, in particular the ones that own and deploy AI solutions, be properly informed throughout all the stages. Namely, if local governments are aware of how technical issues translate further down the line into urban problems, they will be able to intelligently weigh the benefits and the drawbacks of using such systems before engaging in the tendering process. Furthermore, they may be able to implement the necessary measures to avoid adverse consequences. These risks are presented below with respect to each step in the general procedure for implementing an AI system.


4.2.3.1. Implementation risks: Data input

HISTORICAL BIAS

The risk of historical bias occurs when there is a limited understanding of the historical, socio-cultural and economic biases within datasets and the context in which they were made (Crawford, 2021).

Data collection is more than a purely technical process, as it is shaped by human choices that are context-dependent and difficult to trace later (Rovatsos et al., 2019). Removing the data from its context of collection can therefore lead to harm, even when the dataset still reflects the world accurately (Suresh and Guttag, 2021). For example, the state of Oregon (US) decided to retire an algorithm that screened for child neglect after it was shown to disproportionately target Black families (Associated Press, 2022). This AI system used data without considering the contextual background, in which racial and income inequalities are closely linked. As a result, the AI system considered race as a factor for child neglect.

Since AI systems require a large amount of data to learn, discarding historical data is not always feasible (Lattimore et al., 2020). Collecting more data to compensate does not mitigate the risks of unfair outcomes, since historical discrimination can still appear throughout the AI pipeline. Mitigating historical biases requires a retrospective understanding of structural discrimination (Suresh and Guttag, 2021); addressing historical biases requires more than technical solutions (Partnership on AI, 2021).

Data is not a neutral resource. Preventing the creation of AI systems that reproduce existing inequalities will only be possible by acknowledging existing discriminatory patterns and the socioeconomic, political and cultural norms that datasets represent.

RELATED RISKS: Lack of team diversity and inclusivity, Misalignment between AI and human values, Inadequate demographic representation, Presence of sensitive or proxy information, Geographic misalignment, Negative feedback loops, Unaudited algorithm purchase, Societal harm, Data drift, AI system expiration
QUESTIONS

• What is the time frame covered by the dataset? Does the dataset contain
historical data? Data produced and collected recently?
• What are the potential social biases embedded in the data? What existing
patterns of discrimination can be identified in the context surrounding the
dataset?
• What information is available about the way the data was collected, labelled
and pre-processed?
• Does the data collection process account for potential socio-historical biases?
Can potential socio-historical biases be traced back and mitigated?
• How do the existing power dynamics and biases in the context of application
exacerbate historical biases in the dataset?
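
As a first, purely illustrative step towards the questions above, a team can compare historical outcome rates across contextual groups before any training takes place. The sketch below assumes a hypothetical table of past case records; the column names and figures are invented.

```python
import pandas as pd

# Hypothetical historical case records (invented values for illustration).
records = pd.DataFrame({
    "neighbourhood_income": ["low", "low", "low", "low",
                             "high", "high", "high", "high"],
    "flagged":              [1, 1, 1, 0, 0, 0, 1, 0],
})

# Compare historical flag rates across groups: a large gap can indicate that
# the labels encode past discriminatory practice, which a model trained on
# this data would learn and reproduce.
print(records.groupby("neighbourhood_income")["flagged"].mean())
```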


INADEQUATE DEMOGRAPHIC
REPRESENTATION
The risk of inadequate demographic representation arises when datasets do not accurately represent diversity and groups are represented unequally. Population groups are defined by a set of characteristics, such as age or gender (see the "presence of sensitive or proxy information" portion of section 4.2.3.1). This risk can lead to deterioration of performance and perpetuation of discriminatory patterns. Excluding a population, although statistically "aligned" with the true population, has negative effects on both the performance and fairness of the AI system. These imbalances can be caused by undersampling or oversampling (not taking enough or taking too many data points from one group).

Poor group representation can have devastating consequences (Gebru et al., 2021). For instance, facial recognition systems perform poorly with minorities due to imbalances in training datasets (Buolamwini and Gebru, 2018). Many algorithms are unable to distinguish physical features among non-white ethnicities (Lao, 2020), and gender inequalities within biometric datasets have led to misdiagnosing diseases in female patients (Drozdowski et al., 2020). Poor group representation is often linked to inequalities in digital access, and data gaps can lead to spatial inequalities in urban services (Crawford, 2013). For instance, if a municipality seeks to improve road security based on feedback from a smartphone app, the representation will exclude neighbourhoods where many people don't use smartphones.

It is important to recognise biases throughout data collection as these will have cascading effects. Demographic groups must be represented with care while aiming for as much balance as possible. Especially when local authorities do not implement these algorithms themselves, it is important to share the local contextual knowledge they have in order to represent people's needs (see recommendation #2 in section 5.2).

RELATED RISKS: Regulatory breach, Lack of team diversity and inclusivity, Misalignment between AI and human values, Historical bias, Presence of sensitive or proxy information, Geographic misalignment, Negative feedback loops, Unaudited algorithm purchase, Societal harm, Data drift

QUESTIONS

• Which demographic groups are represented in the dataset used to train the algorithm?
• Does sufficient information exist about the dataset to understand its inclusions, exclusions and potential
biases? Is there any proper documentation?
• Has the dataset been audited for demographic balance?
• Does the dataset include the entire population or is it a sample from a larger dataset?
If the dataset is a sample from a larger dataset (Gebru et al., 2021):
• What does the entire dataset consist of?
• Is the sample representative of the entire dataset? How was this representation validated?
• If the dataset is not representative of the entire dataset, how will you correct for the non-represented or
underrepresented classes?
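
One simple way to start answering these questions is to compare group shares in the dataset against an external reference such as a census. A minimal sketch, with all counts and shares invented:

```python
# Dataset counts versus reference population shares (all figures invented).
dataset_counts = {"group_a": 820, "group_b": 150, "group_c": 30}
census_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

total = sum(dataset_counts.values())
for grp, count in dataset_counts.items():
    share = count / total
    ratio = share / census_share[grp]
    # Ratios far from 1.0 flag over- or undersampling of that group.
    print(f"{grp}: dataset {share:.1%} vs census {census_share[grp]:.1%} "
          f"-> representation ratio {ratio:.2f}")
```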


GEOGRAPHIC MISALIGNMENT

The risk of geographic misalignment arises when data collected in a particular geographical context is used to train an AI in a different place. This risk, or portability trap, is common in data-scarce environments, especially in the Global South, where pre-trained systems are often imported.

Geographic misalignments have serious potential to enact biases against local populations. For example, an AI system which is trained to assess loan eligibility in the context of a country with higher wages will show biases against a population with lower income in a different country (University of Pretoria, 2018). An autonomous vehicle calibrated for highway driving would not perform well in a chaotic urban environment where pedestrians cross unexpectedly (Gandhi and Trivedi, 2008).

It is important to make sure that geographic misalignment does not create conflict between the algorithmic system's objectives and the assumptions embedded in a dataset. City managers should be aware of the provenance of any system they choose to use in the local context and demand transparency with respect to the training process of the algorithms. Mitigating this risk will rely on the local expertise of the contractors and the understanding of the technological process.

RELATED RISKS: Lack of team diversity and inclusivity, Misalignment between AI and human values, Historical bias, Inadequate demographic representation, Unintentional breach of safety, Lack of reliability and robustness, Outcome misinterpretation, Unaudited algorithm purchase, Societal harm, Data drift

QUESTIONS

• Can the required dataset be gathered locally?
• If not, what are the differences between the context presented in the dataset and the one where the system will be deployed?
• Are the populations or classes represented in the dataset and those present in the environment properly aligned?
• How will classes missing from the training dataset but present in the local context be accounted for?
• What assumptions are made about the dataset for it to work in this particular geography?
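
A lightweight check for the portability trap is to compare the distribution of a key feature in the training data against a sample from the deployment context. The sketch below uses synthetic income data and a two-sample Kolmogorov-Smirnov test; the figures are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Synthetic annual incomes: training context (higher-wage country) versus
# a sample gathered in the local deployment context.
train_income = rng.normal(loc=52_000, scale=9_000, size=1_000)
local_income = rng.normal(loc=14_000, scale=5_000, size=1_000)

# The KS statistic ranges from 0 (identical distributions) to 1 (disjoint);
# a high value warns that assumptions baked into the training data may not
# hold where the system will actually run.
stat, _ = ks_2samp(train_income, local_income)
print(f"distribution shift (KS statistic): {stat:.2f}")
```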


PRESENCE OF SENSITIVE
OR PROXY INFORMATION
An important risk arises when sensitive information is used to train the AI as a basis for generating outcomes. A feature is considered sensitive if it is about characteristics that are discriminated against, such as gender, age, ethnicity or sexual orientation (Amir Haeri and Zweig, 2020). This can create discriminatory patterns in AI outcomes.

For example, social welfare programs may choose to use a recommendation algorithm as support for a human caseworker. If the training dataset contains sensitive information about applicants, such as residency status, gender or marital status, the algorithm's outcome may base its suggestion on demographics rather than on information relevant for the welfare program.

Removing sensitive attributes is not necessarily enough (Prince and Schwarcz, 2020), as discrimination is often interconnected with many different aspects of someone's data. A dataset can contain proxy information which connects to sensitive information. Proxies can uncover hidden correlations, such as between social status and postal codes (Krieger et al., 2003).

Sensitive information will not necessarily pose risks. In medical datasets, it is often impossible to avoid the use of sensitive data. However, it is crucial to assess whether sensitive information is essential, whether developers and deployers have considered the downstream consequences of using this information, and whether there is a system in place to audit for discriminatory outcomes. It is important to engage in mitigation practices around sensitive applications, such as conducting a risk assessment on data usage and on which variables may become proxies. Data governance and strategy are needed to monitor data provenance and users.

RELATED RISKS: Regulatory breach, Misalignment between AI and human values, Manipulation and abuse through AI, Historical bias, Inadequate demographic representation, Violations of privacy in data collection, Outcome misinterpretation, Societal harm

QUESTIONS

• Which sensitive variables are present in the dataset? Sensitive variables may include information about an individual's gender, age, ethnicity or sexual orientation.
• What are the proxy variables present in the dataset?
• Are all the sensitive or proxy variables used in the dataset necessary for
generating outcomes?
• Is the information in the dataset aligned with the intended use of the AI system?
• How are the proxy attributes measured to assess whether they are actually
representative of the variables they are meant to represent?
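
One of the questions above, how to measure whether a variable acts as a proxy, can be probed by checking how well the candidate variable predicts the sensitive attribute. A minimal sketch over invented records:

```python
import pandas as pd

# Invented records: does postal_code act as a proxy for ethnicity?
df = pd.DataFrame({
    "postal_code": ["A1", "A1", "A1", "B2", "B2", "B2"],
    "ethnicity":   ["x",  "x",  "y",  "y",  "y",  "y"],
})

# For each postal code, the share of its most common ethnicity: values near
# 1.0 mean the postal code effectively reveals the sensitive attribute.
proxy_strength = (
    df.groupby("postal_code")["ethnicity"]
      .agg(lambda s: s.value_counts(normalize=True).max())
)
print(proxy_strength)
```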


VIOLATIONS OF PRIVACY
IN DATA COLLECTION

The risk related to privacy violation arises when data collection gathers information about individuals without their consent. Preserving privacy is critical; despite increasing regulatory data protection requirements, personal data such as purchase history information, credit scores, or even sexual or political preferences can easily be gathered without consent (Chui et al., 2018) (see the "presence of sensitive or proxy information" portion of section 4.2.3.1).

Public-sector AI systems often deploy in public spaces and result in collecting vast amounts of data from individuals. The more data collected, the higher the risks of unintended privacy violations, along with cascading impacts on fairness, regulatory compliance and security. Once the data is collected, data subjects often have no control over how it will be used. For instance, facial recognition tools adopted for street surveillance in Buenos Aires have collected sensitive personal data, including that of children, which was later integrated into a criminal profiling dataset (Hao, 2020). Collecting private data demands higher security protocols, as private information poses significant security risks to the civilians whose information has been gathered (see the "insufficient system security" and "insecure data and algorithm storage" portions of section 4.2.4.1).

These risks are much higher when data collection is done by an AI system. Algorithms are currently unable to detect bias, overrepresentation, imbalance, geographic misalignment and other data input risks, which raises the chance of potential unfair outcomes. Without human oversight, these risks are even less likely to be noticed.

RELATED RISKS: Lack of mission transparency, Regulatory breach, Manipulation and abuse through AI, Presence of sensitive or proxy information, Insufficient privacy protection, Insufficient system security, Societal harm

QUESTIONS

• How was the dataset collected? By which actors? With what consent?
• Is the collected data absolutely necessary? What is the minimum required?
• Was the data collection process tailored to the needs of the local population?
• Does the procedure of data collection comply with privacy guidelines?
• Is the data properly anonymised, containing only essential attributes? What
privacy-preserving practices should be used to protect the collected data?
• Was the data collected by an algorithm? If yes, how was the data evaluated to
ensure privacy protection? Who is responsible for monitoring whether the data
collected is balanced in its demographic representation?
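
A basic anonymisation check that follows from these questions is k-anonymity: every combination of quasi-identifiers should be shared by at least k individuals, otherwise someone can be singled out. A minimal sketch over invented records (the choice of quasi-identifiers and of k are assumptions to be set locally):

```python
import pandas as pd

# Invented collected records with two quasi-identifiers.
df = pd.DataFrame({
    "age_band":    ["20-29", "20-29", "30-39", "30-39", "30-39"],
    "postal_code": ["A1",    "A1",    "B2",    "B2",    "C3"],
})

k = 2  # minimum acceptable group size (an assumption, not a standard)
group_sizes = df.groupby(["age_band", "postal_code"]).size()
violations = group_sizes[group_sizes < k]
print(f"quasi-identifier groups violating {k}-anonymity:")
print(violations)
```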


4.2.3.2. Implementation risks: Algorithm design

LACK OF TRANSPARENCY
AND INTERPRETABILITY

Risks related to transparency and interpretability arise when decision-makers cannot understand the reasoning behind an algorithm's output, predictions or decisions due to its design. Algorithms produce outputs based on mathematically deduced reasoning, but as they are moved into more complex situations, it can be hard to decipher how deductions are made and based on what attributes (Dhinakaran, 2021).

The design choices made when selecting the architecture for an algorithm affect transparency and interpretability. There are many styles of algorithms which cannot be interpreted by design (Lipton, 2016). This lack of transparency can be very problematic if an algorithm begins to exhibit incorrect or unfair outcomes, because it is difficult to unravel what went wrong and extremely difficult to correct the problem. Algorithms base their reasoning on the intricacies of their dataset rather than on reality (see the "inadequate demographic representation" portion of section 4.2.3.1), so outputs can derive from irrelevant factors.

For example, an algorithm used to detect welfare fraud in the Netherlands was found to be discriminatory. It unfairly charged families thousands of euros, and the political cabinet resigned over the ensuing scandal. Investigations couldn't ascertain why the algorithm flagged a person for fraud if they owned multiple vehicles or garages. The rules of how these were codified were unclear, due to a lack of interpretability in the algorithm's design (Bekkum and Borgesius, 2021).

Therefore, decision-makers should carefully consider the risks of using architectures that do not allow for transparency or interpretability. While incredible innovations have been achieved on specific AI tasks, these successes have not included untangling AI's deductive patterns.

RELATED RISKS: Uncertain accountability, Regulatory breach, Manipulation and abuse through AI, Outcome misinterpretation, Stacking of faulty AI, Lack of explainability, Unaudited algorithm purchase, Societal harm, Concept drift

QUESTIONS

• What are the inputs to the algorithm and how does it produce an outcome?
• How transparent is the architecture of the algorithm?
• What protocols are in place to enable independent audits of algorithms?
• Who is responsible for monitoring the output of the algorithm? How and when
do they communicate with those designing it?
• How can various goals be managed and prioritised in a transparent manner?
• What is the contingency plan for monitoring the application’s performance?
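
Where the stakes allow it, one mitigation is to prefer an architecture that is interpretable by design. The minimal sketch below fits a logistic regression on synthetic data; unlike an opaque model, its learned coefficients can be read directly by an auditor. Feature names and data are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three synthetic input features
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)  # synthetic target

# Each coefficient states how a feature pushes the decision, so the rules can
# be disclosed, audited and challenged; many deep architectures offer no such
# direct reading.
model = LogisticRegression().fit(X, y)
for name, coef in zip(["income", "household_size", "debt"], model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
```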


LACK OF RELIABILITY
AND ROBUSTNESS

Risks related to reliability and robustness arise when unexpected conflicts occur as an AI system is deployed. Reliability refers to the likelihood that a system will perform well in its real environment, while robustness refers to an algorithm's behaviour when something hinders its ability to function (Zissis, 2019). Although these risks arise during deployment, mitigation steps must occur during the implementation phase. Once the algorithm has been deployed, it is often too costly to make structural changes.

Robustness and reliability go hand in hand: in order for an AI system to be reliable, it must be robust to the variety of unforeseen factors inherent in real-world contexts. Unlike the controlled laboratory setting, in the real world, data may be incomplete, noisy or potentially adversarial. The ability of an AI system to react appropriately to such "abnormal" conditions and maintain operations during a crisis is broadly defined as "algorithmic robustness" (Xu and Mannor, 2012).

For example, an autonomous vehicle trained using a grid map of a specific city will inevitably encounter a variety of unexpected deviations when navigating a real environment. Robustness tests would make the algorithm account for unexpected deviations, such as road work closures. Reliability tests would demand that regardless of where the car began, it would successfully navigate to the destination.

There are serious consequences to a system which has been inadequately tested for robustness and reliability. The degree to which the robustness of an AI system should be assessed and prioritised can change depending on the autonomy and safety implications of the system (see the "unintentional breach of safety" portion of section 4.2.3.2).

RELATED RISKS: Inadequate infrastructure, Unintentional breach of safety, Stacking of faulty AI, Unaudited algorithm purchase, AI system expiration
tions and maintain operations during a

QUESTIONS

• How does the system react to noise, i.e., when it is given many inputs with very
slight differences?
• How does the algorithm react to anomalous data or environments?
• How has the system been tested for reliability and robustness?
• Have failures of the system been documented? What happens when the system
fails or when failures are documented?
• What are the negative consequences that can arise from a system failure?
Is there a backup plan for the occurrence of these situations?
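
A simple robustness probe along the lines of the first question above is to perturb inputs with slight noise and count how many decisions flip. Everything in this sketch (model, data, noise scale) is an invented stand-in for the system under test.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))             # synthetic inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic labels
model = RandomForestClassifier(random_state=0).fit(X, y)

# Add small perturbations and measure decision stability: a robust system
# should not change its outcomes under near-identical inputs.
noise = rng.normal(scale=0.05, size=X.shape)
flip_rate = (model.predict(X) != model.predict(X + noise)).mean()
print(f"decisions changed by slight input noise: {flip_rate:.1%}")
```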


UNINTENTIONAL
BREACH OF SAFETY

The risk of unintentional safety breaches arises especially when algorithms are deployed in safety-critical situations. AI safety includes the minimisation of the risk, uncertainty and potential harm incurred by unwanted outcomes (Varshney and Alemzadeh, 2017). Outside of safety issues related to adversarial attacks, the size and complexity of AI systems in an urban context make them vulnerable to human errors.

Errors can be catastrophic because an AI system has the potential to cause great unintentional harm if used improperly. Algorithms can cause harm even without being hacked, because maximising the wrong objective function (see the "misalignment between AI and human values" portion of section 4.2.2.1) can have unforeseen negative consequences when the system is deployed (Amodei et al., 2016). For example, a system which is used to map evacuation routes is highly susceptible to issues of safety if it is not properly monitored for the equal treatment of various neighbourhoods and their residents (Rohaidi, 2017). Accidents can also occur when the AI system lacks robustness. When deployed in a new environment, algorithms may exhibit poor performance because of data representation issues (see the "geographic misalignment" portion of section 4.2.3.1). Such AI systems can then commit harmful actions without even realising they are harmful, and as such, not raise any alarm (Amodei et al., 2016).

AI safety is highly connected to reliability and robustness because these elements make it possible to mitigate for risks related to new environments (see the "lack of reliability and robustness" portion of section 4.2.3.2). Having strong governance principles in place is an important step towards minimising the risks associated with the safety of algorithms (Falco et al., 2021) (see section 2.2).

RELATED RISKS: Financial burden, Regulatory breach, Manipulation and abuse through AI, Misalignment between AI and human values, Lack of reliability and robustness, Negative feedback loops, Stacking of faulty AI, Insufficient system security, Societal harm

QUESTIONS

• Is the AI system being deployed in a safety-critical environment? If yes, what is the procedure in place in case of accidents?
• Is human oversight integrated into the design of the algorithm?
• What negative consequences could arise from the desired application of the AI
system? How are those negative consequences mitigated?


INSUFFICIENT PRIVACY
PROTECTION

The risk of insufficient privacy protection arises when design choices made during the implementation stage of an algorithm leave it vulnerable to adversarial invasions of privacy. Beyond initial privacy considerations (see the "violations of privacy in data collection" portion of section 4.2.3.1), crucial privacy decisions also arise once the algorithm begins reasoning. Some technical choices, such as overparameterisation, can significantly increase the risks of privacy attacks against AI systems and their data points (Tan et al., 2022). This is because overparameterisation increases the chances of an algorithm absorbing detailed information from the dataset instead of inferring broad rules from data patterns.

The vulnerabilities caused by insufficient privacy-protecting design choices can be exploited in many different ways (see the "insufficient system security" portion of section 4.2.4.1). For instance, they may enable outcomes to be easily cross-referenced back to the memorised training source and, in doing so, reveal personal information about individuals. This is particularly concerning in situations where source data contains sensitive information, such as in the education or health sectors.

Privacy attacks evolve and can take multiple forms. Even with privacy-preserving design, due diligence is still required. Failure to account for the risks of privacy attacks against AI systems and their data can have spillover effects in other phases and, ultimately, on the lives and rights of the individuals concerned (Tan et al., 2022).

RELATED RISKS: Regulatory breach, Manipulation and abuse through AI, Misalignment between AI and human values, Violations of privacy in data collection, Insecure data and algorithm storage, Societal harm
privacy-protecting design choices can

QUESTIONS

• Do the inner workings of the algorithm save information regarding the initial
training dataset?
• Is the algorithm’s decision-making process secure? How do you know?
• Can the outcome of the AI system be used to retrieve information on the
individuals involved in the training step?
• What are the mechanisms used to ensure that the input from users is kept
private?
• What measures are being taken to secure identifying attributes, such as a user’s
location?
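
One indicator of the memorisation problem described above is the gap between a model's confidence on records it was trained on and on records it has never seen; a large gap is one precondition for membership-style privacy attacks. The sketch below deliberately trains on random labels so the model can only memorise; all data is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 8))
y = rng.integers(0, 2, size=400)  # random labels: learnable only by memorising

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# If confidence is much higher for training members, an adversary can often
# tell who was in the training set from the model's outputs alone.
conf_members = model.predict_proba(X_tr).max(axis=1).mean()
conf_outsiders = model.predict_proba(X_te).max(axis=1).mean()
print(f"avg confidence on training members: {conf_members:.2f}")
print(f"avg confidence on non-members:      {conf_outsiders:.2f}")
```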


HIGH ENERGY CONSUMPTION

An important concern with the implementation of AI systems is the energy consumption necessary, especially to power the training process of large systems. The choice of algorithm architecture also has significant impacts on energy consumption. Deep learning methods, which employ complex neural networks, have extensive carbon footprints associated with the high energy cost of training such a system (Gebru et al., 2021). It is difficult to predict exactly how much energy an algorithm will consume. Complex systems can end up demanding much more throughout their life cycle than was initially expected.

In cities, such increasingly high energy demands could not only incur increasing costs to the municipality and impact energy production, but also contribute negatively to the local environment in which they run. Potentially simpler algorithm designs may be more appropriate before moving to complex deep learning methods. It is also possible to integrate pre-trained algorithms, which allows system owners to avoid the prohibitive energy consumption costs of training a massive AI system from scratch (see the "unaudited algorithm purchase" portion of section 4.2.4.1).

Crucially, stakeholders should be aware of the direct impact that the complexity of a system will have on the amount of energy required to power it, from both a financial (see the "financial burden" portion of section 4.2.1.2) and an environmental standpoint.

RELATED RISKS: Inadequate infrastructure, Financial burden, Digital divide, Misalignment between AI and human values, Insecure data and algorithm storage, AI system expiration, Societal harm

QUESTIONS

• What are the energy costs associated with training and deploying the AI system?
• What design choices can mitigate these energy costs?
• Do these energy costs outweigh the potential benefits offered by the algorithm
itself?
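
Even a back-of-the-envelope estimate helps answer the first question above. The sketch below multiplies hardware power draw, training time, data-centre overhead and grid carbon intensity; every figure is an illustrative assumption to be replaced with the project's own measurements.

```python
# All figures below are illustrative assumptions, not reference values.
gpu_count = 8
gpu_power_kw = 0.3           # average draw per accelerator, in kilowatts
training_hours = 24 * 14     # a hypothetical two-week training run
pue = 1.5                    # data-centre overhead (power usage effectiveness)
grid_kg_co2_per_kwh = 0.4    # carbon intensity of the local electricity grid

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
emissions_kg = energy_kwh * grid_kg_co2_per_kwh
print(f"estimated training energy: {energy_kwh:,.0f} kWh")
print(f"estimated emissions:       {emissions_kg:,.0f} kg CO2")
```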


4.2.3.3. Implementation risks: Outcome generation

LACK OF EXPLAINABILITY

Explainability refers to a process of deciphering an outcome, regardless of the design choices behind it. The risk of lack of explainability refers to the ability of human decision-makers to understand not only the outcome of an AI system, but also the variables, parameters and steps involved in the algorithmic decision process (Hussain et al., 2021). Transparency and interpretability are related to the design choices of the algorithm's architecture which disclose the AI system's reasoning.

The need for explainability arises when an algorithm is not designed in a transparent manner. As a consequence, the only way to explain the outcomes produced by the system is to trace back the rules guiding its decision-making from the interactions between the original training data and its generated outcomes (Thakker et al., 2020).

A lack of explainability can have a significant impact on the trustworthiness and social acceptance of these systems (Thakker et al., 2020) (see the "algorithmic aversion" portion of section 4.2.4.1). Stakeholders must be able to reason critically about the outcomes and functioning of an algorithm in non-technical terms. This is crucial for maintaining social trust (Beroche, 2021). Similarly, for regulatory purposes, it is often necessary for decision-makers to be able to justify outcomes. For example, in many countries in Latin America, public institutions are required to justify their decision-making process so that citizens have a right to challenge an outcome (Gómez Mont et al., 2020).

It is important to adopt explainability frameworks before deploying an AI system. Developers must be able to understand how the system works in order to identify and prevent problems from occurring, and policymakers must be able to understand potential biases or unethical behaviours that could arise (Thakker et al., 2020).

RELATED RISKS: Lack of mission transparency, Mission creep, Misalignment between AI and human values, Lack of transparency and interpretability, Unintentional breach of safety, Outcome misinterpretation, Unaudited algorithm purchase, Societal harm

QUESTIONS

• Can the algorithm’s inner workings and outcomes be explained?


• Are the outcomes of the algorithm used in safety-critical situations?
• Is there human oversight over the algorithm’s decision-making process?
• Has there been an assessment of the alignment between the algorithm’s
performance and the desired outcomes?
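
When the architecture itself cannot be made transparent, post-hoc techniques can still produce a non-technical account of which variables drive outcomes. The sketch below uses permutation importance from scikit-learn on synthetic data; the feature names are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 3))
y = (X[:, 1] > 0).astype(int)   # only the second feature actually matters
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle one input at a time and record the drop in performance: large drops
# identify the variables the system's outcomes really depend on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["feature_a", "feature_b", "feature_c"],
                     result.importances_mean):
    print(f"{name}: importance {imp:.3f}")
```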


OUTCOME MISINTERPRETATION

The risk of outcome misinterpretation arises when human decision-makers must apply the outcome of an algorithm to their decision-making process. This risk is closely linked to explainability, but it occurs at the end of an algorithmic interaction, whereas explainability is related to understanding an algorithm's inner workings. This risk comes from the difficulty of interpreting how an algorithmic objective can be translated back to a human one. There are two aspects to outcome misinterpretation: lack of education on what an outcome represents and a blind trust in AI systems.

Lack of education becomes an issue when users who have no technical expertise are asked to use the predictions of those systems to make high-stakes decisions (Zytek et al., 2021). Depending on the formulation of a mathematical objective, it can be difficult to understand what the outcome actually means in that particular context. For example, social workers using an AI system for child welfare screening had difficulties using the outcomes produced by the algorithm because they didn't understand the output; while they used to make yes-or-no decisions, the algorithm provided a score from one to 20 (Zytek et al., 2021).

The tendency toward blind trust can aggravate issues of misinterpretation because that tendency can override a person's suspicions that the suggestions are not valid (Janssen et al., 2022). For example, tourists in Amsterdam were found biking within a highway tunnel because they had been directed to do so by a GPS system (Licheva, 2018).

It is important for decision-makers to mitigate outcome misinterpretation by promoting education, building capacities and exercising sceptical oversight of AI outcomes. In situations in which an AI system is given autonomous power without consistent human intervention, systemic risks could go unnoticed.

RELATED RISKS: Lack of mission transparency, Lack of team diversity and inclusivity, Manipulation and abuse through AI, Lack of transparency and interpretability, Lack of explainability, Stacking of faulty AI, Unaudited algorithm purchase, Inadequate user feedback integration, Societal harm

QUESTIONS

• Are the conclusions reached by the AI system understandable for a non-technical person?
• Can the outcome of the AI system be directly used for decision-making or does
it need to be processed first?
• Is there a direct relation between the outcome of the algorithm and the decision
it is being used for?
• How are outcomes being monitored?


STACKING OF FAULTY AI

The risk of stacking of faulty AI results from composition, or algorithmic stacking, which occurs when AI applications are combined. Although AI applications are often perceived as stand-alone products, in practice they are often integrated into a larger network of decision-making systems that work together. Algorithmic stacking can happen on different levels within a single AI system or when multiple AI systems are intertwined into an AI ecosystem.

The risks of stacking AI are twofold. First, if two algorithms that produce incompatible outcomes are integrated under the same system, they may not achieve optimal performance. Second, if many algorithms are combined, existing risks can be amplified or propagated. Privacy, fairness, explainability and other issues in a single algorithm could compromise the entire network (Dwork and Ilvento, 2018). Ultimately, fixing all risks of a single AI system in isolation is not sufficient when multiple AI systems are used to build a solution.

Consider any autonomous vehicle: the self-navigation system is the byproduct of many individual algorithms, each pursuing their own particular outcomes and working together. One algorithm will be tasked with processing input from sensors, another with integrating these sensor readings with the navigation control, and yet another will monitor the speed. At the ecosystem level, consider the complex infrastructure behind an integrated mobility system for cities, including bike-sharing, buses, automated street lights, weather prediction and traffic conditions. Beyond functioning individually, these systems are interconnected, and the service as a whole depends highly on each system's performance.

RELATED RISKS: Mission creep, Inadequate infrastructure, Uncertain accountability, High energy consumption, Negative feedback loops, Unintentional breach of safety, Unaudited algorithm purchase, Societal harm, AI system expiration

QUESTIONS

• What are the downstream effects of integrating the new algorithm into an
existing AI ecosystem?
• How compatible are the algorithms which have been used in the target
application?
• How does the nature of each algorithm differ if the systems are used in tandem
to produce outcomes?
• Have the algorithms within the AI system been audited separately or in tandem?
• Are procedures in place to address inconsistencies and incongruences between
the algorithms’ outputs?
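
The amplification described above can be seen with simple arithmetic: if stages fail independently, the reliability of the stack is roughly the product of the stages' reliabilities. A toy calculation (the 95% figure is an assumption):

```python
# Two chained stages that are each individually "good" and assumed independent.
p_perception = 0.95   # probability the perception stage behaves correctly
p_navigation = 0.95   # probability the navigation stage behaves correctly

# The pipeline is only correct when every stage is, so reliability compounds.
print(f"two-stage stack: {p_perception * p_navigation:.1%}")   # ~90%
print(f"ten-stage stack: {0.95 ** 10:.1%}")                    # ~60%
```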


NEGATIVE FEEDBACK LOOPS

Negative feedback loops are a risk related to real-time algorithms which continue to gather data while deployed. These algorithms can augment their dataset with the responses they receive while interacting with their environment. One issue with this process is that the algorithm's choices participate in the building of its notion of reality (Liu, 2020). Intuitively, this is similar to an algorithmic version of confirmation bias (Nickerson, 1998).

An example of this negative feedback loop is illustrated in the problematic use of predictive policing (Hao, 2019). Imagine that police are dispatched to the locations indicated by an algorithm. A higher concentration of policing enables a higher rate of crimes to be reported. This suggests to the algorithm that more policing is required in that area. Consequently, we see a negative feedback loop where the algorithm's predictions begin to shape its reality rather than the other way around. This risk is exacerbated with the potential for biased outcomes (Jahnke, 2018) (see the "presence of sensitive or proxy information" and "violations of privacy in data collection" portions of section 4.2.3.1).

Another aggravating factor is when algorithms gather their own performance feedback. This is very common in recommendation systems, which either explicitly query users about their feelings related to the recommendations or infer them (Oard and Kim, 1998). Specifically in this context, there is a large potential for miscommunication between an algorithm and the user, because implicit feedback (i.e., the movement of a user's cursor or whether or not they clicked on a link) is "noisy," or not truly indicative of their underlying feelings. Over time, these misunderstandings can accumulate, instigating unintended changes in the algorithm.

Especially in contexts where an AI system is also integrated into an iterative data collection process, it is important to consider human oversight frameworks to monitor such AI systems that are likely to create negative feedback loops.

RELATED RISKS: Uncertain accountability, Mission creep, Misalignment between AI and human values, Manipulation and abuse through AI, Historical bias, Inadequate demographic representation, Presence of sensitive or proxy information, Violations of privacy in data collection, Lack of transparency and interpretability, Inadequate user feedback integration, Societal harm

QUESTIONS

• How do the decisions made by the algorithm impact its environment?


• Is the algorithm using its own decisions as an input for the next round of
predictions?
• Does the algorithm interact equally with different population groups?
• Is a process in place for monitoring the effects of the algorithm on its
environment?
• Is a system in place for comparing the initial data distribution with the
augmented one?
• Is the data collected by the AI system using explicit or implicit feedback?
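
The predictive-policing dynamic described above can be reproduced in a few lines. In the simulation below, two districts have identical true incident rates, but observed reports depend on where patrols go, and the next allocation follows the reports; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
true_rate = np.array([0.10, 0.10])   # two districts with identical true risk
patrol_share = np.array([0.5, 0.5])  # start with an even patrol allocation

for _ in range(10):
    # Reports only surface where patrols are present (observation bias).
    reports = rng.binomial(100, true_rate * patrol_share)
    # Next allocation follows the reported data (+1 smoothing avoids division
    # by zero); the loop feeds the algorithm its own outputs back as input.
    patrol_share = (reports + 1) / (reports + 1).sum()

print("final patrol shares for equal-risk districts:", patrol_share.round(2))
```

Even though both districts are identical, random fluctuations get reinforced round after round, which is why the questions above ask for a comparison between the initial data distribution and the augmented one.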


4.2.4.

PHASE 4

Deployment
The deployment phase of an AI system's life cycle includes all the risks associated with releasing an AI system into a real environment. Despite the mitigation efforts made during the implementation phase, some risks only come to light once a system has been deployed. Broadly, these are associated with security and societal acceptance.

Deploying an AI system at scale involves various levels of resources and financial planning. Appropriate computing and human capital are needed in order for solutions to be adopted widely and efficiently. Similarly, if an AI system is to be adopted widely, decision-makers need to make sure that the impacted population agrees with the AI in its form and scope. Lastly, an AI system needs to be secure from malicious attacks. In practice, malicious attacks represent one of the greatest threats to the wellbeing of a deployed system. Malicious actors can have a variety of motivations, from political reasons to an arbitrary desire to create mayhem.

For many municipalities, this may actually be the first phase in which they directly engage, if the algorithmic system has been purchased "out of the box" or procured. Regardless of whether an algorithm was developed or purchased, the risks associated with deployment must be appropriately addressed.


4.2.4.1. Deployment risks

UNAUDITED ALGORITHM PURCHASE

The risk related to unaudited algorithm purchase arises when decision-makers purchase a previously implemented algorithm and directly deploy it without considering the risks. There has been a noticeable rise in the purchase of "out-of-the-box" algorithmic solutions; most solutions are marketed towards medical (Davenport and Kalakota, 2019; Spatharou et al., 2020; Quinn et al., 2022), judicial (Rissland et al., 2003; UNESCO, 2020; Bench-Capon et al., 2012) and educational (Luckin et al., 2016) domains.

Many issues may arise from the purchase of a previously implemented algorithm, including those related to misalignment, transparency, safety and misuse (see the "mission creep" portion of section 4.2.1.2, the "misalignment between AI and human values" portion of section 4.2.2.1, the "geographic misalignment" portion of section 4.2.3.1, the "lack of transparency and interpretability" and "unintentional breach of safety" portions of section 4.2.3.2, and the "outcome misinterpretation" and "stacking of faulty AI" portions of section 4.2.3.3). Decision-makers purchasing AI systems must consider the degree of transparency required. The owners of pre-made systems will always have less autonomy than those who develop from scratch; risk assessment during purchasing may be challenging, as companies may refuse to reveal the details of proprietary algorithms. It is also possible that neither developers nor deployers will be able to make architectural modifications to the algorithm after purchase.

Consider the problems that arose when local police departments in the United States purchased out-of-the-box facial recognition systems to identify crime suspects. Since they lacked the understanding required by the algorithm's design, they used forensic sketches rather than pixelated images as data inputs. Given this mismatch, as well as the critical context, the algorithm's deployment led to poor performance and discriminatory consequences on the ground (Garvie, 2019).

RELATED RISKS: Skills shortage, Financial burden, Mission creep, Misalignment between AI and human values, Historical bias, Inadequate demographic representation, Geographic misalignment, Lack of transparency and interpretability, High energy consumption, Lack of explainability, Data drift, Concept drift, AI system expiration

QUESTIONS

• Has a comprehensive risk assessment been conducted, addressing the risks outlined in this document?
• Are domain experts available locally to evaluate the efficacy of the system?
• What are the responsible AI practices of the AI system’s designer? How has the algorithm been tested?
How does the algorithm perform, what are its limitations, and how transparent is this information?
• How can the system be fine-tuned to best match the needs of the project?
• Is the code open source? Can it be modified? Who will do so?
• Does purchasing the system involve sharing local users’ data with a private organisation?


ALGORITHMIC AVERSION

The risk of algorithmic aversion arises when the society's response to an algorithmic solution impairs the solution's ability to perform optimally. Algorithmic aversion looks like the avoidance of engagement with, or even the boycott of, an AI system by the end users it was intended to cater to. Avoidance can cause societal unrest, undue harm and financial loss for municipalities. This risk can extend beyond a single algorithmic system.

Algorithmic aversion can arise when citizens are not sufficiently informed. For example, patients in hospitals show significant apprehension when informed that their diagnosis was made with the help of an AI system (Richardson et al., 2021). This avoidance can be exacerbated by unreasonable expectations of an algorithm's performance. Studies have found that people are quick to lose confidence in the ability of an algorithm once they have seen it make a single mistake (Dietvorst et al., 2015).

General lack of trust in society and governance can also contribute to the likelihood of rejection of a proposed algorithmic solution. Since AI systems rely on interactions with an environment to prove their use case, without the necessary engagement, an algorithmic solution cannot be assessed or improved. Trust in algorithms is a double-edged sword (see the "outcome misinterpretation" portion of section 4.2.3.3). It is important to cultivate a proper environment around an AI system so that it can be trusted, yet also properly scrutinised.

RELATED RISKS: Lack of mission transparency, Distress in the local labour market, Uncertain accountability, Inadequate infrastructure, Mission creep, Misalignment between AI and human values, Manipulation and abuse through AI, Violations of privacy in data collection, Lack of explainability, Outcome misinterpretation, Unaudited algorithm purchase, Insufficient system security, Societal harm, AI system expiration

QUESTIONS

• What are the positive and negative effects of the AI system on impacted communities? Will the system be integrated into citizens' daily lives?
• How will the positive and negative impacts of the AI system be communicated to the community?
• How has the impacted community reacted towards previous algorithmic systems?
• What level of public engagement will the proposed system require in order to satisfy its mission statement?
• Does the public have the knowledge to evaluate the new system with respect to their civic rights and needs?


INSUFFICIENT SYSTEM SECURITY

The risk associated with insufficient system security arises when security vulnerabilities are exploited by third parties, through either malicious use or cyberattacks (PwC, 2018). Ultimately, a breach in security puts the personal information and lives of citizens at risk (Gómez Mont et al., 2020).

There are many places in the AI pipeline where vulnerabilities can be exploited. For instance, cyber-attackers can target training data through the use of data poisoning, where changes to the initial training set affect performance later (Newaz et al., 2020). By modifying the images of the traffic signs received by an automated vehicle, an attacker can make it behave unsafely and cause accidents (Ahmad et al., 2021). Similarly, hackers can attack the privacy of individuals through membership attacks, where the system reveals identifying information from users involved in training. A membership attack could be used, for instance, to expose patients' discharge from a specific hospital (Shokri et al., 2016).

On a system level, hackers can deploy a model inversion attack to reconstruct the deductive process of an algorithm (Zhang et al., 2021). The attackers can then create a fake version of the actual AI system (Krishna, 2020). Consider the consequences for the safety and wellbeing of citizens should such an attack be used on a grid used to monitor water usage across a city (International Telecommunication Union, 2020).

RELATED RISKS: Regulatory breach, Inadequate infrastructure, Financial burden, Misalignment between AI and human values, Lack of reliability and robustness, Lack of transparency and interpretability, Algorithmic aversion, Societal harm, AI system expiration

QUESTIONS

• Has the AI system been tested for vulnerabilities? Have those vulnerabilities
been documented?
• How secure is the system against malicious attacks?
• What is the contingency plan in case of an attack?
• What are the consequences of an attack? How severe are these? Who will be
most impacted and how?
• How is the affected population going to be protected in case of a malicious
attack?
• Should the algorithm be deployed considering the potential consequences?
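To make the membership inference threat concrete, the minimal sketch below (in Python, using scikit-learn on synthetic data; all names are illustrative) probes a toy classifier by comparing its confidence on records it was trained on against records it has never seen. Real security audits use stronger attacks, such as shadow models, but the underlying signal is the same.

```python
# Minimal membership inference probe on synthetic data (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def true_label_confidence(model, X, y):
    """Probability the model assigns to each record's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# A model that is systematically more confident on its own training records
# leaks membership: an attacker can guess who was in the training set.
gap = (true_label_confidence(model, X_train, y_train).mean()
       - true_label_confidence(model, X_test, y_test).mean())
print(f"Train/held-out confidence gap: {gap:.3f} (values near 0 leak less)")
```

A large gap is one symptom an auditor can check for before deployment; common mitigations include stronger regularisation and differentially private training.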


INSECURE DATA AND ALGORITHM STORAGE

The risk of insecure data and algorithm storage arises when the storage of any component within an AI system is outsourced to a centralised storage system. Distributed servers where massive datasets and complex algorithms can be stored are increasingly common because of the growing size of algorithms and their datasets (Sagiroglu and Sinanc, 2013).

Cloud-centric architecture is often a contributing factor to the increasing costs and risks of running an algorithm (Khajeh-Hosseini et al., 2010; Lin and Chen, 2012) (see the “financial burden” portion of section 4.2.1.2). Storing private data on a cloud storage architecture which services many different clients can create data privacy and security risks. Thus, outsourcing storage often implies outsourcing security. Without the necessary protections, organisations storing their data on a server can be susceptible to ransomware attacks, where data is held hostage. For example, consider how ransomware attacks could halt food production if they were to target industrial farming grids that depend on complex AI systems (McCrimmon and Matishak, 2021).

Decision-makers who engage with cloud-centric architecture should assess the level of harm that could occur should these situations arise, and carefully evaluate whether server providers have adequate safety and reliability protocols.

RELATED RISKS: Inadequate infrastructure, Financial burden, Digital divide, Unintentional breach of safety, Insufficient system security

QUESTIONS

• How is the data stored? Can the dataset be stored locally?
• How large is the dataset? How much larger can it get over the AI system’s life cycle?
• Who has access to the different datasets and the algorithm? Why? At what level of access?
• Whose data is being stored? What is the impact of a potential security leak? Who will be most negatively impacted and how?
• What security and privacy-protecting protocols are used to protect the data over its lifespan? What about to store the AI system’s information?
• If applicable, what are the implications of relying on private infrastructure for computation and storage?
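One concrete mitigation is to encrypt data on the city’s own infrastructure before it is handed to any storage provider, so that outsourcing storage does not mean outsourcing all security. The sketch below assumes the Python `cryptography` package and a hypothetical file name; key management (rotation, escrow, access control) is the hard part and sits outside this snippet.

```python
# Client-side encryption before outsourcing storage (minimal sketch).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep in a key vault, never beside the ciphertext
fernet = Fernet(key)

# "sensor_readings.csv" is a hypothetical local dataset.
with open("sensor_readings.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

# Only ciphertext leaves the municipal network, so a breach at the storage
# provider alone does not expose residents' data.
with open("sensor_readings.csv.enc", "wb") as f:
    f.write(ciphertext)
```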


4.2.5.

PHASE 5

Maintenance

The maintenance phase occurs after a system has been deployed and has been operating in its environment. The maintenance of AI systems involves monitoring how they interact with end users (such as citizens, residents and people in the city), the environment and the algorithm’s objectives. The purpose is to maintain a connection between the values and mission objectives and the algorithm’s actions over the long term.

Negative downstream effects are difficult to predict. While mitigation techniques are important, the extent and severity of a risk often surface only after the system has been released for some time. The risks of the maintenance phase relate to the consequences once an algorithm has been deployed into the real world for some time.

The maintenance phase describes a cyclical pattern that embodies the iterative process of algorithm design. Despite appearing to be last, this phase is interconnected with the rest of the AI life cycle. It is not unusual for an algorithm to go back through the design, implementation and deployment phases due to risks that arise within the maintenance phase. As a result, structures that enable analysis and redress over time are necessary for truly successful engagement with an algorithmic system.

[Sidebar: the phases of the AI life cycle: Phase 1 Framing, Phase 2 Design, Phase 3 Implementation, Phase 4 Deployment, Phase 5 Maintenance]


4.2.5.1. Maintenance risks

INADEQUATE USER FEEDBACK INTEGRATION

One risk that is unique to the maintenance phase revolves around a lack of action in response to user feedback. Developers can never assume that a deployed system will behave in the ways it was intended. Particularly in an urban setting, where a large majority of algorithms are directly interacting with people in the city, integrating feedback is crucial. For example, consider how difficult it can be to assess a chatbot’s performance in interacting with a broad range of dialects (Babyl, 2018). This risk arises when there is no structure to gather and integrate the feedback provided by those affected by the AI system.

There must be a format for gathering user feedback. Both providers and users of feedback must have a common understanding of the system’s capabilities and limitations for feedback to be meaningful. Without this shared understanding, users would be unable to participate in genuine feedback, and providers would be unable to act on that feedback to make changes. As such, this risk is heavily tied to risks related to mission transparency, outcome misinterpretation, algorithmic aversion, accountability and lack of humans in the loop (Vathoopan et al., 2016) (see the “skills shortage” and “uncertain accountability” portions of section 4.2.1.2).

Finally, user feedback can be considered one of the ways individuals can participate in the evaluation and redesign of an AI system. By engaging citizens with the outcomes of algorithmic solutions, policymakers can build trust while also improving the performance of an algorithm.

RELATED RISKS: Skills shortage, Financial burden, Uncertain accountability, Mission creep, Misalignment between AI and human values, Manipulation and abuse through AI, Lack of explainability, Outcome misinterpretation, Negative feedback loops, Algorithmic aversion

QUESTIONS

• What type of user feedback does the AI system require? What are the negative consequences of not receiving feedback?
• How can users be engaged most effectively? What types of background knowledge are needed for effective participation?
• How do existing feedback processes affect people’s trust in the AI system?
• Does the system explicitly prompt feedback? How effectively does the system track usage?
• What types of user feedback could be assimilated by existing learning algorithms?
• How is the algorithm calibrated for receiving and integrating user feedback?
• What kind of metrics and meanings are assigned to user feedback? What implicit and explicit information could be assimilated?
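As a sketch of what a “format for gathering user feedback” can mean in practice, the record below ties each piece of feedback to a system, a contested decision and a follow-up status, so that feedback can be triaged rather than lost. The field names are illustrative assumptions, not a standard.

```python
# Illustrative structured feedback record (field names are assumptions).
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FeedbackRecord:
    system_id: str                     # which deployed AI system
    channel: str                       # "web form", "311 call", "in person", ...
    message: str                       # the user's own words, verbatim
    decision_id: Optional[str] = None  # the specific output being contested
    needs_human_review: bool = True    # default to a human in the loop
    status: str = "received"           # "received" -> "triaged" -> "resolved"
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```

Even a schema this small forces the questions above to be answered: who reads the feedback, what counts as resolution, and which fields a learning algorithm may later assimilate.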


SOCIETAL HARM

Risks relating to societal harm arise when a system presents widespread unintended negative consequences, particularly when risks from earlier phases remain unaddressed. Given the broad range of situations in which these interactions occur, these negative risks can materialise in emotional, behavioural and physical societal harm if left unaddressed.

For instance, in an example of misalignment between human and AI objectives, consider how predictive policing tools led to a disproportionate targeting of poor neighbourhoods (see the “mission creep” portion of section 4.2.1.2 and the “misalignment between AI and human values” portion of section 4.2.2.1). The use of that AI system may perpetuate feelings of danger and lack of trust among minority groups (see the “inadequate demographic representation” portion of section 4.2.3.1). Even worse, the increase in patrolling may impact targeted communities when altercations with police erupt into violent encounters.

The abuse of an AI system by users themselves is one unforeseen consequence. For example, social media platforms can be used to spread fake news through automated disinformation campaigns (Howard and Woolley, 2018; Vosoughi et al., 2018). Similarly, chatbots with female voices can be used to enable the practice of verbal abuse (Faggella, 2015). The downstream effects of such misuses of AI systems include emotional, behavioural and physical harm, the extent of which can be exacerbated by safety, reliability and robustness issues revealed only during maintenance. For instance, systems that are not properly audited can end up perpetuating harmful behaviours by replicating harmful patterns incorporated into their training dataset. As such, it is important for decision-makers to have frameworks in place for regularly assessing and auditing the performance of algorithms over the long term.

RELATED RISKS: Lack of mission transparency, Distress in the local labour market, Mission creep, Lack of team diversity and inclusivity, Misalignment between AI and human values, Digital divide, Manipulation and abuse through AI, Historical bias, Inadequate demographic representation, Geographic misalignment, Lack of transparency and interpretability, Insufficient privacy protection, Outcome misinterpretation, Negative feedback loops, Unintentional breach of safety, Unaudited algorithm purchase, Algorithmic aversion, Insufficient system security, AI system expiration

QUESTIONS

• What are the mechanisms in place to evaluate, determine and detect societal harm?
• What mechanisms are in place to report misuse or concerns?
• How are the individuals interacting with the AI system being protected?
• How severe is the potential societal harm? Depending on this severity, should the system be in operation?


DATA DRIFT

The risk of data drift occurs when the representation of the world in a dataset is no longer accurate. Data can become outdated or irrelevant due to large-scale societal changes brought on after the collection phase. Such changes can cause serious issues with the functionality of an AI system. One of the core assumptions of AI is that the dataset is used to guide future decision-making (see section 2.1.5). If the past data doesn’t match the present situation, the algorithm will continue to rely on the data regardless and the system will lose predictive power (Saikia, 2021).

Data drift can happen in a single instance or slowly over time. For example, when an earthquake hit the city of Los Angeles, both the city topography and the future of construction changed drastically (Chandler, 2020). Any algorithm that had been trained on the dataset before this earthquake would be working with expired data. Similarly, the long-term degradation of sensors can affect an algorithm’s capability to accurately perform environmental monitoring and forecasting (Ditzler et al., 2015).

It is important to consider the applicability of a dataset which repeats based on a context-specific cycle. Some datasets need to be updated more frequently than others. For example, a dataset used to train autonomous public transport may need to be updated at a faster pace than that of an algorithm used to monitor the effects of climate change on weather patterns.

RELATED RISKS: Financial burden, Misalignment between AI and human values, Historical bias, Inadequate demographic representation, Geographic misalignment, Unaudited algorithm purchase, Concept drift, AI system expiration

QUESTIONS

• What procedures are in place to account for changes in the AI system’s context?
• Has there been a major change to the context or environment where the AI system is deployed?
• Is the AI system being used still relevant to the task at hand?
• How frequently should the training dataset be updated? What is the cost?
• How frequently should the AI system be tested for performance?
• What methodologies will be used to test for data drift?
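As one possible answer to the last question, the sketch below compares each numeric feature of incoming data against its training-time distribution using a two-sample Kolmogorov-Smirnov test from SciPy. The significance threshold and the choice of test are illustrative; production monitors typically combine several statistics with human review.

```python
# Minimal data drift check: per-feature two-sample KS test (illustrative).
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(reference, live, names, alpha=0.01):
    """Flag features whose live distribution differs from the reference
    (training-time) distribution according to a KS test."""
    flagged = []
    for j, name in enumerate(names):
        stat, p_value = ks_2samp(reference[:, j], live[:, j])
        if p_value < alpha:
            flagged.append(name)
    return flagged

# Simulated example: the first "sensor" shifts after deployment.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(5000, 2))
live = np.column_stack([rng.normal(0.6, 1.0, 5000),
                        rng.normal(0.0, 1.0, 5000)])
print(drifted_features(reference, live, ["pm25_sensor", "temperature"]))
# typically flags only "pm25_sensor"
```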


CONCEPT DRIFT

The risk of concept drift arises when the properties of the variables that an algorithm is trying to predict change over time (Lu et al., 2020). Unlike the data drift risk, concept drift does not require changes in the data to occur, only a re-interpretation of what the data means.

For example, consider an algorithm which is used to filter emails into a spam folder. As societal interpretations of spam change over time, so must the algorithms that are used to make these predictions. Although the dataset itself may still be relevant, the concept of spam has changed. Similarly, if we consider an algorithm that is meant to identify harassment on a public forum, as the definitions of harassment evolve, so must the algorithms used to predict its occurrence. These changes will often require a dataset to be relabelled or replaced by a more reflective one. They can also require the re-training of the algorithm to replace outdated concepts.

Many AI systems are built on the assumption that the concepts presented in a dataset are stable over time. When this is not the case and a concept evolves, the reasoning of the algorithm becomes obsolete. For example, AI systems that were used to predict the quality of air using historical data were vastly thrown off by the lack of pollution during COVID-19 lockdowns (Mehmood et al., 2021).

Responding to concept drift during maintenance is crucial, since it is only visible with time. As a result, it is important to adopt monitoring and control procedures for AI systems, especially in fields such as healthcare, governance and surveillance.

RELATED RISKS: Lack of mission transparency, Regulatory breach, Uncertain accountability, Mission creep, Misalignment between AI and human values, Geographic misalignment, Lack of transparency and interpretability, Unaudited algorithm purchase, Data drift, AI system expiration

QUESTIONS

• Are the theoretical assumptions on which the algorithm is based still applicable?
• What procedures are in place to test if the model still aligns with the objectives?
• How have the impacts of the changes happening in the surrounding environment been analysed and documented?
• Has periodic testing of the AI system been planned?
• How can resources be allocated should a re-training be required to ensure consistent performance?
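A minimal monitoring pattern, assuming ground-truth labels eventually arrive for a sample of predictions, is to track rolling accuracy and alert when it falls below a floor; a sustained drop is one observable symptom of concept drift. The window size and threshold below are placeholders, not recommendations.

```python
# Rolling-accuracy monitor as a crude concept drift alarm (illustrative).
from collections import deque

class DriftMonitor:
    def __init__(self, window=500, min_accuracy=0.85):
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = incorrect
        self.min_accuracy = min_accuracy

    def record(self, predicted, actual):
        self.outcomes.append(1 if predicted == actual else 0)

    def alert(self):
        """True when recent accuracy falls below the floor, a possible sign
        that the concept being predicted has shifted."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                       # not enough evidence yet
        return sum(self.outcomes) / len(self.outcomes) < self.min_accuracy

# Route alert() to the team responsible for re-labelling data or re-training,
# and log every alert as input to the periodic testing asked about above.
```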


AI SYSTEM EXPIRATION

The risk of AI system expiration arises when an AI system which should be retired is maintained despite being problematic. Retirement means the ethical and efficient removal of an AI system. There are many reasons which can lead to the expiry of a system, including the risks in this framework if they remain unaddressed.

The risk of expiration arises at the final stages of an AI system’s life cycle, at different levels of urgency. The basis of expiration is the process of comparing the initial mission values with the system’s current state of operation. Algorithms must be updated to align with the evolution of societal norms (see the “concept drift” portion of section 4.2.5.1).

In time, governmental changes or new regulations can require retiring an AI system (Madiega and Mildebrath, 2021) (see the “regulatory breach” portion of section 4.2.1.2). For example, recent bans on the use of facial recognition algorithms have forced the retirement of algorithmic systems in various countries (European Data Protection Board, 2021). Similarly, decisions regarding the use of privatised datasets can cause architectures to become non-compliant. Systems which depend on banned datasets require replacement (see the “violations of privacy in data collection” portion of section 4.2.3.1 and the “insufficient privacy protection” portion of section 4.2.3.2).

It is the responsibility of system owners to have effective mechanisms in place for properly and ethically removing a system in its entirety at the time of expiration. This entails properly removing all infrastructure and destroying datasets according to the policies of use that should be stipulated.

RELATED RISKS: Lack of mission transparency, Financial burden, Misalignment between AI and human values, Historical bias, Lack of reliability and robustness, High energy consumption, Stacking of faulty AI, Unaudited algorithm purchase, Insufficient system security, Data drift, Concept drift

QUESTIONS

• Does the current AI system present concrete risks? If yes, are mitigation techniques in place to address them?
• How would a decision to retire a system be made? Has such a decision been made?
• What additional components are relevant to the functionality of the system? Will those be retired as well?
• What mechanism would be used to retract publicly shared trained AI systems or datasets?


SECTION 5

Urban AI Strategy
5.1. Urban AI strategy overview p. 97
5.2. Start from the local context p. 98
5.3. Prioritise capacity-building p. 101
5.4. Develop innovative regulatory tools for AI p. 102
5.5. Foster cross-sectoral collaborations p. 105
5.6. Build horizontal integration p. 107


5.1.

Urban AI strategy overview

This urban AI strategy section is a how-to guide to help cities and local
authorities develop AI systems that are in line with inclusivity and sus-
tainable development goals. An urban AI strategy is the place to anchor
the vision; it is a vehicle to articulate local, context-specific goals, as well
as to plan actionable steps.

The section focuses on recommendations and concrete practical suggestions for local authorities on how to develop an AI strategy and governance
framework. It includes considerations for building an enabling environment,
fostering collaboration and building local capacity.

In addition, key tools that are specifically useful to support urban AI strat-
egies, such as algorithm registers and algorithmic impact assessments,
are highlighted in short case studies.


5.2.

START FROM THE LOCAL CONTEXT

Each local context is unique, and AI systems must be developed starting from,
and adapting to, the local context.

The successful deployment of AI systems is often determined by how the systems
interact with their environment, as outlined in the Risk Framework (see section 4).
It is therefore essential that a strategy be informed by the technical, political,
geographical, social and economic context in which it will be deployed.

RECOMMENDATION #1:
USE A PEOPLE-CENTRED DESIGN APPROACH TO AI SYSTEMS.

It is essential for citizens and communities to be involved in the development of an AI strategy. The first step is to engage the public. The active participation of a primary stakeholder, the public, will enrich the contextual knowledge and co-design of AI systems. Overall, engagement with the public through consultations, surveys, town halls and so on should lead to a more responsible and adapted AI strategy for the city.

To do this, it is important to clarify the strategic policy objective that a proposed AI system supports and to articulate how it will operationalise values in line with the public interest. Every AI system will embed values and assumptions, so it is important to consciously choose which values the system will support (see section 2.2). An effective AI strategy must develop a process to question the embedded values and assumptions in any AI system and its development.

The next step is to identify the affected communities targeted by AI systems, and then actually reach out to them and engage them through established community networks and processes. AI systems have a life cycle after deployment, and it is important to test the original assumptions to see how things actually work in practice and how communities are affected. This builds up evidence of the ways that deployed AI systems actually function, and can feed back into learning, monitoring and adaptation. And it will ultimately increase public trust in the use of AI by governments.

Case study: London

London has specifically identified collaborations between the public sector, the private sector and universities as a means of increasing the city’s competitive edge. The City of London has engaged in a cross-sectoral collaborative city planning strategy, formalising the relationship between administration, industry and academia.

In the effort to manage data and AI research, both Connected Places Catapult and the Alan Turing Institute have partnered with the City of London. Their collaborations help start-ups and scale-ups based in London or operating there to develop their unique ideas. This partnership provides qualifying start-ups and scale-ups with new chances to collaborate with academics on data-driven urban challenges (Alan Turing Institute, 2018).


RECOMMENDATION #2:
LEVERAGE LOCAL KNOWLEDGE AND EXPERTISE.

The successful implementation of AI requires meaningful interpretation and relies heavily on local domain expertise. Misleading conclusions can stem from failing to link both the input and the output of an AI system with local knowledge.

To prevent these risks, local knowledge can be included on two levels. The first is by incorporating local expertise and local types of knowledge in the process of shaping an AI strategy. This may include meaningful deliberative processes in the development of the AI strategy, for example. The second is by creating the conditions for local knowledges to be systematically valued and included in future AI applications.

Different types of knowledge can contribute to shaping a public-interest-centred, context-based AI strategy. For example, tacit knowledge comes from the things we know from experience and practice, while contextual knowledge includes social and cultural norms and the way things are done locally (Ewijk and Baud, 2009; Buuren, 2009; Kitchin, 2016).

RECOMMENDATION #3:
BUILD ON EXISTING INFRASTRUCTURE AND DATASETS.

Existing resources can provide opportunities to draw on, as well as set limitations on what is possible. What AI systems can already be supported? What emerging initiatives can be further enabled? What financial and human resources are available? Does the city have stable internet access and reliable sources of power?

A key local resource is data. It is very beneficial to build an inventory that answers questions such as: What data sources are locally available? Which are accessible to the city, and which could become accessible, for example through data-sharing standards? What are the limitations or ethical issues in the data? (See the box entitled Case study: City of Los Angeles Mobility Data Specification in section 3.3.3.1.)

An important aspect to consider is the legacy systems of the city, which are previously existing technology infrastructures and databases. Cities often have to deal with maintaining dying technology infrastructures; upgrading or renewing urban services sometimes implies building on top of the existing systems. Aging software and qualified personnel turnover are common in all technology initiatives, but the challenges they present are compounded in cities because of the population’s strong dependence on urban services. A successful strategy must consider what systems already exist and how they may be adapted, upgraded or retired when the cost of maintenance is too high.


RECOMMENDATION #4:
ALIGN YOUR AI STRATEGY WITH SDGS AND NATIONAL AND LOCAL GOALS.

How can the AI implementation support the achievement of the SDGs? It is together with local actors and local driving forces that the global Sustainable Development Goals can be articulated at the local level. These priorities can support the choices of which issues to address, in particular by reflecting on the points in section 3 regarding specific SDGs that AI applications can target.

In addition to aligning with the SDGs, the AI strategy should also be guided by national and local goals, including specific measures to reduce unemployment, provide affordable housing or reduce carbon emissions.

[Figure: global SDGs, local actors and local driving forces combine into local priorities for the AI strategy]


5.3.

PRIORITISE CAPACITY-BUILDING

Capacity-building is a significant element of any successful AI strategy. For an urban
AI strategy, capacity-building is defined as the process of developing and strength-
ening the skills, instincts, abilities, processes and resources that organisations and
communities need in order to plan, design and deploy AI applications.

For the public, capacity-building is about education regarding the opportunities,
challenges and risks of AI. For local authorities, capacity-building includes investing
in and providing opportunities for AI-related knowledge and skills development and
attracting talent. It is essential that local governments create the conditions both to
develop their own capacity and to build the capacity of its citizens.

RECOMMENDATION #5:
EDUCATE THE PUBLIC.

As digital technologies and AI systems continue to transform everyday life, efforts to demystify and explain AI will play a major role in helping citizens understand AI systems and in building trust in an AI-empowered government. Increased awareness and knowledge of AI and AI application in the city will ultimately facilitate communication with the general public and with the private sector. It is important to recognise the diversity of audiences that need such education and to accommodate this diversity with a variety of educational strategies, taking into account, for example, different generations, levels of digital literacy, and so on.

RECOMMENDATION #6:
INVEST IN INTERDISCIPLINARY SKILL DEVELOPMENT FOR EMPLOYEES.

Local authorities will need people with the skills to develop, design and deploy AI systems. While technical capacity is important, an entire ecosystem of interdisciplinary skills is also required for a thriving AI implementation. For example, AI regulation and law, AI ethics and AI business development are all key skills alongside computer programming.

As AI finds more and more useful applications in the city, the urban sector will increasingly use cross-functional teams; that is, teams that include a mix of skillsets. The ability to communicate across disciplines and to bridge perspectives so as to make the most of everybody’s strengths and knowledge bases will be a key advantage for sustainable development.

In particular, cross-functional and interdisciplinary teams will be useful at all stages of project management for AI implementation, from procurement to maintenance. Each of the phases of the AI life cycle requires reflection and evaluation, which is best fostered in these sorts of teams.

Local authorities will need to unblock the appropriate financial resources and create a conducive environment for these skills to thrive in the public sector. Training programs for employees across departments may also benefit the development of an AI culture within governing bodies.


RECOMMENDATION #7:
INVEST IN BUILDING LOCAL TECHNICAL CAPACITY.

Cities must set aside training budget to upskill their staff on both the technical and business sides. All city staff members must have the necessary education about what AI is and how it changes current practice. A basic understanding and education about AI is required for AI implementation, particularly for procurement functions. When integrating an AI system in an urban sector, ensure that staff are trained and educated about the AI system that they are going to use. Make sure that the output of the system is clearly decipherable and applicable to their task. This requires technical and digital literacy for positions that may not appear to be technical at first glance.

RECOMMENDATION #8:
DEVELOP DATA INFRASTRUCTURE AND STEWARDSHIP.

AI solutions require proper infrastructure and access to useful data. The assessment of current basic capacity is the first step for AI strategy development. Once this is done, strengthening infrastructure and evaluating the implications of data-sharing processes are key. City leaders should not only initiate and fund the implementation of the necessary infrastructure but should also ensure interoperability and system integration.

5.4.

DEVELOP INNOVATIVE REGULATORY TOOLS FOR AI

Regulation is a key tool for cities to direct the development of AI and its interaction
in the local environment. Cities can use both soft and hard levers effectively in their
jurisdiction.

RECOMMENDATION #9:
CREATE AN ENABLING ENVIRONMENT.

While governments on all levels may not always be early adopters of digital innovation, they obviously play a key role in shaping the context of AI in the city. With an overview of the different sectors, defining the parameters of innovation through local regulation, and with a finger on the pulse of what it is like to live in the city, local authorities have the power to create the ways disruptive technologies will be used to better serve citizens. Local authorities often have a big impact on creating the conditions for AI development in the city. Building an enabling environment for AI means creating the conditions for responsible AI in the city, beyond building the internal capacity of city governments alone.

Developing AI in the city will require medium- to long-term change. This time horizon is sometimes difficult to discuss and implement when politicians focus on short-term priorities alone (Prins et al., 2021). While current events may serve as a catalyst for digital transformation, every seed needs fertile soil. Local governments can help create that soil.


RECOMMENDATION #10:
INTRODUCE LOCAL TECHNOLOGICAL STANDARDS AND CERTIFICATIONS.

Technical standards, explainability standards and ethical standards can be useful regulatory tools. A promising perspective is the design of data-sharing standards which enable the city to use the data collected by private actors as well as facilitating collaborative governance. The implementation of these standards can be supported by establishing certification systems for those who work with AI and developing the policies that serve to implement them (Prins et al., 2021).

Data standards: Open Mobility Foundation

The Open Mobility Foundation is a non-profit that developed the Open Mobility Standards. It is an “open-source standard that includes real-time reporting through an API” (D’Agostino et al., 2019). It tracks individual mobility using a unique ID, creating a valuable location database. Originally developed in Los Angeles, it now operates in more than 130 cities around the world.

The development is important because the City of Los Angeles uses the standards as a precondition for micro-mobility services; for example, in order for shareable scooters to develop their services, they must use these open data standards, effectively sharing their data with the city. Through the standard, the city benefits from the urban service as well as from the data.

RECOMMENDATION #11:
INCORPORATE AND ADAPT AI ASSESSMENT TOOLS.

Evaluation is not a one-time-only process; it happens continuously and re-occurs. Cities need to design procedures with the longer term in mind, so that when things change the city can respond reflexively and adapt.

Different impact assessment tools are being explored as promising methods for accountability across the AI life cycle. Examples include Algorithmic Impact Assessments (AIA) and Human Rights Impact Assessments (HRIA), which can form new accountability relationships and governance architectures. Human Rights Impact Assessments are existing methodologies that can be adapted for AI systems. They can help designers and implementers of the system to study its impact through correspondence with the rights-holders (e.g., the citizens in the city) and external stakeholders (Latonero, 2018). Including these mechanisms and regulations works towards reliability, safety and trustworthiness over time. These mechanisms are ways to incorporate the input of a broader array of stakeholders, including auditors, researchers and civil society (Nagitta et al., 2022).

RECOMMENDATION #12:
BUILD ON EXISTING MONITORING AND EVALUATION TO OVERSEE AI SYSTEMS AT SEVERAL POINTS IN TIME.

Evaluation must be ongoing, particularly as the AI life cycle has several phases. Monitoring must cover both how an AI system is working and its impact after deployment.

Build a robust monitoring and evaluation framework for your AI systems. Take existing evaluation frameworks and build on them, connecting with algorithmic auditing and impact assessments. Monitoring should include a skilled interdisciplinary team, using one-, three- and six-year cycles.

It is also important that monitoring frameworks consider the public interest. While existing political processes are founded on representation, it is insufficient to assume that political processes are enough on their own to align AI implementation with public values. A separate mechanism for oversight is required. It can be useful to carry out a reflexive exercise to express which values the city chooses and how to operationalise them (Jameson et al., 2021).


Algorithmic impact assessments

Algorithmic impact assessments (AIAs) estimate the harms caused by an AI to society and offer measures to mitigate those harms. AIAs often look at variables such as the actors, the methods and the setting where algorithms are deployed. Like other impact assessments in other domains, AIAs will not be the answer to all the challenges raised by AI. However, they are currently being developed in an organic process of evolving standards (Metcalf et al., 2021).

AIAs can build on existing human rights impact assessments, but these are often two separate initiatives, and it is recommended to have both.

RECOMMENDATION #13:
ADAPT PROCUREMENT PROCESSES.

The vast majority of urban AI will be sourced via procurement. Procurement processes are the city’s chance to implement the design strategy. Most cities do not have the in-house capacity to develop robust AI solutions on their own. While developing that capacity is important, most developers writing algorithms work for companies with more financial capacity to invest in higher salaries. As a result, cities need the capacity to evaluate the AI solutions presented to them during the procurement process. This is a key objective of the risk taxonomy: to enable city administrators to understand and evaluate the risks to be aware of. What questions can you ask when buying an AI solution? One promising methodology for city councils is contractual clauses.

Case study: Amsterdam’s procurement clauses

The city of Amsterdam in the Netherlands has been a pioneer in establishing contractual clauses for its public procurement process for algorithms. The clauses focus on technical transparency, procedural transparency and explainability.

The Standard Clauses for Procurement of Trustworthy Algorithmic Systems are openly available and can be freely downloaded from the Amsterdam City Council website (City of Amsterdam Algorithm Register Beta, 2022; Haataja et al., 2021). At the time of this writing, a process is underway to establish standard clauses at the European level.

Case study: Barcelona

Barcelona is considered a pioneer in developing a city strategy for how data, and by extension AI, should be used in the city. The city developed a digital strategy that starts from the vision of putting people first. The idea is value-driven starting from the framing and design phases, beginning with imagining how tech could work differently. To do this, the Barcelona City Council Open Digitisation Plan presents a toolkit known as the Ethical Digital Standards which includes methods, standards, work practices, procurement tools and software standards. Together, these standards set the conditions for working within the city, set the conditions for investment, and create a value-driven environment (Barcelona Digital City, 2016). By feeding back analytic capacities into the city, the city council proactively reverses common trends of extracting data from citizens for profit.

The innovation of public service provision in Barcelona was no accident. Rather, it was enabled by two specific contextual characteristics: first, visionary leadership supported by a political party that came into power, and second, a history of a strong civil society focused on technology as an enabler of power in the city (Monge, 2022b).


5.5.

FOSTER CROSS-SECTORAL COLLABORATIONS

Dialogue and collaboration across sectors will be required to develop AI implementation in the city in line with inclusivity and sustainable development goals.

RECOMMENDATION #14:
ENCOURAGE LOCAL INNOVATION.

For an effective AI strategy, it is important for the city to consider how to build an environment conducive to communication and partnerships as well as how to invest in the city’s capacity to make the most of these opportunities.

For example, when the city provides the space for the regulator to discuss with private actors and small-scale technology entrepreneurs, there is a chance for communication. While these may not be one-time interventions, they create a constructive environment that allows contextually relevant solutions to emerge.

Another method is using urban planning incentives to develop AI locally. Cities can prioritise projects from startups or established companies that serve the public interest. These incentives can take the form of loans, technical assistance, mentorships or even access to land resources.

RECOMMENDATION #15:
DEFINE THE TERMS FOR PARTNERSHIPS.

Building partnerships with industry and private actors is often necessary but should be conducted on the city’s own terms. The city needs a process to define the metrics and conditions under which it will collaborate. If these terms are socially accepted, particularly when they have emerged as a result of a meaningful participatory process, collaborations may still draw critique, but the process of defining the rules for engagement creates an environment where it’s transparent and clear what’s being done and why. This clarity creates trust and a space to move forward constructively.

A safe environment for collaboration between civil society and other actors in the ecosystem creates the context of the city, so that when a short-term need or event happens, the city has the resources and connections to adapt and respond. Civil society that focuses on technology innovation can also create innovative, decentralised initiatives for AI and data governance developments. The challenge is that these local initiatives are often unable to scale without the support from government or political parties (Monge, 2022a).

RECOMMENDATION #16:
ENGAGE THE PRIVATE SECTOR.

Technological businesses can concentrate the appropriate skills to both develop and manage an AI project efficiently and to carry out the relevant R&D needed for innovation. Social media, telecom operators and online sharing platforms can provide local governments with valuable data concerning city agents and the operation of the city services, if the appropriate sharing mechanisms are developed (see the “data standards” box in section 5.4).

Collaboration across sectors must be envisaged before and beyond procurement to create the conditions for success. Public-private partnerships may be challenging, as public and private organisations often do not share the same objectives or the same timelines. The relatively short-term reasoning of businesses, in line with shareholders’ agendas, may lead to very different visions of AI implementation. In particular, their approach to risk management can diverge, as businesses rarely consider the same long-term political risks that a sustainable AI strategy should address. While city politics often involve focusing on short-term priorities, it is important that the city strategy re-emphasise long-term issues (Prins et al., 2021).


New ways of engaging with the private sector can be found as businesses are more and more encouraged to commit to responsible AI. More than 100 leading organisations have joined the Partnership on AI in order to develop AI that empowers humanity. Guiding principles of responsible AI have been published by firms that make available tools for the management and implementation of AI.

RECOMMENDATION #17:
ENGAGE PUBLIC RESEARCH.

Public research institutes and universities around the world have a meaningful role to carry out in support of local and national leaders. Their position as independent agents that display no commercial interests makes them a very important actor in the public-private relationship (Gasser and Almeida, 2017).

The research sector provides resources that facilitate the development and deployment of AI, including support for the assessment of AI. On a technical level, they can develop specific measures to assess the accuracy and fairness of the outcome. On an implementation level, they can conduct impact assessments. Researchers can provide valuable local knowledge and evidence as a base for policymaking, particularly from multi-dimensional and interdisciplinary perspectives. Social and legal experts can also provide significant input into the framing stage of the AI life cycle.

Research centres are therefore particularly well suited to inducing engagement and inclusion. The resources and capacity nurtured by universities are indispensable for the functioning of both businesses and governments. In that sense, they represent a privileged space for dialogue between private and public entities. Governing organisations should harness these strengths.

RECOMMENDATION #18:
ENGAGE WITH CIVIL SOCIETY.

The particular role of NGOs as a link between people and government places them at the forefront of the movement towards responsible AI. They represent important drivers of AI for good; many NGOs propose AI solutions developed or co-developed with other sectors, specifically focused on the public interest. The mission and activities of NGOs require a profound understanding of the context of intervention and of the impacted population. This expert knowledge can be leveraged by city leaders when defining their strategy or implementing AI solutions.

NGOs may also collect data on behalf of marginalised communities or neighbourhoods with which they have worked closely. Furthermore, they can help shape the data collection process by identifying information loopholes, focusing on issues that have not been prioritised by local authorities.

An AI strategy should carefully consider how to protect the space for civil society to operate. Civil society is a powerful voice that can speak for the communities it represents and can hold others accountable for their actions and their impacts on society. City governments may channel the close relationship of civil society and the public to raise awareness on the opportunities and risks relating to AI. In certain contexts, NGOs have been responsible for building digital capacity within cities by providing technical equipment and encouraging digital literacy for all.


5.6.

BUILD HORIZONTAL INTEGRATION

RECOMMENDATION #19:
CREATE A MORE INTEGRATED MUNICIPAL STRUCTURE.

New municipal structures or organisations may be a useful tool to carry out the vision for an AI strategy. Integrated organisational structures are one method to integrate silos across an urban municipality (Leslie et al., 2021). They can coordinate policymaking across scale levels or engage with other actors in a co-production.

Algorithm registers

One tool to break down information silos for AI governance is algorithm registers. An algorithm register is an “overview of the artificial intelligence systems and algorithms used by the city” (City of Amsterdam Algorithm Register Beta, 2020), including the reasons they are being used and an explanation of the way they function. Part of the challenge in governing AI is that locally there is often limited understanding of what algorithms are actually in use and what they do. Algorithm registers are a way to address that challenge.
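As a sketch of what a register entry can contain, the snippet below publishes one machine-readable record. The fields are loosely inspired by public registers such as Amsterdam’s, but the exact names and the example system are illustrative assumptions, not an official schema.

```python
# Illustrative algorithm register entry (all values are hypothetical).
import json

entry = {
    "name": "Parking permit triage",         # hypothetical system
    "department": "Mobility and Public Space",
    "purpose": "Rank permit applications for manual review",
    "type": "supervised classification",
    "data_sources": ["permit applications", "vehicle registry"],
    "human_oversight": "Final decisions are made by a case officer",
    "non_discrimination": "Audited yearly for disparate error rates",
    "contact": "[email protected]",      # where residents can ask or object
}

# Publishing entries as machine-readable JSON lets residents, auditors and
# other departments see what is in use and why.
print(json.dumps(entry, indent=2))
```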
The approach of creating new, cross-sector, cross-discipline integrated structures with the specific mandate to direct and manage data or AI-focused initiatives has worked particularly well for larger urban agglomerations, such as London’s regional planning office and Barcelona’s municipal data office. For smaller urban centres, the key is to identify the actor or coalition of actors from the public, private, research and non-profit spheres that can support the AI strategy.

In order to appreciate the benefit of independent regulatory and oversight institutions, expert committees or sectoral regulators, a needs assessment should be carried out (Bulmer, 2019). Depending on the existing capacity of local authorities or the particular jurisdiction context, the implementation of an adequate regulatory landscape may require the implementation of new ad-hoc bodies (United Nations, 2019).

RECOMMENDATION #20:
NURTURE INNOVATION LEADERSHIP WITH CTOS AND CIOS.

A very interesting way for local authorities to be proactive on digital innovation is to nurture leadership positions similar to those found in the tech industry: Chief Innovation Officer (CIO) or Chief Technology Officer (CTO). These are terms originating from industry and are relatively new to applications in urban development. CTOs are often at the helm of municipal reorganisation.


Key roles: Chief Technology Officers

Innovation leaders function in a few different ways. These roles can be broadly described as a spectrum
between the degree of autonomy and operations management:

[Figure: the four CTO modes arranged along two axes, degree of autonomy (low to high) and IT and operations management (low to high): 1. “Big thinker”; 2. External-facing technologist; 3. Technology visionary and operations manager; 4. Infrastructure manager]

1. CTO as “big thinker”: In this mode, the CTO is given a lot of leeway to think about long-term development
and future approaches.

2. CTO as external-facing technologist: As an external-facing technologist, the CTO concentrates their
efforts on collaborating closely with city stakeholders to design and establish digital innovation.

3. CTO as technology visionary and operations manager: This model combines models 1 and 2. Here the
CTO is brought in early in the strategy planning process. The CTO is in charge of figuring out how technology
may be leveraged to carry out the proposed strategy and then is responsible for executing the plan.

4. CTO as infrastructure manager: The CTO demonstrates operational skills, a clear awareness of
technology management, and the ability to oversee a large and diverse team. In this mode, their main goal
is to keep the IT department running smoothly, rather than make decisions on technological strategy.


SECTION 6

Conclusion


The field of AI is growing at an unbridled pace. We are increasingly seeing AI systems leaving research settings to be deployed in almost all spheres of human activity. As a result, AI has the potential to profoundly transform the way our societies operate, including by supporting efforts on critical questions such as the climate crisis, public health, education and beyond. However, this ongoing societal transformation entails risks that must be addressed. There is an urgent need to develop responsible AI governance and practices across all scale levels of administrative and political organisations, in both the public and private sectors.

This report provides a general framework on how to deploy AI responsibly in the context of cities and settlements. It offers an overview of the major considerations facing local authorities as they make important decisions on how and when to use AI. The report provides a review of AI governance in urban contexts and an analysis of existing AI applications. It proposes a Risk Assessment Framework that spans the entire AI life cycle and makes a set of recommendations for policy makers to consider when drafting AI strategies. Together, these elements support Mila’s commitment to advancing AI for the benefit of all and UN-Habitat’s vision of a better quality of life for all in an urbanising world.

While we hope this report is helpful, most of the work lies ahead. Leadership, knowledge and planning will be required for decision-makers to implement AI strategies that are responsible, inclusive and ambitious. While this report provides recommendations to this end, there is no standardised recipe for success, as local contexts must play a pivotal role in designing any AI strategy.

In order to better support decision-makers in this exercise, future work should explore at least three important areas: first, highlight the experiences of non-Western cities implementing AI applications and consider how to support capacity-building in ways that are globally equitable; second, examine how AI can support practical urban planning processes in further detail; and third, develop tools and processes to meaningfully include local populations and civil society organisations all along the AI life cycle.

Finally, we invite feedback on this report. We would love to hear from cities that are actually using this report on what worked, what is helpful, and what should be improved to better respond to their local contexts. This feedback will help us develop our future thinking and provide adapted advice and thought leadership to enable responsible AI across domains and contexts. Together, Mila and UN-Habitat believe that when decision-makers are informed about both the risks and benefits of AI, they are better positioned to use AI as a tool for creating inclusive, safe, resilient and sustainable cities and communities, as well as reducing inequality, discrimination and poverty. This report is our humble contribution in this direction. We hope that decision-makers at all levels of government will use and share this report widely for the betterment of cities and settlements worldwide.


REFERENCES
Abbasi, Maryam, and Ali El Hanandeh (2016). Forecasting municipal solid waste generation using artificial intelligence modelling approaches. Waste Management, No. 56 (October), pp. 13–22. https://doi.org/10.1016/j.wasman.2016.05.018

Abdallah, Mohamed, Manar Abu Talib, Sainab Feroz, Qassim Nasir, Hadeer Abdalla, and Bayan Mahfood (2020). Artificial intelligence applications in solid waste management: A systematic research review. Waste Management, No. 109 (May), pp. 231–46. https://doi.org/10.1016/j.wasman.2020.04.057

Abioye, Sofiat O., Lukumon O. Oyedele, Lukman Akanbi, Anuoluwapo Ajayi, Juan Manuel Davila Delgado, Muhammad Bilal, Olugbenga O. Akinade, and Ashraf Ahmed (2021). Artificial intelligence in the construction industry: A review of present status, opportunities and future challenges. Journal of Building Engineering, No. 44 (December). https://doi.org/10.1016/j.jobe.2021.103299

Aguilar, Diego, Roxana Barrantes, Aileen Agüero, Onkokame Mothobi, and Tharaka Amarasinghe (2020). Future of work in the Global South: Digital labor, new opportunities and challenges. Instituto de Estudios Peruanos. https://afteraccess.net/wp-content/uploads/AfterAccess-Future-of-Work-in-the-Global-South-Digital-Labor-New-Opportunities-and-Challenges-Working-Paper.pdf

Ahmad, Kashif, Majdi Maabreh, Mohamed Ghaly, Khalil Khan, Junaid Qadir, and Ala Al-Fuqaha (2021). Developing future human-centered smart cities: Critical analysis of smart city security, interpretability, and ethical challenges. arXiv:2012.09110 [cs] (5 December). http://arxiv.org/abs/2012.09110

Aitken, Rob (2017). “All data is credit data”: Constituting the unbanked. Competition & Change, vol. 21, No. 4 (1 August), pp. 274–300. https://doi.org/10.1177/1024529417712830

Akanbi, Lukman A., Ahmed O. Oyedele, Lukumon O. Oyedele, and Rafiu O. Salami (2020). Deep learning model for demolition waste prediction in a circular economy. Journal of Cleaner Production, No. 274 (November). https://doi.

Alam, Gulzar, Ihsanullah Ihsanullah, Mu Naushad, and Mika Sillanpää (2022). Applications of artificial intelligence in water treatment for optimization and automation of adsorption processes: Recent advances and prospects. Chemical Engineering Journal, No. 427 (January), p. 130011. https://doi.org/10.1016/j.cej.2021.130011

Alan Turing Institute (2018). New collaboration between the Alan Turing Institute and Digital Catapult provides funding for London start-ups and scale-ups to open their data challenges to researchers. News post, 16 August. https://www.turing.ac.uk/news/new-collaboration-between-alan-turing-institute-and-digital-catapult-provides-funding-london

Ala-Pietilä, Pekka, and Nathalie A. Smuha (2021). A framework for global cooperation on artificial intelligence and its governance. In Reflections on Artificial Intelligence for Humanity, Bertrand Braunschweig and Malik Ghallab, eds., pp. 237–265. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-69128-8_15

Almeida, Denise, Constantin Shmarko, and Elizabeth Lomas (2021). The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: A comparative analysis of US, EU, and UK regulatory frameworks. AI and Ethics, No. 2, pp. 377–387. https://doi.org/10.1007/s43681-021-00077-w

Almeida, Patricia Gomes Rêgo de, Carlos Denner dos Santos, and Josivania Silva Farias (2021). Artificial intelligence regulation: A framework for governance. Ethics and Information Technology, vol. 23, No. 3 (1 September), pp. 505–25. https://doi.org/10.1007/s10676-021-09593-z

Amir Haeri, Maryam, and Katharina Zweig (2020). The crucial role of sensitive attributes in fair classification. 2020 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 2993–3002. https://doi.org/10.1109/SSCI47803.2020.9308585

Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané (2016). Concrete problems in AI safety. arXiv:1606.06565 [cs] (25 July). http://arxiv.org/abs/1606.06565

Anttiroiko, Ari-Veikko, Pekka Valkama, and Stephen Bailey (2014). Smart cities in the new service economy: Building platforms for smart services. AI & Society, No. 29 (3 November), pp. 323–34. https://doi.org/10.1007/s00146-013-0464-0

As, Imdat, Siddharth Pal, and Prithwish Basu (2018). Artificial intelligence in architecture: Generating conceptual design via deep learning. International Journal of Architectural Computing, vol. 16, No. 4 (December), pp. 306–27. https://doi.org/10.1177/1478077118800982

Asaro, Peter (2019). AI ethics in predictive policing: From models of threat to an ethics of care. IEEE Technology and Society Magazine, No. 38 (1 June), pp. 40–53. https://doi.org/10.1109/MTS.2019.2915154

Associated Press (2022). Oregon is dropping an artificial intelligence tool used in child welfare system. NPR, 2 June. https://www.npr.org/2022/06/02/1102661376/oregon-drops-artificial-intelligence-child-abuse-cases

Babyl (2018). The Rwandan government partners with babyl to deliver the first ever fully digital healthcare service in East Africa using artificial intelligence. Press release, 10 January. https://www.babyl.rw/uploads/press-releases/Press-Release-MoH-babyl-VF.pdf

Barcelona Digital City (2016). Barcelona City Council digital plan: A government measure for open digitisation: Free software and agile development of public administration services. https://ajuntament.barcelona.cat/digital/sites/default/files/LE_MesuradeGovern_EN_9en.pdf

Barcelona Digital City (2021). Barcelona promotes the ethical use of artificial intelligence. Blog, 21 April. https://ajuntament.barcelona.cat/digital/en/blog/barcelona-promotes-the-ethical-use-of-artificial-intelligence

Bastani, Favyen, Songtao He, Sofiane Abbar, Mohammad Alizadeh, Hari Balakrishnan, Sanjay Chawla, and Sam Madden (2018). Machine-assisted map editing. Proceedings of the 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (November 6), pp. 23–32. https://
doi.org/10.1145/3274895.3274927
org/10.1016/j.jclepro.2020.122843

111  A I A N D C I T I E S : R I S K S , A P P L I C AT I O N S A N D G O V E R N A N C E
REFERENCES

Basu, Medha (2017). Exclusive: Hong Bouhedda, Mounir, Sonia Lefnaoui, Buuren, Arwin van (2009). Knowledge for
Kong’s vision for artificial intelligence. Samia Rebouh, and Madiha M. Yahoum governance, governance of knowledge:
GovInsider, 6 October. https://govinsider. (2019). Predictive model based on Inclusive knowledge management in
asia/smart-gov/exclusive-hong-kongs- adaptive neuro-fuzzy inference system collaborative governance processes.
vision-for-artificial-intelligence/ for estimation of cephalexin adsorption International Public Management Journal,
on the octenyl succinic anhydride vol. 12, No. 2, pp. 208–35. https://doi.
Bekkum, Marvin van, and Frederik
starch. Chemometrics and Intelligent org/10.1080/10967490902868523
Zuiderveen Borgesius (2021). Digital
Laboratory Systems, No. 193 (October),
welfare fraud detection and the Dutch Caliva, Francesco, Fabio Sousa De
p. 103843. https://doi.org/10.1016/j.
SyRI judgment. European Journal of Social Ribeiro, Antonios Mylonakis, Christophe
chemolab.2019.103843
Security, vol. 23, No. 4, pp. 323–40. https:// Demazirere, Paolo Vinai, Georgios
doi.org/10.1177/13882627211031257 Brandusescu, A., and Reia, J., eds. (2022). Leontidis, and Stefanos Kollias (2018).
Artificial Intelligence in the City: Building A deep learning approach to anomaly
Bench-Capon, Trevor, Michał Araszkiewicz,
Civic Engagement and Public Trust. detection in nuclear reactors. In 2018
Kevin Ashley, Katie Atkinson, Floris Bex,
Montreal: Centre for Interdisciplinary International Joint Conference on
Filipe Borges, Daniele Bourcier, et al.
Research on Montréal, McGill Neural Networks (IJCNN),Rio de Janeiro:
(2012). A history of AI and law in 50 papers:
University. https://www.researchgate. IEEE, pp. 1–8. https://doi.org/10.1109/
25 years of the International Conference
net/publication/362025076_Artificial_ IJCNN.2018.8489130
on AI and Law. Artificial Intelligence
Intelligence_in_the_City_Building_Civic_
and Law 20, No. No. 3 (September), pp. Cameron, Felix (2018). Artificial
Engagement_and_Public_Trust
215–319. https://doi.org/10.1007/s10506- intelligence and proptech. Data Driven
012-9131-x BT (n. d.). Intelligent cities case study: Investor (1 May). https://medium.
Saving money and reducing emissions in datadriveninvestor.com/artificial-
Bengio, Y., A. Courville, and P. Vincent
Milton Keynes using smart parking. https:// intelligence-and-proptech-ca4f419c7735
(2013). Representation learning: A review
www.iot.bt.com/assets/documents/bt-
and new perspectives. IEEE Transactions Cenek, Martin, Rocco Haro, Brandon
milton-keynes-innovative-parking-case-
on Pattern Analysis and Machine Sayers, and Jifeng Peng (2018). Climate
study.pdf
Intelligence, vol. 35, No. 8 (August), change and power security: Power load
pp. 1798–1828. https://doi.org/10.1109/ Bughin, Jacques, and Nicolas Van prediction for rural electrical microgrids
TPAMI.2013.50 Zeebroeck (2018). AI adoption: Why a using long short term memory and
digital base is critical. McKinsey. McKinsey artificial neural networks. Applied
Berryhill, Jamie, Kevin Kok Heang, Rob
Quarterly, 26 July. https://www.mckinsey. Sciences, vol. 8, No. 5 (9 May), p. 749.
Clogher, and Keegan McBride (2019).
com/business-functions/quantumblack/ https://doi.org/10.3390/app8050749
Hello, world Artificial intelligence and its
our-insights/artificial-intelligence-why-a-
use in the public sector. OECD Working Chaillou, Stanislas (2022). Artificial
digital-base-is-critical
Papers on Public Governance. https:// Intelligence and Architecture: From
www.oecd-ilibrary.org/governance/hello- Bulmer, Eliott (2019). Independent Research to Practice 1st ed. Boston:
world_726fd39d-en regulatory and oversight (fourth-branch) De Gruyter.
institutions. International Institute for
Bhattacharya, Biswarup, and Abhishek Chakraborty, S., and A. C. Newton (2011).
Democracy and Electoral Assistance.
Sinha (2017). Deep fault analysis and Climate change, plant diseases and food
https://www.idea.int/publications/
subset selection in solar power grids. security: An overview: Climate change
catalogue/independent-regulatory-and-
arXiv:1711.02810 [cs.LG]. https://doi. and food security. Plant Pathology, vol.
oversight-fourth-branch-institutions
org/10.48550/ARXIV.1711.02810 60, No. 1 (February), pp. 2–14. https://doi.
Buolamwini, Joy, and Timnit Gebru org/10.1111/j.1365-3059.2010.02411.x
Bolukbasi, Tolga, Kai-Wei Chang, James
(2018). Gender shades: Intersectional
Zou, Venkatesh Saligrama, and Adam Kalai Chakravorti, Bhaskar (2021). How to close
accuracy disparities in commercial
(2016). Man is to computer programmer the digital divide in the U.S. Harvard
gender classification. Proceedings of the
as woman is to homemaker? Debiasing Business Review, 20 July . https://hbr.
1st Conference on Fairness, Accountability
word embeddings. arXiv (July 21). http:// org/2021/07/how-to-close-the-digital-
and Transparency, pp. 77–91. Proceedings
arxiv.org/abs/1607.06520 divide-in-the-u-s
on Machine Learning Research. https://
Borgs, Christian, Ozan Candogan, proceedings.mlr.press/v81/buolamwini18a. Chandler, Jenna (2020). The Northridge
Jennifer Chayes, Ilan Lobel, and Hamid html earthquake exposed flaws in the Getty’s
Nazerzadeh (2014). Optimal multiperiod construction—and changed how LA builds.
Butler, Keith T., Daniel W. Davies, Hugh
pricing with service guarantees. Mana­ Curbed Los Angeles, 17 January. https://
Cartwright, Olexandr Isayev, and Aron
gement Science, vol. 60, No. 7 (July 1), la.curbed.com/2020/1/17/21068895/los-
Walsh (2018). Machine learning for
pp. 1792–1811. https://doi.org/10.1287/ angeles-earthquake-steel-buildings-getty
molecular and materials science. Nature,
mnsc.2013.1839
vol. 559, No. 7715 (July), pp. 547–55. https://
doi.org/10.1038/s41586-018-0337-2

112  A I A N D C I T I E S : R I S K S , A P P L I C AT I O N S A N D G O V E R N A N C E
REFERENCES

Chandra, Priyanjani, Pratool Bharti, and Crawford, Kate (2013). Think again: Big Decuyper, Adeline, Alex Rutherford, Amit
Michael Papka (2020). A computer vision data. Foreign Policy (blog), 10 May. https:// Wadhwa, Jean-Martin Bauer, Gautier
and AI based solution to determine the foreignpolicy.com/2013/05/10/think- Krings, Thoralf Gutierrez, Vincent D.
change in water level in stream. Poster, again-big-data/ Blondel, and Miguel A. Luengo-Oroz
SC20: International Conference for High (2014). Estimating food consumption and
Crawford, Kate (2021). Atlas of AI: Power,
Performance Computing, Networking, poverty indices with mobile phone data.
Politics, and the Planetary Costs of Arti­
Storage, and Analysis. http://sc20. arXiv:1412.2595 [physics], 22 November.
ficial Intelligence. New Haven, CT: Yale
supercomputing.org/proceedings/src_ http://arxiv.org/abs/1412.2595
University Press. https://yalebooks.yale.
poster/src_poster_pages/spostg110.html
edu/9780300264630/atlas-of-ai Dhinakaran, Aparna (2021). Overcoming
Cho, Peter Jaeho, Karnika Singh, and AI’s transparency paradox. Forbes, 10
Criado-Perez, Caroline (2019). Invisible
Jessilyn Dunn (2021). Chapter 9 – Roles of September. https://www.forbes.com/
Women: Data Bias in a World Designed
artificial intelligence in wellness, healthy sites/aparnadhinakaran/2021/09/10/
for Men. New York: Abrams Press.
living, and healthy status sensing. In overcoming-ais-transparency-
Artificial Intelligence in Medicine, Lei Xing, D’Agostino, M., Pellaton, P., and Brown, A. paradox/?sh=76d9fde34b77
Maryellen L. Giger, and James K. Min, eds. (2019). Mobility data sharing: Challenges
Dietvorst, Berkeley J., Joseph P. Simmons,
Academic Press, pp. 151–72. https://doi. and policy recommendations. UC Davis:
and Cade Massey (2015). Algorithm
org/10.1016/B978-0-12-821259-2.00009-0 Institute of Transportation Studies. https://
aversion: People erroneously avoid
escholarship.org/uc/item/47p885q8
Chui, Kwok Tai, Miltiadis D. Lytras, Anna algorithms after seeing them err. Journal
Visvizi, and Akila Sarirete (2021). Chapter 16 Dai, Xiaoqing, Lijun Sun, and Yanyan Xu of Experimental Psychology: General,
– An overview of artificial intelligence and (2018). Short-term origin-destination vol. 144, No. 1, pp. 114–26. https://doi.
big data analytics for smart healthcare: based metro flow prediction with org/10.1037/xge0000033
Requirements, applications, and probabilistic model selection approach.
Dignum, Virginia (2022). Responsible
challenges. In Artificial Intelligence and Journal of Advanced Transportation
artificial intelligence – from principles to
Big Data Analytics for Smart Healthcare, (26 June), pp. 1–15. https://doi.
practice. ACM SIGIR Forum, vol. 56, No. 1
Miltiadis D. Lytras, Akila Sarirete, Anna org/10.1155/2018/5942763
(June). https://arxiv.org/pdf/2205.10785.pdf
Visvizi, and Kwok Tai Chui, eds. Academic
Das, Utpal Kumar, Kok Soon Tey, Mehdi
Press, pp. 243–54. https://doi.org/10.1016/ Dijk, Jan A.G.M. van (2006). Digital divide
Seyedmahmoudian, Saad Mekhilef, Moh
B978-0-12-822060-3.00015-2 research, achievements and shortcomings.
Yamani Idna Idris, Willem Van Deventer,
Poetics, vol. 34, No. 4–5 (August),
Chui, Michael, Martin Harryson, James Bend Horan, and Alex Stojcevski (2018).
pp. 221–35. https://doi.org/10.1016/j.
Manyika, Roger Roberts, Rita Chung, Forecasting of photovoltaic power gene­
poetic.2006.05.004
Ashley van Heteren, and Pieter Nel (2018). ration and model optimization: A review.
Notes from the AI frontier: Applying Renewable and Sustainable Energy Ditzler, Gregory, Manuel Roveri, Cesare
AI for social good. Discussion paper. Reviews, No. 81 (January), pp. 912–28. Alippi, and Robi Polikar (2015). Learning
McKinsey Global Institute, December. https://doi.org/10.1016/j.rser.2017.08.017 in nonstationary environments: A survey.
https://www.mckinsey.com/~/media/ Computational Intelligence Magazine,
Data Society (2021). Assembling
mckinsey/featured%20insights/artificial%20 IEEE 10 (1 November), pp. 12–25. https://
accountability: Algorithmic impact
intelligence/applying%20artificial%20 doi.org/10.1109/MCI.2015.2471196
assessment for the public interest
intelligence%20for%20social%20good/mgi-
(June). https://datasociety.net/wp- Dobbe, Roel, David Fridovich-Keil, and
applying-ai-for-social-good-discussion-
content/uploads/2021/06/Assembling- Claire Tomlin (2017). Fully decentralized
paper-dec-2018.ashx
Accountability.pdf policies for multi-agent systems:
City of Amsterdam Algorithm Register Beta An information theoretic approach.
Daughton, Ashlynn R., and Michael J.
(2020). What is the Algorithm Register? arXiv:1707.06334 [nlin] (29 July). http://arxiv.
Paul (2019). Identifying protective health
https://algoritmeregister.amsterdam.nl/ org/abs/1707.06334
behaviors on Twitter: Observational study
en/ai-register/
of travel advisories and Zika virus. Journal Dobbe, Roel, Oscar Sondermeijer, David
Clinton, Nicholas, and Peng Gong (2013). of Medical Internet Research, vol. 21, No. Fridovich-Keil, Daniel Arnold, Duncan
MODIS detected surface urban heat 5), p. e13090. https://doi.org/10.2196/13090 Callaway, and Claire Tomlin (2018).
islands and sinks: Global locations and Towards distributed energy services:
Davenport, Thomas H., and D. J. Patil
controls. Remote Sensing of Environment, Decentralizing optimal power flow with
(2012). Data scientist: The sexiest job of
No. 134 (July), pp. 294–304. https://doi. machine learning. arXiv:1806.06790
the 21st century. Harvard Business Review,
org/10.1016/j.rse.2013.03.008 [cs.LG]. https://doi.org/10.48550/
1 October. https://hbr.org/2012/10/data-
ARXIV.1806.06790
Clutton-Brock, Peter, David Rolnick, Priya scientist-the-sexiest-job-of-the-21st-
L. Donti, Lynn H. Kaack, Tegan Maharaj, century Donnot, Benjamin, Isabelle Guyon,
Alexandra Luccioni, and Hari Prasanna Marc Schoenauer, Patrick Panciatici,
Davenport, Thomas, and Ravi Kalakota
Das (2021). Climate change and AI: and Antoine Marot (2017). Introducing
(2019). The potential for artificial
Recommendations for government action. machine learning for power system
intelligence in healthcare. Future
Report. GPAI (Global Partnership on AI). operation support. arXiv:1709.09527
Healthcare Journal, vol. 6, No. 2 (June),
https://www.gpai.ai/projects/climate- [cs, stat] (27 September). http://arxiv.org/
pp. 94–98. https://doi.org/10.7861/
change-and-ai.pdf abs/1709.09527
futurehosp.6-2-94

113  A I A N D C I T I E S : R I S K S , A P P L I C AT I O N S A N D G O V E R N A N C E
REFERENCES

Donti, Priya L., Yajing Liu, Andreas J. Ellman, Douglas (Douglas Austin) (2015). Foley, Aoife M., Paul G. Leahy, Antonino
Schmitt, Andrey Bernstein, Rui Yang, and The reference electrification model: Marvuglia, and Eamon J. McKeogh
Yingchen Zhang (2018). Matrix completion A computer model for planning rural (2012). Current methods and advances
for low-observability voltage estimation. electricity access. Thesis, Massachusetts in forecasting of wind power generation.
arXiv:1801.09799 [math.OC]. https://doi. Institute of Technology. https://dspace.mit. Renewable Energy, vol. 37, No. 1 (January),
org/10.48550/ARXIV.1801.09799 edu/handle/1721.1/98551 pp. 1–8. https://doi.org/10.1016/j.
renene.2011.05.033
Doshi, Jigar, Saikat Basu, and Guan Enderlein, Henrik, Sonja Wälti, and
Pang (2018). From satellite imagery Michael Zürn (2010). Handbook on Multi- Fujimura, Koji, Atsuto Seko, Yukinori
to disaster insights. arXiv:1812.07033 Level Governance. Cheltenham: Elgar Koyama, Akihide Kuwabara, Ippei
[cs] (17 December). http://arxiv.org/ Publishing. https://opus4.kobv.de/opus4- Kishida, Kazuki Shitara, Craig A. J. Fisher,
abs/1812.07033 hsog/frontdoor/index/index/docId/298 Hiroki Moriwake, and Isao Tanaka
(2013). Accelerated materials design of
Doshi-Velez, Finale, and Been Kim Engler, Alex (2021). Enrollment algorithms
lithium superionic conductors based
(2017). Towards a rigorous science are contributing to the crises of higher
on first-principles calculations and
of interpretable machine learning. education. Brookings (blog), 14 September.
machine learning algorithms. Advanced
arXiv:1702.08608 [cs, stat] (2 March). https://www.brookings.edu/research/
Energy Materials, vol. 3, No. 8 (August),
http://arxiv.org/abs/1702.08608 enrollment-algorithms-are-contributing-
pp. 980–85. https://doi.org/10.1002/
to-the-crises-of-higher-education/
Draetta, Laura, and Valerie Fernandez aenm.201300060
(2021). The Alicem app: A controversial European Data Protection Board (2021).
Gabriel, Iason (2020). Artificial Intelligence,
digital authentication system. I’MTech Dutch DPA issues formal warning to
Values and Alignment. Minds and
(blog), 3 February. https://imtech.imt. a supermarket for its use of facial
Machines, No. 30 (October), pp. 411–437.
fr/en/2021/02/03/the-alicem-app-a- recognition technology. Blog, 26 January.
https://doi.org/10.1007/s11023-020-09539-2
controversial-digital-authentication- https://edpb.europa.eu/news/national-
system/ news/2021/dutch-dpa-issues-formal- Galea, Sandro, and David Vlahov (2005).
warning-supermarket-its-use-facial- Urban health: Evidence, challenges,
Drozdowski, Pawel, Christian Rathgeb,
recognition_en and directions. Annual Review of
Antitza Dantcheva, Naser Damer, and
Public Health, vol. 26, No. 1), pp. 341–
Christophe Busch (2020). Demographic Ewijk, Edith van, and I. S. A. Baud
65. https://doi.org/10.1146/annurev.
bias in biometrics: A survey on an emer­ (2009). Partnerships between Dutch
publhealth.26.021304.144708
ging challenge. Institute of Electrical and municipalities and municipalities in
Electronics Engineers (IEEE) (5 March). countries of migration to the Netherlands; Gandhi, Tarak, and Mohan Manubhai
knowledge exchange and mutuality. Trivedi (2008). Computer vision and
Dwork, Cynthia, and Christina Ilvento
City-to-City Co-operation, vol. 33, No. machine learning for enhancing pedes­
(2018). Fairness under composition.
2, pp. 218–26. https://doi.org/10.1016/j. trian safety. In Computational Intelligence
arXiv:1806.06122 [cs.LG] (15 June).
habitatint.2008.10.014 in Automotive Applications, D. Prokhorov,
https://arxiv.org/abs/1806.06122
ed. Berlin, Heidelberg: Springer, pp. 59–77.
Faggella, Daniel (2015). Can abuse of
Dwork, Cynthia, and Marthe Louise Minow https://doi.org/10.1007/978-3-540-79257-4_4
AI agthe shape the future of human
(2022). Distrust of artificial intelligence:
computer interaction? Singularity blog, 13 Gar Alalm, Mohamed, and Mahmoud
Sources & responses from computer
November. https://www.singularityweblog. Nasr (2018). Artificial intelligence,
science & law. Daedalus, vol. 151, No. 2,
com/can-abuse-of-ai-agents-shape-the- regression model, and cost estimation
pp. 309–321.(spring). https://doi.
future-of-human-computer-interaction/ for removal of chlorothalonil pesticide
org/10.1162/DAED_a_01918
by activated carbon prepared from
Falco, Gregory, Ben Shneiderman, Julia
Eastin, Matthew S., and Robert LaRose casuarina charcoal. Sustainable Environ­
Badger, Ryan Carrier, Anton Dahbura,
(2000). Internet self-efficacy and the ment Research, vol. 28, No. 3 (May),
David Danks, Martin Eling, et al. (2021).
psychology of the digital divide. Journal pp. 101–10. https://doi.org/10.1016/j.
Governing AI safety through independent
of Computer-Mediated Communication, serj.2018.01.003
audits. Nature Machine Intelligence, No. 3
vol. 6, No. 1 (23 June), pp. 0–0. https://doi.
(1 July), pp. 566–71. https://doi.org/10.1038/ Garg, Nikhil, Londa Schiebinger, Dan
org/10.1111/j.1083-6101.2000.tb00110.x
s42256-021-00370-7 Jurafsky, and James Zou (2018). Word
El Hanandeh, Ali, Zainab Mahdi, and M. S. embeddings quantify 100 years of gender
Floridi, Luciano (2018). Soft ethics and the
Imtiaz (2021). Modelling of the adsorption and ethnic stereotypes. Proceedings of
governance of the digital. Philosophy &
of Pb, Cu and Ni ions from single and the National Academy of Sciences, vol. 115,
Technology, vol. 31, No. 1 (1 March), pp. 1–8.
multi-component aqueous solutions by No. 16 (17 April), pp. E3635–44. https://doi.
https://doi.org/10.1007/s13347-018-0303-9
date seed derived biochar: Comparison org/10.1073/pnas.1720347115
of six machine learning approaches. Floridi, Luciano (2019). Establishing the
Garvie, Claire (2019). Garbage in, garbage
Environmental Research, No. 192 (January. rules for building trustworthy AI. Nature
out: Face recognition on flawed data.
https://doi.org/10.1016/j.envres.2020.110338 Machine Intelligence, No. 1, pp. 261–262 (7
Georgetown Law Center on Privacy
May). https://philpapers.org/rec/FLOETR
& Technology (16 May). https://www.
flawedfacedata.com

114  A I A N D C I T I E S : R I S K S , A P P L I C AT I O N S A N D G O V E R N A N C E
REFERENCES

Gasser, Urs and Virgilio A. F. Almeida Gómez Mont, Constanza, Claudia May Hagenauer, Julian, and Marco Helbich
(2017). A layered model for AI governance. Del Pozo, Cristina Martínez Pinto, and Ana (2017). A comparative study of machine
IEEE Internet Computing, vol. 21, No. 6 Victoria Martín del Campo Alcocer (2020). learning classifiers for modeling travel
(November), pp. 58–62. doi:10.1109/ La inteligencia artificial al servicio del mode choice. Expert Systems with
mic.2017.4180835 bien social en América Latina y el Caribe: Applications, No. 78 (July), pp. 273–82.
Panorámica regional e instantáneas https://doi.org/10.1016/j.eswa.2017.01.057
Gastaldi, Massimiliano, Riccardo Rossi,
de doce países. Report, Inter-American
Gregorio Gecchele, and Luca Della Hansen, Terry, and Chia-Jiu Wang
Development Bank (May). https://doi.
Lucia (2013). Annual average daily traffic (2005). Support vector based battery
org/10.18235/0002393
estimation from seasonal traffic counts. state of charge estimator. Journal of
Procedia - Social and Behavioral Sciences, Gómez-Bombarelli, Rafael, Jennifer Power Sources, vol. 141, No. 2 (March),
No. 87 (October), pp. 279–91. https://doi. N. Wei, David Duvenaud, José Miguel pp. 351–58. https://doi.org/10.1016/j.
org/10.1016/j.sbspro.2013.10.610 Hernández-Lobato, Benjamín Sánchez- jpowsour.2004.09.020
Lengeling, Dennis Sheberla, Jorge
Gebru, Timnit, Jamie Morgenstern, Briana Hao, Karen (2019). AI is sending people to
Aguilera-Iparraguirre, Timothy D. Hirzel,
Vecchione, Jennifer Wortman Vaughan, jail and getting it wrong. MIT Technology
Ryan P. Adams, and Alán Aspuru-Guzik
Hanna Wallach, Hal Daumé Iii, and Kate Review 21 January . https://www.
(2018). Automatic chemical design using a
Crawford (2021). Datasheets for datasets. technologyreview.com/2019/01/21/137783/
data-driven continuous representation of
Communications of the ACM, vol. 64, No. algorithms-criminal-justice-ai/
molecules. ACS Central Science, vol. 4, No.
12 (December), pp. 86–92. https://doi.
2 (28 February), pp. 268–76. https://doi. Hao, Karen (2020). Live facial recognition
org/10.1145/3458723
org/10.1021/acscentsci.7b00572 is tracking kids suspected of being
Ghaemi, Mohammad Sajjad, Bruno criminals. MIT Technology Review, 10
González Perea, Rafael, Emilio Camacho
Agard, Martin Trépanier, and Vahid September. https://www.technologyreview.
Poyato, Pilar Montesinos, and Juan Antonio
Partovi Nia (2017). A visual segmentation com/2020/10/09/1009992/live-facial-
Rodríguez Díaz (2019). Optimisation of
method for temporal smart card data. recognition-is-tracking-kids-suspected-
water demand forecasting by artificial
Transportmetrica A: Transport Science, vol. of-crime/
intelligence with short data sets.
13, No. 5 (28 May), pp. 381–404. https://doi.
Biosystems Engineering, No. 177 (January), Heemstra, F. J. (1992). Software cost
org/10.1080/23249935.2016.1273273
pp. 59–66. https://doi.org/10.1016/j. estimation. Information and Software
Ghanem, Ahmed, Mohammed Elhenawy, biosystemseng.2018.03.011 Technology, vol. 34, No. 10 (October),
Mohammed Almannaa, Huthaifa I. pp. 627–39. https://doi.org/10.1016/0950-
Goodfellow, Ian, Yoshua Bengio, and
Ashqar, and Hesham A. Rakha (2017). Bike 5849(92)90068-Z
Aaron Courville (2016). Deep Learning:
share travel time modeling: San Francisco
Adaptive Computation and Machine Ho, Hung Chak, Anders Knudby, Paul
Bay Area case study. In 2017 5th IEEE
Learning. Cambridge: The MIT Press. Sirovyak, Yongming Xu, Matus Hodul,
International Conference on Models and
and Sarah B. Henderson (2014). Mapping
Technologies for Intelligent Transportation Greenfield, Adam (2013). Against the Smart
maximum urban air temperature on
Systems (MT-ITS), pp. 586–91. https://doi. City: Part I of The City Is Here for You to
hot summer days. Remote Sensing of
org/10.1109/MTITS.2017.8005582 Use. New York City: Do Projects.
Environment, No. 154 (November), pp. 38–
Glielmo, Luigi, Stefania Santini, and Gunasekeran, Dinesh Visva, Rachel 45. https://doi.org/10.1016/j.rse.2014.08.012
Gabriele Serra (1999). A two-time-scale Marjorie Wei Wen Tseng, Yih-Chung Tham,
Howard, Philip N., and Samuel C. Woolley
infinite-adsorption model of three-way and Tien Yin Wong (2021). Applications of
(2018). Computational Propaganda:
catalytic converters. In Proceedings of digital health for public health responses
Political Parties, Politicians, and Political
the 1999 American Control Conference to COVID-19: A systematic scoping review
Manipulation on Social Media. Oxford:
(Cat. No. 99CH36251), vol. 4, pp. 2683–87. of artificial intelligence, telehealth and
Oxford University Press.
San Diego: IEEE. https://doi.org/10.1109/ related technologies. npj Digital Medicine,
ACC.1999.786557 vol. 4, No. 1 (26 February), p. 40. https://doi. Huang, Jeffrey, Mikhael Johanes, Frederick
org/10.1038/s41746-021-00412-9 Chando Kim, Christina Doumpioti, and
Godoy, Franklin Valdebenito, Nuria
Georg-Christoph Holz (2021). On GANs,
Hartmann, and Janina Hanswillemenke Gupta, Ritwik, Richard Hosfelt, Sandra
NLP and architecture: Combining human
(2021). APEC case study: Best practices of Sajeev, Nirav Patel, Bryce Goodman,
and machine intelligences for the gene­
smart cities in a digital age. Asia-Pacific Jigar Doshi, Eric Heim, Howie Choset, and
ration and evaluation of meaningful
Economic Cooperation (APEC) SOM Matthew Gaston (2019). xBD: A dataset
designs. Technology|Architecture + Design,
Steering Committee on Economic and for assessing building damage from
vol. 5, No. 2 (3 July), pp. 207–24. https://doi.
Technical Cooperation. https://www.apec. satellite imagery. arXiv:1911.09296 [cs] (21
org/10.1080/24751448.2021.1967060
org/publications/2021/08/best-practices- November). http://arxiv.org/abs/1911.09296
of-smart-cities-in-the-digital-age Hussain, F., R. Hussain, and E. Hossain
Haataja, Meeri, Linda van de Fliert, and
(2021). Explainable artificial intelligence
Pasi Rautio (2020). Public AI registers:
(XAI): An engineering perspective.
Realising AI transparency and civic
arXiv:2101.03613 [cs] (10 January). http://
participation in government use of AI.
arxiv.org/abs/2101.03613
White paper. https://algoritmeregister.
amsterdam.nl/wp-content/uploads/
White-Paper.pdf

115  A I A N D C I T I E S : R I S K S , A P P L I C AT I O N S A N D G O V E R N A N C E
REFERENCES

International Telecommunication Union Kalogirou, Soteris A. (2003). Artificial Krieger, Nancy, Pamela Waterman,
(ITU) (2020). Frontier technologies to intelligence for the modeling and control of Jarvis Chen, Mah-Jabeen Soobader,
protect the environment and tackle climate combustion processes: A review. Progress S. V. Subramanian, and Rosa Carson
change. Report. https://www.itu.int/en/ in Energy and Combustion Science, vol. (2003). Zip code caveat: Bias due to
action/environment-and-climate-change/ 29, No. 6 (January), pp. 515–66. https://doi. spatiotemporal mismatches between zip
Documents/frontier-technologies-to- org/10.1016/S0360-1285(03)00058-3 codes and US census-defined geographic
protect-the-environment-and-tackle- areas. American Journal of Public Health,
Kassens-Noor, Eva, and Arend Hintze
climate-change.pdf vol. 92, No. 7 (1 July), pp. 1100–1102. https://
(2020). Cities of the future? The potential
doi.org/10.2105/ajph.92.7.1100
Jacquillat, Alexandre, and Amedeo R. impact of artificial intelligence. AI, vol, 1,
Odoni (2018). A roadmap toward airport No. 2, pp. 192–197. https://doi.org/10.3390/ Krile, Robert, Fred Todt, and Jeremy
demand and capacity management. ai1020012 Schroeder (2015). Assessing roadway
Transportation Research Part A: Policy traffic count duration and frequency
Khajeh-Hosseini, Ali, David Greenwood,
and Practice, No. 114 (August), pp. 168–85. impacts on annual average daily traffic
James W. Smith, and Ian Sommerville
https://doi.org/10.1016/j.tra.2017.09.027 estimation: Assessing accuracy issues
(2010). The cloud adoption toolkit:
related to short-term count durations. TRID:
Jahnke, Art (2018). Are computer-aided Supporting cloud adoption decisions in the
the TRIS and ITRD database (October).
decisions actually fair? The Brink, enterprise. arXiv:1008.1900 [cs.DC]. https://
https://trid.trb.org/view/1442739
14 December. https://www.bu.edu/ doi.org/10.48550/ARXIV.1008.1900
articles/2018/algorithmic-fairness/ Krishna, Kalpesh, Gaurav Singh Tomar,
Kim, Jaewoo, Meeyoung Cha, and Jong
Ankur P. Parikh, Nicolas Papernot, and
Jain, Ashu, and Lindell E. Ormsbee (2002). Gun Lee (2017). Nowcasting commodity
Mohit Iyyer (2020). Thieves on Sesame
Short-term water demand forecast prices using social media. PeerJ Computer
Street! Model extraction of BERT-based
modeling techniques: Conventional Science, No. 3 (31 July), p. e126. https://doi.
APIs. arXiv:1910.12366 [cs] (12 October).
methods versus AI. Journal - American org/10.7717/peerj-cs.126
http://arxiv.org/abs/1910.12366
Water Works Association, vol. 94,
Kiran, B. Ravi, Ibrahim Sobh, Victor
No. 7 (July), pp. 64–72. https://doi. Lao, David (2020). Clearview AI: When can
Talpaert, Patrick Mannion, Ahmad A. Al
org/10.1002/j.1551-8833.2002.tb09507.x companies use facial recognition data?
Sallab, Senthil Yogamani, and Patrick
Global News, 3 June. https://globalnews.
Jameson, Shazade, Linnet Taylor, and Pérez (2021). Deep reinforcement learning
ca/news/6621410/clearview-ai-canada-
Merel Noorman (2021). Data governance for autonomous driving: A survey. IEEE
privacy-data/
clinics: A new approach to public-interest Transactions on Intelligent Transportation
technology in cities. SSRN Scholarly Paper. Systems, vol. 23, No. 6, pp. 1–18. https://doi. Larson, Jeff, Surya Mattu, Lauren
Rochester, NY: Social Science Research org/10.1109/TITS.2021.3054625 Kirchner, and Julia Angwin (2016). How
Network, 1 September. https://doi. we analyzed the COMPAS recidivism
Kitchin, Rob (2016). The ethics of smart
org/10.2139/ssrn.3880961 algorithm. ProPublica, 23 May.
cities and urban science. Philosophical
https://www.propublica.org/article/
Janssen, Marijn, Martijn Hartog, Ricardo Transactions of the Royal Society A, vol.
how-we-analyzed-the-compas-
Matheus, Aaron Yi Ding, and George 374, No. 2083 (28 December), p. 20160115.
recidivism-algorithm?token=BqO_
Kuk (2022). Will algorithms blind people? https://doi.org/10.1098/rsta.2016.0115
ITYNAKmQwhj7daSusnn7aJDGaTWE
The effect of explainable AI and decision-
Korinek, Anton, Martin Schindler, and
makers’ experience on AI-supported Larsson, Stefan, and Fredrik Heintz (2020).
Joseph Stiglitz (2021). Technological
decision-making in government. Transparency in artificial intelligence.
progress, artificial intelligence,
Social Science Computer Review, Internet Policy Review, vol. 9, No. 2 (5 May).
and inclusive growth. IMF Working
vol. 40, No. 2), pp. 478–93. https://doi. https://policyreview.info/concepts/
Papers No. 166 (11 June). https://doi.
org/10.1177/0894439320980118 transparency-artificial-intelligence
org/10.5089/9781513583280.001.A001
Jiang, Huaiguang, and Yingchen Zhang Latonero, Mark (2018). Governing artificial
Kornweitz, Aritz (2021). A new AI lexicon:
(2016). Short-term distribution system state intelligence: Upholding human rights &
Function creep. Medium, 8 April. https://
forecast based on optimal synchrophasor dignity. Report. Data & Society, 10 October.
medium.com/a-new-ai-lexicon/a-new-
sensor placement and extreme learning https://datasociety.net/library/governing-
ai-lexicon-function-creep-1c20834fab4a
machine. In 2016 IEEE Power and Energy artificial-intelligence/
Society General Meeting (PESGM), pp. Korteling, J. E. (Hans)., G. C. van de Boer-
Lattimore, Finn, Simon O’Callaghan, Zoe
1–5. Boston: IEEE. https://doi.org/10.1109/ Visschedijk, R. A. M. Blankendaal, R. C.
Paelologos, Alistair Reid, Edward Santow,
PESGM.2016.7741933 Boonekamp, and A. R. Eikelboom (2021).
Holli Sargeant, and Andrew Thomsen
Human versus artificial intelligence.
J. P. Morgan Research (2021). How long (2020). Using artificial intelligence to
Frontiers in Artificial Intelligence, No. 4.
will the chip shortage last? https://www. make decisions: Addressing the problem
https://doi.org/10.3389/frai.2021.622364
jpmorgan.com/insights/research/supply- of algorithmic bias. Australian Human
chain-chip-shortage Kosinski, Michal, David Stillwell, and Rights Commission (24 November). https://
Thore Graepel (2013). Private traits and humanrights.gov.au/our-work/rights-and-
Kaack, Lynn H., George H. Chen, and
attributes are predictable from digital freedoms/publications/using-artificial-
M. Granger Morgan (2019). Truck
records of human behavior. Proceedings intelligence-make-decisions-addressing
traffic monitoring with satellite images.
of the National Academy of Sciences, vol.
arXiv:1907.07660 [cs] (17 July). http://arxiv.
110, No. 15 (9 April), pp. 5802–5. https://doi.
org/abs/1907.07660
org/10.1073/pnas.1218772110

116  A I A N D C I T I E S : R I S K S , A P P L I C AT I O N S A N D G O V E R N A N C E
REFERENCES

Lee, Hanbong, Waqar Malik, Bo Zhang, Lin, Angela, and Nan-Chou Chen (2012). Malof, Jordan M., Kyle Bradbury, Leslie
Balaji Nagarajan, and Yoon C. Jung (2015). Cloud computing as an innovation: M. Collins, and Richard G. Newell (2016).
Taxi time prediction at Charlotte Airport Perception, attitude, and adoption. Automatic detection of solar photovoltaic
using fast-time simulation and machine International Journal of Information arrays in high resolution aerial imagery.
learning techniques. In 15th AIAA Aviation Management, vol. 32, No. 6 (December), Applied Energy, No. 183 (December),
Technology, Integration, and Operations pp. 533–40. https://doi.org/10.1016/j. pp. 229–40. https://doi.org/10.1016/j.
Conference. Dallas, TX: American Institute ijinfomgt.2012.04.001 apenergy.2016.08.191
of Aeronautics and Astronautics. https://
Lin, Anthony L., William C. Chen, and Manley, Ed, Chen Zhong, and Michael
doi.org/10.2514/6.2015-2272
Julian C. Hong (2021). Chapter 8 – Batty (2018). Spatiotemporal variation
Lee, Min Kyung, Daniel Kusbit, Evan Electronic health record data mining in travel regularity through transit user
Metsky, Laura Dabbish (2015). Working for artificial intelligence healthcare. In profiling. Transportation, vol. 45, No. 3
with machines: The impact of algorithmic Artificial Intelligence in Medicine, Lei Xing, (May), pp. 703–32. https://doi.org/10.1007/
and data-driven management on Maryellen L. Giger, and James K. Min, eds. s11116-016-9747-x
human workers. CHI ‘15: Proceedings Academic Press, pp. 133–50. https://doi.
Mäntymäki, Matti, Matti Minkkinen,
of the 33rd Annual ACM Conference on org/10.1016/B978-0-12-821259-2.00008-9
Teemu Birkstedt, and Mika Viljanen (2022).
Human Factors in Computing Systems
Lindgren, S. (2020). Data Theory: Defining organizational AI governance.
(April) pp. 1603–1612. https://doi.
Interpretive Sociology and Computational AI Ethics (24 February). https://doi.
org/10.1145/2702123.2702548
Methods. John Wiley & Sons. org/10.1007/s43681-022-00143-x
Leslie, David (2019). Understanding
Lipton, Zachary (2016). The mythos of Marr, Bernard (2022). The dangers of
artificial intelligence ethics and safety.
model interpretability. arXiv:1606.03490 not aligning artificial intelligence with
arXiv:1906.05684 [cs, stat] (11 June). https://
[cs.LG] (10 June). https://arxiv.org/ human values. Forbes, (4 January).
doi.org/10.5281/zenodo.3240529
abs/1606.03490 https://www.forbes.com/sites/
Leslie, David, Christopher Burr, Mhairi bernardmarr/2022/04/01/the-dangers-
Liu, Lydia (2020). When bias begets bias:
Aitken, Josh Cowls, Mike Katell, Morgan of-not-aligning-artificial-intelligence-
A source of negative feedback loops in
Briggs, and Tim Clement-Jones (2021). with-human-values/
AI systems. Microsoft Research Blog,
Artificial intelligence, human rights,
21 January. https://www.microsoft.com/ Mathe, Johan, Nina Miolane, Nicolas
democracy, and the rule of law. The Alan
en-us/research/blog/when-bias-begets- Sebastien, and Jeremie Lequeux (2019).
Turing Institute (June). https://rm.coe.
bias-a-source-of-negative-feedback- PVNet: A LRCN architecture for spatio-
int/primer-en-new-cover-pages-coe-
loops-in-ai-systems/ temporal photovoltaic powerforecasting
english-compressed-2754-7186-0228-v-
from numerical weather prediction.
1/1680a2fd4a Liu, Yue, Tianlu Zhao, Wangwei Ju, and
arXiv:1902.01453 [cs.LG] (4 February).
Siqi Shi (2017). Materials discovery and
Leszczynski, Agnieszka (2020). Glitchy https://doi.org/10.48550/ARXIV.1902.01453
design using machine learning. Journal
vignettes of platform urbanism. Environ­
of Materiomics, vol. 3, No. 3 (September), Martinho-Truswell, Emma (2018). How
ment and Planning D: Society and Space,
pp. 159–77. https://doi.org/10.1016/j. AI could help the public sector. Harvard
vol. 38, No. 2 (1 April), pp. 189–208. https://
jmat.2017.08.002 Business Review (January). https://hbr.
doi.org/10.1177/0263775819878721
org/2018/01/how-ai-could-help-the-
Lu, Jie, Anjin Liu, Yiliao Song, and
Leung, Hareton, and Zhang Fan (2002). public-sector
Guangquan Zhang (2020). Data-driven
Software cost estimation. In Handbook
decision support under concept drift in Mazloumi, Ehsan, Geoff Rose, Graham
of Software Engineering and Knowledge
streamed big data. Complex & Intelligent Currie, and Sara Moridpour (2011).
Engineering, Volume II: Emerging
Systems, vol. 6, No. 1 (1 April), pp. 157–63. Prediction intervals to account for
Technologies. S. K. Chang, ed. Hackensack,
https://doi.org/10.1007/s40747-019-00124-4 uncertainties in neural network predictions:
NJ: World Scientific, pp. 307–24. https://doi.
Methodology and application in bus travel
org/10.1142/9789812389701_0014 Luckin, Rose, Wayne Holmes, Mark
time prediction. Engineering Applications
Griffiths, and Laurie B. Forcier (2016).
Li, Bo, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, of Artificial Intelligence, vol. 24, No. 3
Intelligence Unleashed: An Argument for AI
Jiquan Pei, Jinfeng Yi, and Bowen Zhou (April), pp. 534–42. https://doi.org/10.1016/j.
in Education. Pearson Education, London.
(2021). Trustworthy AI: From principles engappai.2010.11.004
to practices. arXiv:2110.01167 [cs.AI] (4 Madiega, Tambiama André, and Hendrik
Mazzolin, Robert (2020). Artificial
October). https://arxiv.org/abs/2110.01167 Alexander Mildebrath (2021). Regulating
intelligence and keeping humans “in the
facial recognition in the EU. European
Licheva, Veronika (2018). Commotion at loop.” Centre for International Governance
Parliamentary Research Service. https://
the Piet Hein Tunnel as misguided tourists Innovation (blog), 23 November. https://
www.europarl.europa.eu/RegData/
cycle right through it. DutchReview (blog), www.cigionline.org/articles/artificial-
etudes/IDAN/2021/698021/EPRS_
2 August. https://dutchreview.com/news/ intelligence-and-keeping-humans-loop/
IDA(2021)698021_EN.pdf
weird/commotion-at-the-piet-hein-
McCarthy, John (2007). What is AI? http://
tunnel-as-misguided-tourists-cycle-right-
faculty.otterbein.edu/dstucki/inst4200/
through-it/
whatisai.pdf

117  A I A N D C I T I E S : R I S K S , A P P L I C AT I O N S A N D G O V E R N A N C E
REFERENCES

McCrimmon, Ryan, and Martin Matishak Middleton, Stuart E., Emmanuel Letouzé, Nahavandi, Darius, Roohallah
(2021). Cyberattack on food supply Ali Hossaini, and Adriane Chapman Alizadehsani, Abbas Khosravi, and U.
followed years of warnings. Politico, (2022). Trust, regulation, and human- Rajendra Acharya (2022). Application of
6 May. https://www.politico.com/ in-the-loop AI: Within the European artificial intelligence in wearable devices:
news/2021/06/05/how-ransomware- region. Communications of the ACM Opportunities and challenges. Computer
hackers-came-for-americans- 65, No. 4 (April), pp. 64–68. https://doi. Methods and Programs in Biomedicine,
beef-491936 org/10.1145/3511597 No. 213 (1 January), p. 106541. https://doi.
org/10.1016/j.cmpb.2021.106541
Mehmood, Hassan, Panos Kostakos, Marta Milano, Silvia, Brent Mittelstadt, Sandra
Cortes, Theodoros Anagnostopoulos, Wachter, and Christopher Russell (2021). Nam, Daisik, Hyunmyung Kim, Jaewoo
Susanna Pirttikangas, and Ekaterina Epistemic fragmentation poses a threat Cho, and R. Jayakrishnan (2017). A model
Gilman (2021). Concept drift adaptation to the governance of online targeting. based on deep learning for predicting
techniques in distributed environment for Nature Machine Intelligence, vol. 3, No. 6 travel mode choice.Semantics Scholar.
real-world data streams. Smart Cities, (June), pp. 466–72. https://doi.org/10.1038/ https://www.semanticscholar.org/paper/
vol. 4, No. 1 (March), pp. 349–71. https:// s42256-021-00358-3 A-Model-Based-on-Deep-Learning-for-
doi.org/10.3390/smartcities4010021 Predicting-Mode-Nam-Kim/56a8b850dcf
Monge, Fernando (2022a). Cities have
a7900c59c79c3fa3e962c4aa5e916
Mehr, Hila (2017). Artificial intelligence figured out a way to access private data
for citizen services and government. Ash in real time. Datapoli’s Newsletter (blog), Newaz, A. K. M. Iqtidar, Nur Imtiazul
Center for Democratic Governance and 25 March. https://fermonge.substack. Haque, Amit Kumar Sikder, Mohammad
Innovation. https://ash.harvard.edu/ com/p/cities-have-figured-out-a-way-to Ashiqur Rahman, and A. Selcuk Uluagac
files/ash/files/artificial_intelligence_for_ (2020). Adversarial attacks to machine
Monge, Fernando (2022b). A new data
citizen_services.pdf learning-based smart healthcare systems.
deal: The case of Barcelona. Datapoli’s
arXiv:2010.03671 [cs] (7 October). http://
Mehrabi, Ninareh, Fred Morstatter, Newsletter (blog), 25 February. https://
arxiv.org/abs/2010.03671
Nripstuta Saxena, Kristina Lerman, fermonge.substack.com/p/a-new-data-
and Aram Galstyan (2019). A survey on deal-the-case-of-barcelona Nguyen, Van Nhan, Robert Jenssen,
bias and fairness in machine learning. and Davide Roverso (2018). Automatic
Moreschi, Bruno, Gabriel Pereira, and
arXiv:1908.09635 [cs.LG] (23 August). autonomous vision- based power line
Fabio G. Cozman (2020). The Brazilian
https://arxiv.org/abs/1908.09635 inspection: A review of current status
workers in Amazon Mechanical Turk:
and the potential role of deep learning.
Mehrotra, Dhruv, Suya Mattu, Annie Dreams and realities of ghost workers.
International Journal of Electrical Power &
Gilbertson, and Aaron Sankin (2021). Contracampo – Brazilian Journal of
Energy Systems, No. 99 (July), pp. 107–20.
How we determined predictive policing Communication, vol. 39, No. 1. http://dx.doi.
https://doi.org/10.1016/j.ijepes.2017.12.016
software disproportionately targeted low- org/10.22409/contracampo.v39i1.38252
income, Black, and Latino neighborhoods. Nickerson, Raymond S. (1998). Confirmation
Muhammad, Khan, Jaime Lloret, and
Gizmodo (blog), 12 February. https:// bias: A ubiquitous phenomenon in many
Sung Wook Baik (2019). Intelligent and
gizmodo.com/how-we-determined- guises. Review of General Psychology,
energy-efficient data prioritization in
predictive-policing-software- vol. 2, No. 2, pp. 175–220. https://doi.
green smart cities: Current challenges and
dispropo-1848139456 org/10.1037/1089-2680.2.2.175
future directions. IEEE Communications
Metcalf, Jacob, Emanuel Moss, Elizabeth Magazine, vol. 57, No. 2 (February), Nokia (2021). Nokia scene analytics. Nokia
Anne Watkins, Ranjit Singh, and Madeleine pp. 60–65. https://doi.org/10.1109/ Digital Innovation Cloud (December).
Clare Elish (2021). Algorithmic impact MCOM.2018.1800371 https://www.dac.nokia.com/applications/
assessments and accountability: The co- scene-analytics/
Murphy, Kevin P. (2012). Machine Learning:
construction of impacts. In Proceedings
A Probabilistic Perspective. Cambridge: Noursalehi, Peyman, Haris N. Koutsopoulos,
of the 2021 ACM Conference on Fairness,
MIT Press. http://site.ebrary.com/ and Jinhua Zhao (2018). Real time transit
Accountability, and Transparency. FAccT
id/10597102 demand prediction capturing station
’21. New York: Association for Computing
interactions and impact of special events.
Machinery, pp. 735–46. https://doi. Nagitta, Pross Oluka, Godfrey Mugurusi,
Transportation Research Part C: Emerging
org/10.1145/3442188.3445935 Peter Adoko Obicci, and Emmanuel
Technologies 97 (December), pp. 277–300.
Awuor (2022). Human-centered artificial
Metz, Thaddeus (2021). African reasons https://doi.org/10.1016/j.trc.2018.10.023
intelligence for the public sector: The gate
why AI should not maximize utility. In
keeping role of the public procurement Nutkiewicz, Alex, Zheng Yang, and Rishee
African Values, Ethics, and Technology:
professional. Procedia Computer Science, K. Jain (2018). Data-driven urban energy
Questions, Issues, and Approaches,
No. 200 (1 January), pp. 1084–92. https:// simulation (DUE-S): A framework for
Beatrice Dedaa Okyere-Manu, ed., pp.
doi.org/10.1016/j.procs.2022.01.308 integrating engineering simulation and
55–72. Cham: Springer International
machine learning methods in a multi-
Publishing. https://doi.org/10.1007/978-3-
scale urban energy modeling workflow.
030-70550-3_4
Applied Energy, No. 225 (September),
pp. 1176–89. https://doi.org/10.1016/j.
apenergy.2018.05.023

118  A I A N D C I T I E S : R I S K S , A P P L I C AT I O N S A N D G O V E R N A N C E
REFERENCES

Oard, Douglas, and Jinmook Kim (1998). Partnership on AI (2021). Fairer algorithmic Quinn, Thomas P., Stephan Jacobs,
Implicit feedback for recommender decision-making and its consequences: Manisha Senadeera, Vuong Le, and
systems. AAAI Workshop on Recommender Interrogating the risks and benefits Simon Coghlan (2022). The three ghosts
Systems, Madison, WI, pp. 81–83. https:// of demographic data collection, use, of medical AI: Can the black-box present
scholarcommons.sc.edu/libsci_facpub/111/ and non-use. 2 December. https:// deliver? Artificial Intelligence in Medicine,
partnershiponai.org/paper/fairer- No. 124 (February), p. 102158. https://doi.
OECD (Organisation for Economic
algorithmic-decision-making-and-its- org/10.1016/j.artmed.2021.102158
Cooperation and Development)
consequences/
(2018). OECD regulatory policy Rahim Taleqani, Ali, Jill Hough, and
outlook 2018. 10 October. https://doi. Perera, Kasun S., Zeyar Aung, and Wei Lee Kendall E. Nygard (2019). Public opinion
org/10.1787/9789264303072-en Woon (2014). Machine learning techniques on dockless bike sharing: A machine
for supporting renewable energy learning approach. Transportation
OECD (2019). OECD AI principles overview
generation and integration: a survey. In Research Record: Journal of the
(May). https://oecd.ai/en/ai-principles

OECD (2022). OECD Framework for the classification of AI systems. OECD Digital Economy Papers (22 February). https://doi.org/10.1787/cb6d9eca-en

OECD and United Nations Economic and Social Commission for Western Asia (UN ESCWA) (2021). Open government: Concept, definitions and implementation. In The Economic and Social Impact of Open Government: Policy Recommendations for the Arab Countries. https://www.oecd-ilibrary.org/governance/the-economic-and-social-impact-of-open-government_6b3e2469-en

Office of the Privacy Commissioner of Canada (OPC) (2021). News release: Clearview AI's unlawful practices represented mass surveillance of Canadians, commissioners say, 3 February. https://www.priv.gc.ca/en/opc-news/news-and-announcements/2021/nr-c_210203/

Omrani, Hichem (2015). Predicting travel mode of individuals by machine learning. Transportation Research Procedia, No. 10, pp. 840–49. https://doi.org/10.1016/j.trpro.2015.09.037

Open Mobility Foundation (2021). Recent court ruling supports mobility data specification. 26 February. https://www.openmobilityfoundation.org/an-important-development-in-los-angeles-for-the-mobility-data-specification-and-data-privacy/

Otieno, Fred, Nathan Williams, and Patrick McSharry (2018). Forecasting energy demand for microgrids over multiple horizons. In 2018 IEEE PES/IAS PowerAfrica. Cape Town: IEEE, pp. 457–62. https://doi.org/10.1109/PowerAfrica.2018.8521063

Proceedings of the Second International Conference on Data Analytics for Renewable Energy Integration. DARE'14. Cham: Springer, pp. 81–96. https://doi.org/10.1007/978-3-319-13290-7_7

Pertl, Michael, Kai Heussen, Oliver Gehrke, and Michel Rezkalla (2016). Voltage estimation in active distribution grids using neural networks. In 2016 IEEE Power and Energy Society General Meeting (PESGM). Boston: IEEE, pp. 1–5. https://doi.org/10.1109/PESGM.2016.7741758

Petropoulos, Georgios (2022). The dark side of artificial intelligence: Manipulation of human behaviour. Bruegel, 2 February. https://www.bruegel.org/2022/02/the-dark-side-of-artificial-intelligence-manipulation-of-human-behaviour/

Prince, Anya, and Daniel Schwarcz (2020). Proxy discrimination in the age of artificial intelligence and big data. Iowa Law Review, vol. 105, No. 3. https://ilr.law.uiowa.edu/print/volume-105-issue-3/proxy-discrimination-in-the-age-of-artificial-intelligence-and-big-data

Prins, Corien, Haroon Sheikh, Erik Schrijvers, Eline de Jong, Monique Steijns, and Mark Bovens (2021). Mission AI: The new system technology. Netherlands Scientific Council for Government Policy (WRR). https://www.wrr.nl/binaries/wrr/documenten/rapporten/2021/11/11/opgave-ai-de-nieuwe-systeemtechnologie/Summary+WRRreport_Mission+AI_The+New+System+Technology_R105.pdf

PwC (2018). Fourth Industrial Revolution for the Earth: Harnessing artificial intelligence for the Earth. January. https://www.pwc.com/gx/en/services/sustainability/publications/ai-for-the-earth.html

Transportation Research Board, vol. 2673, No. 4 (April), pp. 195–204. https://doi.org/10.1177/0361198119838982

Ramchurn, Sarvapali, Perukrishnen Vytelingum, Alex Rogers, and Nicholas R. Jennings (2012). Putting the "smarts" into the smart grid: A grand challenge for artificial intelligence. Communications of the ACM, vol. 55, No. 4, pp. 86–97.

Raval, Noopur (2019). Automating informality: On AI and labour in the Global South. Global Information Society Watch. https://giswatch.org/node/6202

Regue, Robert, and Will Recker (2014). Proactive vehicle routing with inferred demand to solve the bikesharing rebalancing problem. Transportation Research Part E: Logistics and Transportation Review, No. 72 (December), pp. 192–209. https://doi.org/10.1016/j.tre.2014.10.005

Reuter, Tina (2020). Smart city visions and human rights: Do they go together? Carr Center Discussion Paper Series (Spring). https://carrcenter.hks.harvard.edu/files/cchr/files/CCDP_006.pdf

Richardson, Jordan P., Cambray Smith, Susan Curtis, Sara Watson, Xuan Zhu, Barbara Barry, and Richard R. Sharp (2021). Patient apprehensions about the use of artificial intelligence in healthcare. npj Digital Medicine, vol. 4, No. 1 (21 September), p. 140. https://doi.org/10.1038/s41746-021-00509-1

Rissland, Edwina L., Kevin D. Ashley, and R. P. Loui (2003). AI and law: A fruitful synergy. Artificial Intelligence, vol. 150, No. 1–2 (November), pp. 1–15. https://doi.org/10.1016/S0004-3702(03)00122-X


Robinson, Caleb, Bistra Dilkina, Jeffrey Hubbs, Wenwen Zhang, Subhrajit Guhathakurta, Marilyn A. Brown, and Ram M. Pendyala (2017). Machine learning approaches for estimating commercial building energy consumption. Applied Energy, No. 208 (December), pp. 889–904. https://doi.org/10.1016/j.apenergy.2017.09.060

Rodríguez, Andrea G. (2021). AI ethics in policy and action: City governance of algorithmic decision systems. COVID Briefs: Building back better: Post-pandemic city governance. CIDOB (Barcelona Centre for International Affairs), 15 December. https://dossiers.cidob.org/cities-in-times-of-pandemics/assets/pdf/Ai_ethics_in_policy_and_action.pdf

Rohaidi, Nurfilzah (2017). How the Maldives uses drones to fight climate change. GovInsider (3 November). https://govinsider.asia/inclusive-gov/undp-maldives-drones-climate-change-disaster-risk-maps/

Rosenblat, Alex, and Luke Stark (2016). Algorithmic labor and information asymmetries: A case study of Uber's drivers. International Journal of Communication, vol. 10, No. 27 (30 July). http://dx.doi.org/10.2139/ssrn.2686227

Rosenzweig, Cynthia, Joshua Elliott, Delphine Deryng, Alex C. Ruane, Christoph Müller, Almut Arneth, Kenneth J. Boote, et al. (2014). Assessing agricultural risks of climate change in the 21st century in a global gridded crop model intercomparison. Proceedings of the National Academy of Sciences of the United States of America, vol. 111, No. 9 (4 March), pp. 3268–73. https://doi.org/10.1073/pnas.1222463110

Rothrock, Stephen M. (2021). Coronavirus, chip boom, and supply shortage: The new normal for global semiconductor manufacturing. International Symposium on Microelectronics 2021, No. 1 (1 October), pp. 000026–000030. https://doi.org/10.4071/1085-8024-2021.1.000026

Rovatsos, Michael, Brent Mittelstadt, and Ansgar Koene (2019). Landscape summary: Bias in algorithmic decision-making: What is bias in algorithmic decision-making, how can we identify it, and how can we mitigate it? UK Government (19 July). https://www.research.ed.ac.uk/en/publications/landscape-summary-bias-in-algorithmic-decision-making-what-is-bia

Russell, Stuart J., and Peter Norvig (2010). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall.

Safir, Inbal Naveh (2019). Using artificial intelligence as a tool for your local government. ICMA blog, 24 April. https://icma.org/blog-posts/using-artificial-intelligence-tool-your-local-government

Sagiroglu, Seref, and Duygu Sinanc (2013). Big data: A review. In 2013 International Conference on Collaboration Technologies and Systems (CTS), pp. 42–47. https://doi.org/10.1109/CTS.2013.6567202

Sahin, Kaan (2020). The West, China, and AI surveillance. Atlantic Council (blog), 18 December. https://www.atlanticcouncil.org/blogs/geotech-cues/the-west-china-and-ai-surveillance/

Saikia, Prarthana (2021). The importance of data drift detection that data scientists do not know. Analytics Vidhya (blog), 15 October. https://www.analyticsvidhya.com/blog/2021/10/mlops-and-the-importance-of-data-drift-detection/

Salamone, Francesco, Ludovico Danza, Italo Meroni, and Matteo Ghellere (2017). How to define the urban comfort in the era of smart cities through the use of the do-it-yourself approach and new pervasive technologies. Presented at the 4th International Electronic Conference on Sensors and Applications (ECSA 2017), 15–30 November. https://www.researchgate.net/publication/321076513_How_to_Define_the_Urban_Comfort_in_the_Era_of_Smart_Cities_through_the_Use_of_the_Do-It-Yourself_Approach_and_New_Pervasive_Technologies

Samimi, Amir, Kazuya Kawamura, and Abolfazl Mohammadian (2011). A behavioral analysis of freight mode choice decisions. Transportation Planning and Technology, vol. 34, No. 8 (December), pp. 857–69. https://doi.org/10.1080/03081060.2011.600092

Santos, Marcus A. G., Roberto Munoz, Rodrigo Olivares, Pedro P. Rebouças Filho, Javier Del Ser, and Victor Hugo C. de Albuquerque (2020). Online heart monitoring systems on the internet of health things environments: A survey, a reference model and an outlook. Information Fusion, No. 53 (1 January), pp. 222–39. https://doi.org/10.1016/j.inffus.2019.06.004

Santosh, K. C., Sameer Antani, D. S. Guru, and Nilanjan Dey, eds. (2019). Medical Imaging: Artificial Intelligence, Image Recognition, and Machine Learning Techniques. New York: CRC Press.

Schmitt, Lewin (2021). Mapping global AI governance: A nascent regime in a fragmented landscape. AI and Ethics, No. 2 (17 August), pp. 303–314. https://doi.org/10.1007/s43681-021-00083-y

Schwalbe, Nina, and Brian Wahl (2020). Artificial intelligence and the future of global health. Lancet, vol. 395, No. 10236, pp. 1579–86. https://doi.org/10.1016/S0140-6736(20)30226-9

Seo, Toru, Takahiko Kusakabe, Hiroto Gotoh, and Yasuo Asakura (2019). Interactive online machine learning approach for activity-travel survey. Transportation Research Part B: Methodological, vol. 123, No. C, pp. 362–73.

Shaban-Nejad, Arash, Martin Michalowski, and David L. Buckeridge (2018). Health intelligence: How artificial intelligence transforms population and personalized health. npj Digital Medicine, vol. 1, No. 1 (2 October), p. 53. https://doi.org/10.1038/s41746-018-0058-9

Shibasaki, Ryosuke, Satoru Hori, Shunji Kawamura, and Shigeyuki Tani (2020). Integrating urban data with urban services. In Society 5.0: A People-Centric Super-Smart Society. Singapore: Springer, pp. 67–83. https://doi.org/10.1007/978-981-15-2989-4_4

Shokri, Reza, Marco Stronati, Congzheng Song, and Vitaly Shmatikov (2016). Membership inference attacks against machine learning models. arXiv:1610.05820 [cs.CR] (18 October). https://doi.org/10.48550/ARXIV.1610.05820

Smith, Ember, and Richard V. Reeves (2020). SAT math scores mirror and maintain racial inequity. Brookings (blog), 1 December. https://www.brookings.edu/blog/up-front/2020/12/01/sat-math-scores-mirror-and-maintain-racial-inequity/

Soleimanmeigouni, Iman, Alireza Ahmadi, and Uday Kumar (2018). Track geometry degradation and maintenance modelling: A review. Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit, vol. 232, No. 1 (January), pp. 73–102. https://doi.org/10.1177/0954409716657849


Southwest Research Institute (2016). SwRI developing methane leak detection system for DOE. Southwest Research Institute, 11 October. https://www.swri.org/press-release/swri-developing-methane-leak-detection-system-doe

Spatharou, Angela, Solveigh Hieronimus, and Jonathan Jenkins (2020). Transforming healthcare with AI: The impact on the workforce and organizations. McKinsey, 10 March. https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/transforming-healthcare-with-ai

Sujith, A. V. L. N., Guna Sekhar Sajja, V. Mahalakshmi, Shibili Nuhmani, and B. Prasanalakshmi (2022). Systematic review of smart health monitoring using deep learning and artificial intelligence. Neuroscience Informatics, vol. 2, No. 3 (1 September). https://doi.org/10.1016/j.neuri.2021.100028

Sun, Li, and Fengqi You (2021). Machine learning and data-driven techniques for the control of smart power generation systems: An uncertainty handling perspective. Engineering, vol. 7, No. 9 (September), pp. 1239–47. https://doi.org/10.1016/j.eng.2021.04.020

Sun, Wenjuan, Paolo Bocchini, and Brian D. Davison (2020). Applications of artificial intelligence for disaster management. Natural Hazards, vol. 103, No. 3 (1 September), pp. 2631–89. https://doi.org/10.1007/s11069-020-04124-3

Sun, Yanshuo, Zhibin Jiang, Jinjing Gu, Min Zhou, Yeming Li, and Lei Zhang (2018). Analyzing high speed rail passengers' train choices based on new online booking data in China. Transportation Research Part C: Emerging Technologies, No. 97 (December), pp. 96–113. https://doi.org/10.1016/j.trc.2018.10.015

Suresh, Harini, and John V. Guttag (2021). A framework for understanding sources of harm throughout the machine learning life cycle. EAAMO '21: Equity and Access in Algorithms, Mechanisms, and Optimization, article no. 17, pp. 1–9. https://doi.org/10.1145/3465416.3483305

Taeihagh, Araz (2021). Governance of artificial intelligence. Policy and Society, vol. 40, No. 2, pp. 137–57. https://doi.org/10.1080/14494035.2021.1928377

Takahashi, Katsumi (2020). Social issues with digital twin computing. NTT Technical Review, vol. 18, No. 9 (September), pp. 36–39. https://www.ntt-review.jp/archive/ntttechnical.php?contents=ntr202009fa5.html

Tan, Jasper, Blake Mason, Hamid Javadi, and Richard G. Baraniuk (2022). Parameters or privacy: A provable tradeoff between overparameterization and membership inference. arXiv:2202.01243 [stat.ML] (2 February). https://arxiv.org/abs/2202.01243

Tanhaei, Bahareh, Ali Ayati, Manu Lahtinen, Behrooz Mahmoodzadeh Vaziri, and Mika Sillanpää (2016). A magnetic mesoporous chitosan based core-shells biopolymer for anionic dye adsorption: Kinetic and isothermal study and application of ANN. Journal of Applied Polymer Science, vol. 133, No. 22 (10 June). https://doi.org/10.1002/app.43466

Tavakoli, Reza, and Zeljko Pantic (2017). ANN-based algorithm for estimation and compensation of lateral misalignment in dynamic wireless power transfer systems for EV charging. In 2017 IEEE Energy Conversion Congress and Exposition (ECCE). Cincinnati, OH: IEEE, pp. 2602–9. https://doi.org/10.1109/ECCE.2017.8096493

Thakker, Dhavalkumar, Bhupesh Kumar Mishra, Amr Abdullatif, Suvodeep Mazumdar, and Sydney Simpson (2020). Explainable artificial intelligence for developing smart cities solutions. Smart Cities, vol. 3, No. 4 (December), pp. 1353–82. https://doi.org/10.3390/smartcities3040065

Trencher, Gregory, and Andrew Karvonen (2017). Stretching "smart": Advancing health and well-being through the smart city agenda. Local Environment, vol. 24, No. 7, pp. 610–27.

Tribby, Calvin P., Harvey J. Miller, Barbara B. Brown, Carol M. Werner, and Ken R. Smith (2017). Analyzing walking route choice through built environments using random forests and discrete choice techniques. Environment and Planning B: Urban Analytics and City Science, vol. 44, No. 6 (November), pp. 1145–67. https://doi.org/10.1177/0265813516659286

Trombetta, Paulo Henrique, Eriberto Nascimento, Elaine Carvalho da Paz, and Felipe do Valle (2018). Application of artificial neural networks for noise barrier optimization. Environments, vol. 5, No. 12 (10 December), p. 135. https://www.researchgate.net/publication/329543056_Application_of_Artificial_Neural_Networks_for_Noise_Barrier_Optimization

Tsapakis, Ioannis, and William H. Schneider (2015). Use of support vector machines to assign short-term counts to seasonal adjustment factor groups. Transportation Research Record: Journal of the Transportation Research Board, vol. 2527, No. 1 (January), pp. 8–17. https://doi.org/10.3141/2527-02

Ugwudike, Pamela (2022). AI audits for assessing design logics and building ethical systems: The case of predictive policing algorithms. AI and Ethics, No. 2 (1 February), pp. 198–208. https://doi.org/10.1007/s43681-021-00117-5

UNITAC (n.d.). Using AI to map informal settlements in eThekwini, South Africa. United Nations Innovation Technology Accelerator for Cities. https://unitac.un.org/news/unitac-x-ethekwini-using-ai-map-informal-settlements-ethekwini-south-africa

United Nations (2017). New urban agenda. https://habitat3.org/the-new-urban-agenda/

United Nations (2019). Risk management in public administration in the context of the sustainable development goals. In World Public Sector Report 2019. https://www.un-ilibrary.org/content/books/9789210041409

United Nations, Committee on Economic, Social and Cultural Rights (CESCR) (2000). General comment no. 14: The right to the highest attainable standard of health (art. 12 of the Covenant). https://www.refworld.org/docid/4538838d0.html

United Nations Educational, Scientific and Cultural Organization (UNESCO) (2020). AI and the rule of law: Capacity building for judicial systems. 4 November. https://en.unesco.org/artificial-intelligence/mooc-judges

UNESCO (2021). Recommendation on the ethics of artificial intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000380455


United Nations, UN-Habitat (2020). Centering people in smart cities: A playbook for local and regional governments. https://unhabitat.org/programme/people-centered-smart-cities/centering-people-in-smart-cities

United Nations, UN-Habitat (2021). A guide: Leveraging multi-level governance approaches to promote health equity.

University of Pretoria (2018). Artificial intelligence for Africa: An opportunity for growth, development, and democratisation. Access Partnership, 29 November.

Varshney, Kush R., and Homa Alemzadeh (2017). On the safety of machine learning: Cyber-physical systems, decision sciences, and data products. arXiv:1610.01256 [cs.CY] (22 August). https://doi.org/10.48550/arXiv.1610.01256

Vathoopan, Milan, Benjamin Brandenbourger, and Alois Zoitl (2016). A human in the loop corrective maintenance methodology using cross domain engineering data of mechatronic systems. In 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1–4. https://doi.org/10.1109/ETFA.2016.7733603

Venugopalan, Subhashini, and Varun Rai (2015). Topic based classification and pattern identification in patents. Technological Forecasting and Social Change, No. 94 (May), pp. 236–50. https://doi.org/10.1016/j.techfore.2014.10.006

Victor, David G. (2019). How artificial intelligence will affect the future of energy and climate. Brookings (blog), 10 January. https://www.brookings.edu/research/how-artificial-intelligence-will-affect-the-future-of-energy-and-climate/

Voigt, Stefan, Thomas Kemper, Torsten Riedlinger, Ralph Kiefl, Klaas Scholte, and Harald Mehl (2007). Satellite image analysis for disaster and crisis-management support. IEEE Transactions on Geoscience and Remote Sensing, vol. 45, No. 6 (June), pp. 1520–28. https://doi.org/10.1109/TGRS.2007.895830

Vosoughi, Soroush, Deb Roy, and Sinan Aral (2018). The spread of true and false news online. Science, vol. 359, No. 6380 (9 March), pp. 1146–51. https://doi.org/10.1126/science.aap9559

Voyant, Cyril, Gilles Notton, Soteris Kalogirou, Marie-Laure Nivet, Christophe Paoli, Fabrice Motte, and Alexis Fouilloy (2017). Machine learning methods for solar radiation forecasting: A review. Renewable Energy, No. 105 (May), pp. 569–82. https://doi.org/10.1016/j.renene.2016.12.095

Wan, Can, Jian Zhao, Yonghua Song, Zhao Xu, Jin Lin, and Zechun Hu (2015). Photovoltaic and solar power forecasting for smart grid energy management. CSEE Journal of Power and Energy Systems, vol. 1, No. 4 (December), pp. 38–46. https://doi.org/10.17775/CSEEJPES.2015.00046

Wan, Jiangwen, Yang Yu, Yinfeng Wu, Renjian Feng, and Ning Yu (2011). Hierarchical leak detection and localization method in natural gas pipeline monitoring sensor networks. Sensors, vol. 12, No. 1 (27 December), pp. 189–214. https://doi.org/10.3390/s120100189

Wang, Anna X., Caelin Tran, Nikhil Desai, David Lobell, and Stefano Ermon (2018). Deep transfer learning for crop yield prediction with remote sensing data. In Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies. COMPASS '18. New York: Association for Computing Machinery, pp. 1–5. https://doi.org/10.1145/3209811.3212707

Wang, Shuangyuan, Ran Li, Adrian Evans, and Furong Li (2019). Electric vehicle load disaggregation based on limited activation matching pursuits. Energy Procedia, No. 158 (February), pp. 2611–16. https://doi.org/10.1016/j.egypro.2019.02.011

Wang, Zheye, Nina S. N. Lam, Nick Obradovich, and Xinyue Ye (2019). Are vulnerable communities digitally left behind in social responses to natural disasters? An evidence from Hurricane Sandy with Twitter data. Applied Geography, No. 108, pp. 1–8. https://doi.org/10.1016/j.apgeog.2019.05.001

Watts, Nick, W. Neil Adger, Sonja Ayeb-Karlsson, Yuqi Bai, Peter Byass, Diarmid Campbell-Lendrum, Tim Colbourn, et al. (2017). The Lancet countdown: Tracking progress on health and climate change. The Lancet, vol. 389, No. 10074 (March), pp. 1151–64. https://doi.org/10.1016/S0140-6736(16)32124-9

World Economic Forum (WEF) (2020). Global gender gap report 2020. http://reports.weforum.org/global-gender-gap-report-2020/dataexplorer

World Health Organization (WHO) (2021). Ethics and governance of artificial intelligence for health. https://www.who.int/publications-detail-redirect/9789240029200

World Health Organization (WHO) and UN-Habitat (2016). Global report on urban health: Equitable healthier cities for sustainable development.

Wirtz, Bernd W., Jan C. Weyerer, and Benjamin J. Sturm (2020). The dark sides of artificial intelligence: An integrated AI governance framework for public administration. International Journal of Public Administration, vol. 43, No. 9 (3 July), pp. 818–29. https://doi.org/10.1080/01900692.2020.1749851

Wong, Zoie S. Y., Jiaqi Zhou, and Qingpeng Zhang (2019). Artificial intelligence for infectious disease big data analytics. Infection, Disease & Health, vol. 24, No. 1 (1 February), pp. 44–48. https://doi.org/10.1016/j.idh.2018.10.002

Wu, Carole-Jean, Ramya Raghavendra, Udit Gupta, Bilge Acun, Newsha Ardalani, Kiwan Maeng, Gloria Chang, et al. (2022). Sustainable AI: Environmental implications, challenges and opportunities. arXiv:2111.00364 [cs] (9 January). http://arxiv.org/abs/2111.00364

Wu, Cathy, Aboudy Kreidieh, Kanaad Parvate, Eugene Vinitsky, and Alexandre M. Bayen (2022). Flow: A modular learning framework for mixed autonomy traffic. IEEE Transactions on Robotics, vol. 38, No. 2 (April), pp. 1270–86. https://doi.org/10.1109/TRO.2021.3087314

Wu, Cathy, Aboudy Kreidieh, Eugene Vinitsky, and Alexandre M. Bayen (2017). Emergent behaviors in mixed-autonomy traffic. In Proceedings of the 1st Annual Conference on Robot Learning, Proceedings of Machine Learning Research (PMLR), No. 78, pp. 398–407. https://proceedings.mlr.press/v78/wu17a.html

Wynants, Laure, Ben Van Calster, Gary S. Collins, Richard D. Riley, Georg Heinze, Ewoud Schuit, Marc M. J. Bonten, et al. (2020). Prediction models for diagnosis and prognosis of COVID-19: Systematic review and critical appraisal. BMJ, No. 369 (7 April). https://doi.org/10.1136/bmj.m1328


Xie, Yanhua, Anthea Weng, and Qihao Weng (2015). Population estimation of urban residential communities using remotely sensed morphologic data. IEEE Geoscience and Remote Sensing Letters, vol. 12, No. 5 (May), pp. 1111–15. https://doi.org/10.1109/LGRS.2014.2385597

Xu, Huan, and Shie Mannor (2012). Robustness and generalization. Machine Learning, vol. 86, No. 3 (1 March), pp. 391–423. https://doi.org/10.1007/s10994-011-5268-1

Xu, Zhenxing, Chang Su, Yunyu Xiao, and Fei Wang (2021). Artificial intelligence for COVID-19: Battling the pandemic with computational intelligence. Intelligent Medicine, vol. 2, No. 1 (21 October). https://doi.org/10.1016/j.imed.2021.09.001

Yang, Lei, Xin Liu, Weiqiang Zhu, Liang Zhao, and Gregory C. Beroza (2022). Toward improved urban earthquake monitoring through deep-learning-based noise suppression. Science Advances, vol. 8, No. 15 (13 April). https://www.science.org/doi/10.1126/sciadv.abl3564

Yigitcanlar, Tan, Juan M. Corchado, Rashid Mehmood, Rita Yi Man Li, Karen Mossberger, and Kevin Desouza (2021). Responsible urban innovation with local government artificial intelligence (AI): A conceptual framework and research agenda. Journal of Open Innovation: Technology, Market, and Complexity, No. 7. https://www.mdpi.com/2199-8531/7/1/71/pdf

You, Jiaxuan, Xiaocheng Li, Melvin Low, David Lobell, and Stefano Ermon (2017). Deep Gaussian process for crop yield prediction based on remote sensing data. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. AAAI'17. San Francisco: AAAI Press, pp. 4559–65.

Yu, Jiafan, Zhecheng Wang, Arun Majumdar, and Ram Rajagopal (2018). DeepSolar: A machine learning framework to efficiently construct a solar deployment database in the United States. Joule, vol. 2, No. 12 (December), pp. 2605–17. https://doi.org/10.1016/j.joule.2018.11.021

Zaki, Mohamed H., and Tarek Sayed (2016). Automated cyclist data collection under high density conditions. IET Intelligent Transport Systems, vol. 10, No. 5 (June), pp. 361–69. https://doi.org/10.1049/iet-its.2014.0257

Zeng, Daniel, Zhidong Cao, and Daniel B. Neill (2021). Chapter 22 – Artificial intelligence-enabled public health surveillance—from local detection to global epidemic monitoring and control. In Artificial Intelligence in Medicine, Lei Xing, Maryellen L. Giger, and James K. Min, eds. Academic Press, pp. 437–53. https://doi.org/10.1016/B978-0-12-821259-2.00022-3

Zhang, Wenwen, Caleb Robinson, Subhrajit Guhathakurta, Venu M. Garikapati, Bistra Dilkina, Marilyn A. Brown, and Ram M. Pendyala (2018). Estimating residential energy consumption in metropolitan areas: A microsimulation approach. Energy, No. 155 (July), pp. 162–73. https://doi.org/10.1016/j.energy.2018.04.161

Zhang, Xiao, Gabriela Hug, J. Zico Kolter, and Iiro Harjunkoski (2016). Model predictive control of industrial loads and energy storage for demand response. In 2016 IEEE Power and Energy Society General Meeting (PESGM). Boston: IEEE, pp. 1–5. https://doi.org/10.1109/PESGM.2016.7741228

Zhang, Xinyi, Chengfang Fang, and Jie Shi (2021). Thief, beware of what get you there: Towards understanding model extraction attack. arXiv:2104.05921 [cs] (12 April). http://arxiv.org/abs/2104.05921

Zheng, Yongqing, Han Yu, Lizhen Cui, Chunyan Miao, Cyril Leung, and Qiang Yang (2018). SmartHS: An AI platform for improving government service provision. Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, No. 1 (27 April). https://doi.org/10.1609/aaai.v32i1.11382

Zhou, Li, and Margarita Sordo (2021). Chapter 5 – Expert systems in medicine. In Artificial Intelligence in Medicine, Lei Xing, Maryellen L. Giger, and James K. Min, eds. Academic Press, pp. 75–100. https://doi.org/10.1016/B978-0-12-821259-2.00005-3

Zissis, Georges (2019). The R3 concept: Reliability, robustness, and resilience [President's Message]. IEEE Industry Applications Magazine, vol. 25, No. 4, pp. 5–6. https://doi.org/10.1109/MIAS.2019.2909374

Zytek, Alexandra, Dongyu Liu, Rhema Vaithianathan, and Kalyan Veeramachaneni (2021). Understanding the usability challenges of machine learning in high-stakes decision making. arXiv:2103.02071 [cs.HC] (2 March). https://arxiv.org/abs/2103.02071
