
Future Generation Computer Systems 142 (2023) 4–13


Towards containerized, reuse-oriented AI deployment platforms for cognitive IoT applications

Tiago Veiga a, Hafiz Areeb Asad b, Frank Alexander Kraemer b,∗, Kerstin Bach a
a Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
b Department of Information Security and Communication Technology, Norwegian University of Science and Technology, Trondheim, Norway
∗ Corresponding author. E-mail address: [email protected] (F.A. Kraemer).

Article info

Article history: Received 12 July 2022; Received in revised form 30 October 2022; Accepted 22 December 2022; Available online 24 December 2022.
Keywords: Cognitive IoT; Self-adaptive IoT; Cognitive architecture; Container-based deployment.

Abstract

IoT applications with their resource-constrained sensor devices can benefit from adjusting their operations to the phenomena they sense and the environments they operate in, leading to the paradigm of self-adaptive, autonomous, or cognitive IoT. On the other side, current AI deployment platforms focus on the provision and reuse of machine learning models through containers that can be wired together to build new applications. The challenge is that the composition mechanisms of the AI platforms, albeit effective due to their simplicity, are in fact too simplistic to support cognitive IoT applications, in which sensor devices also benefit from the machine learning results. Our objective is to perform a gap analysis between the requirements of cognitive IoT applications on the one side and the current functionalities of AI deployment platforms on the other side. In this work, we provide an overview of the paradigms in AI deployment platforms and the requirements of cognitive IoT applications. We study a use case for person counting in a skiing area through camera sensors, and how this use case benefits from letting the IoT sensors have access to operational knowledge in the form of visual attention models. We describe the implementation of the IoT application using an AI deployment platform, and analyze its shortcomings and necessary workarounds. From the use case, we identify and generalize five gaps that limit the usage of deployment platforms: the transparent management of multiple instances of components, a more seamless integration with IoT devices, the explicit definition of data flow triggers, the availability of templates for cognitive IoT architectures, and reuse below the top level.

© 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1016/j.future.2022.12.029

1. Introduction

In a wide range of application domains, there is an increasing number of combined Internet-of-Things (IoT) / Artificial Intelligence (AI) applications in which decisions are made by machine learning models based on the data collected by sensor devices. In principle, such applications can be implemented using a rather simple architecture, in which the concerns between IoT and AI are well separated and follow a simple, uni-directional data flow: Sensors measure a phenomenon in the environment and transmit data into a database, from which it can be analyzed to support specific decisions. In this simple division, the functionality that supports sensor operations mainly regards managing their software and communication, to provide them with configurations, security or other updates and to ensure their proper operation [1]. On the other hand, the main challenge of the system related to AI is the correct and efficient deployment of AI models. Here we see the move towards containerized platforms, in which ready-made AI models can be instantiated. Virtualized environments in the form of containers make it possible to provide the correct version of the software stack for the specific models. This ensures that models run in the same environment in which they were developed, tested, and validated. Models and functions contributed by different containers can then be composed into more comprehensive data flows through connections between containers.

However, such a strict division between IoT and AI prevents systems from evolving towards higher efficiency and better performance: IoT sensors can greatly benefit from the ability to reason about their own operation and the environment they operate in, so that they can use their constrained resources strategically. This refers to the concept of self-adaptive [2,3], context-aware, or cognitive IoT [4,5]. With increasing computational power, sensor devices can take autonomous decisions to improve the system performance and actively improve their own information about the environment. For instance, they can better decide when and how to make measurements or which data to forward, to avoid spending effort on useless data with little impact on the utility for a user, also referred to as value of information [6]. Such behavior
requires some form of adaptation. Due to the scale and number of devices, this adaptation must happen autonomously, and, due to the heterogeneity of the environments of the devices, also for each sensor individually.

The cognitive abilities required for such adaptive behavior are often provided through mechanisms that need similar support for machine learning as the analysis of domain data. This implies more complex architectures that require more fine-grained management and a deeper integration with AI deployment platforms. Instead of only delivering data for analysis, sensors are then also the receivers of insights generated by AI. Examples are models of the environment, of the phenomena to sense, or of the usage patterns, all of which allow the sensor devices to plan their operations more strategically.

Such a more sophisticated coupling between IoT and AI, however, goes beyond what the simplistic composition mechanisms of current AI deployment platforms allow. In this paper we analyze the intersection between architectures for IoT cognitive models and the container-based composition and deployment used in AI platforms. We present the requirements of cognitive, self-adaptive IoT applications, illustrated by a case study. In particular, we study a use case where a visual sensor network with cameras is used to estimate the busyness and utilization of a skiing area. We formulate an ideal system model for a cognitive IoT solution following the reference model in [4], which emphasizes the decomposition into logical components (as opposed to only containers), and the identification of autonomous loops as well as explicit triggers. We then analyze and discuss how such solutions can be implemented following the simple paradigms of container-based AI composition platforms as much as possible. Here we experience that the simplicity that lies in the container composition paradigm used for AI deployments is a hurdle for such an integration, and identify gaps where a straightforward implementation of the cognitive IoT principles is not possible. We therefore also suggest how current AI composition platforms can be extended to support the more elaborate requirements of cognitive, self-adaptive IoT applications.

In the following, we provide an overview of related work in Section 2, and then present the principles of container-based AI solutions in Section 3. After that, we present our case study related to person counting in a skiing area in Section 4, for which we first develop a simple version to illustrate the principles of container-based deployment platforms. We then motivate the need for cognitive IoT applications in general in Section 5, and describe a specific cognitive variant of the use case in Section 6. We model this cognitive version in Section 7 using a reference model as starting point, and by that identify critical requirements for the implementation on a container-based platform, which we analyze in Section 8. In that section, we analyze the unfulfilled requirements, possible design alternatives and necessary workarounds, and propose extensions for future versions of such platforms.

2. Related work

The design of architectures for IoT networks shares many architectural goals with the development of service-oriented software architectures [7]. Both share the need for efficient communication between different components and facilitate the deployment of complex networks with simple building blocks. Therefore, the microservice approach was introduced to IoT, for instance for manufacturing systems [8] or for different tiers of IoT systems in general [9]. Microservices encapsulate specific, self-contained functions as loosely coupled components that communicate through message passing. Through the loose coupling, microservices support solutions based on virtualization, so that the different components can execute in different environments [10]. However, this approach does not take into account the possibility that different devices naturally have variable computational and energy resources.

Cognitive architectures [11], on the other hand, can adapt to the current conditions in the environment and drive adaptive behavior of the system. The core feature is that such architectural designs allow data to flow arbitrarily and not only unidirectionally. Therefore, systems can adapt to what is observed from the environment and pass updated action plans back to the devices interacting with the physical world. Several concrete examples can be found in the literature, such as architectures for healthcare coaching systems [12], smart grids [13] or the management of robotic recycling plants [14]. Later, a survey analyzed several approaches for cognitive models and proposed a general blueprint model [4].

IoT forms a three-tiered architecture, consisting of devices, edge computing resources and cloud computing [15]. In this setting, cognitive architectures can be subdivided into several components, with some of their features provided by AI services. In general, AI services can be added to an IoT network to provide additional features in the system. Examples include the integration of IoT and AI services for supply chain management [16], the implementation of edge–cloud architectures for AI services over 5G networks [17], or the management of cloud–edge orchestration [18]. Furthermore, while edge computing can improve the quality of experience for an IoT application [19], AI can improve the implementation of secure microservices on the edge [20]. Other approaches distribute the execution of machine learning, both inference and training, over several tiers of the architecture, also referred to as the edge-to-cloud continuum [21]. Some use containers for the execution of tasks [22,23], others distribute the computation of neural networks so that the individual layers are processed in different system tiers [24,25]. However, while these approaches clearly address the need for an integration of AI and IoT, they focus on the feasibility of the approaches and the handling of constrained resources, not on the composition or the aspects of reuse that are starting to mature for cloud-based solutions.

The deployment process for AI solutions is complex and, therefore, deployment platforms were introduced with the goal of automating the workflow, from solution design to orchestration. One further step is the creation of community-maintained catalogs, allowing practitioners to test and reuse different components for their specific solutions. The Acumos platform [26] offers such tools, although it was initially conceived for unidirectional machine learning pipelines. Later, the AI4EU project launched the AI4EU platform [27], which extends Acumos to allow, among others, cyclic topologies.

In this work we aim at bridging the gap between these subtopics. In particular, we notice a lack of analysis of the deployment process of cognitive architectures for IoT applications, which would benefit their generalization and easier deployment in different scenarios. At the same time, AI deployment platforms were not developed with the specific requirements of IoT applications, and cognitive models in particular, in mind.

3. Container-based AI deployment platforms

In this section, we provide an overview of the current status of container-based AI platforms and especially the principles they employ to make the deployment of AI pipelines less complex. Without specific support, deploying AI solutions such as machine learning models can be a difficult process, with challenges identified at every step of the deployment [28]. Deployment platforms make AI models easier to deploy in various practical settings by allowing users to create new solutions by wiring together reusable, off-the-shelf components. In this way, mature and tested models and processes are available for reuse and can be combined and adapted to fit into new settings and applications.
The AI4EU Experiments Platform [27], which is part of an EU project [29] and the basis for our case study, is built around two main components: the marketplace and the design studio. The marketplace is a repository for AI components and other support components, ranging from data sources to user interfaces. The design studio is a visual editor allowing users to connect different components from the marketplace catalog and create new solutions.

Inspired by developments in software engineering, the basis for offering reusable components are containers. A container is a virtualized environment that runs software instances in an environment isolated from its physical host server. This allows developers to encapsulate whatever algorithms and methods are served by a given micro-service as a self-contained environment, which means freedom to use any version of the base operating system, system packages, or coding framework. Thus, using containers avoids compatibility issues in case of system upgrades or when combining components that require different (and potentially incompatible) software stacks. Models can run in the exact same software stack they were originally developed, tested, and validated for.

In the AI4EU platform, a component consists of a pointer to a Docker container available in a public or private library, like for instance Docker Hub [30]. The respective micro-services and data interfaces are defined in the Protocol Buffers format [31].

The conceptual simplicity of connecting components, which sometimes only form linear pipelines, allows for simple graphical editing tools to design the data flows between components. This makes the deployment process easier for system designers, as these editing tools can be used to create solutions with limited knowledge about the technical details of each module. Based on the definitions of the components, the graphical editor allows the user to connect interfaces between components. The editor visually differentiates between input and output interfaces and generates similar symbols for similar interfaces, such that the user can identify them. In the AI4EU platform, the orchestration of the interactions between the containers is automatically generated based on the data flow between the containers, according to the wiring defined by the user in the design studio.

The modularity of these solutions increases the reusability of architectures. The whole pipeline or parts of it can be reused in different scenarios, for instance, experimenting with a pipeline of different AI modules or databases while maintaining the overall structure design. In case a solution cannot be realized by wiring existing components, users can also create their own components with arbitrary logic. Alternatively, the system designer can define the orchestration links programmatically or implement a custom orchestration module, both alternatives that require considerable programming expertise.

To sum up, AI deployment platforms, here represented by the AI4EU platform, allow to create AI solutions by offering components as self-contained environments through containers. These components can be combined using graphical editors.

4. Use case: Person counting in a skiing area

Our case study is a system that estimates the busyness of a skiing area. It processes camera images and counts the number of persons as a basis for the estimation. For this simple initial version, the design within the AI4EU editor is straightforward, with only a few components and connectors. Fig. 1 shows the first basic workflow of the system for a single camera instance. The pipeline begins with a data source component, which fetches the images published by the camera. Then, an image recognition component follows. This component is reused from the publicly available components in the AI4EU platform catalog. It encapsulates an instance of an off-the-shelf, reusable module for object detection that follows the You only look once approach (YOLO, [32]), which allows for the detection of several different objects in various positions in a single inference step. The model's output is a list of bounding boxes, illustrated in Fig. 2. The bounding boxes are annotated with the type of object detected and a confidence level. Finally, the image and the list of detections are passed to a dashboard component, which implements a web user interface to allow real-time visualization.

5. Towards self-adaptive, cognitive IoT

Before we transform our simple use case from above into a cognitive version, we first motivate why we need cognitive applications in IoT at all. To make the deployment of IoT devices cheaper or feasible at all, it is crucial that they are wireless, use a wireless protocol for communication, and are powered by batteries or preferably by energy harvesting to reduce the need for manual maintenance. It is also desirable to keep devices as compact as possible to reduce costs and make them less obtrusive, reducing the size of harvesters such as solar panels and energy buffers. Energy is a scarce resource that should be used strategically [33]. Other relevant constraints are the transmission over wireless channels, which may have restrictions on bandwidth.

In IoT, various techniques are employed to handle resources economically, such as low-power electronics, efficient communication protocols [34], new types of energy buffers [35], and energy harvesting [36]. Another class of approaches tries to maximize the utility of the system to the user by minimizing the effort spent on data that is not significant. These can be summarized by the concept of value of information [6]. For the control software in devices, this means selecting more carefully when and where to make measurements, and which data to process, transmit or store.

So, in general, the idea is to only use resources on data that is relevant for the system's utility. Such knowledge about the value of information is often highly specific to the application goals, the phenomena to sense, the sensor's environment, and the specific construction of the device. The knowledge may hence vary for each individual sensor device, and can vary over time as the environment and the sensed phenomena can be non-stationary. In general, this means that IoT sensors themselves benefit from learning processes, so that they can control their operation more optimally and make tradeoffs between, for instance, energy consumption and utility to the user to optimize overall operation. They may also be able to degrade their level of service gracefully, or manage to maintain a minimum service level [37] when resources get scarce.

Many of the reasoning techniques for these systems involve machine learning [4]. For the connection between IoT and AI this means that the IoT devices of a system are not only used as a source of data that is analyzed and acted upon through AI, but that AI solutions also provide feedback to the IoT devices to learn about their own properties and the environment they are deployed in, with the aim to optimally control their own operation. Systems hence evolve from a uni-directional, simplistic pipeline into more general cognitive models with learning and reasoning processes, and the ability to adapt.

In the following, we focus on the effect of such a paradigm shift in IoT on system development, with the containerized platforms that support current AI solutions as the starting point, and ask how they can be extended so that they fulfill the requirements of cognitive IoT.

Fig. 1. Screenshot of the solution designed in the AI4EU platform for a unidirectional pipeline for the simplified use case without any self-adaptation or feedback to
the sensor devices.
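
On the platform, each of these pipeline stages is a separate container whose micro-service and message types are defined in Protocol Buffers (Section 3). As a rough illustration of the resulting data flow, the following Python sketch mirrors the three stages of Fig. 1 as plain functions; all names, message fields and return values are hypothetical placeholders and only stand in for the containerized components.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Detection:
        label: str         # e.g. "person", as reported by the object detector
        confidence: float  # confidence level of the detection
        box: tuple         # bounding box (x, y, width, height) in pixels

    def camera_data_source() -> bytes:
        """Stands in for the data source container that fetches the latest camera image."""
        return b""  # placeholder; a real component would fetch the published image

    def detect_objects(image: bytes) -> List[Detection]:
        """Stands in for the reused YOLO container; returns a list of bounding boxes."""
        return []   # placeholder; a real component would run the YOLO model

    def update_dashboard(image: bytes, detections: List[Detection]) -> None:
        """Stands in for the dashboard container with its web user interface."""
        persons = [d for d in detections if d.label == "person"]
        print(f"{len(persons)} persons currently detected")

    # The wiring below corresponds to the connections drawn in the design studio.
    image = camera_data_source()
    update_dashboard(image, detect_objects(image))
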

Fig. 4. Snapshots of the visual attention model of one camera over time (day
to day). This shows how sensor devices can acquire knowledge over time that
they can exploit to reduce image transmissions.

6. Use case: Self-adaptive, cognitive version

Fig. 2. Two persons detected by the YOLO image recognition model with corresponding confidence.

To enable the sensors to better adapt to constrained resources, we will extend the system into a cognitive, self-adaptive IoT system in which sensors can benefit from the system's feedback to adjust their operation. Hence, AI functions also integrate directly with the operation of the system. For visual sensing networks,
there exists a variety of techniques to make operations more
efficient. In the following, we focus on a concept referred to as
visual attention [38]. When revisiting Fig. 2, we see several parts of the image where it is unlikely to observe persons, such as the sky. When transmitting an image in constrained situations, sensor devices could hence drop the transmission of those parts of an image that are less likely to show persons. Compared to always sending the complete image and then running out of energy, this solution would let the system maintain its utility and still count the number of persons. We therefore partition an image into a set of N = 64 tiles. With V[t], t ∈ {1, ..., N}, we store a visual attention model. The higher the value for a tile, the more likely it is that persons appear in this part of the image. The logic then consists of two parts (a simplified sketch follows below):

• On the server, where the object detection runs using the YOLO model just as before, we also create a visual attention model for each camera. This happens by analyzing the positions of the bounding boxes delivered by the object detection and periodically updating the attention model. The server regularly transmits the attention model to the sensor device.
• On the sensor device, we separate the image into tiles and select which tiles to transmit to the server using a selection policy. This policy determines the number of tiles to send using a planning algorithm, depending on the device's current energy budget, and then transmits only those tiles deemed valuable by the attention model.
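
The following Python sketch illustrates both parts of this logic under simplifying assumptions (an 8×8 tile grid, a count-based attention model, and top-k tile selection); it is not the implementation published in [39], and all thresholds and names are illustrative.

    import numpy as np

    N_SIDE = 8  # the image is partitioned into N = 64 tiles (8 x 8 grid)

    # Server side: update the visual attention model from the YOLO bounding boxes.
    def update_attention(attention, boxes, img_w, img_h):
        """Increase the value of every tile that overlaps a detected person's bounding box."""
        tile_w, tile_h = img_w / N_SIDE, img_h / N_SIDE
        for (x, y, w, h) in boxes:  # bounding boxes in pixel coordinates
            c0, c1 = int(x // tile_w), int(min(x + w, img_w - 1) // tile_w)
            r0, r1 = int(y // tile_h), int(min(y + h, img_h - 1) // tile_h)
            attention[r0:r1 + 1, c0:c1 + 1] += 1
        return attention

    # Device side: select which tiles to transmit, given the current energy budget.
    def select_tiles(attention, energy_budget):
        """Pick the top-k tiles according to the attention model; k depends on the budget."""
        k = max(1, int(attention.size * min(energy_budget, 1.0)))   # fraction of tiles to send
        order = np.argsort(attention.flatten())[::-1][:k]           # most promising tiles first
        return [(int(i) // N_SIDE, int(i) % N_SIDE) for i in order]  # (row, col) tiles to send

    attention = np.zeros((N_SIDE, N_SIDE))
    attention = update_attention(attention, [(120, 300, 60, 140)], img_w=640, img_h=480)
    print(select_tiles(attention, energy_budget=0.3))  # tiles the device would transmit
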
For this to work, the server receives the transmitted tiles, stitches
them together with a background image received earlier, and
then performs the object detection on these reduced tile sets.
The details of the algorithms for the computation of the attention
models and the policies are published in [39].
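
As a rough illustration of this reconstruction step (again a sketch with assumed tile geometry, not the implementation from [39]), the server-side stitching can be expressed as follows:

    import numpy as np

    def stitch_tiles(background, tiles, n_side=8):
        """Paste the transmitted tiles onto the stored background image.

        `tiles` maps (row, col) tile coordinates to image patches of matching size;
        the reconstructed image is then passed on to the object detection."""
        image = background.copy()
        tile_h = background.shape[0] // n_side
        tile_w = background.shape[1] // n_side
        for (row, col), patch in tiles.items():
            image[row * tile_h:(row + 1) * tile_h, col * tile_w:(col + 1) * tile_w] = patch
        return image
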
Fig. 3 shows five cameras and their corresponding attention
models as heatmaps to the right. The cells in the heatmap with
darker coloring correspond to those areas where the detection of
persons is more likely. Fig. 4 shows the visual attention map of
one camera as it develops over time, from day to day.
Fig. 3. Camera views together with their visual attention models, calculated over the entire range of images captured. The darker certain cells in the attention model are, the more often persons have appeared and therefore should be prioritized during transmission.

To evaluate our approach, we simulated five different policies for each device, which vary in the number of tiles transmitted. The fewer tiles transmitted, the lower the usage of energy resources. Fig. 5 shows the results of the different simulation runs.

Fig. 5. Percentages of missed person detections (vertical axis) for each camera for the five tile percentage policies (horizontal axis). Results show the visual attention models (solid lines) and random policies (dashed lines). On average, using the attention models reduces the number of undetected persons by 55%.

The horizontal axis shows different percentages of tiles sent, from 100% (all tiles) down to only 20% of the tiles. The vertical axis shows the percentage of persons that remained undetected. Since we took the number of detected persons in the complete images as ground truth, the error, and hence the number of undetected persons, is 0 for the policy transmitting 100% of the tiles. The solid lines represent the policies that make use of the visual attention models. We see that the percentages of undetected persons rise as we transmit fewer tiles, which is not unexpected.

When comparing the policies that use the visual attention models (solid lines) with the random policies (dashed lines), we see that the ones using visual attention models perform better and benefit from the cognitive architecture and self-adaptive model. The number of undetected persons using the random tile selection is much higher. When we average over all cameras and transmission levels, we observe that the visual attention models reduce the number of undetected persons by 55% [39]. The exact tradeoff between energy usage and accuracy through the selection of transmission levels is not in focus here, but the error reduction compared to random policies shows the potential of a cognitive, self-adaptive approach to IoT.

7. Cognitive IoT model

We will now develop the implementation of the cognitive version of the use case as motivated and outlined above. It serves as a source for the requirements that should be supported by containerized AI deployment platforms. The self-adaptive, cognitive version of the application turns out to be more complex than the basic version from Section 4 and Fig. 1, as it contains more data flows, includes feedback loops, and also needs to learn, maintain and distribute the visual attention models.

We approach its implementation following the general cognitive model for IoT applications introduced by Braten et al. [4]. This model generalizes patterns for adaptive behavior commonly found in the architectures surveyed in the literature on autonomic computing and self-adaptive systems. Since the reference architecture was extracted from case studies in the literature, it applies to a wide range of systems, which means that the requirements we formulate based on the cognitive version of our use case should also generalize well to other applications.

The main elements of the reference model are components, loops, and triggers. Components determine the locus of computation and encapsulation and are relevant for organization and potential for reuse, the loops organize the data flow through the system, and the triggers determine when data flows are dispatched and computation happens. Fig. 6 illustrates the design of this cognitive architecture, which we describe in the following.

Fig. 6. Diagram with the proposed cognitive architecture for the camera deployment use case, following the reference architecture in [4]. Each device instance (left) is represented by its own device manager instance (right). The data flows between the logical components are organized by three main loops for autonomous behavior, learning and adaptation.

7.1. Component structure

Components are classified according to their function in the cognitive model. They can encapsulate declarative (DK) or procedural knowledge (PK), acquire or measure data and hence be responsible for perception (P), execute actions (A), or be part of the adaptation process (AP), following the taxonomy of autonomous systems.

The sensor devices (to the left in Fig. 6) are structured into two components. Perceive is responsible for acquiring images and their subsequent processing. Execute is responsible for interpreting the visual attention model and deciding when, how many and which tiles to send, as described in Section 6.

To the right are the device managers, usually executed on a cloud platform. Here, each sensor device instance has its own instance of a manager. The (partial) images sent by the device are stored as part of the device-specific knowledge component. A monitor device examines the incoming data and dispatches it to the learning process for the visual attention models. The image recognition task is implemented within this component. These models are handed over to the analytical component that predicts device operation and to the subsequent planning component. Currently, the visual attention models are the only information provided to the devices. More advanced solutions that apply planning to optimize for the available energy can improve the performance and autonomy of the IoT devices further. The component Learn observes the learning process of the visual attention maps.

7.2. Loops for autonomous behavior, learning and adaptation

The reference model identifies three different types of loops that describe data flows with different purposes, to support (1) autonomous behavior, (2) learning, and (3) adaptation.

• Autonomic loops are the control mechanisms that allow devices, to some degree, to take autonomous decisions independent of external information. In Fig. 6 this is loop L1. This loop is contained within the device and consists of the data flow between the Perceive and Execute components. The loose coupling between the device manager and the device allows the device some degree of autonomy, and it can operate even if the device manager fails to send a refreshed visual attention model.
• Learning loops control the update of the system's knowledge about the environment and the state of its devices. In Fig. 6 this is loop L2. It controls the learning process for the visual attention model and is contained within the server. The object detection container acts as a monitor that observes the sensed event (a reconstructed image) and returns new knowledge. Other learning processes can occur in parallel, for example a temporal model of the presence of persons for each hour of each weekday.
• Adaptive loops control reasoning mechanisms and ensure that the devices respond well to changes in the environment or, in other words, that the plans followed by the autonomic loop adapt to the observations perceived from the environment. In our application in Fig. 6, L3 is an adaptive loop that controls the transfer of an updated visual attention model to the device. It is triggered by a sensing event on the device, which is communicated to the manager which,
in turn, can decide to transfer an updated model after the learning process back to the device. This corresponds to an update of the action plan for the device and is an adaptation of the device operation.

Each loop follows the data flow interaction between different sub-parts of the network. In particular, L1 is contained within the device, L2 is within the cloud platform, and L3 involves communication between the device and the device manager in the cloud.

7.3. Triggers

Tasks such as learning are often resource-intensive and should only be dispatched when necessary. The best practices in [4] therefore recommend making triggers explicit. We identify two critical triggers in the architecture:

• T1 triggers the learning process. When sufficient new data is available, the learning component triggers the exchange of this information to the knowledge component to update the visual attention model. Here the formulation of the trigger influences the tradeoff between computational costs and the currentness of the visual attention model. Models can be updated with each received image, or it may be sufficient to only update the attention model once a day.
• T2 triggers the process of transmitting an updated visual attention model to the device. It can happen either at every update or when the planning component detects that the model is significantly different from the previously transmitted version.

We should note that the original reference model in [4], in addition to the device managers, also contains elements for adaptive behavior at the system level, for concerns relevant to all devices. We left these out for brevity, as they do not change the fundamental requirements we want to discuss here.
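
To make such explicit triggers concrete, the sketch below expresses T1 and T2 as rate-limited conditions with minimum and maximum firing intervals (anticipating the discussion in Section 8.3); the class, thresholds and intervals are illustrative assumptions and are not part of the platform or the reference model.

    import time

    class Trigger:
        """A simple explicit trigger with minimum and maximum firing intervals."""

        def __init__(self, condition, min_interval_s, max_interval_s):
            self.condition = condition            # callable deciding whether to fire on new data
            self.min_interval_s = min_interval_s  # never fire more often than this
            self.max_interval_s = max_interval_s  # fire at least once per this interval
            self.last_fired = 0.0

        def should_fire(self, *args):
            now = time.time()
            if now - self.last_fired < self.min_interval_s:
                return False
            if now - self.last_fired >= self.max_interval_s or self.condition(*args):
                self.last_fired = now
                return True
            return False

    # T1: retrain the attention model once enough new images have arrived (assumed threshold).
    t1 = Trigger(lambda new_images: new_images >= 50, min_interval_s=3600, max_interval_s=86400)

    # T2: push the model to the device when it differs enough from the last transmitted version.
    t2 = Trigger(lambda model_delta: model_delta > 0.1, min_interval_s=600, max_interval_s=86400)
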
8. Container-based implementation

We now proceed with the implementation of the cognitive, self-adaptive model for the IoT application of Fig. 6. Of course, we want to realize as many as possible of the benefits that come with the principles of reuse-oriented assembly of virtualized containers (described in Section 3), so that more complex IoT applications can be easily built and deployed. This means in particular creating data pipelines among several containers, reusing existing containers whenever possible, and facilitating the orchestration of the system.

However, we experienced that the current platforms lack features, which prevents a straightforward deployment of our modular cognitive architecture. In this section, we discuss the main gaps and missing features, analyze our workarounds, and propose improvements for the next generation of deployment platforms.

Fig. 7. Container structure for the deployed solution. Sensor devices (left) are not part of the deployment platform. Each sensor is represented by its own device manager container. Containers for the YOLO object detection and database are directly reused from the AI4EU catalog.

Fig. 7 shows the container structure of the solution built for the deployment platform. To the left are the sensor devices, which are not part of the containerized platform. The logical components within the device manager of Fig. 6 are mapped into corresponding manager containers. These manager containers act as orchestrators that connect the external devices with the other modules in the cloud network. We implemented one container instance per device instance. Three more containers provide additional functionality:

• YOLO Object Detection processes the images and finds the bounding boxes of persons. Like in the simplified version of the system in Fig. 1, this container was directly reused from the AI4EU catalog.
• Database implements and manages a MongoDB database that allows the system to look up previous information and an external user to inspect the database if necessary. This component could also be reused directly.
• Dashboard exposes a web interface with information about the current status of the system for an external user, which includes the current status of the detection system with a print of the current image, with detected objects highlighted, and the current attention map learnt by the system.

As indicated above, a direct deployment of this solution on the platform is impossible, as it lacks support for some features required by the model in Fig. 6. We instead created and configured an orchestration solution through manual programming, circumventing some of the constraints of the deployment platform and its visual editor. The main unfulfilled requirements we identified are:
R1 The platform currently only offers single instances of components and does not offer a mechanism for the management of multiple instances. This is necessary for device managers, for instance, as they ideally exist as one instance per sensor device instance. (Section 8.1)
R2 The visual editor does not have explicit mechanisms to include external devices, but only covers the deployment in the cloud network. Instead, it should offer the possibility to also include a representation of the devices. (Section 8.2)
R3 It is not possible to define triggers, as identified in the cognitive model, inside the editor. Instead, these should be modeling elements within the editor, as they represent critical concepts for the execution. (Section 8.3)
R4 The reuse of containers is only available at the top level. This considerably limits the flexibility for structuring solutions and hampers reuse. In addition, it should be possible to edit hierarchical designs and introduce reuse also for sub-components. (Section 8.4)
R5 It should be possible to also allow the reuse of architecture templates. (Section 8.5)

Table 1 provides an overview of the requirements and the potential design alternatives. Since these requirements are not specific to our use case but are relevant for other cognitive IoT applications as well, we discuss in the following the design options, how we solved the requirements by workarounds, and how the deployment platforms can support these requirements in future versions.

Table 1. Overview of the requirements for a cognitive version of the AI platform, together with design alternatives (if available) and design choices.

R1: Container Instance Management — We need to manage multiple instances of a component, for instance to offer one manager component instance per device.
  Design alternatives: A1.1: One container instance per device. A1.2: A single container for all device instances. A1.3: A clustered solution.
  Design choice: We selected A1.1 as a workaround, since A1.2 and A1.3 are not supported. For future versions of the platform, A1.3 would offer the best solution for developers. (Section 8.1)

R2: Closer Integration with Devices — External devices should be represented in the graphical editor.
  Design alternatives: A2.1: Contain device nodes in the visual editor. A2.2: Add generic interfaces for incoming data from devices.
  Design choice: We selected A2.2 as a workaround, with a special component as interface to the devices. (Section 8.2)

R3: Explicit and Expressive Triggers — We need to define triggers explicitly.
  Design alternatives: –
  Design choice: As a workaround, we implemented triggers as part of the manually written logic, not visible in the graphical editor. (Section 8.3)

R4: Reuse Below the Top-Level — Reuse of containers also at the sub-level, not only at the top level.
  Design alternatives: –
  Design choice: As a workaround, we placed the container for image detection at the top level and routed communication from the device managers to it, instead of including it inside the device manager. (Section 8.4)

R5: Architecture Templates — It should be possible to also reuse templates of architectures.
  Design alternatives: –
  Design choice: As a workaround, we constructed our model from scratch, without reusing any template. (Section 8.5)

8.1. R1: Container instance management

Managing many devices in a single model is a central aspect of cognitive architectures. In particular, there is device-specific knowledge that needs to be maintained, updated, and pushed down to the respective device. A manager component should therefore represent each physical device [4]. It should also be possible to create and destroy such instances on demand, as new physical sensor devices can join or leave the system during runtime.

In principle, there are three alternatives for the implementation of device manager instances: (A1.1) one container per device instance; (A1.2) a single container handling all device instances; or (A1.3) a clustered solution with several containers, each handling a cluster of devices.

We used the first option as a workaround and created one container instance per physical device, since this resulted, under the given restrictions, in the best overview for the model. For that, we configured access ports to each container instance to ensure that each device connects to its respective manager. For our prototypical system with few cameras, this is a viable solution, but for systems with a high number of device instances this may require too many resources. Containers are meant to provide independent execution environments, and providing one execution environment for each sensor device instance is not necessary, since they will probably require the same software stack anyway. Hence, options (A1.2) and (A1.3) are more suitable. In these cases, the data flows between containers need to also carry the device ID, so that the containers implementing device managers for several device instances can tell them apart.

Ideally, the management of several instances should be transparent to the developer, as the required logic adds a lot of complexity without adding any application-specific value. Such a transparent management of device manager instances could also be an opportunity to perform load balancing; the deployment platform could not only create the instances when they are required, but also decide where to execute them.
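
As a minimal sketch of alternative (A1.2), a single manager container can serve several devices by keying all of its state on a device ID carried in the data flow; the message format, threshold and method names below are assumptions for illustration only.

    from collections import defaultdict

    class DeviceManager:
        """One container handling all device instances, distinguished by device ID (A1.2)."""

        def __init__(self):
            self.attention_models = {}               # device_id -> latest visual attention model
            self.pending_images = defaultdict(list)  # device_id -> data awaiting the learning step

        def on_message(self, message):
            device_id = message["device_id"]         # the data flow must carry the device ID
            self.pending_images[device_id].append(message["tiles"])
            # monitor, learn and plan per device, then decide whether to push an update
            if self.should_update(device_id):
                self.send_attention_model(device_id, self.attention_models.get(device_id))

        def should_update(self, device_id):
            return len(self.pending_images[device_id]) >= 50  # assumed threshold

        def send_attention_model(self, device_id, model):
            print(f"pushing updated attention model to device {device_id}")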

8.2. R2: Closer integration with devices

Another core aspect of IoT architectures is the seamless integration of the data exchanged between sensor devices and managing components. Therefore, it is desirable that the graphical platform editor offers functionalities to easily deploy solutions that cover both the cloud network and the data exchange with external devices, without requiring the system designer to manually create components that implement this interface for each use case.

One option (A2.1) is to explicitly include the device nodes in the visual editor. For system designers, it would appear like constructing a single, coherent architecture, with the possibility to indicate where each component is intended to run. In this alternative, it needs to be considered that most IoT devices cannot run containers (due to their limited processing power) and that a single configuration output would not be sufficient. Therefore, separate configuration files and scripts could be automatically produced by the platform for each of the network's physical components, separating code for devices from code for containers running on servers.

Another option (A2.2) is to offer a generic interface for incoming data that can act as a data source module. Currently, one can encapsulate data fetching scripts for specific scenarios and include those as specific components for particular use cases, similar to the container Camera Data Source we used in Fig. 1. This solution is neither reusable nor generic; therefore, we should aim for a more general functionality that can support the most used IoT data platforms. That way, users could have a generic interface to interact with different data sources and, possibly, with data incoming from different data platforms. Since solution (A2.1) requires considerable extensions of the platform, we have here opted for (A2.2).

8.3. R3: Explicit and expressive triggers

As explained in Section 7.3, correct triggering is crucial for the system's efficient operation, as triggers define when data flows, computation and communication are scheduled. When triggers execute too often, computational resources may be wasted on data with no or minor changes, and when triggers are executed not often enough, information may be outdated. Moreover, in cognitive systems, triggers are not necessarily tied exclusively to events from the environment, but can also depend on the state of knowledge or on the recognition of trends or other situations. Triggers are therefore an important design element in a cognitive system that deserves careful consideration.

A critical trigger in our use case is the retraining of the visual attention model. In our implementation, we update the model with every reception of a new image, which is simple but also implies unnecessary computational effort and hence energy. A better solution would be to update the model less frequently, for instance, once a day. We should hence be able to define more expressive triggers, for instance by defining minimum and maximum frequencies, depending on learning progress. Triggers should be able to take external events into account but also be independent from them if the environment suddenly does not issue such events anymore.

As a workaround, we implemented the triggers as part of the manually written containers. However, this solution is neither generic nor flexible, as it is hard-coded and hidden inside containers. We therefore suggest that the visual editor allow users to directly define triggers that can be connected to data flows and interact with the components.

8.4. R4: Reuse below the top level

In the deployment process of an architecture there are typically generic and specific components. Generic components can be reused in different pipelines, often across users and projects, ranging from data management (e.g., data collators that merge incoming data from different containers into a single interface) to data analysis services (e.g., containers encapsulating object detection algorithms which are provided as a service by the respective container interface). In contrast, specific components are tailored and need to be built from scratch for the specific needs of each use case (e.g., dashboard containers that show specific information from the system required by a client).

In the current version of the AI4EU platform, reuse can only happen at the level of containers and only at the top level of the system. Therefore, the YOLO container for object detection is currently placed at the top level, though it would be better placed within each manager instance. As a workaround, we placed it at the top level and routed the communication from inside the device manager to it, instead of keeping this internal to the device manager.

8.5. R5: Architecture templates

In addition to the possibility of reusing containers at several levels, we suggest the possibility of offering templates for cognitive architectures. This works as a compromise between the specific needs of each component and the generic nature of the reference model, and as a means to better categorize the containers available in the public catalogs. In contrast to a container that can be reused, a template would allow some components to be open slots, into which specific components are placed once the template is instantiated. It would hence encapsulate interaction patterns between components and be useful to document architecture traits that go beyond single cohesive components.

9. Conclusions

We analyzed the current status of deploying solutions for cognitive, self-adaptive IoT applications, in which AI is used not only to analyze domain data but also aspects of the IoT devices, to facilitate optimized operations through adaptation to the environment. Our work shows the effectiveness of the cognitive architecture for a specific use case, namely the detection and counting of persons in a skiing area through cameras. The operation is made more efficient using visual attention models that were learned.

AI platforms allow the user to create pipelines whose components are based on containers reusing proven AI solutions. This makes them attractive for such cognitive IoT solutions. However, our experience in this use case also shows that the current deployment platforms lack some flexible and usable features to support these cognitive architectures fully. The main gaps we identified are related to managing multiple instances of the same container, integrating IoT devices in the deployment process, including explicit triggers while creating solutions in the platforms' editing tools, and the availability of template architectures in the platforms. For each of them, we analyze why they are important features and discuss possible suggestions for future developments of the platforms.

Deployment platforms have a high potential to facilitate the deployment process for practitioners and simplify the adoption of new solutions with integrated AI components. In particular, they easily integrate most of the core features of IoT cognitive architectures. Our analysis can help drive the development of these platforms, allowing them to cover a broader range of solutions. These developments should be made general, so that they become a default offer of the platforms and then generalize to each specific use case.

CRediT authorship contribution statement

Tiago Veiga: Conceptualization, Methodology, Software, Validation, Writing – original draft, Writing – review & editing. Hafiz Areeb Asad: Validation, Visualization, Formal analysis, Writing – original draft, Writing – review & editing. Frank Alexander Kraemer: Conceptualization, Writing – original draft, Supervision, Writing – review & editing. Kerstin Bach: Conceptualization, Supervision, Writing – review & editing.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Funding

This work was partially funded by the European Union's Horizon 2020 research and innovation program, project AI4EU, grant agreement No. 825619.

Data availability

Data will be made available on request.

References

[1] S. Sinche, D. Raposo, N. Armando, A. Rodrigues, F. Boavida, V. Pereira, J.S. Silva, A survey of IoT management protocols and frameworks, IEEE Commun. Surv. Tutor. 22 (2) (2020) 1168–1190, http://dx.doi.org/10.1109/COMST.2019.2943087.
[2] H. Muccini, M. Sharaf, D. Weyns, Self-adaptation for cyber-physical systems: a systematic literature review, in: Proceedings of the 11th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, 2016, pp. 75–81.
[3] I. Alfonso, K. Garcés, H. Castro, J. Cabot, Self-adaptive architectures in IoT systems: a systematic literature review, J. Internet Serv. Appl. 12 (1) (2021) 1–28.
[4] A.E. Braten, F.A. Kraemer, D. Palma, Autonomous IoT device management systems: Structured review and generalized cognitive model, IEEE Internet Things J. 8 (6) (2021) 4275–4290, http://dx.doi.org/10.1109/JIOT.2020.3035389.
[5] B. Athamena, Z. Houhamdi, Cognitive and autonomic IoT system design, in: 2021 Eighth International Conference on Software Defined Systems, SDS, 2021, pp. 1–7, http://dx.doi.org/10.1109/SDS54264.2021.9732121.
[6] F. Alawad, F.A. Kraemer, Value of information in wireless sensor network applications and the IoT: A review, IEEE Sens. J. 22 (10) (2022) 9228–9245, http://dx.doi.org/10.1109/jsen.2022.3165946.
[7] B. Butzin, F. Golatowski, D. Timmermann, Microservices approach for the internet of things, in: 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation, ETFA, 2016, pp. 1–6, http://dx.doi.org/10.1109/ETFA.2016.7733707.
[8] K. Thramboulidis, D.C. Vachtsevanou, A. Solanos, Cyber-physical microservices: An IoT-based framework for manufacturing systems, in: 2018 IEEE Industrial Cyber-Physical Systems, ICPS, 2018, pp. 232–239, http://dx.doi.org/10.1109/ICPHYS.2018.8387665.
[9] C.J.L. de Santana, B. de Mello Alencar, C.V.S. Prazeres, Reactive microservices for the internet of things: A case study in Fog computing, in: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, SAC '19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 1243–1251, http://dx.doi.org/10.1145/3297280.3297402.
[10] M. Alam, J. Rufino, J. Ferreira, S.H. Ahmed, N. Shah, Y. Chen, Orchestration of microservices for IoT using docker and edge computing, IEEE Commun. Mag. 56 (9) (2018) 118–123, http://dx.doi.org/10.1109/MCOM.2018.1701233.
[11] C. Savaglio, G. Fortino, Autonomic and cognitive architectures for the internet of things, in: G. Di Fatta, G. Fortino, W. Li, M. Pathan, F. Stahl, A. Guerrieri (Eds.), Internet and Distributed Computing Systems, Springer International Publishing, Cham, 2015, pp. 39–47.
[12] A. Amato, A. Coronato, An IoT-aware architecture for smart healthcare coaching systems, in: 2017 IEEE 31st International Conference on Advanced Information Networking and Applications, AINA, 2017, pp. 1027–1034, http://dx.doi.org/10.1109/AINA.2017.128.
[13] Y.C. Pranaya, M.N. Himarish, M.N. Baig, M.R. Ahmed, Cognitive architecture based smart grids for smart cities, in: 2017 3rd International Conference on Power Generation Systems and Renewable Energy Technologies, PGSRET, 2017, pp. 44–49, http://dx.doi.org/10.1109/PGSRET.2017.8251799.
[14] O.G. Rosado, P.F.M.J. Verschure, Distributed adaptive control: An ideal cognitive architecture candidate for managing a robotic recycling plant, in: V. Vouloutsi, A. Mura, F. Tauber, T. Speck, T.J. Prescott, P.F.M.J. Verschure (Eds.), Biomimetic and Biohybrid Systems, Springer International Publishing, Cham, 2020, pp. 153–164.
[15] J. Zhang, D. Tao, Empowering things with intelligence: A survey of the progress, challenges, and opportunities in artificial intelligence of things, IEEE Internet Things J. 8 (10) (2021) 7789–7817, http://dx.doi.org/10.1109/jiot.2020.3039359.
[16] G. Kousiouris, S. Tsarsitalidis, E. Psomakelis, S. Koloniaris, C. Bardaki, K. Tserpes, M. Nikolaidou, D. Anagnostopoulos, A microservice-based framework for integrating IoT management platforms, semantic and AI services for supply chain management, ICT Express 5 (2) (2019) 141–145, http://dx.doi.org/10.1016/j.icte.2019.04.002.
[17] G. Myoung Lee, T.-W. Um, J.K. Choi, AI as a microservice (AIMS) over 5G networks, in: 2018 ITU Kaleidoscope: Machine Learning for a 5G Future, ITU K, 2018, pp. 1–7, http://dx.doi.org/10.23919/ITU-WT.2018.8597704.
[18] Y. Wu, Cloud-edge orchestration for the internet of things: Architecture and AI-powered data processing, IEEE Internet Things J. 8 (16) (2021) 12792–12805, http://dx.doi.org/10.1109/JIOT.2020.3014845.
[19] G. Premsankar, M. Di Francesco, T. Taleb, Edge computing for the internet of things: A case study, IEEE Internet Things J. 5 (2) (2018) 1275–1284, http://dx.doi.org/10.1109/JIOT.2018.2805263.
[20] F. Al-Doghman, N. Moustafa, I. Khalil, Z. Tari, A. Zomaya, AI-enabled secure microservices in edge computing: Opportunities and challenges, IEEE Trans. Serv. Comput. (2022) 1, http://dx.doi.org/10.1109/TSC.2022.3155447.
[21] D. Rosendo, A. Costan, P. Valduriez, G. Antoniu, Distributed intelligence on the Edge-to-Cloud Continuum: A systematic literature review, J. Parallel Distrib. Comput. 166 (2022) 71–94, http://dx.doi.org/10.1016/j.jpdc.2022.04.004, arXiv:2205.01081.
[22] S. Wang, Y. Hu, J. Wu, KubeEdge.AI: AI platform for edge devices, arXiv, 2020, http://dx.doi.org/10.48550/arxiv.2007.09227.
[23] O. Debauche, S. Mahmoudi, S.A. Mahmoudi, P. Manneback, F. Lebeau, A new edge architecture for AI–IoT services deployment, Procedia Comput. Sci. 175 (2020) 10–19, http://dx.doi.org/10.1016/j.procs.2020.07.006.
[24] S. Teerapittayanon, B. McDanel, H. Kung, Distributed deep neural networks over the cloud, the edge and end devices, in: 2017 IEEE 37th International Conference on Distributed Computing Systems, ICDCS, 2017, pp. 328–339, http://dx.doi.org/10.1109/icdcs.2017.226.
[25] F. Zhu, B.C. Ooi, C. Miao, H. Wang, I. Skrypnyk, W. Hsu, S. Chawla, A. Banitalebi-Dehkordi, N. Vedula, J. Pei, F. Xia, L. Wang, Y. Zhang, Auto-Split: A general framework of collaborative edge-cloud AI, in: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021, pp. 2543–2553, http://dx.doi.org/10.1145/3447548.3467078.
[26] S. Zhao, M. Talasila, G. Jacobson, C. Borcea, S.A. Aftab, J.F. Murray, Packaging and sharing machine learning models via the Acumos AI open platform, in: 2018 17th IEEE International Conference on Machine Learning and Applications, ICMLA, 2018, pp. 841–846, http://dx.doi.org/10.1109/ICMLA.2018.00135.
[27] P. Schüller, J.P. Costeira, J. Crowley, J. Grosinger, F. Ingrand, U. Köckemann, A. Saffiotti, M. Welss, Composing complex and hybrid AI solutions, 2022, http://dx.doi.org/10.48550/ARXIV.2202.12566.
[28] A. Paleyes, R.-G. Urma, N.D. Lawrence, Challenges in deploying machine learning: A survey of case studies, ACM Comput. Surv. 55 (6) (2022) http://dx.doi.org/10.1145/3533378.
[29] AI4EU, Europe's AI-on-demand platform, 2022, URL https://www.ai4europe.eu. (Last Accessed July 2022).
[30] Docker, Docker hub, 2022, URL https://www.docker.com/products/docker-hub/. (Last Accessed July 2022).
[31] Google, Protocol buffers, 2022, URL https://developers.google.com/protocol-buffers/. (Last Accessed July 2022).
[32] G. Jocher, A. Chaurasia, A. Stoken, J. Borovec, NanoCode012, Y. Kwon, TaoXie, J. Fang, imyhxy, K. Michael, Lorna, V. Abhiram, D. Montes, J. Nadar, Laughing, tkianai, yxNONG, P. Skalski, Z. Wang, A. Hogan, C. Fati, L. Mammana, AlexWang1900, D. Patel, D. Yiwei, F. You, J. Hajek, L. Diaconu, M.T. Minh, ultralytics/yolov5: v6.1 - TensorRT, TensorFlow edge TPU and OpenVINO export and inference, 2022, http://dx.doi.org/10.5281/zenodo.6222936.
[33] H. Jayakumar, K. Lee, W.S. Lee, A. Raha, Y. Kim, V. Raghunathan, Powering the internet of things, in: 2014 IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED, 2014, pp. 375–380, http://dx.doi.org/10.1145/2627369.2631644.
[34] W. Ayoub, A.E. Samhat, F. Nouvel, M. Mroue, J.-C. Prévotet, Internet of mobile things: Overview of LoRaWAN, DASH7, and NB-IoT in LPWANs standards and supported mobility, IEEE Commun. Surv. Tutor. 21 (2) (2019) 1561–1581, http://dx.doi.org/10.1109/COMST.2018.2877382.
[35] X. Shen, J. Tuck, R. Bianchini, V. Sarkar, A. Colin, E. Ruppel, B. Lucia, A reconfigurable energy storage architecture for energy-harvesting devices, ACM SIGPLAN Not. 53 (2) (2018) 767–781, http://dx.doi.org/10.1145/3173162.3173210.
[36] F.K. Shaikh, S. Zeadally, Energy harvesting in wireless sensor networks: A comprehensive review, Renew. Sustain. Energy Rev. 55 (2016) 1041–1054.
[37] R. Ahmed, B. Buchli, S. Draskovic, L. Sigrist, P. Kumar, L. Thiele, Optimal power management with guaranteed minimum energy utilization for solar energy harvesting systems, ACM Trans. Embedded Comput. Syst. 18 (4) (2019) 30, http://dx.doi.org/10.1145/3317679.
[38] A. Borji, L. Itti, State-of-the-art in visual attention modeling, IEEE Trans. Pattern Anal. Mach. Intell. 35 (1) (2013) 185–207, http://dx.doi.org/10.1109/TPAMI.2012.89.
[39] H.A. Asad, F.A. Kraemer, K. Bach, C. Renner, T.S. Veiga, Learning attention models for resource-constrained, self-adaptive visual sensing applications, in: Proceedings of the Conference on Research in Adaptive and Convergent Systems, 2022, pp. 165–171, http://dx.doi.org/10.1145/3538641.3561505.

Tiago Veiga received the M.Sc. (2010) and Ph.D. (2015) degrees in Electrical and Computer Engineering from Instituto Superior Técnico, University of Lisbon, Portugal. He is a postdoctoral researcher at the Department of Computer Science at the Norwegian University of Science and Technology (NTNU). Previously, he held a postdoctoral research position at the Institute for Systems and Robotics, Lisbon, Portugal, and an ERCIM Alain Bensoussan Research Fellowship at NTNU. His main research interests are in artificial intelligence, autonomous agents, planning under uncertainty, active perception, and adaptive behavior.

Hafiz Areeb Asad is currently pursuing a Ph.D. degree in information security and communication technology at the Norwegian University of Science and Technology, Trondheim, Norway. He received the M.Sc. degree in computer science from Uppsala University, Sweden, in 2020. He was a recipient of a Swedish Institute (SI) scholarship for global professionals. He received his B.Sc. degree in computer science from the National University of Computer and Emerging Sciences, Islamabad, Pakistan, in 2017. His current research interests include autonomous, cognitive and battery-less IoT.

Frank Alexander Kraemer received the Dipl.-Ing. degree in electrical engineering from the University of Stuttgart, Stuttgart, Germany, in 2003, the M.Sc. degree in information technology from the University of Stuttgart, and the Ph.D. degree in model-driven systems development from the Department of Telematics, Norwegian University of Science and Technology (NTNU), Trondheim, Norway, in 2008. He is an Associate Professor with the Department of Information Security and Communication Technology, NTNU, and worked previously as a Technology Manager at a startup for IoT software that he co-founded. His current research interests include Internet-of-Things architectures and application development, embedded and autonomous sensor systems, and the application of statistical methods and machine learning in constrained settings.

Kerstin Bach is a professor in Artificial Intelligence at the Department of Computer Science at the Norwegian University of Science and Technology (NTNU). Kerstin received her M.Sc. in Information Management and Technology (2007) and Dr. rer. nat. (Ph.D., 2012) from the University of Hildesheim, Germany. She worked as a research engineer at Verdande Technology (2013–2014) before joining NTNU. Her research interests are Artificial Intelligence methods for developing intelligent decision support systems involving both domain experts and end-users to create explainable, interpretable, and trustworthy AI systems. In particular, she works on data-driven and knowledge-intensive Case-Based Reasoning. She is the deputy head of NTNU's Data and Artificial Intelligence group, program manager of the Norwegian Research Center for AI Innovation (NorwAI), and associated with the Norwegian Open AI Lab.