A Middleware for Context-Aware Agents in Ubiquitous Computing Environments*

Anand Ranganathan and Roy H. Campbell

* This research is supported by a grant from the National Science Foundation, NSF CCR 0086094 ITR and NSF 99-72884 EQ.
1 Introduction
Ubiquitous Computing Environments consist of a large number of autonomous agents
that work together to transform physical spaces into smart and interactive environ-
ments. In order to function effectively in these environments, agents need to perform two kinds of tasks: they need to sense and reason about the current context of the environment, and they need to interact smoothly with other agents. In this
paper, we propose a middleware for Ubiquitous Computing Environments that meets
these two needs of agents in the environment.
The role of context has recently gained great importance in the field of ubiquitous
computing. “Context” is any information about the circumstances, objects, or condi-
tions by which a user is surrounded that is considered relevant to the interaction be-
tween the user and the ubiquitous computing environment [1]. A lot of work has been
done on making applications in ubiquitous computing environments context-aware so that they can adapt to different situations and be more receptive to users’ needs [1][2][3][8][13].
Humans behave differently in different contexts. They are able to sense what their
context is and they adapt their behavior to their current context. The way humans
adapt themselves is based on rules that they learn over the course of their experiences.
Humans are, thus, able to follow socially and politically correct behavior that is
conditioned by their past experiences and their current context.
Automated agents (which may be applications, services or devices) can also follow contextually appropriate behavior if they are able to sense and reason about the context in which they are operating. Ubiquitous computing environments are charac-
terized by many sensors that can sense a variety of different contexts. The types of
contexts include physical contexts (like location, time), environmental contexts
(weather, light and sound levels), informational contexts (stock quotes, sports scores),
personal contexts (health, mood, schedule, activity), social contexts (group activity,
social relationships, other people in a room), application contexts (email received,
websites visited) and system contexts (network traffic, status of printers)[9]. Agents in
these environments should be able to acquire and reason about these contexts to adapt
the way they behave.
In this paper, we argue that ubiquitous computing environments must provide mid-
dleware support for context awareness. A middleware for context awareness would
provide support for most of the tasks involved in dealing with context. Context-aware
agents can be developed very easily with such a middleware. A middleware for con-
text-awareness would also place different mechanisms at the disposal of agents for
dealing with context. These mechanisms include reasoning mechanisms like rules
written in different types of logic (first order logic, temporal logic, fuzzy logic, etc.)
as well as learning mechanisms (like Bayesian networks, neural networks or rein-
forcement learning). Developers of context-aware agents would not have to worry
about the intricate details of getting contextual information from different sensors or
developing reasoning or learning mechanisms to reason about context.
Another important requirement of middleware in ubiquitous computing environments is that it should allow autonomous, heterogeneous agents to interact seamlessly with one another. While a number of protocols and middleware platforms (like TCP/IP, CORBA, Jini, SOAP, etc.) have been developed to enable distributed agents to talk to
one another, they do not address the issues of syntactic and semantic interoperability
among agents. They do not provide a common terminology and shared set of concepts
that agents can use when they interact with each other. This problem is especially
acute in the realm of contextual information since different agents could have differ-
ent understandings of the current context. They might use different terms to describe
context, and even if they use the same terms, they might attach different semantics to
these terms. A middleware for context-awareness must address this problem by ensur-
ing that there is no semantic gap between different agents when they exchange con-
textual information.
We have identified several requirements for a middleware for context-awareness in
ubiquitous computing environments. These are:
1. Support for gathering of context information from different sensors and delivery of
appropriate context information to different agents.
2. Support for inferring higher-level contexts from low-level sensed contexts.
3. Enable agents to use different kinds of reasoning and learning mechanisms.
4. Facilities for allowing agents to specify different behaviors in different contexts easily.
5. Enable syntactic and semantic interoperability between different agents (through the use of ontologies).
3 Context Model
The values that the arguments of a predicate can take are actually constrained by
the type of context. For example, if the type of context is “location”, the first argu-
ment has to be a person or object, the second argument has to be a preposition or a
verb like “entering,” “leaving,” or “in” and the third argument must be a location. We
perform type-checking of context predicates to ensure that a predicate is well-formed.
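As an illustration, the sketch below shows how such type-checking might be implemented. The signature table, sort names and member values are hypothetical and only meant to convey the idea; they are not the actual Gaia data structures.

# A minimal sketch of type-checking context predicates (illustrative names only).
# Each context type constrains the sort of each of its arguments.
PREDICATE_SIGNATURES = {
    "Location": ("PersonOrObject", "LocationRelation", "Place"),
    "Temperature": ("Place", "Comparator", "Number"),
}

SORT_MEMBERS = {
    "PersonOrObject": {"Bob", "Chris", "Projector"},
    "LocationRelation": {"Entering", "Leaving", "In"},
    "Place": {"Room 3231", "Room 2401"},
    "Comparator": {">", "<", "="},
    "Number": None,  # any numeric value is allowed
}

def well_typed(context_type, args):
    """Return True if context_type(*args) respects its declared signature."""
    signature = PREDICATE_SIGNATURES.get(context_type)
    if signature is None or len(signature) != len(args):
        return False
    for sort, value in zip(signature, args):
        members = SORT_MEMBERS[sort]
        if members is None:
            if not isinstance(value, (int, float)):
                return False
        elif value not in members:
            return False
    return True

assert well_typed("Location", ("Bob", "Entering", "Room 3231"))
assert not well_typed("Location", ("Room 3231", "Entering", "Bob"))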
4 Gaia
Our middleware for context awareness and semantic interoperability has been inte-
grated into Gaia[16][17]. Gaia is our infrastructure for Smart Spaces, which are ubiq-
uitous computing environments that encompass physical spaces. The main aim of
Gaia is to make physical spaces like rooms, homes, buildings and airports intelligent,
and aid humans in these spaces. Gaia converts physical spaces and the ubiquitous
computing devices they contain into a programmable computing system. It offers
services to manage and program a space and its associated state. Gaia is similar to
traditional operating systems in that it manages the tasks common to all applications
built for physical spaces. Each space is self-contained, but may interact with other spaces. Gaia provides core services, including events, entity presence (devices, users and services), discovery and naming. By specifying well-defined interfaces to ser-
vices, applications may be built in a generic way so that they are able to run in arbi-
trary active spaces. The core services are started through a bootstrap protocol that
starts the Gaia infrastructure. Gaia uses CORBA to enable distributed computing.
Gaia consists of a number of different types of agents performing different tasks.
There are agents that perform various core services required for the functioning of the
environment like discovery, context-sensing, event distribution, etc. There are agents
associated with devices that enable them to be a part of the environment. Each user also
has an agent that keeps personal information and acts as his proxy in a variety of set-
tings. Finally there are application agents that help users perform various kinds of
tasks in the environment. Examples of application agents include PowerPoint applica-
tions, music playing applications and drawing applications.
5 Enabling Context-Awareness
The Gaia middleware provides different ways for agents to acquire various types of
contextual information and then reason about it. A diagram of our infrastructure for
context-awareness is shown in Fig. 1.
There are different kinds of agents that are involved in the Context Infrastructure
within Gaia (Fig. 1). These are:
• Context Providers. Context Providers are sensors or other data sources of context
information. They allow other agents (or Context Consumers) to query them for
context information. Some Context Providers also have an event channel where
they keep sending context events. Thus, other agents can either query a Provider or
listen on the event channel to get context information.
• Context Synthesizers. Context Synthesizers get sensed contexts from various Con-
text Providers, deduce higher-level or abstract contexts from these simple sensed
contexts and then provide these deduced contexts to other agents. For example, we
have a Context Synthesizer which infers the activity going on in a room based on the
number of people in the room and the applications that are running.
• Context Consumers. Context Consumers (or Context-Aware Applications) are
agents that get different types of contexts from Context Providers or Context Syn-
thesizers. They then reason about the current context and adapt the way they be-
have according to the current context.
• Context Provider Lookup Service. Context Providers advertise the context they
provide with the Context Provider Lookup Service. This service allows agents to
find appropriate Context Providers. There is one such service in a single ubiquitous
computing environment.
Fig. 1. Agents in the Context Infrastructure: Context Providers, a Context Synthesizer, the Context Provider Lookup Service, the Context History Service and the Ontology Server
• Context History Service. Past contexts are logged in a database. The Context His-
tory Service allows other agents to query for past contexts. There is one such ser-
vice in a single ubiquitous computing environment.
• Ontology Server. The Ontology Server maintains ontologies that describe different
types of contextual information. There is one Ontology Server per ubiquitous com-
puting environment.
These different kinds of agents are described in further detail in the following sec-
tions.
A key feature of our middleware is that it endows agents with a variety of reasoning
and/or learning mechanisms to help them reason about context appropriately. Using
these reasoning or learning mechanisms, agents can infer various properties about the
current context, answer logic queries about context or adapt the way they behave in
different contexts.
Agents can reason about context using rules written in different types of logic like
first order logic, temporal logic, description logic, higher order logic, fuzzy logic, etc.
Different agents have different logic requirements. Agents that are concerned with the
temporal sequence in which various events occur would need to use some form of
temporal logic to express the rules. Agents that need to express generic conditions
using existential or universal quantifiers would need to use some form of first order
logic. Agents that need more expressive power (like characterizing the transitive clo-
sure of relations) would need higher order logics. Agents that deal with specifying
terminological hierarchies may need description logic. Agents that need to handle
uncertainties may require some form of fuzzy logic.
Instead of using rules written in some form of logic to reason about context, agents
can also use various machine learning techniques to deal with context. Learning tech-
niques that can be used include Bayesian learning, neural networks, reinforcement
learning, etc. Depending on the kind of concept to be learned, different learning
mechanisms can be used. If an agent wants to learn the appropriate action to perform
in different states in an online, interactive manner, it could use reinforcement learning
or neural networks. If an agent wants to learn the conditional probabilities of different
events, Bayesian learning is appropriate. The decision on what kind of logic or learn-
ing mechanism to use depends not only on the power and expressivity of the logic,
but also on other issues like performance, tractability and decidability.
Our middleware provides agents a choice of reasoning and learning mechanisms
that they can use to understand and react to context. Our current implementation al-
lows reasoning based on many-sorted first order logic, propositional linear-time tem-
poral logic or probabilistic propositional logic. It also allows agents to learn using
Bayesian methods or through reinforcement learning. These mechanisms are provided
in the form of libraries that the agent can use. We discuss the power, expressivity and
decidability of these logics in the implementation section. In the following sections,
we describe how Context Providers, Context Synthesizers and Context Consumers
use various reasoning mechanisms to perform their tasks.
Context Providers sense various types of contexts and allow these contexts to be ac-
cessed by other agents. We have a number of Context Providers in our infrastructure
providing various types of contexts like location, weather, stock price, calendar and
schedule information, etc.
Different Context Providers use different reasoning or learning mechanisms for
reasoning about the contexts they sense and for answering queries. For example, Con-
text Providers that deal with uncertain contexts could use fuzzy logic, while those that
require the ability to quantify over variables could use first order logic. Our Location
Context Provider, for instance, uses first order logic so that it can quantify over peo-
ple or over locations. It can thus answer queries concerning all the people in a room,
or all the locations in a building. Our Weather Forecast Context Provider uses a form
of fuzzy logic to attach probabilities to different contexts. For instance, it may report that precipitation will occur the next day with a certain probability.
Context Providers provide a query interface for other agents to get the current con-
text. Depending on the type of logic or learning mechanism used, Context Providers
have different ways of evaluating queries. However, all reasoning and learning
mechanisms are based on the predicate model of context, which is defined in the on-
tology. So, in spite of different Providers using different logics, their common
grounding on the predicate model makes it easy for Context Consumers to query them
in a uniform way.
The query interface is similar to that of Prolog. If the query is a predicate with no
variables, then the result is expected to be the truth value of that predicate. The result
is, thus, either a “yes” or a “no” (or a probability of the context predicate being true).
If the query is a predicate with variables, then the result must include any unifications
of the variables with constants that make the predicate true (if there are any). The resulting unified context predicates that are returned may have additional attributes like
time or probability depending on the type of logic used.
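The following sketch illustrates this query behavior. The fact base and the evaluate function are toy stand-ins (the actual providers answer queries through their respective logic engines), but the distinction between ground queries and queries with variables follows the interface described above.

# Toy illustration of the Prolog-like query interface (names are hypothetical).
# Arguments beginning with '?' are variables; everything else is a constant.
FACTS = [("Location", ("Bob", "In", "Room 3231")),
         ("Location", ("Chris", "In", "Room 2401"))]

def unify(query_args, fact_args):
    """Try to bind the query's variables against one fact; return the bindings or None."""
    binding = {}
    for q, f in zip(query_args, fact_args):
        if q.startswith("?"):
            if binding.get(q, f) != f:
                return None
            binding[q] = f
        elif q != f:
            return None
    return binding

def evaluate(context_type, args):
    bindings = []
    for ftype, fargs in FACTS:
        if ftype == context_type and len(fargs) == len(args):
            b = unify(args, fargs)
            if b is not None:
                bindings.append(b)
    if any(a.startswith("?") for a in args):
        return bindings            # query with variables: return the unifications
    return len(bindings) > 0       # ground query: return a truth value

print(evaluate("Location", ("Bob", "In", "Room 3231")))      # True
print(evaluate("Location", ("?person", "In", "Room 2401")))  # [{'?person': 'Chris'}]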
Some Context Providers (like those that provide dynamic contexts and are associ-
ated with sensors) also send events about their context on an event channel. Consum-
ers can listen on this channel. For example, our room-based location service sends an
event like “Location( Bob, Entering, Room 3231)” when Bob enters Room 3231.
Exactly when a Context Provider generates an event is set by a policy for the Context
Provider. In some cases, the provider keeps sending events periodically. For example,
our weather service keeps sending temperature updates every 5 minutes. In other
cases, the provider sends an event whenever a change in context is detected.
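A minimal sketch of these two policies is shown below; the sense() callback and the channel object are placeholders for a sensor reading and a Gaia event channel, and the loop structure is purely illustrative.

import time

# Illustrative event-generation policies for a Context Provider
# (channel.send() stands in for publishing on a Gaia event channel).
def run_provider(sense, channel, policy="on_change", period=300, poll=1):
    last = None
    while True:
        value = sense()                    # read the underlying sensor
        if policy == "periodic":
            channel.send(value)            # e.g. a temperature update every 5 minutes
            time.sleep(period)
        else:                              # "on_change": notify only when the context changes
            if value != last:
                channel.send(value)
                last = value
            time.sleep(poll)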
All Context Providers support a similar interface for getting contexts and listening
to context events. So, consumers don’t have to worry about the actual type of Context
Provider they are querying. This greatly aids development of context-aware applica-
tions.
Context Synthesizers are agents that provide higher-level contexts based on simpler
sensed contexts. A Context Synthesizer gets source contexts from various Context
Providers, applies some sort of logic to them and generates a new type of context. A
Context Synthesizer is both a Context Provider and a Context Consumer. Just like
Context Providers, Context Synthesizers also support a Prolog-like query interface
which other agents can use to get the current context. They may also send events in an
event channel.
We follow two basic approaches for inferring new contexts from existing contexts.
The first uses static rules to deduce higher-level contexts, and the second uses machine learning techniques.
Each of the rules also has a priority associated with it, so if more than one rule is true at the same time, the activity in the room is determined using the priorities of the rules. If two rules are true at the same time and have the same priority, then one of them is picked at random.
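This conflict-resolution scheme can be sketched as follows; the rule representation and the example rules are invented for illustration and do not reproduce the actual Room Activity rules.

import random

# Priority-based selection among fired rules (illustrative rule set).
# Each rule is a (condition, activity, priority) triple.
def infer_activity(context, rules):
    fired = [(priority, activity)
             for condition, activity, priority in rules
             if condition(context)]
    if not fired:
        return None
    top = max(priority for priority, _ in fired)
    candidates = [activity for priority, activity in fired if priority == top]
    return random.choice(candidates)       # equal-priority ties are broken at random

rules = [
    (lambda c: c["people"] >= 3 and "PowerPoint" in c["apps"], "Presentation", 2),
    (lambda c: c["people"] >= 2, "Meeting", 1),
]
print(infer_activity({"people": 4, "apps": ["PowerPoint"]}, rules))   # Presentation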
Even the context concerning the number of people in the room is an inferred con-
text. So, the Room Activity Context Provider has to keep track of these entered and
exited context events and infer the number of people in the room based on these
events. Whenever the Room Activity Context Provider deduces a change in the activ-
ity in the room, it sends an event with the new activity.
We found that a fairly small set of rules is sufficient for deducing the activity in a room in most circumstances. The Room Activity Context Provider accurately deduced the activity in the room most of the time. This shows, at least empirically, that rule-based synthesizers are fairly useful in deducing some types of contexts.
The middleware provides mechanisms for developers to specify the rules of infer-
ence for these Synthesizers very easily. The developer can browse the ontology to get
the terminology used in the environments. He can then make use of this terminology
to frame the rules. The middleware also abstracts away many tasks like getting refer-
ences to appropriate Context Providers, querying them or listening to their event
channels and sending context events at appropriate times. The developer is thus free
to concentrate on the task of writing the rules.
Synthesizers that Learn. Rule-based synthesizers have the disadvantage that they require explicit definition of rules by humans. They are also inflexible and cannot adapt to changing circumstances. Making use of machine learning techniques to deduce the higher-level context enables us to get around this problem.
One type of context that is extremely difficult to sense is the mood of a user. It is difficult to write rules for predicting user mood since each user is different and a large number of factors can influence a user’s mood. We attempted to use a learning mechanism that takes some possible factors into account: the user’s location, the time of day, which other people are in the room, the weather outside and how the user’s stock portfolio is faring. Our User Mood Context Provider uses the Naïve
Bayes algorithm for predicting user mood. We make use of past contexts to train the
learner. During the training phase, we ask the user for his mood periodically. We
construct a training example by finding the values of the features (i.e. location,
weather, etc.) for each time the user entered his mood. We train the learner for a
week. Once the training phase is over, the learner can predict the mood of the user
given the values of the feature contexts (which are represented as predicates). The
result of a Naïve Bayes algorithm is probabilistic. It would be possible to retain the probabilities associated with different user moods and thus use some form of fuzzy logic or probabilistic reasoning when handling these contexts. However, we just consider the mood with the highest probability and assume that to be true.
We found that this user mood predictor predicted the user’s mood fairly well in different situations after some training. Humans are fairly repetitive creatures: their moods in different contexts follow certain predictable patterns. Of course, the predic-
tions are not always perfect – we can only make good guesses based on the informa-
tion we have available. It is quite difficult to take into account all possible factors that
can influence the mood of a user.
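A simplified version of such a predictor is sketched below. The feature names, training examples and crude Laplace-style smoothing are illustrative, not the actual implementation (which is built on the middleware's Bayesian learning library).

from collections import Counter, defaultdict

# Minimal Naive Bayes sketch in the spirit of the mood predictor (toy data).
class NaiveBayesMood:
    def __init__(self):
        self.mood_counts = Counter()
        self.feature_counts = defaultdict(Counter)   # (feature, value) -> Counter of moods

    def train(self, features, mood):
        self.mood_counts[mood] += 1
        for name, value in features.items():
            self.feature_counts[(name, value)][mood] += 1

    def predict(self, features):
        total = sum(self.mood_counts.values())
        best_mood, best_score = None, 0.0
        for mood, count in self.mood_counts.items():
            score = count / total                     # prior P(mood)
            for name, value in features.items():      # P(feature value | mood), crudely smoothed
                score *= (self.feature_counts[(name, value)][mood] + 1) / (count + 2)
            if score > best_score:
                best_mood, best_score = mood, score
        return best_mood                              # only the most probable mood is kept

learner = NaiveBayesMood()
learner.train({"location": "office", "weather": "sunny", "stocks": "up"}, "happy")
learner.train({"location": "office", "weather": "rainy", "stocks": "down"}, "grumpy")
print(learner.predict({"location": "office", "weather": "sunny", "stocks": "up"}))  # happy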
The middleware aids the training and the actual operation of the agent. It provides
support for many tasks like getting references to appropriate Context Providers, que-
rying them or listening to their event channels and sending context events at appropri-
ate times. The developer is thus free to concentrate on the task of learning.
Context Consumers are agents that consume various types of contexts and adapt their
behavior depending on the current context. As mentioned earlier, consumers can ob-
tain contexts by either querying a Context Provider or by listening for events that are
sent by Context Providers. Our middleware makes it very easy to develop and deploy
context aware applications. It is easy for applications to get the contexts they require
to make decisions. In our infrastructure, Context Consumers get references to Context
Providers using the Context Provider Lookup Service.
The middleware provides these reasoning and learning mechanisms in the form of libraries, which aid in the reasoning process. The job of the agent developer is, thus, very simple. He can concentrate on writing the rules that govern agent behavior without worrying about other things.
A sample configuration file for a jukebox application agent is shown in Table 1.
This agent plays appropriate music in a room depending on who is in the room, what
the weather is outside and how the stock portfolio of the user is faring. The rules of
this agent are written in first order logic.
In our current implementation, developers of context-aware applications need to
write such a configuration file (using the appropriate kind of logic) for describing
behavior in different contexts. However, we are also working on a graphical interface,
which simplifies the developer’s task. This graphical interface would show the vari-
ous types of contexts available (as defined in the ontology), allow the developer to
construct complex rules involving these contexts in different types of logics and also
present him with a list of possible behaviors of the application for these different
contexts.
Another example, which uses temporal logic, is a slideshow agent. One of its rules
is that it starts playing a particular ppt file on a large plasma screen when Chris enters
the room and continues playing until the Activity in the room is Meeting. This rule is
written as:
Condition: Location(Chris, Entered, 2401) AND (TRUE UNTIL Activity(2401, Meeting))
Action: PlayOnPlasmaScreen(“scenery.ppt”)
Priority: 2
Table 1. Sample configuration file for the jukebox application agent.

Condition: Location(Manuel, In, 2401)
Action: PlayRockMusic()
Priority: 1

Condition: Location(Manuel, In, 2401) AND Temperature(Champaign, >, 50)
Action: PlaySoftMusic()
Priority: 2

Condition: Location(Bhaskar, In, 2401) AND Location(Chris, In, 2401)
Action: PlayPopMusic()
Priority: 2
One of our Context Consumers that learns is the Notification Service, which delivers information, news headlines and error messages on different kinds of media like tickertape, speech or email. The Notification Service learns by sending a notifi-
cation in some situation and observing the user’s reaction to the notification. The user
can give feedback about the notification by rating its usefulness (or even by stopping
the notification midway). Depending on the feedback of the user, the Notification
Service either increases or decreases the probability that it would send the same type
of notification in a similar situation again.
As in the case of rules, the middleware provides support for getting references to
appropriate Context Providers, getting context information from them, evaluating the
concept learned and invoking appropriate actions in different contexts. It also pro-
vides libraries and utilities that aid the learning process.
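This feedback loop can be sketched roughly as follows. The situation/medium keys, the probability update and the learning rate are illustrative assumptions of the sketch, not the actual Notification Service.

import random

# Illustrative feedback loop: user ratings nudge the probability of repeating
# the same kind of notification in a similar situation.
class NotificationPolicy:
    def __init__(self, learning_rate=0.2):
        self.prob = {}                      # (situation, medium) -> probability of notifying
        self.lr = learning_rate

    def should_notify(self, situation, medium):
        p = self.prob.setdefault((situation, medium), 0.5)
        return random.random() < p

    def feedback(self, situation, medium, rating):
        """rating in [-1, 1]; negative when the user stops or dislikes the notification."""
        p = self.prob.setdefault((situation, medium), 0.5)
        self.prob[(situation, medium)] = min(1.0, max(0.0, p + self.lr * rating))

policy = NotificationPolicy()
if policy.should_notify("in_meeting", "speech"):
    policy.feedback("in_meeting", "speech", -1.0)   # user stopped the spoken message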
The Context Provider Lookup Service allows searches for different context providers.
Providers advertise the set of contexts they provide with the Context Provider Lookup
Service. This advertisement is in the form of a first order expression. Agents can
query the Lookup Service for a context provider that provides the contextual information they need. The Lookup Service checks if any of the context providers can provide what
the agent needs and returns the results to the application.
For example, a location context provider that tracks Bob’s location around the building advertises itself as ∀Location y Location(Bob, In, y). An application that wants to know when Bob enters room 3231 would send the query Location(Bob, In, Room 3231) to the Lookup Service. The Lookup Service sees that the context provider does provide the context that the application is interested in (the advertisement is a superset of the query) and returns a reference to the context provider to the application.
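The subsumption check performed by the Lookup Service can be illustrated roughly as follows; the tuple representation and the '?'-prefixed variables standing for universally quantified arguments are assumptions of this sketch.

# Toy check that an advertisement covers (is a superset of) a query.
# A '?'-prefixed argument stands for a universally quantified slot.
ADVERTISEMENTS = {
    "LocationProvider": ("Location", ("Bob", "In", "?y")),   # forall y: Location(Bob, In, y)
}

def covers(advert, query):
    a_type, a_args = advert
    q_type, q_args = query
    if a_type != q_type or len(a_args) != len(q_args):
        return False
    return all(a.startswith("?") or a == q for a, q in zip(a_args, q_args))

def lookup(query):
    return [name for name, advert in ADVERTISEMENTS.items() if covers(advert, query)]

print(lookup(("Location", ("Bob", "In", "Room 3231"))))   # ['LocationProvider']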
Applications can make use of not just the current context, but also past contexts to
adapt their behavior for better interacting with users. We thus store contexts continu-
ously, as they occur, in a database. The Gaia event service allows event channels to be
“persistent”, i.e. all events sent on these channels are stored in a database along with a
timestamp indicating when the event was sent.
It is thus possible to store all context events (or a certain subset of them) in a data-
base. Since all context events have a well-determined structure (as given by the ontol-
ogy), it is relatively simple to automatically develop schemas for storing them into a
database. Storing past contexts enables the use of data mining to learn and discover
patterns in user behavior, room activities and other contexts. This sort of data mining
can, for example, be used in security applications like intrusion detection, where observed behavior far outside the ordinary can be flagged as a possible intrusion.
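A minimal sketch of such a persistent log is shown below. The actual Context History Service uses MySQL and derives its schemas from the ontology; the single generic table here is only for illustration.

import sqlite3, time

# Toy persistent log of context events with timestamps.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE context_events
              (context_type TEXT, arg1 TEXT, arg2 TEXT, arg3 TEXT, ts REAL)""")

def log_event(context_type, args):
    db.execute("INSERT INTO context_events VALUES (?, ?, ?, ?, ?)",
               (context_type, *args, time.time()))

log_event("Location", ("Bob", "Entering", "Room 3231"))

# Query past contexts, e.g. everything that ever happened in Room 3231.
rows = db.execute("SELECT * FROM context_events WHERE arg3 = ?", ("Room 3231",)).fetchall()
print(rows)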
The use of ontologies thus enables agents in different ubiquitous computing environments to have a common vocabulary and a common set of concepts while interacting with one another.
All the ontologies in Gaia are maintained by an Ontology Server. Other agents in Gaia
contact the Ontology Server to get descriptions of agents in the environment, meta-
information about context or definitions of various terms used in Gaia. It is also pos-
sible to support semantic queries (for instance, classification of individuals or sub-
sumption of concepts). Such semantic queries require the use of a reasoning engine
that uses description logics like the FaCT reasoning engine[21]. We plan on providing
support for such queries in the near future.
The Ontology Server also provides an interface for adding new concepts to existing
ontologies. This allows new types of contexts to be introduced and used in the envi-
ronment at any time. The Ontology Server ensures that any new definitions are logi-
cally consistent with existing definitions.
The use of ontologies also makes it possible for agents in different environments to
inter-operate. To support such an inter-operation, mappings need to be developed
between concepts defined in the ontologies of the two environments. We plan on
developing a framework for supporting such inter-operation very soon.
Since the ontologies clearly define the structure of contextual information, different
agents can exchange different types of context information easily. Context Consumer
agents can get the structure of contexts they are interested in from the Ontology
Server. They can then frame appropriate queries to Context Providers to get the con-
texts they need.
Context Providers and Context Synthesizers can also get the structure of contexts
that they provide from the Ontology Server. So, they know the kinds of queries they
can expect. They also know the structure of events that they need to send on event
channels.
Finally, ontologies also help the developer when he is writing rules or developing
learning mechanisms for context aware agents. The developer has access to the set of
terms and concepts that describe contextual information. He can thus use the most
appropriate terms and concepts while developing context aware agents.
7 Implementation
All agents in the Context Infrastructure are implemented on top of CORBA and are a
part of Gaia. This means they can be instantiated in any machine in the system, can
access event channels, can be moved from one machine to another and can be discov-
ered using standard mechanisms like the CORBA Naming Service and the CORBA
Trading Service. The Context History Service uses the MySQL database for storing
past contexts.
We currently support a number of reasoning mechanisms including many-sorted
first order logic and linear time propositional temporal logic. For reasoning in first
order logic, we use XSB[19] as the reasoning engine. XSB is a more powerful form of
Prolog which uses tabling and indexing to improve performance and also allows lim-
ited higher order logic reasoning. We use the many-sorted logic model where quanti-
fication is performed only over a specific domain of values. The ontology defines
various sets of values (like Person, Location, Stock Symbol, etc). Thus, the Person set
consists of the names of all people in our system. The Location set consists of all
valid locations in our system (like room numbers and hallways). Stock Symbol con-
sists of all stock symbols that the system is interested in (e.g. IBM, MSFT, SUNW,
etc.). Each of these sets is finite. Quantification of variables is done over the values of
one of these sets. Since quantification is performed only over finite sets, evaluations
of expressions with quantifications will always terminate. More discussion on the
issues of decidability and expressiveness can be found in [10][11][12].
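The sketch below illustrates why evaluation terminates: each quantified variable ranges over one of the finite sorts defined in the ontology, so a quantified expression reduces to a finite loop. The sort contents and fact base are illustrative only.

# Quantification over finite sorts always terminates (toy example).
SORTS = {
    "Person": {"Bob", "Chris", "Manuel"},
    "Location": {"Room 2401", "Room 3231"},
}
FACTS = {("Location", ("Chris", "In", "Room 2401"))}

def exists(sort, pred):
    return any(pred(x) for x in SORTS[sort])

def forall(sort, pred):
    return all(pred(x) for x in SORTS[sort])

# "Is some person in Room 2401?"
print(exists("Person", lambda p: ("Location", (p, "In", "Room 2401")) in FACTS))   # True
# "Is every person in Room 2401?"
print(forall("Person", lambda p: ("Location", (p, "In", "Room 2401")) in FACTS))   # False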
For temporal logic, we have developed our own reasoning engine that is based on
Templog[23]. Templog is a logic programming language, similar to Prolog, which
allows the use of temporal operators. We restrict the power of this logic to proposi-
tional logic, which makes it decidable and also simpler to evaluate.
We also currently support some machine learning mechanisms, viz. Naïve Bayes
learning and reinforcement learning. The Naïve Bayes approach involves learning
conditional probabilities between different events from a large number of training
examples. Thus given a certain context, it gives the conditional probability that some
action should be performed or some other context should be true. The reinforcement
learning approach involves trying to learn appropriate actions based on user feedback.
An ontology of all terms used in the context infrastructure has been developed in
DAML+OIL. The Ontology Server uses the FaCT reasoning engine[21] for checking
the validity of context expressions.
We have implemented a number of Context Providers in our system such as pro-
viders of location, weather, stock price, calendar contexts and authentication contexts.
We also have some context synthesizers as described earlier. Some examples are a
Synthesizer which deduces the mood of a user using Naïve Bayes learning; and an-
other which deduces the activity in the room using rules. The middleware has allowed us to develop a number of context-aware applications very easily.
8 Related Work
A lot of work has been done in the area of context-aware computing in the past few
years. Seminal work has been done by Anind Dey, et al. in defining context-aware
computing, identifying what kind of support was required for building context aware
applications and developing a toolkit that enabled rapid prototyping of context-aware
applications[1]. While the Context Toolkit does provide a starting point for applica-
tions to make use of contextual information, it does not provide much help on how to
reason about contexts. It does not provide any generic mechanism for writing rules
about contexts, inferring higher-level contexts or organizing the wide range of possi-
ble contexts in a structured format.
In [2], Jason Hong et al. make the distinction between a toolkit and an infrastruc-
ture. An infrastructure, according to Hong, is a well-established, pervasive, reliable
set of technologies providing a foundation for other systems. Our middleware for
context-awareness builds on Hong’s notion of an infrastructure and provides a foun-
dation for developing context-aware applications easily.
Bouquet et al. [4] address the problem of contexts in autonomous, heterogeneous distributed applications, where each entity has its own notion of context depending on its viewpoint. To interact with other entities, an entity should know the relationship between its viewpoint and other entities’ viewpoints. Our middleware uses ontologies
to achieve this inter-operability in a more generic fashion. Paul Castro and his col-
leagues [5] have worked on developing “fusion services” which extract and infer
useful context information from sensor data using Bayesian networks. Our middle-
ware provides a more generic framework where such learning approaches can be
used.
Terry Winograd compares different architectures for context[6] and proposes one
that uses a centralized Event Heap[7]. Our system, however, provides a framework
where distributed reasoning can take place. In [3], Brumitt et al. describe their experi-
ences with multi-modal interactions in context-aware environments and how such an
environment can respond automatically to different contexts. Our middleware pro-
vides an easy way for developers to specify how an environment should automatically
respond to different contexts.
9 Future Work
There are a number of possible enhancements to our middleware. A new approach to
developing context-sensitive applications is to model them as state machines. This
allows their behavior to be determined by specific sequences of context changes. State
machine approaches to modeling applications are useful especially when a sequence
of changes in context needs to trigger a sequence of actions by the application.
We have not yet tackled the issues of privacy and security. Some context informa-
tion may be private and hence, not all agents may have access to it. The ontology
can potentially encode such privacy and security constraints. It can thus be used to
ensure that the rules developed for applications do not violate security restrictions.
We are also working on better user interfaces for developing context-aware appli-
cations, so that any ordinary user can program his or her own context-aware agent.
These interfaces can make use of the ontologies to get the structures of different types
of contexts and thus allow the user to develop rules with context information.
One aspect we have not yet studied is the usability of context-aware appli-
cations. How will ordinary users deal with applications that try to learn their behav-
iors and their preferences? Will users take the time to write rules for specifying con-
text-sensitive behavior of applications? Will users respond positively to the fact that
the behavior of applications can change according to the context, and is hence not as
predictable as current applications?
10 Conclusion
In this paper, we have described our middleware for developing context-aware appli-
cations. The middleware is based on a predicate model of context. This model enables
agents to be developed that either use rules or machine learning approaches to decide
their behavior in different contexts. The middleware uses ontologies to ensure that
different agents in the environment have the same semantic understanding of different
context information. This allows better semantic interoperability between different
agents, as well as between different ubiquitous computing environments. Our mid-
dleware allows rapid prototyping of context-sensitive applications. We have devel-
oped a number of context-sensitive agents on our middleware very easily.
References
1. Dey, A.K., et al.: A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications. Human-Computer Interaction (HCI) Journal, Vol. 16 (2001). Anchor article of a special issue on Context-Aware Computing.
2. Hong, J.I., et al.: An Infrastructure Approach to Context-Aware Computing. Human-Computer Interaction (HCI) Journal, Vol. 16 (2001).