

Smart glasses:

Interaction, privacy and social implications



Ubiquitous Computing Seminar
FS2014
Student report

Marica Bertarini
ETH Zurich
Department of Computer Science
Zurich, Switzerland
[email protected]

ABSTRACT
Smart glasses are wearable devices that display real-time information directly in the user's field of vision by using Augmented Reality (AR) techniques. Generally, they can also perform more complex tasks, run applications, and support Internet connectivity. This paper provides an overview of some methods that can be adopted to allow gesture-based interaction with smart glasses, as well as of some interaction design considerations. Additionally, it discusses some social effects induced by a widespread deployment of smart glasses, as well as possible privacy concerns.
Keywords: Smart glasses, head-worn displays, input techniques, interaction, body interaction, in-air interaction, privacy, social implications
INTRODUCTION
Head-worn displays (HWDs) have recently gained significant attention, in particular thanks to the release of an early (Explorer) version of Google Glass. Moreover, the anticipation of the commercial launch of Google Glass¹ in the upcoming months, and the fresh news that Facebook, Inc. acquired Oculus VR², have increased the popularity of such devices even further.
Wearable device purchases are growing significantly, and some business analysts [1] forecast more than 20 million annual sales of Google Glass in 2018. Furthermore, researchers have already been studying and investigating HWDs for several years. As a consequence, it is important to give an overview of the different methods that could be used to interact with smart glasses; above all, analysing privacy concerns and identifying the current and potential social implications of these devices has great significance at this point.

¹ http://www.google.com/glass/start/
² http://www.oculusvr.com/
INTERACTION
The main purpose of smart glasses is to provide users with information and services relevant to their context and useful for performing their tasks; in other words, such devices augment users' senses. In addition, they allow users to carry out basic operations available on today's common mobile devices, such as reading and writing e-mails, writing text messages, making notes, and answering calls.
Therefore, although most smart glasses usage is passive for the user, i.e. reading content on the device's small screen, active interaction with such devices is fundamental to control them and supply input. In fact, users need ways to ask smart glasses, for instance, to open a particular application, answer something they need to know, insert content for emails, messages or input fields, or control games.
Designing interaction techniques
Before presenting how users can interact with smart glasses, it is worth mentioning the main aspects that have to be taken into account during the design process of such techniques, as summarized in [2]. As for technical characteristics, a gesture recognition system for HWDs should ideally be very accurate, i.e. able to distinguish fine shape-based gestures, insensitive to daylight, as small as possible, low-power, and robust in noisy and cluttered environments, which are typical conditions in everyday-life scenarios. As far as user experience is concerned, the physical effort required of users to interact with the device is relevant, as are ease of use and the encumbrance of the device.
Interaction approaches
Two different categories of interaction methods can be distinguished for smart glasses: free-form and others. The former includes, for example, eye tracking, wink detection, voice commands, and gestures performed with fingers or hands. The latter comprises, for instance, the use of hand-held devices, e.g. point-and-click controllers, joysticks, one-hand keyboards and smartphones, or smart watches to control the HWD.

The aforementioned examples should help clarify the difference between the two kinds of interaction. In short, a free-form method does not require any extra device other than the smart glass itself to be performed and detected; in contrast, controlling the smart glass via smartphones, pointers, etc. obviously cannot satisfy the free-form criterion.
GESTURE-BASED INTERACTION
Hereinafter, we will focus only on gesture-based interaction, as it is preferable for ensuring a good user experience. Gesture-based interaction has been researched more than eye tracking and wink detection so far, and voice recognition has already reached wide diffusion on today's mobile devices.
It is relevant to note that several different techniques to detect gestures exist; analysing them in detail would stray from the purpose of this paper. Some of them make use of devices and sensors that have to be tied to the user's body, e.g. to wrists, hands or fingers, whereas others exploit cameras that are external to the smart glass itself and are located around the user. In addition, some of these methods are used together with markers, e.g. reflective, infrared, or coloured ones, in order to identify the position of the user's hands.
Alternatively, gestures can be recognized by using cameras or sensors, such as RGB or 3D cameras and depth sensors, that are embedded into the smart glasses themselves. It is essential to stress that truly free forms of interaction are realized only by using cameras or sensors embedded into the HWD, as they nullify the need for external components and reduce encumbrance. As a consequence, these approaches are ideal for the commercial versions of smart glasses that will be launched on the market. In contrast, the other recognition techniques, classified here as non-free forms, are usually used in research studies in this context.
Gestures on body/smart glass
Another significant distinction applicable to gestures is based on where the gesture is performed. A common solution is to perform gestures very close to or directly on a surface, such as parts of the smart glass itself or the user's body. The following paragraphs present two different studies that have investigated this approach.
Hand-to-face input. In [3], Serrano et al. describe their study aimed at identifying which gestures and surfaces allow fast interaction with as little effort as possible. As regards the surface, they considered and compared results for parts of both the smart glass and the user's face. In particular, the face was chosen since it is a part of the body we interact with very often, i.e. 15.7 times per hour in a work setting; consequently, gestures performed on some parts of the face could be less intrusive than others and may reach a good level of acceptance. On the other hand, the high frequency of hand-face contacts creates a need for gesture delimiters, used by the users to inform the system that a new gesture is starting and to avoid unintentional triggers; for instance, voice commands or a long press on the surface can be used to invoke new gestures.
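
To make the role of a delimiter concrete, the following minimal Python sketch arms gesture recognition only after a long press; the 0.8-second threshold and the touch-event callbacks are illustrative assumptions, not details taken from [3].

import time

LONG_PRESS_S = 0.8  # assumed threshold; [3] does not prescribe a value

class GestureDelimiter:
    """Accept gestures only after an explicit long-press delimiter,
    so spontaneous hand-to-face contacts do not trigger input."""

    def __init__(self):
        self.armed = False
        self._press_started = None

    def on_touch_down(self):
        self._press_started = time.monotonic()

    def on_touch_up(self):
        if self._press_started is None:
            return
        held = time.monotonic() - self._press_started
        self._press_started = None
        if held >= LONG_PRESS_S:
            self.armed = True  # next contact is interpreted as a gesture

    def consume(self, gesture):
        if not self.armed:
            return None        # unintentional contact: ignored
        self.armed = False
        return gesture         # e.g. a swipe performed on the cheek

A voice command could arm the recognizer in exactly the same way, by setting the armed flag instead of requiring the long press.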
The need to find an alternative to performing gestures directly on the smart glass itself arises from the fact that the touchable area available for interaction on such devices is usually only a narrow strip at the user's temple; this means that many gestures are needed to browse pages or applications that require a lot of panning and zooming, due to the very small size of the interaction surface. As a result, the time taken to reach a target on a page is negatively affected, and arm-shoulder fatigue is significant because of the prolonged lifting.
The first relevant result of this study is that most participants preferred the temple to all other parts of the smart glass, and voted the cheek the best part of the face for interaction, as can be observed in Figure 1. Moreover, some participants found interacting on the cheek more suitable than performing gestures on the device thanks to its larger size; they also stated that this part of the face can be considered somewhat similar to a touchpad.

Figure 1: Main areas identified by participants as suitable for input on the face (left) and on the HWD (right). The size of the circles is proportional to the percentage of gestures. Source: [3]

Figure 2: Mean time in seconds (left) and mean Borg value (right) per technique and interaction area. Source: [3], modified
As far as performance is concerned, Figure 2 illustrates that the cheek obtained better results in terms of both the time taken and the exertion required to perform the interaction. In addition, using the cheek took participants roughly 20 seconds to perform three different zooming operations, and they considered it the easiest method compared to the over-sized temple and the regular temple. The mean Borg value referenced in the plot refers to a scale for ratings of perceived exertion; in particular, it allows fatigue and breathlessness during physical work to be estimated [5].
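
As a rough illustration of how such ratings read, the short Python sketch below maps a rating to its verbal anchor; the 6-20 RPE variant of the Borg scale is assumed here, and the anchors come from that variant rather than from [3] or [5].

# Verbal anchors of the Borg 6-20 RPE scale (assumed variant).
BORG_ANCHORS = {
    6: "no exertion at all", 9: "very light", 11: "light",
    13: "somewhat hard", 15: "hard", 17: "very hard",
    19: "extremely hard", 20: "maximal exertion",
}

def describe_exertion(rating):
    """Return the closest verbal anchor at or below a Borg rating (6-20)."""
    return BORG_ANCHORS[max(r for r in BORG_ANCHORS if r <= rating)]

print(describe_exertion(13))  # -> somewhat hard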
The last result presented about this study concerns the social acceptance of these gestures, and shows that participants still prefer interacting with the smart glass over using their face. In particular, this result is not only a consequence of appearance; it is also due to other relevant aspects, e.g. hygienic issues, damage to make-up, the meaning of some gestures in other ethnic groups, etc.
In conclusion, the study shows that the cheek is a valid alternative to the touchable areas on smart glasses as far as performance is concerned; however, social approval is very important and may influence users even more than interaction speed and fatigue.
Palm-based imaginary interfaces. The purpose of this second study is to identify and quantify the role of visual and tactile cues when browsing imaginary interfaces [4]. Specifically, imaginary interfaces can be defined as spatial, non-visual interfaces for mobile devices; in this particular case, they consist of a mapping between applications to be opened or actions to be triggered and parts of the user's hand (see Figure 3).
This approach is very useful for visually impaired users and for eyes-free interaction with smart glasses and, above all, is less intrusive and tiring than hand-to-face and hand-to-HWD interaction.
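
A minimal Python sketch of such a mapping follows; the grid layout and the action names are invented for illustration and do not reproduce the interface studied in [4].

# Hypothetical layout: a 2x3 grid of palm regions mapped to actions.
PALM_LAYOUT = {
    (0, 0): "open_email", (0, 1): "open_maps",  (0, 2): "open_music",
    (1, 0): "take_photo", (1, 1): "start_call", (1, 2): "read_messages",
}

def region_for_touch(x_norm, y_norm, rows=2, cols=3):
    """Map a touch point, normalised to the palm's bounding box
    (0..1 on both axes), to a grid cell."""
    row = min(int(y_norm * rows), rows - 1)
    col = min(int(x_norm * cols), cols - 1)
    return (row, col)

action = PALM_LAYOUT[region_for_touch(0.7, 0.2)]  # -> "open_music"

In the actual study, an audio interface announced the target under the finger (Figure 3), so users could browse the layout without any visual output.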
Figure 4 shows the four conditions investigated in this study. The study shows that, when participants were not blindfolded (Figure 4a), the grid drawn on the fake phone (Figure 4c) oriented users on the screen and helped them find targets, which were reached faster than on the palm (Figure 4d). Additionally, the experiment proved that, in contrast, touching the palm was faster than touching the fake phone when blindfolded; this is of great importance because it demonstrates that the tactile cues users receive by touching their own palms are very significant.
Having found that tactile cues are relevant, it is interesting to understand how the two different tactile senses, i.e. active and passive, contribute to helping users browse the interface. In particular, the active sense is perceived by the finger that actively touches the palm, whereas the passive one is sensed by the palm when touched by the finger.
Therefore, a second experiment was conducted as a 3x2 factorial design in which the measured variable and one factor, i.e. the user condition (sighted vs. blindfolded), remained those of the first experiment; as for the other factor, i.e. the touching surface, touching the palm with a covered finger was tested in addition to touching both the real and the fake palm with a bare finger.

Figure 4: Study conditions: (a) sighted vs. (b) blindfolded, using a partial blindfold that only obscures the participant's view of their hands; (c) phone vs. (d) palm. Source: [4]

Figure 3: Gustafson et al. adapted a non-visual audio interface that announces targets as users touch them, allowing users to browse an unfamiliar imaginary interface. Source: [4]

Figure 5: Study conditions: (a) palm vs. (b) fake palm vs. (c) palm with finger cover; (d) close-up of the finger cover. Source: [4]
The results were meaningful: they showed that browsing on the fake palm is much slower than on the real palm, while there is no significant difference between touching the real palm with a covered finger and with a bare one. As a consequence, the majority of tactile cues must come from the passive tactile sense rather than the active one, contrary to what the authors expected.
To sum up, the study in [4] is notable in this overview as it demonstrated that controlling devices such as smart glasses by touching the palm performs much better than using a phone when blindfolded; as a result, this approach is suitable for use on-the-go, since it does not require users to turn their attention away from the scene. On the other hand, detecting taps on different parts of the hand may be difficult, above all if the aim is to provide a free form of interaction; in fact, an external OptiTrack system with reflective markers was used to perform the presented experiment. For this reason, there are different approaches that detect gestures by using only cameras and sensors embedded in the smart glass. The next section discusses some of these approaches.
In-air gestures
An alternative to performing gestures on top of a surface, or very close to it, is performing them in-air. The following paragraphs explain two research projects that investigated the recognition of this kind of gesture and that could potentially be valid solutions for interacting with smart glasses.
SixthSense is a project described in [6] and developed at the Massachusetts Institute of Technology (MIT) within the Media Lab. It is aimed at building the wearable gesture interface that is partially represented in Figure 6. In detail, the system mainly consists of a small projector and a simple camera fixed to the user's head; in addition, coloured markers have to be tied to the user's fingers to allow gesture detection. A common mobile computing device has to be connected to these appliances.
Substantially, this system projects content onto any surface in front of the user's head, instead of displaying it on the mobile device or on the small screen of a smart glass. Moreover, it is able to recognize and track the user's hand and physical objects thanks to the camera, the markers, and computer-vision techniques. Therefore, based on the mutual position of the hands and the projected content while gestures are performed, the system interprets the hand movements as particular inputs and triggers certain actions accordingly.
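
The fingertip-marker tracking step can be sketched with generic computer-vision tools. The following Python/OpenCV fragment, a simplification that assumes a calibrated HSV colour range and the OpenCV 4 API, finds the centroid of one coloured marker per frame; it is not the actual SixthSense implementation from [6].

import cv2
import numpy as np

# Assumed HSV range for one coloured fingertip marker (e.g. blue tape);
# in practice the range must be calibrated to the markers and lighting.
LOWER = np.array([100, 150, 50], dtype=np.uint8)
UPPER = np.array([130, 255, 255], dtype=np.uint8)

def track_marker(frame_bgr):
    """Return the (x, y) centroid of the largest blob matching the
    marker colour, or None when the marker is not visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

Tracking one such centroid per marker colour over successive frames yields fingertip trajectories from which gestures can be interpreted.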
Even though no smart glasses are directly involved in this project, it is reasonable and very interesting to think of building smart glasses that embed a small projector; this may be possible thanks to the small size of the projector used in the SixthSense project, which is likely to decrease considerably in the coming years. Specifically, SixthSense may be promising in the context of interaction with smart glasses: such devices with embedded projectors would allow users to interact on any surface with content that they want to see at a large size, whereas they could still view data that they want to keep private on the smart glass lenses.
Furthermore, if HWDs alone become computationally powerful enough, this scenario could eventually evolve into all-in-one smart glasses that would let users dispense with today's common mobile devices, e.g. smartphones and tablets, since users would be able to see their content on arbitrarily large surfaces.
Mime is a different project developed at MIT and explained in [2]. The purpose of Mime is to implement a compact, low-power 3D gesture-sensing approach for interaction with HWDs. In this case the focus is on smart glasses, and Colaco et al. have built a prototype smart glass that improves on existing technology in the field of gesture recognition.
The innovative aspect Mime introduces is the combination of two different techniques to detect gestures performed by a single hand, without any markers; in particular, it exploits data coming from both a depth sensor, a 3D Time-Of-Flight (TOF) module integrated into the smart glass, and an RGB camera that is also embedded. The TOF module (Figure 7) is composed of three photodiodes, located on the left, centre and right of the device, and an NIR LED that emits a pulsed light signal. The basic operation of this 3D sensor is the following: the source emits the signal and, as soon as the signal hits the user's hand, it is reflected back and sampled by the photodiodes on the device.
Figure 6: Some components of the SixthSense wearable gestural interface. Source: [6]

In particular, the system measures the time of flight of the signal and can compute the 3D position of the hand based on these measurements. Supposing the hand is placed on the left side of the device (Figure 7), the first photodiode that samples the returning signal is the one on the left, and it receives the signal with the highest amplitude; the signal then arrives at the centre photodiode and finally at the right one. Thus, a centimetre-accurate 3D localization of the hand can be obtained by considering the order in which the photodiodes receive the signal and how long the return flight takes.
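
The following toy computation in Python illustrates this principle under strong simplifying assumptions: the range is derived from the round-trip time of the pulse, and a lateral cue from the relative photodiode amplitudes. The real Mime module fits a calibrated parametric model to the sampled waveforms [2]; none of the constants below come from the paper.

C = 3.0e8  # speed of light in m/s

def estimate_hand_position(t_round_trip, a_left, a_centre, a_right):
    """Toy localization cue: t_round_trip is the pulse's round-trip time
    in seconds; a_* are the amplitudes sampled at the three photodiodes.
    Returns (range in metres, lateral coordinate in [-1, 1], negative
    when the left photodiode dominates, i.e. the hand is on the left)."""
    distance_m = C * t_round_trip / 2.0   # one-way range from round trip
    total = a_left + a_centre + a_right
    lateral = (a_right - a_left) / total if total > 0 else 0.0
    return distance_m, lateral

# A 4 ns round trip puts the hand about 0.6 m away; a dominant left
# amplitude yields a negative lateral coordinate (hand on the left).
print(estimate_hand_position(4e-9, a_left=0.9, a_centre=0.5, a_right=0.2))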
Moreover, simple gestures such as swipe, point-and-click, circle, and zoom in/out can be recognized by the TOF module alone. In contrast, finer shape-based gestures are detected by using the RGB camera and computer-vision algorithms. However, the latter technique does not operate on its own; the role of the TOF module is still fundamental to this system, because the 3D coordinates of the hand that it identifies are used to define a Region Of Interest (ROI) around the hand. The computer-vision algorithms are then applied only to the ROI rather than to the whole images captured by the RGB camera. This is a key aspect of the project, as it reduces computation and, as a consequence, power consumption. In addition, this combination leads to very high accuracy: RGB cameras alone often fail in cluttered environments and, thanks to the preceding TOF module operation and the ROI identification, the number of failures decreases significantly.
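
A minimal sketch of the ROI step follows (Python/NumPy); the window size is an arbitrary assumption, and the hand coordinates stand in for the output of the TOF module.

import numpy as np

def crop_roi(frame, hand_xy, half_size=80):
    """Crop a square Region Of Interest around the detected hand position,
    so the more expensive vision-based gesture classifier processes a
    small window instead of the full frame."""
    h, w = frame.shape[:2]
    x, y = hand_xy
    x0, x1 = max(0, x - half_size), min(w, x + half_size)
    y0, y1 = max(0, y - half_size), min(h, y + half_size)
    return frame[y0:y1, x0:x1]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
roi = crop_roi(frame, hand_xy=(320, 240))        # 160x160 crop around the hand

Processing only the ROI is what keeps the computer-vision stage cheap enough for a battery-powered HWD.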
In conclusion, Mime is a promising system that allows a free form of interaction and is suitable for everyday-life scenarios thanks to its insensitivity to daylight, the cheapness of its components, and its limited encumbrance (all the sensors needed are embedded, so the size depends on the smart glass design).
Interaction conclusion
The first section of this paper has discussed some methods that could be used to interact with smart glasses. The one that plays the most significant role in this overview is Mime, as it matches many of the requirements that have to be considered when designing such systems; moreover, it is the only project among those introduced that proposes a complete, unobtrusive and cheap approach that could soon be adopted to control smart glasses during everyday tasks. The main drawback of Mime is that it can recognize only one-hand gestures.
PRIVACY AND SOCIAL IMPLICATIONS
The second section of this paper concerns privacy and social implications related to smart glasses. These topics have already attracted significant media attention, and the research trends related to them are growing. It is reasonable to suppose that, in the future, many of us will wear some kind of HWD, always connected to the Internet and helping us with everyday tasks by augmenting our sight and cognition.
However, it is interesting to note the results of two surveys conducted in the United States by YouGov, an Internet-based market researcher: the first study [7] revealed that 59% of participants would not buy and wear Google Glass, and the second study [8] showed that 54% of participants would not feel comfortable interacting with someone wearing Google Glass. A different survey, conducted recently by the wireless technology firm CSR, found that 72% of participants would only buy wearable devices if they look good, and 67% stated that devices need to fit their personal style. It is essential to mention that these surveys are not scientific studies and the conditions under which they were conducted are unknown; as a consequence, these data must be considered carefully in a scientific context. In any case, they have been reported here because they are an interesting starting point for some of the topics explained hereinafter, such as the dependence of social implications on geographic area, and acceptance.
Perceived drawbacks
As briefly introduced in the previous paragraph, and as reported by some Google Glass Explorers in accounts like [9], it emerges that many people in regions like the USA and Europe are sceptical about smart glasses, and Google Glass in particular. The main reasons that worry these people are explained below.

Figure 7: Components and basic operation of the TOF module. Source: [2]

Figure 9: Sensor data visualization for a hand on the left. The red, green, and blue curves are the responses at the left, centre, and right photodiodes, respectively. Source: [2]
Acceptance. In many cases people still do not accept smart glasses' appearance and find such devices flashy and awkward, so much so that they would feel ashamed wearing them. Furthermore, most people averse to smart glasses find that these appliances cause a disturbance while interacting, and would not be ready to accept them in everyday scenarios of social interaction.
Security. Smart glasses can be exploited to threaten the security of their users. In particular, many applications on such devices provide immersive feedback; as a consequence, if malicious applications get installed, they may be able to deceive users about the real world. Moreover, if users wear such devices at home, thieves could even build a 3D indoor model of users' houses should they come into possession of the images recorded by the camera [10].
Health. Some people are worried about wearing an always-connected device close to their brain for a long time, above all if such devices include cellular modules; although this concern would need to be supported by scientific results, it is significant to mention it because it is perceived as a real issue and may hinder the diffusion of HWDs. Additionally, wearing or carrying devices such as smart glasses and smartphones throughout the whole day tempts users to reduce their effort and fatigue, since they can optimize their actions by using ad-hoc applications or web services; a significant and simple example is the possibility of getting the shortest path to a place quickly and effectively, in particular on Google Glass. More generally, the rise of Ubiquitous Computing increases sedentariness due to the diffusion and constant progress of smart houses and smart offices that automatically adjust and control the environment; as a consequence, users are no longer required to stand up to turn lights, washing machines, ovens, heating systems, etc. on or off.
Personal data and privacy
The biggest threat that people perceive concerns privacy: the ability of smart glasses, and Ubiquitous Computing devices in general, to gather huge amounts of data about users, as well as to record anyone and anything, is seen as a subtle way to violate one's privacy.
Firstly, it is interesting to mention some examples of the data, called personal data, that a smart glass may be able to collect [11]: preferences and tastes about anything, by analysing the browser history and bookmarks and by tracking online and (thanks to the camera) real-life purchases; habits in terms of activities and places, by exploiting sensors and Internet connectivity that provide the user's location and can identify some of the tasks the user performs; the user's mood, anything they look at in a particular scene, and how they react to something they see, thanks to eye tracking and the analysis of gestures and speech; and opinions, political and religious beliefs, gender, name, wage, bank details, text messages, calls, emails, pictures, etc.
It is then worth looking at what privacy is; two different and significant definitions are considered here. The first comes from Warren and Brandeis [12] and defines privacy as "the right to be let alone" and a "general right to the immunity of the person, the right to one's personality"; this definition reflects the American way of perceiving privacy, that is, "the right to freedom from intrusions by the state, especially in one's own home" [13]. The second definition describes privacy as "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others" [14]; this second statement is closer to the European way of perceiving privacy, which basically consists of the rights to one's image, name, reputation, and informational self-determination. The latter definition is the more relevant in this overview, since it stresses the need for data protection.
Having introduced the concepts of personal data and privacy, it is interesting to explain what privacy issues people perceive. Such issues have been raised every time a revolutionary technological innovation arose; for instance, when the first Kodak portable cameras were invented in the late 19th century, they were initially forbidden on beaches and in other public places, since people largely felt that these appliances invaded "the sacred precincts of private and domestic life" [15]. In recent years, mobile and ubiquitous devices have started to face similar issues, and the reasons are presented in the next subsection.
Access to our data
As already briefly mentioned, one of the biggest threats that people perceive is the ability of smart devices to collect very large amounts of data. In particular, this happens because Ubiquitous Computing is interested in ordinary actions rather than special events; users' actions are therefore monitored continuously during their everyday life. These data are then analysed with Data Mining techniques to extract behavioural patterns and preferences, which is very useful for providing both individuals and society in general with tailored services. These analyses lead to some issues because, even if devices are secure, i.e. do not leak data, and service providers are trusted, users do not know who can legitimately access their data and what their data are used for. In fact, a European study conducted in 2011 [16] revealed that only 18% of participants felt in complete control of their data.
That being so, it is worth investigating who usually accesses our data for the purposes explained in the previous paragraph: governments, to segment the population and understand its needs, with the final purpose of providing proper services; financial institutions and banks, when they have to decide whether to lend customers money; employers, in order to check potential employees' habits, behaviours, lifestyles, etc.; law enforcement officials who need to investigate; and companies, to understand how they should do business with us and, above all, to target advertisements. Advertisement targeting is one of the most significant and popular of the purposes mentioned above; Google, for instance, scans the content of Gmail users' emails in order to target advertisements and, given that all data recorded by Google Glass will be stored on Google servers, it is likely that the same will happen with data coming from such smart glasses [17].
Having introduced people's worries and the parties that typically access our personal data, it is essential to present another central point: users accept terms and conditions when they buy or start to use such devices, as well as when they install applications or access services for the first time. By doing this, they give these parties the right to access their data in a legitimate way. Even though these conditions are sometimes difficult for users to understand and do not always mention explicitly all the parties that have access to users' data, the law does not respond, since there is neither illegitimate access nor direct injury, i.e. no very intimate data are accessed and users' dignity and reputation are not damaged.
Loss of control and skills
This subsection presents different ways in which users may perceive a loss of control over data and their choices when using smart glasses. The first is related to the issue most dreaded by the public, that is, being captured or recorded by strangers during ordinary life: along the street, in public spaces, in shops and restaurants, etc. This is an end-user vs. end-user problem: although smart glasses users have intentionally decided to somehow trade their privacy to use such devices, bystanders who are recorded by them have not made the same choice. In particular, this drawback derives from the fact that people do have expectations of anonymity when they are in public; the ability of smart glasses to capture and record is considered really worrying because of the small size of these devices and the potential possibility of identifying the subjects in the frame. However, it is essential to specify that almost all over the world, with some exceptions (e.g. Hungary, concerts, movie theatres), capturing and recording anyone and anything in public space is legal even without the consent of the subjects; in private places, by contrast, the owner has the right to allow or forbid these actions. Even though Google Glass is not designed to be an always-recording device, and its recording/capturing frequency and conditions are comparable to those of today's smartphones, it is likely that smart glasses will lead people to change their social and public behaviour. In fact, people will perceive smart glasses as always recording and, as a result, they will limit their actions and suppress some forms of expressiveness, with significant social implications.
Furthermore, many smart devices that make some decisions autonomously in place of the user, in some cases including smart glasses, may lead people to lose control of themselves and their choices. These devices are designed to proactively anticipate users' needs and take action on their behalf, so that humans can focus on higher-level tasks with less cognitive and physical effort. This happens often, for example, when services propose items users might like or need to buy, based on previous purchases and tastes. Alternatively, the device may start to bother users by notifying them that they should decrease their speed while driving and respect the limits, although they want to keep a high speed. It has to be observed, therefore, that actions taken by the device on its own may not correspond to real needs or intentions, and some corrective actions may be required and perceived as annoying by users; moreover, people's preferences change over time, so they may no longer like what services recommend and, in some cases, users could even consider devices disloyal if they act in the interest of third parties. All these factors may cause cognitive dissonance, which means the device becomes psychologically obtrusive and users may no longer know what they want or need.
Additionally, heavy use of such devices may lead to the loss of skills, abilities and knowledge, since people increasingly delegate basic tasks to the Internet and apps. For instance, many users no longer make any special effort to remember names, definitions, and phone numbers, to do calculations, or to orient themselves.
Benefits
Having analysed in detail some perceived drawbacks and social issues related to smart glasses, it is relevant to mention the main benefits such devices offer.
Everyday life empowered. Smart glasses augment our senses and cognition and allow us to concentrate on most of our ordinary tasks without having to interrupt them to check emails and text messages, surf the Web, get advice from friends or experts, etc. In other words, context-awareness is significantly enhanced thanks to smart glasses, and livestreaming, as well as on-the-fly translation, may become the order of the day.
Security enhanced. Even though it is paradoxical that security is both a drawback and a benefit, it is essential to note that people feel much safer when they carry mobile devices. Specifically, smart glasses increase this perception of safety since they can be controlled via voice and gestures; therefore, emergency calls or status updates on social networks can be triggered quickly. Moreover, directions that appear instantly, directly in front of the eyes, make users feel safer while travelling. In addition, security is also improved in a more general sense, because smart glasses may allow users to see their passwords as plain text on the small screen, assuming that no one else is able to see what is on it; this relieves users of having to remember their passwords, so much longer and stronger ones can be set as a consequence, for instance to operate at ATMs.
Scientific progress. One relevant example is progress in the field of surgery, which has already experimented with augmented reality in general, and with Google Glass in particular, to let surgeons be advised during operations by other experts from all over the world [18].


Privacy and social implications conclusion
To conclude the second section, and the overview in general, it is interesting to observe that cultural background plays a fundamental role when social implications and the concept of privacy are considered. In contrast to the data presented at the beginning of this section about openness to buying and wearing Google Glass and acceptance in the USA, Indian people are really enthusiastic about smart glasses and their potential, and not worried about privacy concerns at all; this may be a consequence of India's very high population density, and it shows how people may not care about being captured or recorded within a huge crowd, probably because identifying and tracking a single individual there is very difficult.
Moreover, worries about privacy and social implications also depend on users' age; it was really surprising to read about two grandparents who tried Google Glass and were very excited and satisfied with it. They even proposed interesting applications, like medicine consumption tracking and livestreaming to get advice while gardening; they also said that they would like to receive such a device and use it every day [9].
To sum up, what emerges from this overview is that the biggest issues related to smart glasses are well known: similar matters were raised when, for instance, camera phones became popular [19], as well as when phones started to identify users' locations [20]; yet both of these features are commonly used in ordinary life today, and almost no one has given up mobile devices because of them. In fact, expectations of privacy change after devices have been used for a while: users only need to get accustomed to them and then, once they realize they feel safer and empowered, they no longer worry about privacy matters.

REFERENCES
1. Statista. Google Glass annual sales forecast from 2014 to 2018. Retrieved 2 May 2014. http://www.statista.com/statistics/259348/google-glass-annual-sales-forecast/
2. Colaco, A., Kirmani, A., Hyang, H. S., Gong, N., Schmandt, C., Goyal, V. K. Mime: Compact, low power 3D gesture sensing for interaction with head mounted displays. UIST 2013: 227-236.
3. Serrano, M., Ens, B., Irani, P. Exploring the use of hand-to-face input for interacting with head-worn displays. CHI '14.
4. Gustafson, S., Rabe, B., Baudisch, P. Understanding palm-based imaginary interfaces: The role of visual and tactile cues when browsing. CHI '13.
5. Borg, G. Borg's perceived exertion and pain scales. Champaign, IL, US: Human Kinetics, 1998.
6. Mistry, P. SixthSense. Retrieved 2 May 2014. http://www.pranavmistry.com/projects/sixthsense/
7. YouGov. Would you consider buying and wearing Google Glasses? Retrieved 2 May 2014. http://www.statista.com/statistics/259368/likelihood-of-buying-and-wearing-google-glasses/
8. YouGov. Would you feel comfortable interacting with someone wearing Google Glasses? Retrieved 2 May 2014. http://www.statista.com/statistics/259368/likelihood-of-buying-and-wearing-google-glasses/
9. Kunze, K. A week with Glass. Retrieved 2 May 2014. http://kaikunze.de/posts/a-week-with-glass/
10. Roesner, F., Kohno, T., Molnar, D. Security and privacy for augmented reality systems. CACM, 2013.
11. Brey, P. Freedom and privacy in ambient intelligence. Ethics and Information Technology, 2005.
12. Warren, S., Brandeis, L. The right to privacy. Harvard Law Review, 1890.
13. Whitman, J. The two western cultures of privacy: Dignity versus liberty. Yale Law Journal, Volume 113, Page 1151, 2004.
14. Westin, A. Privacy and freedom. New York: Atheneum, 1970.
15. Hong, J. Considering privacy issues in the context of Google Glass. CACM, Volume 56, Issue 11, Pages 10-11, 2013.
16. European Commission. Why do we need an EU data protection reform? Retrieved 2 May 2014. http://ec.europa.eu/justice/data-protection/document/review2012/factsheets/1_en.pdf
17. Electronic Privacy Information Center. Google Glass and privacy. Retrieved 2 May 2014. https://epic.org/privacy/google/glass/default.html
18. Kim, L. Google Glass delivers new insight during surgery. Retrieved 2 May 2014. http://www.ucsf.edu/news/2013/10/109526/surgeon-improves-safety-efficiency-operating-room-google-glass
19. BBC News. Camera phones threat to privacy. Retrieved 2 May 2014. http://news.bbc.co.uk/2/hi/technology/4017225.stm
20. Holson, M. L. Privacy lost: These phones can find you. The New York Times. Retrieved 2 May 2014. http://www.nytimes.com/2007/10/23/technology/23mobile.html?_r=0
