1. Introduction
According to the Economist [1], smartphones have become the fastest-selling gadgets in history, outselling personal computers (PCs) four to one. Today, about half the adult population owns a smartphone; by 2020, 80% will. Mobile and smart device vendors are increasingly augmenting their products with various types of sensors, such as the Hall sensor, accelerometer, NFC (Near-Field Communication), heart rate sensor and iris scanner, which are connected to each other through the Internet of Things (IoT). We have observed that around 10 new sensors have been added to, or have become popular in, mainstream mobile devices in less than two years, bringing the number of mobile sensors to more than 30. Examples include FaceID, Active Edge, the depth camera (using infrared), thermal camera, air sensor, laser sensor, haptic sensor, iris scanner, heart rate sensor and body sensors.
Sensors are added to mobile and other devices to make them smart: to sense the surrounding environment and infer aspects of the context of use, and thus to facilitate more meaningful interactions with the user. Many of these sensors are used in popular mobile apps such as fitness trackers and games. Mobile sensors have also been proposed for security purposes, e.g., authentication [2,3], authorization [4], device pairing [5] and secure contactless payment [6]. However, malicious access to sensor streams provides an installed app running in the background with an exploit path. Researchers have shown that user PINs and passwords can be disclosed through sensors such as the camera and microphone [7], the ambient light sensor [8] and the gyroscope [9]. Sensors such as NFC can also be misused to attack financial payments [10].
In our previous research [11,12,13,14], we have shown that the sensor management problem is spreading from apps to browsers. We proposed and implemented the first JavaScript-based side channel attack revealing a wide range of sensitive information about users, such as the timing of phone calls, physical activities (sitting, walking, running, etc.), touch actions (click, hold, scroll and zoom) and PINs on mobile phones. In this attack, the JavaScript code embedded in the attack web page listens to the motion and orientation sensor streams without needing any permission from the user. By analysing these streams via machine learning algorithms, the attack infers the user's touch actions and PINs with an accuracy of over 70% on the first try. This research attracted considerable international media coverage (springeropen.altmetric.com/details/18717318/news), including the Guardian [15] and the BBC [16], which underlines the importance of the topic. We disclosed the identified vulnerability to the industry. While working with the W3C and browser vendors (Google Chromium, Mozilla Firefox, Apple, etc.) to fix the problem, we came to appreciate the complexity of the sensor management problem in practice and the challenge of balancing security, usability and functionality.
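The attack above exploits the fact that browsers deliver motion data to any page through standard JavaScript events, with no permission prompt. The following is a minimal sketch of the data-collection step only; the feature extraction and classifiers in the actual attack are far more elaborate, and the `accelMagnitude` helper here is purely illustrative, not the authors' pipeline.

```javascript
// Compute the acceleration magnitude for one sensor reading
// (illustrative feature; a real attack extracts many such features).
function accelMagnitude(reading) {
  const { x, y, z } = reading;
  return Math.sqrt(x * x + y * y + z * z);
}

// Collect a window of readings; a classifier would then run on such windows.
// No permission is requested: 'devicemotion' fires for any listening page.
const samples = [];
if (typeof window !== "undefined") {
  window.addEventListener("devicemotion", (event) => {
    const a = event.accelerationIncludingGravity;
    if (a) samples.push(accelMagnitude(a));
  });
}
```

On a phone, touch actions such as tapping a PIN digit perturb these readings enough for machine learning models to distinguish them, which is what makes the unguarded event stream a side channel.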
Through a series of user studies over the years [13,14], we concluded that mobile users are generally not familiar with most sensors. In addition, we observed a significant disparity between the actual and perceived risk levels of sensors. In another work [17], Crager et al. reached the same conclusion for motion sensors. In [14], we discussed how this observation, along with other factors, renders many academic and industry solutions ineffective at managing mobile sensors. Given that sensors are going beyond mobile devices, e.g., in a variety of IoT devices in smart homes and cities, the sensor security problem has already attracted more attention not only from researchers, but also from hackers. In view of all this, we believe there is much room for more focus on people's awareness and education about the privacy and security issues of sensor technology.
Previous research [14,17] has relied on individual user studies to examine the human aspects of sensor security. In this paper, we present the results of a more advanced teaching method (working with sensor-enabled apps) on the risk level that users associate with the PIN discovery scenario for all sensors. We report the results of two interactive workshops that we organized on mobile sensor security. These workshops covered the following: an introduction to mobile sensors and their applications, working with sensor-enabled mobile apps, an introduction to the security and privacy issues of mobile sensors and an overview of how to manage app permissions on different mobile platforms.
In these workshops, the participants sat in groups and were introduced to mobile sensors by working with sensor-enabled apps. Throughout the workshops, we asked the participants to fill in a few forms in order to evaluate their general knowledge of mobile sensors, as well as their perceived risk levels for these sensors after they understood their functionalities. After analysing these self-declared forms, we also measured the correlation between the knowledge of and perceived risk level for mobile sensors. The results showed that getting to know sensors by working with sensor-enabled apps did not immediately improve the users' inference of the actual risks of these sensors. However, other factors, such as prior general knowledge about these sensors and their risks, had a strong impact on the users' perception. We also taught the participants ways to audit their apps and their permissions, including per app vs. per permission. Our participants found both models useful in different ways. Our findings show that when mobile users are provided with reasonable choices and intuitive teaching, they can readily direct themselves towards improving their security and privacy.
In Section 2, we first list the available sensors on mobile devices and categorise them. Then, we present the current permission policies for these sensors on Android, iOS and mobile web browsers. In Section 3, we present the structure of the workshops in full detail. Section 4 includes our analysis of the general knowledge and perceived risk levels that our participants had for sensors, and their correlation. Section 5 presents our observations of the app and permission review activities in the workshops. In Section 6, we present a list of our recommendations to different stakeholders. Finally, in Section 7 and Section 8, we include limitations, future work and the conclusion.
5. Apps and Permissions Review
In the final part of the workshop, we asked our participants to review the permissions of some of the pre-installed apps on their devices through the settings. In this part, the participants had the opportunity to go beyond sensor security and investigate access by apps to all sorts of mobile OS resources (Figure 3, left). In the second workshop, this activity was done in two forms, per app vs. per permission, as we explain later.
5.1. Reviewing Permissions Per App
The participants in both workshops picked a wide variety of apps to investigate their permissions, ranging from system apps to social networking, gaming, banking, shopping and discount apps. In most cases, they could successfully identify the functionality of the app and whether or not it had reasonable permissions. However, in some cases, the participants felt unsure about the permissions. The users' decisions to uninstall an app, limit its access or leave it as it was varied across users and apps for various reasons, as we explain here.
Uninstalling: Some of our participants expressed their willingness to uninstall certain apps since they were over-privileged. In the comment section, the participants gave various reasons, including: they do not really need the app, they can replace it by using a web browser, they do not understand the necessity of the permission and/or they are concerned about their security and privacy. For example, after one of our participants discovered the permissions already given to a shopping app (camera, contacts, location, storage and telephone), she expressed: “It does not need those things- uninstalled!”. Similarly, another participant could easily infer that a discount app should not be able to modify/delete the SD card and decided to remove it. In some cases, extra permissions without explanation upset our participants, leading them to remove the app. For example, a participant stated that he had not known how many permissions some of his apps, such as a university app, had, and would uninstall them since he was “not happy with the fact that this app uses contacts”. Another participant stated: “I don’t see why the BBC needs access to my location”, and decided to remove it.
Disabling/limiting access: There were cases where participants could identify the risk of extra permissions granted to apps but, instead of uninstalling them, chose to disable certain accesses or limit them to while using the app. For instance, one participant observed that if she disabled access to contacts, storage and telephone, Spotify would still work. The same approach was taken by another participant, who limited FM Radio’s access to the microphone and storage and LinkedIn’s access to the camera, microphone, storage and location, and continued using them. Another participant said that she would occasionally turn off location on Twitter, e.g., if she were on holiday. In another example, one of the participants commented: “[I] would remove photos and camera permissions but still use [Uber] app”. Some participants commented that they changed location access to while using in some apps such as Google Maps and Trainline.
Leaving as before: In some cases, our participants reviewed the app permissions and found them reasonable and not risky. For example, when one of our participants found out that a parking payment app had access to the camera, she commented: “Camera [is] used to take pictures of payment cards”. Another comment was on a messaging app that had a variety of permissions; the user said: “[this app] needs those permissions to fully work”. Another participant said his taxi booking app uses location in the while using mode, and he thought it was “secure and functional”.
In some other cases, our participants could identify over-privileged apps, but decided to leave the apps and their permissions as before. They expressed various reasons for this decision. For example, one participant chose to continue using a discount app saying that “[I’m] not that concerned that it has access to photos”. Another participant said she would not uninstall a sleep monitoring app since “I find it useful for self-tracking. I don’t worry about people having access to that particular information [microphone, motion and fitness, mobile data] about me.” In another case, while our participant could list the extra permissions of a fitness app, she said she would not uninstall it since: “I am addicted to it”. Another participant refused to uninstall a pedometer app expressing: “[I] don’t see the need for [access to] contacts and storage, but [I would] still use [it] as other apps ask for the same [permissions].” Another attendee listed camera, contacts and location as Groupon’s (extra) permissions and commented: “[The app’s] benefits outweigh threats”. Another example is when one of our participants spotted that a university app uses location and stated: “I trust it and I frequently need it”.
Overall, we observed that this activity (app permission review) helped our participants successfully identify over-privileged apps. However, different users chose to react differently. This decision-making process appeared to be affected by general mental models such as the ubiquity of the app, its functionality, its advantages vs. its disadvantages, (not) being worried about sharing data, (not) being aware of any real exploitation of these permissions and trusting the app.
Through our discussions with the participants, they stated that they liked this permission review model since it gave them an overall picture of each app and its permissions. They also argued that it helped them to keep using certain apps that they enjoy while limiting particular permissions on them.
6. Recommendations to Different Stakeholders
After we presented the sensor attacks to our participants in the workshops, we observed that they were shocked by the power of motion sensors. However, when completing the app permission review activity, they could not see whether or not certain apps had access to these sensors. For example, when reviewing the permissions, one of our participants commented: “why aren’t all of the sensors on this list to review?”. Hence, even if mobile users were very well aware of the risks of these sensors to their security and privacy, since mobile apps and websites do not ask for permission for many sensors (see Table 2), they would not have the option to disable the access.
One way to fix this problem, commonly suggested by research papers, is to simply ask for permission for all sensors or sensor groups. However, this approach would introduce many usability problems: people already ignore the permission notifications required for sensitive resources such as the camera and microphone. Other solutions, such as using artificial intelligence (AI) for sensor management, have not been effectively implemented yet. We believe that more research (on both technical and human dimensions) in the field of sensor security should be carried out to contribute to this complex usable security problem. This research should be conducted in collaboration with the industry to achieve impactful results. Based on our research, we make the following recommendations:
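To illustrate what a per-sensor permission model could look like on the web, the following sketch queries the W3C Permissions API for the accelerometer. Note that `"accelerometer"` as a queryable permission name is an assumption that currently holds in Chromium-based browsers only, and the `adviceForState` helper is an illustrative name of our own, not a standard API.

```javascript
// Map a permission state to user-facing advice (illustrative helper).
function adviceForState(state) {
  if (state === "granted") return "sensor access is allowed";
  if (state === "denied") return "sensor access is blocked";
  return "the browser will prompt the user"; // state === "prompt"
}

// Query the accelerometer permission where the Permissions API exists.
// Browsers that do not recognise this permission name reject the promise.
if (typeof navigator !== "undefined" && navigator.permissions) {
  navigator.permissions
    .query({ name: "accelerometer" })
    .then((status) => console.log(adviceForState(status.state)))
    .catch(() => console.log("this browser cannot query sensor permissions"));
}
```

A model like this would give users the missing "disable" option, but, as noted above, adding prompts for every sensor risks the habituation problems already seen with camera and microphone notifications.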
Researchers and educators: Although the amount of technical research conducted on sensor security is considerable, the human dimensions of the technology, especially its educational aspects, have not been well addressed. When we asked for more comments on improving sensor security at the end of the workshop, one of the participants commented: “better education/information for smartphone users [is needed, e.g.,] on what app permissions really mean, and how [permission setting] can compromise privacy”.
We understand that the focus of technical research might not be education; hence, organizing similar workshops might not be a priority. However, apart from raising public awareness, holding such workshops for a non-technical audience is a strong medium for disseminating technical research. Part 2 of our workshop was a presentation about our research on sensor security. This part can be replaced with any other research in the field of sensor security without diminishing the workshop’s goal. Feedback from non-technical audiences can lead technical research in an impactful direction.
We have published our workshop slides for other educators and the general public. Other ways of raising public awareness include providing related articles on massive open online courses (MOOCs) and publishing user-friendly videos on YouTube. For example, we have provided two articles, entitled “Is your mobile phone spying on you?” and “Auditing your mobile app permissions”, in the Cyber Security: Safety at Home, Online, in Life online course (futurelearn.com/courses/cyber-security), part of Newcastle University’s series of MOOCs. Through our second workshop, we also witnessed that publishing research findings via public media has an impact on the general knowledge of users. We strongly encourage researchers to produce educational materials and report their experiences and findings on other aspects of sensor security.
App and web developers: Throughout our studies over the years, we have concluded that the factors contributing to users’ risk inference about technology in general, and mobile sensors in particular, are complicated. It is known that security and privacy issues are weak motivations in the adoption of apps. Therefore, app and web developers have a fundamental role in addressing this problem and delivering more secure apps to users. As discussed in [29], developers are recommended to secure tools with proven utility. Many mobile apps in app stores are “permission hungry” [21]. Many of these extra permission requests originate in code snippets that developers copy and paste into their applications without fully understanding them [30]. We advise developers not to copy code from unreliable sources into their apps; instead, they should search for stable libraries and APIs to use in their apps. Requesting only minimal permissions also leaves users with fewer security decisions to make when installing and using the app.
Moreover, explaining why the app is asking for certain permissions would improve the user experience. As an example, when one of our participants found out that a discount app had access to location, the participant commented: “Location allows me to find nearby offers- app gives explanation”. When we asked for more comments on improving sensor security at the end of the workshop, one of our participants wrote: “let the user know why permission is needed for the app to work and choose which features/permissions are reasonable”. Educating app developers about building more secure products seems to be vital and is another research topic in its own right.
Android has recently published best practices for app permissions to be followed by programmers (developer.android.com/training/permissions/usage-notes). These best practices include: only use the permissions necessary for your app to work; pay attention to permissions required by libraries; be transparent; and make system accesses explicit. These are all consistent with the expectations that our participants expressed during the two workshops.
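As a sketch of the first practice, a minimal Android manifest declares only the permission that a feature genuinely requires; the package name and permission set below are hypothetical, chosen for illustration.

```xml
<!-- Hypothetical minimal manifest: a barcode-scanning shopping app that
     requests only the camera permission and nothing else. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.scanner">
    <!-- Needed for the scanning feature; no location, contacts or storage. -->
    <uses-permission android:name="android.permission.CAMERA" />
</manifest>
```

A manifest kept this small leaves users, like our participants above, with only one permission decision to reason about.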
End users: As we observed in our studies, mobile users do not know that many apps have access to their mobile OS resources, either without asking for permission or via permissions that they ignore. In order to keep their devices safer, we advise users to follow these general security practices:
Some users tend to be lazy and careless about closing apps after finishing working with them. Close background apps and web browser tabs when you are not using them.
Some users tend to accumulate multiple apps on their devices; this is especially true for free apps. Uninstall apps you no longer need.
Vendors constantly release security patches. Keep your phone OS and apps up to date.
Installing apps from unknown sources might pose security risks. Only install applications from approved app stores, where apps are vetted comprehensively.
Scrutinise the permissions requested by apps before you install them and while using them. You can choose alternative apps with more sensible permissions if needed.
Try to audit the permissions that apps have on your device regularly via system settings.
Each of the above items can be developed by educators into educational material to be taught to mobile users.
We believe that the problem of sensor security already goes beyond mobile phones. The challenges are more serious when smart kitchens, smart homes, smart buildings and smart cities are equipped with multiple sensor-enabled devices sensing people and their environment and broadcasting this information via IoT platforms. In fact, some of our participants listed a few dedicated IoT apps when they were auditing app permissions; for example, Hive, which is described in its app description as “a British Gas innovation that creates connected products designed to give people the control they want for their homes anytime, anywhere.” This app offers a wide range of features enabling users to control their heating and hot water and their home electrical appliances, to control doors and windows, and to report whether movement is spotted inside the user’s home via, as described, “sophisticated sensors”. One of our participants using this app commented: “It allows me to control my heating/hot water to make it more efficient. I have turned off analytics and location for security. A bit concerned as if someone hacked, they could analyse when I am at home”. We know that the risks of hacking into IoT platforms go beyond knowing whether or not someone is at home; they could be harmful to people’s lives, as described in [31]. Hence, we encourage researchers to conduct more studies on the human dimensions of sensors in IoT.