Last week at Singapore International Cyber Week and the ETSI Security Conferences, the international community gathered to discuss the cybersecurity hot topics of the day. Amidst a number of important discussions, we want to highlight progress on connected device security demonstrated by joint industry principles for IoT security transparency. The future of connected devices offers tremendous potential for innovation and quality-of-life improvements. Putting a spotlight on consumer IoT security is a key aspect of achieving these benefits. Marketplace competition can be an important driver of security improvements, with consumers empowered and motivated to make informed purchasing decisions based on device security.
As with other IoT security transparency initiatives globally, it was great to see this topic covered at both conferences. The IoT security labeling principles below are aimed at improving consumer awareness and fostering marketplace competition based on security.
To help consumers make an informed purchase decision, they should receive clear, consistent, and actionable information about the security of the device (e.g., security support period, authentication support, cryptographic assurance) before purchase. This communication and transparency mechanism is commonly referred to as “a label” or “labeling,” although the communication is not merely a printed sticker on physical product packaging. While an IoT label will not solve the problem of IoT security on its own, transparency can both help educate consumers and facilitate the coordination of security responsibilities between all of the components in a connected device ecosystem.
Our goal is to strengthen the security of IoT devices and ecosystems to protect individuals and organizations, and to unleash the full future benefit of IoT. Security labeling programs can support consumer purchase decisions that drive security improvements, but only if the label is credible, actionable, and easily understood. We are hopeful that the public sector and industry can work together to drive harmonized policies that achieve this goal.
Signed,
Google
ARM
Assa Abloy
Finite State
HackerOne
Keysight
NXP
OpenPolicy
Rapid7
Schlage
Silicon Labs
Mobile devices have supercharged our modern lives, helping us do everything from purchasing goods in store and paying bills online to storing financial data, health records, passwords and pictures. According to Data.ai, the pandemic accelerated existing mobile habits, with app categories like finance growing 25% year-over-year and users spending over 100 billion hours in shopping apps. It’s now more important than ever that this data is protected so that bad actors can’t access it.
Powering up Google Play Protect
Google Play Protect is built-in, proactive protection against malware and unwanted software and is enabled on all Android devices with Google Play services. Google Play Protect scans 125 billion apps daily to help protect you from malware and unwanted software. If it finds a potentially harmful app, Google Play Protect can take certain actions such as sending you a warning, preventing an app install, or disabling the app automatically.
To evade detection by services like Play Protect, cybercriminals are using novel malicious apps available outside of Google Play to infect more devices with polymorphic malware, which can change its identifiable features. They’re turning to social engineering to trick users into doing something dangerous, such as revealing confidential information or downloading a malicious app from ephemeral sources, most commonly via links to download malicious apps or via downloads directly through messaging apps.
For this reason, Google Play Protect has always also offered users protection outside of Google Play. It checks your device for potentially harmful apps regardless of the install source, whether you’re online or offline. Previously, when installing an app, Play Protect conducted a real-time check and warned users when it identified an app known to be malicious from existing scanning intelligence, or identified as suspicious by our on-device machine learning, similarity comparisons, and other techniques that we are always evolving.
Today, we are making Google Play Protect’s security capabilities even more powerful with real-time scanning at the code level to combat novel malicious apps. Google Play Protect will now recommend a real-time app scan when installing apps that have never been scanned before to help detect emerging threats.
Scanning will extract important signals from the app and send them to the Play Protect backend infrastructure for a code-level evaluation. Once the real-time analysis is complete, users will get a result letting them know if the app looks safe to install or if the scan determined the app is potentially harmful. This enhancement will help better protect users against malicious polymorphic apps that are altered, using various methods such as AI, to avoid detection.
Our security protections and machine learning algorithms learn from each app submitted to Google for review; we look at thousands of signals and compare app behavior. Google Play Protect is constantly improving with each identified app, allowing us to strengthen our protections for the entire Android ecosystem.
This enhancement to Google Play Protect has started to roll out to all Android devices with Google Play services in select countries, starting with India, and will expand to all regions in the coming months.
Our Multi-Layered User Protections on Android
Android takes a multi-layered defense approach to help keep you safe from mobile malware and unwanted software. Android’s built-in proactive and advanced user protections like Google Play Protect, ongoing security updates, app permission controls, Safe Browsing, and more – alongside spam and phishing protection in Messages by Google and Gmail – work together to help protect your data security and privacy. We are constantly improving this multi-layered approach to find new ways to protect our billions of users.
Keeping Android users safe is a top priority. We are committed to working with our ecosystem partners and app developer community to improve the security of apps and combat malware and unwanted software to make Android even more secure.
In July 2023, four Googlers from the Enterprise Security and Access Security organizations developed SpeakACL, a tool aimed at revolutionizing the way Googlers interact with Access Control Lists. This tool, awarded the Gold Prize during Google’s internal Security & AI Hackathon, allows developers to create or modify security policies using simple English instructions rather than having to learn system-specific syntax or complex security principles. This can save security and product teams hours of time and effort, while helping to protect the information of their users by reducing permitted access in line with the principle of least privilege.
Google requires developers and owners of enterprise applications to define their own access control policies, as described in BeyondCorp: The Access Proxy. We have invested in reducing the difficulty of self-service ACL and ACL test creation to encourage these service owners to define least-privilege access control policies. However, it is still challenging to concisely transform their intent into the language accepted by the access control engine. Additional complexity is added by the variety of engines, and corresponding policy definition languages, that target different access control domains (e.g., websites, networks, RPC servers).
To adequately implement an access control policy, service developers are expected to learn various policy definition languages and their associated syntax, in addition to sufficiently understanding security concepts. This takes time away from core development work and is not an efficient use of developer time. A solution was needed to remove these challenges so developers can focus on building innovative tools and products.
We built a prototype interface for interactively defining and modifying access control policies for the BeyondCorp access control engine using the PaLM 2 Large Language Model (LLM). We used Google Colab to provide the model with a diverse, highly variable dataset using in-context learning and fine-tuning. In-context learning allows the model to learn from a dataset of examples that are relevant to the task at hand, which we provided via few-shot learning. Fine-tuning allows the model to be adapted to a specific task by adjusting its parameters. Tuning the model with a diverse labeled dataset that we curated for this task allowed us to improve its ability to generate ACLs that are both syntactically accurate and adhere to the principle of least privilege.
With SpeakACL, and other tools leveraging AI in security, it is always recommended to take a conservative approach to the autonomy you give an AI agent. To ensure our model outputs are correct and safe to use, we combined our tool with the safeguards that already exist at Google for all access policy modifications:
Request LGTM from a teammate to ensure that the intent of the proposed change is correct.
Automated Risk Assessment occurs on proposed security policy changes at Google.
Manual Review by Security Engineers is performed on changes not assessed as low risk to ensure compliance with security policies and guidelines.
Linting, unit tests, and integration tests ensure that the access control language syntax is correct, and that the change does not break any expected access or permit unexpected access.
While progress in AI is impressive, it is crucial that we as an industry continue to prioritize safety while navigating the landscape. Beyond adding checks to syntactically and semantically verify access policies produced by our model, we also designed safeguards against sensitive information disclosure, data leaking, prompt injection, and supply chain vulnerabilities to make sure our model performs at the highest level of security.
SpeakACL is an ACL generation tool that has the potential to revolutionize the way access policies are created and managed. The efficiency, security, and ease of use achieved by this AI-powered ACL generation engine reflects Google’s ongoing commitment to leveraging AI across domains to develop cutting-edge products and infrastructure.
To that end, we have rewritten the Android Virtualization Framework’s protected VM (pVM) firmware in Rust to provide a memory-safe foundation for the pVM root of trust. This firmware performs a similar function to a bootloader, and was initially built on top of U-Boot, a widely used open source bootloader. However, U-Boot was not designed with security in a hostile environment in mind, and numerous security vulnerabilities have been found in it due to out-of-bounds memory access, integer underflow, and memory corruption. Its VirtIO drivers in particular had a number of missing or problematic bounds checks. We fixed the specific issues we found in U-Boot, but by leveraging Rust we can avoid these sorts of memory-safety vulnerabilities in the future. The new Rust pVM firmware was released in Android 14.
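As a rough illustration of the class of bug involved (this is a sketch, not code from the actual firmware), consider an index taken from an untrusted VirtIO descriptor:

```rust
/// A minimal sketch, not actual firmware code: reading from a buffer
/// at an index supplied by an untrusted descriptor.
fn read_entry(buffer: &[u8], untrusted_index: usize) -> Option<u8> {
    // `get` makes the bounds check explicit and recoverable. Even a
    // plain `buffer[untrusted_index]` would panic on an out-of-bounds
    // index rather than silently reading adjacent memory, as the
    // equivalent unchecked C indexing could.
    buffer.get(untrusted_index).copied()
}
```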
As part of this effort, we contributed back to the Rust community by using and contributing to existing crates where possible, and publishing a number of new crates as well. For example, for VirtIO in pVM firmware we’ve spent time fixing bugs and soundness issues in the existing virtio-drivers crate, as well as adding new functionality, and are now helping maintain this crate. We’ve published crates for making PSCI and other Arm SMCCC calls, and for managing page tables. These are just a start; we plan to release more Rust crates to support bare-metal programming on a range of platforms. These crates are also being used outside of Android, such as in Project Oak and the bare-metal section of our Comprehensive Rust course.
Many engineers have been positively surprised by how productive and pleasant Rust is to work with, providing nice high-level features even in low-level environments. The engineers working on these projects come from a range of backgrounds, and our Comprehensive Rust course has helped experienced and novice programmers alike quickly come up to speed. Anecdotally, the Rust type system (including the borrow checker and lifetimes) helps avoid mistakes that are easily made in C or C++, such as leaking pointers to stack-allocated values out of scope.
One of our bare-metal Rust course attendees had this to say:
"types can be built that bring in all of Rust's niceties and safeties and yet still compile down to extremely efficient code like writes of constants to memory-mapped IO."
97% of attendees that completed a survey agreed the course was worth their time.
Device drivers are often written in an object-oriented fashion for flexibility, even in C. Rust traits, which can be seen as a form of compile-time polymorphism, provide a useful high-level abstraction for this. In many cases this can be resolved entirely at compile time, with no runtime overhead of dynamic dispatch via vtables or structs of function pointers.
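As a sketch of the pattern (the trait and device type here are illustrative, not taken from our drivers), a driver interface can be expressed as a trait and consumed through generics:

```rust
// Illustrative only: a serial-port abstraction expressed as a trait.
trait Serial {
    fn write_byte(&mut self, b: u8);

    // A default method built on top of the required one.
    fn write_str(&mut self, s: &str) {
        for &b in s.as_bytes() {
            self.write_byte(b);
        }
    }
}

// A hypothetical UART device implementing the trait.
struct Uart;

impl Serial for Uart {
    fn write_byte(&mut self, _b: u8) {
        // A real driver would write to the device's data register here.
    }
}

// Generic over any `Serial` implementation: monomorphized at compile
// time, so there is no vtable or function-pointer indirection.
fn log_boot_message<S: Serial>(serial: &mut S) {
    serial.write_str("booting...\n");
}

fn main() {
    let mut uart = Uart;
    log_boot_message(&mut uart);
}
```

Swapping the generic bound for `&mut dyn Serial` would opt back into runtime dispatch in the cases where flexibility matters more than the dispatch cost.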
There have been some challenges. Safe Rust’s type system is designed with an implicit assumption that the only memory the program needs to care about is allocated by the program (be it on the stack, the heap, or statically), and only used by the program. Bare-metal programs often have to deal with MMIO and shared memory, which break this assumption. This tends to require a lot of unsafe code and raw pointers, with limited tools for encapsulation. There is some disagreement in the Rust community about the soundness of references to MMIO space, and the facilities for working with raw pointers in stable Rust are currently somewhat limited. The stabilization of offset_of, slice_ptr_get, slice_ptr_len, and other nightly features will improve this, but it is still challenging to encapsulate cleanly. Better syntax for accessing struct fields and array indices via raw pointers without creating references would also be helpful.
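As one example of the encapsulation that is possible today, here is a minimal sketch (everything here is illustrative; a real register address would come from the platform’s memory map) of hiding volatile MMIO accesses behind a small API, with the unsafety concentrated in the constructor:

```rust
use core::ptr;

/// Illustrative wrapper around a single 32-bit MMIO register.
struct Register(*mut u32);

impl Register {
    /// # Safety
    ///
    /// `addr` must point to a valid, device-backed MMIO register that
    /// is not aliased by any Rust reference for the wrapper's lifetime.
    unsafe fn new(addr: *mut u32) -> Self {
        Register(addr)
    }

    fn read(&self) -> u32 {
        // SAFETY: validity is guaranteed by the contract of `new`.
        // A volatile access through a raw pointer avoids ever creating
        // a reference to MMIO space.
        unsafe { ptr::read_volatile(self.0) }
    }

    fn write(&mut self, value: u32) {
        // SAFETY: as above.
        unsafe { ptr::write_volatile(self.0, value) }
    }
}
```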
The concurrency introduced by interrupt and exception handlers can also be awkward, as they often need to access shared mutable state but can’t rely on being able to take locks. Better abstractions for critical sections will help somewhat, but there are some exceptions that can’t practically be disabled, such as page faults used to implement copy-on-write or other on-demand page mapping strategies.
Another issue we’ve had is that some unsafe operations, such as manipulating the page table, can’t be encapsulated cleanly as they have safety implications for the whole program. Usually in Rust we are able to encapsulate unsafe operations (operations which may cause undefined behaviour in some circumstances, because they have contracts which the compiler can’t check) in safe wrappers where we ensure the necessary preconditions so that it is not possible for any caller to cause undefined behaviour. However, mapping or unmapping pages in one part of the program can make other parts of the program invalid, so we haven’t found a way to provide a fully general safe interface to this. It should be noted that the same concerns apply to a program written in C, where the programmer always has to reason about the safety of the whole program.
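In practice the best we have found is to document the contract and leave the operation unsafe. A sketch, with a hypothetical page-table type, of what that looks like:

```rust
/// Hypothetical page-table type, for illustration only.
pub struct PageTable { /* root table pointer, etc. */ }

impl PageTable {
    /// Unmaps the page containing `va`.
    ///
    /// # Safety
    ///
    /// The caller must guarantee that nothing anywhere in the program
    /// accesses the unmapped range afterwards. This is a whole-program
    /// property that no local check can establish, which is why the
    /// method cannot be offered behind a safe wrapper.
    pub unsafe fn unmap(&mut self, va: usize) {
        let _ = va;
        // A real implementation would clear the entry and invalidate
        // the TLB here.
    }
}
```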
Some people adopting Rust for bare-metal use cases have raised concerns about binary size. We have seen this in some cases; for example, our Rust pVM firmware binary is around 460 kB compared to 220 kB for the earlier C version. However, this is not a fair comparison as we also added more functionality which allowed us to remove other components from the boot chain, so the overall size of all VM boot chain components was comparable. We also weren’t particularly optimizing for binary size in this case; speed and correctness were more important. In cases where binary size is critical, compiling with size optimization, being careful about dependencies, and avoiding Rust’s string formatting machinery in release builds usually yields results comparable to C.
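For reference, a size-oriented Cargo release profile (these are standard Cargo options; the right combination is project-specific and worth measuring) might look like:

```toml
# Cargo.toml: a size-oriented release profile.
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # cross-crate inlining and dead-code elimination
codegen-units = 1   # trade compile time for better optimization
panic = "abort"     # drop the unwinding machinery
strip = true        # strip symbols from the binary
```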
Architectural support is another concern. Rust is generally well supported on the Arm and RISC-V cores that we see most often, but support for more esoteric architectures (for example, the Qualcomm Hexagon DSP included in many Qualcomm SoCs used in Android phones) can be lacking compared to C.
Overall, despite these challenges and limitations, we’ve still found Rust to be a significant improvement over C (or C++), both in terms of safety and productivity, in all the bare-metal use cases where we’ve tried it so far. We plan to use it wherever practical.
As well as the work in the Android Virtualization Framework, the team working on Trusty (the open-source Trusted Execution Environment used on Pixel phones, among others) have been hard at work adding support for Trusted Applications written in Rust. For example, the reference KeyMint Trusted Application implementation is now in Rust. And there’s more to come in future Android devices, as we continue to use Rust to improve security of the devices you trust.
In 2020, we launched a novel format for our vulnerability reward program (VRP) with the kCTF VRP and its continuation, kernelCTF. For the first time, security researchers could get bounties for n-day exploits even if they didn’t find the vulnerability themselves. This format proved valuable in improving our understanding of the most widely exploited parts of the Linux kernel. Its success motivated us to expand it to new areas, and we're now excited to announce that we're extending it to two new targets: v8CTF and kvmCTF.
Today, we're launching v8CTF, a CTF focused on V8, the JavaScript engine that powers Chrome. kvmCTF is an upcoming CTF focused on Kernel-based Virtual Machine (KVM) that will be released later in the year.
As with kernelCTF, we will be paying bounties for successful exploits against these platforms, n-days included. This is on top of any existing rewards for the vulnerabilities themselves. For example, if you find a vulnerability in V8 and then write an exploit for it, it can be eligible under both the Chrome VRP and the v8CTF.
We're always looking for ways to improve the security posture of our products, and we want to learn from the security community to understand how they will approach this challenge. If you're successful, you'll not only earn a reward, but you'll also help us make our products more secure for everyone. This is also a good opportunity to learn about technologies and gain hands-on experience exploiting them.
Besides learning about exploitation techniques, we’ll also leverage this program to experiment with new mitigation ideas and see how they perform against real-world exploits. For mitigations, it’s crucial to assess their effectiveness early on in the process, and you can help us battle test them.
How do I participate?
First, make sure to check out the rules for v8CTF or kvmCTF. These pages contain up-to-date information about the types of exploits that are eligible for rewards, as well as the limits and restrictions that apply.
Once you have identified a vulnerability present in our deployed version, exploit it and grab the flag. It doesn’t even have to be a 0-day!
Send us the flag by filling out the form linked in the rules and we’ll take it from there.
We're looking forward to seeing what you can find!
SMS texting is frozen in time.
People still use and rely on trillions of SMS texts each year to exchange messages with friends, share family photos, and copy two-factor authentication codes to access sensitive data in their bank accounts. It’s hard to believe that at a time when technologies like AI are transforming our world, a forty-year-old mobile messaging standard is still so prevalent.
Like any forty-year-old technology, SMS is antiquated compared to its modern counterparts. That’s especially concerning when it comes to security.
The World Has Changed, But SMS Hasn’t Changed With It
According to a recent whitepaper from Dekra, a safety certifications and testing lab, the security shortcomings of SMS notably expose users to a range of attacks.
These findings add to the well-established facts about SMS’ weaknesses, lack of encryption chief among them.
Dekra also compared SMS against a modern secure messaging protocol and found it lacked any built-in security functionality: according to Dekra, SMS users can’t answer ‘yes’ to any of the basic security questions a modern messaging protocol is designed to address.
But this isn’t just theoretical: cybercriminals have also caught on to the lack of security protections SMS provides and have repeatedly exploited its weaknesses. Both novice hackers and advanced threat actor groups (such as UNC3944 / Scattered Spider and APT41, investigated by Mandiant, part of Google Cloud) leverage the security deficiencies in SMS to launch different types of attacks against users and corporations alike.
Malicious cyber attacks that exploit the insecurity of SMS have resulted in identity theft, personal or corporate financial losses, unauthorized access to accounts and services, and worse.
Users Care About Messaging Security and Privacy Now More Than Ever
Both iOS and Android users understand the importance of security and privacy when sending and receiving messages, and now, they want more protection than what SMS can provide.
A new YouGov study examined how device users across platforms think and feel about SMS texting as well as their desire for more security to protect their text messages.
It’s Time to Move on From SMS
The security landscape as it relates to SMS is simple: the continued evolution of the mobile ecosystem will depend on users' ability to trust and feel safe, regardless of the phone they may be using. The security of the mobile ecosystem is only as strong as its weakest link and, unfortunately, SMS texting is both a large and weak link in the chain, largely because texts between iPhones and Androids revert to SMS.
As a mobile ecosystem, we collectively owe it to all users, across platforms, to enable them to be as safe as possible. It’s a shame that a problem like texting security remains as prominent as it is, particularly when new protocols like RCS are well-established and would drastically improve security for everyone. Today, most global carriers and over 500 Android device manufacturers already support RCS and RCS is enabled by default on Messages by Google. However, whether the solution is RCS or something else, it’s important that our industry moves towards a solution to a problem that should have been fixed before the smartphone era ever began.
Android 14 is the third major Android release with Rust support, and we are already seeing a number of benefits.
These positive early results provided an enticing motivation to increase the speed and scope of Rust adoption. We hoped to accomplish this by investing heavily in training to expand from the early adopters.
Early adopters are often willing to accept more risk to try out a new technology. They know there will be some inconveniences and a steep learning curve but are willing to learn, often on their own time.
Scaling up Rust adoption required moving beyond early adopters. For that, we needed to ensure a baseline level of comfort and productivity within a set period of time. An important part of our strategy for accomplishing this was training. Unfortunately, the type of training we wanted to provide simply didn’t exist, so we made the decision to write and implement our own Rust training.
We set three goals for the training.
With those three goals as a starting point, we looked at the existing material and available tools.
Documentation is a key value of the Rust community and there are many great resources available for learning Rust. First, there is the freely available Rust Book, which covers almost all of the language. Second, the standard library is extensively documented.
Because we knew our target audience, we could make stronger assumptions than most material found online. We created the course for engineers with at least 2–3 years of coding experience in either C, C++, or Java. This allowed us to move quickly when explaining concepts familiar to our audience, such as “control flow,” “stack vs heap,” and “methods.” People with other backgrounds can learn Rust from the many other resources freely available online.
For free-form documentation, mdBook has become the de facto standard in the Rust community. It is used for official documentation such as the Rust Book and Rust Reference.
A particularly interesting feature is the ability to embed executable snippets of Rust code. This is key to making the training engaging since the code can be edited live and executed directly in the slides:
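For instance, in the course’s Markdown source, adding the editable attribute to a Rust fence is all it takes to turn a snippet into a live playground (the snippet below is a stand-in, not one of the course’s actual examples):

````markdown
```rust,editable
fn main() {
    // Attendees can edit this code in the slides and re-run it live.
    println!("Hello, Comprehensive Rust!");
}
```
````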
In addition to being a familiar community standard, mdBook offers other important features, such as the ability to test every code snippet in the course with mdbook test. These features made it easy for us to choose mdBook. While mdBook is not designed for presentations, the output looked OK on a projector when we limited the vertical size of each page.
Android has developers and OEM partners in many countries. It is critical that they can adapt existing Rust code in AOSP to fit their needs. To support translations, we developed mdbook-i18n-helpers. Support for multilingual documentation has been a community wish since 2015 and we are glad to see the plugins being adopted by several other projects to produce maintainable multilingual documentation for everybody.
With the technology and format nailed down, we started writing the course. We roughly followed the outline from the Rust Book since it covered most of what we needed to cover. This gave us a course, which we called Rust Fundamentals, designed to run for three days at five hours a day and encompass Rust syntax, semantics, and important concepts such as traits, generics, and error handling.
We then extended Rust Fundamentals with three deep dives: Rust in Android, Bare-metal Rust, and Concurrency in Rust.
A large set of in-house and community translators have helped translate the course into several languages. Full translations are available in Brazilian Portuguese and Korean, and we are working on Simplified Chinese and Traditional Chinese translations as well.
We started teaching the class in late 2022. In 2023, we hired a vendor, Immunant, to teach the majority of classes for Android engineers. This was important for scalability and for quality: dedicated instructors soon discovered where the course participants struggled and could adapt the delivery. In addition, over 30 Googlers have taught the course worldwide.
More than 500 Google engineers have taken the class. Feedback has been very positive: 96% of participants agreed it was worth their time. People consistently told us that they loved the interactive style, highlighting how it helped to be able to ask clarifying questions at any time. Instructors noted that people gave the course their undivided attention once they realized it was live. Live-coding demands a lot from the instructor, but it is worth it due to the high engagement it achieves.
Most importantly, people left the course able to be immediately productive with Rust in their day jobs. When participants were asked three months later, they confirmed that they were able to write and review Rust code. This matched the results from the much larger survey we ran in 2022.
We have been teaching Rust classes at Google for a year now. There are a few things that we want to improve: better topic ordering, more exercises, and more speaker notes. We would also like to extend the course with more deep dives. Pull requests are very welcome!
The full course is available for free at https://google.github.io/comprehensive-rust/. We are thrilled to see people starting to use Comprehensive Rust for classes around the world. We hope it can be a useful resource for the Rust community and that it will help both small and large teams get started on their Rust journey!
We are grateful to the 190+ contributors from all over the world who created more than 1,000 pull requests and issues on GitHub. Their bug reports, fixes, and feedback improved the course in countless ways. This includes the 50+ people who worked hard on writing and maintaining the many translations.
Special thanks to Andrew Walbran for writing Bare-metal Rust and to Razieh Behjati, Dustin Mitchell, and Alexandre Senges for writing Concurrency in Rust.
We also owe a great deal of thanks to the many volunteer instructors at Google who have been spending their time teaching classes around the globe. Your feedback has helped shape the course.
Finally, thanks to Jeffrey Vander Stoep, Ivan Lozano, Matthew Maurer, Dmytro Hrybenko, and Lars Bergstrom for providing feedback on this post.
When you import a third party library, do you review every line of code? Most software packages depend on external libraries, trusting that those packages aren’t doing anything unexpected. If that trust is violated, the consequences can be huge—regardless of whether the package is malicious, or well-intended but using overly broad permissions, such as with Log4j in 2021. Supply chain security is a growing issue, and we hope that greater transparency into package capabilities will help make secure coding easier for everyone.
Avoiding bad dependencies can be hard without appropriate information on what the dependency’s code actually does, and reviewing every line of that code is an immense task. Every dependency also brings its own dependencies, compounding the need for review across an expanding web of transitive dependencies. But what if there was an easy way to know the capabilities (the privileged operations accessed by the code) of your dependencies?
Capslock is a capability analysis CLI tool that informs users of privileged operations (like network access and arbitrary code execution) in a given package and its dependencies. Last month we published the alpha version of Capslock for the Go language, which can analyze and report on the capabilities that are used beneath the surface of open source software.
This CLI tool will provide deeper insights into the behavior of dependencies by reporting code paths that access privileged operations in the standard libraries. In upcoming versions we will add support for open source maintainers to prescribe and sandbox the capabilities required for their packages, highlighting to users what capabilities are present and alerting them if they change.
Vulnerability management is an important part of your supply chain security, but it doesn’t give you a full picture of whether your dependencies are safe to use. Adding capability analysis to your security posture gives you a better idea of the types of behavior you can expect from your dependencies, identifies potential weak points, and allows you to make a more informed choice about using a given dependency.
Capslock is motivated by the belief that the principle of least privilege—the idea that access should be limited to the minimal set that is feasible and practical—should be a first-class design concept for secure and usable software. Applied to software development, this means that a package should be allowed access only to the capabilities that it requires as part of its core behaviors. For example, you wouldn’t expect a data analysis package to need access to the network or a logging library to include remote code execution capabilities.
Capslock is initially rolling out for Go, a language with a strong security commitment and fantastic tooling for finding known vulnerabilities in package dependencies. When Capslock is used alongside Go’s vulnerability management tools, developers can use the additional, complementary signals to inform how they interpret vulnerabilities in their dependencies.
These capability signals can be used to:
Find code with the highest levels of access to prioritize audits, code reviews and vulnerability patches
Compare potential dependencies, or look for alternative packages when an existing dependency is no longer appropriate
Surface unwanted capability usage in packages to uncover new vulnerabilities or identify supply chain attacks in progress
Monitor for unexpected emerging capabilities due to package version or dependency changes, and even integrate capability monitoring into CI/CD pipelines
Filter vulnerability data to respond to the most relevant cases, such as finding packages with network access during a network-specific vulnerability alert
We are looking forward to adding new features in future releases, such as better support for declaring the expected capabilities of a package, and extending to other programming languages. We are working to apply Capslock at scale and make capability information for open source packages broadly available in various community tools like deps.dev.
You can try Capslock now, and we hope you find it useful for auditing your external dependencies and making informed decisions on your code’s capabilities.
We’ll be at Gophercon in San Diego on Sept 27th, 2023—come and chat with us!