Integrating security into your app development lifecycle can save a lot of time, money, and risk. That’s why we’ve launched Security by Design on Google Play Academy to help developers identify, mitigate, and proactively protect against security threats.
The Android ecosystem, including Google Play, has many built-in security features that help protect developers and users. The course Introduction to app security best practices takes these protections one step further by helping you take advantage of additional security features you can build into your app. For example, Jetpack Security helps developers properly encrypt their data at rest and provides only safe and well-known algorithms for encrypting Files and SharedPreferences. The SafetyNet Attestation API helps identify potentially dangerous usage patterns. There are several common design vulnerabilities that are important to look out for, including shared or improper file storage, insecure protocols, unprotected components such as Activities, and more. The course also provides methods to test your app in order to help you keep it safe after launch. Finally, you can set up a Vulnerability Disclosure Program (VDP) to engage security researchers to help.
In the next course, you can learn how to integrate security at every stage of the development process by adopting the Security Development Lifecycle (SDL). The SDL is an industry-standard process, and in this course you’ll learn the fundamentals of setting up a program, securing executive sponsorship, and integrating the SDL into your development lifecycle.
Threat modeling is part of the Security Development Lifecycle, and in this course you will learn to think like an attacker to identify, categorize, and address threats. By doing so early in the design phase of development, you can identify potential threats and start planning for how to mitigate them at a much lower cost and create a more secure product for your users.
Improving your app’s security is a never-ending process. Sign up for the Security by Design module, where in a few short courses you will learn how to integrate security into your app development lifecycle, model potential threats, build app security best practices into your app, and avoid potential design pitfalls.
The Android team has been working on introducing the Rust programming language into the Android Open Source Project (AOSP) since 2019 as a memory-safe alternative for platform native code development. As with any large project, introducing a new language requires careful consideration. For Android, one important area was assessing how to best fit Rust into Android’s build system. Currently this means the Soong build system (where the Rust support resides), but these design decisions and considerations are equally applicable for Bazel when AOSP migrates to that build system. This post discusses some of the key design considerations and resulting decisions we made in integrating Rust support into Android’s build system.
A RustConf 2019 meeting on Rust usage within large organizations highlighted several challenges, such as the risk that eschewing Cargo in favor of using the Rust Compiler, rustc, directly (see next section) may remove organizations from the wider Rust community. We share this same concern. When changes to imported third-party crates might be beneficial to the wider community, our goal is to upstream those changes. Likewise when crates developed for Android could benefit the wider Rust community, we hope to release them as independent crates. We believe that the success of Rust within Android is dependent on minimizing any divergence between Android and the Rust community at large, and hope that the Rust community will benefit from Android’s involvement.
Rust provides Cargo as the default build system and package manager, collecting dependencies and invoking rustc (the Rust compiler) to build the target crate (Rust package). In Android, Soong takes this role instead and calls rustc directly.
Using the Rust compiler directly gives us the most control over the build process, eases integration into Android’s existing build system, and is consistent with how we compile all other code in AOSP. Unfortunately, avoiding Cargo introduces several challenges and influences many other build system decisions, because Cargo usage is so deeply ingrained in the Rust crate ecosystem.
A build.rs script compiles to a Rust binary which Cargo builds and executes during a build to handle pre-build tasks, commonly setting up the build environment or building libraries in other languages (for example, C/C++). This is analogous to the configure scripts used for other languages.
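For illustration only (a minimal sketch, not code from AOSP), a typical build.rs that builds a bundled C file might look like the following; the cc build-dependency and the src/helper.c file are assumptions for this example:

// build.rs (sketch): compile a bundled C helper before the crate is built.
// Assumes a `cc` build-dependency and a hypothetical src/helper.c.
fn main() {
    cc::Build::new()
        .file("src/helper.c")
        .compile("helper"); // emits libhelper.a and instructs Cargo to link it
    // Re-run this script only when the C source changes.
    println!("cargo:rerun-if-changed=src/helper.c");
}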
Avoiding build.rs scripts somewhat flows naturally from not relying on Cargo, since supporting them would require replicating Cargo behavior and assumptions. Beyond this, however, there are good reasons for AOSP to avoid build scripts as well: they are one-off code that executes arbitrary logic on the build host, which makes them harder to review and to sandbox than declarative build definitions.
For instances in third-party code where a build script is used only to compile C dependencies, we either use existing cc_library Soong definitions (such as boringssl for quiche) or create new definitions for crate-specific code.
When a build.rs is used to generate source, we try to replicate the core functionality in a Soong rust_binary module for use as a custom source generator, as sketched below. In other cases, where Soong can provide the information without source generation, we may carry a small patch that leverages this information.
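As a rough sketch of that pattern (names and output are invented for illustration), such a source generator is just a small host binary that writes Rust source to a path supplied by the build system:

// Sketch of a host rust_binary used as a custom source generator.
// The Soong module wiring that invokes it is omitted.
use std::{env, fs};

fn main() {
    let out = env::args().nth(1).expect("usage: gen <output.rs>");
    // The generated contents are a placeholder for this example.
    fs::write(&out, "pub const GENERATED: u32 = 42;\n").expect("failed to write generated source");
}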
Why do we support proc_macros, which are compiler plug-ins that execute code on the host within the compiler context, but not build.rs scripts?
While build.rs code is written as one-off code to handle building a single crate, proc_macros define reusable functionality within the compiler which can become widely relied upon across the Rust community. As a result, popular proc_macros are generally better maintained and more scrutinized upstream, which makes the code review process more manageable. They are also more readily sandboxed as part of the build process, since they are less likely to have dependencies external to the compiler.
proc_macros are also a language feature rather than a method for building code. These are relied upon by source code, are unavoidable for third-party dependencies, and are useful enough to define and use within our platform code. While we can avoid build.rs by leveraging our build system, the same can’t be said of proc_macros.
There is also precedent for compiler plugin support within the Android build system; for example, see Soong’s java_plugin modules.
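To make the distinction concrete, here is a minimal function-like proc_macro sketch (the macro name and its expansion are invented for illustration); unlike a build.rs script, it runs inside the compiler and expands into code that the depending crate then compiles:

// lib.rs of a proc-macro crate (declared as a proc-macro library to the build system).
use proc_macro::TokenStream;

#[proc_macro]
pub fn make_hello(_input: TokenStream) -> TokenStream {
    // Expands to a function definition at the call site.
    "fn hello() -> &'static str { \"hello from a proc_macro\" }"
        .parse()
        .unwrap()
}

A depending crate would invoke make_hello!(); and then call the generated hello() like any other function.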
Unlike C/C++ compilers, rustc only accepts a single source file representing an entry point to a binary or library. It expects that the source tree is structured such that all required source files can be automatically discovered. This means that generated source either needs to be placed in the source tree or provided through an include directive in source:
include!("/path/to/hello.rs");
The Rust community depends on build.rs scripts, alongside assumptions about the Cargo build environment, to get around this limitation. When building, the cargo command sets an OUT_DIR environment variable in which build.rs scripts are expected to place generated source code. This source can then be included via:
include!(concat!(env!("OUT_DIR"), "/hello.rs"));
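The generating side of this convention is a build.rs script along the lines of the following sketch (the file name and generated contents are hypothetical):

// build.rs (sketch): write generated source into Cargo's OUT_DIR so the
// crate can include!() it as shown above.
use std::{env, fs, path::Path};

fn main() {
    let out_dir = env::var("OUT_DIR").expect("OUT_DIR is set by Cargo");
    let dest = Path::new(&out_dir).join("hello.rs");
    fs::write(&dest, "pub fn hello() -> &'static str { \"hello\" }\n")
        .expect("failed to write generated source");
}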
This presents a challenge for Soong as outputs for each module are placed in their own out/ directory²; there is no single OUT_DIR where dependencies output their generated source.
For platform code, we prefer to package generated source into a crate that can be imported, rather than relying on include! of files from a shared output directory. Importing generated source as a crate keeps its interaction with the rest of the code explicit, whereas include! works by textual inclusion and can interact with the enclosing namespace in ways that are difficult to maintain³.
As a result, all of Android’s Rust source generation module types produce code that can be compiled and used as a crate.
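For example (the crate name here is an assumption for illustration), a platform module then consumes the generated code as an ordinary dependency rather than textually including it:

// Consumer sketch: `hello_generated` stands in for a crate produced by a
// source-generator module; it is imported like any other dependency instead
// of being include!()-ed from OUT_DIR.
use hello_generated::hello;

fn main() {
    println!("{}", hello());
}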
We still support third-party crates without modification by copying all of a module’s generated source dependencies into a single per-module directory, similar to Cargo. Soong then sets the OUT_DIR environment variable to that directory when compiling the module so the generated source can be found. However, we discourage use of this mechanism in platform code unless absolutely necessary, for the reasons described above.
By default, the Rust ecosystem assumes that crates will be statically linked into binaries. The usual benefits of dynamic libraries are upgrades (whether for security or functionality) and decreased memory usage. Rust’s lack of a stable binary interface and its usage of cross-crate information flow prevent upgrading libraries without upgrading all dependent code. Even when the same crate is used by two different programs on the system, it is unlikely to be provided by the same shared object⁴ due to the precision with which Rust identifies its crates. This makes Rust binaries more portable but also results in larger disk and memory footprints.
This is problematic for Android devices where resources like memory and disk usage must be carefully managed because statically linking all crates into Rust binaries would result in excessive code duplication (especially in the standard library). However, our situation is also different from the standard host environment: we build Android using global decisions about dependencies. This means that nearly every crate is shareable between all users of that crate. Thus, we opt to link crates dynamically by default for device targets. This reduces the overall memory footprint of Rust in Android by allowing crates to be reused across multiple binaries which depend on them.
Since this is unusual in the Rust community, not all third-party crates support dynamic compilation. Sometimes we must carry small patches while we work with upstream maintainers to add support.
We support building all output types supported by rustc (rlibs, dylibs, proc_macros, cdylibs, staticlibs, and executables). Rust modules can automatically request the appropriate crate linkage for a given dependency (rlib vs dylib). C and C++ modules can depend on Rust cdylib or staticlib producing modules the same way as they would for a C or C++ library.
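For instance, a Rust library meant to be consumed from C or C++ as a cdylib or staticlib simply exposes C-ABI symbols; the function below is a made-up example, and the corresponding C/C++ declaration is an ordinary extern "C" prototype:

// lib.rs (sketch): a C-callable symbol that a C or C++ module can link
// against once this crate is built as a cdylib or staticlib.
#[no_mangle]
pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
    a + b
}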
In addition to being able to build Rust code, Android’s build system also provides support for protobuf, gRPC, and AIDL generated crates. First-class bindgen support makes interfacing with existing C code simple, and we support modules using cxx for tighter integration with C++ code.
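As an illustration of the cxx-based integration (a sketch only; the header path and function are assumptions, and the C++ side plus generated glue are omitted), a bridge module declares the C++ functions that Rust may call:

// Sketch of a cxx bridge. In Android the C++ side would typically be a
// library module that the Rust module depends on.
#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        include!("example/include/counter.h");
        fn increment(value: i32) -> i32;
    }
}

fn main() {
    println!("{}", ffi::increment(41));
}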
The Rust community produces great tooling for developers, such as the language server rust-analyzer. We have integrated support for rust-analyzer into the build system so that any IDE which supports it can provide code completion and goto definitions for Android modules.
Source-based code coverage builds are supported to give platform developers high-level signals on how well their code is covered by tests. Benchmarks are supported as their own module type, leveraging the criterion crate to provide performance metrics. To maintain a consistent style and level of code quality, a default set of clippy and rustc lints is enabled. Additionally, HWASAN/ASAN fuzzers are supported, with the HWASAN rustc support added upstream.
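A benchmark body looks much like a standard criterion benchmark; this is a minimal sketch with a throwaway function:

// Sketch of a criterion benchmark; the function being measured is arbitrary.
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn bench_fib(c: &mut Criterion) {
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, bench_fib);
criterion_main!(benches);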
In the near future, we plan to add documentation to source.android.com on how to define and use Rust modules in Soong. We expect Android’s support for Rust to continue evolving alongside the Rust ecosystem and hope to continue to participate in discussions around how Rust can be integrated into existing build systems.
Thank you to Matthew Maurer, Jeff Vander Stoep, Joel Galenson, Manish Goregaokar, and Tyler Mandry for their contributions to this post.
1. This can be mitigated to some extent with workspaces, but requires a very specific directory arrangement that AOSP does not conform to.
2. This presents no problem for C/C++ and similar languages, as the path to the generated source is provided directly to the compiler.
3. Since include! works by textual inclusion, it may reference values from the enclosing namespace, modify the namespace, or use constructs like #![foo]. These implicit interactions can be difficult to maintain. Macros should be preferred if interaction with the rest of the crate is truly required.
4. While libstd would usually be shareable for the same compiler revision, most other libraries would end up with several copies for Cargo-built Rust binaries, since each build would attempt to use a minimum feature set and may select different dependency versions for the library in question. Since information propagates across crate boundaries, you cannot simply produce a “most general” instance of that library.
Chrome 90 for Windows adopts Hardware-enforced Stack Protection, a mitigation technology to make the exploitation of security bugs more difficult for attackers. This is supported by Windows 20H1 (December Update) or later, running on processors with Control-flow Enforcement Technology (CET) such as Intel 11th Gen or AMD Zen 3 CPUs. With this mitigation the processor maintains a new, protected, stack of valid return addresses (a shadow stack). This improves security by making exploits more difficult to write. However, it may affect stability if software that loads itself into Chrome is not compatible with the mitigation. Below we describe some exploitation techniques that are mitigated by stack protection, discuss its limitations and what we will do next to address them. Finally, we provide some quick tips for other software authors as they enable /cetcompat for their Windows applications.
Imagine a simple use-after-free (UAF) bug where an attacker can induce a program to call a pointer of their choosing. Here the attacker controls an object which occupies space formerly used by another object, which the program erroneously continues to use. The attacker sets a field in this region, used by the program as a function pointer, to the address of code the attacker would like to execute. Years ago an attacker could simply write their shellcode to a known location, then, in their overwrite, set the instruction pointer to this shellcode. In time, Data Execution Prevention was added to prevent stacks or heaps from being executable.
In response, attackers invented Return Oriented Programming (ROP). Here, attackers take advantage of the process’s own code, as that must be executable. With control of the stack (either to write values there, or by changing the stack pointer) and control of the instruction pointer, an attacker can use the `ret` instruction to jump to a different, useful, piece of code.
During an exploit attempt, the instruction pointer is changed so that, instead of its normal destination, a small fragment of code called a ROP gadget is invoked. These gadgets are selected so that they do something useful (such as preparing a register for a function call) and then return.
These tiny fragments need not be a complete function in the normal program, and could even be found part-way through a legitimate instruction. By lining up the right set of “return” addresses, a chain of these gadgets can be called, with each gadget’s `ret` switching to the next gadget. With some patience, or the right tooling, an attacker can piece together the arguments to a function call, then really call the function.
Chrome has a multi-process architecture: a main browser process acts as the logged-in user, and spawns restricted renderer and utility processes to host website code. This isolation reduces the severity of a bug in a renderer as its process cannot do much by itself. Attackers will then attempt to use a second bug to escape the sandbox and run code in the browser process, which lets them act as the logged-in user. As libraries are mapped at the same address in different processes by Windows, any bug that allows an attacker to read memory is enough for them to examine Chrome’s binary and any loaded libraries for ROP gadgets. This makes preventing ROP chains in the browser process especially useful as a mitigation.
Enter stack protection. Along with the existing stack, the CPU maintains a shadow stack. This stack cannot be directly manipulated by normal program code and only stores return addresses. The CALL instruction is modified to push a return address (the instruction after the CALL) to both the normal stack and the shadow stack. The RET (return) instruction still takes its return address from the normal stack, but now verifies that it is the same as the one stored in the shadow stack region. If it is, then the program is left alone and it continues to work as it always did. If the addresses do not match, an exception is raised, which is intercepted by the operating system (not by Chrome). The operating system has an opportunity to modify the shadow region and allow the program to continue, but in most cases an address mismatch is the result of a program error, so the program is immediately terminated.
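To make the mechanism concrete, here is a toy model of the matched push/pop behavior (purely illustrative; it is not how the hardware, Windows, or Chrome implement anything):

// Toy model: CALL pushes the return address to both stacks; RET checks that
// the two copies still agree before using it.
struct Cpu {
    stack: Vec<u64>,        // writable by the program (and by memory-corruption bugs)
    shadow_stack: Vec<u64>, // protected: only CALL/RET may modify it
}

impl Cpu {
    fn call(&mut self, return_addr: u64) {
        self.stack.push(return_addr);
        self.shadow_stack.push(return_addr);
    }

    fn ret(&mut self) -> u64 {
        let addr = self.stack.pop().expect("stack underflow");
        let shadow = self.shadow_stack.pop().expect("shadow stack underflow");
        if addr != shadow {
            // In reality the CPU raises a control-protection fault that the
            // operating system handles, usually by terminating the process.
            panic!("return address mismatch: possible ROP attempt");
        }
        addr
    }
}

fn main() {
    let mut cpu = Cpu { stack: Vec::new(), shadow_stack: Vec::new() };
    cpu.call(0x1000);
    *cpu.stack.last_mut().unwrap() = 0xdead_beef; // attacker overwrites the on-stack return address
    cpu.ret(); // mismatch detected; the "process" is terminated
}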
In our example above, the attacker will be able to make their initial jump into a ROP gadget, but on trying to return to their next gadget they will be stopped.
Some software may be incompatible with this mechanism, especially some older security software that injects into a process and hooks operating system functions by overwriting the function prologue with `rax = &hook; push rax; ret`.
Chrome does not yet support every direction of control flow enforcement. Stack protection enforces the reverse-edge of the call graph but does not constrain the forward-edge. It will still be possible to make indirect jumps around existing code as stack protection is only validated when a return instruction is encountered, and call targets are not validated. On Windows a technology called Control Flow Guard (CFG) can be used to verify the target of an indirect function call before it is attempted. This prevents calling into the middle of a function, significantly reducing the scope of useful instructions for attackers to use. Another approach is provided by Intel’s CET which includes an ENDBRANCH instruction to prevent jumps into arbitrary code locations. Memory tagging tools such as MTE can be used to make it more difficult to modify pointers to valid code sequences (and makes UAFs more difficult in general). We are working to introduce CFG to Chrome for Windows, and will add other techniques over time.
By itself, stack protection can be bypassed in some contexts. For instance, stack protection does not prevent an attacker tricking a program into calling an existing function by entirely replacing an object containing a function pointer. This approach does not involve ROP as the function call happens instead of the expected call, and returns to the address it was originally called from, so must be allowed. However, the called function must be useful to an attacker, and most functions will not be. An example of an attack using this method is to craft a call to add the `--no-sandbox` argument to Chrome’s command line. This results in future renderers being launched without normal protections. Over time we will identify and remove such useful tools.
In the renderer, for performance reasons, our JavaScript and WebAssembly engines may use memory that is both writable and executable at the same time. This allows an attacker to modify code that V8 is already going to execute, saving them the trouble of constructing a ROP chain. This explains why it was not our first priority to make V8 CET-compatible, and why stack protection is not yet enabled in the renderer.
Finally, stack protection doesn’t stop the bugs in the first place. Everything we have discussed above is a mitigation that makes it more difficult to execute arbitrary code. If a programming error allows arbitrary writes then it is very unlikely that we can prevent this being used to run arbitrary code. Attackers will adapt and find new ways to turn memory safety errors into code execution.
You can see if Hardware-enforced Stack Protection is enabled for a process using the Windows Task Manager. Open Task Manager, open the Details tab, right-click on a column heading, choose Select Columns, and check the Hardware-enforced Stack Protection box. The process display will then indicate whether a process is opted in to this mitigation. ‘Compatible Modules Only’ indicates that any dll marked as /cetcompat at build time will raise an exception if a return address is invalid.
You can see which Chrome processes are opted-out of CET by consulting the Mitigations field of chrome://sandbox and clicking ‘+’. All processes are included unless the mitigation CET_USER_SHADOW_STACKS_ALWAYS_OFF is present in the expanded details view.
If you are developing software, or debugging a problem in Chrome the shadow stack can be helpful as it includes only return addresses, and these cannot be corrupted by rogue writes elsewhere in the process. To see these registers use the `r` command in windbg with the mask option:
0:159> rM 8002
rax=00000000c000060a rbx=000000fa5bbfeff0 rcx=0000000000000030
rdx=0000000000000000 rsi=00007ffba4118924 rdi=000000fa5bbff1a0
rip=00007ffc1847b4a1 rsp=000000fa5bbfc0a0 rbp=000000fa5bbfc0a0
r8=000000fa5bbfc098 r9=0000000000000000 r10=0000000000000000
r11=0000000000000246 r12=000000fa5bbfe230 r13=000002c3450b5830
r14=000002c3450b7850 r15=000000fa5bbfc260
iopl=0 nv up ei pl zr na po nc
ssp=000000fa5c3fef10 cetumsr=0000000000000001
`ssp` points to the shadow stack region; `cetumsr` indicates whether CET is enabled for the process.
You can then see the call stack within the shadow region using `dps @ssp`. Values are not overwritten so you can also see where you came from by looking a bit deeper: `dps @ssp-20`.
If a process is not compatible with Hardware-enforced Stack Protection, the system event log (Application log) will include brief error reports (Id: 1001). You can filter those related to cetcompat using the following PowerShell snippet:
Get-WinEvent -MaxEvents 128 -FilterHashtable @{ LogName='Application'; Id='1001' } `
| Where-Object {$_.Message -match 'chrome.exe'} `
| Select-Object -First 8 `
| fl
These will include the following parameters:
P1: application.exe
P2: application version
P3: application build timestamp
P4: faulting module .dll
P5: faulting module version
P6: faulting module build timestamp
P7: faulting offset in P4 from base_address
P8: exception code (c0000409)
P9: subcode (00...000030)
If Chrome is misbehaving and you think it might be because of cetcompat, it is possible to disable it using Image File Execution Options - we do not recommend this except for a limited period of testing. If you find you have to do this, please raise an issue on https://crbug.com so that we can investigate the failure.
/cetcompat is enabled for most processes for Chrome M90 on Windows. Enabling Hardware-enforced Stack Protection will layer with existing and future measures to make exploitation more difficult and so more expensive for an attacker, ultimately protecting the people who use Chrome every day.