Increasingly we are seeing attacks against what is now commonly referred to as the software supply chain.
One of the more notable examples in the last few months came from the Node.js package management ecosystem [1]. In this case, an attacker convinced the owner of a popular but unmaintained Node package to transfer ownership to them. The attacker then crafted a version of the package that unsuccessfully attacked Copay, a bitcoin wallet platform.
This is just one example of this class of attack; insider attacks on the software supply chain are also becoming more prevalent. When looking at this risk holistically, it is also important to realize that as deployments move to the Cloud, the lines between software and services blur.
Though not specifically an example of a Cloud deployment issue, in 2015 there was a public story about how some Facebook employees had the ability to log into users' accounts without the target user's knowledge [2]. This insider variant of the supply chain risk exists in the Cloud in a number of different areas.
Probably the most notable is in the container images provided by the Cloud provider. It is conceivable that a Cloud provider could be compelled by a government to build images that attack a specific customer or set of customers as part of an investigation, or that an employee would do so under compulsion or in service of personal interests.
This is not a new risk; in fact, management of internal and external dependencies has always been core to building secure systems. What has changed is that in the rush to the Cloud and Open Source, users have adopted the tools and resources cloud providers have built to make this migration easier without fully understanding and managing the risk they have assumed in doing so.
In response to this reality, Cloud providers are starting to provide tools to help mitigate this risk. Some examples include:
- Providing audit records of employee access to customer data and services,
- Building solutions that provide hardware-based trusted execution environments, offering some level of protection from the cloud provider itself,
- Offering hardware key management solutions provided by third parties to protect sensitive key material,
- Cryptographically signing the binaries and images they publish so that their distribution is controlled and post-production tampering can be detected.
Despite these advancements, there is still a long way to go to mitigate these risks in a holistic fashion.
One effort in this area that I am actively involved in is the adoption of the concept of Binary Transparency. This can be thought of as an evolution of legacy code signing models. In those models, a publisher signs a package with a private key associated with a public certificate of some sort that is either trusted directly based on package origin and signature (such as with GPG signatures) or authenticated based on the legal identity of the publisher of the package (as is the case with Authenticode).
These solutions, while valuable, help you authenticate a package, but they do not give you the tools to understand the history of that package. As a result, a publisher can produce packages, either accidentally or on purpose, that are malicious in nature yet signed with their “trusted keys”, and this is not detectable until it is too late.
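To make that limitation concrete, here is a minimal sketch in Go of everything a legacy code-signing check actually tells you: that the package bytes were signed by the holder of the publisher's key. The file names are hypothetical and the key type (Ed25519) is just an illustration; the point is that no part of the check speaks to which versions exist or whether anyone else received the same bytes.

```go
package main

import (
	"crypto/ed25519"
	"fmt"
	"os"
)

func main() {
	// Hypothetical inputs: the publisher's public key, the package bytes,
	// and a detached signature shipped alongside the package.
	publisherKey := ed25519.PublicKey(mustRead("publisher.pub"))
	pkg := mustRead("package.tar.gz")
	sig := mustRead("package.tar.gz.sig")

	// This is the entire check a classic code-signing model gives us:
	// the bytes were signed by the holder of the publisher's key. It says
	// nothing about which other versions exist, whether this is the latest
	// one, or whether anyone else was given different bytes.
	if ed25519.Verify(publisherKey, pkg, sig) {
		fmt.Println("signature valid: signed by the publisher's key")
	} else {
		fmt.Println("signature invalid")
		os.Exit(1)
	}
}

func mustRead(path string) []byte {
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	return b
}
```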
As an example of this risk, you only need to look at Realtek: over the years their code signing key has been compromised numerous times and used to sign malware, some of it targeted, as in the case of Stuxnet [3].
Binary Transparency addresses this risk in a few ways. At its core, Binary Transparency can be thought of as an append-only ledger listing all versions of a given binary, with each version pointing to a content-addressable store where that binary is available.
This design enables the runtime that will execute the binary to do a few things that were not previously possible. It can, for example, ensure it is running the most recent version of a binary, and only run a binary when it, and some number of previous revisions, are publicly discoverable. It also enables the relying parties of the published binaries and images to inspect all versions and potentially diff them to understand what changed.
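As a rough sketch of what such a log could look like, the Go code below models an append-only list of entries, each recording a version and the content hash under which that exact binary can be fetched from a content-addressable store, plus the runtime-side policy of refusing to run anything that is not the latest publicly logged version. The type and function names are illustrative, not from any particular implementation, and a real log would also carry a Merkle-tree commitment so auditors can verify its append-only behaviour.

```go
package transparency

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
)

// Entry is one record in the append-only log: a released version of a
// binary plus the content hash under which that exact binary can be
// fetched from a content-addressable store.
type Entry struct {
	Name        string // e.g. "example-agent" (hypothetical)
	Version     string // e.g. "1.4.2"
	ContentHash string // hex SHA-256 of the binary, also its CAS address
}

// Log is an append-only list of entries. This sketch keeps only the list
// itself; a real log would commit to it with a Merkle tree for auditors.
type Log struct {
	Entries []Entry
}

// Latest returns the most recently published entry for a binary name.
func (l *Log) Latest(name string) (Entry, bool) {
	for i := len(l.Entries) - 1; i >= 0; i-- {
		if l.Entries[i].Name == name {
			return l.Entries[i], true
		}
	}
	return Entry{}, false
}

// CheckBeforeRun is the runtime-side policy: only execute a binary if it
// matches the latest publicly logged version of that name.
func (l *Log) CheckBeforeRun(name string, binary []byte) error {
	latest, ok := l.Latest(name)
	if !ok {
		return errors.New("binary has never been published to the log")
	}
	sum := sha256.Sum256(binary)
	if hex.EncodeToString(sum[:]) != latest.ContentHash {
		return errors.New("binary does not match the latest logged version")
	}
	return nil
}
```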
When this technique is combined with the concept of reproducible builds, as provided by Go [4], along with a community of these append-only logs and auditors of those logs, you can get strong assurances (sketched in code after the list below) that:
- You are running the same version as everyone else,
- That the binary you are running is reproducible from the source you can review,
- The binary you are running has not been modified since it was published,
- That you, and others, will not run binaries or images that have not been made publicly available for inspection.
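The reproducibility assurance is the piece the log alone cannot give you. A minimal sketch of that check, assuming you have rebuilt the binary deterministically from the reviewed source and that the log records a hex SHA-256 content hash as in the earlier sketch (both assumptions of this illustration, not a prescribed format):

```go
package transparency

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
)

// VerifyReproducible ties the transparency log back to source: after
// rebuilding the binary from the reviewed source revision with a
// deterministic build, hash the result and require that it matches the
// hash recorded in the log. If the two agree, the published binary is
// exactly what the reviewed source produces.
func VerifyReproducible(loggedHash string, rebuilt []byte) error {
	sum := sha256.Sum256(rebuilt)
	if hex.EncodeToString(sum[:]) != loggedHash {
		return errors.New("rebuilt binary differs from the logged binary")
	}
	return nil
}
```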
A system with these properties disincentivizes the attacker from executing these attacks as it significantly increases the probability of being caught and helps bound the impact of any compromise.
Importantly, by doing these things, it becomes possible to increase trust in the Cloud offering, because it minimizes the amount of trust the user must place in the Cloud provider's honesty.
A recent project that implements these concepts is the Go Module Transparency project [5] [6].
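To give a feel for what the client side of that system looks like, the sketch below queries the public Go checksum database for the logged hashes of one module version over its lookup endpoint, whose layout follows the proposal [5]; the module and version chosen are just an example, and the exact URL format should be treated as an assumption of this sketch. In normal use the go command performs this lookup, and the associated Merkle proof checks, automatically when verifying go.sum entries.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Ask the public Go checksum database for the logged record of one
	// module version. The response includes the log record ID, the go.sum
	// lines for the module, and a signed tree head.
	resp, err := http.Get("https://sum.golang.org/lookup/golang.org/x/text@v0.3.2")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(body))
}
```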
Over time we will see these same techniques applied to other areas [7] [8] of the software supply chain, and with that trend, users of open source packages, automatic update systems, and the Cloud will be able to have greater peace of mind that their external dependencies are truly delivering on their promises.
- [1] Node.js Event-Stream Hack Exposes Supply Chain Security Risks
- [2] Facebook Engineers Can Access Your Account Without A Password
- [3] STUXNET Malware Targets SCADA Systems
- [4] Reproducing Go Binaries Byte-by-Byte
- [5] Proposal: Secure the Public Go Module Ecosystem
- [6] Transparent Logs for Skeptical Clients
- [7] Firefox Security/Binary Transparency
- [8] Contour: A Practical System for Binary Transparency