
Distributed crate cache #1997

Closed
brson opened this issue Sep 23, 2015 · 38 comments
Labels
A-caching Area: caching of dependencies, repositories, and build artifacts C-feature-request Category: proposal for a feature. Before PR, ping rust-lang/cargo if this is not `Feature accepted`


@brson
Contributor

brson commented Sep 23, 2015

Large-scale build farms (like Gecko's) really need distributed caching of build artifacts à la distcc/ccache. This will eventually be a blocker for continued adoption of Rust in Gecko.

We've talked about this several times but don't have even a vague design.

cc @larsberg

@larsberg

I think you want @larsbergstrom?

@brson
Contributor Author

brson commented Sep 28, 2015

Yes, thanks larsberg!

@glandium
Contributor

glandium commented Nov 5, 2015

For what it's worth: Gecko build slaves are using sccache, which is a tool kind of like ccache, except it works for gcc/clang and MSVC and uses network storage (S3). It's currently written in Python, but for multiple reasons I'm planning to rewrite it in Rust. Which makes me think there's maybe a base to share here.

@eddyb
Member

eddyb commented Feb 14, 2016

Would having a machine-local build cache be covered by this, or should I open a new issue?

@larsbergstrom

There are two issues I was curious about here:

  1. Can we make a ccache/sccache-like tool work with Rust? Can third parties just do it themselves, or is support needed in the compiler/cargo?

  2. For situations like artifact builds in Firefox (https://groups.google.com/forum/#!topic/mozilla.dev.builds/jGg69m0x6Ck), where the final binaries are retrieved from a server.

@larsbergstrom

From @ncalexan on the second issue:

@larsbergstrom this ticket is pretty broad, and the discussion of caching doesn't really say what you want to do. Artifact builds download the major compiled libraries from Mozilla's automation; the big one is libxul.so. I have only followed the rustc integration at a distance, but it's my understanding that y'all produce .o files that are linked into existing libraries (including libxul.so). If that's true, then any Rust integration into the Gecko tree should "just work" with artifact builds. (Of course, Rust developers won't be able to get the speed advantages of artifact builds, but that'll teach you to use a compiler :))

@rillian is the person I tap to keep abreast of Rust/Gecko integration progress -- perhaps he can add color here?

@rillian
Contributor

rillian commented Feb 17, 2016

Right now the Rust code gets linked into libxul, so @ncalexan is correct that Artifact builds should be unaffected.

Having cargo be able to query a crate-level build cache would be interesting, and an easier integration point for projects outside gecko.

@eddyb asked about machine-local caches. While that's not quite what we're talking about here, it would be simple enough to teach cargo to link build results into a shared cache under .cargo, like it does for sources, and a ccache/distcc-oriented interface would work for both.

@alexcrichton
Member

I had some more thoughts about this today when discussing with @wycats. I think a lot of desires could be solved with something like this:

  • If I build a similar set of dependencies in two locations on my computer, I shouldn't have to rebuild everything.
  • I have two separate Cargo projects which may have a shared crate graph, neither of which produces a final artifact. I then create a later project which depends on the two. Each project builds independently, and then the main project wants to link them together without rebuilding everything.
  • Build farms, where I want to distribute my build across many machines or simply reuse previously cached results from a build on another machine.

My thinking of how we'd implement this is a relatively simple system:

  • Each artifact produced by Cargo would have a unique key. This key includes all information needed to produce that artifact, including but not limited to the rustc version, dependencies, crates.io version, maybe source code for path deps, compiler flags, and compiler target.
  • This unique key is then hashed, e.g. with sha256.
  • This smaller key is then used as a key in a key/value store to look up the artifact (or set of artifacts).
  • When Cargo does a build, it has knowledge of a "global crate cache" (perhaps configured via .cargo/config). This may default to the filesystem.
  • When building a key, Cargo queries the cache; if the artifact is present, the build is skipped and the next step happens.
  • If a key is not in the cache, Cargo does the build and then pushes the artifacts into the cache.

The idea here is that the value for a key never changes (kind of like a content-addressable storage (CAS) system). That way, if a cache gets concurrent pushes it can ignore all but the first (and maybe assert they're all the same). Cargo could then support custom backends for the get/put functionality, for example to push to S3 or to an internal company caching server.
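
To make the shape of this concrete, here is a minimal sketch of the get/put flow described above. This is hypothetical, not an existing Cargo API: ArtifactCache, FsCache, cache_key, and build_or_fetch are invented names, and std's DefaultHasher stands in for the proposed sha256.

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::path::PathBuf;

/// Pluggable cache backend: get/put keyed by a content hash.
/// Real backends could be the local filesystem, S3, or an
/// internal company caching server.
trait ArtifactCache {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
    fn put(&self, key: &str, artifact: &[u8]);
}

/// Default backend: a directory on the local filesystem.
struct FsCache {
    root: PathBuf,
}

impl ArtifactCache for FsCache {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        fs::read(self.root.join(key)).ok()
    }

    fn put(&self, key: &str, artifact: &[u8]) {
        // The value for a key never changes (CAS-like), so an
        // existing entry can be left alone on concurrent pushes.
        let path = self.root.join(key);
        if !path.exists() {
            let _ = fs::write(path, artifact);
        }
    }
}

/// Hash everything that influences the artifact (rustc version,
/// dependencies, flags, target, ...) into the lookup key.
fn cache_key(inputs: &[&str]) -> String {
    let mut hasher = DefaultHasher::new();
    for input in inputs {
        input.hash(&mut hasher);
    }
    format!("{:016x}", hasher.finish())
}

/// Query the cache first; build and publish only on a miss.
fn build_or_fetch(cache: &dyn ArtifactCache, key: &str, build: impl Fn() -> Vec<u8>) -> Vec<u8> {
    if let Some(artifact) = cache.get(key) {
        return artifact; // cache hit: skip the build entirely
    }
    let artifact = build();
    cache.put(key, &artifact);
    artifact
}
```

A real implementation would also need to handle sets of artifacts per key and atomic writes, but the key/get/put split is the essential shape.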

Some caveats:

  • Build scripts and plugins may be very difficult to cache. They can run arbitrary code, and we don't actually know what the set of inputs is, nor do we always know the set of outputs.
  • Some environments have extra information as input to the key. For example, it may be a source-code revision in a monorepo or something else like that, so there should be a way for a custom implementation to mix extra data into the key that Cargo itself generates.

@larsbergstrom

@alexcrichton Thanks! This sounds exciting and that looks good to me at first blush.

@luser @glandium Are there cases where this key into the compiled crate might break, based on your experiences with ccache/sccache/artifact builds?

@luser
Contributor

luser commented Oct 10, 2016

sccache uses the output of the preprocessor + the compiler commandline as input to the hash key. I suspect the equivalent in the Rust world would be something like a hash of some IR + compiler options, and some hash of any external crates being used.

@luser
Contributor

luser commented Dec 8, 2016

@alexcrichton, @glandium and I met yesterday to discuss this, and we came up with what seems like a workable path to using sccache for caching crate compilations done via cargo. We'd have cargo invoke sccache as a wrapper around rustc, just like we do for C++ compilation in Gecko. (Cargo currently doesn't support rustc wrapper binaries; we probably want to add that as a feature, but for now we can just set RUSTC=wrapper_script.) sccache does compiler detection, so we'd teach it to detect rustc and then generate the hash key from the following (a rough sketch follows the list):

  • The full rustc compiler commandline
  • The contents of all files specified on the commandline
  • The contents of all files listed in the dependency info output by rustc (this should be fast)
    • We wanted to double-check that files generated by build scripts are listed in this output
  • The contents of the compiler binary (we were unsure whether we would need to also hash the shared libraries that the rustc binary links to, since the rustc driver contains almost no code)
  • Any native libraries that are being linked to the final output. These are only provided as linker arguments, AIUI, so sccache would need to replicate the linker's search mechanism here.
  • Some environment variables that impact the output of rustc (we didn't enumerate these; it would be good to have a specific list!)
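
To make the key derivation concrete, here is a rough sketch of hashing those inputs together. This is a hypothetical helper, not sccache's actual implementation: a real tool would use a cryptographic digest (e.g. SHA-256) rather than std's DefaultHasher, and the native-library search and environment-variable handling above are elided.

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};

/// Hypothetical sketch of deriving a cache key from the inputs
/// listed above; not sccache's actual code.
fn rustc_hash_key(
    commandline: &[String],        // the full rustc command line
    input_files: &[String],        // sources + files from rustc's dep-info
    rustc_binary: &str,            // the compiler binary itself
    env_vars: &[(String, String)], // env vars known to affect output
) -> std::io::Result<String> {
    let mut hasher = DefaultHasher::new();
    commandline.hash(&mut hasher);
    for path in input_files {
        // Hash file *contents*, so an edit invalidates the key even
        // if the path and timestamp are unchanged.
        fs::read(path)?.hash(&mut hasher);
    }
    fs::read(rustc_binary)?.hash(&mut hasher);
    env_vars.hash(&mut hasher);
    Ok(format!("{:016x}", hasher.finish()))
}
```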

One big caveat that Alex mentioned is that plugins providing procedural macros can break our assumptions here. We expect that most well-behaved plugins will not be a problem, as they should be idempotent on their input, so as long as we hash the input and the plugin binary that should be sufficient. We could just declare that non-idempotent plugins will produce bad results, but we might want to do some special handling there if we intend to enable this more broadly in the future. Alex also mentioned that procedural macros that use data from external files will be an issue; those files don't currently make their way into the dependency graph. It looks like built-in macros like include_bytes! do the right thing here, but we might need to add a way for procedural macros to feed extra files they reference back to the compiler.

This would all fit very well into the existing sccache codebase; it doesn't require any changes to rustc (modulo the note above about making procedural macros work better with external files), and requires only one small change to cargo (allowing a rustc wrapper), all of which makes it a very appealing approach. Alex believed that, given the strong hashing that rustc uses for crate dependencies, which wind up embedded in rlibs, hashing all the inputs on the compiler commandline should provide a strong hash key.

Alex, Mike, if there's anything important that I left out please let me know!

We're extremely interested in this from the Gecko side, as we start to add more Rust code to the Firefox build, and it sounds like this would have broad benefit to all large Rust projects.

@alexcrichton
Member

That sounds like an excellent summary to me, thanks for the writeup @luser! Things I'd add:

  • It'd be nifty if sccache would work like sccache rustc when invoked as rustc (e.g. hard-linked to a different binary name)
  • Interesting environment variables are configured by Cargo, notable exceptions being:
    • CARGO_MANIFEST_DIR - can probably be ignored
    • OUT_DIR - also on this list, but in theory it only affects paths included elsewhere, so as long as we track things like include! paths directly we can probably safely ignore it
    • Note that in general we can probably punt on environment variables for now and see how much of a problem it is down the road

Some further thoughts I've had with @brson in other discussions: a neat thing we could do on the Rust side is to have a server populate a global crate cache by just building a bunch of crates every day. That way anyone with sccache installed would pull from the global crate cache when an artifact is available, or fill it in locally otherwise, speeding up everyone's compiles! (just a side thought though)

@luser
Contributor

luser commented Dec 9, 2016

> Some further thoughts I've had with @brson in other discussions: a neat thing we could do on the Rust side is to have a server populate a global crate cache by just building a bunch of crates every day. That way anyone with sccache installed would pull from the global crate cache when an artifact is available, or fill it in locally otherwise, speeding up everyone's compiles! (just a side thought though)

Unless cargo is invoking rustc with entirely relative paths (which it doesn't appear to be, from looking at `cargo build -v`), making this work will also rely on mozilla/sccache#35.

@luser
Contributor

luser commented Dec 9, 2016

> It'd be nifty if sccache would work like sccache rustc when invoked as rustc (e.g. hard-linked to a different binary name)

We can implement this, but for Firefox builds we'll still want a way to pass both the path to sccache and the path to rustc, since we don't have them in $PATH.

@Havvy

Havvy commented Dec 9, 2016

Y'all may want to look into Nix. They've been doing cached, mostly-deterministic building of artifacts and know quite a bit. They also have an IRC channel.

@JonasOlson

> Any native libraries that are being linked to the final output. These are only provided as linker arguments, AIUI, so sccache would need to replicate the linker's search mechanism here.

This unsettling state of affairs, where it seems like things could easily break, has me wishing for a system where the compiler itself provides a hash key to the caching wrapper, since the compiler probably knows best what needs to go into the hash. Similarly, the compiler might subcontract to linkers etc to provide the hashes that are relevant to them.

@ben0x539

I'm wondering if @garbas has input on this

@garbas

garbas commented Dec 10, 2016

On the RelEng team we started using Nix for our collection of services. One of the features that Nix also brings to the table (apart from reproducible builds) is a binary cache that works regardless of the language-specific package manager (pip, elm-package, npm, cargo, ...).

I'm not sure what the scope of this ticket is, but make your builds deterministic and a binary cache becomes a no-brainer, just a consequence of good design.

@joshtriplett
Member

I'd love to see support for a local compiled-crate cache in ~/.cargo, to speed up compiling many different crates with similar dependencies.

However, any kind of compiled-binary cache shared over the network should require an explicit opt-in at build time, not an opt-out. I'd still love to see it, but not as something cargo uses by default.

@luser
Contributor

luser commented Dec 12, 2016

> This unsettling state of affairs, where it seems like things could easily break, has me wishing for a system where the compiler itself provides a hash key to the caching wrapper, since the compiler probably knows best what needs to go into the hash. Similarly, the compiler might subcontract to linkers etc to provide the hashes that are relevant to them.

I agree that having to reverse-engineer the linker's behavior is not the best thing here. Aside from that everything seems very straightforward. It would certainly be nice to have cooperation with the compiler, but it's also nice to not have to make any changes to the compiler for this to work.

@luser
Contributor

luser commented Dec 12, 2016

> I'd love to see support for a local compiled-crate cache in ~/.cargo, to speed up compiling many different crates with similar dependencies.

sccache supports both a local disk cache and a network cache, so either of these should be doable. I agree that having a global shared cache is maybe not a great default, unless we provide much stronger guarantees, like signing the cache entries or having some other way of verifying them.

@luser
Contributor

luser commented Dec 12, 2016

> Y'all may want to look into Nix. They've been doing cached, mostly-deterministic building of artifacts and know quite a bit. They also have an IRC channel.

Nix is very neat, but I don't know that there's any real secret sauce there, just "hash all the inputs to the build" and "ensure the build is reproducible so that the same inputs produce the same output". I don't know that there's much we could actually share with them (although I would be happy to be proven wrong). It seems like you can already use cargo within Nix and get some benefits from it, but we'd like to build something that's useful even if you haven't opted in to that ecosystem.

@ElvishJerricco

@luser, there may not be any secret sauce in Nix, but there is a lot of pre-existing infrastructure that you can get for free by using it. It just seems to give you everything you're going for without much effort. Plus, it allows you to use the same system to manage both Rust and system dependencies, giving you the whole Nix ecosystem for free. One way or another, I feel you're asking people to buy into some system. So it might as well be the one that already exists and gives a ton of tools.

@garbas

garbas commented Dec 29, 2016

@ElvishJerricco @luser I would really like to get Nix closer to cargo, maybe the same way it was done for stack: via a command-line option you opt into, --nix. This would be a quick way to get something with a binary cache, since Nix would provide it when the --nix option is used.

At RelEng we have already started using Nix to build Docker images (https://github.com/mozilla-releng/services). And if we are going to look into Reproducible Builds for mozilla-central in the future, we will have to make our environments reproducible, and Nix might be a way to get us started down this path. I summarized my thoughts about this after the last Reproducible Builds summit in Berlin.

We have yet to provide a fully (build-)reproducible environment for Gecko with Nix, but work has already started. Some people are already using it to develop Gecko, but it is not yet at the stage where it is beginner-friendly for everybody. In the longer run you could bootstrap the Gecko environment using Nix with ./mach bootstrap --nix, which is similar to the proposal I wrote in the first paragraph.

@Ericson2314
Contributor

Ericson2314 commented Jan 10, 2017

Woah, I wish I'd seen this thread earlier!

In haskell/cabal#3882 I propose a method for integrating Nix and Cabal that can be used by any language-specific package manager. [@garbas this is way tighter integration than stack + nix.] It's a really simple plan: the language package manager solves version bounds and comes up with a concrete plan, then sends it off to Nix. It's a lot like the division of labor between CMake and Make.

@luser

> Nix is very neat, but I don't know that there's any real secret sauce there, just "hash all the inputs to the build" and "ensure the build is reproducible so that the same inputs produce the same output".

So a big extra feature is what we in Nix-land call import from derivation. This is the only trivially-distributable way I know of to soundly implement dynamic dependencies. See https://blogs.ncl.ac.uk/andreymokhov/cloud-and-dynamic-builds/ and my thread with the author in the comments for some (more theoretical) discussion of this. In practice, this would be most useful for the many things in your "One big caveat that Alex mentioned..." paragraph.

@Kerollmops

Kerollmops commented Jan 26, 2017

Why not follow the route NixOS will probably take?
http://sourcediver.org/blog/2017/01/18/distributing-nixos-with-ipfs-part-1/

Distributing precompiled libraries for a specific platform using IPFS or some BitTorrent DHT method.

EDIT: @Havvy already talked about Nix!

@luser
Contributor

luser commented Mar 31, 2017

As an update, initial Rust support in sccache landed about a week ago. You can try it out by building sccache master. It's currently a bit of a pain to use: you have to hard-link or copy the sccache binary to be named rustc, then pass that as RUSTC or put it first in your $PATH to get cargo to use it. I have a patch to add support for a RUSTC_WRAPPER env var to cargo, so you could simply set RUSTC_WRAPPER=sccache instead.

@luser
Contributor

luser commented Mar 31, 2017

The RUSTC_WRAPPER patch is in #3887.

@lilianmoraru

This could probably be integrated with rustup - storing crate caches separately for every toolchain.
@alexcrichton @brson

@elahn

elahn commented Apr 21, 2017

An advantage of storing caches per toolchain via rustup would be a trivial implementation of cache clearing on toolchain update/removal, without affecting other toolchains' caches.

For users keeping up with the trains, this would prevent perpetual cache growth.

@carols10cents added the A-caching and C-feature-request labels Sep 10, 2017
@heycam

heycam commented Oct 1, 2017

Is distributed compilation in scope for this feature?

@alexcrichton
Member

With the advent of sccache and RUSTC_WRAPPER I think this is effectively fixed from what we can do on Cargo's end, so closing.

@henrikno

henrikno commented Jan 3, 2018

Are there any docs/writeups on how to use RUSTC_WRAPPER with sccache?

@SimonSapin
Contributor

Run Cargo with the RUSTC_WRAPPER environment variable set to the path to the sccache binary, or its name if it is in $PATH. For example: `RUSTC_WRAPPER=sccache cargo build`.

@luser
Contributor

luser commented Jan 4, 2018

> With the advent of sccache and RUSTC_WRAPPER I think this is effectively fixed from what we can do on Cargo's end, so closing.

I think it'd be great to look into the feasibility of an actual global shared cache, but that has a lot more hard problems than solving the CI / local developer case (like trusting arbitrary binaries from a cache).

@clarfonthey
Contributor

So I found this issue, and maybe it's worth opening a new one, but I honestly don't think this replaces distcc at all.

distcc is extremely useful for offloading building from one very-not-powerful computer to an actually-decently-powerful one. For example, I use distcc to defer building from an ARM mini computer to my desktop. While the use case of several worker servers sharing work is covered by this, the offloading of work from one small computer to a larger one is not. So, in that sense, this still hasn't fixed the issue of distcc for cargo IMHO.

@luser
Contributor

luser commented Dec 4, 2018

> So I found this issue, and maybe it's worth opening a new one, but I honestly don't think this replaces distcc at all.

We've just recently finished merging work to add distributed compilation support to sccache. It includes support for distributing Rust compilation, which ought to solve your use case. There's a quick start guide available, and more thorough docs should be landing in the near future.

@Ciantic

Ciantic commented Jan 1, 2020

I have only two Rust projects, and yet I'm annoyed that I'm constantly rebuilding the same dependencies (because I have to run cargo clean for reasons other than dependencies).

I know that the solution is to use sccache, but how come this is not the default? Providing fast builds (especially not constantly rebuilding the dependencies) should be the default configuration.

P.S. I don't think sccache works how I assumed, now that I test it. If I run cargo clean, it still keeps building miow and other dependency packages for the nth time when I cargo build with sccache set. And it also bloats the target directory with all the dependencies, which I don't want in the target dir.
