
Prepare global allocators for stabilization #1974

Merged · 7 commits · Jun 18, 2017

Conversation

sfackler
Member

@sfackler sfackler commented Apr 16, 2017

@sfackler sfackler added the T-libs-api Relevant to the library API team, which will review and decide on the RFC. label Apr 16, 2017
@mark-i-m
Member

Thanks @sfackler! This is an exciting feature to someone who enjoys writing embedded and OS code :)

Another pain point not addressed in this RFC: if you have a project that defines its own allocator, you still need to define something like the allocator_stub crate. This is mildly annoying. I don't know how difficult this would be, but it would be nice if you could also allow a submodule to define itself as an allocator and let the crate use that submodule. And maybe the crate root could define itself as #[allocator] so that dependent crates know.

As its name would suggest, the global allocator is a global resource - all crates in a dependency tree must agree on the selected global allocator.

This seems a bit inflexible, which makes me nervous. It would be nice if crates could indicate whether they absolutely must have allocator X or just would prefer allocator X. Perhaps an additional annotation like #[must_have_allocator]? Or alternately, should crate writers be encouraged to make allocators optional dependencies if they can?

The standard library will gain a new stable crate - alloc_system. This is the default allocator crate and corresponds to the "system" allocator (i.e. malloc etc on Unix and HeapAlloc etc on Windows).

It would be nice if liballoc did not depend on the name of the allocator crate. In no_std projects which use liballoc, the allocator crate currently has to be named alloc_system because of this. It would be nice to be able to name it whatever I want. I don't know if this falls in the scope of this RFC, though...

@ranma42
Contributor

ranma42 commented Apr 17, 2017

Why is reallocate_inplace the only optional function?
It looks like all of the API could be implemented (possibly losing some efficiency) on top of just allocate_zeroed (or allocate) + deallocate.
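A minimal sketch of that reduction, using std::alloc's free functions as stand-ins for the RFC's allocate/deallocate (the fallback is correct, but it always copies, which is the efficiency loss mentioned):

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::ptr;

// Hypothetical fallback: `reallocate` built from allocate + deallocate
// alone. A real allocator could often grow the block in place instead
// of allocating, copying, and freeing.
unsafe fn reallocate_fallback(p: *mut u8, old_size: usize, new_size: usize, align: usize) -> *mut u8 {
    let new_p = alloc(Layout::from_size_align(new_size, align).unwrap());
    if !new_p.is_null() {
        // Preserve the old contents, then release the old allocation.
        ptr::copy_nonoverlapping(p, new_p, old_size.min(new_size));
        dealloc(p, Layout::from_size_align(old_size, align).unwrap());
    }
    new_p
}
```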

/// The new size of the allocation is returned. This must be at least
/// `old_size`. The allocation must always remain valid.
///
/// Behavior is undefined if the requested size is 0 or the alignment is not a
Contributor

Should we go for "Behavior is undefined if the requested size is less than old_size or..."?
It might be worth spelling out explicitly whether old_size and size are the only legitimate return values or if the function can also return something inside that range.

Member Author

Yeah, I just copied these docs out of alloc::heap - they need to be cleaned up.

@comex

comex commented Apr 17, 2017

Why not require global allocators to implement the same Allocator trait as is used for collections?

I gather that you can't just say type HeapAllocator = JemallocAllocator or something like that, because the choice of which allocator to use should be preserved until the final link. HeapAllocator needs to be a facade backed by some linking magic. However, I don't see any reason the backend allocator interface can't reuse the same trait, rather than defining a new ad-hoc attribute-based thingy.

@sfackler
Member Author

sfackler commented Apr 17, 2017

@mark-i-m

I'm not sure I understand what the allocator_stub crate is doing. Is the issue you're thinking of a single-crate project that also wants to define its own custom allocator?

It would be nice if crates could indicate whether they absolutely must have allocator X or just would prefer allocator X.

If a crate absolutely must have allocator X, it can stick #[allocator] extern crate X; in itself. At that point, the entire dependency tree is locked into allocator X, and compilation will fail if something other than X is asked for elsewhere.

Or alternately, should crate writers be encouraged to make allocators optional dependencies if they can?

I'm not sure I understand a context in which a crate would want to do this. Could you give an example?

It would be nice if liballoc did not depend on the name of allocator crate. On no_std projects which use liballoc, the allocator crate currently has to have name alloc_system because of this.

That seems like an implementation bug to me. liballoc shouldn't be doing anything other than telling the compiler that it requires a global allocator.

@comex

However, I don't see any reason the backend allocator interface can't reuse the same trait, rather than defining a new ad-hoc attribute-based thingy.

We could in theory use the Allocator trait, but there would still need to be some ad-hoc attribute weirdness going on. You'd presumably need to define some static instance of your allocator type and tag that as the allocator instance for that crate.

@hanna-kruppe

@comex I'd rather not delay stabilizing this feature until the allocator traits are stabilized.

///
/// The `ptr` parameter must not be null.
///
/// The `old_size` and `align` parameters are the parameters that were used to
@hanna-kruppe hanna-kruppe Apr 17, 2017

@ruuda made a good point in the discussion of the allocator traits: It can be sensible to allocate over-aligned data, but this information is not necessarily carried along until deallocation, so there's a good reason deallocate shouldn't require the same alignment that was used to allocate.

This requirement was supposed to allow optimizations in the allocator, but AFAIK nobody could name a single existing allocator design that can use alignment information for deallocation.

Member

I wrote an allocator for an OS kernel once that would have benefited greatly from alignment info.


That would be very relevant to both this RFC and the allocators design, so could you write up some details?

Member

Hmmm... It seems that I was very mistaken... I have to apologize 🤕

Actually, when I went back and looked at the code, I found the exact opposite. The allocator interface actually does pass the alignment to free, and my implementation of free ignores it for exactly the reasons mentioned above (more later). That said, passing alignment into the alloc function is useful (and required for correctness), so I assume that this discussion is mostly about whether free should take align or not.

The code is here. It's a bit old and not very well-written since I was learning Rust when I wrote it. Here is a simple description of what it does:

Assumptions

  • The kernel is the only entity using this allocator. (The user-mode allocator lives in user-mode).
  • The kernel is only using this allocator through Box, so the parameters size and align are trusted to be correct, since they are generated by the compiler.

Objective

Use as little metadata as possible.

Blocks

  • All blocks are a multiple of the smallest possible block size, which is based on the size of the free-block metadata (16B on a 32-bit machine).
  • All blocks have a minimum alignment which is the same as minimum block size (16B).
  • The allocator keeps a free-list which is simply a singly linked list of blocks.
  • Free blocks are used to store their own metadata.
  • Active blocks have no header/footer. This means that there is no header/footer overhead at all.

alloc

Allocating memory just grabs the first free block with required size and alignment, removes it from the free list, splits it if needed, and returns a pointer to its beginning. The size of the block allocated is a function of the alignment and size.

free

Freeing memory requires very little effort, it turns out. Since we assume that the parameters size and ptr are valid, we simply create the block metadata and add it to the linked list. If possible, we can merge with free blocks after the block we are freeing.

In fact, the alignment passed into free is ignored here because the ptr should already be aligned. The takeaway seems to be the opposite from what I said above (again, sorry). When I thought about it some more, it makes sense. A ptr inherently conveys some alignment information, so passing this information in as an argument actually seems somewhat redundant.
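The free-list behavior described above can be modeled in safe Rust; offsets stand in for pointers, the 16-byte minimum block size follows the description, and everything else is illustrative:

```rust
// Toy model of the kernel free list: free blocks carry their own
// metadata (here modeled as (offset, size) pairs), allocation takes the
// first fitting block and splits it, and freeing pushes the block back,
// merging with an adjacent following block when possible. Alignment is
// not passed to `free`, matching the observation above.
struct FreeList {
    free: Vec<(usize, usize)>, // (offset, size); sizes are multiples of 16
}

impl FreeList {
    fn alloc(&mut self, size: usize) -> Option<usize> {
        let size = (size + 15) / 16 * 16; // round up to min block size
        let i = self.free.iter().position(|&(_, s)| s >= size)?;
        let (off, s) = self.free[i];
        if s == size {
            self.free.remove(i); // exact fit: take the whole block
        } else {
            self.free[i] = (off + size, s - size); // split the block
        }
        Some(off)
    }

    fn free(&mut self, off: usize, size: usize) {
        let size = (size + 15) / 16 * 16;
        // Merge with a free block that starts right after this one.
        if let Some(i) = self.free.iter().position(|&(o, _)| o == off + size) {
            let (_, s) = self.free.remove(i);
            self.free.push((off, size + s));
        } else {
            self.free.push((off, size));
        }
    }
}
```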


I'm actually quite relieved to hear that 😄 Yes, allocation and reallocation should have alignment arguments, it's just deallocation that shouldn't use alignment information. It's not quite true that "ptr inherently conveys alignment information", because the pointer might just happen to have more alignment than was requested, but it's true that it's always aligned as requested at allocation time (since it must be the exact pointer returned by allocation, not a pointer into the allocation).

or more distinct allocator crates are selected, compilation will fail. Note that
multiple crates can select a global allocator as long as that allocator is the
same across all of them. In addition, a crate can depend on an allocator crate
without declaring it to be the global allocator by omitting the `#[allocator]`
@hanna-kruppe hanna-kruppe Apr 17, 2017

Would it make sense to restrict this choice to "root crates" (executables, staticlibs, cdylibs) analogously to how the panic strategy is chosen? [1] I can't think of a good reason for a library to require a particular allocator, and it seems like it could cause a ton of pain (and fragmentation) to mix multiple allocators within one application.

[1]: It's true that the codegen option -C panic=... can and must be set for libraries too, but this is mostly to allow separate compilation of crates – the panic runtime to be linked in is determined by the root. There are also restrictions (can't link a panic=abort library into a panic=unwind library). In addition, Cargo exposes only the "root sets panic strategy" usage.


I share this concern. Allowing libraries to require a particular global allocator could create rifts in the crate ecosystem, where different sets of libraries cannot be used together because they require different global allocators.

Allocators share the same interface, and so the optimal allocator will depend on the workload of the binary. It seems like the crate root author will be in the best position to make this choice, since they'll have insight into the workload type, as well as be able to run holistic benchmarks.

Thus it seems like a good idea to restrict global allocator selection to the crate root author.

usage will happen through the *global allocator* interface located in
`std::heap`. This module exposes a set of functions identical to those described
above, but that call into the global allocator. To select the global allocator,
a crate declares it via an `extern crate` annotated with `#[allocator]`:


Clarification request: Can all crates do this? As mentioned in another comment, I would conservatively expect this choice to be left to the root crate, as with panic runtimes.

Member Author

As written, any crate can do this, yeah.

I would be fine restricting allocator selection to the root crate if it simplifies the implementation - I can't think of any strong reasons for needing to select an allocator in a non-root crate.

@mark-i-m
Member

@sfackler

I'm not sure I understand what the allocator_stub crate is doing. Is the issue you're thinking of a single-crate project that also wants to define its own custom allocator?

It doesn't have to be a single-crate project, but yes more or less. The idea is that you might have a large crate that both defines and uses an allocator. For example, in an OS kernel, the kernel allocator might want to define this interface so you can use it with Box. But you can easily imagine such an allocator depending on, say, the paging subsystem or a bunch of initialization functions or the kernel synchronization primitives. So while the allocator may be modular enough to go into its own module, it still depends on other parts of the crate and cannot be pulled out cleanly.

I'm not sure I understand a context in which a crate would want to do this. Could you give an example?

I guess I was thinking that maybe a crate might have performance preference for some allocator without really depending on it. For example, if you know all of your allocations will be of the same size, maybe you would prefer a slab allocator, but it doesn't change correctness if someone else would like a different allocator. TBH, I don't know if anyone actually does this, but it was a thought.

That seems like an implementation bug to me. liballoc shouldn't be doing anything other than telling the compiler that it requires a global allocator.

Hmm... That's good to know... I will have to look into this sometime...

@comex

However, I don't see any reason the backend allocator interface can't reuse the same trait, rather than defining a new ad-hoc attribute-based thingy.

Hmm... I don't understand why they should use the same trait. They seem pretty disparate to me...

@hanna-kruppe

hanna-kruppe commented Apr 17, 2017

I guess I was thinking that maybe a crate might have performance preference for some allocator without really depending on it. For example, if you know all of your allocations will be of the same size, maybe you would prefer a slab allocator, but it doesn't change correctness if someone else would like a different allocator. TBH, I don't know if anyone actually does this, but it was a thought.

In my experience, such specialized allocation behavior is usually implemented by explicitly using the allocator (and having it allocate big chunks of memory from the global allocator). And most allocators used like that aren't suitable as a general-purpose allocator anyway.

@comex

comex commented Apr 17, 2017

Hmm... I don't understand why they should use the same trait. They seem pretty disparate to me...

How are they disparate? The only reason the language needs to have a built-in concept of allocator is for standard library functionality that requires one. Most or all of that functionality should be parameterized by the Allocator trait to start with, so that multiple allocators can be used in the same program. The global allocator would exist mostly or entirely to serve as backend for the default HeapAllocator, so why shouldn't it use the same interface, rather than making HeapAllocator map the calls to a slightly different but isomorphic interface?

Here is the Allocator trait in question, with some type aliases removed:

pub struct Layout { size: usize, align: usize }
pub unsafe trait Allocator {
    // required
    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout);
    // optional but allocator may want to override
    fn oom(&mut self, _: AllocErr) -> !;
    unsafe fn usable_size(&self, layout: &Layout) -> (usize, usize);
    unsafe fn realloc(&mut self, ptr: *mut u8, layout: Layout, new_layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr>;
    unsafe fn realloc_excess(&mut self, ptr: *mut u8, layout: Layout, new_layout: Layout) -> Result<Excess, AllocErr>;
    unsafe fn realloc_in_place(&mut self, ptr: *mut u8, layout: Layout, new_layout: Layout) -> Result<(), CannotReallocInPlace>;
    // plus some convenience methods the allocator probably wouldn't override
}
pub enum AllocErr { Exhausted { request: Layout }, Unsupported { details: &'static str } }
pub struct CannotReallocInPlace; // unit struct

Here is your set of functions:

pub fn allocate(size: usize, align: usize) -> *mut u8;

Exactly equivalent to Allocator::alloc except for not taking self and indicating failure with a null pointer rather than a more descriptive AllocErr.

pub fn allocate_zeroed(size: usize, align: usize) -> *mut u8;

Not included in Allocator. You probably included this because allocators often start with blocks of zero bytes, such as those returned by mmap, and can provide zeroed allocations without doing another useless memset. I agree this is useful: but that means it should be in the Allocator trait as well. There are plenty of cases where standard containers could want zeroed blocks, such as a hash table whose internal layout guarantees that zero means unset.
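To illustrate the point, a generic fallback (sketched here on top of std::alloc rather than the RFC's free functions) has to pay for the memset unconditionally, while an allocator handing out fresh mmap'd pages could skip it entirely:

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::ptr;

// Hypothetical fallback for `allocate_zeroed` in terms of a plain
// allocation primitive: it always zeroes the block by hand, even when
// the underlying memory was already zero-filled by the OS.
unsafe fn allocate_zeroed_fallback(size: usize, align: usize) -> *mut u8 {
    let p = alloc(Layout::from_size_align(size, align).unwrap());
    if !p.is_null() {
        ptr::write_bytes(p, 0, size); // the memset a native impl can avoid
    }
    p
}
```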

pub fn deallocate(ptr: *mut u8, old_size: usize, align: usize);

Exactly equivalent to Allocator::dealloc except for not taking self.

pub fn reallocate(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> *mut u8;

Exactly equivalent to Allocator::realloc except for the aforementioned caveats plus not being able to change the alignment.

pub fn reallocate_inplace(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> usize;

Ditto Allocator::realloc_in_place.

Overall, there is some functionality missing in yours:

  • usable_size and its cousins alloc_excess/realloc_excess, which you explicitly removed "as it is not used anywhere in the standard library". I think it's still useful in some cases, many existing allocators provide it as a primitive, and it's easy for a custom allocator to leave the default implementation if it doesn't want to bother with it.
  • Ability to change the alignment in realloc. No reason not to support this.
  • An oom method which panics and can "provide feedback about the allocator's state at the time of the OOM" (according to the RFC).
  • AllocErr, a way for the allocator to provide explicit error messages for allocations that aren't supported (like if size is zero or way too large, or align is too large).

For each of these, HeapAllocator would have to simply not provide the corresponding functionality (even though the underlying allocator may support it), or else the global allocator API would have to be changed in the future to add it.

The only big difference is that the Allocator trait takes self while global allocators must not. But there's no need for that to cause any actual overhead. Roughly, the allocator-crate side of the linking magic would do something like

#[no_mangle]
pub extern fn magic_alloc(layout: Layout) -> Result<*mut u8, AllocErr> {
    THE_ALLOC.alloc(layout)
}

and the call to alloc would be inlined.

We could in theory use the Allocator trait, but there would still need to be some ad-hoc attribute weirdness going on. You'd presumably need to define some static instance of your allocator type and tag that as the allocator instance for that crate.

True, but it would arguably be somewhat less weird to have a single attribute than simulating the trait system by enforcing different function signatures (especially if there are extensions in the future, so you have to deal with optional functions).

Even if you don't end up literally using the Allocator trait, it would make sense to at least use the same names and signatures rather than slightly and arbitrarily different ones (allocate vs. alloc, separate size and align vs. Layout, etc.).

Also, eventually it would be nice to have a proper "forward dependencies" feature rather than special-casing specific types of dependencies (i.e. allocators). This shouldn't wait for that to be stabilized, but it would be nice if in the future the magic attribute could be essentially desugared to a use of that feature, without too much custom logic.

@comex I'd rather not delay stabilizing this feature until the allocator traits are stabilized.

The Allocator RFC has already been accepted (while this one isn't even FCP), and any nitpicks raised regarding the Allocator interface are likely to also apply to global allocators. I don't think you're actually going to save any time.

@mark-i-m
Member

@comex Thanks for the clarifications. I had thought you were suggesting not having a global allocator, rather than making allocators implement a trait. I agree that the trait is a more maintainable interface and probably better in the long run...

I would stipulate one thing though:

#[no_mangle]
pub extern fn magic_alloc(layout: Layout) -> Result<*mut u8, AllocErr> {
    THE_ALLOC.alloc(layout)
}

I (the programmer) should be able to define a static mut variable THE_ALLOC and tag it as the global allocator. The initialization of this allocator should be controllable by the programmer because on embedded/very low-level projects, the system might need to do some work before initializing the heap, or it might need to pass some system parameters to the constructor of the Allocator. For example,

#[global_allocator]
static mut THE_ALLOC: MyAllocator = MyAllocator::new(start_addr, end_addr);
// where MyAllocator: Allocator
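For comparison, the design that eventually stabilized is close to this shape: a #[global_allocator] static whose type implements a trait (GlobalAlloc) with &self methods, so no static mut is needed. A compiling sketch, where the counting behavior is purely illustrative:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative allocator: forwards to the system allocator and records
// how many allocations it has served.
struct CountingAlloc;

static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATIONS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

// Note: a plain static, not `static mut` -- all mutation goes through
// thread-safe interior mutability (the atomic counter above).
#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;
```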

@sfackler
Member Author

@ranma42

Why is reallocate_inplace the only optional function?
It looks like all of the API could be implemented (possibly losing some efficiency) on top of just allocate_zeroed (or allocate) + deallocate.

It seems reasonable to make everything but allocate and deallocate optional, yeah. I will update.

@comex

I think it's still useful in some cases

What are these cases?

@Ericson2314
Contributor

Mmm, besides trying to leverage the trait as much as possible, which I fully support, there was talk in the past of using a general "needs/provides" mechanism (some dose of applicative functors, perhaps) for this, logging, panicking, and other similar tasks needing a canonical global singleton. I'd be really disappointed to retreat from that goal into a bunch of narrow mechanisms.

@sfackler
Member Author

I don't recall there being any more talk than "hey, would it be possible to make a general needs/provides mechanism". We can't use a thing that doesn't exist.

@comex

comex commented Apr 18, 2017

@sfackler Vec, for one - the buffer allocations can switch from alloc/realloc to alloc_excess/realloc_excess, and increase capacity by excess / element_size. See also rust-lang/rust#29931, rust-lang/rust#32075

@sfackler
Member Author

I was referring more specifically to usable_size.

@comex

comex commented Apr 18, 2017

In that case, nope, I can't think of a good reason to use usable_size rather than alloc_excess; in fact, I'd call it an anti-pattern, since there might be some allocator design where the amount of excess depends on the allocation. Quite possibly it should be removed from the Allocator trait.

@hanna-kruppe

The Allocator RFC has already been accepted (while this one isn't even FCP), and any nitpicks raised regarding the Allocator interface are likely to also apply to global allocators. I don't think you're actually going to save any time.

I based this assertion on the fact that the allocators RFC was accepted with a large number of unresolved questions, and there's been little progress on resolving those. But you're right that most of those questions also apply to the global allocator.

@sfackler
Member Author

I've pushed some updates - poke me if I've forgotten anything!

cc @nox with respect to what an actual_usable_size function could look like.

The global allocator could be an instance of the `Allocator` trait. Since that
trait's methods take `&mut self`, things are a bit complicated however. The
allocator would most likely need to be a `const` type implementing `Allocator`
since it wouldn't be sound to interact with a static. This may cause confusion
@hanna-kruppe hanna-kruppe Apr 18, 2017

This is not true. Unlike static muts, plain statics are perfectly safe to access and can, in fact, maintain state. It's just that all mutation needs to happen via thread safe interior mutability.

With an eye towards the potential confusion described in the following sentence ("a new instance will be created for each use"), a static makes much more sense than a const — the latter is a value that gets copied everywhere, while the former is a unique object with an identity, which seems more appropriate for a global allocator (besides permitting allocator state, as mentioned before).
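A toy illustration of that point: a plain (non-mut) static can hold mutable allocator state as long as mutation goes through thread-safe interior mutability, and its methods can then take &self. The bump-allocation scheme and names here are purely illustrative:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative bump allocator handing out offsets into a fixed-size
// arena. State lives behind an atomic, so a plain static works.
struct BumpState {
    next: AtomicUsize,
    size: usize,
}

impl BumpState {
    // Note: &self, not &mut self -- safe to call on a static.
    fn alloc(&self, bytes: usize) -> Option<usize> {
        let off = self.next.fetch_add(bytes, Ordering::Relaxed);
        if off + bytes <= self.size {
            Some(off)
        } else {
            // Toy simplification: the offset is not rolled back on
            // failure; real code would use compare_exchange.
            None
        }
    }
}

static HEAP: BumpState = BumpState { next: AtomicUsize::new(0), size: 1024 };
```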

Member

Especially if you have access to unstable features, a static with interior mutability is idiomatic and can be wrapped in a safe abstraction, while static mut is much worse.

Member

I agree completely. Allocators are inherently stateful since they need to keep track of allocations for correctness. Static + interior mutability is needed.

However, this raises a new question: initializing the global allocator. How does this happen? Is there a special constructor called? Does the constructor have to be a const fn? The RFC doesn't specify this.

Member Author

I may be missing something here, but the issue is that you may not obtain a mutable reference to a static, but every method on Allocator takes &mut self:

struct MyAllocator;

impl MyAllocator {
    fn alloc(&mut self) { }
}

static ALLOCATOR: MyAllocator = MyAllocator;

fn main() {
    ALLOCATOR.alloc();
}
error: cannot borrow immutable static item as mutable
  --> <anon>:10:5
   |
10 |     ALLOCATOR.alloc();
   |     ^^^^^^^^^

error: aborting due to previous error

@mark-i-m There is no constructor.

Member

I may be missing something here, but the issue is that you may not obtain a mutable reference to a static, but every method on Allocator takes &mut self

Ah, I see. So then, does the Allocator trait have to change? Or do we make the allocator unsafe and use static mut? If neither is possible, then we might need to switch back to the attributes approach or write a new trait with the hope of coalescing them some time...

@mark-i-m There is no constructor.

Most allocators need some setup, though. Is the intent to just do something like lazy_static? That would annoy me, but it would work, I guess. Alternately, we could add a method to the interface to do this sort of set up...

@hanna-kruppe hanna-kruppe Apr 19, 2017

Oh, yeah, I totally overlooked the &mut self issue 😢 We could side-step this by changing the trait (I'm not a fan of that, for reasons I'll outline separately) or by changing how the allocator is accessed. By the latter I mean, for example, tagging a static X: MyAllocator as the global allocator creates an implicit const X_REF: &MyAllocator = &X; and all allocation calls get routed through that. This feels extremely hacky, though, and brings back the aforementioned identity confusion.

Member Author

switch back to the attributes approach

The RFC has never switched away from the attributes approach. This is an alternative.

Most allocators need some setup, though.

Neither system allocators nor jemalloc need explicit setup steps. If you're in an environment where your allocator needs setup, you can presumably call whatever functions are necessary at the start of execution.
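One way such setup can happen without a dedicated constructor hook is lazily on first use, sketched here with std::sync::Once (the setup body, names, and the counter used to observe it are hypothetical):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Once;

static SETUP: Once = Once::new();
static SETUP_RUNS: AtomicUsize = AtomicUsize::new(0);

// Hypothetical allocation entry point that performs its one-time setup
// on the first call instead of via a special constructor.
fn alloc_with_lazy_setup(size: usize) -> usize {
    SETUP.call_once(|| {
        // e.g. discover the heap region, read system parameters, ...
        SETUP_RUNS.fetch_add(1, Ordering::Relaxed);
    });
    size // placeholder standing in for a real allocation
}
```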

internally since a new instance will be created for each use. In addition, the
`Allocator` trait uses a `Layout` type as a higher level encapsulation of the
requested alignment and size of the allocation. The larger API surface area
will most likely cause this feature to have a significantly longer stabilization
@hanna-kruppe hanna-kruppe Apr 18, 2017

I'm not so sure about this any more. At least the piece of the API surface named here (Layout) doesn't seem very likely to delay anything. I don't recall any unresolved questions about it (there's questions about what alignment means for some functions, but that's independent of whether it's wrapped in a Layout type).

Member Author

The allocators RFC hasn't even been implemented yet. We have literally zero experience using the Allocator trait or the Layout type. In contrast, alloc::heap and the basic structure of global allocators have been implemented and used for the last couple of years.

@Ericson2314
Contributor

Ericson2314 commented Apr 19, 2017

@sfackler I think that's more for lack of time than interest. And now Haskell's "Backpack" has basically written the book on how to retrofit a module system onto a language without one (and with type classes / traits), so it's not like research is needed.

I'm fine with improving how things work on an experimental basis, but moving towards stabilization seems vastly premature: we haven't even implemented our existing allocator RFC!

@hanna-kruppe

hanna-kruppe commented Apr 19, 2017

So, using the allocator trait naturally suggests a static of a type that implements the allocator trait, but @sfackler pointed out that Allocator methods take &mut self. So a static indeed wouldn't work with the allocator trait as specified today, and both static mut and const are unacceptable substitutes IMO.

@mark-i-m brought up the possibility of changing the trait to take &self, but I don't think that is a good idea. Most uses of allocators (e.g., any time a collection owns an allocator) can offer the guarantees &mut implies, and those guarantees could greatly simplify and speed up certain allocators (no need for interior mutability or thread safety). Furthermore, with the exception of static allocators, the &mut shouldn't be a problem, since you can always introduce a handle type that implements the trait and can be duplicated for every user (e.g., a newtype around &MyAllocatorState or Arc<MyAllocatorState>).

Contrast this with the global allocator, where you can't just hand out a handle to every user, because users are everywhere. One could define an ad-hoc scheme to automatically introduce such handles (e.g., with static ALLOC: MyAlloc, the allocator trait must be implemented on &MyAlloc and allocator methods are called on a temporary &ALLOC). This is not only a very obvious rule patch, it also brings in a layer of indirection that may not be necessary for all allocators.

To me, this mismatch between "local" allocators and global ones is a strong argument to not couple the latter to the trait used for the former.
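The handle pattern described above can be sketched in plain Rust. This is a hypothetical illustration, not the actual `Allocator` trait from RFC 1398: the `BumpAlloc` trait, `Arena`, and `Handle` types are invented for the example. The point is that a cheap, duplicable handle over shared state lets every user hold a value that satisfies a `&mut self` method, while the real allocator state is owned once.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical stand-in for an allocator trait whose methods take
// &mut self, as in the RFC under discussion.
trait BumpAlloc {
    fn alloc(&mut self, size: usize) -> usize; // returns an offset
}

// The actual allocator state, owned exactly once.
struct Arena { next: usize }

impl BumpAlloc for Arena {
    fn alloc(&mut self, size: usize) -> usize {
        let off = self.next;
        self.next += size;
        off
    }
}

// A cheap, duplicable handle: interior mutability hides the shared
// state, so each user can hold its own `&mut`-capable value.
#[derive(Clone)]
struct Handle(Rc<RefCell<Arena>>);

impl BumpAlloc for Handle {
    fn alloc(&mut self, size: usize) -> usize {
        self.0.borrow_mut().alloc(size)
    }
}

fn main() {
    let shared = Rc::new(RefCell::new(Arena { next: 0 }));
    let mut a = Handle(shared.clone());
    let mut b = Handle(shared.clone());
    assert_eq!(a.alloc(8), 0);
    assert_eq!(b.alloc(4), 8); // both handles hit the same arena
}
```

This is exactly the indirection layer the comment above worries about: it works, but every global allocation would pay for it whether or not the allocator needs shared mutable state.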

@Ericson2314
Copy link
Contributor

Ericson2314 commented Apr 19, 2017

@rkruppe the handle thing is correct. By my convention the global allocator handles its own synchronization, but that's the only magic. Allocator + Default + Sized<Size=0> is a fine bound for a handle to the implicit global allocator (the third part is hypothetical and not necessary, just nice to have). In general it's fine for Allocator instances to be handles.
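A minimal sketch of the zero-size-handle idea described above, with all names (`Alloc`, `GlobalHandle`, the bump-pointer global) invented for illustration. The handle carries no state and is `Default`, so any container can conjure one; all synchronization lives in the global it forwards to.

```rust
use std::sync::Mutex;

// Hypothetical allocator trait for the sketch.
trait Alloc {
    fn alloc(&mut self, size: usize) -> usize; // returns an offset
}

// The global allocator state handles its own synchronization.
static GLOBAL: Mutex<usize> = Mutex::new(0); // bump pointer

// Zero-sized, Default handle to the implicit global allocator.
#[derive(Default, Clone, Copy)]
struct GlobalHandle;

impl Alloc for GlobalHandle {
    fn alloc(&mut self, size: usize) -> usize {
        let mut next = GLOBAL.lock().unwrap();
        let off = *next;
        *next += size;
        off
    }
}

fn main() {
    // The handle really is zero-sized: it costs nothing to store.
    assert_eq!(std::mem::size_of::<GlobalHandle>(), 0);
    let mut h = GlobalHandle::default();
    let first = h.alloc(16);
    let second = GlobalHandle::default().alloc(8);
    assert_eq!(second, first + 16); // both handles share the global state
}
```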

@hanna-kruppe
Copy link

hanna-kruppe commented Apr 19, 2017

@Ericson2314 I'm not sure I catch your drift. The global allocator (edit: by this I mean the type implementing Allocator, be it a handle or whatever) can't be a static as outlined before, a const seems rather inappropriate and risks the confusion outlined before, and static mut is unsafe. And what's the Default bound supposed to be for?

bors added a commit to rust-lang/rust that referenced this pull request Jul 3, 2017
rustc: Implement the #[global_allocator] attribute

This PR is an implementation of [RFC 1974] which specifies a new method of
defining a global allocator for a program. This obsoletes the old
`#![allocator]` attribute and also removes support for it.

[RFC 1974]: rust-lang/rfcs#1974

The new `#[global_allocator]` attribute solves many issues encountered with the
`#![allocator]` attribute such as composition and restrictions on the crate
graph itself. The compiler now has much more control over the ABI of the
allocator and how it's implemented, allowing much more freedom in terms of how
this feature is implemented.

cc #27389
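For reference, this is roughly what using the attribute looks like in the form that later stabilized (the `GlobalAlloc` trait, with `&self` methods, landed after this discussion, in Rust 1.28). The `Counting` wrapper type here is a made-up example that delegates to the system allocator:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Example allocator: counts allocations, delegates to the system one.
struct Counting {
    allocs: AtomicUsize,
}

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        self.allocs.fetch_add(1, Ordering::Relaxed);
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

// The attribute designates this static as the program-wide allocator.
#[global_allocator]
static GLOBAL: Counting = Counting { allocs: AtomicUsize::new(0) };

fn main() {
    let v: Vec<u8> = vec![1, 2, 3]; // heap allocation goes through GLOBAL
    drop(v);
    assert!(GLOBAL.allocs.load(Ordering::Relaxed) >= 1);
}
```

Note that the stabilized trait takes `&self`, not `&mut self`, which is how the static-vs-`&mut` tension discussed earlier in the thread was ultimately resolved: interior mutability (here an atomic) is the allocator's own responsibility.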
bors added a commit to rust-lang/rust that referenced this pull request Jul 6, 2017
@tarcieri
Copy link

tarcieri commented Jul 21, 2017

How does one use a global_allocator as the default_lib_allocator in no_std environments? This used to be fairly ergonomic (just define #![allocator] and you're done), but now I'm wondering if I need to do something like copy and paste all of this code:

https://github.com/rust-lang/rust/blob/master/src/libstd/heap.rs

cc @alexcrichton

@sfackler sfackler deleted the allocators-2 branch July 21, 2017 22:13
@alexcrichton
Copy link
Member

@tarcieri it's not currently intended that you can implement Heap ergonomically outside of libstd; for now you'd have to mirror libstd.

@tarcieri
Copy link

@alexcrichton yeah got that working today, was just curious if there was something better I could do.

pravic added a commit to pravic/winapi-kmd-rs that referenced this pull request Sep 13, 2017
@Centril Centril added A-allocation Proposals relating to allocation. A-traits-libstd Standard library trait related proposals & ideas A-types-libstd Proposals & ideas introducing new types to the standard library. A-attributes Proposals relating to attributes A-impls-libstd Standard library implementations related proposals. labels Nov 23, 2018