Consider passing the Task that is driving the future to Future::poll and renaming the function #129
FWIW, we actually started with a design that passed an explicit task argument to poll. I do agree that thread-local storage has downsides, in terms of being less explicit and, in our case, opening you up to runtime errors. However, I think that once you understand the basics of the library, this kind of mistake is very rare and will very reliably produce panics (so it shouldn't be hard to debug). Regarding the naming, I can see what you mean, but none of the suggested names evokes "tasks" for me in a way that the current name doesn't.
I think implementing the Read trait this way might not be the optimal design (and it might also be broken currently if it's called outside of a Future's poll function, although I suppose that's easily fixed by doing nothing when there is no current task). A sounder alternative is to implement the Read trait, if desired, in a way that is purely nonblocking and thus does not access the current task. End users would normally use APIs that return futures (and that thus aren't Read or similar traits), while the internal implementation of those APIs can work with internal functions that take a task argument instead of the Read trait.
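The split proposed above can be sketched as follows. This is a minimal, hypothetical illustration, not the real tokio API: `Task`, `ToyStream`, and `poll_read` are all stand-ins invented here. The public `Read` impl stays purely nonblocking, while an internal, task-aware entry point takes the task explicitly and registers the wakeup itself.

```rust
use std::cell::Cell;
use std::collections::VecDeque;
use std::io::{self, Read};

// Hypothetical stand-in for a task handle; the real library's `Task`
// does much more. Here it only records that a wakeup was registered.
#[derive(Default)]
struct Task {
    registered: Cell<bool>,
}

impl Task {
    fn register(&self) {
        self.registered.set(true);
    }
}

// A toy stream backed by an in-memory queue of bytes.
struct ToyStream {
    data: VecDeque<u8>,
}

impl ToyStream {
    // Internal, task-aware entry point: on WouldBlock it registers the
    // task explicitly -- no thread-local lookup is involved.
    fn poll_read(&mut self, task: &Task, buf: &mut [u8]) -> io::Result<usize> {
        match self.read(buf) {
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => {
                task.register();
                Err(io::Error::from(io::ErrorKind::WouldBlock))
            }
            other => other,
        }
    }
}

// The public `Read` impl is *purely* nonblocking: it never touches a task.
impl Read for ToyStream {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.data.is_empty() {
            return Err(io::Error::from(io::ErrorKind::WouldBlock));
        }
        let n = buf.len().min(self.data.len());
        for slot in buf.iter_mut().take(n) {
            *slot = self.data.pop_front().unwrap();
        }
        Ok(n)
    }
}

fn main() {
    let task = Task::default();
    let mut s = ToyStream { data: VecDeque::new() };
    let mut buf = [0u8; 4];
    // Empty stream: poll_read reports WouldBlock and registers the task.
    assert!(s.poll_read(&task, &mut buf).is_err());
    assert!(task.registered.get());
    s.data.extend([1u8, 2, 3]);
    assert_eq!(s.poll_read(&task, &mut buf).unwrap(), 3);
}
```

Under this split, code that only has the `Read` impl can never silently depend on an ambient task, which is the soundness property argued for above.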
The key downside of that alternative is that the work of managing callbacks to the task is no longer automatically handled by the core I/O abstractions, as it is today. In other words, with today's setup there is no way to create a "lost wakeup" when working with primitive I/O, because the interaction with the task is handled automatically. Moving to this model was a big part of what changed when merging futures-mio into tokio-core, and I think it was a big improvement.
I think a more plausible alternative would be to take an explicit task argument everywhere. That approach has its own downsides, though.
But aren't end users supposed to normally use APIs that return futures, which already implicitly handle everything correctly? This is how other future-based systems work; for instance, in .NET, a TCP server reads data by calling Stream.ReadAsync, which returns a future. The Read trait could simply not be implemented at all; if the only available APIs either return futures or require passing the task explicitly, lost wakeups become impossible.

Being able to use standard traits with implicit wakeups seems to be unsound in general, and thus problematic in nontrivial code. For example, if someone sends the I/O stream to another thread (perhaps due to an automatic parallelization framework, or because they are using an actor-like model as Servo does), then the wakeup will not be registered with the proper task, since the other thread cannot possibly know the right task unless it is passed explicitly. Of course it's possible to document the pitfalls and accept the potential for run-time issues in case of misuse, but that doesn't seem ideal.
It depends a little bit on which "end users" you have in mind. The current I/O integration does not expose the bottom-level reading/writing facilities as futures. That's because we don't want to impose ownership transfer of buffers.

Take reading as an example. You'll need a buffer to read into. If you represent this as a future of the read being complete, then you have to hand over an owned buffer for the whole time you're waiting for a read to occur. That in turn means you may need a large number of such buffers allocated and in flight if you have a large number of concurrent reads. By contrast, a nonblocking Read-style call only borrows a buffer at the moment of the read.

All of the above considerations really matter when you think about something like a proxy server, which ideally can serve any number of clients with just a single buffer. See the example proxy server where we put this into action.
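The buffer-ownership argument can be made concrete with a sketch. The types here (`ReadFuture`, `PollRead`, `Conn`) are hypothetical stand-ins, not the real tokio API: a future-style read must own its buffer for the entire pending wait, while a poll-style read borrows one only during the call, so one scratch buffer can serve every ready connection.

```rust
// Future style: the buffer is moved in and handed back on completion,
// so N pending reads pin N owned buffers.
#[allow(dead_code)]
struct ReadFuture {
    buf: Vec<u8>, // owned for the entire lifetime of the pending read
}

// Poll style: the buffer is borrowed only while `poll_read` runs.
trait PollRead {
    fn poll_read(&mut self, buf: &mut [u8]) -> Option<usize>; // None = not ready
}

struct Conn {
    pending: Vec<u8>,
}

impl PollRead for Conn {
    fn poll_read(&mut self, buf: &mut [u8]) -> Option<usize> {
        if self.pending.is_empty() {
            return None;
        }
        let n = buf.len().min(self.pending.len());
        buf[..n].copy_from_slice(&self.pending[..n]);
        self.pending.drain(..n);
        Some(n)
    }
}

fn main() {
    // One scratch buffer shared across every ready connection,
    // mirroring the proxy-server scenario described above.
    let mut scratch = vec![0u8; 4096];
    let mut conns = vec![
        Conn { pending: b"hello".to_vec() },
        Conn { pending: Vec::new() }, // not ready: no buffer stays in flight
    ];
    let served: usize = conns
        .iter_mut()
        .filter_map(|c| c.poll_read(&mut scratch))
        .sum();
    assert_eq!(served, 5);
}
```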
In the case you are describing, sending the I/O stream to another task is a supported use case. When an I/O stream is sent to another task, the original task will receive a spurious wakeup, but that is fine. The new task will receive the I/O stream, attempt the I/O again, and register itself for wakeup.

If you send a future to a thread that is not managed by a future task, then currently there is a panic; however, I think a better alternative would be returning an error instead of panicking.
What I'm talking about is complex code that implements Future::poll() by spawning several scoped threads, or that otherwise sends work to be executed synchronously on a thread pool (perhaps implicitly, due to using an auto-parallelization library like rayon), and then accesses Tokio's TcpStream on those separate threads. In this case, the read/write calls should register wakeups on the task driving Future::poll, but since they run on separate threads, the TLS CURRENT_TASK variable will not be properly set, and either the function will panic or the wakeups will be silently missed. Explicitly passing the task would either make the code work correctly or turn those issues into compilation errors.

Propagating TLS variables can fix some of these cases, but I don't think it can work in general, and there can be issues even without threading: the Future::poll implementation for future A, driven by task A, might create a closure that ends up executed inside the Future::poll implementation for future B, driven by task B. The wakeups from that closure would then be registered for task B when they should have been registered for task A, because the task was not captured in the closure.
I wrote this to try to summarize and clarify the arguments in the thread; it doesn't add much to the previous discussion, but it might reframe it in a more orderly and easily followed form.

I think the first design question is whether libraries and programs that aren't directly interfacing with the OS should implement the Future trait themselves, or should instead only or mainly create futures using combinators like map/and_then/etc. or the "oneshot" facility. The SOCKS5 example actually uses both coding styles: it implements the handshake via future-based combinators (starting from the "read_exact" / "write_all" future-based I/O APIs in Tokio), but the data transfer with a custom Future::poll implementation using the Read/Write traits on TcpStream.

If users are not supposed to implement Future themselves, as in other designs such as .NET and JavaScript promises, then futures are only created via combinators on future-returning APIs, or the user performs the "wakeup" itself using oneshot, and the task-handling concerns are not present. BTW, it is possible to have a future-returning I/O API that doesn't keep buffers in flight, for example by returning a future that resolves when the stream becomes readable, without taking ownership of a buffer.

If users are instead supposed to implement Future themselves (which might be needed in Rust to ensure optimal performance, which is not a goal of other GC-based future designs), then it becomes useful to provide a low-level API that is non-blocking but not future-based, plus a facility to register interest in unparking a task, or both combined in a single API, for use in implementations of Future::poll(). The current design of futures-rs/tokio-io provides such a facility by implementing the Read/Write traits in a "special" way: a normal non-blocking implementation that also "magically" fetches the current task from TLS and potentially parks it, having it unparked when the stream becomes ready again.

This means that wakeups cannot be accidentally missed in simple code but, as far as I can tell, can be missed in complex code that passes the I/O stream to other threads or that creates, for example, closures that do I/O and are then called by the Future::poll() of other, unrelated futures/tasks. The possible change explored in this issue, assuming that letting users implement Future directly this way is useful, is to make the wakeup registration explicit: add a task argument to Future::poll(), and either change the Read/Write implementations on Tokio's TcpStream and similar types to be purely non-blocking and provide separate ways to register the task to be unparked, or provide read/write methods that mimic the Read/Write traits but take a task argument (the latter avoids potential missed wakeups, but is also less flexible and might cause unnecessary wakeups). If Read/Write is not implemented, there is a loss of interoperation with other code using the standard I/O traits, but such code might assume those traits are implemented in a blocking way, as they would be on a file or blocking socket, and might not work anyway on a non-blocking implementation, which any futures framework without full "green threads" must use (regardless of whether it interacts with task parking).
Just a quick note:
I would not say this is correct. More often than not, I have been able to reuse existing code that assumes the standard Read/Write traits.
To clarify, implementing …
I just realized that there is a third option for read/write that might be the best: continue to implement the traditional Read/Write traits as they are implemented now in Tokio, but instead of implementing them directly on TcpStream, implement them on a struct containing a reference to the TcpStream and to the task. In other words, an implementation of Future::poll() would construct such a struct from the stream and its task and do its I/O through that.

This would remove the need to use TLS and make task passing explicit, while otherwise keeping the current code unchanged. This API also allows the implementation to defer some of the wakeup-registration work to the destructor of the struct implementing Read, or to the end of the closure, in case that allows better performance.
I tried implementing my last suggestion and I think it might not be the best, due to an unforeseen issue: you can't reuse BufReader/BufWriter and similar stateful Read/Write adapters, because implementing Read for a (TcpStream, task) pair requires recreating the Read implementation on every call to poll, while BufReader needs a persistent Read instance to work.

I have a new alternative idea: have TcpStream and friends return a token to register a wakeup as part of the error in would-block cases. I think this is possible without changes to libstd, by using the Error::new constructor and then using Any/reflection to try to cast the custom error to a trait object implementing a "WakeupSource" trait, with a method that would then be passed the result of task.park(). Since errors are already supposed to propagate up the call chain, this allows the caller either to use the Read/Write implementation as a pure nonblocking implementation or, if desired, to match on the error, do the downcast, and register the wakeup. Unfortunately, it requires a memory allocation for the boxed Error trait object every time waiting needs to happen, and it also effectively requires a persistent Arc to reference from said trait object, since it needs to be Send+Sync.

It's also possible to extend this paradigm to the Future::poll method by adding a wakeup-source field to the Async::NotReady variant and using that instead of having Future::poll register wakeups directly, or, even better, by removing the Async datatype and instead using Result with a WouldBlock error to signify "not ready", just like the I/O traits.
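A minimal sketch of the "wakeup token in the error" idea, using only libstd. All names here (`WakeupSource`, `WouldBlockErr`, `read_not_ready`) are made up for illustration; the point is that `io::Error::new` can carry a custom payload, and a task-aware caller can recover it with `get_ref` plus `downcast_ref`, while an oblivious caller just sees an ordinary `WouldBlock` error.

```rust
use std::error::Error;
use std::fmt;
use std::io;
use std::sync::{Arc, Mutex};

// Hypothetical wakeup registry; stands in for "pass it task.park()".
#[derive(Default)]
struct WakeupSource {
    waiters: Mutex<Vec<String>>,
}

impl WakeupSource {
    fn register(&self, who: &str) {
        self.waiters.lock().unwrap().push(who.to_string());
    }
}

// The custom payload carried inside the io::Error. It must be
// Send + Sync + 'static, hence the persistent Arc noted above.
#[derive(Clone)]
struct WouldBlockErr(Arc<WakeupSource>);

impl fmt::Debug for WouldBlockErr {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "WouldBlockErr")
    }
}

impl fmt::Display for WouldBlockErr {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "operation would block")
    }
}

impl Error for WouldBlockErr {}

// What a not-ready read would return: boxing the payload allocates on
// every wait, which is the downside mentioned above.
fn read_not_ready(source: &Arc<WakeupSource>) -> io::Error {
    io::Error::new(io::ErrorKind::WouldBlock, WouldBlockErr(source.clone()))
}

fn main() {
    let source = Arc::new(WakeupSource::default());
    let err = read_not_ready(&source);
    // A task-aware caller downcasts the payload and registers a wakeup;
    // other callers can propagate `err` as a plain WouldBlock.
    if let Some(wb) = err.get_ref().and_then(|e| e.downcast_ref::<WouldBlockErr>()) {
        wb.0.register("task-1");
    }
    assert_eq!(source.waiters.lock().unwrap().len(), 1);
}
```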
I would prefer taking an explicit task argument everywhere. Using scoped thread locals is confusing for people new to the library, and takes control away from the user; it's the sort of thing I'd expect from a framework rather than a library. Doing this would mean that the compiler can catch at compile time "obvious" places where no unpark event will be triggered (when the task parameter goes unused), and specialised lints could be even more intelligent.

Re: the argument about Read/Write traits: I actually think it's quite surprising, and in some sense broken, that the Read/Write implementations on tokio objects automatically register wakeups. Most Read/Write implementations do not do that, so given only a Read/Write implementation you cannot tell whether wakeups will be registered.

Far better would be to introduce new traits which take a task as a parameter and encapsulate both reading and registering a wakeup event. The task parameter is self-documenting and eliminates any possibility of confusion. It also allows code to be generic over these new traits. Using a task parameter also opens up some other possibilities: one could implement custom futures-like objects that take a different type of task parameter.

Finally, there are other concerns with using thread locals.
I hadn't seen this before, but an issue for the disparity between the tokio/std Read/Write implementations already exists: tokio-rs/tokio-core#61
Btw, is there documentation or an example on how to implement a custom Future?
@kkimdev yes you can find detailed docs and such at https://tokio.rs |
@alexcrichton yeah, I've read that. It's a very informative doc, though I think it would also be good to have a section with a concrete example of implementing a custom Future. I'd have to agree with @rjuse that the current Future trait API is unintuitive regarding its relationship with tasks.
@kkimdev ah yes I believe we don't have a page on specifically that, no. Would be good to add one! Want to open an issue at https://github.com/tokio-rs/website? |
I'd prefer explicit arguments myself, too. I don't mind not having them if there are strong reasons, but I'm not convinced right now. Maybe we should try harder at designing it.
I've opened a pull request (#374) to make the task parameter explicit.

My impression is that many people who try to learn futures-rs get tripped up by the deceptively simple interface of Future::poll. Indeed, when making the changes in #374, I got tripped up on this myself. It seems that the main argument for keeping the task implicit is interoperability with the standard Read/Write traits. We could instead define traits like:

```rust
trait PollRead {
    fn poll_read(&mut self, task: &Task, buf: &mut [u8]) -> Poll<usize, std::io::Error>;
}

trait PollWrite {
    fn poll_write(&mut self, task: &Task, buf: &[u8]) -> Poll<usize, std::io::Error>;
}
```
The bigger problem is wanting to hook Tokio into libraries that currently work with the standard Read/Write traits. That said, there may be some other avenue for addressing this problem. @dwrensha, you say that making the task explicit would be an improvement; what concrete benefits do you see?
@aturon the benefit I can see is that we don't need the thread-local storage for the current task.
@Diggsey I agree with that reaction in general (and argued privately with @carllerche along similar lines), but I do think that for the special case of transports, the actual I/O piece tends to naturally fit both models. But again, there may be some way of more explicitly factoring the library to make this clearer, so that you don't have access to things like the implicit current task where they don't belong.
The killer feature, to me, for why we switched to a thread local was what @carllerche mentioned: all existing I/O continues to work. If we were only implementing I/O types from scratch ourselves, this wouldn't matter so much.

My favorite example is TLS. We've got crates like native-tls, openssl, etc., and we surely don't want to reimplement any of them. All of these crates operate with generic Read + Write streams.

My worry with a change such as passing the task around explicitly is that we'd have traits like @dwrensha mentioned above, but they'd be incompatible with all the existing code written against the standard traits. I personally feel that to evaluate a change as proposed in this issue we'll have to change more than just the signature of poll.

In general, though, I have really enjoyed the ergonomics of not passing tasks around. I do sympathize with @dwrensha's original point that it feels quite magical. My hope was that we could have documentation to combat this mentality, but it may either be too fundamental or mean that the docs aren't quite up to the task.

I'd draw an analogy with synchronous code, though, for comparison. During early init and late exit of libc you surely can't use pthread functions, but you can't tell that from the function signatures. Granted, most people don't write code in those areas, but statically solving this problem would involve changing every function to take a handle to some "current thread" pointer: early-init and late-exit functions wouldn't have access to it, while all other code would. Clearly this sounds super painful, and I fear that's what would happen with futures: basically every function would need a task parameter.

@dwrensha, you mention that the library gets much nicer passing tasks explicitly; can you expand on that?
I don't see why re-implementation would be necessary; I see several possible scenarios here.
@Diggsey The case is specifically that native-tls decorates an arbitrary stream implementing Read + Write. The insight is that most code doesn't care about the details that @Diggsey listed above me; it just defers to a lower-level type. All the other points that @alexcrichton listed hold as well.
As far as I understand (I'm not sure), the problem is that with explicit handles, existing libraries written against the standard traits couldn't be reused directly. Also, is it so difficult to re-factor other libraries? Isn't there a better abstraction that is usable with both sync and async? If I misunderstood something, please point me at concrete code.
@Kixunil
Yes, we do not control libraries like native-tls, and I have no intention of redoing all the work that @sfackler has put into those libs. Also, native-tls shouldn't have to depend on futures-rs.
I'm not totally sure what exactly is being discussed, but TLS crates definitely work with nonblocking sockets; the async Hyper branch did this directly for a while. It's a little complex, since a TLS write can require reads and vice versa, but that was dealt with by wrapping the raw socket in one that records whether the last call was a read or a write. When the TLS stream returns a WouldBlock, the wrapper then tells you whether to wait for the socket to become readable or writable.
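The record-the-last-call trick described above can be sketched in a few lines. The types here are toy stand-ins (an in-memory `Cursor` plays the raw socket, and `Recording` is a hypothetical name): the wrapper notes whether the layer above last attempted a read or a write, which is exactly the information needed to map a `WouldBlock` to "wait for readable" vs "wait for writable".

```rust
use std::io::{self, Read, Write};

#[derive(Clone, Copy, Debug, PartialEq)]
enum LastOp {
    None,
    Read,
    Write,
}

// Wraps the raw socket and records the direction of the last attempt.
struct Recording<S> {
    inner: S,
    last: LastOp,
}

impl<S: Read> Read for Recording<S> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        self.last = LastOp::Read;
        self.inner.read(buf)
    }
}

impl<S: Write> Write for Recording<S> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.last = LastOp::Write;
        self.inner.write(buf)
    }

    fn flush(&mut self) -> io::Result<()> {
        self.inner.flush()
    }
}

fn main() {
    // An in-memory "socket": a Cursor over a Vec supports Read and Write.
    let mut sock = Recording {
        inner: io::Cursor::new(vec![1u8, 2, 3]),
        last: LastOp::None,
    };
    let mut buf = [0u8; 2];
    sock.read(&mut buf).unwrap();
    assert_eq!(sock.last, LastOp::Read);
    sock.write(&[9]).unwrap();
    assert_eq!(sock.last, LastOp::Write);
}
```

A TLS stream built over `Recording<S>` could then consult `last` on `WouldBlock` to decide which readiness event to wait for.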
@carllerche I'm worried by that. If a type's Read/Write implementations behave specially, I believe there should be at least a marker trait for it. I don't think depending on some small crate with just (marker) traits is a huge issue. Also, the fact that we don't control other people's crates doesn't mean that others are unfriendly and won't accept our PRs; and if there is a disagreement, there is still the possibility of a fork as a last resort.
Basically, what @sfackler said: if a library can be used with a nonblocking socket, it can be used with Tokio.
@Diggsey there's a significant difference, though. Today we know precisely whether a library wants readability or writability; if instead we just get back a plain "I need something else to happen", we don't know precisely what's desired. I initially had to implement this for the three TLS libraries and it was a huge PITA. "Falling out" of trait impls is far more ergonomic in my experience.
@alexcrichton That's a fair point about being easier to wrap, but if we have the primitives available in futures-rs, then library authors can implement the task-aware traits themselves. I also think that saying "we know precisely if a library wants readability or writability" is not quite true: we're inferring that from what operations the library actually performs on the stream, which is OK in this case only because superfluous unpark events are allowed.

The most important reason I don't like the current situation is still the "abuse" of the standard Read/Write traits. One option is to continue doing this internally, to simplify creating futures-compatible wrappers of libraries, but use a task parameter in futures-rs itself and have separate wrapper types for interoperating with the standard traits.
Would a wrapper like this work?

```rust
pub struct PollReader<'a, T> {
    pub task: &'a Task,
    pub inner: T,
}

impl<'a, T: PollRead> io::Read for PollReader<'a, T> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        match self.inner.poll_read(self.task, buf) {
            Ok(Async::Ready(x)) => Ok(x),
            Ok(Async::NotReady) => Err(io::Error::from(io::ErrorKind::WouldBlock)),
            Err(e) => Err(e),
        }
    }
}
```
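To see how such a wrapper behaves, here is a self-contained version with stub stand-ins for `Task`, `Async`, and `PollRead` (all hypothetical; the real futures-rs types carry more machinery). A toy `PollRead` source is wrapped so that ordinary `io::Read` callers see `NotReady` as a `WouldBlock` error.

```rust
use std::io::{self, Read};

// Stub stand-ins so the wrapper can be exercised on its own.
struct Task;

enum Async<T> {
    Ready(T),
    NotReady,
}

type Poll<T, E> = Result<Async<T>, E>;

trait PollRead {
    fn poll_read(&mut self, task: &Task, buf: &mut [u8]) -> Poll<usize, io::Error>;
}

pub struct PollReader<'a, T> {
    pub task: &'a Task,
    pub inner: T,
}

impl<'a, T: PollRead> Read for PollReader<'a, T> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        match self.inner.poll_read(self.task, buf) {
            Ok(Async::Ready(n)) => Ok(n),
            Ok(Async::NotReady) => Err(io::ErrorKind::WouldBlock.into()),
            Err(e) => Err(e),
        }
    }
}

// A toy source that yields one chunk and is then not ready.
struct Chunk(Option<Vec<u8>>);

impl PollRead for Chunk {
    fn poll_read(&mut self, _task: &Task, buf: &mut [u8]) -> Poll<usize, io::Error> {
        match self.0.take() {
            Some(data) => {
                let n = buf.len().min(data.len());
                buf[..n].copy_from_slice(&data[..n]);
                Ok(Async::Ready(n))
            }
            None => Ok(Async::NotReady),
        }
    }
}

fn main() {
    let task = Task;
    let mut r = PollReader { task: &task, inner: Chunk(Some(b"abc".to_vec())) };
    let mut buf = [0u8; 8];
    assert_eq!(r.read(&mut buf).unwrap(), 3);
    assert_eq!(r.read(&mut buf).unwrap_err().kind(), io::ErrorKind::WouldBlock);
}
```

Note that, per the earlier objection in this thread, the wrapper would still be rebuilt on every poll, so stateful adapters like BufReader remain awkward to layer on top.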
The proposed task overhaul in #436 does a lot of contortion around the implicit thread-local task.
I don't want to add just another "me too!" comment, but making the task argument explicit would have solved almost all of my difficulties in learning tokio/futures. I wrote up the details of what I found hard here: https://www.reddit.com/r/rust/comments/76t442/ In fact, I wound up implementing an alternate "extended" API which makes the task an explicit argument to all methods that require it.
I don't think it is analogous. When you're in code, you're in a thread. It may be the only thread in the process, but it's still a thread. When you're dealing with a program that uses futures, you may or may not be in a task, and when you read futures code as a beginner, it isn't clear at all what's going on.

I don't think interoperability with blocking I/O is a good idea. To me, the two are different universes, and not drawing a clear boundary between them does not seem like a good idea. Now, I might not be giving enough weight to the benefits here, since I haven't done any of the work of implementing everything. If interoperability with standard I/O is really necessary, then a function with a scary name that provides the same functionality as today's implicit behavior would do.
Maybe I'm being naive, but couldn't a future simply hand the executor some kind of channel to be notified on, instead of a task?
@gurry Not really, at least not without quite a bit of overhead: futures would have to allocate a new channel each time they want to block, and the executor would then have to somehow select across all the channels that have ever been returned to it. An efficient implementation requires that all futures send their notifications to something equivalent to a single channel, so that the executor need only wait on that one channel and need not allocate a new one each time.
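The single-channel shape described above can be sketched with std's mpsc (the task ids and `run_demo` name are invented for illustration): every "task" holds a clone of one shared `Sender` and sends its own id when it becomes ready, so the executor waits on exactly one `Receiver` instead of selecting over a channel per blocked future.

```rust
use std::sync::mpsc;

// Returns the order in which "tasks" reported readiness over the one
// shared wakeup channel.
fn run_demo() -> Vec<usize> {
    let (wake_tx, wake_rx) = mpsc::channel::<usize>();

    // Three "tasks", each holding a clone of the single shared sender.
    let handles: Vec<_> = (0..3).map(|id| (id, wake_tx.clone())).collect();

    // Tasks 2 and 0 become ready, in that order.
    handles[2].1.send(handles[2].0).unwrap();
    handles[0].1.send(handles[0].0).unwrap();

    // Drop all senders so the receiver's iterator terminates.
    drop(wake_tx);
    drop(handles);

    // The executor drains one receiver and learns exactly which tasks to poll.
    wake_rx.iter().collect()
}

fn main() {
    assert_eq!(run_demo(), vec![2, 0]);
}
```

This is essentially what a task handle abstracts: a pre-allocated route back to the executor's single wakeup queue, with no per-wait allocation.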
@Diggsey Yup, makes sense. Classic performance versus purity-of-design conflict.
So what is the status of this? I am planning on using async I/O and don't really want to build on the thread-local task API.
If you have opinions on this issue, please comment on the RFC devoted to it. |
This has now been accepted as a part of 0.2! |
Currently, Future::poll seems to be expected to call task::park, which fetches the current task from TLS and panics if there is no task there.
This results in an unintuitive API (it's not clear at first glance that poll()'s expected interface and implementation are related to tasks) and a potential run-time failure that could instead be caught at compile time.
So my suggestion is to instead pass the task driving the Future explicitly to Future::poll as an additional function argument, either as a Task reference, a closure calling task::park() (if that's enough), or a similar mechanism, instead of storing it in the TLS variable CURRENT_TASK.
Also, "poll" is a confusing name, since it creates the expectation that it is a function that anyone can call to get the value of the future if it has already completed, but it is in fact an "internal" function that drives future execution instead and currently even panics if called outside a task.
Something like "drive", "execute", "run", "run_next_step" or similar would be a better name.
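The proposed shape can be sketched as follows. These are stub types invented for illustration, not the real futures-rs API: `poll` receives the driving task explicitly instead of reaching into thread-local storage, so an implementation that forgets to use the task is visible at a glance (and to lints).

```rust
use std::cell::Cell;

// Hypothetical task handle; here it only records that a wakeup was arranged.
struct Task {
    woken: Cell<bool>,
}

impl Task {
    // Stand-in for registering a wakeup with the driving task.
    fn notify(&self) {
        self.woken.set(true);
    }
}

enum Async<T> {
    Ready(T),
    NotReady,
}

// The proposal: the task is an explicit argument, not ambient TLS state.
trait Future {
    type Item;
    fn poll(&mut self, task: &Task) -> Async<Self::Item>;
}

// A future that is not ready on its first poll and registers the task.
struct CountDown(u32);

impl Future for CountDown {
    type Item = &'static str;

    fn poll(&mut self, task: &Task) -> Async<&'static str> {
        if self.0 == 0 {
            Async::Ready("done")
        } else {
            self.0 -= 1;
            task.notify(); // explicit: no TLS lookup, no possible panic
            Async::NotReady
        }
    }
}

fn main() {
    let task = Task { woken: Cell::new(false) };
    let mut f = CountDown(1);
    assert!(matches!(f.poll(&task), Async::NotReady));
    assert!(task.woken.get());
    assert!(matches!(f.poll(&task), Async::Ready("done")));
}
```

Calling `poll` outside a task then becomes a missing-argument compile error rather than a run-time panic, which is the core of the suggestion.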