
Blocking userfaultfd() kernel-fault handling

By Jonathan Corbet
May 8, 2020
The userfaultfd() system call is a bit of a strange beast; it allows user space to take responsibility for the handling of page faults, which is normally a quintessential kernel task. It is thus perhaps not surprising that it has turned out to have some utility for those who would attack the kernel's security as well. A recent patch set from Daniel Colascione is small, but it makes a significant change that can help block at least one sort of attack using userfaultfd().

A call to userfaultfd() returns a file descriptor that can be used for control over memory management. By making a set of ioctl() calls, a user-space process can take responsibility for handling page faults in specific ranges of its address space. Thereafter, a page fault within that range will generate an event that can be read from the file descriptor; the process can read the event and take whatever action is necessary to resolve the fault. It should then write a response describing that resolution to the same file descriptor, after which the faulting code will resume execution.
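
In rough outline, using the facility looks like the sketch below, which loosely follows the example in the userfaultfd(2) man page: obtain the descriptor, perform the UFFDIO_API handshake, register a region, then resolve faults from a second thread with UFFDIO_COPY. Error handling is mostly omitted, and the page contents filled in here are just a placeholder.

    /* Minimal userfaultfd() sketch, loosely following the userfaultfd(2)
     * man-page example; error handling is mostly omitted. */
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long page_size;

    static void *fault_handler(void *arg)
    {
        int uffd = (int)(long)arg;
        char *page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct uffd_msg msg;

        for (;;) {
            /* read() blocks until the kernel reports a pending fault. */
            if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
                continue;
            if (msg.event != UFFD_EVENT_PAGEFAULT)
                continue;

            memset(page, 'A', page_size);   /* produce the page contents somehow */

            struct uffdio_copy copy = {
                .dst = msg.arg.pagefault.address & ~(page_size - 1),
                .src = (unsigned long)page,
                .len = page_size,
            };
            /* Writing the resolution wakes the thread that faulted. */
            ioctl(uffd, UFFDIO_COPY, &copy);
        }
        return NULL;
    }

    int main(void)
    {
        page_size = sysconf(_SC_PAGE_SIZE);

        int uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
        struct uffdio_api api = { .api = UFFD_API };
        ioctl(uffd, UFFDIO_API, &api);               /* handshake with the kernel */

        char *region = mmap(NULL, 4 * page_size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct uffdio_register reg = {
            .range = { .start = (unsigned long)region, .len = 4 * page_size },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);          /* take over missing-page faults */

        pthread_t thr;
        pthread_create(&thr, NULL, fault_handler, (void *)(long)uffd);

        /* The first touch of each page produces an event for the handler. */
        printf("first byte: %c\n", region[0]);
        return 0;
    }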

This facility is normally intended to be used within a multi-threaded process, where one thread takes on the fault-handling task. There are a number of use cases for userfaultfd(); one of the original cases was handling live migration of a process from one machine to another. The process can be moved and restarted on the new system while leaving most of its memory behind; the pages it needs immediately can then be demand-faulted across the net, driven by userfaultfd() events. The result is less downtime while the process is being moved.

Since the kernel waits for a response from the user-space handler to resolve a fault, page faults can cause an indefinite delay in the execution of the affected process. That is always the case, of course; for example, a process generating a fault on memory backed by a file somewhere else on the network will come to an immediate halt for an unknown period of time. There is a difference with userfaultfd(), though: the time it takes to resolve the fault is under the process's direct control.

Normally, there are no problems that can result from that control; the process is simply slowing itself down, after all. But occasionally page faults will be generated in the kernel. Imagine, for example, just about any system call that results in the kernel accessing user-space memory. That can happen as the result of I/O, from a copy_from_user() call, or any of a number of other ways. Whenever the kernel accesses user-space memory, it has to be prepared for the relevant page(s) to not be present; the kernel has to incur and handle a page fault, in other words.

An attacker can take advantage of this behavior to cause execution in the kernel to block at a known point for a period of time that is under said attacker's control. In particular, the attacker can use userfaultfd() to take control of a specific range of memory; they then ensure that none of the pages in that range are resident in RAM. When the attacker makes a system call that tries to access memory in that range, they will get a userfaultfd() event helpfully telling them that the kernel has blocked and is waiting for that page.
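
As a purely illustrative sketch of that scenario, assume a region that has been registered as in the example above, with a handler thread that deliberately sits on the fault instead of answering it; some_fd, region, and the function name below are hypothetical:

    #include <sys/mman.h>
    #include <unistd.h>

    /* Illustrative only: 'region' is assumed to be registered with a
     * userfaultfd() descriptor whose handler thread delays its
     * UFFDIO_COPY reply on purpose. */
    static void park_kernel_in_read(int some_fd, char *region, long page_size)
    {
        /* Make sure the page is not resident, so the next access must fault. */
        madvise(region, page_size, MADV_DONTNEED);

        /* read() makes the kernel write into 'region'; that access faults,
         * and this thread now waits inside the system call until the
         * user-space handler chooses to resolve the fault. */
        read(some_fd, region, page_size);
    }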

Stopping the kernel in this way is useful if one is trying to take advantage of some sort of race condition or other issue. Assume, for example, that an attacker has identified a potential time-of-check-to-time-of-use vulnerability, where the ability to change a value in memory somewhere at the right time could cause the kernel to carry out some ill-advised action. Exploiting such a vulnerability requires hitting the window of time between when the kernel checks a value and when it acts on it; that window can be quite narrow. If the kernel can be made to block while that window is open, though, the attacker suddenly has all the time in the world. That can make a difficult exploit much easier.

Attackers can be deprived of this useful tool by disallowing the handling in user space of faults incurred in kernel space. Simply changing the rules that way would almost certainly break existing code, though, so something else needs to be done. Colascione's patch addresses this problem in two steps, the first of which is to add a new flag (UFFD_USER_MODE_ONLY) for userfaultfd() which states that the resulting file descriptor can only be used for handling faults incurred in user space. Any descriptor created with this flag thus cannot be used for the sorts of attacks described above.

One could try politely asking attackers to add UFFD_USER_MODE_ONLY to their userfaultfd() calls, but we are dealing with people who are not known for their observance of polite requests. So the patch set adds a new sysctl knob, concisely called vm/unprivileged_userfaultfd_user_mode_only, to make the request somewhat less polite; if it is set to one, userfaultfd() calls from unprivileged users will fail if that flag is not provided. At that point, kernel-space fault handling will no longer be available to attackers attempting to gain root access. The default value has to be zero, though, to avoid breaking existing users of userfaultfd().
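
For code that only ever handles its own user-space faults, adopting the flag would be a one-line change. Here is a hedged sketch; UFFD_USER_MODE_ONLY is taken from the proposed patch (the fallback definition and the open_uffd() helper are illustrative), and the EINVAL path covers kernels that do not recognize the flag:

    #include <errno.h>
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* UFFD_USER_MODE_ONLY comes from the proposed patch; define it here
     * only so that this sketch compiles against older headers. */
    #ifndef UFFD_USER_MODE_ONLY
    #define UFFD_USER_MODE_ONLY 1
    #endif

    static int open_uffd(void)
    {
        /* Ask for the restricted mode first; it is all that user-space-only
         * fault handling needs, and it keeps working if the new sysctl is
         * turned on. */
        int uffd = syscall(SYS_userfaultfd, UFFD_USER_MODE_ONLY | O_CLOEXEC);
        if (uffd >= 0)
            return uffd;

        /* Kernels without the patch reject the unknown flag with EINVAL;
         * fall back to the traditional call there. */
        if (errno == EINVAL)
            uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
        return uffd;
    }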

The only response to this patch set so far came from Peter Xu, who pointed out that the existing vm/unprivileged_userfaultfd knob could be extended instead. That knob can be used to disallow userfaultfd() entirely for unprivileged processes by setting it to zero, though its default value (one) allows such access. Xu suggested that setting it to two would allow unprivileged use, but for user-space faults only. This approach saves adding a new knob.

Beyond that, the suggested change seems uncontroversial. It's a small patch that has no risk of breaking things for existing users, so there does not appear to be any real reason to keep it out.

Index entries for this article
Kernel: Security/Kernel hardening
Kernel: userfaultfd()



Blocking userfaultfd() kernel-fault handling

Posted May 8, 2020 22:57 UTC (Fri) by dvdeug (subscriber, #10998) [Link] (18 responses)

I see the argument for it, but it's yet another obscure option. By default it won't be turned on, so it won't provide any defense for most users. Considering "the existing vm/unprivileged_userfaultfd knob", is a feature that hobbles but doesn't disable userfaultfd really worth the cost?

Blocking userfaultfd() kernel-fault handling

Posted May 8, 2020 23:25 UTC (Fri) by Paf (subscriber, #91811) [Link] (17 responses)

The cost is awfully small, and while another option isn’t perfect, distros can enable it if desired. If it doesn’t break much, many or most will do so.

It’s not perfect, but this option is low cost.

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 1:30 UTC (Sat) by NYKevin (subscriber, #129325) [Link] (16 responses)

My 2 cents: If you are not a cloud provider, then you *probably* don't need userfaultfd() at all. It's the low-level equivalent of fiddling with the garbage-collection algorithm, or writing your own malloc(). Basically, there are two use cases for this:

1. You're doing live migrations of VMs.
2. You can dynamically regenerate paged-out data faster than the OS can page it in.

(1) makes very little sense if you control all of the code in the VM, because it's far easier to just use a container instead of a VM, and start/stop instances as required (with all state living in some kind of database-like-thing, or perhaps a networked filesystem, depending on your needs). Sure, this is slightly more upfront design work, but live migration consumes an incredible amount of bandwidth once you try to scale it up, whereas container orchestration is a mature and well-understood technology. Unless you are making money per VM, it's difficult to justify the cost of live migration.

(Granted, if all of your VMs are very similar to one another, you might be able to develop a clever compression algorithm that shaves a lot of bytes off of that cost, but you're still not going to beat containers on size.)

That leaves (2). What's happening in case (2) is that you're using the page fault mechanism as a substitute for some kind of LRU cache for data that is expensive to compute, but cheaper than actually hitting the disk. But you can build an LRU cache in userspace, and it'll probably be a lot more efficient and easier to tune, since you can design it to exactly fit your specific use case. Trying to rope page faults into that problem makes no logical sense.

So, in conclusion, I'd tentatively suggest that distros consider turning the whole feature off and see if anything breaks. Perhaps they should teach their package managers to enable this setting if, and only if, one or more installed packages really need it.

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 1:36 UTC (Sat) by josh (subscriber, #17465) [Link]

There are other use cases for this. Fastly's Lucet uses it for their WebAssembly VM, to catch out-of-bounds memory accesses.

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 2:00 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

Live migration is ABSOLUTELY justified for cloud computing providers to protect against hypervisor vulnerabilities.

Client workflows often can't be interrupted at will and even asking clients nicely to reboot their instances (so they can migrate to other hardware nodes) can take months. It's much easier to involuntarily migrate client VMs to different hardware.

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 4:58 UTC (Sat) by wahern (subscriber, #37304) [Link] (3 responses)

AWS doesn't support live migration. Live migration is useful, but not for cloud computing, where state is kept outside the node. It's useful for traditional architectures where state is maintained on the node, with only backups (hopefully!) elsewhere. Not just useful but critical, because you're packing more work on the same piece of hardware, so reboots are more disruptive than with dedicated hardware.

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 5:02 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> AWS doesn't support live migration.
It actually does behind the scenes with T2 and T3 instances.

Live migration is very useful to move client software out of a failing node. So really this makes sense only for large cloud providers.

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 7:52 UTC (Sat) by wahern (subscriber, #37304) [Link] (1 responses)

Interesting. Any sources which I could share? All I could find in a quick Google search is an HN comment, "T2 and T3 use live migration to get around this, but it's not public knowledge." https://news.ycombinator.com/item?id=17815806

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 15:59 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

I worked at Amazon, but I've heard about T2/T3 migration publicly at AWS re:Invent multiple times. These instance types are severely oversubscribed and migration is used to balance the load.

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 20:33 UTC (Sat) by NYKevin (subscriber, #129325) [Link]

> Live migration is ABSOLUTELY justified for cloud computing providers to protect against hypervisor vulnerabilities.

I don't understand how this contradicts anything that I said...

Blocking userfaultfd() kernel-fault handling

Posted May 13, 2020 8:48 UTC (Wed) by nilsmeyer (guest, #122604) [Link]

> Client workflows often can't be interrupted at will and even asking clients nicely to reboot their instances (so they can migrate to other hardware nodes) can take months. It's much easier to involuntarily migrate client VMs to different hardware.

That is true in a lot of environments, especially when you are dealing with software that manages state. It's easy to say that one can design an application so this isn't necessary (though a lot of the container/cloud-native crowd completely ignores stateful systems), but the reality is very different.

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 5:27 UTC (Sat) by kccqzy (guest, #121854) [Link] (1 responses)

I don't understand the cloud provider argument. It does seem like this feature can help with live VM migration, but when you are a cloud provider, you don't necessarily require all users to run unmodified Linux kernels. If a user runs a non-Linux VM, how can the cloud provider migrate that VM?

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 5:37 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

The virtual machine that runs the client's code (KVM) looks like a regular process to the host Linux kernel.

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 7:59 UTC (Sat) by Sesse (subscriber, #53779) [Link] (1 responses)

You're assuming data is paged out to begin with. :-) A prime candidate for this is if you want to mmap a compressed file (and have your application see uncompressed data).

(de-)compression and view are different layers

Posted May 11, 2020 7:02 UTC (Mon) by gus3 (guest, #61103) [Link]

If the kernel handles compression/decompression, it's to save on paging space/speed. User space sees nothing different.

If the user space handles compression, the kernel doesn't care about it at all.

They aren't related.

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 12:58 UTC (Sat) by roc (subscriber, #30627) [Link] (1 responses)

Our Pernosco omniscient record-and-replay debugger uses userfaultfd() in a way that's neither 1 nor 2.

We have a giant omniscient database which lets us reconstruct the memory state of a process at any point in its recorded history. Sometimes we want to execute an application function "as if" the process was at some point in that history. So we create a new process, ptrace it, create mappings in it corresponding to the VMAs that existed at that point in history, and enable userfaultfd() for those mappings. Then we set the registers into the right state for the function call and PTRACE_CONT. Every time the process touches a new page, we reconstruct the contents of that page from our database. Works great.

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 13:00 UTC (Sat) by roc (subscriber, #30627) [Link]

I *think* a UFFD_USER_MODE_ONLY flag/mode would work fine for us. We don't actually allow this fake process to execute syscalls normally; we catch its syscalls with ptrace and emulate them.

Blocking userfaultfd() kernel-fault handling

Posted May 17, 2020 8:54 UTC (Sun) by smooth1x (guest, #25322) [Link]

What happens if the VM contains a database server? I can see live migration being needed for that use case.

Blocking userfaultfd() kernel-fault handling

Posted Jun 17, 2020 0:48 UTC (Wed) by tobin_baker (subscriber, #139557) [Link]

How about implementing COW private mappings of shared memory with true snapshot semantics?

Blocking userfaultfd() kernel-fault handling

Posted May 9, 2020 22:33 UTC (Sat) by meyert (subscriber, #32097) [Link]

A bit OT, but I recently learned that quintessential comes from Latin "quinta essentia", literally "fifth essence", and indeed does mean the fifth element, i.e. "ether".


Copyright © 2020, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds