I’m kind of inspired by Geoffrey’s speculative write-up on Linux seccomp to do a speculative write-up of my own. Most of the SIPB people around here will recognize this discussion, as we’ve had it a couple of times. My 6.UAT TA will recognize it as well, since I presented on this as a “representative M.Eng. thesis”—that is, something that I could do, but have no intention of actually doing, for my M.Eng. thesis.
To set up the premise here: programs that do a lot of number crunching tend to run fast regardless of how they’re run, whether that’s natively, under virtualization, or whatever. They can generally do almost everything they need without any help from the operating system or any other layer sitting underneath them.
On the other hand, any program that needs to interact with the outside world at all does so using a system call, which is basically a special function that causes the program to jump into the operating system itself. Because you don’t want random processes to have unfiltered access to raw hardware, a surprising amount of functionality is exposed through system calls, including read, write, send, and recv. This means that applications such as, say, Apache spend almost all of their time doing system calls, since all a web server really does is read files from disk and send them over the network.
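Just to make that concrete, here’s roughly what a web server’s hot path boils down to. This is only a sketch (error handling trimmed, /etc/hostname standing in for the requested file and stdout for the client socket), but notice that essentially every statement is a system call:

```c
/* Sketch of a file server's inner loop: nearly every statement traps
 * into the kernel.  stdout stands in for the client socket here. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = open("/etc/hostname", O_RDONLY);    /* system call */
    if (fd < 0)
        return 1;

    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)  /* system call... */
        write(STDOUT_FILENO, buf, n);            /* ...and another */

    close(fd);                                   /* system call */
    return 0;
}
```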
The problem comes in when you consider the context switch between userspace applications and the kernelspace operating system needed to execute a system call. As it turns out, this context switch is slooow. How slow is it? Well, we can look at a paper from Microsoft Research. Their highly experimental operating system, Singularity, is flexible enough that it can run applications either with or without the context switch required in a traditional operating system. Here’s what they found:
Their paper compares the cost, in CPU cycles, of four basic operations:

- An “API call”: their terminology for a system call. On each operating system tested, they specifically chose a system call that could always return very quickly.
- A thread yield: surrender the remaining time in the current thread of execution and schedule another thread.
- “Process-Send-Receive”: their term for an IPC benchmark that sends a byte of data back and forth between two separate processes.
- Process creation: equivalent to a fork in UNIX terminology.

Each operation was measured on Singularity running without the hardware context switch, on Singularity running with a hardware context switch, and on several conventional operating systems; see the paper for the raw numbers.
What’s the take-away here? There are two. First, adding hardware isolation to Singularity multiplies the cost of a system call by almost a factor of 4. Second, Singularity is way faster than the other operating systems, all of which use a hardware context switch (of course, those systems are also much more featureful than Singularity).
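If you want a rough feel for the “cheap system call” number on your own machine, a quick microbenchmark will do. This sketch assumes Linux on x86-64 with GCC or Clang; it deliberately goes through syscall(2) because glibc has historically cached getpid() results, and it makes no attempt to serialize the TSC or pin the CPU, so treat the output as the flavor of the cost, not a rigorous measurement:

```c
/* Rough, unscientific estimate of system call cost on Linux/x86-64,
 * in the spirit of the Singularity "API call" benchmark: time a
 * syscall that returns almost immediately.  Build with: gcc -O2 */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <x86intrin.h>   /* __rdtsc() */

#define ITERS 1000000ULL

int main(void)
{
    unsigned long long start = __rdtsc();
    for (unsigned long long i = 0; i < ITERS; i++)
        syscall(SYS_getpid);   /* raw syscall, bypassing any caching */
    unsigned long long end = __rdtsc();

    printf("~%llu cycles per syscall\n", (end - start) / ITERS);
    return 0;
}
```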
So that’s our problem. To try to solve it, we look to the techniques pioneered by VMWare for full machine virtualization.
When running an operating system under virtualization, we need some way to simulate what would otherwise be privileged operations on raw hardware. There are a lot of approaches to solving this problem, but VMWare primarily uses just-in-time binary translation (or BT). With BT, VMWare’s Virtual Machine Monitor (VMM) examines instructions just before they’re executed. If there are any unsafe instructions, they’re replaced with calls into functions in the VMM that emulate those instructions.
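To make that idea concrete, here’s a toy translator for a made-up one-byte bytecode. Real x86 BT is far harder (variable-length instructions, indirect jumps, self-modifying code), but the core move is the same: copy safe instructions through unchanged, and rewrite privileged ones into calls to emulation routines in the monitor:

```c
/* Toy model of just-in-time binary translation.  The guest runs a
 * made-up one-byte bytecode with a single "privileged" opcode. */
#include <stdio.h>
#include <stddef.h>

enum {
    OP_NOP,       /* safe */
    OP_ADD,       /* safe: increment the accumulator */
    OP_HALT,      /* safe: stop */
    OP_PRIV_IO,   /* "privileged": would touch raw hardware */
    OP_CALL_EMU,  /* inserted by the translator, never by the guest */
};

static int acc;  /* the toy machine's single register */

/* The monitor's emulation routine for the privileged instruction. */
static void emulate_priv_io(void)
{
    printf("emulated I/O instead of touching hardware, acc = %d\n", acc);
}

/* Translate one block: safe opcodes pass through untouched, and the
 * privileged opcode is rewritten into a call to the emulator. */
static void translate(const unsigned char *in, unsigned char *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (in[i] == OP_PRIV_IO) ? OP_CALL_EMU : in[i];
}

/* Execute translated code; OP_PRIV_IO can no longer appear here. */
static void run(const unsigned char *code, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        switch (code[i]) {
        case OP_ADD:      acc++;              break;
        case OP_CALL_EMU: emulate_priv_io();  break;
        case OP_HALT:     return;
        }
    }
}

int main(void)
{
    unsigned char guest[] = { OP_ADD, OP_ADD, OP_PRIV_IO, OP_HALT };
    unsigned char xlated[sizeof guest];

    translate(guest, xlated, sizeof guest);
    run(xlated, sizeof guest);
    return 0;
}
```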
That on its own doesn’t make anything fast, but VMWare takes this a step further. In order to minimize the overhead of this emulation, VMWare’s VMM runs the translated code within the kernel (ring 0). It turns out that, because of this, VMWare’s VMM has an average slowdown of only 4% (see A Comparison of Software and Hardware Techniques for x86 Virtualization for detailed analysis).
Here’s the question: can we take the binary translation techniques from VMWare’s VMM and adapt them to run otherwise unmodified processes instead of operating systems within the kernel? And if we do, what is the performance impact?
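To sketch what I mean: once the translator controls the instruction stream and the process’s code is already executing at ring 0, the syscall instruction at each call site can be rewritten into an ordinary indirect call through the kernel’s dispatch table, with no trap and no privilege transition. Everything below is hypothetical; fake_sys_call_table and friends are stand-ins I made up, not real kernel symbols:

```c
/* What a translated system call site could reduce to, conceptually.
 * Every name here is a hypothetical stand-in; fake_sys_call_table is
 * not a real kernel symbol, and real syscall ABIs differ. */
#include <stdio.h>

typedef long (*syscall_fn)(long a0, long a1, long a2);

/* Stand-in for the kernel's implementation of write(2). */
static long my_sys_write(long fd, long buf, long len)
{
    FILE *out = (fd == 2) ? stderr : stdout;
    return (long)fwrite((const void *)buf, 1, (size_t)len, out);
}

/* The dispatch table the translated code would call through directly. */
static syscall_fn fake_sys_call_table[] = {
    [1] = my_sys_write,  /* pretending 1 == write, as on Linux/x86-64 */
};

/* Before translation, a syscall site costs a trap into ring 0, state
 * save/restore, dispatch, and a return to ring 3.  If the process is
 * already running (translated) inside the kernel, the same site can
 * be rewritten down to just this: */
static long translated_syscall(long nr, long a0, long a1, long a2)
{
    return fake_sys_call_table[nr](a0, a1, a2);  /* plain indirect call */
}

int main(void)
{
    static const char msg[] = "hello from a 'translated' write\n";
    translated_syscall(1, 1, (long)msg, sizeof msg - 1);
    return 0;
}
```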
If we can bypass the context switch expense measured by the Singularity team, it could easily more than compensate for the relatively small overhead of running applications under binary translation. I would go so far as to say that I expect syscall-heavy applications to run faster.
Putting the Singularity and VMWare papers right next to each other, this is a pretty obvious next step. But as far as I know, nobody’s done it yet. Does anybody else know of an implementation of this idea for a real operating system? Maybe a Linux kernel module that lets you run certain apps in-kernel? If it’s out there, I haven’t found it yet.