Yes, it would have. And it was done that way during an intermediate stage
of the skas work.
There were a few reasons for /proc/mm -
reduce host resource consumption - each UML address space still costs a host
mm_struct and page tables, but we do eliminate the kernel stack, task_struct,
and associated data of a full host process
design cleanliness - a UP UML is inherently single-threaded, so it's
pointless to have many host processes when only one of them can be running.
A host process maps much more cleanly onto a UML processor than onto a UML
process, and a UML process maps much more cleanly onto a host address space
than onto a host process.
code cleanliness - before /proc/mm, UML manipulated address spaces
through ptrace (PTRACE_M{MAP,UNMAP,PROTECT}). This meant that a process needed
to be used as a handle for the address space. Since there's no way of knowing
what process in an address space is still going to exist when you want to swap
it out, the UML mm_context was a list of task_structs. This made for some
nasty-looking code to keep that list up-to-date. It also made for some
nasty-looking locking against races between swapout and a thread exiting. Now,
the UML mm_context is an int which holds a /proc/mm file descriptor. No lists,
no races, it doesn't get any simpler.
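To make the difference concrete, here is a rough sketch of the two shapes of
mm_context; the names and fields are illustrative only, not the actual UML
source:

	/* Before /proc/mm: the address space could only be reached through
	 * some host process living in it, so the mm_context had to carry a
	 * list of candidate tasks, kept up to date and locked against races
	 * with exiting threads.
	 */
	struct mm_context_old {
		struct list_head tasks;	/* task_structs in this address space */
		spinlock_t lock;	/* swapout vs. thread-exit races */
	};

	/* With /proc/mm: the address space is a first-class object named by
	 * a file descriptor, so there is nothing to track and nothing to
	 * lock.
	 */
	struct mm_context_new {
		int mm_fd;		/* /proc/mm file descriptor */
	};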
Some smaller reasons -
With the address space manipulations in /proc/mm rather than ptrace,
it is possible that a cleaner interface can be found for it. The current
/proc/mm write semantics are morally equivalent to the previous ptrace
interface, but there is hope that something better can be found. With ptrace,
there is no hope.
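For flavor, here is a hedged sketch of what driving an address space through
/proc/mm looks like; the request structure and constants below are illustrative
stand-ins, the real layout is defined by the skas host patch:

	#include <fcntl.h>
	#include <unistd.h>

	/* Illustrative request - not the actual skas structure. */
	struct mm_request {
		int op;			/* mmap / munmap / mprotect */
		unsigned long addr;
		unsigned long len;
		int prot, flags, fd;
		unsigned long offset;
	};

	int make_address_space(void)
	{
		/* Opening /proc/mm hands back a descriptor naming a new,
		 * empty address space - the int that now lives in UML's
		 * mm_context.
		 */
		int mm_fd = open("/proc/mm", O_RDWR);
		if (mm_fd < 0)
			return -1;

		/* Changes are made by writing requests to the descriptor,
		 * rather than via PTRACE_M{MAP,UNMAP,PROTECT} on whichever
		 * process happens to be living in the address space.
		 */
		struct mm_request req = { 0 };	/* fill in an mmap request */
		write(mm_fd, &req, sizeof(req));

		return mm_fd;
	}

The point is not the exact layout - it's that the descriptor, rather than some
process, is the handle for the address space, which is what leaves room for a
cleaner interface later.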
scheduling fairness - when you have an app that is effectively single-threaded
(only one of its threads is active at any given time) but spreads its work
over many threads, the thread that wants to run will compete unfairly in the
scheduler with another app that honestly does all of its work in one thread.
The thread in the first app will have accumulated
much less time than the second app's thread, and so will have higher priority,
even though the two apps may have used the same amount of CPU. With /proc/mm,
UML gets much closer to one host process per processor, but doesn't quite
make it. There are two host processes, one running the kernel, one running
userspace. I'm trying to think of a way of merging them.
It's much nicer to look at ps or top on the host and see a few
(currently four) processes per UML rather than, say, 100.
An unexpected benefit - UML is noticeably faster with /proc/mm. That knocked
~10% off its kernel build time. With it now doing a build about 40% slower than
the host (which works out to roughly 55% slower before the change), the 10%
reduction in overall run time represents a ~25% reduction in UML's
virtualization overhead.
Jeff