They are often close to the same thing. Take a look at the constraints
on virtually cached processors like the ARM, where they _are_ the same thing.
For cpu sucking tasks, equal mm often means an equal cache image. On the
other hand an unmatched mm pretty much definitively says "doesn't matter".
The cost of getting the equal mm/shared cache case wrong is too horrific to
handwave it out of existence using academic papers.
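To make the point concrete, here is a minimal sketch of a 2.4-style
goodness() weight that prefers a task sharing the current mm, since a
same-mm switch needs no TLB flush and keeps the cache image. All struct
and function names here are invented for illustration, not real kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative only: a task sharing this_mm gets a small scheduling
 * bonus because switching to it is nearly free (same page tables,
 * and on virtually cached CPUs the same cache image). */
struct mm_struct { int dummy; };

struct task {
	struct mm_struct *mm;
	int counter;		/* remaining timeslice */
};

static int goodness_bonus(const struct task *p, const struct mm_struct *this_mm)
{
	int weight = p->counter;

	if (p->mm == this_mm)	/* same address space: no TLB flush needed */
		weight += 1;
	return weight;
}
```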
> By having only two queues the implementation stays simple and 99% of the
> common/extraordinary cases are solved.
> The cost of a tlb flush becomes "meaningful" for I/O bound tasks, whose
> short average run time is not sufficient to compensate for the tlb flush
> cost, and this is handled correctly/as usual inside the I/O bound queue.
You don't seem to solve either problem directly.
Without per cpu queues you spend all your time bouncing tasks between
processors, which hurts. Without multiple queues for interactive tasks you
walk the interactive task list, so you don't scale. Without some sensible
handling of mm/cpu binding you spend all day wasting ram bandwidth on
cache writeback.
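The per cpu queue plus priority band layout can be sketched as below —
pick_next() becomes "find the highest nonempty band on this cpu" instead of
walking one global list. Everything here (types, field names, the four-CPU
count) is invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

#define NR_CPUS  4
#define NR_BANDS 7	/* priority bands for interactive tasks */

/* Illustrative sketch: each cpu owns its runqueue, so a local wakeup
 * touches no other cpu's data, and picking the next task is O(bands),
 * not O(tasks). */
struct task {
	struct task *next;
	int band;		/* 0 = highest priority */
};

struct runqueue {
	struct task *band_head[NR_BANDS];
	int nr_running;
};

static struct runqueue rq[NR_CPUS];

static void enqueue(int cpu, struct task *p)
{
	p->next = rq[cpu].band_head[p->band];
	rq[cpu].band_head[p->band] = p;
	rq[cpu].nr_running++;
}

static struct task *pick_next(int cpu)
{
	for (int b = 0; b < NR_BANDS; b++) {
		if (rq[cpu].band_head[b]) {
			struct task *p = rq[cpu].band_head[b];

			rq[cpu].band_head[b] = p->next;
			rq[cpu].nr_running--;
			return p;
		}
	}
	return NULL;	/* cpu is idle */
}
```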
The single cpu sucker queue is easy; the cost of merging mm equivalent tasks
in that queue is almost nil. Priority ordering interactive stuff using
several queues is easy and very cheap if you need it (I think you do, hence
I have 7 priority bands and you have 1). In all these cases the hard bits
end up being:
On a wake up, which cpu do you give a task?
When does an idle cpu steal a task, from whom, and which task?
How do I define "imbalance" for cpu load balancing?
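One plausible set of answers to the three questions above can be sketched
as follows: wake a task on its previous cpu unless another cpu is clearly
less loaded, let an idle cpu steal only from the busiest queue, and call
things "imbalanced" only when the gap exceeds a threshold that pays for the
cache cost of moving. The threshold value and all names are invented for
illustration:

```c
#include <assert.h>

#define NR_CPUS 4
#define IMBALANCE_THRESHOLD 2	/* invented: gap that justifies a move */

static int load[NR_CPUS];	/* runnable tasks per cpu */

/* Question 1: prefer the previous cpu (warm cache) unless another
 * cpu is lighter by more than the threshold. */
static int wake_cpu(int prev_cpu)
{
	int best = prev_cpu;

	for (int c = 0; c < NR_CPUS; c++)
		if (load[c] + IMBALANCE_THRESHOLD < load[best])
			best = c;
	return best;
}

/* Questions 2 and 3: an idle cpu steals from the busiest queue, but
 * only if the imbalance is big enough to be worth the cache writeback. */
static int steal_from(int idle_cpu)
{
	int busiest = idle_cpu;

	for (int c = 0; c < NR_CPUS; c++)
		if (load[c] > load[busiest])
			busiest = c;
	if (load[busiest] - load[idle_cpu] > IMBALANCE_THRESHOLD)
		return busiest;
	return -1;	/* balanced enough: stay idle */
}
```

The threshold is the interesting knob: too low and tasks bounce between
cpus (wasting ram bandwidth on writeback, as above), too high and cpus sit
idle while work queues up elsewhere.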