I may have been misleading by mentioning a 1%: the 1% doesn't mean 1% of
the RAM is wasted (otherwise adding a new DIMM couldn't solve it, because
you would waste even more RAM :). As Ingo also mentioned, it's a fixed
amount of RAM that is wasted in the mempool.
For very low end machines you can simply define a very small mempool. It
will potentially reduce scalability during heavy I/O under memory
shortage, but it will waste very, very little RAM (in the simplest case
you only need 1 entry in the pool to guarantee deadlock avoidance). And
there's nearly nothing to worry about: we have always had those mempools,
since 2.0 at least; look at buffer.c and search for the async argument
to the functions allocating the bhs.
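
Just to make it concrete, here is a minimal sketch of such a tiny pool
with the mempool API (tiny_pool_init and bh_slab are made-up names; the
slab cache would be created elsewhere with kmem_cache_create):

#include <linux/mempool.h>
#include <linux/slab.h>

static kmem_cache_t *bh_slab;   /* slab cache the pool draws from */
static mempool_t *tiny_pool;    /* pool holding the reserved entries */

/*
 * min_nr = 1: a single reserved element already guarantees forward
 * progress, because mempool_alloc() falls back to the reserved
 * entries (and then sleeps until one is freed) when the normal slab
 * allocation fails. The wasted RAM is min_nr * object size, a fixed
 * amount that doesn't grow with the amount of RAM in the box.
 */
static int __init tiny_pool_init(void)
{
        tiny_pool = mempool_create(1, mempool_alloc_slab,
                                   mempool_free_slab, bh_slab);
        return tiny_pool ? 0 : -ENOMEM;
}

Allocations then go through mempool_alloc(tiny_pool, GFP_NOIO) and back
through mempool_free(), so the writeout path can never deadlock waiting
for GFP memory.
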
Now with the bio we have more mempools, because lots of people still use
the bh, so in the short term (before 2.6) we can waste a few more bytes;
but once the bh and ll_rw_block are dead, most of the bio overhead will
go away and we'll keep only the advantages of doing I/O on more than one
page with a single metadata entity (2.6).
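
For instance with the bio a run of pages can be submitted as one bio
with a single completion, where ll_rw_block would need a bh per block;
a rough sketch (submit_pages and my_end_io are made-up names, assuming
the 2.5/2.6 bio interfaces):

#include <linux/bio.h>

static int my_end_io(struct bio *bio, unsigned int bytes_done, int err)
{
        if (bio->bi_size)
                return 1;       /* I/O not fully complete yet */
        bio_put(bio);
        return 0;
}

/* One bio, i.e. a single metadata entity, describing I/O on nr pages. */
static int submit_pages(struct block_device *bdev, sector_t sector,
                        struct page **pages, int nr)
{
        struct bio *bio = bio_alloc(GFP_NOIO, nr);
        int i;

        if (!bio)
                return -ENOMEM;
        bio->bi_bdev = bdev;
        bio->bi_sector = sector;
        for (i = 0; i < nr; i++)
                if (!bio_add_page(bio, pages[i], PAGE_SIZE, 0))
                        break;  /* hit the queue's segment limit */
        bio->bi_end_io = my_end_io;
        submit_bio(WRITE, bio);
        return 0;
}
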
The other obvious advantage of the mempool code is that we share it
across all the mempool users, so we'll save some bytes of icache too by
avoiding the code duplication we had in 2.4 :).
In fact, solution 2) cannot solve your 4M/8M boot problem either, since
such memory would need to be reserved anyway, and it could act only as
clean filesystem cache. So in short, the only difference between 1) and
2) would be a little more fs cache in solution 2), but with a huge
implementation complexity and local_irq_save all over the place in the
VM, and thus with lower performance. It wouldn't make a difference in
functionality (boot or not boot, which is the real problem you worry
about :).
Andrea