Nobody's done that yet, and it's been a couple of years now since that was
suggested as a desirable evolution path for pci_pool. In fact the first
(and only!) step on that path was the dma_pool update I just posted.
Modifying the slab allocator like that means passing an extra parameter around
(for the dma_addr_t), causing extra register pressure even for non-dma
allocations. I have a hard time seeing how that wouldn't slow things down,
even on x86 (etc) where we know we can already get all the benefits of the
slab allocator without any changes to that critical code.
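For concreteness, here's a hedged sketch (hypothetical helper, using the
prototypes from the dma_pool update) of where that extra parameter shows up:
kmalloc() hands back one pointer, while dma_pool_alloc() also has to fill in
a dma_addr_t -- exactly what a merged slab/dma allocator would have to thread
through its fast path.

#include <linux/dmapool.h>
#include <linux/slab.h>
#include <linux/errno.h>

/* hypothetical helper, only to contrast the two call signatures */
static int contrast_alloc(struct dma_pool *pool)
{
	dma_addr_t	handle;
	void		*cpu_only, *dma_aware;

	/* slab: one pointer out, no bus address involved */
	cpu_only = kmalloc(64, GFP_KERNEL);
	if (!cpu_only)
		return -ENOMEM;

	/* dma_pool: the bus address comes back through 'handle' */
	dma_aware = dma_pool_alloc(pool, GFP_KERNEL, &handle);
	if (!dma_aware) {
		kfree(cpu_only);
		return -ENOMEM;
	}

	/* ... hand 'handle' to the device, keep 'cpu_only' private ... */

	dma_pool_free(pool, dma_aware, handle);
	kfree(cpu_only);
	return 0;
}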
> The benefit mempool has over slab is its guaranteed minimum pool size and
> hence guaranteed non-failing allocations as long as you manage pool resizing
> correctly. I suppose this is primarily useful for storage devices, which
> sometimes need memory that cannot be obtained from doing I/O.
Yes: not all drivers need (or want) such a memory-reservation layer.
> However, you can base mempool off slab modified to use dma_alloc_coherent()
> and get the benefits of everything.
I'd say instead that drivers which happen to need the protection against
failures that mempool provides can safely layer that behavior on top of
any allocator they want. (Modulo that buglet, which for now forces mempools
to be layered on top of kmalloc.)
Which means the mempool discussion is just a detour: it's not an
implementation tool for any kind of dma_pool, but rather an option for
drivers (in block i/o paths) to use on top of a dma_pool allocator.
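To make that concrete, here's a rough sketch (names like my_desc and
NR_RESERVED are made up, and it assumes that kmalloc-only buglet gets fixed)
of what such layering would look like: mempool_create() just gets a pair of
callbacks that call into dma_pool, with each element stashing its own bus
address so the free path can hand it back.

#include <linux/dmapool.h>
#include <linux/mempool.h>

/* hypothetical descriptor; first field remembers its own bus address */
struct my_desc {
	dma_addr_t	self;
	/* ... hardware-visible fields ... */
};

static void *desc_mempool_alloc(gfp_t gfp_mask, void *pool_data)
{
	struct dma_pool	*dp = pool_data;
	dma_addr_t	dma;
	struct my_desc	*d;

	d = dma_pool_alloc(dp, gfp_mask, &dma);
	if (d)
		d->self = dma;
	return d;
}

static void desc_mempool_free(void *element, void *pool_data)
{
	struct my_desc	*d = element;

	dma_pool_free(pool_data, d, d->self);
}

/* then in probe(), something like:
 *
 *	dp = dma_pool_create("descs", dev, sizeof(struct my_desc),
 *			sizeof(dma_addr_t), 0);
 *	mp = mempool_create(NR_RESERVED, desc_mempool_alloc,
 *			desc_mempool_free, dp);
 */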
- Dave