You didn't make anything store or return the dma_addr_t ... that was the
issue I was referring to: it's got to be either (a) passed up from the
very lowest level, as the pci_*() calls assume, or else (b) cheaply
derived from the virtual address. My patch added slab support in the
common cases where (b) applies.
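To make the two options concrete, here's a userspace-only sketch (all names hypothetical, not kernel API; the bus address is simulated with an identity mapping standing in for whatever the platform's lowest level reports):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef uintptr_t fake_dma_addr_t;

/* Pattern (a): the allocator passes the bus handle up through an
 * out-parameter alongside the CPU pointer, as the pci_*() calls assume. */
void *fake_alloc_coherent(size_t size, fake_dma_addr_t *handle)
{
	void *cpu = malloc(size);

	if (!cpu)
		return NULL;
	*handle = (fake_dma_addr_t)cpu;	/* only the lowest level knows this */
	return cpu;
}

/* Pattern (b): when the mapping is cheap, the handle can instead be
 * derived from the virtual address on demand (virt_to_bus-style). */
fake_dma_addr_t fake_virt_to_dma(void *cpu)
{
	return (fake_dma_addr_t)cpu;
}
```

In case (b) the allocator doesn't need to store anything extra per object, which is what makes slab support cheap there.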
(By the way -- I'm glad we seem to be agreeing on the notion that we
should have a dma_pool! Is that also true of kmalloc/kfree analogues?)
> Note that the semantics won't be quite the same as the pci_pool one since you
> can't guarantee that allocations don't cross the particular boundary
> parameter.
Go back and look at the dma_pool patch I posted, which shows how to
handle that using extra padding. Only one driver needed that, and
it looks like its hardware spec may have changed, so that's no longer
an issue.
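The padding trick can be sketched like this (hypothetical helpers, not the posted patch): round the block size up to a power of two no larger than the boundary. Since the boundary is itself a power of two, it's then a multiple of the block size, so blocks carved at those offsets from a boundary-aligned base can never straddle a boundary.

```c
#include <assert.h>
#include <stddef.h>

/* Round size up to the next power of two; caller keeps size <= boundary. */
size_t pad_for_boundary(size_t size, size_t boundary)
{
	size_t padded = 1;

	while (padded < size)
		padded <<= 1;
	assert(padded <= boundary);
	return padded;
}

/* Does a block at this offset from a boundary-aligned base straddle
 * a boundary-sized window? */
int crosses_boundary(size_t offset, size_t size, size_t boundary)
{
	return offset / boundary != (offset + size - 1) / boundary;
}
```

For example, 96-byte blocks packed back-to-back will eventually straddle a 4 KB boundary, but padding them to 128 bytes guarantees they never do.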
That's why the signature of dma_pool_create() is simpler than the
one for pci_pool_create() ... I never liked the pci_pool_create()
signature, and now I'm fully convinced it _can_ be simpler.
> There's also going to be a reliance on the concept that the dma
> coherent allocators will return a page bounded chunk of memory. Most seem
> already to be doing this because the page properties are only controllable at
> that level, so it shouldn't be a problem.
I think that's a desirable way to specify dma_alloc_coherent(): it handles
page-sized-and-above allocations, with the same layering as "normal" memory
allocation. (In fact, that's how I had updated the docs.)
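That layering can be sketched in a few lines (the page size and helper name are illustrative only): the coherent allocator rounds every request up to whole pages, and anything smaller is the job of a dma_pool/kmalloc-analogue layer built on top of it.

```c
#include <assert.h>
#include <stddef.h>

#define FAKE_PAGE_SIZE 4096	/* illustrative; the real value is per-arch */

/* Size actually handed out by a page-granular coherent allocator. */
size_t coherent_alloc_size(size_t requested)
{
	return (requested + FAKE_PAGE_SIZE - 1) &
	       ~((size_t)FAKE_PAGE_SIZE - 1);
}
```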
- Dave