Does this not also need to be done in fs/mpage.c? It's
just using BIO_MAX_SIZE.
What particular problem are you trying to solve here?
> /*
>  * maybe the kio is bigger than the max we can easily map into a bio.
>  * if so, split it up into appropriately sized chunks.
> @@ -367,8 +370,14 @@
>
> map_i = 0;
>
> + max_pages = (max_bytes + PAGE_SIZE - 1) >> PAGE_SHIFT;
> + if (max_pages > q->max_phys_segments)
> + max_pages = q->max_phys_segments;
> + if (max_pages > q->max_hw_segments)
> + max_pages = q->max_hw_segments;
> +
This should probably be implemented as a block API function.
This is going to drag us back into the BIO splitting quagmire.
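Something like this is what I have in mind. A minimal sketch only,
assuming the queue fields are as in the patch above; the helper name
bio_max_pages() is made up, not anything in the tree:

	#include <linux/blkdev.h>
	#include <linux/mm.h>

	/*
	 * Clamp a byte count to the number of PAGE_SIZE pages which
	 * a single bio may carry on this queue.  Hypothetical helper;
	 * it would live next to the other queue limit accessors.
	 */
	static inline int bio_max_pages(request_queue_t *q,
					unsigned int max_bytes)
	{
		int max_pages = (max_bytes + PAGE_SIZE - 1) >> PAGE_SHIFT;

		if (max_pages > q->max_phys_segments)
			max_pages = q->max_phys_segments;
		if (max_pages > q->max_hw_segments)
			max_pages = q->max_hw_segments;

		return max_pages;
	}

Callers such as the kiovec mapping code above would then do

	nr_pages = bio_max_pages(q, max_bytes);

rather than open-coding the clamps at each site.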
> next_chunk:
> - nr_pages = BIO_MAX_SECTORS >> (PAGE_SHIFT - 9);
> + nr_pages = max_pages;
Hmm. So a BIO is based on PAGE_SIZE pages, not PAGE_CACHE_SIZE.
I currently have:

	unsigned nr_bvecs = MPAGE_BIO_MAX_SIZE / PAGE_CACHE_SIZE;

which is about the only sane way for the pagecache BIO assembly code
to go from "bytes" to "number of pages".
It's going to get interesting if someone makes PAGE_SIZE != PAGE_CACHE_SIZE.
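For illustration, here is roughly what the conversion would have to
become if the two sizes ever diverged. A sketch only, assuming
PAGE_CACHE_SIZE stays a multiple of PAGE_SIZE and that each pagecache
page then occupies several bvecs:

	#include <linux/pagemap.h>

	static unsigned mpage_max_bvecs(void)
	{
		unsigned nr_cache_pages = MPAGE_BIO_MAX_SIZE / PAGE_CACHE_SIZE;

		/*
		 * Each pagecache page would need this many PAGE_SIZE
		 * bvecs; with equal sizes the factor is 1 and we get
		 * the simple division back.
		 */
		return nr_cache_pages * (PAGE_CACHE_SIZE / PAGE_SIZE);
	}

Until then the simple division is exact, so nothing breaks today.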