Please see Rik's suggestion and my follow-ups, where we discuss
handling this in the MM.
> Think
> for instance about mmap() where the filesystem is usually not involved
> once a pagein has occurred.
>
> > Do we really need to invalidate individual pages, or is it
> > real-life acceptable to invalidate the whole mapping?
>
> Invalidating the mapping is certainly a good alternative if it can be
> done cleanly.
But invalidate_inode_pages is (usually) just trying to do exactly that.
It doesn't work. Anyway, what if you have a 2 gig file with 1 meg
mmapped/locked/whatever by a database?
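
For context, a rough sketch in 2.4-style page cache terms of why an
invalidate_inode_pages-style pass is not a reliable whole-mapping
invalidate: it can only drop pages nobody else is using, so anything
dirty, locked or still mmapped (the database's megabyte, say) gets
silently skipped. This is not the actual kernel source; locking is
elided and remove_page_from_cache_sketch() is a made-up stand-in.

	/*
	 * Sketch only: walk the mapping and drop whatever pages are
	 * idle.  Pages under IO, dirty, or held by someone else (an
	 * mmap, for instance) are left in place, which is exactly why
	 * this does not amount to invalidating the whole mapping.
	 */
	static void invalidate_mapping_sketch(struct address_space *mapping)
	{
		struct list_head *curr, *next;

		list_for_each_safe(curr, next, &mapping->clean_pages) {
			struct page *page = list_entry(curr, struct page, list);

			if (PageDirty(page) || PageLocked(page))
				continue;	/* IO pending or in use */
			if (page_count(page) != 1)
				continue;	/* mmapped or otherwise held */

			remove_page_from_cache_sketch(page);	/* hypothetical */
		}
	}
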
> Note that in doing so, we do not want to invalidate any reads or
> writes that may have been already scheduled. The existing mapping
> still would need to hang around long enough to permit them to
> complete.
With the mechanism I described above, that would just work. The fault
path would do lock_page, thus waiting for the IO to complete.
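
A minimal sketch of that idea, with locking details elided and the
helper name fault_in_page_sketch() invented for illustration (exact
page cache helper names vary across kernel versions): because any
scheduled read or write holds the page lock until it completes, a
fault path that takes lock_page() before using the page naturally
waits for that IO, and can then notice whether the page was
invalidated in the meantime.

	/*
	 * Sketch only, not the real fault path.  lock_page() blocks
	 * until any already-scheduled IO on the page finishes, so the
	 * fault handler serializes against it for free.  The
	 * page->mapping check afterwards detects a page that was
	 * invalidated while we waited.
	 */
	static struct page *fault_in_page_sketch(struct address_space *mapping,
						 unsigned long index)
	{
		struct page *page;

		page = find_get_page(mapping, index);	/* may be under IO */
		if (!page)
			return NULL;			/* caller reads it in */

		lock_page(page);			/* waits for scheduled IO */

		if (!page->mapping) {			/* invalidated meanwhile */
			unlock_page(page);
			page_cache_release(page);
			return NULL;
		}

		unlock_page(page);
		return page;				/* safe to map in */
	}
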
--
Daniel