It doesn't; balance_dirty() has to work only at the high level.

sync_page_buffers() is also not a problem: we'll simply try again later in those GFP_NOIO allocations.

Furthermore, you don't even address the writepage issued from the loop thread onto the loop queue itself.
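
To connect the GFP_NOIO remark above with the patch below: with the 2.4 gfp definitions, stripping the I/O-capable bits from GFP_KERNEL leaves exactly GFP_NOIO, which is why a PF_NOIO task ends up doing those GFP_NOIO allocations. A compile-time sketch of the equivalence (my illustration, not part of the patch):

#include <linux/mm.h>

/*
 * GFP_KERNEL is __GFP_HIGH|__GFP_WAIT|__GFP_IO|__GFP_HIGHIO|__GFP_FS and
 * GFP_NOIO is __GFP_HIGH|__GFP_WAIT in 2.4, so clearing the bits that
 * pf_gfp_mask() clears degrades GFP_KERNEL to GFP_NOIO.
 */
#if (GFP_KERNEL & ~(__GFP_IO | __GFP_HIGHIO | __GFP_FS)) != GFP_NOIO
#error "expected GFP_KERNEL minus the I/O bits to equal GFP_NOIO"
#endif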
The final fix will be in rc2aa1, which I will release in a jiffy. It now takes care of both the VM and balance_dirty().

This is the incremental fix against rc1aa1:
diff -urN loop-ref/fs/buffer.c loop/fs/buffer.c
--- loop-ref/fs/buffer.c Wed Dec 19 04:17:30 2001
+++ loop/fs/buffer.c Wed Dec 19 03:43:24 2001
@@ -2547,6 +2547,7 @@
/* Uhhuh, start writeback so that we don't end up with all dirty pages */
write_unlock(&hash_table_lock);
spin_unlock(&lru_list_lock);
+ gfp_mask = pf_gfp_mask(gfp_mask);
if (gfp_mask & __GFP_IO && !(current->flags & PF_ATOMICALLOC)) {
if ((gfp_mask & __GFP_HIGHIO) || !PageHighMem(page)) {
if (sync_page_buffers(bh)) {
diff -urN loop-ref/include/linux/mm.h loop/include/linux/mm.h
--- loop-ref/include/linux/mm.h Wed Dec 19 04:17:30 2001
+++ loop/include/linux/mm.h Wed Dec 19 04:15:52 2001
@@ -562,6 +562,15 @@
#define GFP_DMA __GFP_DMA
+static inline unsigned int pf_gfp_mask(unsigned int gfp_mask)
+{
+ /* avoid all memory balancing I/O methods if this task cannot block on I/O */
+ if (current->flags & PF_NOIO)
+ gfp_mask &= ~(__GFP_IO | __GFP_HIGHIO | __GFP_FS);
+
+ return gfp_mask;
+}
+
extern int heap_stack_gap;
/*
diff -urN loop-ref/mm/vmscan.c loop/mm/vmscan.c
--- loop-ref/mm/vmscan.c Wed Dec 19 04:17:30 2001
+++ loop/mm/vmscan.c Wed Dec 19 03:43:24 2001
@@ -611,6 +611,8 @@
int try_to_free_pages(zone_t *classzone, unsigned int gfp_mask, unsigned int order)
{
+ gfp_mask = pf_gfp_mask(gfp_mask);
+
for (;;) {
int tries = vm_scan_ratio << 2;
int failed_swapout = !(gfp_mask & __GFP_IO);
try_to_free_pages() needs the explicit wrapping because it can be called not only from the VM, but also directly by other kernel code.
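
For completeness, here is a minimal sketch (my illustration, not part of the patch; the chunk of the loop patch that actually sets PF_NOIO on the loop thread is not shown in this mail) of how a task that must never block on I/O is expected to interact with pf_gfp_mask(): it flags itself PF_NOIO once, and from then on try_to_free_pages() and the buffer.c path patched above silently drop __GFP_IO/__GFP_HIGHIO/__GFP_FS from whatever gfp_mask it passes down.

#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* hypothetical worker, loosely modelled on the loop thread */
static int noio_worker(void *unused)
{
	void *buf;

	/* this task cannot block on I/O to make progress */
	current->flags |= PF_NOIO;

	/*
	 * The allocation can still say GFP_KERNEL: if it falls into
	 * try_to_free_pages(), pf_gfp_mask() strips the I/O bits for this
	 * task, so the VM only reclaims clean pages instead of starting
	 * writeback that could end up queued on our own queue.
	 */
	buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (buf)
		kfree(buf);

	return 0;
}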
Andrea