Yes: if the problem is the negative dentry caching after unlink that
I dropped, then by backing out this patch you should get the original
performance back (if you test it, it would be interesting to also launch
a dbench run; as said, I suspect the dbench records are also thanks to
this fix, which maximizes the most "useful" cache):
If it is the above patch that makes the difference, I consider this more
of an lmbench false positive (I mean: if that patch makes a difference,
then lmbench is not benchmarking the real create speed of the filesystem,
because it is skipping the low-level lookup, which on huge dirs would be
faster on reiserfs or xfs than on ext3, for example).
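To make the point concrete, the create test boils down to a loop like
the sketch below (my own minimal approximation, not lmbench's actual
source; the iteration count and single-file name are assumptions). With
the negative dentry that unlink() leaves behind, the lookup inside the
next creat() is satisfied straight from the dcache, so the filesystem's
directory code is never exercised:

/* Minimal sketch of an lmbench-style create/delete loop (not
 * lmbench's source; iterations and filename are illustrative).
 * The unlink() leaves a negative dentry in the dcache, so the
 * lookup done inside the following creat() never reaches the
 * filesystem's low-level directory code. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
	const int iterations = 10000;
	struct timeval start, end;
	int i, fd;

	gettimeofday(&start, NULL);
	for (i = 0; i < iterations; i++) {
		fd = creat("testfile", 0666);	/* lookup hits the cached (negative) dentry */
		if (fd < 0) {
			perror("creat");
			exit(1);
		}
		close(fd);
		unlink("testfile");		/* leaves a negative dentry behind */
	}
	gettimeofday(&end, NULL);

	printf("%.1f usec per create+delete\n",
	       ((end.tv_sec - start.tv_sec) * 1e6 +
		(end.tv_usec - start.tv_usec)) / iterations);
	return 0;
}

If backing out the patch makes the per-iteration time here jump while
a create test against a cold, huge directory does not move, that would
confirm the false positive theory.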
>
> Other consistent tests that showed notable improvement were context
> switching at 8p/16K, 8p/64K, and 16p/16K. The 16p/64K context switch
> latency became inconsistent and higher on pre9aa2.
>
> fork latency was consistent and improved by 10%.
Noticed that too :).
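fwiw, that fork number comes down to timing a loop along these lines
(a minimal sketch of mine, not lmbench's lat_proc source; the iteration
count is arbitrary):

/* Minimal fork-latency sketch: times fork() + child exit +
 * parent waitpid(), which is what dominates the lmbench fork
 * result.  Not lmbench's code; iterations are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/time.h>

int main(void)
{
	const int iterations = 1000;
	struct timeval start, end;
	int i;

	gettimeofday(&start, NULL);
	for (i = 0; i < iterations; i++) {
		pid_t pid = fork();
		if (pid < 0) {
			perror("fork");
			exit(1);
		}
		if (pid == 0)
			_exit(0);	/* child exits immediately */
		waitpid(pid, NULL, 0);	/* parent reaps the child */
	}
	gettimeofday(&end, NULL);

	printf("%.1f usec per fork+exit+wait\n",
	       ((end.tv_sec - start.tv_sec) * 1e6 +
		(end.tv_usec - start.tv_usec)) / iterations);
	return 0;
}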
>
> > The pipe bandwidth reported
> > by lmbench in pre9aa2 is also very impressive; that's Mike's patch, and I
> > think it's also a very worthwhile optimization since many tasks really
> > use pipes to pass through big loads of data.
>
> Yeah, that is impressive.
>
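The pipe bandwidth measurement boils down to timing a reader against a
writer child across one pipe, roughly like the sketch below (my own
approximation, not bw_pipe's source; the 64k chunk and 64MB total are
assumptions):

/* Minimal pipe-bandwidth sketch (not lmbench's bw_pipe source;
 * chunk and total sizes are illustrative).  The child streams
 * data into the pipe while the parent reads and times it. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/time.h>

#define CHUNK	(64 * 1024)
#define TOTAL	(64L * 1024 * 1024)

int main(void)
{
	static char buf[CHUNK];
	struct timeval start, end;
	long moved = 0;
	int fds[2];
	pid_t pid;

	if (pipe(fds) < 0) {
		perror("pipe");
		exit(1);
	}
	pid = fork();
	if (pid < 0) {
		perror("fork");
		exit(1);
	}
	if (pid == 0) {			/* child: writer */
		long sent = 0;
		close(fds[0]);
		while (sent < TOTAL) {
			ssize_t n = write(fds[1], buf, CHUNK);
			if (n <= 0)
				_exit(1);
			sent += n;
		}
		close(fds[1]);
		_exit(0);
	}
	close(fds[1]);			/* parent: reader */
	gettimeofday(&start, NULL);
	for (;;) {
		ssize_t n = read(fds[0], buf, CHUNK);
		if (n <= 0)
			break;
		moved += n;
	}
	gettimeofday(&end, NULL);
	wait(NULL);

	printf("%.1f MB/sec through the pipe\n",
	       (moved / (1024.0 * 1024.0)) /
	       ((end.tv_sec - start.tv_sec) +
		(end.tv_usec - start.tv_usec) / 1e6));
	return 0;
}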
> Glancing through the original lmbench logfiles, there are some results
> that aren't in any report: creat 1k and 4k, and select on various
> numbers of regular and TCP file descriptors.
>
>
> --
> Randy Hron
Thanks again for the very useful work you're doing with these detailed
benchmarks.
Andrea