And sorry for the vagueness of the description; I don't have my Intel
books at hand to dig this up. But that might account for the wide variation
in the cost of a single function on two different processor types of the
same family.
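(Purely an assumption on my part that the processor detail in question is
cache geometry, but if so, the L1 data cache line size is one
family-internal difference that is easy to check from user space on a
reasonably recent glibc. A minimal sketch, not anything from the Intel
books:

/*
 * Sketch only: prints the L1 data cache line size as reported by glibc.
 * On older glibc/kernel combinations sysconf() may just return 0 or -1
 * for this name.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	long line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);

	if (line > 0)
		printf("L1 dcache line size: %ld bytes\n", line);
	else
		printf("cache line size not reported by this libc/kernel\n");
	return 0;
}

Running that on both processor types would at least confirm or rule out a
cache line size difference.)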
gerrit
In message <E17b7iB-0003Lu-00@starship>, Daniel Phillips writes:
> On Saturday 03 August 2002 23:40, Andrew Morton wrote:
> > - total amount of CPU time lost spinning on locks is 1%, mainly
> > in page_add_rmap and zap_pte_range.
> >
> > That's not much spintime. The total system time with this test went
> > from 71 seconds (2.5.26) to 88 seconds (2.5.30). (4.5 seconds per CPU)
> > So all the time is presumably spent waiting on cachelines to come from
> > other CPUs, or from local L2.
>
> Have we tried this one:
>
> static inline unsigned rmap_lockno(pgoff_t index)
> {
> -	return (index >> 4) & (ARRAY_SIZE(rmap_locks) - 1);
> +	return (index >> 4) & (ARRAY_SIZE(rmap_locks) - 16);
> }
>
> (which puts all the rmap spinlocks in separate cache lines)
>
> --
> Daniel
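
For illustration of the masking trick above, here is a user-space sketch.
The lock count, spinlock size and cache line size are assumptions of mine
(64 locks, 4-byte locks, 64-byte lines), not values from the mail. With
those numbers, masking with (NR_LOCKS - 1) packs 16 locks into each cache
line, while masking with (NR_LOCKS - 16) only ever selects every 16th
slot, so each lock that actually gets taken sits in its own cache line, at
the price of leaving 15 of every 16 locks unused:

/*
 * Sketch only: compares which lock slot and which cache line the two
 * masks select for a range of page indexes.  NR_LOCKS, LOCK_SIZE and
 * CACHELINE are assumed values, not taken from the kernel source.
 */
#include <stdio.h>

#define NR_LOCKS   64	/* assumed size of rmap_locks[], power of two */
#define LOCK_SIZE   4	/* assumed sizeof(spinlock_t) */
#define CACHELINE  64	/* assumed cache line size */

static unsigned lockno_dense(unsigned long index)
{
	return (index >> 4) & (NR_LOCKS - 1);	/* current mask */
}

static unsigned lockno_sparse(unsigned long index)
{
	return (index >> 4) & (NR_LOCKS - 16);	/* proposed mask */
}

int main(void)
{
	unsigned long index;

	for (index = 0; index < 1024; index += 128) {
		unsigned d = lockno_dense(index);
		unsigned s = lockno_sparse(index);

		printf("index %4lu: dense lock %2u (line %u), "
		       "sparse lock %2u (line %u)\n",
		       index, d, d * LOCK_SIZE / CACHELINE,
		       s, s * LOCK_SIZE / CACHELINE);
	}
	return 0;
}

The output shows the dense mask handing out distinct locks (0 and 8, for
example) that share one cache line, while the sparse mask only ever hands
out locks 0, 16, 32 and 48, each a full line apart. The more usual way to
get the same separation without discarding locks would be to pad each lock
out to a cache line (e.g. with ____cacheline_aligned), though that costs
memory instead.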