> > > of non-CPU-hog processes running. A lot of messaging systems have high task
> > > switch rates but very few CPU hogs. So you still need to handle the non-hogs
> > > carefully to avoid degenerating back into Linus' scheduler.
> >
> > In my experience, if you have I/O-bound tasks that lead to a long run queue,
> > that means that you're suffering major kernel latency problems ( other than the
> > scheduler ).
>
> I don't see any evidence of it in the profiles. It's just that a lot of stuff
> gets passed around between multiple daemons. You can see similar things
> in something as mundane as a four-task-long pipe, a user-mode file system or
> some X11 clients.
If you have an accumulation of I/O-bound tasks ( very low average user-space
run time ), it's very likely that the bottom part of the iceberg ( the
kernel ) is getting quite fat with respect to the user-space part.
Coming to the pipe example, let's take Larry's lat_ctx ( lmbench ).
It bounces data through pipes using I/O-bound tasks, and if you run
vmstat alongside a lat_ctx 32 32 ... ( long list ), you'll see the run
queue length barely reach 3 ( with 32 bouncing tasks ).
It barely reaches 5 with 64 bouncing tasks.
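
If you want to see the effect without lmbench, here's a minimal sketch of
the same idea ( not lat_ctx itself, just a ring of pipes bouncing a single
one-byte token between N processes ). At any instant only the task holding
the token is runnable, so the run queue stays short even though the context
switch rate is very high:

/*
 * Sketch: NTASKS processes in a ring of pipes passing one token.
 * Each task blocks in read() until the token arrives, writes it to
 * the next task, and blocks again -- pure I/O-bound behaviour.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define NTASKS  32		/* number of bouncing tasks */
#define NROUNDS 100000		/* token passes per task    */

int main(void)
{
	int pipes[NTASKS][2];
	int i, r;
	char token = 'x';

	for (i = 0; i < NTASKS; i++)
		if (pipe(pipes[i]) < 0) {
			perror("pipe");
			exit(1);
		}

	for (i = 0; i < NTASKS; i++) {
		pid_t pid = fork();
		if (pid < 0) {
			perror("fork");
			exit(1);
		}
		if (pid == 0) {
			int in  = pipes[i][0];			/* read from previous task */
			int out = pipes[(i + 1) % NTASKS][1];	/* write to next task      */
			for (r = 0; r < NROUNDS; r++) {
				if (read(in, &token, 1) != 1)
					exit(1);
				if (write(out, &token, 1) != 1)
					exit(1);
			}
			exit(0);
		}
	}

	/* inject the token to start the ring, then wait for the children */
	if (write(pipes[0][1], &token, 1) != 1)
		exit(1);
	for (i = 0; i < NTASKS; i++)
		wait(NULL);
	return 0;
}

Run vmstat 1 in another shell while this is going: the 'r' column stays
small while 'cs' goes through the roof.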
- Davide