Despite all appearances, this method performs beautifully on Linux; pthreads are
actually slower in many cases, since you incur additional overhead from
thread synchronization and scheduling.
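
To be concrete about what is being compared: a fork-per-connection accept loop
versus a thread-per-connection one, roughly as in the sketch below. The echo
handler and the port number are placeholders, nothing from the actual
Apache/SpecWeb setup; build with -lpthread.

/*
 * Rough sketch of the two designs in question: fork() per connection
 * versus pthread_create() per connection. Everything here (the echo
 * handler, the port) is a placeholder. Build with: gcc demo.c -lpthread
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define USE_THREADS 1   /* 0: fork per connection, 1: thread per connection */

static void handle_client(int fd)
{
        char buf[512];
        ssize_t n;

        while ((n = read(fd, buf, sizeof buf)) > 0)
                write(fd, buf, n);      /* trivial echo, stands in for real work */
        close(fd);
}

static void *thread_main(void *arg)
{
        handle_client((int)(long) arg);
        return NULL;
}

int main(void)
{
        struct sockaddr_in addr;
        int lfd, one = 1;

        lfd = socket(AF_INET, SOCK_STREAM, 0);
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8021);    /* arbitrary test port */
        if (bind(lfd, (struct sockaddr *) &addr, sizeof addr) < 0 ||
            listen(lfd, 128) < 0) {
                perror("bind/listen");
                return 1;
        }

        for (;;) {
                int cfd = accept(lfd, NULL, NULL);

                if (cfd < 0)
                        continue;
#if USE_THREADS
                {
                        pthread_t tid;

                        pthread_create(&tid, NULL, thread_main, (void *)(long) cfd);
                        pthread_detach(tid);    /* one schedulable entity per client */
                }
#else
                if (fork() == 0) {              /* child serves this client */
                        close(lfd);
                        handle_client(cfd);
                        _exit(0);
                }
                close(cfd);                     /* parent goes back to accept() */
                while (waitpid(-1, NULL, WNOHANG) > 0)
                        ;                       /* reap finished children */
#endif
        }
}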
The problem is that beyond a certain number of processes the scheduler just goes
bananas, or so it seems to me.
Since Linux threads are mapped onto processes, I don't think that (p)threads would
help in any way, unless it is the VM context-switch overhead that is playing a
role here, which I don't think is the case.
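
A quick way to see this is that, with 2.4-era LinuxThreads, every pthread shows
up with its own PID, i.e. as its own schedulable process. A minimal sketch
(build with -lpthread):

/*
 * Minimal demo: with 2.4-era LinuxThreads each pthread is created via
 * clone() and shows up with its own PID (on a libc with a 1:1 POSIX
 * implementation all threads would print the same PID instead).
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *report(void *arg)
{
        printf("thread %ld runs as pid %d\n", (long) arg, getpid());
        return NULL;
}

int main(void)
{
        pthread_t tid[4];
        long i;

        printf("main runs as pid %d\n", getpid());
        for (i = 0; i < 4; i++)
                pthread_create(&tid[i], NULL, report, (void *) i);
        for (i = 0; i < 4; i++)
                pthread_join(tid[i], NULL);
        return 0;
}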
- Fabio
"J . A . Magallon" wrote:
> On 03.29 Fabio Riccardi wrote:
> >
> > I've found a (to me) inexplicable system behaviour: when the number of
> > Apache forked instances goes somewhere beyond 1050, the machine
> > suddenly slows down almost to a halt and becomes totally unresponsive,
> > until I stop the test (SpecWeb).
> >
>
> Have you thought about pthreads (when you talk about fork, I suppose you
> mean literally 'fork()')?
>
> I give a course on Parallel Programming at the university, and the practical
> work was done with POSIX threads. One of my students caught the idea and
> used it to modify his assignment from another course, on Networks: he
> replaced the traditional 'fork()' in a simple ftp server he had to implement
> with 'pthread_create' and got a 10-30 speedup (connections per second).
>
> And you will get rid of the process-per-user limit. But you will run into
> a threads-per-user limit, if there is one.
>
> And you can also control their scheduling, to make each thread fight against
> the whole system or only against its siblings.
>
> --
> J.A. Magallon # Let the source
> mailto:jamagallon@able.es # be with you, Luke...
>
> Linux werewolf 2.4.2-ac28 #1 SMP Thu Mar 29 16:41:17 CEST 2001 i686
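
P.S. The "fight against the whole system or only its siblings" remark maps to
the POSIX thread contention scope. Below is a minimal sketch of selecting it,
assuming a platform that exposes both scopes; as far as I know, 2.4-era
LinuxThreads only implements PTHREAD_SCOPE_SYSTEM, so the second call should
fail there. The work() function is just a placeholder. Build with -lpthread.

/*
 * Contention-scope sketch: PTHREAD_SCOPE_SYSTEM makes the thread compete
 * with everything on the machine, PTHREAD_SCOPE_PROCESS only with its
 * siblings. LinuxThreads only implements the former, so the second call
 * is expected to fail there.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *work(void *arg)
{
        return arg;
}

int main(void)
{
        pthread_attr_t attr;
        pthread_t tid;
        int rc;

        pthread_attr_init(&attr);

        rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
        printf("system scope : %s\n", rc ? strerror(rc) : "ok");

        rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS);
        printf("process scope: %s\n", rc ? strerror(rc) : "ok");

        /* create one thread with whatever scope ended up in attr */
        if (pthread_create(&tid, &attr, work, NULL) == 0)
                pthread_join(tid, NULL);

        pthread_attr_destroy(&attr);
        return 0;
}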