The test app does:

(parent)

    for (i = 0; i < num_devices; i++) {
        err = pthread_create(&(device[i]->thread), NULL,
                             (void *(*)(void *))run_tests, (void *)i);
        ..
    }

    /* wait for all threads to exit, printing a status line every 5 seconds */
    while (active_threads != 0) {
        pthread_mutex_lock(&sync_thread_mutex);
        gettimeofday(&now, NULL);
        timeout.tv_sec = now.tv_sec + 5;
        timeout.tv_nsec = now.tv_usec * 1000;
        retcode = 0;
        while ((active_threads != 0) && (retcode != ETIMEDOUT)) {
            retcode = pthread_cond_timedwait(&sync_thread_cond,
                                             &sync_thread_mutex, &timeout);
        }
        if (retcode == ETIMEDOUT) {
            print_status_update();
        }
        pthread_mutex_unlock(&sync_thread_mutex);
    }

(each worker, when it finishes)

    pthread_mutex_lock(&sync_thread_mutex);
    active_threads--;
    pthread_cond_broadcast(&sync_thread_cond);
    pthread_mutex_unlock(&sync_thread_mutex);
    pthread_exit(0);
No idea what pthread_cond_timedwait() does under the covers, but I bet
it's bad..
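
(For what it's worth, if the condvar is only there for the 5-second status
print, the same wait can be done with plain pthread_join() plus a detached
status thread, so the parent never takes a timed wakeup at all. A rough
sketch only, not the actual test app: run_tests(), num_devices,
active_threads and print_status_update() stand in for whatever the real
code declares, and the workers are assumed to keep decrementing
active_threads as above.)

    #include <pthread.h>
    #include <unistd.h>

    /* placeholders for the real test app's symbols */
    extern void *run_tests(void *arg);
    extern void print_status_update(void);
    extern int num_devices;
    extern volatile int active_threads;

    /* detached helper: one wakeup per status line, nothing else */
    static void *status_loop(void *unused)
    {
        (void)unused;
        while (active_threads != 0) {
            sleep(5);
            print_status_update();
        }
        return NULL;
    }

    void run_all(pthread_t *tid)
    {
        pthread_t status_tid;
        int i;

        pthread_create(&status_tid, NULL, status_loop, NULL);
        pthread_detach(status_tid);

        for (i = 0; i < num_devices; i++)
            pthread_create(&tid[i], NULL, run_tests, (void *)(long)i);

        /* blocks in the kernel until each worker exits; no polling */
        for (i = 0; i < num_devices; i++)
            pthread_join(tid[i], NULL);
    }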
>BTW, Lincoln, I still have a pending answer for you about the mmap
>slowdown: that's mostly because of reduced readahead, which you can tune
>with the page-cluster sysctl; it's not only because of the expensive page
>faults that mmap I/O implies. I have a revolutionary idea about replacing
>readahead, not that it matters for your workload, which is reading
>physically contiguous data, though.
I only added the mmap() case for interest's sake; the intent of my
benchmarking was less to stress Linux and more to stress some
storage-networking plumbing (iSCSI & FC stuff), but it's been an
interesting series of experiments nonetheless.
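
(As an aside on the readahead point quoted above: besides the sysctl, an
mmap reader can hint the kernel directly with madvise(MADV_SEQUENTIAL),
which allows more aggressive readahead on the mapping. A rough sketch,
with error handling trimmed and the file path left to the caller; how much
it actually recovers will depend on the kernel's readahead heuristics.)

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* map a file read-only and declare sequential access to the kernel */
    static void *map_sequential(const char *path, size_t *len)
    {
        struct stat st;
        int fd = open(path, O_RDONLY);
        void *p;

        fstat(fd, &st);
        *len = st.st_size;

        p = mmap(NULL, *len, PROT_READ, MAP_SHARED, fd, 0);
        madvise(p, *len, MADV_SEQUENTIAL);

        close(fd);      /* the mapping stays valid after close() */
        return p;
    }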
cheers,
lincoln.