On Sun, Aug 24, 2008 at 23:48, Michel Lestrade wrote:
Hi,
So if I understand your suggestion, I should create a container holding rays yet to be processed and then run a while loop until this queue/container is empty. Within that loop, I create a smaller number of threads by popping rays from the queue and let the threads run until they finish before going on to the next iteration.
Actually, the while loop lives inside the threads: creating and destroying threads is relatively expensive, and you don't need to keep doing it.
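Roughly the shape each thread runs, just as a sketch (try_pop() here is a placeholder for whatever locked access to the queue you end up with, and it glosses over rays spawning new rays, which the fuller version further down handles):

void worker()
{
    ray r;
    // the thread is created once and keeps pulling rays
    // until the shared queue is drained
    while (try_pop(r))
        process_ray(r);
}

You spawn a handful of these up front, let them chew through the queue, and join them all once at the end.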
Did I understand you correctly? It's a good idea and, from the little I know on the subject, it sounds a lot like a "thread pool".
Yes, it's exactly a thread pool.
I guess I misunderstood the concept behind thread_group, since I thought it was doing that already ... There's even a hardware_concurrency() function to determine the number of cores/CPUs, which I thought was intended to control how many threads run at once.
thread_group is just a container for threads; it doesn't have to be a thread pool, since the threads can run different functions. That said, it is obviously convenient for creating thread pools. I think what you want goes something like this:

#include <queue>
#include <vector>
#include <boost/thread.hpp>

// 'ray' and 'process_ray()' stand in for your existing ray type and
// tracing routine; process_ray() is assumed to return the secondary
// rays it spawns (an empty vector if there are none).
struct ray { /* ... */ };
std::vector<ray> process_ray(const ray& r);

boost::mutex q_mutex;   // protects 'q' and 'working'
int working = 0;        // threads currently holding a ray
std::queue<ray> q;      // rays waiting to be processed

void ray_processor()
{
    ray r;
    bool has_ray = false;

    for (;;) {
        if (!has_ray) {
            boost::mutex::scoped_lock lock(q_mutex);
            if (!working && q.empty()) {
                // nobody has anything to do, exit function
                break;
            }
            if (!q.empty()) {
                r = q.front();
                q.pop();
                has_ray = true;
                ++working;
            }
        }
        if (!has_ray) {
            // nothing to do, but one of the others does, so wait a bit
            boost::this_thread::yield();
            continue;
        }

        std::vector<ray> new_rays = process_ray(r);

        {
            boost::mutex::scoped_lock lock(q_mutex);
            if (new_rays.empty()) {
                // no more work here, will have to get some from the queue
                --working;
                has_ray = false;
            } else {
                // keep one spawned ray for ourselves, queue the rest
                r = new_rays[0];
                has_ray = true;
                for (std::size_t i = 1; i < new_rays.size(); ++i)
                    q.push(new_rays[i]);
            }
        }
    }
}

// and then, wherever you currently kick off the trace:
q.push(initial_ray);
boost::thread_group g;
for (unsigned i = 0; i < boost::thread::hardware_concurrency() + 1; ++i)
    g.create_thread(ray_processor);
g.join_all();
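The working counter is what lets the threads shut down cleanly: a thread only exits once the queue is empty *and* no other thread is still processing a ray that might push more work onto it; until then, an idle thread just yields and checks again.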