
OK, I have done a lot more work, and unfortunately I don't think the parallel containers/algorithms idea is very useful outside of very specific applications, and in those cases it is easier and clearer just to make your program multithreaded by hand. I can still post what I came up with if anybody is interested, but I don't see much future in it. The problem is that the timing overhead between threads is just too great to overcome: it takes longer for the computer to resume from a boost::condition wait than it does to search through a million integers. I would also like to vote for an easier way of sleeping in boost::thread; right now it is too awkward to use.
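To give a concrete idea of the comparison I mean, here is a rough, untested sketch of the kind of measurement (written against the standard <thread>/<condition_variable> equivalents of the Boost primitives; the exact numbers will of course depend on the machine and scheduler):

#include <algorithm>
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    using clk = std::chrono::steady_clock;

    // 1) Linear search through a million integers (value absent, so the whole range is scanned).
    std::vector<int> data(1000000);
    std::iota(data.begin(), data.end(), 0);
    auto t0 = clk::now();
    bool found = std::find(data.begin(), data.end(), -1) != data.end();
    auto scan_us = std::chrono::duration_cast<std::chrono::microseconds>(clk::now() - t0).count();

    // 2) Time from notify to the moment a sleeping thread is actually running again.
    std::mutex m;
    std::condition_variable cv;
    bool ready = false;
    clk::time_point woke;

    std::thread waiter([&] {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return ready; });
        woke = clk::now();
    });

    std::this_thread::sleep_for(std::chrono::milliseconds(100)); // let the waiter block
    clk::time_point notified;
    {
        std::lock_guard<std::mutex> lk(m);
        ready = true;
        notified = clk::now();
    }
    cv.notify_one();
    waiter.join();
    auto wake_us = std::chrono::duration_cast<std::chrono::microseconds>(woke - notified).count();

    std::printf("scan of 1M ints: %lld us (found=%d), wake from wait: %lld us\n",
                (long long)scan_us, (int)found, (long long)wake_us);
}

As I understand it, the scan stays in one cache-warm loop while the wake-up has to go through the scheduler, which is where the gap comes from.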
On 6/10/06, Clint Levijoki <clevijoki@gmail.com> wrote:

Thanks for the link, they have some brilliant ideas. I will have to do some thinking on things; after more testing, my initial task_master idea is only beneficial for a very narrow class of calculations.
The idea of parallel containers sounds like it could solve a lot of the issues I am having.
On 6/9/06, Benjamin A. Collins <ben.collins@acm.org> wrote:
On Thu, Jun 08, 2006 at 03:53:46AM -0600, Clint Levijoki wrote:

Hey all,
This is actually a two-part interest inquiry:
1. I think there could be some usefulness in parallel versions of the algorithm functions, like for_each, find, and unique_copy.
2. In a couple of days of R&D I have come up with a class named parallel_task_master, which runs a thread function many times over and manages the threads so you keep your thread count low and working as hard as possible. So if you have a quad-core CPU, you can specify that it never goes over a four-thread limit. It is also set up so you can keep adding tasks recursively and it will keep the thread count fixed and never deadlock.
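A minimal sketch of the kind of thing I mean (written against the standard threading facilities rather than Boost.Thread; the member names and the "help run queued work while waiting" trick are just one way to keep recursive submission from deadlocking a fixed-size pool, not necessarily how my actual class does it):

#include <condition_variable>
#include <cstddef>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// A group of related tasks; the submitter keeps it alive until wait() returns.
struct task_group { std::size_t remaining = 0; };

class parallel_task_master {
public:
    explicit parallel_task_master(unsigned threads = std::thread::hardware_concurrency()) {
        if (threads == 0) threads = 2;
        for (unsigned i = 0; i < threads; ++i)
            workers_.emplace_back([this] { worker_loop(); });
    }

    ~parallel_task_master() {
        { std::lock_guard<std::mutex> lk(mutex_); done_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }

    // Tasks may themselves call add_task(); the pool never grows past its cap.
    void add_task(task_group& g, std::function<void()> fn) {
        {
            std::lock_guard<std::mutex> lk(mutex_);
            ++g.remaining;
            queue_.push_back(item{&g, std::move(fn)});
        }
        cv_.notify_all();
    }

    // Wait for one group to finish. Instead of simply blocking, the caller
    // helps run queued work, which is what keeps a recursive "add then wait"
    // from deadlocking a pool with a fixed thread count.
    void wait(task_group& g) {
        std::unique_lock<std::mutex> lk(mutex_);
        while (g.remaining != 0) {
            if (queue_.empty())
                cv_.wait(lk, [&] { return g.remaining == 0 || !queue_.empty(); });
            else
                run_one(lk);
        }
    }

private:
    struct item { task_group* group; std::function<void()> fn; };

    // Pop and run one task; called with the lock held, returns with it held.
    void run_one(std::unique_lock<std::mutex>& lk) {
        item it = std::move(queue_.front());
        queue_.pop_front();
        lk.unlock();
        it.fn();
        lk.lock();
        --it.group->remaining;
        cv_.notify_all();
    }

    void worker_loop() {
        std::unique_lock<std::mutex> lk(mutex_);
        for (;;) {
            cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
            if (queue_.empty()) return;   // done_ was set and nothing is left to run
            run_one(lk);
        }
    }

    std::vector<std::thread> workers_;
    std::deque<item> queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
};

Usage would be: create a task_group, add_task() a batch of work into it, then wait() on the group; a task can do the same with a nested group of its own without ever growing the thread count.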
Because the tasks in parallel_task_master run relatively in order, I think some sort of queue could be made to parallelize functions that need to return output in serial form. It could be used to decode video stream packets, for example.
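For the serial-output part, something like the following (illustrative names, not an existing library) could sit between the parallel tasks and the consumer: each task is tagged with a sequence number when it is submitted, and the consumer only ever sees results in that order.

#include <condition_variable>
#include <cstddef>
#include <map>
#include <mutex>

template <typename T>
class ordered_output_queue {
public:
    // Called by whichever task finishes item number `seq`, in any order.
    void push(std::size_t seq, T value) {
        std::lock_guard<std::mutex> lk(mutex_);
        ready_.emplace(seq, std::move(value));
        cv_.notify_all();
    }

    // Blocks until the next item in sequence order is available,
    // so the consumer sees results in the order the tasks were submitted.
    T pop_next() {
        std::unique_lock<std::mutex> lk(mutex_);
        cv_.wait(lk, [this] { return ready_.count(next_) != 0; });
        T value = std::move(ready_.at(next_));
        ready_.erase(next_);
        ++next_;
        return value;
    }

private:
    std::map<std::size_t, T> ready_;   // completed results waiting for their turn
    std::size_t next_ = 0;             // sequence number the consumer needs next
    std::mutex mutex_;
    std::condition_variable cv_;
};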
This parallel_task_master may have some value on its own, but it was written to assist in creating parallel versions of the functions in <algorithm>. Some of them are not so parallelizable; some are embarrassingly so.
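For the embarrassingly parallel ones, even a naive chunking version works; a sketch (mine, untested, using plain std::thread, and assuming fn is safe to call concurrently on different elements):

#include <algorithm>
#include <cstddef>
#include <iterator>
#include <thread>
#include <vector>

template <typename RandomIt, typename Func>
void parallel_for_each(RandomIt first, RandomIt last, Func fn) {
    const std::size_t n = static_cast<std::size_t>(std::distance(first, last));
    std::size_t workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 2;
    if (n < workers * 2) {                 // tiny ranges: not worth spawning threads
        std::for_each(first, last, fn);
        return;
    }
    const std::size_t chunk = (n + workers - 1) / workers;
    std::vector<std::thread> threads;
    for (std::size_t i = 0; i < workers; ++i) {
        // One contiguous chunk per worker, clamped to the end of the range.
        RandomIt begin = first + static_cast<std::ptrdiff_t>(std::min(i * chunk, n));
        RandomIt end   = first + static_cast<std::ptrdiff_t>(std::min((i + 1) * chunk, n));
        if (begin == end) break;
        threads.emplace_back([=] { std::for_each(begin, end, fn); });
    }
    for (auto& t : threads) t.join();
}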
I think this could be interesting. There is a research group at my alma mater that deals with parallel algorithms and such, and it has a project along these lines. I know some of the people involved, but I don't know whether they'd be willing to share the project sources; I suspect they would. Either way, the papers are published and you may find them useful. Note that Bjarne is one of the faculty members involved in this project.
Go here to find out about STAPL (Standard Template Adaptive Parallel Library): http://parasol.tamu.edu/groups/rwergergroup/research/stapl/
bc

--
Benjamin Collins <ben.collins@acm.org>
--
- Clint Levijoki
"I'd rather have a bottle in front of me than a frontal lobotomy" - Tom Waits