[boost.asio] Concurrently Ping 1000+ Hosts

Hello. The subject line pretty much says it all: I am trying to expand on the ICMP ping example given in the Boost.Asio documentation, and I would like to ping 1000+ hosts in a matter of minutes rather than a matter of hours. I have already expanded the class shown here: http://www.boost.org/doc/libs/1_40_0/doc/html/boost_asio/example/icmp/ping.c...

Before I start posting code, I would like to discuss the strategy for doing this. In my main thread, I was thinking about creating a vector of PingX objects. Each PingX would have a separate io_service and would wrap its async handlers in strands. I set the deadline timer at 800ms so that io_service::run() should return in less than a second. I would then call run() in separate threads to *concurrently* ping these hosts.

The problem appears to be that even though the async handlers are wrapped in strands, there is data corruption. Is it true that when async handlers are wrapped in strands, they still cannot be executed *concurrently*? My understanding of a strand was that it serializes the handler functions to allow thread-safe access. However, I think that because I am accessing the same resources (such as the socket object) from multiple threads, my expected output is being corrupted and some exceptions are occasionally thrown.

Are there any suggestions for the logic surrounding this, or for whether this can be done in a multithreaded environment?

Thanks!

-- Kyle Ketterer

On 19/12/2013 11:12, Quoth Kyle Ketterer:
In my Main thread, I was thinking about creating a vector of PingX objects. In the PingX class, I would have a separate io_service as well as wrap the async() functions in strands. I set the deadline timer at 800ms that way the io_service should return in less than a second. I would then call io_service run() in separate threads to *concurrently* ping these hosts.
The problem appears to be that even though the async handlers are wrapped in strands, there is data corruption. Is it true that when async handlers are wrapped in strands, they still cannot be executed *concurrently*? My understanding of a strand was that it serializes the handler functions to allow thread-safe access. However, I think that because I am accessing the same resources (such as the socket object) from multiple threads, my expected output is being corrupted and some exceptions are occasionally thrown.
It sounds like you're using the wrong approach. You want one single io_service object, and one io_service::strand per non-allowed-concurrency task (possibly one per PingX object). You can then start up any number of threads that you wish (don't do 1000 though -- typically maximum concurrency is achieved with somewhere between #CPUs and #CPUs*2 threads) and have them all call run() on the same shared io_service object.

Any handler that is queued via the same strand object is guaranteed not to be executed concurrently with any other handler on the same strand (but there are no guarantees with respect to handlers on other strands). Additionally, even without strands you can guarantee that a handler cannot execute before the line that does the async_* operation to start it, and once called cannot execute again until you make another async_* call from the handler (or elsewhere). Thus if your operations form a single chain (such as a read that is started on construction and continued only at the end of the handler) then those reads can't execute concurrently with themselves even without a strand. (But you still might want a strand or lock to protect data shared between read and write, or between multiple concurrent writes, or separate objects with shared data.)

Note that if you want the "ping"s to occur concurrently you'll have to use different sockets for each one. It's up to you whether you create one socket for each PingX or make a smaller pool that is shared out as needed (but the former is easier, and the latter will need thread protection on the pool).

Gavin,
Since I will now be using one io_service object, how can I stop the threads
from blocking? When I had an io_service per object, I could just call
io_service.stop() and I could successfully call a join() on the thread.
Since I am now passing a reference to an io_service object, it seems as
though the thread will not join().
My PingX constructor is as follows:
public:
    PingX( boost::asio::io_service& io_, const char* destination,
           std::vector<std::string>& ping_results, int index = 0 ) :
        ping_results_(ping_results),
        ip_addr(destination), strand_(io_), resolver_(io_),
        socket_(io_, icmp::v4()),
        timer_(io_), io_service_(io_), sequence_number_(0), num_replies_(0)
    {
        socket_.non_blocking(true);
        boost::asio::socket_base::reuse_address option(true);
        socket_.set_option(option);
        _index_ = index;
        icmp::resolver::query query(icmp::v4(), ip_addr, "");
        destination_ = *resolver_.resolve(query);
        strand_.post( boost::bind(&PingX::start_send, this) );
        strand_.post( boost::bind(&PingX::start_receive, this) );
    }
The async functions in start_send and start_receive are strand.wrap()'d. I call timer.cancel() as well as socket.close(), and it seems I can't get it to unblock. Any ideas?
Main thread would be put together something like:
boost::asio::io_service io_;
boost::asio::io_service::work work_(io_);
int max_threads = 4;
int t_count = 0;
for (..i..)
{
    // buffer etc etc
    p.push_back( new XPing( io_, buffer.data(), boost::ref(ping_results), i) );
    threads.push_back( new boost::thread(
        boost::bind(&boost::asio::io_service::run, boost::ref(io_)) ) );
    t_count++;
    if (t_count >= max_threads) // join threads after max_threads
    {
        for (...j...)
        {
            threads[j]->join();
        }
        t_count = 0;
    }
}
// cleanup threads, etc
So, I have two vectors. "p" is a vector of XPing pointers and "threads" is
where I store the threads. I also pass a ref to a separate vector which is
just a vector of ping results.
Any ideas?
--
Kyle Ketterer

On 19/12/2013 13:40, Quoth Kyle Ketterer:
Since I will now be using one io_service object, how can I stop the threads from blocking? When I had an io_service per object, I could just call io_service.stop() and I could successfully call a join() on the thread. Since I am now passing a reference to an io_service object, it seems as though the thread will not join().
You only stop() the io_service once all the pings are done. Or, if the PingX object internally knows when it's done (e.g. if it reads the right things, or times out), then you just let them not requeue their async_* work when they're done and they'll "fall out" automatically.

If you don't have an explicit io_service::work object then run() will automatically terminate once there are no outstanding async_* jobs or in-progress handler calls; then you don't need to stop() at all unless the user wants to cancel before all the pings have finished. (And even then, you can just cancel the pings instead.)
icmp::resolver::query query(icmp::v4(), ip_addr, ""); destination_ = *resolver_.resolve(query);
You should probably make this async as well, otherwise it will limit performance.
The async functions in start_send and start_receive are strand.wrap() 'd. I call timer.cancel() as well as socket.close() and it seems I can't get it to unblock. Any ideas?
Technically you should cancel/close on the same strand as the read/write operations, as these are not officially cross-thread-safe operations. In practice it usually seems to be safe to not do this though, but it might depend on your platform.
boost::asio::io_service::work work_(io_);
You only need the explicit work object if you are going to have a moment when you're run()ing with no other work (no outstanding async_* requests). Typically this is only an issue if you create the threads first and have some other action that may or may not happen (e.g. user activity not involving an incoming network request) that occurs later to initiate the async operations, which does not appear to be the case in your example.

In your case, you should just be able to create all your ping objects, which should just queue up an async_resolve (which will then internally queue the async_read/async_send when the resolve completes), so you shouldn't need explicit work.
//buffer etc etc
p.push_back( new XPing ( io_, buffer.data(), boost::ref(ping_results), i) );
Remember that you can't share writable buffers between concurrent workers. If this is constant data that you're sending then this is ok, but otherwise not.
threads.push_back( new boost::thread(
    boost::bind(&boost::asio::io_service::run, boost::ref(io_)) ) );
t_count++;
if (t_count >= max_threads) // join threads after max threads
{
    for (...j...)
    {
        threads[j]->join();
    }
    t_count = 0;
}
This is wrong. You should have one loop creating all your ping objects (and setting their initial async work). Then a *separate* loop that creates the threads, and then finally after that a *third* loop that joins them all. And as I said before, your number of threads should not be related to your number of ping objects; it should be related to the # of CPUs, or just a fixed (small) number.

Thanks for the suggestions, Gavin! I feel as though I've made some progress. However, my app is still "hanging" when executing the io_service. Here is a full example of my code (sorry if this is long).

Main thread:

I am using wxWidgets for my platform, so if you're familiar with it at all you'll see I'm basically reading a list of IPs from a listview.....
--------------------------------------------------------
int TOTAL_ITEMS = lstUsers->GetItemCount();
int TOTAL_FINISHED = 0;
int MAX_THREADS = 4;
if ( TOTAL_ITEMS < MAX_THREADS )
MAX_THREADS = TOTAL_ITEMS;
std::vector < XPing* > p;
std::vector< std::string> ping_results (lstUsers->GetItemCount());
std::vector< boost::thread* > threads;
boost::asio::io_service io_;
wxString ip_;
wxCharBuffer buffer;
//create ping objects
for ( int i=0; i < TOTAL_ITEMS; i++ )
_______________________________________________ Boost-users mailing list Boost-users@lists.boost.org http://lists.boost.org/mailman/listinfo.cgi/boost-users
--
Kyle Ketterer

Because the review of boost.fiber has been announced, I believe it could help in your scenario. With boost.fiber you could create as many threads as there are cores on your system (let's say 2 threads for 2 cores). On each thread you create 500 fibers which run concurrently on that thread - in fact, fibers are lightweight userland threads.

One benefit is that boost.fiber integrates into boost.asio's async-result framework, so you don't need to scatter your code with callbacks: e.g. you can merge start_send() and handle_send() into one function, start_receive() and handle_receive() into one function, etc.

    // fiber gets suspended until message was read
    boost::asio::async_read( socket_, boost::asio::buffer( channel), yield[ec]);
    if ( ec) throw std::runtime_error("some error");

    // fiber gets suspended until message was written
    boost::asio::async_write( socket_, boost::asio::buffer( data_, max_length), yield[ec]);

You can find more detailed info in boost.fiber's documentation: http://ok73.funpic.de/boost/libs/fiber/doc/html/fiber/asio.html The library itself contains several examples demonstrating the usage together with boost.asio.

On 19 Dec 2013 at 8:17, Oliver Kowalke wrote:
// fiber gets suspended until message was read
boost::asio::async_read( socket_, boost::asio::buffer( channel), yield[ec]);
if ( ec) throw std::runtime_error("some error");

// fiber gets suspended until message was written
boost::asio::async_write( socket_, boost::asio::buffer( data_, max_length), yield[ec]);

You can find more detailed info in boost.fiber's documentation: http://ok73.funpic.de/boost/libs/fiber/doc/html/fiber/asio.html The library itself contains several examples demonstrating the usage together with boost.asio.
You just made me VERY interested in the forthcoming Fiber peer review, thank you. AFIO's third worked example in its tutorial is a peak-performance "find in files" implementation and, being completely asynchronous right down to even enumerating directories, it is a mess of callbacks. I had assumed that was as good as it could get. It looks possible that Fiber could replace much of that mess of callbacks with something far more readable.

You definitely have my attention now; I may even have a crack at adding Fiber support to AFIO and see what happens. Once again, thank you.

Niall

--
Currently unemployed and looking for work. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/

Are fibers just inherently threads? I read that they have all of the member functions that boost.thread offers. The async functionality looks very useful - at the very least, a promising extension of boost.thread.

On a side note, does anybody have any idea how I can make the above code concurrency-capable? It may be impractical to accomplish what I'm trying to do, but I'm feeling ambitious :) I could process all the pings in a loop with just one thread at a time and set the timeout at 500 ms, but then it would take 1500 seconds to ping 3000 IPs. I'm just trying to be as fast and efficient as possible.

Niall, I am very interested in AFIO. I develop mainly for Windows and have recently begun reading the MSDN docs for completion ports, etc. There just don't seem to be many working examples, and the documentation is weak. I posted on here a couple of weeks ago trying to implement multithreaded I/O in a RAID environment. I will be considering AFIO for sure.

On 19 Dec 2013 at 20:05, reminisc3@gmail.com wrote:
Niall, I am very interested in AFIO. I develop mainly for Windows and have recently begun reading the MSDN docs for completion ports, etc. There just don't seem to be many working examples, and the documentation is weak. I posted on here a couple of weeks ago trying to implement multithreaded I/O in a RAID environment. I will be considering AFIO for sure.
Glad you find it interesting, and I hope AFIO's unusual API design doesn't put you off. I have some real-world benchmarks for AFIO at https://ci.nedprod.com/view/All/job/Boost.AFIO%20Build%20Peer%20Review%20Documentation/Boost.AFIO_Documentation/doc/html/afio/quickstart/async_file_io/so_what.html (bottom of page) where I compare single-threaded iostreams vs OpenMP iostreams vs AFIO implementations. Needless to say, AFIO whips everything else performance-wise, but it is definitely at the cost of hideous code complexity, with lots of splitting of execution state across callbacks. I also threw RAM at the problem, which AFIO lets you do a bit too easily.

Be aware that AFIO has not yet passed Boost peer review. And it has a known race condition in it; we currently suspect it's related to the race condition in Boost.Thread's future/promise implementation.

Regarding Windows IOCP programming in general, I'd recommend you use one of Microsoft's very good async C++ libraries, which were implemented by one of the highest-calibre C++ teams in the world. Look into Casablanca, C++ AMP and the Microsoft PPL; they might ease your burden on Windows. Programming WinRT using C++ isn't too terrible either, actually, and it's all IOCP and async throughout.

Niall

On 12/20/2013 06:03 AM, Niall Douglas wrote:
Glad you find it interesting, and I hope AFIO's unusual API design doesn't put you off. I have some real world benchmarks for AFIO at
I think that this is likely to be the major obstacle to the adoption of AFIO. Have you considered adding some kind of convenience API that is easier to use for the casual user, but also more limited in functionality?

On Dec 20, 2013, at 6:17 AM, Bjorn Reese wrote:
On 12/20/2013 06:03 AM, Niall Douglas wrote:
Glad you find it interesting, and I hope AFIO's unusual API design doesn't put you off. I have some real world benchmarks for AFIO at
I think that this is likely to be the major obstacle to the adoption of AFIO. Have you considered adding some kind of convenience API that is easier to use for the casual user, but also more limited in functionality?
AFIO + Fiber :-) (no constraint on functionality)

On 20 Dec 2013 at 8:16, Nat Goodspeed wrote:
Glad you find it interesting, and I hope AFIO's unusual API design doesn't put you off. I have some real world benchmarks for AFIO at
I think that this is likely to be the major obstacle to the adoption of AFIO. Have you considered adding some kind of convenience API that is easier to use for the casual user, but also more limited in functionality?
AFIO + Fiber :-) (no constraint on functionality)
I think that's a very real possibility. For more complex use cases the multiple-nested-callback design is probably unavoidable, but there is a huge middle ground where callbacks are overkill and fibers might be just the ticket.

I also worry about performance with fibers - I can see them using up a lot of L1 cache, because you can't avoid copy semantics with them as they dump and reload context. Callbacks encourage you to use a pointer to a state object, and already one of the biggest performance drains in AFIO is the parameter packing/unpacking overhead of std::function<> and std::bind() because it uses up precious cache space.

Anyway, very much looking forward to seeing what fibers can do for AFIO.

Niall

On 20 Dec 2013 at 12:17, Bjorn Reese wrote:
Glad you find it interesting, and I hope AFIO's unusual API design doesn't put you off. I have some real world benchmarks for AFIO at
I think that this is likely to be the major obstacle to the adoption of AFIO. Have you considered adding some kind of convenience API that is easier to use for the casual user, but also more limited in functionality?
Can you suggest something? I honestly can't think of anything simpler which also provides strong write-ordering guarantees.

Niall

On 12/20/2013 05:31 PM, Niall Douglas wrote:
Can you suggest something? I honestly can't think of anything simpler which also provides strong write ordering guarantees.
I have not given this much thought, so consider the following a brainstorm.

I am thinking about an API that uses handles and looks more like Asio sockets. Write ordering can be handled not by batching operations together, but rather by calling the next write operation from the callback of the previous operation (Asio-style). This will not always yield good performance, but oftentimes that is less relevant. If you need performance, then the "advanced" dispatcher API is available.

So there could be a file handle (and directory handle) class for file (directory) manipulation calls, which hides all the details of the dispatcher etc.:

    class file_handle {
    public:
        void read(buffer, read_callback);
        void write(buffer, write_callback);
        // and so on
    };

    class directory_handle {
    public:
        void create(name, create_callback); // uses file(single) or dir(single)
        void remove(name, remove_callback); // uses rmdir(single)
        void watch(name, watch_callback);   // directory monitoring
        // and so on
    };

On 20 Dec 2013 at 19:16, Bjorn Reese wrote:
Can you suggest something? I honestly can't think of anything simpler which also provides strong write ordering guarantees.
I have not given this much thought so consider the following a brainstorm.
We really ought to move this off boost-users ... but we'll see how it goes.
I am thinking about an API that uses handles that looks more like Asio sockets. Write ordering can be handled, not by batching operations together, but rather calling the next write operation from the callback of the previous operation (Asio-style.) This will not always yield good performance, but oftentimes that is less relevant. If you need performance, then the "advanced" dispatcher API is available.
So there could be a file handle (and directory handle) class for file (directory) manipulation calls, which hides all the details of the dispatcher etc.
class file_handle {
public:
    void read(buffer, read_callback);
    void write(buffer, write_callback);
    // and so on
};

class directory_handle {
public:
    void create(name, create_callback); // uses file(single) or dir(single)
    void remove(name, remove_callback); // uses rmdir(single)
    void watch(name, watch_callback);   // directory monitoring
    // and so on
};
I'm struggling to see the merit in such an approach - it would be incredibly verbose and complex to write even simple solutions with it, because I/O on files is not like sockets: in particular, you almost never need to strongly order I/O across a sequence of multiple sockets, but that is a very common requirement with files, e.g. during ACID operations.

AFIO tries to help users not write callbacks except when necessary, but if you do want a user-defined callback you simply chain a call() or completion() operation onto the item whose completion you want to be called back upon. The idea, you see, is that you subclass the async_io_dispatcher class with additional completion handlers, and use those as building blocks for further subclasses of async_io_dispatcher. That hopefully gets people to break up their callbacks into reusable completion handlers, and saves people writing and debugging code.

Before you say "this should be in the documentation": yes, it should and will be, after Paul gets his directory monitoring implementation working. I'm thinking that will form the fourth section in the beginner's tutorial - how to modularise and make reusable the normally bespoke glue code which makes up traditional callbacks. It might actually be worth adding a section 3b which shows how the naive approach in 3a is a very stupid idea :)

As much as this sort of operation-dependency-graph-based design works well for its niche, I agree there is a swathe of code which ends up looking like the find-in-files implementation, and for that hopefully fibers will make things look much more sane.

Niall

2013/12/20
Are fibers just inherently threads? I read that they have all of the member functions that boost.thread offers. The async functionality looks very useful . At the very least, a promising extension of boost.thread.
fibers != threads. "In computer science, a *fiber* is a particularly lightweight thread of execution." http://en.wikipedia.org/wiki/Fiber_(computer_science)

On 20 Dec 2013 at 8:04, Oliver Kowalke wrote:
Are fibers just inherently threads? I read that they have all of the member functions that boost.thread offers. The async functionality looks very useful . At the very least, a promising extension of boost.thread.
fibers != threads,
I would call a fiber the user space portion of a thread, and therefore minus any kernel support.

Niall
participants (7):
- Bjorn Reese
- Gavin Lambert
- Kyle Ketterer
- Nat Goodspeed
- Niall Douglas
- Oliver Kowalke
- reminisc3@gmail.com