[BGL] Converting between adjacency_list with listS and vecS
Hi all,

The long-standing question I've had about BGL is how to easily convert a graph (using adjacency_list) from listS as the vertex list to vecS, and vice versa. I understand why the two selectors lead to different implementations suited to different uses of the algorithms, but is there an easy way to convert a graph back and forth between them? Any tip would be much appreciated.

Thanks,
Ethan Kim
Do you mean given Graph A using listS you want to produce Graph B using vecS?

How about something like this:

#include <boost/graph/graph_traits.hpp>
#include <map>

template <class GraphFrom, class GraphTo>
void foo(const GraphFrom& from, GraphTo& to)
{
    typedef typename boost::graph_traits<GraphFrom>::vertex_iterator   viter;
    typedef typename boost::graph_traits<GraphFrom>::vertex_descriptor vdesc;
    typedef typename boost::graph_traits<GraphFrom>::edge_iterator     eiter;
    typedef typename boost::graph_traits<GraphTo>::vertex_descriptor   vdesc_to;
    typedef std::map<vdesc, vdesc_to> map_t;
    map_t table;   // maps each original vertex to its copy in the target graph

    // copy vertices
    viter vi, vend;
    for (boost::tie(vi, vend) = vertices(from); vi != vend; ++vi)
        table[*vi] = add_vertex(to);

    // copy edges
    eiter ei, eend;
    for (boost::tie(ei, eend) = edges(from); ei != eend; ++ei)
    {
        vdesc s = source(*ei, from);
        vdesc t = target(*ei, from);
        add_edge(table[s], table[t], to);
    }
}

Assuming my template mangling is correct, this should produce an isomorphic graph of the corresponding type in about O( VlgV + ElgE ) time.

Justin
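A quick usage sketch of the template above. The ListGraph/VecGraph typedefs and the tiny two-vertex graph are made-up examples rather than anything from the original post; it assumes foo() as written above is in scope.

#include <boost/graph/adjacency_list.hpp>

// adjacency_list<OutEdgeList, VertexList, Directed>: the second parameter selects listS/vecS.
typedef boost::adjacency_list<boost::vecS, boost::listS, boost::undirectedS> ListGraph; // listS vertex set
typedef boost::adjacency_list<boost::vecS, boost::vecS,  boost::undirectedS> VecGraph;  // vecS vertex set

int main()
{
    ListGraph gl;
    boost::graph_traits<ListGraph>::vertex_descriptor a = add_vertex(gl);
    boost::graph_traits<ListGraph>::vertex_descriptor b = add_vertex(gl);
    add_edge(a, b, gl);

    VecGraph gv;
    foo(gl, gv);   // gv is now an isomorphic vecS-based copy of gl's structure
    return 0;
}

The same call works in the other direction (vecS source, listS target), since both descriptor types are usable as std::map keys.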
Hi Justin,
Thanks for the tip / source code! I thought the library would have a built-in copy constructor for converting between different types, but I guess not. On another note, since this method gives an isomorphic graph, all the properties must be copied over manually, correct? I think I'm only using vertex_name_t, which is an internal property, but.. hmm..
Thanks again!
Ethan
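On the property question: a structure-only copy like the one above does drop internal properties, so something like vertex_name has to be carried over explicitly via the property maps. Below is a rough sketch of a variant of foo() that does this; the function name copy_with_names is made up, and it assumes both graph types declare an internal property<vertex_name_t, std::string> (or a compatible value type). For what it's worth, BGL also ships boost::copy_graph in <boost/graph/copy.hpp>, which copies structure and internal properties in one call, though for a listS source graph it needs to be given a vertex index map (or an explicit orig_to_copy map).

// Sketch only -- not code from this thread.
template <class GraphFrom, class GraphTo>
void copy_with_names(const GraphFrom& from, GraphTo& to)
{
    typedef typename boost::graph_traits<GraphFrom>::vertex_descriptor vdesc;
    typedef typename boost::graph_traits<GraphTo>::vertex_descriptor   vdesc_to;
    std::map<vdesc, vdesc_to> table;

    // Property maps for the internal vertex_name property on both graphs.
    typename boost::property_map<GraphFrom, boost::vertex_name_t>::const_type
        name_from = get(boost::vertex_name, from);
    typename boost::property_map<GraphTo, boost::vertex_name_t>::type
        name_to   = get(boost::vertex_name, to);

    typename boost::graph_traits<GraphFrom>::vertex_iterator vi, vend;
    for (boost::tie(vi, vend) = vertices(from); vi != vend; ++vi)
    {
        vdesc_to v = add_vertex(to);
        put(name_to, v, get(name_from, *vi));   // copy the vertex_name value
        table[*vi] = v;
    }

    typename boost::graph_traits<GraphFrom>::edge_iterator ei, eend;
    for (boost::tie(ei, eend) = edges(from); ei != eend; ++ei)
        add_edge(table[source(*ei, from)], table[target(*ei, from)], to);
}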
Does this code look OK? Specifically, does this look like a good way to have a fixed # of threads service requests, and have each thread asynchronously write out the response to their client?

In the code below I left out HandleHTTPRequest; that function parses the request, does some work, and writes out the response 8 KB at a time, using async_write.

Note: the threads don't wait until the previous async_write is done; they each just keep calling async_write until all their data is out (but they are careful to make sure their buffers are around for long enough).

void CBaseWebServer::StartAsync(int port)
{
    m_pAcceptor = shared_ptr<tcp::acceptor>(
        new tcp::acceptor(m_IoService, tcp::endpoint(tcp::v4(), port)));

    // Find out how many threads we should run
    int numberOfThreads = boost::thread::hardware_concurrency();

    StartAccept();

    for ( int i = 0; i < numberOfThreads; i++ )
    {
        // Fire up a thread to run the IO service
        shared_ptr<thread> pThread = shared_ptr<thread>(
            new thread(bind(&CBaseWebServer::RunIoService, this, i)));
        m_Threads.push_back(pThread);
    }
}

void CBaseWebServer::RunIoService(int threadId)
{
    m_pPerThreadId.reset( new int(threadId) );
    cout << "Starting thread: " << threadId << endl;
    try
    {
        m_IoService.run();
    }
    catch ( ... )
    {
        cout << "Unexpected exception caught in " << BOOST_CURRENT_FUNCTION << endl
             << boost::current_exception_diagnostic_information();
    }
}

void CBaseWebServer::StartAccept()
{
    try
    {
        shared_ptr<tcp::socket> pSocket(new tcp::socket(m_IoService));
        m_pAcceptor->async_accept(*pSocket,
            bind(&CBaseWebServer::HandleAccept, this, pSocket,
                 boost::asio::placeholders::error));
    }
    catch ( ... )
    {
        cout << "Unexpected exception caught in " << BOOST_CURRENT_FUNCTION << endl
             << boost::current_exception_diagnostic_information();
    }
}

void CBaseWebServer::HandleAccept(shared_ptr<tcp::socket> pSocket,
                                  const boost::system::error_code& error)
{
    if ( !error )
    {
        try
        {
            HandleHTTPRequest(pSocket);
        }
        catch ( ... )
        {
            cout << "Unexpected exception caught in " << BOOST_CURRENT_FUNCTION << endl
                 << boost::current_exception_diagnostic_information();
        }
        StartAccept();
    }
    else
    {
        cout << "CBaseWebServer::HandleAccept received error: "
             << error.message().c_str() << endl;
    }
}
I am not an asio expert by any means, though I've dabbled. YMMV.
On Wed, Jul 22, 2009 at 8:25 PM, Alex Black wrote:
Does this code look OK? Specifically, does this look like a good way to have a fixed # of threads service requests, and have each thread asynchronously write out the response to their client?
Seems like the right thing to do.
In the code below I left out HandleHTTPRequest; that function parses the request, does some work, and writes out the response 8 KB at a time, using async_write.
Note: the threads don't wait until the previous async_write is done; they each just keep calling async_write until all their data is out (but they are careful to make sure their buffers are around for long enough).
Based on what I've read here recently, you shouldn't start another async_write on a socket while a previous async_write on that socket is still outstanding; overlapping writes can end up interleaving their data. This may make a difference on systems with more cores, even if you aren't seeing any problems today. So you probably need a completion handler for the async_write that issues the next async_write only after the previous one finishes, until everything is written.
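A minimal sketch of that chaining pattern follows. The CChunkedWriter name, the 8 KB chunk size, and the shared_from_this lifetime handling are illustrative assumptions, not code from this thread.

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>
#include <string>

using boost::asio::ip::tcp;

class CChunkedWriter : public boost::enable_shared_from_this<CChunkedWriter>
{
public:
    CChunkedWriter(boost::shared_ptr<tcp::socket> pSocket, const std::string& response)
        : m_pSocket(pSocket), m_Data(response), m_Offset(0) {}

    void Start() { WriteNextChunk(); }

private:
    void WriteNextChunk()
    {
        std::size_t remaining = m_Data.size() - m_Offset;
        std::size_t chunk = remaining < 8192 ? remaining : 8192;   // at most 8 KB per write
        boost::asio::async_write(*m_pSocket,
            boost::asio::buffer(m_Data.data() + m_Offset, chunk),
            boost::bind(&CChunkedWriter::HandleWrite, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

    void HandleWrite(const boost::system::error_code& error, std::size_t bytesTransferred)
    {
        if (error) return;              // give up on error; a real server would log/close here
        m_Offset += bytesTransferred;
        if (m_Offset < m_Data.size())
            WriteNextChunk();           // only now is it safe to start the next write
    }

    boost::shared_ptr<tcp::socket> m_pSocket;
    std::string m_Data;                 // kept alive by this object until all writes finish
    std::size_t m_Offset;
};

It could be kicked off per request with something like boost::shared_ptr<CChunkedWriter>(new CChunkedWriter(pSocket, response))->Start(); the handler chain keeps the object, and therefore the buffer, alive until the last write completes.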
[...]
void CBaseWebServer::StartAsync(int port)
{
    [...]
    for ( int i = 0; i < numberOfThreads; i++ )
    {
        // Fire up a thread to run the IO service
        shared_ptr<thread> pThread = shared_ptr<thread>(
            new thread(bind(&CBaseWebServer::RunIoService, this, i)));
N threads entering this->RunIoService...
[...]
void CBaseWebServer::RunIoService(int threadId)
{
    m_pPerThreadId.reset( new int(threadId) );
N threads resetting what appears to be a single shared pointer; I'm assuming it's a shared_ptr. The problem is that a shared_ptr *instance* isn't thread-safe, so having N threads independently resetting and assigning it is going to cause problems. (The reference counting *is* thread-safe, but each thread needs its own shared_ptr object, each pointing to the same thing.) Having said that, I don't see the purpose of the pointer anyway, as it isn't used in the code sample.

Not sure about the error handling on accept errors; I might have it call StartAccept from HandleAccept regardless of the error, though there might be some errors I'd give up on.
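If per-thread storage is what m_pPerThreadId is meant to be, one option is boost::thread_specific_ptr, where reset() only ever touches the calling thread's slot. The sketch below uses made-up names and a free function instead of the class member, purely for illustration.

#include <boost/thread/thread.hpp>
#include <boost/thread/tss.hpp>
#include <boost/bind.hpp>
#include <iostream>

// One slot per thread: a reset() in one thread never touches another thread's pointer.
boost::thread_specific_ptr<int> g_pPerThreadId;

void RunIoService(int threadId)
{
    g_pPerThreadId.reset(new int(threadId));   // safe: affects only this thread's slot
    std::cout << "Starting thread: " << *g_pPerThreadId << std::endl;
    // ... io_service.run() would go here ...
}

int main()
{
    boost::thread_group threads;
    for (int i = 0; i < 4; ++i)
        threads.create_thread(boost::bind(&RunIoService, i));
    threads.join_all();
    return 0;
}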
participants (4)
- Alex Black
- Ethan Kim
- Justin Leonard
- Oliver Seiler