First question, and this is always the first question:
Have you actually *measured* where the performance problems are?
Lots of comments inline:
On Sun, Jan 10, 2010 at 5:29 PM, Stephen Torri wrote:
I was thinking that I could add a boost::thread to my logging class, called Trace,
You create the thread in the constructor of Trace, which is called via the first call to Singleton<Trace>::Instance(), which I assume is for the first call to log something. Does this potentially happen before main() is called? Typically creating threads before main() isn't recommended. You might need to somehow delay that.
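If it turns out the first log call really can happen that early, one option is to delay starting the worker thread until the first real log call instead of doing it in the constructor. Rough sketch only - m_thread and m_thread_started are names I'm inventing, not from your code:

void Trace::add_Trace ( boost::shared_ptr<interface::ITraceable> trace )
{
    {
        boost::lock_guard<boost::mutex> lock ( m_lock );
        if ( !m_thread_started )   // hypothetical bool member, false initially
        {
            // worker starts on first use, comfortably after main() has begun
            m_thread.reset ( new boost::thread ( boost::bind ( &Trace::threadMain, this ) ) );
            m_thread_started = true;
        }
        m_queue.push_back ( trace );
    }
    m_condition.notify_one();
}

(m_thread would be something like a boost::scoped_ptr<boost::thread> member, which also gives the destructor something to join on later.)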
template <typename T>
class Singleton : boost::noncopyable
{
public:
    static void init ()
    {
        m_obj.reset ( new T () );
If you ever want to log from the allocator, you will probably want to
somehow avoid using 'new' here.
ie reserve some memory for T using something like:
template <typename T>
struct storage_for
{
    // eg boost::aligned_storage (from <boost/aligned_storage.hpp>):
    // raw bytes, sized and aligned for a T, but with no constructor run
    boost::aligned_storage< sizeof(T), boost::alignment_of<T>::value > data;
};
Does this 'work' or do you need to know T?
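Something like this is what I have in mind (sketch only, untested; m_storage is an assumed extra static member): init() placement-news the T into the reserved block instead of calling new:

template <typename T>
class Singleton : boost::noncopyable
{
public:
    static void init ()
    {
        // construct the T inside the pre-reserved raw bytes - no heap allocation
        m_obj = new ( m_storage.data.address() ) T ();
    }
    // ... Instance() etc unchanged ...
private:
    static storage_for<T> m_storage;   // raw, suitably aligned bytes
    static T *            m_obj;       // now a plain pointer into m_storage
};

Note that m_obj can't stay a scoped_ptr in that case - you must not delete placement-new'd storage; a matching fini()/shutdown() would call m_obj->~T() explicitly instead.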
template <typename T> boost::scoped_ptr<T> myproject::trace::Singleton<T>::m_obj ( 0 );
template <typename T> boost::once_flag myproject::trace::Singleton<T>::m_flag = BOOST_ONCE_INIT;
void Trace::add_Trace ( boost::shared_ptr<interface::ITraceable> trace )
{
is boost::shared_ptr<interface::ITraceable> == interface::ITraceable::ptr_t ?
// Add ITraceable object to the queue
add *pointer to* ITraceable object!
    {   // lock queue
        {
            boost::lock_guard<boost::mutex> lock ( m_lock );
            m_queue.push_back ( trace );
        }
        m_condition.notify_one();
    }
}
void Trace::threadMain()
{
    boost::unique_lock<boost::mutex> lock ( m_lock );
    while ( m_hasWork )
    {
        while ( m_queue.size() == 0 )
        {
            m_condition.wait ( lock );
        }
*** note that the mutex is now locked here *** (condition.wait() unlocks while waiting, then relocks before returning)
        this->processTraces();
    }
}
void Trace::processTraces()
{
    QueueType local_queue;
// Lock queue and copy
*** The queue is ALREADY locked (see above) ***
    {
        boost::lock_guard<boost::mutex> lock ( m_lock );
        std::copy ( m_queue.begin(), m_queue.end(), local_queue.begin() );
*** don't you need to also empty m_queue? ***
}
    for ( QueueType::iterator pos = local_queue.begin(); pos != local_queue.end(); pos++ )
    {
        // Process ITraceable object and write to a file
    }
}
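FWIW, one way to handle both of those at once (the mutex already being held, and the queue never being emptied) is to swap the queue out under the lock that threadMain() is still holding. Just a sketch, assuming QueueType is std::deque or something else with a swap() member:

void Trace::processTraces()
{
    // m_lock is already held by the unique_lock in threadMain(),
    // so do NOT lock it again here - just steal the whole queue.
    QueueType local_queue;
    local_queue.swap ( m_queue );   // O(1), and it leaves m_queue empty

    // (even better: release m_lock before the slow file I/O below,
    //  eg by passing threadMain()'s unique_lock in and unlocking it)

    for ( QueueType::iterator pos = local_queue.begin();
          pos != local_queue.end();
          ++pos )
    {
        // Process ITraceable object and write to a file
    }
}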
And last but not least - I am concerned that ITraceable is *shared*. Typically when logging something, you capture a 'snapshot' of values in time, and log that out. With a shared snapshot, the caller may actually change values *after* 'logging' them. Maybe in an attempt to reuse the ITraceable. ie client code:

    do_something();
    tracer = new Traceable(....);
    log(tracer);

    do_something_else();
    tracer->foo = 12;
    log(tracer);

So maybe your interface should *take* the pointer (via auto_ptr or unique_ptr or etc) instead of sharing. Alternatively, you could copy the ITraceable when placing it into the queue (maybe via a 'clone' method - I'm assuming ITraceable is polymorphic with various derivations possible? Otherwise just make it Traceable, no 'I'nterface, and pass by value...)

Oh, and the destructor:

Trace::~Trace()
{
    // stop thread
    m_hasWork = false;
}

m_hasWork = false is not protected by a lock, so other CPUs won't necessarily see it be set in the order you might be expecting. Oh wait, actually, you are not *waiting* for the other thread to end. So the destructor leaves, and the memory for Trace is deleted/reused, etc, and then at some point later the thread wakes up and tries to read it! BOOOM? And actually, the thread probably will NOT wake up - it will probably still be waiting on the condition.wait() call. So you need to signal that.

Trace::~Trace()
{
    // stop thread
    {
        boost::lock_guard<boost::mutex> lock ( m_lock );
        // this needs to be inside the lock (or at least set with an
        // atomic_release memory barrier) - otherwise the thread might
        // wake up from the condition notification, and yet not see
        // m_hasWork == false.
        m_hasWork = false;
    }
    m_condition.notify_one();

    // use join (whatever the syntax) to wait for the thread to finish
    join(...);
}

That's all I saw from a quick glance. There may be more.

Tony
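P.S. In case it helps, a rough sketch of the 'take the pointer' option (the signature is mine, not from your post) - boost::shared_ptr can adopt a std::auto_ptr directly, so the queue itself doesn't need to change:

// The caller hands over sole ownership of the snapshot,
// so it can no longer be modified after being 'logged'.
void Trace::add_Trace ( std::auto_ptr<interface::ITraceable> trace )
{
    boost::shared_ptr<interface::ITraceable> owned ( trace );   // takes ownership
    {
        boost::lock_guard<boost::mutex> lock ( m_lock );
        m_queue.push_back ( owned );
    }
    m_condition.notify_one();
}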