
Gennadiy Rozental <gennadiy.rozental <at> thomson.com> writes:
I have several questions about this:
1. How do you know, in general, which log entry presents the trace of the first invocation, and which the second:
bool access_db( ... ) { if( /* some error */ ) { log << "invalid account key: " << key; return false; } return true; }
int get_balance(...) { ... if( !access_db(...) ) return 0; ... }
...
log << "Mary has balance: " << get_balance() << " and John has balance: " << get_balance()
Let's say the output is:
access_db Invalid account key: 25 < access_db access_db < access_db Mary has balance: 0 and John has balance: 100
-----------------------------------------
Could you say where the error occurred (without looking at the code or knowing how the logging system works)?
No, I can't. But then an awful lot of the stuff in detailed logs falls into the same category, with or without reordering.
If it were like this:
Mary has balance:
access_db Invalid account key: 25 < access_db 0 and John has balance: access_db < access_db 100
It would be clear.
Yes. However, I have thought of LOG(x) << "foo" << a() << b(); as a request to build and then log a message, not as a request to write to a log stream. All the logging systems I'm familiar with share that concept (e.g. syslog, the Windows NT event log).
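To make that "build then log" concept concrete, here is a minimal sketch (the names LogMessage and the LOG macro below are mine, not from any particular library): the temporary accumulates the whole expression into a private buffer and emits one complete message when it is destroyed at the end of the full expression, so a nested call that logs inside a() or b() can never splice text into the middle of this message.

```cpp
#include <iostream>
#include <sstream>

// Hypothetical sketch: a LOG(x)-style temporary that buffers the whole
// expression and writes one complete message when it goes out of scope.
class LogMessage {
public:
    explicit LogMessage(std::ostream& sink) : sink_(sink) {}
    // Destructor runs at the end of the full LOG(x) << ...; expression,
    // so the message reaches the sink whole, never fragmented.
    ~LogMessage() { sink_ << buf_.str() << '\n'; }

    template <typename T>
    LogMessage& operator<<(const T& v) {
        buf_ << v;          // accumulate into the per-message buffer
        return *this;
    }

private:
    std::ostream& sink_;
    std::ostringstream buf_;
};

#define LOG(sink) LogMessage(sink)
```

With this, LOG(std::cout) << "Mary has balance: " << get_balance(); still lets get_balance() log its own messages, but each one arrives as a separate complete line rather than interleaved mid-message.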
I understand that the latter is slightly less readable, but in my experience log entry fragmenting usually occurs only at high log levels (or in case of an error). At a regular log level, with normal log flow, it isn't an issue.
Ok. The only thing I can think of that preserves the log message concept while giving the stream view you want is to record some sort of nesting level, and then either cache messages so as to give something like the following directly, or just include a level tag that a viewer uses to format it as: Mary has balance: 0 and John has balance: 100
access_db Invalid account key: 25 < access_db access_db < access_db Next log message is back here
I'm not sure if that is close enough?
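One way the level tag might work, as a sketch (everything here, including the [N] tag format and the ScopeLevel guard, is my own illustration of the idea, not an existing API): an RAII guard bumps a nesting depth on function entry, and each message is prefixed with the current depth so a viewer can re-indent or regroup fragments afterwards.

```cpp
#include <sstream>
#include <string>

namespace sketch {

// Current nesting depth; 0 means top-level program flow.
inline int& depth() { static int d = 0; return d; }

// RAII scope guard: declare one at function entry so messages logged
// inside that function carry a deeper level tag.
struct ScopeLevel {
    ScopeLevel()  { ++depth(); }
    ~ScopeLevel() { --depth(); }
};

// Prefix a message with its nesting level, e.g. "[1] invalid account key".
// A log viewer could use the tag to reassemble the stream-like view.
inline std::string tag(const std::string& msg) {
    std::ostringstream os;
    os << '[' << depth() << "] " << msg;
    return os.str();
}

} // namespace sketch
```

A viewer seeing "[0] Mary has balance:" followed by "[1] invalid account key: 25" could then display the nested fragment indented under, or merged back into, the enclosing entry.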
2. How does the log system know when to dump the cache? Does it introduce any scope guards?
Do you mean the cache used on startup, or the buffering used to build a log message? I'll try to answer both; pick the applicable one. The cache is only used to avoid losing log info from static constructors etc. that may run before the logging system itself has been configured (logs connected to appenders). Once logging is initialized, the cache is not used. The buffer is only used to build a single message; its scope is the single LOG(x) << ...; expression.
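The startup cache might look something like this sketch (CachingLog, set_appender, and the 1000-entry limit are all my assumptions for illustration): messages written before any appender is configured are queued, bounded so a forgotten configuration can't exhaust memory; configuring the appender replays the queue once and bypasses it from then on.

```cpp
#include <cstddef>
#include <deque>
#include <functional>
#include <string>

// Hypothetical sketch of the startup cache described above.
class CachingLog {
public:
    void write(const std::string& msg) {
        if (appender_) {            // configured: write through directly
            appender_(msg);
            return;
        }
        if (cache_.size() < kMaxCached)  // not yet configured: queue, bounded
            cache_.push_back(msg);
    }

    void set_appender(std::function<void(const std::string&)> a) {
        appender_ = std::move(a);
        for (const auto& m : cache_)     // replay messages from static ctors
            appender_(m);
        cache_.clear();                  // the cache is never used again
    }

private:
    static constexpr std::size_t kMaxCached = 1000;  // assumed limit
    std::deque<std::string> cache_;
    std::function<void(const std::string&)> appender_;
};
```
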
3. How deep will it work? And how many different caches could we have at the same time?
The cache is limited, to prevent excessive memory use if the user forgets to configure the log. The message buffer only ever holds the single message being built.
4. What kind of overhead does it introduce? Instead of writing directly into the stream, we now keep a second copy of every piece of log output, essentially doubling the memory usage and increasing the logging overhead, don't we?
I'm too confused about what you were asking above to answer this, sorry.
Looks in order to me. I'm still not sure what the alternative is?
It may be desirable in some circumstances. But it may not be. IMO the log should follow program flow 1:1, including the order of events during log entry output.
But then it isn't just a log, it's the whole tree ;-) I do see your point now, though. I just can't quite see how to fit it into an event logging system.

The above thoughts about tracing actual program flow also fail to consider how to reconcile this with the idea of multiple logs and appenders. In general you can have an L x A connection matrix of logs and appenders. It is quite likely I'd write my access_db messages to the db_log, and my account_balance etc. messages to the acct_log. These two logs may or may not both be connected to the same (set of) appenders. I guess the merging/reordering could only work when the writes were to the same log.

Darryl.
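The L x A connection matrix might be sketched like this (Log, Appender, and connect are names I'm inventing for illustration): each log fans its messages out to the set of appenders connected to it, and one appender can be shared by several logs. Any merging or reordering scheme could then only operate among messages that went through the same log.

```cpp
#include <functional>
#include <string>
#include <vector>

// An appender is just a sink for finished messages (console, file, ...).
using Appender = std::function<void(const std::string&)>;

// Hypothetical sketch: a named log holding its row of the L x A matrix.
class Log {
public:
    explicit Log(std::string name) : name_(std::move(name)) {}

    void connect(Appender a) { appenders_.push_back(std::move(a)); }

    void write(const std::string& msg) {
        for (const auto& a : appenders_)      // fan out to every appender
            a(name_ + ": " + msg);
    }

private:
    std::string name_;
    std::vector<Appender> appenders_;
};
```

So db_log and acct_log could both connect to a shared console appender, while db_log alone also feeds a file appender; the console then sees both logs' messages interleaved in arrival order.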