
On 03/14/2010 05:36 AM, Sean Chittenden wrote:
I think a more general way for me to classify my response/concern (instead of providing a loosely worded example) would be:
"Which class does what and how do the classes relate to each other?"
Right now answering that question requires a fair amount of exploration of Boost.Log's code and documentation. It would have been useful to me, and likely to future developers, to have the classes that comprise Boost.Log summarized so that a conceptual map could be formed more easily, without having to dig into the actual headers. Something similar to the contents of the Definitions page, but with architectural clues.
http://boost-log.sourceforge.net/libs/log/doc/html/log/defs.html
As an example, expanding on the Sinks definition from above:
Log Sink: A target to which all log records are fed after being collected from the user's application. It is the sink's nature that defines where and how the log records are going to be stored.
Maybe include something as brief as:
With a Log Sink, developers can modify the filtering and add additional backends.
I consider myself a modestly competent user of Boost, but I am still getting parts of this wrong after more than three months of using and digging into Boost.Log.
The Definitions section is rather detached from the library implementation. It simply helps the user understand the terms used in the rest of the docs. The information you seek is probably given in the Design overview. Having read that section, you should have an idea of the major components of the library and their relations. Then the Detailed features description maps that knowledge onto the concrete tools (down to classes and functions).
It'd be nice if the error message were able to somehow include the string representation of the value (attribute) it wasn't able to find. It turns out it was a typo, so I had an unregistered attribute, but the typo would have been very easy to identify if it were included in the error message somewhere (instead it took a fair amount of manual investigation and backtracking to figure out it was a filter searching for an attribute that didn't exist).
Yes, I've been requested to improve exception messages in another review. Will do.
*) Global loggers vs. per-object references to singletons
Not sure I understand.
Is it better to have global objects that reference singletons:
typedef boost::log::sources::severity_channel_logger_mt<levels::severity_level> logger_t;
BOOST_LOG_DECLARE_GLOBAL_LOGGER(default_log, logger_t);
BOOST_LOG_DECLARE_GLOBAL_LOGGER_CTOR_ARGS(stats_log, logger_t,
    (boost::log::keywords::channel = "stats"));
or per-object references to logging singletons:
typedef boost::shared_ptr<
    boost::log::sinks::synchronous_sink< boost::log::sinks::text_ostream_backend >
> console_sink_t;
typedef boost::shared_ptr< boost::log::core > logger_core_t;

struct my_obj
{
    my_obj(logger_core_t& logger) : lg(logger) {}
    void foo() { BOOST_LOG(lg) << "log msg"; }
    logger_core_t& lg;
};

int main()
{
    console_sink_t cerr_sink;
    cerr_sink = boost::log::init_log_to_console(
        std::cerr,
        boost::log::keywords::filter =
            boost::log::filters::attr< int >("Severity") >= warning,
        boost::log::keywords::format = "%TimeStamp%: %_%",
        boost::log::keywords::auto_flush = true);

    my_obj o(boost::log::core::get());
    o.foo();
}
... I think that example's right.
The latter example is not correct, as the logging core is not a logger, and cannot be used with logging macros. Also, you save a dangling reference in my_obj, as core::get returns shared_ptr by value. I take it that you're trying to ask if acquiring a reference to the logger defined with BOOST_LOG_DECLARE_GLOBAL_LOGGER* macros is expensive. No, it's not, at least in the long run. Simply put, the macros define a module-wide reference to the logger, which is initialized upon the first request for the logger. All subsequent requests will return the initialized reference. So there's no point in keeping a reference to global loggers in your classes.
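For illustration, a minimal sketch of the intended pattern, reusing the logger_t typedef from above. It assumes the accessor generated by BOOST_LOG_DECLARE_GLOBAL_LOGGER is tag::get(); the exact accessor name the macro generates is documented, so treat this as a sketch rather than the library's precise API:

typedef boost::log::sources::severity_channel_logger_mt<
    levels::severity_level > logger_t;

// Defines a module-wide logger, initialized upon the first request.
BOOST_LOG_DECLARE_GLOBAL_LOGGER(default_log, logger_t);

struct my_obj
{
    void foo()
    {
        // Fetch the logger on demand; after the first call this simply
        // returns the already-initialized reference, so it is cheap.
        BOOST_LOG(default_log::get()) << "log msg";
    }
};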
*) Channel severity loggers example
That example wouldn't be much different from the ones in sections about severity_logger or channel_logger. But I'll add a code sample.
Where I spent time with this was figuring out how multiple channel severity loggers interacted with the core, and how my app was supposed to interact with the distinct loggers that were linked into the core behind the scenes.
As you can see, the workflow with severity_channel_logger is much the same as with severity_logger. Only the logger construction differs, which is in line with channel_logger. I'd say the resulting workflow is quite natural for the fused logger, don't you think?
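For reference, roughly the kind of sample I mean, reusing the logger_t typedef from earlier in this thread (the channel names and severity values are made up):

// Two channel severity loggers, distinguished only by their channel name.
logger_t net_log(boost::log::keywords::channel = "net");
logger_t stats_log(boost::log::keywords::channel = "stats");

void on_packet_dropped()
{
    // The record-making workflow is the same as with severity_logger;
    // the channel simply travels with each record as an attribute.
    BOOST_LOG_SEV(net_log, warning) << "packet dropped";
    BOOST_LOG_SEV(stats_log, info) << "drop counter incremented";
}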
*) Setting up an initial console logger as line one in main()
Hmm. Ok, I'll add something about that to the tutorial.
Easy to do (see above), but I didn't see anything that said, "it would be wise to do this first and then remove your console sink after you set up your logging environment." Initially I disregarded the init_log_to_console() convenience functions while reading the docs. It wasn't until I started dealing with error messages that weren't being logged to a file (because my logging setup routines hadn't completed, or had failed due to FS perms) that I took note of needing to get console logging working first.
Well, I wouldn't say that registering a console sink as the first thing in main is the solution for everyone (or recommended, for that matter). If logging fails to initialize, you will probably get an exception, in which case the simplest and most reliable thing to do is to write its what() to std::cerr and bail out with an error code. In other, more complex cases, when logging may occur before the initialization, yes, it might be reasonable to register a console sink first. But I'd say it's an individual choice.
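A sketch of that simple-and-reliable path; init_logging is a hypothetical placeholder for whatever setup routine the application uses, assumed to throw on failure:

#include <exception>
#include <iostream>

void init_logging(); // hypothetical: registers sinks, may throw (e.g. on FS perms)

int main()
{
    try
    {
        init_logging();
    }
    catch (std::exception& e)
    {
        // Logging is not up yet, so report the failure directly and bail out.
        std::cerr << "Failed to initialize logging: " << e.what() << std::endl;
        return 1;
    }
    // ... the rest of the application; logging is now safe to use ...
}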
*) Changing a backend filter at a later date
Just call set_filter on the backend, that's all.
This was one of the areas where there were lots of ways I could do it, but I couldn't tell which was the right one; libs/log didn't suggest a preferred mechanism.
Are there other ways to set the filter? I don't remember those. :)
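For reference, the single call I mean, using the cerr_sink from the earlier snippet (the new severity threshold is made up):

// Replace the sink's filter at runtime; records emitted from this point
// on are matched against the new filter.
cerr_sink->set_filter(
    boost::log::filters::attr< int >("Severity") >= error);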
*) Obtaining the severity of a particular logger
You mean, the default severity? There's no public API for this, but there's a protected function default_severity in the logger feature. Why would you need it?
Internally you're using Spirit for log record filtering, so from what I can tell it's impossible to do that inside of Boost.Log. It would be useful to have a way of poking at a sink to obtain information about its runtime status. Using the above console sink as an example:
[snip]
I ended up adding this to my application's surrounding code.
Well, the filter is self-contained. In most cases you can't extract its criteria into a variable, because it can be very complex. For example, how would you do it in the case of this filter:

attr< level >("Severity") == Dump ||
attr< level >("Severity") >= High ||
has_attr("Emergency")

This filter may be very real in some server-side application, if raw packets (received from or sent to clients) are logged at the Dump level, all errors are logged with High severity, and occasionally Emergency records appear.
You can set an attribute (say, "Emergency") for that record, and setup filters so that whenever this attribute is present, the record is passed on to the console backend.
You mean a scope attribute?
Yes. Or create a global logger with this attribute. Whichever suits you best.
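A sketch of the scoped variant, with the macro and attribute names as I recall them (double-check the Attributes section of the docs), and reusing the hypothetical default_log from above:

namespace attrs = boost::log::attributes;

void report_emergency()
{
    // Every record made in this scope carries the "Emergency" attribute,
    // so a has_attr("Emergency") filter will pass it to the console sink.
    BOOST_LOG_SCOPED_THREAD_ATTR("Emergency", attrs::constant< bool >(true));
    BOOST_LOG(default_log::get()) << "disk full, dropping client";
}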
As for the frontends having their own queue: given the way async frontends work and their feed_records() call, I'd assumed the backend used a time-based mechanism to come by and pick up the log records stored in per-frontend TSS queues, with frontends atomically pushing records on and the backend atomically swap()-ing the list of records off for sorting and processing. Just a point of confusion after reading:
http://boost-log.sourceforge.net/libs/log/doc/html/log/detailed/sink_fronten...
and I never dove into the source to investigate further.
What part of that section made you think so? No, backends are absolutely passive (at least for now). There are no TSS queues. The feeding thread is spawned by the frontend, and it feeds records from the very same queue that is used to store records in the logging threads.
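Setting up such a frontend, for reference; a sketch with stream registration as I recall it from the text_ostream_backend docs (empty_deleter keeps the shared_ptr from deleting std::clog):

namespace sinks = boost::log::sinks;

// One frontend, one queue, one feeding thread; the backend is passive and
// is only ever invoked from the frontend's dedicated thread.
typedef sinks::asynchronous_sink< sinks::text_ostream_backend > async_sink_t;

void setup_async_sink()
{
    boost::shared_ptr< async_sink_t > sink = boost::make_shared< async_sink_t >();
    boost::shared_ptr< std::ostream > strm(&std::clog, boost::log::empty_deleter());
    sink->locked_backend()->add_stream(strm);
    boost::log::core::get()->add_sink(sink);
}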
*) I find the argument against Lazy Streaming of attributes to be weak.
Just as it is said in the rationale - the streaming expression is _not_ executed until _after_ the filtering is done. And filtering already involves attribute values, so there's no way to use these attribute values you specified in the streaming expression. Besides, after the view of attribute values is constructed, it is immutable, so nothing can be added after the filtering is done.
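To make the ordering concrete, a tiny sketch (lg, debug, packet and expensive_dump are all hypothetical names):

// The right-hand side of the streaming expression runs only if the record
// passed the filters; a rejected record never calls expensive_dump().
BOOST_LOG_SEV(lg, debug) << "payload: " << expensive_dump(packet);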
That's why I make the distinction between "log record preconditions" (attributes that can be used in a filter) and "attributes" (a catch-all term for data stored in a log record). I agree that it is impossible for streamed attributes to be used as preconditions, but streamed attributes act as an extremely convenient (from a syntax perspective) way of passing data to a backend for processing.
Well, there's no such distinction in the library now, and I doubt it would be a good idea to add it. It would lead to confusion about which attribute is which, and eventually to more errors in setting them correctly.
I like the idea of using both async and sync sinks. It leaves me with a few questions, however:
How does that work if they're both using the same text file backend? Will log entries still be ordered?
It will probably crash, since the backend will be accessed from different threads. The backend has to be used from only one frontend.
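In other words, give each frontend its own backend instance; a sketch (stream setup and core registration elided, done as usual):

namespace sinks = boost::log::sinks;

// Two distinct text_ostream_backend instances: sharing one backend between
// a synchronous and an asynchronous frontend would race on the stream.
typedef sinks::synchronous_sink< sinks::text_ostream_backend > sync_sink_t;
typedef sinks::asynchronous_sink< sinks::text_ostream_backend > async_sink_t;

boost::shared_ptr< sync_sink_t > sync_sink = boost::make_shared< sync_sink_t >();
boost::shared_ptr< async_sink_t > async_sink = boost::make_shared< async_sink_t >();

// Point each sink at a different stream or file; within each stream the
// record ordering is then well-defined.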