
-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Gennadiy Rozental
Sent: 15 December 2005 22:39
To: boost@lists.boost.org
Subject: Re: [boost] [Review] Boost.Logging: formal review
1. Enabled/disabled logger, logging macros and efficiency
The logging macro BOOST_LOG should accept two arguments instead of one: the logger and the log message. The syntax
(1) BOOST_LOG(some_log) << "my message " << foo() << bar();
should be replaced with
(2) BOOST_LOG(some_log, "my message " << foo() << bar());
and the macro being defined something like this
(3) #define BOOST_LOG(logger, msg) \
        if (logger.isEnabled()) { /* code that logs msg */ ... }
Thus, a disabled logger will never cost more than a simple if-statement, as long as the isEnabled() method is fast, which IMO should take no longer than a boolean comparison.
1. I think this is also the case in the original interface.
In the original solution (1) the method foo() will always be called, which is inefficient when the logger is disabled; in solution (2), foo() will never be called when the logger is disabled.
I think you are mistaken.
Yes, my mistake.
Furthermore, the macro could insert extra information such as __LINE__, __FILE__, etc.
This is also true for original interface.
Yes, true -- my mistake again (I didn't look closely enough at the implementation of the macro).
All in all, I prefer the message not to be inside brackets. It looks much more natural, and I don't see any theoretical problems in getting the same performance. But I also believe that a macro shouldn't be the primary interface. So if you prefer the message inside the brackets - it's your choice.
However, there is one additional benefit of placing the message inside brackets: one can remove the log statement entirely in a particular build. Other than that, I guess it's a matter of style preference.
2. Filtering
There has been some discussion about the ability of the library to do more filtering than simple log levels, notably by Gennadiy Rozental.
Gennadiy writes:
At the bare minimum it should support:
entry level - the level set is an ordered set of values indicating the importance of the information provided. Filtering is based on a threshold. Examples: DEBUG, INFO, MAJOR
entry category - the category set is a set of unique values indicating the kind of information provided. Filtering is based on masking. Examples: ARGS, RETURN_VALUE, ERROR_LOG, DATA_FLOW, PROG_FLOW
entry keyword - the keyword set is a set of user-defined keywords (most frequently strings) identifying an area of the program. Filtering is based on matching keywords. Keywords are usually used to mark a specific part of the application.
I agree that it is very important for a logging library to support this. However, I do not think that the solution is for the logging library to be aware of such special values. The library should be as simple as possible.
My position is that the library should be configurable by any class satisfying a Filter concept. These particular filters could probably be supplied by the library as examples, and as the most widely useful ones.
My proposed solution is to basically let loggers be somewhat entry (Gennadiy's definition above) unaware. Instead, let the developer be
I do not see how that's possible. IMO the framework should employ some MPL magic and construct a proper entry structure based on the set of Filters passed as template policy parameters.
entry aware. By that I mean that the developer should log messages that belong to a specific entry category/level to a specific logger. For
This will never be acceptable IMO. Even with 3 filters, each having 5 possible values, you are looking at 125 different loggers.
Support for log functions on top of a basic framework is needed to alleviate this problem. I wouldn't mind the library providing log functions that take various filter functions or predicates to determine whether to send a log statement to a logger. In this case, the logger would be enabled, and the log filter function would forward log statements to it only if the statement meets some criteria. Still, the underlying logger framework would be the same: logger objects could still be enabled or disabled, and there would still be a few different logger objects for different levels and possibly other user-defined criteria.

IMO it is very important for an administrator/programmer of a system to be able to enable/disable logs, to know which loggers and appenders exist, and to be able to configure those entities. If filtering is done only per log statement, how do you even know, as an administrator of a system, which log filters exist? Which filter criteria should be enabled and which should be disabled?

If all log statements are sent to one logger object, that object becomes a monolith of functionality and complexity. Runtime performance might degrade considerably if various filters have to be checked, not only to decide whether the log statement should be logged at all, but also to decide whether to send it to different appenders/sinks/destinations.

Cheers,
Richard