
Boosters, the formal review of the Boost.Log library, written by Andrey Semashev, will start a week from now, on March 8, 2010, and will run through March 17, 2010. The documentation for the current version is available at: http://boost-log.sourceforge.net The downloads are at: http://sourceforge.net/projects/boost-log

Because the library is relatively big and important, you might want to take a look in advance, without waiting for the formal review. Please note that reports of real-world usage will be particularly appreciated -- even if you don't have the time to do everything the formal review checklist requires, please still report your experiences.

To clarify timelines, formal reviews are accepted from 23:59, March 7, PST = 2:59, March 8, EST = 7:59, March 8, GMT = 10:59, March 8, MSK until 23:59, March 17, PST = 2:59, March 18, EST = 7:59, March 18, GMT = 10:59, March 18, MSK. Reviews submitted outside of this time frame are not guaranteed to be processed.

Thanks, Volodya

Logging is something that I have been tasked with at work, so I was interested in looking over this library to see how it might help. Like most things, requirements can change everything, and attempting to make one library that can cover the most requirements is a real challenge.

That said, there seems to be a major component missing from the library: automated replay / parsing of log files. It is one thing to dump data into a file; it is quite another to pull it out and do something useful with it. From what I have seen the library looks great, but some kind of replay tool that lets you iterate over log records and automatically handles "segmented/rotated" logs would be a critical addition to the library. Considering you already have an advanced "filtering" system in place before the data gets to the file, it would be nice to enable "post processing" where the input comes from the log file and can go back through the same or a different set of filters. A simple "log source" would be built-in support for any boost::signal.

Dan

On Mar 1, 2010, at 2:21 AM, Vladimir Prus wrote:
Boosters,
the formal review of the Boost.Log library, written by Andrey Semashev, will start in a week from now, on March 8, 2010 and will run through March 17, 2010.
The documentation for the current version is available at:
http://boost-log.sourceforge.net
The downloads are at:
http://sourceforge.net/projects/boost-log
Because the library is relatively big and important, you might want to take a look in advance, without waiting for the formal review. Please note that reports of real-world usage will be particularly appreciated -- even if you don't have the time to do everything the formal review checklist requires, please still report your experiences.
To clarify timelines, the formal reviews are accepted from
23:59, March 7, PST = 2:59, March 8, EST = 7:59 March 8, GMT = 10:59, March 8 MSK
until
23:59, March 17, PST = 2:59, March 18, EST = 7:59 March 18, GMT = 10:59, March 18 MSK
Reviews submitted outside of this time frame are not guaranteed to be processed.
Thanks, Volodya
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

On Mar 1, 2010, at 9:22 AM, Rutger ter Borg wrote:
Daniel Larimer wrote:
[snip]
It is one thing to dump data into a file, it is quite another to pull it out and do something useful with it.
Yes, but that's not logging, is it? :-)
I guess my point is that "unreadable" binary logs are just as good as "no binary logs". If you make the job of logging easier but do nothing on the replay (post-processing) side, then you really have not saved many users anything, considering that the infrastructure (source/sink/filters) is practically the same. By the time I write a "generic replay service", I would be one small step away from a "generic logging service". The end result is that the output of Boost.Log is most suitable for human analysis, without an eye toward making machine analysis easier.
Cheers,
Rutger

On 03/01/2010 05:42 PM, Daniel Larimer wrote:
On Mar 1, 2010, at 9:22 AM, Rutger ter Borg wrote:
Daniel Larimer wrote:
[snip]
It is one thing to dump data into a file, it is quite another to pull it out and do something useful with it.
Yes, but that's not logging, is it? :-)
I guess my point is that "unreadable" binary logs are just as good as "no binary logs". If you make the job of logging easier, but do nothing on the replay (post processing) side then you really have not saved many users anything considering that the infrastructure (source/sink/filters) is practically the same. By the time I write a "generic replay service", I would be one small step away from a "generic logging service". The end result is that the output of Boost.Log is most suitable for human analysis without an eye toward making machine analysis easier.
Well, since the library provides no binary logging sink out of the box, it does not provide a tool to read them. And it doesn't provide a binary sink because it's quite difficult to develop one that is generic. I'm not saying there won't be one in the future, it's just not at the top of the list. That said, nothing prevents you from writing one for your needs. I believe the tools the library provides will help you do this in less time than it would take to build it from the ground up. You get the whole library infrastructure as it is; all you need is to define the way to store log records. Regarding the replay feature, I really don't quite understand what exactly this is. Is it a tool to read binary logs, or something more?
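For readers wondering what writing such a binary sink might involve, here is a minimal, self-contained sketch of one possible approach: delimit each record with a length prefix so a reader can later locate record boundaries. The framing and the write_record helper are hypothetical illustrations for this discussion, not part of Boost.Log's interface:

```cpp
#include <cassert>
#include <cstdint>
#include <ostream>
#include <sstream>
#include <string>

// Hypothetical framing for a user-written binary sink: each record is
// stored as a 4-byte little-endian length followed by the serialized
// payload. Nothing here is Boost.Log API; it is just one way to
// preserve record boundaries so a reader can iterate over them later.
void write_record(std::ostream& out, const std::string& payload) {
    const std::uint32_t len = static_cast<std::uint32_t>(payload.size());
    const char hdr[4] = {
        static_cast<char>(len & 0xFF),
        static_cast<char>((len >> 8) & 0xFF),
        static_cast<char>((len >> 16) & 0xFF),
        static_cast<char>((len >> 24) & 0xFF),
    };
    out.write(hdr, sizeof hdr);                    // length prefix
    out.write(payload.data(),
              static_cast<std::streamsize>(len));  // record body
}
```

A sink's consume step could then serialize a record's attribute values into payload (e.g. with Boost.Serialization) and call write_record on the target stream.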

On Mar 1, 2010, at 11:35 AM, Andrey Semashev wrote:
On 03/01/2010 05:42 PM, Daniel Larimer wrote:
On Mar 1, 2010, at 9:22 AM, Rutger ter Borg wrote:
Daniel Larimer wrote:
[snip]
It is one thing to dump data into a file, it is quite another to pull it out and do something useful with it.
Yes, but that's not logging, is it? :-)
I guess my point is that "unreadable" binary logs are just as good as "no binary logs". If you make the job of logging easier, but do nothing on the replay (post processing) side then you really have not saved many users anything considering that the infrastructure (source/sink/filters) is practically the same. By the time I write a "generic replay service", I would be one small step away from a "generic logging service". The end result is that the output of Boost.Log is most suitable for human analysis without an eye toward making machine analysis easier.
Well, since the library provides no binary logging sink out of the box, it does not provide a tool to read them. And it doesn't provide a binary sink because it's quite difficult to develop one that is generic. I'm not saying there won't be one in the future, it's just not at the top of the list.
Using Boost.Serialization, creating binary logs could be made very generic.
That said, nothing prevents you from writing one for your needs. I believe the tools the library provides will help you do this in less time than it would take to build it from the ground up. You get the whole library infrastructure as it is; all you need is to define the way to store log records.
This is certainly true; the library may provide a good base from which to build binary logging. I will need to do further research to determine how effective this could be.
Regarding the replay feature, I really don't quite understand what exactly this is. Is it a tool to read binary logs or something more?
If you change the abstraction to nothing but a stream of data packets of "some type" that flows from "some source" through various "filters" to "one or more" destinations, then a "log file" becomes little more than a "buffer" in the stream. Thus, if you have "real time monitoring tools" that can be hooked up for debugging, these tools should be agnostic to whether the channel they receive their data on comes from the original source or from the other side of a "log file filter".

On Mon, Mar 01, 2010 at 12:03:57PM -0500, Daniel Larimer wrote:
If you change the abstraction to nothing but a stream of data packets of "some type" that flows from "some source" through various "filters" to "one or more" destinations, then a "log file" becomes little more than a "buffer" in the stream.
Thus, if you have some "real time monitoring tools" that can be hooked up to debug then these tools should be agnostic to whether or not the channel they receive their data on is from the original source or from the other side of a "log file filter".
It looks like the replay tool performs some sort of input parsing on a given stream. What I am trying to understand is whether you are asking to couple those two processes. That is, do you advocate that the definition of the output format should automatically generate a parser (e.g., via Spirit) for the output?

In general, I would love to see the library in Boost. My logging facilities are built on top of the current SVN version of the library and have become an invaluable and flexible tool throughout my development.

Matthias -- Matthias Vallentin vallentin@icsi.berkeley.edu http://www.icir.org/matthias

On Mar 1, 2010, at 1:03 PM, Matthias Vallentin wrote:
On Mon, Mar 01, 2010 at 12:03:57PM -0500, Daniel Larimer wrote:
If you change the abstraction to nothing but a stream of data packets of "some type" that flows from "some source" through various "filters" to "one or more" destinations, then a "log file" becomes little more than a "buffer" in the stream.
Thus, if you have some "real time monitoring tools" that can be hooked up to debug then these tools should be agnostic to whether or not the channel they receive their data on is from the original source or from the other side of a "log file filter".
It looks like the replay tool performs some sort of input parsing on a given stream. What I am trying to understand is whether you are asking to couple those two processes. That is, do you advocate that the definition of the output format should automatically generate a parser (e.g., via Spirit) for the output?
Let's assume you have a data source (boost::signal) and you connect it to a chain of filters that add metadata, then ultimately a filter that converts the types into an archive, and finally one that buffers "archives to disk". Another filter would simply take an archive and ultimately reproduce a boost::signal. You still need a "filter" that can convert a binary or text log entry back into the original types, but the process of fetching individual log entries and controlling how and when they are "emitted" could be entirely handled for you.

Now, if the logging library loses all concept of "record boundary" when it writes to disk, there is not much you can do except create a custom log reader that determines the boundary of a specific record. But I contend that many people simply want to say "log this signal to this file under this name", then open their log and say "replay signal with name to this slot", and then iterate through the log entries, possibly at a multiple of real time. This opens up a ton of existing code that is already separated into "producer / consumer" or "model / view" paradigms via signals to logging and visualization of logged data, without adding any new dependencies to the original code.

I do not mean to impose my requirements if they are "out of scope", or to insist that such a feature is necessary out of the box, only that a Boost logging library should be flexible enough to support some means of iterating over records in a log file.

Another reason to support this kind of functionality is when logging a large amount of data at high data rates: binary files would be an order of magnitude smaller, result in less disk access, and incur less "formatting" overhead than text files. Yet it should be "trivial" to convert a binary log to a text log simply by using the chain source => serialization => binary file => binary to obj => text sink.
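As a sketch of what that replay side could look like under an assumed format: suppose each record in the binary log is a 4-byte little-endian length prefix followed by the payload (this framing is a hypothetical choice for illustration, not anything Boost.Log defines). A reader can then iterate over record boundaries and emit each payload to a slot, with a std::function standing in for a boost::signal slot:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Replay sketch: iterate over length-prefixed records in a log stream
// and "emit" each payload to a slot. The 4-byte little-endian framing
// is an assumption made for this example, not a Boost.Log format.
std::size_t replay(std::istream& in,
                   const std::function<void(const std::string&)>& slot) {
    std::size_t count = 0;
    char hdr[4];
    while (in.read(hdr, sizeof hdr)) {
        const std::uint32_t len =
            static_cast<std::uint32_t>(static_cast<unsigned char>(hdr[0])) |
            (static_cast<std::uint32_t>(static_cast<unsigned char>(hdr[1])) << 8) |
            (static_cast<std::uint32_t>(static_cast<unsigned char>(hdr[2])) << 16) |
            (static_cast<std::uint32_t>(static_cast<unsigned char>(hdr[3])) << 24);
        std::string payload(len, '\0');
        if (len != 0 && !in.read(&payload[0], static_cast<std::streamsize>(len)))
            break;          // truncated final record: stop replaying
        slot(payload);      // hand the record to the consumer
        ++count;
    }
    return count;
}
```

Converting a binary log to a text log is then just a matter of connecting a slot that deserializes and formats each record before printing it.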
If your library assumes that ostream<< is defined for every type, then depending on boost::serialization to support such a feature would be a relatively minor addition. If my "requirements" fall outside the scope of 'logging', then perhaps we should define what logging means.

One last consideration is that there is often a desire to separate logging into a separate process that can optionally "connect" to the logging core and possibly perform some additional filtering on the other side of a socket.

So my question is: how hard would it be to accomplish some of my requirements with the built-in functions, versus how much would I have to write from scratch to support them? I will look into it more, but perhaps someone more familiar with the package could offer an "estimate" in terms of hours required to implement some or all of the features I suggest. If they fit nicely into the framework, then I would likely contribute my own time to such an effort. If the framework starts to "get in the way", then perhaps we should reconsider the design.

By the way, the library looks like it has great potential to manage debug print statements when layered with appropriate macros.
In general, I would love to see the library in Boost. My logging facilities are built on top of the current SVN version of the library and became an invaluable and flexible tool throughout my development.
Matthias -- Matthias Vallentin vallentin@icsi.berkeley.edu http://www.icir.org/matthias

Daniel Larimer wrote:
On Mar 1, 2010, at 1:03 PM, Matthias Vallentin wrote:
On Mon, Mar 01, 2010 at 12:03:57PM -0500, Daniel Larimer wrote:
Lets assume you have a data source (boost::signal) and you connect it to a chain of filters that add meta data and ultimately a filter that converts the types into an archive and finally one that buffers "archives to disk". Another filter would simply take an archive and ultimately reproduce a boost::signal.
As I see it, you would like there to be a reader corresponding to every log writer so that you could recover the elements used to form the original log entry, filter them just as they could have been filtered originally, and then pass them to a different sink/back end (whatever terminology the candidate library uses -- I haven't looked yet) that can create a boost::signal or other behavior. I've never wanted to do that sort of thing with logging output -- we typically avoid logging enough to recreate an object -- so I wonder how widespread that need is. It seems plausible, but is it appropriate and worthwhile?
But I contend that for many people they simply want to say "log this signal to this file under this name" and then open their log and say "replay signal with name to this slot" and then iterate through the log entries possibly in a multiple of real time.
I don't know where those "many people" are, but I haven't met them. For our replay purposes, we replay the system inputs, not its reactions.
opens up a ton of existing code which is already separated into "producer / consumer" or "model / view" paradigms via signals to logging and visualization of logged data without adding any new dependencies to the original code.
This is an interesting view of logging, to be sure.
Another reason to support this kind of function is when logging a large amount of data at high data rates.... binary files would be an order of magnitude smaller, result in less disk access, and use less "formatting" overhead than text files. Yet it should be "trivial" to convert a binary log to a text log simply by using the source => serialization => binary file => binary to obj => text sink.
That's an interesting idea.
One last consideration is that often there is a desire to separate logging into a separate process that can optionally "connect" to the logging core and possibly perform some additional filtering on the other side of a socket.
Anything that offloads work from the process/thread doing the logging is useful if it doesn't introduce locking or I/O overhead that makes matters worse.

_____ Rob Stewart robert.stewart@sig.com Software Engineer, Core Software using std::disclaimer; Susquehanna International Group, LLP http://www.sig.com

IMPORTANT: The information contained in this email and/or its attachments is confidential. If you are not the intended recipient, please notify the sender immediately by reply and immediately delete this message and all its attachments. Any review, use, reproduction, disclosure or dissemination of this message or any attachment by an unintended recipient is strictly prohibited. Neither this message nor any attachment is intended as or should be construed as an offer, solicitation or recommendation to buy or sell any security or other financial instrument. Neither the sender, his or her employer nor any of their respective affiliates makes any warranties as to the completeness or accuracy of any of the information contained herein or that this message or any of its attachments is free of viruses.

On Mar 2, 2010, at 7:12 AM, Stewart, Robert wrote:
But I contend that for many people they simply want to say "log this signal to this file under this name" and then open their log and say "replay signal with name to this slot" and then iterate through the log entries possibly in a multiple of real time.
I don't know where those "many people" are, but I haven't met them. For our replay purposes, we replay the system inputs, not its reactions.
We log our system inputs so we can replay them.

On Mon, Mar 01, 2010 at 02:44:32PM -0500, Daniel Larimer wrote:
If my "requirements" fall outside the scope of 'logging' then perhaps we should define what logging means.
Your description of logging is similar to the notion in the database community, e.g., using a WAL (write-ahead log) to store records on disk in order to reread them at a later point in time in case the main data structures are inconsistent due to a crash or bug. In this scenario, I agree that a logging library should offer a vehicle to efficiently transport the data to a backend. The duality of writing the data out and reading it back in seems orthogonal to me and can be handled via boost::serialization, as you suggest. What remains is the dispatching logic that brings the "blob" archive contents back from the sink to the point where they should be deserialized, that is, the source. I believe that is currently not possible with the library, because there is no notion of records in the backends.

At the other end of the spectrum, logging is used synonymously with documenting program activity, often in a textual, human-readable fashion. In this case, there is no need to replay the contents, as they are only summaries or documentation of activity rather than actual data. The library in its current form is well-suited to the latter task but probably needs to be extended to support the former.

Matthias -- Matthias Vallentin vallentin@icsi.berkeley.edu http://www.icir.org/matthias

Zitat von Matthias Vallentin <vallentin@icsi.berkeley.edu>:
On Mon, Mar 01, 2010 at 02:44:32PM -0500, Daniel Larimer wrote:
If my "requirements" fall outside the scope of 'logging' then perhaps we should define what logging means.
Your description of logging is similar to the notion in the database community, e.g., using WAL to store records on disk in order to reread at a later point of time in case the main data structures are inconsistent due to a crash or bug. In this scenario, I agree that a logging library should offer a vehicle to efficiently transport the data to a backend. The duality of writing the data out and reading it back in
We use binary logging like that in the under-construction libraries Boost.STLdb and Boost.Persistent, and are in the process of unifying that system so it can be used by both libraries and potentially other Boost libraries. This type of logging has very different requirements from logging activity in human-readable form, from performance and data-consistency viewpoints, so I'd like to know if you consider that type of logging within the scope of a Boost logging library, and specifically within the scope of the proposed Boost.Log. Should I review the library with that use case in mind?

On 03/03/2010 02:59 AM, strasser@uni-bremen.de wrote:
Zitat von Matthias Vallentin <vallentin@icsi.berkeley.edu>:
On Mon, Mar 01, 2010 at 02:44:32PM -0500, Daniel Larimer wrote:
If my "requirements" fall outside the scope of 'logging' then perhaps we should define what logging means.
Your description of logging is similar to the notion in the database community, e.g., using WAL to store records on disk in order to reread at a later point of time in case the main data structures are inconsistent due to a crash or bug. In this scenario, I agree that a logging library should offer a vehicle to efficiently transport the data to a backend. The duality of writing the data out and reading it back in
we use binary logging like that in the libraries-under-construction Boost.STLdb and .Persistent, and are in the process of uniting that system to be used by both libraries and potentially other boost libraries.
this type of logging has very different requirements than logging some activity in human-readable form, from performance and data consistency viewpoints, so I'd like to know if you consider that type of logging within the scope of a boost logging library, and specifically within the scope of the proposed Boost.Log.
Like I said in an earlier post, this usage pattern fits the current library structure quite well, as far as I can see. Ultimately, I'm willing to add binary logging (both reading and writing) to the library. As for the replay feature, if I see a tool generic enough to help implement it on the user's side, I will surely add it to the library.
should I review the library with that use case in mind?
You can surely keep that use case in mind while taking a look at the library. Boost.Log doesn't offer built-in tools for it, but it should not block it either.

On 03/01/2010 10:44 PM, Daniel Larimer wrote: [snip]
I do not mean to put my requirements on this if my requirements are "out of scope" or to insist that such a feature is necessary out of the box, only that a boost logging library should be flexible enough to support some means of iterating over records in a log file.
I see. That is indeed an interesting view on logging. I must say, I did not have this feature in mind while developing the library, but I think it should be doable. The library does not provide any tools to read log files, so depending on the format you choose to store logs in, you'll have to implement one. In terms of the library, this will be a log source. Depending on how you want it, you may either make it directly invoke the signal, or pass the read records through the library and then invoke the signal from a sink backend. In the latter case you get an additional opportunity for filtering and/or converting the record into another form (e.g. textual, in order to store it in a human-readable file).
If your library assumes that ostream<< is defined for every type then making an assumption on boost::serialization to support such a feature would be a relatively minor addition.
Well, there is no such mandatory requirement. Rather, the type has to support formatting of some kind if a text-oriented sink is used. In the most common case, yes, the formatting means putting it into a stream. For binary-oriented sinks there is no such requirement, and the term "consume" describes the process best. In your case, consuming may well involve Boost.Serialization; no changes in Boost.Log are required to achieve that. Note, however, that AFAIK Boost.Serialization does not have a tool for portable binary serialization, which was the main reason I didn't bother with binary sinks myself.
One last consideration is that often there is a desire to separate logging into a separate process that can optionally "connect" to the logging core and possibly perform some additional filtering on the other side of a socket.
If syslog is sufficient, Boost.Log has the corresponding sink out of the box, and there are many syslog servers out there. If it's binary logging, that should also be doable: just write a sink backend that sends packets through the network or shared memory to another process. But in that case you'll also have to develop the server side (which, of course, may use Boost.Log to process the received records).
So my question is, how hard would it be to accomplish some of my requirements with the built in functions vs how much would I have to write from scratch to support it? I will look into it more, but perhaps someone more familiar with the package could offer an "estimate" in terms of hours required to implement some or all of the features I suggest. If they fit nicely into the framework, then I would likely contribute my own time to such an effort. If the framework starts to "get in the way" then perhaps we should consider the design.
Well, I can't estimate the hours, but these things will have to be made from scratch:

1. A binary logging backend. Depending on your needs, it may have to support portable encoding, network transport (Boost.ASIO will help here) or shared memory (Boost.Interprocess). You may take a look at how the syslog backend is implemented to get an idea of the amount of work.

2. A logging source that will receive or read the records. The amount of work here depends on what you want in particular, but the logic required by Boost.Log is quite trivial: call open_record and then push_record on the logging core. You can look at the basic_logger class to see what needs to be done.

The rest is all side work, like implementing serialization for your classes, setting up filters and sinks, arranging the signals you need, etc.
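To make that source-side flow concrete, here is a toy model of it. The names open_record and push_record mirror the calls mentioned above, but the Record, Core, and replay_source types are simplified stand-ins written for this sketch, not Boost.Log's real interfaces:

```cpp
#include <cassert>
#include <functional>
#include <optional>
#include <string>
#include <utility>
#include <vector>

// Toy stand-in for a log record: a severity attribute plus a message.
struct Record {
    int severity;
    std::string message;
};

// Toy stand-in for the logging core: open_record applies the filter,
// push_record delivers an accepted record to every registered sink.
class Core {
public:
    std::function<bool(const Record&)> filter =
        [](const Record&) { return true; };
    std::vector<std::function<void(const Record&)>> sinks;

    // Returns a record only if the filter accepts its attributes.
    std::optional<Record> open_record(int severity) {
        Record rec{severity, std::string()};
        if (!filter(rec)) return std::nullopt;
        return rec;
    }

    void push_record(const Record& rec) {
        for (const auto& sink : sinks) sink(rec);
    }
};

// A replay-style source: re-inject previously stored records through
// the core so they pass the same filtering and sink machinery.
std::size_t replay_source(
        Core& core,
        const std::vector<std::pair<int, std::string>>& stored) {
    std::size_t delivered = 0;
    for (const auto& entry : stored) {
        if (auto rec = core.open_record(entry.first)) {
            rec->message = entry.second;
            core.push_record(*rec);
            ++delivered;
        }
    }
    return delivered;
}
```

The point of the sketch is that a reading source needs very little logic of its own: the filtering and delivery stay in the core, exactly as for live records.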

Andrey Semashev wrote:
For binary-related sinks there is no such requirement, and the term "consume" describes the process best. In your case, consuming may well involve Boost.Serialization, there are no changes in Boost.Log required to achieve that. Note however, that AFAIK, Boost.Serialization does not have a tool for portable binary serialization ...
The package includes demo_portable_archive, which provides such a tool. Robert Ramey

On 02.03.2010 21:32, Robert Ramey wrote:
Andrey Semashev wrote:
For binary-related sinks there is no such requirement, and the term "consume" describes the process best. In your case, consuming may well involve Boost.Serialization, there are no changes in Boost.Log required to achieve that. Note however, that AFAIK, Boost.Serialization does not have a tool for portable binary serialization ...
The package includes demo_portable_archive, which provides such a tool.
Last time I checked it didn't support FP types. Has that changed? Also, are there other limitations?

Andrey Semashev wrote:
On 02.03.2010 21:32, Robert Ramey wrote:
Andrey Semashev wrote:
For binary-related sinks there is no such requirement, and the term "consume" describes the process best. In your case, consuming may well involve Boost.Serialization, there are no changes in Boost.Log required to achieve that. Note however, that AFAIK, Boost.Serialization does not have a tool for portable binary serialization ...
The package includes demo_portable_archive, which provides such a tool.
Last time I checked it didn't support FP types. Has that changed?
No
Also, are there other limitations?
The current version only builds and links with the static version of the Boost.Serialization library. If this were more than a demo, that could be addressed.

Vladimir Prus skrev:
Boosters,
the formal review of the Boost.Log library, written by Andrey Semashev, will start in a week from now, on March 8, 2010 and will run through March 17, 2010.
The documentation for the current version is available at:
Hi Andrey,

I looked at the documentation and couldn't find any comparison with other libraries besides a small log4j comparison. Therefore I wonder if such a comparison has been made, and if so, how it guided the design decisions. In particular, I would like to see a quite detailed comparison with Pantheios: http://www.pantheios.org/ It would also be good if you could use this library as one to benchmark against.

I would also like to know how your library differs from John Torjo's library, which was rejected, and how your library addresses the issues that were found with that library?

Thanks for your work so far

-Thorsten

On 03/01/2010 01:37 PM, Thorsten Ottosen wrote:
Hi Andrey,
I looked at the documentation, and couldn't find any comparison with other libraries besides a small log4j comparison. Therefore I wonder if such a comparison has been made, and if so, how it guided the design decisions.
In particular, I would like to see a quite detailed comparison with Pantheios:
There was no complete feature-wise comparison between them, but I think you can figure it out from the docs of the libraries.
It would also be good if you could use this library as one to benchmark against.
I have plans of wrapping up a test suite to benchmark Boost.Log against different libraries. Pantheios will be one of them.
I would also like to know how your library differs from the one that was rejected by John Torjo, and how your library addresses the issues that were found with that library?
Actually, there's really not much in common between them. The most striking difference you may notice is the decoupling of loggers and sinks. Also, Boost.Log uses attributes to perform filtering and formatting of log records, which is something that was missing in John's proposal. If you have a particular issue in mind, please specify it; I'll try to answer more specifically. If you're familiar with John's library or Pantheios, it might be worth taking a look at the Boost.Log design description to grasp the difference: http://tinyurl.com/yjp4n9s
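The attribute idea can be illustrated in miniature. In this sketch a plain string-to-string map stands in for Boost.Log's typed, lazily evaluated attribute values, and the channel_is / format_record helpers are hypothetical, invented for the example:

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// A plain string->string map standing in for Boost.Log's attribute
// set; the real library's attribute values are typed and richer.
using Attributes = std::map<std::string, std::string>;

// Filter sketch: accept only records whose "Channel" attribute matches.
bool channel_is(const Attributes& attrs, const std::string& name) {
    const auto it = attrs.find("Channel");
    return it != attrs.end() && it->second == name;
}

// Formatter sketch: build the final text from attributes plus message.
std::string format_record(const Attributes& attrs, const std::string& msg) {
    std::ostringstream os;
    const auto sev = attrs.find("Severity");
    os << '[' << (sev != attrs.end() ? sev->second : "?") << "] " << msg;
    return os.str();
}
```

The same attribute set drives both decisions, which is what lets filtering and formatting stay decoupled from the loggers that produce the records.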

Andrey Semashev skrev:
On 03/01/2010 01:37 PM, Thorsten Ottosen wrote:
Hi Andrey,
I looked at the documentation, and couldn't find any comparison with other libraries besides a small log4j comparison. Therefore I wonder if such a comparison has been made, and if so, how it guided the design decisions.
In particular, I would like to see a quite detailed comparison with Pantheios:
There was no complete feature-wise comparison between them, but I think you can figure it out from the docs of the libraries.
Well, if I do a review, I will do that. However, you know your own library better than anyone, so *your* comparison would be a great starting point. Also, I think it is in general desirable that you can argue why your library is better than, or as good as, others; if not, then why should anybody use it (*)?
It would also be good if you could use this library as one to benchmark against.
I have plans of wrapping up a test suite to benchmark Boost.Log against different libraries. Pantheios will be one of them.
Good.
I would also like to know how your library differs from the one that was rejected by John Torjo, and how your library addresses the issues that were found with that library?
Actually, there's really not much in common between them. The most striking difference that you may notice is decoupling of loggers and sinks. Also, Boost.Log uses attributes to perform filtering and formatting of log records, which is something that was missing in John's proposal.
If you have a particular issue on your mind, please, specify. I'll try to answer more specifically.
I don't, because I haven't followed that review and the decision to reject. However, it makes little sense to re-review the same problems. Therefore I strongly suggest that you make sure these issues have been addressed, or that they don't apply to your library.

-Thorsten

(*) Andrey, your work looks very good and impressive, so don't take my comments too negatively. Take them as something that could improve your library. Whenever one publishes a scientific paper, it is customary to review earlier results and explain why your own results are better/different; if not, you won't get it published.

Andrey Semashev wrote:
On 03/01/2010 01:37 PM, Thorsten Ottosen wrote:
In particular, I would like to see a quite detailed comparison with Pantheios:
There was no complete feature-wise comparison between them, but I think you can figure it out from the docs of the libraries.
This reply is disturbing. I've seen some other replies from you in other contexts that suggest a similar attitude. Maybe I'm reading too much into such replies, but they come off as, "If you think another library is so good, go take a good look and compare it with mine. You'll come back to my library because it is clearly superior. Stooping to such a task is beneath me." I really hope that's not what you meant. Thorsten's request is a good one. Whether you compare your library to Pantheios or not, you should provide significant comparisons with several of the more popular logging libraries so that anyone considering Boost.Log, should it be accepted, will be able to make a quick decision in favor of Boost.Log.
It would also be good if you could use this library as one to benchmark against.
I have plans of putting together a test suite to benchmark Boost.Log against different libraries. Pantheios will be one of them.
This is excellent.
I would also like to know how your library differs from the one by John Torjo that was rejected, and how your library addresses the issues that were found with that library?
Actually, there's really not much in common between them. The most striking difference that you may notice is decoupling of loggers and sinks. Also, Boost.Log uses attributes to perform filtering and formatting of log records, which is something that was missing in John's proposal.
Any issue that contributed to Torjo's library being rejected could lead to yours being rejected as well. You would do well to ensure success by researching that review.
If you're familiar with John's library or Pantheios, it might be worth taking a look at the Boost.Log design description to grasp the difference:
Here again is where you fail to grasp the point. You know your library and you're trying to sell it to the rest of us and to future Boost users. You should do this research and document the comparisons to answer not just Thorsten, but anyone else looking at your library.

I hope I've gotten your attitude wrong. It wouldn't be the first time that the written word failed to communicate intention accurately. I hope you will augment your documentation, ideally prior to the review period, with comparative details to help all who take the time to look at your library. (I'm sure you'll get comments from folks about feature X of library Y during the review, which you can use to augment your initial comparison section.)

_____
Rob Stewart
robert.stewart@sig.com
Software Engineer, Core Software using std::disclaimer;
Susquehanna International Group, LLP
http://www.sig.com

In order to give a cumulative answer, I reordered some parts of the original letter. I hope this doesn't confuse anyone. On 03/02/2010 02:56 PM, Stewart, Robert wrote:
There was no complete feature-wise comparison between them, but I think you can figure it out from the docs of the libraries.
This reply is disturbing. I've seen some other replies from you in other contexts that suggest a similar attitude. Maybe I'm reading too much into such replies, but they come off as, "If you think another library is so good, go take a good look and compare it with mine. You'll come back to my library because it is clearly superior. Stooping to such a task is beneath me." I really hope that's not what you meant.
If you're familiar with John's library or Pantheios, it might be worth taking a look at the Boost.Log design description to grasp the difference:
Here again is where you fail to grasp the point. You know your library and you're trying to sell it to the rest of us and to future Boost users. You should do this research and document the comparisons to answer not just Thorsten, but anyone else looking at your library.
I hope I've gotten your attitude wrong. It wouldn't be the first time that the written word failed to communicate intention accurately. I hope you will augment your documentation, ideally prior to the review period, with comparative details to help all who take the time to look at your library. (I'm sure you'll get comments from folks about feature X of library Y during the review, which you can use to augment your initial comparison section.)
Robert, I will try to clarify my point. My answer may sound a bit harsh, but believe me, there is no such intent.

I'm not selling anything. I'm not trying to bend anyone's opinion with marketing speeches and shiny brochures, and I'm not trying to push anything unworthy (from my point of view) into Boost by such moves. I'm glad I'm not a salesman who has to do that to earn his living. My sincere belief is that the best choice is made based on facts one has learned first hand. I believe most people here are experienced enough to have an idea of logging and, perhaps, of some of the mentioned libraries. Therefore, the best way to form an opinion of Boost.Log is to see what it's capable of, and what its advantages and drawbacks are in the context of one's typical usage pattern. In other words, read the docs and examples and evaluate the library. The choice is always yours.

I'm not refusing people with experience of other libraries. On the contrary, I'm open to questions like "how do I do that thing", or "is it possible to do this thing", or whatever else helps to understand the library and simplify its adoption. I'm also open to suggestions for improving the library. But I'm not advertising anything, sorry. It is what it is.

You may ask, why do I bring the library to Boost then? The answer is this: I believe Boost is lacking a logging tool. I think logging is a much-awaited addition to Boost, as I was waiting for it too some time ago. I think the proposed Boost.Log will help Boost users in this area, and thus I'm willing to try to bring it in. This review should show whether my beliefs are grounded. I hope this kind of attitude is good enough for the review to happen.
Thorsten's request is a good one. Whether you compare your library to Pantheios or not, you should provide significant comparisons with several of the more popular logging libraries so that anyone considering Boost.Log, should it be accepted, will be able to make a quick decision in favor of Boost.Log.
Like I said, I'm open to questions that help adoption of the library. Maybe some time I'll get tired of such questions and write a FAQ or a section in docs, covering the most frequent of them.
Any issue that contributed to Torjo's library being rejected could lead to yours being rejected as well. You would do well to ensure success by researching that review.
I participated in that review, so I have a basic idea. But I'll revisit it, thanks for the suggestion.
participants (9)
- Andrey Semashev
- Daniel Larimer
- Matthias Vallentin
- Robert Ramey
- Rutger ter Borg
- Stewart, Robert
- strasser@uni-bremen.de
- Thorsten Ottosen
- Vladimir Prus