[OpenMethod] review starts on 28th of April

Dear Boost community,

The peer review of the proposed Boost.OpenMethod will start on the 28th of April and continue until May 7th.

OpenMethod implements open methods in C++. These are "virtual functions" defined outside of classes. They make it possible to avoid god classes and visitors, and they provide a solution to the Expression Problem and to the banana-gorilla-jungle problem. They also support multiple dispatch. The library implements most of Stroustrup's multimethods proposal, with some new features, such as customization points and interoperability with smart pointers. Despite all that, open-method calls are fast: on par with native virtual functions.

You can find the source code of the library at https://github.com/jll63/Boost.OpenMethod/tree/master and read the documentation at https://jll63.github.io/Boost.OpenMethod/. The library is header-only, so it is fairly easy to try out. In addition, Christian Mazakas (of the C++ Alliance) has added the candidate library to his vcpkg repository (https://github.com/cmazakas/vcpkg-registry-test). The library is also available in Compiler Explorer under the name YOMM2.

As the library is not domain-specific, everyone is very welcome to contribute a review, either by sending it to the Boost mailing list or to me personally. In your review, please state whether you recommend rejecting or accepting the library into Boost, and whether you suggest any conditions for acceptance. Other questions you might want to answer in your review are:

* What is your evaluation of the design?
* What is your evaluation of the implementation?
* What is your evaluation of the documentation?
* What is your evaluation of the potential usefulness of the library?
* Did you try to use the library? With what compiler? Did you have any problems?
* How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
* Are you knowledgeable about the problems tackled by the library?

Thanks in advance for your time and effort!

Dmitry Arkhipov, Staff Engineer at The C++ Alliance

Sun, Apr 27, 2025 at 16:15, Дмитрий Архипов <grisumbras@gmail.com>:
The library is also available in Compiler Explorer under the name YOMM2.
I was informed by the author that, in CE, it is better to use the Boost-ready header https://jll63.github.io/Boost.OpenMethod/boost/openmethod.hpp, as in this example: https://godbolt.org/z/rbhqvGvje.

On Sun, Apr 27, 2025 at 9:15 PM Дмитрий Архипов via Boost < boost@lists.boost.org> wrote:
[...]
I got a few questions for the author:

1. How does a registrar work?
2. What kind of hashing are we doing?
3. "When an error is encountered, the program is terminated by a call to abort": what errors are there? Just pure virtual?
4. Why do we need multiple policies and facets in a single project? Why couldn't we use a single one (at least for policies) project-wide?
5. Do you have any real-world use-cases? I looked at the ast.cpp example, but was a bit disappointed that it was a single argument method.

Thanks,
Klemens

1. How does a registrar work?
A registrar is a node in a doubly linked list of static objects. The node constructor does *not* initialize the forward and backward pointers; instead, it relies on zero-initialization. This prevents order-of-construction problems. See https://github.com/jll63/Boost.OpenMethod/blob/master/include/boost/openmeth...
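For readers who want a picture of the trick, here is a minimal sketch of the general technique (this is not the library's actual registrar; the names are invented for illustration). The key point is that the list head has no dynamic initializer, so it cannot be reset after nodes from other translation units have already linked themselves in:

    // Sketch only, not the library's code. All registrar_node objects have
    // static storage duration, so they are zero-initialized before any
    // constructor runs: head and the prev/next pointers start out null rather
    // than indeterminate, and there is no dynamic initializer for head that
    // could wipe the list after other TUs have already registered nodes.
    struct registrar_node {
        registrar_node* prev;               // left to zero-initialization
        registrar_node* next;               // left to zero-initialization

        static registrar_node* head;        // zero-initialized to nullptr

        registrar_node() {                  // push_front, safe in any TU order
            next = head;
            if (head) head->prev = this;
            head = this;
        }
    };

    registrar_node* registrar_node::head;   // definition; still zero-initialized

    // A registration macro would then expand to a static object like this:
    static registrar_node register_something;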
2. What kind of hashing are we doing?
It is described here: https://jll63.github.io/Boost.OpenMethod/#virtual_ptr_description_19 Perfect (collision-free) hashing, using the fastest possible hash function: (M * value) >> S, where M and S are found by random search. Hashing can be customized, or eliminated altogether in some use-cases.
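In code, the hash described above is just a multiply and a shift. A sketch (M and S below are placeholders; as explained, the library searches for values that make the hash collision-free over the registered type ids):

    #include <cstddef>
    #include <cstdint>

    // Sketch of the hash shape described above: (M * value) >> S. The factors
    // are placeholders; the real values are found by random search until every
    // registered type id maps to a distinct slot.
    inline std::size_t hash_type_id(std::uintptr_t value, std::uintptr_t M,
                                    std::size_t S) {
        return static_cast<std::size_t>((M * value) >> S);
    }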
3. "When an error is encountered, the program is terminated by a call to abort" what errors are there? Just pure virtual?
Other errors are: referencing a class that was not registered, failure to find hash factors, and dynamic vs static type mismatch in a "final" construct. By default, the library calls a vectored error handler that does nothing in release mode, and prints a diagnostic in debug mode; then it `abort`s. The error handler can be set to a function that throws an exception to prevent termination. By default, the library is exception agnostic.
4. Why do we need multiple policies and facets in a single project? Why couldn't we use a single one (at least for policies) project-wide?
Possible reasons:

* A library author may use open-methods internally, and may want to use a specific policy, without depending on, or interfering with, code from the application or other libraries.
* Classes, methods, and overriders involved in dynamic loading may need an extra indirection for the v-table pointer, which other sets of classes and methods may not need.
* If a subset of the classes uses only "final" constructs, or intrusive vptrs (https://jll63.github.io/Boost.OpenMethod/#virtual_ptr_with_vptr), or some other sort of vptr placement, there is probably no need to include them in the hash table.
* For the library's unit tests ;-)
5. Do you have any real-world use-cases?
The library is derived from YOMM2 (https://github.com/jll63/yomm2), which has users, and even contributors! You can take a look at past issues and PRs, and google YOMM2. I also got feedback like "if I had known this existed, I would have used it". That is one of my motivations for submitting to Boost.
I looked at the ast.cpp example, but was a bit disappointed that it was a single argument method.
I deliberately de-emphasize multiple dispatch. It breaks my heart when someone walks out of one of my talks saying: "multi-methods, cool! But I never had any use for that." I insist that "multi" is the cherry on the "open" cake. A study showed that, even in languages that natively support multi-methods, the majority of methods have a single virtual parameter (https://openaccess.wgtn.ac.nz/articles/thesis/Multiple_Dispatch_in_Practice/...).

Here is an example of triple dispatch: https://github.com/jll63/Boost.OpenMethod/blob/master/examples/adventure.cpp. By the way, TADS (a text-game-oriented language) natively supports multi-methods.

J-L

Hi,

First, thanks Jean-Louis and Dmitry for the submission and for managing the review. I've got a couple of questions for the author regarding the long-term aim of the library:

* Is your long-term goal to present open methods for standardization, using this library as a way to gain field experience? Or is the library aimed at the end user as-is, with no long-term standardization intent?
* If standardization is your goal, have you checked with any committee member about how feasible it is to move this forward?

Thanks,
Ruben.

On Sun, 27 Apr 2025 at 15:15, Дмитрий Архипов via Boost <boost@lists.boost.org> wrote:
[...] * What is your evaluation of the implementation?
I've seen that many of the macros use __COUNTER__ to generate unique identifiers. If I'm reading this correctly, this includes BOOST_OPENMETHOD_DECLARE_OVERRIDER and BOOST_OPENMETHOD_INLINE_OVERRIDE, which are supposed to be safe to place in headers. Is this correct?

I've had bad experiences with macros using __COUNTER__ in headers in the past. boost/asio/coroutine.hpp, which simulates coroutines using switch/cases, uses __COUNTER__. Placing such constructs in headers can inadvertently yield ODR violations. In my case, the ODR violation went like this:

    // header1
    void f1() {
        BOOST_ASIO_CORO_REENTER(...) { ... } // Internally uses __COUNTER__
    }

    // header2
    void f2() {
        BOOST_ASIO_CORO_REENTER(...) { ... } // Internally uses __COUNTER__
    }

    // f1.cpp
    #include "header1.hpp" // __COUNTER__ here is 0
    #include "header2.hpp" // __COUNTER__ here is 1

    // f2.cpp
    #include "header2.hpp" // __COUNTER__ here is 0
    #include "header1.hpp" // __COUNTER__ here is 1

This makes f1() and f2() have distinct bodies in f1.cpp and f2.cpp, which is an ODR violation. It caused random test failures under MSVC in Release mode only, and it took me forever to identify the root cause of the issue. Are the macros I mention also vulnerable to this problem?

Cheers, Ruben.

On Tue, 29 Apr 2025 at 18:30, Ruben Perez <rubenperez038@gmail.com> wrote:
Are the macros I mention also vulnerable to this problem?
Answering my own question: no, they are not, because the generated symbols always have internal (static) linkage. If used in headers as shown above, several, potentially differently-named symbols will be generated, but that's okay, since re-registering things multiple times seems to be a no-op.
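For readers following along, a generic illustration of the linkage argument (this is not library code): an object defined in an anonymous namespace has internal linkage, so each translation unit gets its own copy under whatever name __COUNTER__ produces there, no entity is shared across TUs, and the ODR never comes into play.

    #include <iostream>
    #include <string>
    #include <vector>

    // Generic illustration, not library code. The macro defines an object in an
    // anonymous namespace, so it has internal linkage: every TU that expands it
    // gets a private copy, whatever __COUNTER__ evaluates to in that TU. The
    // worst case is the same name being registered several times, which the
    // registry below simply tolerates.
    inline std::vector<std::string>& registry() {
        static std::vector<std::string> names;
        return names;
    }

    struct register_name {
        explicit register_name(std::string name) {
            registry().push_back(std::move(name));
        }
    };

    #define PP_CAT_IMPL(a, b) a##b
    #define PP_CAT(a, b) PP_CAT_IMPL(a, b)

    #define REGISTER_NAME(name)                                   \
        namespace {                                               \
        register_name PP_CAT(register_, __COUNTER__){name};       \
        }

    REGISTER_NAME("Cat")  // a distinct, internally linked object per TU

    int main() {
        for (const auto& n : registry()) std::cout << n << '\n';
    }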
Cheers, Ruben.

Dmitry - thanks for being the review manager for this library, and Jean-Louis - thanks for submitting it! I am the technical writer for the C++ Alliance, and have only reviewed the documentation.

*Review of OpenMethod library documentation*

The documentation falls short in a number of key areas that I try to address in this review. It is good that there is enough information to provide detailed feedback. Currently, though, the documentation would get a "D" - not a pass mark - a decent base, but it requires work.

INTRODUCTION

An introduction to a library should be suitable for developers who are not familiar with the role of open methods. They should already be decent C++ developers. The goal of an introduction is to answer the question: "why should I be interested in this?". I suggest something like this:

Open methods are a language mechanism that allows you to define new behaviors (essentially, methods) for existing types *without* modifying those types. C++ doesn't natively support open methods in the way that some dynamic languages (like Common Lisp) do. Keys to the purpose of open methods are the *Open/Closed Principle* (OCP) - where a software entity (class, module, function, etc.) should be *open* for extension but *closed* for modification - and the more complicated concept of *multiple dispatch*. In *single dispatch*, method resolution is based on the runtime type of a single object, usually the one the method is called on. With multiple dispatch, method resolution is based on the runtime types of two or more arguments. C++ supports single dispatch via virtual functions; multiple dispatch has to be simulated, and is coded into this library.

The main advantage of open methods is that they help prevent bugs when modifying stable code. For example, say you have a stable text processor. When a new file format becomes popular (say, a variant of markdown), your code can be extended to support the new format without modifying the existing code. In simple terms, *open methods allow for safer scaling of software*.

Another specific use is that you can add behavior involving multiple types, for example adding collision handling between type `A` and type `B` that is to date unsupported in your code. Say you had a simulation of watercraft, but had not supported hovercraft. You now add hovercraft, and need to add collision detection, and the effects of the collision on both parties, but would prefer to do this without modifying your existing codebase. This is where multiple dispatch is useful - both as a concept and a feature.

This introduction describes two use-cases that are easy to understand and can now be referenced later in the documentation as needed to clarify details. Of course, there may be better use-cases than those I have described here.

The current introduction is interesting, but perhaps should be under a secondary heading, "History of Open Methods", that follows the intro.

To be clear - does the library implement ALL the features of the N2216 paper? If not all, then list those implemented. It is not a good idea to require a reader to reference documentation outside of the library for essential knowledge. The N2216 paper is long and heavy going, but as it is not in our control, the link could break during the lifetime of the library (say, 5 to 15 years). It is totally OK to provide references, though expect some reference links to break as time passes. *Required information should be in the library doc*.
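To make the watercraft/hovercraft collision scenario concrete, here is a rough sketch of what such an extension might look like, using the macro and function spellings quoted elsewhere in this thread (BOOST_OPENMETHOD, BOOST_OPENMETHOD_OVERRIDE, BOOST_OPENMETHOD_CLASSES, boost::openmethod::initialize); the class names are invented, and the exact forms should be checked against the library documentation:

    #include <boost/openmethod.hpp>
    #include <iostream>

    // Rough sketch of the collision use-case; Vehicle/Boat/Hovercraft are
    // invented for the example, and the macro spellings follow the forms
    // quoted in this thread - consult the documentation for the exact API.
    struct Vehicle { virtual ~Vehicle() = default; };
    struct Boat : Vehicle {};
    struct Hovercraft : Vehicle {};

    BOOST_OPENMETHOD_CLASSES(Vehicle, Boat, Hovercraft);

    using boost::openmethod::virtual_ptr;

    // Dispatches on the dynamic types of *both* arguments (multiple dispatch).
    BOOST_OPENMETHOD(collide, (virtual_ptr<Vehicle>, virtual_ptr<Vehicle>), void);

    BOOST_OPENMETHOD_OVERRIDE(
        collide, (virtual_ptr<Boat>, virtual_ptr<Boat>), void) {
        std::cout << "boat/boat: exchange insurance details\n";
    }

    // Added later, without touching Vehicle, Boat, or the existing overrider.
    BOOST_OPENMETHOD_OVERRIDE(
        collide, (virtual_ptr<Hovercraft>, virtual_ptr<Boat>), void) {
        std::cout << "hovercraft/boat: deflate skirt, assess hull damage\n";
    }

    int main() {
        boost::openmethod::initialize();
        Hovercraft h;
        Boat b;
        collide(h, b);  // resolves on the dynamic types of both arguments
    }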
A kinda nit - this is funny: "You wanted a banana but what you got was a gorilla holding the banana and the entire jungle." - but is it really true? If you have a class Banana, how do you get the gorilla and the jungle - wouldn't it be the other way around (jungle inherits animals, trees; trees inherit fruit, etc.)? However, if this is true, perhaps give an example to show how it is a problem?

GETTING STARTED

Following the introduction, there should be a *Getting Started* section which goes over 1) Requirements, 2) Steps to install, 3) Dependencies (Boost libs, std libs, other libs, etc.), 4) a hello-world tutorial. Getting up and running should come before Tutorials.

PERFORMANCE

Perhaps add a table of performance of some sort that readers can relate to - say, comparing an open method to a class method with both doing the same thing.

TUTORIALS

Great that there is a range of tutorials addressing the worth of open methods. However, they are not really tutorials unless they contain steps for the reader to follow. Step 1 - copy and paste this example. Step 2 - run it and examine the output. Step 3 - add this feature and run again. Step 4 - notice how the output has changed... etc. It would be most useful if the tutorials related to the use-cases first introduced, showing how the desired results are achieved. It is not clear enough what the library does, versus what the sample source code does. Great tutorials have well-commented code - ideally you can follow the code flow by reading the comments.

TERMS and ACRONYMS

Acronyms are introduced that I am unaware of, without an initial explanation. For example: "Unlike function declarations, which can occur multiple times in a TU, an overrider declaration cannot." What is a TU? Can this be defined in full on first use? Similarly for ADL, AST, CRTP, etc. And for terms such as "guide function" - add a definition on first use.

REFERENCE

If all the components of the library are listed here, great - a reference should be *complete*. I would really like to see the Reference further divided into sections based on type:

* Classes
** fields
** methods
* Interfaces
* Structures
* Constants
* Macros
* Functions (if external to any class)

Currently we have to examine each entry, or guess (I don't like guessing) what each entry is. The biggest missing component, though, is the "why" (the use-case) - the *Description* of each entry should say in what situation you would be interested in this construct - or, to put it another way, what problem does having this solve, or help with? If references are arranged alphabetically within each section (Classes, Macros, etc.), they are easy to locate.

ERRORS and EXCEPTIONS

It is not clear to me what errors or exceptions might be thrown by any of the entries. All errors/exceptions thrown by the library code should be listed under the entry that might fire them. For maximum usefulness, include a table of errors and exceptions, and *what you should do about it if you get such an error* could be described too - say, likely causes and likely resolutions. A complete, structured Reference, with descriptions, return values, use cases and errors, would be great to see.

ACKNOWLEDGEMENTS, REFERENCES

The documentation could end with any acknowledgements (designers, testers, motivators) and References, such as N2216 and no doubt others.

*In Summary:*

I had to refer to too much information outside of the library doc to get a basic understanding of the use-cases and problems that are being addressed here. The documentation as it stands describes what the code does, but almost entirely from an inward-looking perspective, and it almost never addresses the "when should I use this" from a use-case - or, more likely, a component of a use-case - perspective. Keys to necessary improvements would be:

1. an introduction explaining the "why should I be interested", with some compelling use-cases. The more relatable the use-cases, the more interest and users you will get.
2. step-by-step tutorials showing how the concepts of open methods solve problems
3. a structured and complete reference

*Bonus Points*

1. As open methods would often be added to existing software, are there *Best Practices* on how to design and implement code so that the addition of open methods at a later date is as seamless as can be? If so, add a section entitled Best Practices before the Reference. This would be a good place to reference external books, papers or articles if they are useful.
2. Are there other Boost libraries that play well with OpenMethod? Even if you have limited experience of this, a start would be helpful.

*Having said all this - the case for open methods is quite strong, from the safer-scalability-of-software perspective in particular, and I hope my 2c is useful.*

- thanks,
*Peter Turcan*

Hi Peter,

Thanks for your review! I'll think about your remarks on the introduction, and will probably blend some of them in.
To be clear - does the library implement ALL the features of the N2216 paper - if not all, then list those implemented.
I'll add a dedicated top-level entry for this. Probably it should not be too close to the top, because it will speak to more hardcore users, not to people who are just discovering open-methods and who have no awareness of N2216.
It is totally OK to provide references, though expect some reference links to break as time passes. *Required information should be in the library doc*.
Ok.
A kinda nit - this is funny "You wanted a banana but what you got was a gorilla holding the banana and the entire jungle." - but - is it really true?
Yes it is :-D

Take the Matrix example, where you want to add serialization to the matrix hierarchy. If you plant a `virtual void to_json(std::ostream&)` in the base class, and implement it in the subclasses, any consumer of your matrix library will drag in the iostream library. Matrix is the banana you wanted. The iostream library is the gorilla. You'll pull in iostream's dependencies as well. Jumping from hierarchy to hierarchy in that fashion, you'll pull in much of the entire jungle.

People did a lot of this when OOP became mainstream. Later, patterns, principles, etc. were designed to mitigate the problem: visitors, dependency injection, interfaces, etc. Open-methods are a solution too, but a less clumsy one, I believe.
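A minimal sketch of that alternative, using the macro forms quoted elsewhere in this thread (the Matrix classes here are invented for illustration, and the exact API spelling should be checked against the documentation). The point is that the matrix hierarchy itself never mentions iostream; only the separate serialization component does:

    #include <boost/openmethod.hpp>
    #include <iostream>

    // The "banana": the matrix hierarchy itself knows nothing about iostream.
    struct Matrix { virtual ~Matrix() = default; };
    struct DenseMatrix : Matrix { /* ... */ };
    struct DiagonalMatrix : Matrix { /* ... */ };

    // The serialization component, which would live in its own header and TU.
    // Only code that wants JSON output pulls in iostream and these overriders.
    BOOST_OPENMETHOD_CLASSES(Matrix, DenseMatrix, DiagonalMatrix);

    using boost::openmethod::virtual_ptr;

    BOOST_OPENMETHOD(to_json, (std::ostream&, virtual_ptr<Matrix>), void);

    BOOST_OPENMETHOD_OVERRIDE(
        to_json, (std::ostream& os, virtual_ptr<DenseMatrix>), void) {
        os << "{\"kind\": \"dense\"}";
    }

    BOOST_OPENMETHOD_OVERRIDE(
        to_json, (std::ostream& os, virtual_ptr<DiagonalMatrix>), void) {
        os << "{\"kind\": \"diagonal\"}";
    }

    int main() {
        boost::openmethod::initialize();
        DenseMatrix m;
        to_json(std::cout, m);  // prints {"kind": "dense"}
    }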
GETTING STARTED

Following the introduction, there should be a *Getting Started* section which goes over 1) Requirements 2) Steps to install 3) Dependencies (Boost libs, std libs, other libs, etc.) 4) hello world tutorial.
Getting up and running should come before Tutorials.
OK, those should not be relegated to the reference.
PERFORMANCE

Perhaps add a table of performance of some sort that readers can relate to - say, comparing an open method to a class method with both doing the same thing.
https://jll63.github.io/Boost.OpenMethod/#tutorials_performance does that, but in terms of instructions, not timings. Working on this project has destroyed my faith in micro-benchmarks.
However, they are not really tutorials unless they contain steps for the reader to follow. Step 1 - copy and paste this example. Step 2 - run it and examine the output, Step 3 - add this feature and run again. Step 4 - notice how the output has changed...etc. etc.
It would be most useful if the tutorials related to the use-cases first introduced, showing how the desired results are achieved.
On this subject, what do you think of Andrzej's distaste for the Animals example? And of my motivation for it? I don't want to put words in Andrzej's mouth; you'll find that dialogue easily. I do see his point.
TERMS and ACRONYMS
Acronyms are introduced that I am unaware of, without an initial explanation. For example,
Unlike function declarations, which can occur multiple times in a TU, an overrider declaration cannot.
What is a TU? Can this be defined in full on first use? Similar for ADL, AST, CRTP etc. And for terms such as "guide function" - add a definition on first use.
I *think* I kept the acronyms to the "advanced" parts of the doc. I think that any moderately advanced C++ programmer knows what ADL is... but I will make sure that, in any given section, the first mention of an acronym is spelled out: Argument-Dependent Lookup (ADL).
I would really like to see the Reference further divided into sections based on type: * Classes ** fields ** methods * Interfaces * Structures * Constants * Macros * Functions (if external to any class)
So, I struggled with this. I opted for a "thematic" grouping - putting strongly related constructs close to one another in the sidebar. But that grouping jumps across categories. Some facets are templates, some are classes. Should they be scattered between templates and classes? This approach seems to work well only for the macros.
- Currently we have to examine each entry, or guess (I don't like guessing) what each entry is.
Yeah, I experimented with other schemes, trying to get guidance from other adoc-based documentation. There are many variations. I tried prefixing every entity with its category - e.g. "class template method", "class template method::override" - but this causes a lot of wrapping in the left sidebar.

I am considering removing the "Reference" level to gain one level. Instead of:

    Reference
      BOOST_OPENMETHOD
        Synopsis
        Description
      BOOST_OPENMETHOD_OVERRIDE
        Synopsis
        Description

...go straight to:

    BOOST_OPENMETHOD
      Synopsis
      Description
    BOOST_OPENMETHOD_OVERRIDE
      Synopsis
      Description

What do you think? I like adoc but it can be limiting...
If references are arranged alphabetically within each section (Classes, Macros etc.) - they are easy to locate.
In YOMM2 I have a flat alphabetically ordered index (https://jll63.github.io/yomm2/#index). Maybe thematic grouping is a bad idea. After all it's a reference, not a novel.
ERRORS and EXCEPTIONS
It is not clear to me what errors or exceptions might be thrown by any of the entries. All errors/exceptions thrown by the library code should be listed under the entry that might fire them. For maximum usefulness, include a table of errors and exceptions and* what you should do about it if you get such an error* could be described too - say likely causes and likely resolutions.
It is hard to document accurately because of the customization points.
A complete structured Reference, with descriptions, return values, use cases and errors would be great to see.
Do you mean this style (from Variant2)?

    Effects: Initializes the variant to hold the same alternative and value as w.
    Throws: Any exception thrown by the move-initialization of the contained value.
    Remarks: This function does not participate in overload resolution unless std::is_move_constructible_v<Ti> is true for all i.

I tried it. But it is difficult to squeeze the library's flexibility into this mold. In case of error, it aborts. But before that, it calls the error handling facet. Which can throw, or not - it's a customization point. I guess I could have an Errors section instead of a Throws:

    Errors: Calls the error handler specified in the policy's error_handler facet, then calls abort.

More importantly, I felt that this style is robotic. What are your favorite Boost documentations?
ACKNOWLEDGEMENTS, REFERENCES
The documentation could end with any acknowledgements (designers, testers, motivators) and References such as N2216 and no doubt others.
Yes.
The documentation as it stands describes what the code does but almost entirely from an inward looking perspective and almost never addresses the "when I should use this" from a use-case - or more likely a component of a use-case - perspective.
1. an introduction explaining the "why I should be interested" with some compelling use-cases. The more relatable the use-cases, the more interest and users you will get.
You seem to agree with Andrzej. By the way (BTW ;-) ) he likes YOMM2's intro better. Can you take a look and tell me what you think? https://github.com/jll63/yomm2
2. Are there other Boost libraries that play well with OpenMethods - even if you have limited experience of this a start would be helpful.
Yes.

* Using boost::unordered_flat_map in place of a std one. That brings dispatch speed closer to my perfect-hash solution.
* Interoperability with intrusive smart pointers.
I hope my 2c is useful.
You are a technical writer; I am just a code writer trying to do a technical writer's job. That's worth a lot more than 2c ;)

Fri, May 2, 2025 at 03:51, Jean-Louis Leroy via Boost <boost@lists.boost.org> wrote:
A kinda nit - this is funny "You wanted a banana but what you got was a gorilla holding the banana and the entire jungle." - but - is it really true?
Yes it is :-D
Take the Matrix example, where you want to add serialization to the matrix hierarchy. If you plant a `virtual void to_json(std::ostream&)` in the base class, and implement it in the subclasses, any consumer of your matrix library will drag in the iostream library. Matrix is the banana you wanted. The iostream library is the gorilla. You'll pull iostream's dependencies in as well. Jumping from hierarchy to hierarchy in that fashion, you'll pull much of the entire jungle.
People did a lot of this when OOP became mainstream. Later patterns, principles, etc were designed to mitigate the problem. Visitors, dependency injection, interfaces, etc. Open-methods are a solution, but less clumsy I believe.
The above is an excellent problem description. At least for me, it says a lot more than the abstract things like "the expression problem" or the "gorilla-banana-jungle" thing. The one above is concrete, representative of a real-life situation, and indirectly promising that this library will solve a real-life problem.
ERRORS and EXCEPTIONS
It is not clear to me what errors or exceptions might be thrown by any of the entries. All errors/exceptions thrown by the library code should be listed under the entry that might fire them. For maximum usefulness, include a table of errors and exceptions and* what you should do about it if you get such an error* could be described too - say likely causes and likely resolutions.
It is hard to document accurately because of the customization points.
A complete structured Reference, with descriptions, return values, use cases and errors would be great to see.
Do you mean this style (from Variant2)?
Effects: Initializes the variant to hold the same alternative and value as w.
Throws: Any exception thrown by the move-initialization of the contained value.
Remarks: This function does not participate in overload resolution unless std::is_move_constructible_v<Ti> is true for all i.
I tried it. But it is difficult to squeeze the library's flexibility into this mold. In case of error, it aborts. But before that it calls error handling facet. Which can throw, or not - it's a customization point. I guess I could have a Errors instead of a Throws:
Errors: Call the error handler specified in the policy's error_handler facet, then call abort.
More importantly, I felt that this style is robotic.
What are your favorite Boost documentations?
Let me offer my opinion here (which is not necessarily helpful). We do not have a good tool for documentation. Boost.Outcome had a similar problem of documenting highly customizable behavior. You can have a look there: https://www.boost.org/doc/libs/1_88_0/libs/outcome/doc/html/index.html But then again, I am not sure if this is a satisfactory solution.

I think you should probably have an *Effects* section which says:

    Effects: Calls `error_handler::handle(e)`. Then calls `std::abort()`.

If I see this - and at this point I will think as a robot, and require a robotic description - I will be able to figure out that: (1) I now have to look up `error_handler` to see what it does; (2) when it throws, `std::abort` is not called.

Regards,
&rzej;

Jean-Louis,

Thanks for your feedback on my comments. To answer your specific questions:
On this subject, what do you think of Andrzej's distaste of the Animals example? And my motivation for it? I don't want to put my words in Andrzej's mouth, you'll find that dialogue easily.
I think the Animals scenario is not relatable enough for a developer scanning libraries for one that might help them out. The scenarios I like best are:

1. the new file format I included in the proposed introduction
2. adding new UI widgets in a GUI framework
3. new behaviours (AI, physics) for types in a simulation or game engine
4. extending a logging system with new output

I am assuming the main use of your library is enabling developers to plug in new behavior by adding subclasses or overriding virtual functions, without breaking or editing the system's core - though I do like the idea of adding a "multiple dispatch" scenario or two, as it shows the wider flexibility of open methods - though the use-cases here seem much narrower and so specific they might seem contrived.
I am considering removing the "Reference" level to gain one level. Instead of:
Reference
  BOOST_OPENMETHOD
    Synopsis
    Description
  BOOST_OPENMETHOD_OVERRIDE
    Synopsis
    Description
...go straight to :
BOOST_OPENMETHOD
  Synopsis
  Description
BOOST_OPENMETHOD_OVERRIDE
  Synopsis
  Description
What do you think?
I prefer having a top-level *Reference* heading. It is in a different style than the introductions and tutorials, and typically is not read in any kind of order. Most use of a reference is to jump into the topic you are working on (class, template, macro, method, structure, etc.), extract the info you need, and then leave the doc without looking outside of that one topic. This means the reference needs to be complete (as you say, "robotic", in the sense of consistent and complete).

I still prefer grouping all types under a second-level heading such as *Macros*, *Templates*, *Classes*, etc. Grouping items "thematically" is the thesaurus approach and has some charm to it. It works for overviews, tutorials and architecture descriptions, but for a Reference I prefer to group things rigidly (by type, then alphabetically), and to add a *See Also* heading after each description (say, of a template) with a list of links to other entries that are thematically relevant - or that perform the inverse operation (open -> close, add -> delete, etc.).
What are your favorite Boost documentations?
Mostly, Boost library documentation is comprehensive but requires a higher level of assumed knowledge than I would like to see. The usual culprits are an Introduction that does not describe the purpose of the library, or code definitions/examples or output that has no descriptive text. Make it easy for your potential developers by adding descriptive text to just about all entries of source code, syntax, diagrams or output. Some good library Reference sections are in Boost.Hana and Boost.Geometry - note the one sentence introductions to each construct, description of parameters and return values, often an example - all good to see.
You seem to agree with Andrzej. By the way (BTW ;-) ) he likes YOMM2's intro better. Can you take a look and tell me what you think? https://github.com/jll63/yomm2
Yes, it is better - especially the detailed list of differences from the stated paper. However, it still lacks relatable use-cases (even ones as simple as the four I listed in this mail would be great to see - or another list if you have better scenarios) - and it refers to essential information via a link to a paper. If this link breaks (say the site admins "forget" to pay their annual hosting fee!) then essential information is lost. Better to paraphrase the paper into a paragraph or two (or a bulleted list) to capture the essential concepts, and provide the link for more detailed info.

- Peter

On 27/04/2025 at 15:15, Дмитрий Архипов via Boost wrote:
Dear Boost community. The peer review of the proposed Boost.OpenMethod will start on 28th of April and continue until May 7th.
This is my review of the Boost.OpenMethod proposal. Thanks to Jean-Louis for his work and to Dmitry for managing! As a guide to this review, I'm answering the points proposed by the review manager succinctly, and then a full review section is given at the end with much more detail (in no particular order).

* What is your evaluation of the design?

I find it adequate and well thought out, maybe leaning towards overcomplexity for the sake of covering as much as possible (below).

* What is your evaluation of the implementation?

I didn't look closely, but what little I saw looked robust to me. The code looks clean, too.

* What is your evaluation of the documentation?

Quite extensive, and it probably covers everything that needs to be covered, but I have qualms about it (below).

* What is your evaluation of the potential usefulness of the library?

It is very useful if you need open methods, obviously. Personally I haven't delved much into the world of virtual functions (I'm more of a template programmer), but the fact that the predecessor of this library (YOMM2) has real users is enough to convince me that the proposal fills a need.

* Did you try to use the library? With what compiler? Did you have any problems?

Yes, to get my head around it, by writing small snippets. VS 2022. No problems detected.

* How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?

I spent 11 hours playing with the library and reading and re-reading the docs.

* Are you knowledgeable about the problems tackled by the library?

I'm familiar with open methods since I read about them in Alexandrescu's 2001 "Modern C++ Design" (where they were called multimethods). I think I'm sufficiently familiar with virtual function machinery to understand the technical challenges met by this proposal and the solutions adopted.

* Do you think the library should be accepted as a Boost library?

Yes, I vote to ACCEPT Boost.OpenMethod. I have some observations/reservations about it, some of them more important than others. To be clear, I'm not conditioning my vote on these observations being addressed, since I know the author will ponder them carefully and respond in the most appropriate way - which may not be the way I suggest.

FULL REVIEW

Main points:

* The docs are quite hard to read (to me, at least). Mainly, I miss clear introductions and explanations of the many concepts involved before they are used in the tutorial. A (probably incomplete) list of concepts that I had to wrap my head around is:
+ virtual_ptr, and its double role as a fat reference and as a virtual argument marker
+ virtual_
+ compiler
+ guide function
+ policies and facets
* Policies, in particular, are introduced in a way that feels haphazard and incomplete to me. I can never be sure that I have all the policy-related information, correctly and exhaustively.
* I'd encourage the author to try and remove from the docs (i.e. not disclose publicly) as many utilities as possible. I have the feeling some components are described that don't really belong in the public API, though I may be wrong on this one.
* I think virtual_ptr is a misnomer, and the entity should be renamed to virtual_ref or something similar.
* I challenge the need to have virtual_shared_ptr, virtual_unique_ptr and the like. In my opinion, the proposal need not conflate virtual arguments with object lifetimes, the latter being orthogonal to the purpose of the library.
* The decision to call an arbitrary overrider when there’s a tie looks erroneous to me, and it may at least be controlled by some policy facet.
* The library relies very heavily on macros, and the macro-free alternative does not seem too practicable. I guess this can’t be helped, but it’d be nice if some thought were given to how to alleviate this overdependence on macros.

Elaboration and full list of observations:

* virtual_ptr. I have a number of problems with this class:
  + The documentation starts using it right away without telling the user what it is about. At first, I thought it was merely a syntactic marker to indicate where a virtual argument happens, which is the case, but not the whole story.
  + Its semantics are not those of a pointer; it behaves more like a reference. This is explicitly acknowledged later in the docs, on the grounds that virtual_ref, which is a more apt name, is reserved for potential evolutions of the C++ language to include overloading of the dot operator. I think it is extremely unlikely that this C++ feature will ever be realized.
  + As it happens, an open method works equally well if a reference to a class is passed instead of a virtual_ptr (this is apparent as soon as one realizes that virtual_ptr can be constructed from a plain reference, but I didn’t read that far into the reference). In a private communication with the author, he told me this is indeed the case, but virtual_ptr is more efficient because it stores an extra direct pointer to the associated virtual table, in a manner not dissimilar to what Rust’s fat pointers do (again, this is mentioned later in the reference). I was led astray by the implicit assumption that virtual_ptr was a sort of pointer, which it is not.
  + What’s the point of having virtual_ptr() and virtual_ptr(nullptr_t) constructors if virtual_ptr behaves as a reference? Same with the move constructor: what’s the point of emptying the source virtual_ptr?
  + virtual_ptr<const C> is constructible from virtual_ptr<C> but not the other way around, as it should be. This is not readily apparent from reading the reference, though.
  + All in all, I strongly recommend that virtual_ptr be renamed to virtual_ref, and that an early explanation in the docs is given for its presence as opposed to using plain references. operator-> can be retained because it provides a terse syntax to access the underlying object, but consider providing as well some value() member function and/or a conversion operator to the underlying object.
* “Multiple and virtual inheritance are supported, with the exception of repeated inheritance”: I don’t know what repeated inheritance means – I may make an educated guess, but I don’t think this is standard terminology.
* Instead of BOOST_OPENMETHOD(poke, (std::ostream&, virtual_ptr<Animal>), void), consider the possibility of using BOOST_OPENMETHOD(poke, void(std::ostream&, virtual_ptr<Animal>)).
* When virtual_ptr is used, classes are required to be polymorphic (i.e. classes with a virtual table), which is perfectly ok. But if I fail to declare any virtual function in my class hierarchy, simple snippets of code still compile and produce spurious results. It would be good if this could be caught and signaled by a compile-time error.
* policies:debug and policies:release: I understand that these do not mix together? I mean, if a DLL is compiled in debug mode, its exported open methods won’t be taken by an executable compiled in release mode, right? Not sure if this is expected and/or desirable.
* “If there is still no unique best overrider, one of the best overriders is chosen arbitrarily.” I’m not sure this is the best default option; probably a run-time error is more appropriate here, maybe controlled by some facet of the policy.
* In the description of the release policy, it’s stated that, for instance, the facet extern_vptr is implemented with vptr_vector and vptr_map. This doesn’t make much sense to me, as a policy can’t possibly provide more than one implementation for a given facet. Looking at the source code, I learnt that it’s vptr_vector that’s used for extern_vptr in release.
* “If BOOST_OPENMETHOD_DEFAULT_POLICY is defined before including this header, its value is used as the default value for the Policy template parameter throughout the code. Otherwise, boost::openmethod::default_policy is used.” I think it would be more correct to say “Otherwise, BOOST_OPENMETHOD_DEFAULT_POLICY is defined to boost::openmethod::default_policy”.
* In the reference, initialize is declared as:

    template<class Policy = BOOST_OPENMETHOD_DEFAULT_POLICY>
    auto compiler<Policy>::initialize() -> /*unspecified*/;

I understand that this should be

    template<class Policy = BOOST_OPENMETHOD_DEFAULT_POLICY>
    auto initialize() -> /*unspecified*/;

* “OpenMethod supports dynamic loading on operating systems that are capable of handling C++ templates correctly during dynamic link.” I don’t understand what this means, and what operating systems those are (all the usual ones? or is it a rare feature?).
* I miss a complete specification of what facets and other pieces of info a policy can/must have, what happens when some particular facet/piece of info is not explicitly provided, and what facets/pieces of info can be user-defined instead of simply chosen from the available catalog of options in the library.
* The reference includes lots of small classes such as type_id, vptr_type, etc., but it’s not clear to me if these are implementation details or things for which a user can provide alternative implementations within a policy. If the former, I’d remove them from the reference.
* The section “Core API” explains how to implement the “poke” open method without utility macros. But then the example uses “poke::fn(…)” rather than plain “poke(…)”. What additional step is needed to achieve the “poke(…)” syntax that macro-supported examples accomplish? (Peeking into the code, I see that BOOST_OPENMETHOD includes the definition of a free-standing forwarding function inline auto NAME(ForwarderParameters&&... args)…)
* Core API: the example uses BOOST_OPENMETHOD_REGISTER; it would be nice to show how to get rid of all macros, including this one.
* A virtual argument in an open method can be defined with virtual_ptr<C> or virtual_<C>. I understand why these two exist--the former is more efficient. But then we also have virtual_shared_ptr and virtual_unique_ptr: what’s the advantage of using virtual_shared_ptr<C> instead of std::shared_ptr<virtual_ptr<C>>? This seems to add some clutter to the lib for no real benefit, plus it conflates the concepts of “virtual slot” and “object lifetime”, which in my mind are completely orthogonal, the latter not really belonging in this library, much like classical virtual functions are not concerned with object lifetime either. FWIW, this extension is not part of N2216.
* Elaborating on the previous point, a cleaner approach (IMHO) would be to mark virtual slots with virtual_<C> exclusively, and then let this interoperate smoothly with virtual_ptr<C> (or virtual_ref if the name is finally changed). That is, virtual_ is concerned with the signature of open methods only, and virtual_ptr is concerned with run-time optimization of reference passing only.
* I understand that BOOST_OPENMETHOD_DECLARE_OVERRIDER and BOOST_OPENMETHOD_DEFINE_OVERRIDER separate the declaration and definition that are covered together in BOOST_OPENMETHOD_OVERRIDE. But what does BOOST_OPENMETHOD_OVERRIDER do? The explanation in the reference is quite opaque to me.
* The docs talk about so-called guide functions, but I can’t seem to find this concept defined anywhere.
* Regarding <boost/openmethod.hpp>, it’s stated that it “[a]lso imports boost::openmethod::virtual_ptr in the global namespace. This is usually regarded as bad practice. The rationale is that OpenMethod emulates a language feature, and virtual_ptr is equivalent to keyword, similar to virtual.” I agree this is bad practice; I’d recommend against it.
* I can define and override an open method inside a namespace (say, an open method run in my_namespace), so that’s ok. But I can’t override outside the namespace with BOOST_OPENMETHOD_OVERRIDE(my_namespace::run, …). At least, this should be mentioned as a limitation of the macro-based interface.
* I know that an internal type_hashed is required because of previous communications with the author around this very point, but I have the feeling this piece of the lib is not adequately explained in the docs and a casual reader may be confused by it.

Joaquin M Lopez Munoz
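For readers following the thread, the kind of code under discussion looks, in outline, like the sketch below. It is pieced together from the macro forms quoted in the reviews; the class registration macro and the exact initialization spelling are assumptions taken from the library's documentation rather than from this thread, so treat the details as illustrative.

    #include <boost/openmethod.hpp>
    #include <iostream>

    struct Animal { virtual ~Animal() = default; };
    struct Cat : Animal {};

    // Register the classes participating in dispatch (macro name assumed from the docs).
    BOOST_OPENMETHOD_CLASSES(Animal, Cat);

    // A free-standing "virtual function": the virtual_ptr parameter drives dispatch.
    BOOST_OPENMETHOD(poke, (std::ostream&, virtual_ptr<Animal>), void);

    BOOST_OPENMETHOD_OVERRIDE(poke, (std::ostream& os, virtual_ptr<Cat>), void) {
        os << "hiss\n";
    }

    int main() {
        boost::openmethod::initialize(); // builds the dispatch tables at startup
        Cat felix;
        Animal& animal = felix;
        poke(std::cout, animal);         // dispatches on the dynamic type: prints "hiss"
    }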

Hi Joaquín, Thank you for the thorough review!
* The docs are quite hard to read (to me, at least). Mainly, I miss clear introductions and explanations of the many concepts involved before using them in the tutorial.
Uh, it was hard to write too... I am going to add a glossary. When the doc stabilizes, I will add cross-references.
* Policies, in particular, are introduced in a way that feels to me haphazard and incomplete. I never can be sure if I have all the policy-related information correctly and exhaustively.
Policies are definitely advanced stuff. I found that part of the doc very difficult to write. Maybe I should split the doc into "regular" and "advanced" parts.
* I think virtual_ptr is a misnomer and the entity should be renamed to virtual_ref or something similar.
I went the other way, i.e. virtual_ptr now has pointer semantics. I had two reasons for that.

1. After all these years I realized that the Stroustrup et al. papers do allow passing virtual arguments by pointer. As far as I can see, that is mentioned in a single sentence, and it never appears in any example - only pass-by-reference. Still, to be consistent with the papers, I added virtual pointer arguments (as in virtual_<Animal*>), and logically, the fat pointer followed suit.

2. When virtual_ptr was more ref-like, the interaction with the underlying smart pointer was odd. It limited its functionality. Smart pointers can be changed and reset. When blended with a virtual_ptr, they lost that. In particular, I expected virtual_ptr<std::unique_ptr<T>> to be unusable in many contexts if you cannot detach it from the object it points to.
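For concreteness, the three parameter forms mentioned above can be declared along these lines. This is only a sketch: the method names are invented for illustration, and the Animal base is assumed to be polymorphic.

    #include <boost/openmethod.hpp>

    struct Animal { virtual ~Animal() = default; };

    BOOST_OPENMETHOD(poke_by_ref, (virtual_<Animal&>), void);   // pass by reference, as in the papers
    BOOST_OPENMETHOD(poke_by_ptr, (virtual_<Animal*>), void);   // pass by plain pointer
    BOOST_OPENMETHOD(poke_fat,    (virtual_ptr<Animal>), void); // fat pointer: object address plus vtable pointer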
* I challenge the need to have virtual_shared_ptr, virtual_unique_ptr and the like. In my opinion, the proposal need not conflate virtual arguments with object lifetimes, the latter being orthogonal to the purpose of the library.
(later)
This seems to add some clutter to the lib for no real benefit, plus it conflates the concepts of “virtual slot” and “object lifetime”, which in my mind are completely orthogonal, the latter not really belonging in this library, much like classical virtual functions are not concerned with object lifetime either.
FWIW, this extension is not part of N2216.
That strays from the papers, I know... Here are my reasons. Consider the case of matrix operations. Most of the time, they will return a new matrix, allocated on the heap, managed by std::shared_ptr or maybe a boost::intrusive_ptr. For example, addition returns a new matrix. Using the N2216 syntax:

    class abstract_matrix {...};
    class ordinary_matrix : public abstract_matrix {...};

    auto add(virtual abstract_matrix& a, virtual abstract_matrix& b)
        -> std::shared_ptr<abstract_matrix>;

    auto add(virtual ordinary_matrix& a, virtual ordinary_matrix& b)
        -> std::shared_ptr<abstract_matrix> {
        return std::make_shared<ordinary_matrix>(...);
    }

Transposition is interesting. In the general case, it returns a new matrix too.

    auto transpose(virtual ordinary_matrix& m) -> std::shared_ptr<abstract_matrix> {
        return std::make_shared<ordinary_matrix>(...);
    }

But for a symmetric matrix, it really looks like we want to return the matrix itself. Of course we cannot simply do this:

    auto transpose(virtual abstract_matrix& m) -> std::shared_ptr<abstract_matrix>;

    auto transpose(virtual symmetric_matrix& m) -> std::shared_ptr<abstract_matrix> {
        return std::shared_ptr<abstract_matrix>(&m);
    }

...since we are likely to be holding on to `m` via a shared_ptr somewhere else. In this design, there is a simple solution:

    auto transpose_aux(virtual abstract_matrix& m, std::shared_ptr<abstract_matrix> shared)
        -> std::shared_ptr<abstract_matrix>;

    auto transpose_aux(virtual ordinary_matrix& m, std::shared_ptr<abstract_matrix> /*unused*/)
        -> std::shared_ptr<abstract_matrix> {
        return std::make_shared<ordinary_matrix>(...);
    }

    auto transpose_aux(virtual symmetric_matrix& m, std::shared_ptr<abstract_matrix> shared)
        -> std::shared_ptr<abstract_matrix> {
        return shared;
    }

    auto transpose(std::shared_ptr<abstract_matrix> m) -> std::shared_ptr<abstract_matrix> {
        return transpose_aux(*m.get(), m);
    }

I believe that users could accept this idiom if open-methods were baked into the language. I fear that, for a significant subset of those users, this workaround, _on top of_ an emulation resorting to macros, will be a little too much to swallow.

But there's more... Consider the AST example, using std::unique_ptr to manage the child nodes (or shared_ptr, it doesn't matter):

    struct Node {
        virtual ~Node() {}
    };

    struct Plus : Node {
        Plus(std::unique_ptr<Node> left, std::unique_ptr<Node> right)
            : left(std::move(left)), right(std::move(right)) {}
        std::unique_ptr<Node> left, right;
    };

    auto value(virtual Node&) -> int;

    auto value(virtual Plus& expr) -> int {
        return value(*expr.left) + value(*expr.right);
    }

Perfectly reasonable. In OpenMethod, if I did not support virtual_ptr<>s on smart pointers, it would have to look like:

    struct Plus : Node {
        Plus(std::unique_ptr<Node> left, std::unique_ptr<Node> right)
            : left(std::move(left)), right(std::move(right)),
              left_vptr(*this->left), right_vptr(*this->right) {}

        // for use with "final":
        Plus(std::unique_ptr<Node> left, virtual_ptr<Node> left_vptr,
             std::unique_ptr<Node> right, virtual_ptr<Node> right_vptr)
            : left(std::move(left)), right(std::move(right)),
              left_vptr(left_vptr), right_vptr(right_vptr) {}

        std::unique_ptr<Node> left, right;
        boost::openmethod::virtual_ptr<Node> left_vptr, right_vptr;
    };

    auto value(virtual Node&) -> int;

    auto value(virtual Plus& expr) -> int {
        return value(expr.left_vptr) + value(expr.right_vptr);
    }

I think that, at this point, some users would give up on virtual_ptr and use a virtual_<Node&> instead. But a call via virtual_ptr is three instructions only; via a reference, nine.

Finally, "Design and evaluation of C++ open multi-methods", 5.3. Smart pointers states:
Defining open-methods directly on smart pointers is not possible. In the following example, (1) yields an error, as ptr1 is neither a reference nor a pointer type. The declaration of (2) is an error, because shared_ptr is not a polymorphic object (it does not define any virtual function). Even when shared_ptr were polymorphic, the open-method declaration would be meaningless. A shared_ptr<B> would not be in an inheritance relationship to shared_ptr<A>, thus the compiler would not recognize foo(virtual shared_ptr<B>&) as an overrider.
    void foo(virtual shared_ptr<A> ptr1);  // (1) error
    void foo(virtual shared_ptr<A>& ptr2); // (2) error
I don't buy this. How is it different from plain pointers? A Cat* is not in an inheritance relationship with Animal* either: Cat and Animal are, thus there is a standard _conversion_ from one to the other. My intuition is that virtual smart pointer arguments had to be forbidden because supporting them would require the compiler to be aware of the standard library.
* The decision to call an arbitrary overrider when there’s a tie looks erroneous to me, and it may at least be controlled by some policy facet.
To me too. YOMM2 doesn't do that. I am definitely bringing back that behavior as a facet. Heck, I am tempted to make it the default again, and make the N2216 resolution an opt-in. I am not hugely convinced by using the return type as a tie-breaker either. For starters, it's one of those rules that apply only in certain cases (covariant return types). I think it is more coherent, and easier to understand, if we use exactly the same rules as overload resolution.
* The library relies very heavily on macros, and the macro-free alternative does not seem too practicable. I guess this can’t be helped, but it’d be nice if some thought were given to how to alleviate this overdependence on macros.
Maybe with reflection and code generation, in a few years, if the compilers are capable enough.

Elaboration and full list of observations:
* virtual_ptr. I have a number of problems with this class: + The documentation starts using it right away without telling the user what it is about.
I didn't want to pile up concepts right away. I thought I'd teach by example: look, this is just a virtual function, moved out of a class! Then dive in more... But the approach doesn't seem to work.
+ Its semantics are not those of a pointer, but behave more like a reference. This is explicitly acknowledged later in the docs, on the grounds that virtual_ref, which is a more apt name, is reserved for potential evolutions of the C++ language to include overloading of the dot operator. I think it is extremely unlikely that this C++ feature will ever be realized.
Alas! After changing virtual_ptr to have pointer semantics, I forgot to remove this bit of doc.
* “Multiple and virtual inheritance are supported, with the exception of repeated inheritance”: I don’t know what repeated inheritance means –I may make an educated guess, but I don’t think this is standard terminology.
For example:

    Node    Node
     |       |
     B       C
      \     /
       \   /
        A

(i.e. B and C each derive non-virtually from Node, and A derives from both B and C, so A ends up with two distinct Node subobjects.)

* Instead of BOOST_OPENMETHOD(poke, (std::ostream&, virtual_ptr<Animal>), void), consider the possibility of using BOOST_OPENMETHOD(poke, void(std::ostream&, virtual_ptr<Animal>)).

I tried since you last suggested this. Couldn't make it work. The macro generates something like:

    auto poke(std::ostream& a, virtual_ptr<Animal> b) -> void {
        method<poke_method(std::ostream& a, virtual_ptr<Animal> b)>::fn(a, b);
    }

There is no way to peel off the `void` from the signature with the syntax you suggest.
* When virtual_ptr is used, classes are required to be polymorphic (i.e. classes with a virtual table), which is perfectly ok. But if I fail to declare any virtual function in my class hierarchy, simple snippets of code still compile and produce spurious results. It would be good if this could be caught and signaled by a compile-time error.
This existed in the past, but it was lost when YOMM2 became capable of dealing with non-polymorphic classes in some situations. I reinstated it.
* policies:debug and policies:release: I understand that these do not mix together? I mean, if a DLL is compiled in debug mode, its exported open methods won’t be taken by an executable compiled in release mode, right? Not sure if this is expected and/or desirable.
I am considering separating them completely. As of now basing one on the other is too error-prone.
* In the description of the release policy, it’s stated that, for instance, the facet extern_vptr is implemented with vptr_vector and vptr_map. This doesn’t make much sense to me, as a policy can’t possibly provide more than one implementation for a given facet. Looking at the source code I learnt that it’s vptr_vector that’s used for extern_vptr in release.
Looks like a copy-paste mishap. Thanks for catching it.
* “If BOOST_OPENMETHOD_DEFAULT_POLICY is defined before including this header, its value is used as the default value for the Policy template parameter throughout the code. Otherwise, boost::openmethod::default_policy is used.” I think it would be more correct to say “Otherwise, BOOST_OPENMETHOD_DEFAULT_POLICY is defined to boost::openmethod::default_policy”.
Maybe I am going to avoid defining it, e.g.:

    #ifdef BOOST_OPENMETHOD_DEFAULT_POLICY
    template<class Class, class Policy = BOOST_OPENMETHOD_DEFAULT_POLICY>
    #else
    template<class Class, class Policy = default_policy>
    #endif

* In the reference, initialize is declared as:
    template<class Policy = BOOST_OPENMETHOD_DEFAULT_POLICY>
    auto compiler<Policy>::initialize() -> /*unspecified*/;
I understand that this should be
    template<class Policy = BOOST_OPENMETHOD_DEFAULT_POLICY>
    auto initialize() -> /*unspecified*/;
Yep.
* “OpenMethod supports dynamic loading on operating systems that are capable of handling C++ templates correctly during dynamic link.” I don’t understand what this means, and what operating systems are those (All the usual ones? Or is it a rare feature?)
On Windows, I am not sure. On the Mac? I don't have one. You may need to export or import symbols via config files. That is not my concern; those are OS-related issues. If your dynamic linker works well with templates, it will work well with OpenMethod.
* I miss a complete specification of what facets and other pieces of info a policy can/must have, what happens when some particular facet/piece of info is not explicitly provided, and what facets/pieces of info can be user-defined instead of simply chosen from the available catalog of options in the library.
It was a nightmare to document, and it looks like I did a poor job of it. The set of facets is closed, because the dispatcher and the compiler use them in `if constexpr` blocks, etc. If you add your own facet, the library won't even look at it. The set of facet _implementations_ is open, and designed with the goal that you can add your own.
* The reference includes lots of small classes such as type_id, vptr_type, etc. but it’s not clear to me if these are implementation details or things for which a user can provide alternative implementations within a policy. If the former, I’d remove them from the reference.
They are needed to:
- write facet implementations
- implement intrusive vptrs (with_vptr)
- use the core API

Everything else is in `detail`. Clearly the library has a "basic" interface and an advanced one. I should do a better job of separating them in the doc.
* The section “Core API” explains how to implement the “poke” open method without utility macros. But then the example uses “poke::fn(…)” rather than plain “poke(…)”. What additional step is needed to achieve the “poke(…)” syntax that macro-supported examples accomplish? (Peeking into the code, I see that BOOST_OPENMETHOD includes the definition of a free-standing forwarding function inline auto NAME(ForwarderParameters&&... args)…)
The macro generates a `poke` function that forwards to `method<poke_boost_openmethod(...), void>::fn`. By the way, creating such a forwarder is not easy at all. YOMM2 uses Boost.PP looping macros to process the arguments. And you find OM's macro complicated? ;-) These macros are much simpler, and they can deal with types that contain parentheses, without requiring BOOST_IDENTITY_TYPE.
* Core API: the example uses BOOST_OPENMETHOD_REGISTER, it would be nice to show how to get rid of all macros, including this one.
I do:
    static poke::override<poke_cat> override_poke_cat;
    ...

In C++26, we will be able to use _ instead of inventing a one-time-use identifier. In the meantime, OpenMethod provides a small convenience macro, BOOST_OPENMETHOD_REGISTER.
Core API exists primarily to make it possible to mix open-methods and templates; secondarily to placate people completely allergic to macros ;-)
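For readers curious what that looks like end to end, here is a rough macro-free sketch pieced together from the fragments quoted above. The exact spellings of the tag class and of the method<> template are assumptions; the Core API section of the docs is authoritative.

    #include <boost/openmethod/core.hpp>
    #include <iostream>

    struct Animal { virtual ~Animal() = default; };
    struct Cat : Animal {};

    namespace bom = boost::openmethod;

    struct poke_openmethod;  // tag class that identifies this method (name assumed)
    using poke = bom::method<poke_openmethod(std::ostream&, bom::virtual_ptr<Animal>), void>;

    // An overrider is a plain function...
    auto poke_cat(std::ostream& os, bom::virtual_ptr<Cat>) -> void { os << "hiss\n"; }

    // ...registered via a static object, which is what the BOOST_OPENMETHOD_REGISTER
    // convenience macro discussed above would generate for you.
    static poke::override<poke_cat> add_poke_cat;

    // After class registration and initialize(), call sites use poke::fn:
    //     poke::fn(std::cout, some_animal);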
* A virtual argument in an open method can be defined with virtual_ptr<C> or virtual_<C>. I understand why these two exist--the former is more efficient. But then we also have virtual_shared_ptr and virtual_unique_ptr: what’s the advantage of using virtual_shared_ptr<C> instead of std::shared_ptr<virtual_ptr<C>>?
They're the same. virtual_shared_ptr is a templatized typedef (an alias template).
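Presumably something along these lines, judging from the virtual_ptr<std::unique_ptr<T>> form mentioned earlier in the thread; the exact name and definition are assumptions, not a copy of the actual header:

    #include <boost/openmethod/core.hpp>
    #include <memory>

    // Sketch only: nests the smart pointer inside virtual_ptr.
    template<class Class>
    using virtual_shared_ptr = boost::openmethod::virtual_ptr<std::shared_ptr<Class>>;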
* Elaborating on the previous point, a cleaner approach (IMHO) would be to mark virtual slots with virtual_<C> exclusively, and then let this interoperate smoothly with virtual_ptr <C> (or virtual_ref if the name is finally changed). That is, virtual_ is concerned with the signature of open methods only, and virtual_ptr is concerned with run-time optimization of reference passing, only.
I think I see your point, but there are efficient alternatives to virtual_ptr:
- with_vptr, i.e. intrusive pointer to v-table
- you can take over vptr acquisition by providing a `boost_openmethod_vptr` function
- extern_vptr facets; YOMM2 has an example with one vptr per page of objects
* I understand that BOOST_OPENMETHOD_DECLARE_OVERRIDER and BOOST_OPENMETHOD_DEFINE_OVERRIDER separate the declaration and definition that are covered together in BOOST_OPENMETHOD_OVERRIDE. But what does BOOST_OPENMETHOD_OVERRIDER do? The explanation in the reference is quite opaque to me.
It expands to the specialization of the class template that contains all the overriders of the same name in the current namespace. Actually, you are the godfather of this macro ;-) After your remark about the arguments having a different look in different constructs, and after failing to find a way to use a function type in the macros instead of passing the return type as a separate argument, I introduced the macro to provide a uniform look.
* The docs talk about so-called guide functions, but I can’t seem to find this concept defined anywhere.
https://jll63.github.io/Boost.OpenMethod/#BOOST_OPENMETHOD
* Regarding <boost/openmethod.hpp>, it’s stated that it “[a]lso imports boost::openmethod::virtual_ptr in the global namespace. This is usually regarded as bad practice. The rationale is that OpenMethod emulates a language feature, and virtual_ptr is equivalent to keyword, similar to virtual.” I agree this is bad practice, I’d recommend against it.
I knew this would be controversial. You don't have to use it. You can just say:

    #include <boost/openmethod/core.hpp>
    #include <boost/openmethod/macros.hpp>

I can also document that macros.hpp includes core.hpp, so you can confidently just say:

    #include <boost/openmethod/macros.hpp>

If it is still a big issue, then I can do it like in YOMM2:

    #include <boost/openmethod/keywords.hpp>

...so it is not as prominent as <boost/openmethod.hpp>.

* I can define and override an open method inside a namespace (say, an open method run in my_namespace), so that’s ok. But I can’t override outside the namespace with BOOST_OPENMETHOD_OVERRIDE(my_namespace::run, …). At least, this should be mentioned as a limitation of the macro-based interface.

It is mentioned at the end of https://jll63.github.io/Boost.OpenMethod/#tutorials_headers_and_namespaces
* I know that an internal type_hashed is required because of previous communications with the author around this very point, but I have the feeling this piece of the lib is not adequately explained in the docs and a casual reader may be confused by it.
Probably it should go in the Custom RTTI tutorial. J-L

AMDG On 4/30/25 4:15 PM, Jean-Louis Leroy via Boost wrote:
Hi Joaquín,
* Instead of BOOST_OPENMETHOD(poke, (std::ostream&,virtual_ptr<Animal>), void), consider the possibility of using BOOST_OPENMETHOD(poke, void(std::ostream&, virtual_ptr<Animal>)).
I tried since you last suggested this. Couldn't make it work. The macro generates something like:
auto poke(std::ostream& a, virtual_ptr<Animal> b) -> void { method<poke_method(std::ostream& a, virtual_ptr<Animal> b)>::fn(a, b); }
There is no way to peel off the `void` from the signature with the syntax you suggest.
It doesn't need to be peeled off by the macro. You're only using it as a template parameter, so you can just pass the whole signature as a function type. In Christ, Steven Watanabe

Hi Steven,
I tried since you last suggested this. Couldn't make it work. The macro generates something like:
auto poke(std::ostream& a, virtual_ptr<Animal> b) -> void { method<poke_method(std::ostream& a, virtual_ptr<Animal> b)>::fn(a, b); }
There is no way to peel off the `void` from the signature with the syntax you suggest.
It doesn't need to be peeled off by the macro. You're only using it as a template parameter, so you can just pass the whole signature as a function type.
What about the `void` after the arrow?

It might work if we could define a function using a function type:

    using Signature = void(std::ostream& a, virtual_ptr<Animal> b);

    Signature poke {
        method<poke_method(std::ostream& a, virtual_ptr<Animal> b)>::fn(a, b);
    }

Alas this is illegal. Not sure why. Probably because of the parameter names...

For what it's worth, here is the syntax that I would like:

    BOOST_OPENMETHOD(void poke(std::ostream&, virtual_ptr<Animal>)) { ... }

J-L

On Tue, May 6, 2025 at 5:34 PM Steven Watanabe via Boost <boost@lists.boost.org> wrote:
AMDG
On 4/30/25 4:15 PM, Jean-Louis Leroy via Boost wrote:
Hi Joaquín,
* Instead of BOOST_OPENMETHOD(poke, (std::ostream&,virtual_ptr<Animal>), void), consider the possibility of using BOOST_OPENMETHOD(poke, void(std::ostream&, virtual_ptr<Animal>)).
I tried since you last suggested this. Couldn't make it work. The macro generates something like:
auto poke(std::ostream& a, virtual_ptr<Animal> b) -> void { method<poke_method(std::ostream& a, virtual_ptr<Animal> b)>::fn(a, b); }
There is no way to peel off the `void` from the signature with the syntax you suggest.
It doesn't need to be peeled off by the macro. You're only using it as a template parameter, so you can just pass the whole signature as a function type.
In Christ, Steven Watanabe

AMDG On 5/6/25 4:52 PM, Jean-Louis Leroy via Boost wrote:
Hi Steven,
I tried since you last suggested this. Couldn't make it work. The macro generates something like:
auto poke(std::ostream& a, virtual_ptr<Animal> b) -> void { method<poke_method(std::ostream& a, virtual_ptr<Animal> b)>::fn(a, b); }
There is no way to peel off the `void` from the signature with the syntax you suggest.
It doesn't need to be peeled off by the macro. You're only using it as a template parameter, so you can just pass the whole signature as a function type.
What about the `void` after the arrow?
You mean like this?

    BOOST_OPENMETHOD(poke, (std::ostream& a, virtual_ptr<Animal> b) -> void)

That's possible. You can get a function type by prepending auto, which can be used for most template parameters and (with some metaprogramming) the return type.
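To spell the trick out, a self-contained illustration (not library code):

    #include <ostream>
    #include <type_traits>

    // Prepending `auto` turns "(params) -> ret" into a function type:
    using Sig = auto (std::ostream&, int) -> void;
    static_assert(std::is_same_v<Sig, void(std::ostream&, int)>);

    // "Some metaprogramming" recovers the return type:
    template<class> struct return_of;
    template<class R, class... A> struct return_of<R(A...)> { using type = R; };
    static_assert(std::is_same_v<return_of<Sig>::type, void>);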
It might work if we could define a function using a function type:
using Signature = void(std::ostream& a, virtual_ptr<Animal> b);
Signature poke { method<poke_method(std::ostream& a, virtual_ptr<Animal> b)>::fn(a, b); }
Alas this is illegal. Not sure why. Probably because of the parameter names...
Right. The function type trick works for everything except the actual function definition (where you need the parameter names).
For what it's worth, here is the syntax that I would like:
BOOST_OPENMETHOD(void poke(std::ostream&,virtual_ptr<Animal>)) { ... }
Would be nice. Too bad that it leaves no way to get poke as a usable identifier. In Christ, Steven Watanabe

You mean like this? BOOST_OPENMETHOD(poke, (std::ostream& a, virtual_ptr<Animal> b) -> void)
That's possible. You can get a function type by prepending auto, which can be used for most template parameters and (with some metaprogramming) the return type.
I'll try. That was the "problem" that jumped out at me. Perhaps it's not the only one, and if not, perhaps you'll come up with a solution if I fail to find one ;-)

Alright, I'm going to cut to the chase here and say that we should emphatically accept Boost.OpenMethod.

I was actually quite surprised by the library when I tried it out. I've learned about multi-methods somewhat in the past, but I've never really had a use for them in the applications I've written. That being said, their applicability is obvious and this library makes it really, really nice to do.

I think this library should be accepted into Boost because it's a very strong return to form for us, in the sense that this library really demonstrates how many language features you can emulate via a library, which has always been a spot Boost has occupied. In the ideal world, this would have language support, but this library does a good job of showing us what that hypothetical world would be like.

Plus, the author did their due diligence and has reported overhead, even down to estimated cycles. I didn't verify anything on my actual machine, but if we assume the author is speaking in good faith, I think the library is acceptable from a performance perspective and this doesn't need to be a point of contention.

I don't expect this library to be used in production or adopted overnight by companies. But I think it still belongs in Boost because it's interesting and it's a really solid solution to a common problem. I'd put this library in line with other experimental libraries like Yap, HOF, Spirit, Lambda2, etc. yomm2, the library this is derived from, seems decently popular on Github and it has contributions from others, which is a good point in its favor. So maybe companies are using it. Technically, I've shipped Spirit code, but I don't expect Spirit to be a popular choice at many companies where I've been.

I do have technical feedback. First and foremost, I'd say this library can be dramatically pared down in its interface. What I mean by this is, I'm left wondering if we could replace a lot of the customization points with just a set of common choices for RTTI and error handling. For example:

    struct dynamic_policy
        : boost::openmethod::default_policy::fork<dynamic_policy>::replace<
              boost::openmethod::policies::extern_vptr,
              boost::openmethod::policies::vptr_vector<
                  dynamic_policy, boost::openmethod::policies::indirect_vptr>> {};

Committing this code to any project I've worked on would've left most of my coworkers absolutely checked out. And because this policy is used in the indirect_vptr example, showcasing how to handle dynamic loading, I was left wondering: shouldn't the library just be exposing this stuff already for me? Why is there an example teaching me how to build it? dyn_vptr seems important enough that I shouldn't have to build it.

This applies similarly to the RTTI and error handling stuff. Why do I need custom RTTI? Why would I want custom RTTI? What's the library not doing for me that I should be considering doing myself?

As far as error handling goes,
"When an error is encountered, the program is terminated by a call to abort"
This absolutely should be changed, even if it did make me chuckle at first. It's because "abort" is a scary word, and I think most readers would drop the library instantly if they read that "when an error occurs, we abort your program". I realize now that this library means: "we internally use asserts". But I still think this is a bad default, because the example code in the Error Handling section immediately shows you how to author the thing you actually want: exceptions. This is where the complexity of the library ramps up a lot, as now you're introduced to facets and policies.

I think there's maybe a more sane path to error handling here, which is to use exceptions by default, then maybe something like an assert version which works in debug or release, and then finally a "we just do UB". Kind of like an invalid static_cast<> vs dynamic_cast<>. I don't know how viable these approaches are, but I think there's a more sane default we can take, and I really don't think this needs to be offered so prominently in the interfaces.

The use of trailing return seems to really mess up the asciidoc code examples, especially because it's for stuff like `auto main() -> int`, which is tough to defend in and of itself, even as someone who's a huge trailing return fan.

I had a lot of trouble trying to actually construct a virtual_ptr myself, and the reference docs were not helpful because they don't have any examples.

The Overview section probably belongs at the top.

Do we need BOOST_OPENMETHOD_OVERRIDER? It seems like in the code examples, just calling `next()` is sufficient. This code is already so entrenched in macros, and I'm not sure another one helps its case.

I'm not sure about BOOST_OPENMETHOD_GUIDE either. It seemed like it was needed so we can add overriding functions to types in a different namespace without opening it. I think this is actually something we don't want to encourage, for reasons mentioned by Andrzej about this being a potential footgun. I think forcing users to have to open up the namespace is a good thing and kind of mimics the orphan rule in Rust, which is: you own either the trait or the type.

Otherwise, I think the library is very enjoyable. I think a lot of the stuff it offers seems extraneous on first glance, and I'd much rather have the library just offer me a nice set of tools to pick and choose from instead of me "getting to" build my own solution. But overall, a really solid approach to a ubiquitous problem, and I'd love to see how this library evolves in Boost.

- Christian

Hi Christian, Thanks for the review!
First and foremost, I'd say this library can be dramatically pared down in its interface. What I mean by this, I'm left wondering if we could replace a lot of the customization points with just a set of common choices for RTTI and error handling.
For many years, policies in YOMM2 existed in hiding, just for the benefit of the unit tests. But that changed due to user feedback.
Why do I need custom RTTI? Why would I want custom RTTI? What's the library not doing for me that I should be considering doing myself?
I very much agreed with you, until someone on reddit convinced me in a snap. He was a game developer; he told me that my library was unusable in his field. Why? Because they disable standard RTTI. But why on Earth do that? Because some people look at the strings embedded in the binaries and use the info to reverse-engineer the game, cheat, or whatnot. Games are an important segment of the industry; also, I have been eyeing embedded programming.
For example:
    struct dynamic_policy
        : boost::openmethod::default_policy::fork<dynamic_policy>::replace<
              boost::openmethod::policies::extern_vptr,
              boost::openmethod::policies::vptr_vector<
                  dynamic_policy, boost::openmethod::policies::indirect_vptr>> {};
Committing this code to any project I've worked on would've left most of my coworkers absolutely checked out.
Yeah, that's a mouthful. Since reading your reply, I've had ideas for improvements. I'll experiment with them during the week-end, and come back to this.
And because this policy is used in the indirect_vptr example, showcasing how to handle dynamic loading, I was left wondering, shouldn't the library just be exposing this stuff already for me? Why is there an example teaching me how to build it? dyn_vptr seems important enough that I shouldn't have to build it.
Yes, I think I can provide shortcuts for that.
As far as error handling goes,
"When an error is encountered, the program is terminated by a call to abort"
This absolutely should be changed, even if it did make me chuckle at first. It's because "abort" is a scary word and I think most readers would drop the library instantly if they read that "when an error occurs, we abort your program".
I realize now that this library means: "we internally use asserts".
No, it *literally* calls abort. I was guided by what happens when you call a pure virtual function. The program aborts, *maybe* with a short diagnostic.
But I still think this is a bad default because the example code in the Error Handling section immediately shows you how to author the thing you actually want: exceptions. This is where the complexity of the library ramps up a lot as now you're introduced to facets and policies.
I think there's maybe a more sane path to error handling here which is to use exceptions by default and then maybe something like an assert version which works in debug or release and then finally a "we just do UB".
To clear up any misunderstanding: I think that exceptions are great, far superior to any alternative I know of, except in very specific and rare contexts. So why this choice? A big chunk of the community is allergic to exceptions. Promoting a form of polymorphism that relies on inheritance and tables of function pointers is an uphill battle already. Adding exceptions to the backpack makes it worse. Also, when you call a pure virtual function, it doesn't throw. UB, why not? Sadly, it may be better than throwing.

But you got me thinking about a "more sane path"... N2216 and the papers that precede it sort of sidestep the question. They require every open-method to have a definition. The motivation is: what should happen if no overrider is applicable? Should we throw an exception? But that is unacceptable in the context of embedded programming. So they require every base-method to have an implementation, problem solved. Then they add "solutions" for dealing with ambiguities. In the first paper, it's using covariant return types as tie-breakers. Eventually, in N2216, they pick "an" overrider after every attempt at finding a single best one has failed. In the end, the open-method mechanism *itself* doesn't need to throw or abort, because it has eliminated every reason to do so.

You have to look hard, though, to find an *example* of a base-method implementation. In N2216:

    bool intersect(virtual Shape&, virtual Shape&) { }

This doesn't look good: it silently returns something in a situation that is likely a bug. In Solodkyy's paper "Simplifying the Analysis of C++ Programs":

    int eval(virtual const Expr&) { throw std::invalid_argument("eval"); }

This is better. I think that this is what most base-method implementations will look like. A counter-example is my "animals meet" example: they ignore each other. But it's a made-up example.

So why not synthesize an erroring base-method overrider if none has been provided? I can enforce the papers' requirement of a base-method overrider at runtime (in initialize), maybe even at link time (but it would break some regularities elsewhere). It would be a way of copping out of the issue altogether. It would probably make noexcept open-methods feasible without complicating the policy even more.
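In OpenMethod's macro syntax, such a throwing catch-all for the root class might look like the sketch below. This is something a user can write today, not something the library generates; class registration and the initialize() call are omitted.

    #include <boost/openmethod.hpp>
    #include <stdexcept>

    struct Expr { virtual ~Expr() = default; };

    BOOST_OPENMETHOD(eval, (virtual_ptr<const Expr>), int);

    // Catch-all overrider on the root class: reached only if no more specific overrider applies.
    BOOST_OPENMETHOD_OVERRIDE(eval, (virtual_ptr<const Expr>), int) {
        throw std::invalid_argument("eval: no overrider for this Expr");
    }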
The use of trailing return seems to really mess up the asciidoc code examples, especially because it's for stuff like `auto main() -> int` which is tough to defend in and of itself, even as someone who's a huge trailing return fan.
Since the macros follow that order, I changed all the code to follow it as well, hoping to achieve some sort of subliminal teaching ;-)
I had a lot of trouble trying to actually construct a virtual_ptr myself and the reference docs were not helpful because they don't have any examples.
It's a very static-polymorphism-heavy area: the interaction between objects, virtual_ptrs to objects, and virtual smart pointers to objects. I have to improve that part. And yes, provide short examples.
Do we need BOOST_OPENMETHOD_OVERRIDER? It seems like in the code examples, just calling `next()` is sufficient. This code is already so entrenched in macros and I'm not sure another one helps its case.
I think that `next` is the likely correct way in most cases. But you are not forced to use BOOST_OPENMETHOD_OVERRIDER, it's just there in case you feel you need it.
I'm not sure about BOOST_OPENMETHOD_GUIDE either. It seemed like it was needed so we can add overriding functions to types in a different namespace without opening it. I think this is actually something we don't want to encourage for reasons mentioned by Andrzej about this being a potential footgun. I think forcing users to have to open up the namespace is a good thing and kind of mimics the orphan rule in Rust, which is: you own either the trait or the type.
So this is a late addition after a comment from Joaquin. I leaned towards "agree" because this is legal:

    namespace foo { void bar(); }
    void foo::bar() {}

But yeah, I can easily flip the other way.
Otherwise, I think the library is very enjoyable.
This warms my heart :)

For example:
    struct dynamic_policy
        : boost::openmethod::default_policy::fork<dynamic_policy>::replace<
              boost::openmethod::policies::extern_vptr,
              boost::openmethod::policies::vptr_vector<
                  dynamic_policy, boost::openmethod::policies::indirect_vptr>> {};
Committing this code to any project I've worked on would've left most of my coworkers absolutely checked out.
Yeah, that's a mouthful.
With the changes in the "review" branch, it now reads:

    struct dynamic_policy
        : boost::openmethod::default_policy::fork<dynamic_policy>::with<
              boost::openmethod::policies::indirect_vptr> {};

See policy.adoc for more information and examples.

J-L

    struct dynamic_policy
        : boost::openmethod::default_policy::fork<dynamic_policy>::replace<
              boost::openmethod::policies::extern_vptr,
              boost::openmethod::policies::vptr_vector<
                  dynamic_policy, boost::openmethod::policies::indirect_vptr>> {};
Committing this code to any project I've worked on would've left most of my coworkers absolutely checked out.
With the changes in the review branch, you can now say:

    struct dynamic_policy
        : boost::openmethod::default_policy::fork<dynamic_policy>::add<
              boost::openmethod::policies::indirect_vptr> {};

I am pretty sure I can make this work:

    struct map_policy
        : default_policy::fork<map_policy>::with<vptr_map<map_policy>> {};

I.e. instead of saying `replace<Facet1, Implementation1>::replace<Facet2, Implementation2>` you can just say `with<Implementation1, Implementation2>`. If an implementation of the same facet exists in the policy, it will be replaced with the new one; otherwise the implementation will be added. Not in the review branch yet.

Hi Jean-Louis,

First, thanks for all the effort you put into writing a solid library on an experimental subject. I have a few questions.

1. Concerning the overrider selection, when the perfect match is not found, it is said that an arbitrary choice is made. My tests on virtual inheritance and two-dimensional dispatch showed that it is the last defined compatible overrider that is chosen. Do you confirm? Or what is the algorithm?

2. The class virtual_ptr<Class> accepts only classes that inherit from Class. Would it be possible to accept inheritance-unrelated classes, e.g. std::filesystem::path and std::string?

3. Would it be possible to get rid of the class declarations? Indeed, they are already declared in the template parameter of the virtual_ptr<> arguments, so they could be implicitly defined there.

Yannick

First, thanks for all the effort you put into writing a solid library on an experimental subject.
Hi Yannick! Thank you for the feedback.
1. Concerning the overrider selection, when the perfect match is not found, it is said that an arbitrary choice is made.
Everybody seems to dislike this, including myself. It is the behavior specified in N2216.
My tests on virtual inheritance and two dimensions dispatch showed that this is the last defined compatible overrider that is chosen. Do you confirm ?
I don't. It's a secret ;-) Only I can use that knowledge in my unit tests. The only guarantee is that the same overrider is always picked for the same set of virtual arguments for the duration of the process.
2. The class virtual_ptr<Class> accepts only classes that inherit Class. Would it be possible to accept inheritance unrelated classes e.g. std::filesystem::path and std::string ?
Not directly. What would the method signature and the method call look like? You could wrap them in a class hierarchy, e.g. AbstractPath, FileSystemPath and StringPath. I have some ideas about supporting std::any virtual parameters though, where the overriders would specify an exact type, something like:

    BOOST_OPENMETHOD(delete, (virtual_<std::any> path), void);

    BOOST_OPENMETHOD_OVERRIDE(delete, (std::filesystem::path path), void) { ... }

    BOOST_OPENMETHOD_OVERRIDE(delete, (std::string path), void) { ... }
3. Would it be possible to get rid of the class declarations ? Indeed they are already declared in the template parameter of the virtual_ptr<> arguments so they could be implicitly defined here.
Ah, but you are forgetting about intermediate classes in the hierarchy, between the method and the overrider:

    struct A { ... };
    struct I : A { ... };
    struct B : I { ... };

    BOOST_OPENMETHOD(whatever, (virtual_ptr<A>), void);
    BOOST_OPENMETHOD_OVERRIDE(whatever, (virtual_ptr<B>), void) { ... }

    auto p = std::make_unique<I>();
    whatever(*p);

That being said, with_vptr must be called at every level of the hierarchy, and indeed, it does class registration for you. At some point in the future, reflection might be good enough to help with this.

J-L
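(For context, the "class declarations" discussed in question 3 are the explicit registration step; with the macro interface it is a single line like the one below. The macro name is taken from the library's documentation rather than from this thread, so treat it as an assumption.)

    BOOST_OPENMETHOD_CLASSES(A, I, B);  // registers the whole hierarchy, including the intermediate I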

Another question, concerning the memory footprint. Suppose we have a class hierarchy of size N, so that:

    struct A1 {...};
    struct A2 : A1 {...};
    ...
    struct AN : AN-1 {...};

Now we define an open method with M virtual arguments:

    BOOST_OPENMETHOD(big, (virtual_ptr<A1>, virtual_ptr<A1>, virtual_ptr<A1>, ..., virtual_ptr<A1>), void);

What set of overriders would be the worst case in terms of memory footprint? I suppose it would be the complete set of overriders that covers all the tuple possibilities, whose size is N^M. Then would the size of the dispatch table be O(N^M)?
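(For a rough sense of scale, with illustrative numbers and assuming one pointer-sized cell per tuple in the fully covered case: N = 20 and M = 2 gives 20^2 = 400 cells, about 3 KB with 8-byte entries; M = 3 gives 20^3 = 8,000 cells, about 64 KB.)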

I replied privately instead of posting to the list. Sorry, very much under the weather ATM. Re-posting to the list, with one extra remark.
Another question concerning the memory footprint.
Suppose we have a class hierarchy of size N, so that:

    struct A1 {...};
    struct A2 : A1 {...};
    ...
    struct AN : AN-1 {...};
Now we define an open method with M virtual arguments:

    BOOST_OPENMETHOD(big, (virtual_ptr<A1>, virtual_ptr<A1>, virtual_ptr<A1>, ..., virtual_ptr<A1>), void);
What set of overriders would be the worst case in terms of memory footprint ? I suppose it would be the complete set of overriders that covers all the possible tuples, for which the size is N^M. Then would the size of the dispatch table be O(N^M) ?
What set of overriders would be the worst case in terms of memory footprint ? I suppose it would be the complete set of overriders that covers all the possible tuples, for which the size is N^M.
If each virtual tuple gets its own overrider? In that case, indeed, the dispatch table will be N^M. And that wouldn't even be bad. It would just be what it takes. Any solution in the tune of switches or M-dispatch (and good luck figuring it out, you'd have to generate it) would cost more in terms of code space than it saves in terms of data space. Unless you start playing tricks like putting the shortest possible integers in the dispatch table, indexing a vector of function pointers.
Otherwise, it depends. In the extreme case where there is only one overrider for the fallback case (all virtual_ptr<A1>s), the table will have just one cell. The worst, I think, would be overriders on the diagonal only. Maximum waste.
On Tue, May 6, 2025 at 1:06 AM Yannick Le Goc <ylegoc@gmail.com> wrote:
Another question concerning the memory footprint.
Suppose we have a class hierarchy of size N so that: struct A1 {...} struct A2 : A1 {...} ... struct AN : AN-1 {...}
Now we define an open method with M virtual arguments BOOST_OPENMETHOD(big, (virtual_ptr<A1>, virtual_ptr<A1>, virtual_ptr<A1>, ..., virtual_ptr<A1>), void);
What set of overriders would be the worst case in terms of memory footprint ? I suppose it would be the complete set of overriders that covers all the possible tuples, for which the size is N^M. Then would the size of the dispatch table be O(N^M) ?
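To put rough numbers on the estimate discussed above (plain arithmetic only, saying nothing about how the library actually lays out or compresses its tables): with N = 10 classes and M = 3 virtual parameters, the full cross-product is 10^3 = 1000 virtual tuples, so the answer above implies a dispatch table on the order of 1000 entries; with M = 2 it drops to 100.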
On Mon, May 5, 2025 at 3:11 AM Jean-Louis Leroy via Boost <boost@lists.boost.org> wrote:
First, thanks for all the effort you put into writing a solid library on an experimental subject.
Hi Yannick! Thank you for the feedback.
1. Concerning the overrider selection, when the perfect match is not found, it is said that an arbitrary choice is made.
Everybody seems to dislike this, including myself. It is the behavior specified in N2216.
My tests on virtual inheritance and two dimensions dispatch showed that this is the last defined compatible overrider that is chosen. Do you confirm ?
I don't. It's a secret ;-) Only I can use that knowledge in my unit tests.
The only guarantee is that the same overrider is always picked for the same set of virtual arguments for the duration of the process.
2. The class virtual_ptr<Class> accepts only classes that inherit Class. Would it be possible to accept inheritance unrelated classes e.g. std::filesystem::path and std::string ?
Not directly. What would the method signature, and the method call look like? You could wrap them in a class hierarchy, e.g. AbstractPath, FileSystemPath and StringPath. I have some ideas about supporting std::any virtual parameters though, where the overriders would specify an exact type, something like:
BOOST_OPENMETHOD(delete, (virtual_<std::any> path), void);
BOOST_OPENMETHOD_OVERRIDE(delete, (std::filesystem::path path), void) { ... }
BOOST_OPENMETHOD_OVERRIDE(delete, (std::string path), void) { ... }
3. Would it be possible to get rid of the class declarations ? Indeed they are already declared in the template parameter of the virtual_ptr<> arguments so they could be implicitly defined here.
Ah but you are forgetting about intermediate classes in the hierarchy, between the method and the overrider:
struct A { ... }; struct I : A { ... }; struct B : I { ... };
BOOST_OPENMETHOD(whatever, (virtual_ptr<A>), void);
BOOST_OPENMETHOD_OVERRIDE(whatever, (virtual_ptr<B>), void) { ... }
auto p = std::make_unique<I>(); whatever(*p);
That being said, with_vptr must be called at every level of the hierarchy, and indeed, it does class registration for you.
At some point in the future, reflection might be good enough to help with this.
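For illustration, a minimal sketch of the registration this implies, assuming BOOST_OPENMETHOD_CLASSES (used elsewhere in this thread) is the class registration mechanism; if it is not, read it as pseudo-code:
// The intermediate class I must be registered even though it never appears in a
// method or overrider signature, so the dispatcher knows it sits between A and B.
BOOST_OPENMETHOD_CLASSES(A, I, B);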
J-L
On Sun, May 4, 2025 at 4:11 PM Yannick Le Goc <ylegoc@gmail.com> wrote:
Hi Jean-Louis,
First, thanks for all the effort you put into writing a solid library on an experimental subject. I have a few questions.
1. Concerning the overrider selection, when the perfect match is not found, it is said that an arbitrary choice is made. My tests on virtual inheritance and two dimensions dispatch showed that this is the last defined compatible overrider that is chosen. Do you confirm ? Or what is the algorithm ?
2. The class virtual_ptr<Class> accepts only classes that inherit Class. Would it be possible to accept inheritance unrelated classes e.g. std::filesystem::path and std::string ?
3. Would it be possible to get rid of the class declarations ? Indeed they are already declared in the template parameter of the virtual_ptr<> arguments so they could be implicitly defined here.
Yannick
On Sun, May 4, 2025 at 11:15 AM Jean-Louis Leroy via Boost <boost@lists.boost.org> wrote:
struct dynamic_policy : boost::openmethod::default_policy::fork<dynamic_policy>::replace< boost::openmethod::policies::extern_vptr, boost::openmethod::policies::vptr_vector< dynamic_policy, boost::openmethod::policies::indirect_vptr>> {};
Committing this code to any project I've worked on would've left most of my coworkers absolutely checked out.
With the changes in the review branch, you can now say:
struct dynamic_policy : boost::openmethod::default_policy::fork<dynamic_policy>::add< boost::openmethod::policies::indirect_vptr> {};
I am pretty sure I can make this work:
struct map_policy : default_policy::fork<map_policy>::with<vptr_map<map_policy>> {};
I.e. instead of saying `replace<Facet1, Implementation1>::replace<Facet2, Implementation2>` you can just say `with<Implementation1, Implementation2>`. If an implementation of the same facet exists in the policy, it will be replaced with the new one, otherwise the implementation will be added. Not in the review branch yet.

I'm not sure about BOOST_OPENMETHOD_GUIDE either. It seemed like it was needed so we can add overriding functions to types in a different namespace without opening it. I think this is actually something we don't want to encourage for reasons mentioned by Andrzej about this being a potential footgun.
I added it because we can add a function to a namespace "from outside": namespace foo {} void foo::bar() { ... } ...but it drags users into the internals of the macros. I am probably going to remove it, and document that BOOST_OPENMETHOD_OVERRIDE must be in a namespace in which the method is visible by normal lookup rules.

On Sun, 27 Apr 2025 at 15:15, Дмитрий Архипов via Boost < boost@lists.boost.org> wrote:
Dear Boost community. The peer review of the proposed Boost.OpenMethod will start on 28th of April and continue until May 7th. OpenMethods implements open methods in C++. Those are "virtual functions" defined outside of classes.
Thank you, this is a welcome development.
I am working through the documents and found myself a little troubled that there is a need to initialise the subsystem in main():
#include <boost/openmethod/compiler.hpp> // only needed in the file that calls boost::openmethod::initialize()
auto main() -> int { boost::openmethod::initialize(); // ... }
This prevents this library being used as a dependent subsystem within other libraries, as the publisher of the library would need to communicate the need for this manual initialisation down the chain of dependent projects. This is brittle.
It occurs to me that this can be solved using the Schwartz Counter (or Nifty Counter) method, which, because of inline variables, works fully using only header files as of C++17.
In case this is new information, the Schwartz Counter is the mechanism by which std::cout and std::cin are made available by the standard library.
Thanks for your time.

I am working through the documents and found myself a little troubled that there is a need to initialise the subsystem in main():
I know...I don't like it either...the Dlang version auto-initializes.
It occurs to me that this can be solved using the Schwartz Counter (or Nifty Counter) method, which because of inline variables, works fully using only header files as of C++17.
I don't think so. The problem is that initialize() must be called after all the static registrar objects have executed, and before the first method call. With msvc, I could use init_seg. But I see no general solution. But I would be delighted to be proven wrong.
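For readers who have not met the idiom under discussion, a minimal header-only sketch with hypothetical names (this is not OpenMethod code). Note, per the point above, that each guard runs before the statics defined after it in its own translation unit, but it has no way of knowing when the registrars of all the other translation units have run:
// schwartz_counter_sketch.hpp - hypothetical subsystem, C++17
struct subsystem {
    void init()     { /* build tables */ }
    void shutdown() { /* tear them down */ }
};
inline subsystem the_subsystem;  // C++17 inline variable: one instance shared across TUs
inline int nifty_counter = 0;    // also shared across TUs
struct subsystem_initializer {
    subsystem_initializer()  { if (nifty_counter++ == 0) the_subsystem.init(); }
    ~subsystem_initializer() { if (--nifty_counter == 0) the_subsystem.shutdown(); }
};
static subsystem_initializer nifty_guard; // one guard per translation unit that includes this header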
Thanks for your time.
Thank you for your feedback!
J-L
On Fri, May 2, 2025 at 7:24 AM Richard Hodges via Boost <boost@lists.boost.org> wrote:
On Sun, 27 Apr 2025 at 15:15, Дмитрий Архипов via Boost < boost@lists.boost.org> wrote:
Dear Boost community. The peer review of the proposed Boost.OpenMethod will start on 28th of April and continue until May 7th. OpenMethods implements open methods in C++. Those are "virtual functions" defined outside of classes.
Thank you, this is a welcome development.
I am working through the documents and found myself a little troubled that there is a need to initialise the subsystem in main():
#include <boost/openmethod/compiler.hpp> // only needed in the file that calls boost::openmethod::initialize() auto main() -> int { boost::openmethod ::initialize(); // ... }
This prevents this library being used as a dependent subsystem within other libraries, as the publisher of the library would need to communicate the need for this manual initialisation down the chain of dependent projects. This is brittle.
It occurs to me that this can be solved using the Schwartz Counter (or Nifty Counter) method, which because of inline variables, works fully using only header files as of C++17.
In case this is new information, the Schwartz Counter is the mechanism by which std::cout and std::cin are made available by the standard library.
Thanks for your time.

On Fri, 2 May 2025 at 20:24, Jean-Louis Leroy via Boost < boost@lists.boost.org> wrote:
I am working through the documents and found myself a little troubled that there is a need to initialise the subsystem in main():
I know...I don't like it either...the Dlang version auto-initializes.
It occurs to me that this can be solved using the Schwartz Counter (or Nifty Counter) method, which because of inline variables, works fully using only header files as of C++17.
I don't think so. The problem is that initialize() must be called after all the static registrar objects have executed, and before the first method call.
Why? Does dispatch lookup not work with partially populated dispatch maps?
With msvc, I could use init_seg. But I see no general solution. But I would be delighted to be proven wrong.
Thanks for your time.
Thank you for your feedback!
J-L
On Fri, May 2, 2025 at 7:24 AM Richard Hodges via Boost <boost@lists.boost.org> wrote:
On Sun, 27 Apr 2025 at 15:15, Дмитрий Архипов via Boost < boost@lists.boost.org> wrote:
Dear Boost community. The peer review of the proposed Boost.OpenMethod will start on 28th of April and continue until May 7th. OpenMethods implements open methods in C++. Those are "virtual functions" defined outside of classes.
Thank you, this is a welcome development.
I am working through the documents and found myself a little troubled that there is a need to initialise the subsystem in main():
#include <boost/openmethod/compiler.hpp> // only needed in the file that calls boost::openmethod::initialize() auto main() -> int { boost::openmethod ::initialize(); // ... }
This prevents this library being used as a dependent subsystem within other libraries, as the publisher of the library would need to communicate the need for this manual initialisation down the chain of dependent projects. This is brittle.
It occurs to me that this can be solved using the Schwartz Counter (or Nifty Counter) method, which because of inline variables, works fully using only header files as of C++17.
In case this is new information, the Schwartz Counter is the mechanism by which std::cout and std::cin are made available by the standard library.
Thanks for your time.

Some further questions:
1. I've tried playing with custom RTTI, because that's what the clang AST API uses. When implementing boost_openmethod_vptr, I used default_policy::static_vptr<T>. But that seems to defeat the purpose of policies and scoping - is there a way to obtain the policy that invoked boost_openmethod_vptr?
2. It sounds strange to me that policies have two functions: a. they define behavior (e.g. what to do in case of error) and b. they act as a type registry. That confused me a lot, because in other libraries I got the impression that policies only do a. Is there anything preventing the separation of concerns here?
3. Some of the complexity with policies (like needing a fork function) seems to stem from the point above. What do you think? For instance, couldn't facets be made regular members? Then add/fork can be implemented by just inheriting from a base policy, using regular C++.
4. I had the same impression as Joaquin that virtual_ should be used to mark virtual arguments in method definitions, and virtual_ptr and regular references in overriders. What do you think?
5. I got the impression that virtual_ptr can't be used with custom RTTI. Is that true?
Thanks,
Ruben.

1. I've tried playing with custom RTTI, because that's what the clang AST API uses. When implementing boost_openmethod_vptr, I used default_policy::static_vptr<T>. But that seems to defeat the purpose of policies and scoping - is there a way to obtain the policy that invoked boost_openmethod_vptr?
Ah, good point. I'm going to look into this. Since we cannot specialize function templates, I'll probably have to pass a Policy& as a parameter.
Logically, with_vptr should be usable several times in the same hierarchy, adding several vptrs to the objects. It makes sense, because external vptrs already permit associating several vptrs to the same object.
Can you share your code? My first idea would be a clang policy. I guess that you put a switch in boost_openmethod_vptr?
2. It sounds strange to me that policies have two functions: a. they define behavior (e.g. what to do in case of error) and b. they act as a type registry. That confused me a lot, because in other libraries I got the impression that policies only do a. Is there anything preventing the separation of concerns here?
In general, policies can have (static) state. For example, vectored_error_handler contains a static std::function. Each policy that uses it needs its own. That's the reason for all the CRTPing, and why `fork` rebinds the first template argument of all the templatized facets to the new policy. All the methods in the same policy count, at dispatch time, on data created by initialize<Policy>(). That is why methods have to be scoped in the policy. The same holds for class registrars: the type_ids they store at static construction time are not necessarily the same thing in different policies.
3. Some of the complexity with policies (like needing a fork function) seems to stem from the point above. What do you think? For instance, couldn't facets be made regular members? Then add/fork can be implemented by just inheriting from a base policy, using regular C++.
Do you mean static or instance members? If it is the latter, then policies would become objects. Which we can pass as template parameters, but it would run into difficulties, because we don't have universal template parameters yet. A simple example: use_classes. It uses std::is_base_of to detect if the last class is a policy. So a policy has got to be a type (unless we change the contracts a lot). If you mean static members, we are back to the problem of separating my_policy's error stream from your_policy's. Policies and facets are definitely advanced stuff. I expect that, over time, we will have a header with a clang policy, one with a Unity RTTI policy, etc. In a team or an org, the local expert will create a policy specific to their need - say, a minimal policy suitable for embedded programming, without any form of RTTI (using only final constructs), hashing, diagnostic output, etc. It will be installed as the default policy via BOOST_OPENMETHOD_DEFAULT_POLICY, something that brings us very close to ODR violations, and requires expertise. Also, I would expect library authors who use OpenMethod as an implementation detail, to use their own private policy.
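A hedged sketch of what "installing" such a policy might look like; the assumption (not checked against the reference) is that the macro must be defined, identically in every translation unit, before the library headers are included, which is where the ODR concern mentioned above comes from. my_org::embedded_policy and the prefix header are hypothetical:
// my_org/openmethod_prefix.hpp - hypothetical project-wide prefix header (sketch only)
#include "my_org/embedded_policy.hpp"   // defines my_org::embedded_policy (hypothetical)
#define BOOST_OPENMETHOD_DEFAULT_POLICY my_org::embedded_policy
#include <boost/openmethod.hpp>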
4. I had the same impression as Joaquin that virtual_ should be used to mark virtual arguments in method definitions, and virtual_ptr and regular references in overriders. What do you think?
So, to make sure that we are on the same page, you mean:
BOOST_OPENMETHOD(poke, (std::ostream&, virtual_<Animal>), void);
BOOST_OPENMETHOD_OVERRIDE(poke, (std::ostream& os, virtual_ptr<Cat> cat), void) { ... }
The problem with this: BOOST_OPENMETHOD would need to foresee what the overriders will look like. It could be:
BOOST_OPENMETHOD_OVERRIDE(poke, (std::ostream& os, Cat& cat), void) { ... }
...when using with_vptr. Or, if the user doesn't need the best performance, maybe the body of the overriders does enough work to make the cost of the extra instructions irrelevant.
A secondary benefit: with virtual_ptr in both declarations and overriders, it is easier to explain the (basic) rules. If you look at YOMM2's doc or my talks, I often talk about "peeling off" the "virtual_ decorators". With virtual_ptr at all levels, it is more straightforward to explain. Finally, it is (a bit) closer to the N2216 syntax.
5. I got the impression that virtual_ptr can't be used with custom RTTI. Is that true?
No it's not, that would be very disappointing ;-)
virtual_ptr obtains the vptr from the policy, which is the second template parameter with the default value BOOST_OPENMETHOD_DEFAULT_POLICY.
J-L
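As a hedged illustration of that last point (my_policy is hypothetical, and the construction of a virtual_ptr directly from a reference is an assumption based on this thread rather than a quote from the reference):
// The policy is chosen via the second template parameter, so the vptr is
// obtained through my_policy's facets instead of the default policy's.
boost::openmethod::virtual_ptr<Animal, my_policy> p(a); // 'a' is an Animal lvalue (assumption)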

On Sat, 3 May 2025 at 22:22, Jean-Louis Leroy via Boost <boost@lists.boost.org> wrote:
1. I've tried playing with custom RTTI, because that's what the clang AST API uses. When implementing boost_openmethod_vptr, I used default_policy::static_vptr<T>. But that seems to defeat the purpose of policies and scoping - is there a way to obtain the policy that invoked boost_openmethod_vptr?
Ah, good point. I'm going to look into this. Since we cannot specialize function templates, I'll probably have to pass a Policy& as a parameter.
Logically, with_vptr should be usable several times in the same hierarchy, adding several vptrs to the objects. It makes sense, because external vptrs already permit associating several vptrs to the same object.
Can you share your code? My first idea would be a clang policy. I guess that you put a switch in boost_openmethod_vptr?
Since building with the clang API itself is a nightmare, I've created a toy example, but with the same idea:
enum class kind : std::uintptr_t { unknown, n1, n2 };
class base_node { kind k_; protected: base_node(kind k) noexcept : k_(k) { } public: kind getKind() const { return k_; } };
class node1 : public base_node { public: node1() noexcept : base_node(kind::n1) { } };
class node2 : public base_node { public: node2() noexcept : base_node(kind::n2) { } };
boost::openmethod::vptr_type boost_openmethod_vptr(const base_node& b) { switch (b.getKind()) { case kind::n1: return boost::openmethod::default_policy::static_vptr<node1>; case kind::n2: return boost::openmethod::default_policy::static_vptr<node2>; default: return boost::openmethod::default_policy::static_vptr<base_node>; } }
[snip]
3. Some of the complexity with policies (like needing a fork function) seems to stem from the point above. What do you think? For instance, couldn't facets be made regular members? Then add/fork can be implemented by just inheriting from a base policy, using regular C++.
Do you mean static or instance members? If it is the latter, then policies would become objects. Which we can pass as template parameters, but it would run into difficulties, because we don't have universal template parameters yet. A simple example: use_classes. It uses std::is_base_of to detect if the last class is a policy. So a policy has got to be a type (unless we change the contracts a lot).
If you mean static members, we are back to the problem of separating my_policy's error stream from your_policy's.
I see, I meant static members. I was thinking of somehow splitting the state part off onto another object - but it might complicate things further.
4. I had the same impression as Joaquin that virtual_ should be used to mark virtual arguments in method definitions, and virtual_ptr and regular references in overriders. What do you think?
So, to make sure that we are on the same page, you mean:
BOOST_OPENMETHOD(poke, (std::ostream &, virtual_<Animal>), void);
BOOST_OPENMETHOD_OVERRIDE( poke, (std::ostream & os, virtual_ptr<Cat> cat), void) { ... }
Yes, this is what I mean.
The problem with this: BOOST_OPENMETHOD would need to foresee what the overrides will look like. It could be:
BOOST_OPENMETHOD_OVERRIDE( poke, (std::ostream & os, Cat& cat), void) { ... }
...when using with_vptr. Or if the user doesn't need the best performance, maybe the body of the overriders does enough work to make the cost of the extra instructions irrelevant.
Yes, either of these would be valid:
BOOST_OPENMETHOD_OVERRIDE(poke, (std::ostream& os, Cat& cat), void) { ... }
BOOST_OPENMETHOD_OVERRIDE(poke, (std::ostream& os, virtual_ptr<Cat> cat), void) { ... }
And I guess you'd need some if constexpr's to create the virtual_ptr or not, and some logic to handle the case where the user declared both of them, which should be illegal.
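A rough, self-contained sketch of the kind of compile-time branching this would require; is_virtual_ptr and adapt_argument are hypothetical helpers written for this message, not library code:
#include <type_traits>
#include <utility>
template <class T> struct is_virtual_ptr : std::false_type {};
// a real implementation would specialize this trait for virtual_ptr<...>
template <class Param, class Arg>
decltype(auto) adapt_argument(Arg&& arg) {
    if constexpr (is_virtual_ptr<Param>::value) {
        return Param(arg);               // the overrider wants a virtual_ptr: wrap the reference
    } else {
        return std::forward<Arg>(arg);   // the overrider wants a plain reference: pass it through
    }
}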
A secondary benefit: with virtual_ptr in both declarations and overriders, it is easier to explain the (basic) rules. If you look at YOMM2's doc or my talks, I often talk about "peeling off" the "virtual_ decorators". With virtual_ptr at all levels, it is more straightforward to explain.
In regular C++ OOP, we already say "virtual" in base classes and "override" in implementations, so I'm not sure about this.
5. I got the impression that virtual_ptr can't be used with custom RTTI. Is that true?
No it's not, that would be very disappointing ;-)
virtual_ptr obtains the vptr from the policy, which is the second template parameter with the default value BOOST_OPENMETHOD_DEFAULT_POLICY.
When explaining virtual_, the documentation states:
"By itself, virtual_ does not provide any benefits. Passing the virtual argument by reference almost compiles to the same code as creating a virtual_ptr, using it for one call, then throwing it away. The only difference is that the virtual argument is passed as one pointer instead of two. However, we can now customize how the vptr is obtained. When the method sees a virtual_ parameter, it looks for a boost_openmethod_vptr function that takes the parameter (by const reference), and returns a vptr_type. ..."
Which suggests that my boost_openmethod_vptr won't be called unless I use virtual_. How would I write my example above so it calls boost_openmethod_vptr?
Cheers,
Ruben.

1. I've tried playing with custom RTTI, because that's what the clang AST API uses. When implementing boost_openmethod_vptr, I used default_policy::static_vptr<T>. But that seems to defeat the purpose of policies and scoping - is there a way to obtain the policy that invoked boost_openmethod_vptr?
Ah, good point. I'm going to look into this. Since we cannot specialize function templates, I'll probably have to pass a Policy& as a parameter.
Logically, with_vptr should be usable several times in the same hierarchy, adding several vptrs to the objects. It makes sense, because external vptrs already permit associating several vptrs to the same object.
Can you share your code? My first idea would be a clang policy. I guess that you put a switch in boost_openmethod_vptr?
Since building with the clang API itself is a nightmare, I've created a toy example, but with the same idea:
enum class kind : std::uintptr_t { unknown, n1, n2 };
class base_node { kind k_;
protected: base_node(kind k) noexcept : k_(k) { }
public: kind getKind() const { return k_; } };
class node1 : public base_node { public: node1() noexcept : base_node(kind::n1) { } };
class node2 : public base_node { public: node2() noexcept : base_node(kind::n2) { } };
boost::openmethod::vptr_type boost_openmethod_vptr(const base_node& b) { switch (b.getKind()) { case kind::n1: return boost::openmethod::default_policy::static_vptr<node1>; case kind::n2: return boost::openmethod::default_policy::static_vptr<node2>; default: return boost::openmethod::default_policy::static_vptr<base_node>; } }
[snip]
5. I got the impression that virtual_ptr can't be used with custom RTTI. Is that true?
No it's not, that would be very disappointing ;-)
virtual_ptr obtains the vptr from the policy, which is the second template parameter with the default value BOOST_OPENMETHOD_DEFAULT_POLICY.
When explaining virtual_, the documentation states:
"By itself, virtual_ does not provide any benefits. Passing the virtual argument by reference almost compiles to the same code as creating a virtual_ptr, using it for one call, then throwing it way. The only difference is that the virtual argument is passed as one pointer instead of two.
However, we can now customize how the vptr is obtained. When the method sees a virtual_ parameter, it looks for a boost_openmethod_vptr function that takes the parameter (by const reference), and returns a vptr_type. ..."
Which suggests that my boost_openmethod_vptr won't be called unless I use virtual_. How would I write my example above so it calls boost_openmethod_vptr?
Answering my own question again, and for reference: if you implement a custom RTTI facet, you don't need boost_openmethod_vptr - its role is played by the facet's dynamic_type function. What's the rationale behind having these two ways of achieving the same thing?
Thanks,
Ruben.

2. It sounds strange to me that policies have two functions: a. they define behavior (e.g. what to do in case of error) and b. they act as a type registry. That confused me a lot, because in other libraries I got the impression that policies only do a. Is there anything preventing the separation of concerns here?
In general, policies can have (static) state. For example, vectored_error_handler contains a static std::function. Each policy that uses it needs its own. That's the reason for all the CRTPing, and why `fork` rebinds the first template argument of all the templatized facets to the new policy.
All the methods in the same policy count, at dispatch time, on data created by initialize<Policy>(). That is why methods have to be scoped in the policy. The same holds for class registrars: the type_ids they store at static construction time are not necessarily the same thing in different policies.
3. Some of the complexity with policies (like needing a fork function) seems to stem from the point above. What do you think? For instance, couldn't facets be made regular members? Then add/fork can be implemented by just inheriting from a base policy, using regular C++.
Do you mean static or instance members? If it is the latter, then policies would become objects. Which we can pass as template parameters, but it would run into difficulties, because we don't have universal template parameters yet. A simple example: use_classes. It uses std::is_base_of to detect if the last class is a policy. So a policy has got to be a type (unless we change the contracts a lot).
If you mean static members, we are back to the problem of separating my_policy's error stream from your_policy's.
As I think I didn't explain myself enough in my previous message, I'd like to expand on what I meant by this.
Take the current vptr_vector facet, for example (simplified):
template<class Policy, typename Facet = void>
class vptr_vector {
    static std::vector<element_type> vptrs;
public:
    template<typename ForwardIterator>
    static auto register_vptrs(ForwardIterator first, ForwardIterator last);
};
And the current basic_policy implementation (also simplified):
// domain<Policy> contains static members
template<class Policy, class... Facets>
struct basic_policy : abstract_policy, domain<Policy>, Facets... {
    using facets = mp11::mp_list<Facets...>;
};
Could it be possible to write:
// Members are no longer static
template<class Policy, typename Facet = void>
class vptr_vector {
    std::vector<element_type> vptrs;
public:
    template<typename ForwardIterator>
    auto register_vptrs(ForwardIterator first, ForwardIterator last);
};
// domain now contains regular members (no longer static)
template<class Policy, class... Facets>
struct basic_policy : abstract_policy, domain, Facets... {
    std::tuple<Facets...> facets;
};
// A method_container is linked to a policy, and is what you use to register methods
struct debug_method_container {
    static debug_policy policy; // Instead of many static members, just this one
};
// Register classes and methods
BOOST_OPENMETHOD_CLASSES(base_node, node1, node2, debug_method_container)
If this is possible, you could simplify how you store facets, making them regular members so you don't need add, replace and fork:
// Marker class to say "this facet is not implemented"
struct facet_not_implemented {};
struct abstract_policy {
    facet_not_implemented rtti;
    facet_not_implemented extern_vptr;
    facet_not_implemented type_hash;
    facet_not_implemented error_handler;
    facet_not_implemented runtime_checks;
    facet_not_implemented error_output;
    facet_not_implemented trace_output;
};
struct release_policy : abstract_policy {
    facet_not_implemented rtti;
    fast_perfect_hash type_hash;
    vptr_vector<fast_perfect_hash> extern_vptr;
    vectored_error_handler<facet_not_implemented> error_handler;
};
struct debug_policy : release_policy {
    runtime_checks runtime_checks;
    basic_error_output<> error_output;
    basic_trace_output<> trace_output;
    vectored_error_handler<basic_error_output<>> error_handler;
};
This avoids CRTP and all the facets machinery. It also makes dependencies between facets explicit, which I think is good. Currently, replacing type_hash seems to influence how vptr_vector behaves, but I could only arrive at this conclusion by inspecting the source code.
What are your thoughts on this proposal?

вс, 27 апр. 2025 г. в 16:15, Дмитрий Архипов <grisumbras@gmail.com>:
Dear Boost community. The peer review of the proposed Boost.OpenMethod will start on 28th of April and continue until May 7th.
I would like to remind everyone that tomorrow is the last day to send your reviews.

On Sun, 27 Apr 2025 at 15:15, Дмитрий Архипов via Boost <boost@lists.boost.org> wrote:
Dear Boost community. The peer review of the proposed Boost.OpenMethod will start on 28th of April and continue until May 7th. OpenMethods implements open methods in C++. Those are "virtual functions" defined outside of classes. They allow avoiding god classes, and visitors and provide a solution to the Expression Problem, and the banana-gorilla-jungle problem. They also support multiple dispatch. This library implements most of Stroustrup's multimethods proposal, with some new features, like customization points and inter-operability with smart pointers. And despite all that open-method calls are fast - on par with native virtual functions.
Hi all,
This is my formal review of the proposed Boost.OpenMethod. Thanks Jean-Louis for proposing the library, and Dmitry for managing the review. I'll follow the points suggested by the review manager. I've numbered my comments to make replying easier.
* What is your evaluation of the design?
1. The library achieves the goal of emulating open methods without language support, which is quite impressive. The amount of macros in the interface makes it a little bit dirty, but I guess that's unavoidable when trying to emulate a language feature. I have to say I'm not a big fan of open methods because they move a lot of checks that usually happen at compile time to runtime. But I acknowledge that this pays off with extra flexibility. Taking into account the popularity of yomm2, the predecessor of the proposed library, it looks like open methods fit a real need.
2. That said, I think that policies and facets would benefit from some re-designing.
a. Aside from rtti, the other facets have no formal specification of their requirements. This means that there is no formal API for these types. This makes it prone to breaking users in subsequent releases, and may block evolution for fear of breaking users.
b. The design heavily relies on static members in policies and facets, which are effectively global variables. It then adds a set of tools and techniques (CRTP, basic_policy::add, basic_policy::fork) to deal with the problems caused by the former. I think that these problems can be avoided altogether by redesigning the system.
c. Policies have two functions: stating how to do things (i.e. stating that vptrs should be stored in a vector) and acting as a type registry (i.e. allocating the concrete vector for vptrs as a static member). I think that this breaks separation of concerns. Policies should only state how to do things, and a separate "type registry" entity should be in charge of creating the static variables required to register types. I've already suggested a possible design for this in a previous message (https://lists.boost.org/Archives/boost/2025/05/259451.php), so I won't repeat it here. If the library ends up being accepted, I can make a PR with the proposed changes.
3. The boost_openmethod_vptr customization point does not seem to play well with policy scoping. Given a type to be inspected, T, the signature is:
std::uintptr_t boost_openmethod_vptr(const T&);
There is no way to know which policy T was registered with. This function normally uses Policy::static_vptr, so this is a problem. Ideally, it should be redesigned to cope well with policy scoping.
4. Compile-time errors are difficult to interpret. For example, at one point, I wrote:
BOOST_OPENMETHOD(print, (virtual_<base_node>), void);
This is a mistake because the argument to virtual_ should be a reference type. This is a snippet of the compiler error I got:
"error: no member named 'peek' in 'boost::openmethod::detail::parameter_traits<boost::openmethod::virtual_<base_node>, custom_policy>'"
I had to look at the implementation to see what the problem was, and it took me some time to figure it out. I think that this problem can be alleviated by using C++20 concepts in places like virtual_. Since the library's minimum standard is C++17, you'll need some macro machinery to make this work. Other libraries like Asio do it with good results.
5. virtual_<T&>, with T& not being polymorphic, compiles. In the program I was writing, this was a programming error - I had forgotten a virtual destructor. This behavior seems to exist to support "programs that call methods solely via virtual_ptrs created with the "final" construct", as stated in minimal_rtti's docs. When would I want to do this? Does this not defeat the purpose of using open methods themselves?
6. The <boost/openmethod.hpp> file is unusual.
It does not include all the headers in the library, and it pours definitions into the global namespace. I'd advise against doing this, because it's not what most users expect. My recommendation is to:
a. Rename this file to <boost/openmethod/keywords.hpp> or any other name that you think is helpful.
b. Make <boost/openmethod.hpp> include all headers except for keywords.hpp.
Another possibility would be to create a namespace boost::openmethod::keywords that contains using declarations for virtual_, virtual_ptr and similar pseudo-keywords. The user can then consume these with a using namespace directive, similar to how custom literals are consumed.
7. boost::openmethod::type_id is a type alias for std::uintptr_t. I would try to avoid such an alias, since it doesn't add much and makes code more difficult to read. It might be misleading given the existence of the typeid operator.
* What is your evaluation of the implementation?
I haven't examined it in detail. Aside from my comments on static members, the pieces I looked into seemed good. The CI would benefit from some extra coverage - at the moment, only the latest compiler versions, with their default C++ standard, seem to be checked.
* What is your evaluation of the documentation?
It contains enough information to cover the basic use cases, but it needs work. These are my main comments:
8. I think that the idea of splitting the docs into a "basic" and an "advanced" section, as proposed by the author before, is great and should happen.
9. I dislike the single-page architecture. I know that Boost.Unordered (https://www.boost.org/doc/libs/1_88_0/libs/unordered/doc/html/unordered/intr...) uses Antora to render asciidoc as multiple pages - maybe it's a technology worth exploring.
10. Facets need a formal API - a section that documents which members are required and what signatures they should have.
11. In general, some concepts seem to be used without being introduced first:
a. Overrider. When I first read through the docs, I assumed it meant "the set of functions that implement an abstract method". When I reached the description of BOOST_OPENMETHOD_OVERRIDER, I saw that it refers to a concrete C++ struct with a concrete naming scheme and set of members. I think that this should be explained better, before BOOST_OPENMETHOD_OVERRIDER is introduced.
b. Domain. By reading the code, it refers to a subset of the type registry. It's used in the reference (e.g. "debug extends release but it does not fork it. Both policies use the same domain").
c. "virtual_ptr's "final" constructs". Mentioned in the docs without defining what they are.
12. The custom RTTI example uses a virtual destructor. This is important because the default policy's is_polymorphic implementation uses std::is_polymorphic, but I don't think it's explained. I don't know if having a virtual destructor is common in custom RTTI scenarios, but I'm inclined to think that it's not, as AFAIK a virtual destructor adds standard RTTI information to a type. Clang's AST nodes don't have it. I'd advise adding an example on how to support a custom RTTI scheme without virtual destructors, as it wasn't trivial for me to implement.
More minor points at the end of this email.
* What is your evaluation of the potential usefulness of the library?
I don't have a use case for it, and I wouldn't use it in my projects, since I think it adds too much complexity for the benefit it brings. However, considering the traction that its predecessor, yomm2, has, I think it would be useful in Boost.
It's a pity that no standardization is being pursued. Being a language-emulation library, it looks like eventual standardization should be a goal. But I understand that it might not be feasible at this point.
* Did you try to use the library? With what compiler? Did you have any problems?
I built one of the examples and played with it. I then wrote a toy example to check how an API like clang's AST could integrate with this library. I had some problems at first, due to my lack of understanding of the library design and the amount of checks that move to runtime. But I was successful in the end. With some documentation improvements, it'd be enough to be able to use the library without problems, even in advanced use cases. I used clang 20, C++23, with CMake's Debug configuration, on an Ubuntu 24.04 system.
* How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I've spent around 15h reading the documentation, building and playing with examples, reading parts of the source code, asking questions and writing this review.
* Are you knowledgeable about the problems tackled by the library?
I heard about open methods at the author's talk at using std::cpp 2024. I hadn't used them before this review. I have designed and used class hierarchies and OOP before.
* Affiliation disclosure
I am currently affiliated with the C++ Alliance.
* Final verdict
Although I'm not fully convinced, my recommendation is to ACCEPT this library into Boost, as I think Boost is better with it than without it. I strongly recommend redesigning the facet/policy system, but I won't make it an acceptance condition because I don't know if what I propose is fully viable.
* Minor documentation comments
13. Code in the dynamic loading section looks a little bit too C-like. In particular, I'd try to avoid strcat (vs. strncat), since it leads to bad programming practices that can end up in buffer overruns.
14. In the description of the RTTI policy, in the type_name function: "This function is optional; if it is not provided, "type_id(type)" is used.". I'd phrase it as "the numeric type_id is inserted into the stream, as per stream << type". I originally thought it was a typo and that typeid(type).name() was used.
15. I'd try to avoid the "bom" alias, in favor of "namespace openmethod = boost::openmethod;".
16. The std_rtti::type_index reference says that the return value is unspecified, and then it states that it returns a std::type_index.
17. Reference docs for use_classes mention Policy as an extra template argument, which is not what the actual code looks like. Instead, the passed arguments are inspected, and if one is a policy, it is used as the class registry. This should be noted in the docs. It is also not clear what happens if several policies are specified in a single use_classes instantiation.
Regards,
Ruben.

On Tue, May 6, 2025 at 7:21 AM Ruben Perez via Boost <boost@lists.boost.org> wrote:
9. I dislike the single-page architecture. I know that Boost.Unordered ( https://www.boost.org/doc/libs/1_88_0/libs/unordered/doc/html/unordered/intr... ) uses Antora to render asciidoc as multiple pages - maybe it's a technology that's worth to explore.
Yes! I forgot to shill this during my review. But yes, the docs are incredibly hard to read because of the single-page format.
This is how Unordered has it set up: https://github.com/boostorg/unordered/tree/develop/doc
I'd recommend to just blanket copy-paste the Unordered approach and then edit it into something workable. Antora has beautiful documentation output and is dramatically easier to navigate.
- Christian

I'd recommend to just blanket copy-paste the Unordered approach and then edit it into something workable.
Antora has beautiful documentation output and is dramatically easier to navigate.
Yes, I like Unordered's doc too. Alas when I was looking for docs to imitate, it was still single-page.

Thank you Ruben for taking the time to review.
a. Aside from rtti, the other facets have no formal specification on their requirements. This means that there is no formal API for these types.
I thought I tried ;-) e.g. https://jll63.github.io/Boost.OpenMethod/#virtual_ptr_extern_vptr. When I reorganize the reference to use groups as suggested by Peter (macros, classes, etc) maybe I should add two categories: Facets and Facet Implementation. It would make the specs of the facets more "findable". Also I am considering changing the terminology to Facet Category / Facet.
4. Compile-time errors are difficult to interpret. For example, at one point, I wrote:
BOOST_OPENMETHOD(print, (virtual_<base_node>), void);
This is a mistake because the argument to virtual_ should be a reference type. This is a snippet of the compiler error I got:
"error: no member named 'peek' in 'boost::openmethod::detail::parameter_traits<boost::openmethod::virtual_<base_node>, custom_policy>'"
For that one I can static_assert with a clear message. I will try to strengthen compile-time detection as much as possible. I am also going to add a Troubleshooting Guide.
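Something along these lines, as a sketch (the trait and the message are placeholders, not the library's actual diagnostic):
#include <type_traits>
template <class T>
struct check_virtual_parameter {
    static_assert(std::is_reference_v<T>, "virtual_<T>: T must be a reference type");
};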
I think that this problem can be alleviated by using C++20 concepts in places like virtual_. Since the library's minimum standard is C++17, you'll need some macro machinery to make this work. Other libraries like Asio do it with good results.
I would love to switch to C++20 altogether. The internals make heavy use of enable_if. Concepts would be neater. But it feels too early just yet. Also, what is Boost's policy WRT upping standard requirements?
5. virtual_<T&>, with T& not being polymorphic, compiles. In the program I was writing, this was a programming error - I had forgotten a virtual destructor.
In the "review" branch this is a compile-time error.
This behavior seems to exist to support "programs that call methods solely via virtual_ptrs created with the "final" construct", as stated in minimal_rtti's docs. When would I want to do this?
I am eyeing resource-tight contexts like embedded programming. Also hierarchies that use with_vptr need not be polymorphic. Or rather, they are polymorphic without using virtual member functions.
6. The <boost/openmethod.hpp> file is unusual. It does not include all the headers in the library, and it pours definitions in the global namespace. I'd advise against doing this, because it's not what most users expect. My recommendation is to: a. Rename this file to <boost/openmethod/keywords.hpp> or any other name that you think it's helpful. b. Make <boost/openmethod.hpp> include all headers except for keywords.hpp.
I won't die on that hill :-D
Another possibility would be to create a namespace boost::openmethod::keywords that contains using declarations for virtual_, virtual_ptr and similar pseudo-keywords. The user can then consume these with a using namespace directive, similar to how custom literals are consumed.
Yes I considered that. Probably the direction I'll take.
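A minimal sketch of that direction, reusing the names from the suggestion above (the keywords namespace does not exist in the current library; virtual_ and virtual_ptr do):
namespace boost::openmethod::keywords {
using boost::openmethod::virtual_;
using boost::openmethod::virtual_ptr;
} // namespace boost::openmethod::keywords
// user code opts in explicitly, much like user-defined literals:
using namespace boost::openmethod::keywords;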
7. boost::openmethod::type_id is a type alias for std::uintptr_t. I would try to avoid such an alias, since it doesn't add much and makes code more difficult to read. It might be misleading given the existence of the typeid operator.
Hmmm yeah maybe. std::uintptr_t is a bit hard to type...
12. The custom RTTI example uses a virtual destructor.
It's there because std::unique_ptr requires it, not OpenMethod.
This is important because the default policy's is_polymorphic implementation uses std::is_polymorphic, but I don't think it's explained.
But custom_rtti provides its own. That being said, the "review" branch had a bug related to this for one day: virtual_ptr did not channel the test through the policy.
I don't know if having a virtual destructor is common in custom RTTI scenarios, but I'm inclined to think that it's not, as AFAIK a virtual destructor adds standard RTTI information to a type.
I'm pretty sure it doesn't.
Clang's AST nodes don't have it.
I can imagine scenarios where you allocate objects, not with `new`, but from e.g. deques. In that case you don't need a virtual destructor. Just speculating...

AMDG
On 5/6/25 10:17 PM, Jean-Louis Leroy via Boost wrote:
7. boost::openmethod::type_id is a type alias for std::uintptr_t. I would try to avoid such an alias, since it doesn't add much and makes code more difficult to read. It might be misleading given the existence of the typeid operator.
Hmmm yeah maybe. std::uintptr_t is a bit hard to type...
I think it's fine to have a type alias. type_id is easily confused as being related to typeid/std::type_info/std::type_index, but std::uintptr_t isn't any better, as it's just another integer type and doesn't indicate what it's for.
I don't know if having a virtual destructor is common in custom RTTI scenarios, but I'm inclined to think that it's not, as AFAIK a virtual destructor adds standard RTTI information to a type.
I'm pretty sure it doesn't.
According to the standard, it does (any virtual function adds RTTI), but those using custom RTTI typically use a non-conforming implementation by disabling standard RTTI. In Christ, Steven Watanabe

According to the standard, it does (any virtual function adds RTTI), but those using custom RTTI typically use a non-conforming implementation by disabling standard RTTI.
Oh yes of course, the compiler cannot predict if typeid(expr) will be used somewhere, so it has to store it, typically at vtable[-1], just in case. It's a banana-gorilla-jungle problem again. J-L

Hello everybody,
Once again thank you Jean-Louis for your work and implication in your different multiple-dispatch related projects. Here is my review:
* Introduction
As the author mentions in the introduction of the documentation of the OpenMethod project, the library implements the features described in the N2216 paper with extra elements. The N2216 paper describes an extension to the C++ language with the extra virtual keyword and new rules of dispatch for functions identified as multi-methods by the compiler.
* What is your evaluation of the design?
OpenMethod is the adaptation of a language-based feature into a library. Efforts were made to be as close as possible to the proposal: open-methods are non-member functions and the virtual keyword is replaced with a virtual_ptr or virtual_ class template acting as identifier for the "virtual" parameters of an open-method. However there are some differences: N2216 proposes to only add the virtual qualifier to a parameter of a function and let the compiler decide on what is an open-method and what is an overrider of that open-method, allowing a minimal approach. In the OpenMethod project, it is necessary to first define a "guide" function which is not a proper function definition but an open-method declaration. Then the overriders can be defined as normal functions with a body.
The code uses class templates extensively to implement the open-methods, but macros are necessary to improve the readability. The YOMM2 project shows that it is possible to get rid of these macros with a compiler supporting C++17.
The initial design proposed in N2216 was to provide as static an implementation of open-methods as possible. However I think this has too many limitations. Indeed, implementing multi-methods could have been done differently. This could have been an STL feature proposing a std::multi_function class extending the concept of std::function. As polymorphism is implemented in the core C++ language, it seemed natural to want to extend it to another language feature, but adding more dimensions to the type-based dispatch introduces difficulties (ambiguities, memory size, visibility). My opinion is that a full library design is a better choice and can provide more flexibility.
The OpenMethod design demonstrates that the language feature is maybe not a good approach. Indeed it seemed mandatory for the author to add facets to the library. Transposed into the original language-based feature, these would have been compiler options. Then we could rethink the design of C++ multi-methods as a full library feature.
To conclude, I think that the author made good design choices with all the constraints he had to respect, but I think that a full library-based approach would be better.
* What is your evaluation of the implementation?
I made some tests in the https://github.com/da-project/boost-open-method-test/ project. The tests TestConst, TestNonConst, TestCovariance showed that types passed to virtual_ptr can be const and covariant return types can be used. Concerning the API, TestNoMacro showed that to get rid of the macros, some useless names have to be defined. The test also showed that it is possible to explicitly call a specific overrider rather than rely on the choice of the library by calling next(). It means that the next() function could be removed.
One major issue highlighted in N2216 is the potential ambiguities in the multi-method resolution. They can happen with multiple inheritance in single dispatch and of course multiple dispatch.
In case of a single dispatch without multiple inheritance, TestNoPerfectOverrider1 showed that the library selects the expected overrider, which is the most specialized one. However in case of virtual inheritance and single dispatch, TestNoPerfectOverrider1Virtual showed that the selected overrider was different by simply changing the order of definition of the overriders. In case of double dispatch, the same behavior was shown in TestNoPerfectOverrider2. Overriders can be defined in different compilation units which for runtime reasons could be called in a random order, meaning an undefined behavior of the program, which is not acceptable. This can be a tricky subject, but in my opinion, the selection of an overrider when there is not a perfect match should be clear to the programmer (and documented as well). In any case, a facet should be provided to always throw an exception when there is an ambiguity.
I did not run performance tests, but the documentation shows that good performance has been obtained thanks to dedicated internal data structures.
The smart pointers unique_virtual_ptr are not really interesting in my opinion since they introduce a dependency on the library in the data objects, which is contrary to the idea of non-intrusive open-methods.
* What is your evaluation of the documentation?
There should be more documentation on the "normal" use even if some parts can seem obvious. For instance, there could be examples of const types as well as covariant return types. There should definitely be a section concerning the resolution of ambiguities. The documentation should be split into multiple pages, but I think that this is usual in Boost.
* What is your evaluation of the potential usefulness of the library? Do you already use it in industry?
The library implements an experimental feature but in simple cases, the library should be usable in industry.
* Did you try to use the library? With which compiler(s)? Did you have any problems?
I used gcc 11.4.0. The tests compiled well. However in Eclipse CDT 11.6.1, the macro BOOST_OPENMETHOD_OVERRIDE generates an error in the editor.
* How much effort did you put into your evaluation?
I tried the library on some specific tests and read the documentation.
* Are you knowledgeable about the problem domain?
I will quickly present myself because it is linked to the subject and I may not be fully objective in my review. In my first job at INRIA Montbonnot, a computer science research center, I had to develop algorithms on C++ 3D graph scenes implemented with OOP. I rapidly realized that I needed a tool to implement process functions outside the class hierarchy. Moreover, depending on the algorithms, the tree traversal strategy may vary (DFS or BFS). All that led me to a specialized implementation of open-methods limited to one dimension. Then in my free time I worked on a generalized version of open-methods or multi-methods and, after reading the N2216 proposed implementation from which OpenMethod is mainly inspired, I wrote a peer-reviewed article "EVL: A framework for multi-methods in C++" (1) in which I expose my prototype and also compare it to N2216's. What is funny is that I also cited YOMM11 in the article. At that time I also realized that the Boost community was not ready for such a tool, because I remember Jean-Louis Leroy having posted an email on the YOMM11 implementation but nobody answered. Then I implemented a Java version (2), which was much easier than the C++ one.
My evaluation is ACCEPT but the ambiguities should be treated better.
To go further, I think that multiple dispatch belongs to the C++ dynamic language paradigm and must be assumed. It is even a step further in the dynamic language paradigm. In my opinion, in the context of multi-methods, wanting to have everything solved at compile time is a mistake. That also means that using multi-methods implies possible exceptions at runtime. But C++ is a rich multi-paradigm language and one can choose not to use its dynamic paradigm implementation, i.e. not use multi-methods. Multi-methods remain an experimental subject and I think that it is interesting to test the concept, which in the end may prove not to be useful. But that is an interesting debate.
(1) https://www.sciencedirect.com/science/article/pii/S0167642314003360#bbr0070
(2) https://da-project.github.io/evl/
* More information
If you are interested in multi-methods, I encourage you to read my article (1) and also have a look at my Java implementation (2) if you are not allergic to Java. I took the time to provide a list of examples (https://da-project.github.io/evl/docs/examples.html), including the implementation of some design patterns using multi-methods. I give examples of multiple dispatch by for instance adding a simple state object. I also explain my generalization of multiple dispatch (https://da-project.github.io/evl/docs/theory.html).
A summary of the main differences between N2216 and EVL:
- Generalization of the multiple dispatch mechanism based on the comparison of tuples of distances. It aims to provide an abstraction able to solve ambiguities at a higher level.
- Cache strategy rather than a static dispatch table, to control the memory footprint, avoid entries that will never be called, and avoid solving ambiguities that will never arise.
- Multi-methods are objects so that their control is easier (visibility, configuration, etc.).
My C++ implementation required a rough reflection implementation to be able to provide an inheritance graph calculated from the dynamic_cast operator. It is presented in the article (1).
Best regards,
Yannick
On Sun, Apr 27, 2025 at 3:15 PM Дмитрий Архипов via Boost <boost@lists.boost.org> wrote:

Thank you for taking the time to write a review, Yannick. Gosh, someone actually remembers YOMM11 :-D

I recommend to *ACCEPT* the library into Boost. Even though I don't have an application for it, this seems like a nice tool to have. It would be great if the language supported it, but since it doesn't, a library implementation is the next best thing. I consider Boost a good place for a solid implementation of an advanced and niche feature.
* What is your evaluation of the design?
I think it's a bit too customizable; it suffers from policy-based design. That is, I would prefer something more opinionated, because that's what a language feature would be as well. Other than that it looks good; the basic use case is pretty straightforward.

* What is your evaluation of the implementation?
It looks right. I.e. I don't know what to do differently.
* What is your evaluation of the documentation?
It is clear & detailed enough.
* What is your evaluation of the potential usefulness of the library?
I think it's useful for certain problems, although I don't have any need at the moment. It is really more of a tool that will come in handy at an unexpected time.
* Did you try to use the library? With what compiler? Did you have any problems?
Yes, very simple. No problems.
* How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I read the docs & code and tried out a simple example. I think about 4h time invested.
* Are you knowledgeable about the problems tackled by the library?
Not when it comes to open-methods, but I have of course used plenty of virtual functions in the past.

AMDG

Here's an incomplete review. There definitely isn't enough time left for me to finish it today.

Review revision: ea7dbe86b511797f0281ef07b3ada645fe569328 (master)

virtual_ptr should not be put in the global namespace.

s/Chhetah/Cheetah/

"It can take be used with any class name..." - remove "take"

"That is because the overrider containers exist in both the canides and felides" - the code has "canines" and "felines".

BOOST_OPENMETHOD_OVERRIDER: I think it would be better to make this macro expand to something callable, i.e., add ::fn. That way the name fn can be an implementation detail. Why does BOOST_OPENMETHOD_OVERRIDER need the return type?

BOOST_OPENMETHOD_OVERRIDERS: The documentation of friendship claims that this can be used to grant friendship to all overriders. Is this really all overriders, or is it only overriders in the same namespace?

"Be aware, though, that the overriders of any method called poke - with any signature" - does this imply that we can define multiple open methods with the same name, but different argument types?

Switching the default policy based on NDEBUG is dangerous. It's not uncommon to link code built with NDEBUG to code built without it.

I don't understand the name vectored_error_handler. How is it vectored?

I'm confused about how error handling works. The docs say "When an error is encountered, the program is terminated by a call to abort. If the policy contains an error_handler facet, it provides an error member function (or overloaded functions) to be called with an object identifying the error." I'm guessing this means that abort is called if there is no error handler or if the error handler returns normally.

I think the facet interface could be simplified.
- Why do we need to have the implementation inherit from the facet? Alternately, if we have this inheritance, we should be able to specify just the implementation in replace<>.
- I'm a bit confused about the difference between release::add<runtime_checks> and fork<default_policy>::replace<error_handler, throw_if_not_implemented>. I'd rather have a single way to create a new policy, and the former seems simpler.

In many places the if constexpr(has_facet<>) logic could be simplified by using a no-op implementation of the facets instead of letting facets be missing.

detail/static_list.hpp:
- Most of the iterator functions can be constexpr.

detail/trace.hpp:
- It's a bit weird to have the indent of 4 split into 2*2. It would make more sense for indentation_level to indicate either the exact number of spaces or the depth.

policies/basic_policy.hpp:
- fork_facet is quite dangerous. It assumes that if a facet is a template, the first template parameter is the policy. This is potentially surprising and I don't see where it's documented.

policies/basic_error_output.hpp:
- Why use virtual inheritance?

policies/vptr_vector.hpp:
- 32: Does using namespace policies do anything?

compiler.hpp:
- class_::is_base_of: Why use std::find on an unordered_set?
- 266: in compiler::static_type:
      if constexpr (std::is_base_of_v<policies::deferred_static_rtti, policies::rtti>) {
  Did you mean to check the rtti facet of Policy? ...seems to be unused.
- 343:
      if (*method.vp_end == 0) {
  Since this is inside the loop, won't it only resolve the first type? Actually, I'm really confused about the whole loop. Do we want to process the overriders for each virtual parameter?

In Christ,
Steven Watanabe

Thu, May 8, 2025 at 07:14, Jean-Louis Leroy via Boost <boost@lists.boost.org>:
Hi Steven,
Here's an incomplete review. There definitely isn't enough time left for me to finish it today.
If Dmitry agrees, we can extend it a bit. I need more time to "process" Andrzej's review; and I don't want to miss the rest of your feedback.
If the review wizards aren't against it, I have no problem with extending the review period for one day.

On Thu, May 8, 2025 at 00:41, Дмитрий Архипов via Boost <boost@lists.boost.org> wrote:

If the review wizards aren't against it, I have no problem with extending the review period for one day.

I see no issue with an extra day since the next period does not begin until 13 May.

Matt

Thu, May 8, 2025 at 19:07, Matt Borland <matt@mattborland.com>:
I see no issue with an extra day since the next period does not begin until 13 May.
The review period for the OpenMethod library has ended. I want to thank everyone who participated in the review process, particularly those who sent in their reviews. I will post the results in a week or two.

Thanks to everyone for putting in the time and effort to review my library. And a special thank you to Дмитрий for managing the review.

The remarks and discussions were all interesting and useful, and showed me several ways in which OpenMethod can be improved. Whatever the final outcome, my library will benefit from these intense ten days.

Jean-Louis

Hi Steven, Thank you for the review.
virtual_ptr should not be put in the global namespace.
OK OK I surrender ;-)
"That is because the overrider containers exist in both the canides and felides" The code has "canines" and "felines"
Six years of Latin in high school...
BOOST_OPENMETHOD_OVERRIDER: I think it would be better to make this macro expand to something callable, i.e., add ::fn. That way the name fn can be an implementation detail.
For consistency with BOOST_OPENMETHOD_OVERRIDERS. `fn` is used in other places. OpenMethod makes more internals public for interoperation between the macro and core interfaces. I still have to try your (and Joaquin's) suggestion of replacing the comma before the return type with an arrow in macros like BOOST_OPENMETHOD. If it works (very likely), BOOST_OPENMETHOD_OVERRIDER loses its raison d'être and I will probably remove it.
Why does BOOST_OPENMETHOD_OVERRIDER need the return type?
N2216 lookup: return types are considered as tie-breakers to resolve some ambiguities. I am demoting that to an opt-in.
BOOST_OPENMETHOD_OVERRIDERS: The documentation of friendship claims that this can be used to grant friendship to all overriders. Is this really all overriders, or is it only overriders in the same namespace?
In the same namespace. Doc fix.
"Be aware, though, that the overriders of any method called poke - with any signature" Does this imply that we can define multiple open methods with the same name, but different argument types?
Yes. See https://github.com/jll63/Boost.OpenMethod/blob/master/examples/matrix.cpp
Switching the default policy based on NDEBUG is dangerous. It's not uncommon to link code built with NDEBUG to code built without it.
Hmmm yes I see that. I am completely separating the release and debug policies in the new design of the policy system. default_policy is a typedef, and, if I understand properly, having a typedef aliasing to different types is not an ODR violation in itself, but it makes it easier to create one. Do you have a better suggestion?
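A sketch of the hazard, using hypothetical spellings for the aliases involved (the real names and namespaces may differ):

    // Seen by every translation unit:
    #ifdef NDEBUG
    using default_policy = boost::openmethod::policies::release; // hypothetical spelling
    #else
    using default_policy = boost::openmethod::policies::debug;   // hypothetical spelling
    #endif

    // TU A is built with -DNDEBUG, TU B without it. The alias itself is not an
    // ODR violation, but everything whose type depends on it (method instances,
    // class registrations, v-tables) now comes in two flavors, and the linker
    // will happily mix them.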
I don't understand the name vectored_error_handler. How is it vectored?
I guess it's boomer-speak... We used to call indirect function calls "vectored". Do you have a better name?
I'm confused about how error handling works. The docs say "When an error is encountered, the program is terminated by a call to abort. If the policy contains an error_handler facet, it provides an error member function (or overloaded functions) to be called with an object identifying the error." I'm guessing this means that abort is called if there is no error handler or if the error handler returns normally.
Yes indeed. I have problems phrasing that, perhaps related to English not being my first language.
I think the facet interface could be simplified.
Following the discussion with Ruben, I came up with a much simpler, safer design. CRTP and `fork` are gone.
In many places the if constexpr(has_facet<>) logic could be simplified by using a no-op implementation of the facets instead of letting facets be missing.
Yeah I actually do that in a couple of places. Like finalize() inherited from a base class by all policies. I sort of felt uneasy with resorting to shadowing.
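For readers following along, a generic illustration of the no-op-facet idea; noop_trace and trace are invented names, not library identifiers:

    // Instead of guarding every call site with
    //     if constexpr (Policy::template has_facet<trace_facet>) { ... }
    // every policy could carry a default facet whose operations compile away:
    struct noop_trace {
        template<class... Args>
        static void trace(Args&&...) {} // intentionally empty
    };
    // Call sites then write Policy::trace(...) unconditionally; only policies
    // that opt into tracing replace this facet with one that actually writes.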
detail/static_list.hpp: - Most of the iterator functions can be constexpr
For what benefit? It is an implementation detail specifically designed to be used by static ctors...
detail/trace.hpp: - It's a bit weird to have the indent of 4 split into 2*2. It would make more sense for indentation_level to indicate either the exact number of spaces or the depth.
Oh right...but you didn't see it, it's in `namespace detail` ;-)
policies/basic_policy.hpp: - fork_facet is quite dangerous. It assumes that if a facet is a template, the first template parameter is the policy. This is potentially surprising and I don't see where it's documented.
It's better than documented now, it's GONE.
policies/basic_error_output.hpp: - Why use virtual inheritance?
No good reason. Not in the new design.
policies/vptr_vector.hpp: 32: Does using namespace policies do anything?
It imports the names in the current scope? ;-) I do that in the body of some functions.
compiler.hpp: - class_::is_base_of: Why use std::find on an unordered_set?
I think those sets used to be vectors. Maybe they should be again.
266: in compiler::static_type [...] ...seems to be unused.
Yes, I will remove it.
343: if (*method.vp_end == 0) { Since this is inside the loop, won't it only resolve the first type? Actually, I'm really confused about the whole loop. Do we want to process the overriders for each virtual parameter?
That code has nothing to do with resolving overriders. It is for deferred RTTI, i.e. for interfacing with custom RTTI schemes that also use static ctors. If the deferred_static_rtti facet is present, I store a pointer to a function that returns a type_id, instead of the type_id itself. initialize() calls the function and then replaces the function pointer with the type_id. Since initialize() can be called multiple times, I add a flag just after method.vp_end. J-L
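A rough illustration of that scheme; the union and member names below are invented, and the real code stores the flag just past vp_end, as described:

    // Each slot that will eventually hold a custom type id starts out holding a
    // thunk, because the ids may not exist yet when static ctors run:
    union deferred_type_id {
        boost::openmethod::type_id id;       // final value
        boost::openmethod::type_id (*get)(); // what the static ctor stores
    };

    // Conceptually, initialize() then performs, once per slot:
    //     slot.id = slot.get();
    // and records an "already resolved" flag so that a second call to
    // initialize() does not try to call through what is now a plain type_id.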

AMDG On 5/8/25 8:13 PM, Jean-Louis Leroy via Boost wrote:
343: if (*method.vp_end == 0) { Since this is inside the loop, won't it only resolve the first type? Actually, I'm really confused about the whole loop. Do we want to process the overriders for each virtual parameter?
That code has nothing to do with resolving overriders. It is for deferred RTTI. For interfacing with custom RTTI schemes that also use static ctors. If the deferred_static_rtti is present, I store a pointer to a function that returns a type_id, instead of the type_id itself. initialize() calls the function and then replaces the function pointer with the type_id. Since initialize() can be called multiple times, I add a flag just after method.vp_end.
For reference here's the code in question:

    for (auto& method : Policy::methods) {
        for (auto& ti : range{method.vp_begin, method.vp_end}) {
            if (*method.vp_end == 0) {
                resolve(&ti);
                *method.vp_end = 1;
            }
            for (auto& overrider : method.specs) {
                if (*overrider.vp_end == 0) {
                    for (auto& ti : range{overrider.vp_begin, overrider.vp_end}) {
                        resolve(&ti);
                    }
                    *overrider.vp_end = 1;
                }
            }
        }
    }

There are four nested loops. The outer and innermost loops look sane to me. The two middle loops do not. The second loop is iterating over the parameter types of the base method and resolving them. However, because it sets *method.vp_end = 1 on the first iteration, only the type of the first parameter will be resolved. I think this will fail for multimethods (see below for a program that fails). The third loop, I think, should be directly inside the outer loop, instead of being nested inside the second, but it looks harmless because *overrider.vp_end == 0 will deduplicate.

    #include <boost/openmethod.hpp>
    #include <boost/openmethod/compiler.hpp>
    #include <boost/openmethod/policies.hpp>

    namespace bom = boost::openmethod;

    struct B {
        virtual ~B() = default;
        bom::type_id type = static_type;
        static bom::type_id static_type;
    };

    struct D : B {
        D() { type = static_type; }
        static bom::type_id static_type;
    };

    bom::type_id B::static_type;
    bom::type_id D::static_type;

    struct deferred_rtti : bom::policies::deferred_static_rtti {
        template <class T>
        static constexpr bool is_polymorphic = std::is_base_of_v<B, T>;

        template <typename T>
        static auto static_type() -> bom::type_id {
            if constexpr (is_polymorphic<T>) {
                return T::static_type;
            } else {
                return 0;
            }
        }

        template <typename T>
        static auto dynamic_type(const T& obj) -> bom::type_id {
            if constexpr (is_polymorphic<T>) {
                return obj.type;
            } else {
                return 0;
            }
        }
    };

    struct deferred_rtti_policy
        : boost::openmethod::policies::debug::fork<deferred_rtti_policy>::replace<
              boost::openmethod::policies::rtti, deferred_rtti> {};

    BOOST_OPENMETHOD_CLASSES(B, D, deferred_rtti_policy)

    BOOST_OPENMETHOD(
        foo,
        (virtual_ptr<B, deferred_rtti_policy>, virtual_ptr<B, deferred_rtti_policy>),
        void, deferred_rtti_policy);

    BOOST_OPENMETHOD_OVERRIDE(
        foo,
        (virtual_ptr<D, deferred_rtti_policy>, virtual_ptr<D, deferred_rtti_policy>),
        void) {}

    int main() {
        B::static_type = 23;
        D::static_type = 4;
        bom::initialize<deferred_rtti_policy>();
        D d;
        B& b(d);
        foo(b, b);
    }

$ BOOST_OPENMETHOD_TRACE=1 ./a.out
Static class info:
  type_id(4): (type_id(23), type_id(4))
  type_id(23): (type_id(23))
Inheritance lattice:
  type_id(4)
    bases: (type_id(23))
    derived: ()
    covariant: (type_id(4))
  type_id(23)
    bases: ()
    derived: (type_id(4))
    covariant: (type_id(4), type_id(23))
Methods:
  type_id(0)
    (type_id(23), type_id(4227983))
unkown class 4227983(type_id(4227983)) for parameter #1
unknown class type_id(4227983)
Aborted (core dumped)

In Christ,
Steven Watanabe
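For reference, one way the two middle loops could be restructured so that every virtual-parameter type of a method is resolved exactly once. This reuses the names from the snippet above and is only a sketch of the apparent intent, not a tested patch:

    for (auto& method : Policy::methods) {
        if (*method.vp_end == 0) {
            for (auto& ti : range{method.vp_begin, method.vp_end}) {
                resolve(&ti); // resolve all parameter types, not just the first
            }
            *method.vp_end = 1; // then mark the whole method as resolved
        }

        for (auto& overrider : method.specs) {
            if (*overrider.vp_end == 0) {
                for (auto& ti : range{overrider.vp_begin, overrider.vp_end}) {
                    resolve(&ti);
                }
                *overrider.vp_end = 1;
            }
        }
    }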

AMDG On 5/8/25 8:13 PM, Jean-Louis Leroy via Boost wrote:
<snip lots>
Switching the default policy based on NDEBUG is dangerous. It's not uncommon to link code built with NDEBUG to code built without it.
Hmmm yes I see that. I am completely separating the release and debug policies in the new design of the policy system. default_policy is a typedef, and, if I understand properly, having a typedef aliasing to different types is not an ODR violation in itself, but it makes it easier to create one.
Do you have a better suggestion?
My preference would be to make release the default and leave debug to be specified explicitly.
I don't understand the name vectored_error_handler. How is it vectored?
I guess it's boomer-speak... We used to call indirect function calls "vectored". Do you have a better name?
I guess I can see that. It's just that, in a C++ context, when I see vector I immediately think std::vector, which is a completely different meaning of the word. Some random ideas: dyn_error_handler, runtime_error_handler, default_error_handler.
policies/vptr_vector.hpp: 32: Does using namespace policies do anything?
It imports the names in the current scope? ;-) I do that in the body of some functions.
It just looks odd because policies /is/ the current namespace. In Christ, Steven Watanabe

AMDG

Sorry for the late feedback. I finally had enough time to get through most of the implementation.

boost_openmethod_vptr returns a result that is policy dependent. Should it be possible to overload it based on the policy? At the very least, it should be possible to verify that the policy matches.

Custom RTTI:
- "is_polymorphic is used to check if a class is polymorphic. This template is required." What does it mean for a type to be polymorphic? It clearly doesn't strictly mean a C++ polymorphic type, because std::is_polymorphic would always be right, then, but I don't see a definition of what it does mean.
- It would be helpful for debugging if method dispatch asserted that the policy was initialized. It took me a while to figure out that I was still initializing the default policy and not my custom policy.

I think virtual_ptr is doing too much. What would it take to make it possible to implement virtual_ptr without any special support in the core library?
- boost_openmethod_vptr provides a way to get the vtable.
- The rtti policy and/or static_cast should allow downcasting to the overrider parameter type.
- The only thing that is clearly missing is the ability to indicate that virtual_ptr is a virtual parameter.

I tried using shared_ptr as a parameter type like this:

    BOOST_OPENMETHOD_CLASSES(Base, Derived)
    BOOST_OPENMETHOD(foo, (virtual_<std::shared_ptr<Base>>), void)
    BOOST_OPENMETHOD_OVERRIDE(foo, (std::shared_ptr<Derived>), void) {
        std::cout << "derived" << std::endl;
    }

- The guide function doesn't work. I don't understand why, since shared_ptr<Derived> is convertible to shared_ptr<Base>.
- I tried adding an rtti facet, but it didn't help.

The (lack of) type safety is quite concerning.
- I don't think the standard guarantees that a function pointer can round-trip through void*. IIRC, an arbitrary function pointer type like void(*)() is guaranteed to work.
- I'm particularly concerned about the way reinterpret_cast leaks into user code in the rtti policy's type_name.

throw_error_handler directly throws the error type as an exception, but openmethod_error does not inherit from std::exception. Even though this is perfectly legal, dealing with such an exception can be rather painful.

I don't think that the default error behavior for unimplemented overrides is good. I'm okay with an abort when initialization fails. An error that happens reliably on program startup is not much worse than a compile error. If an unimplemented method aborts, that implies that it is considered a programming error. In this case I want to check for completeness in initialization and fail if anything is missing. If a missing override isn't a bug, then it needs to be recoverable, or at least allow clean shutdown, which means throwing an exception.

Unlike some others, I'm actually okay with choosing some overrider when it's ambiguous. It's not a great solution, but it's still better than aborting. Failing on initialization would also be acceptable. I'm not particularly fond of using the covariant return type to choose. It's unlikely to be less surprising than just picking one at random, since that's not how C++ normally does overload resolution. What about prioritizing the leftmost parameter? That's simple and predictable.

I'd like to be able to use std::any or boost::type_erasure::any in open methods.
- std::any can use .type()
- boost::type_erasure::any can either use std::type_info or it can be configured to store a vptr.
Both provide any_cast which can be used to cast the value. I defined a custom rtti facet.
It doesn't work, and it appears that the reason is that int does not inherit from any, which causes it to not be considered by assign_tree_slots. So, I used class_declaration_aux directly to forcibly make any a base of int, and then it seems to work. The simplest way to solve this is to allow a policy to customize is_base_of and is_abstract. I've attached files that show the result of my experiments (test_te.cpp and test_any.cpp).

compiler.hpp: 405-478: Calculating transitive_bases and direct_bases. I don't think this works. The documentation claims that the only restriction is that every type must include its direct base classes, but the loop at 405 does not compute the transitive closure. This can cause direct_bases to contain indirect bases as well. Consider the following example:

    struct Base { virtual ~Base() = default; };
    struct D1 : Base {};
    struct D2 : D1 {};
    struct D3 : D2 {};
    struct D4 : D3 {};
    struct D5 : D4 {};

    BOOST_OPENMETHOD_CLASSES(Base, D1, D2, D3)
    BOOST_OPENMETHOD_CLASSES(D2, D3)
    BOOST_OPENMETHOD_CLASSES(D3, D4)
    BOOST_OPENMETHOD_CLASSES(D4, D5, D3)

...
Inheritance lattice:
D3
  bases: (D2)
  derived: (D4, D5)
  covariant: (D4, D5, D3)

compiler.hpp:554: typo "unkown"

compiler.hpp:735:
    if (!cls.used_by_vp.empty()) {
        for (const auto& mp : cls.used_by_vp) {
The if seems redundant.

The algorithm for assign_lattice_slots looks pretty inefficient to me. The worst case is O(slots^2 * classes^2). It can also generate results that are obviously silly (at least to a human) pretty easily. I know optimal slot allocation is NP-hard, but I think it can be tweaked in a few ways. Considering the optimization of not storing leading unused slots, filling the lowest available slot is a suboptimal strategy. We can pack it tighter by finding the ranges from bases (and derived) and filling in holes. Notice that the algorithm's correctness does not depend on visiting the nodes in any particular order. If we're allocating from 0, it's probably most effective to visit types bases first instead of depth first. If we're trying to allocate contiguous ranges, we can move both up and down from types that are already visited. I believe that in the absence of virtual inheritance, this is guaranteed to assign slots contiguously without leaving any holes. After assigning a slot, we only need to propagate a single bit across the hierarchy. We don't need to merge all the bit vectors. Finally, we could pull reserved slots up from transitive derived before allocating slots instead of pushing them up to transitive bases of transitive derived after slot allocation. This would reduce the nesting depth of the loops.

compiler.hpp:1147:
    *cls.static_vptr = gv_iter - cls.first_slot;
This is quite scary. It's undefined behavior if it goes before the beginning of the vector. I think I can trigger undefined behavior with the following hierarchy:

      C
     /
    A   B
     \ /
      D

(see test_vptr_oob.cpp)

...
Initializing v-tables at 7c59751e1240
0 7c59751e1240 vtbl for A slots 2-3
...

Notice that the vtable for A is placed first in the vector, but first_slot is 2.

In Christ,
Steven Watanabe

Sorry for the late feedback. I finally had enough time to get through most of the implementation.
You have sharp eyes :)
boost_openmethod_vptr returns a result that is policy dependent. Should it be possible to overload it based on the policy? At the very least, it should be possible to verify that the policy matches.
Ruben pointed that out too. boost_openmethod_vptr() is a recent invention. YOMM2 looks for a *public* boost_openmethod_vptr *member* in the object. I changed it to a function because it allows a base class to use whatever name it wants for the member; and, more importantly, said member can be private, and the base can define boost_openmethod_vptr() as an inline friend. I did not anticipate that one would have the idea of calculating the *value* of the vptr there. The way I use it is to retrieve the vptr set by with_vptr's constructor.

I experimented with passing a Policy* as the first argument to the function after Ruben's remarks. It's not hard to make it work, but it requires boost_openmethod_vptr() to either specify the exact policy it expects, or be a template if it wants to ignore it. It also makes it possible to associate more than one vptr with an object. Why not? As we say in French, he who can do more can do less (qui peut le plus peut le moins).
Custom RTTI: - "is_polymorphic is used to check if a class is polymorphic. This template is required." What does it mean for a type to be polymorphic?
It means polymorphic from the point of view of the rtti facet. For std_rtti it means having virtual function(s). For custom rtti, it could be deriving from a base, or having certain members. I'll need to explain that in the doc.
- It would be helpful for debugging if method dispatch asserted that the policy was initialized. It took me a while to figure out that I was still initializing the default policy and not my custom policy.
Yes. I've been bitten by that too. https://github.com/jll63/Boost.OpenMethod/issues/16
I think virtual_ptr is doing too much. What would it take to make it possible to implement virtual_ptr without any special support in the core library? - boost_openmethod_vptr provides a way to get the vtable - The rtti policy and/or static_cast should allow downcasting to the overrider parameter type. - The only thing that is clearly missing is the ability to indicate that virtual_ptr is a virtual parameter.
In YOMM2, virtual_ptr is an afterthought. While preparing OpenMethod for review, I struggled with explaining clearly and simply that `virtual_` should be used only in the base method, and *not* in the overriders. YOMM2 beginners often struggled with that. Then I thought: on top of that, virtual_ptr gets us dispatch in three instructions instead of nine (for 1-methods). Shouldn't it be the "golden path"? And virtual_ be demoted to an entry point into advanced stuff?
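For readers comparing the two styles, a minimal side-by-side sketch based on the examples in this thread; class registration and initialize() are omitted, and poke, pet, Animal and Dog are placeholders:

    // virtual_: appears only in the base method; overriders take plain references.
    BOOST_OPENMETHOD(poke, (boost::openmethod::virtual_<Animal&>), void);
    BOOST_OPENMETHOD_OVERRIDE(poke, (Dog& dog), void) { /* ... */ }

    // virtual_ptr: spelled the same way in the base method and in the overriders,
    // and dispatching is cheaper because the vptr travels with the argument.
    BOOST_OPENMETHOD(pet, (virtual_ptr<Animal>), void);
    BOOST_OPENMETHOD_OVERRIDE(pet, (virtual_ptr<Dog> dog), void) { /* ... */ }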
I tried using shared_ptr as a parameter type like this:
BOOST_OPENMETHOD_CLASSES(Base, Derived)

BOOST_OPENMETHOD(foo, (virtual_<std::shared_ptr<Base>>), void)

BOOST_OPENMETHOD_OVERRIDE(foo, (std::shared_ptr<Derived>), void) {
    std::cout << "derived" << std::endl;
}
- The guide function doesn't work. I don't understand why, since shared_ptr<Derived> is convertible to shared_ptr<Base>.
I am surprised. I have a unit test for that (cast_args_shared_ptr_by_value). You didn't attach the example, here is what I tried:
#include <boost/openmethod.hpp>
#include <boost/openmethod/compiler.hpp>
#include <boost/openmethod/shared_ptr.hpp>
#include <iostream>
class Base { public: virtual ~Base() { } };
class Derived : public Base {};
BOOST_OPENMETHOD_CLASSES(Base, Derived);
BOOST_OPENMETHOD( foo, (boost::openmethod::virtual_<std::shared_ptr<Base>>), void);
BOOST_OPENMETHOD_OVERRIDE(foo, (std::shared_ptr<Derived> dog), void) { std::cout << "Derived\n"; }
auto main() -> int {
    boost::openmethod::initialize();
    auto obj = std::make_shared<Derived>();
    foo(obj);

    return 0;
}
https://godbolt.org/z/s7EhM1Yn4

By the way, do you have an opinion on allowing smart pointers as virtual parameters?
The (lack of) type safety is quite concerning. - I don't think the standard guarantees that a function pointer can round-trip through void*. IIRC, an arbitrary function pointer type like void(*)() is guaranteed to work.
Adding an issue for this. V-tables can contain a mixture of function pointers, data pointers and indexes. Can a uintptr_t hold a function pointer? cppreference.com says:
uintptr_t (optional) unsigned integer type capable of holding a pointer to void
Optional? Does it mean that an implementation is not obliged to provide it? And it looks like it is only guaranteed to hold a data pointer; it doesn't say "any pointer". At some point YOMM2 used a union for v-table entries; I may have to go back to that.
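A sketch of what such a union might look like (member names invented), mirroring the mixture described above. The relevant guarantee is that converting a function pointer to a different function-pointer type and back yields the original value, whereas uintptr_t is only required to hold object pointers:

    #include <cstddef>

    union vtbl_entry {
        void (*function)(); // any function pointer; cast back to its real type before calling
        const void* data;   // e.g. a pointer into another table
        std::size_t index;  // slot indexes / strides for multi-methods
    };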
- I'm particularly concerned about the way reinterpret_cast leaks into user code in the rtti policy's type_name.
That's only for users creating their own rtti facet. I consider that advanced usage. What a facet's type_name() gets is what the same facet returned from static_type() and dynamic_type(), so the cast is safe (as long as the data fits in a type_id). I don't think it's reasonable to templatize the entire library on type_id.
throw_error_handler directly throws the error type as an exception, but openmethod_error does not inherit from std::exception. Even though this is perfectly legal, dealing with such exception can be rather painful.
I am not exception-phobic. Quite the opposite. But doing this would mean that the error classes would be different depending on whether or not exceptions are enabled. I tried to avoid ifdefs as much as I could...
I don't think that the default error behavior for unimplemented overrides is good. I'm okay with an abort when initialization fails. An error that happens reliably on program startup is not much worse than a compile error.
Agreed. You can check the `report` in initialize()'s return value for that. And, some day, I will provide a pre-linker or at least a linter.
If a missing override isn't a bug, then it needs to be recoverable, or at least allow clean shutdown, which means throwing an exception.
Eh! The unit tests do that. So you would prefer to make the exception-throwing handler the default, and let people who cannot or will not accept exceptions tune the policy?
Unlike some others, I'm actually okay with choosing some overrider when it's ambiguous. It's not a great solution, but it's still better than aborting.
It will be a choice. Now the question is to pick the best default. I am leaning towards the YOMM2 way: no more and no less than (static) overload resolution.
I'm not particularly fond of using the covariant return type to choose.
So you are halfway to N2216 :-D I can always split the "pick random" and "use covariant return type" into two independent choices.
What about prioritizing the leftmost parameter? That's simple and predictable.
IIRC that's what CLOS does. Dylan too probably. And nothing in C++ works that way.
I'd like to be able to use std::any or boost::type_erasure::any in open methods.
I considered std::any virtual parameters before. They seem to be within close reach. Unlike std::variant, they're open, so it makes sense. I have a design, in which you have to register the possible types, just like registering classes.
I defined a custom rtti facet. It doesn't work, and it appears that the reason is that int does not inherit from any, which causes it to not be considered by assign_tree_slots. So, I used class_declaration_aux directly to forcibly make any a base of int, and then it seems to work.
Where there's a will... :-D
The simplest way to solve this is to allow a policy to customize is_base_of and is_abstract.
This is an interesting research path...abstract sub-categorization away from inheritance...I think that Clojure does something like that. A few times people have requested value-based dispatch for YOMM2, like in CLOS. I have a design for this. But then the latest requester wants dispatch on *ranges* of values. I wonder if there is a general way of allowing such extensions that is also practical. Having open-methods in a library rather than in the language is sort of liberating...
I've attached files that show the result of my experiments (test_te.cpp and test_any.cpp).
I'll look at them closely. I haven't yet as I am writing this.
compiler.hpp: 405-478: Calculating transitive_bases and direct_bases. I don't think this works. The documentation claims that the only restriction is that every type must include its direct base classes, but the loop at 405 does not compute the transitive closure. This can cause direct_bases to contain indirect bases as well.
So I thought I had found a smart trick for collecting the transitive closure at compile time, but you are right, in your example, D5 is not aware that D3 is a base of D4. That shouldn't take me long to fix.
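For reference, the closure in question expressed as a plain run-time fixed point over direct bases. The actual fix has to happen in compile-time machinery, so this only illustrates the result the class registry needs to end up with:

    #include <map>
    #include <set>
    #include <string>

    using type = std::string; // stand-in for a registered class

    // Repeatedly fold each class's bases' bases into its own base set until
    // nothing changes; afterwards every class also lists its indirect bases.
    auto transitive_bases(std::map<type, std::set<type>> bases_of)
        -> std::map<type, std::set<type>> {
        bool changed = true;
        while (changed) {
            changed = false;
            for (auto& [cls, bases] : bases_of) {
                for (const auto& base : std::set<type>(bases)) {
                    for (const auto& indirect : bases_of[base]) {
                        changed = bases.insert(indirect).second || changed;
                    }
                }
            }
        }
        return bases_of;
    }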
It can also generate results that are obviously silly (at least to a human) pretty easily.
Do you mean silly and incorrect, or just silly?
I know optimal slot allocation is NP-hard, but I think it can be tweaked in a few ways.
I am preserving the following paragraphs in https://github.com/jll63/Boost.OpenMethod/issues/19 I will get back to it, but it is not a priority at the moment. Thanks a lot for the input though.
I believe that in the absence of virtual inheritance, this is guaranteed to assign slots contiguously without leaving any holes.
I am puzzled by this remark, because YOMM2 and OpenMethod are a bit myopic regarding virtual and repeated inheritance. At the bottom of it, the library suffers from being able to have only one v-table per object. That is why repeated inheritance is not supported. And that is the reason for the contortions about slot allocation.
compiler.hpp:1147: *cls.static_vptr = gv_iter - cls.first_slot; This is quite scary. It's undefined behavior if it goes before the beginning of the vector. I think I can trigger undefined behavior with the following hierarchy
C / A B \ / D
(see test_vptr_oob.cpp) ... Initializing v-tables at 7c59751e1240 0 7c59751e1240 vtbl for A slots 2-3 ... Notice that the vtable for A is placed first in the vector, but first_slot is 2.
Oh, right. I doubt any real program will crash on this, but you are right, this is UB, and it can be fixed, so let's fix it. It is probably just a matter of ordering the v-tables from lowest first_slot up, and perhaps from highest size up if first_slot is the same. I guess that you will frown at the idea of using the same trick to avoid storing entries in the perfect hash table that come before the minimum value in the hash function's image. J-L

AMDG On 5/10/25 6:33 PM, Jean-Louis Leroy via Boost wrote:
<snip> I experimented with passing a Policy* as the first argument to the function after Ruben's remarks. It's not hard to make it work, but it requires boost_openmethod_vptr() to either specify the exact policy it expects, or be a template if it wants to ignore it.
I think specifying the exact policy is going to be the right thing most of the time. Do we ever want a vptr to be able to be used with any policy? virtual_ptr has a Policy template param and checks compatibility.
I think virtual_ptr is doing too much. What would it take to make it possible to implement virtual_ptr without any special support in the core library? - boost_openmethod_vptr provides a way to get the vtable - The rtti policy and/or static_cast should allow downcasting to the overrider parameter type. - The only thing that is clearly missing is the ability to indicate that virtual_ptr is a virtual parameter.
In YOMM2, virtual_ptr is an afterthought. While preparing OpenMethod for review, I struggled with explaining clearly and simply that `virtual_` should be used only in the base-method, and *not* in the overriders. YOMM2 beginners often struggled with that.
Can we strip off the virtual_?

    struct NAME ## _overrider<...> {
        template<typename T> using virtual_ = T;
        ...
    };

Actually, what if we make virtual a keyword handled by the macro:

    BOOST_OPENMETHOD(foo, (virtual T&), void)

It would look something like

    #define HAS_VIRTUAL_TESTvirtual ~,~
    #define HAS_VIRTUAL_I(a, b, r, ...) r
    #define HAS_VIRTUAL(arg) HAS_VIRTUAL_I(HAS_VIRTUAL_TEST ## arg, 1, 0, ~)
    #define REMOVE_VIRTUALvirtual
    #define REMOVE_VIRTUAL(arg) REMOVE_VIRTUAL ## arg

This won't quite work as is. It needs some work to adjust the order of macro expansion. There's also the problem that the preprocessor's view of the arguments is not the same as the compiler's. So we can insert a dummy argument into the signature, which can be stripped out by metaprogramming:

    void(next_parameter_is_virtual, T&)
Then I though, on top of that, virtual_ptr gets us dispatch in three instructions instead of nine (for 1-methods). Shouldn't it be the "golden path"? And virtual_ be demoted to an entry point into advanced stuff?
To get the performance benefit, virtual_ptr has to be integrated into the surrounding code. If we're doing that, then it should act more like a normal pointer, and the conversion from a reference (which makes some sense when it's being used as a method parameter type) is not such a good idea.
I tried using shared_ptr as a parameter type like this:
<snip>
I am surprised. I have a unit test for that (cast_args_shared_ptr_by_value). You didn't attach the example, here is what I tried:
Maybe I forgot to make Derived inherit from Base. I remember doing that at least once.
<snip>
By the way, do you have an opinion on allowing smart pointers as virtual parameters?
I think it should be allowed. Can I bring my own smart pointer?
<snip> throw_error_handler directly throws the error type as an exception, but openmethod_error does not inherit from std::exception. Even though this is perfectly legal, dealing with such exception can be rather painful.
I am not exception-phobic. Quite the opposite. But doing this would mean that the error classes would be different whether or not exceptions are enable. I tried to avoid ifdefs as much as I could...
That's not necessarily true. The actual type thrown can be an internal type that inherits from both std::exception and the error type.
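A sketch of that wrapping trick; openmethod_error below is just a stand-in for the library's error type, and the other names are invented:

    #include <exception>
    #include <utility>

    struct openmethod_error {}; // stand-in for the real error type

    template<class Error>
    struct error_exception : std::exception, Error {
        explicit error_exception(Error e) : Error(std::move(e)) {}
        auto what() const noexcept -> const char* override {
            return "boost.openmethod error"; // could format the Error instead
        }
    };

    template<class Error>
    [[noreturn]] void throw_error(Error e) {
        // Catchable both as std::exception& and as the specific Error type.
        throw error_exception<Error>(std::move(e));
    }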
<snip>
The simplest way to solve this is to allow a policy to customize is_base_of and is_abstact.
This is an interesting research path...abstract sub-categorization away from inheritance...I think that Clojure does something like that.
A few times people requested value-based dispatch for YOMM2. Like in CLOS. I have a design for this. But then the latest requester wants dispatch on *ranges* of values. I wonder if there is a general way of allowing such extensions, that is also practical.
Value-based dispatch can sort of be done now by registering a type for each distinct group of values and adjusting dynamic_type to distinguish them. This won't work well with virtual_ptr or any other type that caches the vptr, though. The main issue with ranges is that they can overlap in various ways. You'd need to look at all the ranges that might be used for a given parameter, and find all the subranges where the ranges intersect with each other. For a parameter type T, and a range specification R, the user would need to provide a function that partitions the set of values of type T into subsets.

    template<typename T>
    struct partition {
        // To determine the best match we need to know
        // whether an input set is a subset of another input set.
        // To determine whether an overrider matches we need to
        // know whether a particular subset is part of an input set.
        bitmatrix subset_info;
        // When dispatching we need to quickly find the right subset,
        // which can then be used to look up the
        std::function<std::size_t(T)> select_subset;
    };

    partition user_defined_make_partition(std::vector<R> input_sets);

If the sets are ranges, we can store the boundaries and do a lower_bound search.
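And a small sketch of the boundary search mentioned at the end, assuming the input sets have already been split into non-overlapping, half-open subranges:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // boundaries = {b0, b1, ..., bn} partitions the axis into subsets
    // (-inf, b0), [b0, b1), ..., [bn, +inf); upper_bound maps a value to the
    // index of the subset it falls into.
    auto select_subset(int value, const std::vector<int>& boundaries) -> std::size_t {
        return std::upper_bound(boundaries.begin(), boundaries.end(), value) -
            boundaries.begin();
    }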
<snip>
It can also generate results that are obviously silly (at least to a human) pretty easily.
Do you mean silly and incorrect, or just silly?
Just silly. I worked through what it would generate for several hierarchies and was a little surprised.
<snip>
I believe that in the absence of virtual inheritance, this is guaranteed to assign slots contiguously without leaving any holes.
I am puzzled by this remark, because YOMM2 and OpenMethod are a bit myopic regarding virtual and repeated inheritance.
I'm puzzled by it too. The problem is cycles in the undirected inheritance graph, and I forgot that virtual inheritance isn't the only way to get a cycle.
At the bottom of it, the library suffers from being able to have only one v-table per object. That is why repeated inheritance is not supported. And that is the reason for the contortions about slot allocation.
compiler.hpp:1147: *cls.static_vptr = gv_iter - cls.first_slot; This is quite scary. It's undefined behavior if it goes before the beginning of the vector. <snip>
Oh, right. I doubt any real program will crash on this, but you are right, this is UB, and it can be fixed, so let's fix it. It is probably just a matter of ordering the v-tables from lowest first_slot up, and perhaps from highest size up if first_slot is the same.
I think just ordering by first_slot is sufficient, as every vtable that has a slot before first_slot will come before it and every slot should be used in at least one vtable. What if the slots are reserved by an abstract base that has no concrete derived classes? Does that situation get filtered out somewhere earlier?
I guess that you will frown at the idea of using the same trick to avoid storing entries in the perfect hash table that come before the minimum value in the hash function's image.
Yeah, but it's less obvious how to fix it. In Christ, Steven Watanabe

Can we strip off the virtual_?
struct NAME ## _overrider<...> { template<typename T> using virtual_ = T; ... };
You mean allow virtual_ in the overrider's parameters? The macro gets:

    meet, (virtual_<Dog&> dog, virtual_<Cat&> cat)

...and it changes into:

    meet(Dog& dog, Cat& cat)

I don't see a way of doing this while carrying the parameters' names.
Actually, what if we make virtual a keyword handled by the macro:
BOOST_OPENMETHOD(foo, (virtual T&), void)
It would look something like

    #define HAS_VIRTUAL_TESTvirtual ~,~
    #define HAS_VIRTUAL_I(a, b, r, ...) r
    #define HAS_VIRTUAL(arg) HAS_VIRTUAL_I(HAS_VIRTUAL_TEST ## arg, 1, 0, ~)
    #define REMOVE_VIRTUALvirtual
    #define REMOVE_VIRTUAL(arg) REMOVE_VIRTUAL ## arg
This won't quite work as is. It needs some work to adjust the order of macro expansion.
It took me a while to see how this could work, and now it's taking me a while to see why it doesn't want to - see https://godbolt.org/z/16nensohc. Probably related to your "order of expansion" comment? It is appealing but it has problems too. Any comma in the parameter's types will break it. YOMM2 uses Boost.PP loops over macro arguments to build the forwarding function, so whenever there is a comma in the parameter types, you have to use tricks like BOOST_IDENTITY_TYPE. For OpenMethod, I moved as much macro-magic to TMP as I could.
To get the performance benefit, virtual_ptr has to be integrated into the surrounding code.
Yes, I think that there will be two usage patterns: casual use of open-methods, and open-method aware design, like the AST example using unique_virtual_ptrs.
If we're doing that, then it should act more like a normal pointer, and the conversion from a reference (which makes some sense when it's being used as a method parameter type) is not such a good idea.
Yes, it's a dilemma. My rationalization (i.e. the lie I tell myself) is that virtual_ptr(snoopy) is like &snoopy: it makes a pointer from a reference.
By the way, do you have an opinion on allowing smart pointers as virtual parameters?
I think it should be allowed. Can I bring my own smart pointer?
Yes.

    #include <iostream>

    #include <boost/openmethod.hpp>
    #include <boost/openmethod/compiler.hpp>

    #include <boost/smart_ptr/intrusive_ptr.hpp>
    #include <boost/smart_ptr/intrusive_ref_counter.hpp>

    template<typename Class, class Policy>
    struct boost::openmethod::virtual_traits<
        boost::intrusive_ptr<Class>, Policy> {
        using virtual_type = std::remove_cv_t<Class>;

        static auto peek(boost::intrusive_ptr<Class> arg) -> const Class& {
            return *arg;
        }

        template<class Other>
        using rebind = boost::intrusive_ptr<Other>;

        template<class Other>
        static auto cast(const boost::intrusive_ptr<Class>& obj) {
            if constexpr (detail::requires_dynamic_cast<
                              Class*, typename Other::element_type*>) {
                return boost::dynamic_pointer_cast<
                    typename virtual_traits<Other, Policy>::virtual_type>(obj);
            } else {
                return boost::static_pointer_cast<
                    typename virtual_traits<Other, Policy>::virtual_type>(obj);
            }
        }
    };

    struct Node : boost::intrusive_ref_counter<Node> {
        virtual ~Node() {}
    };

    struct Literal : Node {
        explicit Literal(int value) : value(value) {}
        int value;
    };

    BOOST_OPENMETHOD(value, (virtual_ptr<boost::intrusive_ptr<Node>>), int);

    BOOST_OPENMETHOD_OVERRIDE(
        value, (virtual_ptr<boost::intrusive_ptr<Literal>> node), int) {
        return node->value;
    }

    BOOST_OPENMETHOD_CLASSES(Node, Literal)

    auto main() -> int {
        boost::openmethod::initialize();
        boost::intrusive_ptr<Node> x = new Literal{42};
        std::cout << value(x) << "\n";

        return 0;
    }

...and another specialization for `const intrusive_ptr<Class>&`. I should probably make requires_dynamic_cast public.
// BOOST_OPENMETHOD_CLASSES(int, std::any, any_policy)
using boost::openmethod::detail::class_declaration_aux;
class_declaration_aux<any_policy, mp_list<std::any>> x;
class_declaration_aux<any_policy, mp_list<int, std::any>> y;
class_declaration_aux<any_policy, mp_list<double, std::any>> z;
Smashing! And the type_erasure one too. I did not realize we were so close to it already. Creating an issue...
What if the slots are reserved by an abstract base that has no concrete derived classes? Does that situation get filtered out somewhere earlier?
Not at the moment.

AMDG On 5/12/25 3:10 PM, Jean-Louis Leroy via Boost wrote:
Can we strip off the virtual_?
struct NAME ## _overrider<...> { template<typename T> using virtual_ = T; ... };
You mean allow virtual_ in the overrider's parameters? The macro gets: meet, (virtual_<Dog&> dog, virtual_<Cat&> cat)
...and it changes into: meet(Dog& dog, Cat& cat)
I don't see a way of doing this while carrying the parameter's names.
The above would make virtual_ into an alias that just expands to its argument, but this only works when virtual_ is used unqualified.
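A toy reconstruction of that idea, not the macro's real expansion; all the names are invented:

    namespace lib { template<typename T> struct virtual_; } // the real marker template

    struct example_overrider {
        // Inside this scope, unqualified virtual_ is the alias below, so a
        // parameter written as virtual_<int&> x is parsed as plain int& x and
        // keeps its name.
        template<typename T> using virtual_ = T;

        static void fn(virtual_<int&> x) { x = 42; }
    };

    // A qualified use such as lib::virtual_<int&> would bypass the alias and
    // name the marker template instead, which is the limitation noted above.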
<snip>
It took me a while to see how this could work, and now it's taking me a while to see why it doesn't want to - see https://godbolt.org/z/16nensohc. Probably related to your "order of expansion" comment?
It is appealing but it has problems too. Any comma in the parameter's types will break it.
Not necessarily. You snipped the trick I proposed to deal with that.
YOMM2 uses Boost.PP loops over macro arguments to build the forwarding function, so whenever there is a comma in the parameter types, you have to use tricks like BOOST_IDENTITY_TYPE. For OpenMethod, I moved as much macro-magic to TMP as I could.
    #define HAS_VIRTUAL_TESTvirtual ~,~
    #define HAS_VIRTUAL_II(a, b, r, ...) r
    #define HAS_VIRTUAL_I(...) HAS_VIRTUAL_II(__VA_ARGS__)
    #define HAS_VIRTUAL(arg) HAS_VIRTUAL_I(HAS_VIRTUAL_TEST ## arg, 1, 0, ~)
    #define REMOVE_VIRTUALvirtual
    #define REMOVE_VIRTUAL(arg) REMOVE_VIRTUAL ## arg
    #define IDENTITY(arg) arg
    #define MARK_VIRTUAL(arg) boost::openmethod::detail::virtual_next, REMOVE_VIRTUAL ## arg
    #define REMOVE_IF_VIRTUAL(arg) \
        BOOST_PP_IIF(HAS_VIRTUAL(arg), REMOVE_VIRTUAL, IDENTITY)(arg)
    #define MARK_IF_VIRTUAL(arg) \
        BOOST_PP_IIF(HAS_VIRTUAL(arg), MARK_VIRTUAL, IDENTITY)(arg)

    MARK_IF_VIRTUAL(virtual test)
    REMOVE_IF_VIRTUAL(virtual test)
    MARK_IF_VIRTUAL(nonvirtual)
    REMOVE_IF_VIRTUAL(nonvirtual)

With this

    (virtual B, int, virtual C), void

becomes

    void(boost::openmethod::detail::virtual_next, B, int, boost::openmethod::detail::virtual_next, C)

which can be turned into

    void(virtual_<B>, int, virtual_<C>)

by TMP. In the function definition, we simply remove the virtual, and everything else, including the argument name, is preserved. The main problem is that it blows up when a comma is followed by something that can't be token pasted:

    (::global, templ<I, (X + Y)>)

In Christ,
Steven Watanabe
participants (12)
- Andrzej Krzemienski
- Christian Mazakas
- Jean-Louis Leroy
- Joaquin M López Muñoz
- Klemens Morgenstern
- Matt Borland
- Peter Turcan
- Richard Hodges
- Ruben Perez
- Steven Watanabe
- Yannick Le Goc
- Дмитрий Архипов