Interest in Remote Procedure Call Library?

I have been playing with some code and recently developed a very efficient, extensible, and easy-to-use API for performing remote procedure calls (with multiple protocols). I am writing to gauge interest and get feedback on the user API. The native "protocol" is a custom combined TCP/UDP protocol using multicast for discovery, but it would be very easy to add a shared-memory solution, a D-Bus, XML-RPC, or CORBA back end to the user API.

Example:

```cpp
class SomeClass {
public:
    int  add( int a, int b )              { return a + b; }
    void sub( int a, int b, int* result ) { if (result) *result = a - b; }
    void inout( int& a )                  { a += 5; }
};

META_INTERFACE( SomeClass,
    METHOD(add)
    METHOD(sub)
    METHOD(inout) )

// server
SomeClass myclass;
Server myserver( &myclass, "my.named.service" );

// client
RemoteInterface<SomeClass> ri( "my.named.service" );
int result = ri.add(1,4);
assert( result == 5 );
ri.sub( 5, 1, &result );
assert( result == 4 );
ri.inout(result);
assert( result == 9 );
```

The API can support any type that is serializable. It also supports the signal/slot paradigm. SomeClass could have any number of base classes, both virtual and non-virtual. With respect to performance, it can perform 15,000+ synchronous (invoke, wait for return) operations per second on localhost/Linux. Asynchronous operations are capable of 500,000+ invokes/sec. The library doubles as a built-in reflection system allowing run-time inspection of member functions, parameter types, inheritance, etc.

I am working to convince my employer (a small company of 25) that contributing this library would bring more benefits than "costs" to the company. The reasons I think it is a good idea include:

1) Expose the company to more people (marketing)
2) We can give our customers an "open" API for communicating with our products
3) Offload some of the development / maintenance / support to the open source community
4) I like open source and have personal reasons for wanting it free and open.
So I need to get some more support to convince my employer that this will be a good move. Active interest would be a good sign. Other ideas on the positive aspects of contributing vs keeping it internal would also help me make my case. Thanks, Dan

----- Original Message ----- From: "Daniel Larimer" <dlarimer@gmail.com> To: <boost@lists.boost.org> Sent: Sunday, February 07, 2010 3:52 AM Subject: [boost] Interest in Remote Procedure Call Library?
I have been playing with some code and recently developed a very efficient, extensible, and easy to use API for performing remote procedure calls (with multiple protocols). I am writing to gauge interest and get feedback on the user API. The native "protocol" is a custom combined tcp/udp protocol using multi-cast for discovery, but it would be very easy to add a shared memory solution, a dbus, XML/RPC, CORBA, etc back end to the user api.
Example:
```cpp
class SomeClass {
public:
    int  add( int a, int b )              { return a + b; }
    void sub( int a, int b, int* result ) { if (result) *result = a - b; }
    void inout( int& a )                  { a += 5; }
};
```
META_INTERFACE( SomeClass, METHOD(add) METHOD(sub) METHOD(inout) )
Hi,

Does the library take care of overloading? Could you show an example? Does the library take care of exceptions? Could you show an example?

Best,
Vicente

The macro system currently does not provide a way to handle "overloading", though I would love to hear your ideas on how to achieve it. The macro currently generates something along the general lines of the code below, only using a lot of "typeof" and other template tricks to automatically determine the signatures:

```cpp
MetaClass {
    class methodNameMeta {
        RTN operator()( ARG, ... );
    } methodName;
    class anotherMethodMeta {
        RTN operator()( ARG, ... );
    } anotherMethod;
};
```

Thus allowing the syntax:

```cpp
MetaClass mc;
mc.methodName(...);
```

Effectively I would need to define the operator() for each of the overloaded versions, and thus I would need to get a member function pointer, which would require the developer to specify more than just the method name in the TMETHOD() macro. So I suppose I could probably achieve it with the following syntax:

```cpp
META_METHOD( ClassName,
    TMETHOD_OVERLOAD( methodName,
        SIGNATURE( int (float) ),
        SIGNATURE( void (float, float) ),
        ... ) )
```

As the code is currently written, the TMETHOD() macro is relatively short and simply derives from a template base class based upon the arity, parameters, and class name. I would either need to adopt multiple inheritance or place more "real code" inside the macro.

So, to answer your question, it is possible, though not as straightforward as the non-overloaded methods.

Thoughts?

Dan

On Feb 7, 2010, at 5:24 AM, vicente.botet wrote:
Does the library takes care of overloading? Could you show an example?

Hi, ----- Original Message ----- From: "Daniel Larimer" <dlarimer@gmail.com> To: <boost@lists.boost.org> Sent: Sunday, February 07, 2010 5:58 PM Subject: Re: [boost] Interest in Remote Procedure Call Library?
The macro system currently does not provide a way to handle "overloading" though I would love to hear your ideas on how to achieve it.
It is not easy to give you my ideas on how to achieve overloading in your library, as long as we don't have access to documentation and code. I hope your company will let you show it to us.
The macro currently generates something along the general idea of the code below only using a lot of "typeof" and other template tricks to automatically determine the signatures:
```cpp
MetaClass {
    class methodNameMeta {
        RTN operator()( ARG, ... );
    } methodName;
    class anotherMethodMeta {
        RTN operator()( ARG, ... );
    } anotherMethod;
};
```
Thus allowing the syntax:
```cpp
MetaClass mc;
mc.methodName(...);
```
Effectively I would need to define the operator() for each of the overloaded versions and thus I would need to get a member function pointer which would require the developer to specify more than just the method name in the TMETHOD() macro. So I suppose I could probably achieve it with the following syntax:
If you define an operator() per overloaded function, the user will not need to give more information than the name and the specific parameters, preserving the current C++ syntax. What other information do you need to select the correct overloading?
```cpp
META_METHOD( ClassName,
    TMETHOD_OVERLOAD( methodName,
        SIGNATURE( int (float) ),
        SIGNATURE( void (float, float) ),
        ... ) )
```
As the code is currently written the TMETHOD() macro is relatively short and simply derives from a template base class based upon the arity, parameters, and class name. I would either need to adopt multiple inheritance or place more "real code" inside the macro.
I cannot help you with this, as I do not have enough information. BTW, the interface you presented initially was really simple:

```cpp
META_INTERFACE( SomeClass, METHOD(add) METHOD(sub) METHOD(inout) )
```

and it was able to get the single function signature for add. I don't know which techniques you use to get this, but can't the same techniques get all the signatures sharing the same function name? Could you explain how you get the complete signature?
So, to answer your question, it is possible though not as straight forward as the non-overloaded methods.
Thoughts?
I'm sure that some C++ developers will not use the library if they cannot overload the member functions.
Dan
On Feb 7, 2010, at 5:24 AM, vicente.botet wrote:
Does the library takes care of overloading? Could you show an example?
Please don't top-post; it is against the guidelines of this ML, as we lose the context. Best, Vicente

On Sun, Feb 7, 2010 at 1:08 PM, vicente.botet <vicente.botet@wanadoo.fr> wrote:
/* snip RPC stuff */
For note, I created an RPC system a while ago (of which I donated some of the code to RakNet, but my version is still more complete). It uses no macros, it handles overloads fine, and it can be plugged into any system as well (I had a networking interface, as well as an interface into a little custom scripting language). It was used like this:

```cpp
// Some functions and a member function to link in
void _myFunc(someClass *m1, anotherClass &m2, yetAnotherClass m3,
             int i, float f, RPCInfo *rpc, std::string s) { /* do stuff */ }

class myClass {
public:
    void _myMemberFunc(int i, float f);
    static RPCHandler::type_of_callback<myClass::_myMemberFunc>::type myMemberFunc;
};

// You "register" it to the RPC system like this:
RPCHandler global_rpc;
auto myFunc = global_rpc.register(_myFunc, 0); // Either globally

void someFunctionSomewhere(RPCHandler &rpc) {
    auto myLocalFunc = rpc.register(_myFunc); // or locally

    // Registering a member function is done the same way.  The second
    // parameter, the id, can be an int or a string, whichever is easier
    // for your situation.
    myClass::myMemberFunc = rpc.register(myClass::_myMemberFunc, "theMemberFunc");
}
```

To use it you can set flags in the RPC class as to how to route that callback: whether it is local only, remote only, both, etc. And to use it, you just call the returned object from the register function like any other function:

```cpp
myFunc(m1, m2, m3, 42, 3.14, "Hello World");
```

If that needs to be called on a remote system, then it will be serialized up, sent out, and called remotely. If it is supposed to be called locally, then it will just be called locally right then and there.

```cpp
myClass m;
myClass::myMemberFunc(m, 42, 3.14); // You would probably make a wrapper around this though...
```

RPC calls can be registered at any time and can be unregistered, and with bind they can be bound to specific objects and so forth as well.
But the register call returns a specialized callback struct (that you can stuff into a boost::function if you do not care about overloading, or that can be used as-is via the type_of_callback thing or auto).

I had talked about this on Boost before and hinted at making it a library, but no interest was expressed. I still have my code; it is well tested and in the wild (through RakNet), so if anyone is interested then I can clean it up, write documentation, and submit it.

The way it works is just very heavy use of Boost.Fusion and function traits and a few other things. In the RakNet version I serialized things out to its bitstream; otherwise I use Boost.Serialization (which can be overridden to use yet another thing if you want, like I do for my scripting language version). If a parameter is by value, it just packages it up according to Boost.Serialization (or whatever you use internally). If it is a reference or pointer it still follows the same serialization pattern, unless a few conditions are met. If it is a pointer or reference to an RPCInfo type, then it passes in information about the call, like whether it is remote or local, and a few other things. In the RakNet version I added another overload so that if a class was passed by pointer/reference and it was a subclass of NetworkIDObject, then instead of serializing the object it just sends its network ID number and looks up that same object on the remote system (perfect for *this and any other passed-in parameters). It is easy to add new overrides too, and it threw a compile error in a specific spot if the thing could not be serialized and/or did not meet various other conditions, all at compile time.

But yes, if anyone wants the code, I can easily give it. There is no documentation and it is a bit messy, but it works quite well: no macros needed, no needing to register types, etc.
And as you may notice, mine will silently drop return values; it is asynchronous only. Functionality like futures or synchronized calls could easily be added (although I would prefer async futures or async callbacks personally). And let me say, Boost.Fusion is so awesome!

Overmind,

Thanks for offering your code. I can appreciate the desire not to use macros, but I believe that, carefully used, they can make a world of difference to the end user and eliminate many kinds of "typo bugs" that don't show up until runtime.

Since I am new to this list, I will try not to foist too much of my "world view" on how an API should be developed, but I firmly believe that syntax matters and that the end user (average C++ college grads who know little of templates, member function pointers, etc.) must be able to use the API. At my company I am the resident expert in C++, working with a bunch of mechanical engineers whom I have been teaching C++ "on the job". So I prefer the macros. If there is some amazing way to achieve the same without macros then I will support it, but nothing keeps the reflection synchronized with method names better than a macro. Certainly the macros should just build upon the "real API", and thus there should be some considerable effort made to make the "real API" as friendly as possible. Unfortunately, I am not as skilled with the preprocessor as I would like to be, so my limited ability to manipulate tokens with the preprocessor is often compensated for by code structure.

Perhaps the discussion on what should be in an RPC library should begin with requirements:

1) Minimize code/syntax required to expose a class to the "transport"
2) Ability to expose any number of methods (overloaded or not)
3) Fully support multiple/virtual inheritance of interfaces
4) Support Boost.Signals
5) Support Boost.Serialization. On this note, I have some concern over performance and "predictability" for interfacing with other code not using Boost.Serialization.
6) Ability to implement any "protocol" (TCP/UDP/shared memory/SOAP/etc.)
7) Support for exceptions. This requires a factory that is able to dynamically create the exception type from serialized data. Normal return values do not need this, because it is known at compile time what the serialization of the return type is; with exceptions, however, it could be "anything". For this I have used boost::any combined with a 32-bit hash of the class name.
8) Some kind of minimum performance metric? In my opinion the library should perform on par with the best RPC libraries available, while providing a simple, cross-platform, C++-native solution that does not depend upon code generation.

As an added bonus, the same system should work to expose any object to any scripting engine, because all of the required meta-information would be available. If anyone else on this list has ideas on what kind of requirements / API (macros / no macros) is best for RPC, please chime in.

In reality there are two libraries here: one for RPC, and one for "reflection" and creation of proxy objects. The reflection library could use some of the features of Boost.Mirror instead of its own mechanism for getting the name of a type:

```cpp
TYPESTRING(type)            // register the type in a header
TypeString<type>::str()     // get the const char* string "type"
TypeString<type*>::str()    // get the const char* string "type*"
```

Clearly the idea of a reflection library in Boost is not a new one. What are the main drivers that have prevented any of the proposed reflection techniques from being adopted? Is there anything I am doing / proposing that is clearly taboo for Boost libraries?

Dan

On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
Thanks for offering your code. I can appreciate the desire not to use macros, but I believe that carefully used they can make a world of difference to the end user and eliminate many kinds of "typo bugs" that don't show up until runtime.
Oh, it is not a desire not to use macros; it is just that they were unnecessary in that design. I use macros all the time and have no qualms about that; the preprocessor is my friend. :)

That is the thing about my design: there is no need to reflect anything. It uses the function type decomposed into an MPL vector (Boost.Function_traits), which then builds up a recursive callback (Boost.Fusion) to create a standalone callback function that handles everything from base function calling to (de)serialization (more Boost.Fusion), all with the single call to register, whose signature is something like this:

```cpp
template<typename FunctionType>
typename RPCHandler::type_of_callback<FunctionType>::type
RPCHandler::register(FunctionType func);
```

Internally it builds up a recursive templated callback (which is optimized rather completely out into a single function call), and all the work is done at compile time: type-checking, everything. There was just no need for macros at all. A reflection system of any sort is rather worthless for this.

On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
Since I am new to this list, I will try not to foist too much of my "world view" on how an API should be developed, but I firmly believe that syntax matters and that the end user (average C++ college grads who know little of templates, member function pointers, etc.) can use the API. At my company I am the resident expert in C++ and working with a bunch of mechanical engineers whom I have been teaching C++ "on the job". So I prefer the macros. If there is some amazing way to achieve the same without macros then I will support it. But nothing keeps the reflection synchronized with method names better than a macro.
Exactly, I love simple syntax, that is why my system required just one line of code to register an RPC call, whether a standalone function, or a simple/complex, short/long-lived class. No need to reflect anything, no huge amounts of code. See my example above, it is very short, *very* efficient, easy to use, and has been in heavy use for multiple years now in a variety of products, including some class A games. On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
Certainly the macros should just build upon the "real API", and thus there should be some considerable effort made to make the "real API" as friendly as possible. Unfortunately, I am not as skilled with the preprocessor as I would like to be, and so often my limited ability to manipulate tokens with the preprocessor is compensated for by code structure.
Exactly. I actually did have one macro in mine; it was just something like this though:

```cpp
#define REGISTER_RPC(RPC, FUNC, NAME) \
    RPCHandler::type_of_callback<FUNC>::type NAME = (RPC).register(FUNC, #NAME);
```

Or something like that. It just cut the short amount of typing down to something shorter, but it was completely unnecessary.
Perhaps the discussion on what should be in an RPC library should begin with requirements.
1) Minimize code/syntax required to expose a class to the "transport"
Exactly. Mine requires one line of code per exposed function; yours requires more. I am still not sure why you need to reflect it... On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
2) Ability to expose any number of methods (overloaded or not)
Yep, mine supports any number, including overloaded ones with no issue. On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
3) Fully support multiple/virtual inheritance of interfaces
Yep, mine supports that with absolutely no problem as well. On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
4) Support Boost.Signals
I do not see why mine would have any problems with this, though it is untested; it is just a simple functor call, so it should work with no problem. On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
5) Support Boost.Serialization - on this note, I have some concern over performance and "predictability" for interfacing with other code not using Boost.Serialization.
Mine uses Boost.Serialization for my network version, a custom-made thing for my script link, and RakNet::Bitstream for my RakNet version, it is very easy to replace that at will. On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
6) Ability to implement any "protocol" (TCP/UDP/SharedMemory/SOAP/etc)
Yep, mine uses Boost.ASIO by default (thus it supports TCP/UDP/shared memory/etc. out of the box), it supports an interface into my little scripting language, and it supports interfacing into the highly efficient RakNet networking library. So yes, it is very easy to implement other things on top of it. On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
7) Support for exceptions - this requires a factory that is able to dynamically create the exception type from serialized data. Normal return values do not need this because it is known at compile time what the serialization of the return type is; however, with exceptions it could be "anything". For this I have used boost::any combined with a 32 bit hash of the class name.
Mine does not support return types; it is purely asynchronous, so this is not implemented. However, if futures or callbacks were implemented (I hate synchronous network calls with a passion, so do not expect me to ever make such a thing; callbacks and futures for sure! someone else could add it though), then it might be worth setting such a thing up, although I see an easier way of doing it thanks to Boost.Fusion that would not require a factory and would be more efficient (I have this idea thanks to my work in Boost.Spirit, oddly enough; Joel and Hartmut are awesome). On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
8) Some kind of minimum performance metric? In my opinion the library should perform on par with the best RPC libraries available, while providing a simple, cross-platform, C++-native solution that does not depend upon code generation.
Mine was made for pure efficiency. RakNet already had an RPC system (two, actually); mine replaced both due to a few things: it executed faster, it was type-safe, it was more powerful, it allowed for passing RPC-related data more easily, etc. Thus, mine is purely cross-platform (RakNet runs on a lot of systems) and purely native C++, with no code generation or anything: pure C++ template work. On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
As an added bonus, the same system should work to expose any object to any scripting engine because all of the required meta-information would be available.
I am not sure what meta-information you are talking about, but as stated, I originally made mine as an RPC callback system into a little scripting language. On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
In reality there are two libraries here. One for RPC and one for "reflection" and creation of proxy objects. The reflection library could use some of the features of Boost.Mirror instead of its own mechanism for getting the name of a type:
I do not see how RPC and reflection overlap though, at all, should those not be split into two libraries? On Sun, Feb 7, 2010 at 6:26 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
Clearly the idea of a reflection library in boost is not a new one. What are the main drivers that have prevented any of the proposed reflection techniques from being adopted? Is there anything I am doing / proposing that is clearly taboo for boost libraries?
I am curious about this too, a good reflection system for C++ (I still have no clue why it does not have one built-in, would simplify so much of Boost's work) would be quite useful, but I still wonder why you have one in an RPC library.

I am curious about this too, a good reflection system for C++ (I still have no clue why it does not have one built-in, would simplify so much of Boost's work) would be quite useful, but I still wonder why you have one in an RPC library.
I include one in my RPC library because the "API" of the server is auto-discovered by the client, which pairs up method calls based upon the exposed function signature. I allow discovery of network services by both name AND "type/interface". If both sides were implemented in C++ then I would be fine and would not need the method names; however, the underlying protocol is implemented in LabVIEW and in C for embedded targets. This ability to publish your "API" is critical for CORBA, D-Bus, and other RPC systems. Effectively the protocol is "self-documenting" and "auto-discoverable", with the intent of making it scriptable. If you are implementing an XML-based RPC then you would want the signature to be human-readable... but I suppose all of that functionality could be handled by the delegates.

In reality though, the "cost" of adding "reflection" to the RPC interface amounts to 2 lines of code to expose the #NAME of the method passed to the function macro and the interface macro. You are right, perhaps this is a separate requirement from RPC; however, from the perspective of my fellow developers, having to register the methods once for RPC and a second time for scripting would be annoying.

Another requirement I should add is "non-obtrusive". The developer using the API should not have to do anything more than say "expose this instance to the network under this name", beyond specifying the meta-information one time. Under your system, the user would have to define multiple RPCHandlers and register the same functions twice to get two instances. Thus they would probably create a function to "create" their global rpc. Correct me if I am wrong, but wouldn't your user need to register all of the methods in the interface twice if they wanted to expose it to RPC AND to a scripting engine?

I have seen Boost.Fusion mentioned several times and started to look into it. An amazing concept! I started playing with BOOST_PP today and it is amazing as well. Reducing code and eliminating bugs!
With respect to synchronous vs asynchronous, it is really the end user's choice. The previous version of my library was "single-threaded" and thus could not be synchronous unless the user waited by recursively calling the main event loop, which caused all kinds of re-entrancy problems. The new version is multi-threaded, so the user can "block" until data comes in and boost::asio will still process the packets. Feedback from my co-workers is that synchronous is much easier, and that creating a chain of handlers is a lot of work if you have N operations that each depend upon the completion of the previous operation (we need libdispatch on Windows!). So any library should support both. One-time "start up" calls can be synchronous while mission-critical calls are not. Plus, the user has the choice to "invoke without return", "invoke without guarantee", or "invoke without order". This allows the underlying transport to pick between TCP/UDP/sequence numbers/acks/etc.

Based on the initial response it appears that there are about 3-4 people interested in the topic. I still need to gather support for open-sourcing our code (something I want to do as primary developer). So if anyone else is interested in working on such a library and preparing it for Boost, speak up.

Dan

On Feb 7, 2010, at 9:29 PM, OvermindDL1 wrote:
I am curious about this too, a good reflection system for C++ (I still have no clue why it does not have one built-in, would simplify so much of Boost's work) would be quite useful, but I still wonder why you have one in an RPC library.

OvermindDL1 wrote:
```cpp
class myClass {
public:
    void _myMemberFunc(int i, float f);
    static RPCHandler::type_of_callback<myClass::_myMemberFunc>::type myMemberFunc;
};
```
Hello, do you defer the completion of the full signature of the member function until it's called, or do you acquire it through some other trickery? Thanks, Cheers, Rutger

On Mon, Feb 8, 2010 at 7:25 AM, Rutger ter Borg <rutger@terborg.net> wrote:
OvermindDL1 wrote:
```cpp
class myClass {
public:
    void _myMemberFunc(int i, float f);
    static RPCHandler::type_of_callback<myClass::_myMemberFunc>::type myMemberFunc;
};
```
Hello,
do you defer the completion of the full signature of the member function until it's called, or do you acquire it through some other trickery?
I do defer it until it is called (Boost.Bind-ish); fusion ensures that everything matches (with 'near'-perfect forwarding to boot).

Basically, if you call the functor that is returned from register, fusion packages up all the arguments into a fusion vector. Then, if it needs to take the path that requires it to be called directly, it is passed to fusion's invoke and called directly (this call path, even if not taken, still ensures that the type signature matches before it is ever serialized to be sent out: a compile-time type check). And/or, if it needs to take the path where it is packaged up, it is sent to a recursive static struct::apply function (of the fusion design), where each recursive call operates on the first parameter that was passed into the function, then the next, then the next, etc., until they are all consumed; the fallback apply function is then called, which takes the completed packaged-up data (this apply struct is what you can customize to use your own serialization technique or to switch to a different form of communication) and either transmits it on a network, passes it to the scripting engine, whatever, along with the earlier registered ID name.

If it receives a remote request to run it locally, say from ASIO, then the data chunk and/or stream is passed in; it looks up the ID in an unordered_map and gets a boost::function that it calls with the data chunk and/or stream. That boost::function is originally created at the time of the register call (and is also contained in the struct that is returned from the register call, to skip a map lookup if you use it directly), but it contains another fusion apply struct template that specializes on the original passed-in function, deserializes each chunk in turn, and then fusion-invokes the actual function with the deserialized fusion vector of the arguments.

OvermindDL1 wrote:
That boost::function is originally created at the time of the register call (and is also contained in the struct returned from register, so you can skip the map lookup if you use it directly). It contains another fusion apply struct template that specializes on the originally passed-in function, deserializes each chunk in turn, and then fusion-invokes the actual function with the deserialized fusion vector of arguments.
Thanks, that's much clearer already. How does the boost::function object know what kind of fusion::vector to deserialize? Do you keep some kind of extra mapping from argument type to function type? Cheers, Rutger

On Tue, Feb 9, 2010 at 2:12 AM, Rutger ter Borg <rutger@terborg.net> wrote:
OvermindDL1 wrote:
That boost::function is originally created at the time of the register call (and is also contained in the struct returned from register, so you can skip the map lookup if you use it directly). It contains another fusion apply struct template that specializes on the originally passed-in function, deserializes each chunk in turn, and then fusion-invokes the actual function with the deserialized fusion vector of arguments.
Thanks, that's much clearer already. How does the boost::function object know what kind of fusion::vector to deserialize? Do you keep some kind of extra mapping from argument type to function type?
Yes, the boost::function just holds a pointer to a templated struct that is specialized on the function itself; that is how the struct knows how to do everything, kind of like a subclass with a single virtual function.
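The "templated struct behind a boost::function" idea is plain type erasure. A minimal std-only sketch might look like the following; all names here are hypothetical, and a text stream stands in for the BitStream used in the real code.

```cpp
#include <cassert>
#include <functional>
#include <sstream>
#include <string>
#include <unordered_map>

// id -> type-erased invoker. Each stored std::function hides a closure
// that was generated from the concrete function type at register time,
// so it knows exactly what to deserialize.
std::unordered_map<std::string, std::function<void(std::istream&)>> invokers;

// Registration sees the full signature R(A, B); the closure it builds
// captures that knowledge, "like a subclass with a single virtual
// function". The result is written into `out` for demonstration.
template<typename R, typename A, typename B>
void register_fn(const std::string& id, R (*f)(A, B), R& out) {
    invokers[id] = [f, &out](std::istream& in) {
        A a; B b;
        in >> a >> b;   // deserialize each argument in turn
        out = f(a, b);  // then invoke the real function
    };
}
```

The map itself only ever sees `void(std::istream&)`; the per-signature knowledge lives inside the erased closure, which is the same trick the fusion-based struct performs.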

OvermindDL1 wrote:
Yes, the boost::function just holds a pointer to a templated struct that is specialized on the function itself; that is how the struct knows how to do everything, kind of like a subclass with a single virtual function.
Yes, I guess I got that part. My question was actually: when is the deserialization code generated? Due to the deferred calling, it can only be done at the point of function invocation, i.e., without an actual function call, there's no deserialization code? On the receiving side, you have

    id -> boost::function

where this function knows what to do based on the arguments passed on the sending side. I.e., how does the receiving side know what to deserialize? If you do

    auto my_func = register.get( func_type, id );

and later on

    my_func( arg1, arg2, arg3 );

doesn't this require somehow registering the fusion vector for the arguments, too? I mean, if you generate somehow

    boost::function< void( void*, std::size_t ) > m_type_erased_functor = my_func.magic_trick();

then, at that point, my_func hasn't seen { arg1, arg2, arg3 } yet. Is there another trick to tackle this? Cheers, Rutger

On Tue, Feb 9, 2010 at 3:19 AM, Rutger ter Borg <rutger@terborg.net> wrote:
OvermindDL1 wrote:
Yes, the boost::function just holds a pointer to a templated struct that is specialized on the function itself; that is how the struct knows how to do everything, kind of like a subclass with a single virtual function.
Yes, I guess I got that part. My question was actually: when is the deserialization code generated? Due to the deferred calling, it can only be done at the point of function invocation, i.e., without an actual function call, there's no deserialization code?
It deserializes based on the actual function signature, not based on the input stream (although with RakNet, Boost.Serialization, and my scripting engine it was all type-safe both ways, and if it did not match I threw an exception).

On Tue, Feb 9, 2010 at 3:19 AM, Rutger ter Borg <rutger@terborg.net> wrote:
On the receiving side, you have
id -> boost::function
where this function knows what to do based on the arguments passed on the sending side. I.e., how does the receiving side know what to deserialize? If you do
auto my_func = register.get( func_type, id );
and later on
my_func( arg1, arg2, arg3 );
doesn't this require somehow registering the fusion vector for the arguments, too? I mean, if you generate somehow
boost::function< void( void*, std::size_t ) > m_type_erased_functor = my_func.magic_trick();
then, at that point, my_func hasn't seen { arg1, arg2, arg3 } yet. Is there another trick to tackle this?
Think of it this way (in pseudo-code in some cases, copy/pasted code in others):

    typedef boost::function<void(RakNet::BitStream &)> invoker_function_t;

    template<typename Function> class RPC_Caller;

    class RPC_Caller_Information
    {
    protected:
        friend class ODL1RPC;
        std::string name;
        invoker_function_t invoker;
    public:
        ODL1RPC *odl1Rpc;
    };

    typedef std::map<std::string, RPC_Caller_Information> invoker_function_map_t;
    invoker_function_map_t m_Invokers;

    template<typename Function>
    class RPC_Caller
    {
    public:
        friend class ::RakNet::ODL1RPC;
        ::RakNet::ODL1RPC::RPC_Caller_Information information;
        Function func;

        typedef void result_type;

        template<class Seq>
        void operator()(Seq &s) const
        {
            if (/*somecondition*/true) // Call locally
            {
                boost::fusion::invoke(func, s);
            }
            if (/*somecondition*/true) // Call remotely
            {
                BitStream bs;
                _invoker<Function>::reduce(bs, s); // serialize the sequence
                information.odl1Rpc->__SendCall(bs, information);
            }
        }
    };

    template<
        typename Function,
        class From = typename boost::mpl::begin<
            boost::function_types::parameter_types<Function> >::type,
        class To = typename boost::mpl::end<
            boost::function_types::parameter_types<Function> >::type,
        class Enable = void
    >
    struct _invoker
    {
        // add an argument to a Fusion cons-list for each parameter type
        template<typename Args>
        static inline void apply(Function func, RakNet::BitStream &bs, Args const &args)
        {
            typedef typename boost::mpl::deref<From>::type arg_type;
            typedef typename boost::mpl::next<From>::type next_iter_type;
            arg_type t;
            t << bs;
            ::RakNet::ODL1RPC::_invoker<Function, next_iter_type, To>::apply(
                func, bs, boost::fusion::push_back(args, t));
        }

        // pop an argument off the front and serialize it, one per step
        template<typename Args>
        static inline void reduce(RakNet::BitStream &bs, Args const &args)
        {
            typedef typename boost::mpl::next<From>::type next_iter_type;
            boost::fusion::at_c<0>(args) >> bs;
            ::RakNet::ODL1RPC::_invoker<Function, next_iter_type, To>::reduce(
                bs, boost::fusion::pop_front(args));
        }
    };
    template<typename Function, class To>
    struct _invoker<Function, To, To>
    {
        // the argument list is complete, now call the function
        template<typename Args>
        static inline void apply(Function func, RakNet::BitStream &, Args const &args)
        {
            boost::fusion::invoke(func, args);
        }

        template<typename Args>
        static inline void reduce(RakNet::BitStream &, Args const &)
        {
            // Do nothing
        }
    };

    template<class Function>
    boost::fusion::unfused_generic< RPC_Caller<Function> >
    register_function(std::string const &name, Function f, unsigned int flags)
    {
        // Yes, the this-> is required to keep the template params dependent
        RPC_Caller_Information &information = this->m_Invokers[name];
        information.odl1Rpc = this;
        information.name = name;
        information.invoker = boost::bind(
            &_invoker<Function>::template apply<boost::fusion::nil>,
            f, _1, boost::fusion::nil());
        RPC_Caller<Function> toReturn;
        toReturn.information = information;
        toReturn.func = f;
        return boost::fusion::unfused_generic< RPC_Caller<Function> >(toReturn);
    }

That was my very first RakNet version, written as a demonstration of just how much simpler that code made his RPC system. It eventually evolved into the huge number of specializations and new features that you see in RPC3 in RakNet today. I based all my code on some example somewhere in Boost, fusion maybe, perhaps function_traits... This was back when I first learned Boost.Fusion and such (it was a newish library then); looking back at my old code, I could write it so much better now with even better error reporting. But yes, as you can see, it is quite simple, and you should be able to see how easy it is to add specializations and new features.
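The MPL-iterator recursion above (deserialize one argument per step, growing the argument list, then invoke when the parameter list is exhausted) has a compact C++11 analogue using variadic templates. This is a sketch, not the RakNet code; a std::istream stands in for the BitStream.

```cpp
#include <cassert>
#include <istream>
#include <sstream>

// Primary template: Rest... is the list of parameters still to read.
template<typename R, typename... Rest>
struct invoker;

// Terminal case: the argument list is complete, now call the function.
template<typename R>
struct invoker<R> {
    template<typename F, typename... Done>
    static R apply(F f, std::istream&, Done... done) {
        return f(done...);
    }
};

// Recursive case: deserialize the next parameter, append it to the
// arguments collected so far, and recurse on the remaining list.
template<typename R, typename First, typename... Rest>
struct invoker<R, First, Rest...> {
    template<typename F, typename... Done>
    static R apply(F f, std::istream& in, Done... done) {
        First t{};
        in >> t;
        return invoker<R, Rest...>::apply(f, in, done..., t);
    }
};

int mul(int a, int b) { return a * b; }
```

Each recursion step plays the role of one `_invoker::apply` instantiation; the terminal specialization corresponds to `_invoker<Function, To, To>`.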

OvermindDL1 wrote:
    template<
        typename Function,
        class From = typename boost::mpl::begin<
            boost::function_types::parameter_types<Function> >::type,
        class To = typename boost::mpl::end<
            boost::function_types::parameter_types<Function> >::type,
        class Enable = void
    >
Thanks. I guess I didn't know that "&class::member_function" is enough to get a full signature when there's no overload of that function in the class. I.e., that function_types will just give the full signature in that case. Cheers, Rutger

On Feb 7, 2010, at 3:08 PM, vicente.botet wrote:
BTW, the interface you presented initially was really simple.
META_INTERFACE( SomeClass, METHOD(add) METHOD(sub) METHOD(inout) )
And it was able to get the single function signature for add. I don't know which techniques you use to get this, but can't the same techniques get all the signatures sharing the same function name? Could you explain how you get the complete signature?
I didn't see this question initially... but to answer how I accomplished that "black magic": given macro parameters CLASS and METHOD_NAME, the signature can be determined using Boost.Typeof and the following trick:

    BOOST_TYPEOF( testFunc(&CLASS::METHOD_NAME) )

    template<typename R BOOST_PP_ENUM_TRAILING_PARAMS(n, typename A), typename C>
    boost::function_traits<R(BOOST_PP_ENUM_PARAMS(n, A))>
    testFunc( R (C::*)(BOOST_PP_ENUM_PARAMS(n, A)) );

Then, using the equivalent of Boost.Mirror's get_typename<T>::str(), I can deduce the string equivalent of each parameter type. The reason I cannot do it with overloads is that the call to testFunc(&X::Y) becomes ambiguous when there are two possible matches; additionally, if the user only wanted to register one of the overloads and not the other, they would have no means to identify which ones they want.
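For what it's worth, in C++11 the same trick no longer needs BOOST_TYPEOF or the preprocessor repetition: decltype on a declared-but-undefined helper recovers the full signature from the member-function pointer, and it fails to compile for overloaded names for exactly the ambiguity reason described above. A sketch (hypothetical names):

```cpp
#include <cassert>
#include <type_traits>

// Declared only, never defined; used purely inside decltype. Maps a
// member-function pointer R (C::*)(A...) to the plain signature R(A...)
// expressed as a function pointer type.
template<typename R, typename C, typename... A>
R (*testFunc(R (C::*)(A...)))(A...);

struct SomeClass { int add(int a, int b) { return a + b; } };

// sig is int (*)(int, int): the full signature recovered from the
// unambiguous member pointer &SomeClass::add.
using sig = decltype(testFunc(&SomeClass::add));
static_assert(std::is_same<sig, int (*)(int, int)>::value,
              "full signature recovered from the member pointer");
```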

On Sun, Feb 7, 2010 at 3:52 AM, Daniel Larimer <dlarimer@gmail.com> wrote:
I have been playing with some code and recently developed a very efficient, extensible, and easy to use API for performing remote procedure calls (with multiple protocols). I am writing to gauge interest and get feedback on the user API. The native "protocol" is a custom combined tcp/udp protocol using multi-cast for discovery, but it would be very easy to add a shared memory solution, a dbus, XML/RPC, CORBA, etc back end to the user api.
[snip]
META_INTERFACE( SomeClass, METHOD(add) METHOD(sub) METHOD(inout) )
Concerning this meta-interface declaration you might want to take a look at the Mirror library, which can provide a lot of meta-data at both compile-time and run-time. Both the C++98 and C++0x versions are in the Boost Vault and the C++0x version can also be found on sourceforge: http://sourceforge.net/projects/mirror-lib/ the docs for the new version can be found here (though they are nowhere near finished, yet) [snip]

On Sun, Feb 7, 2010 at 12:52 PM, Matus Chochlik <chochlik@gmail.com> wrote:
On Sun, Feb 7, 2010 at 3:52 AM, Daniel Larimer <dlarimer@gmail.com> wrote:
I have been playing with some code and recently developed a very efficient, extensible, and easy to use API for performing remote procedure calls (with multiple protocols). I am writing to gauge interest and get feedback on the user API. The native "protocol" is a custom combined tcp/udp protocol using multi-cast for discovery, but it would be very easy to add a shared memory solution, a dbus, XML/RPC, CORBA, etc back end to the user api.
[snip]
META_INTERFACE( SomeClass, METHOD(add) METHOD(sub) METHOD(inout) )
Concerning this meta-interface declaration you might want to take a look at the Mirror library, which can provide a lot of meta-data at both compile-time and run-time.
Both the C++98 and C++0x versions are in the Boost Vault and the C++0x version can be also found on sourceforge: http://sourceforge.net/projects/mirror-lib/
the docs for the new version can be found here (sorry, though they are nowhere near finished yet): http://kifri.fri.uniza.sk/~chochlik/mirror-lib/html/
[snip]
BR matus

Mirror looks interesting and for some reason never showed up in my Google searches! After reading the documentation it was unclear to me how one would use the "mirrored" interface other than to query the text description of the interface component types/names. For example, there was no abstract (virtual) way of invoking a method using boost::any, void*, or serialized parameters. Perhaps I missed it. It also appears that Mirror requires C++0x, which limits its application.

With respect to exceptions, that is certainly supported. The code below would work "as expected" and the "divide by zero" exception would be marshaled back:

    RemoteInterface<SomeClass> ri("my.named.service");
    try {
        float rtn = ri.divide( 5, 0 );
    } catch ( const std::exception& e ) { ... }

The asynchronous interface would be:

    future<float> rtn = ri.divide( 5, 0, AsyncFlag );
    ri.divide( 5, 0, AsyncFlag | NoReturn );
    future<float> rtn2 = ri.divide( 5, 0, AsyncFlag );

    try {
        if( !rtn.wait(timeout) ) { /* err... timeout */ }
    } catch ( const std::exception& e ) { /* divide by zero caught async */ }

    float auto_cast = rtn2;

Earlier versions of the library used a meta-object system similar to Mirror and I ended up with "slow" code that looked something like:

    ri.invoke( "float divide(float,float)", 5, 0 );

That approach was error prone (no compile-time checks on the signature). So I replaced it with:

    ri.invoke( SLOT(SomeClass,divide), 5, 0 );

But I realized I was providing all of the compile-time information necessary to optimize the serialization, yet the implementation was still dependent upon polymorphic functions dealing with "generic" parameters. Plus, my ultimate goal was to make using the remote object as seamless and similar to using the local object as possible. Thus what I really needed was the concept of a "proxy" object that provides all of the same methods as the "real object" and then provides a means to "delegate" the actual invocation of those methods.
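A minimal, hand-written sketch of that proxy/delegate split might look like the following; the proxy and delegate types here are stand-ins for what a META_INTERFACE-style macro would generate, and all names are hypothetical.

```cpp
#include <cassert>

struct SomeClass { int add(int a, int b) { return a + b; } };

// The default delegate just dispatches to the real object in the same
// memory space; a remote delegate would serialize the call instead.
struct default_delegate {
    SomeClass* obj;

    template<typename R, typename... A>
    R invoke(R (SomeClass::*m)(A...), A... args) {
        return (obj->*m)(args...);  // direct local dispatch
    }
};

// What the macro would generate: a proxy exposing the same methods as
// SomeClass, each forwarding through the delegate.
template<typename Delegate>
struct some_class_proxy {
    Delegate d;
    int add(int a, int b) { return d.invoke(&SomeClass::add, a, b); }
};
```

Swapping the template parameter for a remote delegate changes the dispatch mechanism without touching the proxy's interface, which is the seamless local/remote experience described above.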
You will notice that whether it is DBus, CORBA, etc., they all have a fundamental layer that looks something like:

    ri.invoke( "float divide(float,float)", 5, 0 );

and then they use code generation or manual implementation to create a "proxy class" with real methods that invoke the fundamental layer, hiding the messy invocation details from the user of the remote object. The META_INTERFACE(...) macro defines a generic "proxy template" that can then be specialized for different kinds of back ends via a delegate template parameter:

    template<typename SomeClass, typename SomeDelegate = DefaultDelegate>
    class MetaInterface : ...

The default delegate simply takes a pointer to the "real object" and would enable local scripting engines or the server side of an RPC library. The RemoteInterfaceDelegate replaces the call to the "real object" with a remote procedure call using your protocol of choice; you would have a different RemoteInterfaceDelegate for XML-RPC, CORBA, DBus, etc.

In reality, what I believe I have achieved is automatic generation of proxy classes without the use of an interface description language, the Qt moc compiler, or other code-generation schemes outside of standard C++. With this auto-generation, the "fundamental layer" gets full access to all of the type info and thus can implement very efficient inline serialization/deserialization of parameters and return values.

Obviously, the library will need a lot of work to fit the Boost naming conventions, and I will need some help with macros to auto-define all combinations of parameters (currently 0-8 are supported, but manually defined).

Dan

On Feb 7, 2010, at 6:55 AM, Matus Chochlik wrote:
On Sun, Feb 7, 2010 at 12:52 PM, Matus Chochlik <chochlik@gmail.com> wrote:
On Sun, Feb 7, 2010 at 3:52 AM, Daniel Larimer <dlarimer@gmail.com> wrote:
I have been playing with some code and recently developed a very efficient, extensible, and easy to use API for performing remote procedure calls (with multiple protocols). I am writing to gauge interest and get feedback on the user API. The native "protocol" is a custom combined tcp/udp protocol using multi-cast for discovery, but it would be very easy to add a shared memory solution, a dbus, XML/RPC, CORBA, etc back end to the user api.
[snip]
META_INTERFACE( SomeClass, METHOD(add) METHOD(sub) METHOD(inout) )
Concerning this meta-interface declaration you might want to take a look at the Mirror library, which can provide a lot of meta-data at both compile-time and run-time.
Both the C++98 and C++0x versions are in the Boost Vault and the C++0x version can be also found on sourceforge: http://sourceforge.net/projects/mirror-lib/
the docs for the new version can be found here (sorry, though they are nowhere near finished yet): http://kifri.fri.uniza.sk/~chochlik/mirror-lib/html/
[snip]
BR
matus _______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

On Sun, Feb 7, 2010 at 5:30 PM, Daniel Larimer <dlarimer@gmail.com> wrote:
Mirror looks interesting and for some reason never showed up in my Google searches! After reading the documentation it was unclear to me how one would use the "mirrored" interface other than to query the text description of the interface component types/names. For example, there was no abstract (virtual) way of invoking a method using boost::any, void*, or serialized parameters. Perhaps I missed it. It also appears that Mirror requires C++0x, which limits its application.
Sigh, the docs still leave a lot to be desired, but I'm working on it, so hopefully they get better soon. Anyway, there is a run-time layer nicknamed "Lagoon" which provides a dynamic interface based on the compile-time meta-data registered with Mirror, and which allows you to iterate through the member variables / base classes / constructors / *member functions* / various specifiers / etc. The docs can be found here: http://tinyurl.com/yz6pdbv

There is a template function called reflected_class<Class>() (docs: http://tinyurl.com/ylag2tq) that returns a polymorphic interface (meta_class: http://tinyurl.com/ylepo6e) reflecting a Class. *Currently* it lacks the ability to call a function dynamically, but there is a compile-time facility which can invoke a constructor or basically any (member) function with a custom parameter supplier, through a uniform interface, and allows you to pass the parameters for the function call from any external source: a GUI, a database, an XML file, etc. A similar thing will be added to the run-time layer once some minor issues are resolved. This way it will be possible to use a polymorphic interface to create instances or to call a member function and to supply the arguments in an application-defined way.

But what I was suggesting is that you could use the compile-time meta-data provided by Mirror to generate the meta-data you are currently registering with the META_INTERFACE and METHOD macros (I did not look at the definition of these macros, but there is a good chance that this is possible). There is a set of meta-function templates which work with the meta_class returned by BOOST_MIRRORED_CLASS(Class), for example: member_variables<MetaClass> (http://tinyurl.com/yfygtay), member_functions<MetaClass> (http://tinyurl.com/yggfqhe), base_classes<MetaClass> (http://tinyurl.com/ylyvzkd), etc., which return ranges of meta-objects describing the members, base classes, etc.
By using the compile-time meta-programming utilities (http://tinyurl.com/ygv2ash) you can traverse or transform these ranges into (nearly ;-)) whatever form you need.
With respect to exceptions, that is certainly supported. The code below would work "as expected" and the "divide by zero" exception would be marshaled back.
RemoteInterface<SomeClass> ri("my.named.service");
    try {
        float rtn = ri.divide( 5, 0 );
    } catch ( const std::exception& e ) { ... }
The asynchronous interface would be:
    future<float> rtn = ri.divide( 5, 0, AsyncFlag );
    ri.divide( 5, 0, AsyncFlag | NoReturn );
    future<float> rtn2 = ri.divide( 5, 0, AsyncFlag );
    try {
        if( !rtn.wait(timeout) ) { /* err... timeout */ }
    } catch ( const std::exception& e ) { /* divide by zero caught async */ }
float auto_cast = rtn2;
Earlier versions of the library used a meta-object system similar to mirror and I ended up with "slow" code that looked something like:
ri.invoke( "float divide(float,float)", 5, 0 );
That approach was error prone (no compile time checks on signature). So I replaced it with:
ri.invoke( SLOT(SomeClass,divide), 5, 0 );
But I realized I was providing all of the compile time information necessary to optimize the serialization yet the implementation was still dependent upon polymorphic functions dealing with "generic" parameters. Plus, my ultimate goal was to make using the remote object as seamless/similar to using the local object as possible. Thus what I really needed was the concept of a "proxy" object that provided all of the same methods as the "real object" and then provided a means to "delegate" the actual invocation of those methods.
You will notice that whether it is dbus, corba, etc they all have the fundamental layer that looks something like:
ri.invoke( "float divide(float,float)", 5, 0 );
And then they use code generation or manual implementation to create a "proxy class" that has real methods that then invoke the fundamental layer to hide the messy invocation details from the user of the remote object.
The META_INTERFACE(...) macro defines a generic "Proxy Template" that can then be specialized for different kinds of back-ends via a delegate template parameter.
    template<typename SomeClass, typename SomeDelegate = DefaultDelegate>
    class MetaInterface : ...
The default delegate simply takes a pointer to the "real object" and would enable local scripting engines or the server side of a RPC library.
The RemoteInterfaceDelegate replaces the call to the "real object" with a remote procedure call using your "protocol of choice". You would have a different RemoteInterfaceDelegate for XML-RPC, Corba, dbus... etc
In reality what I believe I have achieved is automatic generation of proxy classes without the use of an "interface description language", "Qt moc compiler", or other code generation schemes outside of standard C++. With this auto-generation the "fundamental layer" gets full access to all of the type info and thus can implement very efficient in-line serialization/deserialization of parameters and return values.
Obviously, the library will need a lot of work to fit with boost naming conventions and I will need some help with macros to auto-define all combinations of parameters (currently 0-8 are supported, but manually defined).
Dan
On Feb 7, 2010, at 6:55 AM, Matus Chochlik wrote:
On Sun, Feb 7, 2010 at 12:52 PM, Matus Chochlik <chochlik@gmail.com> wrote:
On Sun, Feb 7, 2010 at 3:52 AM, Daniel Larimer <dlarimer@gmail.com> wrote:
I have been playing with some code and recently developed a very efficient, extensible, and easy to use API for performing remote procedure calls (with multiple protocols). I am writing to gauge interest and get feedback on the user API. The native "protocol" is a custom combined tcp/udp protocol using multi-cast for discovery, but it would be very easy to add a shared memory solution, a dbus, XML/RPC, CORBA, etc back end to the user api.
[snip]
META_INTERFACE( SomeClass, METHOD(add) METHOD(sub) METHOD(inout) )
Concerning this meta-interface declaration you might want to take a look at the Mirror library, which can provide a lot of meta-data at both compile-time and run-time.
Both the C++98 and C++0x versions are in the Boost Vault and the C++0x version can be also found on sourceforge: http://sourceforge.net/projects/mirror-lib/
the docs for the new version can be found here (sorry, though they are nowhere near finished yet): http://kifri.fri.uniza.sk/~chochlik/mirror-lib/html/
[snip]
BR
matus
-- ________________ ::matus_chochlik

Hi Daniel, On Sun, Feb 7, 2010 at 3:52 AM, Daniel Larimer <dlarimer@gmail.com> wrote:
So I need to get some more support to convince my employer that this will be a good move. Active interest would be a good sign. Other ideas on the positive aspects of contributing vs keeping it internal would also help me make my case.
Just a few words from my side. We have built a quite similar library just recently here. What proved essential, and appears to be missing in your draft, is support for multiple languages. Our library here can offer methods in C++ and Java and can be called from clients in C++, Java, and Python, which turned out to be a very good thing. Given there are plenty of such solutions around, I suggest you consider multi-language support for yours. I believe there's no point in developing such a lib without that nowadays. Cheers, Stephan

Do you mean that multiple languages should support the RPC protocol? Does LabView count? We currently have an embedded (C) version and a LabView one. I was looking to make the C++ API flexible enough to support any protocol; the fact that we have developed a light-weight, low-bandwidth protocol should not distract from the real potential of the API to support any protocol backend. That said, I do hope that the meta information will make it easy to bind scripting languages to our C++ code.

Dan

On Feb 8, 2010, at 2:58 AM, Stephan Menzel wrote:
Hi Daniel,
On Sun, Feb 7, 2010 at 3:52 AM, Daniel Larimer <dlarimer@gmail.com> wrote:
So I need to get some more support to convince my employer that this will be a good move. Active interest would be a good sign. Other ideas on the positive aspects of contributing vs keeping it internal would also help me make my case.
Just a few words from my side. We have built a quite similar library just recently here. What proved essential, and appears to be missing in your draft, is support for multiple languages. Our library here can offer methods in C++ and Java and can be called from clients in C++, Java, and Python, which turned out to be a very good thing. Given there are plenty of such solutions around, I suggest you consider multi-language support for yours. I believe there's no point in developing such a lib without that nowadays.
Cheers,
Stephan

Daniel, On Mon, Feb 8, 2010 at 9:39 AM, Daniel Larimer <dlarimer@gmail.com> wrote:
Do you mean that multiple languages should support the RPC protocol? Does LabView count? We currently have an embedded (C) version and a labview one.
I'm not sure I understand correctly. My point is: you should be able to offer RPC services in more than one language, and you should be able to call them (possibly natively) from multiple languages. That's all ;-) Cheers, Stephan

Daniel Larimer wrote:
// server
    SomeClass myclass;
    Server myserver( &myclass, "my.named.service" );
// client
    RemoteInterface<SomeClass> ri( "my.named.service" );
    int result = ri.add(1,4);
    assert( result == 5 );
    ri.sub( 5, 1, &result );
    assert( result == 4 );
    ri.inout(result);
    assert( result == 9 );
Hello Dan, I'm interested. Is the assumption of classes and member functions needed? What about mimicking the boost::function interface?

    asio::io_service ios;
    client m_client( ios, "rpc://server/" );
    remote_function< void( int ) > m_func = m_client( "some.resource" );

Cheers, Rutger
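A rough sketch of what such a remote_function might look like, mimicking the boost::function call interface; here the "transport" is just a local std::function so the example is self-contained, where a real implementation would serialize the arguments and send them over the wire. All names are hypothetical.

```cpp
#include <cassert>
#include <functional>

template<typename Signature> class remote_function;

// Partial specialization on the signature, boost::function-style: the
// caller sees a plain callable with the declared parameter list, while
// the transport behind it decides how the call actually travels.
template<typename R, typename... A>
class remote_function<R(A...)> {
    std::function<R(A...)> transport_;  // would serialize + send in real use
public:
    explicit remote_function(std::function<R(A...)> t) : transport_(t) {}
    R operator()(A... args) const { return transport_(args...); }
};
```

This shape lets a client object hand out per-resource callables, as in Rutger's `m_client( "some.resource" )` example, without assuming any class or member-function structure on the server side.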

Rutger,

The only reason I assumed classes is that in our application there are many functions grouped together and we are leveraging hierarchical interfaces. I suppose that your interface below could work equally well as an alternative interface within the same library. I kind of like it for its simplicity!

An update on where I am. I have successfully added support for overloaded member functions and refactored the code based upon some feedback given here, and in the process have adopted the boost/stl naming conventions. I have been thinking about the nature of the library and what I would call it, and have concluded the following.

1) The "core library" should be free of any network code or dependency on boost::asio, because networking is only one way to "invoke a method" on an object outside of your memory space (or even inside your memory space).

2) The real "generic" function I am attempting to provide is automatic "stub" generation: creating a "proxy object" that controls the dispatch of method calls, signal emits, and parameter queries while providing a developer experience as similar as possible to working with a local object, and minimizing the amount of time spent using the IDL (Interface Description Language), which in our case is templates and the C preprocessor.

3) Using "Boost.Stub" I would then provide different stub delegates for handling different kinds of dispatch mechanisms. I believe I have identified some useful applications for Boost.Fusion in some stubs. If done right, we should be able to create a stub that would give any class an "active object" interface to synchronize calls across threads.

4) With my latest rework I believe I will be able to get the object code and memory usage small enough to work on an embedded system running LWIP with 64K of RAM and 150K of flash. This requirement means that the library should function well even in environments where dynamic heap allocation, RTTI, and exceptions are forbidden!
Support for the embedded platform should not come at the expense of desktop users. Achieving this should be very possible if the stub library really sticks to its goal of only defining the template hooks that allow any number of delegates to be written.

5) With all of that said, I believe that the core library will be header only. What I was calling the "meta interface" is really just providing the necessary type descriptions/hooks/macros to enable the development of delegate classes that do all of the real work. Some delegates may be compiled into a library, while others (such as the default delegate, a pointer to an object in the same memory space) can be header only as well. With this approach there is almost no "abstraction layer" overhead between the caller, the delegate, and the actual network code (if it happens to be a network RPC delegate).

I need to run some benchmarks, but I believe that the "cost" of using a stub<MyClass,default_delegate> vs a direct pointer to MyClass is at most one function call and a member-function-pointer dereference, and at best completely inlined so as to be "identical" at run time.

Does anyone have any pointers on how to "compare" the asm generated by a particular line or group of lines of code? I would like to see how much inlining is actually going on and monitor the cause/effects of different template/code structures on the resulting code.

Thoughts?

Dan

On Feb 10, 2010, at 9:38 AM, Rutger ter Borg wrote:
Daniel Larimer wrote:
// server
    SomeClass myclass;
    Server myserver( &myclass, "my.named.service" );
// client
    RemoteInterface<SomeClass> ri( "my.named.service" );
    int result = ri.add(1,4);
    assert( result == 5 );
    ri.sub( 5, 1, &result );
    assert( result == 4 );
    ri.inout(result);
    assert( result == 9 );
Hello Dan,
I'm interested. Is the assumption of classes and member functions needed? What about mimicking the boost::function interface?
asio::io_service ios; client m_client( ios, "rpc://server/" );
remote_function< void( int ) > m_func = m_client( "some.resource" );
Cheers,
Rutger

On Wed, Feb 10, 2010 at 7:38 AM, Rutger ter Borg <rutger@terborg.net> wrote:
// server
    SomeClass myclass;
    Server myserver( &myclass, "my.named.service" );
// client
    RemoteInterface<SomeClass> ri( "my.named.service" );
    int result = ri.add(1,4);
    assert( result == 5 );
    ri.sub( 5, 1, &result );
    assert( result == 4 );
    ri.inout(result);
    assert( result == 9 );
Hello Dan,
I'm interested. Is the assumption of classes and member functions needed? What about mimicking the boost::function interface?
asio::io_service ios;
client m_client( ios, "rpc://server/" );
remote_function< void( int ) > m_func = m_client( "some.resource" );
Yep, that is how mine worked, just based on a Boost.Function interface.

On Wed, Feb 10, 2010 at 8:57 AM, Daniel Larimer <dlarimer@gmail.com> wrote:
I need to run some benchmarks, but I believe that the "cost" of using a stub<MyClass,default_delegate> vs a direct pointer to MyClass is at most one function call and a member function pointer de-reference and at best, completely inlined so as to be "identical" at run time.
Does anyone have any pointers on how to "compare" the asm generated by a particular line or group of lines of code? I would like to see how much inlining is actually going on and monitor the cause/effects of different template/code structures on the resulting code.
If you come up with some benchmarks I would like to compare it to what I made a couple of years ago (where I was admittedly not as experienced with such things as I am now), so if you could post the complete compilable example with included libraries, I am interested.
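The boost::function-style interface suggested above could look something like the following sketch. The transport is faked with an in-process resource table so only the shape of the API is visible; `client`, `remote_function`, and `demo` are all hypothetical names, with std::function standing in for the real serializing proxy:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Stand-in for the real proxy type: a remote_function would serialize its
// arguments and ship them to the server instead of calling locally.
template <class Signature>
using remote_function = std::function<Signature>;

struct client {
    // name -> callable; a real client would resolve "some.resource" over
    // the wire and hand back a proxy bound to that endpoint.
    std::map<std::string, std::function<int(int, int)>> resources;

    remote_function<int(int, int)> operator()(const std::string& name) {
        return resources.at(name);
    }
};

int demo() {
    client c;
    c.resources["some.resource"] = [](int a, int b) { return a + b; };
    remote_function<int(int, int)> f = c("some.resource");
    return f(1, 4);  // same call syntax as a local boost::function
}
```

The appeal of this shape is that nothing about classes or member functions leaks into the client-side type: a resource name plus a signature is the whole contract.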

I cannot post the code at this time (still pending employer approval, in spite of the fact that 100% of the development of this refactor is unpaid!)

Ok, benchmark running on Mac OS X with g++ -O2:

    MyClass c;
    MyClass* cp = &c;
    stub<MyClass> s(cp);
    int (MyClass::*a)(int,int) = &MyClass::add;

comparing (cp->*a)(1,2) to s.add(1,2).

On average there is less than 0.001 us per call difference between the two methods for a trial run of 1024^3 invocations. Of course the compiler could optimize c.add(1,2) to the point that it was "unmeasurable".

It is as I would have expected: the entire abstraction layer has almost no run-time overhead, so the rest of the performance will come down to how fast you can serialize your data and send it out the port.

Dan

Any chance of just compiling it into a standalone exe/dll for win32 then so I can test it on the same platform as mine?
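The timing comparison being discussed (direct call vs. call through a member-function pointer, the stub's worst case) can be sketched portably along these lines; `MyClass` and `run` are illustrative names, and the accumulated checksum keeps the optimizer from deleting the loops:

```cpp
#include <cassert>
#include <chrono>
#include <cstdio>

struct MyClass {
    int add(int a, int b) { return a + b; }
};

// Time `iterations` calls either directly or through a member-function
// pointer; returns the checksum so results can be compared for equality.
long long run(bool via_pointer, int iterations) {
    MyClass c;
    MyClass* cp = &c;
    int (MyClass::*a)(int, int) = &MyClass::add;

    long long sum = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        sum += via_pointer ? (cp->*a)(i, 1) : cp->add(i, 1);
    auto t1 = std::chrono::steady_clock::now();

    std::printf("%s %lld us\n", via_pointer ? "member-ptr:" : "direct:    ",
                (long long)std::chrono::duration_cast<
                    std::chrono::microseconds>(t1 - t0).count());
    return sum;
}
```

On most compilers at -O2 both loops time out nearly identically, matching the "less than 0.001 us per call difference" observation above.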

I really think that benchmarking is premature at this point. Initial "benchmarks" were only to demonstrate that most of the work is done at compile time and the result is a near-"optimal", light-weight call into your stub's delegate code, which is responsible for invoking the RPC method. In the example below, s.add(1,2) was an inline call to:

    inline typename traits::result_type operator()( PARAM_ARGS )
    {
        return (((T*)method_impl_base::d->self)->*m_ptr)( PARAM_NAMES );
    }

where:

    T == MyClass
    method_impl_base::d == pointer to delegate
    self == &c
    m_ptr == &MyClass::add

    template<class T>
    struct delegate {
        T* self;
        struct method_impl_base {
            delegate* d;
        };
    };

Suppose you wanted to provide a backend that invoked (synchronously) over a socket. You would define a new delegate type that provided the types for each method, and the function above would become (something like):

    inline typename traits::result_type operator()( PARAM_ARGS )
    {
        method_impl_base::d->socket << method_id << params;
        traits::result_type result;
        method_impl_base::d->socket >> result;
        return result;
    }

Now clearly you would want to change the return type to future<result_type>, make the whole exchange asynchronous, and perhaps even give the user some control over the quality of service: priority, acks, ordering, etc. The point is that the delegate class can define any and all of those kinds of details, while the writer of MyClass merely has to specify their interface in terms of a generic "delegate template" that can be defined later.

In my case, I have defined a TCP/UDP RPC protocol and hooked it up to a delegate. None of the other developers need to deal with packing/unpacking or modifying their code for new protocols. As far as they are concerned, they write a class, list the class name, base classes, method names, and signals in a macro, and it can be exposed to the network and easily accessed remotely through the stub/proxy object.
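The socket-delegate variant described above can be made concrete with the socket replaced by stringstreams and a toy in-process server, so the serialize/send/receive round trip is visible end to end. All names (`toy_server`, `remote_delegate`, `demo`) are hypothetical:

```cpp
#include <cassert>
#include <sstream>

// Toy "server": reads a method id plus params, writes back the result.
struct toy_server {
    void serve(std::istream& in, std::ostream& out) {
        int method_id, a, b;
        in >> method_id >> a >> b;
        if (method_id == 0) out << (a + b);  // method 0 == "add"
    }
};

// The remote delegate's per-method hook: serialize, "send", read result.
struct remote_delegate {
    toy_server* server;  // stands in for the remote endpoint

    int invoke(int method_id, int a, int b) {
        std::stringstream request, reply;
        request << method_id << ' ' << a << ' ' << b;  // socket << id << params
        server->serve(request, reply);                 // the network round trip
        int result;
        reply >> result;                               // socket >> result
        return result;
    }
};

int demo() {
    toy_server srv;
    remote_delegate d{&srv};
    return d.invoke(0, 1, 4);  // what stub.add(1,4) would boil down to
}
```

The class author never sees any of this: only the delegate changes when the transport does.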
Compare this to the alternatives: write an interface description in some kind of IDL, hook it into your build process to generate the code, ... and after all of that work, if you want to change from dbus to corba to shared memory or subspace, you need to start all over again.

The key to making this truly portable is to define the delegate concept clearly in terms of both synchronous and asynchronous operations and abstract ideas regarding delivery characteristics (guaranteed, ordered, sync, async, priority). I have a good first stab at this and a working proof of concept, as well as several earlier revisions based upon boost::any and a dynamic runtime type system. The dynamic system works, but suffers from the following drawbacks:

1) Requires registering of data types in a factory
2) Limited compile-time checking of your RPC
3) Lots of dynamic memory allocation for dealing with dynamic types
4) Not really practical for embedded systems

How does this relate to boost? Well, I figured that such a library would be sufficiently general and hopefully elegant enough that it would fit in next to all of the other great libraries. Plus, I am tired of writing my own code (ptree w/ json, meta unit conversion, signals, reflection system), only to have some boost library come out that is substantially similar to my own. Regardless, I welcome any collaboration on this subject from this community. My one concern is that this thread of discussion may not be appropriate for this forum. If there is a better place to hash out what would make the ideal RPC/IDL library for boost, let me know.

Some of the challenges I face in designing this library include the following. Given:

    struct stub<MyClass, delegate>
    {
        delegate* d;
        struct method1 : public delegate::method<signature> {} m1;
        struct method2 : public delegate::method<signature> {} m2;
        std::vector< delegate::method* > methods;
    };

what is the best way to get `methods` automatically populated with pointers to m1 and m2?
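The "temporary global plus constructor/destructor" trick described in the follow-up can be sketched as follows. Every name here is hypothetical, the sketch is single-threaded, and a real version would make the registration pointer thread_local:

```cpp
#include <cassert>
#include <vector>

struct method_base;

// The part of the stub that method members register themselves into.
struct registry {
    std::vector<method_base*> methods;
};

// "Temporary global": points at the stub currently under construction.
static registry* g_current = nullptr;  // thread_local in real code

// Listed as the stub's first base class so it runs before the members.
struct begin_registration {
    explicit begin_registration(registry* r) { g_current = r; }
};

// Every method member derives from this; its constructor self-registers.
struct method_base {
    method_base() { if (g_current) g_current->methods.push_back(this); }
};

struct my_stub : begin_registration, registry {
    my_stub() : begin_registration(this) {}
    ~my_stub() { g_current = nullptr; }

    struct m1_t : method_base {} m1;   // registered automatically
    struct m2_t : method_base {} m2;   // ...without being listed twice
};

int demo() {
    my_stub s;
    return (int)s.methods.size();      // both members found themselves
}
```

The base-class ordering is what makes it work: `begin_registration` runs first and publishes the stub, then each method member's `method_base` constructor picks it up, so the method list never has to be written out by hand.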
Hand-coding the constructor to stub<> would either require some MACRO MAGIC or the developer to list every method twice. Additionally, m1 and m2 need to be initialized with a pointer to the delegate, or even just to stub<MyClass, delegate>, so that they have the necessary context to carry out their call. I have managed to accomplish it using some "temporary global" variables (thread safe) and some clever use of constructors/destructors in base classes. I am curious if anyone else has ever had such a problem or knows of a good way to solve it. I would also be curious whether boost::fusion could perhaps make it possible to iterate over the types in stub?

The ultimate goal is the following macro, which is the minimum information the user must provide to define their interface so that it can be used with any delegate:

    DEFINE_STUB( CLASS,
        INHERITS( BASE1, BASE2, ... )
        METHOD(add)
        METHOD(sub, int(double,double), float(int,int)) // overloaded
    )

If there were some way to automatically deduce more information, then all the better! Leveraging something like Boost.Mirror would actually require more input from the user. It is a potential option if they are using Boost.Mirror for other things, but not ideal if they only want to do RPC.

Dan

Daniel Larimer wrote:
packing/unpacking or modifying their code for new protocols. As far as they are concerned they write a class, list the class name, base classes, method names, and signals in a macro and it can be exposed to the network and easily accessed remotely through the stub/proxy object.
How does connecting to a remote signal work? Doesn't that presume the availability of an event dispatcher loop?

Cheers,
Rutger
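The dispatcher-loop assumption behind remote signals can be illustrated with a minimal sketch (all names hypothetical, transport elided): emissions arriving off the wire are queued, and connected slots only run when the application pumps the loop:

```cpp
#include <cassert>
#include <deque>
#include <functional>

// Minimal dispatcher: the transport posts decoded signal emissions, and
// slots run only when the application polls the queue.
struct dispatcher {
    std::deque<std::function<void()>> pending;

    // Called by the transport when a remote signal emission arrives.
    void post(std::function<void()> slot_call) {
        pending.push_back(std::move(slot_call));
    }

    // The application's event loop pumps queued slot invocations.
    int poll() {
        int n = 0;
        while (!pending.empty()) {
            pending.front()();
            pending.pop_front();
            ++n;
        }
        return n;
    }
};

int demo() {
    dispatcher d;
    int value = 0;
    // The transport decodes a remote "emit(42)" and posts the bound slot:
    d.post([&value] { value = 42; });
    int before = value;   // nothing has happened yet: no loop has run
    d.poll();             // slot fires here
    return before * 100 + value;
}
```

So yes: without some loop (an asio::io_service, a GUI event loop, or an explicit poll), a remotely emitted signal has no thread on which to deliver its slots.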

OvermindDL1 wrote:
asio::io_service ios; client m_client( ios, "rpc://server/" );
remote_function< void( int ) > m_func = m_client( "some.resource" );
Yep, that is how mine worked, just based on a Boost.Function interface.
If you come up with some benchmarks I would like to compare it to what I made a couple of years ago (where I was admittedly not as experienced with such things as I am now), so if you could post the complete compilable example with included libraries, I am interested.
If your application is designed to be synchronous, you can never be faster than a full roundtrip. How would an asynchronous version look? This is the part I'm most interested in, the truly asynchronous case: on invocation you would do the send only, i.e., the function returns immediately (perhaps even before doing the send). You would issue a completion handler with your request.

Perhaps

    remote_function< void( int ) > m_func;

could be called as

    // synchronous case, throwing, blocking send/receive
    m_func( 2 );

    // asynchronous case, nonthrowing, fully async, completion handler
    // result_type must be void
    m_func( 2, some_handler );

or, something along the lines of future semantics?

    future<...> f = m_func(2);

Cheers,
Rutger

Rutger, I apologize for being unable to show you the code at the moment, but the interface that I would like to see would be something along the lines of:

    int x = stub.add(1,2);          // synchronous
    future<int> x = stub.add(1,2);  // asynchronous
    stub.add(1,2);                  // one way (no return value)

The way I envision that working is for stub to return a generic "call<int>" object that, when cast to an integer, causes a synchronous call to be made; when cast to a future, causes an asynchronous call; and when destructed before being cast, causes a one-way call to be made. The downside to the syntax above is that it causes the creation of a temporary object, and several additional function calls (constructor, cast, constructor, destructor) are made before your request goes out. The current syntax is:

    int x = stub.add(1,2);          // synchronous
    future<int> x = stub.add(1,2, CALL_FLAGS);

Then in my "async-throughput tests" I do this:

    std::vector< future<int> > return_values(10000);
    for( std::size_t i = 0; i < return_values.size(); ++i )
        return_values[i] = stub.add(i, i+1, ASYNC);

    // block until I have all 10,000 values
    return_values.back().wait(timeout);

    // call my callback when complete
    return_values.back().finished.connect( some_callback );

I get throughput on the order of 500,000 calls per second in the asynchronous mode, but latency does go up some. Doing synchronous calls, your theoretical max call rate (localhost, linux, 10us ping) is about 100,000 calls/second; in practice I am seeing 15,000 calls per second. Over ethernet (300us ping), your max call rate is about 3,300/second.

There is a place for both synchronous and asynchronous calling conventions, because in practice the end user will often create "chains" of asynchronous calls that are really synchronous in nature, and thus it is more work for them. On embedded, asynchronous is the only method possible because you are single-threaded.

Dan
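The call<> proxy convention Daniel describes can be sketched as below: the stub returns a temporary whose use picks the calling convention, with conversion to the result type making a synchronous call and destruction without conversion firing a one-way call. The future<> conversion is elided, C++17 copy elision is assumed, and all names are hypothetical:

```cpp
#include <cassert>
#include <functional>

static int invocations = 0;      // lets the sketch observe the one-way path

template <class T>
struct call {
    std::function<T()> invoke;   // the bound RPC invocation
    bool consumed;

    operator T() {               // int x = s.add(1,2);  -> synchronous call
        consumed = true;
        return invoke();
    }
    ~call() {                    // s.add(1,2);  -> one-way, result discarded
        if (!consumed && invoke) invoke();
    }
};

struct stub {
    call<int> add(int a, int b) {
        // A real delegate would serialize and send here.
        return call<int>{ [a, b] { ++invocations; return a + b; }, false };
    }
};

int demo_sync() {
    stub s;
    int x = s.add(1, 2);         // conversion operator runs the call now
    return x;
}

int demo_oneway() {
    invocations = 0;
    stub s;
    s.add(1, 2);                 // temporary destroyed unconsumed: fires once
    return invocations;
}
```

This is also where the temporary-object cost Daniel mentions shows up: every invocation materializes and destroys one proxy object, which is the price of letting the call site choose the convention.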

Daniel Larimer wrote:
An update on where I am: I have successfully added support for overloaded member functions and refactored the code based upon some feedback given here, and in the process have adopted the boost/stl naming conventions.
I have been thinking about the nature of the library and what I would call it and have concluded the following.
1) The "core library" should be free of any network code or dependency on boost::asio because networking is only one way to "invoke a method" on an object outside of your memory space (or even inside your memory space).
I agree, although there are quite a few Boost libraries that could potentially be used for transport purposes:

* Boost.Asio (event loop, TCP/IP, unix domain sockets)
* Boost.Interprocess (shared memory stuff)
* Boost.Function / Boost.Thread (local stuff)
* Boost.MPI (all of the above :-))

Cheers,
Rutger

Zitat von Daniel Larimer <dlarimer@gmail.com>:
I have been playing with some code and recently developed a very efficient, extensible, and easy to use API for performing remote procedure calls (with multiple protocols).
I don't have much time to participate in this discussion right now, but I still wanted to express my interest in such a library. In particular, for this purpose: https://svn.boost.org/svn/boost/sandbox/persistent/libs/persistent/doc/html/... (Remote resource manager)

Regards,
participants (7)
- Daniel Larimer
- Matus Chochlik
- OvermindDL1
- Rutger ter Borg
- Stephan Menzel
- strasser@uni-bremen.de
- vicente.botet