
Hi Jeff,

On 4/24/07, Jeff Garland <jeff@crystalclearsoftware.com> wrote:
> Stjepan Rajko wrote:
>> In addition to what's listed at the website, I'm wondering what the proper way of returning the function call results would be... For example, if the function is void something(int &x), should the modified "x" value on the remote computer appear in the local x after the remote call returns?
> Just to add some perspective, 'full RPC' systems typically support this way of returning values. In CORBA, parameters are characterized as 'in', 'out' or 'in-out' in the IDL method descriptions.
In the case when a parameter was specified as an out parameter, or presumed to be an out parameter due to its type, I wasn't sure whether the returned value should go directly into the passed argument, or be stored elsewhere. For example:

  int i = 1;
  remote_call inc_call(inc_function_id, i);
  rpc_client.make_remote_call(inc_call);
  // alternative one: now i == 2
  // alternative two: now inc_call.arg1() == 2, but i == 1

Alternative one is a lot cleaner and more intuitive, and it doesn't require a lasting call object (i.e., it makes it possible to make the remote call available through a local function that behaves as if the remote function were local: something like remote_inc_call(i), which performs rpc_client.make_remote_call(remote_call(inc_function_id, i))). In retrospect, I don't think there are any good reasons to have alternative two available (at least for sync calls), but I might be wrong. The user can always do

  int j = i;
  remote_inc_call(j);

if they want the value of i to stay unchanged. So far, I've implemented "alternative one" for sync calls and "alternative two" for async calls.

Regarding the in/out/inout specification, I have the following thoughts:

* The server can specify which way each parameter of the function is used (in/out/inout).
* By default, every parameter is "in" and non-const references are "inout".
* The client can override the specified behavior in the following ways:
  * for an "out" parameter, the client can specify that the output value should not be marshaled back (if the client is not interested in it, to save on communication and/or to prevent the old value from being overwritten);
  * for an "in" parameter, the client can specify that the server should instantiate a default object of the type (if possible), rather than marshaling the default value over the network.
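To make the two alternatives concrete, here is a minimal sketch with the remote round trip simulated by a plain local call. All names below (inc_function, remote_inc_call, the remote_call class) are illustrative stand-ins, not the library's actual API:

```cpp
#include <cassert>

// Illustrative sketch only: the "server" is simulated by a local call.

void inc_function(int& x) { ++x; }  // stands in for the remote function

// Alternative one: the out value is written straight back into the
// caller's argument, so the remote call reads like a local one.
void remote_inc_call(int& x) {
    int marshaled = x;        // marshal the "in" value to the server
    inc_function(marshaled);  // server executes the function
    x = marshaled;            // un-marshal the "out" value in place
}

// Alternative two: the call object holds the returned value, and the
// caller's variable is left untouched.
class remote_call {
public:
    explicit remote_call(int arg) : arg_(arg) {}
    void execute() { inc_function(arg_); }  // simulated round trip
    int arg1() const { return arg_; }
private:
    int arg_;
};
```

With alternative one, after `int i = 1; remote_inc_call(i);` the caller sees i == 2; with alternative two, the new value is only reachable through the lasting call object's arg1(), while the original variable keeps its old value.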
> I can see that as being reasonable if the RPC is synchronous, but if
Well, not necessarily.
Are you referring to the possibility that the parameter might not be specified as an out parameter, or do you have something else in mind?
> it is asynchronous maybe something like a future would be a good way of providing the modified value? (speaking of, could someone suggest a futures implementation out of the ones that were floating around a while back?)
Thanks, I downloaded it and will start playing with it for async calls.
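For what it's worth, a future-based async interface could look roughly like the sketch below. It uses std::future from later standard C++ purely for illustration (not whichever futures implementation was suggested), async_remote_inc is a made-up name, and std::async stands in for the network round trip:

```cpp
#include <cassert>
#include <future>

int inc(int x) { return x + 1; }  // stands in for the remote function

// Sketch: an async remote call hands back a future for the out value
// instead of mutating the argument; std::async simulates the round trip.
std::future<int> async_remote_inc(int x) {
    return std::async(std::launch::async, inc, x);
}
```

The caller then blocks only at the point where it actually needs the result, by calling get() on the returned future.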
> If you want to study some past experience, there's a wealth of literature on the designs and tradeoffs. Just a couple examples:
>
> st-www.cs.uiuc.edu/~hanmer/PLoP-97/Proceedings/ritosilva.pdf
> www.cin.ufpe.br/~phmb/papers/DistributedAdaptersPatternPLOP2001.pdf
I checked them out - thanks for the links!
> I have one other comment for the moment - in the doc you say:
>> The entire server-side code is running in only one thread. This is probably not good. Should there be one thread per client? One thread per function call? Is there a "best" solution or should there be options on this?
> There's not a 'simple' best answer to this. A single thread might be perfectly fine for something that executes a fast function and doesn't serve many clients at the same time (say, calculating the current time). Something that needs to execute a function that performs significant computation, thus taking substantial time, needs a different strategy. It might spawn a sub-process or a thread to do the actual work, allowing the main thread to wait for and process other inbound connections and requests.
>
> A typical strategy for problems that require scalability is to use a thread pool. At any given moment one thread from the pool is waiting for any new i/o on the network -- when it is received, that thread begins processing the request and will process it to completion. At the start of request processing, another thread takes over waiting for network i/o. This approach allows for minimal context switching w.r.t. processing a request, and can be tuned to the number of processors actually available to handle requests and the nature of the processing. Usually the number of threads in this sort of scheme is significantly less than the number of simultaneous clients.
>
> Anyway, the 'thread per client' approach is inherently not scalable... which is fine -- as long as you don't need scalability.
> Anyway, it's an area of some significant design depth -- and one for which boost doesn't provide all the facilities needed. We don't have the thread pool or thread-safe queue implementations that might be needed in some of the various strategies you might desire.
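For reference, the thread-pool-plus-thread-safe-queue combination described above can be sketched in portable C++ like this. It's a bare-bones illustration (fixed set of workers pulling tasks off a mutex-protected queue), not production code; a real server would add a shutdown policy, exception handling, and the network i/o integration:

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal thread pool: n workers drain a mutex-protected task queue.
class thread_pool {
public:
    explicit thread_pool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~thread_pool() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (std::thread& t : workers_) t.join();  // drain, then stop
    }
    void post(std::function<void()> task) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(task)); }
        cv_.notify_one();  // wake one idle worker for the new task
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !q_.empty(); });
                if (done_ && q_.empty()) return;  // queue drained; exit
                task = std::move(q_.front());
                q_.pop();
            }
            task();  // run outside the lock, to completion
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool done_ = false;
    std::vector<std::thread> workers_;
};
```

Each posted task is processed to completion by whichever worker picks it up, matching the "one thread takes the request, another resumes waiting" scheme; the destructor lets the workers finish the queued tasks before joining them.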
Great thoughts... Along the lines of a thread pool, asio does provide a facility that would let me give it multiple worker threads, and its behavior would, I think, match the one you describe - a received packet would go to an idle thread, which would then stay busy until the call was completed and returned. Perhaps I can try going this route.

Thanks for your thoughts, much appreciated!

Stjepan