Re: [boost] data_binding

Stjepan Rajko writes: I tried to tackle the value/domain transformation problem a while back, inspired by Brook Milligan's (CC-d) probability library (which can nicely transition probability values between linear and log domains).
Brook Milligan also worked on a generic domain transformation library, so he might comment on this as well. There is also the Boost.Units library which might offer transformations for some cases.
Thanks, Stjepan. I have continued to refactor that, actually, to make it support the more generic problem under discussion here. So far it is mostly done; I think the only remaining issues are some silly overload resolution problems that have little to do with the basic ideas. For the discussion I'll outline my approach.
The goal I have/desire is to perform type transformations and allow the programmer to specify these transformations in a generic way. For types T1 and T2, allow the programmer to specify f(T1 t1, T2 t2) and g(T1 t1, T2 t2), where f and g are transformation functions from T1 -> T2 and T2 -> T1 respectively. Overloading operator= and specifying f(T1 t1, T2 t2) := t2 = t1 * 1/2 and g(T1 t1, T2 t2) := t1 = t2 * 2 allows the programmer to type:

    float t1;
    float t2;
    bind_values<float, float> t1t2ValueBinder( t1, t2 );
    t1 = 12;
    printf( "%f", t2 );   // produces output of 6

Transformations shall allow chaining/cascading to be specified:

    int a;
    int b;
    float c;
    boost::uint32_t d;
    bind_values<int, int>               abBinder( a, b );
    bind_values<int, float>             bcBinder( b, c );
    bind_values<float, boost::uint32_t> cdBinder( c, d );
    // a->b->c->d
    a = 12;
    printf( "%u", d );    // produces output of 6

Transformations shall allow for fanout:

    int a;
    int b;
    float c;
    boost::uint32_t d;
    boost::uint32_t e;
    boost::uint32_t f;
    bind_values<int, int>               abBinder( a, b );
    bind_values<int, float>             acBinder( a, c );
    bind_values<float, boost::uint32_t> cdBinder( c, d );
    bind_values<float, boost::uint32_t> ceBinder( c, e );
    bind_values<float, boost::uint32_t> cfBinder( c, f );
    // a-+->b
    //   |
    //   +->c-+->d
    //        |
    //        +->e
    //        |
    //        +->f
    a = 12;
    printf( "%u", f );    // produces output of 12 as f and g default to a one-to-one mapping
    printf( "%u", e );    // produces output of 12 as f and g default to a one-to-one mapping
    f = 32;
    printf( "%d", a );    // produces output of 32 as f and g default to a one-to-one mapping
    printf( "%u", f );    // produces output of 32 as f and g default to a one-to-one mapping
    printf( "%f", c );    // produces output of 32 as f and g default to a one-to-one mapping

Transformations shall support bidirectional and unidirectional updates.
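A minimal sketch of this kind of bidirectional binding, assuming a hypothetical bound<T> wrapper (plain floats cannot intercept assignment) and invented names throughout; this is not the demo code, just an illustration of the idea, including a simple guard against the update cycles mentioned later in the thread:

    #include <cstdio>
    #include <functional>
    #include <vector>

    template <typename T>
    class bound
    {
    public:
        explicit bound(T v = T()) : value_(v), updating_(false) {}

        // Assignment stores the value and pushes it through every link,
        // with a simple flag to break a->b->a propagation cycles.
        bound& operator=(T v)
        {
            if (updating_) return *this;
            updating_ = true;
            value_ = v;
            for (std::size_t i = 0; i < links_.size(); ++i) links_[i](v);
            updating_ = false;
            return *this;
        }

        operator T() const { return value_; }

        // Register a one-directional link: when *this changes, target = f(value).
        template <typename U>
        void bind_to(bound<U>& target, std::function<U(T)> f)
        {
            links_.push_back([&target, f](T v) { target = f(v); });
        }

    private:
        T value_;
        bool updating_;
        std::vector<std::function<void(T)> > links_;
    };

    int main()
    {
        bound<float> t1, t2;
        t1.bind_to<float>(t2, [](float v) { return v * 0.5f; });  // f: T1 -> T2
        t2.bind_to<float>(t1, [](float v) { return v * 2.0f; });  // g: T2 -> T1
        t1 = 12;
        std::printf("%f\n", static_cast<float>(t2));              // prints 6.000000
    }

Here f and g are simply supplied as two one-directional links; the real demo presumably organizes this differently.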
The basic observation that motivates this is that in some (many?) situations there exist a group of related types that clearly have common semantics in the application domain but may impose different constraints in their implementation.
Yes. Agreed
Indeed, some operations may not be practical or feasible for some of the types but could be for others and there may exist transformations from one to another.
Where no clear transformation exists... the programmer shall be allowed to specify one.
In the case of probabilities think of two types, one for a probability and one for its logarithm, with appropriate operators defined for each domain and the interconversions. Another example involves the representation of polynomials, which can be in terms of coefficients or in spectral terms that ease their multiplication. In this case, it may be completely undesirable to implement a polynomial multiplication for the coefficient representation.
Yes, so some types cannot be automatically transformed. My example provides absolutely no support for this except for 1:1 translation... for disparate types it is, well, compiler errors waiting to happen if no translation functions are supplied.
Conceptually then, the Domain library seeks to create a framework for implementing sets of such types that work smoothly together, and enable the compiler to make choices about how to handle mixed domain operations (e.g., adding a probability and a log probability) or operations that must be performed in another domain (e.g., multiplying two polynomials represented as coefficients). I think these are most of the salient points, all of which are currently supported.
The software engineer shall be allowed to specify a transform if the compiler cannot automatically figure it out. How difficult is it to specify new types and transforms? Are we talking about types like those defined in the section titled "A Deeper Look at Metafunctions - 3.1 Dimensional Analysis" in "C++ Template Metaprogramming" by David Abrahams and Aleksey Gurtovoy? If so... ouch... it could be quite painful to specify new types and conversions. What is your approach?
- Allow definition of families of types (domains) based upon each member of a family sharing common semantics in the application domain.
Providing type conversions across known domains where they exist. Yes sounds great.
- Allow implementation (or not) of any within-domain operators that make sense in the application domain.
- Allow implementation (or not) of any appropriate interdomain type transformations.
Yes, sounds great, with the exception of the "operators" portion... like everyone in Boost using operators for new purposes (think Spirit - and I don't mean this in a bad way), we are running out of operators which provide meaningful semantics... Food for thought: wouldn't it be nice if our compilers could compile Unicode source files... call them .ucpp... and define new operators in the C++ language? Then we would be able to utilize these new operators to write code which looks aesthetically pleasing and readable. Another benefit could be easier support for internationalization of software. I'll digress.
- Decouple the domain type information from the value types used to represent the internal domain-specific information.
This I also provided for in my simple example, though it is kind of a cheesy example as the "domain" is completely specified by the programmer and there is no automatic domain logic.
- Allow specification of a default value type for any domain type so that unspecified templates (i.e., T<>) provide meaningful results.
- Allow specification of which domains are interconvertible as a series of pairwise conversions.
So you have a chain a->b->c->d: change a, and b, c, and d are changed... correct? This was in my example. It should also allow for fanout:

    // a-+->b
    //   |
    //   +->c-+->d
    //        |
    //        +->e
    //        |
    //        +->f

and avoid recursion: a->b->c->a->a->b->c->a... you get the idea.
- Allow specification of which domains support which operations so that the compiler can seek an appropriate domain for arguments that may (or may not) be directly usable.
What happens when an operation cannot be found in a domain, but the programmer can or needs to specify one? Is it extensible?
The goal is to clearly define all these points of extension so that the "real" types (e.g., probabilities or polynomials) can be constructed on top of the Domain library, thereby simplifying the specification of domain families for any application or library.
The easier it is to provide new specifications for types and their transformations, the better... you'll never think of every domain and translation. You need to supply a clear path to adding new types, domains, and translations.
I have been using this to refactor the Probability library as a proof of concept. As mentioned, most of what the old version could do, the new one will also do.
There are a few issues with overloads that are confusing to me, so help is welcome.
How do I obtain the code and where is the problem?
Thus, I am certain that there is some merit in this approach to a generic domain solution. However, I am also certain that there are other ideas out there that could improve this.
Generic domain is exactly what I am after. We have types T1, T2, T3, ..., TN and we have to translate between them... What's the easiest way to specify this for the programmer?
Because of the interaction in development between the two libraries (Domain and Probability) I haven't quite finished everything. I'll try to roll them up for viewing shortly, though.
I look forward to continuing discussion, receiving input, and discovering how these ideas might be put to use in other contexts.
Context... let me speak to context. Let's say you're one of those down-to-earth engineer types... you've got to get your product out... you have all this incoming and outgoing data at multiple layers of the software, from hardware<->driver<->user space lib<->executable<->network<->client, and you have to translate between types for each interface... Bitfields... don't get me started on bit fields... Why is a bit such a degenerate type in C++ (try typeid(bit.in_a_bitfield).name() and wonder why you get back unsigned... it would be nice to get the position info of where the bit or bits are within a bit field)... I'll digress again.

Anyway, you want to easily specify type conversions at each layer. You've received the event in the past and called the appropriate conversion routine after conversion routine and thought to yourself: hey, is there any way to get rid of all of these calls to conversion functions in my callback routine... they're kind of making the code messy. Sure would be nice if I had a data_binding library where I could specify the types and the conversion functions, and when the value arrives just set it and have the conversion take place bidirectionally. Then you write it up... think to yourself who might be able to use this... maybe those boost guys :-) Hope my input is helpful. Brian

Brian Davis writes:
The goal I have/desire is to perform type transformations and allow the programmer to specify these transformations in a generic way
That is more or less the same goal as I have for the Domain library. From what you say below, though, it seems that there may be some differences in design or focus. You seem to focus on what happens during an assignment, and want a single assignment to be reflected in the value of potentially a number of other variables. In contrast, I am thinking of what happens when other functions (including but not limited to operators) are called. The idea was to allow the user of the Domain library (I envision this person most likely being a developer of another domain-specific library) to craft the set of type tags, conversion functions, and functions/operators, and then have an idiomatic MPL-based means of specifying how they all go together. The job of the Domain library would be to invoke the MPL code for domain selection, etc., so that the user of the library would not have to worry about this and, even more so, so that the user of that library would never have to worry about domain transformations when they call the functions/operators but could rely on domain correctness, even in completely generic algorithms. I will give more thought to rolling the assignment operator into this, but I'm not sure yet how the two ideas fit together.
For type T1 and T2 allow the programmer to specify f(T1 t1, T2 t2 ) and g(T1 t1, T2 t2 ) where f and g are transformation functions from T1 -> T2 and T2->T1 respectively.
Yes, I have in mind exactly these sorts of functions being made by the user of the Domain library.
Overloading operator= and specifying f(T1 t1, T2 t2) := t2 = t1 * 1/2 and g(T1 t1, T2 t2) := t1 = t2 * 2, allowing the programmer to type...
Transformations shall allow chaining/cascaded to be specified.
Transformations shall support bidirectional and unidirectional updates.
As mentioned above, this is what I have not really been thinking about.
Where no clear transformation exists... the programmer shall be allowed to specify one.
Of course.
Yes, so some types cannot be automatically transformed. My example provides absolutely no support for this except for 1:1 translation... for disparate types it is, well, compiler errors waiting to happen if no translation functions are supplied.
You mention the possibility of a 1:1 transformation. In my context, I'm not certain what that means. To me that signifies two different types in the same domain. Those can only be differentiated with respect to their value types (i.e., the types actually holding the data as opposed to the parts that track the domain information). In my scheme, as long as one value type can be constructed from another, there is always such a transformation within the same domain. No need for special support. Did you have something else in mind? It will always be the case that certain operations make no sense for certain domains. Thus, it is possible to try to use expressions on the generic domain types that cannot be resolved by the appropriate template specializations. I see this as no different from trying to use any ill-defined expression, though. The compiler will catch it, you will pay more attention to the documentation that describes what is legal for that domain, and you will fix your code. What else is possible?
The software engineer shall be allowed to specify a transform if the compiler cannot automatically figure it out.
More precisely, the engineer must specify a transformation between domains that must be interconverted. Except for the trivial case of transforming between value types within a domain, which is really not a domain transformation at all, the compiler cannot possibly know what transformation is appropriate. The goal here is to make it easier for an engineer to construct a library appropriate to a particular set of domains (an application domain if you will) so that the users of his/her library in turn have a truly easy set of generic, but domain-aware, operations to work with.
How difficult is it to specify new types and transforms? Are we talking about types like those defined in the section titled "A Deeper Look at Metafunctions - 3.1 Dimensional Analysis" in "C++ Template Metaprogramming" by David Abrahams and Aleksey Gurtovoy? If so... ouch... it could be quite painful to specify new types and conversions. What is your approach?
Developing a domain-aware library based upon the Domain library essentially involves the following:
- Naming the set of domains and the individual domains within the set. This is only a matter of creating a set of tags for dispatching; essentially this is an organized set of empty structures to create convenient names that guide template specialization.
- Potentially, creating an appropriate set of value types to hold whatever information is needed. In many cases, these will be preexisting types; in others, they will need special construction. As always, it all depends on the application domain.
- Creating the domain transformation functions. These are a set of specializations of template functions within a defined namespace.
- Potentially creating any appropriate definitions for operators. These are a set of specializations of classes within a defined namespace.
- Potentially creating non-operator functions on the domain types. Since these cannot be known in advance (unlike the operators), a bit more work is required on the part of the user of the Domain library. However, this is largely idiomatic and easily supported within the framework.
- Identifying which domain transformations are available. This involves a specialization of a single class within a defined namespace.
- Identifying which domains are appropriate for which operations. For example, additive operations may be defined for all domains, but multiplicative ones defined only for some. This is handled by a set of simple MPL vectors of domains.
The library takes care of sorting through the domains of function arguments, the set of domain interconversions, and the domains that are appropriate for the called operation to find an appropriate domain. The arguments are converted to the target domain if necessary before the operation is performed. This is perhaps the main thing that makes the library useful, as this makes the operations generic while maintaining domain-correctness.
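For a rough flavor of the tag, transformation, and availability steps above, a sketch along the following lines is possible; all names here are invented for illustration (a probability example) and none of this is the Domain library's actual API:

    #include <cmath>

    namespace prob_domains {

    // Domain tags: empty structs used purely for dispatch and specialization.
    struct linear_tag {};
    struct log_tag {};

    // Domain transformations, written as specializable function templates
    // keyed on the (source, target) tag pair.
    template <typename From, typename To>
    double transform(double value);   // primary template: no generic conversion

    template <>
    double transform<linear_tag, log_tag>(double p)  { return std::log(p); }

    template <>
    double transform<log_tag, linear_tag>(double lp) { return std::exp(lp); }

    // A trait recording which transformations are available, so generic code
    // can select a reachable domain at compile time.
    template <typename From, typename To>
    struct is_convertible_domain { static const bool value = false; };

    template <>
    struct is_convertible_domain<linear_tag, log_tag>  { static const bool value = true; };

    template <>
    struct is_convertible_domain<log_tag, linear_tag>  { static const bool value = true; };

    } // namespace prob_domains

A real library would additionally wrap the value together with its domain tag and drive the domain selection with MPL sequences, as described above.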
Yes, sounds great, with the exception of the "operators" portion... like everyone in Boost using operators for new purposes (think Spirit - and I don't mean this in a bad way), we are running out of operators which provide meaningful semantics...
I am thinking of cases in which the operators already have a natural meaning in the application domain being modeled by all the domain types, but that the implementations might differ among domain types and so interconversions are required. This is quite different than overloading operators to describe an entirely new language, as Spirit does.
What happens when an operation cannot be found in a domain, but the programmer can or needs to specify one? Is it extensible?
Yes, generally by specializing things in well-known namespaces.
The easier it is to provide new specifications for types and their transformations, the better... you'll never think of every domain and translation. You need to supply a clear path to adding new types, domains, and translations.
Of course. The goal is to make it easy and idiomatic for others to do this, not to anticipate all possible uses. As I mentioned, the role of this is as a base layer upon which application domain-specific libraries are easy to build.
How do I obtain the code and where is the problem?
I'll try to get it in shape for public consumption this weekend. I have a worked example based on a toy polynomial library to illustrate how to build a library on top of the Domain library. However, the documentation is mainly in the form of Doxygen comments and so I need to give a bit more of the overview. Stay tuned. ...
Generic domain is exactly what I am after. We have types T1, T2, T3, ..., TN and we have to translate between them... What's the easiest way to specify this for the programmer?
That and how to automate the process of choosing the transformations is what I'm trying to make easy. As soon as I can get the code out there, please let me know how to improve the ideas. Thanks a lot for your interest so far. I look forward to additional comments. Cheers, Brook

The goal I have/desire is to perform type transformations and allow the programmer to specify these transformations in a generic way
That is more or less the same goal as I have for the Domain library. From what you say below, though, it seems that there may be some differences in design or focus. You seem to focus on what happens during an assignment, and want a single assignment to be reflected in the value of potentially a number of other variables.
Yes, that is what I want. This is what I mean by data binding: a bunch of values can be bound together so that when one changes they all change, instead of having to set a value and then call each conversion function in a chain.
In contrast, I am thinking of what happens when other functions (including but not limited to operators) are called. The idea was to allow the user of the Domain library (I envision this person most likely being a developer of another domain-specific library) to craft the set of type tags, conversion functions, and functions/operators, and then have an idiomatic MPL-based means of specifying how they all go together. The job of the Domain library would be to invoke the MPL code for domain selection, etc. so that the user of the library would not have to worry about this and even more so, so that the user of that library would never have to worry about domain transformations when they call the functions/operators but could rely on domain correctness, even in completely generic algorithms.
Users of libraries often become extenders of the library when it ultimately does not meet their needs. All libs suffer from this. So making it extensible is important, to allow the user to extend it to their domain. This library is then a library for library developers for domain-specific transformations?
I will give more thought to rolling the assignment operator into this, but I'm not sure yet how the two ideas fit together.
Give the demo code a try if you haven't... either you will be aghast at what I have done or find it a very nifty bit of code. I guess there will be very little middle ground for those who see how it works.
Yes, so some types cannot be automatically transformed. My example provides absolutely no support for this except for 1:1 translation... for disparate types it is, well, compiler errors waiting to happen if no translation functions are supplied.
You mention the possibility of a 1:1 transformation. In my context, I'm not certain what that means. To me that signifies two different types in the same domain. Those can only be differentiated with respect to their value types (i.e., the types actually holding the data as opposed to the parts that track the domain information). In my scheme, as long as one value type can be constructed from another, there is always such a transformation within the same domain. No need for special support. Did you have something else in mind?
What I mean by 1 to 1 is that if no transformation functions are specified, a value A set and bound to value B will be equal 1:1: if you set A to 1, B will be 1; if you set B to 45, A will be 45. What I mean by compiler errors waiting to happen is that there is no type checking for primitive types (it was only a simple example of what I had in mind), so if you pass in T1 = float and a class for T2 without transformation functions, you will get compiler warnings when you set A to 2.34... what does the class B equal?... compiler errors.
It will always be the case that certain operations make no sense for certain domains. Thus, it is possible to try to use expressions on the generic domain types that cannot be resolved by the appropriate template specializations. I see this as no different from trying to use any ill-defined expression, though. The compiler will catch it, you will pay more attention to the documentation that describes what is legal for that domain, and you will fix your code. What else is possible?
That seems perfectly reasonable and correct.
The software engineer shall be allowed to specify a transform if the compiler cannot automatically figure it out.
More precisely, the engineer must specify a transformation between domains that must be interconverted. Except for the trivial case of transforming between value types within a domain, which is really not a domain transformation at all, the compiler cannot possibly know what transformation is appropriate.
The goal here is to make it easier for an engineer to construct a library appropriate to a particular set of domains (an application domain if you will) so that the users of his/her library in turn have a truly easy set of generic, but domain-aware, operations to work with.
How difficult is it to specify new types and transforms? Are we talking about types like those defined in the section titled "A Deeper Look at Metafunctions - 3.1 Dimensional Analysis" in "C++ Template Metaprogramming" by David Abrahams and Aleksey Gurtovoy? If so... ouch... it could be quite painful to specify new types and conversions. What is your approach?
Developing a domain-aware library based upon the Domain library essentially involves the following:
- Naming the set of domains and the individual domains within the set. This is only a matter of creating a set of tags for dispatching; essentially this is an organized set of empty structures to create convenient names that guide template specialization.
- Potentially, creating an appropriate set of value types to hold whatever information is needed. In many cases, these will be preexisting types; in others, they will need special construction. As always, it all depends on the application domain.
- Creating the domain transformation functions. These are a set of specializations of template functions within a defined namespace.
- Potentially creating any appropriate definitions for operators. These are a set of specializations of classes within a defined namespace.
- Potentially creating non-operator functions on the domain types. Since these cannot be known in advance (unlike the operators), a bit more work is required on the part of the user of the Domain library. However, this is largely idiomatic and easily supported within the framework.
- Identifying which domain transformations are available. This involves a specialization of a single class within a defined namespace.
- Identifying which domains are appropriate for which operations. For example, additive operations may be defined for all domains, but multiplicative ones defined only for some. This is handled by a set of simple MPL vectors of domains.
The library takes care of sorting through the domains of function arguments, the set of domain interconversions, and the domains that are appropriate for the called operation to find an appropriate domain. The arguments are converted to the target domain if necessary before the operation is performed. This is perhaps the main thing that makes the library useful, as this makes the operations generic while maintaining domain-correctness.
Yes, sounds great, with the exception of the "operators" portion... like everyone in Boost using operators for new purposes (think Spirit - and I don't mean this in a bad way), we are running out of operators which provide meaningful semantics...
I am thinking of cases in which the operators already have a natural meaning in the application domain being modeled by all the domain types, but that the implementations might differ among domain types and so interconversions are required. This is quite different than overloading operators to describe an entirely new language, as Spirit does.
Let me just clarify what I mean here. Consider matrix calculations, where it would be nice to be able to use operators standard to the domain. Where you have

    X = { 1 }
        { 2 }
        { 3 }

as a column vector, it would be great to type X' in C++ and get { 1 2 3 }, the row vector, but alas we cannot type this in C++. We might be able to if we had access to other operators like this, because we would be writing code in a Unicode editor with a compiler that supports Unicode text and has been expanded to utilize the new operators that such a coding scheme would provide. I would not see backward compatibility issues here, as if you are using a Unicode editor you are writing forward compatible code. Have you ever been writing code only to find that you wish the syntax you desire was possible, but you are limited by the small set of operators available to overload? They can speak for themselves, but I am sure the Spirit and Karma guys might agree with me here. This however is off topic and a little too forward thinking. Sadly I feel we have been crippled as developers by standards based on UTF-8 encoding :-( Let's free our compilers and free our code... Unicode... who's with me!... I'll digress again...
What happens when an operation cannot be found in a domain, but the programmer can or needs to specify one? Is it extensible?
Yes, generally by specializing things in well-known namespaces.
The easier it is to provide new specifications for types and their transformations, the better... you'll never think of every domain and translation. You need to supply a clear path to adding new types, domains, and translations.
Of course. The goal is to make it easy and idiomatic for others to do this, not to anticipate all possible uses. As I mentioned, the role of this is as a base layer upon which application domain-specific libraries are easy to build.
How do I obtain the code and where is the problem?
I'll try to get it in shape for public consumption this weekend. I have a worked example based on a toy polynomial library to illustrate how to build a library on top of the Domain library. However, the documentation is mainly in the form of Doxygen comments and so I need to give a bit more of the overview. Stay tuned. ...
I am familiar with running Doxygen and bjam. I can't say I am an expert in either, but I can find my way around with these tools. I am interested to see what you've got. It sounds like you have given this considerably more thought than I have.
Generic domain is exactly what I am after. We have types T1, T2, T3, ..., TN and we have to translate between them... What's the easiest way to specify this for the programmer?
That and how to automate the process of choosing the transformations is what I'm trying to make easy. As soon as I can get the code out there, please let me know how to improve the ideas.
Thanks a lot for your interest so far. I look forward to additional comments.
Out of curiosity, do you think this library could support time-domain transforms to Z-transforms and S-transforms? How complex a transformation do you think it will support? I help where I can, provide input where I can, and hope it helps. Brian

Brian Davis writes:
Yes, that is what I want. This is what I mean by data binding: a bunch of values can be bound together so that when one changes they all change, instead of having to set a value and then call each conversion function in a chain.
At this point I'm thinking I agree with Stjepan. To me this seems like a combination of two distinct things: passing messages about when things happen (i.e., your binding stuff) and interspersing conversions when necessary. Both your approach and the DataFlow approach seem aimed at the former. My Domain approach is clearly aimed at the latter. From my (quick) look over your code it looks like your types could be my domains and that whenever transformations were required they would either just happen or you could explicitly make use of my domain_cast() function. Thus, there might be a very simple merger that would handle all the conversions easily and behind the scenes.
Users of libraries often become extenders of the library when it ultimately does not meet their needs. All libs suffer from this. So making it extensible is important, to allow the user to extend it to their domain. This library is then a library for library developers for domain-specific transformations?
Absolutely, they do. Indeed, even library writers play the two different roles of developer and user. I don't imagine too many people develop libraries purely for the fun of doing so with no intent of using them themselves. The entire goal of my Domain library is to make the development of these sorts of libraries systematic and idiomatic. If it succeeds, then building libraries of types representing related domains becomes much simpler. Of course, the details of conversions and so on still lie in the developer's domain and could be very hard in certain cases. However, the remainder of the challenge can be abstracted away so that only the essential elements are left to the library developer.
Give the demo code a try if you haven't... either you will be aghast at what I have done or find it a very nifty bit of code. I guess there will be very little middle ground for those who see how it works.
I've looked it over, but not yet tried it. I'll see if my thoughts above are actually correct about bridging them easily.
What I mean by 1 to 1 is that if no transformation functions are specified, a value A set and bound to value B will be equal 1:1: if you set A to 1, B will be 1; if you set B to 45, A will be 45. What I mean by compiler errors waiting to happen is that there is no type checking for primitive types (it was only a simple example of what I had in mind), so if you pass in T1 = float and a class for T2 without transformation functions, you will get compiler warnings when you set A to 2.34... what does the class B equal?... compiler errors.
I presume by this you mean that the value types A and B are interconvertible by your use of numeric_cast? All my types are more complex than just the value type; they carry their domain information as well. Thus, there is no such thing as having two different domains being interconvertible without an explicit conversion.
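A tiny hypothetical sketch of what "carrying the domain information" can look like; the wrapper and function names are invented, not taken from the Domain library. The value is paired with a domain tag, so two domains are never interconvertible without an explicit, user-supplied conversion:

    #include <cmath>

    struct linear_tag {};
    struct log_tag {};

    // Hypothetical wrapper pairing a value with its domain tag.  Two
    // instantiations with different tags are unrelated types.
    template <typename DomainTag, typename Value>
    struct tagged
    {
        explicit tagged(Value v) : value(v) {}
        Value value;
    };

    // An explicit conversion supplied by the library developer.
    inline tagged<log_tag, double> to_log(const tagged<linear_tag, double>& p)
    {
        return tagged<log_tag, double>(std::log(p.value));
    }

    int main()
    {
        tagged<linear_tag, double> p(0.25);
        // tagged<log_tag, double> lp = p;        // would not compile
        tagged<log_tag, double> lp = to_log(p);   // explicit conversion only
        (void)lp;
    }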
Out of curiosity, do you think this library could support time-domain transforms to Z-transforms and S-transforms? How complex a transformation do you think it will support?
Absolutely. There is no inherent limit to the types of conversions possible. You just have to make the conversion functions available. One way to view this is as a generic type-tagging and dispatching system that gives the generic code a systematic means of figuring out what to do.
I help where I can, provide input where I can, and hope it helps.
Thanks for your input already. I'm getting a clearer picture of how the two sets of ideas might mesh. Cheers, Brook

To make the discussion of the Domain library a bit more concrete, I have posted a web page http://abies.nmsu.edu/software/domain/polynomial_library.html describing an example of using it to develop (a tiny part of) a polynomial library. For the moment it is a pretty raw page with broken links, etc. However, perhaps it will serve to give a better flavor of how this can be used to build other libraries. Even better, perhaps it will give a better opportunity for you to contribute new ideas that will improve the library. Thanks for your constructive feedback. Cheers, Brook

I think it can be interesting, but can you write a sample that makes deeper use of your lib? Thank you. On Sun, Jul 27, 2008 at 12:29 AM, Brook Milligan <brook@biology.nmsu.edu> wrote:
To make the discussion of the Domain library a bit more concrete, I have posted a web page
http://abies.nmsu.edu/software/domain/polynomial_library.html
describing an example of using it to develop (a tiny part of) a polynomial library. For the moment it is a pretty raw page with broken links, etc. However, perhaps it will serve to give a better flavor of how this can be used to build other libraries.
Even better, perhaps it will give a better opportunity for you to contribute new ideas that will improve the library.
Thanks for your constructive feedback.
Cheers, Brook
-- Alp Mestan --- http://blog.mestan.fr/ --- http://alp.developpez.com/ --- In charge of the Qt, Algorithms and Artificial Intelligence sections on Developpez

Alp Mestan writes:
I think it can be interesting, but can you write a sample that makes deeper use of your lib?
By "deeper use", do you mean at the application level or at the library implementation level? In the context of the polynomial example, I'm asking whether you are seeking something more at the level of _using_ the polynomials for some application-specific purpose or at the level of _implementing_ a more complete polynomial library with a fuller set of operations? Cheers, Brook

I ask for the first, but depending on what you'll show I could ask for the second. I mean, this lib is a very good idea, but I would like to see what it is capable of in order to see whether it would be fine as it is now or whether it needs to be completed with 'a fuller set of operations'. Thanks. On Sun, Jul 27, 2008 at 3:47 AM, Brook Milligan <brook@biology.nmsu.edu> wrote:
Alp Mestan writes:
I think it can be interesting, but can you write a sample that makes deeper use of your lib?
By "deeper use", do you mean at the application level or at the library implementation level? In the context of the polynomial example, I'm asking whether you are seeking something more at the level of _using_ the polynomials for some application-specific purpose or at the level of _implementing_ a more complete polynomial library with a fuller set of operations?
Cheers, Brook
-- Alp Mestan --- http://blog.mestan.fr/ --- http://alp.developpez.com/ --- In charge of the Qt, Algorithms and Artificial Intelligence sections on Developpez

Alp Mestan writes:
I ask for the first, but depending on what you'll show I could ask for the second.
Well, the short answer is that all the obvious main operators are supported. This includes all the comparison operators, the two unary operators, the two additive operators, and the two multiplicative operators. Notable exceptions are things like shift operators, modulo, etc., but that is mainly because I haven't needed them just yet and haven't quite gotten around to implementing them. There are no substantive issues blocking that though.
From the library developers' viewpoint, any or all of these may be implemented using techniques that essentially duplicate those described for the polynomial example. Indeed, I am refactoring my Probability library (which requires a fairly rich set of operators and functions) to use this framework. Thus, if you can imagine a set of domains, appropriate implementations of the operators, and the interconversions, the library should support it.
The long answer, i.e., how _exactly_ to do all this to develop a library, will have to wait a bit more until I can tidy up the code and write more documentation. I'm hoping that won't take too long, so please stay tuned. In the meantime, I really am interested in feedback on the overall design (e.g., idea of making this type of library systematic, the means of providing extensibility, etc.) and improvements in the implementation. Thanks for your interest. Cheers, Brook

Brook Milligan wrote:
To make the discussion of the Domain library a bit more concrete, I have posted a web page
http://abies.nmsu.edu/software/domain/polynomial_library.html
describing an example of using it to develop (a tiny part of) a polynomial library. For the moment it is a pretty raw page with broken links, etc. However, perhaps it will serve to give a better flavor of how this can be used to build other libraries.
Even better, perhaps it will give a better opportunity for you to contribute new ideas that will improve the library.
I'm concerned about the behaviour of standard algorithms in this context. For example, consider a general 'power' function

    template <typename T, typename Integer>
    T power(T, Integer);

which will involve a lot of multiplying. Adapting your example code, if we do this:

    typedef polynomials::polynomial<>::type    polynomial_c;
    typedef polynomials::polynomial_pv<>::type polynomial_pv;

    polynomial_c p1, p2;
    p2 = power(p1, 100);

then the power function will be instantiated with T=polynomial_c, which (depending on its exact implementation) will probably lead to many conversions back and forth between polynomial_c and polynomial_pv. More efficient would be to convert once at the beginning and once back at the end. (Of course even more efficient is probably to have a polynomial-specific power function, but one can't be expected to reimplement every generic algorithm for polynomials.)

To get efficient behaviour here the user will have to realise that power will involve a lot of multiplications and pass the appropriate type to it. This is less than ideal. It's even worse if the algorithm is best implemented by first doing lots of computation in one domain and then doing a lot in another.

I'm not sure what the best solution is, but one possibility is this: support a runtime-domained type, which acts something like a variant<polynomial_c, polynomial_pv>, but with operators defined on it. The operators will return a polynomial in whatever representation is most convenient for them, which in the case of multiplication will always be polynomial_pv. When you pass this type to power it will automatically be working with polynomial_pv variables after one round of multiplications, and (again, depending on the specific implementation) it may only have to switch domains once. The user can now pass this type to an algorithm whenever they're not sure which domain is most appropriate for it.

The down side of this of course is that there's additional runtime cost with every operation. It's possible (but I don't think likely) that most of this could be optimized away in simple cases such as the above if all the operators are inlined.

John
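For concreteness, a minimal sketch of the runtime-domained idea John describes, using toy stand-ins for the two polynomial representations and a trivial placeholder for the conversion and point-wise product; none of these names come from the actual libraries:

    #include <cstddef>
    #include <variant>
    #include <vector>

    // Toy stand-ins for the two representations.
    struct polynomial_c  { std::vector<double> coeffs; };   // coefficient form
    struct polynomial_pv { std::vector<double> points; };   // point-value form

    // Placeholder conversion and point-wise product; a real library would do an
    // evaluation/interpolation (e.g. FFT-based) transform here.
    polynomial_pv to_pv(const polynomial_c& c) { return polynomial_pv{ c.coeffs }; }

    polynomial_pv multiply(polynomial_pv a, const polynomial_pv& b)
    {
        for (std::size_t i = 0; i < a.points.size() && i < b.points.size(); ++i)
            a.points[i] *= b.points[i];
        return a;
    }

    // The runtime-domained type: a variant of the two representations whose
    // operators answer in whichever domain suits the operation (pv for *).
    struct any_polynomial
    {
        std::variant<polynomial_c, polynomial_pv> value;

        friend any_polynomial operator*(const any_polynomial& a, const any_polynomial& b)
        {
            auto as_pv = [](const any_polynomial& p) -> polynomial_pv {
                if (const polynomial_c* c = std::get_if<polynomial_c>(&p.value))
                    return to_pv(*c);                     // convert at most once
                return std::get<polynomial_pv>(p.value);  // already in pv form
            };
            return any_polynomial{ multiply(as_pv(a), as_pv(b)) };
        }
    };

    int main()
    {
        any_polynomial p{ polynomial_c{ {1.0, 2.0, 3.0} } };
        any_polynomial q = p * p;   // result is held in point-value form
        q = q * p;                  // only p needs converting this time
    }

As John notes, the price is a runtime check (and possible conversion) on every operation.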

John Bytheway writes:
I'm concerned about the behaviour of standard algorithms in this context. For example, consider a general 'power' function
template<typename T, typename Integer> T power(T, Integer);
which will involve a lot of multiplying.
You are absolutely right that an algorithm like this could be much less efficient if given the "wrong" type for T than if given a "better" type for T. You are also correct to observe that this might impose a requirement that the user of the algorithm know something about the interaction between the algorithm and the type T in order to understand the performance issues. There are some benefits deriving from the Domain library that I hope will reduce your concern.
- Use of the generic power() is guaranteed to give the correct answer, regardless of the choice made for T. Perhaps that guarantee is worth something.
- A reasonable implementation of power() would use operator*=(), and a reasonable implementation of polynomial_c (as described) would not define that. Thus, the user code you describe would never compile, and a search for the reason why would quickly lead to the polynomial_pv type, which gives the minimal conversions you seek.
- The actual difference in performance will be small for low-degree polynomials, as the scaling is asymptotic. A different library design allowing multiplication of both polynomial types might be appropriate. Alternatively, why not have a customizable polynomial library that can be tuned for large (where even two conversions may be preferred over one multiplication) versus small (where it may matter little) polynomials? Any of these solutions could be constructed using Domain.
- Developing a Domain-aware specialization of the power() algorithm could resolve the performance issues altogether. The algorithm would be generic and thus applicable to all types based upon the library, not just polynomials.
I'm not sure what the best solution is, but one possibility is this:
Support a runtime-domained type, which acts something like a variant<polynomial_c, polynomial_pv>, but with operators defined on it.
Rather than have this runtime cost, the library has a type deduction system that determines the appropriate return types based upon the axioms that govern the library's design. Thus, a Domain-aware implementation of the power() function can guarantee that the argument is transformed at most once[1] and then only when necessary. This, I believe, offers the best possibility: any Domain-based argument type works, the minimum number of conversions is performed, there are no runtime costs associated with deciphering the types, and the algorithm is still completely generic.

My hope is that the Domain library successfully provides a framework for developing application domain-specific libraries that provide both correctness and performance, while maintaining generic approaches. Please continue to raise issues to see if the library meets that goal. I hope I can flesh out the documentation so that more of this is apparent there, but in the meantime this discussion helps me understand the issues that need better descriptions. Thanks for the help. Cheers, Brook

[1] Perhaps two conversions are required if the return type _must_ conform to something specific that is incompatible with multiplication.

Brook Milligan wrote:
John Bytheway writes:
I'm concerned about the behaviour of standard algorithms in this context. For example, consider a general 'power' function <snip> There are some benefits deriving from the Domain library that I hope will reduce your concern.
- Use of the generic power() is guaranteed to give the correct answer, regardless of the choice made for T. Perhaps that guarantee is worth something.
Indeed. It's always a good first step! :)
- A reasonable implementation of power() would use operator*=() and a reasonable implementation of polynomial_c (as described) would not define that. Thus, the user code you describe would never compile and a search for the reason why would quickly lead to the polynomial_pv type which gives the minimal conversions you seek.
A good point, but the implementation I was looking at (from the gcc 4.3.1 header ext/numeric) doesn't, because it actually supports arbitrary MonoidOperations, not just multiply (and that in turn makes me think that there's something missing in that implementation, but that's another issue).
- Developing a Domain-aware specialization of the power() algorithm could resolve the performance issues altogether. The algorithm would be generic and thus applicable to all types based upon the library, not just polynomials.
Could you easily write it such a way as to work well for both Domain-based types and other types? (Clearly this is possible with sufficient metaprogramming, but is it easy?)
I'm not sure what the best solution is, but one possibility is this:
Support a runtime-domained type, which acts something like a variant<polynomial_c, polynomial_pv>, but with operators defined on it.
Rather than have this runtime cost, the library has a type deduction system that determines the appropriate return types based upon the axioms that govern the library's design. Thus, a Domain-aware implementation of the power() function can guarantee that the argument is transformed at most once[1] and then only when necessary. This, I believe, offers the best possibility: any Domain-based argument type works, the minimum number of conversions is performed, there are no runtime costs associated with deciphering the types, and the algorithm is still completely generic.
Indeed. I agree that's the best solution, but it might be overoptimistic to assume it's always a practical one. I feel runtime typing has its uses, but I can't come up with a concrete example, so perhaps I'm being too pessimistic. John

John Bytheway writes:
A good point, but the implementation I was looking at (from the gcc 4.3.1 header ext/numeric) doesn't, because it actually supports arbitrary MonoidOperations, not just multiply (and that in turn makes me think that there's something missing in that implementation, but that's another issue).
The basic implementation you speak of has a more general form that takes a monoid as an argument. Thus, it is possible to make this same algorithm work for a variety of tasks. Nevertheless, you are correct that the algorithm itself does not handle the conversions that could make it more general. Thus, it could impose unnecessary interconversions for Domain-aware types.
Could you easily write it such a way as to work well for both Domain-based types and other types? (Clearly this is possible with sufficient metaprogramming, but is it easy?)
Yes, but I believe it requires a slight restriction on the monoid type to do so. Specifically, an adaptable monoid concept would be needed that would include at least a result_type typedef. Suppose such a monoid existed. Then the following is a potential definition of a power() function. (Note that this is solely for illustration, as it is not optimized to use squaring, etc. as the ext/numeric one is.)

    template < typename T, typename Integer, typename Monoid >
    typename Monoid::result_type
    power (const T& t, Integer n, Monoid monoid_op)
    {
      typename Monoid::result_type t_(t);
      for (Integer i = 1; i < n; ++i)   // naive repeated application (n >= 1)
        t_ = monoid_op(t_, t);
      return t_;
    }

This will perform the conversion internally and at most once. Thus, the problem of lots of conversions can be avoided for types that interconvert correctly. Of course, this transfers a bit of the complexity to the design of the monoid type, as it needs to deduce things like the result type.

For general types that do not have interconversions, monoids can be written as the obvious extension to existing ones: just add the following typedef (where T is the primary value type used by the monoid):

    typedef T result_type;

For types created by the Domain library, which would have interconversions and would have to do type deduction for the result type, something like the following is possible:

    template < typename DomainFamily
             , typename DomainClass = typename boost::domains::extensions::domains::tag::multiplicative
             , typename Domain = typename boost::domains::mpl::default_domain< DomainClass
                                                                              , DomainFamily >::type
             , typename Value = typename boost::domains::mpl::default_value< DomainFamily
                                                                           , Domain >::type
             , typename Result = boost::domains::domain<Domain,Value>
             , typename Monoid = std::multiplies<Value>
             >
    struct monoid
    {
      typedef Result result_type;

      template < typename D1, typename V1, typename D2, typename V2 >
      result_type operator () (const boost::domains::domain<D1,V1>& d1,
                               const boost::domains::domain<D2,V2>& d2)
      {
        typedef typename result_type::domain_type domain_type;
        typedef typename result_type::value_type  value_type;
        return result_type(Monoid()(value_type(d1.value_cast(result_type())),
                                    value_type(d2.value_cast(result_type()))));
      }
    };

I'm not at all certain that this is the best design. However, it does illustrate the point that a completely flexible monoid is possible and that lots of the type deduction can be encapsulated within library-provided MPL code. Thus, users of the Domain library can construct types such as this monoid that provide lots of flexibility without having to worry about the details of type deduction.

By the way, the monoid above will work with the generic ext/numeric algorithm as well as with the more general one above. Likewise, the algorithm above works with other types that are not part of the Domain library so long as the monoid provides the result_type typedef. Thus, I think the answer is that, yes, it is easy to provide these algorithms that are very flexible, work with "normal" types, maintain type safety, but will also support automatic domain interconversions for types that are Domain-aware. All this can be accomplished without placing a great burden on the user of the Domain library to master the arcane details of MPL.

Cheers, Brook
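As a quick illustration of the "normal types" case above, the following sketch pairs the generic power() with a trivial multiply monoid for plain doubles; the monoid name is invented, and the only piece of the adaptable-monoid concept it models is the result_type typedef:

    #include <iostream>

    // A plain multiply monoid for ordinary value types.
    template <typename T>
    struct multiplies_monoid
    {
        typedef T result_type;
        T operator()(const T& a, const T& b) const { return a * b; }
    };

    // The same generic power() as sketched above (naive repeated application).
    template <typename T, typename Integer, typename Monoid>
    typename Monoid::result_type
    power(const T& t, Integer n, Monoid monoid_op)
    {
        typename Monoid::result_type t_(t);
        for (Integer i = 1; i < n; ++i)
            t_ = monoid_op(t_, t);
        return t_;
    }

    int main()
    {
        std::cout << power(2.0, 10, multiplies_monoid<double>()) << '\n';  // prints 1024
    }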
participants (4)
- Alp Mestan
- Brian Davis
- brook@biology.nmsu.edu
- John Bytheway