Re: [Boost-users] Extending boost::function for mathematics?

Thanks as always Stephen (who I still refuse to accept is either a single individual or human): Sorry to all if this is getting so long and specific; this project is justified later.
Your comments generated a lot of thought about what my main problems are. I think I distracted myself with the generation of derivatives here before I had worked out some other issues. But your implementation looks very elegant.
For now let's assume that people supply the gradient/jacobian directly or from their own finite_difference/autodifferentiation. So here are some, more basic, questions.
Question 1: Based on your template, how do I forward a function to the
boost::function's operator() in this wrapper? Why might we want to do this?
Because we are trying to implement a concept with operator() and
.gradient(..) with a function wrapper here. Library writers may want to
write generic code that operates on any of these, not just ones wrapped in
math_function... Then we may end up with true inlining of operator() and
.gradient(...) when a full differentiable function object is implemented,
instead of always going through an indirection pointer in boost::function.
template<class Signature>
class math_function {
public:
    typedef typename boost::function<Signature>::result_type result_type;

    template<class F, class G>
    math_function(const F& f, const G& g) : f_(f), g_(g) {}

    //How do I extract the input parameters from the Signature here?
    result_type operator()(/* myvalues... */)
    {
        return f_(/* myvalues... */);
    }

    //Same question for the gradient.
    result_type gradient(/* myvalues... */)
    {
        return g_(/* myvalues... */);
    }

private:
    boost::function<Signature> f_;
    boost::function<Signature> g_;
};
void test_mathfunctions()
{
using lambda::_1;
using lambda::_2;
math_function

On Sun, Jan 25, 2009 at 10:41 AM, Jesse Perla
Question 1: Based on your template, how do I forward a function to the boost::function's operator() in this wrapper? Why might we want to do this? Because we are trying to implement a concept with operator() and .gradient(..) with a function wrapper here.
Do you require that a math_function object be callable? I think you're complicating things by defining operator() and .gradient(). From what I'm reading, I'd rather have operations like gradient(), derivative(), etc. return boost::function objects. My rationale is that code that just needs to refer to a function and call it shouldn't be coupled with math_function. Also, it makes sense to apply operations like .gradient on a boost::function as well. For this, you could define conversion from boost::function to math_function and then call a member function, but can't these operations be namespace-scope overloads? Then you can overload them for boost::function as well as math_function.
1) Everyone is using dynamic polymorphism for these overloading a virtual operator() and some type of gradient/jacobian... I assume this is because there is no agreed upon concept for functions objects with gradients... In reality, none of these seem to require dynamic polymorphism.
boost::function implements its own dynamic dispatching machinery, so boost::function calls would rarely, if ever, be inlined. I'm contradicting myself here, but this might be a good enough reason for you to not use boost::function at all in your framework, and only define a separate module to convert to boost::function for interfacing with 3rd party code unaware of math_function.

Emil Dotchevski
Reverge Studios, Inc.
http://www.revergestudios.com/reblog/index.php?n=ReCode

Do you require that a math_function object be callable?
It is a good question, and one I have wrestled with. The short answer
is that the algorithms you might use it with are in scientific
computing, with an incredibly high number of executions.
So for the same reason you might want to write function objects that
directly implement operator() and have a result_type typedef for
adaptation, you would want a similar option when designing generic
algorithms. If you don't use math_function, you can inline; if you
do, then it will use boost::function dispatching for its underlying
object.
I think that part of the design question here is whether this is a
(type erasure) wrapper class that implements the concept of a
differentiable function, and what that concept looks like. For
example, here is what I am thinking for a root finder:
template <class F>
double find_root(F f, double initial_value)
{
    //implement Newton's method, executing f(x) and f.gradient(x) to find the zero
    //This might want to do a dynamic or static assertion that the
    //function has some sort of single_crossing trait associated with
    //it... For later, but this is why I think those traits are useful.
}

struct square { double operator()(double x){return x*x;} double gradient(double x){return 2*x;} }; //quadratic differentiable function
struct scaled_square { double operator()(double x, double a){return a*x*x;} double gradient(double x, double a){return 2*a*x;} }; //scaled quadratic

double zero = find_root(square(), .1); //inlines everything.
double zero = find_root(math_bind(scaled_square(), _1, 1.0), .1); //Would use the dynamic dispatching in boost::function, since math_bind creates a math_function object which holds a boost::function
double zero = find_root(math_function

jesseperla wrote:
Do you require that a math_function object be callable?
This discussion is very interesting. Probably you know this already but if not, given an f smooth over the desired interval, this paper has a technique for making the calculation of the derivatives generic: http://homepage.mac.com/sigfpe/paper.pdf Russell
template <class F>
double find_root(F f, double initial_value)
{
    //implement Newton's method, executing f(x) and f.gradient(x) to find the zero
    //This might want to do a dynamic or static assertion that the function has
    //some sort of single_crossing trait associated with it... For later, but
    //this is why I think those traits are useful.
}

struct square { double operator()(double x){return x*x;} double gradient(double x){return 2*x;} }; //quadratic differentiable function
struct scaled_square { double operator()(double x, double a){return a*x*x;} double gradient(double x, double a){return 2*a*x;} }; //scaled quadratic

double zero = find_root(square(), .1); //inlines everything.
double zero = find_root(math_bind(scaled_square(), _1, 1.0), .1); //Would use the dynamic dispatching in boost::function, since math_bind creates a math_function object which holds a boost::function
double zero = find_root(math_function(_1 * _1, 2 * _1), .1); //constructs a new math_function... similar to the previous line.

Alternatively, if it is in a class:

template <class F>
class newton_root_finder {
    F f_;
    template<class G>
    newton_root_finder(const G& f) : f_(f) {} //Here it will construct the underlying object as required.
    double solve() { /* implement Newton's method using f_() and f_.gradient() */ }
};

_______________________________________________
Boost-users mailing list
Boost-users@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users


I cannot open this link with either IE or Firefox. I'm using Windows XP SP2. What's the problem? B/Rgds Max

Hi,
On Wed, Jan 28, 2009 at 11:31 AM, Max
I cannot open this link with either IE or Firefox. I'm using Windows XP SP2.
What's the problem?
I am able to open it. I would suggest that you save the link (file) or use a command line tool like wget to download it if you still have problems. -dhruva -- Contents reflect my personal views only!

I am able to open it. I would suggest that you save the link (file) or use a command line tool like wget to download it if you still have problems.
Hello dhruva, Thanks for your reply. But the wget tool is not available under Windows. Could you please send a copy directly to me? Thanks in advance. B/Rgds Max

Max wrote:
Hello dhruva, Thanks for your reply. But the wget tool is not available under Windows. Off topic, but:
http://users.ugent.be/~bpuype/wget/ -- ___________________________________________ Joel Falcou - Assistant Professor PARALL Team - LRI - Universite Paris Sud XI Tel : (+33)1 69 15 66 35

Sorry if off-topic. I've downloaded wget.exe and used it under the command prompt to get the paper:

wget http://homepage.mac.com/sigfpe/paper.pdf

But I still got this error msg:

--2009-01-28 17:20:33-- http://homepage.mac.com/sigfpe/paper.pdf
Resolving homepage.mac.com... 17.250.248.34
Connecting to homepage.mac.com|17.250.248.34|:80... failed: Network is unreachable.

Any help is appreciated. B/Rgds Max

Quoting Max
Sorry if off-topic. I've downloaded wget.exe and use it under command prompt to get the paper:
wget http://homepage.mac.com/sigfpe/paper.pdf
But I still got this error msg:
--2009-01-28 17:20:33-- http://homepage.mac.com/sigfpe/paper.pdf Resolving homepage.mac.com... 17.250.248.34 Connecting to homepage.mac.com|17.250.248.34|:80... failed: Network is unreachable.
Any help is appreciated.
I have sent you the paper via private email. You have some network problem so FF, IE or wget all fail. You have a "sina.com" email address - wild speculation but could it possibly be that the Great Firewall of China is blocking that site? Pete

I have sent you the paper via private email.
You have some network problem so FF, IE or wget all fail. You have a "sina.com" email address - wild speculation but could it possibly be that the Great Firewall of China is blocking that site?
Pete
This is a free email account. The cause is probably the firewall, as you guessed. Paper received. Thanks. B/Rgds Max

On Wed, Jan 28, 2009 at 05:29:47PM +0800, Max wrote:
Sorry if off-topic. I've downloaded wget.exe and use it under command prompt to get the paper:
wget http://homepage.mac.com/sigfpe/paper.pdf
But I still got this error msg:
--2009-01-28 17:20:33-- http://homepage.mac.com/sigfpe/paper.pdf Resolving homepage.mac.com... 17.250.248.34 Connecting to homepage.mac.com|17.250.248.34|:80... failed: Network is unreachable.
Any help is appreciated.
B/Rgds Max
Max
Just tried this and see:

[bob@snoogen VirusSignatures]$ wget http://homepage.mac.com/sigfpe/paper.pdf
--16:21:55-- http://homepage.mac.com/sigfpe/paper.pdf => `paper.pdf'
Resolving homepage.mac.com... done.
Connecting to homepage.mac.com[17.250.248.34]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 157,855 [application/pdf]
100%[==========>] 157,855 69.25K/s ETA 00:00
16:21:58 (69.25 KB/s) - `paper.pdf' saved [157855/157855]

Do you have a network connection? Can you run traceroute? Could you try again? Can you ping the server? What does netstat -rn say? Can you use a browser and go to http://homepage.mac.com/sigfpe

Bob
-- Now playing : Chevelle Franklin & Lady G - Thank You
-- Fortune Cookie : Because we don't think about future generations, they will never forget us. -- Henrik Tikkanen

Do you have a network connection?
Yes, I do.
Can you run traceroute?
No, it doesn't seem to be an available internal or external command on WinXP.
Could you try again?
Yes, but the problem persists.
Can you ping the server?
Yes, but the response is "Request timed out"
What does netstat -rn say?
Route Table
===========================================================================
Interface List
0x1 ........................... MS TCP Loopback interface
0x20002 ...00 0e 35 bd c5 ca ...... Intel(R) PRO/Wireless 2200BG Network Connection - Packet Scheduler Miniport
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0      192.168.0.1   192.168.0.100      25
        127.0.0.0        255.0.0.0        127.0.0.1       127.0.0.1       1
      169.254.0.0      255.255.0.0    192.168.0.100   192.168.0.100      30
      192.168.0.0    255.255.255.0    192.168.0.100   192.168.0.100      25
    192.168.0.100  255.255.255.255        127.0.0.1       127.0.0.1      25
    192.168.0.255  255.255.255.255    192.168.0.100   192.168.0.100      25
        224.0.0.0        240.0.0.0    192.168.0.100   192.168.0.100      25
  255.255.255.255  255.255.255.255    192.168.0.100   192.168.0.100       1
Default Gateway:      192.168.0.1
===========================================================================
Persistent Routes:
None
Can you use a browser and go to http://homepage.mac.com/sigfpe
No. As far as the paper is concerned, I've received a copy from Peter. Thanks for your attention. B/Rgds Max

AMDG Jesse Perla wrote:
Thanks as always Stephen (who I still refuse to accept is either a single individual or human): Sorry to all if this is getting so long and specific, this project is justified later.
Your comments generated a lot of thought about what my main problems are. I think I distracted myself with the generation of derivatives here before I had worked out some other issues. But your implementation looks very elegant. For now let's assume that people supply the gradient/jacobian directly or from their own finite_difference/autodifferentiation. So here are some, more basic, questions.
Question 1: Based on your template, how do I forward a function to the boost::function's operator() in this wrapper? Why might we want to do this? Because we are trying to implement a concept with operator() and .gradient(..) with a function wrapper here. Library writers may want to write generic code that operates on any of these, not just ones wrapped in math_function... Then we may end up with true inlining of operator() and .gradient(...) when a full differentiable function object is implemented, instead of always going through an indirection pointer in boost::function.
We can assume that anything assigned to a math_function is a pure function, right? The easiest way is to inherit from boost::function. To make this relatively safe we can use private inheritance:

template<class Signature>
class math_function : private boost::function<Signature> {
public:
    using boost::function<Signature>::operator();
};
Question 2: How can this wrapping interact with binding?

void test_mathfunctions_binding()
{
    using lambda::_1;
    using lambda::_2;
    //But you might also want a constant thrown in there... Why a separate
    //parameter? Because you don't take the derivative over the other stuff.
    //And you want to keep your functions pure so you can bind, not force
    //them to be stateful, etc.
    math_function myfunc(_1*_1 + _2, 2*_1); //simple quadratic translated by a constant

    //But how would you work with bind? Would the following work immediately?
    boost::function myfunc_bound = bind(myfunc, _1, 2.0);
    boost::function myfuncderiv_bound = bind(&math_function::gradient, myfunc, _1, 2.0); //or something like this... I always forget member function notation.
Yes. These should just work.
    //Any way to bind that entire structure back to a math_function? Very
    //useful, since we would bind parameters before calling optimizers, etc.
    //With dynamic programming we usually have a grid of values and call the
    //optimizer binding over the entire grid.
    math_function myfunc_bound = math_bind(myfunc, _1, 2.0);
    cout << myfunc_bound.gradient(1.0); //Note that the binding can now use the gradient.
}
It would take a bit of work, but it could be done. To make it really efficient, you would have to duplicate most of Boost.Bind with appropriate modifications.
Question 3: How would we adapt a structure implementing the operator() and gradient() to be stored here? Just like boost::function accepting any function object?

struct square {
    double operator()(double x) {return x * x;}
    double gradient(double x) {return 2 * x;}
};
math_function myfunc(square());

Given the previous code, would we write a constructor like?:

template<class F>
math_function(const F& f) : f_(f), g_(bind(&F::gradient, f, _1...?)) {}
//Here I don't see how to bind to the arbitrary signature, extracting the
//inputs from the Signature, which may take more than 1 input
Easily. I guess the question is whether you want to detect the presence of F::gradient and fall back to some default numeric algorithm if it isn't available.
Question 3: What are the thoughts on whether/how this is extensible to higher dimensions? Let's start on the gradient with mapping functions: f : R^N -> R. It is perfectly reasonable for these types of problems for the first parameter in the function to be the only one that the derivative is taken over. The dimensionality of the derivative/gradient/jacobian would come out of a combination of the result_type from the signature and the dimensionality of the domain. For example, something like:

template<int DomainDim, class Signature>
class math_function;
Sure. But wouldn't you want DomainDim to be the arity of the function (at least by default)?
Question 4: Now lets say that what we really want is to manage f: R^N -> R^M functions.
I don't see any particular reason that this should present any extra challenges. From the standpoint of designing the concepts for math functions, a vector is not all that different from a simple double.
Question 5: I want to also associate information about the nature of the function. Whether it is convex/concave, increasing, strictly increasing, etc. These tend to have an inheritance structure, but it isn't absolutely necessary. I was thinking about using trait classes. But I assume that we would lose all those traits when we use type erasure? Are we better off storing a bunch of enums in this data structure? Increasing the size of the class shouldn't be an issue.
If you need to know this information at compile time then, yes you lose it when you apply type erasure. However, it is fairly easy to give math_function members that return these traits at runtime.
Let me justify why this is all very useful and a general problem: Right now, everyone writing non-linear algorithms in C++ has their own structures for managing mathematical functions that contain derivatives, jacobians, and hessians. I am running into this right now because I am trying to wrap a bunch of other people's algorithms for my own work. A few comments:

1) Everyone is using dynamic polymorphism for these, overloading a virtual operator() and some type of gradient/jacobian... I assume this is because there is no agreed-upon concept for function objects with gradients... In reality, none of these seem to require dynamic polymorphism. Why shouldn't a Newton's method optimizer have inlining of the operator() and derivative through static polymorphism where possible?
It depends on how complex the function is. Are the ones in question sufficiently complex that the inlining wouldn't help much anyway?
2) Most of these guys have conflated functionality/settings specific to their optimizers/solvers with the evaluation of the function/gradient. This makes generic code difficult without wrappers/adapters.

3) Many use their own data structures for arrays, matrices, and multidimensional results. This may be inevitable, but it means that you need to have these types parameterized in any mathematical callback routine.

4) While a lot could be gained from having people agree on a static concept for mathematical applications, akin to standardizing on operator() for functors, it is also very useful to have a generic callback wrapper akin to boost::function. The main reason for this is that you will tend to write parameterized, pure functions where you want to bind parameters. Worst case, people could write their own functions with this mathematical callback structure and adapters could be written for the different libraries.
If you have an appropriate concept, you can always either a) non-intrusively adapt any class that provides the correct functionality to model the concept or b) Create a wrapper that adapts classes to model the concept. I would say that the most important thing is the concept and math_function can then be defined as being able to hold any object that models the concept.
5) Not to disparage these superb libraries, but a few examples to see what I mean: http://quantlib.org/reference/class_quant_lib_1_1_cost_function.html and http://trilinos.sandia.gov/packages/docs/r6.0/packages/didasko/doc/html/nox_... and http://www.coin-or.org/CppAD/Doc/ipopt_cppad_simple.cpp.xml
If anyone is interested in working with me on this very interesting and useful problem, please give me an email. I think it has a lot of potential for general use.
In Christ, Steven Watanabe

On automatic differentiation:

Russell: I am glad you brought this up. I think that this is about the coolest application of generic/functional programming I have ever seen, and it was a driver for what I have been talking about here with math_function. To give a brief summary of AD:

* All functions that are implemented on a computer (+, *, sin(), etc.) are actually composed of fairly simple atomic operations.
* If you use a (compile-time) stream of recursive applications of the chain rule and product rule, you can get the gradient of a function that is as close to analytically correct as the implementation of the underlying functions themselves.
* If anyone reading the "[Proto] implementing an computer algebra system with proto" thread is reading this, I believe it is a very similar type of problem.
* No more use of expensive and inaccurate finite differences, since you will have a single direct function.
* Because we are usually dealing with multi-dimensional functions, you typically need to think through how all the product/chain rule interacts with an array as well.
* Many implementations of AD use a text pre-processor for Fortran, etc. So you can immediately guess that one way to approach this is with expression templates.

The paper you mentioned is a great reference. Another thing for those interested to look at is the Sacado subproject of Trilinos (at Sandia). See https://cfwebprod.sandia.gov/cfdocs/CCIM/docs/Sacado_06.ppt Another implementation I was looking at is: http://www.fadbad.com/download/FlexibleAD-talk.pdf

A few notes on these implementations:

* In order to use expression templates and overloading, these have ended up having to subvert and replace the type systems or replace standard functions with their own templates. This may be necessary in some form, but right now you end up having to write all of your code with non-standard data structures, types, and functions. And you often need to set up a complicated framework to get AD support.
* You need expression templates for all of the functions. But boost has a lot of functions one would want to use in http://www.boost.org/doc/libs/1_35_0/libs/math/doc/html/index.html
* But for many problems, one needs to be able to have their own functions AD-aware as well. For example, my research usually involves solving a functional equation (dynamic programming). You pass in another function as part of its parameters. (For an example, see http://jacek.rothert.googlepages.com/vfi.pdf)
* I am already far beyond my level of competence, but it appears to me that a lot of the problems here for making things look syntactically easy are similar to what lambda did. Perhaps it is possible to do all of these without special types and functions. Using boost::proto may make it much more standard for people to convert their own functions to being AD-ready.
* You can probably see here why I was thinking about how a general math_function concept is necessary to build on if we want to start adding in things like this. Of course, the AD guys would really need to get involved with trying to add their stuff to boost to get everything working well.

Stephen: Thanks as always, a few follow-ups:
Question 3: How would we adapt a structure implementing the operator() and gradient() to be stored here? Just like boost::function accepting any function object?

struct square {
    double operator()(double x) {return x * x;}
    double gradient(double x) {return 2 * x;}
};
math_function myfunc(square());

template<class F>
math_function(const F& f) : f_(f), g_(bind(&F::gradient, f, _1...?)) {}
//Here I don't see how to bind to the arbitrary signature, extracting the inputs from the Signature, which may take more than 1 input
Easily. Great to hear. I think that getting the constructor working might be a good first step. Any hints on how to extract the list of inputs from the signature regardless of arity? I think this seems to be a common problem I have in dealing with functors and boost::bind. What might the "...?" look like in the example I posted?
Question 3: What are the thoughts on whether/how this is extensible to higher dimensions? Let's start on the gradient with mapping functions: f : R^N -> R

Sure. But wouldn't you want DomainDim to be the arity of the function (at least by default)?

Might be nice to extend it that way, but things may be a lot easier if you just assume that the dimension is only in the first parameter and it is passed in as some kind of array concept. Why? Because most multi-dimensional stuff in optimizers and solvers is already done with arrays for the parameters. And often the other arguments in the function are constants/parameters that you don't want to differentiate over.

Last: What is the type of the gradient? If we allow different types then it ends up having to be a tuple. What about the second derivative? A cross-product of tuples? Probably possible, but not worth it.

Last, the math_function concept should really have an in/out concept. For example, .gradient(const array& x, array& grad_out){...}. This is for compatibility with a lot of other libraries that use this approach, especially when you are worried about temporaries.
However, it is fairly easy to give math_function members that return these traits at runtime.

Is there a canonical example of the best way to do this?
Thanks, Jesse

On automatic differentiation: Russel: I am glad you brought this up. I think that this is about the coolest application of generic/functional programming I have ever seen, and was a driver for what I have been talking about here with math_function.
I wrote an AD class using the forward method in 2002. By now it uses boost::mpl to deal with different template instantiations containing different partial derivatives. I cannot contribute it here, because my boss says that we are not in the business of helping our competitors. BTW, it is possible to calculate higher derivatives by applying such a class to itself.
* If anyone reading the "[Proto] implementing an computer algebra system with proto" thread is reading this, I believe it is a very similar type of problem.
I don't see what proto has to do with this. Or I still don't understand what proto is intended for.
* No more use of expensive and inaccurate finite-difference since you will have a single direct function.
right -- finally
* Because we are usually dealing with multi-dimensional functions, you typically need to think through how all the product/chain rule interacts with an array as well.
I don't understand. Simply replace double with the type of your to-be-written AD class.
* Many implementations of AD use a text pre-processor for Fortran, etc.
antique
So you can immediately guess that one way to approach this is with expression templates. The paper you mentioned is a great reference. Another thing for those interested to look at is the Sacado subproject of Trilinos(at Sandia). See https://cfwebprod.sandia.gov/cfdocs/CCIM/docs/Sacado_06.ppt Another implementation I was looking at is: http://www.fadbad.com/download/FlexibleAD-talk.pdf
There are books out there describing the forward approach of AD and the reverse approach. The forward approach is usually faster, since it does not need to create a stack of operations to be interpreted later.
A few notes on these implementations:

* In order to use expression templates and overloading, these have ended up having to subvert and replace the type systems or replace standard functions with their own templates. This may be necessary in some form, but right now you end up having to write all of your code with non-standard data structures, types, and functions. And you often need to set up a complicated framework to get AD support.
* You need expression templates for all of the functions. But boost has a lot of functions one would want to use in http://www.boost.org/doc/libs/1_35_0/libs/math/doc/html/index.html
* But for many problems, one needs to be able to have their own functions AD-aware as well. For example, my research usually involves solving a functional equation (dynamic programming). You pass in another function as part of its parameters. (For an example, see http://jacek.rothert.googlepages.com/vfi.pdf)
other functions will have to be written as templates so that they can either accept a double or your to be written AD-type. I sincerely hope that such a class makes it into boost soon. Peter

don't see what proto has to do with this. Or I still don't understand what proto is intended for. Well, I would guess that if this is built out with proto, it will make it much easier and standard to generalize your own functions and for boost math to implement this for all of theirs. Or maybe I was just trying to sell the proto/expression template guys on getting interested in this problem because they are all geniuses.
I don't understand. Simply replace double with the type of your to-be-written AD class.

Sure, and if boost standardized those types it would be great. But there are two things here. All of the functions you use have to use these new types. That may be reasonably easy to implement if all of your functions have been written generically (and tougher if you are a library scavenger like me). But a bigger problem is when dealing with multiple dimensions. Forcing conversions of all arrays back and forth between different formats when you use other libraries (I will be using this in solvers and optimizers as well) can be a real issue unless they were all written with the same AD types. And matrices are even worse, since there doesn't seem to be an agreed-on matrix concept for element access to write generic code (e.g. boost ublas focuses on mymat(index1, index2) and mymat.begin1() to get a column iterator, whereas multi_array is myarr[i][j] and has different iterator concepts).
I sincerely hope that such a class makes it into boost soon.
Amen!

don't see what proto has to do with this. Or I still don't understand what proto is intended for. Well, I would guess that if this is built out with proto, it will make it much easier and standard to generalize your own functions and for boost math to implement this for all of theirs. Or maybe I was just trying to sell the proto/expression template guys on getting interested in this problem because they are all geniuses.
I know little about proto, and could not imagine right now what specific problem/domain is suitable for it. But I have the same feeling as you, that utilizing proto in domains like AD and other numeric ones will probably make things even cooler. I also want to raise those geniuses' (including Steven's) interest in digging deeper into these fields. :-) B/Rgds Max

AMDG jesseperla wrote:
Stephen: Thanks as always, a few follow-ups:
Err, I usually spell my name with a 'v'
Question 3: How would we adapt a structure implementing the operator() and gradient() to be stored here? Just like boost::function accepting any function object?

struct square {
    double operator()(double x) {return x * x;}
    double gradient(double x) {return 2 * x;}
};
math_function myfunc(square());

template<class F>
math_function(const F& f) : f_(f), g_(bind(&F::gradient, f, _1...?)) {}
//Here I don't see how to bind to the arbitrary signature, extracting the inputs from the Signature, which may take more than 1 input
You'll have to either specialize math_function for different arities or push the construction of the function that holds the gradient into a separate class which is specialized for each arity or use a custom function object that contains all the overloads of operator() needed instead of Boost.Bind.
However, it is fairly easy to give math_function members that return these traits at runtime.
Is there a canonical example of a way to do this the best way?
It's pretty straightforward:

// default implementation
template<class T>
bool is_convex(const T& t) { return false; }

// overload for math_function
template<class S>
bool is_convex(const math_function<S>& m) { return m.is_convex(); }

// overloads for other objects should be found by ADL.

template<class S>
class math_function {
    // ...
    template<class F>
    math_function(F f) : is_convex_(is_convex(f)) {}
    bool is_convex_;
};

This is the simplest solution. Of course, if necessary, you can use more type erasure to delay the calculation of is_convex until it's needed.

In Christ,
Steven Watanabe
participants (11)
- Bob Wilkinson
- dhruva
- Emil Dotchevski
- Jesse Perla
- jesseperla
- Joel Falcou
- Max
- Peter Bartlett
- peter_foelsche@agilent.com
- Russell L. Carter
- Steven Watanabe