
On Mon, Jul 18, 2011 at 12:21 PM, Emil Dotchevski <emildotchevski@gmail.com> wrote:
On Mon, Jul 18, 2011 at 11:14 AM, Jeffrey Lee Hellrung, Jr. <jeffrey.hellrung@gmail.com> wrote:
On Mon, Jul 18, 2011 at 12:24 AM, Emil Dotchevski wrote:
as could proxy structures (c*v could be a proxy vector where w<0>(c*v) = 1 is equivalent to w<0>(v) = 1/c).
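(For concreteness, a write-through proxy reference of the sort being alluded to might look roughly like this; scaled_view and scaled_ref are hypothetical names, not anything in QVM, and this is exactly the kind of element access that the lvalue-reference requirement on w discussed below rules out:)

    // Hypothetical sketch, not QVM code: a scaled view c*v whose element
    // "reference" is itself a proxy, so that writing through it stores
    // value/c into the underlying vector.
    class scaled_ref
        {
        float & x_;
        float c_;
        public:
        scaled_ref( float & x, float c ): x_(x), c_(c) { }
        scaled_ref & operator=( float value ) { x_ = value/c_; return *this; } // w<0>(c*v)=1 <=> w<0>(v)=1/c
        operator float() const { return c_*x_; } // reads see c*v
        };

    struct scaled_view
        {
        float (&v_)[3];
        float c_;
        scaled_ref element( int i ) { return scaled_ref(v_[i],c_); }
        };

    // usage:
    //   float v[3] = {1,2,3};
    //   scaled_view cv = { v, 2.0f };
    //   cv.element(0) = 1; // now v[0] == 0.5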
This requirement means that none of the boost::qvm functions can return temporary objects for proxies. The mutable proxies in Boost QVM avoid temporary objects by clever type casting. This helps control the abstraction penalty of the library.
I don't understand. Can you elaborate?
For example, here is the col_m function, which presents a vector type A as a one-column matrix:
template <class A>
typename boost::enable_if_c<
    is_v<A>::value,
    qvm_detail::col_m_<A> &>::type
BOOST_QVM_INLINE_TRIVIAL
col_m( A & a )
    {
    return reinterpret_cast<typename qvm_detail::col_m_<A> &>(a);
    }
The v_traits<col_m_>::r and v_traits<col_m_>::w functions cast back to A (which is stored as v_traits<col_m_>::OriginalVector) before accessing its elements:
template <int Row,int Col>
static
BOOST_QVM_INLINE_CRITICAL
scalar_type &
w( this_matrix & x )
    {
    BOOST_QVM_STATIC_ASSERT(Col==0);
    BOOST_QVM_STATIC_ASSERT(Row>=0);
    BOOST_QVM_STATIC_ASSERT(Row<rows);
    return v_traits<OriginalVector>::template w<Row>(reinterpret_cast<OriginalVector &>(x));
    }
This technique avoids the creation of temporaries, which are often the source of abstraction penalties.
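(Stripped of the traits machinery, the casting trick amounts to the following; vec3, col_view and as_column are made-up names for illustration only, relying on the same layout-compatibility assumption QVM makes:)

    struct vec3 { float e[3]; };

    struct col_view // no state of its own; same layout as vec3
        {
        float & at( int row ) { return reinterpret_cast<vec3 &>(*this).e[row]; }
        };

    inline col_view & as_column( vec3 & v )
        {
        return reinterpret_cast<col_view &>(v);
        }

    // usage:
    //   vec3 v = { {1,2,3} };
    //   as_column(v).at(0) = 5; // writes v.e[0]; no col_view object is ever constructed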
Wow, I guess I'm surprised the temporaries wouldn't be optimized away, at least in this case... I understand your desire to play it safe for now and require lvalue references for w. However, I don't see how the above example precludes the use of proxy references :/ Sorry for being dense. I would think col_m_<A> would simply use whatever reference type (lvalue or proxy) that A uses... In any case, I can't really think of a compelling use case for proxy references yet (can you?), so it's not a huge deal.
Why do you require the dimension of vectors (and, likely, matrices; I
haven't checked) to be strictly greater than 0? Sometimes a 0-dimensional vector is convenient to have when writing dimension-independent code.
I wasn't aware of that. What can you do with a zero dimensional vector?
Not much, to be sure (all zero-dim vectors of a given scalar type, at least, would be equal). I can't give a concrete example at the moment, but I seem to remember some recursion on dimension I've done where the base case was simpler to express at 0 than at 1.
In principle I'm not against lifting this requirement, but I want to make sure it's needed first. Doesn't this only affect the specialization of v_traits? I mean, you can still write recursive template metaprograms that use zero as the base case as long as they don't define zero size in a v_traits specialization.
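(As an illustration of that kind of recursion, with nothing to do with QVM's traits: a metaprogram that unrolls a dot product over the dimension, where the empty sum makes 0 the natural base case:)

    template <int N>
    struct dot_impl
        {
        template <class V>
        static float f( V const & a, V const & b )
            {
            return a[N-1]*b[N-1] + dot_impl<N-1>::f(a,b);
            }
        };

    template <>
    struct dot_impl<0> // the 0-dimensional base case: an empty sum
        {
        template <class V>
        static float f( V const &, V const & )
            {
            return 0;
            }
        };

    // dot_impl<3>::f(a,b) unrolls to a[2]*b[2] + a[1]*b[1] + a[0]*b[0] + 0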
Again, this is still kind of pure speculation. I could imagine an algorithm written using the QVM abstraction which, say, recurses in dimension by (lazily) removing the first or last element of vectors (does QVM support such operations?), and at some point you'd remove the first or last element of a 1-vector. But it's fair to avoid lifting the requirement until needed. I was just wondering if there was some other rationale for requiring a positive dimension other than "I hadn't seen a need for it".
What algorithm do you use for computing determinants?
The general case is pretty straightforward recursion, defined in boost/qvm/detail/determinant_impl.hpp. However, the library comes with a code generator (libs/qvm/gen.cpp) capable of defining overloads for any specific size, unrolling the recursion.
The code generator is used instead of template metaprogramming, again to control the abstraction penalty of the library.
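(For anyone following along, the textbook cofactor recursion looks roughly like this when written against a plain 2D array; this is only a sketch of the general idea, not QVM's determinant_impl:)

    #include <cstddef>

    inline double determinant( double const (&m)[1][1] )
        {
        return m[0][0];
        }

    template <std::size_t N>
    double determinant( double const (&m)[N][N] )
        {
        double minor[N-1][N-1];
        double det = 0;
        for( std::size_t j = 0; j != N; ++j )
            {
            // the minor that excludes row 0 and column j
            for( std::size_t r = 1; r != N; ++r )
                for( std::size_t c = 0, cc = 0; c != N; ++c )
                    if( c != j )
                        minor[r-1][cc++] = m[r][c];
            det += (j%2 ? -1 : 1) * m[0][j] * determinant(minor);
            }
        return det;
        }

    // e.g. double a[3][3] = { {1,2,3}, {4,5,6}, {7,8,10} }; determinant(a) == -3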
I'll take a look. I ask because I believe for 4x4 and larger matrices, a dynamic programming solution ends up significantly reducing the number of operations over the O(n!) recursive solution. But, I believe, for matrices larger than 5x5, still other techniques take fewer operations. Still, dynamic programming might improve the 4x4 and 5x5 cases. But maybe you've already looked into this...?
No I have not. The determinant computations are currently pretty straight-forward, save the use of an off-line code generator.
I just ran through a quick count of the number of multiplies, and for 4x4 matrices, dynamic programming will reduce the number of multiplications from 40 to 28 over straightforward recursion, while for 5x5 matrices the reduction is from 205 to 75 (or so).
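(For the record, counting only multiplications: expanding along a single row gives M(n) = n*(M(n-1)+1) with M(2) = 2, i.e. M(3) = 9, M(4) = 40, M(5) = 205. Sharing the minors of the trailing columns instead costs C(n,2)*2 + C(n,3)*3 + ... + C(n,n)*n, i.e. 6*2 + 4*3 + 1*4 = 28 for n = 4 and 10*2 + 10*3 + 5*4 + 1*5 = 75 for n = 5, which matches the numbers above.)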
I guess the documentation isn't clear about this, but boost/qvm/math.hpp defines function templates that correspond to the functions from <math.h>. The templates are specialized for float and double, but can be specialized for any other scalar type. That said, I don't have tests using any other scalar type. Perhaps a fixed-point scalar should be implemented to make sure nothing is missing.
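(A rough sketch of what such a specialization could look like, guessing at the exact declarations in boost/qvm/math.hpp; fixed_point and its conversion helpers are made up for the example:)

    #include <boost/qvm/math.hpp>
    #include <cmath>

    struct fixed_point { long raw; }; // hypothetical Q16.16 scalar

    inline double fp_to_double( fixed_point x ) { return x.raw/65536.0; }
    inline fixed_point fp_from_double( double x ) { fixed_point r = { long(x*65536.0) }; return r; }

    namespace boost { namespace qvm {

    template <>
    inline fixed_point sqrt<fixed_point>( fixed_point x )
        {
        // detour through double just for the sketch; a real fixed-point
        // sqrt would presumably stay in the integer domain
        return fp_from_double( std::sqrt( fp_to_double(x) ) );
        }

    } }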
Are complex scalar types within the scope of the library?
I think so. I've certainly been very careful to design a type system that permits non-trivial scalar types.
I figured complex scalars might be tacitly supported, as you don't require ordering comparisons on scalar types, but I'm not sure the scalar type requirements you give are sufficient (e.g., you probably need conjugation somehow). In any case, this was more of a curiosity than an actual need. I'll continue to browse through the library and let you know if I have additional comments or questions. - Jeff
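(To make the conjugation point concrete, independent of anything in QVM: summing x[i]*x[i] over a complex vector does not give its squared magnitude; you need conj(x[i])*x[i], so a complex-aware dot or mag_sqr has to know about conjugation somewhere.)

    #include <complex>
    #include <cassert>

    int main()
        {
        std::complex<double> x(0,1);                     // a 1-vector whose only element is i
        std::complex<double> no_conj = x*x;              // i*i = -1
        std::complex<double> with_conj = std::conj(x)*x; // conj(i)*i = +1 = |i|^2
        assert( no_conj.real()==-1 && with_conj.real()==1 );
        return 0;
        }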