Need advice on design (MPL or preprocessor)

Hi

This is not a specific question directly related to Boost::mpl, but a request for help if somebody has the time... For the last couple of months I have been trying to learn to use the boost::mpl library. To learn the library I decided to use it for creating a "compressed-enum" library. The library should make it possible to store several enums in a single variable (for instance an unsigned int).

Here is a simple example to give an idea:

// example enums to play with
struct TagA {
    enum Size { size = 4 };
    enum Max { max = 3 };
    enum E { A0 = 0, A1 = 1, A2 = 2, A3 = 3 };
};
struct TagB {
    enum Size { size = 4 };
    enum Max { max = 3 };
    enum E { B0 = 0, B1 = 1, B2 = 2, B3 = 3 };
};
struct TagC {
    enum Size { size = 4 };
    enum Max { max = 3 };
    enum E { C0 = 0, C1 = 1, C2 = 2, C3 = 3 };
};

struct First{};
struct Second{};

int main() {
    // using the compressed enums library
    typedef unsigned short STORAGE;
    typedef CompressedEnums<
        STORAGE,
        CompressedEnum<STORAGE, 1, TagA>,
        CompressedEnum<STORAGE, 4, TagB, First>,
        CompressedEnum<STORAGE, 16, TagB, Second>,
        CompressedEnumArray<STORAGE, 64, TagC, 5>
    > ENUMS;

    assert(sizeof(ENUMS) == 2);

    ENUMS enums;
    enums.set<TagA>(TagA::A2);
    enums.set<Second>(TagB::B2);
    enums.set<3, TagC>(TagC::C2);

    cout << enums.get<TagA>() << " "
         << enums.get<Second>() << " "
         << enums.get<3, TagC>() << endl;
}

CompressedEnums contains the storage variable, inherits from all its template arguments, and makes their methods available through tags. CompressedEnum and CompressedEnumArray only contain static methods which can find the enum in the storage variable.

Here is a snippet of my implementation:

template <
    typename STORAGE_TYPE,
    int OFFSET,
    typename ENUM_CONTAINER,
    typename TAG = typename ENUM_CONTAINER::E
>
struct CompressedEnum {
    typedef TAG ACCESS_TAG;
    typedef typename ENUM_CONTAINER::E INNER_TYPE;

    enum Offset { offset = OFFSET };
    enum Size { size = ENUM_CONTAINER::size };

    static INNER_TYPE get(const STORAGE_TYPE& data) {
        return (INNER_TYPE)((data / OFFSET) % size);
    }
    ----%<----%<----%<----%<----%<----%<----%<----%<--
};

template <
    typename STORAGE_TYPE,
    typename B00 = Dummy< 0>,
    typename B01 = Dummy< 1>,
    typename B02 = Dummy< 2>,
    typename B03 = Dummy< 3>
>
struct CompressedEnums : public B00, B01, B02, B03 {
    typedef boost::mpl::map<
        boost::mpl::pair<typename B00::ACCESS_TAG, B00>,
        boost::mpl::pair<typename B01::ACCESS_TAG, B01>,
        boost::mpl::pair<typename B02::ACCESS_TAG, B02>,
        boost::mpl::pair<typename B03::ACCESS_TAG, B03>
    > TYPE_MAP;

    template<typename TAG>
    typename boost::mpl::at<TYPE_MAP, TAG>::type::INNER_TYPE get() const {
        return boost::mpl::at<TYPE_MAP, TAG>::type::get(data);
    }
    ----%<----%<----%<----%<----%<----%<----%<----%<--
    STORAGE_TYPE data;
};

The code works and does what it is supposed to (as far as I know), but I'm not very happy with the interface:

typedef CompressedEnums<
    STORAGE,
    // 1 because it is the first
    CompressedEnum<STORAGE, 1, TagA>,
    // 4 because TagA::size == 4
    CompressedEnum<STORAGE, 4, TagB, First>,
    // 16 because TagA::size * TagB::size == 16
    CompressedEnum<STORAGE, 16, TagB, Second>,
    // 64 because TagA::size * TagB::size * TagB::size == 64
    CompressedEnumArray<STORAGE, 64, TagC, 5>
> ENUMS;

These magic numbers make it very easy to introduce bugs when the code is restructured or changed, and it should be possible to calculate them automatically. If the magic numbers are replaced with their expressions, the code will still break if the arguments are reordered.

To accomplish this I have considered two solutions: a macro or more templating...
Using macros I imagine the interface could look something like this:

typedef CompressedEnums<
    STORAGE,
    ENUM_PACK( STORAGE, 1,
        (CompressedEnum, TagA)
        (CompressedEnum, TagB, First)
        (CompressedEnum, TagB, Second)
        (CompressedEnumArray, TagC, 5)
    )
> ENUMS;

which would then expand to the code entered manually above.

Alternatively, using templates, I imagine that CompressedEnums could change CompressedEnum<STORAGE, 0, TagB, Second> into CompressedEnum<STORAGE, 16, TagB, Second> automatically and then derive from that one.

I tried both, but none of my ideas worked out very well, which is why I hope that one of you C++ experts could give me some hints.

Best regards
Allan W. Nielsen
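As a rough illustration of the "more templating" direction (a sketch only; the names Entry and Nil are invented here and are not part of the library above): each entry's offset is the product of the sizes of all earlier entries, which a recursive base chain can compute by itself, so no magic numbers appear at the point of use.

#include <iostream>

struct TagA { enum { size = 4 }; };
struct TagB { enum { size = 4 }; };
struct TagC { enum { size = 4 }; };

struct Nil { enum { offset = 1, size = 1 }; };   // chain terminator

// Each level derives from the rest of the chain and computes its own offset
// as "offset of previous level * size of previous level".
template <typename Tag, typename Rest = Nil>
struct Entry : Rest {
    typedef Rest rest;
    enum { offset = Rest::offset * Rest::size,
           size   = Tag::size };
};

int main() {
    typedef Entry<TagC, Entry<TagB, Entry<TagA> > > Chain;
    std::cout << Chain::rest::rest::offset << " "    // 1  (TagA)
              << Chain::rest::offset       << " "    // 4  (TagB)
              << Chain::offset             << "\n";  // 16 (TagC)
    return 0;
}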

This is a much simpler illustration of the problem I am trying to solve:

template<int OFFSET>
struct A {
    enum O { offset = OFFSET };
    enum S { size = 2 };
};

template<int OFFSET>
struct B {
    enum O { offset = OFFSET };
    enum S { size = 4 };
};

template < typename B0, typename B1, typename B2 >
struct C : public B0, B1, B2 { };

int main(int argc, const char *argv[]) {
    C< A<1>,
       B< A<1>::offset * A<1>::size >,
       A< B< A<1>::offset * A<1>::size >::offset * B< A<1>::offset * A<1>::size >::size >
    > c1;

    // does the same
    C< A<1>,
       B< A<1>::size >,
       A< A<1>::size * B< A<1>::size >::size >
    > c2;

    return 0;
}

Is there a simpler way to let template arguments propagate through derived classes?

On Fri, Jan 6, 2012 at 5:19 PM, Allan Nielsen <a@awn.dk> wrote:
[snip]

On 01/06/2012 05:49 PM, Allan Nielsen wrote:
Is there a simpler way to let template arguments propagate through derived classes?
I'm sorry, I don't understand the question. Either use inheritance or repeat the name:

template<class T> struct A { typedef T type; };
template<class A> struct B : A { };

or

template<class A> struct B { typedef typename A::type type; };

You can do the same with non-type template parameters.

On 01/06/12 10:49, Allan Nielsen wrote:
This is a much simple illustration of the problem I try to solve:
template<int OFFSET> struct A { enum O { offset = OFFSET }; enum S { size = 2 }; };
template<int OFFSET> struct B { enum O { offset = OFFSET }; enum S { size = 4 }; };
What is the purpose of the OFFSET template argument?
From the name and glancing at the following code, it looks like it has something to do with the offset of the structure with respect to the containing structure, such as the c1 or c2 in the following code.
Could you explain a little more? Also, the A and B structs contain no data; yet your first code example had enums.get<TagA>(), suggesting there was data stored in the structs.
template < typename B0, typename B1, typename B2 > struct C : public B0, B1, B2 { };
int main(int argc, const char *argv[]) { C< A<1>,
B< A<1>::offset * A<1>::size >,
A< B< A<1>::offset * A<1>::size >::offset * B< A<1>::offset * A<1>::size >::size > > c1;
// does the same C< A<1>,
B< A<1>::size >,
A< A<1>::size * B< A<1>::size >::size > > c2;
return 0; }
Is there a simpler way to let template arguments propagate through derived classes?
[snip] -regards, Larry

On 01/06/12 10:19, Allan Nielsen wrote:
Hi
This is not a specific questions directly related to Boost::mpl, but a request for help if somebody has the time...
For the last couple of month I have been trying to learn to use the boost::mpl library.
To learn to use the library I decided to use it for creating a "compressed-enum" library. The library should make it possible to store several enums in a single variable (for instance an unsigned int)
Wouldn't that variable have to be large enough to contain all the enums? At first glance this looks like:

fusion::vector<Enum_1,Enum_2,...Enum_n>

(See http://www.boost.org/doc/libs/1_48_0/libs/fusion/doc/html/fusion/container/v...) Why wouldn't a fusion::vector be suitable?

-regards, Larry

Wouldn't that variable have to be large enough to contain all the enums. At first glance this looks like:
fusion::vector<Enum_1,Enum_2,...Enum_n>:
The idea is that four enums which each can only contain four values can together represent 256 (4^4) values, which can be stored in a char. This of course assumes that their associated values are defined from 0 to 3.

It also works for numbers which are not powers of 2:

enum A{ a0 = 0, a1 = 1, a2 = 2, a3 = 3, a4 = 4 };
enum B{ b0 = 0, b1 = 1, b2 = 2, b3 = 3, b4 = 4 };

CompressedEnums< unsigned char,
                 CompressedEnum<unsigned char, 1, A>,
                 CompressedEnum<unsigned char, 5, B> > ENUMS;

sizeof(ENUMS) == sizeof(unsigned char);

I do not know boost::fusion very well, but I do not think it is useful for this.

On 01/06/12 12:56, Allan Nielsen wrote:
[snip]
I think you're right. IIRC, fusion::vector stores all its values in member variables named m1, m2, ..., mn, where, in your case:

A m1; B m2;

So, IIUC, you want (as the name compressed_enums suggests) to sum the sizes of each enum, then create a buffer with at least that number of bits, and then check that that buffer size is < sizeof(StorageType).

I've thought some more about the problem and started coding something; however, it's not complete, but you may be able to complete it. As it is, it just prints 3, which is: size_enum<E1>::size + size_enum<E2>::size

BTW, the attached code uses variadic templates; however, I'm guessing you could easily adapt it to a non-variadic-template compiler after some code changes.

HTH.
-regards, Larry

On 6 January 2012 13:26, Larry Evans <cppljevans@suddenlink.net> wrote:
So, IIUC, you want (as the name compressed_enums suggests) want to sum the sizes of each enum,
Assuming size is the number of unique values in a given enum, you want to sum the ceiling of log2 of the size of each enum to determine how many bits you need. <http://www.boost.org/doc/libs/1_48_0/libs/integer/doc/html/boost_integer/log2.html> can help with this (although it computes floor(log2(n))). Note: to get an even denser packing where different enums can share the same bit, remove "ceiling of" from the previous sentence.

Example: if the first enum has 4 distinct values and the second enum has 8 distinct values, you need to be able to store 4x8=32 distinct combinations. ceil(log2(4)) + ceil(log2(8)) == 2 + 3 == 5 bits.

-- Nevin ":-)" Liber <mailto:nevin@eviloverlord.com> (847) 691-1404
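To make the bit count concrete, here is a small compile-time sketch (illustrative only; the page linked above computes floor(log2(n)), so a hand-rolled ceiling version is used instead):

#include <iostream>

// ceil(log2(N)): the number of bits needed to hold N distinct values (N >= 1).
template <unsigned N> struct ceil_log2    { enum { value = 1 + ceil_log2<(N + 1) / 2>::value }; };
template <>           struct ceil_log2<1> { enum { value = 0 }; };

int main() {
    // first enum: 4 distinct values, second enum: 8 distinct values
    enum { bits = ceil_log2<4>::value + ceil_log2<8>::value };  // 2 + 3
    std::cout << bits << std::endl;                             // prints 5
    return 0;
}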

My problem is not to calculate the actual size, but to move the calculation from the instantiation into the definition of the library.

On Fri, Jan 6, 2012 at 11:04 PM, Nevin Liber <nevin@eviloverlord.com> wrote:
[snip]

On Sat, Jan 7, 2012 at 1:54 AM, Larry Evans <cppljevans@suddenlink.net> wrote:
On 01/06/12 16:19, Allan Nielsen wrote:
My problem is not to calculate the actual size, but to move the calculation from the instantiation into the definition of the library.

I have no idea what this means. Please clarify.
First of all, thanks for your input.

I will try to illustrate my design problem with a simpler example:

struct TagA { enum Size { size = 4 }; enum E { A0 = 0, A1 = 1, A2 = 2, A3 = 3 }; };
struct TagB { enum Size { size = 4 }; enum E { B0 = 0, B1 = 1, B2 = 2, B3 = 3 }; };
struct TagC { enum Size { size = 4 }; enum E { C0 = 0, C1 = 1, C2 = 2, C3 = 3 }; };

// Simple edition of CompressedEnum, but it has the same problem
template<typename T, int OFFSET>
struct CompressedEnum {
    enum O { offset = OFFSET };
    enum S { size = T::size };
    T get( ) { ... }
};

// Simple edition of CompressedEnums, but it has the same problem
template < typename B0, typename B1, typename B2 >
struct CompressedEnums : public B0, B1, B2 {
    template < typename T> T get() { ... }
};

void example1() {
    // As you see, the expressions for calculating the offset get quite long,
    // and are not easy to maintain.
    CompressedEnums<
        CompressedEnum<TagA, 1>,
        CompressedEnum<TagB, CompressedEnum<TagA, 1>::offset * CompressedEnum<TagA, 1>::size >,
        CompressedEnum<TagC,
            CompressedEnum<TagB, CompressedEnum<TagA, 1>::offset * CompressedEnum<TagA, 1>::size >::offset *
            CompressedEnum<TagB, CompressedEnum<TagA, 1>::offset * CompressedEnum<TagA, 1>::size >::size >
    > c1;
}

void example2() {
    // same as example1 but more readable. Here the offset expressions are much more
    // readable: B::size * B::offset. But I would still like to avoid this in the
    // usage of the code.
    #define A CompressedEnum<TagA, 1>
    #define B CompressedEnum<TagB, A::size * A::offset>
    #define C CompressedEnum<TagC, B::size * B::offset>
    CompressedEnums< A, B, C > c1;
}

////////////////////////////////////////////////////////////////////////////
// A solution to the problem can be expressed in C++0x (I think):

template<typename T, int OFFSET>
struct CompressedEnum {
    enum O { offset = OFFSET };
    enum S { size = T::size };
    T get( ) { ... }
};

#define B0_ B0< 1 >
#define B1_ B1< B0_::size * B0_::offset >
#define B2_ B2< B1_::size * B1_::offset >

template <
    template <int O> class B0,
    template <int O> class B1,
    template <int O> class B2
>
struct CompressedEnums : public B0_, B1_, B2_ {
    template < typename T> T get() { ... }
};

template <int N> using AA = CompressedEnum<TagA, N>;
template <int N> using BB = CompressedEnum<TagB, N>;
template <int N> using CC = CompressedEnum<TagC, N>;

void example3() {
    // same as example1 and example2, but needs a C++11 compiler
    CompressedEnums< AA, BB, CC > c1;
}

In the C++11 code the offset calculation appears in the CompressedEnums class instead of in the type definition, which is much more user friendly. The problem is that this requires a C++11 compiler, which is not available on the project where I intend to use this. I'm therefore looking for a way to achieve the same (or a similar) interface which does not need C++11.

Best regards
Allan W. Nielsen
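As a rough C++03 sketch of how a similar interface could be had without C++11 (illustrative stand-in types only, not working library code): pass the tags themselves and let the container compute every offset from the preceding sizes.

#include <iostream>

struct TagA { enum { size = 4 }; };
struct TagB { enum { size = 4 }; };
struct TagC { enum { size = 4 }; };

// minimal stand-in for the simplified CompressedEnum above
template <typename T, int OFFSET>
struct CompressedEnum {
    enum { offset = OFFSET, size = T::size };
};

// The container derives from CompressedEnum instantiations it builds itself,
// so no offset ever appears at the call site.
template <typename T0, typename T1, typename T2>
struct CompressedEnums
    : public CompressedEnum<T0, 1>
    , public CompressedEnum<T1, T0::size>
    , public CompressedEnum<T2, T0::size * T1::size>
{
    // get<Tag>() / set<Tag>() would go here, as in the versions above
};

int main() {
    CompressedEnums<TagA, TagB, TagC> c1;
    (void)c1;
    std::cout << CompressedEnum<TagB, TagA::size>::offset << " "                // 4
              << CompressedEnum<TagC, TagA::size * TagB::size>::offset << "\n"; // 16
    return 0;
}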

On 01/07/12 02:42, Allan Nielsen wrote:
On Sat, Jan 7, 2012 at 1:54 AM, Larry Evans <cppljevans@suddenlink.net> wrote:
On 01/06/12 16:19, Allan Nielsen wrote:
My problem is not to calculate the actually size, but move the calculation from the instantiation into the definition of the library. I have not idea what this means. Please clarify.
First of all, thanks for your input.
I will try to illustrate my design problem in a more simple example:
struct TagA { enum Size { size = 4 }; enum E { A0 = 0, A1 = 1, A2 = 2, A3 = 3, }; };
struct TagB { enum Size { size = 4 }; enum E { B0 = 0, B1 = 1, B2 = 2, B3 = 3, }; };
struct TagC { enum Size { size = 4 }; enum E { C0 = 0, C1 = 1, C2 = 2, C3 = 3, }; };
// Simple edition of CompressedEnum, but has the same problem
Sorry, what was that problem again?
template<typename T, int OFFSET> struct CompressedEnum { enum O { offset = OFFSET }; enum S { size = T::size }; T get( ) { ... } };
// Simple edition of CompressedEnums, but has the same problem template < typename B0, typename B1, typename B2 > struct CompressedEnums: public B0, B1, B2 { template < typename T> get() { ... } };
void example1() { // As you see, the expresions for calculating the offset get quite long, // and are not easy to maintain.
Here's where I don't understand why the calculations are done outside the template and then passed as args to the template. Why not have a template do the calculations and accumulate the results, somewhat like mpl::fold or the code I posted earlier?
C< CompressedEnum<tagA, 1>,
CompressedEnum<tagB, CompressedEnum<tagA, 1>::offset * CompressedEnum<tagA, 1>::size >,
CompressedEnum<tagC,
CompressedEnum<tagB, CompressedEnum<tagA, 1>::offset * CompressedEnum<tagA, 1>::size >::offset * CompressedEnum<tagB, CompressedEnum<tagA, 1>::offset * CompressedEnum<tagA, 1>::size >::size > > c1;
}
[snip] -regards, Larry

On 01/07/12 17:28, Larry Evans wrote:
On 01/07/12 02:42, Allan Nielsen wrote: [snip]
void example1() { // As you see, the expresions for calculating the offset get quite long, // and are not easy to maintain.
Here's where I don't understand why the calculation are done outside the temple and then passed as args to the template. Why not have a template do the calculations and accumulate the results, somewhat like mpl::fold or the code I posted earlier?
C< CompressedEnum<tagA, 1>,
CompressedEnum<tagB, CompressedEnum<tagA, 1>::offset * CompressedEnum<tagA, 1>::size >,
CompressedEnum<tagC,
CompressedEnum<tagB, CompressedEnum<tagA, 1>::offset * CompressedEnum<tagA, 1>::size >::offset * CompressedEnum<tagB, CompressedEnum<tagA, 1>::offset * CompressedEnum<tagA, 1>::size >::size > > c1;
}
[snip] The attached code produces output:
:eos_t::offset=9 :get_ol(T1)=2 :get_ol(T2)=0 :get_ol(T3)=0 :get<T3>=0 putting: :get<T1>=0 :get<T2>=1 :get<T3>=2

The output before the putting: lines is caused by the obviously erroneous initialization of the buffer (no Enum e's stored in the buffer, only unsigned ints). The output after the putting: lines shows the effect of the put<Tag,Enum>(Enum e).

However, be warned! I'm not sure all the casting within the get_ol and put_ol is portable.

Is this about what you want?

I've looked briefly at Vicente's bit_mask library and it looks more complicated; however, that extra complication is probably because it provides extra capabilities.

HTH.
-Larry

Hi

Sorry, I have been away from my computer during the weekend. I'm looking at your suggested code and the questions you have raised now. I will answer as soon as possible.

Best regards
Allan W. Nielsen

On Sun, Jan 8, 2012 at 7:32 PM, Larry Evans <cppljevans@suddenlink.net> wrote:
[snip]

On 01/09/12 03:27, Allan Nielsen wrote:
Hi
Sorry, I have been away from my computer doing the weekend. I'm looking at your suggested code, and the questions you have raised now. I will answer as soon as possible.
Best regards Allan W. Nielsen
The attached has output which shows the structure of the type and shows that the last Enum passed to the template has the least offset, which you might find counterintuitive. The only solution I can think of is to reverse the args.

The attached also uses:

http://svn.boost.org/svn/boost/sandbox/variadic_templates/boost/iostreams/ut...

If you don't want to bother to download that, then just comment out the indent_buf_{in,out} and the indent_outbuf declaration.

The output is:

./build/gcc4_7v/boost-svn/ro/trunk/sandbox/rw/variadic_templates/sandbox/compressed_enums/compressed_enums.exe :eos_t::offset=9 :get_ol(T1)=0 :get_ol(T2)=0 :get_ol(T3)=0 :get<T3>=0 putting: :get<T1>=1 :get<T2>=2 :get<T3>=3 compressed_enums_impl < Derived, pair< T1, E1> , compressed_enums_impl < Derived, pair< T2, E2> , compressed_enums_impl < Derived, pair< T3, E3> , compressed_enums_impl < Derived >::offset=0 >::offset=4
::offset=7 ::offset=9
Compilation finished at Mon Jan 9 11:55:18

Hi

I have been reading your code and refactoring my own proposals several times. I'm not used to this kind of meta programming, so it did take me quite some time to understand the code you have proposed, but I'm learning, so thank you. I will try to answer the questions.
Here's where I don't understand why the calculation are done outside the temple and then passed as args to the template. Why not have a template do the calculations and accumulate the results, somewhat like mpl::fold or the code I posted earlier?
This might also be the very core of my problem. I made some attempts where I used the C++11 "using" facility to bind all the template arguments except the offset; it did work, but the usage of the interface was not practical. Instead I have now refactored the design to avoid this calculation outside the library (like you suggested). This seems to work out better.
The output before the putting: lines is caused by the obviously erroneous initialization of the buffer (no Enum e's stored in the buffer, only unsigned ints). The output after the putting: lines shows the effect of the put<Tag,Enum>(Enum e).
However, be warned! I'm not sure all the casting within the get_ol and put_ol is portable.
Is this about what you want? Pretty much, plus some extra stuff ;-)
- It must be possible to specify arrays of enums
- It must be possible to group enums.

But I think I got some ideas to move on, so thanks a lot.

Best regards
Allan W. Nielsen

On 01/09/12 12:01, Larry Evans wrote:
On 01/09/12 03:27, Allan Nielsen wrote:
Hi
Sorry, I have been away from my computer doing the weekend. I'm looking at your suggested code, and the questions you have raised now. I will answer as soon as possible.
Best regards Allan W. Nielsen
The attached has output which shows the structure of the type and shows that the last Enum passed to template has least offset, which you might find counterintuitive. The only solution I can think of is to reverse the args.
Another solution is to use:

http://svn.boost.org/svn/boost/sandbox/variadic_templates/boost/mpl/if_recur...

as in the attached. This version does not use CRTP because the if_recur template makes it unnecessary. Output is:

./build/gcc4_7v/boost-svn/ro/trunk/sandbox/rw/variadic_templates/sandbox/compressed_enums/compressed_enums.if_recur.exe :eos_t::size_bits=9 :get_ol<E1,0>:get_ol(T1)=0 :get_ol<E2,2>:get_ol(T2)=0 :get_ol<E3,5>:get_ol(T3)=0 :get_ol<E3,5>:get<T3>=0 putting: :get_ol<E1,0>:get<T1>=1 :get_ol<E2,2>:get<T2>=2 :get_ol<E3,5>:get<T3>=3

-regards, Larry

Hi

Just for the record, here is the source code I ended up using.

It is quite different from the compressed-tuple source and from the source code Larry suggested. It is most likely not as generic as these alternatives, and it will only work with simple types as the storage buffer.

But it has some features which are important for me:

- Struct-like groups

EXAMPLE:
CompressedEnums< unsigned,
                 CompressedEnum<e1>,
                 EnumStruct<e2, CompressedEnum<e1>, CompressedEnum<e2>, CompressedEnum<e3> >,
                 CompressedEnum<e3>,
                 CompressedEnum<e4> > ce2;

ce2.set<e1>(e1::E1);
std::cout << ce2.get<e1>() << std::endl;

ce2.set<e2, e1>(e1::E1);
std::cout << ce2.get<e2, e1>() << std::endl;

- Union-like groups

CompressedEnums< unsigned,
                 CompressedEnum<e1>,
                 EnumUnion<e2, CompressedEnum<e1>, CompressedEnum<Abc>, CompressedEnum<YesNoMaby> >,
                 CompressedEnum<e3> > ce7;

ce7.set<e2, Abc>(Abc::B);
cout << ce7.get<e2, e1>() << " " << ce7.get<e2, Abc>() << " " << ce7.get<e2, YesNoMaby>() << endl;
ce7.set<e2, Abc>(Abc::A);
cout << ce7.get<e2, e1>() << " " << ce7.get<e2, Abc>() << " " << ce7.get<e2, YesNoMaby>() << endl;
ce7.set<e2, YesNoMaby>(YesNoMaby::Maby);
cout << ce7.get<e2, e1>() << " " << ce7.get<e2, Abc>() << " " << ce7.get<e2, YesNoMaby>() << endl;

- Arrays (of leaves only)

CompressedEnums< unsigned,
                 CompressedEnum<e1>,
                 CompressedEnumArray<Abc, 5>,
                 CompressedEnum<e3> > ce6;

ce6.data = 0;
for (int i = 0; i < ce6.get_size<Abc>(); i++) {
    std::cout << ce6.get<4, Abc>() << ce6.get<3, Abc>() << ce6.get<2, Abc>()
              << ce6.get<1, Abc>() << ce6.get<0, Abc>() << std::endl;
    ce6.next<0, Abc>();
}

Comments are always appreciated, but before you dive into the code, I should warn you that this is the first time I have tried to use boost::mpl for something (other than playing around).

Thanks for all the help and inputs.

Best regards
Allan W. Nielsen

On 01/10/12 11:43, Allan Nielsen wrote:
[snip]
You're most welcome. I'm interested because I had a similar problem with tuples and tagged unions, which was implemented here:

http://svn.boost.org/svn/boost/sandbox/variadic_templates/boost/composite_st...

The tuple was implemented in container_all_of_aligned.hpp and the tagged union in container_one_of_maybe.hpp. However, that implementation was complicated by having to calculate the correct alignment, a problem absent in your CompressedEnums since you want to pack everything in as small a space as possible and only unpack when needed (via the get functions).

IIUC, the EnumUnion has no tag, which means you could store an Enum1 and retrieve an Enum2 without any warning. Is that correct?

In UnionModel, there's:

static const size_t size = head::size > _tail::size ? head::size : _tail::size;

which makes sense because a union just has to be large enough to store the largest enumeration. However, in the case of StructModel_r, I don't understand:

static const size_t size = head::size * _tail::size;

around line 200. I would think that there would just be additions, since you want to store one value after another. That's why, in my previous post, in template sum_bits, there's:

, integral_c<unsigned,Bits+enum_bits<Enum>::size>

Could you explain why multiplication instead of addition is used to calculate the size in StructModel_r?

Also, is there any check that the storage_type is large enough to store all the bits?

-regards, Larry

Hi
IIUC, the EnumUnion has no tag, which means you could store an Enum1 and retrieve an Enum2 without any warning. Is that correct?
In UnionModel, there's:
static const size_t size = head::size > _tail::size ? head::size : _tail::size;
Almost; the EnumUnion has a tag, but this is only used to find the specific union. Consider:

CompressedEnums< unsigned,
                 EnumUnion<Tag1, CompressedEnum<e1>, CompressedEnum<Abc>, CompressedEnum<YesNoMaby> >,
                 EnumUnion<Tag2, CompressedEnum<e1>, CompressedEnum<Abc>, CompressedEnum<YesNoMaby> >
> double_union;

Here I have two unions, where three enums share the same storage.
which makes sense because a union just has to be large enough to store the largest enumeration. However, in the case of StructModel_r, I don't understand:
static const size_t size = head::size * _tail::size;
around line 200. I would think that there would just be additions since you want to store one value after another. That's why, in my previous post, in template sum_bits, there's:
, integral_c<unsigned,Bits+enum_bits<Enum>::size>
Could you explain why multiplication instead of addition is used to calculate the size in StructModel_r?
This is because the offset is not really an offset; I just could not come up with a better name. Here is the explanation:

Assume we need to store 5 different enumerated types which each represent 3 different values (like A, B, C). These 5 enums can all together represent 3^5 = 243 different values (AAAAA, AAAAB, AAAAC, AAABA, AAABB ...). As 243 is less than 256 (all the different values in a char), it is possible to store the values in a char. But in a binary number system 2 bits are required to store one tri-state value. If we use 2 * 5 bits we end up with 10 bits (more than one char), which is not the most efficient encoding.

Therefore we encode this in a base-3 number system, which is defined by: ... d_2 * n^2 + d_1 * n^1 + d_0 * n^0, where d is the digits we want to encode, and n is 3 because all the enums are tri-states.

Example: Encode BCACB -> 12021

1*3^4 + 2*3^3 + 0*3^2 + 2*3^1 + 1*3^0 = 142

To extract enum number 4 the reverse operation must be done: (142 / 3^3) % 3 = 2

If enums with different sizes are used, then n in the equation above will be different for each digit. Then to set digit number 4 one has to multiply n_0 * n_1 * n_2 * n_3 and then use this as the base. Therefore the multiplications.
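A tiny self-contained check of that arithmetic (illustrative only, not the library's code):

#include <cassert>

int main()
{
    // digits d4..d0 = 1 2 0 2 1 in base 3 (BCACB with A=0, B=1, C=2)
    unsigned encoded = 1*81 + 2*27 + 0*9 + 2*3 + 1*1;   // = 142
    assert(encoded == 142);

    // extract the digit whose weight ("offset") is 3^3 = 27
    unsigned digit = (encoded / 27) % 3;                 // = 2, i.e. 'C'
    assert(digit == 2);

    // overwrite that digit with a new value v: subtract the old one, add the new
    unsigned v = 0;                                      // store 'A' instead
    encoded = encoded - digit * 27 + v * 27;             // = 88
    assert(((encoded / 27) % 3) == 0);
    return 0;
}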
Also, is there any check that the storage_type is large enough to store all the bits?

Currently no, but I would like to add this. Also a range check on the array access.
Best regards Allan W. Nielsen

On 01/10/12 14:08, Allan Nielsen wrote: [snip]
[snip]
Ah! Now I see. This reminds me of APL decode and encode:

http://www.sigapl.org/encode.htm

The radix vector mentioned there represents the sizes (= max value + 1) of the enums to be stored. For example, let:

enum E_0{e0_0,e0_1,e0_2,...,e0_n0};
enum E_1{e1_0,e1_1,...,e1_n1};
...
enum E_m{em_0,em_1,...,em_nm};

then the radix vector, rv, would be:

unsigned rv[m+1]={n0+1,n1+1,...,nm+1};

and to encode the enum values:

struct Ev {
    E_0 e0;
    E_1 e1;
    ...
    E_m em;
};
Ev ev={e0_i0, e1_i1, ..., em_im};

decode would be used:

unsigned dv=decode(rv,ev)

where decode is the C++ equivalent of the decode described on encode.htm.

Is that about right?

-regards, Larry
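A rough C++ rendering of that decode step (a sketch under the assumptions above, not code from the cited page):

#include <cassert>

// Mixed-radix "decode" (in the APL sense): fold the enum values into one scalar.
// radix[i] is the number of distinct values of the i-th enum (most significant
// first), value[i] is the value stored for it, and n is the number of enums.
unsigned decode(const unsigned* radix, const unsigned* value, unsigned n)
{
    unsigned result = 0;
    for (unsigned i = 0; i < n; ++i)
        result = result * radix[i] + value[i];   // Horner-style accumulation
    return result;
}

int main()
{
    unsigned rv[] = { 3, 3, 3, 3, 3 };   // five tri-state enums
    unsigned ev[] = { 1, 2, 0, 2, 1 };   // the BCACB example from earlier
    assert(decode(rv, ev, 5) == 142);
    return 0;
}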

On 01/10/12 14:42, Larry Evans wrote:
[snip]
Also, something similar is done for multidimensional array classes. For example, given an m-dimensional array, a, the array index expression:

a[i0][i1]...[im]

accesses some element in the internal data array:

Type data[(n0+1)*(n1+1)*...*(nm+1)];

where ni, for i=0...m, is the max index for the i-th dimension. The offset value into data for that element is given by:

i0*s0+i1*s1+...+im*sm

where s0,...,sm are the strides of the array, and the strides are just the partial products of the ni's. IOW, for Fortran storage order:

s0=1; s1=s0*(n0+1); s2=s1*(n1+1); ...

I think indices_at_offset here:

http://svn.boost.org/svn/boost/sandbox/variadic_templates/sandbox/stepper/bo...

which returns the array indices corresponding to some offset into the array, is doing something similar to what your get<Tag>() is doing. However, get<Tag> just returns 1 index, where Tag corresponds to some dimension in the array, and indices_at_offset returns all the indices.

Sound right?

-regards, Larry
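A small sketch of that stride computation (illustrative only):

// Fortran storage order: the stride of dimension i is the product of the
// extents of all lower dimensions; extent[i] corresponds to n_i + 1 above.
void strides_from_extents(const unsigned* extent, unsigned* stride, unsigned ndims)
{
    unsigned s = 1;
    for (unsigned i = 0; i < ndims; ++i) {
        stride[i] = s;     // s0 = 1, s1 = extent[0], s2 = extent[0]*extent[1], ...
        s *= extent[i];
    }
}

// The element (i0, i1, ..., i_{ndims-1}) then lives at offset
// i0*stride[0] + i1*stride[1] + ... + i_{ndims-1}*stride[ndims-1] in the data array.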

On 01/10/12 15:35, Larry Evans wrote:
[snip]
Based on what I just learned from your explanations and code, there's no need for either the CRTP or the if_recur. Neither is used in the attached, which just implements the compressed tuple and produces output:
::stride=12 ::stride=24 :get<T1>=0 :get<T2>=0 :get<T3>=0
:eos_t::stride=24 compressed_enums_impl < pair< T1, E1> : compressed_enums_impl < pair< T2, E2> : compressed_enums_impl < pair< T3, E3> : compressed_enums_impl < >::stride=1 >::stride=4 putting: put<T1>(1) put<T2>(2) put<T3>(3) :get<T1>=1 :get<T2>=2 :get<T3>=3 putting: put<T1>(0) put<T2>(1) put<T3>(2) :get<T1>=0 :get<T2>=1 :get<T3>=2
Looking at your code made me realize there was no need for the CRTP to allow access to storage; it could just be passed as an argument to functions higher up in the inheritance hierarchy. Thanks.

-regards, Larry

Also, something similar is done for multidimensional array classes.
Yep you are right, I did not realize this before you pointed it out. Thanks.
which returns the array indices corresponding to some offset into the array, is doing something similar to what your get<Tag>() is doing. However, get<Tag> just returns 1 index where Tag corresponds to some dimension in the array and indices_at_offset resturns all the indices.
Sound right?
I did not have the time to read indices_at_offset.hpp, but as the problem solved in compressed_enums is almost the same as in multidimensional arrays, I'm pretty sure they do the same.

Regards
Allan W. Nielsen

Hi
Ah! Now I see. This reminds me of apl decode and encode:
where decode is the c++ equivalent of the decode described on encode.htm.
Is that about right?
I have only read it very quickly, but it seems to be very much what I'm doing.

Regards
Allan

On 01/06/12 13:26, Larry Evans wrote: [snip]
So, IIUC, you want (as the name compressed_enums suggests) want to sum the sizes of each enum, then create a buffer with at least that number of bits, and then check that that buffer size is < sizeof(StorageType).
I've thought some more about the problem and started coding something; however, it's not complete, but you may be able to complete it. As it is, it just prints 3, which is: size_enum<E1>::size+sizer_enum<E1>::size
BTW, the attached code uses variadic templates; however, I'm guessing you could easily use non-variadic template compiler after some code changes.
[snip] The attached code is a revision of the previously posted code. It does not properly calculate the size of the storage needed for the compressed enums; however, IIUC this could easily be corrected using Nevin Liber's suggestion about using log2. The int enums_offsets<>::buffer[] in the real code would be what you've called:

STORAGE_TYPE data;

in your OP. As mentioned, the size is incorrect, but that's easily remedied.

The revised code also illustrates the use of CRTP to allow access by the super types to the storage buffer. These super types use the overloaded get's with the proper TAG argument to access the proper portion of the buffer.

The output of the attached is:

:eos_t::offset=9 :eos_v.buffer()[2]=2 :get(T1)=7 :get(T2)=4 :get(T3)=0

and was compiled with gcc4.7. HTH.

-Larry

Le 06/01/12 17:19, Allan Nielsen a écrit :
Hi
This is not a specific questions directly related to Boost::mpl, but a request for help if somebody has the time...
For the last couple of month I have been trying to learn to use the boost::mpl library.
To learn to use the library I decided to use it for creating a "compressed-enum" library. The library should make it possible to store several enums in a single variable (for instance an unsigned int)
Hi,

I have not yet looked at your implementation or at how I can help you with it, but I guess that you can find your own answers in the implementation of this library:

https://svn.boost.org/svn/boost/sandbox/SOC/2010/bit_masks/lib/integer/doc/h...

In addition to what you want to do, I think that all this stuff can be generalized to a compressed_tuple.

About compressed_tuple: tuples of types can be compressed depending on the bits needed to store the underlying type of each one of the tuple elements. For example, a compressed_tuple<month,day,weekday> would take 3 bytes, but month needs only 5 bits, day 3 bits and weekday 3 bits, that is 11 bits, which can be represented using just 2 bytes.

This is quite close to the bitfield library. The main difference is that bitfield worked only with built-in types and required stating explicitly the number of bits for each field, while compressed_tuple can work with UDTs for which the user has stated once and for all the number of bits needed to store the UDT. In addition, the UDT needs to be DefaultConstructible, ExplicitlyConstructible from its underlying type, and ExplicitlyConvertible to its underlying type.

There is yet another difference. The motivation of the bitfield library was to make it possible to work with bitfields in a portable way (endianness). The motivation of the compressed_tuple is to compress UDTs which cannot be used with C bitfields. The compressed_tuple_traits template needs to be specialized for each UDT and define the width_in_bits, underlying_type ...

As with the bitfield library, the total number of needed bits cannot exceed 64 bits.

This should also work for ordinal types. An ordinal type is a type that allows getting the value from an index (0..n) and retrieving the associated position of a value (see https://svn.boost.org/svn/boost/sandbox/enums/libs/enums/doc/html/index.html for more details - section Tutorial/Ordinal Enums).

Let me know if you are interested.

Best,
Vicente

I have no see yet your implementation and how I can help you in, but I guess that you can find your own responses in the implementation of this library
https://svn.boost.org/svn/boost/sandbox/SOC/2010/bit_masks/lib/integer/doc/h... Cool, I will have a look at it.
In addition to what you want to do, I think that all this stuff can be generalized to a compressed_tuple.
About compressed_tuple.
Tuples of types can be compressed depending on the bits needed to store the underlying type of each one of the tuple elements. For example a compressed_tuple<month,day,weekday> would take 3 bytes, but month needs only 5 bits, day 3 bits and weekday 3 bits, that is 11 bits which can be represented using just 2 bytes.
It sounds interesting, and might be quite close to what I want, but not exactly.

Say I need to store 11 weekdays (each encoded as an int 0-6) in a compressed_tuple; this would require 11 * 3 bits = 33 bits -> 5 bytes.

If I store 11 weekdays in a compressed_enum, the only requirement is that 7^11 < MAX_STORAGE_SIZE. 7^11 = 1977326743, which is less than 2^32 and would therefore require 4 bytes.

Best regards
Allan W. Nielsen

Le 07/01/12 10:27, Allan Nielsen a écrit :
I have no see yet your implementation and how I can help you in, but I guess that you can find your own responses in the implementation of this library
https://svn.boost.org/svn/boost/sandbox/SOC/2010/bit_masks/lib/integer/doc/h... Cool, I will have a look at it.
In addition to what you want to do, I think that all this stuff can be generalized to a compressed_tuple.
About compressed_tuple.
Tuples of types can be compressed depending on the bits needed to store the underlying type of each one of the tuple elements. For example a compressed_tuple<month,day,weekday> would take 3 bytes, but month needs only 5 bits, day 3 bits and weekday 3 bits, that is 11 bits which can be represented using just 2 bytes. It sounds interesting, and might be quite close to what I want, but not exactly:
Say I need to store 11 weekday ( encoded as an int 0-6 ) in a compressed_tuple, then this would require 11 * 3 bits = 33 -> 5 bytes.
If I store 11 weekdays in a compressed_enum the only requirement is that 7^11 < MAX_STORAGE_SIZE. 7^11 = 1977326743, which is less than 2^32 and would therefore require 4 bytes.

You are right, the compression you propose is higher, but it has the disadvantage of needing integer multiplication and division, while the compressed tuple I envision needs just shifts, which should perform better.
Anyway, both implementations are possible for the same interface, and this could result in two classes modeling the same concept with different space and speed constraints.

Best,
Vicente
participants (5)
- Allan Nielsen
- Larry Evans
- Mathias Gaunard
- Nevin Liber
- Vicente J. Botet Escriba