[serialization] Enhancing Archives with Archive Adaptors

Attached is the way I would approach extending the serialization library in order to provide special implementations for certain combinations of serializable type and specific archive. The example is the application of the following optimization to the serialization of collections.

The default implementation of collection serialization is to serialize each member of the collection. This will always work. In some cases it might not be the fastest way. For example:

    // 10,000 integers
    int iarray[10000];

    // default implementation results in:
    for(unsigned int i = 0; i < 10000; ++i){
        ar << iarray[i];
    }

Now for some types of archives - specifically a native binary archive - we might gain performance by replacing the above with

    ar.save_binary(iarray, sizeof(iarray));

However, we can't just do that for all types of archives. Applying the above transformation would make the archive lose its "text-like" character, which we might not want. So the question is how we can override the default serialization for specific combinations of type and archive in the simplest and most efficient way. Ideally our method would have the following features:

a) Application would be optional. It would be presumptuous for us to impose our view regarding its utility on users. It's impossible for us to know that every user will want our enhancement.

b) It should leverage already existing archives without having to recode them. This means our improvements can be built and tested incrementally without creating regressions in the library development and without impinging on others' efforts to improve the library.

c) It would be applicable to any archive without having to recode that archive. For example, suppose one author makes an enhancement which renders the saving of certain types with save_binary and tests it with binary_oarchive. Great. Now someone else comes along, looks at binary_archive, decides it could be much faster without being based on basic_ostream, and makes a new version. He's pleased also. Now a third person should be able to apply the enhancement created by the first programmer to the archive created by the second programmer without any recoding and have high confidence that it's going to work as expected.

d) It should be orthogonal to other such enhancements. That is, if our enhancement is optional, and someone else's enhancement is also optional, then from one basic archive class we should be able to create four different variations without doing any significant recoding.

e) For each combination of enhancement and data type we should only have to specify the special code once. That is, we shouldn't have to repeat this coding wherever the enhancement is applied.
f) It would be undesirable for coding such as the following to be required as part of the current headers:

    template<class Archive>
    void serialize(Archive & ar, std::vector<T> & t, const unsigned int version){
        // if Archive is one of the types which supports enhancement X
        //     then ar.enhanced_serialize(t)
        // else
        //     default loop serialization
        // end
    }

The problem is that when the next enhancement comes along the above has to be changed to:

    template<class Archive>
    void serialize(Archive & ar, std::vector<T> & t, const unsigned int version){
        // if Archive is one of the types which supports enhancement X
        //     then ar.X_serialize(t)
        // else if Archive supports enhancement Y
        //     then ar.Y_serialize(t)
        // else
        //     default loop serialization
        // end
    }

This kind of thing would have several undesirable effects:

i) It would create a maintenance pain in the neck. Each time a new enhancement comes along, all the serializations which might take advantage of it have to be updated. It would be hard to fix responsibility for fixing bugs.

ii) It requires that the serializable types import knowledge of all the enhancements implemented. That most likely requires that these serializations include headers from a variety of enhancements. Worse, some serializations will use some enhancements while others use others. It becomes too hard to keep things from getting overly complicated. That is, it's not conceptually scalable.

iii) It provides no way of specifying the priority of enhancements. One enhancement might optimize arrays while another might optimize certain structures. Which should be applied first? Maybe in some cases it's one while in other cases it's the other.

It might seem difficult or impossible to achieve all the above objectives. I believe it is possible. Here is how I would go about it.

New Concept - Archive Adaptor
=============================

An archive adaptor transforms an archive class into another archive class by adding code to perform special processing for certain types. Archive adaptors have the following features:

a) They are class templates with the following signature:

    template<class BaseArchive>
    struct enhance_archive : public BaseArchive {
    };

b) When instantiated with an archive as a template argument, the resulting class is an archive. That is, it fulfills the requirements of the Saving or Loading Archive concept as appropriate.

The attached file "bitwise_oarchive_adaptor.hpp" is an archive adaptor which, when applied to a native binary archive, will render types which can be rendered as a sequence of raw bits with the binary archive member function save_binary. We call these types "bitwise serializable". "Bitwise serializable" is defined as follows:

/// concept definition
All types for which the serialization trait boost::serialization::implementation_level == primitive_type. By default this includes all fundamental C++ types. It also includes all types to which this trait has been explicitly assigned. For example:

    typedef struct {
        unsigned char red;
        unsigned char green;
        unsigned char blue;
    } RGB;
    BOOST_CLASS_IMPLEMENTATION(RGB, boost::serialization::primitive_type)
///

In the serialization library, primitive types are not subdivided any further. For native binary archives, the bits are written out with save_binary. For text archives, the data is output with the << operator (it is presumed to exist for these types). For "bitwise serializable" types, there are special functions which render the saving of data with save_binary on those archives created with this adaptor.
Default serialization of the following types has been implemented in order to use save_binary:

a) All C++ arrays of type T where T is "bitwise serializable".
b) All std::vector<T> where T is "bitwise serializable".
c) All std::valarray<T> where T is "bitwise serializable".

These special implementations are at the end of the file "bitwise_oarchive_adaptor.hpp", so they are available whenever any archive class built with this adaptor is used but are never seen by other archives. If a user has his own collection which he happens to know will benefit from this particular optimization, he can easily include something like the following in his own application or header:

    // my_personal_collection
    template<class Base, class T>
    void override(
        boost::archive::bitwise_oarchive_adaptor<Base> & ar,
        const my_personal_collection<T> & t,
        boost::mpl::true_
    ){
        const unsigned int count = t.size();
        ar << count;
        if(count)
            ar.save_binary(get_data(t), t.size() * sizeof(T));
    }

    template<class Base, class T>
    void override(
        boost::archive::bitwise_oarchive_adaptor<Base> & ar,
        const my_personal_collection<T> & t
    ){
        override(ar, t, boost::serialization::is_bitwise_serializable<T>::type());
    }

Note that the user has to write his "special code" once and only once, regardless of how many archives it might be applied to. There is no "MxN" problem.

Now that we have our archive adaptor, any overrides we want to select can be applied to any appropriate archives. I chose to apply it to binary_oarchive and call the result bitwise_binary_oarchive. It can be found in the attached file "bitwise_binary_oarchive.hpp". It is basically boilerplate code. Finally, we need to explicitly instantiate some base class template code. This is done in "bitwise_binary_oarchive.cpp". And last - we make a small test demo. Running this with the debugger permits me to trap the invocations of the overriding functions and be assured that everything works as advertised.

In addition to fulfilling all the requirements in the above list, the implementation has the following features:

a) It's short. The total implementation (actually one half - the save part) consists of 220 lines of code including comments.
b) It doesn't require any alterations in the library.
c) It doesn't require any alterations of existing serialization code.

This completes my example.

Note that in order to prevent the example from being more complicated than necessary, I left the definition of "bitwise serializable" simpler than it really should be. A better definition would be: a type T is "bitwise serializable" if boost::serialization::implementation_level<T>::value == primitive_type. In addition, if T is "bitwise serializable", then the following are also "bitwise serializable": C++ arrays of type T, and some collections. This would open the door to automatic optimization of things like T[32][45] even though they aren't specifically mentioned.
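For readers who don't want to unpack the attachment, here is a minimal self-contained toy illustrating only the shape of the idea. It is a sketch, not the attached code and not the real library classes: the trait, the base archive, and the dispatch below are stand-ins, and the real adaptor also covers std::vector and std::valarray and works with the free override functions shown above.

    #include <cstddef>
    #include <iostream>

    // ---- stand-ins (illustrative only, not the real library types) ----

    // compile-time booleans used for tag dispatch
    template<bool B> struct bool_ {};

    // toy trait playing the role of "is bitwise serializable"
    template<class T> struct is_bitwise_serializable      { typedef bool_<false> type; };
    template<>        struct is_bitwise_serializable<int> { typedef bool_<true>  type; };

    // toy base archive: element-wise output plus a save_binary primitive
    class toy_oarchive {
    public:
        explicit toy_oarchive(std::ostream & os) : m_os(os) {}
        template<class T>
        toy_oarchive & operator<<(const T & t){
            m_os << t << ' ';
            return *this;
        }
        void save_binary(const void * p, std::size_t n){
            m_os.write(static_cast<const char *>(p), n);
        }
    private:
        std::ostream & m_os;
    };

    // ---- the adaptor idea ----

    // derives from any base archive and adds special handling for C++ arrays
    // whose element type is "bitwise serializable"; everything else is
    // forwarded to the base archive unchanged
    template<class BaseArchive>
    class toy_bitwise_oarchive_adaptor : public BaseArchive {
    public:
        explicit toy_bitwise_oarchive_adaptor(std::ostream & os) : BaseArchive(os) {}

        using BaseArchive::operator<<;

        template<class T, std::size_t N>
        toy_bitwise_oarchive_adaptor & operator<<(const T (&t)[N]){
            save_dispatch(t, typename is_bitwise_serializable<T>::type());
            return *this;
        }

    private:
        template<class T, std::size_t N>
        void save_dispatch(const T (&t)[N], bool_<true>){
            this->save_binary(t, sizeof(t));            // one block write
        }
        template<class T, std::size_t N>
        void save_dispatch(const T (&t)[N], bool_<false>){
            for(std::size_t i = 0; i < N; ++i)          // default element loop
                static_cast<BaseArchive &>(*this) << t[i];
        }
    };

    int main(){
        int iarray[4] = {1, 2, 3, 4};
        toy_bitwise_oarchive_adaptor<toy_oarchive> ar(std::cout);
        ar << iarray;   // the four ints go out as one 16-byte block
        ar << 5;        // non-array values still go through the base archive
        return 0;
    }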
Robert Ramey

[Attachment: bitwise_archive_adaptor.ZIP]

"Robert Ramey" <ramey@rrsd.com> writes:
a) It's short. The total implementation (actually one half - the save part) consists of 220 lines of code including comments. b) It doesn't require any alterations in the library. c) It doesn't require any alterations of existing serialization code.
This completes my example.
Robert,

I didn't have time to do a deep analysis of what appears to be a very intricate design, but:

1. Assuming that you meant a successful test to return a status code of zero, the test you posted fails on every compiler I can find.

2. Is this the promised simplification of the design we posted in http://lists.boost.org/Archives/boost/2005/11/97002.php? If so, by what measure is your approach a simplification?

I actually don't want to get into a discussion of which non-intrusive design is best. The social and code interoperability dynamics of any non-intrusive design are the same, and that's really what I want to discuss. Please let me know when you're ready to talk about that.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
a) It's short. The total implementation (actually one half - the save part) consists of 220 lines of code including comments. b) It doesn't require any alterations in the library. c) It doesn't require any alterations of existing serialization code.
This completes my example.
Robert,
I didn't have time to do a deep analysis of what appears to be a very intricate design, but:
1. Assuming that you meant a successful test to return a status code of zero the test you posted fails on every compiler I can find.
I didn't make the load part, so the demo/test only invokes the save part. I neglected to comment out the comparison of the saved and loaded data, so it returns non-zero. Also, if I were to invest more effort in it I might review things like names, namespaces, etc.

Note that the library already uses exactly this technique to add a polymorphic interface to any existing archive class. So this idea of adding an enhancement/extension by means of an "archive adaptor" is pretty well established, though it has never been explicitly described as a general technique in the way I did in the previous post. For the reasons I described in the post, I do have a strong preference for it. But I recognize that it may seem foreign and unfamiliar to many programmers.

I sent a previous version to Matthias some weeks ago as a suggestion, but apparently it wasn't convincing. I felt I had done all I could. So I was inclined to just let it rest. Unfortunately, this was unfairly characterised as "dismissing someone else's concerns" and I got sucked into a really pointless and unpleasant episode which I'm happy to forget and will not repeat in the future.

And of course there are lots of different ways to do things, so I don't expect everyone to share my preference. And even if I did, there is no way I'm going to convince every user to do things my way. So I'm content to demonstrate what I believe is the best way to do things and let others extend the library in the direction they want, in the manner they prefer, as long as it doesn't cut into my time.

I had considered adding a section to the manual along with a demo like this, but it seemed like a lot of work and I didn't really have a good simple example. Also, I was reluctant to do something like this because people might start to use it and I would be stuck explaining it again and again. Now that the bitwise optimization has come about as an example, and I was browbeaten into making the whole solution and demo, I might recycle it into the documentation.

Robert Ramey

"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
a) It's short. The total implementation (actually one half - the save part) consists of 220 lines of code including comments. b) It doesn't require any alterations in the library. c) It doesn't require any alterations of existing serialization code.
This completes my example.
Robert,
I didn't have time to do a deep analysis of what appears to be a very intricate design, but:
1. Assuming that you meant a successful test to return a status code of zero the test you posted fails on every compiler I can find.
I didn't make the load part so the demo/test only invokes the save part. I neglected to comment out the comparison of the saved and loaded data so it returns non-zero.
Oh, I didn't realize this was just half of another test you had written. When it failed to run with a zero return code I just stopped looking at it.
Also, if I were to invest more effort in it I might review things like names, namespaces, etc.
Note that the library already uses exactly this technique to add a polymorphic interface to any existing archive class. So this idea of adding an enhancement/extension by means of an "archive adaptor" is pretty well established, though it has never been explicitly described as a general technique in the way I did in the previous post. For the reasons I described in the post, I do have a strong preference for it. But I recognize that it may seem foreign and unfamiliar to many programmers.
There's nothing the least bit unfamiliar to me about it.
I sent a previous version to Matthias some weeks ago as a suggestion, but apparently it wasn't convincing. I felt I had done all I could. So I was inclined to just let it rest.
Unfortunately, this was unfairly characterised as "dismissing someone else's concerns"
Uh, no, neither working on an alternative design nor stopping your work on it was characterized as "dismissing someone else's concerns."
Now that the bitwise optimization has come about as an example, and I was browbeaten into making the whole solution and demo,
I don't see how you can say you were browbeaten into it. I tried to tell you that it probably wouldn't make much difference and we really want to discuss something other than the design details. How could that possibly be construed as pressure to make the solution and demo?

I'd really appreciate it if you could answer this question from my previous post:

2. Is this the promised simplification of the design we posted in http://lists.boost.org/Archives/boost/2005/11/97002.php? If so, by what measure is your approach a simplification?

And also, I'd appreciate it if you'd respond to the paragraph below.

I actually don't want to get into a discussion of which non-intrusive design is best. The social and code interoperability dynamics of any non-intrusive design are the same, and that's really what I want to discuss. Please let me know when you're ready to talk about that.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

I'd really appreciate it if you could answer this question from my previous post:
2. Is this the promised simplification of the design we posted in http://lists.boost.org/Archives/boost/2005/11/97002.php? If so, by what measure is your approach a simplification?
Honestly, I don't remember what it was specifically in response to. It was intended to illustrate my view that the library can and should be extended without adding things to base classes, and finally that it is simpler and more effective to do it this way. The code attached implements all of the save_array functionality included in Matthias' system (actually more) in far fewer lines of code and with the benefits described in the posts.
And also, I'd appreciate it if you'd respond to the paragraph below.
I actually don't want to get into a discussion of which non-intrusive design is best. The social and code interoperability dynamics of any non-intrusive design are the same, and that's really what I want to discuss. Please let me know when you're ready to talk about that.
Sorry, I don't even know what that means.

Robert Ramey

"Robert Ramey" <ramey@rrsd.com> writes:
I'd really appreciate it if you could answer this question from my previous post:
2. Is this the promised simplification of the design we posted in http://lists.boost.org/Archives/boost/2005/11/97002.php? If so, by what measure is your approach a simplification?
Honestly, I don't remember what it was specifically in response to.
In http://lists.boost.org/Archives/boost/2005/11/97201.php you wrote:

    Shortly, I'll post some code that I believe addresses all your design
    goals in a much simpler and effective way.

I'm asking if the code you just posted represents that promised simplification.
It was intended to illustrate my view that the library can and should be extended without adding things to base classes
That it _can_ be so extended has been well established. We know that it is possible since http://lists.boost.org/Archives/boost/2005/11/97002.php also extends the library without adding anything to base classes.
and finally that it is simpler and more effective to do it this way.
We believe there are some aspects of "effectiveness" that you haven't yet considered. We'd like to discuss those with you.
The code attached implements all of the save_array functionality included in Matthias' system (actually more) in far fewer lines of code and with the benefits described in the posts.
By "mattias system" I suppose you're referring to something proposed before 11-21-2005? Are you simply refusing to look at the code we have replaced that proposal with (only 159 lines posted including extensive comments which are mostly exposition)? The point of that code was to present something that wouldn't modify the existing library so that you could be comfortable thinking about the consequences of a non-intrusive design without thinking we were trying to make changes in your library. Our 159 lines are much shorter and simpler than what you just posted; it should be easier to think about the effects of using the smaller system.
And also, I'd appreciate it if you'd respond to the paragraph below.
I actually don't want to get into a discussion of which non-intrusive design is best. The social and code interoperability dynamics of any non-intrusive design are the same, and that's really what I want to discuss. Please let me know when you're ready to talk about that.
Sorry, I don't even know what that means.
It means I want to discuss what happens when the people implementing serialization functions for specific classes are not the same people choosing the archive that will be used, and what happens when there is a second, "non-official" interface to the serialization library that must be used whenever it's important to get the best performance.

In http://lists.boost.org/Archives/boost/2005/11/97201.php you wrote:

    Now my question is why do you need anything from me?

and I failed to answer. Let me be perfectly up front about what I want:

1. I want you to understand what I believe to be the (so far unconsidered) consequences of the non-intrusive choice. That's what I'd like to discuss next, when you are ready.

2. I want you to either:

   a. decide that you agree with us about what will probably happen, and that it is unpleasant enough to warrant an intrusive design, **OR**

   b. understand why we will have to encourage everyone to use the non-official interface instead of the one in the serialization library, so that there is no ill will when that happens.

If all of that fails I want the rest of the community to understand why we are doing what we're doing so that our work will be accepted, both by users, and -- we hope -- as a separate Boost library.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

If all of that fails I want the rest of the community to understand why we are doing what we're doing so that our work will be accepted, both by users, and -- we hope -- as a separate Boost library.
You should just make your own "improved" version of the library and get it reviewed as a replacement or alternative for the current one. Sounds like you've got a couple of competent and interested people on board - just do it!

Good luck,

Robert Ramey

"Robert Ramey" <ramey@rrsd.com> writes:
If all of that fails I want the rest of the community to understand why we are doing what we're doing so that our work will be accepted, both by users, and -- we hope -- as a separate Boost library.
You should just make your own "improved" version of the library and get it reviewed as a replacement or alternative for the current one. Sounds like you've got a couple of competent and interested people on board - just do it!
No, we don't want to maintain a whole serialization library. We're willing, if necessary, to maintain a small library built upon Boost.Serialization, like what's proposed in http://lists.boost.org/Archives/boost/2005/11/97002.php

Are you unwilling to discuss the topics I raised in my previous message?

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Robert Ramey wrote:
I'd really appreciate it if you could answer this question from my previous post:
2. Is this the promised simplification of the design we posted in http://lists.boost.org/Archives/boost/2005/11/97002.php? If so, by what measure is your approach a simplification?
Honestly, I don't remember what it was specifically in response to.
What? You previously said (post dated Fri, 25 Nov 2005 14:38:41 -0800) in http://lists.boost.org/Archives/boost/2005/11/97201.php:

|Shortly, I'll post some code that I believe addresses all your design
|goals in a much simpler and effective way. That may be helpful
|in resolving this misunderstanding.

This was part of a long thread discussing the proposal in the link Dave provided.
It was intended to illustrate my view that the library can and should be extended without adding things to base classes, and finally that it is simpler and more effective to do it this way. The code attached implements all of the save_array functionality included in Matthias' system (actually more) in far fewer lines of code and with the benefits described in the posts.
Did you forget to include the attachment? You cannot have intended the demo you posted at the start of this thread, as that only implements a fraction of Matthias' system. Using either Matthias' original proposal, or Dave and Matthias' revised proposal, an MPI archive that implements the save_array() function via a call to MPI_Send() would be rather trivial to write (or at least the actual MPI_Send() part would :-). Your previous code offers no such functionality; the only thing it does is speed up array processing of some 'bitwise_serializable' types.

If I got this wrong, then I apologise profusely for misunderstanding your proposal, and I would be grateful if you could help me understand its potential. For example, by showing how such an MPI archive would be written using the functionality of bitwise_oarchive_adaptor? For this purpose, you could treat the MPI function as having the signature

    template <typename T>
    void MPI_Send(T* data, std::size_t count);

where T is any fundamental (non-pointer) type. If instead you want to sketch any of the other proposals floating around in the last few weeks, such as some other archive format like XDR or HDF, or some other archive of your choice that demonstrates array functionality, that would be fine too. I am just interested in a sketch of the basic idea here.
And also, I'd appreciate it if you'd respond to the paragraph below.
I actually don't want to get into a discussion of which non-intrusive design is best. The social and code interoperability dynamics of any non-intrusive design are the same, and that's really what I want to discuss. Please let me know when you're ready to talk about that.
Sorry, I don't even know what that means.
Maybe Dave is being a bit too subtle? All I know is I feel an urge to run away whenever I see "social" and "dynamics" in the same sentence ;)

Cheers,
Ian

Robert Ramey wrote:
I actually don't want to get into a discussion of which non-intrusive design is best. The social and code interoperability dynamics of any non-intrusive design are the same, and that's really what I want to discuss. Please let me know when you're ready to talk about that.
Sorry, I don't even know what that means.
Social dynamics aside, one point is that any non-intrusive design, no matter how clever, will always be more intricate and complex than it needs to be if the library doesn't provide a bit of help.

But let's put complexity aside for a moment and just think about the problem from the perspective of a programmer that needs to provide serialization support for his class X. It so happens that X holds a contiguous array of char. The programmer knows that for some archives it can be a major performance gain to invoke their char[]-writing method, but he doesn't know which these archives are. Some of them may not have been written yet.

So we need to give these programmers a _documented_ function that they can call when they serialize arrays. And it is best for this function to be described in the documentation of the serialization library, because that's what these programmers are using. They aren't writing archives or archive adaptors, they may not have heard of Dave's enhancements; they just serialize their own type.

Whether this function should be named save_array and only accept (pointer, size) arrays, or save_sequence and accept arbitrary (iterator, size) sequences (as I proposed), is a matter of debate, but it doesn't change the fundamental point.

The serialization library should provide a function that class authors can call when they serialize arrays...

... because no one else can.

Now, if we add this function to the library, the next logical step is for the library to use its own function when serializing arrays, std::vector and std::valarray. It would be pretty odd not to do so. :-)

We (*) can propose a complete list of changes to the library if you are interested, so that you can evaluate its impact. The support for objects without a default constructor does complicate matters a bit in the std::vector case, but it can be done. Existing binary archives should receive a significant speedup in the char[] case without any changes to the external archive format.

(*) Hopefully the rest of the "we" agrees.
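As a hedged illustration of the point (the function name, signature, and everything below are hypothetical - none of it exists in the current library): the class author writes against one documented call, with element-wise saving as the fallback, and particular archives are then free to overload it.

    #include <cstddef>

    // Hypothetical documented entry point.  The default just loops; an
    // archive with a fast block-writing method could provide an overload
    // for its own archive type that forwards to that method instead.
    template<class Archive, class T>
    void save_array(Archive & ar, const T * p, std::size_t n)
    {
        for(std::size_t i = 0; i < n; ++i)
            ar << p[i];
    }

    // The class author from the example above never needs to know which
    // archives, if any, do something smarter:
    struct X {
        char buffer[1024];

        template<class Archive>
        void save(Archive & ar, const unsigned int /*version*/) const
        {
            save_array(ar, buffer, sizeof(buffer));
        }
    };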

"Peter Dimov" <pdimov@mmltd.net> writes:
Robert Ramey wrote:
I actually don't want to get into a discussion of which non-intrusive design is best. The social and code interoperability dynamics of any non-intrusive design are the same, and that's really what I want to discuss. Please let me know when you're ready to talk about that.
Sorry, I don't even know what that means.
Social dynamics aside, one point is that any intrusive design, no matter how clever, will always be more intricate and complex than it needs to be if the library doesn't provide a bit of help.
Peter,

Robert has many times expressed unwillingness to consider an intrusive design. I understand that you're trying to help, but intrusive designs have been proposed many times in the past (starting at least 3 years ago!), and every time, Robert has dug in his heels further because he believes it's the first step down the road toward an unmaintainable library. I think it's high time we respect his concerns and find a way to get fast array serialization that doesn't violate his non-intrusiveness constraint.

,----
| That's why our current design, described in
| http://lists.boost.org/Archives/boost/2005/11/97002.php, makes no
| changes to the Serialization library.
`----
But let's put complexity aside for a moment and just think about the problem from the perspective of a programmer that needs to provide serialization support for his class X.
It so happens that X holds a contiguous array of char. The programmer knows that for some archives it can be a major performance gain to invoke their char[]-writing method, but he doesn't know which these archives are. Some of them may not have been written yet.
So we need to give these programmers a _documented_ function that they can call when they serialize arrays. And it is best for this function to be described in the documentation of the serialization library, because that's what these programmers are using. They aren't writing archives or archive adaptors, they may not have heard of Dave's enhancements; they just serialize their own type.
Whether that function is going to be documented in the Serialization library or in some other library is 100% up to Robert.
Whether this function should be named save_array and only accept (pointer, size) arrays, or save_sequence and accept arbitrary (iterator, size) sequences (as I proposed), is a matter of debate
Maybe not a matter of debate. I have no objection to a proposal that uses an (iterator,size) interface as long as it handles std::vector and any other cases not covered by the code shown in your earlier posting.
, but it doesn't change the fundamental point.
The serialization library should provide a function that class authors can call when they serialize arrays...
... because no one else can.
I hate to contradict you because I agree with the spirit of your argument, but of course someone else _can_ provide the function, and it's important that we say so, or Robert will quite reasonably feel we're forcing the conclusion down his throat. As proof, our design described in http://lists.boost.org/Archives/boost/2005/11/97002.php makes no changes to the library and could be packaged as a separate add-on library built upon Boost.Serialization.

If Robert insists that the function be provided separately, he is also buying into a situation where this function in the add-on library has to be used by every serialization function that _might_ be used in a performance-critical context, and every archive choice made in what _might_ be a performance-critical context must come from the add-on library, if an appropriate archive exists there (I am thinking, e.g., of binary archives that would be present in the add-on library while text-based archives probably would not).

That's what I want him to think about. If he understands what that means and prefers to avoid intrusion on the library design anyway, Matthias and I are willing to accept that and never bring it up again. After three years of hammering on this one point I can't blame Robert for being tired, and I have no reason to believe new arguments are likely to change his mind about it.
Now, if we add this function to the library, the next logical step is for the library to use its own function when serializing arrays, std::vector and std::valarray. It would be pretty odd not to do so. :-)
We (*) can propose a complete list of changes to the library if you are interested, so that you can evaluate its impact. The support for objects without a default constructor does complicate matters a bit in the std::vector case, but it can be done. Existing binary archives should receive a significant speedup in the char[] case without any changes to the external archive format.
(*) Hopefully the rest of the "we" agrees.
In principle, yes, but in practice, no. Since Robert has made it very clear that he doesn't want to consider any intrusion on the library design, I can't join in such a proposal. The best I can hope for is that he understands the consequences of his choice and feels comfortable with what other people are planning to do with his library.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Robert,

I needed a trivial patch to get the code to compile with gcc 3.4:

    diff bitwise_oarchive_adaptor.hpp~ bitwise_oarchive_adaptor.hpp
    99c99
    < override(ar, t, boost::serialization::is_bitwise_serializable<T>::type());
    ---
    > override(ar, t, typename
    >     boost::serialization::is_bitwise_serializable<T>::type());

However, I have the same problem that Dave apparently has: is the demo supposed to do anything useful? I guess not, because the 'override' function that is the basis of the extension mechanism seems to only allow saving; the loading half seems to be missing.

It also seems to be a strange choice for a demonstration. Is the bitwise_?archive_adaptor supposed to be useful for anything other than a native binary archive? In what sense is it then an 'adaptor'?

I spent a little while looking over the design, and thought a bit about how to implement various archives (array optimizations, MPI, MPI-IO, netCDF, etc., and various types: arrays, matrices, multi_array and so on) using that basic framework. After I cleaned the vomit off the floor (so to speak), I started writing a comparison between what the 'ideal' version of Dave's proposal would allow (including the minimal intrusive changes that Dave hinted were on offer - although the thread never got as far as describing them before being apparently rejected outright), and what your proposal would allow.

Then I realized there is absolutely no point doing this. Many of the points I was going to make have already been covered, in some cases two, three, four or more times, in the previous threads, and if anyone was going to change their mind they surely would have long ago. Besides, since both proposals are "non-intrusive", it makes no difference which proposal is actually implemented - indeed, both could probably live side by side if someone were sufficiently masochistic. The impact on the current serialization library will be identical in each case.

So, I will do us both a favour and save yet another trip around the merry-go-round.

Regards,
Ian

Ian McCulloch wrote:
Is the bitwise_?archive_adaptor supposed to be useful for anything other than a native binary archive?
This is an alternative way of implementing the save_binary optimization. Application of this adaptor to any existing archive class will create a new archive class that includes the save_binary optimization. So it could be applied to, say, a text_oarchive. But of course it would make no sense to do this, as it would result in a slower archive (in text archives, save_binary renders binary output as base64 text). Also, saving data as bitwise binary maintains the representation of the host machine, so the application of this adaptor would result in an archive which would be non-portable across machines.
In what sense is it then an 'adaptor'?
That is why I have referred to it as an "adaptor".

Robert Ramey

Robert Ramey wrote:
Ian McCulloch wrote:
Is the bitwise_?archive_adaptor supposed to be useful for anything other than a native binary archive?
This is an alternative way of implementing the save_binary optimization. Application of this adaptor to any existing archive class will create a new archive class that includes the save_binary optimization. So it could be applied to, say, a text_oarchive. But of course it would make no sense to do this, as it would result in a slower archive (in text archives, save_binary renders binary output as base64 text). Also, saving data as bitwise binary maintains the representation of the host machine, so the application of this adaptor would result in an archive which would be non-portable across machines.
Ok. Would it be correct to describe the adaptor's function as overriding serialization for a particular type (or set of types) and serializing it in a different format, using facilities already available to the archive(s)?

It is clear that bitwise_serializable_adaptor, even when applied to a native binary archive, will in general produce an archive that is incompatible with the base archive (say, if the user marks some POD type as bitwise_serializable, and the POD type contains some padding). Thus, I agree this belongs firmly in the class 'adaptor' - the resulting archive is distinct from the base archive. I can even imagine that there are some interesting uses for it. So I apologise for some of the language I used in my previous post; it was out of line.

As far as I can tell, what distinguishes the 'adaptor' case from the array extensions that have been discussed at length is:

1. In the default case the save_array() function (**) reproduces the existing behaviour. That is, an array_adaptor<Base> would produce a bit-for-bit identical archive to using the Base archive type itself, for all Base archives that currently exist in the serialization library.

2. In the non-trivial case where save_array() does something different to the default, it needs to invoke functionality that does _not_ already exist in the serialization library, and is _archive_ _specific_.

Both of these points strongly suggest, to me at least, that the save_array() extension is not properly an adaptor. Do you have a different interpretation?

(**) Just for the record, I agree with Peter Dimov that something like save_sequence(Iterator, size) is better. This is needed for archives that directly support arrays as a distinct structure, if you want to serialize, say, a deque using the archive array format.

Regards,
Ian
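A sketch of the kind of interface meant in that footnote (the signature is hypothetical; nothing below exists in the library): the default just walks the iterators, and an archive that has a native array representation could overload it for its own archive type.

    #include <cstddef>
    #include <deque>

    // Hypothetical sequence-level entry point; element-by-element by default.
    template<class Archive, class Iterator>
    void save_sequence(Archive & ar, Iterator first, std::size_t count)
    {
        while(count--)
            ar << *first++;
    }

    // A std::deque has no contiguous block to hand to a (pointer, size)
    // save_array, but it can still be presented as one logical sequence:
    template<class Archive, class T>
    void save(Archive & ar, const std::deque<T> & d, const unsigned int /*version*/)
    {
        const std::size_t count = d.size();
        ar << count;
        save_sequence(ar, d.begin(), count);
    }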

Ian McCulloch wrote:
Would it be correct to describe the adaptor's function as overriding serialization for a particular type (or set of types) and serializing it in a different format, using facilities already available to the archive(s) ?
It's a little broader than that. It would permit ANY code to be attached to a specified combination of archive and data type.

Just to speculate on an imaginary example - please don't treat this as a serious proposal which I have to defend. Suppose someone comes along and looks at the xml_archive. He says: wow, now that is way cool - but it's really not done. What I need is to create an xml schema along with my xml_archive so I can use my whiz-bang xml tool to browse and maybe edit my archive! Damn - that means I have to make my own implementation of xml_archive which makes special versions of the name/value pair serialization. Oh, that's not that bad. I can just derive from the current xml_archive and add my overrides there. Damn - it turns out that I have TWO current xml_archives - one for wide characters and wide stream i/o and one for narrow characters. OK, I can fix up the base class. Damn - that's a lot of tricky code - I don't want to mess with that. OK, I can insert a new base class above xml_oarchive. Damn - now I've changed xml_oarchive for existing archives.

Which might be OK or might not - in any case, now someone has to maintain a more elaborate version of xml_archive which inherits code for a special case - creation of an xml_schema alongside the archive. And not everyone else wants this extra feature, so I need to add to the documentation to show how to turn it on and off, specify the file name which should be used for it, and accommodate different variations of xml_schemas. So now every user of xml_archive has to go review an expanded interface for features which most likely don't interest him. Now that it's "part of the library", someone has to field questions on it, most likely from people who will come to conclude that they don't need it.

Now comes along the guy with the "next" great feature. He wants to move the collection count into the tag rather than have it as the first data member. He really needs this. Now the whole cycle starts all over again. In no time at all, the xml_archive has all the "required" enhancements, but now they are interacting - and writing overrides for the serialization functions is getting almost impossible. Of course, by this time, saving of collections also includes save_array, so it's all mixed in there.

Now compare this with the "archive adaptor" concept. One archive adaptor - xml_schema_adaptor - adds code to create an xml_schema alongside the original xml_archive. Of course the author of this adaptor has to specify enhanced versions of serialization code for certain types - but that's what he wanted to do in the first place. He applies his adaptor to each of the two existing xml archives, xml_oarchive and xml_woarchive, and thus creates two new ones - xml_with_schema_oarchive and xml_with_schema_woarchive (note the "w"). Now he documents his two NEW archives - or perhaps his adaptor with the new calls they have - at least a new constructor and probably more. (Note these two new archive classes may not be able to read the archive data created by the original xml_archive class. This is to be expected, as they implement new functionality.)

So everyone has what he wants:

Override functions for a special combination of archive and type are written only once, by the person who wants/needs them - so he's happy.

The library user who wants to just make a minimal xml_oarchive is no more unhappy than he is now. Nothing has changed for him.

Users who want the new functionality get it and have a document which describes the functionality as an extension to the original xml_archive - much easier to digest as well as to write.

Users who have made their own extensions to xml_oarchive can now "mix in" the new schema generation into their own archive with just a little bit of boilerplate code.

The library maintainer has no more work to do.

The author of the adaptor is happy to maintain his code, which is small and depends only upon the quite "narrow" interface of the core serialization library. Furthermore, he might be pleased to find that more people are using his extension, as it can be applied to archives of which he had no knowledge.
As far as I can tell, what distinguishes the 'adaptor' case from the array extensions that have been discussed at length, is
1. In the default case the save_array() function (**) reproduces the existing behaviour.
That is, an array_adaptor<Base> would produce a bit-for-bit identical archive to using the Base archive type itself, for all Base archives that currently exist in the serialization library.
Hmm - I'm not sure what you mean by this, but I can say the following: I believe that the new archive produced by applying bitwise_archive_adaptor to binary_oarchive would produce a bit-for-bit identical archive to what the current binary_oarchive does. But in the face of varying compilers, configurations, and lots of small things, I really couldn't say for sure without undergoing a tremendously tedious examination at a very low level.

Of course, if the bitwise_archive_adaptor were applied to something like a text_oarchive it would result in something quite different.

    const float x[2] = {1.0, 2.5};
    ar << x;

now looks like "1.0 2.5" in a text archive. The archive class which results from applying the bitwise_archive_adaptor would use save_binary to produce an equivalent (for this platform) string of characters in base64 code. So it seems odd to me that someone would want to do this.
2. In the non-trivial case where save_array() does something different to the default, it needs to invoke functionality that does _not_ already exist in the serialization library, and is _archive_ _specific_.
correct.
Both of these points strongly suggest, to me at least, that the save_array() extension is not properly an adaptor. Do you have a different interpretation?
I've attempted to illustrate that it can be implemented as an adaptor, and that doing so will have certain benefits over the alternatives. In the particular case we've been discussing, all the high-performance computing archives will be derived from a common base class, so my adaptor idea isn't required. But then someone is going to insist that the old binary_oarchive really needs this enhancement. Since save_array would now be part of the high-performance computing archives' base class, it wouldn't be available to others. However, my adaptor would be available to mix in for anyone who feels they have to have it.

Of course, I only know about binary_oarchive. For all I know, someone out there has made his own derivation or variation of binary_oarchive. But I don't have to know - my adaptor can be used by them if they want.

Here is the key point. I'm not really concerned specifically about save_array. The save_array optimization is one example of any number of enhancements and/or extensions that people might want to make. But it is not the only example. We can't go mixing every great idea into the core library without running into an intractable scalability problem.

Robert Ramey

"Robert Ramey" <ramey@rrsd.com> writes:
Here is the key point. I'm not really concerned specifically about save_array.
The save_array optimization is one example of any number of enhancements and/or extensions that people might want to make. But it is not the only example. We can't go mixing every great idea into the core library without running into an intractable scalability problem.
To me at least, it is very clear that you hold that view, and it has been clear for a long time. That's why I'd like to have a different discussion with you, mentioned in several foregoing messages:

    If Robert insists [on a non-intrusive design], he is also buying into a
    situation where this function in the add-on library has to be used by
    every serialization function that _might_ be used in a
    performance-critical context, and every archive choice made in what
    _might_ be a performance-critical context must come from the add-on
    library, if an appropriate archive exists there (I am thinking, e.g., of
    binary archives that would be present in the add-on library while
    text-based archives probably would not).

    That's what I want him to think about. If he understands what that means
    and prefers to avoid intrusion on the library design anyway, Matthias and
    I are willing to accept that and never bring it up again. After three
    years of hammering on this one point I can't blame Robert for being
    tired, and I have no reason to believe new arguments are likely to change
    his mind about it.

This really should be a very short discussion, after which you'll have me and Matthias completely out of your hair on this topic. There should be no need for long and time-consuming posts like the one I'm replying to here. Can't we do that? Afterward we could all relax and enjoy the holidays. :)

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Robert Ramey wrote:
Just to speculate on an imaginary example - please don't treat this as a serious proposal which I have to defend. Suppose someone comes along and looks at the xml_archive. He says: wow, now that is way cool - but it's really not done. What I need is to create an xml schema along with my xml_archive so I can use my whiz-bang xml tool to browse and maybe edit my archive! [...]
[...]
Here is the key point. I'm not really concerned specifically about save_array.
The save_array optimization is one example of any number of enhancements and/or extensions that people might want to make. But it is not the only example. We can't go mixing every great idea into the core library without running into an intractable scalability problem.
True. But there is a fundamental difference between your enhanced archive examples and array serialization.

One function of the library is to act as a mediator between programmers that write serialization functions for their types and programmers who implement archives. The library provides a common language so that these two groups of programmers can communicate without ever having to coordinate their efforts.

When the author of X that has two fields x and y wants to serialize it into _any_ archive, he just "says" save(x) and save(y) to the archive.

However, the author of Y that contains an array currently can't just say save_array(a) to the archive, because save_array is not part of the current vocabulary. He needs to say save(a[0]), save(a[1]), ..., save(a[n-1]). This works, but it makes it needlessly complicated for the archive to detect that it is being fed an array, rather than a sequence of ordinary save calls.

In contrast, the "someone might want to do..." enhanced archive examples do not involve communication between these two groups of programmers. The programmer of the archive just decides to implement a specific format and that's it.

Peter Dimov wrote:
When the author of X that has two fields x and y wants to serialize it into _any_ archive, he just "says" save(x) and save(y) to the archive.
However, the author of Y that contains an array currently can't just say save_array(a) to the archive, because save_array is not part of the current vocabulary. He needs to say save(a[0]), save(a[1]), ..., save(a[n-1]).
He needs to say save(a), that is, ar << a. In any of the proposals the correct override will be invoked.

Robert Ramey

Robert Ramey wrote:
Peter Dimov wrote:
When the author of X that has two fields x and y wants to serialize it into _any_ archive, he just "says" save(x) and save(y) to the archive.
However, the author of Y that contains an array currently can't just say save_array(a) to the archive, because save_array is not part of the current vocabulary. He needs to say save(a[0]), save(a[1]), ..., save(a[n-1]).
He needs to say save(a), that is, ar << a. In any of the proposals the correct override will be invoked.
This only works for C-style arrays with the size fixed at compile time. Think about how one would write "save" for the following:

    template<class T> struct my_array
    {
        T * data_;
        unsigned size_;
    };

Peter Dimov wrote:
Robert Ramey wrote:
Peter Dimov wrote:
When the author of X that has two fields x and y wants to serialize it into _any_ archive, he just "says" save(x) and save(y) to the archive.
However, the author of Y that contains an array currently can't just say save_array(a) to the archive, because save_array is not part of the current vocabulary. He needs to say save(a[0]), save(a[1]), ..., save(a[n-1]).
He needs to say save(a), that is, ar << a. In any of the proposals the correct override will be invoked.
This only works for C-style arrays with the size fixed at compile time. Think about how one would write "save" for the following:
OK, that's how I interpreted the [] brackets.

Having thought about this a little more - and knowing I had faced this issue before - I remembered the concept of the "serialization wrapper" as documented in the manual. (I see now that the explanation is slightly out of whack, but I'll address that later.) This is used to implement name-value pairs. The implementation is such that those archives that don't use them don't have to have them included in the archive header. Those that do can take advantage of them. Looking back, I now remember I invented the "serialization wrapper" to address exactly this situation.

To summarize: the basic idea is to define something

Peter Dimov wrote:
Robert Ramey wrote:
Peter Dimov wrote:
When the author of X that has two fields x and y wants to serialize it into _any_ archive, he just "says" save(x) and save(y) to the archive.
However, the author of Y that contains an array currently can't just say save_array(a) to the archive, because save_array is not part of the current vocabulary. He needs to say save(a[0]), save(a[1]), ..., save(a[n-1]).
He needs to say save(a), that is, ar << a. In any of the proposals the correct override will be invoked.
This only works for C-style arrays with the size fixed at compile time. Think about how one would write "save" for the following:
template<class T> struct my_array { T * data_; unsigned size_; };
OK - I depended on the [] indicating C++ arrays.

Take a look at the "Serialization Wrapper" concept as described in the manual. (The explanation is a little messed up but I'll fix that later). The basic idea is to define

template<class T>
struct nvp : public std::pair<const char *, T *> {
    ....
    // default implementation of serialize
    template<class Archive>
    void serialize(Archive & ar, const unsigned int){
        // default implementation just throws away the tag name
        // and serializes the value
        ar & * this->second;
    }
    ....
};

text and binary archives don't have to do anything special regarding nvp - they just hand it off to the serialization library as they do for any other type. No save(nvp ... appears in any header other than in xml_archive. So the default serialization gets invoked for those archives. In xml_archives, there are save/load_override functions like the following.

// special treatment for name-value pairs.
template<class T>
void save_override(const ::boost::serialization::nvp<T> & t, int){
    this->This()->save_start(t.name());
    archive::save(* this->This(), t.const_value());
    this->This()->save_end(t.name());
}

This implements special behavior for the nvp type when used with the xml_archive. I invented this so I wouldn't have some special functions for xml having to be defined for all archives. It bothered me that binary_archive would have to "be aware" of or implement anything related to xml.

I think the same would work for arrays. Define an array wrapper something like:

template<class T>
struct array {
    std::size_t & m_element_count;
    T * m_t;
    explicit array(std::size_t & s, T * t) :
        m_element_count(s),
        m_t(t)
    {}
    // default implementation
    template<class Archive>
    void serialize(Archive & ar, const unsigned int){
        // default implementation does the loop
        std::size_t count = m_element_count;
        T * t = m_t;
        while(0 < count--){
            ar << *t++;
        }
    }
};

hpc_oarchive and/or its derivatives would contain something like the following.

template<class T>
void save_override(const ::boost::serialization::array<T> & t, int){
    this->This()->save_array(t.m_element_count, t.m_t);
}

Note this requires refinement to deal with composition with other wrappers, compiler quirks, and things like that. I hope this doesn't obscure the main point.

So the net effect is

Everyone who wants to wrap his "arrays" in boost::serialization::array is free to do so. Archive classes which don't have special code for such arrays just pass it to the library by default which eventually resolves to an item by item serialization.

Archives which have facilities suitable for handling arrays in a special way can overload save_override(boost::serialization::array ... and do their thing.

No currently existing archives need be changed.

Archives which don't have special handling for these "arrays" (the majority) can completely ignore this facility. These will work exactly as before. It would mean that our current implementation of std::vector and C++ arrays would have to be altered to wrap the "arrays", but I could live with that. I already bit that bullet with nvp's.

The main thing is that it avoids the thing I was most unhappy with - having to add something to all archive classes just to accommodate some special feature of one of them. Finally, it defines the "array-ness" of a data structure independently of the whole archive concept. That is, its "array-ness" becomes an optional feature of the data. This seems more "correct" to me. It is certainly more in keeping with the spirit and design of the library to date. Robert Ramey
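As a usage note, the remark above about altering the std::vector serialization suggests something like the following. This is my own sketch, not code from the post; the wrapper is the one sketched above, and the const_cast is only there because that sketch stores a plain T *.

template<class Archive, class T>
void save(Archive & ar, const std::vector<T> & v, const unsigned int /*version*/){
    std::size_t count = v.size();
    ar << count;
    if(count > 0){
        // archives with a save_override for array<T> intercept this call;
        // all other archives fall through to the wrapper's element-by-element serialize()
        boost::serialization::array<T> wrapper(count, const_cast<T *>(&v[0]));
        ar << wrapper;
    }
}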

Hi Robert, Robert Ramey wrote: [...]
So the net effect is
Everyone who wants to wrap his "arrays" in boost::serialization::array is free to do so. Archive classes which don't have special code for such arrays just pass it to the library by default which eventually resolves to an item by item serialization.
Archives which have facilities suitable for handling arrays in a special way can overload save_override(boost::serialization::array ... and do their thing.
No currently existing archives need be changed.
I like this a lot! Of course there are details to work out, but as far as I can tell everything that has been mentioned as a possible array-aware archive is possible within this framework. I will leave those details for Dave and Matthias, who I expect will also be excited about this development. Best regards, Ian

Ian McCulloch <ianmcc@physik.rwth-aachen.de> writes:
Robert Ramey wrote:
[...]
So the net effect is
Everyone who wants to wrap his "arrays" in boost::serialization::array is free to do so. Archive classes which don't have special code for such arrays just pass it to the library by default which eventually resolves to an item by item serialization.
Archives which have facilities suitable for handling arrays in a special way can overload save_override(boost::serialization::array ... and do their thing.
No currently existing archives need be changed.
I like this a lot! Of course there are details to work out, but as far as I can tell everything that has been mentioned as a possible array-aware archive is possible within this framework. I will leave those details for Dave and Matthias, who I expect will also be excited about this development.
I agree; this sounds like a big step in the right direction for us. It puts the hooks on which to hang array serialization where they need to be -- within the serialization library -- in order to avoid having a second, "semi-official" fast array serialization library interface. AFAICT (although I still need to check with Matthias about this) it will allow us to achieve everything we need for our MPI archives. I also see that the proposed change is more in keeping with the spirit of the serialization library and the separations that Robert is trying to maintain. It's an impressively elegant solution.

It would be a shame, I think, if the library's own binary archives weren't given a save_override for boost::serialization::array in order to take advantage of the new functionality, but if it never happened, that would be of no major importance to me personally.

I want to thank Robert for persevering through a discussion that, if it was difficult for a few of those asking for change, must have been ten times as difficult for Robert. Fielding requests from so many different directions at once can never be easy. He has again shown the tenacity and dedication I referred to in http://lists.boost.org/Archives/boost/2005/11/96923.php. Thanks again, Robert. -- Dave Abrahams Boost Consulting www.boost-consulting.com

At 9:11 PM -0800 11/27/05, Robert Ramey wrote:
Take a look at the "Serialization Wrapper" concept as described in the manual. [...] text and binary archives don't have to do anything special regarding nvp - they just hand it off to the serialization library as they do for any other type. No save(nvp ... appears in any header other than in xml_archive.
Don't forget the polymorphic archives. Other than that, this seems very promising. From all the preceding discussion it *has* seemed that there were a lot of close analogies between what Dave & Matthias were proposing and the xml archive with its supporting nvp mechanism.

Hi Robert, In implementing your array wrapper proposal I encountered the two following issues:

The current version of the std::vector<T> serialization works also for non-default-constructible types T, since it does the following (version A):

unsigned int count;
ar >> BOOST_SERIALIZATION_NVP(count);
s.reserve(count);
while(count-- > 0){
    typedef BOOST_DEDUCED_TYPENAME Container::value_type type;
    stack_construct<Archive, type> t(ar);
    ar >> boost::serialization::make_nvp("item", t.reference());
    s.push_back(t.reference());
    ar.reset_object_address(& s.back(), & t.reference());
}

On the other hand, any of the fast array serialization variants requires the type T to be default constructible, since deserialization would proceed as (version B):

unsigned int count;
ar >> BOOST_SERIALIZATION_NVP(count);
s.resize(count);
if (count)
    ar >> array(count, &s[0]);

Thus, the array wrapper can be used only for vectors of default-constructible types. I see two ways how this can be implemented and wanted to discuss what option is best in your opinion:

i) the load function for std::vector could dispatch to either version A or B depending on the type traits has_trivial_constructor<T>

ii) one could leave std::vector serialization untouched, meaning always use version A, and use the optimized version B only in the archive wrapper for archives implementing fast array serialization. The advantage of this is that these archives know for which types they provide fast array serialization, and could override the std::vector serialization just for these types.

Also, as a second issue I want to bring up the size_type serialization issue again, since treating size_type differently from unsigned int is essential for serialization of huge containers on 64-bit platforms, as well as for efficient MPI serialization. In previous exchanges this was found to be non-controversial and there was a consensus that a "strong typedef" will do the trick. My question to you is now where such a strong typedef should be placed. The other strong typedefs (e.g. class_id_type) are all defined in the header boost/archive/basic_archive.hpp and in namespace boost::archive. Thus one option would be to define the size_type strong typedef also in that place. However, this would introduce a coupling between serialization and archive, since the serialize functions for containers would then have to include boost/archive/basic_archive.hpp. I thus believe that it would be closer to your design goals to define a size_type wrapper ("strong typedef") in boost/serialization instead? Matthias
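A minimal sketch (mine, not from the message above) of what option i) could look like: tag-dispatch the vector load on the trait, keeping the existing element-by-element code path for everything else. boost::serialization::make_array is assumed to build the array wrapper discussed earlier, and the non-trivial branch is only indicated, since its body is exactly the library's version A quoted above.

#include <vector>
#include <boost/type_traits/has_trivial_constructor.hpp>
#include <boost/serialization/array.hpp>

template<class Archive, class T>
void load_vector(Archive & ar, std::vector<T> & s, boost::true_type){
    // version B from the message above: resize() and read the whole block at once
    unsigned int count;
    ar >> count;
    s.resize(count);
    if(count)
        ar >> boost::serialization::make_array(&s[0], count);
}

template<class Archive, class T>
void load_vector(Archive & /*ar*/, std::vector<T> & /*s*/, boost::false_type){
    // version A from the message above: element-by-element with stack_construct,
    // which also handles non-default-constructible T (body omitted here)
}

template<class Archive, class T>
void load_vector(Archive & ar, std::vector<T> & s){
    // dispatch on the trait: trivially constructible types take the fast path
    load_vector(ar, s, typename boost::has_trivial_constructor<T>::type());
}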

Matthias Troyer <troyer@itp.phys.ethz.ch> writes:
I see two ways how this can be implemented and wanted to discuss what option is best in your opinion:
i) the load function for std::vector could dispatch to either version A or B depending on the type traits has_trivial_constructor<T>
ia) define the has_default_constructor<T> trait, which by default is derived from has_trivial_constructor<T> on implementations without magic compiler support.
ii) one could leave std::vector serialization untouched, meaning always use version A, and use the optimized version B only in the archive wrapper for archives implementing fast array serialization. The advantage of this is that these archives know for which types they provide fast array serialization, and could override the std::vector serialization just for these types.
That one scares me a lot. The archive author doesn't know the full range of element types T for which vector<T> can be fast-array-serialized, does he? What happens when I invent a new POD that I want to stick in vectors that will be serialized? Do I have to go modify the archive? Or have I misunderstood this altogether? -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Dec 8, 2005, at 9:52 PM, David Abrahams wrote:
Matthias Troyer <troyer@itp.phys.ethz.ch> writes:
I see two ways how this can be implemented and wanted to discuss what option is best in your opinion:
i) the load function for std::vector could dispatch to either version A or B depending on the type traits has_trivial_constructor<T>
ia) define the has_default_constructor<T> trait, which by default is derived from has_trivial_constructor<T> on implementations without magic compiler support.
ii) one could leave std::vector serialization untouched, meaning always use version A, and use the optimized version B only in the archive wrapper for archives implementing fast array serialization. The advantage of this is that these archives know for which types they provide fast array serialization, and could override the std::vector serialization just for these types.
That one scares me a lot. The archive author doesn't know the full range of element types T for which vector<T> can be fast-array-serialized, does he? What happens when I invent a new POD that I want to stick in vectors that will be serialized? Do I have to go modify the archive? Or have I misunderstood this altogether?
Since the archive implements the fast array serialization for these types, it has to know for which types it should be used. After all, that's what the use_array_optimization lambda expression member of the archive in your proposal was for. For all these types it is safe to first resize() the vector and then do the appropriate fast deserialization, while for other types the safe default (de)serialization of std::vector can be used.

Regarding your question about what happens when someone invents a new POD, the answer in the case of your proposal applied to MPI archives is that you have to specialize the is_mpi_datatype<T> traits class for your type T, since the lambda expression there is:

typedef is_mpi_datatype<mpl::_1> use_array_optimization;

The same lambda expression can also be used with the array wrappers.

Actually, using has_default_constructor<T> scares me more, since a user might (for some strange reason) have overloaded save_construct_data although the type is default constructible. This is not forbidden by the library, although I do not see any reason why somebody would do it. But in case someone has done it, using has_default_constructor<T> to decide whether the current version of vector serialization or a new version using the array wrapper is used would break backward compatibility. I believe that nobody will have done that, but it scares me more than the point that you raise. Matthias
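To make the opt-in mechanism described above concrete, here is a small sketch of my own. The trait name is_mpi_datatype and the use_array_optimization member follow the proposal being discussed; they are not part of the current serialization library, and the archive skeleton is purely illustrative.

#include <boost/mpl/bool.hpp>
#include <boost/mpl/placeholders.hpp>

struct point3d { double x, y, z; };        // a user-defined POD

namespace boost { namespace mpi {

// primary template: by default a type is not an MPI datatype
template<class T> struct is_mpi_datatype : mpl::false_ {};

// the user opts in for point3d; containers of point3d now qualify for the fast path
template<> struct is_mpi_datatype<point3d> : mpl::true_ {};

} }

class mpi_oarchive /* : public ... */ {
public:
    // the archive advertises the types it can fast-serialize as a lambda expression
    typedef boost::mpi::is_mpi_datatype<boost::mpl::_1> use_array_optimization;
    // ...
};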

Hi Robert, I have now implemented the array wrappers and, as expected, all of my archives also work with that solution. In running the regression tests I realized that besides changing the C-array and std::vector serialization, the following changes need to be made to the core serialization library:

1. In archive/detail/iserializer.hpp the following change is needed:

@@ -584,6 +586,10 @@
 inline void load(Archive &ar, const serialization::binary_object &t){
     boost::archive::load(ar, const_cast<serialization::binary_object &>(t));
 }
+template<class Archive, class T>
+inline void load(Archive &ar, const serialization::array<T> &t){
+    boost::archive::load(ar, const_cast<serialization::array<T> &>(t));
+}

This is needed because the wrappers are usually passed as const arguments, and you provide the same overload for nvp and binary_object.

2. In archive/basic_xml_oarchive.hpp the following is needed:

@@ -100,6 +101,20 @@
         this->This()->save_end(t.name());
     }
+    // specific overrides for arrays
+    // want to trap them before the above "fall through"
+
+    template<class T>
+    void save_override(
+        #ifndef BOOST_NO_FUNCTION_TEMPLATE_ORDERING
+        const
+        #endif
+        ::boost::serialization::array<T> & t,
+        int
+    ){
+        archive::save(* this->This(), t);
+    }
+
     // specific overrides for attributes - not name value pairs so we
     // want to trap them before the above "fall through"
     BOOST_ARCHIVE_OR_WARCHIVE_DECL(void)

and in archive/basic_xml_iarchive.hpp:

@@ -81,6 +82,19 @@
         load_end(t.name());
     }
+
+    // specific overrides for arrays
+    template<class T>
+    void load_override(
+        #ifndef BOOST_NO_FUNCTION_TEMPLATE_ORDERING
+        const
+        #endif
+        boost::serialization::array<T> & t,
+        int
+    ){
+        archive::load(* this->This(), t);
+    }
+
     // specific overrides for attributes - handle as
     // primitives. These are not name-value pairs
     // so they have to be intercepted here and passed on to load.

to allow the array wrapper to be serialized although it is not an nvp. Note that we cannot just put the array wrapper into an nvp since that would break compatibility with the old XML archive format. These changes are in addition to changing the C-array and std::vector serialization.

My question to you is whether, in light of this additional intrusion into the unrelated XML archives, you still prefer this solution over the save_array/load_array free functions proposed by Dave? If you do, I can send you my implementation. Also, when you find time, could you please reply to my mail of a few days ago regarding the issues in std::vector serialization. Matthias

Hi Robert, Thanks for posting your proposal! There is a close similarity between your proposal and Dave's. Dave's classes array::oarchive and array::iarchive are archive adaptors, just like the one you are proposing. We all understand what you mean by archive adaptor. If you take a closer look at Dave's proposal you will surely see that he built on your idea of using archive adaptors.

Aside from naming differences and other minor things, the main difference between your proposal and Dave's is the choice of customization point used by the authors of serialization functions for new array-like classes (such as, e.g., std::valarray). How can they profit from the optimized saving of contiguous arrays of some data types? In your proposal these authors should provide an overload of

template<class Base, class T>
void override(boost::archive::bitwise_oarchive_adaptor<Base> &, T const &)

In this function they have to re-implement the serialization of the specific class. Dave, on the other hand, proposes that these authors call a function save_array, which by default will just do a simple loop (as in the current library) but dispatch to an optimized function when available.

Let me state clearly that both these approaches can coexist and there is no conflict. Dave's proposal uses a wrapper, just like the one you use to override the default serialization provided by your library, but in addition provides save_array and load_array functions that can be used with any archive (and without modifications to your library).

Let me take std::valarray as an example of what would have to be implemented by the author of std::valarray serialization. In your scheme that would be:

----------------------------------------------------
// the default serialize function
template<class Base, class T>
void save(
    boost::archive::bitwise_oarchive_adaptor<Base> & ar,
    const std::valarray<T> & t
){
    const unsigned int count = t.size();
    ar << count;
    for(unsigned int i = 0; i < t.size(); ++i)
        ar << make_nvp("item", t[i]);
}

// the optimized overload of the override function
template<class Base, class T>
void override(
    boost::archive::bitwise_oarchive_adaptor<Base> & ar,
    const std::valarray<T> & t,
    boost::mpl::true_
){
    const unsigned int count = t.size();
    ar << count;
    ar.save_binary(get_data(t), t.size() * sizeof(T));
}

// the dispatch either to the optimized or the default version of override
template<class Base, class T>
void override(
    boost::archive::bitwise_oarchive_adaptor<Base> & ar,
    const std::valarray<T> & t
){
    override(ar, t, boost::serialization::is_bitwise_serializable<T>::type());
}
----------------------------------------------------

Contrast this with Dave's proposal, where a *single* function needs to be written to get both unoptimized and optimized serialization:

----------------------------------------------------
template<class Base, class T>
void save(
    boost::archive::bitwise_oarchive_adaptor<Base> & ar,
    const std::valarray<T> & t
){
    const unsigned int count = t.size();
    ar << count;
    save_array(ar, get_data(t), t.size());
}
----------------------------------------------------

Not only is this much shorter, it is simpler, less error-prone, and easier to maintain than even the default serialization function in your suggestion, since the for-loop over the elements of the std::valarray is omitted. And please keep in mind that if the archive does not provide an optimized version of save_array, the code that is executed will be *exactly* the for-loop in the other example.
The simplicity of Dave's proposal is no accident. I know that he spent many hours thinking about the problem to come up with an elegant and simple solution. Let me stress again that both options can coexist, i.e. we can write an array_adaptor that provides both your override() mechanism and a save_array() function. There is no conflict at all between the two proposals. It will then be up to the authors of the serialization function of a new class to choose which mechanism they prefer. For me the choice is obvious, but your mileage may vary. Matthias
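A sketch of my own (following the save_array idea described above, with overloading standing in for the proposal's trait-based dispatch) showing why the fallback costs nothing: the generic save_array is exactly the element-by-element loop, and an archive that can do better just supplies a more specific overload. The hpc_oarchive name is taken from later in the thread; nothing here is library code.

#include <cstddef>

namespace boost { namespace serialization {

// generic fallback - works with every archive, loops over the elements
template<class Archive, class T>
void save_array(Archive & ar, const T * p, std::size_t n){
    while(n--)
        ar << *p++;
}

} }

// an archive that can do better simply provides a more specific overload, e.g.:
// template<class T>
// void save_array(hpc_oarchive & ar, const T * p, std::size_t n){
//     ar.save_binary(p, n * sizeof(T));
// }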

Matthias Troyer wrote:
----------------------------------------------------
template<class Base, class T>
void save(
    boost::archive::bitwise_oarchive_adaptor<Base> & ar,
    const std::valarray<T> & t
){
    const unsigned int count = t.size();
    ar << count;
    save_array(ar, get_data(t), t.size());
}
----------------------------------------------------
Hmmm - did you perchance mean to use bitwise_oarchive_adaptor<Base> in the above?

Changing the names a little bit for clarity, I always anticipated archive developers would use something like the following:

template<class T>
void save(boost::hpc_archive & ar, const std::valarray<T> & t ...){
    const unsigned int count = t.size();
    ar << count;
    save_array(ar, get_data(t), t.size());
}

This would apply the save_array enhancement to all classes derived from hpc_archive. In fact I would expect that this is the way people are doing it now. The only problem with this was that it would only apply to one "family" of archives - those sharing a common base class. In particular, it wouldn't apply to binary_oarchive. Previously, you raised the concern that code like the above would have to be replicated in order to add enhancements for different archives - particularly binary_oarchive. So that was my motivation for suggesting the "archive adaptor" approach.

But as it turns out, you won't be using binary_oarchive in any case. Dave kept the same class name but put it in a different namespace; in fact it will be a different archive - if for no other reason than to avoid backward compatibility issues with currently existing archive data. Besides, it seems pretty clear that stream i/o is not the highest performance solution, so you won't be deriving from the current binary_primitive either. So, with a little renaming, I would anticipate that things would look like:

class hpc_oarchive : public .... {
    ... all the stuff in Dave's oarchive
};
class mpi_oarchive : public hpc_oarchive {
    ... implementation for mpi
};
class xdr_oarchive : public hpc_oarchive {
    ... implementation for xdr
};
etc.

In this case the simple overload above would be fine. It's applied to the base class, so it's automatically applied to all the derivations. The only thing "missing" is that it's not applied to the current binary_oarchive. But I don't think that is an issue anymore. If it is, it's outside the context of the high performance computing archive (hpc_oarchive) and, if someone is interested, he can apply my adaptor.

So all the "enhancements" can be applied without requiring changes in the core library and without requiring serialization authors to be aware which enhancements need to be used with which archives. This is the view I've advocated from the beginning. Robert Ramey

Robert Ramey wrote:
Changing the names a little bit for clarity, I always anticipated archive developers would use something like the following:
template<class T>
void save(boost::hpc_archive & ar, const std::valarray<T> & t ...){
    const unsigned int count = t.size();
    ar << count;
    save_array(ar, get_data(t), t.size());
}
This would apply the save_array enhancement to all classes derived from hpc_archive. In fact I would expect that this is the way people are doing it now.
This doesn't work well for several reasons. First, the static type of ar is hpc_archive, so hpc_archive must be a polymorphic archive base, and this is not desirable since it's High Performance. Second, if you add an overload for std::vector: template<class T> void save(boost::hpc_archive &ar, const std::vector<T> & t...) the version in the Serialization library will take precedence since it's an exact match for the archive argument, and the overload above requires a derived to base conversion. Even if it did work, I don't see in which circumstances a class author would like to _not_ take advantage of the save_array enhancement. Ideally, he should just call save_array in save, without restricting it to a specific set of archives. I don't see what you gain by denying him this opportunity - assuming that it can be provided without negative consequences for the current code base or existing archive formats.

"Peter Dimov" <pdimov@mmltd.net> writes:
Even if it did work, I don't see in which circumstances a class author would like to _not_ take advantage of the save_array enhancement. Ideally, he should just call save_array in save, without restricting it to a specific set of archives. I don't see what you gain by denying him this opportunity - assuming that it can be provided without negative consequences for the current code base or existing archive formats.
While I agree with this argument, it's been made more times than I can count, to no avail. I don't see why it should succeed this time, even coming from you. It seems to me it can only make Robert feel more beleaguered. I'd really like to remove the pressure from Robert to do what the rest of us think is best so that he can consider the following (quoting myself): If Robert insists [on a non-intrusive design], he is also buying into a situation where this function in the add-on library has to be used by every serialization function that _might_ be used in a performance-critical context, and every archive choice made in what _might_ be a performance-critical context must come from the add-on library, if an appropriate archive exists there (I am thinking e.g., of binary archives that would be present in the add-on library while text-based archives probably would not). That's what I want him to think about. If he understands what that means and prefers to avoid intrusion on the library design anyway, Matthias and I are willing to accept that and never bring it up again. After three years of hammering on this one point I can't blame Robert for being tired, and I have no reason to believe new arguments are likely to change his mind about it. Robert, I am happy to elaborate on the first paragraph if you still don't understand what I'm talking about. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Peter Dimov wrote:
Robert Ramey wrote:
Changing the names a little bit for clarity, I always anticipated archive developers would use something like the following:
template<class T>
void save(boost::hpc_archive & ar, const std::valarray<T> & t ...){
    const unsigned int count = t.size();
    ar << count;
    save_array(ar, get_data(t), t.size());
}
This would apply the save_array enhancement to all classes derived from hpc_archive. In fact I would expect that this is the way people are doing it now.
This doesn't work well for several reasons. First, the static type of ar is hpc_archive, so hpc_archive must be a polymorphic archive base, and this is not desirable since it's High Performance.
The motivating use case for this discussion has been a benchmark which uses save_array to replace 1000000000 invocations of save_binary, each writing one byte, with one invocation of save_binary writing 100000000 bytes. I doubt that the overhead of one call through a virtual function table will be very significant here. Second, it's not clear that it will always need to be a virtual function. Actually, I was thinking that the default implementation of save_array for hpc_oarchive would be just to invoke save_binary. Any derived classes - e.g. MPI_oarchive or whatever - would implement their own versions. So one would have:

template<class T>
void save(boost::MPI_archive & ar, const std::valarray<T> & t ...){
    // assuming save_binary isn't a good implementation
    // use another one.
    ...
}

Since the serialization library templates make calls through the most derived class, I would expect the appropriate function to be invoked without going through any virtual function table.
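A minimal sketch of my own of the arrangement described here: a non-virtual save_array whose default in the base simply forwards to save_binary, with a derived archive hiding it with its own version. The class names follow the thread; the bodies are placeholders, not library code.

#include <cstddef>

class hpc_oarchive {
public:
    void save_binary(const void * /*address*/, std::size_t /*count*/){
        // write the raw block to the underlying sink (omitted)
    }
    // default: hand the whole contiguous block to save_binary in one call
    template<class T>
    void save_array(const T * p, std::size_t n){
        save_binary(p, n * sizeof(T));
    }
};

class mpi_oarchive : public hpc_oarchive {
public:
    // hides the base version; chosen whenever dispatch happens on the most derived type
    template<class T>
    void save_array(const T * /*p*/, std::size_t /*n*/){
        // hand the block to MPI instead of save_binary (omitted)
    }
};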
Second, if you add an overload for std::vector:
template<class T> void save(boost::hpc_archive &ar, const std::vector<T> & t...)
the version in the Serialization library will take precedence since it's an exact match for the archive argument, and the overload above requires a derived to base conversion.
Hmmm the version in the serialization library looks like:
template<class Archive, class T> void save(Archive &ar, const std::vector<T> & t...)
I was pretty sure that a conversion from a derived class to a base would take precedence over the more general case. Now I'm not so sure. I'll double-check this.
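A small self-contained test (my own sketch, with stand-in archive types) that one could compile to check this point. It bears out Peter's description: called with a derived archive, the fully generic function template is an exact match after deduction and beats the overload taking the base class by reference.

#include <iostream>
#include <vector>

struct hpc_oarchive {};
struct mpi_oarchive : hpc_oarchive {};

// stand-in for the library's generic overload
template<class Archive, class T>
void save(Archive &, const std::vector<T> &){ std::cout << "generic library version\n"; }

// stand-in for the base-class overload
template<class T>
void save(hpc_oarchive &, const std::vector<T> &){ std::cout << "hpc_oarchive overload\n"; }

int main(){
    std::vector<int> v;
    mpi_oarchive mpi;
    hpc_oarchive hpc;
    save(mpi, v);   // "generic library version": exact match beats derived-to-base conversion
    save(hpc, v);   // "hpc_oarchive overload": both exact matches, the more specialized template wins
}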
Even if it did work, I don't see in which circumstances a class author would like to _not_ take advantage of the save_array enhancement.
LOL, it's impossible to predict things like this. We can't think of everything ahead of time.
Ideally, he should just call save_array in save, without restricting it to a specific set of archives. I don't see what you gain by denying him this opportunity
I know it seems attractive, and it's just "One More Small Little Thing", and if it were the "Last Thing" I might be able to see it. But it's actually the "First Thing" of this nature.
- assuming that it can be provided without negative consequences for the current code base
All the current archives would have to be modified in some way to add this function and its default implementation.
or existing archive formats.
that would remain to be seen. Verifying this could be difficult. Robert Ramey

Robert Ramey wrote:
Peter Dimov wrote:
- assuming that it can be provided without negative consequences for the current code base
All the current archives would have to be modified in some way to add this function and its default implementation.
A good proposal should not require any changes to existing archives that don't need to take advantage of the array optimization. The default behavior of the library ought to remain the same.
or existing archive formats.
that would remain to be seen. Verifying this could be difficult.
We could add tests for that. For most archives that implement the array functionality (including the binary archives, if we decide to enhance them), the optimization should be just that, an optimization; it should produce the same results as before, just in less time. I understand now that the task is not as trivial as it first seemed. But if we were to propose something along these lines that satisfies the above constraints, would you be willing to consider it for inclusion?

On Nov 27, 2005, at 5:36 PM, Robert Ramey wrote:
Matthias Troyer wrote:
----------------------------------------------------
template<class Base, class T>
void save(
    boost::archive::bitwise_oarchive_adaptor<Base> & ar,
    const std::valarray<T> & t
){
    const unsigned int count = t.size();
    ar << count;
    save_array(ar, get_data(t), t.size());
}
----------------------------------------------------
Hmmm - did you perchance mean to use bitwise_oarchive_adaptor<Base> in the above?
Sorry, I meant

template<class Archive, class T>
void save(Archive & ar, const std::valarray<T> & t)

There is just ONE serialize function necessary to invoke either the standard or the optimized version. Matthias
Participants (6): David Abrahams, Ian McCulloch, Kim Barrett, Matthias Troyer, Peter Dimov, Robert Ramey