
In this version:
* EXISTS
* consistency check in INSERT...SELECT
* DISTINCT, ALL
* transactions

The transaction stuff is peculiar. It simply maps to vendor calls. I am hesitating a bit: either make a common interface for the most common operations (like execute) directly in each backend (alongside the vendor-specific calls), or put them in a separate wrapper, or even decide that it's not in scope.

There's more redesign in the select, from which I think I can get an elegant implementation of vendor-specific features (like LIMIT). The idea is to pass all the data in a fusion map, allowing vendor-specific parts of the statement to slip their own data inside it. In fact this is also how DISTINCT works.

I dropped the composition of lists via chained calls to operator(). It's all preprocessor-based now. The () chain system was limited by FUSION_MAX_VECTOR_SIZE anyway.

I have installed MySQL on my computer. I am still accessing it via ODBC but I'll probably start the native support soon.

Yesterday I tried to upload, and just when I pushed the upload button the vault server crashed. I suspect that my upload triggered it.

J-L

Anywhere we can read some doc/tutorial/samples ? On Mon, Sep 28, 2009 at 7:36 AM, Jean-Louis Leroy <jl@yorel.be> wrote:
[snip]
-- Alp Mestan http://blog.mestan.fr/ http://alp.developpez.com/

Alp Mestan wrote:
Anywhere we can read some doc/tutorial/samples ?
I'll start documenting this week. In the meantime, look at the test suites (in libs/rdb/test); it's quite easy to understand how to use the thing - it's full of meta-stuff inside, but on the outside it's a very simple and predictable syntax. I hope ;-) J-L

Jean-Louis Leroy wrote:
[snip]
Concerning the transactions, I thought ODBC already was the common interface for beginning, committing and rolling back transactions. Please see functions SQLSetConnectAttr and SQLEndTran.

Concerning the transactions, I thought ODBC already was the common interface for beginning, committing and rolling back transactions. Please see functions SQLSetConnectAttr and SQLEndTran.

That's how they are implemented in the current drop. However, my plan is to support native bindings. In ODBC a transaction looks like this:
set autocommit off
do work
commit or roll back
do work
commit or roll back
etc

This pattern is directly reflected in the current implementation. See the test suite in libs/rdb/test/test_odbc.cpp.

Other vendors may have a different pattern:

begin transaction
do work
commit or roll back
begin transaction
do work
commit or roll back

Of course I could say that all backends have three functions: start_transaction, commit and rollback. In the case of ODBC start_transaction would be a no-op. Or maybe it would turn auto-commit off (and throw if the underlying db is not tx-capable). But I am hesitating a bit. The current philosophy of my lib is much like C's: you look at any piece of code and you know exactly how it will translate into machine code. No 17 dtor calls hidden inside a closing brace. From that point you either use rdb to write apps or use it as a foundation to build higher-level tools.

OTOH maybe I'm splitting hairs wrt transactions. Besides the two patterns above, does anybody see a third possibility? Nested transactions? They fit nicely in the second pattern...

J-L
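For concreteness, here is a rough sketch of how a three-function backend interface like the one discussed above (start_transaction, commit, rollback) could map onto the ODBC calls mentioned earlier (SQLSetConnectAttr and SQLEndTran). This is only an illustration; the class name and error handling are made up, not the library's actual interface.

  #include <sql.h>
  #include <sqlext.h>
  #include <stdexcept>

  // Sketch only: a hypothetical ODBC backend exposing start_transaction /
  // commit / rollback on top of the standard ODBC transaction calls.
  class odbc_backend {
    SQLHDBC hdbc_; // connection handle, assumed allocated and connected elsewhere
  public:
    explicit odbc_backend(SQLHDBC hdbc) : hdbc_(hdbc) {}

    // ODBC has no explicit "begin"; starting a transaction means turning auto-commit off.
    void start_transaction() {
      SQLRETURN rc = SQLSetConnectAttr(hdbc_, SQL_ATTR_AUTOCOMMIT,
                                       (SQLPOINTER) SQL_AUTOCOMMIT_OFF, 0);
      if (!SQL_SUCCEEDED(rc))
        throw std::runtime_error("could not disable auto-commit");
    }

    // commit and rollback both map to SQLEndTran on the connection handle.
    void commit()   { SQLEndTran(SQL_HANDLE_DBC, hdbc_, SQL_COMMIT); }
    void rollback() { SQLEndTran(SQL_HANDLE_DBC, hdbc_, SQL_ROLLBACK); }
  };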

On Monday 28 September 2009 18:26:49, Jean-Louis Leroy wrote:
Of course I could say that all backends have three functions: start_transaction, commit and rollback. In the case of ODBC start_transaction would be a no-op. Or maybe it would turn auto-commit off (and throw if the underlying db is not tx-capable). But I am hesitating a bit. The current philosophy of my lib is much like C's: you look at any piece of code and you know exactly how it will translate into machine code. No 17 dtor calls hidden inside a closing brace. From that point you either use rdb to write apps or use it as a foundation to build higher-level tools.
OTOH maybe I'm splitting hairs wrt transactions. Besides the two patterns above, does anybody see a third possibility ? Nested transactions ? They fit nicely in the second pattern...
I'm not sure if the BEGIN/COMMIT syntax is supported by some vendors for nested transactions, but others definitely only support SAVEPOINT/RELEASE. So you'd have to break with your concern that you'd like the rdb SQL syntax to look like what is executed under the hood anyway, if I understood it correctly.

Another thing, probably minor at this stage, that you might want to consider is 2-phase transactions, which would add an additional PREPARE statement, and a way to retrieve the transactions that are in prepared state after a crash, to the interface. See e.g. http://dev.mysql.com/doc/refman/5.0/en/xa-statements.html
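To make the savepoint variant concrete, here is a rough sketch of how nested levels could be mapped onto savepoints. Everything in it is illustrative: the database type and its execute() helper are assumptions, not part of rdb, and the exact savepoint syntax differs between vendors.

  #include <sstream>
  #include <string>

  struct database { // stand-in for a real backend connection
    void execute(const std::string& sql) { /* send the statement to the server */ }
  };

  // Illustrative only: nested "transaction" levels emulated with savepoints.
  class savepoint_scope {
    database& db_;
    int level_;         // 0 = outermost level
    std::string name_;  // savepoint name for nested levels
    bool done_;
  public:
    savepoint_scope(database& db, int level) : db_(db), level_(level), done_(false) {
      std::ostringstream os;
      os << "sp_" << level_;
      name_ = os.str();
      if (level_ == 0) db_.execute("START TRANSACTION");
      else db_.execute("SAVEPOINT " + name_);
    }
    void commit() {
      if (level_ == 0) db_.execute("COMMIT");
      else db_.execute("RELEASE SAVEPOINT " + name_);
      done_ = true;
    }
    ~savepoint_scope() {
      if (done_) return;
      if (level_ == 0) db_.execute("ROLLBACK");
      else db_.execute("ROLLBACK TO SAVEPOINT " + name_);
    }
  };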

Jean-Louis Leroy wrote:
my plan is to support native bindings. In ODBC a transaction looks like this :
set autocommit off
do work
commit or roll back
do work
commit or roll back
etc
This pattern is directly reflected in the current implementation. See the test suite in libs/rdb/test/test_odbc.cpp.
Other vendors may have a different pattern :
begin transaction
do work
commit or roll back
begin transaction
do work
commit or roll back
If you provide an RAII class that does a start/begin/whatever in the ctor, a roll back in the dtor unless cancelled or committed, and provides member functions to commit or cancel on demand, then all back end schemes should be covered. The ctor can throw an exception should a particular back end not support transactions.

Whether a back end supports nested transactions or not puts a wrinkle in the abstraction, of course. I suggest that you model nested transactions and simulate them for back ends that don't support them. In the latter case, the transaction class must use functions in some implementation layer that will track outstanding "nested" transaction objects in order to correctly decide whether to actually roll back when a transaction object's dtor runs or when commit() and roll_back() are called, based upon what has already happened to the underlying, tracked state.

_____
Rob Stewart robert.stewart@sig.com
Software Engineer, Core Software using std::disclaimer;
Susquehanna International Group, LLP http://www.sig.com
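A minimal sketch of the RAII class described above, under the assumption of a backend connection type with begin/commit/rollback operations; the Database type and its members are placeholders, not the actual rdb interface.

  // Sketch of the RAII transaction class described above.
  struct Database { // placeholder backend connection
    void begin_transaction() { /* may throw if transactions are unsupported */ }
    void commit() {}
    void rollback() {}
  };

  class transaction {
    Database& db_;
    bool done_; // true once committed or cancelled
  public:
    explicit transaction(Database& db) : db_(db), done_(false) {
      db_.begin_transaction();
    }
    ~transaction() {
      if (!done_) db_.rollback(); // roll back unless committed or cancelled
    }
    void commit() { db_.commit(); done_ = true; }
    void cancel() { db_.rollback(); done_ = true; }
  private:
    transaction(const transaction&);            // non-copyable
    transaction& operator=(const transaction&);
  };

Used as transaction tx(db); ... tx.commit(); leaving the scope without committing rolls back.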

Hi,

First, thanks for trying to provide yet another relational database C++ front-end library.

I have some general remarks. As the query language the DSLE emulates is SQL, don't you think that you could put it in a specific sql directory and namespace? Other query languages can be provided, as Joel has already shown.

More inline ...

----- Original Message ----- From: "Stewart, Robert" <Robert.Stewart@sig.com> To: "List to discuss standard rdb" <std_rdb@mail-lists.crystalclearsoftware.com>; <boost@lists.boost.org> Sent: Wednesday, September 30, 2009 5:37 PM Subject: Re: [boost] [std_rdb] [rdb] 0.0.09
[snip]
If you provide an RAII class that does a start/begin/whatever in the ctor, a roll back in the dtor unless cancelled or committed, and provides member functions to commit or cancel on demand, then all back end schemes should be covered.
I agree that a transaction abstraction is needed to encapsulate the patterns:

transaction() { db.set_autocommit(off); }
~transaction() { if not committed db.rollback(); db.set_autocommit(on); }

or

transaction() { db.begin_transaction(); }
~transaction() { if not committed db.rollback(); }

Note please the following C++ schema:

for (transaction T; !T.committed() && T.restart(); T.commit()) {
}

That allows us to reiterate as long as the transaction has not successfully committed. The transaction will of course be rolled back in its destructor if not committed. This schema has the advantage of preserving the block structure, and the library could provide a macro

#define BOOST_RDB_ATOMIC(T) for (transaction T; !T.committed() && T.restart(); T.commit())

that allows language-like atomic blocks:

BOOST_RDB_ATOMIC(T) { // do something atomic }

TBoost.STM already provides some of these language-like macros.

The autocommit feature can also be emulated for backends that don't provide it. Autocommit off would create a hidden transaction, and the commit and rollback functions would then restart that hidden transaction. Autocommit on would associate a transaction with the execute function. If Boost.RDB provides both, the user is free to choose his preferred style.
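Here is a self-contained sketch of a transaction type that could drive that for-loop schema; unlike the snippet above it takes the database explicitly, and the database and rdb_error types are stand-ins, not part of the library.

  #include <stdexcept>

  struct rdb_error : std::runtime_error { // stand-in for the library's exception type
    explicit rdb_error(const char* w) : std::runtime_error(w) {}
  };

  struct database { // stand-in for a backend connection
    void begin_transaction() {}
    void commit() {}
    void rollback() {}
  };

  class transaction {
    database& db_;
    bool committed_;
  public:
    explicit transaction(database& db) : db_(db), committed_(false) {}
    ~transaction() { if (!committed_) db_.rollback(); }
    bool committed() const { return committed_; }
    bool restart() { db_.begin_transaction(); return true; } // (re)start the work
    void commit() {
      try { db_.commit(); committed_ = true; }
      catch (const rdb_error&) { /* leave committed_ false so the loop retries */ }
    }
  };

  // Variant of the macro above that names the database explicitly.
  #define BOOST_RDB_ATOMIC(db, T) \
    for (transaction T(db); !T.committed() && T.restart(); T.commit())

  void example(database& db) {
    BOOST_RDB_ATOMIC(db, tx) {
      // do something atomic; the block runs again if the commit did not succeed
    }
  }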
The ctor can throw an exception should a particular back end not support transactions.
If the backend does not support a feature, wouldn't it be preferable to have that information at compile time?
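One way to surface that at compile time (purely a sketch; neither the trait nor the backend class below exists in the library) is a per-backend capability trait checked with a static assertion:

  #include <boost/mpl/bool.hpp>
  #include <boost/static_assert.hpp>

  // Hypothetical capability trait: backends advertise transaction support
  // at compile time, so misuse becomes a compile error instead of a throw.
  template<class Backend>
  struct supports_transactions : boost::mpl::false_ {};

  class odbc_backend; // example backend assumed to support transactions
  template<>
  struct supports_transactions<odbc_backend> : boost::mpl::true_ {};

  template<class Backend>
  class transaction {
    BOOST_STATIC_ASSERT(supports_transactions<Backend>::value);
    // ... begin in the ctor, roll back in the dtor, as discussed above ...
  };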
Whether a back end supports nested transactions or not puts a wrinkle in the abstraction, of course. I suggest that you model nested transactions and simulate them for back ends that don't support them. In the latter case, the transaction class must use functions in some implementation layer that will track outstanding "nested" transaction objects in order to correctly decide whether to actually roll back when a transaction object's dtor runs or when commit() and roll_back() are called based upon what has already happened to the underlying, tracked state.
I agree again. This seems not too complex to simulate. Best regards, Vicente

First, thanks for trying to provide yet another relational database C++ front-end library.
Well sometimes I feel that I am re-inventing the wheel (except maybe the static typing) but I need a Boost rdb layer to base a Boost object-relational mapper upon so...
I have some general remarks. As the query language the DSLE emulates is SQL, don't you think that you could put it in a specific sql directory and namespace? Other query languages can be provided, as Joel has already shown.
Well, I thought exactly the same last night. I also know that there is another query language, closer to relational algebra, that some people would want to have. So I put everything SQL in boost::rdb::sql and its own directories. rdb::odbc only minds that the concepts are properly implemented, except for one type: select_statement_tag, which maybe doesn't really belong to rdb::sql. Other query languages would also make it possible to create queries, wouldn't they? Maybe I should rename it to query_tag and put it straight in boost::rdb.
That allows us to reiterate as long as the transaction has not successfully committed. The transaction will of course be rolled back in its destructor if not committed. This schema has the advantage of preserving the block structure, and the library could provide a macro.
#define BOOST_RDB_ATOMIC(T) for (transaction T; !T.committed() && T.restart(); T.commit())
that allows to have language-like atomic blocks
BOOST_RDB_ATOMIC(T) { // do something atomic }
Those macros look nice. However, I agree with Stefan. Currently I can define the scope of my work in one sentence, without any "except for"s. OTOH I know that if I had to use my own lib right now, one of the first things I'd do would be to implement features like the ones you describe. I had them in my previous work on ORM (nested tx implemented as a nesting count: when it drops to zero, commit, etc). Maybe this could go in a decorator class in an rdb::utility namespace? Hmm. In fact this looks like a Database; in addition to rdb::odbc, rdb::mysql etc we could have yet another "backend" with extra functionality... J-L
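For what it's worth, the nesting-count idea can be sketched as a thin decorator over a backend; the Backend interface assumed here (start_transaction, commit, rollback) is hypothetical, not the actual rdb one.

  // Sketch of nested transactions emulated with a nesting count, as a
  // decorator over a backend connection; all names are illustrative.
  template<class Backend>
  class nested_tx {
    Backend& db_;
    int depth_;          // number of open "nested" transactions
    bool rollback_only_; // set when an inner level rolls back
  public:
    explicit nested_tx(Backend& db) : db_(db), depth_(0), rollback_only_(false) {}

    void begin() {
      if (depth_++ == 0) db_.start_transaction(); // only the outermost level hits the db
    }
    void commit() {
      if (--depth_ == 0) {
        if (rollback_only_) db_.rollback(); else db_.commit();
        rollback_only_ = false;
      }
    }
    void rollback() {
      rollback_only_ = true;
      if (--depth_ == 0) {
        db_.rollback();
        rollback_only_ = false;
      }
    }
  };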

----- Original Message ----- From: "Jean-Louis Leroy" <jl@yorel.be> To: <boost@lists.boost.org> Sent: Thursday, October 01, 2009 6:53 PM Subject: Re: [boost] [std_rdb] [rdb] 0.0.09
Well I thought exactly the same last night. I also know that there is another query language, closer to relational algebra, that some people would want to have.
So I put everything SQL in boost::rdb::sql and its own directories. rdb::odbc only minds that the concepts are properly implemented, except for one type: select_statement_tag, which maybe doesn't really belong to rdb::sql. Other query languages would also make it possible to create queries, wouldn't they? Maybe I should rename it to query_tag and put it straight in boost::rdb.

This seems a better organization to me.
Those macros look nice. However, I agree with Stefan. Currently I can define the scope of my work in one sentence, without any "except for"s. OTOH I know that if I had to use my own lib right now, one of the first things I'd do would be to implement features like the ones you describe. I had them in my previous work on ORM (nested tx implemented as a nesting count: when it drops to zero, commit, etc).
Maybe this could go in a decorator class in an rdb::utility namespace? Hmm. In fact this looks like a Database; in addition to rdb::odbc, rdb::mysql etc we could have yet another "backend" with extra functionality...

IMHO, even if you don't provide these features in your library, you must prove that they can be implemented without any change. I think this can be done as examples. Only then can we see that your library is open, and decide whether these functionalities should be left to the users of the library or included directly. So, I'm waiting for the implementation of the suggested features without changes to the library. Best, Vicente

Hi J-L,

you seem to be making good progress on your rdb library, so I wanted to share with you the interface of boost.persistent that is intended to integrate a boost object-relational mapper. boost.persistent implements many of Vicente's ideas, like RAII transactions, a macro like the one shown below, smart pointers to access database objects as if they were regular objects managed by a boost shared_ptr, etc. I'm working on making boost.persistent generic enough to integrate other backends than the default one (which implements transactions and storage itself, without an RDBMS backend).

So here's the preliminary interface, feedback is welcome. There are many supporting classes for serialization, object caching etc, but they are all called by the resource manager, so the o-r mapping to statically defined rdb tables that you prefer can also be implemented.

concept ResourceManager{
public:
  typedef implementation-defined transaction_token;
  typedef implementation-defined object_transaction_state;

  transaction_token begin_root_transaction();
  transaction_token begin_nested_transaction(transaction_token parent);
  void commit_transaction(transaction_token);
  void rollback_transaction(transaction_token);

  template<class T> shared_ptr<object> new_object(transaction_token, T *);
  template<class T> shared_ptr<instance> get_instance(transaction_token, shared_ptr<object> const &);
  template<class T> shared_ptr<instance const> get_read_instance(transaction_token, shared_ptr<object> const &);
  template<class T> shared_ptr<instance> get_write_instance(transaction_token, shared_ptr<object> const &);
  void remove_object(transaction_token, shared_ptr<object> const &);

  // resource managers that support reference counted objects:
  template<class T> shared_ptr<object> new_shared_object(transaction_token, T *);
  void make_object_shared(transaction_token, shared_ptr<object> const &);
  void count_object(transaction_token, shared_ptr<object> const &, int strong, int weak);
  bool object_expired(transaction_token, shared_ptr<object> const &);

  // resource managers that support distributed transactions:
  void prepare_transaction(transaction_token);
};

The interface is not intended to be called by the user, but is called by a transaction manager. Another way to plug in another backend is implementing a StorageEngine and using the default resource manager, but a storage engine only implements object I/O, so if you wanted the RDBMS to handle the transactions, you'd implement a resource manager.

On Thursday 01 October 2009 18:53:33, Jean-Louis Leroy wrote:
[snip]

Hello Stefan. Thanks for the preview. It looks like your work is several layers above mine, so there may be room to slip my lib under yours. Whether or not it will help much is difficult to judge by looking at the interface you posted. My first object-relational mapper used Rogue Wave DBTools++. It did help a lot, but it was a run-time system and my mapper was of that kind too (that was in 1997 anyway, a long time before it was "discovered" all that you could do with templates).

So currently you do all the binding to the db yourself? Which one do you use? What kind of mapping do you plan to support? Vertical, filtered, horizontal, or let the user decide? Does your tool create the tables (typical for object-relational mappers) or does it "objectify" an existing database (what I call "relational-object mapping")?

One of the many problems with ORM in the presence of transactions is that you (probably) have a table that maps between persistent objects and object ids. If new objects are persisted in a transaction and the transaction is rolled back, the map must be rolled back too...

J-L

On Sunday 04 October 2009 01:27:32, you wrote:
So currently you do all the binding to the db yourself ? Which one do you use ? What kind of mapping do you plan to support ? Vertical, filtered, horizontal, or let the user decide ? Does your tool create the tables (typical for object-relational mappers) or does it "objectify" an existing database (what I call "relational-object mapping") ?
My library doesn't do o-r-mapping. It started out as a library that lets you transparently store objects with an interface as close as possible to using objects in memory. It doesn't use an external DB but handles all storage and transaction issues itself. One could call it an object database, but I generally don't, since from a "database" you'd expect a few more things, like a query language etc. That's what it was until I started making the internals more generic so you can plug in other backends at different points. One of those points is using a different ResourceManager with the interface shown, so you could e.g. implement one that stores the objects mapped to tables in an RDBMS, instead of my database files.
One of the many problems with ORM in presence of transactions is that you (probably) have a table that maps between persistent objects and object ids. If new objects are persisted in a transaction and the transaction is rolled back, the map must be rolled back too...
In my own implementation of a resource manager there are indeed object ids, but this is up to the implementation of the resource manager. Object ids are not exposed to the user but abstracted by what I call a "locator". It resembles the concept of a pointer: a locator describes an object "somewhere", either in memory, in a file on disk, or mapped in an RDBMS. When a locator is dereferenced, it moves the object to memory and returns a pointer to it. So object ids are hidden from the user, and the implementation of the resource manager (or storage engine) can choose how to store objects and how to assign ids, as well as the type of mapping, like vertical etc. I have not implemented any o-r mapping so far.

E.g. a linked list of db objects:

  shared_loc<node> current=...;
  transaction tx;
  while(current){
    current->value++;
    current=current->next;
  }
  tx.commit();

shared_loc::operator-> in this example would eventually call get_instance() in the resource manager interface in my last email to obtain an actual object instance for the user to read and modify. How this object is obtained is up to the resource manager implementation, in this case an o-r mapper.

Most of my library is stuff to accomplish transactional object storage without an RDBMS backend, so implementing another resource manager is like replacing half my library, so you might wonder what's the benefit of integrating the two, besides reusing a few smart pointers and transaction objects. But I still think it's a good abstraction. When I'm finished with the generalization you can replace one backend with another with a few typedefs, so you could develop a prototype of an app using my object storage backend and then define the mappings and move to an RDBMS without any changes to the code.

Also, you could use more than one resource manager at a time, so you could have distributed transactions between two databases, between a file and a database, etc., and can have references across resources. (A locator can reference an object in any resource, and can be stored as part of an object.)

I have some documentation ready, but it still refers to the state of the library when there was no way to exchange the components to support other backends. Most of the code is in this stage, too; the generalization is in its early stages.

Stefan Strasser wrote:
Also, you could use more than one resource manager at a time, so you could have distributed transactions between two databases, between a file and a database, etc., and can have references across resources. (A locator can reference an object in any resource, and can be stored as part of an object.)
Very interesting project. At this point my feeling is that there is room to slip not one but two of my libs under yours ! J-L

Hi Vicente,

I think these are all good ideas, but I don't think a generic relational database layer is the right place to put them. Take, e.g., extending BoostSTM to support making some objects persistent: a layer that emulates nested transactions, begins and rolls back transactions based on RAII, etc. would be an obstacle.

Abstracting the features a backend DOES support and providing a generic Boost interface to them is hard enough, given the various SQL dialects, and is a good basis on which to implement your ideas. If you start to emulate features a db doesn't support, you end up emulating isolation levels the db doesn't support (not trivial), emulating 2-phase transactions, and eventually emulating half an RDBMS.

On Thursday 01 October 2009 08:06:48, vicente.botet wrote:
[snip]

Hi Stefan,

----- Original Message ----- From: "Stefan Strasser" <strasser@uni-bremen.de> To: <boost@lists.boost.org> Sent: Thursday, October 01, 2009 8:16 PM Subject: Re: [boost] [std_rdb] [rdb] 0.0.09
Hi Vicente,
I think these are all good ideas, but I don't think a generic relational database layer is the right place to put them. take e.g. if you wanted to extend BoostSTM to support making some objects persistent. a layer that emulates nested transactions, begins and rolls back transactions based on RAII etc. would be an obstacle.
I don't see why this could be an obstacle. Could you clarify your concern?
abstracting the features a backend DOES support and provide a generic boost interface to them is hard enough given the various SQL dialects, and is a good basis to implement your ideas.
I agree. I think the work done on Boost.RDB could be great. If we can implement my ideas on top of the RDB library, this will mean that the library is open enough, but we need to prove it. Could my proposed features be implemented without any change? Otherwise we need to see what is needed at the library interface to allow it. Only then could the proposed interface be considered closed. Of course this is a personal opinion.

If you start to emulate features a db doesn't support, you end up emulating isolation levels the db doesn't support (not trivial), emulating 2-phase transactions, and eventually emulating half an RDBMS.

Well, I've not asked to emulate 2-phase transactions, or at least not yet. If the backend does not support a given abstraction, it is good to know it, and in my opinion a compile-time error would be best in this case. Best regards, Vicente
participants (6)
- Alp Mestan
- Jarrad Waterloo
- Jean-Louis Leroy
- Stefan Strasser
- Stewart, Robert
- vicente.botet