
On 8/18/05, Brock Peabody <brock.peabody@npcinternational.com> wrote:
The only ones I've used (MySQL, PostgreSQL, Access, DB2) have had random access. In all the code I've written I don't think I've done anything that required random access iterators for the database though.
Hm, I wasn't aware MySQL offered random-access result iteration facilities. I have only used the mysql_fetch_row interface, but now I see mysql_data_seek. Neat. However, random access says to me that the client must first receive and buffer the entire result set from the server. This is not practical for large result sets that may not fit into available memory.
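To be concrete, this is the distinction I mean (error handling and connection setup elided; header path and table names are just for illustration): mysql_data_seek only works on a result set that has been pulled entirely into client memory with mysql_store_result, while mysql_use_result streams rows from the server but is strictly sequential.

    #include <cstdio>
    #include <mysql/mysql.h>   // path varies by installation

    void print_row_42(MYSQL* conn)
    {
        mysql_query(conn, "SELECT id, name FROM customer");

        MYSQL_RES* res = mysql_store_result(conn); // whole result set buffered client-side
        mysql_data_seek(res, 41);                  // random access: jump to row 42 (0-based)
        MYSQL_ROW  row = mysql_fetch_row(res);
        if (row)
            std::printf("%s %s\n", row[0], row[1]);
        mysql_free_result(res);

        // With mysql_use_result() the rows stream one at a time and memory use
        // stays flat, but mysql_data_seek() cannot be used - sequential only.
    }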
The main reason I went with random access was because it made the implementation of the iterator and my network database proxy easier.
If there's no real use for random access iterators and some widely used databases don't support them then I agree we shouldn't either.
Perhaps two classes of iterators could be provided? RandomAccess for those platforms that support them and Sequential for the ones that don't. Actually, it should be a simple matter to build RandomAccess iterators on top of Sequential ones for those vendor APIs that don't natively support RA. Personally I don't see a lot of value in the RandomAccess iterators, but some may.
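By "build RandomAccess on top of Sequential" I mean roughly this kind of caching adapter (the names and types here are placeholders, not a proposed interface, and it obviously re-introduces the memory cost I mentioned above):

    #include <boost/function.hpp>
    #include <boost/optional.hpp>
    #include <string>
    #include <vector>

    typedef std::vector<std::string> Row;

    class cached_result
    {
        boost::function<boost::optional<Row> ()> fetch_next_; // the Sequential source
        std::vector<Row>                         cache_;      // rows seen so far
        bool                                     exhausted_;
    public:
        explicit cached_result(const boost::function<boost::optional<Row> ()>& f)
            : fetch_next_(f), exhausted_(false) {}

        // Random access: pull rows forward (caching them) until row i is known.
        const Row* at(std::size_t i)
        {
            while (!exhausted_ && cache_.size() <= i)
            {
                boost::optional<Row> r = fetch_next_();
                if (r) cache_.push_back(*r);
                else   exhausted_ = true;
            }
            return i < cache_.size() ? &cache_[i] : 0;
        }
    };

A RandomAccess iterator would then just index into something like this.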
Oops! That's easy to fix at least. If we're using the correct syntax, is there still a need to add extra virtual functions?
I think so (see below).
The way I've got this implemented, I never have to nest transactions. I don't think any of the databases I use let you nest transactions, which is why the interface is set up the way it is. Think of it as a nested transactions simulator. Without this it's almost impossible for modules to use transactions.
Agreed. FYI, the implementation in your email will emit multiple "BEGIN"s ("START TRANSACTION") but only a single "COMMIT" or "ROLLBACK", which I think is a bug.
I like your class so I'll use it in my example:
Thanks :-)
    transaction t(d);
    save_customer(d, c);
    save_order(d, o);
    t.commit();
save_customer() may be called in other situations and it may have transactions of its own or call other functions that do. Counting transaction depth allows you to do this on databases that don't allow nesting.
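Understood, and counting depth does look like the only portable way to get that behaviour. Roughly what I would expect the guard to look like, with BEGIN/COMMIT issued only at the outermost level (which also takes care of the multiple-BEGIN issue I mentioned above; the database type below is just a stand-in for your interface so the sketch is self-contained):

    #include <cstdio>

    struct database                          // stand-in; prints instead of talking to a server
    {
        void execute(const char* sql) { std::printf("SQL> %s\n", sql); }
    };

    class transaction
    {
        database&   db_;
        bool        committed_;
        static int  depth_;                  // per-connection in a real implementation
        static bool rollback_only_;
    public:
        explicit transaction(database& db) : db_(db), committed_(false)
        {
            if (depth_++ == 0)               // only the outermost scope talks to the server
            {
                db_.execute("BEGIN");
                rollback_only_ = false;
            }
        }
        void commit() { committed_ = true; }
        ~transaction()
        {
            if (!committed_)                 // an abandoned inner scope poisons the whole unit
                rollback_only_ = true;
            if (--depth_ == 0)
                db_.execute(rollback_only_ ? "ROLLBACK" : "COMMIT");
        }
    };

    int  transaction::depth_         = 0;
    bool transaction::rollback_only_ = false;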
I think it would be nice to support nested transactions on those platforms where they exist. Someone *might* have reason to use sub-transactions within a larger transaction, some of which get committed and others rolled back before the ultimate parent transaction's commit/rollback. For example, a complex business process might involve inserts/updates to multiple tables. Perhaps one of those operations fails in a way that is deemed harmless (e.g. inserting a Customer record that already exists), and we wish to roll back this sub-operation but continue processing the larger transaction towards an ultimate COMMIT. It seems to me that if the underlying vendor product supports this, a Boost.Database ought not to stand in the way of using it. In part because supporting it seems quite simple to do.
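To make that concrete, the usual vendor mechanism is SQL savepoints. A rough sketch (libpq used purely for illustration, and the tables/values are made up): the failed sub-operation is rolled back to a savepoint while the enclosing transaction still commits.

    #include <libpq-fe.h>

    static bool run(PGconn* conn, const char* sql)   // true if the command succeeded
    {
        PGresult* r  = PQexec(conn, sql);
        bool      ok = PQresultStatus(r) == PGRES_COMMAND_OK;
        PQclear(r);
        return ok;
    }

    void save_order_with_customer(PGconn* conn)
    {
        run(conn, "BEGIN");
        run(conn, "INSERT INTO orders VALUES (1, 'widget')");

        run(conn, "SAVEPOINT add_customer");
        if (!run(conn, "INSERT INTO customer VALUES (1, 'ACME')"))   // e.g. duplicate key
            run(conn, "ROLLBACK TO SAVEPOINT add_customer");         // undo just this piece
        else
            run(conn, "RELEASE SAVEPOINT add_customer");

        run(conn, "COMMIT");     // the order insert commits either way
    }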
* Prepared Statements. Sending data to a database is frequently done by preparing an SQL statement with placeholder arguments (e.g. "INSERT INTO foo ( col1, col2, ... ) VALUES ( ?, ?, ... )") and then binding program variables to those arguments and executing the prepared statement. Do you think it would be good to add this ability?
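Concretely, the pattern is prepare once, then bind and execute repeatedly (SQLite's API shown here purely as an illustration; the other client libraries have equivalents):

    #include <sqlite3.h>
    #include <string>
    #include <utility>
    #include <vector>

    void insert_rows(sqlite3* db, const std::vector<std::pair<int, std::string> >& rows)
    {
        sqlite3_stmt* stmt = 0;
        sqlite3_prepare_v2(db, "INSERT INTO foo ( col1, col2 ) VALUES ( ?, ? )",
                           -1, &stmt, 0);                 // SQL is parsed exactly once

        for (std::size_t i = 0; i < rows.size(); ++i)
        {
            sqlite3_bind_int (stmt, 1, rows[i].first);    // bind program variables
            sqlite3_bind_text(stmt, 2, rows[i].second.c_str(), -1, SQLITE_TRANSIENT);
            sqlite3_step(stmt);                           // execute this insert
            sqlite3_reset(stmt);                          // ready for the next bindings
        }
        sqlite3_finalize(stmt);
    }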
What is the benefit of building a query this way? Personally, I'd rather see a type-safe SQL building DSEL. What do you think?
That would be a very nice addition, as long as the syntax would not be too tortured :-) However, I suspect some users will come up with baroque and complex queries that may be difficult or impossible to represent in this DSEL. As with some complex grammars in Spirit, they may just be too much for the compiler to handle without choking, or they may introduce unwanted compile-time penalties.

Supporting user-generated SQL could also ease the porting of existing code that already has queries embedded in it. At the very least, Boost.Format or something like it could be used to build query strings in a type-safe and properly-escaped way.

As to prepared statements, whether they are created from strings or a DSEL, they can be a major performance win for applications which do lots of database writes. The overhead of parsing the SQL on each insert can become surprisingly high.
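Something along these lines is all I mean by the Boost.Format suggestion (it supplies the type safety; escaping the string arguments would still need separate handling, or better, be left to placeholders as in the prepared-statement example above):

    #include <boost/format.hpp>
    #include <string>

    std::string make_insert(int id, const std::string& name)
    {
        return (boost::format("INSERT INTO foo ( col1, col2 ) VALUES ( %1%, '%2%' )")
                % id % name).str();
    }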
The second (and more exciting to me) part of this library is going to be the serializer and it should provide facilities similar to but more powerful than this.
I stopped myself from making that same point, as there is potentially a large overlap with Boost.Serialization as Dave A. mentioned. I do think a generalized object description framework (e.g. some form of Reflection) could be put to great use by a Database library, the Serialization library, the Langbinding code I hope will come into being some day, and any number of other applications. The possibilities are near endless.

--
Caleb Epstein
caleb dot epstein at gmail dot com