[rdb] Uploaded 0.0.07

This version features:
* insert...set syntax (but no insert...select yet)
* update [where]
* delete from [where]
* create and drop follow the same syntax pattern as the others
* attempts to simplify the internals (in insert; select still needs a second look)
* some concept checks

It's *beginning* to look like something one could use in simple real apps. What's missing before it qualifies is, imo, 'like', 'exists' and transactions. That should not take long.

This version compiles with msvc9 and gcc 4.4.1. I tested on Win32 with an Access db via ODBC. The sql engine tests pass on Linux but I have not tried the ODBC test suite on Unix yet. If somebody tries it, please tell me.

I have changed the naming scheme: the first number is the major revision; the second is the milestone (crud with direct execution; transactions; prepared statements; dialects; constraints; dynamic sql, which btw I am now nearly sure I can fit elegantly in my framework; views). The third number is just there for getting ++'ed.

The test suite is still the doc, but after the transactions I'll begin documenting. Probably with QuickBook.

It's funny to see that some people are still downloading 0.04. As I write this the download count bumped to 21 :-D To avoid confusion due to the change in the versioning scheme I deleted the old versions.

J-L

Since you're using ODBC, does that mean that, in theory, you can just bind your library to any RDBMS supporting ODBC?

(On Wed, Sep 23, 2009 at 10:51 PM, Jean-Louis Leroy <jl@yorel.be> posted the announcement above.)
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
-- Alp Mestan http://blog.mestan.fr/ http://alp.developpez.com/

Alp Mestan wrote:
Since you're using ODBC, theoretically you can just bind your library to any RDBMS supporting ODBC ?
Indeed. That's why I chose it as the first backend to support, but in time I hope we'll have native support for all major vendors, both for speed and for supporting vendor-specific features (e.g. table or row locking). J-L

On Thu, Sep 24, 2009 at 11:48 AM, Jean-Louis Leroy <jl@yorel.be> wrote:
Indeed. That's why I used it as the first backend to support, but with time I hope that we'll have native support for all major vendors, for speed and for supporting specific features (e.g. table or row locking).
From my experience, many ODBC wrappers around DBs tend to incur speed hits, as well as not properly optimize calls for the underlying DB.
I still say a Boost.RDB should be modeled after Python's SQLAlchemy library. It has a wonderful design (I can say so after using it for a year), and it optimizes your SQL for the underlying DB (so some backends may use joins, some may use other things, etc., whatever is best for that DB).

Such a thing can be targeted only once rdb is able to build and run queries covering a large subset of the SQL standard through an easy-to-use DSEL. Before that, no RDBMS expert would be willing to help with such tasks, since I don't think pure Oracle, MySQL, PostgreSQL, etc. experts are also C++ experts and Boost developers at the same time (correct me if I'm wrong, though).

(On Thu, Sep 24, 2009 at 11:36 PM, OvermindDL1 <overminddl1@gmail.com> posted the message above.)
-- Alp Mestan http://blog.mestan.fr/ http://alp.developpez.com/

I still say a Boost.RDB should be modeled after Python's SQLAlchemy library, such a wonderful design after using it for a year, and it optimizes your SQL for the underlying DB (so some may use joins, some may use other things, etc... whatever is best for the DB).

After having a look at SQLAlchemy, I have the impression that what I am working on is just a tiny part of it, namely what is described here: http://www.sqlalchemy.org/docs/05/sqlexpression.html

The goal of this lib is not ORM or anything; it merely tries to make it possible to talk to the database with fine control over the SQL, and in a typesafe manner at that (something Python doesn't care about). It's just "embedded SQL done in C++". If it makes decisions behind your back, then it's missing its goal. Later, higher-level tools, among them ORMs, can be built upon it. That is my initial goal in fact: to build an ORM.

J-L

On Thu, Sep 24, 2009 at 4:22 PM, Jean-Louis Leroy <jl@yorel.be> wrote:
The goal of this lib is not ORM or anything, it merely tries to make it possible to talk to the database with fine control over the sql, and in a typesafe manner at that (something Python doesn't care about). It's just "embedded SQL done in C++". If it makes decisions behing your back, then it's missing its goal.
Later higher-level tools - among them ORMs - can be built upon it. That is my initial goal in fact - build an ORM.
Ah, useful. :) For note, Python is strongly typed: you have to be correct about what you send to the DB with regards to types. If you send an int to a string column, or a string to an int, or whatever, it will throw an exception.

Ah, useful. :)
For note, Python is strongly-typed, have to be correct about what you send to the DB with regards to types, if you send an int to a string column, or a string to an int, or whatever, it will throw an exception.

I know that it's typed; it's *dynamically* typed. C++ is *statically* typed. "Type-safety" is a term that has a well-defined meaning in C++ parlance: it means that type checking happens at compile time. And btw I am *not* a static-analysis bigot, as my other language of expertise is Perl and my favorite is Lisp. J-L

On Thu, Sep 24, 2009 at 4:44 PM, Jean-Louis Leroy <jl@yorel.be> wrote:
I know that it's typed - it's *dynamically* typed. C++ is *statically* typed. "Type-safety" is a term that has a well-defined meaning in C++ parlance : it means that type checking happens at compile-time. And btw I am *not* a static analysis bigot, as my other language of expertise is Perl and my favorite is Lisp.
Lisp is just wonderful. :)

Actually, the link you gave is exactly what the base of SQLAlchemy is (and mostly what I use; I rarely use the ORM parts). That is the part that optimizes expressions specifically for the back-end. When you create the database object, it returns a refinement of that object; anything you create from it will be optimized specifically for that DB. It is a lot more work, but it allows it to be the most efficient SQL library for Python, and allows new SQL dialects to be added with no front-end change.

OvermindDL1 wrote:
Lisp is just wonderful. :)
Actually, the link you gave is exactly what the base of SQLAlchemy is (and mostly what I use; I rarely use the ORM parts). That is the part that optimizes expressions specifically for the back-end. When you create the database object, it returns a refinement of that object; anything you create from it will be optimized specifically for that DB. It is a lot more work, but it allows it to be the most efficient SQL library for Python, and allows new SQL dialects to be added with no front-end change.

I believe you are referring to cases where standard SQL is not enough, either because the database vendor supports it poorly or not at all, or because there is some feature that application developers need which isn't standard SQL but which most databases offer via their proprietary extensions. The following examples highlight the need for the generated SQL to vary on the database type and version, both of which, if memory serves, are available from ODBC.
When standard SQL is not enough, either because the database vendor supports it poorly or not at all:

1) Consider table value constructors according to SQL-92 or SQL-99. There are three ways to insert data: INSERT ... VALUES (...) for one row, INSERT ... SELECT for many rows, and the table value constructor, which is similar to VALUES but supports multiple rows of static data: INSERT ... VALUES (...),(...),(...),(...). Each of the (...) is considered a tuple. Many databases support the standard, but some don't support tuples, whether as table or row value constructors. Consider Microsoft SQL Server 2000 and 2005: neither supports table value constructors, but they can emulate them with INSERT ... SELECT UNION [ALL] SELECT UNION [ALL] SELECT. Microsoft SQL Server 2008 supports both table and row value constructors but mistakenly calls them both row value constructors. A table value constructor, even if emulated, is required by application developers because it is significantly faster than running an INSERT hundreds of times. Also, because it is all in one statement, it mitigates deadlocks, since single statements are naturally atomic. I am not familiar with SQLAlchemy, but Hibernate requires the dialect to be specified in its configuration. So for MSSQL 2000 and 2005 the boolean field 'supports table value constructor' would be set to false, and the framework would generate the correct SQL.

When it isn't standard SQL but most databases offer it via their proprietary extensions:

2) Auto-increment is common practice though not necessarily standard. As such, after doing a single insert it would be nice to get the generated id:

MSSQL: SELECT SCOPE_IDENTITY()
MySQL: SELECT LAST_INSERT_ID()
SQLite: SELECT last_insert_rowid()
Oracle: earlier versions unfortunately didn't support auto-increment; instead they used sequences, which are named, queryable number generators.

Currently I have only noticed JDBC providing the low-level APIs for doing so, and there is even work on multiple inserts.
I am not sure how they perform that in standard SQL, if it is even possible, and currently I only know of MSSQL 2005 and above supporting a select of the before and after image on inserts, updates and deletes.

3) At some point developers are going to want to build tools and will need a properly abstracted way of getting the metadata. Because SQLite doesn't support INFORMATION_SCHEMA, its dialect would require a more elaborate code-based transformation, parsing sqlite_master to glean the information.

4) Paging support is potentially complex but essential for developers. Many databases such as MySQL and SQLite make it easy by providing LIMIT and OFFSET clauses that get tacked onto one's SQL statement. Once again, MSSQL makes it unbearable by requiring one's SQL to be massively transformed to generate row numbers and to filter on them.

I hope these examples help illustrate that some transformation needs to be performed earlier rather than later. By making some minor concessions in one library, we can keep developers from having to tackle these issues over and over again, resulting in more portable code. With over a decade of database experience using Oracle, SQL Server, MySQL, SQLite and others through ODBC, OLEDB, ADO, ADO.NET and JDBC, and also using Hibernate, NHibernate and LINQ, I would be glad to volunteer my time to help identify and justify these and other requirements.

On Fri, Sep 25, 2009 at 8:30 AM, Jarrad Waterloo <jwaterloo@dynamicquest.com> wrote:
I would be glad to volunteer my time to help identify and justify these and other requirements.
That would be wonderful, and yes, those are the issues I was talking about, as well as ones of efficiency (some of those methods are faster than others). SQLAlchemy handles all that, and the front-side API is reduced just enough so it works on everything (and SQLAlchemy supports quite a number of DBs). SQLAlchemy does still have a raw-SQL facility that lets you send a full SQL string if you know your DB well and want to bypass all optimizations and transforms, but the recommended interface is the Python query language they created.
participants (4)
- Alp Mestan
- Jarrad Waterloo
- Jean-Louis Leroy
- OvermindDL1