
On Fri, Sep 25, 2009 at 8:30 AM, Jarrad Waterloo <jwaterloo@dynamicquest.com> wrote:
OvermindDL1 wrote:
On Thu, Sep 24, 2009 at 4:44 PM, Jean-Louis Leroy <jl@yorel.be> wrote:
Ah, useful. :)
For note, Python is strongly typed; you have to be correct about what you send to the DB with regards to types. If you send an int to a string column, or a string to an int, or whatever, it will throw an exception.
I know that it's typed - it's *dynamically* typed. C++ is *statically* typed. "Type-safety" is a term that has a well-defined meaning in C++ parlance: it means that type checking happens at compile time. And btw I am *not* a static analysis bigot, as my other language of expertise is Perl and my favorite is Lisp.
Lisp is just wonderful. :)
Actually, that link you gave is exactly what the base of SQLAlchemy is (and mostly what I use; I rarely use the ORM parts). That is the part that optimizes expressions specifically for the back-end. When you create the database it returns a refinement of that database, and anything you create from it will be optimized specifically for that DB. It is a lot more work, but it allows it to be the most efficient SQL library for Python, and allows new SQL dialects to be added with no front-end change.
I believe you are referring to cases where standard SQL is not enough, either because the database vendor supports it poorly or not at all, or because there is some feature that application developers need which isn't standard SQL but which most databases provide via their proprietary extensions. The following examples highlight the need for the generated SQL to vary on the database type and version, both of which, if memory serves, are available from ODBC.
When standard SQL is not enough either because the database vendor supports it poorly or not at all:
1) Consider table value constructors according to SQL-92 or SQL-99. There are three ways to insert data: INSERT ... VALUES (...) for one row, INSERT ... SELECT for many rows, and the table value constructor, which is similar to VALUES but supports multiple rows of static data: INSERT ... VALUES (...),(...),(...),(...). Each (...) is considered a tuple. Many databases support the standard, but some don't support tuples, whether as table or row value constructors. Consider Microsoft SQL Server 2000 and 2005. Neither supports table value constructors, but they can emulate one with INSERT ... SELECT UNION [ALL] SELECT UNION [ALL] SELECT. Microsoft SQL Server 2008 supports both table and row value constructors but mistakenly calls them both row value constructors. The table value constructor, even if emulated, is required by application developers because it is significantly faster than running an INSERT hundreds of times. Also, because it is all in one statement, it mitigates deadlocks, since single statements are naturally atomic. I am not familiar with SQLAlchemy, but Hibernate requires configuration specifying the dialect being used. So for MSSQL 2000 and 2005 the boolean field 'supports table value constructor' would be set to false and the framework would generate the correct SQL.
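To make the two equivalent forms concrete, here is a minimal sketch using Python's sqlite3 module (SQLite has supported table value constructors since 3.7.11, so both the native multi-row VALUES and the UNION ALL emulation a MSSQL 2000/2005 dialect would have to emit can be run side by side; the table and names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, name TEXT)")

# Table value constructor: multiple tuples in a single VALUES clause.
conn.execute("INSERT INTO person VALUES (1, 'Ann'), (2, 'Bob')")

# The emulation a dialect layer would generate for back-ends that lack
# table value constructors: INSERT ... SELECT with UNION ALL.
conn.execute(
    "INSERT INTO person "
    "SELECT 3, 'Cid' UNION ALL SELECT 4, 'Dee'"
)

rows = conn.execute("SELECT id, name FROM person ORDER BY id").fetchall()
print(rows)  # all four rows, inserted by the two equivalent statements
```

Either way the application issues one statement per batch, which is the point: the front end stays the same and only the generated SQL varies by dialect.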
When it isn't standard SQL but most databases provide it via their proprietary extensions:
2) Auto increment is common practice, though not necessarily standard. As such, after doing a single insert it would be nice to get the generated id.
MSSQL: SELECT SCOPE_IDENTITY()
MySQL: SELECT LAST_INSERT_ID()
SQLite: SELECT last_insert_rowid()
Oracle: earlier versions unfortunately didn't support auto increment; instead they used sequences, which are named, queryable number generators.
Currently I have only noticed JDBC providing low-level APIs for doing so, and there is even work on retrieving generated keys from multi-row inserts. I am not sure how that could be done in standard SQL, if it is even possible; currently I only know of MSSQL 2005 and above supporting selecting a before and after image on inserts, updates and deletes.
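As a small illustration of the SQLite flavor of this, here is a sketch with Python's sqlite3 (the driver surfaces the dialect-specific mechanism as cursor.lastrowid, and the same value is available through the dialect's own SQL, which is what a portability layer would emit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE item (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

cur = conn.execute("INSERT INTO item (name) VALUES ('widget')")
# The driver-level API for the generated id:
print(cur.lastrowid)  # 1

# The same value via the dialect's proprietary SQL function:
generated = conn.execute("SELECT last_insert_rowid()").fetchone()[0]
print(generated)  # 1
```

A dialect-aware library would hide exactly this difference, mapping one front-end call onto SCOPE_IDENTITY(), LAST_INSERT_ID(), last_insert_rowid(), or a sequence's CURRVAL depending on the back-end.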
3) At some point developers are going to want to make tools and will need a properly abstracted way of getting the metadata.
Because SQLite doesn't support INFORMATION_SCHEMA, its dialect would require more elaborate code-based transformation and parsing to glean the information from sqlite_master.
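For example, a minimal sketch of what the SQLite dialect has to do instead of querying INFORMATION_SCHEMA (table and column names here are invented for illustration; PRAGMA table_info spares the dialect from hand-parsing the CREATE statements stored in sqlite_master):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, owner TEXT)")

# Table listing comes from the sqlite_master catalog table,
# not from any INFORMATION_SCHEMA view.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)

# Column metadata via PRAGMA table_info: each row is
# (cid, name, type, notnull, dflt_value, pk).
columns = [row[1] for row in conn.execute("PRAGMA table_info(account)")]
print(columns)  # ['id', 'owner']
```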
4) Paging support is potentially complex but essential for developers. Many databases, such as MySQL and SQLite, make it easy by providing LIMIT and OFFSET clauses that get tacked onto one's SQL statement. Once again MSSQL makes it unbearable by requiring one's SQL to be massively transformed to generate row numbers and to filter on them.
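The easy case is literally a two-clause suffix, as this sketch with Python's sqlite3 shows (page arithmetic and table are made up for illustration; a MSSQL dialect would instead have to rewrite the whole query around ROW_NUMBER()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
conn.executemany("INSERT INTO log (msg) VALUES (?)",
                 [("event %d" % i,) for i in range(1, 11)])

page, page_size = 2, 3  # second page, three rows per page

# MySQL/SQLite style: LIMIT/OFFSET tacked onto the original statement.
rows = conn.execute(
    "SELECT id FROM log ORDER BY id LIMIT ? OFFSET ?",
    (page_size, (page - 1) * page_size)).fetchall()
print([r[0] for r in rows])  # [4, 5, 6]
```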
I hope these examples help illustrate that some transformation needs to be able to be performed earlier rather than later. Providing some minor concessions in one library can save developers from having to tackle these issues over and over again, resulting in more portable code. With over a decade of database experience using Oracle, SQL Server, MySQL, SQLite and others via ODBC, OLEDB, ADO, ADO.NET and JDBC, and also using Hibernate, NHibernate and LINQ, I would be glad to volunteer my time to help identify and justify these and other requirements.
That would be wonderful, and yes, those are the issues I was talking about, as well as ones of efficiency (some of those methods are faster than others). SQLAlchemy handles all that, and the front-side API is reduced just enough so it works on everything (and SQLAlchemy does support a number of DBs). SQLAlchemy does still let you send a full SQL string if you know your DB well and want to bypass all optimizations and transforms, but the recommended interface is the Python query language they created.