sqlpp11: SQL for C++
Hi,

over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for Boost?

https://github.com/rbock/sqlpp11
https://github.com/rbock/sqlpp11/wiki

sqlpp11 is a template library representing an embedded domain specific language (EDSL) that allows you to

  * define types representing tables and columns,
  * construct type-safe queries checked at compile time for syntax errors, type errors, name errors and even some semantic errors,
  * interpret results by iterating over query-specific structs with appropriately named and typed members.

This results in several benefits, e.g.

  * the library user operates comfortably on structs and functions,
  * the compiler reports many kinds of errors long before the code enters unit testing or production,
  * the library hides the gory details of string construction for queries and of interpreting the string-based results returned by select calls.

In other words, you don't need to use strings in query construction where you wouldn't use them in SQL, and there is no need to use positional arguments or to parse strings when obtaining fields from a result row (the latter being true unless you do not know at compile time which columns to select).

The library supports both static and dynamic queries. The former offer greater benefits in terms of type and consistency checking. The latter make it easier to construct queries on the fly.

Specific traits of databases (e.g. unsupported or non-standard features) are known at compile time as well. This way, the compiler can tell the developer at compile time if a query is not accepted by the database (e.g. if a feature is missing).
And the library can form the query in the correct manner, for instance if the engine uses concat() instead of operator|| to concatenate strings.

Two Examples:
=============

Static Select:
--------------

// selecting zero or more results, iterating over the results
for (const auto& row : db.run(
        select(foo.name, foo.hasFun)
        .from(foo)
        .where(foo.id > 17 and foo.name.like("%bar%"))))
{
    if (row.name.is_null())
        std::cerr << "name will convert to empty string" << std::endl;
    std::string name = row.name; // text fields are implicitly convertible to string
    bool hasFun = row.hasFun;    // bool fields are implicitly convertible to bool
}

Dynamic Select:
---------------

auto s = dynamic_select(db, tab.foo).dynamic_columns().from(tab);
if (userWantsBar)
    s.add_column(tab.bar);
for (const auto& row : db.run(s))
{
    std::cerr << "row.foo: " << row.foo;
    if (userWantsBar)
        std::cerr << " row.bar: " << row.at("bar");
    std::cerr << std::endl;
}

Please let me know your questions/thoughts/suggestions/rants. Contributions welcome, of course :-)

Regards,

Roland
On 2013-11-09 23:03, Michael Marcin wrote:
On 11/9/2013 4:03 PM, Roland Bock wrote:
Please let me know your questions/thoughts/suggestions/rants. Contributions welcome, of course :-)
Could you compare your library with SOCI?
I consider SOCI to be a string- and position-based approach, meaning that you (as a library user) have to use strings to construct your queries and positions to extract results. Here is one of SOCI's introductory examples:

// ------------------------
int id = 17;
string name;
int salary;

sql << "select name, salary from persons where id = " << id,
       into(name), into(salary);
// ------------------------

In sqlpp11 this would read something like this:

// ---------------------
auto result = db.run(
    select(persons.name, persons.salary)
    .from(persons)
    .where(persons.id == 17));

if (!result.empty())
{
    const auto row = result.front();
    std::string name = row.name;
    int salary = row.salary;
}
// ---------------------

Notice that there are no strings being used in the sqlpp11 code at all. The compiler can see and check the syntax of your query. If you mistype anything, if you compare apples and oranges (e.g. a text column and an int), or if you forget the 'from' before the 'where', it will let you know with compile errors.

Also, it is much harder to mix up stuff in the results. Imagine a slightly larger list of columns being selected. With SOCI you need to keep the "select a, b, c, d" in sync with the into(a), into(b), into(c), into(d).

sqlpp11 takes care of this responsibility for you and gives you rows with appropriately named and typed member variables. It is much harder to use those the wrong way without the compiler yelling at you.

sqlpp11 assumes that you know your tables at compile time. Thus you can declare types representing tables and columns with appropriate names and types. You can then construct SQL queries and analyze the results with the full armory of syntax and type checking that C++ and template metaprogramming have to offer.

Regards,

Roland
On 9 November 2013 22:58, Roland Bock <rbock@eudoxos.de> wrote:
On 2013-11-09 23:03, Michael Marcin wrote:
On 11/9/2013 4:03 PM, Roland Bock wrote:
Please let me know your questions/thoughts/suggestions/rants. Contributions welcome, of course :-)
Could you compare your library with SOCI?
I consider SOCI to be a string and position based approach, meaning that you (as a library user) have to use strings to construct your queries and positions to extract results.
Yes, that's correct. In fact, SQL commands play an important role as part of DBMS access abstraction. SQL is a part of SOCI interface, by design.
sqlpp11 takes care of this responsibility for you and gives you rows with appropriately named and typed member variables. It is much harder to use those the wrong way without the compiler yelling at you.
sqlpp11 assumes that you know your tables at compile time. Thus you can declare types representing tables and columns with appropriate names and types. You can then construct SQL queries and analyze the results with the full armory of syntax and type checking that C++ and template meta programming have to offer.
Thank you for this clarification. I think sqlpp11 is an extremely interesting exercise. I have been considering adding a non-string layer to SOCI directly, based on your idea, ever since I saw it some time ago in your initial experiments.

Best regards,
--
Mateusz Loskot, http://mateusz.loskot.net
On 2013-11-10 00:12, Mateusz Loskot wrote:
On 9 November 2013 22:58, Roland Bock <rbock@eudoxos.de> wrote:
On 2013-11-09 23:03, Michael Marcin wrote:
On 11/9/2013 4:03 PM, Roland Bock wrote:
Please let me know your questions/thoughts/suggestions/rants. Contributions welcome, of course :-)
Could you compare your library with SOCI?
I consider SOCI to be a string and position based approach, meaning that you (as a library user) have to use strings to construct your queries and positions to extract results. Yes, that's correct.
In fact, SQL commands play an important role as part of DBMS access abstraction. SQL is a part of SOCI interface, by design.
sqlpp11 takes care of this responsibility for you and gives you rows with appropriately named and typed member variables. It is much harder to use those the wrong way without the compiler yelling at you.
sqlpp11 assumes that you know your tables at compile time. Thus you can declare types representing tables and columns with appropriate names and types. You can then construct SQL queries and analyze the results with the full armory of syntax and type checking that C++ and template meta programming have to offer. Thank you for this clarification.
I think sqlpp11 is an extremely interesting exercise. :-)
I have been considering adding a non-string layer to SOCI directly, based on your idea, ever since I saw it some time ago in your initial experiments.

One option would be to write a sqlpp11 connector library for SOCI, similar to the existing ones for MySQL/MariaDB and SQLite3.

You're referring to the discussion in 2010? http://comments.gmane.org/gmane.comp.lib.boost.devel/208623
Cool, I wasn't aware of the impact :-)

sqlpp11 is much more mature than what I had back then. Among other things:

  * auto allows for a much leaner perceived API
  * table definitions are also much simpler. I am still using a code generator for those (DDL -> C++, which I'll add to the repository soon), but the types are quite comprehensible even for a casual user, I guess
  * compile times are better
  * sqlpp11 "understands" sub-selects, which were a nightmare in all my previous attempts. Sub-selects can be used very naturally now, i.e. as selected values, as input for functions like exists(), as operands in where-conditions and as pseudo-tables in from(). The documentation on that is still pretty thin, I guess, but there are usage hints here and there:
    https://github.com/rbock/sqlpp11/wiki/Select#sub-select
    https://github.com/rbock/sqlpp11/wiki/Select#aliased-sub-select
  * you can add columns to your select at runtime now, if required
  * I would not call the library code simple, but it is certainly much easier to understand and to extend than any of the earlier versions

Cheers,

Roland
Hi Roland, On 2013-11-09 23:58, Roland Bock wrote:
On 2013-11-09 23:03, Michael Marcin wrote:
On 11/9/2013 4:03 PM, Roland Bock wrote:
Please let me know your questions/thoughts/suggestions/rants. Contributions welcome, of course :-)
Could you compare your library with SOCI?
I consider SOCI to be a string and position based approach, meaning that you (as a library user) have to use strings to construct your queries and positions to extract results. Here is one of SOCI's introductory examples:
// ------------------------
int id = 17;
string name;
int salary;

sql << "select name, salary from persons where id = " << id,
       into(name), into(salary);
// ------------------------
In sqlpp11 this would read something like this
// ---------------------
auto result = db.run(
    select(persons.name, persons.salary)
    .from(persons)
    .where(persons.id == 17));

if (!result.empty())
{
    const auto row = result.front();
    std::string name = row.name;
    int salary = row.salary;
}
// ---------------------
Notice that there are no strings being used in the sqlpp11 code at all. The compiler can see and check the syntax of your query. If you mistype anything, or if you compare apples and oranges (e.g. a text column and an int), or if you forget the 'from' before the 'where' for instance, it will let you know with compile errors.
Also, it is much harder to mix up stuff in the results. Imagine a slightly larger list of columns being selected. With SOCI you need to keep the "select a, b, c, d" in sync with the into(a), into(b), into(c), into(d)
sqlpp11 takes care of this responsibility for you and gives you rows with appropriately named and typed member variables. It is much harder to use those the wrong way without the compiler yelling at you.

There is a library (Metaparse) which supports creating a string-based interface for libraries, which is processed at compile time. It could be used to provide a string-based interface. For example, instead of this:

auto result = db.run(select(persons.name, persons.salary).from(persons).where(persons.id == 17));

your library could have the following interface:

auto result = db.run(QUERY("select name,salary from persons where id=17"));

The string could be parsed by a template metaprogram and the right classes could be built out of it. It could provide all the static guarantees you have described above.

The documentation of Metaparse is here: http://abel.web.elte.hu/mpllibs/metaparse/
You can find examples here: https://github.com/sabel83/mpllibs/tree/master/libs/metaparse/example

Regards,
Ábel
On 11/11/13 13:35, Abel Sinkovics wrote:
Hi Roland,
On 2013-11-09 23:58, Roland Bock wrote:
On 2013-11-09 23:03, Michael Marcin wrote:
On 11/9/2013 4:03 PM, Roland Bock wrote:
Please let me know your questions/thoughts/suggestions/rants. Contributions welcome, of course :-)
Could you compare your library with SOCI?
I consider SOCI to be a string and position based approach, meaning that you (as a library user) have to use strings to construct your queries and positions to extract results. Here is one of SOCI's introductory examples:
// ------------------------ int id = 17; string name; int salary; sql << "select name, salary from persons where id = " << id, into(name), into(salary); // ------------------------
In sqlpp11 this would read something like this
// --------------------- auto result = db.run(select(persons.name, persons.salary).from(persons).where(persons.id == 17)); if (!result.empty()) { const auto row = result.front(); std::string name = row.name; int salary = row.salary; } // ---------------------
Notice that there are no strings being used in the sqlpp11 code at all. The compiler can see and check the syntax of your query. If you mistype anything, or if you compare apples and oranges (e.g. a text column and an int), or if you forget the 'from' before the 'where' for instance, it will let you know with compile errors.
Also, it is much harder to mix up stuff in the results. Imagine a slightly larger list of columns being selected. With SOCI you need to keep the "select a, b, c, d" in sync with the into(a), into(b), into(c), into(d)
sqlpp11 takes care of this responsibility for you and gives you rows with appropriately named and typed member variables. It is much harder to use those the wrong way without the compiler yelling at you. There is a library (Metaparse) which supports creating a string-based interface for libraries which is processed at compile-time. It could be used to provide a string-based interface. For example instead of this:
auto result = db.run(select(persons.name, persons.salary).from(persons).where(persons.id == 17));
Your library could have the following interface:
auto result = db.run(QUERY("select name,salary from persons where id=17"));
The string could be parsed by a template metaprogram and the right classes could be built out of it. It could provide all the static guarantees you have described above.
Guessing from code here:

https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/select.h#L574

the sql string written to oss would be something like the argument to your QUERY function. IOW, IIUC, there's no need for parsing a string to build the right classes.

OTOH, the string passed to the actual database (via the db on select.h#L574) would have to be parsed, I assume, by the dbms, which might issue some error message or return some error code if the sql string were not right. I think Roland's code almost guarantees the sql string would be correct.

Is that about right, Roland?

[snip]
On 2013-11-11 21:14, Larry Evans wrote:
On 11/11/13 13:35, Abel Sinkovics wrote:
Hi Roland,
On 2013-11-09 23:58, Roland Bock wrote:
On 2013-11-09 23:03, Michael Marcin wrote:
On 11/9/2013 4:03 PM, Roland Bock wrote:
Please let me know your questions/thoughts/suggestions/rants. Contributions welcome, of course :-)
Could you compare your library with SOCI?
I consider SOCI to be a string and position based approach, meaning that you (as a library user) have to use strings to construct your queries and positions to extract results. Here is one of SOCI's introductory examples:
// ------------------------ int id = 17; string name; int salary; sql << "select name, salary from persons where id = " << id, into(name), into(salary); // ------------------------
In sqlpp11 this would read something like this
// --------------------- auto result = db.run(select(persons.name, persons.salary).from(persons).where(persons.id == 17)); if (!result.empty()) { const auto row = result.front(); std::string name = row.name; int salary = row.salary; } // ---------------------
Notice that there are no strings being used in the sqlpp11 code at all. The compiler can see and check the syntax of your query. If you mistype anything, or if you compare apples and oranges (e.g. a text column and an int), or if you forget the 'from' before the 'where' for instance, it will let you know with compile errors.
Also, it is much harder to mix up stuff in the results. Imagine a slightly larger list of columns being selected. With SOCI you need to keep the "select a, b, c, d" in sync with the into(a), into(b), into(c), into(d)
sqlpp11 takes care of this responsibility for you and gives you rows with appropriately named and typed member variables. It is much harder to use those the wrong way without the compiler yelling at you. There is a library (Metaparse) which supports creating a string-based interface for libraries which is processed at compile-time. It could be used to provide a string-based interface. For example instead of this:
auto result = db.run(select(persons.name, persons.salary).from(persons).where(persons.id == 17));
Your library could have the following interface:
auto result = db.run(QUERY("select name,salary from persons where id=17"));
Metaparse requires const char[N] arguments, right? That would be a rather atypical case for using a query interface, I'd say. Personally I have never used queries without variable parameters except in examples like the one above.
The string could be parsed by a template metaprogram and the right classes could be built out of it. It could provide all the static guarantees you have described above.
Guessing from code here:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/select.h#L574
The sql string written to oss would be something like the argument to your QUERY function. IOW, IIUC, there's no need for parsing a string to build the right classes.
OTOH, the string passed to the actual database (via the db on select.h#L574) would have to be parsed, I assume, by dbms, which might issue some error message or return some error code if the sql string were not right. I think Roland's code almost guarantee's the sql string would be correct.
Is that about right Roland?
That is correct, Larry, nicely guessed from the code, indeed :-)

The query is constructed via functions and objects to build an expression which /can/ be evaluated as a string which is then sent to the database. This is also the current use case. But there have been several ideas presented in this thread about what else could be done (evaluating XML or JSON or incoming streams). In those cases, it might be better to transform the query into another representation.

Regarding the correctness of the string: That's the goal, yes.

Best regards,

Roland
Hi,
On 2013-11-11 21:14, Larry Evans wrote:
On 11/11/13 13:35, Abel Sinkovics wrote:
There is a library (Metaparse) which supports creating a string-based interface for libraries which is processed at compile-time. It could be used to provide a string-based interface. For example instead of this:
auto result = db.run(select(persons.name, persons.salary).from(persons).where(persons.id == 17));
Your library could have the following interface:
auto result = db.run(QUERY("select name,salary from persons where id=17"));
Metaparse requires const char[N] arguments, right?
No. It uses the preprocessor to generate something like

metaparse::string<'s','e','l','e','c','t',' ','n',...>

The downside is that Metaparse has a configurable maximum string length, and increasing it also increases compile times. I am not sure whether it would be possible to have reasonable compile times with SQL strings as they exist in real-world applications.

Christof

--
okunah gmbh i.L.
Software nach Maß

Zugspitzstr. 211           www.okunah.de
86165 Augsburg             cd@okunah.de

Registergericht Augsburg   Geschäftsführer
Augsburg HRB 21896         Christof Donat
UStID: DE 248 815 055
On Mon, Nov 11, 2013 at 9:59 PM, Christof Donat <cd@okunah.de> wrote:
No. It uses the preprocessor to generate something like
metaparse::string<'s','e','l','e','c','t',' ','n',',...>
The downside is that Metaparse has a configurable maximum string length, and increasing it also increases compile times. I am not sure whether it would be possible to have reasonable compile times with SQL strings as they exist in real-world applications.
Doesn't it still make it impossible to have variables in the request?
On Mon, Nov 11, 2013 at 7:02 PM, Klaim - Joël Lamotte <mjklaim@gmail.com>wrote:
On Mon, Nov 11, 2013 at 9:59 PM, Christof Donat <cd@okunah.de> wrote:
No. It uses the preprocessor to generate something like
metaparse::string<'s','e','l','e','c','t',' ','n',',...>
Doesn't it still make it impossible to have variables in the request?
In theory, you could bind using placeholders like :a, :b. I.e.:

string<'name', '=:a ', 'and ', 'age=', ':age'> // name=:a and age=:age

These would be bound using the database backend. Not sure if all DBs support this, though.

Regards,
Rodrigo Madera
Hi, Am Montag, 11. November 2013, 22:02:50 schrieb Klaim - Joël Lamotte:
On Mon, Nov 11, 2013 at 9:59 PM, Christof Donat <cd@okunah.de> wrote:
No. It uses the preprocessor to generate something like
metaparse::string<'s','e','l','e','c','t',' ','n',',...>
The downside is that Metaparse has a configurable maximum string length, and increasing it also increases compile times. I am not sure whether it would be possible to have reasonable compile times with SQL strings as they exist in real-world applications.
Doesn't it still make it impossible to have variables in the request?
Yes and no. That could be solved with positional parameters, like boost::format:

db.run(QUERY("select name,salary from persons where id=%1" % 17));

Christof
Hi, On 2013-11-11 21:59, Christof Donat wrote:
The downside is that Metaparse has a configurable maximum string length, and increasing it also increases compile times. I am not sure whether it would be possible to have reasonable compile times with SQL strings as they exist in real-world applications.

If it turns out to be too slow, a constexpr/template metaprogram combined approach might help. For example, constexpr functions could be used to tokenise the input. There is an example showing that constexpr and metaprogram parsers can be combined:

https://github.com/sabel83/mpllibs/tree/master/libs/metaparse/example/conste...

It parses "a*b*a*", where the "a*" parts are parsed with metaprograms while the "b*" parts are parsed with constexpr.

Regards,
Ábel
Hi,

On 2013-11-11 21:31, Roland Bock wrote:

Metaparse requires const char[N] arguments, right? That would be a rather atypical case for using a query interface, I'd say. Personally I have never used queries without variable parameters except in examples like the one above.

Yes, it takes char[N] arguments. However, you can use a boost::format-like syntax (as mentioned by Christof) or a printf-like, type-checked one.
The string could be parsed by a template metaprogram and the right classes could be built out of it. It could provide all the static guarantees you have described above. Guessing from code here:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/select.h#L574
The sql string written to oss would be something like the argument to your QUERY function. IOW, IIUC, there's no need for parsing a string to build the right classes.
OTOH, the string passed to the actual database (via the db on select.h#L574) would have to be parsed, I assume, by dbms, which might issue some error message or return some error code if the sql string were not right. I think Roland's code almost guarantee's the sql string would be correct.
Is that about right Roland? That is correct, Larry, nicely guessed from the code, indeed :-)
The query is constructed via functions and objects to build an expression which /can/ be evaluated as a string which is then being sent to the database. This is also the current use case. But there have been several ideas presented in this thread what else could be done (evaluating XML or JSON or incoming streams). In those cases, it might be better to transform the query into another representation.
Regarding the correctness of the string: That's the goal, yes.
If you don't want to transform the string, just validate it (and maybe do some variable substitution), you can approach it in a similar way to the type-checked printf: it parses the string, does the validation at compile time and then uses the original string at runtime.

Code: https://github.com/sabel83/mpllibs/tree/master/mpllibs/safe_printf
Example using it: https://github.com/sabel83/mpllibs/blob/master/libs/safe_printf/example/safe... (here check the C++11 one at the bottom)

Regards,
Ábel
On 2013-11-11 22:41, Abel Sinkovics wrote:
Hi,
On 2013-11-11 21:31, Roland Bock wrote:
Metaparse requires const char[N] arguments, right? That would be a rather atypical case for using a query interface, I'd say. Personally I have never used queries without variable parameters except in examples like the one above. Yes, it takes char[N] arguments, however, you can use a boost::format-like syntax (as mentioned by Christof) or a printf-like, type checked one.
The string could be parsed by a template metaprogram and the right classes could be built out of it. It could provide all the static guarantees you have described above. Guessing from code here:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/select.h#L574
The sql string written to oss would be something like the argument to your QUERY function. IOW, IIUC, there's no need for parsing a string to build the right classes.
OTOH, the string passed to the actual database (via the db on select.h#L574) would have to be parsed, I assume, by dbms, which might issue some error message or return some error code if the sql string were not right. I think Roland's code almost guarantee's the sql string would be correct.
Is that about right Roland? That is correct, Larry, nicely guessed from the code, indeed :-)
The query is constructed via functions and objects to build an expression which /can/ be evaluated as a string which is then being sent to the database. This is also the current use case. But there have been several ideas presented in this thread what else could be done (evaluating XML or JSON or incoming streams). In those cases, it might be better to transform the query into another representation.
Regarding the correctness of the string: That's the goal, yes.
If you don't want to transform the string, just validate it (and maybe do some variable substitution) you can approach it in a similar way the type-checked printf does it: it parses the string, does the validation at compile-time and then uses the original string at runtime.
Code of it: https://github.com/sabel83/mpllibs/tree/master/mpllibs/safe_printf
Example using it: https://github.com/sabel83/mpllibs/blob/master/libs/safe_printf/example/safe... (here check the C++11 one at the bottom)
I see use cases for printf and regex, for instance, where the user provides a textual representation of something at compile time. In those cases, compile-time validation of strings is a wonderful tool, and I have the highest respect for it.

But in the context of sqlpp11 I don't see how or why I should use it. The library is constructing the query string at runtime. There is no string to be validated at compile time. This is a major difference to most other C++ SQL libraries.

Best regards,

Roland
Hi,

On 2013-11-11 23:16, Roland Bock wrote:

I see use cases for printf and regex for instance, where the user provides a textual representation of something at compile time. In those cases, compile time validation of strings is a wonderful tool, and I have highest respect for it.

But in the context of sqlpp11 I don't see how or why I should use it? The library is constructing the query string at runtime. There is no string to be validated at compile time. This is a major difference to most other C++ SQL libraries.

You can make it possible for the user to provide a textual representation of the SQL query in the SQL language. His SQL queries are then validated at compile time. If you want to move away from SQL and build your own DSL for writing SQL-like queries (and then generate the SQL queries from that), it won't help.

Regards,
Ábel
Le 11/11/13 23:16, Roland Bock a écrit :
Hi,
On 2013-11-11 21:31, Roland Bock wrote:
Metaparse requires const char[N] arguments, right? That would be a rather atypical case for using a query interface, I'd say. Personally I have never used queries without variable parameters except in examples like the one above. Yes, it takes char[N] arguments, however, you can use a boost::format-like syntax (as mentioned by Christof) or a printf-like, type checked one.
The string could be parsed by a template metaprogram and the right classes could be built out of it. It could provide all the static guarantees you have described above. Guessing from code here:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/select.h#L574
The sql string written to oss would be something like the argument to your QUERY function. IOW, IIUC, there's no need for parsing a string to build the right classes.
OTOH, the string passed to the actual database (via the db on select.h#L574) would have to be parsed, I assume, by dbms, which might issue some error message or return some error code if the sql string were not right. I think Roland's code almost guarantee's the sql string would be correct.
Is that about right Roland? That is correct, Larry, nicely guessed from the code, indeed :-)
The query is constructed via functions and objects to build an expression which /can/ be evaluated as a string which is then being sent to the database. This is also the current use case. But there have been several ideas presented in this thread what else could be done (evaluating XML or JSON or incoming streams). In those cases, it might be better to transform the query into another representation.
Regarding the correctness of the string: That's the goal, yes. If you don't want to transform the string, just validate it (and maybe do some variable substitution) you can approach it in a similar way the type-checked printf does it: it parses the string, does the validation at compile-time and then uses the original string at runtime.
Code of it: https://github.com/sabel83/mpllibs/tree/master/mpllibs/safe_printf
Example using it: https://github.com/sabel83/mpllibs/blob/master/libs/safe_printf/example/safe... (here check the C++11 one at the bottom)

I see use cases for printf and regex for instance, where the user provides a textual representation of something at compile time. In those cases, compile time validation of strings is a wonderful tool, and I have highest respect for it.
But in the context of sqlpp11 I don't see how or why I should use it? The library is constructing the query string at runtime. There is no string to be validated at compile time. This is a major difference to most other C++ SQL libraries.
Hi,

I think that what others are saying is that, as your interface is an SQL one, maybe a textual interface is even closer to the SQL one ;-) Thus this concrete example

for (const auto& row : db.run(select(foo.name, foo.hasFun)
                              .from(foo)
                              .where(foo.id > 17 and foo.name.like("%bar%"))))

could be rewritten with something like

for (const auto& row : db.run<"select name, hasFun from foo where ( id > 17 and name like '%bar%' )">())
// This needs to take care of escaping '"' or a way to quote "%bar%" :(

Clearly this interface can be learned more quickly by an SQL developer and can be checked statically using something like Metaparse or the techniques used by Metaparse. Whether this is a good idea depends on

  * the performance at compile time :(
  * the kind of errors the user would have :(
  * whether there is something that can be done with the first syntax that cannot be done with the ct-string syntax, or that is easier to do.

I can understand that you don't want to go down this path as it is complex and could take a lot of time to stabilize. Anyway, if you can show some examples that you think cannot be done by parsing a ct-string, this could reinforce your current interface.

Note that I like your EDSL interface, but a text-based SQL EDSL interface could complement yours quite nicely. Both approaches could be provided either by your library or by a library on top of yours.

HTH,
Vicente
On 2013-11-11 23:53, Vicente J. Botet Escriba wrote:
Le 11/11/13 23:16, Roland Bock a écrit :
On 2013-11-11 22:41, Abel Sinkovics wrote:
[snip] I see use cases for printf and regex for instance, where the user provides a textual representation of something at compile time. In those cases, compile time validation of strings is a wonderful tool, and I have highest respect for it.
But in the context of sqlpp11 I don't see how or why I should use it? The library is constructing the query string at runtime. There is no string to be validated at compile time. This is a major difference to most other C++ SQL libraries.
Hi,
I think that what others are saying is that, as your interface is an SQL one, maybe a textual interface is even closer to the SQL one ;-) Sure, since SQL is text based :-) Thus this concrete example
for (const auto& row : db.run(select(foo.name, foo.hasFun)
                                .from(foo)
                                .where(foo.id > 17 and foo.name.like("%bar%"))))

could be rewritten with something like

for (const auto& row : db.run<"select name, hasFun from foo where ( id > 17 and name like "%bar%" )">())
// This needs to take care of escaping '"' or a way to quote "%bar%" :(
Clearly this interface can be learned more quickly by an SQL developer and can be checked statically using something like Metaparse or the techniques used by Metaparse. Whether this is a good idea depends on
* the performance at compile time :(
* the kind of errors the user would see :(
* whether there is something that can be done with the first syntax that cannot be done with the ct-string syntax, or that is easier to do.
Off the top of my head, I can think of the following things that I would consider extremely hard to do with ct-string parsing:
_Typos and similar errors:_ ct-parsing can certainly be trained to parse a string, even with placeholders. But what would be the result for the following string?

"select nam, hasVun, from bar where name > 17 and id like '%bar%'"

Can it detect errors here? sqlpp11, on the other hand, gets this:

select(foo.nam, foo.hasVun).from(bar).where(foo.name > 17 and foo.id.like("%bar%"));

The current sqlpp11 will detect four errors here at compile time: two typos in the selected columns, and two cases of comparing apples and oranges. Starting with one of the next few iterations it will even detect that you used the wrong table.

_Result types:_ For static queries, based on the names and types of the selected columns, you can access the fields of the result row as appropriately named members with appropriate types, in this case:

std::string name = row.name; // OK
bool hasFun = row.hasFun;    // OK
bool wrong = row.name;       // compile error

Could that be achieved with ct-string parsing of the query? I wouldn't know where to begin. But I am certainly not an expert in this area.

_Dynamic queries:_ sqlpp11 allows you to build queries dynamically, way beyond substituting parameters. Selected columns, required tables, where conditions etc. can be added at runtime. For example, the user could query table Person and have the option to add related information from table Job to the result set. With strings? Well, you could construct it like

"select person.* " + (userWantsJobInfo ? ", job.title" : "") + " from person " + (userWantsJobInfo ? ", job" : "") + " where person.active = 1 " + (userWantsJobInfo ? " and job.id = person.jobId" : "");

I hope I got that right. Can a ct-string parser handle it in a useful way? I honestly don't know, but I doubt it.
With sqlpp11, on the other hand, it is easy:

auto s = dynamic_select(db, all_of(person)).from(person).dynamic_where(person.active == true);
if (userWantsJobInfo)
{
  s.add_columns(job.title);
  s.add_from(job);
  s.add_where(job.id == person.jobId);
}

sqlpp11 will simply build the query in the correct way.
I can understand that you don't want to go down this path, as it is complex and could take a lot of time to stabilize. Anyway, if you can show some examples that you think cannot be done by parsing a ct-string, this could reinforce your current interface.
Note that I like your EDSL interface, but a text-based SQL EDSL interface could complement yours quite nicely. Both approaches could be provided either by your library or by a library on top of yours.
Thanks for your input and summary. I don't think I will go down the road of compile-time string parsing for SQL, because personally I don't think it is worth the effort. But that should not stop anybody, of course. I could be totally wrong. And as you said, it might be a nice complement to the EDSL approach. Best regards, Roland
Le 12/11/13 09:13, Roland Bock a écrit :
On 2013-11-11 23:53, Vicente J. Botet Escriba wrote:
Le 11/11/13 23:16, Roland Bock a écrit :
[snip]
_Dynamic queries:_ sqlpp11 allows you to build queries dynamically way beyond substituting parameters. Selected columns, required tables, where conditions etc can be added at runtime. For example the user could query table Person and have option to add related information from table Job to the result set.
With strings? Well you could construct it like
"select person.* " + (userWantsJobInfo ? ", job.title" : "") + " from person " + (userWantsJobInfo ? ", job" : "") + " where person.active = 1 " + (userWantsJobInfo ? " and job.id = person.jobId" : "");
This could not be a ct-string if userWantsJobInfo is not constexpr. But each of the parts could be a ct-string (see below).
I hope I got that right. Can a ct-string parser handle it in a useful way? I honestly don't know, but I doubt it.
With sqlpp11, on the other hand, it is easy:

auto s = dynamic_select(db, all_of(person)).from(person).dynamic_where(person.active == true);
if (userWantsJobInfo)
{
  s.add_columns(job.title);
  s.add_from(job);
  s.add_where(job.id == person.jobId);
}

What about something like
auto s = db.dynamic_query<"select person.* from person where person.active">();
if (userWantsJobInfo)
{
  s.combine<"select job.title from job where job.id == person.jobId">();
}

Here db.dynamic_query<>() would return the same type as your dynamic_select(db, ), but the analysis of "select person.* from person where person.active" can be done statically. Then s.combine<> would combine the preceding query s (so it knows that there is a person table) with the result of "select job.title from job where job.id == person.jobId". Best, Vicente
On 2013-11-12 19:14, Vicente J. Botet Escriba wrote:
Le 12/11/13 09:13, Roland Bock a écrit :
On 2013-11-11 23:53, Vicente J. Botet Escriba wrote:
Le 11/11/13 23:16, Roland Bock a écrit :
[snip]
auto s = db.dynamic_query<"select person.* from person where person.active">(); if (userWantsJobInfo) { s.combine<"select job.title from job where job.id == person.jobId">(); }
Here db.dynamic_query<>() would return the same type as your dynamic_select(db, ), but the analysis of "select person.* from person where person.active" can be done statically. Then s.combine<> would combine the preceding query s, so it knows that there is a person table, with the result of "select job.title from job where job.id == person.jobId".
Best, Vicente
Sure, not impossible, but as stated before, I won't go there :-)
Best regards, Roland
Hi, Am Montag, 11. November 2013, 23:16:06 schrieb Roland Bock:
I see use cases for printf and regex for instance, where the user provides a textual representation of something at compile time. In those cases, compile time validation of strings is a wonderful tool, and I have highest respect for it.
But in the context of sqlpp11 I don't see how or why I should use it?
When you have SQL with its original syntax in your source code, you can copy/paste between your source code and your database tool. That way you can develop your queries iteratively. Compile-time string parsing then adds compile-time syntax checks, type safety and an intuitive interface to access the columns in the result set by using your library.
The library is constructing the query string at runtime. There is no string to be validated at compile time. This is a major difference to most other C++ SQL libraries.
To me, its huge strength is that it provides all that syntax and type checking and the intuitive access to the result sets. Its weakness is that I have to translate my queries from SQL syntax to a similar, but not equal, syntax when working interactively with my SQL tools. That could be resolved by adding compile-time string parsing on top of your library.

Christof
--
okunah gmbh i.L. | Software nach Maß
Zugspitzstr. 211, 86165 Augsburg
www.okunah.de | cd@okunah.de
Registergericht Augsburg, HRB 21896
Geschäftsführer: Christof Donat
UStID: DE 248 815 055
On 2013-11-12 07:13, Christof Donat wrote:
Hi,
Am Montag, 11. November 2013, 23:16:06 schrieb Roland Bock:
I see use cases for printf and regex, for instance, where the user provides a textual representation of something at compile time. In those cases, compile-time validation of strings is a wonderful tool, and I have the highest respect for it. But in the context of sqlpp11 I don't see how or why I should use it? The library is constructing the query string at runtime. There is no string to be validated at compile time. This is a major difference from most other C++ SQL libraries.
When you have SQL with its original syntax in your source code, you can copy/paste between your source code and your database tool. That way you can develop your queries iteratively. Compile-time string parsing then adds compile-time syntax checks, type safety and an intuitive interface to access the columns in the result set by using your library. To me, its huge strength is that it provides all that syntax and type checking and the intuitive access to the result sets. Its weakness is that I have to translate my queries from SQL syntax to a similar, but not equal, syntax when working interactively with my SQL tools. That could be resolved by adding compile-time string parsing on top of your library.
Christof
As I wrote to Vicente, personally I wouldn't follow that path. I don't believe in ct SQL parsing for all but the simplest cases. But I might be totally wrong. If you have some spare time on your hands and consider such an approach useful, you should probably give it a try. Regards, Roland
On 11/12/2013 12:26 AM, Roland Bock wrote:
I don't believe in ct SQL parsing for all but the simplest cases. But I might be totally wrong.
FWIW, I agree with you. I like the looks of your EDSL. My suggestion: Look to the success of C#'s LINQ queries, and aim for that. If you can design your EDSL so it can be extended to bind to in-memory data structures, then it would be HUGE. Listen to your gut and don't be afraid to say no. -- Eric Niebler Boost.org http://www.boost.org
On 2013-11-12 19:03, Eric Niebler wrote:
On 11/12/2013 12:26 AM, Roland Bock wrote:
I don't believe in ct SQL parsing for all but the simplest cases. But I might be totally wrong.
FWIW, I agree with you. I like the looks of your EDSL.
:-)
My suggestion: Look to the success of C#'s LINQ queries, and aim for that. If you can design your EDSL so it can be extended to bind to in-memory data structures, then it would be HUGE.
Yeah, I am thinking about in-memory data structures and a few other suggestions.
Listen to your gut and don't be afraid to say no.
Will do so and won't be. Thanks again for your input! Best regards from Munich, Roland
On 2013-11-12 19:03, Eric Niebler wrote:
On 11/12/2013 12:26 AM, Roland Bock wrote:
[...] FWIW, I agree with you. I like the looks of your EDSL. My suggestion: Look to the success of C#'s LINQ queries, and aim for that. If you can design your EDSL so it can be extended to bind to in-memory data structures, then it would be HUGE.
FYI: By that definition, as of today, it is HUGE :-) It is still a long way to go, of course, but I do have a working SQL insert and select on std::vector<some_struct>: https://github.com/rbock/sqlpp11-connector-stl
On 12/11/2013 19:13, Quoth Christof Donat:
When you have SQL with its original syntax in your source code, you can copy/paste between your source code and your database tool. That way you can develop your queries iteratively. Compile time string parsing then adds compile time syntax checks, type safety and a intuitive interface to access the columns in the result set by using your library.
As long as it's easy to translate from one to the other, it shouldn't be a big deal either way.

The most important thing in this respect for sqlpp11 would be to have an easy way to take a chunk of code and extract the actual SQL query from it, perhaps via inserting an extra line in the development/production code to write it out to a log, or to copy/paste it into some helper skeleton that does something similar. Converting back again should be fairly straightforward (albeit more manual) as long as you've only made minor tweaks to the query while iteratively testing it in the database tool, which is probably the common case.

The comparative danger of using direct string parsing is that it's far too easy to insert something that the parser can't make sense of, and either way it's likely to increase compile times over an object-model-based approach.
On Mon, Nov 11, 2013 at 9:31 PM, Roland Bock <rbock@eudoxos.de> wrote:
On 11/11/13 13:35, Abel Sinkovics wrote:
There is a library (Metaparse) which supports creating a string-based interface for libraries which is processed at compile-time. It could be used to provide a string-based interface.
Metaparse requires const char[N] arguments, right? That would be a rather atypical case for using a query interface, I'd say. Personally I have never used queries without variable parameters except in examples like the one above.
Do you mean that you "inline" the variable parameter values in the query text, as opposed to using "placeholders" which you later "bind" with actual values? If that's the case, that's a big no-no on some RDBMSs like Oracle for performance reasons (you need to reparse and replan, i.e., "recompile" the query and find again the optimal query execution plan), w/o even getting into SQL injection, etc... A SQL DB API w/o support for prepared queries which you can reuse with different bind values would be ineffective IMHO. Similarly, if you cannot bind arrays of values to limit round-trips to the DB (if supported; SQLite has no array interface, for example), you can never achieve maximum possible performance. My $0.02 :). --DD
On 2013-11-12 09:39, Dominique Devienne wrote:
On Mon, Nov 11, 2013 at 9:31 PM, Roland Bock <rbock@eudoxos.de> wrote:
On 11/11/13 13:35, Abel Sinkovics wrote:
There is a library (Metaparse) which supports creating a string-based interface for libraries which is processed at compile-time. It could be used to provide a string-based interface.
Metaparse requires const char[N] arguments, right? That would be a rather atypical case for using a query interface, I'd say. Personally I have never used queries without variable parameters except in examples like the one above.
Do you mean that you "inline" the variable parameter values in the query text as opposed to using "placeholders" which you later "bind" with actual values?
I meant that real-life queries are not as simple as the case discussed above.
If that's the case, that's a big no-no on some RDBMSs like Oracle for performance reasons (you need to reparse and replan, i.e., "recompile" the query and find again the optimal query execution plan), w/o even getting into SQL injection, etc... A SQL DB API w/o support for prepared queries which you can reuse with different bind values would be ineffective IMHO. Similarly, if you cannot bind arrays of values to limit round-trips to the DB (if supported; SQLite has no array interface, for example), you can never achieve maximum possible performance.
My $0.02 :). --DD
Thanks for the input. Prepared queries are in fact not supported. I'll read up on them and report back. Regards, Roland
On 13/11/2013 02:17, Quoth Roland Bock:
Thanks for the input. Prepared queries are in fact not supported. I'll read up on them and report back.
In SQL Server, queries are automatically "prepared" as long as you execute the same SQL statement as previously (so the only variations are in the parameter values). Provided that you do value insertion via actual parameters (either named or positional, depending on the target SQL dialect) and not via directly inserting constants into the SQL query itself, it should be fairly straightforward to extend to a dialect that requires explicit preparing; you'd just need to provide a syntax that allows you to cache a query object in a variable and then execute it multiple times with different actual values. (Using parameters is also preferred over direct insertion because it lessens the risk of script injection.)
On 2013-11-13 00:48, Gavin Lambert wrote:
On 13/11/2013 02:17, Quoth Roland Bock:
Thanks for the input. Prepared queries are in fact not supported. I'll read up on them and report back.
In SQL Server, queries are automatically "prepared" as long as you execute the same SQL statement as previously (so the only variations are in the parameter values). That would be perfect, of course.
Provided that you do value insertion via actual parameters (either named or positional, depending on the target SQL dialect) and not via directly inserting constants into the SQL query itself, it should be fairly straightforward to extend to a dialect that requires explicit preparing; you'd just need to provide a syntax that allows you to cache a query object in a variable and then execute it multiple times with different actual values. (Using parameters is also preferred over direct insertion because it lessens the risk of script injection.)
That should be relatively easy to do (and if you encapsulate the query generation in a parametrized function it is already there). Thanks and regards, Roland
On 13/11/2013 18:24, Quoth Roland Bock:
On 2013-11-13 00:48, Gavin Lambert wrote:
Provided that you do value insertion via actual parameters (either named or positional, depending on the target SQL dialect) and not via directly inserting constants into the SQL query itself, it should be fairly straightforward to extend to a dialect that requires explicit preparing; you'd just need to provide a syntax that allows you to cache a query object in a variable and then execute it multiple times with different actual values. (Using parameters is also preferred over direct insertion because it lessens the risk of script injection.)
That should be relatively easy to do (and if you encapsulate the query generation in a parametrized function it is already there).
I'm not sure I'm reading that last part right, but what I am talking about is very different from having a C++ function that accepts parameters and internally generates and runs the query. I haven't looked at your code enough to formulate a sensible example using its syntax, but if I can offer an analogy using Boost.Format's syntax (don't take it too literally):

boost::format GenerateQuery()
{
  // not intended to be fully safe SQL, it's just for show
  // also I'm not implying you should use strings to construct queries
  return boost::format("select * from T where a=%1% and b=%2%");
}

std::string IndirectCall(boost::format query, int a, const std::string& b)
{
  return (query % a % b).str(); // imagine this actually runs SQL
}

std::string DirectCall(int a, const std::string& b)
{
  return IndirectCall(GenerateQuery(), a, b);
}

The GenerateQuery method returns a "query object" that defines the action to be done but not the specific parameter values on which it acts. It is expensive to create (because this is where all the parsing happens) but is (fairly) cheap to copy, and reusable without copying. This is analogous to preparing an SQL query.

Outside code could run GenerateQuery once (perhaps at program startup), cache the result somewhere, and then use it repeatedly via IndirectCall, thereby only paying the parsing cost once. (Boost.Format allows the same object to be reused once str() is invoked on it, or if it is explicitly cleared; or you can copy it prior to adding actual values to the copy.) This is analogous to running a prepared query with different actual parameters. (The db connector backend usually has specific separation between the SQL statement and the parameters -- you wouldn't actually insert them into the string the way that this example implies. This also typically means you don't have to escape values to prevent injection, as they're already known to be data.)

The DirectCall method is the least performant case; it explicitly regenerates the query each time.
This is analogous to running an unprepared query *or* actually inserting the real values directly into the SQL instead of using the db connector's parameter mechanism. (But this is the most common way that SQL gets used, because it's "easier". I haven't looked too closely, but I get the impression that this is the only way that sqlpp11 works at present too.)

The key point being that for prepared statements to work (whether explicit as in Oracle or automatic as in SQL Server), you need to be able to define the SQL statement itself using only constants and placeholders where variables go, but not provide the actual values of the variables. Then when you actually go to execute the query you pass the constant SQL statement along with the collection of actual values to fill in the placeholders for this particular invocation.

Different database backends support different syntaxes for placeholders; most support positional placeholders and a few offer named placeholders. (In your case I would recommend using only named placeholders on the user side, because it will be very unobvious to the user what the "real" positional order in the underlying SQL query will be. When interfacing to a connector for a DB that only supports positional placeholders you would have to convert them appropriately.)
On Wed, Nov 13, 2013 at 7:28 AM, Gavin Lambert <gavinl@compacsort.com>wrote:
The GenerateQuery method returns a "query object" that defines the action to be done but not the specific parameter values on which it acts. It is expensive to create (because this is where all the parsing happens) but is (fairly) cheap to copy, and reusable without copying. This is analogous to preparing an SQL query.
I agree with Gavin's points. Support for prepared queries returning some kind of functor / lambda taking as many typed variables as the query has placeholders is essential IMHO, because unless you have true binding of in-memory data, it's a "toy" to run literal SQL only. (Whether it's an SQL string literal parsed at compile time or a compile-time DSL matters less to me.)

But I'd like to flesh out my point that array binding is essential to performance. Most commenters focus on queries, i.e. selects, where you send little data (just a few bind variables) and receive lots of data. That can be made efficient in Oracle just by turning on prefetching on the statement handle, w/o any other change to the client code. It's the OCI client library which does the work for you transparently.

But when you insert or update, you have to send a lot more data than you receive, and if you do a round-trip to the server on every row, you sometimes get orders-of-magnitude performance differences. For example, inserting 10,000 rows into a two-column table (an int ranging from 1 to 10,000, and a fixed "payload" string reading "azertyuiop" for all rows), doing it the "normal" way takes 0.578 sec, while sending the 10,000 rows at once takes 0.007 sec. This timing is with the Oracle server and the client on the same Windows laptop. (The same test running on Linux, talking to a Linux server in the same room over the LAN, gives 0.310 sec for scalar inserts and < 0.001 sec for the array insert.) Increase the latency by using a server on the WAN, and the difference can be even worse. But unlike the select case, there's no prefetching equivalent; you must use separate APIs to bind, and it can be done differently too (one array per column, or one array of structs with a member for each column, or dynamic binding via callbacks that OCI calls to fill its "buffers" before sending the whole lot server-side).
An API like SQLite doesn't need array binding, because it's not client-server, but server round-trips are what makes or breaks an application's performance when dealing with databases, at least for applications dealing with a high volume of data, which is often one of the primary reasons they are coded in C++ in the first place.

An API like sqlpp11 cannot be everything to everyone, of course. You're very much entitled to decide for yourself what is in-bounds and out-of-bounds. But I thought I'd provide more info on the above so that you are at least more aware of the problem space as I see it myself. Cheers, --DD
On 2013-11-13 12:09, Dominique Devienne wrote:
On Wed, Nov 13, 2013 at 7:28 AM, Gavin Lambert <gavinl@compacsort.com>wrote:
The GenerateQuery method returns a "query object" that defines the action to be done but not the specific parameter values on which it acts. It is expensive to create (because this is where all the parsing happens) but is (fairly) cheap to copy, and reusable without copying. This is analogous to preparing an SQL query.
I agree with Gavin's points. Support for prepared queries returning some kind of functor / lambda taking as many typed variables as the query has placeholders is essential IMHO, because unless you have true binding of in-memory data, it's a "toy" to run literal SQL only. (whether it's the SQL string literal parsed at compile time, or a compile time DSL matters less to me).
But I'd like to flesh out my point that array-binding is essential to performance. Most commenters focus on queries, i.e. selects, where you send little data (just a few bind variables) and receive lots of data. That can be made efficient in Oracle just by turning on prefetching on the statement handle, w/o any other change to the client code. It's the OCI client library which does the work for you transparently.
But when you insert or update, you have to send a lot more data than you receive, and if you do a round-trip to the server on every row, you can get orders-of-magnitude performance differences. For example, inserting 10,000 rows in a two-column table (an int ranging from 1 to 10,000, and a fixed "payload" string for all rows that reads "azertyuiop"), doing it the "normal" way takes 0.578 sec, while sending the 10,000 rows at once takes 0.007 sec. This timing is with the Oracle server and the client on the same Windows laptop. (The same test running on Linux, talking to a Linux server in the same room over the LAN, gives 0.310 sec for scalar inserts and < 0.001 sec for the array insert.) Increase the latency by using a server on the WAN, and the difference can be even worse. But unlike the select case, there's no prefetching equivalent: you must use separate APIs to bind, and it can be done in different ways too (one array per column, or one array of structs with a member for each column, or dynamic binding via callbacks that OCI calls to fill its "buffers" before sending the whole lot server-side).
An API like SQLite doesn't need array-binding, because it's not client-server, but server round-trips are what make or break an application's performance when dealing with databases, at least for those which deal with high volumes of data, which is often one of the primary reasons they are coded in C++ in the first place.
An API like sqlpp11 cannot be everything to everyone of course. You're very much entitled to decide for yourself what is in-bounds and out-of-bounds. But I thought I'd provide more info on the above so that you at least are more aware of the problem space as I see it myself.
Cheers, --DD
Gavin, Dominique, Thanks to both of you for the clarification and explanations! In fact, I misread Gavin's mail and, as I answered Dominique a few days ago, I need to read up on prepared statements. I'll probably come back to you about that. Best regards, Roland
Roland Bock 2013-11-10 02:03:
Hi,
Hi, Code depends on boost? -- Regards, niXman ___________________________________________________ Dual-target(32 & 64-bit) MinGW-W64 compilers for 32 and 64-bit Windows: http://sourceforge.net/projects/mingw-w64/ ___________________________________________________ Another online IDE: http://liveworkspace.org/
On 2013-11-10 02:38, niXman wrote:
Roland Bock 2013-11-10 02:03:
Hi,
Hi,
Code depends on boost?
No, sqlpp11 itself only depends on a decent C++11 compiler and a matching STL. I test with clang-3.2 and gcc-4.8 on Ubuntu as of now. We are using a version slightly older than the current release with clang-3.1 on FreeBSD. Connectors could have other dependencies of course. The current connectors for MySQL/MariaDb and Sqlite3 depend on the respective C-client libraries. The MySQL connector currently requires the thread_local feature, so clang-3.2 won't work, but I think I'll replace that with std::call_once. Regards, Roland
On 11/09/13 16:03, Roland Bock wrote:
Hi,
over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
https://github.com/rbock/sqlpp11 https://github.com/rbock/sqlpp11/wiki
[snip] The code here:

https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/table_base.h

contains:

template<typename Table, typename... ColumnSpec> struct table_base_t : public ColumnSpec::_name_t::template _member_t<column_t<Table, ColumnSpec>>... {...};

which looks like it declares a table row, where the columns come from ColumnSpec...

What's confusing to me is the code here:

https://github.com/rbock/sqlpp11/wiki/Tables

contains what I assume are the ColumnSpecs; however, there's no column_t, there's only column_type. Does table_base.h have a typo?

-regards, Larry
On 11/10/13 18:02, Larry Evans wrote:
On 11/09/13 16:03, Roland Bock wrote:
Hi,
over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
https://github.com/rbock/sqlpp11 https://github.com/rbock/sqlpp11/wiki
[snip] The code here:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/table_base.h
contains:
template<typename Table, typename... ColumnSpec> struct table_base_t : public ColumnSpec::_name_t::template _member_t<column_t<Table, ColumnSpec>>... {...};
which looks like it declares a table row, where the columns come from ColumnSpec...
What's confusing to me is the code here:
https://github.com/rbock/sqlpp11/wiki/Tables
contains, what I assume are the ColumnSpecs; however, there's no column_t, there's only column_type. Does table_base.h have a typo?
OOPS. I see it now: https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/column.h
On 2013-11-11 01:47, Larry Evans wrote:
On 11/10/13 18:02, Larry Evans wrote:
On 11/09/13 16:03, Roland Bock wrote:
Hi,
over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
https://github.com/rbock/sqlpp11 https://github.com/rbock/sqlpp11/wiki
[snip] The code here:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/table_base.h
contains:
template<typename Table, typename... ColumnSpec> struct table_base_t : public ColumnSpec::_name_t::template _member_t<column_t<Table, ColumnSpec>>... {...};
which looks like it declares a table row, where the columns come from ColumnSpec...
In understanding how the library works, this is probably one of the crucial parts. As you guessed correctly, it declares the innards of a table class. The somewhat convoluted-looking inheritance adds columns as members to the class. The column_t<Table, ColumnSpec> instantiates a column class with all required information like the value type, whether it can be NULL, its textual representation, etc. The ColumnSpec also contains a template like this:

template<typename T> struct _member_t { T foo; };

Inheriting from an instance of this template adds a member with the respective name (foo in this example) to the derived class. Thus, the code you cited adds objects representing columns as appropriately named members to the table class. The same technique is used when creating aliases for expressions, using selects as pseudo tables and constructing result row classes.
What's confusing to me is the code here:
https://github.com/rbock/sqlpp11/wiki/Tables
contains, what I assume are the ColumnSpecs; however, there's no column_t, there's only column_type. Does table_base.h have a typo?
OOPS. I see it now:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/column.h
I guess that part of the confusion is due to names. I know that the code has inconsistent use of the _type and _t suffixes, which certainly doesn't help and is on my list of todos. Having column_t and a member class _column_type is probably also adding to the confusion. Maybe renaming _column_type to _column_traits_t would be better? Best regards, Roland
On 11/11/13 02:36, Roland Bock wrote:
On 2013-11-11 01:47, Larry Evans wrote:
On 11/10/13 18:02, Larry Evans wrote:
On 11/09/13 16:03, Roland Bock wrote:
Hi,
over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
https://github.com/rbock/sqlpp11 https://github.com/rbock/sqlpp11/wiki
[snip] The code here:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/table_base.h
contains:
template<typename Table, typename... ColumnSpec> struct table_base_t : public ColumnSpec::_name_t::template _member_t<column_t<Table, ColumnSpec>>... {...};
which looks like it declares a table row, where the columns come from ColumnSpec...
In understanding how the library works, this is probably one of the crucial parts. As you guessed correctly, it declares the innards of a table class. The somewhat convoluted looking inheritance adds columns as members to the class. The
column_t<Table, ColumnSpec>
instantiates a column class with all required information like the value type, whether it can be NULL, its textual representation, etc. The ColumnSpec also contains a template like this:
template<typename T> struct _member_t { T foo; };
Inheriting from an instance of this template adds a member with the respective name (foo in this example) to derived class. Thus, the code you cited adds objects representing columns as appropriately named members to the table class.
Ah. I see. So tab_sample, if declared as: TabSample tab_sample; where TabSample is from: https://github.com/rbock/sqlpp11/blob/master/tests/TabSample.h could be used in expressions like: tab_sample.alpha; tab_sample.beta; tab_sample.gamma; Nice! Makes meaningful column names. However, instead of table_base_t, would maybe row_base_t be a better name since it's really a row in a table?
The same technique is used when creating aliases for expressions, using selects as pseudo tables and constructing result row classes.
What's confusing to me is the code here:
https://github.com/rbock/sqlpp11/wiki/Tables
contains, what I assume are the ColumnSpecs; however, there's no column_t, there's only column_type. Does table_base.h have a typo?
OOPS. I see it now:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/column.h
I guess that part of the confusion is due to names. I know that the code has inconsistent use of _type and _t suffix, which certainly doesn't help and is on my list of todos. Having column_t and a member class _column_type is probably also adding to the confusion.
Maybe renaming _column_type to _column_traits_t would be better?
Yes, a bit. However, I must confess, I was a bit careless while reading the code. Sorry about that :(
Best regards,
Roland
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
On 2013-11-11 15:06, Larry Evans wrote:
On 11/11/13 02:36, Roland Bock wrote:
On 2013-11-11 01:47, Larry Evans wrote:
On 11/10/13 18:02, Larry Evans wrote:
On 11/09/13 16:03, Roland Bock wrote:
Hi,
over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
https://github.com/rbock/sqlpp11 https://github.com/rbock/sqlpp11/wiki
[snip] The code here:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/table_base.h
contains:
template<typename Table, typename... ColumnSpec> struct table_base_t : public ColumnSpec::_name_t::template _member_t<column_t<Table, ColumnSpec>>... {...};
which looks like it declares a table row, where the columns come from ColumnSpec...
In understanding how the library works, this is probably one of the crucial parts. As you guessed correctly, it declares the innards of a table class. The somewhat convoluted looking inheritance adds columns as members to the class. The
column_t<Table, ColumnSpec>
instantiates a column class with all required information like the value type, whether it can be NULL, its textual representation, etc. The ColumnSpec also contains a template like this:
template<typename T> struct _member_t { T foo; };
Inheriting from an instance of this template adds a member with the respective name (foo in this example) to derived class. Thus, the code you cited adds objects representing columns as appropriately named members to the table class.
Ah. I see. So tab_sample, if declared as:
TabSample tab_sample;
where TabSample is from:
https://github.com/rbock/sqlpp11/blob/master/tests/TabSample.h
could be used in expressions like:
tab_sample.alpha; tab_sample.beta; tab_sample.gamma;
Nice! Makes meaningful column names.
Exactly! And this is transported into the result rows as well, e.g.

for (const auto& row: db.run(select(all_of(tab_sample)).from(tab_sample)))
{
    std::cerr << row.alpha << '\n';
    std::cerr << row.beta << '\n';
    std::cerr << row.gamma << '\n';
}

And they also have appropriate types of course :-)
However, instead of table_base_t, would maybe row_base_t be a better name since it's really a row in a table?
It is a set of columns, which more or less represents a table (if you ignore indexes and foreign keys, etc). So I wouldn't say that it is a row.
On 2013-11-11 11:27, Rodrigo Madera wrote:
On Sat, Nov 9, 2013 at 8:03 PM, Roland Bock <rbock@eudoxos.de> wrote:
Hi,
Hello,
Just curious over some points:
Does it support binary transfers over the wire?

The library creates template expression trees for the queries. As of now, the tree is serialized and sent to the database connector when you call

db.run(query)

Result rows are currently returned from the connector's result object as const char**, see database_api/api.h. Both the serialization of the query and the representation of result rows as const char** are part of one possible interface, which happens to work quite well with some databases. Since we have all the information in the expression tree, it should be possible to send queries and receive results in different ways. Can you send me links to the respective database APIs? Preferably ones that I can experiment with on Ubuntu?
Does it support bulk operations?

You mean like multi-inserts? Not right now, but that wouldn't be too hard to do, I think.

Will it be C++11 only?

That's what I tried to imply with the '11' in sqlpp11 ;-)

All kidding aside:

* C++03: I think it would be possible to translate the library to C++03 using boost MPL and BOOST_AUTO. I won't go there, personally, but feel free to do so.
* C++14 and up: I have every intention to create the respective sqlpp14 and up.

Regards, Roland
On 11/11/2013 7:53 AM, Roland Bock wrote:
On 2013-11-11 11:27, Rodrigo Madera wrote:
On Sat, Nov 9, 2013 at 8:03 PM, Roland Bock <rbock@eudoxos.de> wrote:
Hi,
Hello,
Just curious over some points:
Does it support binary transfers over the wire? The library creates template expression trees for the queries. As of now, the tree is serialized and sent to the database connector when you call
db.run(query)
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
You are kidding? Is std::vector<std::string> too advanced? Why C++ programmers are still using C null-terminated strings I will never understand.
On 11/11/2013 7:53 AM, Roland Bock wrote:
On 2013-11-11 11:27, Rodrigo Madera wrote:
On Sat, Nov 9, 2013 at 8:03 PM, Roland Bock <rbock@eudoxos.de> wrote:
Hi,
Hello,
Just curious over some points:
Does it support binary transfers over the wire? The library creates template expression trees for the queries. As of now, the tree is serialized and sent to the database connector when you call
db.run(query)
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
On 2013-11-11 15:36, Edward Diener wrote:

You are kidding? Is std::vector<std::string> too advanced? Why C++ programmers are still using C null-terminated strings I will never understand.

No kidding. The database client libraries I've used so far are C libraries yielding char**. That is why the current connector interface also uses const char**. BTW: These are typically not null-terminated but are delivered with a size_t* or similar to inform about the length...

But:

1. As a user of the library you don't get in touch with those. The members of a row are NOT char*, of course. You get integers as integers, floats as floats and of course you get text as std::string. I wouldn't dare to use char* there ;-)

2. It would be really easy to add additional interface options so that the connector library could yield std::vector<std::string>

Regards, Roland
On 11/11/13 09:01, Roland Bock wrote:
On 11/11/2013 7:53 AM, Roland Bock wrote: [snip]
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
On 2013-11-11 15:36, Edward Diener wrote:

You are kidding? Is std::vector<std::string> too advanced? Why C++ programmers are still using C null-terminated strings I will never understand.

No kidding. The database client libraries I've used so far are C libraries yielding char**. That is why the current connector interface also uses const char**. BTW: These are typically not null-terminated but are delivered with a size_t* or similar to inform about the length...
But:
1. As a user of the library you don't get in touch with those. The members of a row are NOT char*, of course. You get integers as integers, floats as floats and of course you get text as std::string. I wouldn't dare to use char* there ;-)
Since the return values from the database are char**, these must be parsed and converted into a vector of, for example, TabSample, where TabSample is from: https://github.com/rbock/sqlpp11/blob/master/tests/TabSample.h Sounds like a job for spirit. However, I guess this option was rejected for reasons similar to why proto was rejected for creating the sql string to the dbms, as expressed in your reply to my other post: http://article.gmane.org/gmane.comp.lib.boost.devel/246117 Is that right? [snip]
On 2013-11-11 16:16, Larry Evans wrote:
On 11/11/13 09:01, Roland Bock wrote:
On 11/11/2013 7:53 AM, Roland Bock wrote: [snip]
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
On 2013-11-11 15:36, Edward Diener wrote:

You are kidding? Is std::vector<std::string> too advanced? Why C++ programmers are still using C null-terminated strings I will never understand.

No kidding. The database client libraries I've used so far are C libraries yielding char**. That is why the current connector interface also uses const char**. BTW: These are typically not null-terminated but are delivered with a size_t* or similar to inform about the length...
But:
1. As a user of the library you don't get in touch with those. The members of a row are NOT char*, of course. You get integers as integers, floats as floats and of course you get text as std::string. I wouldn't dare to use char* there ;-)
Since the return values from the database are char**, these must be parsed and converted into a vector of, for example, TabSample, where TabSample is from:
https://github.com/rbock/sqlpp11/blob/master/tests/TabSample.h
Sounds like a job for spirit. However, I guess this option was rejected for reasons similar to why proto was rejected for creating the sql string to the dbms, as expressed in your reply to my other post:
http://article.gmane.org/gmane.comp.lib.boost.devel/246117
Is that right?
First, the result is not a sequence of TabSample. The result row type is created based on what you select. That could be analogous to TabSample's rows, but you could select fewer or more columns or different orders... The current status is that the members of the result row parse "their" part of the char**, based on the position and the type of the selected column, see for instance: https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/integral.h Look for struct _result_entry_t, currently at line 51. There are respective classes for floating_point, text and bool. These classes could certainly be optimized. Spirit might be able to help here, but yes, you are right, I fear the compile times. In our code base we have hundreds of queries to compile. So compile time is an issue for continuous integration. Regards, Roland
On 11/11/13 09:44, Roland Bock wrote:
On 2013-11-11 16:16, Larry Evans wrote:
On 11/11/13 09:01, Roland Bock wrote:
On 11/11/2013 7:53 AM, Roland Bock wrote: [snip]
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
On 2013-11-11 15:36, Edward Diener wrote:

You are kidding? Is std::vector<std::string> too advanced? Why C++ programmers are still using C null-terminated strings I will never understand.

No kidding. The database client libraries I've used so far are C libraries yielding char**. That is why the current connector interface also uses const char**. BTW: These are typically not null-terminated but are delivered with a size_t* or similar to inform about the length...
But:
1. As a user of the library you don't get in touch with those. The members of a row are NOT char*, of course. You get integers as integers, floats as floats and of course you get text as std::string. I wouldn't dare to use char* there ;-)
Since the return values from the database are char**, these must be parsed and converted into a vector of, for example, TabSample, where TabSample is from:
https://github.com/rbock/sqlpp11/blob/master/tests/TabSample.h
Sounds like a job for spirit. However, I guess this option was rejected for reasons similar to why proto was rejected for creating the sql string to the dbms, as expressed in your reply to my other post:
http://article.gmane.org/gmane.comp.lib.boost.devel/246117
Is that right?
First, the result is not a sequence of TabSample. The result row type is created based on what you select. That could be analogous to TabSample's rows, but you could select fewer or more columns or different orders...
The current status is that the members of the result row parse "their" part of the char**, based on the position and the type of the selected column, see for instance:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/integral.h
Look for struct _result_entry_t, currently at line 51.
There are respective classes for floating_point, text and bool.
These classes could certainly be optimized. Spirit might be able to help here, but yes, you are right, I fear the compile times. In our code base we have hundreds of queries to compile. So compile time is an issue for continuous integration.
After some more thought, I think Spirit (really Spirit's Qi) would be overkill since really only parsing of primitive types (e.g. int, float, string) is needed.

I ran the SelectTest and got:

-{--cut here--
------------------------
SELECT tab_sample.alpha,tab_sample.beta,tab_sample.gamma FROM tab_sample WHERE ((tab_sample.alpha>7)AND(tab_sample.alpha=ANY(SELECT tab_sample.alpha FROM tab_sample WHERE (tab_sample.alpha<3)))) LIMIT 3
------------------------
SELECT tab_sample.alpha(tab_sample.beta||'hallenhalma')(tab_sample.beta||'hallenhalma')tab_sample.alpha 3 (SELECT DISTINCT STRAIGHT_JOIN left.alpha,left.beta,(SELECT right.a FROM (SELECT (tab_sample.gamma) AS a FROM tab_sample WHERE (tab_sample.gamma=1)) AS right) FROM (tab_sample) AS left,(SELECT (tab_sample.gamma) AS a FROM tab_sample WHERE (tab_sample.gamma=1)) AS right WHERE ((tab_sample.beta='hello world')ANDSELECT tab_sample.gamma FROM tab_sample) GROUP BY left.gamma,right.a HAVING (right.a!=1) ORDER BY left.beta ASC LIMIT 17 OFFSET 3) AS a
Compilation finished at Mon Nov 11 11:08:27
-}--cut here--

It would really help if the outputs were prefixed with some title. That way, one could look for where the title in the source is output, and easily find the output from the code.

I assume that the output bracketed by the ------------ lines is produced by lines 305-316:

-{--cut here--
{
  auto s = dynamic_select(db, all_of(t)).dynamic_from().dynamic_where().dynamic_limit().dynamic_offset();
  s = s.add_from(t);
  s = s.add_where(t.alpha > 7 and t.alpha == any(select(t.alpha).from(t).where(t.alpha < 3)));
  s = s.set_limit(30);
  s = s.set_limit(3);
  std::cerr << "------------------------\n";
  s.serialize(std::cerr, db);
  std::cerr << "------------------------\n";
  using T = decltype(s);
  static_assert(sqlpp::is_regular<T>::value, "type requirement");
-}--cut here--

Is there an actual run against a real database somewhere in the tests?
The 2 .cpp files I checked (SelectTest.cpp and UpdateTest.cpp) both use db of type MockDb, which doesn't seem to be connected to any real database. [snip] -Larry
On 2013-11-11 19:00, Larry Evans wrote:
On 11/11/13 09:44, Roland Bock wrote:
On 2013-11-11 16:16, Larry Evans wrote:
On 11/11/13 09:01, Roland Bock wrote:
On 11/11/2013 7:53 AM, Roland Bock wrote: [snip]
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
On 2013-11-11 15:36, Edward Diener wrote:

You are kidding? Is std::vector<std::string> too advanced? Why C++ programmers are still using C null-terminated strings I will never understand.

No kidding. The database client libraries I've used so far are C libraries yielding char**. That is why the current connector interface also uses const char**. BTW: These are typically not null-terminated but are delivered with a size_t* or similar to inform about the length...
But:
1. As a user of the library you don't get in touch with those. The members of a row are NOT char*, of course. You get integers as integers, floats as floats and of course you get text as std::string. I wouldn't dare to use char* there ;-)
Since the return values from the database are char**, these must be parsed and converted into a vector of, for example, TabSample, where TabSample is from:
https://github.com/rbock/sqlpp11/blob/master/tests/TabSample.h
Sounds like a job for spirit. However, I guess this option was rejected for reasons similar to why proto was rejected for creating the sql string to the dbms, as expressed in your reply to my other post:
http://article.gmane.org/gmane.comp.lib.boost.devel/246117
Is that right?
First, the result is not a sequence of TabSample. The result row type is created based on what you select. That could be analogous to TabSample's rows, but you could select fewer or more columns or different orders...
The current status is that the members of the result row parse "their" part of the char**, based on the position and the type of the selected column, see for instance:
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/integral.h
Look for struct _result_entry_t, currently at line 51.
There are respective classes for floating_point, text and bool.
These classes could certainly be optimized. Spirit might be able to help here, but yes, you are right, I fear the compile times. In our code base we have hundreds of queries to compile. So compile time is an issue for continuous integration.
After some more thought, I think Spirit (really spirit's qi) would be overkill since really only parsing primitive types (e.g. int, float, string) are needed.
I ran the SelectTest and got: -{--cut here-- ------------------------ SELECT tab_sample.alpha,tab_sample.beta,tab_sample.gamma FROM tab_sample WHERE ((tab_sample.alpha>7)AND(tab_sample.alpha=ANY(SELECT tab_sample.alpha FROM tab_sample WHERE (tab_sample.alpha<3)))) LIMIT 3------------------------ SELECT tab_sample.alpha(tab_sample.beta||'hallenhalma')(tab_sample.beta||'hallenhalma')tab_sample.alpha 3 (SELECT DISTINCT STRAIGHT_JOIN left.alpha,left.beta,(SELECT right.a FROM (SELECT (tab_sample.gamma) AS a FROM tab_sample WHERE (tab_sample.gamma=1)) AS right) FROM (tab_sample) AS left,(SELECT (tab_sample.gamma) AS a FROM tab_sample WHERE (tab_sample.gamma=1)) AS right WHERE ((tab_sample.beta='hello world')ANDSELECT tab_sample.gamma FROM tab_sample) GROUP BY left.gamma,right.a HAVING (right.a!=1) ORDER BY left.beta ASC LIMIT 17 OFFSET 3) AS a Compilation finished at Mon Nov 11 11:08:27 -}--cut here--
It would really help if the outputs were prefixed with some title. That way, one could look where the title in the source is output, and easily find the output from the code.
I assume that the output bracketed by the: ------------ lines are produced by lines 305-316: -{--cut here-- { auto s = dynamic_select(db, all_of(t)).dynamic_from().dynamic_where().dynamic_limit().dynamic_offset(); s = s.add_from(t); s = s.add_where(t.alpha > 7 and t.alpha == any(select(t.alpha).from(t).where(t.alpha < 3))); s = s.set_limit(30); s = s.set_limit(3); std::cerr << "------------------------\n"; s.serialize(std::cerr, db); std::cerr << "------------------------\n"; using T = decltype(s); static_assert(sqlpp::is_regular<T>::value, "type requirement"); -}--cut here-- Is there an actual run against a real database somewhere in the tests? The 2 .cpp files I checked (SelectTest.cpp and UpdateTest.cpp) both use db of type MockDb, which doesn't seem to be connected to any real database. [snip]
-Larry
Hi Larry, those tests are really just tests, mostly compile-time tests to ensure that expressions yield the right types. I will have to add quite a few additional ones which check the generated strings. There are two connector libraries (MySQL/MariaDb and Sqlite3) listed here: https://github.com/rbock/sqlpp11#requirements They also contain tests, and those tests actually do something, even though it is not much :-) Based on that it should be rather simple to create a connector library for other databases. What environment (OS, compiler) are you using, btw? Cheers, Roland
On 11/11/13 14:40, Roland Bock wrote:
On 2013-11-11 19:00, Larry Evans wrote: [snip]
Is there an actual run against a real database somewhere in the tests? The 2 .cpp files I checked (SelectTest.cpp and UpdateTest.cpp) both use db of type MockDb, which doesn't seem to be connected to any real database. [snip]
-Larry
Hi Larry,
those tests are really just tests, mostly compile time test to ensure that expressions yield the right types. I will have to add quite a few additional ones which check the generated strings.
There are two connector libraries (MySQL/MariaDb and Sqlite3) listed here:
https://github.com/rbock/sqlpp11#requirements
They also contain tests, and those tests actually do something, even though it is not much :-)
Thanks Roland. I'll try one of those next.
Based on that it should be rather simple to create a connector library for other databases.
What environment (OS, compiler) are you using, btw?
OS=ubuntu 12.04 LTS COMPILER=gcc4.8.1 ( downloaded from: http://gcc.petsads.us/releases/gcc-4.8.1/gcc-4.8.1.tar.bz2 ) I've also a clang3.? compiler, but haven't used it in a while.
Cheers,
Roland
The file sqlpp11-connector-sqlite3/CMakeLists.txt had:
include_directories("${PROJECT_SOURCE_DIR}/include")
which led to a compile error because the compiler wasn't looking in sqlpp11/include. Adding the following line:
include_directories("${PROJECT_SOURCE_DIR}/../sqlpp11/include")
seemed to solve the problem. Is there some reason why the second include_directories statement wasn't included in the CMakeLists.txt file?
-regards, Larry
I assumed you'd be installing sqlpp11 first.
Regards, Roland
I installed each of the git clones in separate directories:

/home/evansl/prog_dev/boost/git/sqlpp11:
total used in directory 40 available 76979896
drwxrwxr-x 8 evansl evansl 4096 Nov 12 04:42 .
drwxr-xr-x 8 evansl evansl 4096 Nov 10 19:24 ..
drwxrwxr-x 5 evansl evansl 4096 Nov 11 22:17 build
-rw-rw-r-- 1 evansl evansl  134 Nov 11 18:21 cxxflags.txt
-rw-rw-r-- 1 evansl evansl   61 Nov 11 18:21 cxxflags.txt~
drwxrwxr-x 2 evansl evansl 4096 Nov 11 08:49 install
drwxrwxr-x 3 evansl evansl 4096 Nov 11 22:19 logs
drwxr-xr-x 6 evansl evansl 4096 Nov 10 19:26 sqlpp11
drwxr-xr-x 6 evansl evansl 4096 Nov 12 04:42 sqlpp11-connector-mysql
drwxr-xr-x 6 evansl evansl 4096 Nov 11 22:17 sqlpp11-connector-sqlite3

IOW:
~/prog_dev/boost/git/sqlpp11 $ git clone https://github.com/rbock/sqlpp11.git
~/prog_dev/boost/git/sqlpp11 $ git clone https://github.com/rbock/sqlpp11-connector-sqlite3.git

Was that not right?
-regards, Larry
I meant make install :-)
Regards,
Roland
PS: I might be offline for a day or two
As shown by the attached work.log, starting from a fresh clone, I tried cmake followed by make install and got an error. Could you outline clearly what steps are needed to run the tests?
-regards, Larry
cmake needs to be told the source directory. Here's what I do, for instance:
cmake . -DCMAKE_CXX_COMPILER=g++-4.8
(I need to tell it the C++ compiler because cmake uses an older version by default on my machine)
HTH
Roland
I've tried to get cmake to work; however, it only works for the sqlpp11 directory and fails for the sqlpp11-connector-sqlite3 directory.
I've attached the Makefile and the output from `make targets_dir`. That target first clones sqlpp11, runs cmake in that directory, then runs make install in that directory. Everything works fine. However, with sqlpp11-connector-sqlite3, the clone works and the cmake works, but make gives a compilation error:
/home/evansl/prog_dev/boost/git/sqlpp11/sqlpp11-connector-sqlite3/src/connection.cpp:29:31: fatal error: sqlpp11/exception.h: No such file or directory
#include <sqlpp11/exception.h>
caused by *not* including:
-I/home/evansl/prog_dev/boost/git/sqlpp11/sqlpp11/include
on the compiler command line. Is there something else I'm doing wrong?
-regards, Larry
Hi Larry,
First of all, thank you for your patience!
As far as I can tell, the problem results from your install directory not being in the standard include path. I have updated the CMakeLists file in the sqlpp11-connector-sqlite3 repository to default to ../sqlpp11/include (as suggested by you a few mails earlier). This should work with your setup. I tested with your Makefile, just to be sure :-)
If you want to use the installed version, you can specify the include path via -DSQLPP11_INCLUDE_DIR=<your path>
Best regards,
Roland
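Roland's description suggests the connector resolves the sqlpp11 headers through an overridable CMake cache variable. A minimal sketch of that pattern (the variable name SQLPP11_INCLUDE_DIR is taken from the mail above; the rest is illustrative, not the repository's actual file):

```cmake
# Default to a sibling checkout of sqlpp11; overridable on the command
# line via -DSQLPP11_INCLUDE_DIR=<your path> (e.g. an install prefix).
set(SQLPP11_INCLUDE_DIR "${PROJECT_SOURCE_DIR}/../sqlpp11/include"
    CACHE PATH "Path to the sqlpp11 header directory")
include_directories("${PROJECT_SOURCE_DIR}/include" "${SQLPP11_INCLUDE_DIR}")
```

With such a default, Larry's side-by-side clone layout works out of the box, while users who ran make install can still point the build at the installed headers.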
You're welcome. Thanks for the work on this library!
I suspected something like that (the install directory not being in the standard include path).
I tried it, and it now works with sqlite3. However, I noticed that:
https://github.com/rbock/sqlpp11-connector-mysql/blob/master/CMakeLists.txt
doesn't have the same change. Wouldn't that same change be needed there too?
-regards, Larry
Yes, I wanted to wait for your confirmation. Will adjust that tonight. Thanks and regards, Roland
I'm not a cmake expert, but I'd think there would be some way to inherit this adjustment so that you could specify it somewhere and it would be propagated automagically to all the connector projects. Regards, Larry
The mysql connector has been updated as well.
I guess that in order to propagate anything, the connectors will have to find the sqlpp11 directory somehow. I'll look into it some day. Regards, Roland
On 11/11/2013 10:01 AM, Roland Bock wrote:
On 11/11/2013 7:53 AM, Roland Bock wrote:
On 2013-11-11 11:27, Rodrigo Madera wrote:
On Sat, Nov 9, 2013 at 8:03 PM, Roland Bock <rbock@eudoxos.de> wrote:
Hi,
Hello,
Just curious over some points:
Does it support binary transfers over the wire?
The library creates template expression trees for the queries. As of now, the tree is serialized and sent to the database connector when you call
db.run(query)
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
On 2013-11-11 15:36, Edward Diener wrote:
You are kidding? Is std::vector<std::string> too advanced? Why C++ programmers are still using C null-terminated strings I will never understand.
No kidding. The database client libraries I've used so far are C libraries yielding char**. That is why the current connector interface also uses const char**. BTW: These are typically not null-terminated but are delivered with a size_t* or similar to inform about the length...
It is irrelevant what the database client libraries return. If you are designing an intelligent interface for C++ end-users, I believe you should return data converted into some C++ data type(s). If the returned data is an array of pointers to something, let's have it converted, at compile time or run-time, depending on how your library works, into data which a C++ end-user can understand. I do not view 'char **' as anything I want to deal with in modern C++, no matter what it is supposed to mean.
With that said, the idea of your library looks very interesting. I have always favored an SQL C++ library as one dealing with SQL syntaxes as templated C++ constructs via a DSEL rather than SQL strings of statements. The only downside to your library is that on-the-fly SQL queries based on run-time analysis of database table structure are impossible with such a design, so I would suggest you also provide a way of generating an SQL string at run-time as a query, with all syntax checking of the string as an end-user responsibility.
I totally agree. And as I wrote in my previous mail, the library user (or C++ end-user) will not get in touch with it. The mere thought of handing out char** to the end user makes me shiver.
With that said the idea of your library looks very interesting. I have always favored an SQL C++ library as one dealing with SQL syntaxes as templated C++ constructs via a DSEL rather than SQL strings of statements. The only downside to your library is that on-the-fly SQL queries based on run-time analysis of database table structure is impossible with such a design, so that I would suggest you also provide a way of generating an SQL string at run-time as a query, with all syntax checking of the string as an end-user responsibility.
Actually, you can build the query almost completely at runtime, already. There is the verbatim method for instance.

auto s = dynamic_select(db).dynamic_columns().dynamic_from();
...
const auto cake = sqlpp::verbatim<sqlpp::text>("cake");
s.add_column(cake).add_from(tab_bakery);

for (const auto& row : db.run(s))
{
    std::cout << row.at("cake") << std::endl;
}

At the moment, I can think of the following limitations:
* You cannot use verbatim() as a table yet, but that extension would be almost trivial.
* When used as an expression, verbatim() must represent exactly one expression.
* All dynamic fields are returned as text representations, implicitly convertible to std::string

sqlpp11 would still do all the syntax checks, assuming that your verbatim strings are valid expressions (e.g. correct column names). Best regards, Roland
On 11/11/2013 4:51 PM, Roland Bock wrote:
On 2013-11-11 22:22, Edward Diener wrote:
On 11/11/2013 10:01 AM, Roland Bock wrote:
On 11/11/2013 7:53 AM, Roland Bock wrote:
On 2013-11-11 11:27, Rodrigo Madera wrote:
On Sat, Nov 9, 2013 at 8:03 PM, Roland Bock <rbock@eudoxos.de> wrote:
Hi,
Hello,
Just curious over some points:
Does it support binary transfers over the wire? The library creates template expression trees for the queries. As of now, the tree is serialized and sent to the database connector when you call
db.run(query)
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
On 2013-11-11 15:36, Edward Diener wrote: You are kidding ? Is std::vector<std::string> too advanced ? Why C++ programmers are still using C null-terminated strings I will never understand.
No kidding. The database client libraries I've used so far are C libraries yielding char**. That is why the current connector interface also uses const char**. BTW: These are typically not null-terminated but are delivered with a size_t* or similar to inform about the length...
It is irrelevant what the database client libraries return. If you are designing an intelligent interface for C++ end-users to use I believe you should create a return of data which is converted into some C++ data type(s). If the returned data is an array of pointers to something lets have it converted, at compile time or run-time, depending on how your library works, into data which a C++ end-user can understand. I do not view 'char **' as anything I want to deal with in modern C++, no matter what it is supposed to mean. I totally agree. And as I wrote in my previous mail, the library's user (or C++ end-user) will not get in touch with it. The mere thought of handing out char** to the end user makes me shiver.
With that said the idea of your library looks very interesting. I have always favored an SQL C++ library as one dealing with SQL syntaxes as templated C++ constructs via a DSEL rather than SQL strings of statements. The only downside to your library is that on-the-fly SQL queries based on run-time analysis of database table structure is impossible with such a design, so that I would suggest you also provide a way of generating an SQL string at run-time as a query, with all syntax checking of the string as an end-user responsibility.
Actually, you can build the query almost completely at runtime, already. There is the verbatim method for instance.
auto s = dynamic_select(db).dynamic_columns().dynamic_from();
...
const auto cake = sqlpp::verbatim<sqlpp::text>("cake");
s.add_column(cake).add_from(tab_bakery);

for (const auto& row : db.run(s))
{
    std::cout << row.at("cake") << std::endl;
}
At the moment, I can think of the following limitations:
* You cannot use verbatim() as a table yet, but that extension would be almost trivial.
I am not sure what this means. The end-user has to be able to specify a table to be used in a valid SQL query, so a verbatim query must accept a table. But I suspect you mean something else.
* When used as an expression, verbatim() must represent exactly one expression.
That is fine. There is little need in practical code for creating more than one verbatim query and running it in multi-threaded code.
* All dynamic fields are returned as text representations, implicitly convertible to std::string
That is fine. Here, since the end-user is doing on-the-fly run-time creation of queries, it should be his responsibility to parse the returned data.
sqlpp11 would still do all the syntax checks, assuming that your verbatim strings are valid expressions (e.g. correct column names).
That would be great if it is not too much work for sqlpp11 ! Even without syntax checking for verbatim queries it is a worthwhile feature.
On 2013-11-11 23:22, Edward Diener wrote:
On 11/11/2013 4:51 PM, Roland Bock wrote:
On 2013-11-11 22:22, Edward Diener wrote:
On 11/11/2013 10:01 AM, Roland Bock wrote:
On 2013-11-11 15:36, Edward Diener wrote: [snip] With that said the idea of your library looks very interesting. I have always favored an SQL C++ library as one dealing with SQL syntaxes as templated C++ constructs via a DSEL rather than SQL strings of statements. The only downside to your library is that on-the-fly SQL queries based on run-time analysis of database table structure is impossible with such a design, so that I would suggest you also provide a way of generating an SQL string at run-time as a query, with all syntax checking of the string as an end-user responsibility.
Actually, you can build the query almost completely at runtime, already. There is the verbatim method for instance.
auto s = dynamic_select(db).dynamic_columns().dynamic_from();
...
const auto cake = sqlpp::verbatim<sqlpp::text>("cake");
s.add_column(cake).add_from(tab_bakery);

for (const auto& row : db.run(s))
{
    std::cout << row.at("cake") << std::endl;
}
At the moment, I can think of the following limitations:
* You cannot use verbatim() as a table yet, but that extension would be almost trivial.
I am not sure what this means. The end-user has to be able to specify a table to be used in a valid SQL query, so a verbatim query must accept a table. But I suspect you mean something else.
Yeah, that was neither clear nor thought through. Sorry. What I should have written is: As of now, you cannot use strings in from():

auto s = dynamic_select(...).dynamic_from();
s.add_from("my_table"); // compile error today

But I'll make that compile rather sooner than later.
* When used as an expression, verbatim() must represent exactly one expression.
That is fine. There is little need in practical code for creating more than one verbatim query and running it in multi-threaded code.
* All dynamic fields are returned as text representations, implicitly convertible to std::string
That is fine. Here, since the end-user is doing on-the-fly run-time creation of queries, it should be his responsibility to parse the returned data.
sqlpp11 would still do all the syntax checks, assuming that your verbatim strings are valid expressions (e.g. correct column names).
That would be great if it is not too much work for sqlpp11 ! Even without syntax checking for verbatim queries it is a worthwhile feature.
Hmm, I guess that wasn't clear enough either. I meant: If you are using verbatim(), sqlpp11 assumes that you know what you are doing. It will not check those strings. But under the assumption that the provided strings are correct, the EDSL will ensure that the overall syntax of the query will be OK. So you only have to worry about your verbatim strings. I will try to get some sleep now, and to write clearer sentences tomorrow :-) Regards, Roland
On Nov 11, 2013, at 10:01 AM, Roland Bock <rbock@eudoxos.de> wrote:
On 11/11/2013 7:53 AM, Roland Bock wrote:
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
On 2013-11-11 15:36, Edward Diener wrote: You are kidding ? Is std::vector<std::string> too advanced ? Why C++ programmers are still using C null-terminated strings I will never understand.
No kidding. The database client libraries I've used so far are C libraries yielding char**. That is why the current connector interface also uses const char**. BTW: These are typically not null-terminated but are delivered with a size_t* or similar to inform about the length...
But:
1. As a user of the library you don't get in touch with those. The members of a row are NOT char*, of course. You get integers as integers, floats as floats and of course you get text as std::string. I wouldn't dare to use char* there ;-)
If the underlying buffer lifetime permits, consider returning boost::string_ref to avoid free store allocations and copying until the user decides they're necessary. (Of course, optimizing at that point may be moot considering the speed of the database layer.) ___ Rob (Sent from my portable computation engine)
On Mon, Nov 11, 2013 at 10:53 AM, Roland Bock <rbock@eudoxos.de> wrote:
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
IIRC, when debate was taking place on whether SOCI should be accepted, there was concern regarding the lack of binary format communications between the client and server. Adding a binary communications layer to SOCI was non-trivial at the time. How hard would it be to use binary transports? (when available, of course) In performance critical scenarios, this is a major benefit. Rodrigo Madera
On Mon, Nov 11, 2013 at 10:53 AM, Roland Bock <rbock@eudoxos.de> wrote:
On 2013-11-11 11:27, Rodrigo Madera wrote:
Does it support binary transfers over the wire? The library creates template expression trees for the queries. As of now, the tree is serialized and sent to the database connector when you call
db.run(query)
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
Both the serialization of the query and the representation of result rows as const char** are part of one possible interface, which happens to work quite well with some databases. Since we have all the information in the expression tree, it should be possible to send queries and receive results in different ways.
Can you send me links to respective database APIs? Preferably ones that I can experiment with on Ubuntu?
Roland, Sorry I didn't see your reply. I even sent a second message without seeing this. Answering your question on database APIs for bin transfer, I don't know of any. I always use top layers, like SOCI. I use it heavily. If you wish to see details about binary transfer I suppose the SOCI authors could give you pointers on those, and maybe the archives for this ML when that discussion was going on. Not sure, though. Regards, Rodrigo Madera
On 2013-11-11 21:22, Rodrigo Madera wrote:
On Mon, Nov 11, 2013 at 10:53 AM, Roland Bock <rbock@eudoxos.de> wrote:
On 2013-11-11 11:27, Rodrigo Madera wrote:
Does it support binary transfers over the wire? The library creates template expression trees for the queries. As of now, the tree is serialized and sent to the database connector when you call
db.run(query)
Result rows are currently to be returned from the connector's result object as const char**, see database_api/api.h.
Both the serialization of the query and the representation of result rows as const char** are part of one possible interface, which happens to work quite well with some databases. Since we have all the information in the expression tree, it should be possible to send queries and receive results in different ways.
Can you send me links to respective database APIs? Preferably ones that I can experiment with on Ubuntu?
Roland,
Sorry I didn't see your reply. I even sent a second message without seeing this.
No problem :-)
Answering your question on database APIs for bin transfer, I don't know of any. I always use top layers, like SOCI. I use it heavily.
OK, got that, but which databases are you using?
If you wish to see details about binary transfer I suppose the SOCI authors could give you pointers on those, and maybe the archives for this ML when that discussion was going on. Not sure, though.
OK, I'll look into it. I faintly remember that discussion. It might take some time, though. There is so much feedback to be processed :-) Regards, Roland
On 09-11-2013 23:03, Roland Bock wrote:
Hi,
sqlpp11 is a template library representing an embedded domain specific language (EDSL) that allows you to
* define types representing tables and columns, * construct type safe queries checked at compile time for syntax errors, type errors, name errors and even some semantic errors, * interpret results by iterating over query-specific structs with appropriately named and typed members.
Very interesting. Have you considered a syntax like auto results = db >> select(Person.name,Person.age) >> from(foo) >> where( Person.age > 42); -Thorsten
On 11/11/13 04:49, Thorsten Ottosen wrote:
On 09-11-2013 23:03, Roland Bock wrote:
Hi,
sqlpp11 is a template library representing an embedded domain specific language (EDSL) that allows you to
* define types representing tables and columns, * construct type safe queries checked at compile time for syntax errors, type errors, name errors and even some semantic errors, * interpret results by iterating over query-specific structs with appropriately named and typed members.
Very interesting.
Have you considered a syntax like
auto results = db >> select(Person.name,Person.age) >> from(foo) >> where( Person.age > 42);
-Thorsten
Or maybe using << instead of >>. The reason for using << is it's suggestive of karma operators. I'm also guessing that: auto results = db << select(Person.name,Person.age) << from(foo) << where( Person.age > 42); would be translated into some SQL string that is then passed to the actual dbms thru the connection and the result is then returned. This again sounds vaguely like what karma does, only karma just formats the data structure into a string. Was karma ever considered as part of the implementation? -Larry
On 2013-11-11 13:40, Larry Evans wrote:
On 11/11/13 04:49, Thorsten Ottosen wrote:
On 09-11-2013 23:03, Roland Bock wrote:
Hi,
sqlpp11 is a template library representing an embedded domain specific language (EDSL) that allows you to
* define types representing tables and columns, * construct type safe queries checked at compile time for syntax errors, type errors, name errors and even some semantic errors, * interpret results by iterating over query-specific structs with appropriately named and typed members.
Very interesting. Thanks :-)
Have you considered a syntax like
auto results = db >> select(Person.name,Person.age) >> from(foo) >> where( Person.age > 42);
-Thorsten
Or maybe using << instead of >>. The reason for using << is it's suggestive of karma operators. I'm also guessing that:
auto results = db << select(Person.name,Person.age) << from(foo) << where( Person.age > 42);
would be translated into some SQL string that is then passed to the actual dbms thru the connection and the result is then returned. This again sounds vaguely like what karma does, only karma just formats the data structure into a string. Was karma ever considered as part of the implementation?
-Larry
I experimented with such a syntax in 2010 (using <<) and I prefer the member functions personally. The reason is that if you have

auto s1 = select(Person.name, Person.age);
auto s2 = s1.from(Person);
auto s3 = s2.where(Person.age > 42);

then s1, s2 and s3 have different types (sqlpp is creating template expression trees). I would not expect this to happen in case of stream operators. I have no real knowledge of boost.karma, to be honest. So I can't really tell whether or not it would be useful in this context. I understand that boost.proto could be helpful in creating the EDSL and processing such trees. I tried to use boost.proto in 2010 but failed to wrap my head around it (and then gave up due to the compile times). During Meeting C++ in Dusseldorf last weekend, Eric encouraged me to try again, when there is a full C++11 version of proto :-) Feel free to jump in and give it a try with karma or proto or both :-) Regards, Roland
On 11/09/13 16:03, Roland Bock wrote:
Hi,
over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
Hi Roland, I 'git cloned' the library and looked at some of the files. Apparently there are tabs in the file. IIRC, tabs are not supposed to be in the boost source code. Also, the header files have a .h extension. Again, IIRC, boost headers should have .hpp extension. -Larry
On 2013-11-11 14:13, Larry Evans wrote:
On 11/09/13 16:03, Roland Bock wrote:
Hi,
over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
Hi Roland,
I 'git cloned' the library and looked at some of the files. Apparently there are tabs in the file. IIRC, tabs are not supposed to be in the boost source code. Also, the header files have a .h extension. Again, IIRC, boost headers should have .hpp extension.
-Larry
Hi Larry, Thanks for the hint! Well, there are quite a few things not in a state that I would dare to ask for a formal review... I consider the library very useful right now and I wouldn't want to do SQL programming in C++ without it. And I would love to see sqlpp11 or something similar to become part of boost and/or STL one fine day, but there are quite a few miles to go to get there. I'll put file names and no-tabs onto my growing ToDo list. These are probably among the earlier items to be ticked off :-) Best regards, Roland
Hi, 2013/11/10 Roland Bock <rbock@eudoxos.de>
Hi,
over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
https://github.com/rbock/sqlpp11 https://github.com/rbock/sqlpp11/wiki
That's almost what I was looking for :D Thus far, I used SOCI with my query wrapper like below:

BOOST_FUSION_DEFINE_STRUCT
(
    (opti), user,
    (std::string, account)
    (std::string, pwd)
    (std::string, name)
    (unsigned, role)
)

soci::session sql;
query<with_id<user>*()> query_users(sql, "SELECT id, account, pwd, name, role FROM user");

for (auto&& user : query_users()) {...}
--------------
// selecting zero or more results, iterating over the results
for (const auto& row : db.run(
        select(foo.name, foo.hasFun)
            .from(foo)
            .where(foo.id > 17 and foo.name.like("%bar%"))))
{
    if (row.name.is_null())
        std::cerr << "name will convert to empty string" << std::endl;
    std::string name = row.name; // text fields are implicitly convertible to string
    bool hasFun = row.hasFun;    // bool fields are implicitly convertible to bool
}

Is it possible to separate the placeholders of columns & tables? I feel it redundant to say table.column if not ambiguous. In case of more than one table, maybe table[column] is a good syntax for that. Does it support prepared statement? Maybe another set of placeholders (_1, _2, ...) can be used to generate the functor. Also, it'll be great if it works with Fusion & Optional, etc...
Dynamic Select:
----------------
auto s = dynamic_select(db, tab.foo).dynamic_columns().from(tab);
if (userWantsBar)
    s.add_column(tab.bar);
for (const auto& row : run(s))
{
    std::cerr << "row.foo: " << row.foo;
    if (userWantsBar)
        std::cerr << " row.bar: " << row.at("bar");
    std::cerr << std::endl;
}
Please let me know your questions/thoughts/suggestions/rants. Contributions welcome, of course :-)
Regards,
Roland
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
On 2013-11-11 16:00, TONGARI J wrote:
Hi,
2013/11/10 Roland Bock <rbock@eudoxos.de>
Hi,
over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
https://github.com/rbock/sqlpp11 https://github.com/rbock/sqlpp11/wiki
That's almost what I was looking for :D Nice to hear that :-)
-------------- // selecting zero or more results, iterating over the results for (const auto& row : db.run( select(foo.name, foo.hasFun) .from(foo) .where(foo.id > 17 and foo.name.like("%bar%"))))
Is it possible to separate the placeholders of columns & tables? I feel it redundant to say table.column if not ambiguous. Hmm, you could make copies of the column members, of course:

constexpr TabSample tab = {};
constexpr auto alpha = tab.alpha;
...

Maybe put the columns into a namespace to be able to differentiate in case of ambiguities.
In case of more than one table, maybe table[column] is a good syntax for that. I don't think so. A table has a given set of member columns. Using any kind of lookup is contrary to what I want to achieve.
Does it support prepared statement? Not yet. But it is a good point. Maybe another set of placeholders (_1, _2, ...) can be used to generate the functor. Would you want placeholders for values? Or for columns as well? The former shouldn't be too hard. The latter is a different story, though...
Also, it'll be great if it works with Fusion & Optional, etc...
I can see Fusion for table definition (and I would require some help with that), but where would you see Optional?
AMDG On 11/11/2013 11:16 AM, Roland Bock wrote:
On 2013-11-11 16:00, TONGARI J wrote:
In case of more than one table, maybe table[column] is a good syntax for that.
I don't think so. A table has a given set of member columns. Using any kind of lookup is contrary to what I want to achieve.
This syntax doesn't necessarily have to look up anything. If each column has a distinct type, then table[column] can be resolved at compile time, and is essentially equivalent to table.column. In Christ, Steven Watanabe
On 2013-11-11 20:44, Steven Watanabe wrote:
AMDG
On 11/11/2013 11:16 AM, Roland Bock wrote:
On 2013-11-11 16:00, TONGARI J wrote:
In case of more than one table, maybe table[column] is a good syntax for that. I don't think so. A table has a given set of member columns. Using any kind of lookup is contrary to what I want to achieve.
This syntax doesn't necessarily have to look up anything. If each column has a distinct type, then table[column] can be resolved at compile time, and is essentially equivalent to table.column.
In Christ, Steven Watanabe
Agreed, but it would look like a lookup, wouldn't it? I would consider that confusing. Regards, Roland
On 11/09/2013 11:03 PM, Roland Bock wrote:
recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
I think it could be a very useful addition. There are two potential extensions that may be worth considering. I do not consider these extensions required for adoption in Boost, but it may be a good idea to at least have given them some thought. First, make sqlpp11 work on arbitrary data structures. For instance, SQL is just one data provider in LINQ. Second, consider adding streaming SQL. Rx is an example of this. This could be done with a Boost.Asio integration. http://rx.codeplex.com/
* define types representing tables and columns,
There seems to be room for improvement here. You should consider an integration with Boost.Fusion. This way you can avoid having to build a code generator.
On 11/09/2013 11:03 PM, Roland Bock wrote:
recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
I think it could be a very useful addition. Thanks :-)
There are two potential extensions that may be worth considering. I do not consider these extensions required for adoption in Boost, but it may be a good idea to at least have given them some thought.
On 2013-11-11 16:30, Bjorn Reese wrote: First, make sqlpp11 work on arbitrary data structures. For instance, SQL is just one data provider in LINQ.
I haven't given this much thought yet, but I believe that this should be possible by using a different connector. sqlpp11 itself has no idea about evaluating the query. In particular, it does not know anything about database connections. It constructs a query for you which you then can run. And if your result has the correct type, you can interpret it using sqlpp11's result object. Whether the evaluation happens with a database, or by analyzing some XML or JSON, is totally up to the connector. Of course, as mentioned elsewhere, it might be useful to create a query representation which is not a string in those cases :-)
Second, consider adding streaming SQL. Rx is an example of this. This could be done with a Boost.Asio integration.
I just heard about Rx for the first time the other day, so let me see if I get the idea right: Instead of querying a database or a given XML file, you evaluate the query against a stream of incoming data. If a row matches, you call a callback with the selected data? If that's the general idea, I think that can be done as well: A library for evaluating incoming data would get some representation of the template expression and maybe the result_row type. In my opinion, both extensions could be done in additional libraries that make use of the query expressions generated by sqlpp11. A few additions to the interface should be sufficient.
* define types representing tables and columns,
There seems to be room for improvement here. You should consider an integration with Boost.Fusion. This way you can avoid having to build a code generator.
There seems to be a great deal of demand for using Boost.Fusion for the Table/Column types. It would be great if someone could lend me a hand with this. I am going to update the information in the wiki on what kind of information sqlpp11 requires asap (hopefully this weekend). PS: In the mail you forwarded me this morning you also asked if I were aware of N3612? http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3612.pdf Yes, and I agree with most of the described requirements and wanted to throw my hat in the ring. Personally, I believe that the EDSL approach is the way to pursue. Falling back to string-based queries is always possible; a library like sqlpp11 does not prevent that.
On Sat, Nov 9, 2013 at 4:03 PM, Roland Bock <rbock@eudoxos.de> wrote:
Hi,
over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
https://github.com/rbock/sqlpp11 https://github.com/rbock/sqlpp11/wiki
sqlpp11 is a template library representing an embedded domain specific language (EDSL) that allows you to
* define types representing tables and columns, * construct type safe queries checked at compile time for syntax errors, type errors, name errors and even some semantic errors, * interpret results by iterating over query-specific structs with appropriately named and typed members.
This results in several benefits, e.g.
* the library user operates comfortably on structs and functions, * the compiler reports many kinds of errors long before the code enters unit testing or production, * the library hides the gory details of string construction for queries and interpreting string based results returned by select calls. I.e. you don't need to use strings in query construction where you wouldn't use them in SQL and there is no need to use positional arguments or to parse strings when obtaining fields from a result row (the latter being true unless you do not know which columns to select at compile time).
The library supports both static and dynamic queries. The former offers greater benefit in terms of type and consistency checking. The latter makes it easier to construct queries on the fly.
Specific traits of databases (e.g. unsupported or non-standard features) are known at compile time as well. This way, the compiler can tell the developer at compile time if a query is not accepted by the database (e.g. if a feature is missing). And the library can form the query in the correct manner, for instance if the engine uses concat instead of operator|| to concatenate strings.
Two Examples:
=============
Static Select:
--------------
// selecting zero or more results, iterating over the results
for (const auto& row : db.run(
        select(foo.name, foo.hasFun)
            .from(foo)
            .where(foo.id > 17 and foo.name.like("%bar%"))))
{
    if (row.name.is_null())
        std::cerr << "name will convert to empty string" << std::endl;
    std::string name = row.name; // text fields are implicitly convertible to string
    bool hasFun = row.hasFun;    // bool fields are implicitly convertible to bool
}
Dynamic Select:
----------------
auto s = dynamic_select(db, tab.foo).dynamic_columns().from(tab);
if (userWantsBar)
    s.add_column(tab.bar);
for (const auto& row : run(s))
{
    std::cerr << "row.foo: " << row.foo;
    if (userWantsBar)
        std::cerr << " row.bar: " << row.at("bar");
    std::cerr << std::endl;
}
Please let me know your questions/thoughts/suggestions/rants. Contributions welcome, of course :-)
This is an interesting thread and I thought I'd comment. I am a pretty heavy user of postgresql/postgis (spatial extension) in my work. I wrote the first R package to access postgresql and contributed to the current R DBI package. I did a proof-of-concept (= not very pretty ;-) embedding of the Boost Graph Library in postgresql replacing local storage with prepared queries called on demand. I have to say when I look at this, I don't really want to learn another SQL. I am perfectly happy to send query strings to the database and let it parse them. I can debug these separately from my C++ code. I think for complex queries (recursive with anyone?) it would be quite difficult to get the C++ right. What I would really like is a mapping of binary cursors to iterator concepts + easy type-safe endian-aware customizable data conversion. But that's my bias. I've always liked mapping on-demand data to common interfaces. But your use cases are probably different and I can see how this would be very useful to some. THK
Regards,
Roland
On 11/12/13, 4:26 AM, Tim Keitt wrote:
On Sat, Nov 9, 2013 at 4:03 PM, Roland Bock <rbock@eudoxos.de> wrote:
[snip]
This is an interesting thread and I thought I'd comment.
I am a pretty heavy user of PostgreSQL/PostGIS (spatial extension) in my work. I wrote the first R package to access PostgreSQL and contributed to the current R DBI package. I did a proof-of-concept (= not very pretty ;-) embedding of the Boost Graph Library in PostgreSQL, replacing local storage with prepared queries called on demand.
I have to say when I look at this, I don't really want to learn another SQL. I am perfectly happy to send query strings to the database and let it parse them. I can debug these separately from my C++ code. I think for complex queries (recursive WITH, anyone?) it would be quite difficult to get the C++ right.
What I would really like is a mapping of binary cursors to iterator concepts + easy type-safe endian-aware customizable data conversion. But that's my bias. I've always liked mapping on-demand data to common interfaces.
But your use cases are probably different and I can see how this would be very useful to some.
THK
Hi,

Isn't libpqxx what you are looking for, then? I know it's specifically written for PostgreSQL.

http://pqxx.org/development/libpqxx/

Regards,
Matthijs
On 2013-11-12 10:00, Matthijs Möhlmann wrote:
On 11/12/13, 4:26 AM, Tim Keitt wrote:
[snip]
Hi,
Isn't libpqxx then not what you are looking for? I know its specifically written for PostgreSQL.
http://pqxx.org/development/libpqxx/
Regards, Matthijs
It is also string based, isn't it?

For those who'd like to experiment with sqlpp11 and PostgreSQL: I just received a mail saying that there is a first connector library for PostgreSQL available:

https://github.com/pbondo/sqlpp11-connector-postgresql

Regards,
Roland
On 11/12/13, 2:17 PM, Roland Bock wrote:
On 2013-11-12 10:00, Matthijs Möhlmann wrote:
[snip]
Hi,
Isn't libpqxx then not what you are looking for? I know its specifically written for PostgreSQL.
http://pqxx.org/development/libpqxx/
Regards, Matthijs
It is also string based, isn't it?

For those who'd like to experiment with sqlpp11 and PostgreSQL: I just received a mail saying that there is a first connector library for PostgreSQL available:
https://github.com/pbondo/sqlpp11-connector-postgresql
Regards,
Roland
It is string based, yes. Nice, I'll experiment with that.

Regards,
Matthijs
On 2013-11-12 04:26, Tim Keitt wrote:
On Sat, Nov 9, 2013 at 4:03 PM, Roland Bock <rbock@eudoxos.de> wrote:
[snip]
This is an interesting thread and I thought I'd comment.
I am a pretty heavy user of PostgreSQL/PostGIS (spatial extension) in my work. I wrote the first R package to access PostgreSQL and contributed to the current R DBI package. I did a proof-of-concept (= not very pretty ;-) embedding of the Boost Graph Library in PostgreSQL, replacing local storage with prepared queries called on demand.
I have to say when I look at this, I don't really want to learn another SQL. I am perfectly happy to send query strings to the database and let it parse them. I can debug these separately from my C++ code. I think for complex queries (recursive WITH, anyone?) it would be quite difficult to get the C++ right.

Could you send me a complex/recursive example? I'd be interested in a comparison, of course.
The debugging topic is interesting. Obviously, sqlpp11 can be used to print query strings. The current connectors do that, when used in debug mode. Personally, I believe that the way that sqlpp11 yields results (members of a struct instead of positional entries in a container) and the compiler support in constructing and maintaining queries outweigh having the exact textual representation of the query in the source code. But that is certainly a matter of taste.
What I would really like is a mapping of binary cursors to iterator concepts + easy type-safe endian-aware customizable data conversion. But that's my bias. I've always liked mapping on-demand data to common interfaces.
I'll have to read up on binary cursors, that is not my core competence...
But your use cases are probably different and I can see how this would be very useful to some.
Thanks and regards, Roland
On Tue, Nov 12, 2013 at 7:16 AM, Roland Bock <rbock@eudoxos.de> wrote:
On 2013-11-12 04:26, Tim Keitt wrote:
On Sat, Nov 9, 2013 at 4:03 PM, Roland Bock <rbock@eudoxos.de> wrote:
[snip]
This is an interesting thread and I thought I'd comment.
I am a pretty heavy user of PostgreSQL/PostGIS (spatial extension) in my work. I wrote the first R package to access PostgreSQL and contributed to the current R DBI package. I did a proof-of-concept (= not very pretty ;-) embedding of the Boost Graph Library in PostgreSQL, replacing local storage with prepared queries called on demand.
I have to say when I look at this, I don't really want to learn another SQL. I am perfectly happy to send query strings to the database and let it parse them. I can debug these separately from my C++ code. I think for complex queries (recursive WITH, anyone?) it would be quite difficult to get the C++ right.

Could you send me a complex/recursive example? I'd be interested in a comparison, of course.
Example here http://jakub.fedyczak.net/post/postgresql-with-recursive-tree-traversing-exa...
The debugging topic is interesting. Obviously, sqlpp11 can be used to print query strings. The current connectors do that, when used in debug mode.
Personally, I believe that the way that sqlpp11 yields results (members of a struct instead of positional entries in a container) and the compiler support in constructing and maintaining queries outweigh having the exact textual representation of the query in the source code. But that is certainly a matter of taste.
What I would really like is a mapping of binary cursors to iterator concepts + easy type-safe endian-aware customizable data conversion. But that's my bias. I've always liked mapping on-demand data to common interfaces.
I'll have to read up on binary cursors, that is not my core competence...
Cursors are not difficult:

    DECLARE the_cursor CURSOR FOR SELECT ...

lets you iterate through the results. Binary cursors return the binary representation as used in the server, which may be different from the client's.

THK
But your use cases are probably different and I can see how this would be very useful to some.
Thanks and regards,
Roland
On 11/09/2013 11:03 PM, Roland Bock wrote:
over the last four or five years I developed several SQL libraries for C++. With C++11 I was finally able to create an SQL library that meets my own expectations and requirements. It is being used in production, I recently put it on github, and I would really like to hear from you guys whether this is something that could be interesting for you personally or for boost?
https://github.com/rbock/sqlpp11 https://github.com/rbock/sqlpp11/wiki
Just to throw another hat in the ring, some colleagues of mine did something similar.

https://github.com/KDAB/sqlate

No docs yet, I'm afraid.

Thanks,
Steve.
On 2013-11-12 09:49, Stephen Kelly wrote:
On 11/09/2013 11:03 PM, Roland Bock wrote:
[snip]
Just to throw another hat in the ring, some colleagues of mine did something similar.
https://github.com/KDAB/sqlate
No docs yet I'm afraid.
Thanks,
Steve.
Hi Steve,

You mentioned it during your talk at Meeting C++, didn't you? I will certainly take a closer look for comparison.

Does it support evaluating the query's results, too?

Best regards,
Roland
On 11/12/2013 02:21 PM, Roland Bock wrote:
On 2013-11-12 09:49, Stephen Kelly wrote:
On 11/09/2013 11:03 PM, Roland Bock wrote:
[snip]
Just to throw another hat in the ring, some colleagues of mine did something similar.
https://github.com/KDAB/sqlate
No docs yet I'm afraid.
Thanks,
Steve.
Hi Steve,
You mentioned it during your talk at Meeting C++, didn't you? I will certainly take a closer look for comparison.
Yes, I mentioned it.
Does it support evaluating the query's results, too?
Nope. It just gives us a QSqlQuery object. The Qt database connectors do the rest.

The API seems similar, with similar goals and problem domain, and in theory the code is refactorable for other use-cases too. Could be interesting for implementation comparison.

Thanks,
Steve.
participants (22)
- Abel Sinkovics
- Bjorn Reese
- Christof Donat
- Dominique Devienne
- Edward Diener
- Eric Niebler
- Gavin Lambert
- Klaim - Joël Lamotte
- Larry Evans
- Mateusz Loskot
- Matthijs Möhlmann
- Michael Marcin
- niXman
- Rob Stewart
- Rodrigo Madera
- Roland Bock
- Stephen Kelly
- Steven Watanabe
- Thorsten Ottosen
- Tim Keitt
- TONGARI J
- Vicente J. Botet Escriba