This module has been merged in https://github.com/vert-x3/vertx-jdbc-client in Vert.x 4
This module is still supported for Vert.x 3 https://github.com/vert-x3/vertx-sql-common/tree/3.9
Common SQL interfaces for Vert.x SQL services
Currently all known implementations share the connection interface, but we should also share a client interface that returns a connection. The idea is that we could swap implementations.
3.5.0
A query that returns an empty result set (with querySingle and querySingleWithParams) will cause a NullPointerException. Here's the code in SQLOperations.java:
default SQLOperations querySingle(String sql, Handler<AsyncResult<JsonArray>> handler) {
  return query(sql, execute -> {
    if (execute.failed()) {
      handler.handle(Future.failedFuture(execute.cause()));
    } else {
      final ResultSet rs = execute.result();
      if (rs == null) {
        handler.handle(Future.succeededFuture());
      } else {
        // NOTE HERE:
        // The result list may be non-null but empty; the call to results.get(0) then throws an NPE.
        List<JsonArray> results = rs.getResults();
        if (results == null) {
          handler.handle(Future.succeededFuture());
        } else {
          handler.handle(Future.succeededFuture(results.get(0)));
        }
      }
    }
  });
}
The fix may be as simple as changing the check to if (results == null || results.size() == 0) (and then returning Future.succeededFuture()). Or it may be a deeper issue: is ResultSet not expected to return an empty list when the query matched no rows? (I'd imagine an empty list is fine, but I'm checking here because of the way this was written.) Also, Future.succeededFuture() is going to produce a null result; is that desirable?
I've been stuck trying to pass array parameters to my Postgres stored procedure. Any example of how to do this?
For example:
SELECT insert_info((ARRAY['(Arjay,[email protected],1234567)'
,'(Bjay,[email protected],2234567)'])::info[]);
In SQLConnection, we can call setAutoCommit(false) to start a transaction, but we cannot change the transaction isolation level, which JDBC supports.
Could we add this function to SQLConnection?
Thanks!
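The isolation levels JDBC supports are the constants on java.sql.Connection. A minimal sketch of what a setTransactionIsolation-style method could delegate to (the name mapping is hypothetical, not part of the Vert.x API):

```java
import java.sql.Connection;

public class IsolationLevels {
    // Map a level name to the standard java.sql.Connection constant that a
    // SQLConnection.setTransactionIsolation(...) could pass to the driver.
    public static int toJdbcLevel(String name) {
        switch (name) {
            case "READ_UNCOMMITTED": return Connection.TRANSACTION_READ_UNCOMMITTED;
            case "READ_COMMITTED":   return Connection.TRANSACTION_READ_COMMITTED;
            case "REPEATABLE_READ":  return Connection.TRANSACTION_REPEATABLE_READ;
            case "SERIALIZABLE":     return Connection.TRANSACTION_SERIALIZABLE;
            default: throw new IllegalArgumentException("Unknown level: " + name);
        }
    }
}
```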
I think UpdateResult should also contain any generated values, not just columnNames. Or is this by design?
The postgresql-async-app sample shows how to get the generated value. It would be great if the Vert.x 3 SQL modules could also support this.
Hi,
I am using io.vertx.reactivex.ext.sql.SQLConnection.rxUpdateWithParams from vertx-sql-common and I would like to disable autoGeneratedKeys for those update operations. However, that method does not provide an easy way to do it.
I know I can do it by creating an io.vertx.reactivex.ext.sql.SQLConnection using io.vertx.reactivex.ext.sql.SQLClient.rxGetConnection and setting new io.vertx.ext.sql.SQLOptions().setAutoGeneratedKeys(false). But this makes me manage the connection myself, and it is causing a connection leak. This is my code:
public Single<UpdateResult> rxUpdateWithParams(String sql, JsonArray arguments) {
  return getConnection().flatMap(c -> c.rxUpdateWithParams(sql, arguments)
      .doAfterTerminate(() -> returnToPool(c)));
}

protected Single<SQLConnection> getConnection() {
  return client.rxGetConnection().map(c -> {
    c.setOptions(new SQLOptions().setAutoGeneratedKeys(false));
    return c;
  });
}

protected void returnToPool(SQLConnection connection) {
  connection.close();
}
Is there any other way of disabling autoGeneratedKeys? Perhaps a config file that vertx will honor? Or a config I can set when creating my JDBC client?
Hi,
So I ran into this problem: with JDBC, the creation of an SQL array expression is vendor specific, as suggested by the existence of this method https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#createArrayOf(java.lang.String,%20java.lang.Object[])
and this thread:
http://stackoverflow.com/questions/18658107/binding-parameter-as-postgresql-array
How would I work around this with Postgres and the JDBCClient of Vert.x? If I wanted to use updateWithParams, how should I supply the parameters in case one is an array?
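One common workaround (an assumption, not an official Vert.x answer) is to bind the array as its Postgres literal text form and add an explicit cast in the SQL, e.g. `update t set tags = ?::text[]`. A sketch of building such a literal (helper name hypothetical):

```java
import java.util.List;

public class PgArrayLiteral {
    // Build a Postgres array literal such as {"a","b"} that can be bound
    // as a plain string parameter and cast in SQL with ?::text[].
    public static String toLiteral(List<String> values) {
        StringBuilder sb = new StringBuilder("{");
        for (int i = 0; i < values.size(); i++) {
            if (i > 0) sb.append(',');
            // quote each element, escaping embedded backslashes and quotes
            sb.append('"')
              .append(values.get(i).replace("\\", "\\\\").replace("\"", "\\\""))
              .append('"');
        }
        return sb.append('}').toString();
    }
}
```

The resulting string would then be added to the JsonArray of parameters like any other String value.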
Excellent examples. I am a beginner to vert.x and the examples helped me find my way. One feature I am looking for an example of is a paged query where the query returns a large number of rows and I want to read say 1000 rows at a time. Please post if you have an example handy. Appreciate all the work in posting the examples.
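In the absence of a streaming API, one simple way to page through a large result is to issue repeated queries with LIMIT/OFFSET. This is a hedged sketch (the helper is hypothetical, not part of the examples repo):

```java
public class Paging {
    // Append LIMIT/OFFSET to a base query for page-by-page reads.
    // Note: OFFSET paging degrades on large offsets; keyset pagination
    // (WHERE id > :lastId ORDER BY id LIMIT n) usually scales better.
    public static String page(String baseSql, int pageSize, int pageIndex) {
        return baseSql + " LIMIT " + pageSize + " OFFSET " + (pageIndex * pageSize);
    }
}
```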
I am using JDBCClient from vertx-jdbc-client, which extends SQLClient and implements SQLOperations, both from vertx-sql-common. According to this, the methods exposed by SQLOperations internally deal with getting a connection from the pool and properly returning it behind the scenes, which is nice because it saves that boilerplate code. I am actually using the rx-fied methods on JDBCClient, which come from SQLClient but delegate to the regular methods on SQLOperations.
I would like to configure a query timeout on the connection which is used internally by those methods. Is this possible? If I had a reference to an SQLConnection object, I would do sqlConnection.setOptions(new SQLOptions().setQueryTimeout(timeoutInSeconds));
But the whole point of using those higher-level methods is to never have to deal with an SQLConnection instance. So how would one go about configuring it?
motivation: when the user needs to execute only a single command, or commands that do not depend on each other, the acquire/execute/release dance is unnecessary and can be simplified.
dbClient.getConnection(ar -> {
  if (ar.failed()) {
    // handle failure
  } else {
    SQLConnection connection = ar.result();
    connection.execute(SQL_CREATE_PAGES_TABLE, create -> {
      connection.close();
      if (create.failed()) {
        // handle failure
      } else {
        // handle success
      }
    });
  }
});
can be simplified to:
dbClient.execute(SQL_CREATE_PAGES_TABLE, create -> {
  if (create.failed()) {
    // handle failure
  } else {
    // handle success
  }
});
changes: provide a version of the current SQL commands that operates on the SQLClient (the pool) instead of the SQLConnection. The commands will operate on the underlying pool rather than at the connection level. In vertx-jdbc-client, the implementation can benefit from grouping acquire/execute/release into a single blocking task instead of three (this implies a trivial refactor to allow executing a JDBC command with an existing connection).
In Vert.x 4, this module is only used by vertx-jdbc-client, and therefore we will merge it into the vertx-jdbc-client module.
By default, SQL identifiers are case insensitive and it is up to the DBMS to decide how names are returned, unless identifiers are declared enclosed in double quotes, e.g.: "CoLuMn_nAmE".
This complicates things when a ResultSet user tries to fetch a column by name, because the underlying JsonObject treats keys as case sensitive.
Hi there:
I don't like the C3P0 datasource provider for personal reasons, so I tried to use HikariCP instead. Interestingly, Hikari uses different parameter names: "jdbcUrl" rather than "url" in C3P0, "driverClassName" rather than "driver_class", "username" rather than "user"... so I failed to initiate the connection pool until I read the source code of HikariCPDataSourceProvider and worked it out.
So I would recommend at least documenting these different config key names, and if possible using the same config parameter key names across datasource providers :)
Thanks
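To illustrate the mismatch described above, here is a side-by-side sketch of the key names mentioned in this report (illustrative only; the exact keys depend on the provider version):

```json
{
  "C3P0":   { "url": "jdbc:postgresql://localhost/db", "driver_class": "org.postgresql.Driver", "user": "app" },
  "Hikari": { "jdbcUrl": "jdbc:postgresql://localhost/db", "driverClassName": "org.postgresql.Driver", "username": "app" }
}
```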
Useful to avoid writing empty callbacks if you're not interested in the event.
Add OSGi instructions to make a bundle.
3.4.1
, 3.4.2-SNAPSHOT
3.4.1
, 3.4.2-SNAPSHOT
2.4.10
, 2.5.0-SNAPSHOT
I've got this exception:
java.lang.ClassCastException: io.vertx.core.json.JsonArray cannot be cast to java.util.List
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_121]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374) ~[?:1.8.0_121]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_121]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_121]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_121]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_121]
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) ~[?:1.8.0_121]
at io.vertx.groovy.ext.sql.SQLConnection_GroovyExtension.batchWithParams(SQLConnection_GroovyExtension.java:73) ~[vertx-sql-common-3.4.2-SNAPSHOT.jar:?]
and, because the groovy-all jar is present on the classpath, the Groovy extension is loaded.
Is it possible to not use the Groovy extension as long as I don't use vertx-lang-groovy?
It may be related to this issue. If so, please check every other module for this kind of situation.
I am trying to perform an SQL insert into a BINARY(16) column. The API takes a JsonArray as parameter, but I get an error that my data is too long.
I set the parameters with the following code:
byte[] uuid = new byte[16];
secureRandom.nextBytes(uuid);
JsonArray params = new JsonArray().add(uuid);
However, JsonArray.add converts the byte array to base64 (obviously longer), and the code that prepares the SQL statement ignores the type and binds everything as a String, hence the base64 encoding.
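One possible workaround (an assumption on my part, assuming a MySQL-style database) is to bind the bytes as a hex string and let the database decode it, e.g. `INSERT INTO t (id) VALUES (UNHEX(?))`. A sketch of the hex conversion (helper name hypothetical):

```java
public class HexParam {
    // Convert raw bytes to an uppercase hex string suitable for UNHEX(?) in MySQL.
    public static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02X", b & 0xFF)); // mask to treat the byte as unsigned
        }
        return sb.toString();
    }
}
```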
I try to select attributes from a table which are of type BIGINT. I get the following error message:
Illegal type in JsonObject: class java.math.BigDecimal
A workaround would be to change the data-type but in my case this is not possible. What could be a way to solve this problem?
Thanks in advance
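When changing the column type is not possible, one application-side workaround (a sketch, not an official fix) is to normalize BigDecimal values to long before putting them into a JsonObject, assuming the values fit in 64 bits:

```java
import java.math.BigDecimal;

public class BigIntColumn {
    // Normalize a BIGINT value returned as BigDecimal to a plain long.
    // longValueExact() throws ArithmeticException if the value has a
    // fractional part or does not fit in a long.
    public static long toLong(Object columnValue) {
        if (columnValue instanceof BigDecimal) {
            return ((BigDecimal) columnValue).longValueExact();
        }
        return ((Number) columnValue).longValue();
    }
}
```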
The current API uses position based parameter typing. This maps directly to the JDBC API but has some drawbacks like lack of typing information. Using a special formatting with named parameters we can introduce a more robust API. For example:
String sql = "select * from table1 where id = :id";
conn.select(sql, {id: 1});
In order to handle types we can do:
String sql = "update table1 set col = :val#VARCHAR";
conn.update(sql, {val: 1});
It can even handle arrays if we would allow:
String sql = "update table1 set col = :val#VARCHAR[]";
conn.update(sql, {val: 1});
The regex to identify params would be:
[^:]\:([a-zA-Z_$][0-9a-zA-Z_$]*(\[\])?)(#([a-zA-Z_$][0-9a-zA-Z_$]*(\[\])?))?
This would build from the basic work already done at #9 but extendable so it can be used with JDBC and the async client.
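The proposed regex can be exercised directly with java.util.regex. A small sketch extracting the parameter name and the optional type hint from the examples above (group 1 is the name, group 4 the type, including an optional [] suffix):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NamedParams {
    // The proposed pattern: [^:] guards against '::' casts, group 1 captures
    // the parameter name, group 4 the optional '#TYPE' or '#TYPE[]' hint.
    static final Pattern PARAM = Pattern.compile(
        "[^:]:([a-zA-Z_$][0-9a-zA-Z_$]*(\\[\\])?)(#([a-zA-Z_$][0-9a-zA-Z_$]*(\\[\\])?))?");

    // Return {name, type} for the first named parameter, or null if none.
    public static String[] firstParam(String sql) {
        Matcher m = PARAM.matcher(sql);
        if (!m.find()) {
            return null;
        }
        return new String[] { m.group(1), m.group(4) }; // type may be null
    }
}
```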
We should add a way to configure the connection timeout with something like:
SQLConnection setTimeout(long timeoutInSecond, Handler<AsyncResult<Void>> resultHandler);
If we are sure the operation is not blocking (on all drivers) we could have:
SQLConnection setTimeout(long timeoutInSeconds)
I managed to insert several records with one SQL statement (like: insert xx values (xx),(xx)) using updateWithParams. The result was a success, but I only get one element in the JsonArray returned by getKeys(), which is the auto-increment key of the first record, whereas I actually inserted three.
I suppose the result of getKeys() should contain all the auto-increment keys of the inserted records?
My DB is MySQL 5.5.
Cheers
It would be nice to provide support for "encoding/decoding" from "Vert.x to database (encode)" and "from database to Vert.x (decode)" using something like MessageCodec (as a mapping layer).
With this we could use POJO instances to CRUD to databases (data transfer objects).
http://stackoverflow.com/questions/8247970/using-like-wildcard-in-prepared-statement
Hi there:
In the current version, if there are special characters like % _ [ in the queryParams/JsonArray, the query will fail, and the solution is the LIKE wildcard escaping described in the link above. I would suggest applying LIKE wildcard escaping automatically in the queryWithParams func/API rather than having users implement it themselves.
Thanks
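Following the linked Stack Overflow thread, the usual approach is to escape the user input before binding it and declare the escape character in the SQL, e.g. `... WHERE col LIKE ? ESCAPE '\'`. A sketch of such an escaper (helper name hypothetical):

```java
public class LikeEscape {
    // Escape LIKE metacharacters so user input matches literally.
    // Pair with: ... WHERE col LIKE ? ESCAPE '\'
    public static String escape(String input) {
        return input
            .replace("\\", "\\\\")  // escape the escape character first
            .replace("%", "\\%")
            .replace("_", "\\_")
            .replace("[", "\\[");   // relevant for SQL Server bracket patterns
    }
}
```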
I encountered an exception which looks suspicious while doing Batch operations http://vertx.io/docs/vertx-sql-common/java/#_batch_operations
Please see the issue defined here:
http://stackoverflow.com/questions/38295740/run-multiple-sql-queries-asynchronously-in-vertx-vert-x-3
The resultSet.getRows() is always empty. The unit test for ResultSet is wrong: it loops over the list returned by resultSet.getRows() but does not check whether the list is empty, which it in fact is.
The bug is very easy to fix; I am preparing a patch.
Cheers
The (seemingly undocumented) fix for #54 (aa72de8) causes querySingle to return null when the query has no result. This seems sensible at first glance, but causes a lot of headaches when working with the Rx package (especially from Kotlin, where it bypasses the null safety checks and makes for a surprising crash)
RxJava 2 won't accept null values in streams, so the behaviour of this function will probably need to change anyway.
Perhaps querySingle should return an Optional.
@chengenzhao commented on Wed May 03 2017
Hi there:
I would like to see some enhancements for SQLConnection, since it is a little different from other async APIs:
it requires closing the connection manually, which makes a fluent coding style messy when composing futures.
e.g.
Future.<SQLConnection>future(future -> {
  client.getConnection(future);
}).<UpdateResult>compose(connection ->
  Future.future(future -> connection.update("update table_1 set name = a where id = 001", future))
).setHandler(updateResult -> {
  if (updateResult.succeeded()) {
    // ...
  } else {
    // ...
  }
  // is there any way to add updateResult.getConnection() here?
  // thus we could use updateResult.getConnection().close();
});
As we can see here, the first step is to get a connection from the pool, then do some operations, then close the connection, which means returning it to the pool. However, currently we may not be able to do that, since most SQLConnection APIs do not return a reference to the connection, so the connection is lost once the futures are chained.
I would suggest adding a SQLConnection reference to the result, so that we can close the connection at the end of the fluent future chain.
This would be an enhancement/extension.
For example, Postgres/SQLite/MS SQL Server (and maybe others) support JSON as a type. We could define an interface method for queries known to return JSON, and in that case avoid all the encode/decode work.
Example, say we have a table words
like this:
id | text
----+--------------------------------------
1 | bla
2 | xpto
3 | durp
...
One can get the result in JSON like:
select row_to_json(words) from words;
or just some columns
select row_to_json(row(id, text)) from words;
So basically I am proposing to add:
SQLConnection queryJson(String sql, Handler<AsyncResult<JsonArray>> resultHandler);
and
SQLConnection queryJsonWithParams(String sql, JsonArray params, Handler<AsyncResult<JsonArray>> resultHandler);
Viewing the vertx.io SQL Common documentation, the code blocks do not render properly. It appears there is a bad link.
This appears to impact all the languages for SQL Common, not just Java.
N/A
To register an INOUT parameter, as per Oracle documentation:
Because the parameter newPrice (the third parameter in the procedure RAISE_PRICE) has the parameter mode INOUT, you must both specify its value by calling the appropriate setter method and register it with the registerOutParameter method.
cs = this.con.prepareCall("{call RAISE_PRICE(?,?,?)}");
cs.setString(1, coffeeNameArg);
cs.setFloat(2, maximumPercentageArg);
cs.registerOutParameter(3, Types.NUMERIC);
cs.setFloat(3, newPriceArg);
cs.execute();
In io.vertx.ext.jdbc.impl.actions.JDBCStatementHelper, this is what is done:
for (int i = 0; i < max; i++) {
  Object value = null;
  if (i < in.size()) {
    value = in.getValue(i);
  }
  // found an in value, use it as an input parameter
  if (value != null) {
    statement.setObject(i + 1, value);
    continue;
  }
  if (i < out.size()) {
    value = out.getValue(i);
  }
  // found an out value, use it as an output parameter
  if (value != null) {
    // We're using the int from the enum instead of the enum itself to allow working with drivers
    // that have not been upgraded to Java 8 yet.
    statement.registerOutParameter(i + 1, JDBCType.valueOf((String) value).getVendorTypeNumber());
    continue;
  }
  // assume null input
  statement.setNull(i + 1, Types.NULL);
}
Because of the continue;
, it is impossible to register an INOUT parameter.
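The fix amounts to checking both the IN and OUT slots for each index instead of `continue`-ing after the first hit. This control flow can be sketched on plain lists (a hypothetical helper, not the actual JDBCStatementHelper code):

```java
import java.util.List;

public class ParamMode {
    // Classify parameter i by looking at BOTH the in and out lists,
    // instead of skipping the out check once an in value is found
    // (which is the effect of the `continue` above).
    public static String classify(List<Object> in, List<Object> out, int i) {
        boolean hasIn = i < in.size() && in.get(i) != null;
        boolean hasOut = i < out.size() && out.get(i) != null;
        if (hasIn && hasOut) return "INOUT"; // set value AND register out type
        if (hasIn) return "IN";
        if (hasOut) return "OUT";
        return "NULL";
    }
}
```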
Could you please support batch operations on databases? That could improve the performance of bulk database operations.
Thanks in advance
--Ulrich
Hey guys,
I'm trying to query with parameters by looking up users with a given UUID (java.util.UUID on java side and postgres UUID data type on db side).
Because of the query API I have to encapsulate my UUID in a JSON array. Unfortunately, this is only possible when I convert it to a String, but Postgres does not like that:
22:25:31.373 [vert.x-eventloop-thread-1] ERROR SNSPushVerticle.check - ERROR: operator does not exist: uuid = character varying
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
Position: 79 [AsyncResultChecker.java:39]
And if I add it as an object the JSON parser blames me:
22:28:22.470 [DEBUG] [TestEventLogger] FATAL: Unhandled exception
22:28:22.470 [DEBUG] [TestEventLogger] java.lang.IllegalStateException: Illegal type in JsonObject: class java.util.UUID
Any suggestions? If I use java.sql.PrepareStatement and Statement.setObject() I can query successfully.
Cheers,
Paul
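One workaround that avoids both errors (hedged suggestion, not an official answer): bind the UUID as its canonical text form and add an explicit cast on the Postgres side, so the varchar parameter compares as a uuid:

```java
import java.util.UUID;

public class UuidQuery {
    // SQL with an explicit cast so a bound varchar compares as uuid
    // (table and column names are illustrative).
    public static final String SQL = "select * from users where id = ?::uuid";

    // Bind the UUID as its canonical text form.
    public static String toParam(UUID id) {
        return id.toString();
    }
}
```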
Currently SQLConnection implements java.lang.AutoCloseable. AutoCloseable was added so that Kotlin's use method can manage a connection instead of the more traditional try-finally.
But in recent versions of Kotlin, use no longer works with AutoCloseable, only with Closeable.
When this is fixed, a breaking (binary) change must be logged.
It is useful to have an extra method to return all column names, for example:
List<String> columns()
Expose:
ResultSet
The package io.vertx.ext.sql
is missing in Vertx Java docs package list.
Class list also doesn`t contain any classes related to SQL Common.
But I still can reach this package through references from Java manual.
I use SQL to select from table_name, but the ResultSet's column names are uppercase, and so are the keys in the rows. That is not a problem by itself, but when I use the codegen module it becomes one.
For example:
jdbcClient.getConnection(car -> {
  if (car.succeeded()) {
    SQLConnection connection = car.result();
    connection.query("SELECT Id,Content,Name FROM Pages", res -> {
      connection.close();
      if (res.succeeded()) {
        // the fields of each Pages object end up null, because the JsonObject looks like:
        // {"ID":2,"CONTENT":"This is an Orange","NAME":"Orange"}
        List<Pages> pages = res.result().getRows()
            .stream()
            .map(Pages::new)
            .collect(Collectors.toList());
        resultHandler.handle(Future.succeededFuture(pages));
      } else {
        resultHandler.handle(Future.failedFuture(res.cause()));
      }
    });
  } else {
    resultHandler.handle(Future.failedFuture(car.cause()));
  }
});
but the generated Converter looks like:
public static void fromJson(JsonObject json, Pages obj) {
  if (json.getValue("content") instanceof String) {
    obj.setContent((String) json.getValue("content"));
  }
  if (json.getValue("id") instanceof Number) {
    obj.setId(((Number) json.getValue("id")).intValue());
  }
  if (json.getValue("name") instanceof String) {
    obj.setName((String) json.getValue("name"));
  }
}
and my Pages class constructor looks like this:
public Pages(JsonObject object) {
  PagesConverter.fromJson(object, this);
}
What can I do to avoid this, other than writing code like this:
List<Pages> pages = new ArrayList<>();
for (JsonObject row : res.result().getRows()) {
  Pages page = new Pages();
  page.setId(row.getInteger("ID"));
  page.setName(row.getString("NAME"));
  // ...
  pages.add(page);
}
Is there any good practice for this? Thanks a lot.
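One application-side workaround (a sketch under the assumption that you copy each row before converting it) is to wrap the row's key/value pairs in a case-insensitive map, so lookups for "id" find a column returned as "ID":

```java
import java.util.Map;
import java.util.TreeMap;

public class CaseInsensitiveRow {
    // Copy a row's key/value pairs into a map whose lookups ignore case,
    // so get("id") finds a column the database returned as "ID".
    public static Map<String, Object> wrap(Map<String, Object> row) {
        Map<String, Object> ci = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        ci.putAll(row);
        return ci;
    }
}
```

With Vert.x, the input map could come from JsonObject.getMap() on each row before feeding the normalized data to the generated converter.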