
sql4es's People

Contributors

corneversloot, jeroenvlek, mallim, seralf, snowch, toxeh


sql4es's Issues

Nested query converted incorrectly

PUT index
{
  "mappings": {
    "my_case": {
      "properties": {
        "events": {
          "type": "nested" 
        }
      }
    }
  }
}

I created a mapping like this. Every row has two or more events, and I would like to find rows where a single event matches both poid and oid.

select name from my_case where (events.poid='A' and events.oid='B');

This is how I tried to express it through sql4es, but it does not return the results I expected.

This is what sql4es generates:
{
  "size": 200,
  "timeout": 10000,
  "post_filter": {
    "bool": {
      "must": [
        {
          "nested": {
            "query": {
              "term": {
                "events.poid": "A"
              }
            },
            "path": "events"
          }
        },
        {
          "nested": {
            "query": {
              "term": {
                "events.oid": "B"
              }
            },
            "path": "events"
          }
        }
      ]
    }
  }
}

It finds every row that has an event with poid 'A' or an event with oid 'B'.
How should I write the query so that both conditions match on the same event?

This is what I want sql4es to generate:

   .....
    "query": {
        "nested": {
           "path": "events",
           "filter": {
               "bool" : {
                    "must" : [
                        {"term" : { "events.poid" : "A"}},
                        {"term" : { "events.oid" : "B"}}
                    ]
               }
           }
        }
    }
   .....
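The difference between the two translations can be sketched in plain Java (hypothetical in-memory data, not sql4es code): wrapping each term in its own nested clause lets each term be satisfied by a different event, while a single nested clause with a bool/must requires one event to carry both values.

```java
import java.util.Arrays;
import java.util.List;

public class NestedSemantics {
    static class Event {
        final String poid, oid;
        Event(String poid, String oid) { this.poid = poid; this.oid = oid; }
    }

    // Two separate nested clauses: each term may be satisfied by a different event.
    static boolean separateNested(List<Event> events) {
        return events.stream().anyMatch(e -> e.poid.equals("A"))
            && events.stream().anyMatch(e -> e.oid.equals("B"));
    }

    // One nested clause with bool/must: a single event must satisfy both terms.
    static boolean singleNested(List<Event> events) {
        return events.stream().anyMatch(e -> e.poid.equals("A") && e.oid.equals("B"));
    }

    public static void main(String[] args) {
        // A row whose events satisfy the two terms only on *different* events:
        List<Event> events = Arrays.asList(new Event("A", "X"), new Event("B", "B"));
        System.out.println(separateNested(events)); // true  -> the row matches
        System.out.println(singleNested(events));   // false -> the row is filtered out
    }
}
```

Note that the generated query is still an AND at row level; it simply does not require both terms to hold on the same nested event.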

order by parse bug

	// parse ORDER BY
	if(node.getOrderBy().isEmpty()){
		for(SortItem si : node.getOrderBy()/*.get().getSortItems()*/){
			OrderBy ob = si.accept(orderOarser, state);
			if(state.hasException()) return new ParseResult(state.getException());
			orderings.add(ob);
		}
	}

The condition is inverted: isEmpty() should be !isEmpty(). As written, the loop body only runs when there are no sort items, so it silently iterates over an empty list and the ORDER BY is never parsed.
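A minimal standalone illustration of why the bug is silent (hypothetical names, not the parser itself): with the inverted guard, no exception is thrown; the ordering is simply dropped.

```java
import java.util.ArrayList;
import java.util.List;

public class OrderByGuard {
    // Simulates the parse loop; buggyGuard=true reproduces the reported condition.
    static List<String> parseOrderings(List<String> sortItems, boolean buggyGuard) {
        List<String> orderings = new ArrayList<>();
        boolean condition = buggyGuard ? sortItems.isEmpty() : !sortItems.isEmpty();
        if (condition) {
            for (String si : sortItems) {
                orderings.add(si.toUpperCase()); // stand-in for si.accept(orderParser, state)
            }
        }
        return orderings;
    }

    public static void main(String[] args) {
        List<String> sortItems = List.of("name asc");
        System.out.println(parseOrderings(sortItems, true));  // [] -> ordering silently lost
        System.out.println(parseOrderings(sortItems, false)); // [NAME ASC]
    }
}
```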

Date issues

I noticed that a java.sql.Date is returned when the date type is declared in the mapping. However, the time-of-day accessors of java.sql.Date are deprecated and throw, so the time component is effectively lost.
It seems like Elasticsearch's date type should be exposed as java.sql.Timestamp instead.
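A quick sketch of the difference using only the JDK (no driver involved): java.sql.Date is a date-only wrapper whose time accessors throw IllegalArgumentException, while java.sql.Timestamp round-trips the full instant.

```java
public class DateVsTimestamp {
    public static void main(String[] args) {
        long millis = 86_400_000L + 3_600_000L; // 1970-01-02 01:00:00 UTC

        java.sql.Timestamp ts = new java.sql.Timestamp(millis);
        System.out.println(ts.getTime() == millis); // true: full instant preserved

        java.sql.Date d = new java.sql.Date(millis);
        try {
            d.getHours(); // java.sql.Date overrides the time accessors to throw
        } catch (IllegalArgumentException e) {
            System.out.println("java.sql.Date has no usable time component");
        }
    }
}
```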

Unsupported major.minor version 52.0

Hi
I'm trying to use the driver with SQLWorkbenchJ and SquirrelSQL, but I get several errors:

SQLWorkbenchJ gives me a generic error: Unable to connect to DB.
SquirrelSQL, after defining the driver configuration, gives me the error: Unsupported major.minor version 52.0.

I'm using Mac OSX 10.11.5 and Java VM 1.7.

Thank you
Lorenzo
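For context on that error message (an editorial note, not part of the original report): "major.minor version 52.0" is the class-file version produced by JDK 8, so a jar built with JDK 8 cannot be loaded on a Java 1.7 VM. A tiny sketch of the mapping:

```java
import java.util.Map;

public class ClassFileVersion {
    // Class-file major versions and the minimum JRE able to load them.
    static final Map<Integer, String> MAJOR_TO_JAVA = Map.of(
            50, "Java 6",
            51, "Java 7",
            52, "Java 8",
            53, "Java 9");

    public static void main(String[] args) {
        System.out.println(MAJOR_TO_JAVA.get(52)); // Java 8
    }
}
```

Upgrading the client JVM to Java 8 (or rebuilding the driver with -target 1.7, if its sources allow it) would resolve this particular error.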

Elasticsearch 6.5.0 - compilation/test failures.

I was trying to use this driver with ES 6.5.0, since the new SQL JDBC driver provided by Elasticsearch itself requires a Platinum license:
https://www.elastic.co/downloads/jdbc-client
A straightforward compilation doesn't work. After making some changes in the source code I was able to compile without the test cases:
mvn package -Dmaven.test.skip=true
I'm attaching the diff file with my changes (generated with diff, not git):
build-ES6.5.0-diff.txt
After that I didn't notice any problems using the driver against my ES cluster, so I didn't make any further changes.

I'd be happy if you could apply these changes so we can use master directly with the latest ES version. It would also be worth checking why the tests fail and what needs to be changed there, to ensure all options work as expected.

DISTINCT - COUNT Combination Reg.

Hi,
I have been using the driver for the past 6 months and it works fine. Now I have a requirement that the driver does not support: a query like SELECT COUNT(DISTINCT(INVOICE_NUMBER)) FROM FACT_TABLE fails and does not return any value. The requirement is to capture the count of unique invoices from the FACT table, which is denormalised with both header- and line-level transaction records. COUNT(DISTINCT(<COL_NAME>)) can also occur more than once in a query. Can you please help us get this working? Thanks.
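For reference, COUNT(DISTINCT col) counts the unique values of a column; a plain-Java equivalent over in-memory rows (hypothetical data) is:

```java
import java.util.List;

public class DistinctCount {
    public static void main(String[] args) {
        // Invoice numbers repeated across header- and line-level rows
        List<String> invoiceNumbers = List.of("INV-1", "INV-1", "INV-2", "INV-3", "INV-3");

        // Equivalent of: SELECT COUNT(DISTINCT INVOICE_NUMBER) FROM FACT_TABLE
        long distinctCount = invoiceNumbers.stream().distinct().count();
        System.out.println(distinctCount); // 3
    }
}
```

On the Elasticsearch side this corresponds to the cardinality aggregation, which is approximate by design for large value sets.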

Unexpected Error occurred attempting to open an SQL connection.

I am trying to configure the driver in a Virtual Desktop environment (using SQuirreL) and get the following error. Any help, please?

java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: Unable to connect to database
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.sql.SQLException: Unable to connect to database
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:171)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$000(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$1.run(OpenConnectionCommand.java:104)
... 5 more
Caused by: java.sql.SQLException: Unable to connect to database
at nl.anchormen.sql4es.jdbc.ESConnection.buildClient(ESConnection.java:129)
at nl.anchormen.sql4es.jdbc.ESConnection.<init>(ESConnection.java:78)
at nl.anchormen.sql4es.jdbc.ESDriver.connect(ESDriver.java:46)
at net.sourceforge.squirrel_sql.fw.sql.SQLDriverManager.getConnection(SQLDriverManager.java:133)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:167)
... 7 more
Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.MoreExecutors.directExecutor()Ljava/util/concurrent/Executor;
at org.elasticsearch.threadpool.ThreadPool.<init>(ThreadPool.java:190)
at org.elasticsearch.client.transport.TransportClient$Builder.build(TransportClient.java:131)
at nl.anchormen.sql4es.jdbc.ESConnection.buildClient(ESConnection.java:108)
... 11 more

Need cardinality aggregation to be supported

Please help implement the cardinality aggregation. Thanks.

Finding Distinct Counts
The first approximate aggregation provided by Elasticsearch is the cardinality metric. This provides the cardinality of a field, also called a distinct or unique count. You may be familiar with the SQL version:

SELECT COUNT(DISTINCT color)
FROM cars

https://www.elastic.co/guide/en/elasticsearch/guide/current/cardinality.html
https://www.elastic.co/guide/en/elasticsearch/reference/2.2/search-aggregations-metrics-cardinality-aggregation.html

mysql Load Balancing jdbc url style

Hi, many thanks.

I wanted a MySQL-style load-balancing JDBC URL setting, so I modified the source code.

-- ESDriver.java
package nl.anchormen.sql4es.jdbc;

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.nio.charset.Charset;
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import nl.anchormen.sql4es.model.HostPortPair;
import nl.anchormen.sql4es.model.Utils;

/**
 * Basic {@link Driver} implementation used to get {@link ESConnection}.
 *
 * @author cversloot
 */
public class ESDriver implements Driver{

private static final Logger logger = LoggerFactory.getLogger(ESDriver.class.getName());

private static final String ES_HOSTS = "es.hosts";
private static final String ES_INDEX = "es.index";

/**
 * Register this driver with the driver manager
 */
static{
    try {
        DriverManager.registerDriver(new ESDriver());
    } catch (SQLException sqle) {
        logger.error("Unable to register Driver", sqle);
    }
}

// @Override
// public Connection connect(String url, Properties info) throws SQLException {
// Object[] conInfo = parseURL(url, info);
// String host = (String)conInfo[0];
// int port = (int)conInfo[1];
// String index = (String)conInfo[2];
// Properties props = (Properties)conInfo[3];
// return new ESConnection(host, port, index, props);
// }

@Override
public Connection connect(String url, Properties info) throws SQLException {
    Properties properties = parseURL(url, info);

    String hosts = (String)properties.getProperty(ES_HOSTS);
    String index = (String)properties.getProperty(ES_INDEX);
    ConnectionInfo connInfo = new ConnectionInfo(hosts, index, properties);
    return new ESConnection(connInfo);
}

// /**
// * Parses the url and returns information required to create a connection. Properties
// * in the url are added to the provided properties and returned in the object array
// * @param url
// * @param info
// * @return {String:host, int:port, String:index, Properties:info}
// * @throws SQLException
// */
// private Object[] parseURL(String url, Properties info) throws SQLException{
// if(!acceptsURL2(url))
// throw new SQLException("Invalid url");
// try {
// URI uri = new URI(url.substring(12));
// String host = uri.getHost();
// int port = (uri.getPort() < 0 ? Utils.PORT : uri.getPort());
// String index = uri.getPath().length() <= 1 ? null : uri.getPath().split("/")[1];
// Properties props = Utils.defaultProps();
// if(info != null) {
// props.putAll(info);
// }
// info = props;
// if(uri.getQuery() != null)
// for(String keyValue : uri.getQuery().split("&")){
// String[] parts = keyValue.split("=");
// if(parts.length > 1) info.setProperty(parts[0].trim(), parts[1].trim());
// else info.setProperty(parts[0], "");
// }
// return new Object[]{host, port, index, info};
// } catch (URISyntaxException e) {
// throw new SQLException("Unable to parse URL. Please use '"+Utils.PREFIX+"//host:port/schema?{0,1}(param=value&)*'", e);
// }catch(ArrayIndexOutOfBoundsException e){
// throw new SQLException("No shema (index) specified. Pleas use '"+Utils.PREFIX+"//host:port/schema?{0,1}(param=value&)*'");
// }catch(Exception e){
// throw new SQLException("Unable to connect to database due to: "+e.getClass().getName(), e);
// }
// }

private Properties parseURL(String url, Properties defaults) throws SQLException 
{
    if(url == null || !url.toLowerCase().startsWith(Utils.PREFIX)) 
        return null;

    Properties urlProps = (defaults != null) ? new Properties(defaults) : new Properties();

    int beginningOfSlashes = url.indexOf("//");
    int questionIndex = url.indexOf("?");
    String queryString = (questionIndex > 1)? url.substring(questionIndex + 1).trim(): "";
    //jdbc:sql4es remove
    url = (questionIndex > 0)? 
            url.substring(beginningOfSlashes + 2, questionIndex) 
            : url.substring(beginningOfSlashes + 2);

    //  host:port,host2,host3:port/index?cluster.name=clustername
    String[] split = url.split("/", 3);
    if(split.length > 2)
    {
        throw new SQLException("Unable to parse URL. Please use '"+Utils.PREFIX+"//host:port[,host:port]*/schema?(param=value&)*'");
    }else 
    {
        if(split.length == 1 // host:port,host2,host3:port?cluster.name=clustername
                || split[1].trim().isEmpty()) //host:port,host2,host3:port/?cluster.name=clustername
        {
            // host:port,host2,host3:port?cluster.name=clustername
            throw new SQLException("No schema (index) specified. Please use '"+Utils.PREFIX+"//host:port[,host:port]*/schema?(param=value&)*'");
        }
    }
    String hostOrPortPair = split[0];
    // add property : es.hosts
    if(!urlProps.containsKey(ES_HOSTS))
    {
        urlProps.setProperty(ES_HOSTS, hostOrPortPair);
    }else
    {
        String esHosts = urlProps.getProperty(ES_HOSTS).trim();
        if(esHosts.isEmpty())
        {
            urlProps.setProperty(ES_HOSTS, hostOrPortPair);
        }else
        {
            urlProps.setProperty(ES_HOSTS, esHosts+","+hostOrPortPair);
        }
    }

    //add property : es.index
    String index = split[1];
    urlProps.setProperty(ES_INDEX, index);
    if(!queryString.isEmpty())
    {
        String[] querys = queryString.split("&");
        for (String pair : querys)
        {
            int idx = pair.indexOf("=");
            try
            {
                urlProps.put(URLDecoder.decode(pair.substring(0, idx), "UTF-8"), URLDecoder.decode(pair.substring(idx + 1), "UTF-8"));
            } catch (UnsupportedEncodingException e)
            {
                try
                {
                    urlProps.put(URLDecoder.decode(pair.substring(0, idx), Charset.defaultCharset().name()), 
                            URLDecoder.decode(pair.substring(idx + 1), Charset.defaultCharset().name()));
                } catch (UnsupportedEncodingException e1)
                {
                    throw new SQLException("Unable to connect to database due to: "+e.getClass().getName(), e);
                }
            }   
        }
    }

    return urlProps;
}
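To make the accepted URL shape concrete, here is a self-contained sketch (a simplified re-implementation for illustration, not the driver code; the jdbc:sql4es prefix is assumed to match Utils.PREFIX) showing how a multi-host URL splits into a host list, an index, and query properties:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class UrlSketch {
    static final String PREFIX = "jdbc:sql4es:"; // assumed value of Utils.PREFIX

    /** Returns {hostList, index}; query parameters are added to props. */
    static String[] parse(String url, Map<String, String> props) {
        String body = url.substring(url.indexOf("//") + 2);
        int q = body.indexOf('?');
        if (q >= 0) {
            for (String pair : body.substring(q + 1).split("&")) {
                String[] kv = pair.split("=", 2);
                props.put(kv[0], kv.length > 1 ? kv[1] : "");
            }
            body = body.substring(0, q);
        }
        String[] parts = body.split("/", 2);
        return new String[]{parts[0], parts.length > 1 ? parts[1] : ""};
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        String[] r = parse(PREFIX + "//host1:9300,host2,host3:9301/myindex?cluster.name=mycluster", props);
        System.out.println(Arrays.toString(r[0].split(",")) + " index=" + r[1] + " " + props);
        // [host1:9300, host2, host3:9301] index=myindex {cluster.name=mycluster}
    }
}
```

Hosts without an explicit port would then fall back to the driver's default port, as in the addHost(String) helper further down.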

// @Override
// public boolean acceptsURL(String url) throws SQLException {
// if(!url.toLowerCase().startsWith(Utils.PREFIX)) return false;
// try {
// URI uri = new URI(url.substring(5));
// if(uri.getHost() == null) throw new SQLException("Invalid URL, no host specified");
// if(uri.getPath() == null) throw new SQLException("Invalid URL, no index specified");
// if(uri.getPath().split("/").length > 2) throw new SQLException("Invalid URL, "+uri.getPath()+" is not a valid index");
// } catch (URISyntaxException e) {
// throw new SQLException("Unable to parse URL", e);
// }
// return true;
// }

@Override
public boolean acceptsURL(String url) throws SQLException 
{
    return (parseURL(url, null) != null);
}

@Override
public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) throws SQLException {
    Properties props = (Properties)parseURL(url, info);
    Properties defaultProps = Utils.defaultProps();

    // three fixed entries (es.hosts, es.index, cluster.name) plus the defaults
    DriverPropertyInfo[] result = new DriverPropertyInfo[defaultProps.size() + 3];
    result[0] = new DriverPropertyInfo(ES_HOSTS, (String) props.get(ES_HOSTS));
    result[0].required = true;
    result[0].description = "elasticsearch transport address list";
    result[1] = new DriverPropertyInfo(ES_INDEX, (String) props.get(ES_INDEX));
    result[1].required = true;
    result[1].description = "elasticsearch index name";

    result[2] = new DriverPropertyInfo("cluster.name", (String) props.get("cluster.name"));
    result[2].required = false;
    result[2].description = "If your cluster name is not 'elasticsearch' you should specify it";

    int index = 3;
    for(Object key : defaultProps.keySet()){
        // fall back to the default value when the parsed URL did not set this key
        Object value = (props.get(key) != null) ? props.get(key) : defaultProps.get(key);
        result[index] = new DriverPropertyInfo((String)key, value.toString());
        index++;
    }
    return result;
}

// @Override
// public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) throws SQLException {
// Properties props = (Properties)parseURL(url, info)[3];
// DriverPropertyInfo[] result = new DriverPropertyInfo[props.size()];
// int index = 0;
// for(Object key : props.keySet()){
// result[index] = new DriverPropertyInfo((String)key, props.get(key).toString());
// index++;
// }
// return result;
// }

@Override
public int getMajorVersion() {
    return Utils.ES_MAJOR_VERSION;
}

@Override
public int getMinorVersion() {
    return Utils.ES_MINOR_VERSION;
}

@Override
public boolean jdbcCompliant() {
    return false;
}

@Override
public java.util.logging.Logger getParentLogger() throws SQLFeatureNotSupportedException {
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

/**
 * The Class Connection Info.
 */
public static class ConnectionInfo
{
    private List<HostPortPair> hosts = new ArrayList<HostPortPair>(3);

    private String index;

    private Properties properties;

    public ConnectionInfo(String host, int port, String index, Properties properties) throws SQLException
    {
        this(index, properties);
        this.addHost(new HostPortPair(host, port));
    }

    public ConnectionInfo(String hostAndPortPairs, String index, Properties properties) throws SQLException
    {
        this(index, properties);
        this.addHost(hostAndPortPairs);
    }

    public ConnectionInfo(HostPortPair hostPortPair, String index, Properties properties) throws SQLException
    {
        this(index, properties);
        this.addHost(hostPortPair);
    }

    public ConnectionInfo(List<HostPortPair> hostPortPairs, String index, Properties properties) throws SQLException
    {
        this(index, properties);
        for (HostPortPair hostPortPair : hostPortPairs)
        {
            this.addHost(hostPortPair);
        }
    }

    private ConnectionInfo(String index, Properties properties) throws SQLException
    {
        this.setIndex(index);
        String esHosts = properties.getProperty(ES_HOSTS);
        if(esHosts != null)
        {
            this.addHost(esHosts);
        }
        this.setProperties(properties);
    }

    public List<HostPortPair> getHosts()
    {
        return hosts;
    }


    public boolean addHost(HostPortPair hostPortPair)
    {
        for (HostPortPair host : hosts)
        {
            if(host.equals(hostPortPair))
            {
                return false;
            }
        }
        this.hosts.add(hostPortPair);
        return true;
    }

    private void addHost(String hostAndPortPairs) throws SQLException
    {
        for (String hostPort : hostAndPortPairs.split(","))
        {
            String[] _hostPort = hostPort.split(":");
            String host = _hostPort[0].trim();
            HostPortPair hostPair =  (_hostPort.length > 1) ? new HostPortPair(host, Integer.parseInt(_hostPort[1])) : new HostPortPair(host); 
            this.addHost(hostPair);
        }
    }

    public String getIndex()
    {
        return index;
    }

    /**
     * @param index the index to set
     * @throws SQLException 
     */
    public void setIndex(String index) throws SQLException
    {
        if(index == null || index.isEmpty())
        {
            throw new SQLException("Invalid URL, no index specified");
        }
        this.index = index;
    }

    public Properties getProperties()
    {
        return properties;
    }

    public void setProperties(Properties properties)
    {
        if(this.properties == null)
        {
            this.properties = new Properties();
        }
        this.properties.putAll(properties);
    }

    public String toString()
    {
        StringBuilder tmp = new StringBuilder();
        tmp.append(Utils.PREFIX);
        tmp.append("//");
        for (HostPortPair hostPortPair : hosts)
        {
            tmp.append(hostPortPair.toString());
            tmp.append(',');
        }
        tmp.delete(tmp.length()-1, tmp.length());
        tmp.append('/');
        tmp.append(index);
        if(properties.size() > 0)
        {
            tmp.append('?');
            for (String key : properties.stringPropertyNames())
            {
                tmp.append(key);
                tmp.append('=');
                tmp.append(properties.getProperty(key));
                tmp.append('&');
            }
            tmp.delete(tmp.length()-1, tmp.length());
        }
        return tmp.toString();
    }
}

}

---- ESConnection.java
package nl.anchormen.sql4es.jdbc;

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.sql.Array;
import java.sql.Blob;
import java.sql.CallableStatement;
import java.sql.Clob;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.NClob;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLClientInfoException;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.sql.SQLWarning;
import java.sql.SQLXML;
import java.sql.Savepoint;
import java.sql.Statement;
import java.sql.Struct;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.Executor;

import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsRequest;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.test.ESIntegTestCase;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import nl.anchormen.sql4es.ESDatabaseMetaData;
import nl.anchormen.sql4es.jdbc.ESDriver.ConnectionInfo;
import nl.anchormen.sql4es.model.Heading;
import nl.anchormen.sql4es.model.HostPortPair;
import nl.anchormen.sql4es.model.Utils;

/**
 * A {@link Connection} implementation effectively wrapping an Elasticsearch
 * {@link Client}.
 *
 * @author cversloot
 */
public class ESConnection implements Connection
{

// private String host;
// private int port;
// private String index;
// private Properties props;

private ConnectionInfo connInfo;

private Client client;
private boolean active = true;
private int timeout = Integer.MAX_VALUE;

private final Logger logger = LoggerFactory.getLogger(this.getClass());
private boolean autoCommit = false;
private boolean readOnly = true;
private List<ESStatement> statements = new ArrayList<ESStatement>();


/**
 * Builds the es {@link Client} using the provided parameters.
 * 
 * @param host
 * @param port
 * @param index
 * @param props
 *            the properties will all be copied to the Settings.Builder used
 *            to create the Client
 * @throws SQLException
 */
public ESConnection(String host, int port, String index, Properties props) throws SQLException
{
    this(new ConnectionInfo(host, port, index, props));
}

public ESConnection(ConnectionInfo connectionInfo) throws SQLException
{
    this.connInfo = connectionInfo;
    this.client = buildClient();
    try
    {
        this.getTypeMap(); // loads types into properties
    } catch (Exception e)
    {
        throw new SQLException("Unable to connect to specified elasticsearch host(s)", e);
    }
}

// /**
// * Builds the Elasticsearch client using the properties this connection was
// * instantiated with
// *
// * @return
// * @throws SQLException
// */
// private Client buildClient() throws SQLException
// {
// if (props.containsKey("test"))
// { // used for integration tests
// return ESIntegTestCase.client();
// } else
// {
// try
// {
// Settings.Builder settingsBuilder = Settings.settingsBuilder();
// for (Object key : this.props.keySet())
// {
// settingsBuilder.put(key, this.props.get(key));
// }
// Settings settings = settingsBuilder.build();
// TransportClient client = TransportClient.builder().settings(settings).build()
// .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), port));
//
// // add additional hosts if set in URL query part
// if (this.props.containsKey("es.hosts"))
// for (String hostPort : this.props.getProperty("es.hosts").split(","))
// {
// String newHost = hostPort.split(":")[0].trim();
// int newPort = (hostPort.split(":").length > 1 ? Integer.parseInt(hostPort.split(":")[1])
// : Utils.PORT);
// client.addTransportAddress(
// new InetSocketTransportAddress(InetAddress.getByName(newHost), newPort));
// logger.info("Adding additional ES host: " + hostPort);
// }
//
// // check if index exists
// if (index != null)
// {
// boolean indexExists = client.admin().indices().exists(new IndicesExistsRequest(index)).actionGet()
// .isExists();
// if (!indexExists)
// throw new SQLException("Index or Alias '" + index + "' does not exist");
// }
// return client;
// } catch (UnknownHostException e)
// {
// throw new SQLException("Unable to connect to " + host, e);
// } catch (Throwable t)
// {
// throw new SQLException("Unable to connect to database", t);
// }
// }
// }

private Client buildClient() throws SQLException
{
    Properties props = connInfo.getProperties();
    if (props.containsKey("test"))
    { // used for integration tests
        return ESIntegTestCase.client();
    } else
    {
        try
        {
            Settings.Builder settingsBuilder = Settings.settingsBuilder();
            for (Object key : props.keySet())
            {
                settingsBuilder.put(key, props.get(key));
            }
            Settings settings = settingsBuilder.build();
            TransportClient client = TransportClient.builder().settings(settings).build();

            List<HostPortPair> hosts = connInfo.getHosts();
            for (HostPortPair host : hosts)
            {
                client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host.getHost()), host.getPort()));
                logger.info("Adding ES host: " + host);
            }

            // check if index exists
            if (connInfo.getIndex() != null)
            {
                boolean indexExists = client.admin().indices().exists(new IndicesExistsRequest(connInfo.getIndex())).actionGet()
                        .isExists();
                if (!indexExists)
                    throw new SQLException("Index or Alias '" + connInfo.getIndex() + "' does not exist");
            }
            return client;
        } catch (UnknownHostException e)
        {
            throw new SQLException("Unable to connect to " + connInfo.getHosts(), e);
        } catch (Throwable t)
        {
            throw new SQLException("Unable to connect to database", t);
        }
    }
}

public Client getClient()
{
    return this.client;
}

@Override
public <T> T unwrap(Class<T> iface) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public boolean isWrapperFor(Class<?> iface) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Statement createStatement() throws SQLException
{
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    ESStatement st = new ESStatement(this);
    statements.add(st); // track so close() cleans up open statements
    return st;
}

@Override
public PreparedStatement prepareStatement(String sql) throws SQLException
{
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    return new ESPreparedStatement(this, sql);
}

@Override
public CallableStatement prepareCall(String sql) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Statement createStatement(int resultSetType, int resultSetConcurrency) throws SQLException
{
    // TODO use params
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    ESStatement st = new ESStatement(this);
    statements.add(st); // track so close() cleans up open statements
    return st;
}

@Override
public String nativeSQL(String sql) throws SQLException
{
    return sql;
}

@Override
public void setAutoCommit(boolean autoCommit) throws SQLException
{
    this.autoCommit = autoCommit;
}

@Override
public boolean getAutoCommit() throws SQLException
{
    return autoCommit;
}

@Override
public void commit() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void rollback() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void close() throws SQLException
{
    if (isClosed())
        return;
    for (ESStatement st : this.statements)
        st.close();
    statements.clear();
    client.close();
    this.active = false;
}

@Override
public boolean isClosed() throws SQLException
{
    return !active;
}

@Override
public DatabaseMetaData getMetaData() throws SQLException
{
    HostPortPair hostPortPair = this.connInfo.getHosts().get(0);
    return new ESDatabaseMetaData(hostPortPair.getHost(), hostPortPair.getPort(), client, this.getClientInfo(), this);

// return new ESDatabaseMetaData(this);
}

@Override
public void setReadOnly(boolean readOnly) throws SQLException
{
    this.readOnly = readOnly;
}

@Override
public boolean isReadOnly() throws SQLException
{
    return readOnly;
}

@Override
public void setCatalog(String catalog) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public String getCatalog() throws SQLException
{
    return null;
}

@Override
public void setTransactionIsolation(int level) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public int getTransactionIsolation() throws SQLException
{
    return ESConnection.TRANSACTION_NONE;
}

@Override
public SQLWarning getWarnings() throws SQLException
{
    return null;
}

@Override
public void clearWarnings() throws SQLException
{
    // TODO
}

@Override
public PreparedStatement prepareStatement(String sql, int resultSetType, int resultSetConcurrency)
        throws SQLException
{
    // TODO use params?
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    return new ESPreparedStatement(this, sql);
}

@Override
public CallableStatement prepareCall(String sql, int resultSetType, int resultSetConcurrency) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Map<String, Class<?>> getTypeMap() throws SQLException
{
    ResultSet rs = getMetaData().getColumns(null, null, null, null);
    Map<String, Map<String, Integer>> tableColumnInfo = new HashMap<String, Map<String, Integer>>();
    while (rs.next())
    {
        String table = rs.getString(3); // TABLE_NAME
        String col = rs.getString(4);   // COLUMN_NAME
        int type = rs.getInt(5);        // DATA_TYPE (java.sql.Types constant)
        if (!tableColumnInfo.containsKey(table))
            tableColumnInfo.put(table, new HashMap<String, Integer>());
        tableColumnInfo.get(table).put(col, type);
    }
    this.connInfo.getProperties()
        .put(Utils.PROP_TABLE_COLUMN_MAP, tableColumnInfo);

    Map<String, Class<?>> result = new HashMap<String, Class<?>>();
    for (String type : tableColumnInfo.keySet())
    {
        for (String field : tableColumnInfo.get(type).keySet())
        {
            result.put(type + "." + field, Heading.getClassForTypeId(tableColumnInfo.get(type).get(field)));
        }
    }
    return result;
}

@Override
public void setTypeMap(Map<String, Class<?>> map) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void setHoldability(int holdability) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public int getHoldability() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Savepoint setSavepoint() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Savepoint setSavepoint(String name) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void rollback(Savepoint savepoint) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void releaseSavepoint(Savepoint savepoint) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Statement createStatement(int resultSetType, int resultSetConcurrency, int resultSetHoldability)
        throws SQLException
{
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    if (isClosed())
        throw new SQLException("Connection closed");
    ESStatement st = new ESStatement(this);
    statements.add(st);
    return st;
}

@Override
public PreparedStatement prepareStatement(String sql, int resultSetType, int resultSetConcurrency,
        int resultSetHoldability) throws SQLException
{
    // TODO use params
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    if (isClosed())
        throw new SQLException("Connection closed");

    ESPreparedStatement st = new ESPreparedStatement(this, sql);
    statements.add(st);
    return st;
}

@Override
public CallableStatement prepareCall(String sql, int resultSetType, int resultSetConcurrency,
        int resultSetHoldability) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public PreparedStatement prepareStatement(String sql, int autoGeneratedKeys) throws SQLException
{
    // TODO use params
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    if (isClosed())
        throw new SQLException("Connection closed");

    ESPreparedStatement st = new ESPreparedStatement(this, sql);
    statements.add(st);
    return st;
}

@Override
public PreparedStatement prepareStatement(String sql, int[] columnIndexes) throws SQLException
{
    // TODO use params
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    if (isClosed())
        throw new SQLException("Connection closed");

    ESPreparedStatement st = new ESPreparedStatement(this, sql);
    statements.add(st);
    return st;
}

@Override
public PreparedStatement prepareStatement(String sql, String[] columnNames) throws SQLException
{
    // TODO use params
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    if (isClosed())
        throw new SQLException("Connection closed");

    ESPreparedStatement st = new ESPreparedStatement(this, sql);
    statements.add(st);
    return st;
}

@Override
public Clob createClob() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Blob createBlob() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public NClob createNClob() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public SQLXML createSQLXML() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public boolean isValid(int timeout) throws SQLException
{
    return active;
}

@Override
public void setClientInfo(String name, String value) throws SQLClientInfoException
{
    this.connInfo.getProperties().setProperty(name, value);
}

@Override
public void setClientInfo(Properties properties) throws SQLClientInfoException
{
    this.connInfo.setProperties(properties);
}

@Override
public String getClientInfo(String name) throws SQLException
{
    return this.connInfo.getProperties().getProperty(name);
}

@Override
public Properties getClientInfo() throws SQLException
{
    return this.connInfo.getProperties();
}

@Override
public Array createArrayOf(String typeName, Object[] elements) throws SQLException
{
    return null;
}

@Override
public Struct createStruct(String typeName, Object[] attributes) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void setSchema(String schema) throws SQLException
{
    boolean indexExists = client.admin().indices().exists(new IndicesExistsRequest(schema)).actionGet().isExists();
    if (!indexExists)
        throw new SQLException("Index '" + schema + "' does not exist");
    this.connInfo.setIndex(schema);
}

@Override
public String getSchema() throws SQLException
{
    return this.connInfo.getIndex();
}

@Override
public void abort(Executor executor) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void setNetworkTimeout(Executor executor, int milliseconds) throws SQLException
{
    this.timeout = milliseconds;
}

@Override
public int getNetworkTimeout() throws SQLException
{
    return timeout;
}

}

------ HostPortPair
package nl.anchormen.sql4es.model;

import java.sql.SQLException;

public class HostPortPair
{
private String host;
private int port;

public HostPortPair(String host) throws SQLException
{
    this(host, Utils.PORT);
}

public HostPortPair(String host, int port) throws SQLException
{
    if(host == null || host.isEmpty() || port <= 0)
        throw new SQLException("Invalid URL, no host specified");
    this.host = host;
    this.port = port;
}

/**
 * @return the host
 */
public String getHost()
{
    return host;
}
/**
 * @param host the host to set
 */
public void setHost(String host)
{
    this.host = host;
}

/**
 * @return the port
 */
public int getPort()
{
    return port;
}
/**
 * @param port the port to set
 */
public void setPort(int port)
{
    this.port = port;
}

@Override
public String toString()
{
    return host+":"+port;
}

@Override
public boolean equals(Object o)
{
    if (o == this) return true;
    if (!(o instanceof HostPortPair)) return false;
    HostPortPair other = (HostPortPair) o;
    if (this.port != other.port) return false;
    if (this.host.equals(other.host)) // BUG: missing negation — returns false when the hosts ARE equal
        return false;
    return true;
}

}
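A corrected equals (plus a matching hashCode, which the original class lacks) might look like the sketch below. The class name and fields mirror the original; the actual fix is negating the host comparison:

```java
import java.util.Objects;

public class HostPortPair {
    private final String host;
    private final int port;

    public HostPortPair(String host, int port) {
        this.host = host;
        this.port = port;
    }

    @Override
    public boolean equals(Object o) {
        if (o == this) return true;
        if (!(o instanceof HostPortPair)) return false;
        HostPortPair other = (HostPortPair) o;
        // Equal ports AND equal hosts mean equal pairs (the original returned
        // false when the hosts matched).
        return this.port == other.port && this.host.equals(other.host);
    }

    @Override
    public int hashCode() {
        // equals/hashCode contract: equal objects must have equal hash codes
        return Objects.hash(host, port);
    }
}
```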

The second bug:

explain select sum(money) as allMoney,flag from person having max(flag)=-1

java.sql.SQLException: Having reference col: flag, fullN: max(flag) as alias:max(flag) vis: true index: -1 type: 8 not found in SELECT clause

Please modify line 40 of HavingParser.java regarding arithmetic expression support.

Cannot get nested-of-nested fields correctly

PUT /my_index
{
  "mappings": {
    "blogpost": {
      "properties": {
        "comments": {
          "type": "nested"
        }
      }
    }
  }
}



PUT /my_index/blogpost/1
{
  "title": "Nest eggs",
  "body":  "Making your money work...",
  "comments": [ 
    {
      "user":    { "id" : "John Smith"},
      "comment": "Great article"
    },
    {
      "user":    { "id" : "Alice White"},
      "comment": "More like this please"
    }
  ]
}

Here is the sample data I created. I queried it with

select * from blogpost where _id='1';

which is correctly converted into the following JSON query:

GET /my_index/_search
{
   "size": 200,
   "timeout": 10000,
   "post_filter": {
      "ids": {
         "type": "blogpost",
         "values": [
            "1"
         ]
      }
   }
}

The data returned by ES is correct, but the sql4es result table is not.

[screenshot of the sql4es result table]

The comments.user.id of the second row should be "Alice White", but sql4es fills every nested-of-nested row with the data from the first nested-of-nested entry.

Some problems when integrating with Pentaho and Saiku

**Hi Corné,
First of all, thank you for giving us such a good tool. I have been looking for a JDBC tool for Elasticsearch, and sql4es is exactly what I was searching for. However, I have run into some problems; could you help me with them? They may not be defects in sql4es itself, since sql4es works well for me in SQLWorkbench.

  1. I followed the article [https://www.anchormen.nl/elasticsearch-as-a-pentaho-source-through-sql4es/], but exceptions were thrown after I opened the analysis report and selected some fields. It seems that Pentaho knows the field names but cannot get the field data from Elasticsearch. I am using sql4es 0.8.2.2, ES 2.2.1, and the Pentaho Business trial 6.1.0. Did you change any configuration when you used sql4es with Pentaho? Some screenshots:
    https://github.com/borderlayout/test/blob/master/test1.png
    https://github.com/borderlayout/test/blob/master/test2.png
    https://github.com/borderlayout/test/blob/master/test3.png
    https://github.com/borderlayout/test/blob/master/test4.png
    https://github.com/borderlayout/test/blob/master/test5.png
  2. I also tried to use sql4es in the Saiku community edition (Saiku CE) [http://community.meteorite.bi/], but ran into another problem. Saiku CE 3.7.4 uses Apache Jackrabbit 2.8.0, which depends on lucene-core 3.6.0, while sql4es (0.8.2.3) uses Elasticsearch 2.3.2, which depends on Lucene 5.5.0. The two Lucene versions conflict, and Saiku fails to run. Could you give me some suggestions?**

Gary Wu
email: [email protected]

The Pentaho log when I select a column:
11:34:37,575 ERROR [c] Ticket Number: 1464089677575
11:34:37,578 ERROR [c] Exception in AJAX handler.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: No result found for this query
at com.pentaho.analyzer.report.ReportRequestService.get(SourceFile:219)
at com.pentaho.analyzer.content.controller.n.a(SourceFile:168)
at com.pentaho.analyzer.content.controller.b.b(SourceFile:202)
at com.pentaho.analyzer.content.AnalyzerContentGenerator.a(SourceFile:311)
at com.pentaho.analyzer.content.AnalyzerContentGenerator.createContent(SourceFile:157)
at org.pentaho.platform.web.http.api.resources.GeneratorStreamingOutput.generateContent(GeneratorStreamingOutput.java:236)
at org.pentaho.platform.web.http.api.resources.GeneratorStreamingOutput.write(GeneratorStreamingOutput.java:163)
at org.pentaho.platform.web.http.api.resources.GeneratorStreamingOutputProvider.writeTo(GeneratorStreamingOutputProvider.java:54)
at org.pentaho.platform.web.http.api.resources.GeneratorStreamingOutputProvider.writeTo(GeneratorStreamingOutputProvider.java:33)
at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1479)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1391)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1381)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716)
at org.pentaho.platform.web.servlet.JAXRSServlet.service(JAXRSServlet.java:109)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.pentaho.platform.web.servlet.JAXRSServlet.service(JAXRSServlet.java:114)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.pentaho.platform.web.http.filters.PentahoWebContextFilter.doFilter(PentahoWebContextFilter.java:185)
at com.pentaho.platform.web.http.filters.PentahoEnterpriseWebContextFilter.doFilter(SourceFile:72)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.pentaho.platform.web.http.filters.PentahoRequestContextFilter.doFilter(PentahoRequestContextFilter.java:87)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:399)
at org.springframework.security.intercept.web.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:109)
at org.springframework.security.intercept.web.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:83)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.ui.ExceptionTranslationFilter.doFilterHttp(ExceptionTranslationFilter.java:101)
at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.providers.anonymous.AnonymousProcessingFilter.doFilterHttp(AnonymousProcessingFilter.java:105)
at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.pentaho.platform.web.http.security.RequestParameterAuthenticationFilter.doFilter(RequestParameterAuthenticationFilter.java:191)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.ui.basicauth.BasicProcessingFilter.doFilterHttp(BasicProcessingFilter.java:174)
at org.pentaho.platform.web.http.security.PentahoBasicProcessingFilter.doFilterHttp(PentahoBasicProcessingFilter.java:115)
at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.context.HttpSessionContextIntegrationFilter.doFilterHttp(HttpSessionContextIntegrationFilter.java:235)
at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.pentaho.platform.web.http.filters.HttpSessionPentahoSessionIntegrationFilter.doFilter(HttpSessionPentahoSessionIntegrationFilter.java:263)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.wrapper.SecurityContextHolderAwareRequestFilter.doFilterHttp(SecurityContextHolderAwareRequestFilter.java:91)
at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.util.FilterChainProxy.doFilter(FilterChainProxy.java:188)
at org.springframework.security.util.FilterToBeanProxy.doFilter(FilterToBeanProxy.java:99)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at com.pentaho.ui.servlet.SystemStatusFilter.doFilter(SourceFile:87)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.pentaho.platform.web.http.filters.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:114)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.pentaho.platform.web.http.filters.WebappRootForwardingFilter.doFilter(WebappRootForwardingFilter.java:70)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.pentaho.platform.web.http.filters.PentahoPathDecodingFilter.doFilter(PentahoPathDecodingFilter.java:34)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:522)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1095)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:672)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1502)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1458)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: No result found for this query
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at com.pentaho.analyzer.report.ReportRequestService.get(SourceFile:175)
... 86 more
Caused by: java.lang.RuntimeException: java.sql.SQLException: No result found for this query
at com.pentaho.analyzer.report.ReportManagerImpl.runReport(SourceFile:320)
at com.pentaho.analyzer.report.ReportManagerImpl.runReport(SourceFile:139)
at com.pentaho.analyzer.report.ReportRequestService$2.a(SourceFile:275)
at com.pentaho.analyzer.report.ReportRequestService$2.call(SourceFile:270)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: java.sql.SQLException: No result found for this query
at nl.anchormen.sql4es.ESQueryState.execute(ESQueryState.java:188)
at nl.anchormen.sql4es.ESQueryState.execute(ESQueryState.java:172)
at nl.anchormen.sql4es.jdbc.ESStatement.executeQuery(ESStatement.java:68)
at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at mondrian.rolap.SqlStatement.execute(SqlStatement.java:199)
at mondrian.rolap.RolapUtil.executeQuery(RolapUtil.java:350)
at mondrian.rolap.SqlTupleReader.prepareTuples(SqlTupleReader.java:412)
at mondrian.rolap.SqlTupleReader.readMembers(SqlTupleReader.java:529)
at mondrian.rolap.SqlMemberSource.getMembersInLevel(SqlMemberSource.java:535)
at mondrian.rolap.SmartMemberReader.getMembersInLevel(SmartMemberReader.java:147)
at mondrian.rolap.RolapCubeHierarchy$CacheRolapCubeHierarchyMemberReader.getMembersInLevel(RolapCubeHierarchy.java:724)
at mondrian.rolap.RolapSchemaReader.getLevelMembers(RolapSchemaReader.java:579)
at mondrian.rolap.RolapSchemaReader.getLevelMembers(RolapSchemaReader.java:565)
at mondrian.rolap.RolapCube$RolapCubeSchemaReader.getLevelMembers(RolapCube.java:2829)
at mondrian.olap.DelegatingSchemaReader.getLevelMembers(DelegatingSchemaReader.java:195)
at mondrian.olap.Query$QuerySchemaReader.getLevelMembers(Query.java:1535)
at mondrian.olap.fun.FunUtil.getNonEmptyLevelMembers(FunUtil.java:2235)
at mondrian.olap.fun.FunUtil.levelMembers(FunUtil.java:2243)
at mondrian.olap.fun.LevelMembersFunDef$1.evaluateList(LevelMembersFunDef.java:37)
at mondrian.calc.impl.AbstractListCalc.evaluate(AbstractListCalc.java:65)
at mondrian.rolap.RolapResult.evaluateExp(RolapResult.java:1089)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.evaluateTupleIterable(RolapNamedSetEvaluator.java:66)
at mondrian.mdx.NamedSetExpr$1.evaluateIterable(NamedSetExpr.java:98)
at mondrian.calc.impl.AbstractIterCalc.evaluate(AbstractIterCalc.java:50)
at mondrian.rolap.RolapResult.evaluateExp(RolapResult.java:1089)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.evaluateTupleIterable(RolapNamedSetEvaluator.java:66)
at mondrian.mdx.NamedSetExpr$1.evaluateIterable(NamedSetExpr.java:98)
at mondrian.calc.impl.AbstractIterCalc.evaluate(AbstractIterCalc.java:50)
at mondrian.rolap.RolapResult.evaluateExp(RolapResult.java:1089)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.evaluateTupleIterable(RolapNamedSetEvaluator.java:66)
at mondrian.mdx.NamedSetExpr$1.evaluateIterable(NamedSetExpr.java:98)
at mondrian.olap.fun.GenerateFunDef$GenerateListCalcImpl.evaluateList(GenerateFunDef.java:113)
at mondrian.calc.impl.AbstractListCalc.evaluate(AbstractListCalc.java:65)
at mondrian.rolap.RolapResult.evaluateExp(RolapResult.java:1089)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.evaluateTupleIterable(RolapNamedSetEvaluator.java:66)
at mondrian.mdx.NamedSetExpr$1.evaluateIterable(NamedSetExpr.java:98)
at mondrian.olap.fun.OrderFunDef$CalcImpl.evaluateList(OrderFunDef.java:202)
at mondrian.calc.impl.AbstractListCalc.evaluate(AbstractListCalc.java:65)
at mondrian.rolap.RolapResult.evaluateExp(RolapResult.java:1089)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.evaluateTupleIterable(RolapNamedSetEvaluator.java:66)
at mondrian.mdx.NamedSetExpr$1.evaluateIterable(NamedSetExpr.java:98)
at mondrian.rolap.RolapResult.executeAxis(RolapResult.java:976)
at mondrian.rolap.RolapResult.evalLoad(RolapResult.java:801)
at mondrian.rolap.RolapResult.loadMembers(RolapResult.java:757)
at mondrian.rolap.RolapResult.(RolapResult.java:401)
at mondrian.rolap.RolapConnection.executeInternal(RolapConnection.java:672)
at mondrian.rolap.RolapConnection.access$000(RolapConnection.java:52)
at mondrian.rolap.RolapConnection$1.call(RolapConnection.java:623)
at mondrian.rolap.RolapConnection$1.call(RolapConnection.java:621)
... 4 more

Any plan to support Elasticsearch 5.0.0?

Hi,
When connecting from SQLWorkbenchJ with "sql4es-0.9.2.3.jar" I get this error:

"Received message from unsupported version: [2.0.0] minimal compatible version is: [5.0.0-alpha4]"

Many thanks

Date Time format issue

Joda-Time cannot parse dates in the format yyyy-MM-dd HH:mm:ss.SSS.

2017/06/14 10:35:55 - ES UEMM_AUDIT_TRAIL - EVENT 67.0 - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : Unexpected error
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseException:
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Couldn't get row from result set
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Unable to get value 'Date' from database resultset, index 0
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Unable to parse Date from '2017-03-09 23:04:14.222' : Invalid format: "2017-03-09 23:04:14.222" is malformed at " 23:04:14.222"
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2546)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2516)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2494)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.steps.tableinput.TableInput.doQuery(TableInput.java:259)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.steps.tableinput.TableInput.processRow(TableInput.java:140)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at java.lang.Thread.run(Thread.java:745)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Caused by: org.pentaho.di.core.exception.KettleDatabaseException:
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Unable to get value 'Date' from database resultset, index 0
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Unable to parse Date from '2017-03-09 23:04:14.222' : Invalid format: "2017-03-09 23:04:14.222" is malformed at " 23:04:14.222"
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.row.value.ValueMetaBase.getValueFromResultSet(ValueMetaBase.java:4964)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.BaseDatabaseMeta.getValueFromResultSet(BaseDatabaseMeta.java:2107)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.DatabaseMeta.getValueFromResultSet(DatabaseMeta.java:2950)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2538)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - ... 6 more
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Caused by: java.sql.SQLException: Unable to parse Date from '2017-03-09 23:04:14.222' : Invalid format: "2017-03-09 23:04:14.222" is malformed at " 23:04:14.222"
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at nl.anchormen.sql4es.ESResultSet.getTimeFromString(ESResultSet.java:340)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at nl.anchormen.sql4es.ESResultSet.getTimestamp(ESResultSet.java:388)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.row.value.ValueMetaBase.getValueFromResultSet(ValueMetaBase.java:4946)
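For illustration (using java.time rather than Joda-Time, but the mismatch is the same): the default ISO parser expects a 'T' between the date and time parts, so a value like the one in the log above needs an explicit pattern with a space. A minimal sketch:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class DateParseDemo {
    public static void main(String[] args) {
        String value = "2017-03-09 23:04:14.222"; // the value from the log above

        // LocalDateTime.parse(value) would throw here: the default ISO format
        // expects "2017-03-09T23:04:14.222" with a 'T' separator.

        // An explicit pattern with a space parses it fine:
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");
        LocalDateTime ts = LocalDateTime.parse(value, fmt);
        System.out.println(ts); // prints 2017-03-09T23:04:14.222
    }
}
```

Joda-Time's DateTimeFormat.forPattern accepts the same pattern string, so a fallback pattern of this shape in ESResultSet.getTimeFromString would presumably resolve the error.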

ElasticSearch 6.X - Failed test cases

Source version: master
Running CreateITest against Elasticsearch 6.3.2 / 6.4.0 ends up with the following errors:

At

boolean res = st.execute("CREATE TABLE simpletype (myString \"type:keyword\", myInt \"type:integer\", myDate \"type:date\")");

throws

Caused by: java.io.IOException:
Unexpected character ('t' (code 116)): was expecting double-quote to start field name

At

assertEquals(Types.DATE, rsm.getColumnType(5));

throws

SQL Value in column '5' is not a Date but is Str...
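For reference, the mapping that CREATE TABLE statement is intended to produce on ES 6.x would presumably look like the sketch below (the index name is hypothetical); the parse error suggests sql4es emits malformed JSON for the quoted "type:…" column definitions instead:

```json
PUT /index
{
  "mappings": {
    "simpletype": {
      "properties": {
        "myString": { "type": "keyword" },
        "myInt":    { "type": "integer" },
        "myDate":   { "type": "date" }
      }
    }
  }
}
```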

Unable to Connect to Elasticsearch 5.5.3 using JDBC Driver sql4es-0.8.2.4.jar.

Below is the error message. This driver supports Elasticsearch 2.2.0 but apparently not 5.5.3.

We are in urgent need of help to connect to elasticsearch using Oracle DVD 4. Please suggest/advise.

Also I have tried latest driver "sql4es-0.9.2.4.jar", but no luck.

Apr 13, 2018 5:18:08 PM oracle.bi.datasource.logging.ODLLoggerWrapper error
SEVERE: java.util.concurrent.ExecutionException oracle.bi.datasource.exceptions.DatasourceException: [JDSError : 102] Cannot create a connection since there are some errors. Please fix them and try again. Failed creating a data source connection for jdbc:sql4es://localhost:9300/twitter_local?cluster.name=elasticsearch
Cause - [JDSError : 102] Cannot create a connection since there are some errors. Please fix them and try again. Failed creating a data source connection for jdbc:sql4es://localhost:9300/twitter_local?cluster.name=elasticsearch
java.util.concurrent.FutureTask.report(FutureTask.java:122)
java.util.concurrent.FutureTask.get(FutureTask.java:206)
oracle.bi.datasource.service.adf.server.DatasourceServlet.doGet(DatasourceServlet.java:450)
oracle.bi.datasource.service.adf.server.DatasourceServlet.doPost(DatasourceServlet.java:597)
javax.servlet.http.HttpServlet.service(HttpServlet.java:707)

"having field = value" in a select statement does not work correctly

If I have SQL like select name, count(1) as num from table group by name having num = 5, the "num = 5" condition does not work correctly because the code uses the == operator to compare two boxed Double values. The code is in the following file:
sql4es/src/main/java/nl/anchormen/sql4es/model/expression/SimpleComparison.java

if(this.comparisonType == Type.EQUAL) return leftValue == rightValue;
if(this.comparisonType == Type.GREATER_THAN) return leftValue > rightValue;
if(this.comparisonType == Type.GREATER_THAN_OR_EQUAL) return leftValue >= rightValue;
if(this.comparisonType == Type.LESS_THAN) return leftValue < rightValue;
if(this.comparisonType == Type.LESS_THAN_OR_EQUAL) return leftValue <= rightValue;

Could this use return leftValue.equals(rightValue); instead?
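A quick illustration of the pitfall: == on two Double objects compares references, not values, so it only returns true when both sides happen to be the same object. A minimal sketch:

```java
public class DoubleEqualsDemo {
    public static void main(String[] args) {
        Double left = Double.parseDouble("5.0");   // boxed via Double.valueOf
        Double right = Double.parseDouble("5.0");  // a distinct Double object

        System.out.println(left == right);       // reference comparison: typically false
        System.out.println(left.equals(right));  // value comparison: true
        System.out.println(Double.compare(left, right) == 0); // also true
    }
}
```

Replacing the == in the EQUAL branch of SimpleComparison with leftValue.equals(rightValue), or Double.compare(leftValue, rightValue) == 0, would fix this case; the relational operators (>, >=, <, <=) auto-unbox to primitive double and are unaffected.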

Using with Jaspersoft Reports or JDBC Manager

Hi,
I have been looking for an ES JDBC driver for months, so I am very happy I came across this.
I am trying to use it with Jaspersoft Report Studio. When I do a test, it shows as successful, but when I go to browse the tables/DB (as I normally would when connecting to MS SQL), it just sits on Pending... under Table, View & Global Temporary.
I can connect to it with JDBC Manager on Windows but get a similar result where it just sits there.

Can you point me to where I can look to see where the issue could be?

Thanks

ESConnection.isClosed() check error

If I use the sql4es driver with the Apache Commons DBCP library, a connection initialization error occurs:
"initializeConnection: connection closed"
The isClosed() method of the ESConnection class may need to be modified as follows:
public boolean isClosed() throws SQLException {
    return !active; // old version returned: active
}

Disable automatic discovery?

Is it possible to disable automatic discovery of the cluster nodes? We have an ES cluster behind an NGINX proxy, and the driver is unable to connect to the cluster (2.3.5).
In the ElasticSearch Hadoop driver for example there is a setting called es.nodes.wan.only which disables auto-discovery - is there a way to do it with sql4es?

Thanks,
Dan

NOT Support in SQL - Pentaho Integration

I am trying to use it with Pentaho as described at https://www.anchormen.nl/elasticsearch-as-a-pentaho-source-through-sql4es/

The driver works well in SQLWorkbenchJ but Pentaho gives an error when trying to create an Analysis Report:
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: NOT is currently not supported, use '<>' instead

I'm using sql4es 0.8.2.3, Elasticsearch 2.3.4 and Pentaho 6.1.0.1 Trial.
How did you make it work with Pentaho in the blog post? Are there any plans to add NOT support?


ES 5.4 Error SQL4ES 5.0.0

I am able to connect to ES 5.4 using a JDBC connection, but I get the following error while using a table input.

The field in question is of type long in ES. Do I have to use any type conversion in the SQL query?

2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : Unexpected error
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseException:
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Couldn't get row from result set
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX -
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Unable to get value 'BigNumber(16)' from database resultset, index 0
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Value in column '1' is not of type BigDecimal but is class java.lang.Integer
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX -
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX -
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2546)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2516)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2494)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.steps.tableinput.TableInput.doQuery(TableInput.java:259)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.steps.tableinput.TableInput.processRow(TableInput.java:140)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at java.lang.Thread.run(Thread.java:745)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Caused by: org.pentaho.di.core.exception.KettleDatabaseException:
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Unable to get value 'BigNumber(16)' from database resultset, index 0
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Value in column '1' is not of type BigDecimal but is class java.lang.Integer
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX -
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.row.value.ValueMetaBase.getValueFromResultSet(ValueMetaBase.java:4964)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.BaseDatabaseMeta.getValueFromResultSet(BaseDatabaseMeta.java:2107)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.DatabaseMeta.getValueFromResultSet(DatabaseMeta.java:2950)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2538)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - ... 6 more
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Caused by: java.sql.SQLException: Value in column '1' is not of type BigDecimal but is class java.lang.Integer
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at nl.anchormen.sql4es.ESResultSet.getBigDecimal(ESResultSet.java:548)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.row.value.ValueMetaBase.getValueFromResultSet(ValueMetaBase.java:4919)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - ... 9 more
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Finished reading query, closing connection.

Select literal/constant support

Hello!
Thanks for this helpful library.

Could you please implement support for selecting literal values?
I mean queries like "select id, name, 42, 'foo' from Bar". Currently such queries always return 0 for integer literals and null for string ones.

Incorrect type for nested fields

In our index we have a field called "system" that has two nested properties - host (String) and sampletime (Date).
Using WorkbenchJ, the following query works OK:

SELECT system.host, system.sampletime
FROM "index"."data-generator-poc-async"."arm" limit 1

But the following queries fail with an error "Value in column '82' is not a Date but is String" (column 82 is system.sampletime):

SELECT system
FROM "index"."data-generator-poc-async"."arm" limit 1

SELECT *
FROM "index"."data-generator-poc-async"."arm" limit 1

Is it best practice for sql4es to use only one ESConnection?

We used a connection pool to create and maintain connections when the database was MySQL, PostgreSQL, or Oracle. Following that pattern, we planned to use a pool to manage ESConnections as well. But when we reviewed the code, we found that each ESConnection wraps one TransportClient instance. If we use a pool, there will be multiple TransportClient instances in the project (one JVM). However, the official documentation suggests one client per JVM.

So what should we do? Use only one ESConnection instance, or multiple ESConnection instances in a pool? Thanks for your reply.

Value in column X is not of type Boolean but is class java.lang.Boolean

Hi there,

I'm running into an odd issue trying to run a simple select * from statement, where the type (arm) exists in more than one index (though both the query and the URL specify a single index).

This is the query:

SELECT * from "data-generator-poc-async".arm

And this is the URL:
jdbc:sql4es://myelastichost:9302/data-generator-poc-async?cluster.name=test23

Using the exact same connection URL and query, I get the following results:

  • Squirrel - returns all documents in the specified index, but with hundreds of columns that do not exist in the index at all (they do exist however for the arm type in other indices on the cluster). The values for all of these columns are being shown as NULL / false for boolean values.
  • WorkbenchJ - Getting back an error: "Value in column '552' is not a Date but is String"
    There are two problems here:
    1. My index doesn't have 552 columns.. the count is probably skewed because it pulls in columns for the arm type from other indices
    2. Using Squirrel I can see that the getColumnTypeName and getColumnClassName for column 552 are both java.sql.Date
  • Apache Drill (ultimately this is what I want to use the sql4es driver with) - Getting back an error: "Value in column '783' is not of type boolean but is class java.lang.Boolean"

The data I have in the index is very simple, Am I doing something wrong? :)

Any plan for "Parent-Child" search on sql4es?

Hi,

I think I share the same preference as you guys: using SQL to search is much more convenient than using JSON or the Java client to query Elasticsearch.

However, my question is: does sql4es plan to support "Parent-Child" search on Elasticsearch? I think "Parent-Child" search is quite a major feature of Elasticsearch. Hopefully sql4es can add this to a future milestone.

Sincerely,

Feiran

Issue with Elasticsearch 5.2.0

Hi,

I am using Elasticsearch 5.2.0 and querying it in Pentaho 7.1 CDE using sql4es-0.8_5.0.0.jar
The JDBC connection is established successfully, but I get the following error when querying a table:
Caused by: java.lang.NullPointerException
at nl.anchormen.sql4es.ESResultSet.getStatement(ESResultSet.java:905)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.ResultSetTableModelFactory.generateDefaultTableModel(ResultSetTableModelFactory.java:296)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SimpleSQLReportDataFactory.parametrizeAndQuery(SimpleSQLReportDataFactory.java:328)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SimpleSQLReportDataFactory.queryData(SimpleSQLReportDataFactory.java:178)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SQLReportDataFactory.queryData(SQLReportDataFactory.java:142)
at org.pentaho.reporting.engine.classic.core.CompoundDataFactory.queryStaticInternal(CompoundDataFactory.java:172)
at org.pentaho.reporting.engine.classic.core.CompoundDataFactory.queryStatic(CompoundDataFactory.java:154)
at org.pentaho.reporting.engine.classic.core.CompoundDataFactory.queryData(CompoundDataFactory.java:67)
at org.pentaho.reporting.engine.classic.core.cache.CachingDataFactory.queryInternal(CachingDataFactory.java:411)
at org.pentaho.reporting.engine.classic.core.cache.CachingDataFactory.queryData(CachingDataFactory.java:299)
at pt.webdetails.cda.dataaccess.PREDataAccess.performRawQuery(PREDataAccess.java:127)

Could anyone please share inputs on this?

Thanks

first bug

explain select sum(money) as allMoney from person where id=3
the result is:
{
  "size" : 0,
  "timeout" : 10000,
  "query" : {
    "term" : {
      "id" : 3
    }
  },
  "aggregations" : {
    "filter" : {
      "filter" : {
        "term" : {
          "id" : 3
        }
      },
      "aggregations" : {
        "sum(Money)" : {
          "sum" : {
            "field" : "Money"
          }
        }
      }
    }
  }
}
Note that the field 'money' becomes 'Money'.

I suggest you modify line 251 of Heading.java or SelectParser.java, around 'createColumn'.

Issue with exposing metadata of aliases

Hi,

I'm having trouble with a 3rd-party BI tool reading the metadata of the Elasticsearch indexes exposed as VIEWs.
Is there any difference, from the metadata point of view, in the way VIEWs are exposed (via JDBC) compared to what MySQL does, for example?
Maybe something that should be set as an additional parameter in the URI?

resultsets fetch implementation

Is the result set FETCH implemented based on the scrolling concept described at https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/java-search-scrolling.html, or is it restricted by the greater of index.max_result_window and fetch.size?
Currently I can't find a way to get the entire result set from a SELECT query if the result contains more than fetch.size (or index.max_result_window) rows, because offsets are currently not implemented in sql4es.
Any hints/workarounds or plans for the future? :)
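For reference, the scroll pattern from the linked page looks roughly like this against the 2.x TransportClient API. This is a non-runnable sketch: it assumes an already-constructed `client` and a live cluster, and the index name `myindex` is made up.

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.search.SearchHit;

public class ScrollSketch {
    static void scrollAll(Client client) {
        // Open a scroll context; setSize is the batch size per round trip,
        // and is not capped by index.max_result_window
        SearchResponse resp = client.prepareSearch("myindex")
                .setScroll(new TimeValue(60000)) // keep the context alive for 60s
                .setSize(1000)
                .get();
        while (resp.getHits().getHits().length > 0) {
            for (SearchHit hit : resp.getHits().getHits()) {
                // process hit.getSourceAsString()
            }
            // fetch the next batch using the scroll id from the previous response
            resp = client.prepareSearchScroll(resp.getScrollId())
                    .setScroll(new TimeValue(60000))
                    .get();
        }
    }
}
```

If the driver's fetch were implemented on top of this loop, a SELECT could stream arbitrarily many rows in fetch.size-sized batches instead of being capped by a single search request.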

Error occurred while creating ad-hoc view in Jaspersoft Server Domain

I am using this driver to create a domain in JasperSoft Server. I could create the domain, but while creating the ad-hoc view I received the following error:

Caused by: java.sql.SQLException: No active index set for this driver. Pleas specify an active index or alias by executing 'USE <index/alias>' first
at nl.anchormen.sql4es.jdbc.ESStatement.executeQuery(ESStatement.java:62)
at nl.anchormen.sql4es.jdbc.ESStatement.execute(ESStatement.java:188)
at nl.anchormen.sql4es.jdbc.ESPreparedStatement.executeQuery(ESPreparedStatement.java:66)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
at net.sf.jasperreports.engine.query.JRJdbcQueryExecuter.createDatasource(JRJdbcQueryExecuter.java:310)

Could you please help?

Memory consumption

Hi,

It seems there is a memory issue when doing a select from an index (type) without a where clause filter or a limit.
Let's say we have index/type foo/bar and we run the statement "select * from bar".
If the index is large enough, the driver blows up on memory, no matter what fetch.size is set to.
Is there any way to limit this? Isn't the fetch.size parameter supposed to stop it from growing that much?

BTW, this should be very easy to reproduce, even on the twitter example.
(Of course, the more fields returned, the more pressing this issue becomes.)

getDate(int columnIndex, Calendar cal) not implemented

The getDate function that accepts a column index and a calendar is not implemented:

    @Override
    public Date getDate(int columnIndex, Calendar cal) throws SQLException {
        // TODO Auto-generated method stub
        return null;
    }

Some clients (Apache Drill being one) require this functionality since they always pass a Calendar object.
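One way to implement it is to delegate to the single-argument getDate and re-interpret the date's year/month/day fields in the supplied calendar's time zone. The following is a standalone sketch under that assumption; the helper name `toDateWithCalendar` is hypothetical and not part of the driver:

```java
import java.sql.Date;
import java.util.Calendar;
import java.util.TimeZone;

public class DateWithCalendar {

    // Re-interpret the date's local year/month/day in the target calendar's
    // time zone. Falls back to the original date when either argument is null.
    static Date toDateWithCalendar(Date date, Calendar cal) {
        if (date == null || cal == null) return date;
        Calendar local = Calendar.getInstance();
        local.setTime(date);
        Calendar target = (Calendar) cal.clone(); // don't mutate the caller's calendar
        target.clear();
        target.set(local.get(Calendar.YEAR),
                   local.get(Calendar.MONTH),
                   local.get(Calendar.DAY_OF_MONTH));
        return new Date(target.getTimeInMillis());
    }

    public static void main(String[] args) {
        Date d = Date.valueOf("2017-06-07");
        Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        System.out.println(toDateWithCalendar(d, utc));
    }
}
```

The override in ESResultSet could then simply `return toDateWithCalendar(getDate(columnIndex), cal);`, which would satisfy clients like Apache Drill that always pass a Calendar.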

Any support for 2.4.3?

Let me say this project is awesome!

It works for me when I connect to 2.4.0, but I get a transport error when I connect to 2.4.3. Is it supported?

Not able to connect to elastic versions 5.X.X

Hello!
I am trying to connect to an Elasticsearch server running version 5.3.0 and run SQL queries.
I am getting the following error on the server side:

[2018-10-15T12:37:06,482][WARN ][o.e.t.n.Netty4Transport ] [gIaF784] exception caught on transport layer [[id: 0x294d658d, L:/10.54.135.59:9300 - R:/1
0.54.135.59:52665]], closing connection
java.lang.IllegalStateException: Received message from unsupported version: [2.0.0] minimal compatible version is: [5.0.0]
at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1323) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[transport-netty4-5.3.0.jar
:5.3.0]

I understand that support for later versions of Elasticsearch is not yet available.
When will the new release be available? Or is there another way around it?
I tried compiling the master branch code and using the jar, but still could not connect.

I get the following error on the client side when connecting using the master branch:

Exception in thread "main" java.lang.NoClassDefFoundError: org/elasticsearch/client/transport/TransportClient
at nl.anchormen.sql4es.jdbc.ESDriver.connect(ESDriver.java:47)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.client.transport.TransportClient
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 4 more

Query with identical fields does not correctly populate the result set

Using 8.2.2 built from source (4/20/2016), and testing with SQL Workbench/J Build 119 and Elasticsearch 2.0.2/2.1.1/2.1.2, a query like "SELECT field as field_1, field as field_2, field as field_3 FROM table;" returns a result set with all three columns, but only the last is populated with values.

the last

select noon, noon_some from person where id=1
The second field disappears from the result set because of an inaccurate startsWith match at line 89 of buildHeaders.java.
