
cql-rb's Introduction

This is not the Cassandra driver you are looking for

cql-rb has graduated from community driver to being the foundation of the official DataStax Ruby Driver for Apache Cassandra.

There will be no more development in this repository, with the exception of critical bug fixes. I encourage everyone to start migrating to the new driver as soon as you can; it's got some great new features that you should try out.

The cql-rb code and the old readme will remain here as legacy documentation.

Read the announcement of the new Ruby driver or the documentation with all the new features.


Ruby CQL3 driver


If you're reading this on GitHub, please note that this is the readme for the development version and that some features described here might not yet have been released. You can find the readme for a specific version either through rubydoc.info or via the release tags (here is an example).

Requirements

Cassandra 1.2 or later with the native transport protocol turned on and a modern Ruby. It's tested continuously using Travis with Cassandra 2.0.5, Ruby 1.9.3, 2.0, JRuby 1.7 and Rubinius 2.1.

Installation

gem install cql-rb

If you want to use compression you should also install snappy or lz4-ruby. See below for more information about compression.

Quick start

require 'cql'

client = Cql::Client.connect(hosts: ['cassandra.example.com'])
client.use('system')
rows = client.execute('SELECT keyspace_name, columnfamily_name FROM schema_columnfamilies')
rows.each do |row|
  puts "The keyspace #{row['keyspace_name']} has a table called #{row['columnfamily_name']}"
end

The host you specify is just a seed node; the client will automatically connect to all the other nodes in the cluster (or to the nodes in the same data center if you're running multiple rings).

When you're done you can call #close to disconnect from Cassandra:

client.close

Usage

The full API documentation is available from rubydoc.info.

Changing keyspaces

You can specify a keyspace to change to immediately after connection by passing the :keyspace option to Client.connect, but you can also use the #use method, or #execute:

client.use('measurements')

or using CQL:

client.execute('USE measurements')

Running queries

You run CQL statements by passing them to #execute.

client.execute("INSERT INTO events (id, date, description) VALUES (23462, '2013-02-24T10:14:23+0000', 'Rang bell, ate food')")

client.execute("UPDATE events SET description = 'Oh, my' WHERE id = 13126")

If the CQL statement passed to #execute returns a result (e.g. it's a SELECT statement) the call returns an enumerable of rows:

rows = client.execute('SELECT date, description FROM events')
rows.each do |row|
  row.each do |key, value|
    puts "#{key} = #{value}"
  end
end

The enumerable also has an accessor called metadata which returns a description of the rows and columns:

rows = client.execute('SELECT date, description FROM events')
rows.metadata['date'].type # => :date

If you're using Cassandra 2.0 or later you no longer have to build CQL strings when you want to insert a value in a query, there's a new feature that lets you use bound values with regular statements:

client.execute("UPDATE users SET age = ? WHERE user_name = ?", 41, 'Sam')

If you find yourself doing this often, it's better to use prepared statements. As a rule of thumb, if your application is sending a request more than once, a prepared statement is almost always the right choice.

When you use bound values with regular statements the type of the values has to be guessed. Cassandra supports multiple different numeric types, but there's no reliable way of guessing whether or not a Ruby Fixnum should be encoded as a BIGINT or INT, or whether a Ruby Float is a DOUBLE or FLOAT. When there are multiple choices the encoder will pick the larger type (e.g. BIGINT over INT). For Ruby strings it will always guess VARCHAR, never BLOB.

You can override the guessing by passing type hints as an option, see the API docs for more information.
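The guessing rules described above can be sketched as a pure function. This is a hypothetical illustration, not the driver's actual encoder, which is more involved; but it follows the same principle: when in doubt, pick the larger type, and strings are always VARCHAR.

```ruby
# Hypothetical sketch of the type-guessing rules described above.
# Not the driver's real encoder, just an illustration of the rules.
def guess_cql_type(value)
  case value
  when Integer then :bigint   # BIGINT over INT (the larger type wins)
  when Float   then :double   # DOUBLE over FLOAT
  when String  then :varchar  # always VARCHAR, never BLOB
  else raise ArgumentError, "cannot guess a CQL type for #{value.class}"
  end
end
```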

Each call to #execute selects a random connection to run the query on.

Creating keyspaces and tables

There is no special facility for creating keyspaces and tables; they are created by executing CQL:

keyspace_definition = <<-KSDEF
  CREATE KEYSPACE measurements
  WITH replication = {
    'class': 'SimpleStrategy',
    'replication_factor': 3
  }
KSDEF

table_definition = <<-TABLEDEF
  CREATE TABLE events (
    id INT,
    date DATE,
    comment VARCHAR,
    PRIMARY KEY (id)
  )
TABLEDEF

client.execute(keyspace_definition)
client.use('measurements')
client.execute(table_definition)

You can also ALTER keyspaces and tables, and you can read more about that in the CQL3 syntax documentation.

Prepared statements

The driver supports prepared statements. Use #prepare to create a statement object, and then call #execute on that object to run a statement. You must supply values for all bound parameters when you call #execute.

statement = client.prepare('SELECT date, description FROM events WHERE id = ?')

[123, 234, 345].each do |id|
  rows = statement.execute(id)
  # ...
end

A prepared statement can be run many times, but the CQL parsing will only be done once on each node. Use prepared statements for queries you run over and over again.

INSERT, UPDATE, DELETE and SELECT statements can be prepared, other statements may raise QueryError.

Statements are prepared on all connections and each call to #execute selects a random connection to run the query on.

You should only create a prepared statement for a query once, and then reuse the prepared statement object. Preparing the same CQL over and over again is bad for performance since each preparation requires a roundtrip to all connected Cassandra nodes.
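One way to make sure each query is only prepared once is to memoize the statement objects. The cache class below is a hypothetical sketch (it is not part of cql-rb); only the client's #prepare method is the real API.

```ruby
# A minimal, thread safe cache of prepared statements, keyed by CQL.
# Hypothetical helper; client#prepare is the real cql-rb API.
class PreparedStatementCache
  def initialize(client)
    @client = client
    @statements = {}
    @lock = Mutex.new
  end

  # Returns the prepared statement for this CQL, preparing it on first
  # use only; later calls with the same CQL reuse the same object.
  def prepare(cql)
    @lock.synchronize do
      @statements[cql] ||= @client.prepare(cql)
    end
  end
end
```

With something like this you can call `cache.prepare('SELECT ...')` from anywhere in your application without worrying about preparing the same CQL twice.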

Batching

If you're using Cassandra 2.0 or later you can build batch requests, either from regular queries or from prepared statements. Batches can consist of INSERT, UPDATE and DELETE statements.

There are a few different ways to work with batches. One is with a block, where you build up a batch that is sent when the block ends:

client.batch do |batch|
  batch.add("UPDATE users SET name = 'Sue' WHERE user_id = 'unicorn31'")
  batch.add("UPDATE users SET name = 'Kim' WHERE user_id = 'dudezor13'")
  batch.add("UPDATE users SET name = 'Jim' WHERE user_id = 'kittenz98'")
end

Another is by creating a batch and sending it yourself:

batch = client.batch
batch.add("UPDATE users SET name = 'Sue' WHERE user_id = 'unicorn31'")
batch.add("UPDATE users SET name = 'Kim' WHERE user_id = 'dudezor13'")
batch.add("UPDATE users SET name = 'Jim' WHERE user_id = 'kittenz98'")
batch.execute

You can mix any combination of statements in a batch:

prepared_statement = client.prepare("UPDATE users SET name = ? WHERE user_id = ?")
client.batch do |batch|
  batch.add(prepared_statement, 'Sue', 'unicorn31')
  batch.add("UPDATE users SET age = 19 WHERE user_id = 'unicorn31'")
  batch.add("INSERT INTO activity (user_id, what, when) VALUES (?, 'login', NOW())", 'unicorn31')
end

Batches can have one of three different types: logged, unlogged or counter, where logged is the default. Their exact semantics are defined in the Cassandra documentation, but this is how you specify which one you want:

counter_statement = client.prepare("UPDATE my_counter_table SET my_counter = my_counter + ? WHERE id = ?")
client.batch(:counter) do |batch|
  batch.add(counter_statement, 3, 'some_counter')
  batch.add(counter_statement, 2, 'another_counter')
end

If you want to execute the same prepared statement multiple times in a batch there is a special variant of the batching feature available from PreparedStatement:

# the same counter_statement as in the example above
counter_statement.batch do |batch|
  batch.add(3, 'some_counter')
  batch.add(2, 'another_counter')
end

Cassandra 1.2 also supported batching, but only as a CQL feature: you had to build the batch as a string, and it didn't really play well with prepared statements.

Paging

If you're using Cassandra 2.0 or later you can page your query results by adding the :page_size option to a query:

result_page = client.execute("SELECT * FROM large_table WHERE id = 'partition_with_lots_of_data'", page_size: 100)

while result_page
  result_page.each do |row|
    p row
  end
  result_page = result_page.next_page
end
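The while loop above can be wrapped in an Enumerator that hides the page boundaries. The helper below is a hypothetical sketch; it relies only on the #each and #next_page methods shown above, with #next_page assumed to return nil after the last page.

```ruby
# Flattens a paged result into a single enumerator of rows.
# Hypothetical helper built on the paging API shown above.
def each_row_paged(first_page)
  Enumerator.new do |yielder|
    page = first_page
    while page
      page.each { |row| yielder << row }
      page = page.next_page # nil after the last page ends the loop
    end
  end
end
```

Usage would look something like `each_row_paged(client.execute(cql, page_size: 100)).each { |row| p row }`.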

Consistency

You can specify the default consistency to use when you create a new Client:

client = Cql::Client.connect(hosts: %w[localhost], default_consistency: :all)

The #execute (of Client, PreparedStatement and Batch) method also supports setting the desired consistency level on a per-request basis:

client.execute('SELECT * FROM users', consistency: :local_quorum)

statement = client.prepare('SELECT * FROM users')
statement.execute(consistency: :one)

batch = client.batch
batch.add("UPDATE users SET email = '[email protected]' WHERE id = 'sue'")
batch.add("UPDATE users SET email = '[email protected]' WHERE id = 'tom'")
batch.execute(consistency: :all)

batch = client.batch(consistency: :quorum) do |batch|
  batch.add("UPDATE users SET email = '[email protected]' WHERE id = 'sue'")
  batch.add("UPDATE users SET email = '[email protected]' WHERE id = 'tom'")
end

For batches the options given to #execute take precedence over options given to #batch.

The possible values for consistency are:

  • :any
  • :one
  • :two
  • :three
  • :quorum
  • :all
  • :local_quorum
  • :each_quorum
  • :local_one

Unless you've set it yourself, the default consistency level is :quorum.

Consistency is ignored for USE, TRUNCATE, CREATE and ALTER statements, and some (like :any) aren't allowed in all situations.

Compression

The CQL protocol supports frame compression, which can give you a performance boost if your requests or responses are big. To enable it you can pass a compressor object when you connect.

Cassandra currently supports two compression algorithms: Snappy and LZ4. cql-rb supports both, but in order to use them you will have to install the snappy or lz4-ruby gems separately. Once it's installed you can enable compression like this:

require 'cql/compression/snappy_compressor'

compressor = Cql::Compression::SnappyCompressor.new
client = Cql::Client.connect(hosts: %w[localhost], compressor: compressor)

or

require 'cql/compression/lz4_compressor'

compressor = Cql::Compression::Lz4Compressor.new
client = Cql::Client.connect(hosts: %w[localhost], compressor: compressor)

Which one should you choose? On paper the LZ4 algorithm is more efficient and the one Cassandra defaults to for SSTable compression. They both achieve roughly the same compression ratio, but LZ4 does it quicker.

Logging

You can pass a standard Ruby logger to the client to get some more information about what is going on:

require 'logger'

client = Cql::Client.connect(logger: Logger.new($stderr))

Most of the logging will be when the driver connects and discovers new nodes, when connections fail and so on, but also when statements are prepared. The logging is designed to not cause much overhead and only relatively rare events are logged (e.g. normal requests are not logged).

Tracing

You can request that Cassandra traces a request and records what each node had to do to process the request. To request that a query is traced you can specify the :trace option to #execute. The request will proceed as normal, but you will also get a trace ID back in your response. This ID can then be used to load up the trace data:

result = client.execute("SELECT * FROM users", trace: true)
session_result = client.execute("SELECT * FROM system_traces.sessions WHERE session_id = ?", result.trace_id, consistency: :one)
events_result = client.execute("SELECT * FROM system_traces.events WHERE session_id = ?", result.trace_id, consistency: :one)

Notice how you can query tables in other keyspaces by prefixing their names with the keyspace name.

The system_traces.sessions table contains information about the request itself; which node was the coordinator, the CQL, the total duration, etc. (if the duration column is null the trace hasn't been completely written yet and you should load it again later). The events table contains information about what happened on each node and at what time. Note that each event only contains the number of seconds that elapsed from when the node started processing the request – you can't easily sort these events in a global order.
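Since you can't order events globally, a practical approach is to group them per node and sort within each node by elapsed time. The sketch below is hypothetical; the column names (source, source_elapsed, activity) are the standard system_traces.events schema, and any enumerable of row hashes will do.

```ruby
# Formats trace events as a per-node timeline, sorted by the number of
# microseconds elapsed on that node (source_elapsed). Hypothetical
# helper; the column names come from the system_traces.events schema.
def format_trace(events)
  events.group_by { |row| row['source'] }.map do |source, rows|
    lines = rows.sort_by { |row| row['source_elapsed'] }.map do |row|
      format('  %8d us  %s', row['source_elapsed'], row['activity'])
    end
    ([source.to_s] + lines).join("\n")
  end.join("\n")
end
```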

Thread safety

Except for results and batches, everything in cql-rb is thread safe. You only need a single client object in your application; in fact, creating more than one is a bad idea. Similarly, prepared statements are thread safe and should be shared.

There are two things that you should be aware are not thread safe: result objects and batches. Result objects are wrappers around an array of rows, and their primary use case is iteration, something that makes little sense to do concurrently, so they've been designed without locking to avoid that unnecessary cost. Similarly, batches aren't usually built concurrently, so to avoid the cost of locking they are not thread safe either. If you, for some reason, need to use results or batches concurrently, you're responsible for locking around them. If you do this, you're probably doing something wrong, though.

CQL3

This is just a driver for the Cassandra native CQL protocol, it doesn't really know anything about CQL. You can run any CQL3 statement and the driver will return whatever Cassandra replies with.

Read more about CQL3 in the CQL3 syntax documentation and the Cassandra query documentation.

Troubleshooting

I get "connection refused" errors

Make sure that the native transport protocol is enabled. If you're running Cassandra 1.2.5 or later the native transport protocol is enabled by default, if you're running an earlier version (but later than 1.2) you must enable it by editing cassandra.yaml and setting start_native_transport to true.

To verify that the native transport protocol is enabled, search your logs for the message "Starting listening for CQL clients" and look at which IP and port it is binding to.

I get "Deadlock detected" errors

This means that the driver's IO reactor has crashed hard. Most of the time it means that you're using a framework, server or runtime that forks and you call Client.connect in the parent process. Check the documentation and see if there's any way you can register to run some piece of code in the child process just after a fork, and connect there.

This is how you do it in Resque:

Resque.after_fork = proc do
  # connect to Cassandra here
end

and this is how you do it in Passenger:

PhusionPassenger.on_event(:starting_worker_process) do |forked|
  if forked
    # connect to Cassandra here
  end
end

in Unicorn you do it in the config file:

after_fork do |server, worker|
  # connect to Cassandra here
end

Since prepared statements are tied to a particular connection, you'll need to recreate those after forking as well.

If your process does not fork and you still encounter deadlock errors, it might also be a bug. All IO is done in a dedicated thread, and if something happens that makes that thread shut down, Ruby will detect that the locks the client code is waiting on can't be unlocked.

I get "Bad file descriptor"

If you're using cql-rb on Windows, there's an experimental branch with Windows support. The problem is that Windows does not support non-blocking reads on IO objects other than sockets, and the fix is very small. Unfortunately I have no way of properly testing things on Windows, hence the "experimental" label.

I get QueryError

All errors that originate on the server side are raised as QueryError. If you get one of these the error is in your CQL or on the server side.

I'm not getting all elements back from my list/set/map

There's a known issue with collections that get too big. The protocol uses an unsigned short for the size of collections, but there is no way for Cassandra to stop you from creating a collection with more than 65536 elements, so when you do, the size field overflows, with strange results. The data is there, you just can't get it back.
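The overflow can be demonstrated with Ruby's own pack notation: an unsigned 16-bit short ("n") silently wraps at 65536, so a collection of 70000 elements appears to have only 4464.

```ruby
# The protocol writes collection sizes as an unsigned 16-bit short,
# which is "n" in Ruby's pack notation, so sizes wrap at 65536.
encoded = [70_000].pack('n')          # the size field as it goes on the wire
decoded = encoded.unpack('n').first   # => 4464, i.e. 70_000 % 65_536
```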

Authentication doesn't work

Please open an issue. It should be working, but it's hard to set up and write automated tests for, so there may be edge cases that aren't covered. If you're using Cassandra 2.0 or DataStax Enterprise 3.1 or higher and/or are using something other than the built in PasswordAuthenticator your setup is theoretically supported, but it's not field tested.

If you are using DataStax Enterprise earlier than 3.1, authentication is unfortunately not supported. Please open an issue and we might be able to get it working; I just need someone who's willing to test it out. DataStax backported the authentication from Cassandra 2.0 into DSE 3.0, even though it only uses Cassandra 1.2. The authentication logic might not be able to handle this and will try to authenticate with DSE using an earlier version of the protocol. In short, DSE before 3.1 uses a non-standard protocol, but it should be possible to get it working. DSE 3.1 and 4.0 have been confirmed to work.

I get "end of file reached" / I'm connecting to port 9160 and it doesn't work

Port 9160 is the old Thrift interface, the binary protocol runs on 9042. This is also the default port for cql-rb, so unless you've changed the port in cassandra.yaml, don't override the port.

Something else is not working

Open an issue and someone will try to help you out. Please include the gem version, Cassandra version and Ruby version, and explain as much about what you're doing as you can, preferably with the smallest piece of code that reliably triggers the problem. The more information you give, the better the chance that you will get help.

Performance tips

Use prepared statements

When you use prepared statements you don't have to smash strings together to create a chunk of CQL to send to the server. Avoiding the creation of many large strings in Ruby can be a performance gain in itself. Sending only the actual data, and not the query every time, also decreases the traffic over the network, and it decreases the time it takes for the server to handle the request since it doesn't have to parse CQL. Prepared statements are also very convenient, so there is really no reason not to use them.

Use JRuby

If you want to be serious about Ruby performance you have to use JRuby. The cql-rb client is completely thread safe, and the CQL protocol is pipelined by design so you can spin up as many threads as you like and your requests per second will scale more or less linearly (up to what your cores, network and Cassandra cluster can deliver, obviously).

Applications using cql-rb and JRuby can do over 10,000 write requests per second from a single EC2 m1.large if tuned correctly.

Try batching

Batching in Cassandra isn't always as good as in other (non-distributed) databases. Since rows are distributed across the cluster, the coordinator node must still send the individual pieces of a batch to the other nodes, something you could have done yourself instead.

For Cassandra 1.2 it is often best not to use batching at all: you'll have to smash strings together to create the batch statements, which wastes time on the client side, takes longer to push over the network, and takes longer to parse and process on the server side. Prepared statements are almost always a better choice.

Cassandra 2.0 introduced a new form of batches where you can send a batch of prepared statement executions as one request (you can send non-prepared statements too, but we're talking performance here). These bring the best of both worlds and can be beneficial for some use cases. Some of the same caveats still apply though and you should test it for your use case.

Whenever you use batching, try compression too.

Try compression

If your requests or responses are big, compression can help decrease the amount of traffic over the network, which is often a good thing. If your requests and responses are small, compression often doesn't do anything. You should benchmark and see what works for you. The Snappy compressor that comes with cql-rb uses very little CPU, so most of the time it doesn't hurt to leave it on.

In read-heavy applications requests are often small, and need no compression, but responses can be big. In these situations you can modify the compressor used to turn off compression for requests completely. The Snappy compressor that comes with cql-rb will not compress frames less than 64 bytes, for example, and you can change this threshold when you create the compressor.
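The same idea can also be expressed as a wrapper around any compressor. The class below is a hypothetical sketch; the #algorithm, #compress?, #compress and #decompress methods are the compressor interface assumed here, so check the Cql::Compression API docs for the exact contract before relying on it.

```ruby
# A sketch of a compressor decorator that refuses to compress frames
# below a size threshold, delegating everything else to an inner
# compressor. The four-method interface is an assumption; verify it
# against the Cql::Compression API docs.
class ThresholdCompressor
  def initialize(inner, min_size = 64)
    @inner = inner
    @min_size = min_size
  end

  def algorithm
    @inner.algorithm
  end

  # Small frames are never worth compressing, so skip them outright.
  def compress?(frame)
    frame.bytesize >= @min_size && @inner.compress?(frame)
  end

  def compress(frame)
    @inner.compress(frame)
  end

  def decompress(frame)
    @inner.decompress(frame)
  end
end
```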

Compression works best for large requests, so if you use batching you should benchmark if compression gives you a speed boost.

Try experimental features

To get maximum performance you can't wait for a request to complete before sending the next. At its core cql-rb embraces this completely, using non-blocking IO and an asynchronous model for request processing. The synchronous API that you use is just a thin façade on top that exists for convenience. If you need to scale to thousands of requests per second, have a look at the client code: the asynchronous core works very much like the public API, but it should be considered experimental. Experimental in this context does not mean buggy (it is the core of cql-rb, after all); it means that you cannot rely on it being backwards compatible.

Changelog & versioning

Check out the releases on GitHub. Version numbering follows the semantic versioning scheme.

Private and experimental APIs, defined as whatever is not in the public API documentation, i.e. classes and methods marked as @private, will change without warning. If you've been recommended to try an experimental API by the maintainers, please let them know if you depend on that API. Experimental APIs will eventually become public, and knowing how they are used helps in determining their maturity.

Prereleases will be stable, in the sense that they will have finished and properly tested features only, but may introduce APIs that will change before the final release. Please use the prereleases and report bugs, but don't deploy them to production without consulting the maintainers, or doing extensive testing yourself. If you do deploy to production please let the maintainers know as this helps determining the maturity of the release.

Known bugs & limitations

  • JRuby 1.6 is not officially supported, although 1.6.8 should work; if you're stuck on JRuby 1.6.8, try it and see if it works for you.
  • Windows is not supported (there is experimental support in the windows branch).
  • Large results are buffered in memory until the whole response has been loaded; the protocol makes it possible to start delivering rows to the client code as soon as the metadata is loaded, but this is not supported yet.
  • There are no cluster introspection utilities (like the DESCRIBE commands in cqlsh). It's not clear whether that will ever be added; it would be useful, but it is also something that another gem could add on top.

Also check out the issues for open bugs.

How to contribute

See CONTRIBUTING.md

Copyright

Copyright 2013–2014 Theo Hultberg/Iconara and contributors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

cql-rb's People

Contributors

ahisbrook, brackxm, grddev, iconara, jasonmk, leobessa, ndrwdn, stenlarsson, tallevami


cql-rb's Issues

Problems with decimal type and prepared statements

I tried to use decimal type in Cassandra and it works fine when I do that through cqlsh and I can get back results, but with cql-rb, this happens:

pry(main)> client = Cql::Client.connect(host: 'localhost')
pry(main)> client.use('my_keyspace')
=> nil
pry(main)> client.execute(%^
pry(main)*    CREATE TABLE my_table (
pry(main)*      id uuid,
pry(main)*      random_number decimal,
pry(main)*      PRIMARY KEY (id)
pry(main)*    )
pry(main)* ^)
=> nil
pry(main)> client.execute('INSERT INTO my_table (id, random_number) VALUES (f13940f4-fdd6-11e2-9bb6-bfc6cf90febe, 0.0012095473475870063)')
=> nil
client.execute('SELECT * FROM my_table WHERE id = f13940f4-fdd6-11e2-9bb6-bfc6cf90febe')
Cql::IoError: undefined method `<<' for nil:NilClass
from ~/.rvm/gems/ruby-2.0.0-p195@my_gemset/gems/cql-rb-1.0.3/lib/cql/protocol/decoding.rb:35:in `read_decimal!'

Also I tried to use prepared statement and error again

pry(main)> statement=client.prepare('INSERT INTO my_table (id, random_number) VALUES (f13940f4-fdd6-11e2-9bb6-bfc6cf90febe, ?)')
pry(main)> statement.execute( 0.0012095473475870063)
NoMethodError: undefined method `split' for 0.0012095473475870063:Float
from ~/.rvm/gems/ruby-2.0.0-p195@my_gemset/gems/cql-rb-1.0.3/lib/cql/protocol/encoding.rb:94:in `write_decimal'

Deadlock detected while using with Resque

Hi !

I've planned to use cql-rb execute method inside a worker of Resque gem , however due to the issue specified in topic my question is: is it necessary ? I would like to put the insert into... statement into a separated job, because these statements are simple audit logs and I think putting it into base app is an unnecessary overload. Due to the deadlock issue I suppose execute() runs within a new thread. Does it wait until execute finishes for the insert into... statement or not? If not, then actually there is no problem, but if so, do you have any idea how to avoid the deadlock issue?

resolver broken in 1.1.0pre0 if used with em-resolv-replace

In 1.0.5 everything was working OK, but now it says "invalid address":

require 'cql'
# breaks cql-rb version 1.1.0pre0
require 'em-resolv-replace'
EM.run {
  #p "trying localhost"
  #c = Cql::Client.connect(host: "localhost")
  #p "trying 8.8.8.8"
  #c = Cql::Client.connect(host: "8.8.8.8")
  p "trying 31.192.115.166"
  c = Cql::Client.connect(host: "31.192.115.166")
  EM.stop
}

Interestingly enough, it's broken only in some cases: with the provided IP it fails, otherwise if I put localhost or 8.8.8.8 then it works.

> ruby cql_em_break.rb 
"trying 31.192.115.166"
/home/hanuman/.rbenv/versions/1.9.3-p194/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.0.rc0/lib/cql/client/synchronous_client.rb:32:in `connect': invalid address (Cql::Io::ConnectionError)
        from /home/hanuman/.rbenv/versions/1.9.3-p194/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.0.rc0/lib/cql/client.rb:99:in `connect'

I traced it back to here:

https://github.com/iconara/cql-rb/blob/master/lib/cql/io/connection.rb#L34

I tried using resolver by itself and it works fine.

Endless peer discovery loop

If the driver receives an UP event when it's connected to all nodes (sounds like that shouldn't be possible, it's happened, see logs below), it will enter an endless peer discovery loop.

2013-12-02 16:58:28,794 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,795 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,805 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,805 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,817 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,818 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,821 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,821 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,824 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,824 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,829 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,829 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,830 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,830 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,831 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,831 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,833 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,834 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,837 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,837 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,841 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,841 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,845 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,845 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,847 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,848 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,852 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,852 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,854 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,854 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,860 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,861 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,863 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,863 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,864 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,864 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,867 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,867 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,873 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,874 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,875 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,875 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,886 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,886 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,888 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,888 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,889 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,889 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,890 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,890 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,890 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,891 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,891 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,891 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,892 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,892 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,893 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,893 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,894 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,894 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,894 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,895 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,895 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,895 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,896 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,896 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,897 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,897 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,898 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,898 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,899 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,899 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,900 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,900 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,902 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,902 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,903 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,903 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,904 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,904 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,905 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,905 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,906 1922 DEBUG Client: Received UP event
2013-12-02 16:58:28,906 1922 DEBUG Client: Looking for additional nodes
2013-12-02 16:58:28,908 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,908 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,910 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,910 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,913 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,913 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,920 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,920 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,921 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,922 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,923 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,923 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,926 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,927 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,929 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,929 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,931 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,932 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,934 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,934 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,936 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,936 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,944 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,944 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,945 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,946 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,947 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,947 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,949 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,949 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,951 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,951 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,952 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,953 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,954 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,954 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,956 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,956 1922 DEBUG Client: Scheduling new peer discovery in 1s
2013-12-02 16:58:28,958 1922 DEBUG Client: No additional nodes found
2013-12-02 16:58:28,958 1922 DEBUG Client: Scheduling new peer discovery in 1s

and it just goes on and on like this. Notice that the client also received multiple UP events, each of which started its own endless peer discovery loop. This happened on three out of three application nodes and produced around 20 GiB of logs over ten days before it was discovered.

Support for TimeUUID

I can insert TimeUUID fields with the following. Is there another method you would recommend? Is it worth wrapping this into a single method and making it part of cql-rb?

require 'simple_uuid'
uuid = SimpleUUID::UUID.new
timeuuid = Cql::Uuid.new(uuid.to_i)
statement = client.prepare("insert into messages (id, sent_at, from, to, body) values (?, ?, ?, ?, ?)")
statement.execute(timeuuid, Time.now, "me", "you", "Hello!")
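As a sketch of what such a helper would do under the hood, a version-1 (time-based) UUID can be built from a Time with nothing but the standard library; the field layout follows RFC 4122. This is an illustration only, not the simple_uuid or Cql::Uuid implementation, and the clock sequence and node are randomized here rather than derived from a MAC address:

```ruby
require 'securerandom'

# Sketch of constructing a version-1 UUID string from a Time (RFC 4122
# layout). Assumption: randomized clock sequence and node are acceptable.
def time_uuid(time = Time.now)
  # 100ns intervals between the Gregorian epoch (1582-10-15) and the Unix epoch
  gregorian_offset = 122_192_928_000_000_000
  ts = (time.to_f * 10_000_000).round + gregorian_offset
  time_low  = ts & 0xFFFFFFFF
  time_mid  = (ts >> 32) & 0xFFFF
  time_high = ((ts >> 48) & 0x0FFF) | 0x1000               # version 1
  clock_seq = SecureRandom.random_number(0x4000) | 0x8000  # RFC 4122 variant
  node = SecureRandom.random_number(1 << 48)
  format('%08x-%04x-%04x-%04x-%012x', time_low, time_mid, time_high, clock_seq, node)
end
```

The resulting string can then be fed to Cql::Uuid the same way as in the snippet above.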

Thanks!

Cannot specify a quoted keyspace, necessary for keyspaces with capital letters.

The only way to specify a keyspace with capital letters is through use of double quotes. I have a keyspace from the 0.7 days that uses camel-case. When I try to call:

client = Cql::Client.connect(host: 'localhost')
client.use('"ExampleKeyspace_one"')

I get a Cql::Client::InvalidKeyspaceNameError from asynchronous_client.rb:83,
because '"ExampleKeyspace_one"' fails to match /^\w[\w\d_]*$/ (in valid_keyspace_name?).

If I run the following:
client = Cql::Client.connect(host: 'localhost')
keyspace = '"ExampleKeyspace_one"'
use_request = Cql::Protocol::QueryRequest.new("USE #{keyspace}", :one)
client.async.send(:execute_request, use_request)

the client.keyspace is set correctly and I am able to run queries.

Please adjust the KEYSPACE_NAME_PATTERN so that it can use quoted keyspaces.
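A relaxed pattern that accepts both bare and double-quoted names could look like the following sketch; the actual KEYSPACE_NAME_PATTERN constant and valid_keyspace_name? in cql-rb may differ, so treat the names here as illustrative:

```ruby
# Sketch: accept bare identifiers and double-quoted identifiers, while
# still rejecting anything with stray characters outside the quotes.
KEYSPACE_NAME_PATTERN = /\A(\w[\w\d]*|"\w[\w\d]*")\z/

def valid_keyspace_name?(name)
  !!(name =~ KEYSPACE_NAME_PATTERN)
end
```

The \A...\z anchors matter: they prevent a quoted name from smuggling extra statements past the check.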

Errors in the IO related specs in JRuby 1.6.x and/or JDK 1.6

The IO reactor specs fail randomly on JDK 1.6, and something in JRuby 1.6.x handles connection errors differently from 1.7.x: select raises an IOError, where in 1.7.x it never gets that far because an earlier connect_nonblock fails first, IIRC. The JDK errors mostly seem to be in the fake server that the specs communicate with.

Since these failures don't affect successful connections, and are partly artifacts of the testing environment, they're mostly an annoyance, but it would be nice to know that everything worked... Travis has stopped supporting JRuby 1.6.x, so at least the builds will no longer fail as often (hooray, I guess).

SELECT with composite key broken

Hi,

I've stumbled upon a problem with the following code:

CREATE TABLE items (
  item varchar,
  date    timestamp,
  device  varchar,

  search_id uuid,

  PRIMARY KEY (item, date, device)
);

INSERT INTO items (item, date, device, search_id) VALUES ('foo', 1363337756, 'iphone', 77670974-CBAE-48AB-877E-277704D6F504);

SELECT * FROM items WHERE item = 'Reddit';
SELECT * FROM items WHERE item = 'Reddit' AND date = 1363344499;
SELECT * FROM items WHERE item = 'Reddit' AND date = 1363344499 AND device = 'iphone';

All the SELECT queries work fine in the CQL console, but only the first one returns any results with cql-rb. When a second column is added to WHERE, I get an empty result.

Do you have any idea what might be wrong? I haven't looked into the code, but it looks like a problem with encoding/decoding the key values.

[cqlsh 2.3.0 | Cassandra 1.2.2 | CQL spec 3.0.0 | Thrift protocol 19.35.0]

Ruby socket IO error Operation already in progress - connect(2) (Errno::EALREADY)

Thanks for making this gem. It has all the keywords I want: CQL3, native transport, prepared statements 👍

I'm facing one issue though. When I connect to a slow (non-local) Cassandra server, I get this error:

$ irb
1.9.3-p392 :001 > require 'cql'
 => true 
1.9.3-p392 :002 > Cql::Client.connect(host: 's3')
Cql::Io::ConnectionError: Could not connect to s3:9042: Operation already in progress - connect(2) (Errno::EALREADY)
    from /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/gems/cql-rb-1.0.0.pre5/lib/cql/io/node_connection.rb:124:in `connect_nonblock'
    from /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/gems/cql-rb-1.0.0.pre5/lib/cql/io/node_connection.rb:124:in `handle_connecting'
    from /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/gems/cql-rb-1.0.0.pre5/lib/cql/io/io_reactor.rb:126:in `block in io_loop'
    from /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/gems/cql-rb-1.0.0.pre5/lib/cql/io/io_reactor.rb:124:in `each'
    from /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/gems/cql-rb-1.0.0.pre5/lib/cql/io/io_reactor.rb:124:in `io_loop'
    from /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/gems/cql-rb-1.0.0.pre5/lib/cql/io/io_reactor.rb:47:in `block (2 levels) in start'

I have already bound the server to 0.0.0.0 and set start_native_transport: true.
Connecting with cqlsh from outside works, and so does SSHing to the server and running Cql::Client.connect(host: 'localhost') locally.
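On the Errno::EALREADY itself: connect_nonblock raising EALREADY means the previous connect attempt is still in flight, and it is normally rescued and waited out rather than propagated. A sketch of the usual pattern with plain stdlib sockets (this is not the driver's actual code, just the general technique):

```ruby
require 'socket'

# Sketch of non-blocking connect handling: EINPROGRESS/EALREADY mean the
# connect is pending, so wait for writability and then confirm with a
# retry, which raises EISCONN on success.
def connect_nonblocking(host, port, timeout = 5)
  addr = Socket.sockaddr_in(port, host)
  socket = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
  begin
    socket.connect_nonblock(addr)
  rescue Errno::EINPROGRESS, Errno::EALREADY
    if IO.select(nil, [socket], nil, timeout)
      begin
        socket.connect_nonblock(addr)
      rescue Errno::EISCONN
        # already connected: success
      end
    else
      socket.close
      raise Errno::ETIMEDOUT
    end
  end
  socket
end
```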

One more question: does the client instance have connection pooling, and is it thread-safe?

No Windows Support? "Cluster disconnect failed: Bad file descriptor" Error on Windows with Cassandra 1.2.11.2

The following code runs fine on Ruby 1.9.3 on Linux but raises strange errors/exceptions on Ruby 1.9.3 on Windows:

require 'cql'
require 'logger'

logger = Logger.new(STDERR)
client = Cql::Client.connect(hosts: ['my_hostname_here'], logger: logger)

The expected output, on RHEL, is:

[root@localhost cql-test]# ruby cql.rb
D, [2013-12-04T17:01:28.693543 #8190] DEBUG -- : Connecting to node at my_hostname_here:9042
I, [2013-12-04T17:01:28.705406 #8190]  INFO -- : Connected to node 70fb116d-fafd-495b-9c2b-112415d0991f at accfs:9042 in data center Solr
D, [2013-12-04T17:01:28.705548 #8190] DEBUG -- : Looking for additional nodes
D, [2013-12-04T17:01:28.707796 #8190] DEBUG -- : No additional nodes found
I, [2013-12-04T17:01:28.708128 #8190]  INFO -- : Cluster connection complete
W, [2013-12-04T17:01:28.708937 #8190]  WARN -- : Connection to node 70fb116d-fafd-495b-9c2b-112415d0991f at accfs:9042 in data center Solr unexpectedly closed

The broken output on Windows is:

D, [2013-12-04T18:39:32.868677 #16252] DEBUG -- : Connecting to node at my_hostname_here:9042
D, [2013-12-04T18:39:32.888677 #16252] DEBUG -- : Looking for additional nodes
E, [2013-12-04T18:39:32.888677 #16252] ERROR -- : Failed connecting to cluster: exception object expected
E, [2013-12-04T18:39:32.888677 #16252] ERROR -- : Cluster disconnect failed: Bad file descriptor
C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/client/connection_helper.rb:29:in `raise': exception object expected (TypeError)
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/client/connection_helper.rb:29:in `block in connect'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:76:in `try'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:146:in `block in map'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:383:in `call'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:383:in `block in resolve'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:382:in `each'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:382:in `resolve'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:33:in `fulfill'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:52:in `block in observe'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:383:in `call'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:383:in `block in resolve'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:382:in `each'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:382:in `resolve'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:33:in `fulfill'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:52:in `block in observe'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:281:in `call'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:281:in `on_value'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:52:in `observe'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:168:in `block in flat_map'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:383:in `call'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:383:in `block in resolve'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:382:in `each'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:382:in `resolve'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:427:in `block (2 levels) in initialize'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:383:in `call'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:383:in `block in resolve'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:382:in `each'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:382:in `resolve'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:33:in `fulfill'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:76:in `try'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:197:in `block in recover'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:405:in `call'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:405:in `block in fail'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:404:in `each'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:404:in `fail'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:44:in `fail'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:164:in `block in flat_map'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:405:in `call'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:405:in `block in fail'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:404:in `each'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:404:in `fail'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:44:in `fail'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:144:in `block in map'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:405:in `call'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:405:in `block in fail'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:404:in `each'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:404:in `fail'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/future.rb:44:in `fail'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/io/connection.rb:212:in `closed!'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/io/connection.rb:76:in `close'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/io/io_reactor.rb:291:in `block in close_sockets'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/io/io_reactor.rb:289:in `each'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/io/io_reactor.rb:289:in `close_sockets'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/io/io_reactor.rb:141:in `ensure in block in start'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/cql-rb-1.1.1/lib/cql/io/io_reactor.rb:147:in `block in start'

I tried debugging to see what caused this, but the multithreaded nature of the promise/future failure handling made it difficult to pinpoint. Is there no support for Windows machines? If someone can point me toward a few relevant classes in the source code I'd work on a patch and submit it.

Log query details and timing to the console

Log query details and timing to the Rails console, like ActiveRecord does for its queries. For example:
User Load (0.2ms) SELECT users.* FROM users WHERE users.id = 666 LIMIT 1

Also an option to log to New Relic (newrelic.com) would be nice.
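Pending built-in support, this can be done with a thin wrapper; the sketch below assumes only that the client responds to #execute, and the class name is hypothetical:

```ruby
require 'logger'

# Hypothetical wrapper: times each query and logs it in an
# ActiveRecord-like format before returning the result unchanged.
class LoggingClient
  def initialize(client, logger = Logger.new($stderr))
    @client = client
    @logger = logger
  end

  def execute(cql, *args)
    started = Time.now
    result = @client.execute(cql, *args)
    elapsed_ms = (Time.now - started) * 1000
    @logger.debug(format('CQL (%.1fms) %s', elapsed_ms, cql))
    result
  end
end
```

A New Relic option could hook into the same wrapper by reporting elapsed_ms to its agent instead of (or in addition to) the logger.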

LIMIT ?

Using '... LIMIT ?' gives an error:

Cql::QueryError: line 1:45 mismatched input '?' expecting INTEGER
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/client.rb:210:in `interpret_response!'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/client.rb:203:in `block in execute_request'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/future.rb:156:in `call'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/future.rb:156:in `block in map'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/future.rb:58:in `call'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/future.rb:58:in `block (2 levels) in complete!'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/future.rb:57:in `each'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/future.rb:57:in `block in complete!'
<internal:prelude>:10:in `synchronize'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/future.rb:54:in `complete!'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/io/io_reactor.rb:232:in `handle_read'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/io/io_reactor.rb:131:in `each'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/io/io_reactor.rb:131:in `io_loop'
/Users/michael/.rvm/gems/ruby-1.9.3-p194/gems/cql-rb-1.0.0.pre1/lib/cql/io/io_reactor.rb:52:in `block (2 levels) in start'

Client can be reconnected, but IoReactor can't

client.connect.close.connect works in theory, but IoReactor can't be restarted. It would be good if either IoReactor were restartable, or Client didn't allow reconnections (there's also a lot of code in Client that handles the reconnection case, for some reason, so unless IoReactor is going to support it that code should be removed).
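One way to take the first option is to have #start (re)create all per-run state instead of doing it once in #initialize, so a stopped reactor can be started cleanly. A minimal sketch with hypothetical names, not the IoReactor API:

```ruby
# Sketch of a restartable run loop: per-run state is rebuilt on every
# #start, so stop followed by start is always safe.
class RestartableReactor
  def initialize
    @running = false
  end

  def start
    raise 'already running' if @running
    @running = true
    @sockets = []   # fresh per-run state, not carried over from the last run
    self
  end

  def stop
    @running = false
    self
  end
end
```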

Request routing strategies

Make it possible to pass request routing strategies to the client to control which connection will be used for a request. One example of when this would be useful is to route requests to the closest node (for example nodes in the same EC2 zone), or the node with the lowest latency.
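A pluggable interface for this could be as small as one method that picks among the open connections per request; the names below are hypothetical, not cql-rb's actual API:

```ruby
# Sketch of a pluggable request routing strategy. A latency- or
# locality-aware strategy would implement the same one-method interface.
class RoundRobinStrategy
  def initialize
    @index = 0
  end

  # Pick the next connection in turn for each request.
  def select_connection(connections, _request)
    connection = connections[@index % connections.size]
    @index += 1
    connection
  end
end
```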

"Cql::Protocol::DecodingError (8 bytes required but only 0 available):"

The table has some bigint columns, if that helps.

Cql::Protocol::DecodingError (8 bytes required but only 0 available):
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/byte_buffer.rb:42:in `read'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/decoding.rb:41:in `read_long!'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/type_converter.rb:62:in `convert_bigint'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/type_converter.rb:30:in `call'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/type_converter.rb:30:in `convert_type'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/responses/rows_result_response.rb:92:in `block (2 levels) in read_rows!'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/responses/rows_result_response.rb:91:in `each'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/responses/rows_result_response.rb:91:in `block in read_rows!'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/responses/rows_result_response.rb:89:in `times'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/responses/rows_result_response.rb:89:in `read_rows!'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/responses/rows_result_response.rb:14:in `decode!'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/responses/result_response.rb:12:in `decode!'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/response_frame.rb:120:in `check_complete!'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/response_frame.rb:103:in `initialize'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/response_frame.rb:61:in `new'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/response_frame.rb:61:in `create_body'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/response_frame.rb:44:in `check_complete!'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/protocol/response_frame.rb:36:in `<<'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/io/node_connection.rb:76:in `handle_read'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/io/io_reactor.rb:122:in `each'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/io/io_reactor.rb:122:in `io_loop'
  /Users/duyleekun/.rvm/gems/ruby-1.9.3-p392/bundler/gems/cql-rb-d4cc0f58a730/lib/cql/io/io_reactor.rb:47:in `block (2 levels) in start'

No change log

This is not an issue but rather a suggestion.

I think having a change log is crucial for any decently maintained project. I could not find one for cql-rb.

Eventmachine support documentation

Hi,

I am wondering if I can use your gem within an EventMachine reactor. Could you note somewhere in the documentation whether this is supported and how to use it, or whether it is a planned feature?

I see that you have an asynchronous client and an IO reactor, but I am not sure how they work in relation to EventMachine.

Mutex relocking error when a node went down

Looks like the CqlProtocolHandler lock is held when a callback is called, and in certain situations (like when a connection fails) something in the call chain of the callback reads metadata from the protocol handler, which tries to take the lock.

ThreadError: Mutex relocking by same thread
  org/jruby/ext/thread/Mutex.java:90:in `lock'
  org/jruby/ext/thread/Mutex.java:147:in `synchronize'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/protocol/cql_protocol_handler.rb:64:in `[]'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/client/asynchronous_prepared_statement.rb:28:in `execute'
  (application frame)
  org/jruby/RubyProc.java:255:in `call'
  (application frame)
  (application frame)
  org/jruby/RubyProc.java:255:in `call'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/future.rb:228:in `fallback'
  org/jruby/RubyProc.java:255:in `call'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/future.rb:402:in `fail'
  org/jruby/RubyArray.java:1617:in `each'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/future.rb:401:in `fail'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/future.rb:41:in `fail'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/future.rb:141:in `map'
  org/jruby/RubyProc.java:255:in `call'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/future.rb:402:in `fail'
  org/jruby/RubyArray.java:1617:in `each'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/future.rb:401:in `fail'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/future.rb:41:in `fail'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/protocol/cql_protocol_handler.rb:251:in `socket_closed'
  org/jruby/RubyArray.java:1617:in `each'
  org/jruby/RubyEnumerable.java:920:in `each_with_index'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/protocol/cql_protocol_handler.rb:249:in `socket_closed'
  org/jruby/ext/thread/Mutex.java:149:in `synchronize'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/protocol/cql_protocol_handler.rb:248:in `socket_closed'
  org/jruby/RubyMethod.java:134:in `call'
  org/jruby/RubyProc.java:255:in `call'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/io/connection.rb:216:in `closed!'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/io/connection.rb:76:in `close'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/io/connection.rb:184:in `read'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/io/connection.rb:182:in `read'
  org/jruby/RubyArray.java:1617:in `each'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/io/io_reactor.rb:329:in `check_sockets!'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/io/io_reactor.rb:308:in `tick'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/cql-rb-1.1.0/lib/cql/io/io_reactor.rb:139:in `start'
  org/jruby/RubyProc.java:255:in `call'
  /usr/local/rvm/gems/jruby-1.7.4@xyz/gems/logging-1.8.1/lib/logging/diagnostic_context.rb:323:in `create_with_logging_context'
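The standard fix for this class of bug is never to invoke callbacks while holding the lock: snapshot the listeners under the mutex, release it, then call them. A sketch of the pattern (not the actual CqlProtocolHandler code):

```ruby
# Sketch: callbacks run outside the mutex, so a callback that re-enters
# the object (e.g. reads metadata, or subscribes) cannot relock it.
class Notifier
  def initialize
    @lock = Mutex.new
    @listeners = []
  end

  def subscribe(&block)
    @lock.synchronize { @listeners << block }
  end

  def notify(event)
    listeners = @lock.synchronize { @listeners.dup }  # snapshot under lock
    listeners.each { |l| l.call(event) }              # no lock held here
  end
end
```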

log query in error/exception

Maybe include the CQL query in errors/exceptions; currently stack traces are not very useful to users, because the IoReactor threading means they don't point back to the code that issued the query.
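A sketch of what carrying the query on the exception could look like (a hypothetical class, not the gem's current QueryError):

```ruby
# Sketch: attach the originating CQL to the error so reactor-thread
# stack traces still identify which query failed.
class QueryErrorWithCql < StandardError
  attr_reader :cql

  def initialize(message, cql)
    super("#{message} (query: #{cql})")
    @cql = cql
  end
end
```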

Add support for authentication

Does anyone use authentication? I guess it would be good to add support for it, but it feels like a marginal feature.

Cql::AuthenticationError: Unsupported Authenticator org.apache.cassandra.auth.PasswordAuthenticator

Getting this when trying to connect to a DataStax Enterprise Cassandra 1.2.11.2 cluster. Password authentication does appear to be working on the cluster, though: cqlsh requires my credentials.

Seems like something weird is going on here; I've been able to connect to other clusters running PasswordAuthenticator, but maybe the DataStax version of Cassandra is different? Is there something that can be patched on the cql-rb side, or will cql-rb just not work with this configuration?

Some timeuuid values cause QueryErrors

I'm loading in historical data and creating timeuuids from the timestamps of the original transactions. A subset of these cause QueryErrors when used with prepared statements and passed through Cql::Uuid, even though the same uuids are acceptable to Cassandra when not using prepared statements, or when inserted from the cqlsh command line.

I've tried using two different libraries to generate the timeuuids, with the same results. In my tests with real-world data, all of the exceptions were raised by uuids that start with two zeros, and every uuid that started with two zeros raised an exception. Here are some examples, along with the exceptions they raise:

00853800-5400-11e2-90c5-3409d6a3565d
#<Cql::QueryError: java.lang.IndexOutOfBoundsException: Invalid combined index of 3479, maximum is 108>

00953000-5420-11e2-858b-4ba2af1f81ed
#<Cql::QueryError: java.lang.IndexOutOfBoundsException: Invalid combined index of 3478, maximum is 107>

00e80680-54c6-11e2-9505-804fec35adc9
#<Cql::QueryError: java.lang.IndexOutOfBoundsException: Invalid combined index of 3496, maximum is 107>

005f6800-54e6-11e2-8143-0eb0c6ebb094
#<Cql::QueryError: java.lang.IndexOutOfBoundsException: Invalid combined index of 3478, maximum is 107>

In the cql-rb test suite, these uuids work fine when used in place of the static uuids in the tests, with two notable exceptions in https://github.com/iconara/cql-rb/blob/master/spec/integration/uuid_spec.rb#L39-L46:

    it 'can be used to store data in UUID cells' do
      store_statement.execute(Cql::Uuid.new('00853800-5400-11e2-90c5-3409d6a3565d'), 'hello world')
    end

    it 'will be used when loading data from UUID cells' do
      store_statement.execute(Cql::Uuid.new('00853800-5400-11e2-90c5-3409d6a3565d'), 'hello world')
      client.execute(%<SELECT * FROM ids>).first['id'].should == Cql::Uuid.new('00853800-5400-11e2-90c5-3409d6a3565d')
    end

Other tests, such as to_time and to_s, work fine with these uuids. Here's a version of the to_time test, which passes with this timeuuid:

    describe '#to_time' do
      it 'returns a Time' do
        x = TimeUuid.new('00853800-5400-11e2-90c5-3409d6a3565d')
        puts x.to_time.to_s
        x.to_time.should be > Time.utc(2013, 1, 1, 10, 42, 55)
        x.to_time.should be < Time.utc(2013, 1, 1, 10, 42, 57)
      end
    end

I'm using the HEAD revision of cql-rb at master (0a7da7e 1.1.0.pre3), with ruby 2.0.0p317.
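A plausible culprit, offered as an assumption rather than a confirmed diagnosis: if the UUID's 128-bit integer value is serialized without padding, values with leading zero bytes encode to fewer than 16 bytes and corrupt the frame, which would fit the server-side IndexOutOfBoundsException. A sketch of a padding-safe conversion:

```ruby
# Sketch: converting a UUID string to wire bytes via its integer value
# must always pad to 16 bytes, or UUIDs starting with zero bytes
# (like 00853800-...) serialize short.
def uuid_to_bytes(uuid_str)
  n = uuid_str.delete('-').to_i(16)
  [n >> 64, n & 0xFFFF_FFFF_FFFF_FFFF].pack('Q>Q>')  # always 16 bytes
end
```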

Performance

I am not sure if it's some regression, because I think it used to be faster, but that could be my memory...

Anyway, can you make this a little faster? My get function builds a single string of CQL3 commands, so what you see profiled below is a bulk insert of 31,241 rows (I have bigger datasets that take almost 3 minutes). Attaching the profile:

  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 77.08    28.59     28.59    93770     0.30     0.30  String#slice!
  3.94    30.05      1.46    31241     0.05     1.01  Cql::Protocol::Decoding.read_short_bytes!
  3.75    31.44      1.39    31241     0.04     0.06  Cql::Protocol::Decoding.read_long!
  3.21    32.63      1.19    31252     0.04     0.60  Cql::Protocol::Decoding.read_short!
  2.29    33.48      0.85    31242     0.03     1.24  Cql::Protocol::RowsResultResponse#convert_type
  2.02    34.23      0.75        7   107.14 10211.43  Integer#times
  1.78    34.89      0.66       26    25.38    40.77  IO#select
  0.92    35.23      0.34    31241     0.01     0.01  Set#add
  0.75    35.51      0.28    31241     0.01     0.01  PortfolioArticlesCassandra#get_insert_article
  0.67    35.76      0.25    62510     0.00     0.00  String#unpack
  0.67    36.01      0.25    62498     0.00     0.00  Symbol#===
  0.51    36.20      0.19    93763     0.00     0.00  String#bytesize
  0.35    36.33      0.13    31280     0.00     0.00  Fixnum#to_s
  0.32    36.45      0.12    31286     0.00     0.00  Hash#[]=
  0.24    36.54      0.09    31241     0.00     0.00  Fixnum#|
  0.22    36.62      0.08    31241     0.00     0.00  Fixnum#<<
  0.19    36.69      0.07    31273     0.00     0.00  Array#first
  0.19    36.76      0.07    31246     0.00     0.00  Array#last
  0.16    36.82      0.06    31249     0.00     0.00  Fixnum#&
  0.16    36.88      0.06        1    60.00   190.00  Cassandra#execute_cassandra_query_batch
  0.16    36.94      0.06        5    12.00    12.00  Cql::Protocol::RequestFrame#write
  0.13    36.99      0.05    31253     0.00     0.00  String#force_encoding
  0.05    37.01      0.02       86     0.23   417.91  Mutex#synchronize
  0.03    37.02      0.01       13     0.77     0.77  Array#join
  0.03    37.03      0.01       17     0.59     1.18  Arel::Visitors::Visitor#visit
  0.03    37.04      0.01       15     0.67     0.67  IO#read_nonblock
  0.03    37.05      0.01       24     0.42     0.42  Integer#to_i
  0.03    37.06      0.01        5     2.00     4.00  Hash#each_key
  0.03    37.07      0.01        2     5.00     5.00  ActiveRecord::Associations::Association#reset
  0.03    37.08      0.01        5     2.00  7178.00  Cql::Client#execute
  0.03    37.09      0.01       22     0.45     0.45  ActiveRecord::AttributeMethods::ClassMethods.generated_external_attribute_methods
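The profile above is dominated by `String#slice!` (77% of the time). Slicing bytes off the front of a large buffer copies the remainder of the string on every call, so N sequential reads cost O(N²) in total. A minimal sketch of the alternative, tracking a read offset instead (the class and names here are illustrative, not cql-rb's internals):

```ruby
# Illustrative sketch: reading with a moving offset instead of String#slice!
# avoids copying the rest of the buffer on every read.
class OffsetBuffer
  def initialize(bytes)
    @bytes = bytes
    @offset = 0
  end

  # Return the next n bytes without mutating the underlying string.
  def read(n)
    chunk = @bytes[@offset, n]
    @offset += n
    chunk
  end
end
```

Each `read` is then O(n) in the chunk size only, regardless of how much of the buffer remains.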

1.1.0 - cannot connect with hostname

require 'cql'
client = Cql::Client.connect(host: 'localhost')
client.close

Works with 1.0.6, does not work with 1.1.0.

Error:
synchronous_client.rb:32:in `connect': Connection refused - connect(2) (Cql::Io::ConnectionError)

This might be related to issue #50, however it happens without any other dependencies.

Same error with the use_resolv branch.

Connecting via IP address works.

Using Ruby 1.9.3-p448.
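Since connecting by IP address works, one hedged workaround until this is fixed is to resolve the hostname yourself with the standard library's `Resolv` before handing the address to the client (the connect call is the same as in the snippet above):

```ruby
require 'resolv'

# Resolve the hostname up front and connect by IP instead.
ip = Resolv.getaddress('localhost')
# client = Cql::Client.connect(host: ip)
```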

Large number of request goes to one box

Hi,
I have a 3-node Cassandra cluster and I am using your gem to send data to it. I noticed that most requests go to only one node out of the three; very few go to the other two. Because of this, CPU usage on that box goes up a lot, and so does the OS load. I am not sure if this is an issue or if my usage of this gem is wrong. Can you please provide any advice on this?

client initialization -

client = Cql::Client.connect(hosts: Cassandra.config[:hosts], connections_per_node: 2, keyspace: Cassandra.config[:keyspace], consistency: :quorum)

execution -

client.execute(cql_statement)

Thanks,
Dhaval.

Prepared Statements on Non-existent column family error is not obvious

I recently started up a brand new 2.0 cluster to try our app against and was running into a few strange errors in the logs. A few prepared statements were consistently throwing errors when executed. I traced it down to this line, an undefined #size on NilClass:

https://github.com/iconara/cql-rb/blob/master/lib/cql/client/asynchronous_prepared_statement.rb#L60

I threw a pry debugger in there to see what was going on (and in the prepare), and it turns out I'd accidentally not run all the scripts to create the necessary column families. Obviously prepared statements against non-existent column families won't work, but I'd expect them to throw an error when prepared (which I do when my app boots).

I imagine you'd want to guard for a nil on this line and raise right away - does that seem reasonable?

https://github.com/iconara/cql-rb/blob/master/lib/cql/client/asynchronous_prepared_statement.rb#L48
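A minimal, self-contained sketch of the suggested guard (the error class, method, and message here are hypothetical; cql-rb's actual metadata handling lives in the file linked above):

```ruby
# Hypothetical guard: raise a descriptive error instead of letting
# nil metadata surface as NoMethodError (#size on NilClass).
class NotPreparedError < StandardError; end

def metadata_size(metadata)
  if metadata.nil?
    raise NotPreparedError, 'no metadata returned -- does the column family exist?'
  end
  metadata.size
end
```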

Can't pass column name as param in prepared statement

Hey,

I was just playing around with this gem and found a bit of syntax I think would be nice. Though, I have a feeling it's harder to implement than it looks. It is also easy to work around with a non-prepared query.

$cassandra.prepare('UPDATE column_family SET ? = ? WHERE id = ?')
`interpret_response!': line 1:20 no viable alternative at input '?' (Cql::QueryError)

Turns out, you can't substitute a column name with a ? in a prepared statement.

Adam

Retry operations on failure

Provide some kind of (optional) retry mechanism for operations. Many times you just want to try again on failures that are transient.

Operations should only be retried if the user has requested an operation to be automatically retried, and the error is known to be transient (e.g. unavailable, read/write timeout, overloaded).

The number of retries should be configurable on a per-client basis (possibly per-request, but do you really need that kind of granularity?).

Another feature that could be included is optional downgrading of consistency on unavailable errors.
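A sketch of what the retry wrapper described above could look like, independent of cql-rb's internals; `TransientError` stands in for the driver's unavailable/timeout/overloaded errors, and all names are assumptions:

```ruby
# Stand-in for the transient error classes (unavailable, read/write
# timeout, overloaded) that would qualify for a retry.
class TransientError < StandardError; end

# Retry the block up to max_retries extra times, but only on transient
# errors; anything else propagates immediately.
def with_retries(max_retries)
  attempts = 0
  begin
    yield
  rescue TransientError
    attempts += 1
    retry if attempts <= max_retries
    raise
  end
end
```

Usage would then be something like `with_retries(3) { client.execute(cql) }`, with the retry count configured once per client.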

Attempt to save overflowing collections

Not sure this would be a good idea, but it would probably be possible to save collection responses that are bigger than 2**16. The data is there, so in many circumstances it would probably be salvageable.

Connecting to :host works but not to :hosts

require 'cql'
client = Cql::Client.connect(hosts: ['192.168.0.251'])
client = Cql::Client.connect(host: '192.168.0.251')

When using :hosts then I get the following error:
.rvm/gems/ruby-2.0.0-p247/gems/cql-rb-1.0.6/lib/cql/io/node_connection.rb:126:in `connect_nonblock': Could not connect to localhost:9042: Connection refused - connect(2) (Errno::ECONNREFUSED) (Cql::Io::ConnectionError).

When using :host everything is normal.

Am I doing something wrong? Thank you

The timestamp conversion doesn't support milliseconds

I'm inserting data into my database and I'd like millisecond granularity, which is the native encoding that C* uses. But when you insert an integer timestamp that is a number of milliseconds, cql-rb internally assumes the timestamp is in seconds and multiplies it by 1000.

This is inconsistent and forces a loss of granularity. It should be in milliseconds only.

I think the problem is this code in type_convert.rb:

      def bytes_to_timestamp(buffer, size_bytes)
        return nil unless read_size(buffer, size_bytes)
        timestamp = read_long!(buffer)
        Time.at(timestamp/1000.0)
      end
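A hedged sketch of a fix for the decoding side: `Time.at` has a two-argument form (seconds, microseconds) that keeps the sub-second part exact instead of going through a float division; the method name is illustrative, not cql-rb's:

```ruby
# Convert a millisecond timestamp to a Time without losing precision,
# using Time.at(seconds, microseconds).
def millis_to_time(timestamp_ms)
  Time.at(timestamp_ms / 1000, (timestamp_ms % 1000) * 1000)
end
```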

start! results in "fatal: deadlock detected"

I'm running this from within the rails 4.0.0.beta1 console:

Loading development environment (Rails 4.0.0.beta1)
1.9.3p392 :001 > $cql = Cql::Client.new(host: '127.0.0.1', port:'9160')
 => #<Cql::Client:0x007f9d7517d800 @host="127.0.0.1", @port="9160", @io_reactor=#<Cql::Io::IoReactor:0x007f9d7517d6e8 @connection_timeout=5, @lock=#<Mutex:0x007f9d7517d698>, @streams=[], @command_queue=[], @queue_signal_receiver=#<IO:fd 6>, @queue_signal_sender=#<IO:fd 7>, @started_future=#<Cql::Future:0x007f9d7517d4b8 @complete_listeners=[], @failure_listeners=[], @value_barrier=#<Queue:0x007f9d7517d378 @que=[], @waiting=[], @mutex=#<Mutex:0x007f9d7517d1e8>>, @state_lock=#<Mutex:0x007f9d7517d148>>, @stopped_future=#<Cql::Future:0x007f9d7517d120 @complete_listeners=[], @failure_listeners=[], @value_barrier=#<Queue:0x007f9d7517cf40 @que=[], @waiting=[], @mutex=#<Mutex:0x007f9d7517ce28>>, @state_lock=#<Mutex:0x007f9d7517cc98>>, @running=false>, @lock=#<Mutex:0x007f9d7517cc70>, @started=false, @shut_down=false, @initial_keyspace=nil, @connection_keyspaces={}>
1.9.3p392 :002 > $cql.start!
fatal: deadlock detected
    from /Users/asoules/.rvm/rubies/ruby-1.9.3-p392/lib/ruby/1.9.1/thread.rb:189:in `sleep'
    from /Users/asoules/.rvm/rubies/ruby-1.9.3-p392/lib/ruby/1.9.1/thread.rb:189:in `block in pop'
    from <internal:prelude>:10:in `synchronize'
    from /Users/asoules/.rvm/rubies/ruby-1.9.3-p392/lib/ruby/1.9.1/thread.rb:184:in `pop'
    from /Users/asoules/.rvm/gems/ruby-1.9.3-p392@rails4/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/future.rb:96:in `value'
    from /Users/asoules/.rvm/gems/ruby-1.9.3-p392@rails4/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/client.rb:95:in `block in start!'
    from <internal:prelude>:10:in `synchronize'
    from /Users/asoules/.rvm/gems/ruby-1.9.3-p392@rails4/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/client.rb:85:in `start!'
    from (irb):2
    from /Users/asoules/.rvm/gems/ruby-1.9.3-p392@rails4/gems/railties-4.0.0.beta1/lib/rails/commands/console.rb:88:in `start'
    from /Users/asoules/.rvm/gems/ruby-1.9.3-p392@rails4/gems/railties-4.0.0.beta1/lib/rails/commands/console.rb:9:in `start'
    from /Users/asoules/.rvm/gems/ruby-1.9.3-p392@rails4/gems/railties-4.0.0.beta1/lib/rails/commands.rb:64:in `<top (required)>'
    from bin/rails:4:in `require'
    from bin/rails:4:in `<main>'

Possible deadlock in Future#value

v1.1.0.pre7 locks up when doing 5000 requests/s. Eventually all threads are stuck in Future#value, waiting for the thread to get notified. It's not clear whether or not it's the implementation of #value that doesn't work, or if it's something else that just never resolves the futures.

utf8 not working with prepared statements

I cannot insert UTF-8 data via a prepared statement, using cql-rb HEAD.
Example based on the data model from the Cassandra README:

# -*- encoding: utf-8 -*-
require 'cql'

client = Cql::Client.new
client.start!
client.use('schema1')
client.execute("INSERT INTO users (user_id, first, last, age) VALUES ('test', 'ümlaut', 'test', 1)") # ok
statement = client.prepare('INSERT INTO users (user_id, first, last, age) VALUES (?, ?, ?, ?)')
statement.execute('test2', 'test2', 'test2', 2) # ok
statement.execute('test3', 'ümlaut', 'test3', 3) # exception

exception:

/Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/protocol/encoding.rb:45:in `write_bytes': incompatible character encodings: ASCII-8BIT and UTF-8 (Encoding::CompatibilityError)
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/protocol/request_frame.rb:208:in `write_value'
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/protocol/request_frame.rb:151:in `block in write'
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/protocol/request_frame.rb:150:in `each'
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/protocol/request_frame.rb:150:in `each_with_index'
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/protocol/request_frame.rb:150:in `write'
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/protocol/request_frame.rb:14:in `write'
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/io/io_reactor.rb:217:in `perform_request'
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/io/io_reactor.rb:390:in `deliver_commands'
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/io/io_reactor.rb:349:in `handle_read'
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/io/io_reactor.rb:131:in `each'
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/io/io_reactor.rb:131:in `io_loop'
    from /Users/michael/.rvm/gems/ruby-1.9.3-p194/bundler/gems/cql-rb-0d70e9955c7d/lib/cql/io/io_reactor.rb:52:in `block (2 levels) in start'
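The error comes from appending a UTF-8 string to a binary (ASCII-8BIT) frame buffer that already contains non-ASCII bytes. A minimal reproduction and the usual workaround, as a sketch (this is not cql-rb's actual fix, just an illustration of the encoding rules involved):

```ruby
# -*- encoding: utf-8 -*-

# A binary buffer containing a high byte, as a protocol frame buffer would.
buffer = "\xff".dup.force_encoding(Encoding::BINARY)

value = 'ümlaut' # UTF-8 user data

begin
  buffer << value # incompatible character encodings: ASCII-8BIT and UTF-8
rescue Encoding::CompatibilityError
  # same error class as in the backtrace above
end

# Workaround: reinterpret the value's bytes as binary before appending.
buffer << value.dup.force_encoding(Encoding::BINARY)
```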

Configurable number of connections per node

This was possible in v1.0 by specifying hostnames multiple times, but that was a bit of a hack, and with peer discovery in v1.1 it no longer works. Being able to open multiple connections to each node is a crucial feature for performance.

Make QueryResult lazily deserialize its rows

As it is, all frame deserialization happens in the IO thread, which is fine for small frames, but for big results it's probably not a good idea. QueryResult/RowsResultResponse should be lazy and not deserialize anything until it's actually used. This would also make it easier to support streaming in the future.
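A hedged sketch of the proposed behaviour, independent of cql-rb's internals: wrap the raw row data and only run the deserializer when rows are actually iterated (all names here are illustrative):

```ruby
# Rows are deserialized on iteration, not when the result is constructed,
# so the IO thread never pays for rows the caller does not read.
class LazyRows
  include Enumerable

  def initialize(raw_rows, &deserializer)
    @raw_rows = raw_rows
    @deserializer = deserializer
  end

  def each
    return enum_for(:each) unless block_given?
    @raw_rows.each { |raw| yield @deserializer.call(raw) }
  end
end
```

Returning an `Enumerator` when no block is given keeps the usual `Enumerable` idioms (`map`, `first`, `lazy`) working on top of the deferred deserialization.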
