mpenet / alia
High performance Cassandra client for Clojure
Home Page: https://mpenet.github.io/alia/qbits.alia.html#docs
Hi, we've been using https://github.com/pcmanus/ccm with alia with good success for integration testing. Is there interest in having a (cql biased) wrapper for this in alia?
(defn new-cluster!
[name num-nodes version cql-port schema-file]
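A minimal sketch of what such a wrapper could look like, shelling out to the ccm CLI (this assumes `ccm` is on the PATH; the command names follow ccm's documentation, while the Clojure names and the injectable `sh` arity are illustrative only, not anything alia provides):

```clojure
(ns example.ccm
  (:require [clojure.java.shell :as shell]))

(defn new-cluster!
  ;; hypothetical sketch: create, populate, configure and start a ccm
  ;; cluster, then optionally load a schema file via cqlsh on node1.
  ([name num-nodes version cql-port schema-file]
   (new-cluster! shell/sh name num-nodes version cql-port schema-file))
  ([sh name num-nodes version cql-port schema-file]
   (doseq [args [["create" name "-v" (str version)]
                 ["populate" "-n" (str num-nodes)]
                 ["updateconf" (str "native_transport_port: " cql-port)]
                 ["start"]]]
     (let [{:keys [exit err]} (apply sh "ccm" args)]
       (when-not (zero? exit)
         (throw (ex-info "ccm command failed" {:args args :err err})))))
   (when schema-file
     (sh "ccm" "node1" "cqlsh" "-f" schema-file))))
```

The extra arity taking `sh` exists only so the command plumbing can be exercised without a real ccm install.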
When using an async execute function, if you accidentally pass an argument of the wrong type, you get an exception that's a bit hard to understand:
Caused by: java.lang.NullPointerException
at qbits.alia$execute_async.invokeStatic(alia.clj:415)
at qbits.alia$execute_async.invoke(alia.clj:380)
I'll debug/handle it later; just leaving this info here so I don't forget.
Just to let you know that alia-all 3.1.11 is not deployed yet; only alia-all 3.1.10 is deployed.
If I create a project.clj like this:
(defproject qwe "0.1.0-SNAPSHOT"
:dependencies
[[org.clojure/clojure "1.8.0"]
[cc.qbits/alia-all "3.3.0"]
[org.clojure/clojurescript "1.9.456"]])
then (require 'qbits.alia) fails with this in the REPL:
CompilerException java.lang.NoClassDefFoundError: com/google/common/util/concurrent/FutureFallback, compiling:(qbits/alia/codec/udt.clj:110:12)
Removing clojurescript from the dependencies (or changing to an earlier version) fixes this problem. I'm not entirely sure how to fix it properly, but I would love to see it fixed. :)
Now that we have ex-handler to register on channels with c.c.async, should execute-chan and execute-chan-buffered take an exception handler as an optional arg too so we can consolidate error handling in the one place?
I can submit a patch if so.
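For reference, core.async channels already accept an ex-handler when created with a transducer, so a hypothetical :ex-handler option on execute-chan could follow the same convention. A minimal sketch of the existing core.async behaviour (no alia involved; the handler shown is illustrative):

```clojure
(require '[clojure.core.async :as async])

;; (async/chan buf-or-n xform ex-handler): ex-handler is called when
;; the transducer throws; returning nil puts nothing on the channel.
(def results
  (async/chan 10
              (map identity)
              (fn [e]
                (println "query failed:" (.getMessage ^Throwable e))
                nil)))
```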
It seems the api is changing a bit here and there, we also get new customization options (for query execution for instance).
Hi, I'm remodelling my Cassandra query planner to use async-chan and I've got a few ideas for building on the existing support.
How do these simple changes sound?
Currently alia/cluster ignores any query-options that are passed. This is due to a bug in the cluster-options integration with the ClusterBuilder: it retrieves and sets values on the configured QueryOptions, which is then discarded, rather than applying it via .withQueryOptions.
This means that all queries relying on the query options provided at cluster-configuration time will be executed with the Datastax defaults of LOCAL_ONE consistency, SERIAL serial consistency, and a fetch size of 5000, regardless of the configuration provided.
I have a PR which fixes the query-options case, but I suspect the pooling-options, netty-options, etc. are similarly affected.
Now that the API is a bit less wild we can experiment with core.typed. It seems it should be quite straightforward. One question is whether to add this to a separate namespace or not.
Hi Max,
This is more of an "open discussion" thing than an issue. If you'd prefer to have such things in email, please let me know.
The thing is that it's quite easy to convert a record to a UDT. As discussed here, converting it back is a bit trickier (though possible).
What I wanted to suggest / had in mind is to have pluggable backends that one could register during connection (or similar) and that depend on the type field of the UDT itself. That would allow converting certain UDT types back to records/deftypes without using row generators (also because, as it seems, row generators are already one level further: the UDT has already been converted to a map).
So the idea is:
What do you think about this idea?
Just looking at the code, it looks like you only support the Cassandra password authenticator library (login/password) and not Kerberos? If so, are there any plans to add support for this?
When creating a cluster you can specify :keep-alive? true/false. It would be useful to be able to specify the time period of the keep-alive action.
Hi,
I just noticed that there is a parameter i present twice in the params, the second occurrence shadowing the first:
Hi
When I try to require alia in the REPL I get this exception:
java.lang.NoSuchFieldException: builder
Or when I try to execute the following code I get the same exception:
(defn connect
  []
  (let [host (env :cassandra-host)
        cluster (alia/cluster {:contact-points [host]
                               :load-balancing-policy :default})]
    (alia/connect cluster)))
Subject says it all:
https://clojars.org/cc.qbits/alia-all
https://clojars.org/cc.qbits/alia
Hi!
qbits.alia.codec decodes UDTs as maps with string keys. It seems the UDT API only allows strings for names and value names. Would you be open to switching it to keywords? If so, I can cook up a PR.
Cheers,
pyr
Simple alia components for session/cluster.
The short story is that currently, if you don't care about AOT, you can ship your project with alia without whichever optional deps you don't use (manifold, clj-time, etc.). However, if you insist on AOT'ing alia with your project (sigh), you will hit a wall unless you include all of the dependencies with it.
One more important point in favor of this approach, rather than AOT'ing everything, is that it's probably cleaner to compartmentalize the code that requires the aforementioned dependencies.
One way to do this is in https://github.com/mpenet/alia/tree/feature/aot.
The root project.clj becomes qbits/alia-all.
Then we have ./modules/alia ./modules/alia-async ./modules/alia-manifold & co with their own project.clj and dependencies. https://github.com/mpenet/alia/tree/feature/aot/modules
It's possible to make this 100% compatible with master (it actually is right now) via macro magic (to alias the core.async stuff that is now in alia-async), that is, if you use qbits/alia-all in your dependencies instead of qbits/alia.
All in all this doesn't seem too bad. I am just unsure about the naming ("modules", "alia-all"), but I tend to overthink this stuff. I'll let this rest for a couple of days.
Feel free to comment/suggest/criticize, as this is just a proposal at the moment.
What the branch lacks is proper "readme" per module and so on, but it's a bit premature atm.
Within the codec ns there are a number of decoders, but I can't see any way of declaring a custom one.
I would like to be able to specify decoders, similar to the existing PCodec support for Joda but for decoding rather than encoding, i.e.:
DataType$Name/TIMESTAMP (timec/to-date-time (.getDate x idx))
I'm happy to supply a PR, but I wasn't quite sure where/if to start.
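One hypothetical shape for such a hook, sketched here purely for illustration: a registry of per-CQL-type decoders consulted before the built-in ones. Neither `register-decoder!` nor `decode` as written below exists in alia's codec ns; this is just one way the PR could go.

```clojure
;; Hypothetical registry of custom decoders, keyed by CQL type keyword.
(defonce custom-decoders (atom {}))

(defn register-decoder!
  [cql-type f]
  (swap! custom-decoders assoc cql-type f))

;; e.g. decoding timestamps straight to Joda DateTime could then be:
;; (register-decoder! :timestamp #(timec/to-date-time %))

(defn decode
  ;; consult the registry first, falling back to the built-in decoder
  [cql-type default-decode value]
  (if-let [f (get @custom-decoders cql-type)]
    (f value)
    (default-decode value)))
```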
Hi Max,
The latest release of alia fails to create a cluster, regardless of the contact-points provided:
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.collect.Sets.newConcurrentHashSet()Ljava/util/Set;, compiling:(/tmp/form-init3959983115342003836.clj:1:72)
at clojure.lang.Compiler.load(Compiler.java:7196)
at clojure.lang.Compiler.loadFile(Compiler.java:7140)
at clojure.main$load_script.invoke(main.clj:274)
at clojure.main$init_opt.invoke(main.clj:279)
at clojure.main$initialize.invoke(main.clj:307)
at clojure.main$null_opt.invoke(main.clj:342)
at clojure.main$main.doInvoke(main.clj:420)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:383)
at clojure.lang.AFn.applyToHelper(AFn.java:156)
at clojure.lang.Var.applyTo(Var.java:700)
at clojure.main.main(main.java:37)
Caused by: java.lang.NoSuchMethodError: com.google.common.collect.Sets.newConcurrentHashSet()Ljava/util/Set;
at com.datastax.driver.core.Cluster$ConnectionReaper.<init>(Cluster.java:2020)
at com.datastax.driver.core.Cluster$Manager.<init>(Cluster.java:1124)
at com.datastax.driver.core.Cluster$Manager.<init>(Cluster.java:1071)
at com.datastax.driver.core.Cluster.<init>(Cluster.java:118)
at com.datastax.driver.core.Cluster.<init>(Cluster.java:105)
at com.datastax.driver.core.Cluster.buildFrom(Cluster.java:174)
at com.datastax.driver.core.Cluster$Builder.build(Cluster.java:1036)
at qbits.alia$cluster.invoke(alia.clj:141)
at io.cyanite.store$cassandra_metric_store.invoke(store.clj:148)
at clojure.lang.Var.invoke(Var.java:379)
at io.cyanite.config$instantiate.invoke(config.clj:94)
at io.cyanite.config$get_instance.invoke(config.clj:102)
at clojure.lang.AFn.applyToHelper(AFn.java:156)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invoke(core.clj:628)
at clojure.core$update_in.doInvoke(core.clj:5853)
at clojure.lang.RestFn.invoke(RestFn.java:467)
at io.cyanite.config$init.invoke(config.clj:124)
at io.cyanite$_main.doInvoke(cyanite.clj:31)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:383)
at user$eval5.invoke(form-init3959983115342003836.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6757)
at clojure.lang.Compiler.eval(Compiler.java:6747)
at clojure.lang.Compiler.load(Compiler.java:7184)
... 11 more
This seems to be related to the updated dependency on java-driver.
I saw the lazy-query function but am not really sure how to use it; the example given is very specific. As I understand it, I'd have to merge or refine the query on each iteration. I have a huge database, and when altering the query and adding constraints to it, it becomes quite large. Not sure if I got that wrong.
What is the idiomatic way to execute an (hayt/select :foo) (i.e. select * from foo) in a lazy/paginated/chunked manner?
CQL and the Datastax driver support named bindings in CQL, i.e.:
"INSERT INTO table (id, location, contact) values (:id, :address, :phone)";
as opposed to positional:
"INSERT INTO table (id, location, contact) values (?, ?, ?)"
This is useful, because when using PreparedStatement (and assuming our data-structures are generally maps), we have to also maintain a mapping from each map to the PreparedStatement value positional order, which is cumbersome.
If we allow alia to accept a map of values, we can bind that to the named parameters in the CQL, and execute similarly to:
(alia/execute session statement {:values {:id 1 :address "some-address" :phone "123-321"}})
Now that the query builder has matured and is quite full-featured, we should be able to port it over to a module (say modules/alia-query-builder or something).
I expect the driver to be at least as performant as hayt (java-driver aggressively caches compiled query output, and can produce simple statements with ready-to-be-bound values, which is kind of neat if used appropriately).
We might keep the intermediary map-based AST for composition, but generation would have to go through the driver.
Some references in comments here: https://github.com/datastax/java-driver/blob/3.x/driver-core/src/main/java/com/datastax/driver/core/querybuilder/BuiltStatement.java
One of the big pains when using hayt was error reporting; the driver can be better at this (I think), but we might also provide dev/compile-time clojure.specs to validate queries. This would also prove useful for generating tests and documentation, though it's totally optional.
Ideally we might want to retain AST compatibility with hayt, but this is not a requirement as both DSLs can live separately (for now).
Using latest Alia, and Cassandra 2.0.10:
The current PCodec DateTime implementation returns a Long where it should return a standard java.util.Date. Currently, persisting a Joda DateTime leads to:
<InvalidTypeException com.datastax.driver.core.exceptions.InvalidTypeException:
Invalid type for value 3 of CQL type timestamp, expecting class java.util.Date but
class java.lang.Long provided>
I'll raise a PR one second.
It's not really a bug, but I would like some external points of view on this (if anyone is reading :)).
Currently (beta5) we use a Clojure promise as the async-execute return value. Most of the time this is sufficient, but there is one really annoying thing imo: when an error happens, the delivered value is the exception instance. There is currently no way to re-throw it at deref time, and I am afraid it is a bit sneaky; we might end up with "exceptions as values" bugs in code using deref in our apps.
That said, using the blocking behavior of a promise on a ResultFuture is a bit useless too, but it's possible, so anyway...
Replacing the core promise with a lamina result-channel solves this (and adds more features down the road; the lamina api is really rich). When you deref a result-channel in an error state it throws the source exception; the rest is identical to a promise (for what we care about in this instance: the api stays the same, tests still pass, etc.).
The performance hit with lamina/result-channel vs core/promise (meaning only for their creation/setup/delivery, not the whole query) is 10-15%, which accounts for 15ms on 100k queries, so nearly identical. Another route I tried was cljque, but it is slower (+200ms for 100k queries).
I had a couple of solutions in mind, such as replacing core/promise with a lamina/result-channel.
I think that having lamina as a dependency is actually a nice thing; channels and all their accompanying api are a good fit for the kind of tasks we might encounter in async mode. But maybe there is another solution, or some strong argument for one of the other options.
/discuss
[edit] I took the decision to go with the second solution for beta6 see https://github.com/mpenet/alia/blob/master/CHANGELOG.md for details. I might change my mind in 2 days and revert this, but this feels like the best choice right now.
Hi Max,
Currently it's possible that inserting a nil value via a prepared statement creates an un-purgeable tombstone (the only way to avoid it pre C* 2.2 is to use a prepared statement without that parameter set when the value is nil).
I can see these created in my own case but haven't hit the threshold where it becomes problematic (droppable-tombstone-ratio defaults to 0.2).
So, with C* 2.2+ and driver 3.0.0+ you can choose to leave parameters in a prepared statement unset, and no tombstone is created (in previous versions you must explicitly setToNull).
We could leave values unset where we're binding named parameters. I'd be happy to go ahead and scratch up an implementation, but I'm wondering what the best way to go about it may be.
Currently we have a PNamedBinding protocol in the codec ns with:
nil
(-set-named-parameter! [_ settable name]
  (.setToNull ^SettableByNameData settable name))
Would it be sensible to provide a mechanism to re-extend nil to this protocol, specifically for 2.2+, where the behaviour is basically a no-op? Let me know your thoughts.
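Under that scheme, the 2.2+ behaviour could be as small as re-extending nil with a no-op. A self-contained sketch, using a stand-in protocol mirroring the PNamedBinding shape quoted above (the stand-in definition is only here so the snippet runs on its own):

```clojure
;; Stand-in mirroring alia's PNamedBinding protocol, for illustration.
(defprotocol PNamedBinding
  (-set-named-parameter! [x settable name]))

;; For C* 2.2+ / driver 3.0.0+: leave nil parameters unset instead of
;; calling setToNull, so no tombstone is written.
(extend-protocol PNamedBinding
  nil
  (-set-named-parameter! [_ settable name]
    ;; deliberately do nothing: an unset named parameter stays unset
    settable))
```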
Hi Max,
I'd like to suggest a couple of changes so I can integrate https://github.com/troy-west/arche a little more cleanly (I like the proxy-session but don't like the way it's currently sort of proxied itself).
I'll chuck a patch set over and stick some notes on this ticket.
It's really outdated and deserves some love.
Hi!
I'm trying to address a small dependency issue on 2.12.1, but I am failing to find a tag or branch to base my work on. Is there something that I missed?
The result-set-fn option now allows going down the IReduce path directly, enabling "garbage free" result-set collection among other things. This could be leveraged to provide a few common implementations.
I can imagine at least an eager "seq-free" one that's used frequently -> #(into [] %) (optionally with an xform), but there could be others, such as result-set->map and reductions of various kinds (avg, mean, etc.). It is also possible to get at the execution infos that way; a few helpers in that direction couldn't hurt.
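For instance, a sketch of what such helpers could look like (only :result-set-fn itself is from the source; result-set->map and its shape are illustrative guesses):

```clojure
;; Eager, seq-free collection, optionally with a transducer:
;; (alia/execute session query {:result-set-fn #(into [] %)})
;; (alia/execute session query {:result-set-fn #(into [] (map :id) %)})

;; Hypothetical result-set->map helper: build a result-set-fn that keys
;; each row by a chosen column, in a single reduce over the result set.
(defn result-set->map
  [key-fn]
  (fn [rs]
    (reduce (fn [m row] (assoc m (key-fn row) row)) {} rs)))
```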
It is possible to group individual DML statements into a single BatchStatement, which is useful for ensuring atomicity on write.
This is a very minor addition to the DSL to wrap that. It's related(ish) to the named-bindings ticket and PR I'm about to raise, but suitably orthogonal, so in case you like one and not the other I've separated them.
Hi,
I have some trouble using alia in an immutant project (http://immutant.org/).
I'll write out a few steps to recreate the problem.
Prerequisites:
Then:
Here is the example code: https://gist.github.com/paologf/8575630
The error I get is the following:
10:38:13,826 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-7) MSC00001: Failed to start service jboss.deployment.unit."project-name.clj".WeldStartService: org.jboss.msc.service.StartException in service jboss.deployment.unit."project-name.clj".WeldStartService: Failed to start service
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1767) [jboss-msc-1.0.4.GA.jar:1.0.4.GA]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_25]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_25]
at java.lang.Thread.run(Thread.java:724) [rt.jar:1.7.0_25]
Caused by: org.jboss.weld.exceptions.DeploymentException: WELD-001408 Unsatisfied dependencies for type [Set<Service>] with qualifiers [@Default] at injection point [[parameter 1] of [constructor] @Inject com.google.common.util.concurrent.ServiceManager(Set<Service>)]
at org.jboss.weld.bootstrap.Validator.validateInjectionPoint(Validator.java:311)
at org.jboss.weld.bootstrap.Validator.validateInjectionPoint(Validator.java:280)
at org.jboss.weld.bootstrap.Validator.validateBean(Validator.java:143)
at org.jboss.weld.bootstrap.Validator.validateRIBean(Validator.java:163)
at org.jboss.weld.bootstrap.Validator.validateBeans(Validator.java:382)
at org.jboss.weld.bootstrap.Validator.validateDeployment(Validator.java:367)
at org.jboss.weld.bootstrap.WeldBootstrap.validateBeans(WeldBootstrap.java:379)
at org.jboss.as.weld.WeldStartService.start(WeldStartService.java:64)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.4.GA.jar:1.0.4.GA]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.4.GA.jar:1.0.4.GA]
... 3 more
10:38:14,042 ERROR [org.jboss.as.server] (DeploymentScanner-threads - 1) JBAS015870: Deploy of deployment "project-name.clj" was rolled back with the following failure message:
{"JBAS014671: Failed services" => {"jboss.deployment.unit.\"project-name.clj\".WeldStartService" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"project-name.clj\".WeldStartService: Failed to start service
Caused by: org.jboss.weld.exceptions.DeploymentException: WELD-001408 Unsatisfied dependencies for type [Set<Service>] with qualifiers [@Default] at injection point [[parameter 1] of [constructor] @Inject com.google.common.util.concurrent.ServiceManager(Set<Service>)]"}}
I think the real problem is "WELD-001408 Unsatisfied dependencies".
Outside of immutant (lein deps && lein run) there is no problem using alia; tested under OSX and Windows 7.
Thanks.
Hello mpenet
I'm in doubt about whether this is a known issue or even accepted, known behavior.
I've experienced problems with the connection generated by alia/connect, using a cluster generated by alia/cluster called with a map that includes keys in addition to the documented ones. Nothing seems to go wrong at connection time, but at alia/execute time exceptions occur. (Sorry, I don't recall exactly which; the problem is fixed now and the project is moving on.)
It would have been nice for cluster or connect to validate the map passed in and throw on extraneous keys, rather than failing silently and causing errors later on. This made it much harder to figure out the error.
Kind regards
Reefersleep
Implement a chunked sequence that triggers select queries to get new chunks according to a where modifier fn.
This would ease working with timeseries for instance.
Subject to change, just an idea for now
Example of lazy/chunked seq api:
(cql-range hayt-select step-predicate)
hayt-select is a query-map; step-predicate is a fn that returns a new query-map from the old one (or maybe just the where portion). It returns a lazy sequence over cql selects.
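A hypothetical step-predicate for a timeseries table, advancing a time window one step at a time (the hayt-style where shape, the :ts column, and the window size are all illustrative assumptions):

```clojure
;; window size in millis; one hour per chunk here, purely for example
(def window (* 1000 60 60))

(defn step-predicate
  ;; take the previous query-map's upper bound and shift the where
  ;; clause forward by one window
  [query-map]
  (let [[[_ _ _from] [_ _ to]] (:where query-map)]
    (assoc query-map :where [[:ts :>= to]
                             [:ts :< (+ to window)]])))

;; usage sketch against the proposed api:
;; (cql-range (hayt/select :metrics
;;                         (hayt/where [[:ts :>= t0] [:ts :< (+ t0 window)]]))
;;            step-predicate)
```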
This is handy for some situations, but we might want something that doesn't retain the head and grows forever for long running situations.
This is where streams could make sense, exposed as a lamina channel (?), and with some api sugar to hide that fact, other than that the signature is similar to cql-range. The api should allow while true style consumption or callback based. To be investigated but I think the lazy seq could be implemented on top of that too.
More thought needs to be put into this, I am just thinking out loud, but this sounds useful.
/discuss
It would be great to be able to specify cluster options in an EDN file and pass them directly to the cluster function to create a cluster.
This can be achieved by providing further set-cluster-option! implementations.
This approach can be seen in pull request #79
Another option would be to provide some tagged literal reader functions.
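A minimal sketch of the EDN route, assuming the file's keys simply mirror the options map alia/cluster already accepts (the file path and example keys are illustrative):

```clojure
(require '[clojure.edn :as edn]
         '[clojure.java.io :as io])

;; cluster.edn might contain, for example:
;; {:contact-points ["127.0.0.1"]
;;  :query-options {:consistency :quorum :fetch-size 1000}}
(defn cluster-opts-from-edn
  [path]
  (edn/read-string (slurp (io/file path))))

;; then: (alia/cluster (cluster-opts-from-edn "cluster.edn"))
```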
I have a UDT like this:
(create-type :list_field
             (if-not-exists)
             (column-definitions {:type :text
                                  :name :text
                                  :label :text
                                  :required :boolean}))
I can insert the UDT value without problem by https://github.com/mpenet/alia/blob/master/docs/guide.md#udt-and-tuple
But when I query the data, the result looks like this:
#object[com.datastax.driver.core.UDTValue 0x2d506d0c
"{type:'integer', name:'reason-code', label:null, required:true}"]
It seems alia returns the UDTValue java object instead of a map. If I use bean, I get the following result:
{:class com.datastax.driver.core.UDTValue,
:type #object[com.datastax.driver.core.UserType 0x139f5bf4 "frozen<mykeyspace.list_field>"]}
Could this be a bug in alia or by design?
I had to inspect the source to figure out the keys are actually :user and :password, not :username and :password.
The docstring of execute-chan includes: "Exceptions are sent to the channel as a value" which seems like a very reasonable approach. However it's also possible to call this function in a manner which will cause an exception to be thrown, and no channel returned, i.e.
(execute-chan prepared-statement {:values [:fails-to "bind"]})
This is due to (query->statement query values) being executed synchronously and outside the scope of a try/catch. There may be other examples, I primarily use execute-chan and execute-chan-buffered.
Happy to create a PR if this is something worth picking up.
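A sketch of the shape such a fix could take: perform the synchronous bind inside try/catch and deliver any exception on the returned channel instead of throwing. This is illustrative only (the function and its arguments stand in for alia's internals, which take different parameters):

```clojure
(require '[clojure.core.async :as async])

(defn execute-chan*
  ;; bind errors become channel values, matching the documented
  ;; behaviour of async execution errors
  [query->statement query values run-async!]
  (let [ch (async/chan 1)]
    (try
      (run-async! (query->statement query values) ch)
      (catch Exception e
        (async/put! ch e)
        (async/close! ch)))
    ch))
```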
The datastax java driver pages results with a default fetch-size of 5000.
If the cluster is unavailable as the ResultSet requests the next page of results, a NoHostAvailableException is thrown. There's a ticket to make paging resumable (https://datastax-oss.atlassian.net/browse/JAVA-277), but there will always be a possibility of this exception being thrown.
As much as it is unlikely to occur, there are two broad implications:
In the case of -async and -chan there's only a vanishingly small window for triggering the latter case; fetch-size would need to be less than thunk-size, which is probably a misconfiguration anyway. You're more likely to encounter this case when using -chan-buffered and paging through large amounts of data: when a network partition occurs, you end up waiting on a channel which will never close.
We use ccm (https://github.com/SMX-LTD/ccm-clj) for stopping/starting clusters while testing, and I've been able to reproduce both cases for -chan and -chan-buffered by executing a query and stopping the cluster temporarily. I'll raise a PR which addresses it. The only further thing to add is that consumers of a -chan-buffered execution result should know an exception may be placed on the channel at any time, even after some results have been successfully returned.
I feel like -chan-buffered is the most 'correct' implementation of the core.async way of querying C* and maybe should replace -chan?
Datastax provide a version of the driver where the netty dependency is shaded:
https://datastax.github.io/java-driver/2.1.6/features/shaded_jar/
I have a conflict between the version of Netty my network service framework is based on (4.0.29) and the version in the driver currently (4.0.27).
I think it's possible for me to work around this in my project with exclusions, without altering Alia (I haven't actually tried that yet). If you're so inclined, it might not be a bad idea to use the shaded version.
Hi!
The README states that:
Alia runs on clojure >= 1.7
Yet the project.clj/clojars requires 1.8.
Is there a specific reason for the 1.8 requirement?
Thanks!
In a REPL with the following dependencies:
[[org.clojure/clojure "1.5.0-RC17"]
[cc.qbits/alia "0.2.0-SNAPSHOT"]]
When I try to define a cluster, it hangs:
dgibbons:cassandra-test dgibbons$ lein repl
nREPL server started on port 58911
REPL-y 0.1.0-beta10
Clojure 1.5.0-RC17
Exit: Control+D or (exit) or (quit)
Commands: (user/help)
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
(user/sourcery function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Examples from clojuredocs.org: (user/clojuredocs name-here)
(user/clojuredocs "ns-here" "name-here")
cassandra-test.core=> (require '[qbits.alia :as alia])
nil
cassandra-test.core=> (def cluster (alia/cluster "10.60.89.0" :port 9160))
^ At this point, the REPL hangs until I press CTRL+C.
Yet, cassandra-cli can successfully connect to this server:
dgibbons:bin dgibbons$ ./cassandra-cli -h 10.60.89.0 -p 9160
Connected to: "Test Cluster" on 10.60.89.0/9160
Welcome to Cassandra CLI version 1.2.2
Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.
[default@unknown]
The DataStax driver was installed via:
git clone [email protected]:datastax/java-driver.git
cd java-driver/driver-core
mvn install -DskipTests
Platform: OS X Mountain Lion
Cassandra cluster version: 1.2.1
Currently only the following are configurable via the pooling-options.
Is there a way I can override the other pooling options defined here, specifically setMaxRequestsPerConnection?
Thanks
When trying to use the following code
(require '[qbits.alia :as alia])
I am getting the error:
java.lang.ClassNotFoundException: com.google.common.util.concurrent.FutureFallback
java.lang.NoClassDefFoundError: com/google/common/util/concurrent/FutureFallback
clojure.lang.Compiler$CompilerException: java.lang.NoClassDefFoundError: com/google/common/util/concurrent/FutureFallback, compiling:(qbits/alia/udt.clj:109:12)
Here is my dependency tree. I am using this in a luminus web template
https://gist.github.com/devasiajoseph/ba8beafafdda8ee1350de01c799e4fc8
What did I miss?
I am very tempted to deprecate releases of alia-all in favor of forcing users to pick and choose manually what they need. I have had to deal a few times with people blindly pulling in alia-all and running into trouble with transitive deps (e.g. nippy). This ultimately should not cause trouble for anyone.
Right now I can't spec things such as a core.async channel or a manifold deferred, because I have no guarantee core.async will be present on the user's classpath. One solution would be to split specs per "module", either integrated into the module itself or in separate modules (for people who still like to use AOT). Not sure about that one yet, but one project/module per spec context might be better.
I have a table with a UDT column. Rows which have a value for the UDT column work normally, but rows which have no value for the UDT column cause a NullPointerException. Also, "SELECT * FROM tablename", where "tablename" is a table that has some null cells for the UDT column, causes an NPE which refers to codec.clj:90.
same as #71 but for manifold
I'd like a variant of execute-chan where I provide the channels used, i.e.:
(execute-chans result-chan error-chan session query options)
where successful query results are placed on result-chan and any error is placed on error-chan.
Two reasons:
I'm happy to work on a PR if you think this sensible.
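One possible shape, layered over the existing execute-chan: a small routing helper that dispatches one value to the right channel (the helper name and structure are illustrative, not part of alia):

```clojure
(require '[clojure.core.async :as async])

(defn route!
  ;; take one value from src: Throwables go to error-chan,
  ;; anything else to result-chan
  [src result-chan error-chan]
  (async/take! src
               (fn [v]
                 (if (instance? Throwable v)
                   (async/put! error-chan v)
                   (async/put! result-chan v)))))

;; execute-chans could then be as small as:
;; (defn execute-chans [result-chan error-chan session query options]
;;   (route! (alia/execute-chan session query options)
;;           result-chan error-chan))
```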
Ta,
Derek
I'm trying to execute a prepared statement to insert a timestamp value using named parameters. I'm getting the following error:
clojure.lang.ExceptionInfo: Query binding failed {:type :qbits.alia/bind-error, :exception #error { :cause "java.util.Date cannot be cast to com.datastax.driver.core.LocalDate" :via [{:type java.lang.ClassCastException :message "java.util.Date cannot be cast to com.datastax.driver.core.LocalDate" :at [qbits.alia.codec$eval4743$fn__4744 invoke "codec.clj" 153]}]
I think I've tracked it down to qbits.alia.codec.clj, in the java.util.Date implementation of the PNamedBinding protocol. It is currently calling setDate with a java.util.Date argument, but in the Java driver setDate un-intuitively takes a com.datastax.driver.core.LocalDate, while setTimestamp takes a java.util.Date.
Anyway, it seems like an easy fix. Pull request coming soon.