loyd / clickhouse.rs
A typed client for ClickHouse
License: MIT License
Because the HTTP connection needs heartbeat messages sent periodically from the server to the client; and second, that's the only way the server can detect abruptly terminated client connections.
Please add the Date type, because it is massively used, even in Yandex itself.
By the way, the structure is exactly the same as in Yandex.Metrica for event-view tables: the counter number, i.e. the site identifier, followed by the date.
(c) Milovidov
Date is often needed for queries like: ... WHERE _date BETWEEN '2020-12-01' AND '2020-12-30'
Thank you for the great crate!
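For context, later releases of the crate did grow serde adapters for date/time types; a minimal sketch of what a row with a Date column can look like, assuming the time feature and the clickhouse::serde::time::date adapter (both are assumptions relative to the version discussed here):

use clickhouse::Row;
use serde::{Deserialize, Serialize};
use time::Date;

#[derive(Debug, Row, Serialize, Deserialize)]
struct Hit {
    // Counter number, i.e. the site identifier.
    counter_id: u32,
    // Stored as UInt16 days since 1970-01-01, matching ClickHouse's Date.
    #[serde(with = "clickhouse::serde::time::date")]
    date: Date,
}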
Regarding U256: not sure if there is an existing type for it, so what should we use? Something like this would be nice.
I saw a previous discussion about DateTime in #1. Any interest in supporting chrono or time instead of u32?
I was looking at the code and I have a question.
Do we have a guarantee that nothing will be sent to ClickHouse before we actually invoke inserter.commit()?
I'm doing a bunch of inserter.write calls, and due to the way I handle errors (backoff), there might be a situation where I accumulate quite a bunch of records, write them all with inserter.write, and only then call inserter.commit.
From what I see in the code, inserter.write actually calls this if the buffer is getting bigger than some threshold:
Line 88 in cb24f41
which essentially should lead to writing into a channel (the hyper Client's BODY).
Now, if I understand correctly, nothing should be sent over the wire before we actually invoke inserter.end/inserter.commit (which closes the channel). Am I correct in my line of thought?
It's just that I am experiencing quite weird bugs under high load with ClickHouse, where all my writers get locked. The throughput can get quite high (~30k writes per second), but I am writing in chunks (using an infinite inserter) from several green threads to exclude possible starvation, and the database has been switched to the Memory table engine to ensure it's not an IO problem.
Thanks!
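For reference, a minimal sketch of the accumulate-then-commit pattern described above, assuming the inserter API from this crate (the table name and row type are hypothetical):

use clickhouse::{error::Result, Client, Row};
use serde::Serialize;

#[derive(Row, Serialize)]
struct MyRow {
    no: u32,
}

// Hypothetical helper: rows accumulated during backoff are flushed at once.
async fn flush(client: &Client, pending: &[MyRow]) -> Result<()> {
    let mut inserter = client.inserter("my_table")?;
    for row in pending {
        // Buffered locally; past a threshold this feeds the HTTP body channel.
        inserter.write(row).await?;
    }
    // Only at this point is the INSERT finished from the server's perspective.
    inserter.commit().await?;
    inserter.end().await?;
    Ok(())
}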
Currently, only decompression is available.
ClickHouse sends an error after the data if one occurs during processing.
[tests/integration.rs:157] &row = MyRowResult {
no: 500,
result: 2,
}
[tests/integration.rs:157] &row = MyRowResult {
no: 0,
result: 501,
}
The second row must be:
[tests/integration.rs:157] &row = MyRowResult {
no: 501,
result: 2,
}
Test:
#[tokio::test]
async fn it_writes_then_reads_count() {
    let client = prepare("it_writes_then_reads_count").await;

    #[derive(Debug, Row, Serialize, Deserialize)]
    struct MyRow {
        no: u32,
        num: u32,
    }

    #[derive(Debug, Row, Serialize, Deserialize)]
    struct MyRowResult {
        no: u32,
        result: u32,
    }

    // Create a table.
    client
        .query(
            "
            CREATE TABLE some(no UInt32, num UInt32)
            ENGINE = MergeTree
            ORDER BY no
            ",
        )
        .execute()
        .await
        .expect("cannot create a table");

    // Write to the table.
    let mut insert = client.insert("some").expect("cannot insert");
    for i in 0..1000 {
        insert
            .write(&MyRow { no: i, num: i })
            .await
            .expect("cannot write()");
        insert
            .write(&MyRow { no: i, num: i + 1 })
            .await
            .expect("cannot write()");
    }
    insert.end().await.expect("cannot end()");

    // Read from the table.
    let mut cursor = client
        .query("SELECT no, count(*) FROM some WHERE no BETWEEN ? AND ? GROUP BY no ORDER BY no")
        .bind(500)
        .bind(504)
        .fetch::<MyRowResult>()
        .expect("cannot fetch");

    let mut i = 500;
    while let Some(row) = cursor.next().await.expect("cannot next()") {
        dbg!(&row);
        assert_eq!(row.no, i);
        assert_eq!(row.result, 2);
        i += 1;
    }
}
The same happens with count(*) AS result.
The title says everything. According to the crates.io page, the latest version of clickhouse is 0.10.0. As I'm writing this issue, the version of clickhouse from this repository's Cargo.toml is 0.9.3.
Thank you for your work on this crate!
Hi,
The Python client for ClickHouse allows inserting a raw pyarrow.Table via the insert_arrow method, which sends the Apache Arrow encoded data 1:1 to ClickHouse through ClickHouse's ArrowStream format. This is incredibly efficient.
Code is quite short, see https://github.com/ClickHouse/clickhouse-connect/blob/fa20547d7f7e2fd3a2cf4cd711c3262c5a79be7a/clickhouse_connect/driver/client.py#L576
Surprisingly, the INSERTs using Arrow in Python are even faster than this ClickHouse Rust client using the RowBinary format, though I have not investigated where this client loses time.
Has anyone looked into Apache Arrow support and benchmarked it? Rust's polars uses Apache Arrow as its backend, so using the native insert format seems like the logical choice, providing an easy way to directly insert a polars DataFrame into ClickHouse. Supporting Arrow would potentially improve performance, and we could directly query/insert a whole polars DataFrame.
These are all Arrow-based standards, supported by both ClickHouse and polars, so the extension might be straightforward.
Wasn't sure whether to comment on #48 or not. Do you have any suggestions on how to handle ClickHouse's Int256 in Rust?
The current API allows the following code:
let mut cursor = client.query("...").fetch::<MyRow<'_>>()?;
let a = cursor.next().await?;
let b = cursor.next().await?; // <- must be an error
We should use something like sqlx::fetch or wait for GAT stabilization. Sadly, it will be a breaking change.
It seems that more than one FixedString does not work.
demo code:
let client = Client::default()
    .with_url("http://localhost:8123")
    .with_database("default");

client
    .query(
        r###"create table if not exists tpayment
        (
            code FixedString(8),
            name String,
            stock_code FixedString(8)
        )
        ENGINE = ReplacingMergeTree()
        PRIMARY KEY (name);"###,
    )
    .execute()
    .await
    .unwrap();

let mut insert = client.insert("tpayment").unwrap();
for _ in 0..10 {
    insert
        .write(&TPayment {
            code: "12345678".to_string(),
            name: "foo".to_string(),
            stock_code: "12345678".to_string(),
        })
        .await
        .unwrap();
}
insert.end().await.unwrap();

let mut cursor = client
    .query("SELECT ?fields FROM tpayment")
    .fetch::<TPayment>()
    .unwrap();
the panic information:
called `Result::unwrap()` on an `Err` value: BadResponse("Code: 33. DB::Exception: Cannot read all data. Bytes read: 3. Bytes expected: 53.: (at row 4)\n: While executing BinaryRowInputFormat. (CANNOT_READ_ALL_DATA) (version 21.12.3.32 (official build))")
thread 'store::clickhouse::clickhouse::tests::test' panicked at 'called `Result::unwrap()` on an `Err` value: BadResponse("Code: 33. DB::Exception: Cannot read all data. Bytes read: 3. Bytes expected: 53.: (at row 4)\n: While executing BinaryRowInputFormat. (CANNOT_READ_ALL_DATA) (version 21.12.3.32 (official build))")', data/src/store/clickhouse/clickhouse.rs:99:32
stack backtrace:
The call stack points to insert.end().await.unwrap();
rust version:
rustc 1.66.0-nightly (4a1467723 2022-09-23)
clickhouse version:
ClickHouse client version 21.12.3.32 (official build).
Doing expensive concat(c1, c2...c10) AS f WHERE f LIKE '%k%' searches over 100 million rows is painful, and adding one more count(1) query before it would be even worse. So I'm thinking of using https://clickhouse.com/docs/en/interfaces/formats/#json to accelerate this (via rows_before_limit_at_least), but RowBinary has been hard-coded. Am I making any sense, or is there some better way to achieve this?
Use RowBinaryWithNamesAndTypes instead of RowBinary as the primary format in order to support deserialize_any and, hence, serde(flatten) and type conversions & validation.
Avoid the case of long-executing INSERTs, where the time spent is more than max_duration, in the following code:
let mut inserter = client.inserter("kek")
    .set_max_duration(Duration::from_millis(5));

while let Some(envelope) = ctx.recv().await {
    let row = make_row(envelope);
    inserter.write(row).await;
    // If the time is exceeded, this ends the current
    // HTTP request and opens another one.
    inserter.commit().await;
}
The simplest possible solution is just to skip ticks.
I've looked into the PRs, and https URLs should work, but I'm still getting an error. I double-checked that the tls feature is enabled:
Network(hyper::Error(Connect, "invalid URL, scheme is not http"))
This happens when executing a query:
let result = client.query("SELECT 1").fetch_one::<usize>().await.unwrap();
Am I doing something wrong or is this a bug?
So, I removed all Reflection references from my code and all Reflection derives from structures interfacing with ClickHouse...
However, I have one generic writer function that looks something like this:
pub async fn go<A: Clone + std::fmt::Debug + Send, R: Serialize, F: Fn(A) -> R>(
    &self,
    client: Client,
    map: &F,
) -> Result<()> {
    let mut inserter = client.inserter(...);
    inserter.write(&map(...)).await?;
    ...
}
Now I'm getting this error, which I find unintuitive:
error[E0277]: the trait bound R: clickhouse::row::Primitive is not satisfied
(at the &map function call; the clickhouse::row::Primitive trait is private)
What would be the best way to resolve this?
P.S. Before, R: Serialize used to be R: Reflection + Serialize.
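A likely resolution (my guess, since Reflection was renamed rather than removed): bound R by the public Row trait in addition to Serialize; without it, the missing bound is surfaced through the private Primitive trait. A sketch, with a hypothetical table name:

use clickhouse::{error::Result, Client, Row};
use serde::Serialize;

// `Row + Serialize` replaces the old `Reflection + Serialize` bound.
pub async fn go<A, R, F>(client: &Client, items: Vec<A>, map: F) -> Result<()>
where
    R: Row + Serialize,
    F: Fn(A) -> R,
{
    let mut inserter = client.inserter("target")?; // table name is hypothetical
    for item in items {
        inserter.write(&map(item)).await?;
    }
    inserter.end().await?;
    Ok(())
}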
I suspect that I'm doing something wrong, but I'm trying to use an inserter to write many rows to a ClickHouse db and started getting CANNOT_READ_ALL_DATA errors from the DB.
Created this simple program that demonstrates the problem while inserting just a single row: https://github.com/jjtt/clickhouse-cannot-read-all-data:
- run_clickhouse_server.sh to start the latest ClickHouse docker image
- cargo run
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: BadResponse("Code: 33. DB::Exception: Cannot read all data. Bytes read: 1. Bytes expected: 8.: (at row 1)\n: While executing BinaryRowInputFormat. (CANNOT_READ_ALL_DATA) (version 23.2.3.17 (official build))")', src/main.rs:23:26
I have the following struct:
use serde_derive::{Deserialize, Serialize};
use clickhouse::Reflection;

#[derive(Clone, Reflection, Deserialize, Serialize)]
pub enum LogLevel {
    DEBUG,
    INFO,
    WARNING,
    ERROR,
}

#[derive(Clone, Reflection, Deserialize, Serialize)]
pub struct LogEntry {
    level: LogLevel,
    message: String,
}
When I try to insert the LogEntry struct into the database, I get the following panic:
thread 'tokio-runtime-worker' panicked at 'not yet implemented', /home/zezic/.cargo/registry/src/github.com-1ecc6299db9ec823/clickhouse-0.6.5/src/rowbinary/ser.rs:111:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
which points to another todo!() in the serialization source code:
#[inline]
fn serialize_unit_variant(
    self,
    _name: &'static str,
    _variant_index: u32,
    _variant: &'static str,
) -> Result<()> {
    todo!();
}
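A workaround until unit variants are supported (hedged: this assumes the column is an Enum8 and pulls in the serde_repr crate, neither of which is given in the original report) is to (de)serialize the enum as its numeric discriminant, which matches the Int8 wire encoding of Enum8:

use serde_repr::{Deserialize_repr, Serialize_repr};

// Values must match the Enum8 declaration on the table, e.g.
// Enum8('DEBUG' = 1, 'INFO' = 2, 'WARNING' = 3, 'ERROR' = 4).
#[derive(Clone, Debug, Serialize_repr, Deserialize_repr)]
#[repr(i8)]
pub enum LogLevel {
    DEBUG = 1,
    INFO = 2,
    WARNING = 3,
    ERROR = 4,
}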
What is the best way to satisfy types with a custom serialization implementation?
use serde::{Deserialize, Serialize};

#[derive(clickhouse::Row, Debug, Serialize, Deserialize)]
pub struct Test {
    #[serde(with = "uuid::serde::compact")]
    pub id: uuid::Uuid,
}

let mut inserter = db.insert("tests")?;
inserter.write(&Test { id: Uuid::new_v4() }).await?; // id: cb5f628b-0f76-4e3e-a310-9b15cb9d29ee
inserter.end().await?; // success

// later in clickhouse-client do:
// select * from tests;
┌─id───────────────────────────────────┐
│ 3e4e760f-8b62-5fcb-ee29-9dcb159b10a3 │
└──────────────────────────────────────┘
Hi @loyd,
You have closed the issue, but it is still not working. I am getting the error below:
the trait bound f64: clickhouse::row::Primitive is not satisfied
required because of the requirements on the impl of Row for f64
Below is my code:
let r_ts = ts_client
    .query(&query)
    .fetch_all::<f64>()
    .await
    .unwrap();
Below are my dependencies:
[dependencies]
clickhouse = "0.9.3"
tokio = { version = "1.15.0", features = ["full"] }
Code: 62. DB::Exception: Syntax error: failed at position 81 ('TIMEOUT'): TIMEOUT AS SELECT num FROM test ORDER BY num. Expected one of: REFRESH, PERIODIC REFRESH. (SYNTAX_ERROR) (version 22.10.2.11 (official build))
Enable the watch feature in CI tests after fixing.
Looks like not much weekly activity on master
since June. Project still going strong?
clickhouse.rs version 0.7.2
rustc version 1.52.1
Large queries that cause the query URI to exceed ClickHouse's URI size limit of 16384 bytes result in a BadResponse("") error from the driver.
Reproducer:
use clickhouse::Row;
use std::error::Error;

#[derive(Row, Debug, serde::Deserialize)]
struct Res {
    result: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = clickhouse::Client::default()
        .with_url("http://localhost:8123");

    let large_literal: String = std::iter::repeat("A").take(16384).collect();
    let _: Vec<Res> = client
        .query(format!("SELECT '{}' as result", large_literal).as_str())
        .fetch_all()
        .await?;
    Ok(())
}
Result:
❯ cargo run
Compiling rust-playground v0.1.0 (/home/shenghao/Projects/rust-playground)
Finished dev [unoptimized + debuginfo] target(s) in 2.26s
Running `target/debug/rust-playground`
Error: BadResponse("")
Hacking clickhouse.rs to use POSTs instead of GETs for read queries as well as DDL seems to work (with the method parameter in do_execute() replaced by a read_only flag that makes the request builder set the readonly query parameter, emulating the GET request's read-only semantics), though it changes the request type. Maybe someone has a better fix?
When I do fetch_all::<(u32, f64, u32)>(), it throws the error: the trait bound (u32, f64, u32): clickhouse::Row is not satisfied.
Could you help me here?
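As a workaround (a sketch, not an official answer): Row is implemented through its derive macro for structs with named fields, so wrapping the three columns in a struct avoids the tuple limitation:

use clickhouse::Row;
use serde::Deserialize;

// Field names are hypothetical; they must match the selected
// column names or aliases.
#[derive(Row, Deserialize)]
struct Triple {
    a: u32,
    b: f64,
    c: u32,
}

// let rows = client.query("SELECT a, b, c FROM t").fetch_all::<Triple>().await?;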
Something like this test, but with ["bar", "baz", "foo?bar"] (with a ? inside a string):
#[test]
fn it_builds_sql_with_in_clause() {
    fn t(arg: &[&str], expected: &str) {
        let mut sql = SqlBuilder::new("SELECT ?fields FROM test WHERE a IN ?");
        sql.bind_arg(arg);
        sql.bind_fields::<Row>();
        assert_eq!(sql.finish().unwrap(), expected);
    }

    const ARGS: &[&str] = &["bar", "baz", "foo?bar"];
    t(&ARGS[..0], r"SELECT `a`,`b` FROM test WHERE a IN []");
    t(&ARGS[..1], r"SELECT `a`,`b` FROM test WHERE a IN ['bar']");
    t(
        &ARGS[..2],
        r"SELECT `a`,`b` FROM test WHERE a IN ['bar','baz']",
    );
    t(
        ARGS,
        r"SELECT `a`,`b` FROM test WHERE a IN ['bar','baz','foo?bar']",
    );
}
unbound query argument: ?
thread 'sql::test::it_builds_sql_with_in_clause' panicked at 'unbound query argument: ?', clickhouse.rs/src/sql/mod.rs:69:21
stack backtrace:
0: std::panicking::begin_panic
at /home/f/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:519:12
1: clickhouse::sql::SqlBuilder::finish
at ./src/sql/mod.rs:69:21
2: clickhouse::sql::test::it_builds_sql_with_in_clause::t
at ./src/sql/mod.rs:116:24
3: clickhouse::sql::test::it_builds_sql_with_in_clause
at ./src/sql/mod.rs:126:9
4: clickhouse::sql::test::it_builds_sql_with_in_clause::{{closure}}
Would you welcome a PR adding some doc comments? I could work on this as I come to understand the code.
For all compressions except None, the returned message is broken. E.g. for gzip it looks like:
Error: BadResponse("\u{1f}�\u{8}\u{0}\u{0}\u{0}\u{0}\u{0}\u{4}\u{3}\u{2}�9?%�J��PG!U/%�� \'�2$��DCS�V����ʵ\"9��$3?�J�%�$1)�8U!\u{5}��O-�S/QH��,.Q�(K-*���S02�3�3�3U��OK�L�L�QH*��I���\u{2}\u{0}\u{0}\u{0}��\u{3}\u{0}\u{11}�\u{7f}\u{8}o\u{0}\u{0}\u{0}")
use clickhouse::{error::Result, Client, Reflection};
use serde::Serialize;

#[derive(Debug, Serialize, Reflection)]
struct Row {
    no: u32,
    name: String,
}
This fails with the error:
6 | #[derive(Debug, Serialize, Reflection)]
| ^^^^^^^^^^ could not find `reflection` in `{{root}}`
|
= note: this error originates in a derive macro (in Nightly builds, run with -Z macro-backtrace for more info)
I added this to my Cargo.toml file:
[dependencies]
clickhouse = "0.6.3"
reflection = "0.1.3"
but when I run cargo build, I get a failure saying:
Compiling clickhouse v0.6.3
error[E0433]: failed to resolve: could not find `test` in `tokio`
--> /Users/gudjonragnar/.cargo/registry/src/github.com-1ecc6299db9ec823/clickhouse-0.6.3/src/compression/lz4.rs:163:10
|
163 | #[tokio::test]
| ^^^^ could not find `test` in `tokio`
error: aborting due to previous error
For more information about this error, try `rustc --explain E0433`.
error: could not compile `clickhouse`
To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed
I am quite new to Rust, so I don't know what to do here. Any thoughts?
I am running on MacOS BigSur if that is relevant.
Let's say I do
let cursor = client.query("SELECT bla, wtf FROM ?").bind("some_table").fetch::<SomeType<'_>>();
let wtf = cursor.as_mut().unwrap().next().await;
hangs forever, never to return...
The same happens without binding the table name if I just make a mistake in the query syntax: errors are swallowed and the iterator hangs.
Right now it can be done using separate arrays:
#[derive(Debug, Row, Serialize, Deserialize)]
struct MyRowOwned {
    no: u32,
    #[serde(rename = "nested.a")]
    a: Vec<f64>,
    #[serde(rename = "nested.b")]
    b: Vec<f64>,
}
However, it would be more convenient to detect the following pattern:
#[derive(Debug, Row, Serialize, Deserialize)]
struct MyRowOwned {
    no: u32,
    nested: Vec<Nested>,
}

#[derive(Debug, Row, Serialize, Deserialize)]
struct Nested {
    a: f64,
    b: f64,
}
Error:
Custom("invalid type: string \"...\" expected a borrowed string")
Test:
#[tokio::test]
async fn it_works_with_big_borrowed_str() {
    let client = common::prepare_database("it_works_with_big_borrowed_str").await;

    #[derive(Debug, Row, Serialize, Deserialize)]
    struct MyRow<'a> {
        no: u32,
        body: &'a str,
    }

    client
        .query("CREATE TABLE test(no UInt32, body String) ENGINE = MergeTree ORDER BY no")
        .execute()
        .await
        .unwrap();

    let long_string = "A".repeat(10000);

    let mut insert = client.insert("test").unwrap();
    insert
        .write(&MyRow {
            no: 0,
            body: &long_string,
        })
        .await
        .unwrap();
    insert.end().await.unwrap();

    let mut cursor = client
        .query("SELECT ?fields FROM test")
        .fetch::<MyRow<'_>>()
        .unwrap();

    let row = cursor.next().await.unwrap().unwrap();
    assert_eq!(row.body, long_string);
}
ClickHouse can ensure high availability with replicated tables between servers.
From the client's point of view, it may be useful to be able to rotate between multiple hosts.
For example, most Postgres clients support failover, and it's very useful.
I didn't see it in https://github.com/loyd/clickhouse.rs/blob/master/tests/test_uuid.rs. Is there a way to work with Vec<uuid::Uuid>?
I've been testing clickhouse as a potential db for work and was excited to see a Rust clickhouse client. I wrote a small program that randomly generates data that matches our db schema, and then inserts tons of that data into the database with the intent of both getting to know clickhouse better and seeing if it meets our insert and query needs.
Running my test though, I'm coming up against memory errors that look like this issue on the clickhouse repo. I've been trying to troubleshoot it, but I'm just not familiar enough with clickhouse yet to nail down what's causing the issue and what exactly the issue is.
Here's my little test program
#[tokio::main]
async fn main() -> Result<()> {
    let row_size = std::mem::size_of::<DragonflyRow>();
    let bytes_in_billion_rows = 1_000_000_000 * row_size;

    // Insert one billion rows in batches of 10,000.
    // I've done this in various batch sizes from 10 to 10,000.
    let total_rows_to_insert = 1_000_000_000;
    let batch_size = 10_000;

    // Start a clickhouse client.
    let client = Client::default().with_url("http://localhost:8123");

    // Create an "inserter".
    let mut inserter = client
        .inserter("dragonfly")? // table name
        .with_max_entries(10_000);

    let mut inserted_so_far = 0;
    for i in 1..((total_rows_to_insert / batch_size) + 1) {
        for j in 1..batch_size + 1 {
            // The object inserted is a randomly generated/populated struct
            // that matches the db schema.
            inserter.write(&DragonflyRow::rand_new()).await?;
            inserted_so_far = i * j;
        }
        inserter.commit().await?;
        // Sleep two seconds to not potentially overwhelm clickhouse.
        thread::sleep(time::Duration::from_secs(2));
    }

    // Close the inserter.
    inserter.end().await?;
    Ok(())
}
My table is very simple, with no nested objects, and the engine is just a MergeTree using a timestamp value to order by.
When I run this with batch sizes of <1,000 rows, I get this error
Error: BadResponse("Code: 33. DB::Exception: Cannot read all data. Bytes read: 582754. Bytes expected: 1838993.: (at row 1)\n: While executing BinaryRowInputFormat. (CANNOT_READ_ALL_DATA) (version 21.11.3.6 (official build))")
When I run this with a batch size of 10,000, I get this
Error: BadResponse("Code: 49. DB::Exception: Too large size (9223372036854775808) passed to allocator. It indicates an error.: While executing BinaryRowInputFormat. (LOGICAL_ERROR) (version 21.11.3.6 (official build))")
Based on the information in the clickhouse issues that are similar to this, I think there's something going on with how the BinaryRowInputFormat queries are being executed, but being newer to clickhouse I'm not very confident that I'm correct about that. Today I hope to follow up by doing a similar test but instead of using this clickhouse client library, I'll just connect to the port and send raw http requests and see if I get the same issues or not.
Similar clickhouse issues I've found
I'm on Ubuntu 20.04, 4 cpu cores, 10GB ram, 32GB disk.
Looking at htop output while the program is running, I don't see much that helps, aside from a lot of clickhouse-server threads (maybe about 50).
For reference, I have no problem inserting a 4GB json file with clickhouse-client --query "<insert statement>" < file.json
I'm happy to help if there are more questions about this issue.
Hi,
ClickHouse is an OLAP database, and I want to build a generic query engine where users can enter any SQL query and get back the results, including the schema and the data, which the application layer then displays. When querying, the user does not have to specify the data types.
The query result is two-dimensional: one array holds the column names, and the other holds the data itself, e.g. (the schema is not fixed):
{
    "schema": ["id", "name", "age", "birthday"],
    "data": [
        ["20", "james1", "18", "2000-01-01"],
        ["21", "james2", "19", "2000-01-01"],
        ["22", "james2", "10", "2000-01-01"],
        ["23", "james4", "30", "2000-01-01"],
        ["24", "james5", "26", "2000-01-01"]
        ...
    ]
}
How can I do that? Thanks.
Is there an expected release date for 0.11.1? I would like to use the OffsetDateTime serialization feature. Currently I'm doing so using path = ../clickhouse.rs in the crate.
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: BadResponse("Code: 33. DB::Exception: Cannot read all data. Bytes read: 6. Bytes expected: 56.: (at row 2)\n: While executing BinaryRowInputFormat. (CANNOT_READ_ALL_DATA) (version 23.1.2.9 (official build))")', src/xxx.rs:181:40
The error occurs in insert.end().await.unwrap():
for (symbol_id, (total, cnt)) in records.iter() {
    let cli = make_client(url, db, table).await?;
    let mut insert = cli.inserter(table)?;
    ...
    insert.write(&r).await.unwrap();
    insert.commit().await.unwrap();
    insert.end().await.unwrap();
};
Here I tried both cli.insert() and cli.inserter(). I also tried dropping the table and creating a new one, but it doesn't help.
Hi, I'm trying to migrate from https://github.com/suharev7/clickhouse-rs to your crate, but I cannot find where the Pool is.
I want to have some connections always established, and I need to keep at least one at all times, i.e. pool_min and pool_max.
Your docs say almost nothing about it, besides the one sentence "Reuse created clients or clone them in order to reuse a connection pool".
Would you care to explain it better? How does a Pool work here?
Thank you.
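For what it's worth, my reading of that sentence (an interpretation, not an authoritative answer): the Client wraps a hyper HTTP client whose connection pool is shared among clones, so there is no separate Pool type or pool_min/pool_max to configure; you create one Client and clone it cheaply across tasks:

use clickhouse::Client;

#[tokio::main]
async fn main() {
    // One underlying hyper client, hence one shared connection pool.
    let client = Client::default().with_url("http://localhost:8123");

    let handles: Vec<_> = (0..4)
        .map(|_| {
            // Clones share the pool instead of opening new connections per task.
            let client = client.clone();
            tokio::spawn(async move {
                let one = client.query("SELECT 1").fetch_one::<u8>().await.unwrap();
                assert_eq!(one, 1);
            })
        })
        .collect();

    for h in handles {
        h.await.unwrap();
    }
}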
I'm writing a Kafka consumer; it consumes messages and inserts them into ClickHouse according to each message's meta info, like:
{
    "table": "ch_table_1",
    "data": [
        {"col_name": "foo", "type": "uint32", "val": "3"},
        {"col_name": "bar", "type": "string", "val": "hello"}
        //...
    ]
}
How do I construct the Row to insert? I didn't find any docs about this.
Thanks @loyd for your support. I don't see an implementation for binding (T, T) tuples, which ClickHouse supports. I looked at the code but was not able to write my own fix and send a PR (I am not there in Rust yet). Can you support me in this? I am willing to send a PR if you could suggest how I can solve it.
I think it's now safe to update to tokio 1.0, bytes 1.0, and hyper 0.14.
Hi,
Thanks for your crate. I am facing a random issue. Do you know what timeout settings I can use to fix this?
network error: connection closed before message completed
I tried connect_timeout, but it didn't work.
Hi everyone,
Thank you for this library.
I want to insert some DateTime values into a field:
CREATE TABLE IF NOT EXISTS a.test
(
    `key` String,
    `test` DateTime
)
ENGINE = MergeTree() ORDER BY (key) SETTINGS index_granularity = 8192;
use chrono::{DateTime, Utc};
use chrono_tz::Tz;
use reflection::{terminal, Type};
use serde::{Deserialize, Serialize};
use clickhouse::{Client, Reflection};

#[derive(Debug, Reflection, Serialize, Deserialize)]
struct Row {
    key: String,
    test: DateTime<Utc>,
}

let timestamp = chrono::offset::Utc::now();
insert
    .write(&Row {
        key: "key".to_string(),
        test: timestamp,
    })
    .await?;
let stats = insert.commit().await?;
However, when commit is called:
insert.commit().await?;
I get this error:
error[E0277]: the trait bound `chrono::DateTime<chrono::Utc>: clickhouse::Reflection` is not satisfied
--> src/main.rs:116:17
|
116 | #[derive(Debug, Reflection, Serialize, Deserialize)]
| ^^^^^^^^^^ the trait `clickhouse::Reflection` is not implemented for `chrono::DateTime<chrono::Utc>`
|
= note: required by `clickhouse::Reflection::ty`
= note: this error originates in a derive macro (in Nightly builds, run with -Z macro-backtrace for more info)
How do I insert a DateTime?
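One way to make this work (a sketch assuming a later version of the crate than the 0.6.x Reflection-era API shown above, where serde adapters for the time crate exist behind the time feature) is to store an OffsetDateTime and let the adapter encode it as the UInt32 seconds value that a DateTime column expects:

use clickhouse::Row;
use serde::{Deserialize, Serialize};
use time::OffsetDateTime;

#[derive(Debug, Row, Serialize, Deserialize)]
struct MyRow {
    key: String,
    // Encoded as UInt32 seconds since the epoch, matching `DateTime`.
    #[serde(with = "clickhouse::serde::time::datetime")]
    test: OffsetDateTime,
}

// let mut insert = client.insert("test")?;
// insert.write(&MyRow { key: "key".into(), test: OffsetDateTime::now_utc() }).await?;
// insert.end().await?;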
When I perform a query looking like SELECT * FROM table WHERE field LIKE '%?%', I get a panic like: unbound query argument ? or ?fields. This is bad on multiple levels. Panicking in response to invalid input doesn't seem great, but it's even worse that this panic happens for a valid SQL query.
It seems that at least some rudimentary tokenizing should be done in order to only respond to ? characters that are not part of string literals.
As a kludge, we could alternatively have a method that says to ignore any ? that are present.
I am having an issue with collecting items into a batch. The issue can be boiled down to this code:
let mut results = client
    .query("Select ...")
    .fetch::<MyStruct<'_>>()?;

let mut batch = Vec::with_capacity(100);
while let Some(res) = results.next().await.unwrap() {
    batch.push(res);
    if batch.len() == batch.capacity() {
        do_work(&batch); // placeholder for processing the full batch
        batch.clear();
    }
}
What I see happening is that each res struct is correctly deserialized, but sometimes (I haven't found any deterministic explanation) the elements already in the batch are updated when the next res is deserialized, thus corrupting the batch. I am at a loss as to why this would happen, but I thought it might be related to issue #24?
Since that one is still open, are there any better ways of doing this kind of thing? The workflow is simply collecting a batch and storing it elsewhere before moving on to the next batch. Since the dataset is possibly quite large, I was hoping I could bypass fetch_all and avoid allocating the whole result set. I guess another option would be to batch at the query level in conjunction with fetch_all, but that would require executing the query multiple times.
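One workaround that sidesteps the aliasing entirely (my suggestion, not from the maintainer): fetch an owned row type, so each deserialized row copies its data out of the cursor's internal buffer instead of borrowing into it:

use clickhouse::Row;
use serde::Deserialize;

// Owned fields (String instead of &str) mean rows kept in `batch`
// cannot alias the cursor's receive buffer. Field names are hypothetical.
#[derive(Row, Deserialize)]
struct MyStructOwned {
    id: u64,
    name: String,
}

// let mut results = client.query("Select ...").fetch::<MyStructOwned>()?;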
Hi,
I am a first-time user of your lib. I am getting "Error: Decompression("incorrect magic number")" whenever I try the code below:
let client = Client::default().with_url("http://clickhouse.mytest.net:8123");
let count = client
    .query("SELECT count() FROM mytable")
    .fetch_all::<u64>()
    .await?;
When I open the link http://clickhouse.mytest.net:8123/play in Chrome, it loads properly, so I don't know what the error is. Can you help me?
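One diagnostic step worth trying (a guess on my part: the magic-number error suggests the client expects an LZ4 frame while something in between, e.g. a proxy, returns plain or re-encoded data) is to disable response compression and re-run the query, assuming the Compression enum exposed by the crate:

use clickhouse::{Client, Compression};

// With compression off, responses are parsed as plain RowBinary;
// if this succeeds, the original failure is in the decompression path.
let client = Client::default()
    .with_url("http://clickhouse.mytest.net:8123")
    .with_compression(Compression::None);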