
byconity / byconity

2.1K stars · 56 watchers · 317 forks · 340.92 MB

ByConity is an open source cloud data warehouse

Home Page: https://byconity.github.io/

License: Apache License 2.0

Shell 0.88% Python 5.42% Vim Script 0.01% CMake 0.86% C++ 86.63% Makefile 0.08% C 0.33% Assembly 5.43% Dockerfile 0.04% HTML 0.03% Clojure 0.05% Perl 0.01% ANTLR 0.03% Batchfile 0.01% XSLT 0.02% PLpgSQL 0.01% Smarty 0.01% Rust 0.01% Cap'n Proto 0.01% JavaScript 0.17%
clickhouse cloud kubernets lakehouse olap s3 snowflake sql clickhouse-database tiktok


byconity's Issues

Executing SQL from the documentation produces errors

The following statements from the documentation cannot execute:
SYSTEM RESTART MERGES helloworld.my_first_table;
SYSTEM RESTART GC helloworld.my_first_table;
The document shows SYSTEM START CONSUMER helloworld.cnch_kafka_consume;, which cannot execute, while SYSTEM START CONSUME helloworld.cnch_kafka_consume; can execute.
system.cnch_kafka_tables doesn't exist.
Running select * from table on a CnchKafka table fails with: Code: 48. DB::Exception: Received from localhost:9000. DB::Exception: Method read is not supported by storage CnchKafka SQLSTATE: HY000. (NOT_IMPLEMENTED)

Read CnchHive table error: DB::Exception: File not found: FileNotFoundException: /user/hive/warehouse/wisedata.db/par10/id=2vb not found.

I am trying to use the CnchHive table engine. I built a single-node HDFS and Hive service (open-source version) to test ByConity's ability to query Hive tables.
Here is the SQL that I used to create the table in Hive:

CREATE TABLE `par10`(
    `name` string)
    PARTITIONED BY (
    `id` string)
    ROW FORMAT SERDE
    'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
    STORED AS INPUTFORMAT
    'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
    OUTPUTFORMAT
    'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
    LOCATION
    '/user/hive/warehouse/wisedata.db/par10';

INSERT INTO par10 VALUES ('1a','2vb');

Here is the SQL that I used to create the CnchHive table in ByConity; this works:

CREATE TABLE hive_test_10 ( 
    `id` String,
    `name` String) 
ENGINE = CnchHive('thrift://xx.xx.xxx.xxx:pppp', 'wisedata', 'par10') 
PARTITION BY id;

However, when I tried to read the table's data, it raised an error:

:) select * from hive_test_10;

SELECT *
FROM hive_test_10

Query id: 795a6a56-6b25-4ba8-af9f-ecdea3ef7e4c

0 rows in set. Elapsed: 0.045 sec. 

Received exception from server (version 21.8.7):
Code: 1000. DB::Exception: Received from localhost:9000. DB::Exception: File not found: FileNotFoundException: /user/hive/warehouse/wisedata.db/par10/id=2vb not found. SQLSTATE: HY000. 
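A possible diagnostic step (a sketch, assuming the standard Hive CLI is available): check the storage location Hive actually recorded for the partition, since a partition's location can differ from the table-level LOCATION:

-- In Hive: show partition metadata, including its Location field
DESCRIBE FORMATTED par10 PARTITION (id='2vb');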

Setup workflow to trigger website build and deployment on docs change

Following up #169

Our documentation is stored in this repository; however, the website source code is stored at https://github.com/ByConity/byconity.github.io. To make sure the docs on the deployed website are up to date, we need to set up a workflow to automate this.

Suggested procedure:

  1. Whenever any documentation file is changed in the master branch, trigger a pipeline to copy the changes to https://github.com/ByConity/byconity.github.io, main branch.
  2. https://github.com/ByConity/byconity.github.io will then trigger a build and deployment with the latest content.
  3. We can allow this action to push to the main branch directly without PR.

File mapping:

Format:
ByConity path --> byconity.github.io path

/docs/en/*  --> /docs/*
/docs/zh-cn/* --> /i18n/zh-cn/docusaurus-plugin-content-blog/current

@WillemJiang, may I have your help with this?

crash in query to system.metastore

Describe the bug

The server crashes during a query to system.metastore.

Does it reproduce on recent release?

Yes

How to reproduce

10.3.0.30 :) SELECT * FROM system.metastore WHERE database = 'test' AND table  ='lc';

[byconity-server-0] 2023.01.12 15:59:46.695119 [ 236 ] {895ea68e-b0b3-4af9-a900-268031702217} <Debug> executeQuery: (from [::1]:33640) SELECT * FROM system.metastore WHERE database = 'test' AND table ='lc';
[byconity-server-0] 2023.01.12 15:59:46.702378 [ 236 ] {895ea68e-b0b3-4af9-a900-268031702217} <Trace> ContextAccess (default): Access granted: SELECT(database, table, uuid, meta_key) ON system.metastore
[byconity-server-0] 2023.01.12 15:59:46.702440 [ 236 ] {895ea68e-b0b3-4af9-a900-268031702217} <Trace> InterpreterSelectQuery: query: SELECT database, table, uuid, meta_key FROM system.metastore WHERE (database = 'test') AND (table = 'lc')
[byconity-server-0] 2023.01.12 15:59:46.703484 [ 236 ] {895ea68e-b0b3-4af9-a900-268031702217} <Debug> DatabaseCnch (test): Create database test in query
[byconity-server-0] 2023.01.12 15:59:46.705202 [ 236 ] {895ea68e-b0b3-4af9-a900-268031702217} <Debug> StorageFactory:  engine name CnchMergeTree
[byconity-server-0] 2023.01.12 15:59:46.706114 [ 7089 ] <Fatal> BaseDaemon: ########################################
[byconity-server-0] 2023.01.12 15:59:46.706263 [ 7089 ] <Fatal> BaseDaemon: (version 21.8.7.1 scm cdw-2.0.0/2.0.0.115, build id: B1C8E2A14E2DBD1E5587FAA6B806FACBBA832810) (from thread 236) (query_id: 895ea68e-b0b3-4af9-a900-268031702217) Received signal Segmentation fault (11)
[byconity-server-0] 2023.01.12 15:59:46.706394 [ 7089 ] <Fatal> BaseDaemon: Address: 0x640 Access: read. Address not mapped to object.
[byconity-server-0] 2023.01.12 15:59:46.706513 [ 7089 ] <Fatal> BaseDaemon: Stack trace: 0x13378c44 0x132dacaa 0x14573e75 0x13f4f4a7 0x13f4943c 0x13f4860e 0x14104fac 0x14106399 0x1432ed74 0x1432cfbb 0x14bfa893 0x14c07c5c 0x18db01ec 0x18db06cc 0x18e9137a 0x18e8ef2c 0x7f4223f31fa3 0x7f422295006f
[byconity-server-0] 2023.01.12 15:59:46.706659 [ 7089 ] <Fatal> BaseDaemon: 3. DB::StorageSystemMetastore::fillData(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryInfo const&) const @ 0x13378c44 in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.706780 [ 7089 ] <Fatal> BaseDaemon: 4. DB::IStorageSystemOneBlock<DB::StorageSystemMetastore>::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0x132dacaa in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.706902 [ 7089 ] <Fatal> BaseDaemon: 5. DB::IStorage::read(DB::QueryPlan&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0x14573e75 in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.707023 [ 7089 ] <Fatal> BaseDaemon: 6. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0x13f4f4a7 in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.707153 [ 7089 ] <Fatal> BaseDaemon: 7. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>) @ 0x13f4943c in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.707269 [ 7089 ] <Fatal> BaseDaemon: 8. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x13f4860e in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.707362 [ 7089 ] <Fatal> BaseDaemon: 9. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x14104fac in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.707463 [ 7089 ] <Fatal> BaseDaemon: 10. DB::InterpreterSelectWithUnionQuery::execute() @ 0x14106399 in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.707568 [ 7089 ] <Fatal> BaseDaemon: 11. ? @ 0x1432ed74 in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.707710 [ 7089 ] <Fatal> BaseDaemon: 12. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0x1432cfbb in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.707837 [ 7089 ] <Fatal> BaseDaemon: 13. DB::TCPHandler::runImpl() @ 0x14bfa893 in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.707938 [ 7089 ] <Fatal> BaseDaemon: 14. DB::TCPHandler::run() @ 0x14c07c5c in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.708046 [ 7089 ] <Fatal> BaseDaemon: 15. Poco::Net::TCPServerConnection::start() @ 0x18db01ec in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.708149 [ 7089 ] <Fatal> BaseDaemon: 16. Poco::Net::TCPServerDispatcher::run() @ 0x18db06cc in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.708249 [ 7089 ] <Fatal> BaseDaemon: 17. Poco::PooledThread::run() @ 0x18e9137a in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.708348 [ 7089 ] <Fatal> BaseDaemon: 18. Poco::ThreadImpl::runnableEntry(void*) @ 0x18e8ef2c in /opt/byconity/usr/bin/clickhouse
[byconity-server-0] 2023.01.12 15:59:46.708434 [ 7089 ] <Fatal> BaseDaemon: 19. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
[byconity-server-0] 2023.01.12 15:59:46.708531 [ 7089 ] <Fatal> BaseDaemon: 20. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
[byconity-server-0] 2023.01.12 15:59:46.835526 [ 7089 ] <Fatal> BaseDaemon: Calculated checksum of the binary: 7B0734FCB0DE0271D880B340784BCEDA. There is no information about the reference checksum.

Expected behavior

Query works

Trying to ALTER RENAME key old_column_name column which is a part of key expression SQLSTATE: HY000

Step 1: create a table

CREATE TABLE db_name.table_name
(
order_by_column String,
old_column_name Int64
)
ENGINE = CnchMergeTree
ORDER BY (old_column_name)

Step 2: rename column

ALTER TABLE db_name.table_name RENAME COLUMN old_column_name TO new_column_name

ClickHouse exception, code: 524, host: 10.100.3.48, port: 8123; Code: 524, e.displayText() = DB::Exception: Trying to ALTER RENAME key old_column_name column which is a part of key expression SQLSTATE: HY000 (version 21.8.7.1)
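Since key columns generally cannot be renamed in place, a possible workaround is to copy the data into a new table that already uses the new column name and then swap the tables. This is only a sketch under that assumption, not a confirmed fix:

-- Hypothetical workaround: recreate the table with the new key column name
CREATE TABLE db_name.table_name_new
(
order_by_column String,
new_column_name Int64
)
ENGINE = CnchMergeTree
ORDER BY (new_column_name);

-- Copy the data, mapping the old column onto the new name
INSERT INTO db_name.table_name_new
SELECT order_by_column, old_column_name FROM db_name.table_name;

-- Swap the tables in a single statement
RENAME TABLE db_name.table_name TO db_name.table_name_old,
             db_name.table_name_new TO db_name.table_name;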

Several issues with secondary indexes

(you don't have to strictly follow this form)

Bug Report

  1. Secondary index is not used in query
create table t (x UInt32, y UInt32, index minmax_y y type minmax granularity 1) engine=CnchMergeTree order by x;
insert into t select number, number from numbers(100000);
select count() from t where y = 1;

The SELECT query should read only 1 mark, but it currently does a full scan:

[screenshot]

Log from worker:

[screenshot]

  2. MATERIALIZE INDEX is not yet supported:

[screenshot]
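For reference, the statement in question is presumably the ClickHouse-style MATERIALIZE INDEX command (a sketch using the index from the first example):

-- Build the existing skip index for parts written before the index was added
ALTER TABLE t MATERIALIZE INDEX minmax_y;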

DB::Exception: Division by zero while executing functions

Bug Report

Briefly describe the bug

Code: 153. DB::Exception: Received from 10.149.54.214:18684. DB::Exception: Division by zero: while executing 'FUNCTION divide(CAST(expr#count(), 'decimal(15, 4)') :: 3, CAST(expr#count()_1, 'decimal(15, 4)') :: 0) -> divide(CAST(expr#count(), 'decimal(15, 4)'), CAST(expr#count()_1, 'decimal(15, 4)')) Decimal(18, 6) : 2' SQLSTATE: 22012.

The result you expected

Correct output, like:
┌─am_pm_ratio─┐
│ 0.603480 │
└─────────────┘

How to Reproduce

Run the TPC-DS 1GB test, SQL query 90.
select cast(amc as decimal(15,4))/cast(pmc as decimal(15,4)) am_pm_ratio
from ( select count() amc
from web_sales, household_demographics , time_dim, web_page
where ws_sold_time_sk = time_dim.t_time_sk
and ws_ship_hdemo_sk = household_demographics.hd_demo_sk
and ws_web_page_sk = web_page.wp_web_page_sk
and time_dim.t_hour between 8 and 8+1
and household_demographics.hd_dep_count = 6
and web_page.wp_char_count between 5000 and 5200) at,
( select count() pmc
from web_sales, household_demographics , time_dim, web_page
where ws_sold_time_sk = time_dim.t_time_sk
and ws_ship_hdemo_sk = household_demographics.hd_demo_sk
and ws_web_page_sk = web_page.wp_web_page_sk
and time_dim.t_hour between 19 and 19+1
and household_demographics.hd_dep_count = 6
and web_page.wp_char_count between 5000 and 5200) pt
order by am_pm_ratio
limit 100;

But the same SQL runs successfully on the 100GB test, so the issue depends on the data.
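Until the underlying issue is fixed, a possible guard is to turn a zero denominator into NULL with the standard nullIf function, so the division yields NULL instead of throwing (a sketch; the subqueries are unchanged from above):

-- nullIf(x, 0) returns NULL when x = 0, avoiding Division by zero
select cast(amc as decimal(15,4)) / nullIf(cast(pmc as decimal(15,4)), 0) am_pm_ratio
from ( /* amc subquery as above */ ) at,
     ( /* pmc subquery as above */ ) pt
order by am_pm_ratio
limit 100;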

Version

v0.2.1

Roadmap 2023

Welcome to share your ideas on the roadmap. The updated roadmap for Q3 and Q4 is shown below.

Storage

  • Data cache preload - Q1
    #189
  • Object store (S3) support preview - Q2
    #260
  • #545
  • IO scheduler to improve remote read performance - Q3

External Table/Data Lake (project https://github.com/orgs/ByConity/projects/2)

  • Hive usability improvement (e.g., schema auto inference) - Q3
    #315
  • Hudi COW and MOR support - Q3
    #360
  • Multi-catalog (Glue/Hive) support - Q3
    #361
  • Hive query execution improvement - Q3-Q4
    #550
    #551
    #220

Index

  • Index cache - Q2
    #209
  • Inverted index phase 2 - Q4

Runtime

  • Projection support - Q3
  • Grace hash join - Q3
  • Adaptive query scheduling - Q3
  • Common table expression (CTE) reuse - Q3
  • Materialized view - Q4
  • Extract, Load, Transform (ELT) phase 1 - Q3
    Asynchronous query execution, query queue, join spill
  • Extract, Load, Transform (ELT) phase 2 - Q4
    Exchange spill, colocated scheduling, batch execution
  • SQL UDF support - Q4
    #427

Optimizer

  • CBO statistics auto collection - Q3
  • SQL plan management (manually creating binding) - Q3

Transaction

Enterprise feature

  • #547
  • #548
  • Multi-tenant support - Q4
  • FoundationDB backup & restore - Q4

Performance improvement

  • Part cache lockless scan - Q3
  • #544
  • #549
  • Column min/max for part pruning - Q4

Stability

  • Query auto forwarding among multiple servers - Q1
    #208
  • FoundationDB CAS usage improvement - Q1
    #145
    #185
  • Server isolation - Q3
  • Storage based HA support - Q4
  • Metrics enhancement for better observability - Q4

Installation

CI

  • Auto testing script and test guideline for developers - Q1
    #206
  • Enrich CI test suite - Q1

S3Disk support as main storage layer.

Use case
AWS, GCP and other main cloud vendors.

Where running your own HDFS cluster could be avoided.

Additional context
It seems that storage_policy requires a volume named hdfs.
If you try to use something other than an HDFS disk in that volume, it returns an error during writes:

Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Received from 10.3.1.221:9000. DB::Exception: Writing part to hdfs but write buffer is not hdfs: While executing SinkToOutputStream SQLSTATE: HY000.

There is no session or session context has expired

2023.01.12 15:10:31.676427 [ 186 ] {0d80b5cc-af85-45ec-afc4-ead822a01dda} executeQuery: Code: 113, e.displayText() = DB::Exception: There is no session or session context has expired SQLSTATE: HY000 (version 21.8.7.1) (from 10.20.1.240:24284) (in query: set enable_optimizer=1), Stack trace (when copying this message, always include the lines below):

  1. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int) @ 0x18e345d2 in /opt/byconity/usr/bin/clickhouse
  2. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int, bool) @ 0xb230820 in /opt/byconity/usr/bin/clickhouse
  3. DB::Context::getSessionContext() const @ 0x13b8f621 in /opt/byconity/usr/bin/clickhouse
  4. DB::InterpreterSetQuery::execute() @ 0x14109e6a in /opt/byconity/usr/bin/clickhouse
  5. ? @ 0x1432ed74 in /opt/byconity/usr/bin/clickhouse
  6. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptrDB::Context, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&)>, std::__1::optionalDB::FormatSettings const&, bool) @ 0x14332b5f in /opt/byconity/usr/bin/clickhouse
  7. DB::HTTPHandler::processQuery(std::__1::shared_ptrDB::Context, DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optionalDB::CurrentThread::QueryScope&) @ 0x14bbe7cb in /opt/byconity/usr/bin/clickhouse
  8. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x14bc12af in /opt/byconity/usr/bin/clickhouse
  9. DB::HTTPServerConnection::run() @ 0x14c0e979 in /opt/byconity/usr/bin/clickhouse
  10. Poco::Net::TCPServerConnection::start() @ 0x18db01ec in /opt/byconity/usr/bin/clickhouse
  11. Poco::Net::TCPServerDispatcher::run() @ 0x18db06cc in /opt/byconity/usr/bin/clickhouse
  12. Poco::PooledThread::run() @ 0x18e9137a in /opt/byconity/usr/bin/clickhouse
  13. Poco::ThreadImpl::runnableEntry(void*) @ 0x18e8ef2c in /opt/byconity/usr/bin/clickhouse
  14. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
  15. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so

2023.01.12 15:10:31.676925 [ 186 ] {0d80b5cc-af85-45ec-afc4-ead822a01dda} DynamicQueryHandler: Code: 113, e.displayText() = DB::Exception: There is no session or session context has expired SQLSTATE: HY000, Stack trace (when copying this message, always include the lines below):

  1. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int) @ 0x18e345d2 in /opt/byconity/usr/bin/clickhouse
  2. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int, bool) @ 0xb230820 in /opt/byconity/usr/bin/clickhouse
  3. DB::Context::getSessionContext() const @ 0x13b8f621 in /opt/byconity/usr/bin/clickhouse
  4. DB::InterpreterSetQuery::execute() @ 0x14109e6a in /opt/byconity/usr/bin/clickhouse
  5. ? @ 0x1432ed74 in /opt/byconity/usr/bin/clickhouse
  6. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptrDB::Context, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&)>, std::__1::optionalDB::FormatSettings const&, bool) @ 0x14332b5f in /opt/byconity/usr/bin/clickhouse
  7. DB::HTTPHandler::processQuery(std::__1::shared_ptrDB::Context, DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optionalDB::CurrentThread::QueryScope&) @ 0x14bbe7cb in /opt/byconity/usr/bin/clickhouse
  8. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x14bc12af in /opt/byconity/usr/bin/clickhouse
  9. DB::HTTPServerConnection::run() @ 0x14c0e979 in /opt/byconity/usr/bin/clickhouse
  10. Poco::Net::TCPServerConnection::start() @ 0x18db01ec in /opt/byconity/usr/bin/clickhouse
  11. Poco::Net::TCPServerDispatcher::run() @ 0x18db06cc in /opt/byconity/usr/bin/clickhouse
  12. Poco::PooledThread::run() @ 0x18e9137a in /opt/byconity/usr/bin/clickhouse
  13. Poco::ThreadImpl::runnableEntry(void*) @ 0x18e8ef2c in /opt/byconity/usr/bin/clickhouse
  14. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
  15. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
    (version 21.8.7.1)

DB::Exception: Cannot read all marks from file xxx/xxx/data, eof: 0, buffer size: 6866, file size: 176688: While executing MergeTreeThread SQLSTATE: 22000.

I was using ClickHouse's official Star Schema Benchmark to test ByConity.
When I tried to create a big flat table lineorder_flat (about 40 columns) by INNER JOINing 4 tables (the biggest has about 600 million rows), I got this exception:

Received exception from server (version 21.8.7): Code: 33. DB::Exception: Received from 127.0.0.1:56871. DB::Exception: Received from 127.0.0.1:53749. DB::Exception: Received from 127.0.0.1:41655. DB::Exception: Cannot read all marks from file c1133162-7809-4269-8ad5-41243831e384/1996_439329587933478912_439330151713472512_2_439330217603104768_0/data, eof: 0, buffer size: 6866, file size: 176688: While executing MergeTreeThread SQLSTATE: 22000.

It seems to be a memory issue. I have not seen this error before when using ClickHouse; I assume it is new in ByConity? Is this a problem with the HDFS-related configurations, or do I need to set several ByConity settings correctly?

Thank you very much.

Queries against the following tables report errors

system.grants
system.quota_limits
system.quota_usage
system.quotas
system.quotas_usage
system.role_grants
system.roles
system.row_policies
system.settings_profile_elements
system.settings_profiles
system.users

SQL error [497]: ClickHouse exception, code: 497, host: 10.100.3.48, port: 8123; Code: 497, e.displayText() = DB::Exception: default: Not enough privileges. To execute this query it's necessary to have grant SHOW USERS ON . SQLSTATE: HY000 (version 21.8.7.1)
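If ByConity follows ClickHouse's RBAC syntax, granting the privilege named in the error message might unblock the query (a sketch; note that a later issue below suggests permissions are not fully designed yet, so this may not work):

-- Grant the privilege the error message asks for to the default user
GRANT SHOW USERS ON *.* TO default;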

A quick start for newcomers in the community

Community members who are familiar with ByConity could write some guides for newcomers, such as module introductions or code walkthrough documents.
Otherwise newcomers may not know how to start.

system.distributed_ddl_queue

The system.distributed_ddl_queue table causes the server to exit. Permissions have not been designed yet, so permission-related tables could be removed from the system database for now. SQL error [497]: ClickHouse exception, code: 497, host: 10.100.3.48, port: 8123; Code: 497, e.displayText() = DB::Exception: default: Not enough privileges. To execute this query it's necessary to have grant SHOW SETTINGS PROFILES ON . SQLSTATE: HY000 (version 21.8.7.1)

ClickHouse response without column names

DBeaver error:

SELECT * FROM system.cnch_tables where database = currentDatabase() and name = 'dts_bucket_with_split_number_n_range' FORMAT CSV;

SQL error [1002]: ClickHouse exception, code: 1002, host: 10.100.3.48, port: 8123; ClickHouse response without column names
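A possible workaround, assuming ByConity supports ClickHouse's standard output formats: request a format that carries column names, such as CSVWithNames:

SELECT * FROM system.cnch_tables WHERE database = currentDatabase() AND name = 'dts_bucket_with_split_number_n_range' FORMAT CSVWithNames;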

Application: DB::Exception: cnch_config not found SQLSTATE: F0000

I am trying ByConity: I pulled its source code onto my machine (Linux, CentOS) and am installing it manually.

I have finished compiling (it generated binaries such as clickhouse) and have installed the required FoundationDB and HDFS dependencies.

I have also changed the values inside the relevant configuration files, but I'm not sure if I changed them correctly (mainly the .xml files in the deploy/template folder).

When I run the following steps in the installation tutorial:

Run the deploy script in a separate terminal. template_paths and program_dir args are compulsory
cd ByConity/deploy
python3.9 deploy.py --template_paths template/byconity-server.xml template/byconity-worker.xml --program_dir /home/ByConity/build/programs

Several of the services I start report this error:
<Error> Application: DB::Exception: cnch_config not found SQLSTATE: F0000

How can I fix this error?

Segmentation fault when executing INSERT INTO

Bug Report

[screenshot]
[screenshot]
[n196-081-196] 2023.03.22 03:33:05.131951 [ 286 ] BaseDaemon: ########################################
[n196-081-196] 2023.03.22 03:33:05.132052 [ 286 ] BaseDaemon: (version 21.8.7.1 scm 1.0.0.0, build id: 551E8F5B2EBF293E) (from thread 171) (query_id: b4a194b2-04f6-4451-b691-a81135d06eeb) Received signal Segmentation fault (11)
[n196-081-196] 2023.03.22 03:33:05.132090 [ 286 ] BaseDaemon: Address: 0xffffffffffffff58 Access: read. Address not mapped to object.
[n196-081-196] 2023.03.22 03:33:05.132127 [ 286 ] BaseDaemon: Stack trace: 0x14b6ae10 0x14a7c105 0x14058d72 0x140569ac 0x140560f4 0x140561ae 0x13aa1943 0x14d37caf 0x14d32b78 0x14d3fe5c 0x18f04c4c 0x18f0512c 0x18fe5dda 0x18fe398c 0x7f48ce31bea7 0x7f48cce2da2f
[n196-081-196] 2023.03.22 03:33:05.207677 [ 286 ] BaseDaemon: 3. /data01/minh.dao/git/ByConity/src/Storages/MergeTree/MergeTreeDataWriter.cpp:364: DB::MergeTreeDataWriter::writeTempPart(DB::BlockWithPartition&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, unsigned long, long, long) @ 0x14b6ae10 in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.263677 [ 286 ] BaseDaemon: 4. /data01/minh.dao/git/ByConity/src/Storages/MergeTree/MergeTreeBlockOutputStream.cpp:31: DB::MergeTreeBlockOutputStream::write(DB::Block const&) @ 0x14a7c105 in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.323805 [ 286 ] BaseDaemon: 5. /data01/minh.dao/git/ByConity/src/DataStreams/PushingToViewsBlockOutputStream.cpp:190: DB::PushingToViewsBlockOutputStream::write(DB::Block const&) @ 0x14058d72 in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.335528 [ 286 ] BaseDaemon: 6. /data01/minh.dao/git/ByConity/src/DataStreams/AddingDefaultBlockOutputStream.cpp:0: DB::AddingDefaultBlockOutputStream::write(DB::Block const&) @ 0x140569ac in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.340081 [ 286 ] BaseDaemon: 7. /data01/minh.dao/git/ByConity/src/DataStreams/SquashingBlockOutputStream.cpp:0: DB::SquashingBlockOutputStream::finalize() @ 0x140560f4 in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.344551 [ 286 ] BaseDaemon: 8.1. inlined from /data01/minh.dao/git/ByConity/contrib/libcxx/include/memory:2844: std::__1::shared_ptrDB::IBlockOutputStream::operator->() const
[n196-081-196] 2023.03.22 03:33:05.344578 [ 286 ] BaseDaemon: 8. /data01/minh.dao/git/ByConity/src/DataStreams/SquashingBlockOutputStream.cpp:51: DB::SquashingBlockOutputStream::writeSuffix() @ 0x140561ae in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.354364 [ 286 ] BaseDaemon: 9.1. inlined from /data01/minh.dao/git/ByConity/src/Common/Stopwatch.h:32: Stopwatch::elapsedNanoseconds() const
[n196-081-196] 2023.03.22 03:33:05.354400 [ 286 ] BaseDaemon: 9.2. inlined from /data01/minh.dao/git/ByConity/src/Common/Stopwatch.h:34: Stopwatch::elapsedMilliseconds() const
[n196-081-196] 2023.03.22 03:33:05.354419 [ 286 ] BaseDaemon: 9. /data01/minh.dao/git/ByConity/src/DataStreams/CountingBlockOutputStream.cpp:78: DB::CountingBlockOutputStream::writeSuffix() @ 0x13aa1943 in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.443781 [ 286 ] BaseDaemon: 10. /data01/minh.dao/git/ByConity/src/Server/TCPHandler.cpp:614: DB::TCPHandler::processInsertQuery(DB::Settings const&) @ 0x14d37caf in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.524204 [ 286 ] BaseDaemon: 11.1. inlined from /data01/minh.dao/git/ByConity/src/Common/Stopwatch.h:43: Stopwatch::nanoseconds() const
[n196-081-196] 2023.03.22 03:33:05.524259 [ 286 ] BaseDaemon: 11.2. inlined from /data01/minh.dao/git/ByConity/src/Common/Stopwatch.h:28: Stopwatch::stop()
[n196-081-196] 2023.03.22 03:33:05.524280 [ 286 ] BaseDaemon: 11.3. inlined from /data01/minh.dao/git/ByConity/src/DataStreams/BlockIO.h:68: DB::BlockIO::onFinish()
[n196-081-196] 2023.03.22 03:33:05.524320 [ 286 ] BaseDaemon: 11. /data01/minh.dao/git/ByConity/src/Server/TCPHandler.cpp:366: DB::TCPHandler::runImpl() @ 0x14d32b78 in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.620599 [ 286 ] BaseDaemon: 12. /data01/minh.dao/git/ByConity/src/Server/TCPHandler.cpp:1874: DB::TCPHandler::run() @ 0x14d3fe5c in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.622078 [ 286 ] BaseDaemon: 13. /data01/minh.dao/git/ByConity/contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x18f04c4c in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.626651 [ 286 ] BaseDaemon: 14. /data01/minh.dao/git/ByConity/contrib/poco/Net/src/TCPServerDispatcher.cpp:114: Poco::Net::TCPServerDispatcher::run() @ 0x18f0512c in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.630546 [ 286 ] BaseDaemon: 15.1. inlined from /data01/minh.dao/git/ByConity/contrib/poco/Foundation/include/Poco/ScopedLock.h:36: ScopedLock
[n196-081-196] 2023.03.22 03:33:05.630577 [ 286 ] BaseDaemon: 15. /data01/minh.dao/git/ByConity/contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x18fe5dda in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.633877 [ 286 ] BaseDaemon: 16.1. inlined from /data01/minh.dao/git/ByConity/contrib/poco/Foundation/include/Poco/SharedPtr.h:156: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicyPoco::Runnable >::assign(Poco::Runnable*)
[n196-081-196] 2023.03.22 03:33:05.633914 [ 286 ] BaseDaemon: 16.2. inlined from /data01/minh.dao/git/ByConity/contrib/poco/Foundation/include/Poco/SharedPtr.h:208: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicyPoco::Runnable >::operator=(Poco::Runnable*)
[n196-081-196] 2023.03.22 03:33:05.633936 [ 286 ] BaseDaemon: 16. /data01/minh.dao/git/ByConity/contrib/poco/Foundation/src/Thread_POSIX.cpp:360: Poco::ThreadImpl::runnableEntry(void*) @ 0x18fe398c in /root/app/usr/bin/clickhouse
[n196-081-196] 2023.03.22 03:33:05.634009 [ 286 ] BaseDaemon: 17. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so
[n196-081-196] 2023.03.22 03:33:05.634055 [ 286 ] BaseDaemon: 18. clone @ 0xfca2f in /lib/x86_64-linux-gnu/libc-2.31.so
[n196-081-196] 2023.03.22 03:33:05.759949 [ 286 ] BaseDaemon: Calculated checksum of the binary: 26CE6A2543BDC31DB6339A026B71E95F. There is no information about the reference checksum.

Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from 127.0.0.1:18684 SQLSTATE: 22000

Connecting to 127.0.0.1:18684 as user default.
Code: 210. DB::NetException: Connection refused (127.0.0.1:18684) SQLSTATE: 08000

Briefly describe the bug

Segmentation fault when executing INSERT INTO.

The result you expected

How to Reproduce

CREATE OR REPLACE TABLE test_jx ( id UInt64, name String ) ENGINE = MergeTree ORDER BY id;

insert into test_jx values(1,'1');

Version

Using the Docker image byconity/byconity-server:stable, which is pushed to the repository.
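For comparison, the same repro written against ByConity's documented Cnch engine (a sketch; whether plain MergeTree should be rejected or mapped is presumably part of this bug):

-- CnchMergeTree is the Cnch-engine counterpart of MergeTree in ByConity
CREATE OR REPLACE TABLE test_jx ( id UInt64, name String ) ENGINE = CnchMergeTree ORDER BY id;

insert into test_jx values(1,'1');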

create stats if not exists helloworld.*; reports an error

2023.01.16 08:40:21.942519 [ 204 ] {b516140f-19fc-41bb-b739-dec2cf3db000} executeQuery: Code: 62, e.displayText() = DB::Exception: Syntax error: failed at position 39 ('*'): *. Expected identifier SQLSTATE: 42000 (version 21.8.7.1) (from 10.20.1.240:10097) (in query: create stats if not exists helloworld.*), Stack trace (when copying this message, always include the lines below):

  1. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int) @ 0x18e345d2 in /opt/byconity/usr/bin/clickhouse

  2. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int, bool) @ 0xb230820 in /opt/byconity/usr/bin/clickhouse

  3. DB::parseQueryAndMovePosition(DB::IParser&, char const*&, char const*, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, bool, unsigned long, unsigned long) @ 0x154651de in /opt/byconity/usr/bin/clickhouse

  4. DB::parseQuery(DB::IParser&, char const*, char const*, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, unsigned long, unsigned long) @ 0x15465262 in /opt/byconity/usr/bin/clickhouse

  5. ? @ 0x1432d414 in /opt/byconity/usr/bin/clickhouse

  6. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptrDB::Context, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&)>, std::__1::optionalDB::FormatSettings const&, bool) @ 0x14332b5f in /opt/byconity/usr/bin/clickhouse

  7. DB::HTTPHandler::processQuery(std::__1::shared_ptrDB::Context, DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optionalDB::CurrentThread::QueryScope&) @ 0x14bbe7cb in /opt/byconity/usr/bin/clickhouse

  8. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x14bc12af in /opt/byconity/usr/bin/clickhouse

  9. DB::HTTPServerConnection::run() @ 0x14c0e979 in /opt/byconity/usr/bin/clickhouse

  10. Poco::Net::TCPServerConnection::start() @ 0x18db01ec in /opt/byconity/usr/bin/clickhouse

  11. Poco::Net::TCPServerDispatcher::run() @ 0x18db06cc in /opt/byconity/usr/bin/clickhouse

  12. Poco::PooledThread::run() @ 0x18e9137a in /opt/byconity/usr/bin/clickhouse

  13. Poco::ThreadImpl::runnableEntry(void*) @ 0x18e8ef2c in /opt/byconity/usr/bin/clickhouse

  14. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so

  15. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
2023.01.16 08:40:21.942746 [ 204 ] {b516140f-19fc-41bb-b739-dec2cf3db000} DynamicQueryHandler: Code: 62, e.displayText() = DB::Exception: Syntax error: failed at position 39 ('*'): *. Expected identifier SQLSTATE: 42000, Stack trace (when copying this message, always include the lines below):

  1. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int) @ 0x18e345d2 in /opt/byconity/usr/bin/clickhouse

  2. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int, bool) @ 0xb230820 in /opt/byconity/usr/bin/clickhouse

  3. DB::parseQueryAndMovePosition(DB::IParser&, char const*&, char const*, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, bool, unsigned long, unsigned long) @ 0x154651de in /opt/byconity/usr/bin/clickhouse

  4. DB::parseQuery(DB::IParser&, char const*, char const*, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, unsigned long, unsigned long) @ 0x15465262 in /opt/byconity/usr/bin/clickhouse

  5. ? @ 0x1432d414 in /opt/byconity/usr/bin/clickhouse

  6. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptrDB::Context, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&)>, std::__1::optionalDB::FormatSettings const&, bool) @ 0x14332b5f in /opt/byconity/usr/bin/clickhouse

  7. DB::HTTPHandler::processQuery(std::__1::shared_ptrDB::Context, DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optionalDB::CurrentThread::QueryScope&) @ 0x14bbe7cb in /opt/byconity/usr/bin/clickhouse

  8. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x14bc12af in /opt/byconity/usr/bin/clickhouse

  9. DB::HTTPServerConnection::run() @ 0x14c0e979 in /opt/byconity/usr/bin/clickhouse

  10. Poco::Net::TCPServerConnection::start() @ 0x18db01ec in /opt/byconity/usr/bin/clickhouse

  11. Poco::Net::TCPServerDispatcher::run() @ 0x18db06cc in /opt/byconity/usr/bin/clickhouse

  12. Poco::PooledThread::run() @ 0x18e9137a in /opt/byconity/usr/bin/clickhouse

  13. Poco::ThreadImpl::runnableEntry(void*) @ 0x18e8ef2c in /opt/byconity/usr/bin/clickhouse

  14. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so

  15. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
    (version 21.8.7.1)
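Until the wildcard form is accepted by the parser, creating stats per table is presumably the way to go (a sketch reusing a table name from the earlier issues):

create stats if not exists helloworld.my_first_table;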

explain pipeline select xxx error

When I execute the following SQL statement, the server hangs up.

set enable_optimizer = 1;
explain pipeline select xxx from xxx as A join xxx as B on A.name = B.name where A.name is not null;

The same SQL statement executes without errors on native ClickHouse.

What does "psm" stand for?

From my point of view, "psm" may be some kind of URI for a service, and may act as the name of a certain service, but I haven't figured out what "psm" stands for.

Thanks.

CI is not ready yet

We are still working hard to make CI work so that everyone can contribute.

Data merge error after deploying the ByConity cluster

I deployed a ByConity cluster using Docker, with 3 worker-default nodes for queries and 1 worker-write node for data writing.
Data writes work normally, and 3.5 billion rows have been loaded, but I found that the data is not merged. I manually executed optimize table xxx.yyy on the server node and got an error:
DB::Exception: No available service for data.cnch.vm.
I suspect there is a problem with the configuration of my cnch-config.yml file. I did not find this configuration item in the file; I based my configuration on the one from the byconity-docker project. Do I need to configure the vm information in this file? I configured resource_manager in this file, so why should I configure vm?

TSO deployment cannot be finished

When using tmux + deploy.py to start ByConity servers and the client, the TSO process got stuck:

Processing configuration file '/xxx/xxx/xx/byconity/deploy/cluster/tso/tso.xml'.
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Cannot set max size of core file to 1073741824
Logging trace to /xxx/xxx/xx/byconity/deploy/cluster/Server-0/log/clickhouse.log
Logging errors to /xxx/xxx/xx/byconity/deploy/cluster/Server-0/log/clickhouse.err.log
Logging trace to console
2023.02.07 20:30:39.592065 [ 1827733 ] {} <Information> SentryWriter: Sending crash reports is disabled
2023.02.07 20:30:39.592122 [ 1827733 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2023.02.07 20:30:39.706222 [ 1827733 ] {} <Information> : Starting ClickHouse 21.8.7.1 scm 1.0.0.0 with revision 54453, build id: 6ED0B82E77874EF6CEC6CAA3C0111F770AE5C29F, PID 1827733
Processing configuration file '/xxx/xxx/xx/byconity/deploy/template/cnch_config.xml'.
2023.02.07 20:30:39.708153 [ 1827733 ] {} <Trace> Application: host_port: xx.xx.xx.xx:49963

(xx is used as a mask)

Checking the process state with ps aux:

root     1827732  0.0  0.0 213772  3112 pts/3    Ss+  20:30   0:00 bash -c /xx/xx/xx/byconity/build/programs/tso-server --config /xx/xx/xx/byconity/deploy/cluster/tso/tso.xml; while true; do sleep 2; done
root     1827733  0.3  0.3 4365396 227352 pts/3  Sl+  20:30   0:00 /xx/xx/xx/byconity/build/programs/tso-server --config /xx/xx/xx/byconity/deploy/cluster/tso/tso.xml

General questions about ByConity team plans

Hi,

First I want to say thanks for open sourcing it.

It seems that it has in production quite a few features that are not production-ready in ClickHouse, or not yet implemented:

  1. Transactions
  2. CBO, HBO and other analyzers and optimizations.
  3. Statistics
  4. Resource Pools & management
  5. Complete separation of Storage & Compute
  6. Shared Metadata Storage
  7. Exactly once Kafka tables (with auto scaling?)
  8. Refresh for Materialized Views
  9. DELETE FROM (lightweight deletes?)
  10. Shuffle Join (probably?)
  11. New aggregate and regular functions
  12. Numa aware compute?
  13. Unique key support in MergeTree (BTW, there is an ongoing PR with the same name (though I am not sure about the implementation details) for ClickHouse, ClickHouse/ClickHouse#44534; is it somehow related to you? (I see that it's being made by Tencent folks, and ByConity is from ByteDance, but any chance?))

I'm also interested in some questions about your team's plans.

  1. Are you going to try to keep up with recent ClickHouse releases?
    1.1 Perhaps even contribute some features from ByConity back to ClickHouse, to make the total diff smaller and easier to manage? (e.g., having different transaction implementations in ClickHouse and ByConity will make life harder)
    1.2 Is it going to stay based on ClickHouse in the future, or be rewritten over time into a completely separate project?
  2. Am I right that it's being used in ByteHouse? (I've heard that ByteHouse had a much older version of ClickHouse, but this one seems pretty recent, ~21.8)

BTW, it would be great to have a complete list of distinguishing features compared to ClickHouse.

Executing the following caused the server to crash

CREATE WAREHOUSE
IF NOT EXISTS vw_default
SETTINGS num_workers = 1, type = 'Default',
auto_suspend = 3600, auto_resume = 1,
min_worker_groups = 0, max_worker_groups = 1, max_concurrent_queries=200;

log:
2023.01.16 08:53:36.257992 [ 261 ] {} BaseDaemon: ########################################
2023.01.16 08:53:36.258131 [ 261 ] {} BaseDaemon: (version 21.8.7.1 scm cdw-2.0.0/2.0.0.115, build id: B1C8E2A14E2DBD1E5587FAA6B806FACBBA832810) (from thread 204) (query_id: 78231797-fbf2-4466-af51-bfc6ca2c4911) Received signal Segmentation fault (11)
2023.01.16 08:53:36.258296 [ 261 ] {} BaseDaemon: Address: 0xc0 Access: read. Address not mapped to object.
2023.01.16 08:53:36.258354 [ 261 ] {} BaseDaemon: Stack trace: 0xe23d4f0 0x135acb99 0x13efb8ab 0x1432ed74 0x14332b5f 0x14bbe7cb 0x14bc12af 0x14c0e979 0x18db01ec 0x18db06cc 0x18e9137a 0x18e8ef2c 0x7f6d11d83fa3 0x7f6d107a206f
2023.01.16 08:53:36.258526 [ 261 ] {} BaseDaemon: 3. DB::WithContextImpl<std::__1::shared_ptr<DB::Context const> >::getContext() const @ 0xe23d4f0 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.258607 [ 261 ] {} BaseDaemon: 4. DB::ResourceManagement::ResourceManagerClient::createVirtualWarehouse(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, DB::ResourceManagement::VirtualWarehouseSettings const&, bool) @ 0x135acb99 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.258688 [ 261 ] {} BaseDaemon: 5. DB::InterpreterCreateWarehouseQuery::execute() @ 0x13efb8ab in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.258737 [ 261 ] {} BaseDaemon: 6. ? @ 0x1432ed74 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.258828 [ 261 ] {} BaseDaemon: 7. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptrDB::Context, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&)>, std::__1::optionalDB::FormatSettings const&, bool) @ 0x14332b5f in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.258930 [ 261 ] {} BaseDaemon: 8. DB::HTTPHandler::processQuery(std::__1::shared_ptrDB::Context, DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optionalDB::CurrentThread::QueryScope&) @ 0x14bbe7cb in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.259029 [ 261 ] {} BaseDaemon: 9. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x14bc12af in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.259101 [ 261 ] {} BaseDaemon: 10. DB::HTTPServerConnection::run() @ 0x14c0e979 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.259196 [ 261 ] {} BaseDaemon: 11. Poco::Net::TCPServerConnection::start() @ 0x18db01ec in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.259281 [ 261 ] {} BaseDaemon: 12. Poco::Net::TCPServerDispatcher::run() @ 0x18db06cc in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.259352 [ 261 ] {} BaseDaemon: 13. Poco::PooledThread::run() @ 0x18e9137a in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.259411 [ 261 ] {} BaseDaemon: 14. Poco::ThreadImpl::runnableEntry(void*) @ 0x18e8ef2c in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:53:36.259468 [ 261 ] {} BaseDaemon: 15. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
2023.01.16 08:53:36.259527 [ 261 ] {} BaseDaemon: 16. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
2023.01.16 08:53:36.448579 [ 261 ] {} BaseDaemon: Calculated checksum of the binary: 7B0734FCB0DE0271D880B340784BCEDA. There is no information about the reference checksum.

Simplify local installation and optimize documents

  1. Directly provide binary installation packages
  2. Provide installation documents for FDB and HDFS
  3. Reduce local installation dependencies (currently Python 3.9 is required)
  4. Detailed documents on cluster deployment and cluster configuration
  5. Remove functions and tables that are not supported by ByConity, to avoid misunderstanding
  6. Detailed usage documents for all supported table engines

Not supported background thread ClusteringThread in server rpc log

@dmthuc Please help take a look at this exception in the server log.


0. /data01/zhaojie.niu/byconity/ByConity/build/../contrib/libcxx/include/exception:133: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18f84df2 in /data01/zhaojie.niu/byconity/ByConity/build/programs/clickhouse
1. /data01/zhaojie.niu/byconity/ByConity/build/../src/Common/Exception.cpp:92: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb2aa020 in /data01/zhaojie.niu/byconity/ByConity/build/programs/clickhouse
2. /data01/zhaojie.niu/byconity/ByConity/build/../src/CloudServices/CnchBGThreadsMap.cpp:0: DB::CnchBGThreadsMap::createThread(DB::StorageID const&) @ 0x13d09146 in /data01/zhaojie.niu/byconity/ByConity/build/programs/clickhouse
3. /data01/zhaojie.niu/byconity/ByConity/build/../contrib/libcxx/include/type_traits:3935: DB::CnchBGThreadsMap::getOrCreateThread(DB::StorageID const&) @ 0x13d0a6bf in /data01/zhaojie.niu/byconity/ByConity/build/programs/clickhouse
4. /data01/zhaojie.niu/byconity/ByConity/build/../src/CloudServices/CnchBGThreadsMap.cpp:108: DB::CnchBGThreadsMap::startThread(DB::StorageID const&) @ 0x13d0931f in /data01/zhaojie.niu/byconity/ByConity/build/programs/clickhouse
5. /data01/zhaojie.niu/byconity/ByConity/build/../contrib/libcxx/include/memory:3211: DB::CnchBGThreadsMap::controlThread(DB::StorageID const&, DB::CnchBGThread::Action) @ 0x13d0925a in /data01/zhaojie.niu/byconity/ByConity/build/programs/clickhouse
6. /data01/zhaojie.niu/byconity/ByConity/build/../contrib/libcxx/include/string:1444: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void DB::RPCHelpers::serviceHandler<DB::Protos::ControlCnchBGThreadResp, DB::CnchServerServiceImpl::controlCnchBGThread(google::protobuf::RpcController*, DB::Protos::ControlCnchBGThreadReq const*, DB::Protos::ControlCnchBGThreadResp*, google::protobuf::Closure*)::$_11>(google::protobuf::Closure*, DB::Protos::ControlCnchBGThreadResp*, DB::CnchServerServiceImpl::controlCnchBGThread(google::protobuf::RpcController*, DB::Protos::ControlCnchBGThreadReq const*, DB::Protos::ControlCnchBGThreadResp*, google::protobuf::Closure*)::$_11&&)::'lambda'()>(DB::Protos::ControlCnchBGThreadResp&&, DB::CnchServerServiceImpl::controlCnchBGThread(google::protobuf::RpcController*, DB::Protos::ControlCnchBGThreadReq const*, DB::Protos::ControlCnchBGThreadResp*, google::protobuf::Closure*)::$_11&&...)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x13662c96 in /data01/zhaojie.niu/byconity/ByConity/build/programs/clickhouse
7. /data01/zhaojie.niu/byconity/ByConity/build/../contrib/libcxx/include/functional:2210: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb2e127f in /data01/zhaojie.niu/byconity/ByConity/build/programs/clickhouse
8. /data01/zhaojie.niu/byconity/ByConity/build/../contrib/libcxx/include/memory:1655: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0xb2e555a in /data01/zhaojie.niu/byconity/ByConity/build/programs/clickhouse
9. start_thread @ 0x74a4 in /lib/x86_64-linux-gnu/libpthread-2.24.so
10. clone @ 0xe8d0f in /lib/x86_64-linux-gnu/libc-2.24.so
 (version 0.1.1.1)

Could I continue to use MergeTree table engine in ByConity?

Just a question, since CnchMergeTree tables can only be used in a Cnch engine database...

I just tried to build an Atomic database, and tried to create a MergeTree table:

$ create table st(`did` UInt32, `col` String) engine=MergeTree() order by did;

CREATE TABLE st
(
    `did` UInt32,
    `col` String
)
ENGINE = MergeTree
ORDER BY did

Query id: ba5e3263-0ac4-45fb-9fc9-e54373121ca4


0 rows in set. Elapsed: 0.009 sec. 

Received exception from server (version 21.8.7):
Code: 79. DB::Exception: Received from 127.0.0.1:34159. DB::Exception: MergeTree storages require data path SQLSTATE: 58P01. 

And I added a path...

$ create table st(`did` UInt32, `col` String) engine=MergeTree('mergetree/') order by did;

CREATE TABLE st
(
    `did` UInt32,
    `col` String
)
ENGINE = MergeTree('mergetree/')
ORDER BY did

Query id: 0251e700-c349-4681-828e-7962b1689ff6


Code: 42. DB::Exception: Received from 127.0.0.1:34159. DB::Exception: With extended storage definition syntax storage MergeTree requires no parameters

Syntax for the MergeTree table engine:

CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
    ...
    INDEX index_name1 expr1 TYPE type1(...) GRANULARITY value1,
    INDEX index_name2 expr2 TYPE type2(...) GRANULARITY value2
) ENGINE = MergeTree()
ORDER BY expr
[PARTITION BY expr]
[CLUSTER BY expr INTO <TOTAL_BUCKET_NUMBER> BUCKETS [SPLIT_NUMBER <SPLIT_NUMBER_VALUE>] [WITH_RANGE] ]
[PRIMARY KEY expr]
[SAMPLE BY expr]
[TTL expr [DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'], ...]
[SETTINGS name=value, ...]

See details in documentation: https://clickhouse.tech/docs/en/engines/table-engines/mergetree-family/mergetree/. Other engines of the family support different syntax, see details in the corresponding documentation topics.

If you use the Replicated version of engines, see https://clickhouse.tech/docs/en/engines/table-engines/mergetree-family/replication/.
 SQLSTATE: 42000. 

Is it a bug?
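For comparison, the Cnch-engine variant of the same table, which is what a Cnch database expects, would presumably be (a sketch):

create table st(`did` UInt32, `col` String) engine=CnchMergeTree order by did;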

DB::Exception: cannot write to file: Input/output error

This issue occurs occasionally when using INSERT INTO xxx in ByConity (manual deployment).
In addition, when we reproduced this error in our test environment, we found that after starting services such as TSO and the workers, we had to keep the terminal window where the service was started open; otherwise, the error would most likely occur once the windows were closed and the services were left running in the background. We suspect that the workers running in the background are not running correctly.

Important: when this error occurs, just restart the services and it will be fine.

The following is the log:

2023.03.14 14:49:46.708980 [ 46822 ] {5a113457-af3c-41ed-bc93-abee3d4d8a34} <Error> TCPHandler: Code: 1001, e.displayText() = DB::Exception: DB::Exception: cannot write to file: Input/output error SQLSTATE: HY000. SQLSTATE: HY000, Stack trace:

0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x187f3752 in /xx/xx/ByConity/clickhouse-server
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xae11020 in /xx/xx/ByConity/clickhouse-server
2. DB::readException(DB::ReadBuffer&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool) @ 0xae7a42f in /xx/xx/ByConity/clickhouse-server
3. DB::RPCHelpers::checkException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1334cbcb in /xx/xx/ByConity/clickhouse-server
4. DB::CnchServerClient::commitParts(DB::TxnTimestamp const&, DB::ManipulationType, DB::MergeTreeMetaBase&, std::__1::vector<std::__1::shared_ptr<DB::MergeTreeDataPartCNCH>, std::__1::allocator<std::__1::shared_ptr<DB::MergeTreeDataPartCNCH> > > const&, std::__1::vector<std::__1::shared_ptr<DB::DeleteBitmapMeta>, std::__1::allocator<std::__1::shared_ptr<DB::DeleteBitmapMeta> > > const&, std::__1::vector<std::__1::shared_ptr<DB::MergeTreeDataPartCNCH>, std::__1::allocator<std::__1::shared_ptr<DB::MergeTreeDataPartCNCH> > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<cppkafka::TopicPartition, std::__1::allocator<cppkafka::TopicPartition> > const&) @ 0x131d3f56 in /xx/xx/ByConity/clickhouse-server
5. DB::CnchServerClient::precommitParts(std::__1::shared_ptr<DB::Context const>, DB::TxnTimestamp const&, DB::ManipulationType, DB::MergeTreeMetaBase&, std::__1::vector<std::__1::shared_ptr<DB::MergeTreeDataPartCNCH>, std::__1::allocator<std::__1::shared_ptr<DB::MergeTreeDataPartCNCH> > > const&, std::__1::vector<std::__1::shared_ptr<DB::DeleteBitmapMeta>, std::__1::allocator<std::__1::shared_ptr<DB::DeleteBitmapMeta> > > const&, std::__1::vector<std::__1::shared_ptr<DB::MergeTreeDataPartCNCH>, std::__1::allocator<std::__1::shared_ptr<DB::MergeTreeDataPartCNCH> > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<cppkafka::TopicPartition, std::__1::allocator<cppkafka::TopicPartition> > const&) @ 0x131d45ef in /xx/xx/ByConity/clickhouse-server
6. DB::CnchDataWriter::commitDumpedParts(DB::DumpedData const&) @ 0x1323c975 in /xx/xx/ByConity/clickhouse-server
7. DB::CnchDataWriter::dumpAndCommitCnchParts(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart> > > const&, std::__1::vector<std::__1::shared_ptr<DB::LocalDeleteBitmap>, std::__1::allocator<std::__1::shared_ptr<DB::LocalDeleteBitmap> > > const&, std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart> > > const&) @ 0x1323a6ea in /xx/xx/ByConity/clickhouse-server
8. DB::CloudMergeTreeBlockOutputStream::write(DB::Block const&) @ 0x14553fd7 in /xx/xx/ByConity/clickhouse-server
9. DB::PushingToViewsBlockOutputStream::write(DB::Block const&) @ 0x13ba4d95 in /xx/xx/ByConity/clickhouse-server
10. DB::AddingDefaultBlockOutputStream::write(DB::Block const&) @ 0x13ba296c in /xx/xx/ByConity/clickhouse-server
11. DB::SquashingBlockOutputStream::finalize() @ 0x13ba2094 in /xx/xx/ByConity/clickhouse-server
12. DB::SquashingBlockOutputStream::writeSuffix() @ 0x13ba214e in /xx/xx/ByConity/clickhouse-server
13. DB::CountingBlockOutputStream::writeSuffix() @ 0x13636943 in /xx/xx/ByConity/clickhouse-server
14. DB::copyData(DB::IBlockInputStream&, DB::IBlockOutputStream&, std::__1::atomic<bool>*) @ 0x136556e6 in /xx/xx/ByConity/clickhouse-server
15. DB::NullAndDoCopyBlockInputStream::readImpl() @ 0x13b9fbfe in /xx/xx/ByConity/clickhouse-server
16. DB::IBlockInputStream::read() @ 0x13637405 in /xx/xx/ByConity/clickhouse-server
17. DB::AsynchronousBlockInputStream::calculate() @ 0x136330a4 in /xx/xx/ByConity/clickhouse-server
18. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::AsynchronousBlockInputStream::next()::$_0, void ()> >(std::__1::__function::__policy_storage const*) @ 0x13633390 in /xx/xx/ByConity/clickhouse-server
19. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xae4c06f in /xx/xx/ByConity/clickhouse-server
20. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xae4de0c in /xx/xx/ByConity/clickhouse-server
21. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xae48b65 in /xx/xx/ByConity/clickhouse-server
22. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0xae4d07a in /xx/xx/ByConity/clickhouse-server
23. ? @ 0x8f4b in /usr/lib64/libpthread-2.28.so
24. __clone @ 0xf8810 in /usr/lib64/libc-2.28.so

DB::Exception: Unknown packet 11 from one of the following replicas: SQLSTATE: 08S01.

I am trying to import data from external ORC files into a CnchMergeTree table; the data was generated with TPC-DS. Unfortunately, I have run into another problem.

Here is my CREATE TABLE query in ByConity:

CREATE TABLE hive.store_returns
(
    `sr_return_time_sk` Nullable(Int32),
    `sr_item_sk` Nullable(Int32),
    `sr_customer_sk` Nullable(Int32),
    `sr_cdemo_sk` Nullable(Int32),
    `sr_hdemo_sk` Nullable(Int32),
    `sr_addr_sk` Nullable(Int32),
    `sr_store_sk` Nullable(Int32),
    `sr_reason_sk` Nullable(Int32),
    `sr_ticket_number` Nullable(Int64),
    `sr_return_quantity` Nullable(Int32),
    `sr_return_amt` Nullable(Float32),
    `sr_return_tax` Nullable(Float32),
    `sr_return_amt_inc_tax` Nullable(Float32),
    `sr_fee` Nullable(Float32),
    `sr_return_ship_cost` Nullable(Float32),
    `sr_refunded_cash` Nullable(Float32),
    `sr_reversed_charge` Nullable(Float32),
    `sr_store_credit` Nullable(Float32),
    `sr_net_loss` Nullable(Float32),
    `sr_returned_date_sk` Int32 DEFAULT 1
)
ENGINE = CnchMergeTree
PARTITION BY sr_returned_date_sk
ORDER BY tuple()
SETTINGS storage_policy = 'cnch_default_hdfs', index_granularity = 8192

And the following is the command that I used to import data into the table:
[$]# cat /var/lib/docker/data/store_returns/sr_returned_date_sk=2452822/000004_0 \
  | kubectl -n byconity exec -it sts/byconity-server -- clickhouse client --query="
    INSERT INTO hive.store_returns
        (sr_return_time_sk, sr_item_sk, sr_customer_sk, sr_cdemo_sk, sr_hdemo_sk,
         sr_addr_sk, sr_store_sk, sr_reason_sk, sr_ticket_number, sr_return_quantity,
         sr_return_amt, sr_return_tax, sr_return_amt_inc_tax, sr_fee,
         sr_return_ship_cost, sr_refunded_cash, sr_reversed_charge, sr_store_credit,
         sr_net_loss, sr_returned_date_sk)
    SELECT
        sr_return_time_sk, sr_item_sk, sr_customer_sk, sr_cdemo_sk, sr_hdemo_sk,
        sr_addr_sk, sr_store_sk, sr_reason_sk, sr_ticket_number, sr_return_quantity,
        sr_return_amt, sr_return_tax, sr_return_amt_inc_tax, sr_fee,
        sr_return_ship_cost, sr_refunded_cash, sr_reversed_charge, sr_store_credit,
        sr_net_loss, 2
    FROM input('sr_return_time_sk Nullable(Int32), sr_item_sk Nullable(Int32),
        sr_customer_sk Nullable(Int32), sr_cdemo_sk Nullable(Int32),
        sr_hdemo_sk Nullable(Int32), sr_addr_sk Nullable(Int32),
        sr_store_sk Nullable(Int32), sr_reason_sk Nullable(Int32),
        sr_ticket_number Nullable(Int64), sr_return_quantity Nullable(Int32),
        sr_return_amt Nullable(Float32), sr_return_tax Nullable(Float32),
        sr_return_amt_inc_tax Nullable(Float32), sr_fee Nullable(Float32),
        sr_return_ship_cost Nullable(Float32), sr_refunded_cash Nullable(Float32),
        sr_reversed_charge Nullable(Float32), sr_store_credit Nullable(Float32),
        sr_net_loss Nullable(Float32)')
    FORMAT ORC";

And finally it caused the following errors:

Unable to use a TTY - input is not a terminal or the right kind of file
Received exception from server (version 21.8.7):
Code: 100. DB::Exception: Received from localhost:9000. DB::Exception: Unknown packet 11 from one of the following replicas:  SQLSTATE: 08S01.
command terminated with exit code 100
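
A side note on the command itself: columns omitted from an INSERT column list fall back to their DEFAULT, so when the extra partition value is a constant the same load can often skip input() entirely. A minimal, untested sketch under that assumption (it relies on sr_returned_date_sk DEFAULT 1 instead of the literal 2 used above):

cat 000004_0 | clickhouse client --query="
    INSERT INTO hive.store_returns
        (sr_return_time_sk, sr_item_sk, sr_customer_sk, sr_cdemo_sk, sr_hdemo_sk,
         sr_addr_sk, sr_store_sk, sr_reason_sk, sr_ticket_number, sr_return_quantity,
         sr_return_amt, sr_return_tax, sr_return_amt_inc_tax, sr_fee,
         sr_return_ship_cost, sr_refunded_cash, sr_reversed_charge, sr_store_credit,
         sr_net_loss)
    FORMAT ORC"

The Unknown packet 11 error itself, though, looks like a protocol-level mismatch between components rather than a problem with how the INSERT is written.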

Trying to ALTER RENAME key old_column_name column which is a part of key expression SQLSTATE: HY000

Step 1: create a table

CREATE TABLE db_name.table_name
(
    order_by_column String,
    old_column_name Int64
)
ENGINE = CnchMergeTree
ORDER BY (old_column_name)

Step 2: rename column

ALTER TABLE db_name.table_name RENAME COLUMN old_column_name TO new_column_name

ClickHouse exception, code: 524, host: 10.100.3.48, port: 8123; Code: 524, e.displayText() = DB::Exception: Trying to ALTER RENAME key old_column_name column which is a part of key expression SQLSTATE: HY000 (version 21.8.7.1)
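
For context: in the MergeTree family (CnchMergeTree included), a column that appears in the ORDER BY key cannot be renamed in place, so the exception is expected behavior rather than a ByConity-specific defect. A hedged workaround sketch, assuming the table can be rebuilt and that RENAME TABLE behaves as in stock ClickHouse:

CREATE TABLE db_name.table_name_new
(
    order_by_column String,
    new_column_name Int64
)
ENGINE = CnchMergeTree
ORDER BY (new_column_name);

INSERT INTO db_name.table_name_new
SELECT order_by_column, old_column_name
FROM db_name.table_name;

-- After verifying the copy, swap the tables:
RENAME TABLE db_name.table_name TO db_name.table_name_old;
RENAME TABLE db_name.table_name_new TO db_name.table_name;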

Server start error

Startup error log:
2023.01.13 08:11:20.828947 [ 17 ] {} Application: Calculated checksum of the binary: 7B0734FCB0DE0271D880B340784BCEDA. There is no information about the reference checksum.
2023.01.13 08:11:20.896737 [ 17 ] {} CnchServerManager: There is no zookeeper, skip start background task for serverManager
Restart error log:
2023.01.13 10:27:53.949400 [ 17 ] {} Application: Calculated checksum of the binary: 7B0734FCB0DE0271D880B340784BCEDA. There is no information about the reference checksum.
2023.01.13 10:27:53.969618 [ 17 ] {} CnchServerManager: There is no zookeeper, skip start background task for serverManager
2023.01.13 10:27:54.104651 [ 205 ] {} auto DB::MergeTreeData::loadPartsFromFileSystem(DB::MergeTreeData::PartNamesWithDisks, DB::MergeTreeData::PartNamesWithDisks, bool, DB::MergeTreeMetaBase::DataPartsLock &)::(anonymous class)::operator()() const: Code: 246, e.displayText() = DB::Exception: While loading part /var/byconity/data/store/09c/09ce3590-8eb7-4dca-aa17-3f475cfbb795/20230113_20230113_507_507_0/: calculated partition ID: 19850803 differs from partition ID in part name: 202301 SQLSTATE: HY000, Stack trace (when copying this message, always include the lines below):

  1. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18e345d2 in /opt/byconity/usr/bin/clickhouse
  2. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb230820 in /opt/byconity/usr/bin/clickhouse
  3. DB::IMergeTreeDataPart::loadPartitionAndMinMaxIndex() @ 0x14914689 in /opt/byconity/usr/bin/clickhouse
  4. DB::IMergeTreeDataPart::loadColumnsChecksumsIndexes(bool, bool) @ 0x14911473 in /opt/byconity/usr/bin/clickhouse
  5. ? @ 0x149988d8 in /opt/byconity/usr/bin/clickhouse
  6. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xb26ad9e in /opt/byconity/usr/bin/clickhouse
  7. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xb26cca4 in /opt/byconity/usr/bin/clickhouse
  8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb267a3f in /opt/byconity/usr/bin/clickhouse
  9. ? @ 0xb26bd1a in /opt/byconity/usr/bin/clickhouse
  10. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
  11. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
  (version 21.8.7.1)
2023.01.13 10:27:54.104651 [ 206 ] {} auto DB::MergeTreeData::loadPartsFromFileSystem(DB::MergeTreeData::PartNamesWithDisks, DB::MergeTreeData::PartNamesWithDisks, bool, DB::MergeTreeMetaBase::DataPartsLock &)::(anonymous class)::operator()() const: Code: 246, e.displayText() = DB::Exception: While loading part /var/byconity/data/store/09c/09ce3590-8eb7-4dca-aa17-3f475cfbb795/20230113_20230113_1_539_108/: calculated partition ID: 19850803 differs from partition ID in part name: 202301 SQLSTATE: HY000, Stack trace (when copying this message, always include the lines below):
  1. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18e345d2 in /opt/byconity/usr/bin/clickhouse
  2. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb230820 in /opt/byconity/usr/bin/clickhouse
  3. DB::IMergeTreeDataPart::loadPartitionAndMinMaxIndex() @ 0x14914689 in /opt/byconity/usr/bin/clickhouse
  4. DB::IMergeTreeDataPart::loadColumnsChecksumsIndexes(bool, bool) @ 0x14911473 in /opt/byconity/usr/bin/clickhouse
  5. ? @ 0x149988d8 in /opt/byconity/usr/bin/clickhouse
  6. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xb26ad9e in /opt/byconity/usr/bin/clickhouse
  7. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xb26cca4 in /opt/byconity/usr/bin/clickhouse
  8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb267a3f in /opt/byconity/usr/bin/clickhouse
  9. ? @ 0xb26bd1a in /opt/byconity/usr/bin/clickhouse
  10. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
  11. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
  (version 21.8.7.1)
2023.01.13 10:27:54.104712 [ 205 ] {} system.query_log (09ce3590-8eb7-4dca-aa17-3f475cfbb795): Detaching broken part /var/byconity/data/store/09c/09ce3590-8eb7-4dca-aa17-3f475cfbb795/20230113_20230113_507_507_0. If it happened after update, it is likely because of backward incompability. You need to resolve this manually
2023.01.13 10:27:54.104728 [ 206 ] {} system.query_log (09ce3590-8eb7-4dca-aa17-3f475cfbb795): Detaching broken part /var/byconity/data/store/09c/09ce3590-8eb7-4dca-aa17-3f475cfbb795/20230113_20230113_1_539_108. If it happened after update, it is likely because of backward incompability. You need to resolve this manually
2023.01.13 10:27:54.105022 [ 204 ] {} auto DB::MergeTreeData::loadPartsFromFileSystem(DB::MergeTreeData::PartNamesWithDisks, DB::MergeTreeData::PartNamesWithDisks, bool, DB::MergeTreeMetaBase::DataPartsLock &)::(anonymous class)::operator()() const: Code: 246, e.displayText() = DB::Exception: While loading part /var/byconity/data/store/09c/09ce3590-8eb7-4dca-aa17-3f475cfbb795/20230113_20230113_514_514_0/: calculated partition ID: 19850803 differs from partition ID in part name: 202301 SQLSTATE: HY000, Stack trace (when copying this message, always include the lines below):
  1. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18e345d2 in /opt/byconity/usr/bin/clickhouse
  2. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb230820 in /opt/byconity/usr/bin/clickhouse
  3. DB::IMergeTreeDataPart::loadPartitionAndMinMaxIndex() @ 0x14914689 in /opt/byconity/usr/bin/clickhouse
  4. DB::IMergeTreeDataPart::loadColumnsChecksumsIndexes(bool, bool) @ 0x14911473 in /opt/byconity/usr/bin/clickhouse
  5. ? @ 0x149988d8 in /opt/byconity/usr/bin/clickhouse
  6. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xb26ad9e in /opt/byconity/usr/bin/clickhouse
  7. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xb26cca4 in /opt/byconity/usr/bin/clickhouse
  8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb267a3f in /opt/byconity/usr/bin/clickhouse
  9. ? @ 0xb26bd1a in /opt/byconity/usr/bin/clickhouse
  10. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
  11. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
  (version 21.8.7.1)
2023.01.13 10:27:54.105076 [ 204 ] {} system.query_log (09ce3590-8eb7-4dca-aa17-3f475cfbb795): Detaching broken part /var/byconity/data/store/09c/09ce3590-8eb7-4dca-aa17-3f475cfbb795/20230113_20230113_514_514_0. If it happened after update, it is likely because of backward incompability. You need to resolve this manually
2023.01.13 10:27:54.106491 [ 205 ] {} auto DB::MergeTreeData::loadPartsFromFileSystem(DB::MergeTreeData::PartNamesWithDisks, DB::MergeTreeData::PartNamesWithDisks, bool, DB::MergeTreeMetaBase::DataPartsLock &)::(anonymous class)::operator()() const: Code: 246, e.displayText() = DB::Exception: While loading part /var/byconity/data/store/09c/09ce3590-8eb7-4dca-aa17-3f475cfbb795/20230113_20230113_515_515_0/: calculated partition ID: 19850803 differs from partition ID in part name: 202301 SQLSTATE: HY000, Stack trace (when copying this message, always include the lines below):
  1. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18e345d2 in /opt/byconity/usr/bin/clickhouse
  2. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb230820 in /opt/byconity/usr/bin/clickhouse
  3. DB::IMergeTreeDataPart::loadPartitionAndMinMaxIndex() @ 0x14914689 in /opt/byconity/usr/bin/clickhouse
  4. DB::IMergeTreeDataPart::loadColumnsChecksumsIndexes(bool, bool) @ 0x14911473 in /opt/byconity/usr/bin/clickhouse
  5. ? @ 0x149988d8 in /opt/byconity/usr/bin/clickhouse
  6. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xb26ad9e in /opt/byconity/usr/bin/clickhouse
  7. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xb26cca4 in /opt/byconity/usr/bin/clickhouse
  8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb267a3f in /opt/byconity/usr/bin/clickhouse
  9. ? @ 0xb26bd1a in /opt/byconity/usr/bin/clickhouse
  10. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
  11. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
  (version 21.8.7.1)
2023.01.13 10:27:54.106528 [ 205 ] {} system.query_log (09ce3590-8eb7-4dca-aa17-3f475cfbb795): Detaching broken part /var/byconity/data/store/09c/09ce3590-8eb7-4dca-aa17-3f475cfbb795/20230113_20230113_515_515_0. If it happened after update, it is likely because of backward incompability. You need to resolve this manually
2023.01.13 10:27:54.108316 [ 205 ] {} auto DB::MergeTreeData::loadPartsFromFileSystem(DB::MergeTreeData::PartNamesWithDisks, DB::MergeTreeData::PartNamesWithDisks, bool, DB::MergeTreeMetaBase::DataPartsLock &)::(anonymous class)::operator()() const: Code: 246, e.displayText() = DB::Exception: While loading part /var/byconity/data/store/09c/09ce3590-8eb7-4dca-aa17-3f475cfbb795/20230113_20230113_523_523_0/: calculated partition ID: 19850803 differs from partition ID in part name: 202301 SQLSTATE: HY000, Stack trace (when copying this message, always include the lines below):
  1. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18e345d2 in /opt/byconity/usr/bin/clickhouse
  2. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb230820 in /opt/byconity/usr/bin/clickhouse
  3. DB::IMergeTreeDataPart::loadPartitionAndMinMaxIndex() @ 0x14914689 in /opt/byconity/usr/bin/clickhouse
  4. DB::IMergeTreeDataPart::loadColumnsChecksumsIndexes(bool, bool) @ 0x14911473 in /opt/byconity/usr/bin/clickhouse
  5. ? @ 0x149988d8 in /opt/byconity/usr/bin/clickhouse
  6. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xb26ad9e in /opt/byconity/usr/bin/clickhouse
  7. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xb26cca4 in /opt/byconity/usr/bin/clickhouse
  8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb267a3f in /opt/byconity/usr/bin/clickhouse
  9. ? @ 0xb26bd1a in /opt/byconity/usr/bin/clickhouse
  10. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
  11. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
  (version 21.8.7.1)
2023.01.13 10:27:54.108344 [ 205 ] {} system.query_log (09ce3590-8eb7-4dca-aa17-3f475cfbb795): Detaching broken part /var/byconity/data/store/09c/09ce3590-8eb7-4dca-aa17-3f475cfbb795/20230113_20230113_523_523_0. If it happened after update, it is likely because of backward incompability. You need to resolve this manually
2023.01.13 10:27:54.109351 [ 207 ] {} auto DB::MergeTreeData::loadPartsFromFileSystem(DB::MergeTreeData::PartNamesWithDisks, DB::MergeTreeData::PartNamesWithDisks, bool, DB::MergeTreeMetaBase::DataPartsLock &)::(anonymous class)::operator()() const: Code: 246, e.displayText() = DB::Exception: While loading part /var/byconity/data/store/09c/09ce3590-8eb7-4dca-aa17-3f475cfbb795/20230113_20230113_505_505_0/: calculated partition ID: 19850803 differs from partition ID in part name: 202301 SQLSTATE: HY000, Stack trace (when copying this message, always include the lines below):
  1. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18e345d2 in /opt/byconity/usr/bin/clickhouse
  2. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb230820 in /opt/byconity/usr/bin/clickhouse
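
The log repeats the same symptom for several parts before it is truncated: the partition ID recomputed at load time (19850803) disagrees with the ID encoded in the part directory name (202301), which is what ClickHouse-lineage servers report when a table's partition expression no longer matches the expression the parts were written with. A hedged way to inspect the affected parts, assuming the standard system tables behave the same in ByConity:

SELECT partition_id, name, active
FROM system.parts
WHERE database = 'system' AND table = 'query_log';

SELECT *
FROM system.detached_parts
WHERE database = 'system' AND table = 'query_log';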

Running "create stats if not exists all;" causes a server crash

log:
2023.01.16 08:30:54.874708 [ 41799 ] {} BaseDaemon: ########################################
2023.01.16 08:30:54.874809 [ 41799 ] {} BaseDaemon: (version 21.8.7.1 scm cdw-2.0.0/2.0.0.115, build id: B1C8E2A14E2DBD1E5587FAA6B806FACBBA832810) (from thread 8930) (query_id: 598756cf-d1a5-485e-960c-a295a62a8bca) Received signal Segmentation fault (11)
2023.01.16 08:30:54.874870 [ 41799 ] {} BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2023.01.16 08:30:54.874924 [ 41799 ] {} BaseDaemon: Stack trace: 0x14c9be30 0x14c9b8a5 0x14cd8036 0xb26ad9e 0xb26cca4 0xb267a3f 0xb26bd1a 0x7fc1a8e5efa3 0x7fc1a787d06f
2023.01.16 08:30:54.875085 [ 41799 ] {} BaseDaemon: 3. DB::IRowOutputFormat::write(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > const&, unsigned long) @ 0x14c9be30 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:54.875146 [ 41799 ] {} BaseDaemon: 4. DB::IRowOutputFormat::consume(DB::Chunk) @ 0x14c9b8a5 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:54.875299 [ 41799 ] {} BaseDaemon: 5. DB::ParallelFormattingOutputFormat::formatterThreadFunction(unsigned long, std::__1::shared_ptr<DB::ThreadGroupStatus> const&) @ 0x14cd8036 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:54.875389 [ 41799 ] {} BaseDaemon: 6. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xb26ad9e in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:54.875486 [ 41799 ] {} BaseDaemon: 7. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xb26cca4 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:54.875543 [ 41799 ] {} BaseDaemon: 8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb267a3f in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:54.875599 [ 41799 ] {} BaseDaemon: 9. ? @ 0xb26bd1a in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:54.875662 [ 41799 ] {} BaseDaemon: 10. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
2023.01.16 08:30:54.875721 [ 41799 ] {} BaseDaemon: 11. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
2023.01.16 08:30:55.018096 [ 41800 ] {} BaseDaemon: ########################################
2023.01.16 08:30:55.018144 [ 41800 ] {} BaseDaemon: (version 21.8.7.1 scm cdw-2.0.0/2.0.0.115, build id: B1C8E2A14E2DBD1E5587FAA6B806FACBBA832810) (from thread 61835) (query_id: 598756cf-d1a5-485e-960c-a295a62a8bca) Received signal Segmentation fault (11)
2023.01.16 08:30:55.018192 [ 41800 ] {} BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2023.01.16 08:30:55.018225 [ 41800 ] {} BaseDaemon: Stack trace: 0x14c9be30 0x14c9b8a5 0x14cd8036 0xb26ad9e 0xb26cca4 0xb267a3f 0xb26bd1a 0x7fc1a8e5efa3 0x7fc1a787d06f
2023.01.16 08:30:55.018291 [ 41800 ] {} BaseDaemon: 3. DB::IRowOutputFormat::write(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > const&, unsigned long) @ 0x14c9be30 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.018348 [ 41800 ] {} BaseDaemon: 4. DB::IRowOutputFormat::consume(DB::Chunk) @ 0x14c9b8a5 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.018374 [ 41800 ] {} BaseDaemon: 5. DB::ParallelFormattingOutputFormat::formatterThreadFunction(unsigned long, std::__1::shared_ptr<DB::ThreadGroupStatus> const&) @ 0x14cd8036 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.018439 [ 41800 ] {} BaseDaemon: 6. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xb26ad9e in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.018531 [ 41800 ] {} BaseDaemon: 7. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xb26cca4 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.018554 [ 41800 ] {} BaseDaemon: 8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb267a3f in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.018579 [ 41800 ] {} BaseDaemon: 9. ? @ 0xb26bd1a in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.018644 [ 41800 ] {} BaseDaemon: 10. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
2023.01.16 08:30:55.018671 [ 41800 ] {} BaseDaemon: 11. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
2023.01.16 08:30:55.091275 [ 41799 ] {} BaseDaemon: Calculated checksum of the binary: 7B0734FCB0DE0271D880B340784BCEDA. There is no information about the reference checksum.
2023.01.16 08:30:55.141027 [ 41801 ] {} BaseDaemon: ########################################
2023.01.16 08:30:55.141116 [ 41801 ] {} BaseDaemon: (version 21.8.7.1 scm cdw-2.0.0/2.0.0.115, build id: B1C8E2A14E2DBD1E5587FAA6B806FACBBA832810) (from thread 8518) (query_id: 598756cf-d1a5-485e-960c-a295a62a8bca) Received signal Segmentation fault (11)
2023.01.16 08:30:55.141211 [ 41801 ] {} BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2023.01.16 08:30:55.141277 [ 41801 ] {} BaseDaemon: Stack trace: 0x14c9be30 0x14c9b8a5 0x14cd8036 0xb26ad9e 0xb26cca4 0xb267a3f 0xb26bd1a 0x7fc1a8e5efa3 0x7fc1a787d06f
2023.01.16 08:30:55.141393 [ 41801 ] {} BaseDaemon: 3. DB::IRowOutputFormat::write(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > const&, unsigned long) @ 0x14c9be30 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.141448 [ 41801 ] {} BaseDaemon: 4. DB::IRowOutputFormat::consume(DB::Chunk) @ 0x14c9b8a5 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.141511 [ 41801 ] {} BaseDaemon: 5. DB::ParallelFormattingOutputFormat::formatterThreadFunction(unsigned long, std::__1::shared_ptr<DB::ThreadGroupStatus> const&) @ 0x14cd8036 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.141575 [ 41801 ] {} BaseDaemon: 6. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xb26ad9e in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.141687 [ 41801 ] {} BaseDaemon: 7. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xb26cca4 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.141764 [ 41801 ] {} BaseDaemon: 8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb267a3f in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.141816 [ 41801 ] {} BaseDaemon: 9. ? @ 0xb26bd1a in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.141905 [ 41801 ] {} BaseDaemon: 10. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
2023.01.16 08:30:55.141965 [ 41801 ] {} BaseDaemon: 11. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
2023.01.16 08:30:55.216678 [ 41800 ] {} BaseDaemon: Calculated checksum of the binary: 7B0734FCB0DE0271D880B340784BCEDA. There is no information about the reference checksum.
2023.01.16 08:30:55.259437 [ 41802 ] {} BaseDaemon: ########################################
2023.01.16 08:30:55.259563 [ 41802 ] {} BaseDaemon: (version 21.8.7.1 scm cdw-2.0.0/2.0.0.115, build id: B1C8E2A14E2DBD1E5587FAA6B806FACBBA832810) (from thread 691) (query_id: 598756cf-d1a5-485e-960c-a295a62a8bca) Received signal Segmentation fault (11)
2023.01.16 08:30:55.259629 [ 41802 ] {} BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2023.01.16 08:30:55.259684 [ 41802 ] {} BaseDaemon: Stack trace: 0x14c9be30 0x14c9b8a5 0x14cd8036 0xb26ad9e 0xb26cca4 0xb267a3f 0xb26bd1a 0x7fc1a8e5efa3 0x7fc1a787d06f
2023.01.16 08:30:55.259803 [ 41802 ] {} BaseDaemon: 3. DB::IRowOutputFormat::write(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > const&, unsigned long) @ 0x14c9be30 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.259865 [ 41802 ] {} BaseDaemon: 4. DB::IRowOutputFormat::consume(DB::Chunk) @ 0x14c9b8a5 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.259929 [ 41802 ] {} BaseDaemon: 5. DB::ParallelFormattingOutputFormat::formatterThreadFunction(unsigned long, std::__1::shared_ptr<DB::ThreadGroupStatus> const&) @ 0x14cd8036 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.259991 [ 41802 ] {} BaseDaemon: 6. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xb26ad9e in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.260063 [ 41802 ] {} BaseDaemon: 7. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xb26cca4 in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.260126 [ 41802 ] {} BaseDaemon: 8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb267a3f in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.260231 [ 41802 ] {} BaseDaemon: 9. ? @ 0xb26bd1a in /opt/byconity/usr/bin/clickhouse
2023.01.16 08:30:55.260302 [ 41802 ] {} BaseDaemon: 10. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
2023.01.16 08:30:55.260362 [ 41802 ] {} BaseDaemon: 11. clone @ 0xf906f in /lib/x86_64-linux-gnu/libc-2.28.so
2023.01.16 08:30:55.290290 [ 41801 ] {} BaseDaemon: Calculated checksum of the binary: 7B0734FCB0DE0271D880B340784BCEDA. There is no information about the reference checksum.
2023.01.16 08:30:55.452277 [ 41802 ] {} BaseDaemon: Calculated checksum of the binary: 7B0734FCB0DE0271D880B340784BCEDA. There is no information about the reference checksum.
2023.01.16 08:31:16.415813 [ 8930 ] {598756cf-d1a5-485e-960c-a295a62a8bca} Minidump: SCM cdw-2.0.0/2.0.0.115, core dump path: /var/byconity//f0fee29a-09ab-4f0b-7aa5988f-bf24e61c.dmp
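
Until the crash is fixed, a hedged way to narrow it down (assuming ByConity's documented statistics DDL) is to collect statistics one table at a time instead of with all, so the offending table can be identified:

-- db_name.table_name is a placeholder; repeat per table:
CREATE STATS IF NOT EXISTS db_name.table_name;
SHOW STATS db_name.table_name;
-- Remove the collected statistics if they are no longer wanted:
DROP STATS db_name.table_name;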

clickhouse-keeper instead of FoundationDB

Use case
clickhouse-keeper also has support for key-value storage.

This would simplify setup for end users and reduce the dependency on external projects.
I didn't dig into the codebase, so it's unclear to me how much ByConity depends on the FoundationDB feature set, but it seems to also support ByteKV, another kind of key-value store, so this looks feasible.
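
For illustration only, a hypothetical sketch of the kind of key-value abstraction this request implies — the names IMetaStore, get, put, and scan are invented here and are not taken from the ByConity codebase. A clickhouse-keeper backend would only need to implement these operations plus whatever transactional guarantees the transaction manager requires:

#include <optional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical metastore interface; ByConity's real abstraction may differ.
class IMetaStore
{
public:
    virtual ~IMetaStore() = default;

    // Point lookup; returns std::nullopt when the key is absent.
    virtual std::optional<std::string> get(const std::string & key) = 0;

    // Blind write of a single key.
    virtual void put(const std::string & key, const std::string & value) = 0;

    // Range scan over [start_key, end_key); table/part metadata is
    // typically enumerated by key prefix.
    virtual std::vector<std::pair<std::string, std::string>>
    scan(const std::string & start_key, const std::string & end_key) = 0;
};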
