
go-tpc's Introduction

Go TPC

A toolbox to benchmark TPC workloads against TiDB and MySQL-compatible databases, as well as PostgreSQL-compatible databases such as PostgreSQL, CockroachDB, AlloyDB, and YugabyteDB.

Install

You can install go-tpc in one of three ways:

Install using the script (recommended)

curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/pingcap/go-tpc/master/install.sh | sh

Then open a new terminal and try go-tpc.

Download binary

You can download the pre-built binary here and then gunzip it

Build from source

git clone https://github.com/pingcap/go-tpc.git
cd go-tpc
make build

Then you can find the go-tpc binary file in the ./bin directory.

Usage

If go-tpc is in your PATH, replace ./bin/go-tpc with go-tpc in the commands below.

By default, go-tpc uses root:@tcp(127.0.0.1:4000)/test as the DSN. You can override it with the following flags:

  -D, --db string           Database name (default "test")
  -H, --host string         Database host (default "127.0.0.1")
  -p, --password string     Database password
  -P, --port int            Database port (default 4000)
  -U, --user string         Database user (default "root")

Note:

When exporting csv files to a directory, go-tpc also creates the necessary tables for later data import if the provided database address is accessible.

For example:

./bin/go-tpc -H 127.0.0.1 -P 3306 -D tpcc ...

TPC-C

Prepare

TiDB & MySQL
# Create 4 warehouses with 4 threads
./bin/go-tpc tpcc --warehouses 4 prepare -T 4
PostgreSQL & CockroachDB & AlloyDB & Yugabyte
./bin/go-tpc tpcc prepare -d postgres -U myuser -p '12345678' -D test -H 127.0.0.1 -P 5432 --conn-params sslmode=disable

Run

TiDB & MySQL
# Run TPCC workloads; run as-is, or add the --wait option to include wait times
./bin/go-tpc tpcc --warehouses 4 run -T 4
# Run TPCC including wait times (keying & thinking time) on every transaction
./bin/go-tpc tpcc --warehouses 4 run -T 4 --wait
PostgreSQL & CockroachDB & AlloyDB & Yugabyte
./bin/go-tpc tpcc run -d postgres -U myuser -p '12345678' -D test -H 127.0.0.1 -P 5432 --conn-params sslmode=disable

Check

# Check consistency; you can run this after prepare or after run
./bin/go-tpc tpcc --warehouses 4 check

Clean up

# Cleanup
./bin/go-tpc tpcc --warehouses 4 cleanup

Other usages

# Generate csv files (split into 100 files per table)
./bin/go-tpc tpcc --warehouses 4 prepare -T 100 --output-type csv --output-dir data
# Specify tables when generating csv files
./bin/go-tpc tpcc --warehouses 4 prepare -T 100 --output-type csv --output-dir data --tables history,orders
# Start pprof
./bin/go-tpc tpcc --warehouses 4 prepare --output-type csv --output-dir data --pprof :10111

If you want to import TPC-C data into TiDB, please refer to import-to-tidb.

TPC-H

Prepare

TiDB & MySQL
# Prepare data with scale factor 1
./bin/go-tpc tpch --sf=1 prepare
# Prepare data with scale factor 1, create a TiFlash replica, and analyze tables after the data is loaded
./bin/go-tpc tpch --sf 1 --analyze --tiflash-replica 1 prepare
PostgreSQL & CockroachDB & AlloyDB & Yugabyte
./bin/go-tpc tpch prepare -d postgres -U myuser -p '12345678' -D test -H 127.0.0.1 -P 5432 --conn-params sslmode=disable

Run

TiDB & MySQL
# Run TPCH workloads with result checking
./bin/go-tpc tpch --sf=1 --check=true run
# Run TPCH workloads without result checking
./bin/go-tpc tpch --sf=1 run
PostgreSQL & CockroachDB & AlloyDB & Yugabyte
./bin/go-tpc tpch run -d postgres -U myuser -p '12345678' -D test -H 127.0.0.1 -P 5432 --conn-params sslmode=disable

Clean up

# Cleanup
./bin/go-tpc tpch cleanup

CH-benCHmark

Prepare

  1. First, refer to the instructions above (go-tpc tpcc --warehouses $warehouses prepare) to prepare the TP-part schema and populate the data

  2. Then use go-tpc ch prepare to prepare the AP-part schema and data

A detailed example of running the CH workload on TiDB can be found in the TiDB Doc

TiDB & MySQL
# Prepare TP data
./bin/go-tpc tpcc --warehouses 10 prepare -T 4 -D test -H 127.0.0.1 -P 4000
# Prepare AP data, create a TiFlash replica, and analyze tables after the data is loaded
./bin/go-tpc ch --analyze --tiflash-replica 1 prepare -D test -H 127.0.0.1 -P 4000
PostgreSQL & CockroachDB & AlloyDB & Yugabyte
# Prepare TP data
./bin/go-tpc tpcc prepare -d postgres -U myuser -p '12345678' -D test -H 127.0.0.1 -P 5432 --conn-params sslmode=disable -T 4
# Prepare AP data
./bin/go-tpc ch prepare -d postgres -U myuser -p '12345678' -D test -H 127.0.0.1 -P 5432 --conn-params sslmode=disable

Run

TiDB & MySQL
./bin/go-tpc ch --warehouses $warehouses -T $tpWorkers -t $apWorkers --time $measurement-time run
PostgreSQL & CockroachDB & AlloyDB & Yugabyte
./bin/go-tpc ch run -d postgres -U myuser -p '12345678' -D test -H 127.0.0.1 -P 5432 --conn-params sslmode=disable

Raw SQL

The rawsql command executes SQL statements from the given SQL files.

Run

./bin/go-tpc rawsql run --query-files $path-to-query-files

go-tpc's People

Contributors

aylei, breezewish, busyjay, cadmusjiang, coocood, dbsid, depaulmillz, elsa0520, hawkingrei, innerr, jayson-huang, lloyd-pottiger, lobshunter, lysu, mahjonp, mjonss, pingyu, pyhalov, siddontang, sillydong, sleepymole, sunrunaway, xuanyu66, yeya24, yisaer, yongpan0709, zhouqiang-cl, zyguan


go-tpc's Issues

CSV data generated by TPCC / TPCH uses different default delimiters

==> tpcc10000.customer.csv <==
1|Customer#000000001|IVhzIApeRb ot,c,E|15|25-989-741-2988|711.56|BUILDING|to the even, regular platelets. regular, ironic epitaphs nag e|
[root@ip-172-31-21-61 tpcc10000]# head -n 1 tpcc10000.orders*
==> tpcc10000.orders.0.csv <==
1,1,1,598,2022-05-30 03:19:37,2,12,1

Support --weight in ch-benchmark

It is not possible to use --weight when running ch-benchmark the way it can be used when running tpcc. This would be an extremely useful feature.

TPCH - Number of rows is not correct?

Lineitem should have 6,000,000 x SF

But
For sf1 it has 6001215
For sf5 it has 29999795
For sf10 it has 59986052
For sf100 it has 600037902

What is going on?

The error message is not user-friendly enough - panic: failed to connect to database when loading data

The error message is misleading and not user-friendly enough to show why the program panics.

# go-tpc tpcc --host xxx.xxx.xxx.xxx -P4000 --warehouses 1000 run -D tpcc -T 200 --time 10m0s --ignore-error --conn-params="kv_read_timeout=1000"
panic: failed to connect to database when loading data

goroutine 1 [running]:
github.com/pingcap/go-tpc/tpcc.NewWorkloader(0x0, 0x10b08e0, 0x1b, 0x1b, 0x1a, 0xc000104380)
        /go/src/github.com/pingcap/go-tpc/tpcc/workload.go:105 +0xbb3
main.executeTpcc(0xac5e7a, 0x3)
        /go/src/github.com/pingcap/go-tpc/cmd/go-tpc/tpcc.go:56 +0x36c
main.registerTpcc.func2(0xc000456dc0, 0xc00022fa00, 0x0, 0xd)
        /go/src/github.com/pingcap/go-tpc/cmd/go-tpc/tpcc.go:102 +0x36
github.com/spf13/cobra.(*Command).execute(0xc000456dc0, 0xc00022f930, 0xd, 0xd, 0xc000456dc0, 0xc00022f930)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x2c2
github.com/spf13/cobra.(*Command).ExecuteC(0xc0004562c0, 0xb0cdb0, 0xc000400720, 0xc000418760)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main()
        /go/src/github.com/pingcap/go-tpc/cmd/go-tpc/main.go:153 +0x9db

The root cause of the panic:

# mysql -h xxx.xxx.xxx.xxx -P4000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 687
Server version: 5.7.25-TiDB-v6.5.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> set @@kv_read_timeout=1000;
ERROR 1193 (HY000): Unknown system variable 'kv_read_timeout'

Another example: https://asktug.com/t/topic/1000741

Support higher load concurrency?

Hi,

I am using TPC-H prepare:

go-tpc tpch --sf 50 prepare --analyze

I discovered that the data was loaded slowly while the whole cluster was mostly idle:

6.1 ~ 6.3: 3 TiKV
7.1 ~ 7.2: 2 TiDB

image

Is it possible to speed up the load process, utilizing more resources?

errcheck report some output

cmd/go-tpc/main.go:40:17:	globalDB.Close()
cmd/go-tpc/main.go:113:17:	rootCmd.Execute()
tpcc/check.go:77:18:	defer rows.Close()
tpcc/check.go:112:18:	defer rows.Close()
tpcc/check.go:142:18:	defer rows.Close()
tpcc/check.go:172:18:	defer rows.Close()
tpcc/check.go:202:18:	defer rows.Close()
tpcc/check.go:239:18:	defer rows.Close()
tpcc/check.go:270:18:	defer rows.Close()
tpcc/check.go:300:18:	defer rows.Close()
tpcc/check.go:330:18:	defer rows.Close()
tpcc/check.go:375:18:	defer rows.Close()
tpcc/check.go:416:18:	defer rows.Close()
tpcc/check.go:449:18:	defer rows.Close()
tpcc/delivery.go:28:19:	defer tx.Rollback()
tpcc/load.go:64:15:	l.InsertValue(ctx, v)
tpcc/new_order.go:112:19:	defer tx.Rollback()
tpcc/order_status.go:42:19:	defer tx.Rollback()
tpcc/order_status.go:73:13:	rows.Close()
tpcc/order_status.go:111:18:	defer rows.Close()
tpcc/payment.go:77:19:	defer tx.Rollback()
tpcc/payment.go:145:13:	rows.Close()
tpcc/stock_level.go:14:19:	defer tx.Rollback()
tpcc/workload.go:105:14:	s.Conn.Close()

Feature Request: automatically reconnect to TiDB

The benchmark executor should automatically reconnect to TiDB and restart any aborted transactions if the connection to TiDB is broken.

High Availability is a core part of the TiDB value proposition, and any situation involving upgrading or failing over to another AZ/region will require applications to re-establish connections to TiDB, so our benchmarking tools should also support reconnections.

Separate the logic of different workloads in tpcc prepare

Currently, tpcc prepare supports two kinds of workloads. The first is the SQL loader, which sends SQL statements directly to the database. The other is the CSV generator, which just generates CSV files locally.

Right now the two share the same Workloader struct, which makes the logic quite ugly and hard to read. I propose adding a new csvWorkloader, which also implements the workload.Workloader interface, so we can separate the logic.

// Workloader is the interface for running customized workload
type Workloader interface {
	Name() string
	InitThread(ctx context.Context, threadID int) context.Context
	CleanupThread(ctx context.Context, threadID int)
	Prepare(ctx context.Context, threadID int) error
	CheckPrepare(ctx context.Context, threadID int) error
	Run(ctx context.Context, threadID int) error
	Cleanup(ctx context.Context, threadID int) error
	Check(ctx context.Context, threadID int) error
	DBName() string
}

For the csv workloader, we don't need methods like Check or Run; we can just leave them as "not supported" and that is fine.

Support resume preparing

It's common to cancel preparing when we want to adjust the configuration to speed it up, or to adjust the dataset. It's also possible for the connection to break during preparing. Being able to resume preparing from the last stopped point could save a lot of time.

Add SQLite support?

Now that postgres support is added, would it make sense to add support for SQLite too?

Context: I'm looking for a tool to benchmark mvsqlite :)

consistency check 10 is too slow

begin to checking warehouse 1 at check 1
begin to checking warehouse 1 at check 2
begin to checking warehouse 1 at check 3
begin to checking warehouse 1 at check 4
begin to checking warehouse 1 at check 5
begin to checking warehouse 1 at check 6
begin to checking warehouse 1 at check 7
begin to checking warehouse 1 at check 8
begin to checking warehouse 1 at check 9
begin to checking warehouse 1 at check 10

After preparing, check 10 takes a very long time.

@zhouqiang-cl

Design a fast TPCC test data generation tool: Generate TPCC SST data, then use br to complete a quick import

Feature Request

Describe your feature request related problem:

We do not have a simple tool to generate large-scale example archives. For large-scale tests, we need to use dbgen to produce a SQL dump and then use TiDB Lightning to import it into the cluster. This is very time-consuming: for a 10 TB-scale test, this preparation step alone takes almost 2 days.

Describe the feature you'd like:

We should be able to directly generate the backup archive (create SSTs directly and populate the corresponding backupmeta).

Either we create a dedicated tool (focusing on a few selected schemas, e.g. sysbench or TPC-C), or extend dbgen to create SSTs (hard, since dbgen is schema-less and won't generate indices).


Getting undesired tpc-c benchmark result for tidb cluster

I'm new to TiDB and trying to benchmark my TiDB cluster using TPC-C. I'm running a TiDB cluster with 3 TiDB, 3 TiKV, and 3 PD services on 3 Azure VMs. After running the benchmark commands for 20 warehouses, I got an invalid result (tpmC: 7577.6, tpmTotal: 16770.1, efficiency: 2946.2%).

Here is the result printed:

Finished
[Summary] DELIVERY - Takes(s): 599.9, Count: 6630, TPM: 663.1, Sum(ms): 542394.8, Avg(ms): 81.8, 50th(ms): 79.7, 90th(ms): 100.7, 95th(ms): 113.2, 99th(ms): 159.4, 99.9th(ms): 604.0, Max(ms): 1140.9
[Summary] NEW_ORDER - Takes(s): 600.0, Count: 75771, TPM: 7577.6, Sum(ms): 2882033.7, Avg(ms): 38.0, 50th(ms): 32.5, 90th(ms): 56.6, 95th(ms): 67.1, 99th(ms): 104.9, 99.9th(ms): 201.3, Max(ms): 1208.0
[Summary] NEW_ORDER_ERR - Takes(s): 600.0, Count: 1, TPM: 0.1, Sum(ms): 15.4, Avg(ms): 15.5, 50th(ms): 15.7, 90th(ms): 15.7, 95th(ms): 15.7, 99th(ms): 15.7, 99.9th(ms): 15.7, Max(ms): 15.7
[Summary] ORDER_STATUS - Takes(s): 599.8, Count: 6718, TPM: 672.0, Sum(ms): 64366.9, Avg(ms): 9.6, 50th(ms): 8.4, 90th(ms): 16.3, 95th(ms): 17.8, 99th(ms): 24.1, 99.9th(ms): 54.5, Max(ms): 121.6
[Summary] PAYMENT - Takes(s): 600.0, Count: 71783, TPM: 7178.8, Sum(ms): 2379369.3, Avg(ms): 33.2, 50th(ms): 26.2, 90th(ms): 54.5, 95th(ms): 71.3, 99th(ms): 109.1, 99.9th(ms): 234.9, Max(ms): 1140.9
[Summary] STOCK_LEVEL - Takes(s): 599.7, Count: 6782, TPM: 678.5, Sum(ms): 87790.2, Avg(ms): 12.9, 50th(ms): 11.5, 90th(ms): 17.8, 95th(ms): 24.1, 99th(ms): 37.7, 99.9th(ms): 58.7, Max(ms): 121.6

tpmC: 7577.6, tpmTotal: 16770.1, efficiency: 2946.2%

Support rate limiting

We may want to apply a rate limit to keep the cluster under a certain workload, instead of always pushing it to the limit, in order to test stability.

Output mode cannot work without a DB connection

~ go-tpc tpch --sf=50 --output-dir=tpch50 --output-type=csv prepare
[mysql] 2022/04/01 01:29:25 packets.go:36: unexpected EOF
[mysql] 2022/04/01 01:29:37 packets.go:36: unexpected EOF
[mysql] 2022/04/01 01:29:50 packets.go:36: unexpected EOF
cannot connect to the database

TPCC: support csv format DGen

Feature Request


Describe the feature you'd like:

Our TPC-C workload loads data directly into the database, but generating CSV-format files is also useful, especially for terabyte data sizes, so we should add a CSV data-format DGen for the TPC-C workload.

Prepare is at: https://github.com/pingcap/go-tpc/blob/master/tpcc/workload.go#L122

./bin/go-tpc tpcc --host 127.0.0.1 -P 4000 --warehouses N check

The command above can be used to check data integrity.


staticcheck report some output

pkg/measurement/hist.go:73:13: should use time.Since instead of time.Now().Sub (S1012)
pkg/measurement/measure.go:49:23: func (*measurement).getOpName is unused (U1000)
tpcc/check.go:75:20: error strings should not be capitalized (ST1005)
tpcc/check.go:110:20: error strings should not be capitalized (ST1005)
tpcc/check.go:140:20: error strings should not be capitalized (ST1005)
tpcc/check.go:170:20: error strings should not be capitalized (ST1005)
tpcc/check.go:200:20: error strings should not be capitalized (ST1005)
tpcc/check.go:237:20: error strings should not be capitalized (ST1005)
tpcc/check.go:268:20: error strings should not be capitalized (ST1005)
tpcc/check.go:298:20: error strings should not be capitalized (ST1005)
tpcc/check.go:328:20: error strings should not be capitalized (ST1005)
tpcc/check.go:373:20: error strings should not be capitalized (ST1005)
tpcc/check.go:414:20: error strings should not be capitalized (ST1005)
tpcc/check.go:447:20: error strings should not be capitalized (ST1005)
tpcc/ddl.go:222:2: empty branch (SA9003)
tpcc/ddl.go:226:2: empty branch (SA9003)
tpcc/delivery.go:13:2: field olDeliveryD is unused (U1000)
tpcc/delivery.go:41:21: error strings should not be capitalized (ST1005)
tpcc/delivery.go:47:21: error strings should not be capitalized (ST1005)
tpcc/delivery.go:55:21: error strings should not be capitalized (ST1005)
tpcc/delivery.go:62:21: error strings should not be capitalized (ST1005)
tpcc/delivery.go:69:21: error strings should not be capitalized (ST1005)
tpcc/delivery.go:77:21: error strings should not be capitalized (ST1005)
tpcc/delivery.go:85:21: error strings should not be capitalized (ST1005)
tpcc/new_order.go:123:20: error strings should not be capitalized (ST1005)
tpcc/new_order.go:131:20: error strings should not be capitalized (ST1005)
tpcc/new_order.go:139:20: error strings should not be capitalized (ST1005)
tpcc/new_order.go:152:20: error strings should not be capitalized (ST1005)
tpcc/new_order.go:160:20: error strings should not be capitalized (ST1005)
tpcc/new_order.go:174:21: error strings should not be capitalized (ST1005)
tpcc/new_order.go:196:21: error strings should not be capitalized (ST1005)
tpcc/new_order.go:215:21: error strings should not be capitalized (ST1005)
tpcc/new_order.go:232:21: error strings should not be capitalized (ST1005)
tpcc/order_status.go:51:21: error strings should not be capitalized (ST1005)
tpcc/order_status.go:65:21: error strings should not be capitalized (ST1005)
tpcc/order_status.go:85:21: error strings should not be capitalized (ST1005)
tpcc/order_status.go:97:20: error strings should not be capitalized (ST1005)
tpcc/order_status.go:109:20: error strings should not be capitalized (ST1005)
tpcc/order_status.go:119:17: this result of append is never used, except maybe in other appends (SA4010)
tpcc/payment.go:82:20: error strings should not be capitalized (ST1005)
tpcc/payment.go:92:20: error strings should not be capitalized (ST1005)
tpcc/payment.go:99:20: error strings should not be capitalized (ST1005)
tpcc/payment.go:109:20: error strings should not be capitalized (ST1005)
tpcc/payment.go:119:21: error strings should not be capitalized (ST1005)
tpcc/payment.go:136:21: error strings should not be capitalized (ST1005)
tpcc/payment.go:165:20: error strings should not be capitalized (ST1005)
tpcc/payment.go:173:21: error strings should not be capitalized (ST1005)
tpcc/payment.go:191:21: error strings should not be capitalized (ST1005)
tpcc/payment.go:200:21: error strings should not be capitalized (ST1005)
tpcc/payment.go:210:20: error strings should not be capitalized (ST1005)
tpcc/workload.go:216:32: should use time.Since instead of time.Now().Sub (S1012)

Document that this tool is specifically for MySQL, or even TiDB?

This tool claims to be "A toolbox to benchmark TPC workloads in Go", but in fact it only supports MySQL protocol databases.

What's more, PRs like #58 introduce TiDB-specific hint syntax, which may not work for all MySQL-protocol-compatible databases.

For these reasons, I think we should at least state this fact in the README, or even rename the project (if supporting non-TiDB databases is not a goal) to something like "tidb-tpc", to avoid misleading others and wasting their time.

tpcc check can't check all warehouses

1. Minimal reproduce step (Required)

tidb cluster v4.0.9

go-tpc tpcc prepare --warehouses 28000
go-tpc tpcc check --warehouses 28000 --threads 10

2. What did you expect to see? (Required)

begin to check warehouse 1 at ...
...
begin to check warehouse 28000 at ...
Finished

3. What did you see instead (Required)

./go-tpc tpcc check --warehouses 28000 --threads 10
begin to check warehouse 7 at condition 3.3.2.1
begin to check warehouse 3 at condition 3.3.2.8
begin to check warehouse 1 at condition 3.3.2.10
begin to check warehouse 8 at condition 3.3.2.8
begin to check warehouse 6 at condition 3.3.2.8
begin to check warehouse 2 at condition 3.3.2.5
begin to check warehouse 5 at condition 3.3.2.12
begin to check warehouse 10 at condition 3.3.2.9
begin to check warehouse 4 at condition 3.3.2.6
begin to check warehouse 9 at condition 3.3.2.7
begin to check warehouse 7 at condition 3.3.2.8
begin to check warehouse 10 at condition 3.3.2.10
begin to check warehouse 6 at condition 3.3.2.9
execute check failed, err check warehouse 2 at condition 3.3.2.5 failed count(*) in warehouse 2, but got 9000.000000
begin to check warehouse 8 at condition 3.3.2.10
begin to check warehouse 3 at condition 3.3.2.9
begin to check warehouse 3 at condition 3.3.2.1
begin to check warehouse 3 at condition 3.3.2.3
begin to check warehouse 3 at condition 3.3.2.4
begin to check warehouse 4 at condition 3.3.2.8
begin to check warehouse 7 at condition 3.3.2.9
begin to check warehouse 6 at condition 3.3.2.12
begin to check warehouse 4 at condition 3.3.2.3
begin to check warehouse 4 at condition 3.3.2.2
begin to check warehouse 4 at condition 3.3.2.4
execute check failed, err check warehouse 5 at condition 3.3.2.12 failed count(*) in warehouse 5, but got 9000.000000
begin to check warehouse 9 at condition 3.3.2.9
execute check failed, err check warehouse 10 at condition 3.3.2.10 failed count(*) in warehouse 10, but got 9000.000000
execute check failed, err check warehouse 1 at condition 3.3.2.10 failed count(*) in warehouse 1, but got 9000.000000
execute check failed, err check warehouse 8 at condition 3.3.2.10 failed count(*) in warehouse 8, but got 9000.000000
begin to check warehouse 4 at condition 3.3.2.5
begin to check warehouse 7 at condition 3.3.2.12
begin to check warehouse 3 at condition 3.3.2.5
begin to check warehouse 9 at condition 3.3.2.12
execute check failed, err check warehouse 4 at condition 3.3.2.5 failed count(*) in warehouse 4, but got 9000.000000
execute check failed, err check warehouse 3 at condition 3.3.2.5 failed count(*) in warehouse 3, but got 9000.000000
execute check failed, err check warehouse 6 at condition 3.3.2.12 failed count(*) in warehouse 6, but got 9000.000000
execute check failed, err check warehouse 7 at condition 3.3.2.12 failed count(*) in warehouse 7, but got 9000.000000
execute check failed, err check warehouse 9 at condition 3.3.2.12 failed count(*) in warehouse 9, but got 9000.000000
Finished

TPCC testing takes too much time

I am new to k8s and TiDB, and now I have a confusing problem. I used kind to create a local k8s cluster with the following configuration,

# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

and followed this tutorial to deploy TiDB.

Afterward, I tried to run TPCC on the above-mentioned TiDB cluster. However, this command tiup bench tpcc -H 127.0.0.1 -P 14000 -D tpcc --warehouses 10 -T 4 run has been running for more than three days and still hasn't finished.

So, is this normal or did something go wrong? Can I do something to improve the efficiency of my TiDB cluster?

Refine readme

The README is complex; we should refine it.
Maybe we can move some complex usage into other files, such as the CSV import instructions.

MySQL connect error message when the workload completed

[mysql] 2021/05/20 07:49:53 statement.go:48: invalid connection
[mysql] 2021/05/20 07:49:53 statement.go:48: invalid connection
[mysql] 2021/05/20 07:49:53 statement.go:48: invalid connection
[mysql] 2021/05/20 07:49:53 statement.go:96: invalid connection
Finished
[Summary] DELIVERY - Takes(s): 299.8, Count: 23365, TPM: 4676.1, Sum(ms): 1866946.8, Avg(ms): 79.9, 50th(ms): 79.7, 90th(ms): 96.5, 95th(ms): 100.7, 99th(
ms): 130.0, 99.9th(ms): 176.2, Max(ms): 436.2
[Summary] DELIVERY_ERR - Takes(s): 299.8, Count: 7, TPM: 1.4, Sum(ms): 229.8, Avg(ms): 32.5, 50th(ms): 35.7, 90th(ms): 52.4, 95th(ms): 75.5, 99th(ms): 75.
5, 99.9th(ms): 75.5, Max(ms): 75.5
[Summary] NEW_ORDER - Takes(s): 299.9, Count: 261842, TPM: 52390.6, Sum(ms): 7768822.1, Avg(ms): 29.7, 50th(ms): 29.4, 90th(ms): 37.7, 95th(ms): 41.9, 99t
h(ms): 50.3, 99.9th(ms): 117.4, Max(ms): 419.4
[Summary] NEW_ORDER_ERR - Takes(s): 299.9, Count: 26, TPM: 5.2, Sum(ms): 422.4, Avg(ms): 16.2, 50th(ms): 15.2, 90th(ms): 25.2, 95th(ms): 39.8, 99th(ms): 6
0.8, 99.9th(ms): 60.8, Max(ms): 60.8
[Summary] ORDER_STATUS - Takes(s): 299.9, Count: 23481, TPM: 4698.0, Sum(ms): 400093.1, Avg(ms): 17.0, 50th(ms): 18.9, 90th(ms): 26.2, 95th(ms): 28.3, 99t
h(ms): 33.6, 99.9th(ms): 52.4, Max(ms): 385.9
[Summary] ORDER_STATUS_ERR - Takes(s): 299.9, Count: 1, TPM: 0.2, Sum(ms): 5.4, Avg(ms): 5.5, 50th(ms): 5.8, 90th(ms): 5.8, 95th(ms): 5.8, 99th(ms): 5.8,
99.9th(ms): 5.8, Max(ms): 5.8
[Summary] PAYMENT - Takes(s): 299.9, Count: 250331, TPM: 50086.4, Sum(ms): 4542236.7, Avg(ms): 18.2, 50th(ms): 17.8, 90th(ms): 24.1, 95th(ms): 27.3, 99th(
ms): 35.7, 99.9th(ms): 60.8, Max(ms): 402.7
[Summary] PAYMENT_ERR - Takes(s): 299.9, Count: 12, TPM: 2.4, Sum(ms): 44.4, Avg(ms): 3.8, 50th(ms): 2.6, 90th(ms): 7.9, 95th(ms): 7.9, 99th(ms): 7.9, 99.
9th(ms): 7.9, Max(ms): 7.9
[Summary] STOCK_LEVEL - Takes(s): 299.8, Count: 23463, TPM: 4695.1, Sum(ms): 396450.5, Avg(ms): 16.9, 50th(ms): 16.3, 90th(ms): 23.1, 95th(ms): 26.2, 99th
(ms): 35.7, 99.9th(ms): 75.5, Max(ms): 402.7
tpmC: 52390.5, efficiency: 407.4%

makefile abnormal

build error

GO15VENDOREXPERIMENT="1" CGO_ENABLED=0 GOARCH=amd64 GO111MODULE=on go build -ldflags '-L/usr/local/opt/sqlite/lib -X "main.version=v1.0.7" -X "main.commit=2021-04-02 06:02:02" -X "main.date=e53e96a326a2b54e1ff2074927bad519ef914766"' -o ./bin/go-tpc cmd/go-tpc/*
command-line-arguments
flag provided but not defined: -L/usr/local/opt/sqlite/lib

normal
-L /usr/local/opt/sqlite/lib

LDFLAGS += -X "main.commit=$(shell date -u '+%Y-%m-%d %I:%M:%S')"
LDFLAGS += -X "main.date=$(shell git rev-parse HEAD)"

It's reversed?

Support E2E tests for some basic utilities

I built an image based on tiup playground (https://github.com/yeya24/tidb-playground) to support running a TiDB cluster in a Docker container. Maybe we can add some E2E tests based on it.

I think it would be good to run these tests in GitHub Actions CI, but I am not sure about the memory requirements, i.e., whether it will OOM or not.

We can start with something easy like tpcc prepare or tpcc check.
