
Fast Bitcoin Blockchain Postgres Import

Introduction

This is Go code aimed at importing the blockchain into Postgres as fast as possible. It is licensed under the Apache License, Version 2.0.

This code can import the entire blockchain from a Bitcoin Core node into a PostgreSQL database in 21 hours on fairly basic hardware.

Once the data is imported, the tool can append new blocks as they appear on the network by connecting to a Core node. You can then run this primitive (but completely private) Block Explorer against this database.

This project was started mostly for fun and learning and to find out whether putting the blockchain into PostgreSQL is (1) possible and (2) useful. We think "yes" on both, but you can draw your own conclusions.

Quick Overview

The source of the data is the Bitcoin Core data store. You need a Core instance to download the entire blockchain; the cmd/import tool can then read the data directly (not via RPC) by accessing the LevelDb block index, the blocks files and the UTXO set. (Core cannot be running while this happens, but that is only necessary during the initial bulk import of the data.)

You should be able to build the tool with go build cmd/import/import.go, then run it as follows (Core should not be running):

# Warning - this may take many hours
./import \
     -connstr "host=192.168.X.X dbname=blocks sslmode=disable" \
     -cache-size 100000000 \
     -blocks ~/.bitcoin/blocks

This will read all blocks and upload them to Postgres. The block descriptors are first read from the LevelDb block index, which contains file names and offsets to the actual block data. Using the block index lets us read blocks in order, which is essential for the correct setting of tx_id in outputs. For every output we also query the LevelDb UTXO set so that we can set the spent column correctly.
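For illustration, a block descriptor read from the index boils down to roughly the following (a sketch with hypothetical field names, not the project's actual types):

package example

// blockDescriptor is an illustrative sketch: the LevelDb block index tells
// us, for each block, which blkNNNNN.dat file under -blocks contains it and
// at what byte offset, so blocks can be read strictly in height order.
type blockDescriptor struct {
    Height  int      // block height, used to order the reads
    Hash    [32]byte // block hash
    FileNum int      // which blkNNNNN.dat file holds the block
    Offset  uint32   // byte offset of the block within that file
}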

The following log excerpt is from an import with exactly the parameters shown above, where the sending machine is a 2019 MacBook Pro with 32GB of RAM and the receiving (PostgreSQL server) machine is a 4-core i7 with 16GB of RAM running PostgreSQL version 15; both machines have SSDs.

2023/11/03 16:57:13 Setting open files rlimit of 256 to 1024.
2023/11/03 16:57:13 Tables created without indexes, which are created at the very end.
2023/11/03 16:57:13 Setting table parameters: autovacuum_enabled=false
2023/11/03 16:57:13 Reading block headers from LevelDb (/Users/grisha/Library/Application Support/Bitcoin/blocks/index)...
2023/11/03 16:57:14 Read 814062 block header entries.
2023/11/03 16:57:14 Ignoring orphan block 000000000000000000025edbf5ea025e4af2674b318ba82206f70681d97ca162
2023/11/03 16:57:15 Read 814062 block headers.
2023/11/03 16:57:18 Height: 64469 Txs: 72672 Time: 2010-07-05 15:34:27 -0400 EDT Tx/s: 14534.391593 KB/s: 3574.597873 Runtime: 5s
2023/11/03 16:57:23 Height: 93635 Txs: 191499 Time: 2010-11-24 13:40:43 -0500 EST Tx/s: 19148.955229 KB/s: 5088.048924 Runtime: 10s
2023/11/03 16:57:28 Height: 111512 Txs: 306357 Time: 2011-03-03 05:08:54 -0500 EST Tx/s: 20422.953787 KB/s: 5733.495734 Runtime: 15s
2023/11/03 16:57:33 Height: 118521 Txs: 412626 Time: 2011-04-15 15:15:05 -0400 EDT Tx/s: 20630.450404 KB/s: 6071.899925 Runtime: 20s
2023/11/03 16:57:38 Height: 124440 Txs: 512850 Time: 2011-05-16 17:18:22 -0400 EDT Tx/s: 20510.751291 KB/s: 6473.134698 Runtime: 25s
2023/11/03 16:57:38 Txid cache hits: 645079 (100.00%) misses: 0 collisions: 0 dupes: 2 evictions: 364709 size: 148139 procmem: 434 MiB
2023/11/03 16:57:43 Height: 128385 Txs: 613308 Time: 2011-06-03 12:27:53 -0400 EDT Tx/s: 20438.703244 KB/s: 6757.081103 Runtime: 30s
... snip ...
2023/11/04 03:27:07 WARNING: Txid cache collision at hash: 0f157800dba58b15ad242b3f7b48b4010079515e2c9e4702384cc701f05cebc0 existing id: 713414812 new id: 739931084 (prefix sz: 7).
... snip ...
2023/11/04 06:15:13 Height: 813296 Txs: 907828225 Time: 2023-10-22 01:13:19 -0400 EDT Tx/s: 18960.353894 KB/s: 10611.851878 Runtime: 13h18m0s
2023/11/04 06:15:19 Height: 813321 Txs: 907874418 Time: 2023-10-22 06:24:08 -0400 EDT Tx/s: 18959.286326 KB/s: 10611.507696 Runtime: 13h18m6s
2023/11/04 06:15:19 Txid cache hits: 2369583564 (99.91%) misses: 2042901 collisions: 1 dupes: 2 evictions: 777933798 size: 105238549 procmem: 16243 MiB
2023/11/04 06:15:24 Height: 813339 Txs: 907900812 Time: 2023-10-22 09:00:01 -0400 EDT Tx/s: 18957.833769 KB/s: 10610.989629 Runtime: 13h18m11s
2023/11/04 06:15:29 Closing channel, waiting for workers to finish...
2023/11/04 06:15:29 Height: 813369 Txs: 907964339 Time: 2023-10-22 13:56:02 -0400 EDT Tx/s: 18956.918563 KB/s: 10610.608260 Runtime: 13h18m16s
2023/11/04 06:15:30 Closed db channels, waiting for workers to finish...
2023/11/04 06:15:30 Tx writer channel closed, committing transaction.
2023/11/04 06:15:30 Block writer channel closed, commiting transaction.
2023/11/04 06:15:30 TxIn writer channel closed, committing transaction.
2023/11/04 06:15:30 TxOut writer channel closed, committing transaction.
2023/11/04 06:15:30 TxOut writer done.
2023/11/04 06:15:30 Block writer done.
2023/11/04 06:15:30 TxIn writer done.
2023/11/04 06:15:30 Tx writer done.
2023/11/04 06:15:30 Workers finished.
2023/11/04 06:15:30 Txid cache hits: 2369888696 (99.91%) misses: 2056367 collisions: 1 dupes: 2 evictions: 778046986 size: 105221498 procmem: 16243 MiB
2023/11/04 06:15:30 The following txids collided:
2023/11/04 06:15:30 Txid: 0f157800dba58b15ad242b3f7b48b4010079515e2c9e4702384cc701f05cebc0 prefix: c0eb5cf001c74c
2023/11/04 06:15:30 Cleared the cache.
2023/11/04 06:15:30 Creating indexes part 1, please be patient, this may take a long time...
2023/11/04 06:15:30   Starting txins primary key...
2023/11/04 07:17:54   ...done in 1h2m24.594s. Starting txs txid (hash) index...
2023/11/04 07:41:41   ...done in 23m47.193s.
2023/11/04 07:41:41 Running ANALYZE txins, _prevout_miss, txs to ensure the next step selects the optimal plan...
2023/11/04 07:42:22 ...done in 40.348s. Fixing missing prevout_tx_id entries (if needed), this may take a long time..
2023/11/04 07:42:22   max prevoutMiss id: 2056367 parallel: 8
2023/11/04 07:42:22   processing range [1, 10001) of 2056367...
2023/11/04 07:42:22   processing range [10001, 20001) of 2056367...
... snip ...
2023/11/04 07:49:32   processing range [2050001, 2060001) of 2056367...
2023/11/04 07:49:39 ...done in 7m17.348s.
2023/11/04 07:49:39 Creating indexes part 2, please be patient, this may take a long time...
2023/11/04 07:49:39   Starting blocks primary key...
2023/11/04 07:49:41   ...done in 1.89s. Starting blocks prevhash index...
2023/11/04 07:49:42   ...done in 718ms. Starting blocks hash index...
2023/11/04 07:49:42   ...done in 695ms. Starting blocks height index...
2023/11/04 07:49:43   ...done in 450ms. Starting txs primary key...
2023/11/04 07:59:06   ...done in 9m23.284s. Starting block_txs block_id, n primary key...
2023/11/04 08:11:27   ...done in 12m20.978s. Starting block_txs tx_id index...
2023/11/04 08:20:20   ...done in 8m53.257s. Creatng hash_type function...
2023/11/04 08:20:20   ...done in 40ms. Starting txins (prevout_tx_id, prevout_tx_n) index...
2023/11/04 09:02:06   ...done in 41m45.629s. Starting txouts primary key...
2023/11/04 09:42:49   ...done in 40m42.816s. Starting txouts address prefix index...
2023/11/04 11:12:41   ...done in 1h29m52.436s. Starting txins address prefix index...
2023/11/04 14:32:54   ...done in 3h20m13.17s.
2023/11/04 14:32:54 Creating constraints (if needed), please be patient, this may take a long time...
2023/11/04 14:32:54   Starting block_txs block_id foreign key...
2023/11/04 14:32:55   ...done in 173ms. Starting block_txs tx_id foreign key...
2023/11/04 14:32:55   ...done in 7ms. Starting txins tx_id foreign key...
2023/11/04 14:32:55   ...done in 6ms. Starting txouts tx_id foreign key...
2023/11/04 14:32:55   ...done in 6ms.
2023/11/04 14:32:55 Creating txins triggers.
2023/11/04 14:32:55 Dropping _prevout_miss table.
2023/11/04 14:32:55 Marking orphan blocks (whole chain)...
2023/11/04 14:33:37 Done marking orphan blocks in 42.163s.
2023/11/04 14:33:37 Reset table storage parameters: autovacuum_enabled.
2023/11/04 14:33:37 Indexes and constraints created.
2023/11/04 14:33:37 All done in 21h36m23.9s.

There are two phases to this process: the first is simply streaming the data into Postgres, the second is building indexes and constraints and otherwise tying up loose ends.

The -cache-size parameter sizes the cache mapping txid (the SHA256 hash) to the database tx_id, which import maintains on the fly. This cache is also used to identify duplicate transactions. A cache of 100M entries achieves a 99.9% hit rate (as of Nov 2023, see above). The missing ids are corrected in a later step, but setting as many as possible up front reduces the time that correction takes. A 100M entry cache results in the import process using ~16GB of RAM.
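Conceptually, the cache maps a short prefix of the transaction hash to the numeric tx_id assigned during the import (the collision warning in the log above mentions a 7-byte prefix). A minimal sketch of the idea, with hypothetical names that do not reflect the actual implementation:

package example

// txidCache is a minimal sketch of the txid -> tx_id cache idea, keyed on a
// short prefix of the transaction hash. Illustration only; the real cache
// also counts hits, misses, dupes and collisions and evicts entries to stay
// within -cache-size.
type txidCache struct {
    m     map[[7]byte]int64 // 7-byte txid prefix -> database tx_id
    limit int               // roughly what -cache-size controls
}

func newTxidCache(limit int) *txidCache {
    return &txidCache{m: make(map[[7]byte]int64, limit), limit: limit}
}

func (c *txidCache) add(txid [32]byte, id int64) {
    if len(c.m) >= c.limit {
        return // the real implementation evicts older entries instead
    }
    var k [7]byte
    copy(k[:], txid[:7])
    c.m[k] = id
}

func (c *txidCache) lookup(txid [32]byte) (int64, bool) {
    var k [7]byte
    copy(k[:], txid[:7])
    id, ok := c.m[k]
    return id, ok // a prefix match can, very rarely, be a collision
}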

After the initial import, the tool can "catch up" by importing new blocks not yet in the database. Catching up is many times slower than the initial import because it no longer has the luxury of writing without indexes and constraints. The catch up does not read LevelDb; it simply uses the Bitcoin protocol to request new blocks from the node. If you specify the -wait option, import will wait for new blocks as they are announced and write them to the DB. For example:

# In this example a full Core node is running on 192.168.A.B;
# new blocks will be written as they come in.
./import \
    -connstr "host=192.168.X.X dbname=blocks sslmode=disable" \
    -nodeaddr 192.168.A.B:8333 -wait

PostgreSQL Tuning

  • Do not underestimate the importance of the sending (client) machine's performance; it is possible for the client side to be unable to keep up with Postgres. During the initial data load, all Postgres needs to do is stream the incoming tuples to disk, while the client needs to parse the blocks, format the data for the Postgres writes and maintain a cache of the tx_ids. Once the initial load is done, the burden shifts onto the server, which needs to build indexes. You can specify -connstr nulldb to make all database operations no-ops, akin to writing to /dev/null. Try running it this way to see the maximum speed your client is capable of before attempting to tune the Postgres side.

  • Using SSDs on the Postgres server (as well as on the sending machine) will make this process go much faster. Remember to set random_page_cost to 1 or less, depending on how fast your disk really is. The blockchain will occupy more than 600GB on disk, and this will grow as time goes on.

  • Turning off synchronous_commit and setting commit_delay to 100000 should make the import faster. Turning fsync off entirely might make it faster still (heed the documentation warnings).

  • shared_buffers should not be set high; PostgreSQL does better relying on the OS disk buffer cache. Shared buffers are faster than the OS cache on hits but more expensive on misses, which is why the PG docs advise against relying on them unless the whole working set fits in shared buffers. Of course, if your PG server has 512GB of RAM, this advice does not apply.

  • Setting maintenance_work_mem high should help speed up index building. Note that it can be set temporarily right in the connection string (-connstr "host=... maintenance_work_mem=2GB"). Increasing max_parallel_maintenance_workers will also help with index building; each worker gets maintenance_work_mem divided by max_parallel_maintenance_workers.

  • Setting wal_writer_delay to the maximum value of 10000 and increasing wal_buffers and wal_writer_flush_after should, in theory, speed up the initial import.

  • Setting wal_level to minimal may help as well. (You will also need to set max_wal_senders to 0 if you use minimal).

ZFS

Using a filesystem which supports snapshots is very useful for development of this project, because it provides the ability to quickly roll back to a snapshot should anything go wrong.

ZFS (at least on a single disk) seems slower than ext4, but still well worth it. The settings we ended up with are:

zfs set compression=zstd-1 tank/blocks # lz4 if your zfs is old
zfs set atime=off tank/blocks
zfs set primarycache=all tank/blocks
zfs set recordsize=16k tank/blocks
zfs set logbias=latency tank/blocks

If you use ZFS, then in the Postgres config it is advisable to set full_page_writes, wal_init_zero and wal_recycle to off.

Internals of the Data Stream

The initial data stream is done via COPY, with a separate goroutine streaming to each table. We read blocks in order and iterate over the transactions therein; the transactions are split into inputs, outputs, etc., and each of those records is sent over a channel to the goroutine responsible for that table. This approach is very performant.
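A rough sketch of the pattern for one table, using lib/pq's COPY support (an illustration of the approach under simplified assumptions, not the project's actual code; the row type is simplified):

package example

import (
    "database/sql"

    "github.com/lib/pq"
)

// txoutWriter sketches one per-table writer goroutine: it receives rows over
// a channel and streams them into Postgres via COPY, committing once the
// channel is closed. Illustration only.
func txoutWriter(db *sql.DB, rows <-chan []interface{}, done chan<- error) {
    txn, err := db.Begin()
    if err != nil {
        done <- err
        return
    }
    // COPY txouts (tx_id, n, value, scriptpubkey, spent) FROM STDIN
    stmt, err := txn.Prepare(pq.CopyIn("txouts", "tx_id", "n", "value", "scriptpubkey", "spent"))
    if err != nil {
        done <- err
        return
    }
    for row := range rows { // each record arrives over the channel
        if _, err := stmt.Exec(row...); err != nil {
            done <- err
            return
        }
    }
    if _, err := stmt.Exec(); err != nil { // an Exec with no arguments flushes the COPY
        done <- err
        return
    }
    if err := stmt.Close(); err != nil {
        done <- err
        return
    }
    done <- txn.Commit()
}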

On catch up the process is slightly more complicated because we need to ensure that referential integrity is maintained: each block must be followed by a commit, and all outputs in a block must be committed before its inputs.
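In code, the catch-up ordering looks roughly like this (the commit helpers are hypothetical stand-ins, shown only to make the dependency order explicit):

package example

import "database/sql"

// block is a placeholder for a parsed block (illustration only).
type block struct{}

// The commit helpers below are hypothetical stand-ins for whatever actually
// writes each table; they exist only to make the ordering explicit.
func commitOutputs(db *sql.DB, b *block) error  { return nil }
func commitInputs(db *sql.DB, b *block) error   { return nil }
func commitBlockRow(db *sql.DB, b *block) error { return nil }

// writeBlock sketches the catch-up dependency order: all outputs are
// committed before any inputs that may reference them, and the block is
// fully committed before the next block is started.
func writeBlock(db *sql.DB, b *block) error {
    if err := commitOutputs(db, b); err != nil {
        return err
    }
    if err := commitInputs(db, b); err != nil {
        return err
    }
    return commitBlockRow(db, b)
}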

blkchain's People

Contributors

crabel99, dependabot[bot], freewil, grisha, yaslama


blkchain's Issues

ERROR: Time out.

@grisha Sorry to interrupt you, I would like to ask for some help, thank you very much. I don't know why, but when block 688652 is read, its data is not read correctly. Looking at the log I find the error ERROR: Time out. Below is the data my SQL query returns; this block has lost a lot of data.

bitcoinblocks=# select * from txouts where tx_id in (select tx_id from block_txs where block_id =688652);
   tx_id   | n |   value   |                                                  scriptpubkey                                                  | spent 
-----------+---+-----------+----------------------------------------------------------------------------------------------------------------+-------
 651269600 | 0 | 625000000 | \x76a914c825a1ecf2a6830c4401620c3a16f1995057c2ab88ac                                                           | f
 651269600 | 1 |         0 | \x6a34486174685d6021d30fb9ab3e2de65b8c0d8943f1bbb9c8bab086c2d34dd01f4eafa5681c4a77f2c3171148c39dedc141cda8dedc | f
 651269600 | 2 |         0 | \x6a4c2952534b424c4f434b3a0bde938d409648c985a31b79d335172cc03b2dce726b9d9284309d2800347b77                     | f
 651269600 | 3 |         0 | \x6a24b9e11b6d456119451d3227d9af6aca1066eba30454688c5f5235cd326cebba4fa09839bc                                 | f

I deleted the data above block 688600 and tried to synchronize again, but the problem is still not solved. Are you using this to synchronize your own data? Have you encountered this problem?

Build fails on Windows

C:\Users\Administrator>go build D:\GO\src\github.com\blkchain\blkchain\cmd\import\import.go

command-line-arguments

D:\GO\src\github.com\blkchain\blkchain\cmd\import\import.go:305:13: undefined: syscall.Rlimit
D:\GO\src\github.com\blkchain\blkchain\cmd\import\import.go:306:12: undefined: syscall.Getrlimit
D:\GO\src\github.com\blkchain\blkchain\cmd\import\import.go:306:30: undefined: syscall.RLIMIT_NOFILE
D:\GO\src\github.com\blkchain\blkchain\cmd\import\import.go:312:13: undefined: syscall.Setrlimit
D:\GO\src\github.com\blkchain\blkchain\cmd\import\import.go:312:31: undefined: syscall.RLIMIT_NOFILE
D:\GO\src\github.com\blkchain\blkchain\cmd\import\import.go:315:13: undefined: syscall.Getrlimit
D:\GO\src\github.com\blkchain\blkchain\cmd\import\import.go:315:31: undefined: syscall.RLIMIT_NOFILE

Stuck on "Marking orphan blocks" stage for hours

2023/06/03 06:15:05 Creating indexes part 2, please be patient, this may take a long time...
2023/06/03 06:15:05   Starting blocks primary key...
2023/06/03 06:15:06   ...done in 916ms. Starting blocks prevhash index...
2023/06/03 06:15:06   ...done in 380ms. Starting blocks hash index...
2023/06/03 06:15:06   ...done in 324ms. Starting blocks height index...
2023/06/03 06:15:07   ...done in 185ms. Starting txs primary key...
2023/06/03 06:18:35   ...done in 3m27.893s. Starting block_txs block_id, n primary key...
2023/06/03 06:22:50   ...done in 4m15.126s. Starting block_txs tx_id index...
2023/06/03 06:26:14   ...done in 3m24.282s. Creatng hash_type function...
2023/06/03 06:26:14   ...done in 231ms. Starting txins (prevout_tx_id, prevout_tx_n) index...
2023/06/03 06:43:09   ...done in 16m54.5s. Starting txouts primary key...
2023/06/03 06:57:32   ...done in 14m22.83s. Starting txouts address prefix index...
2023/06/03 07:33:16   ...done in 35m44.219s. Starting txins address prefix index...
2023/06/03 08:36:26   ...done in 1h3m10.469s.
2023/06/03 08:36:26 Creating constraints (if needed), please be patient, this may take a long time...
2023/06/03 08:36:26   Starting block_txs block_id foreign key...
2023/06/03 08:39:04   ...done in 2m37.158s. Starting block_txs tx_id foreign key...
2023/06/03 08:51:18   ...done in 12m14.971s. Starting txins tx_id foreign key...
2023/06/03 09:16:55   ...done in 25m36.902s. Starting txouts tx_id foreign key...
2023/06/03 09:36:08   ...done in 19m13.096s.
2023/06/03 09:36:09 Creating txins triggers.
2023/06/03 09:36:09 Dropping _prevout_miss table.
2023/06/03 09:36:09 Marking orphan blocks (whole chain)...

Hi, the Marking orphan blocks (whole chain) stage has been running for 6 hours already, but in the README it took only about 45 seconds:

2023/01/14 08:15:05 Marking orphan blocks (whole chain)...
2023/01/14 08:15:50 Done marking orphan blocks in 44.698s.

Should I wait longer?

Txin commit error: pq: Unknown tx_id:prevout_n combination: 648538036:1

@grisha
When my database restarts unexpectedly, some txids are lost when the program starts up again, and the following errors appear in the log. How can I fix this problem? Thank you very much.

2021/06/12 00:53:03 Txin commit error: pq: Unknown tx_id:prevout_n combination: 648538036:1
2021/06/12 00:53:05 Txin commit error: pq: Unknown tx_id:prevout_n combination: 648537969:0
2021/06/12 00:53:06 Txin commit error: pq: Unknown tx_id:prevout_n combination: 648533578:0

ERROR: Problem finding valid parent when eliminating orphans.

super@ubuntu:/data/github/blkchain$ ./import -connstr "host=XXXX dbname=blocks sslmode=disable" -blocks ~/.ulordcore/blocks/
2018/06/26 11:01:47 Reading block headers from LevelDb (/home/super/.ulordcore/blocks/index)...
2018/06/26 11:01:47 Read 20540 block header entries, maxHeight: 20441.
2018/06/26 11:01:47 Ignoring orphan block b0c8f27be0cb6a534fe735a37751ec17efd90cce6bb284a2706d9de50d2c5f7f
2018/06/26 11:01:47 Ignoring orphan block 6009fcbc657a641ee190242f0a333139367bcd7ffdf3fc61bf50cab00d39d04c
2018/06/26 11:01:47 ERROR: Problem finding valid parent when eliminating orphans.

Any help?

Unexpected stop of bitcoind will cause data loss during synchronization

@grisha Thanks again for your help. An unexpected stop of my bitcoind causes data loss during synchronization, and afterwards the log is full of errors:

2021/06/15 23:52:27 ERROR (13.6): driver: bad connection
2021/06/15 23:52:27 ERROR (13.6): driver: bad connection
2021/06/15 23:52:27 ERROR (7): driver: bad connection
2021/06/15 23:52:27 ERROR (7.5): driver: bad connection
2021/06/15 23:52:27 ERROR (13.6): driver: bad connection
2021/06/15 23:52:27 ERROR (13.6): driver: bad connection
2021/06/15 23:52:27 ERROR (13.6): driver: bad connection
2021/06/15 23:52:27 Tx commit error: driver: bad connection
2021/06/15 23:52:27 Block Txs commit error: driver: bad connection
2021/06/15 23:52:27 TxOut commit error: driver: bad connection
2021/06/15 23:52:27 Txin commit error: driver: bad connection
2021/06/15 23:52:27 Done writing block 00000000000000000008ab171664ae2e6618c34ab7ee2faf018ee2cf70f89a80.
2021/06/15 23:52:27 Marking orphan blocks going back 10...
2021/06/15 23:52:27 Height: 687708 Txs: 835758 Time: 2021-06-15 23:52:33 +0800 CST Tx/s: 3.230544 KB/s: 2.487218
2021/06/15 23:52:27 Marking orphan blocks done.
2021/06/16 00:10:45 Received a block: 00000000000000000009054a94e8ca216cd537610bcedc62d3df3f0d97631f9f
2021/06/16 00:10:45 Waiting for a block...
2021/06/16 00:10:45 Writing block 00000000000000000009054a94e8ca216cd537610bcedc62d3df3f0d97631f9f...
2021/06/16 00:10:45 pgBlockWorker: Could not connect block to a previous block on our chain, ignoring it.
2021/06/16 00:10:45 Error writing block: 00000000000000000009054a94e8ca216cd537610bcedc62d3df3f0d97631f9f
2021/06/16 00:10:45 Write failed - exiting processEachNewBlock() (00000000000000000009054a94e8ca216cd537610bcedc62d3df3f0d9
7631f9f)
2021/06/16 00:24:01 Received a block: 000000000000000000079fc2767c7afa7ff742892a2cea745d1125e784982925
2021/06/16 00:24:01 Exiting processEachNewBlock() on writing error, possibly inventory skipped a block.
2021/06/16 00:24:01 Reading block headers from Node (10.10.17.5:8333)...
2021/06/16 00:24:01 Received batch of 3 headers.
2021/06/16 00:24:01 End of headers (for now).
2021/06/16 00:24:01 Read 3 block headers.

When I query the block in SQL, there is a problem with its id:

bitcoinblocks=# select * from blocks where height=687709;
-[ RECORD 1 ]------------------------------------------------------------------
id         | 687712
height     | 687709
hash       | \x9f1f63970d3fdfd362dcce0b6137d56c21cae8944a0509000000000000000000
version    | 939515908
prevhash   | \x809af870cfe28e01af2feeb74ac318662eae641617ab08000000000000000000
merkleroot | \xcc8ad64d7a911799a205697140e074cbce77aa8efe43abd364da1723c0957e56
time       | 1623773427
bits       | 386801401
nonce      | 1905847871
orphan     | f
size       | 1408470
base_size  | 861592
weight     | 3993246
virt_size  | 999307

There is no data when I query txins:

bitcoinblocks=# select * from txins where tx_id in (select tx_id from block_txs where block_id =687714);
(0 rows)

But there is data in txouts:

bitcoinblocks=# select * from txouts where tx_id in (select tx_id from block_txs where block_id =686714);
-[ RECORD 1 ]+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
tx_id        | 647687464
n            | 0
value        | 447076
scriptpubkey | \x76a914da0c3ca25f956c43f10db42675b5efc4c85b648488ac
spent        | t
-[ RECORD 2 ]+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
tx_id        | 647687464
n            | 1
value        | 55899192
scriptpubkey | \xa914e82029e9e73a0d9fbcbc53a1be4bff3d39d8441587
spent        | t
-[ RECORD 3 ]+---------------------------------------
