We currently use Vec to black box FROST and simply say what has to be sent around, with it being expected to be sent securely as needed. By moving deserialization, and serialization, out of the core algorithm and integrating the type system into calls, we may have a more preferable API.
The logical endpoint of this path would be taking in a broadcast function and a per-participant communication function, calling each as needed and integrating encryption as needed. I believe that would be a step too far, as the processor aims to handle this, yet simply moving out serialization code may be beneficial.
Instead of calling it serialize, we could call it broadcast/participant, depending on the package, to signify how it should be communicated as well.
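As a rough sketch of the direction (the trait and type names here are illustrative, not the actual FROST API), a message type could own its serialization while leaving transport to the caller:

```rust
use std::io::{self, Read, Write};

// Hypothetical trait; a message knows how to write itself, and the caller
// decides whether it's broadcast or sent to a specific participant.
pub trait WireMessage: Sized {
  fn write<W: Write>(&self, writer: &mut W) -> io::Result<()>;
  fn read<R: Read>(reader: &mut R) -> io::Result<Self>;

  // Convenience helper mirroring the current Vec<u8> black box.
  fn serialize(&self) -> Vec<u8> {
    let mut buf = Vec::new();
    self.write(&mut buf).expect("writes to a Vec<u8> are infallible");
    buf
  }
}

// Example message: a 32-byte commitment-round payload.
pub struct Commitments(pub [u8; 32]);

impl WireMessage for Commitments {
  fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
    writer.write_all(&self.0)
  }
  fn read<R: Read>(reader: &mut R) -> io::Result<Self> {
    let mut buf = [0; 32];
    reader.read_exact(&mut buf)?;
    Ok(Commitments(buf))
  }
}
```

Under the broadcast/participant naming, `serialize` would instead be exposed as two methods signifying the intended transport, yet the wire format itself would be identical.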
Instructions are DEX commands included within the data fields of each supported currency. Instructions provide the necessary context so that the protocol understands intent and destination.
Define instructions for:
Ink: https://ink.substrate.io
Move: https://move-language.github.io/move/
Ink Advertised Pros:
Move Advertised Pros:
Move Practical Cons:
In the long run, Move could end up being a more widely adopted language, increasing performance and making it easier to comment on security. It also may be better for developers to write contracts in, so if we ever open up SC deployment, it'd be more accessible. I'll probably spend time with both over the next few days, as I have yet to spend any notable amount of time with either, comparing how it feels to write code, the success of the object model, and the feasibility of the formal verification. If Move is sufficiently well designed, the question becomes performance/integration effort.
hidapi, which I don't know how to ignore on the Monero end, is an optional lib currently linked by the monero crate. Furthermore, epee/randomx/libunbound are linked, as is boost/sodium/easylogging. Ideally, all of these would be removed.
Instead of obtaining context and so on, a Merlin transcript could be passed and altered as needed. The issue is that the FROST IETF draft requires a specific serialization format for its binding factor calculation, which must be honored. This presumably means we can only use Merlin transcripts when an offset/context is specified, which may be a pain to manage (or relatively simple).
The inevitable Zcash standard for handling randomization (offsets) also should be considered, yet unfortunately can't be discussed yet as they haven't released anything nor have they answered my questions.
Thanks to @noot, we now have a Ethereum Schnorr contract. Our final contract still needs two features:
This issue is specifically for the latter, with a follow up issue being for the former.
By taking in a series of (address, calldata) pairs, we can execute an arbitrary list of transactions. We can check just a single signature for all transactions if we use one of:
- `accum = keccak256(abi.encodePacked(accum, address, data))`, folded over each pair
- `keccak256(abi.encode(addresses, datas))`
- `calldatasize` and `calldatacopy`, to avoid `abi.encode` entirely. This has a slight degree of malleability, as data could be put after the calldata and included in this hash, yet exploiting that would require finding a hash collision, so this shouldn't be an issue.

I assume the last one will be cheapest, yet wanted to note the ideas I had so we can ensure this has minimal costs.
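The running-hash option's accumulator structure can be sketched as follows, using std's `DefaultHasher` purely as a stand-in for `keccak256` over `abi.encodePacked` (the names and types here are illustrative, not the contract's actual code):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for keccak256(abi.encodePacked(accum, address, data)). Only the
// accumulator structure is of interest here, not the hash function.
fn step(accum: u64, address: &[u8; 20], data: &[u8]) -> u64 {
  let mut h = DefaultHasher::new();
  accum.hash(&mut h);
  address.hash(&mut h);
  data.hash(&mut h);
  h.finish()
}

// Fold the running hash over every (address, calldata) pair, in order, so a
// single signature can commit to the entire transaction list.
pub fn accumulate(txs: &[([u8; 20], Vec<u8>)]) -> u64 {
  txs.iter().fold(0, |accum, (address, data)| step(accum, address, data))
}
```

Since each step commits to the prior accumulator, the signature binds both the contents and the ordering of the list.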
Due to using a Schnorr signature, someone must also publish this transaction. Accordingly, they should either:
A) Get a refund for doing so, from the contract. This would have the Schnorr signature also specify a gas price, verifying it's the one actually used via `gasprice`. This would create a leaderless scenario.
B) This can be left out-of-contract to some sort of DAO, yet I'd like for it to be in the protocol (under A).
I believe we'd want to manually implement the Ed25519 FieldElement field as a PrimeField under dalek-ff-group, as dalek doesn't expose it, and then build the hash_to_point on top of that.
Related zkcrypto/group#30.
This was 'patched' in 2ae715f by rejecting scanning such transactions. While not as comprehensive as it could be, Monero transactions don't use timelocks and someone receiving funds likely doesn't care about locked funds. They care about usable funds.
Checking block-based timelocks, due to an erroneous wallet which uses very low timelock values existing, could be beneficial, yet passing a height, as needed to ensure coordination, is a pain. Simply checking if the timelock is < 60, which all transactions have as an implicit timelock due to miner maturity preventing outputs from being spent before then, would handle said wallet without requiring an extra argument.
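A minimal sketch of that check, assuming a `Timelock` enum shaped like Monero's lock field (the names here are illustrative):

```rust
// Monero outputs carry an implicit 60-block maturity (miner outputs must age
// 60 blocks before spending), so a block-based timelock under 60 is a no-op
// and can be accepted without the caller passing a height.
const COINBASE_LOCK_WINDOW: u64 = 60;

#[derive(Clone, Copy)]
pub enum Timelock {
  None,
  Block(u64),
  Time(u64),
}

// Returns true if the timelock is trivially satisfied without knowing the
// current height/time, handling the erroneous low-value-timelock wallet.
pub fn trivially_unlocked(timelock: Timelock) -> bool {
  match timelock {
    Timelock::None => true,
    // The implicit maturity already covers any lock under 60 blocks.
    Timelock::Block(height) => height < COINBASE_LOCK_WINDOW,
    Timelock::Time(_) => false,
  }
}
```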
That patch was an immediate reaction which broke miner transactions, which are explicitly scanned and therefore their maturity properties should either be assumed to be handled, OR the API should return a tuple of (Vec, Option). I'm currently leaning towards the latter as it's comprehensive.
Thanks to @Rucknium for reminding me these are an issue.
I believe the optimal solution for this is instead of taking an G in, take in an Enum of G, SmallTable, or LargeTable (rough names, please don't use), allowing the caller to reuse where possible where multiexp expands the rest as needed.
Would notably speed up FROST key generation.
Related to #41, which is intending to provide large compile time tables.
Likely benefits from being done after #12.
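The proposed enum can be sketched over a toy stand-in "group" (addition mod a prime, where scalar multiplication is repeated addition), since only the API shape matters here; `PointRef` and `SmallTable` are rough placeholder names, per the above:

```rust
// Toy group: u64 values under addition mod P.
const P: u64 = 0xffff_ffff_0000_0001;

fn add(a: u64, b: u64) -> u64 {
  ((a as u128 + b as u128) % (P as u128)) as u64
}

pub enum PointRef<'a> {
  // Bare point: multiexp expands whatever tables it needs internally.
  Point(u64),
  // Caller-provided window of precomputed multiples [1G, 2G, .., 15G],
  // reusable across calls.
  SmallTable(&'a [u64; 15]),
}

pub fn small_table(point: u64) -> [u64; 15] {
  let mut table = [0; 15];
  table[0] = point;
  for i in 1 .. 15 {
    table[i] = add(table[i - 1], point);
  }
  table
}

// 4-bit windowed scalar multiplication, reusing the table when present.
pub fn mul(point: PointRef, scalar: u64) -> u64 {
  let table = match point {
    PointRef::Point(p) => small_table(p),
    PointRef::SmallTable(t) => *t,
  };
  let mut res = 0;
  for nibble in (0 .. 16).rev() {
    // Shift the accumulator up by one 4-bit window (res *= 16).
    for _ in 0 .. 4 {
      res = add(res, res);
    }
    let bits = ((scalar >> (nibble * 4)) & 0xf) as usize;
    if bits != 0 {
      res = add(res, table[bits - 1]);
    }
  }
  res
}
```

The caller pays the table construction cost once and passes `SmallTable` thereafter, while `Point` keeps the existing ergonomics for one-off generators.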
This flies against good practice and is suboptimal, being a security risk in the worst case (though one already requiring some level of system access).
This enables trivial recovery of the real spend by chain analysis for a malicious Monero node. While that isn't expected to be an issue with Serai, a QoL improvement which will minimally affect Serai would be to include the real spend regardless.
These would range from Curve tests which would verify canonical functions are appropriately canonical (as possible) to algorithm tests which would verify signature share validation works as expected.
Theoretically, a Chinese-Remainder-Theorem-based Distributed Key Generation protocol would enable us to have weighted key shares while keeping a linear runtime complexity when signing, from my understanding.
I'll admit to not fully understanding the coprimality requirements on key generation, and it does look like key generation remains exponential, yet we may be able to achieve an exponential key generation with fewer parties due to the presence of weights, and accordingly still have lower computational complexity. The other concern is the utter lack of implementations of such schemes, despite papers existing for years on the topic, and their description as leader schemes without explicitly providing leaderless transformations.
These both appear to be of notable historical relevance. 2005 is when CRT seemed to be re-adopted for consideration, spawning a series of papers. These papers took advantage of CRT for further functionality not present under Shamir's work alone, and have a variety of formations. Some share multiple secrets, while sublinearly increasing complexity. Others define levels of participants with differing considerations. The "weighted" schemes target the model where any set of key shares, whose weights sum to >= 1, are eligible to recreate the key, and are the ones up for consideration here.
"Distributive Weighted Threshold Secret Sharing Schemes" - Dragan, Tiplea (2016) caught my eye immediately.
"Weighted Secret Sharing Based on the Chinese Remainder Theorem" - Harn, Fuyou (2013) was frequently acknowledged by the former.
There are also a few papers specifically detailing constructions over elliptic curves ("Threshold verifiable multi-secret sharing based on elliptic curves and Chinese remainder theorem" - Maryam Sheikhi-Garjan, Mojtaba Bahramian, Christophe Doche (2018)), which may be worth referring to if implementation is attempted.
I have not done extensive review on the above, and am not ready to endorse any for implementation, yet rather solely wanted to collect them here as notes. This likely won't be tackled until after mainnet, if we do it.
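For intuition on the CRT mechanics these papers build on, here's a toy Mignotte-style sketch over small integers. This is NOT a secure or verifiable scheme and the weighted variants differ in their details; it solely illustrates the share/reconstruct arithmetic, where weighting falls out naturally by assigning a party a larger modulus (or several moduli):

```rust
// Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g.
fn ext_gcd(a: i128, b: i128) -> (i128, i128, i128) {
  if b == 0 {
    return (a, 1, 0);
  }
  let (g, x, y) = ext_gcd(b, a % b);
  (g, y, x - (a / b) * y)
}

// Share: s_i = secret mod m_i, for pairwise coprime moduli. The threshold is
// implicit: a subset recovers the secret iff the product of its moduli
// exceeds the secret.
pub fn share(secret: i128, moduli: &[i128]) -> Vec<(i128, i128)> {
  moduli.iter().map(|&m| (secret % m, m)).collect()
}

// Reconstruct via the Chinese Remainder Theorem from a subset of shares.
pub fn reconstruct(shares: &[(i128, i128)]) -> i128 {
  let prod: i128 = shares.iter().map(|(_, m)| m).product();
  let mut res = 0;
  for &(s, m) in shares {
    let n = prod / m;
    // Invert n mod m; coprimality of the moduli guarantees this exists.
    let (_, inv, _) = ext_gcd(n % m, m);
    res = (res + s * inv.rem_euclid(m) * n) % prod;
  }
  res.rem_euclid(prod)
}
```

Note how reconstruction is a single CRT pass regardless of weights, which is the linear-signing-complexity property motivating the interest above.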
Blocks which fail contextual checks on consensus should still glean the top block hash the other node is attempting to use, and after sufficient discrepancies are reported, drop that chain until resyncing occurs.
This would use monero-rs's serialization, reducing our load and offering better integration, while fixing #23.
monero-rs would have to be updated for the latest hard fork for this to be considered however.
https://gist.github.com/kayabaNerve/8066c13f1fe1573286ba7a2fd79f6100
This is the official address format I've proposed, for adoption not only by Serai but by the larger ecosystem. It's a trivial edit from Serai's current internal definition (a string constant off), with the notable aspects being:
I'm not confident their current length checks are sufficient, and it's poor overall.
The C file should set a boolean to true every time randomness is used, letting test functions be defined which set it to false, call the functions in question which are expected not to use randomness, and verify the boolean remains false.
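The described flag, transliterated to Rust for illustration (the actual implementation would live in the C file; these names are hypothetical):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Mirrors the described C-side boolean: any function which samples
// randomness flips this to true.
static USED_RANDOMNESS: AtomicBool = AtomicBool::new(false);

pub fn note_randomness() {
  USED_RANDOMNESS.store(true, Ordering::SeqCst);
}

// Test helper: reset the flag, run the function under test, and verify the
// flag remained false throughout.
pub fn assert_deterministic<F: FnOnce()>(f: F) {
  USED_RANDOMNESS.store(false, Ordering::SeqCst);
  f();
  assert!(
    !USED_RANDOMNESS.load(Ordering::SeqCst),
    "function unexpectedly used randomness"
  );
}

// Example functions under test.
pub fn deterministic_op() -> u8 {
  42
}
pub fn randomized_op() -> u8 {
  note_randomness();
  42
}
```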
I'm actually not sure how the monero library handles this. They won't be rejected at the RPC level, yet will the created TX have torsion, making it unable to be parsed back? Will the torsion be cleared naturally? Can we explicitly clear the torsion successfully?
Almost, if not exactly, zero functionality from monero-rs is being used regarding key management, TX scanning, and address handling. Yet we are bound by its types, as they offer serialization, which I didn't bother to implement yet. This creates a very kludgy system using fields such as
k_image: Hash([u8; 32])
TxOut { amount: VarInt(u64), target: TxOutTarget { ToKey { key: PublicKey { point: CompressedEdwardsY } }, .. } }
While this makes sense for monero-rs, it does not offer value for serai.
There's no tagged release for them yet, hence why I don't care to update quite yet, but this library is decently worthless without v16 support.
This library should maintain support for historical v14 transactions, as we already have full support for them, yet create a new BulletproofsPlus file with corresponding support.
Resolution, and carrying, of blame for arbitrary errors via calling `.blame()` would be very beneficial.
The current DLEq proof only supports a pair of generators, primary and alt. It should be trivial to have it support an arbitrary amount, and enable optimizations with multi-generator FROST nonces.
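The generalization over an arbitrary generator slice can be sketched as follows, using a toy additive group mod a prime (where the discrete log is trivial; only the proof's shape matters) and std's `DefaultHasher` as a stand-in for a real transcript:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const P: u128 = 0xffff_ffff_0000_0001;

fn mul(a: u128, b: u128) -> u128 {
  ((a % P) * (b % P)) % P
}
fn add(a: u128, b: u128) -> u128 {
  (a + b) % P
}

// Stand-in Fiat-Shamir challenge over all commitments and keys.
fn challenge(commitments: &[u128], keys: &[u128]) -> u128 {
  let mut h = DefaultHasher::new();
  commitments.hash(&mut h);
  keys.hash(&mut h);
  (h.finish() as u128) % P
}

pub struct DLEqProof {
  commitments: Vec<u128>, // R_i = r * G_i, one per generator
  response: u128,         // s = r + c * x
}

// r is the nonce; a real implementation samples it randomly. It's a
// parameter here purely to keep the sketch deterministic.
pub fn prove(generators: &[u128], x: u128, r: u128) -> (Vec<u128>, DLEqProof) {
  let keys: Vec<u128> = generators.iter().map(|&g| mul(x, g)).collect();
  let commitments: Vec<u128> = generators.iter().map(|&g| mul(r, g)).collect();
  let c = challenge(&commitments, &keys);
  (keys, DLEqProof { commitments, response: add(r, mul(c, x)) })
}

pub fn verify(generators: &[u128], keys: &[u128], proof: &DLEqProof) -> bool {
  let c = challenge(&proof.commitments, keys);
  generators.iter().zip(keys).zip(&proof.commitments).all(|((&g, &k), &r_i)| {
    // s * G_i == R_i + c * K_i must hold for every generator.
    mul(proof.response, g) == add(r_i, mul(c, k))
  })
}
```

Note the cost scales as one commitment per generator while the response stays a single scalar, which is what enables the multi-generator FROST nonce optimization.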
While there's absolutely no reason for this to matter, as key shares shouldn't have cross talk here, the processor should be aware of what network it's running on, and this would ensure domain separation there.
ff/group doesn't really support torsioned curves, so preventing torsion from being an issue would be beneficial. Specifically, following ed25519consensus would be beneficial.
Would drop the is_torsion_free function from the lib.
https://docs.rs/ff/latest/ff/trait.PrimeFieldBits.html
I am hesitant to commit to this as it requires PrimeFieldBits within FROST as well (or feature gating its multiexp usage, yet FROST becomes imperformant for large groups when that happens), and the API for PrimeFieldBits doesn't appear the simplest at first glance. That said, k256 supports it, as does p256, and we have control over ed25519. We'd have to check for Ristretto, which has a pending PR for ff/group (which may have stalled), yet worst case, we clone the ed25519 work and also control Ristretto's ff/group definition.
The transactions in the block should not be dropped due to the miner being non-standard, though this library should continue to reject such non-standard behavior.
FROST lost a notable amount of performance when it lost access to dalek's tables.
Also removes GENERATOR_TABLE, only used by dalek, as we should provide
our own API for that over ff/group instead. This slows down the FROST
tests, under debug, by about 0.2-0.3s. Ed25519 and Ristretto together
take ~2.15 seconds now.
My proposal is a tables library offering a universal set of generator tables available anywhere within applications. This means both FROST and DLEq, without knowledge of each other, could define an expectation for a table. At compile time, one would be generated, and it'd be read in on first use (a la lazy_static). Then, any portion of the code could obtain a reference to it, and perform efficient multiplication with it.
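A minimal sketch of the "generated once, referenced anywhere" idea using `std::sync::OnceLock` (the table contents here are a toy, multiples in an additive group mod a prime; a real implementation would hold precomputed curve points):

```rust
use std::sync::OnceLock;

// Toy group parameters standing in for a curve and its generator.
const P: u64 = 0xffff_ffff_0000_0001;
const G: u64 = 123_456_789;

static GENERATOR_TABLE: OnceLock<Vec<u64>> = OnceLock::new();

// Both FROST and DLEq, without knowledge of each other, could call this and
// receive a reference to the same table, built exactly once on first use
// (a la lazy_static).
pub fn generator_table() -> &'static [u64] {
  GENERATOR_TABLE.get_or_init(|| {
    let mut table = Vec::with_capacity(256);
    let mut acc = 0u64;
    for _ in 0 .. 256 {
      acc = ((acc as u128 + G as u128) % (P as u128)) as u64;
      table.push(acc); // table[i] = (i + 1) * G
    }
    table
  })
}
```

Since `get_or_init` hands out `&'static` references, any portion of the code can hold onto the table and perform efficient multiplication with it, without coordination between crates.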
group defines a struct for tables already. WnafGroup provides recommended window parameters, which would be able to replace multiexp's current amalgamation of dalek and k256 performance.
It may be more beneficial to roll our own solution, as this would have implications for #41 and #42, and it will likely be optimal to write our own code to solve those first BEFORE considering Wnaf.
WnafGroup alone would be very beneficial for multiexp, yet it isn't widely supported. I've opened an issue for k256 and p256 however, and we can provide support for dalek.
May have a relation to zkcrypto/group issues/25.
EDIT: I explicitly removed github.com from the last item there, trying not to ping zkcrypto's issue as it wasn't a relevant item to them at this time, IMO. Turns out GH still figured it out. Now I know GH does that.
develop...rustfmt shows how different it looks. While I believe it's largely fine, it's just that. Fine. I don't believe it notably increased code readability, yet it did settle any potential arguments about style. It has a few notable detriments.
use std::{io::Read, sync::{Arc, RwLock}};
use std::{
io::Read,
sync::{Arc, RwLock},
};
I don't know why this was expanded, but it was?
let mut commitments = (0 .. self.clsags.len()).map(|c| {
let mut commitments = (0 .. self.clsags.len())
.map(|c| {
Forcing an extra indent.
let outputs = self.rpc
.get_block_transactions_possible(height).await.unwrap()
.swap_remove(0).scan(self.empty_view_pair(), false).ignore_timelock();
This block took the rpc, got the outputs for a height, and then scanned the coinbase, ignoring the timelock to get the outputs.
let outputs = self
.rpc
.get_block_transactions_possible(height)
.await
.unwrap()
.swap_remove(0)
.scan(self.empty_view_pair(), false)
.ignore_timelock();
This is how it was reformatted. While I get why it was, as it's an auto-formatter... is this beneficial?
Arc::new(RwLock::new(Some(
ClsagDetails::new(
ClsagInput::new(
Commitment::new(randomness, AMOUNT),
Decoys {
i: RING_INDEX,
offsets: (1 ..= RING_LEN).into_iter().collect(),
ring: ring.clone()
}
Arc::new(RwLock::new(Some(ClsagDetails::new(
ClsagInput::new(
Commitment::new(randomness, AMOUNT),
Decoys {
i: RING_INDEX,
offsets: (1 ..= RING_LEN).into_iter().collect(),
ring: ring.clone(),
},
It also frequently collapses things I wouldn't, though the original was already suboptimal. It was meant to be the wrappers on one line with...
e = Some(
Self::R_nonces(transcript.clone(), generators, self.s[i], ring[i], e.unwrap_or(e_0))
);
e = Some(Self::R_nonces(
transcript.clone(),
generators,
self.s[i],
ring[i],
e.unwrap_or(e_0),
));
Or simply changes where it's collapsed.
cc @noot, the person I'm currently fighting on style with :p I'm willing to accept the above cargo fmt
to resolve this on a project level, yet I do wonder if we shouldn't forgo an auto-formatter in favor of a style guide/just saying "be pleasant with the rest of the codebase" (like Monero).
Spec: https://gist.github.com/kayabaNerve/01c50bbc35441e0bbdcee63a9d823789
Follow up to #27. Replaces GAs.
Currently, all of the code in the project is MIT licensed. This keeps all our infrastructure accessible, while letting the community build on it and iterate, hopefully contributing back. To be more specific, on the notable pieces of code currently in this project:
crypto/
coins/
There are three further planned pieces (beyond further coins, which will largely be predicated on existing libraries):
The immediate plan is to update the processor's license to AGPL, and use AGPL for the daemon (along with consensus if possible).
Pros:
Cons:
My personal view on forks is that they're fine so long as they provide merit. Else, they're a damaging obfuscation to the project. While this is decently arbitrary, and this is not a well-written blog post but rather my brief take on the issue, I do believe projects which show sufficient POTENTIAL technical innovation do count as providing merit, yet I don't believe projects which simply add a premine and use that to obtain VC funding show merit. On the contrary, I do believe projects which remove notable premines offer sufficiently increased decentralization to warrant existence.
My goal with this, beyond ensuring FOSS as I can currently cite multiple cross-chain projects which are closed source, is to make people interested in flippantly forking reconsider due to the required level of transparency and accreditation which isn't feasible to work around. Any fork would have to argue to the populace their documented divergence is sufficiently notable. For projects with merit, that is possible. For projects without...
I also do not currently see actual detriment beyond potentially making certain developers not interested, including myself from roughly a year ago. I strongly admire the MIT license and believe in it for our libraries, and admire those who always use it, truly offering their code to the world. Unfortunately, this is an example of when cryptocurrency emphasizes currency more than it does crypto. As a decentralized exchange, we are explicitly measured by financial activity, and the easiest way to have financial activity is to have financials. As an individual unable to offer financials to the noted degree, one working on ways to ensure the DEX still has them, it's important to note it'd be immensely easier to be more successful via walking in, forking, and having financials already to offer. That's why I'm hoping the AGPL will increase the view of such projects while demonstrating their contribution is capital, not labor, letting the public make a more informed decision.
I am currently the sole copyright holder of the project. At some point, this MUST be turned over to a larger group, such as "Serai Developers", in order to prevent re-licensing without immense effort. Since I am the only person who needs to decide at this time, that means I CAN relicense the processor as AGPL right now without issue, AND I can revert it back to MIT after sufficient discussion has happened, before the copyright is turned over. If the processor is not made AGPL now however, all its active development work will be MIT, which doesn't offer the protection of the AGPL. If the processor is later reverted to MIT, it'll simply be MIT, making the tighter grip something required now yet not actually committed to.
As a final note, this could be taken a step further by making FROST LGPL, to ensure the new cryptography scheme has the collaborative effort needed to perfect it. The remaining effort for the library is notably in optimizations for large groups, yet it's currently sufficiently performant, so I do not see that as a requirement. It'd also bar adoption by certain parties interested in private modifications, which would needlessly turn away people who may eventually submit select changes back/be willing to contribute to development in other ways. Accordingly, I do truly believe all of our libraries, including FROST, should be MIT.
The only current further plans are BTC and ETH.
There is also consideration for:
The current TX key handling is non-standard, and payment IDs likely aren't properly handled.
Doing so would also reduce reliance on monero-rs.
WIP
Column Outline:
Will upload an initial version soon and document user inputs.
Currently, there are two notable ways algorithms extend FROST's nonce handling.
Seraphis currently has individual ownership proofs, which means they won't use multiple nonces, yet now that Spark's aggregate proof was proven secure, there was potentially some level of reconsideration.
Extending Algorithm with:
fn nonce_generators() -> Vec<Vec<C::G>>
to return the amount of nonces and the generators to use for each of them (where CLSAG would return `vec![vec![G, H]]` yet Spark would return `vec![vec![G]; n]`) would alleviate the burden on algorithms and consolidate this code.
Reasons for:
Reasons against:
fn nonces() -> usize;
fn nonce_generators() -> Vec<C::G>;
would be a more favorable API, albeit less flexible.
`vec![vec![G]]`

Furthermore, if we're actually discussing implementation, there is one other discussion regarding the binding factor. Given all nonces would be randomly generated, it shouldn't be an issue to reuse the binding factor. However, Lelantus Spark's FROSTy implementation did localize the binding factors per nonce. This truly should be extraneous, and was not mirrored in this project's prototype, yet it's effectively trivial while potentially increasing security. We also do not have any specification obligations.
Tracking issue to promote discussion. Examples of other algorithms relying on multiple nonces/generators would be appreciated. A partial solution of solely
fn nonces() -> usize;
would be much simpler to implement (not requiring DLEq proofs) and potentially still appropriately face the future (as CLSAG, which is [G, H], is being deprecated for split proofs which generally only require a single generator, as relevant here).
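The partial solution could look like the following, where `Algorithm` is a stand-in trait for illustration, not this crate's actual `Algorithm` trait:

```rust
// Sketch of the partial solution: the algorithm only declares how many
// nonces it needs, with generators handled separately (or fixed), so no
// DLEq proofs across generators are required.
pub trait Algorithm {
  // Amount of nonces this algorithm requires; the FROST machinery would
  // generate and manage this many per signing session.
  fn nonces(&self) -> usize;
}

// CLSAG uses a single nonce (bound to [G, H] by the algorithm itself)...
struct Clsag;
impl Algorithm for Clsag {
  fn nonces(&self) -> usize {
    1
  }
}

// ...while a Spark-style aggregate proof would use one single-generator
// nonce per input.
struct Spark {
  inputs: usize,
}
impl Algorithm for Spark {
  fn nonces(&self) -> usize {
    self.inputs
  }
}
```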
If we have our router emit an event whenever it receives ETH, we can find all transactions where we received ETH. ERC20 transfers will already identify all transactions where we receive ERC20s.
While the traditional flow would be: on receiving ETH, we execute code emitting an event with the full instruction, or to handle ERC20s, we'd call `transferFrom` and emit an event with the full transaction. This can instead be rewritten as us simply appending the instruction data after the ETH/ERC20 transfer. The fallback function/ERC20 will ignore it, yet we can trace the TX through the EVM to recover the data, thereby eliding gas costs while minimizing the amount of Solidity.
This also voids the need to do the full approval flow.
Full credit for this idea goes to @TheArchitect108.
Paired with #49.
For some reason, I've never managed to successfully build Monero statically. Regardless, that makes the Monero Rust crate only usable when Monero is locally built on the system, which isn't acceptable.
The required effects of consensus are firmly detailed in https://github.com/serai-dex/serai/blob/develop/docs/protocol/Consensus.md. While it's not perfect, namely already describing itself as a modified Aura, it does describe the needed UX.
Substrate offers Inherent transactions, which, combined with per-block finalization, should hit all bases. The question becomes:
Delegation will potentially increase centralization while also allowing anyone to bond Serai, not just people with large amounts of it. This will maintain the ability to repay holders after theft (if a validator set is compromised), yet does also increase the odds of theft as validator sets are no longer paying for the bond they use to enable their theft.
So:
- fund security
+ scalability
+ potential centralization
+ staking participation
while maintaining economic security, as viewed by the statement we can reimburse users.
The issue with the reimbursement comment is that if we have a single validator set which commits theft of all funds, Serai is worthless. Sure, we have their bonds they paid for... they won't pay for it again with the assets they stole. We can only buy back assets so long as Serai as a whole still has backing, such as under a scheme with multiple validator sets, which will happen post-launch.
Beyond the multiple validator set discussion about the feasibility of re-acquiring assets, the question becomes whether the reduction in security is acceptable for the benefits we'd gain. Other considerations would be reducing the bond ratio, which would decrease fund security and no longer guarantee economic security (unless we blame everyone in the group, which may be required since we'd be unable to determine who actually did it). It would enforce that the costs come from the actual validators who performed the theft though, while not changing centralization, yet would potentially be less scalable.
#53 has the following document regarding delegation.
# Validators
### Register (message)
- `validator` (signer): Address which will be the validator on Substrate.
- `manager` (signer): Address which will manage this validator.
- `set` (VS): Validator set being joined.
Marks `validator` as a validator candidate for the specified validator set,
enabling delegation.
### Delegate (message)
- `delegator` (signer): Address delegating funds to `validator`.
- `validator` (address): Registered validator being delegated to.
- `amount` (Amount): Amount of funds being delegated to `validator`.
Delegated funds will be removed from `delegator`'s wallet and moved to
`validator`'s bond. If `validator`'s bond is not a multiple of the validator
set's bond, it is queued, and will become actively delegated when another
delegator reduces their bond.
Note: At launch, only `validator`'s manager will be able to delegate to
`validator`, and only in multiples of the validator set's bond.
### Undelegate (message)
- `delegator` (signer): Address removing delegated funds from `validator`.
- `validator` (address): Registered validator no longer being delegated to.
- `amount` (Amount): Amount of funds no longer being delegated to
`validator`.
If a sufficient amount of funds are queued, the `validator`'s operation
continues normally, shifting in queued funds. If the `validator` falls below a
multiple of the validator set's bond, they will lose a key share at the next
churn. Only then will this undelegation process, unless another party delegates,
forming a sufficient queue.
Note: At launch, only multiples of the validator set's bond will be valid.
### Resign (message)
- `validator` (address): Validator being removed from the pool/candidacy.
- `manager` (signer): Manager of `validator`.
If `validator` is active, they will be removed at the next churn. If they are
solely a candidate, they will no longer be eligible for delegations. All bond is
refunded after their removal.
The commentary on delegation will be reviewed, as it was already left to post-launch, and there are obviously still aspects to discuss here. The API itself remains sane, as a manager address may have multiple validators, and accordingly the separation of addresses/messages remains sane. It's just the heavy leaning towards DPoS in the future which will be removed as this is discussed.
We can also just leave this until after launch. It's highly dependent on the security we achieve, how limiting bond is to pool liquidity, and if we do set up multiple validator sets.
This forces the leader to provide the BPs and adds ~0.75 kB of data to each sign operation. A Rust implementation could not only be more performant, but also remove this requirement.
While it's currently a very minimal function without need to be in its own crate, it has reason to exist in a variety of algorithms (such as a potential BP implementation, see #2), and by providing a dedicated crate with a single exported function we have the option to abstract away Strauss/Pippenger/whatever however we want in a more full featured release later.
Right now, there's an ink! contract on https://github.com/serai-dex/serai/tree/ink-multisig offering confirmation of a multisig's validity and the ability to signal when to move it.
... this has no reason to be an ink! smart contract, other than to keep things consolidated in ink!. It's an incredibly simple piece of code, solely used to obtain BFT agreement on which multisig to move forward with and exactly when. I'd like to see this rewritten as a pallet.
I wanted to open a running list of issues with Substrate I'd like to tackle, yet need some level of review before doing so is possible.
Extension of #34, yet for multiple scalars instead of generators.
This will practically require a Vec, so it'll be behind std, and I'd like to leave the existing DLEq proof untouched to provide some level of isolation. This would optimize CLSAG FROST as instead of one proof for D and one for E, it'd be one for both. Ideally there's a cost of 1 + n, instead of 2n.
Doing so would enable removing `-z muldefs` and make our only reliance on C the hash_to_point function.
To quote the code,
// Select decoys
// Ideally, this would be done post entropy, instead of now, yet doing so would require sign
// to be async which isn't preferable. This should be suitably competent though
// While this inability means we can immediately create the input, moving it out of the
// Rc RefCell, keeping it within an Rc RefCell keeps our options flexible
Considering this is being moved to an Arc RwLock, deciding one way or the other should be done. Considering this is in a multisig context, and private data is still included, I currently lean towards accepting this, removing the Arc RwLock.
If the multiexp crate offers an accumulator, preferably one with an API suitable for batch verification, we can consolidate logic while merging multi-exp opportunities. FROST currently uses a multiexp for verifying the Proofs of Knowledge. By delaying this multiexp, instead returning an accumulator, we can merge it with a multiexp at the end of round 2 which verifies the received share for a given participant.
Said share verification code currently does one multiexp per counterparty, yet using unique factors would enable batch verification of them, which is a possible optimization local to FROST yet related to any potential batch verification API.
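Such an accumulator API could be sketched as follows, over a toy additive group mod a prime standing in for curve points (`BatchVerifier`, `queue`, and `verify` are illustrative names, not the multiexp crate's actual API):

```rust
const P: u128 = 0xffff_ffff_0000_0001;

fn mul(a: u128, b: u128) -> u128 {
  ((a % P) * (b % P)) % P
}

pub struct BatchVerifier {
  // Accumulated (scalar, point) pairs across every queued statement.
  pairs: Vec<(u128, u128)>,
}

impl BatchVerifier {
  pub fn new() -> Self {
    BatchVerifier { pairs: Vec::new() }
  }

  // Queue a statement claimed to sum to the identity, scaled by a per-
  // statement factor so independent statements can't cancel each other out.
  // A real implementation would use a random factor; a caller-provided one
  // stands in here to keep the sketch deterministic.
  pub fn queue(&mut self, statement: &[(u128, u128)], factor: u128) {
    for &(scalar, point) in statement {
      self.pairs.push((mul(scalar, factor), point));
    }
  }

  // One deferred multiexp validates every queued statement at once.
  pub fn verify(&self) -> bool {
    self.pairs.iter().fold(0, |acc, &(s, p)| (acc + mul(s, p)) % P) == 0
  }
}
```

FROST's Proof of Knowledge multiexp and the per-counterparty share checks would each `queue` into one verifier, with a single `verify` at the end of round 2.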