nostr-protocol / nostr
a truly censorship-resistant alternative to Twitter that has a chance of working
I guess share can be implemented using just republishing the post to your followers.
But how to implement likes, dislikes and comments?
Starting a discussion here based on a mention from @fiatjaf on Telegram (https://t.me/nostr_protocol/12352).
This would modify the 'authors' field on a subscription filter to allow hex-string prefixes. So, the following would become a valid filter, and would match any event by an author with any of these three prefixes:
{"authors": ["8876", "35d26", "99a"]}
For the author prefix "99a", that would match any key that is lexigraphically greater than "99a", and strictly less than "99b".
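The prefix rule above amounts to a startsWith check on the hex key; a minimal sketch:

```javascript
// Sketch of the proposed prefix matching for the "authors" filter field.
// A pubkey matches if it starts with any listed prefix, so "99a" covers
// exactly the keys in the range ["99a", "99b") in lexicographic order.
function matchesAuthorPrefix(pubkey, authorPrefixes) {
  return authorPrefixes.some((prefix) => pubkey.startsWith(prefix));
}
```

For example, a key beginning with "99a1" matches the third prefix in the filter above, while one beginning with "99b0" matches none of them.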
Relays must remember REQs and return events that match the given filter after the request has been processed.
Maybe REQs should have the relay return a set of specific events one time (e.g. to fetch old events that the client remembers the id of, but hasn't kept in memory/cache entirely).
Maybe "subscribing" to future messages that match the filter should be triggered via a field in REQ, or a new type of message entirely (SUB?).
There should probably also be a way to “unsubscribe”, to prevent relays from sending information that isn’t needed any more.
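For illustration, the lifecycle above could use message shapes like these (the filter contents and the "CLOSE" verb for unsubscribing are assumptions here, not settled protocol):

```javascript
// Hypothetical subscription lifecycle messages. The filter contents and the
// "CLOSE" unsubscribe verb are illustrative assumptions.
const subId = "my-subscription";
const reqMessage = JSON.stringify(["REQ", subId, { authors: ["8876"] }]);
const closeMessage = JSON.stringify(["CLOSE", subId]); // stop receiving events
```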
I'm trying to understand if I need to do anything (as the author of a relay) to support NIP-03.
If a client sends an event that includes an ots field, should that be used for serialization to determine the event ID and verify a signature? Or is it simply ignored for signature validation, and passed along if present?
I know in my relay, I make a strong assumption that events are unique - someone attempting to publish an event with the same 'id' as an existing one will simply be ignored. But with the ots addition, this introduces two distinct events with the same id/signature but a different serialization (one with the ots, one without). What would happen if a client received the same event id from two relays, but with different ots fields?
Again, my concern is primarily from the relay perspective. I don't accept events that contain fields other than those specified in NIP-01 (NIP-03 events would be rejected), so I'm curious about the correct way to extend things to allow handling that extension. Right now, the easiest thing would be to essentially say, "first publisher wins", for events with ots fields, but perform no validation.
If this is merely a hint field, perhaps the simplest/right answer for me is to simply accept events that have ots fields, but never serialize or re-publish them. That way I don't break clients that use ots.
While reading the README.md I was immediately reminded of Syndie (https://en.wikipedia.org/wiki/Syndie) (http://syndie.de/), which is no longer maintained. I found no references to or comparison between nostr and Syndie yet, so I'm just dropping this here. Maybe someone finds it useful and wants to take away some learnings from the stalled Syndie project.
This already exists, has existed coming up on 7 years, it seems silly to reproduce work that is already done.
https://gitlab.syncad.com/hive/hive_protocol
Forking the project wouldn't alienate the community presuming they get an airdrop on the new fork.
In fact, many of us would like to escape some of our largest stakeholders as they are greedy af.
Up to you people.
A public key is as much a part of a user’s local identity as their name or picture. Clients could also use this information to validate past events with a user’s previous or current identity.
Floating an idea for a new optional command (to be codified in a NIP soon). Clients should be able to request relay info, using a command like ["RELAY_INFO"]. The relays could then respond with something like:
["SERVER_INFO",
  {
    "name": "Nostr Dev Discussions",
    "admin_pubkey": "48f475e1acc18ba4...",
    "admin_email": "[email protected]",
    "supported_nips": ["NIP-01", "NIP-02", "NIP-09"...],
    "software": "nostr-rs-relay",
    "version": "0.3.0",
    "public_write": true
  }
]
Notices could be direct messages sent by the relay’s identity.
How does this provide discoverability?
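A client-side sketch of handling the proposed response (the command name and response shape are the proposal's, not an accepted NIP):

```javascript
// Hypothetical handler for the proposed relay-info response.
function handleRelayMessage(raw) {
  const message = JSON.parse(raw);
  if (message[0] === "SERVER_INFO") {
    return message[1]; // { name, supported_nips, software, version, ... }
  }
  return null; // not a relay-info message
}
```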
NIP-03 smartly defines proper use of OpenTimestamps. This provides proof of existence, showing that the event happened before that block was found: the upper limit.
Adding a recent bitcoin blockhash into the event provides a proof of absence, meaning that the event existed after that block: the lower limit.
Together these provide a cryptographically secure proof of the time window in which an event was published.
Of course this can already be achieved by including the block hash in the event message. But maybe having a dedicated tag for it would be useful.
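A dedicated tag might look like this (the tag name and the height/hash values are hypothetical; no NIP defines them):

```javascript
// Hypothetical "blockhash" tag giving the lower bound of the time window.
// The tag name and values are illustrative only; no NIP defines this yet.
function addProofOfAbsenceTag(event, blockHeight, blockHash) {
  return {
    ...event,
    tags: [...event.tags, ["blockhash", String(blockHeight), blockHash]],
  };
}
```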
Can we use more industry-standard key generation/derivation standards for now, as Schnorr support is very limited and it will halt the development of a lot of Nostr clients/relays?
Once Schnorr is more of a standard for key crypto and there are plenty of libraries, the Nostr network could move to support it.
Can we have geo-location and reputation data types?
That way we can make a nostr-Uber. It would also be useful for nostr-prostitutes.
In fact reputation is an important one as we can use it for market stuff like Diagon Alley.
Just time-stamping the concepts, and looking forward to helping build them at the hackathon.
I noticed there is no mention in the NIP about the validation process. Is it intentional? I recognize that a bad-faith relay or client might intentionally publish non-valid signatures, so nobody should be relying on any third party for validation, but I feel there should be at least a mention of it in the NIP. Something like:
Relays SHOULD validate the sig and id of the event, and the event MIGHT be rejected if the validity cannot be established. Clients SHOULD validate the sig and id fields of any incoming event, and they SHOULD reject any invalid events or visually indicate to the user that the events are not valid.
I'm thinking of a situation where there are many relays, and perhaps your client connects to some random subset of them. There doesn't appear to be any way to ensure that a particular event makes it to a specific user.
Let's say I publish a message to relays I'm connected to, and I want to send a DM to another pubkey on the network. afaict there is no way of knowing if they can see that message, because I do not know what relays they are connected to.
ActivityPub solves this since you know where a particular user's inbox is, so you can send an event directly to that server. Are there proposed solutions on how nostr might solve this?
Genuine question!
Isn't this essentially just stripped-down email?
Could you rename the authors filter field to pubkeys, for consistency with the pubkey event field? (Or of course instead rename the event field to author.)
Hi,
I'm trying to understand the censorship-resistance model. I have read the specs and found nothing about transport. I guess it's up to the client and relay to negotiate the transport, but I still didn't find any information about negotiating the protocol between client and relay, or negotiating media the way SDP does in SIP or WebRTC.
To really make it censorship resistant, the traffic should be indistinguishable under deep packet inspection from a useful protocol like HTTPS or MQTTS. They can censor services and protocols that are not needed for country infrastructure, but they cannot censor protocols used in country infrastructure like HTTPS or MQTTS.
Another concern is how normies can easily discover relays without the ability to build blacklists.
I didn't find much information about these topics.
Create a NIP that standardizes the config file; whether to support it would be up to the client.
An initial schema idea:
{
  "user": {
    "name": "string",
    "about": "string",
    "picture_url": "string",
    "public_key": "string",
    "secrets": {
      "secret_key": "string",
      "seed_phrase": "string"
    },
    "contacts": [
      {
        "public_key": "string",
        "alias": "string"
      }
    ],
    "relays": [
      {
        "url": "string",
        "permissions": {
          "write": "bool",
          "read": "bool"
        }
      }
    ]
  }
}
All the content in field secrets MUST be encrypted OR empty.
Some references for secrets encryption:
Wasabi implementation
AES Encryption with HMAC Integrity in Java
Should events have a field that hints at the format of their contents? It could be a MIME type or one of the only formats (Markdown?) accepted by the protocol.
Or should relays try to guess the format using heuristics, the same way the file utility does?
Would you consider a NIP proposing a more limited kind of relay that supports just two queries via HTTP GET, and no other functionality at all?
/id/<single event id>
/pubkey/<pubkey of an author>?since=<timestamp>
With a little content negotiation, blogs could easily double as dumb relays. More generally, it makes it very cheap and easy to guarantee availability of some content. It could be done using static web pages, if the server doesn't handle the since filter.
Edit: I've built a working POC. See the latest comment.
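A sketch of the two lookups, written as a pure function so it could sit behind an HTTP handler or a static-site generator (the store layout is an assumption):

```javascript
// Sketch of the two read-only queries described above. `store` stands in for
// whatever storage the server uses: a map of events by id, and a map of
// event lists by author pubkey.
function handleQuery(store, pathname, sinceParam) {
  const [, route, value] = pathname.split("/");
  if (route === "id") {
    return store.byId.get(value) || null;
  }
  if (route === "pubkey") {
    const since = Number(sinceParam || 0);
    return (store.byPubkey.get(value) || []).filter((e) => e.created_at >= since);
  }
  return null; // unknown route
}
```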
This is an improvement upon NIP-04. It makes private messaging leak less metadata.
Idea:
Example of such an event:
{
  id: '9141dc50144cc243acfe78dc7799768e2f3eef2f857d8adc4fb8600dee6e8e9b',
  pubkey: '0000000000000000000000000000000000000000000000000000000000000000',
  created_at: 1643777968,
  kind: 4,
  tags: [
    [
      'shared',
      'dbb06024d1ec94e1d0b76b2105410ecf85f01f0ecf6de563b5485288f6eff6c1'
    ]
  ],
  content: 'ubrKJp8e2XKQ7utVAY66BQ==?iv=sy0vaW9Au0jIRSFqSrCjnA==',
  sig: '78fdfffb8bc983af75a250cb13225b75a74e104a82881fa35d7efa36f48dcd5bedfe43f29b50c7c26a5d39b1bee7d4f23b0c3516b1149c6bc047198e5f9acf99'
}
Request to relay to get such an event: ["REQ", "foobar", {"#shared": [sha256(shared_key)]}]
Cons of this idea:
Please poke holes in this idea (if you find any); I'd like to know whether this idea has merit or it's just full of hot air.
The JS code below is an implementation of this idea.
import * as secp from "@noble/secp256k1";
import crypto from "crypto";
const senderPriv = "d5f9b88ae04e7adb2fc075515e39df546df56d88ccdb304a9a779af1563d79ba";
const senderPub = "ab1a33b0cf3d8f8896c433e6996744e48f1401e6fbc94aea6f84291074fb1b75";
const recipientPriv = "c10d9871f37f5d7dae09e93f2a381593b57697e02e15b446fbc99531b4623555";
const recipientPub = "7002538efd7175b2b5fafe4ee5a933242081c067a48ff019deca56eb13ef2186";
const inboxAddress = "0000000000000000000000000000000000000000000000000000000000000000";
function toHexString(byteArray) {
  return Array.prototype.map
    .call(byteArray, function (byte) {
      return ("0" + (byte & 0xff).toString(16)).slice(-2);
    })
    .join("");
}
async function broadcastKey() {
  // the purpose of this is to announce sender's pubkey to recipient
  // it's basically a normal kind `4` event
  const dummyMessage = "adsfasdf"; // doesn't matter
  const unixTime = Math.floor(Date.now() / 1000);
  const data = [0, senderPub, unixTime, 4, [["p", recipientPub]], dummyMessage];
  const eventString = JSON.stringify(data);
  const eventByteArray = new TextEncoder().encode(eventString);
  const eventIdRaw = await secp.utils.sha256(eventByteArray);
  const eventId = toHexString(eventIdRaw);
  const signatureRaw = await secp.schnorr.sign(eventId, senderPriv);
  const signature = toHexString(signatureRaw);
  const sampleBroadcastEvent = {
    id: eventId,
    pubkey: senderPub,
    created_at: unixTime,
    kind: 4,
    tags: [["p", recipientPub]],
    content: dummyMessage,
    sig: signature
  };
  // sampleBroadcastEvent looks like this:
  /*
  {
    id: '714f104515c8980dc9a793cf15b0a8700f8be1e2436165217bfca6976975c1d4',
    pubkey: 'ab1a33b0cf3d8f8896c433e6996744e48f1401e6fbc94aea6f84291074fb1b75',
    created_at: 1643778625,
    kind: 4,
    tags: [
      [
        'p',
        '7002538efd7175b2b5fafe4ee5a933242081c067a48ff019deca56eb13ef2186'
      ]
    ],
    content: 'adsfasdf',
    sig: '03e9b5c375188ff912d536cb8e59bc45e643b0eca642838e8fa8f8692c1ce7bb051a22ff83dd2d17aefc0703ea046aaef9ec3859559a01cbf45226957a57e520'
  }
  */
  // push sampleBroadcastEvent to relay: `["EVENT", sampleBroadcastEvent]`
  // recipient can get the event: `["REQ", "foobar", {"#p": [recipientPub]}]`
  // when both have each other's pubkey, they can generate the shared key
  // when they both have the shared key, they can communicate privately through inbox address
}
async function generatePrivateEvent(priv, pub) {
  const unencryptedMessage = "supersecret";
  // `secp.getSharedSecret(senderPriv, "02" + recipientPub)`
  // and
  // `secp.getSharedSecret(recipientPriv, "02" + senderPub)`
  // produce the same value
  const sharedPointBytes = secp.getSharedSecret(priv, "02" + pub);
  const sharedPoint = toHexString(sharedPointBytes);
  const sharedX = sharedPoint.substr(2, 64);
  const sharedXByteArray = new TextEncoder().encode(sharedX);
  const sharedXByte = await secp.utils.sha256(sharedXByteArray);
  const sharedXSha = toHexString(sharedXByte);
  const iv = crypto.randomFillSync(new Uint8Array(16));
  const ivBase64 = Buffer.from(iv.buffer).toString('base64');
  const cipher = crypto.createCipheriv(
    'aes-256-cbc',
    Buffer.from(sharedX, 'hex'),
    iv
  );
  // to decrypt later on, use `crypto.createDecipheriv()`
  let encryptedMessage = cipher.update(JSON.stringify(unencryptedMessage), 'utf8', 'base64');
  encryptedMessage += cipher.final('base64');
  encryptedMessage += "?iv=" + ivBase64;
  const unixTime = Math.floor(Date.now() / 1000);
  const data = [0, inboxAddress, unixTime, 4, [["shared", sharedXSha]], encryptedMessage];
  // event id is sha256 of data above
  // sig is schnorr sig of id
  const eventString = JSON.stringify(data);
  const eventByteArray = new TextEncoder().encode(eventString);
  const eventIdRaw = await secp.utils.sha256(eventByteArray);
  const eventId = toHexString(eventIdRaw);
  const signatureRaw = await secp.schnorr.sign(eventId, priv);
  const signature = toHexString(signatureRaw);
  return {
    id: eventId,
    pubkey: inboxAddress,
    created_at: unixTime,
    kind: 4,
    tags: [["shared", sharedXSha]],
    content: encryptedMessage,
    sig: signature
  };
}
(async () => {
  console.log(await generatePrivateEvent(senderPriv, recipientPub));
  console.log(await generatePrivateEvent(recipientPriv, senderPub));
})();
Edit: This issue has been edited to clarify.
Howdy!
I was just skimming over your plans for nostr. I might have missed it, but do you have plans for:
I'm not super experienced with security or cryptography, so I don't have a strong sense of how the above should be handled to offer any suggestions, but I'm interested to know how it might work.
edit: gonna unsubscribe from notifications for this
While the goal of creating a truly censorship-resistant social network seems very appealing, I am curious about what happens when posts with content related to selling harmful drugs, human trafficking, and child pornography are posted.
This type of content doesn't just need to be censored but also needs to be reported to authorities with the true identity of the people involved.
Here are a few questions about this:
(Sorry if this is discussed somewhere, please point me towards that discussion/document if applicable)
P.S. These questions are not meant to belittle the efforts of the people involved but come from a genuine place of concern
If uploading files directly to a regular relay is not desirable, maybe a separate “file server” relay could be set up that accepts only a resource event type. The content field would be a blob.
Files uploaded to this relay could later be referenced in regular events, via a URL in the message’s body.
The difference with uploading the file to e.g. an HTTP file server is that the files would be kept “in closer proximity” to the events or user metadata (picture).
When I have 10000 posts on the old relay, how do I migrate this data to the new relay? By manually posting to the new relay?
Uber works well, apart from Uber the company.
BUber uses Nostr network to connect geolocated sellers and customers, in an instant permissionless way.
User opens software and selects whether they are a seller or customer (either choice issues Schnorr key-pair, or they can add their own pre-existing key-pair)
Seller: Using their private key, the seller publishes their geolocation, rate, currency, services (ie taxi) and availability to relays.
Customer: Software receives from relays all sellers offering the specified service (ie taxi) in the location of the customer and lists their rate, currency, availability and reputation (if the seller has reputation #20).
Customer: Using their private key, the customer publishes a request for a particular seller and the geolocation they want to go to.
Seller: The seller receives the customer's request, and customer reputation (if the customer has reputation #20).
Seller: Using their private key, the seller accepts the job. Geolocation data is sent every few seconds to the customer, so they can track.
Customer/Seller: Service happens. Reputation is given at the end of the transaction.
Note: I will build an https://github.com/lnbits/lnbits extension that acts as a Nostr client software for seller and customer, but any client software for BUber will perform the same functions/datatypes and will be interoperable
EDIT: As suggested by @entryist, changed the term taxi to driver and added a service array, so a seller can specify services.
EDIT: Changed driver to seller, as a seller might be offering a service that does not include driving, such as bodyguard.
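For illustration, a seller's offer event might carry tags like these (the kind number and every tag name and value are placeholders, not part of any NIP):

```javascript
// Hypothetical seller-offer event for the BUber sketch above.
// Kind 10005 and all tag names/values are illustrative placeholders.
const sellerOffer = {
  kind: 10005,
  tags: [
    ["geolocation", "51.5074", "-0.1278"],
    ["rate", "0.50"],
    ["currency", "sat/km"],
    ["service", "driver"],
    ["availability", "true"],
  ],
  content: "",
};
```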
Messages that have critical side effects should probably be signed, for security reasons.
Why are event id fields so hard to compute? Are there real chances of collisions when generating a UID with sha256 over random data? It seems redundant with the sig field.
There are some good ideas (and it does solve the problems mentioned), but there are some problems:
Use of JSON. Although it is simple, there are some problems. I might instead consider a simple binary format.
The way the signature is computed isn't very good, I think. For one thing, it requires two separate JSON formats. Another is multiple ways of representing some characters.
Use of websockets. A simple HTTP POST, or TCP without HTTP (and using SASL if authentication is needed, and optional TLS if desired), would be better, I think.
Being restricted to a single signature scheme and a single hash algorithm. While that might work for now, it is possible that someone might want to change them in future.
It uses Unicode.
Relays cannot communicate with relays even if you want to do such a thing optionally. (Some users might also wish to receive messages as they arrive, on their own server, instead of requesting them from a relay each time.)
Timestamp synchronization, when wanting to receive new messages. (NNTP solves this problem.)
So, I think I will prefer to use NNTP.
Some third-party services offer an auto-delete feature for Twitter (example: delete all tweets that are one month old). Maybe NIP-09 can be expanded to include this auto-delete feature.
One implementation would be to include an "expireAt" property on all tweets (notes) for accounts that decide to enable this auto-delete feature.
I do not yet have a telegram account, otherwise I would post this in there. Have you seen https://iris.to ? (seems to also be created by bitcoiners). I would like to know your thoughts as to why nostr is or will be better. Or perhaps they are quite similar in underlying design?
P.S. How close is nostr to being usable for bootstrapping itself? In other words, when will the primary support / dev channel for nostr be administered on nostr itself rather than Telegram?
Can we have a reputation datatype?
[rep: [id:<string>,rating:<int 1-100>, comment: <string>]]
ID: To give context (is this rep for taxis, dentistry, cobbling, etc)
Rating: 1-100 seems the most flexible as different UIs can translate as stars, whatever
Comment: character limited comment "Phil is an excellent dentist"
Sock-puppetry can be dealt with by relays offering verification services "verified by legit-verify-relay.com", or a Proof of Account data-type #15
In NIP-01, there's a "REQ" message from client to relay, which looks like this: ["REQ", <id>, <filter JSON>...]. The <id> part confused me a bit (it took me some time to figure out what it is). I thought it was an event id.
I think it would be less confusing if it were renamed to <subscription id> or <subscription string>. Not a critical issue; just a semantic change to make it easier to understand.
On Bitcoin, losing your key means you lose your money. On a system like nostr, losing your key means permanently losing the identity and reputation you've built up. Not nice.
Twitter seems to have perfected ease of use for users. Many innovations can happen on the wallet side. What sort of extensions, projects, and NIPs that no one is working on yet would help nostr achieve Twitter-like ease of use?
Currently, with only one value allowed in a filter condition for id and kind, subscriptions are unnecessarily large.
A common use case for a client is to request all the profile metadata, text notes, and deletion events for a set of followed pubkeys. Constructing this subscription for 100 addresses requires repeating the address in a filter for kind: 0, kind: 1, etc., leading to 400 addresses in the "REQ" message.
Allowing ids and kinds to be treated similarly to authors would allow clients to search across multiple event types without unnecessary duplication in the query. This would lead to significant bandwidth savings for clients and relays.
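With list-valued kinds, the 400-entry request above collapses to a single filter; an illustrative shape (the kind numbers assumed here: 0 metadata, 1 text note, 5 deletion):

```javascript
// One filter covering several kinds for all followed pubkeys at once,
// instead of repeating each pubkey once per kind.
const followed = ["pubkeyA", "pubkeyB"]; // placeholders for 64-char hex keys
const filter = { authors: followed, kinds: [0, 1, 5] };
const req = JSON.stringify(["REQ", "feed", filter]);
```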
The "author" and "authors" fields in a REQ message are redundant.
Since these fields are ANDed together, if a subscription filter includes author=A, and authors=[B,C], then it will never return any results. If the filter uses author=A, authors=[A,B,C], it is equivalent to only sending the authors field.
Since it never makes sense to use both of these fields at the same time, and "authors" is more general than "author", I suggest dropping the "author" field from REQ altogether.
It'd be great to have http support too (implemented via polling) along with web sockets in the protocol. It'd be interesting to write a serverless plug and play relay based on lambdas / functions.
I've known about Nostr for a while and decided to try it but the nostr.com website and this github give me no clue (or links to) how to do this. 🤷
It's sometimes desirable to link a main identity with an alt. Two pubkeys (main and alt) can be cryptographically linked by creating these two events:
main event (created by main):
{
  "id": <main event id>,
  "pubkey": <main's pubkey>,
  "created_at": <unix timestamp in seconds>,
  "kind": <integer>,
  "tags": [
    ["p", <alt's pubkey>]
  ],
  "content": <empty string>,
  "sig": <64-bytes signature of the sha256 hash of the serialized event data, which is the same as the "id" field>
}
alt event (created by alt):
{
  "id": <alt event id>,
  "pubkey": <alt's pubkey>,
  "created_at": <unix timestamp in seconds>,
  "kind": <integer>,
  "tags": [
    ["e", <main event id>],
    ["p", <main's pubkey>]
  ],
  "content": <empty string>,
  "sig": <64-bytes signature of the sha256 hash of the serialized event data, which is the same as the "id" field>
}
Looked at another way: the alt event is a reply to the main event.
A main or alt can prove its main-alt relationship by providing the main event and alt event; the main-alt relationship is invalid until the alt event is present.
These main-alt relationships can also be privately announced through NIP-04, which can be used to have private chats. Say Alice wants to chat with Bob, but Bob is a super-controversial character that has been cancelled. If Alice is caught chatting with Bob, she's also gonna get cancelled. Alice can chat privately with Bob through Nostr by doing this:
1. Alice encrypts her main event and alt event with NIP-04, then sends it to Bob's publicly-known pubkey with a random key.
2. Bob encrypts his main event and alt event with NIP-04, then sends it to Alice's alt with his alt.
Nostr-native alias is a simple building-block that has use-cases. Can this alias-creation scheme be simplified further?
Allowing subscriptions to define time ranges for retrieving events would enable paging-like behavior for clients.
This would be done with the addition of an until field to request filters. until should match all events up to but not including the specified timestamp (exclusive range).
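Paging then just walks the until bound backwards; a sketch (the limit field is an extra assumption, not part of this proposal):

```javascript
// Each page requests events strictly before the oldest event already seen,
// using the proposed exclusive `until` bound. `limit` is assumed here.
function nextPageFilter(authors, oldestSeenTimestamp, pageSize) {
  return { authors, until: oldestSeenTimestamp, limit: pageSize };
}
```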
Some clients (nostr.com, Damus, maybe others) prefer to show pubkeys as npub instead of hex.
Is there any benefit to that?
They have similar length:
npub19n5utls2m3le9fujey5ydf2cr3nam739srlscj52vsdkmwczh3aq74amnp
2ce9c5fe0adc7f92a792c92846a5581c67ddfa2580ff0c4a8a641b6dbb02bc7a
Hey there.
I am trying to implement a commenting system with nostr, where I will add several tags to the event to mark which post it belongs to, and what its type is.
So on creation, for example, I'm creating this kind of object:
{
  content: "Hello Guys",
  created_at: 1658319264,
  id: "0fb317e89fd9b2ace07cc2cf63389e4dc5e1c741c1f2ea4d87daa67dfb73c2e7",
  kind: 1,
  pubkey: "e7fc0ac27ee230ca885e08bd3284006db84ec4ac4dba8ee80672148765abcf20",
  sig: "ee4d7aad7fad78fa318d5b8bc1fd18faeadbe997e3067ba84f992cd57db8289f829955943d0f65e0b33185c69f622a8a76b51f5b8e67cf8cd9c5d2662c941cf0",
  tags: [["h", "my_host"], ["t", "post comment"], ["i", "2111aav123"]]
}
As you can see, I'm using three tags when creating the object: h, t, and i.
So when I'm starting a new subscription, I'm constructing this filter:
pool.sub({ filter: { "#i": ["2111aav123"], "#h": ["my_host"], "#t": ["post comment"] }, cb: () => {...} })
But it's returning nothing at all...
Note that when I use only 1 filter of any of the above, it returns the events, but whenever I use more than 1 it returns nothing.
I'm not sure if maybe I'm constructing the filters object incorrectly, or if multi-tags aren't supported, or if it's something else...
Any help is appreciated.
Thanks
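For what it's worth, the usually expected semantics are AND across different tag conditions and OR within one condition's values; a relay that returns nothing for the combined filter may simply not implement multi-tag intersection. A sketch of the expected matching:

```javascript
// Sketch of how a relay could evaluate several "#<tag>" conditions: every
// condition must be satisfied (AND across conditions), and each condition is
// satisfied if the event has a tag matching any of its values (OR within).
function matchesTagFilter(event, filter) {
  return Object.entries(filter)
    .filter(([key]) => key.startsWith("#"))
    .every(([key, values]) =>
      event.tags.some((tag) => tag[0] === key.slice(1) && values.includes(tag[1]))
    );
}
```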
It would be cool for fediverse users/groups to be able to be followed on nostr and vice versa, allowing the two systems to exist together as needed (for example PeerTube instances, or users that prefer the Mastodon model).
Understandably, this may be early in this project's development cycle, but I figured I would at least throw it on the backlog.
Since the requests are so human friendly (a bit like IRC is), maybe relays could support alternative flavours of JSON like JSON5?
The Cloudflare error page is shown.
Would moving from Telegram (a closed-source application that requires account creation) to nostr as the primary way of communicating be possible? I see Telegram being an ideological and maybe a privacy barrier for people interested in this project.
How long should relays be required to store events? Or are all expected to be available at any point in the future?
If a user makes a mistake (or a client bug occurs) that results in an event being sent to a relay, how could they remove it?
If deleting an arbitrary event isn’t desirable, maybe relays should have a grace period that allows clients to correct a mistake shortly after the event was sent?