webaudio / web-audio-api
The Web Audio API v1.0, developed by the W3C Audio WG
Home Page: https://webaudio.github.io/web-audio-api/
License: Other
Originally reported on W3C Bugzilla ISSUE-17363 Tue, 05 Jun 2012 11:51:00 GMT
Reported by Philip Jägenstedt
Assigned to
Audio-ISSUE-76 (BiquadFilterNode): BiquadFilterNode is underdefined [Web Audio API]
http://www.w3.org/2011/audio/track/issues/76
Raised by: Philip Jägenstedt
On product: Web Audio API
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#BiquadFilterNode
The filter operation is undefined, with wording such as "standard second-order resonant lowpass filter with 12dB/octave rolloff." A lot more specificity is required.
Wikipedia is the reference for several of the filter modes. We could not find any mode that is implementable given the information provided.
Originally reported on W3C Bugzilla ISSUE-17793 Tue, 17 Jul 2012 18:30:24 GMT
Reported by Chris Wilson
Assigned to
(Summary of email conversation in list)
There is currently no way to disconnect node A's connection to node B without disconnecting all connections from node A to other nodes. This makes it impossible to disconnect node B from the graph without potential side effects, as you have to disconnect everything from node A and then re-establish every connection except the one to node B.
Not only is this cumbersome, it will be problematic in the future when we solve the related issue of unconnected streams, which currently exhibits incorrect behavior in Chrome (it pauses the audio stream) and is underspecified in the spec today (a separate bug will be filed). Disconnecting and then reconnecting would have to have no side effects. (It works okay today, but not ideally: it can click.)
Recommended solution:
E.g.: the IDL for disconnect should read:
void disconnect(optional AudioNode destination, optional unsigned long output = 0)
  raises(DOMException);
This lets us keep most compatibility: node.disconnect() will still remove all connections.
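The difference between the two calling forms can be modeled in plain JavaScript. This is a toy sketch with an illustrative `ToyNode` class and a connection list, not real AudioNodes:

```javascript
// Toy model of the proposed disconnect() overloads, using a plain
// connection list instead of real AudioNodes. Names are illustrative only.
class ToyNode {
  constructor(name) {
    this.name = name;
    this.connections = []; // each entry: { destination, output }
  }
  connect(destination, output = 0) {
    this.connections.push({ destination, output });
  }
  // disconnect() with no arguments removes every connection (current
  // behavior); disconnect(destination) removes only the connections to
  // that node (the proposed addition).
  disconnect(destination, output = 0) {
    if (destination === undefined) {
      this.connections = [];
      return;
    }
    this.connections = this.connections.filter(
      (c) => !(c.destination === destination && c.output === output)
    );
  }
}

const a = new ToyNode("A");
const b = new ToyNode("B");
const c = new ToyNode("C");
a.connect(b);
a.connect(c);
a.disconnect(b); // proposed form: only the A->B connection is removed
// a.connections now holds just the A->C connection
```

With the proposed overload, removing node B from the graph no longer disturbs the A->C connection, which is exactly the side effect the issue describes.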
Originally reported on W3C Bugzilla ISSUE-21547 Tue, 02 Apr 2013 16:18:37 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
Originally reported on W3C Bugzilla ISSUE-21543 Tue, 02 Apr 2013 16:11:05 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26:
We need to set aside discussion on DelayNode: how to deal with a change in the number of inputs while live, and how to allocate/deallocate buffers and maintain state.
Originally reported on W3C Bugzilla ISSUE-19977 Fri, 16 Nov 2012 00:20:11 GMT
Reported by Chris Wilson
Assigned to
One of the few node types I've been sorely missing, one that could be implemented in JS but only with needless latency, is a noise gate/expander node.
Would need standard noise gate controls: threshold, attack, release, hold, and possibly an attenuation setting, maybe even hysteresis control. Additionally, an AudioNode output of the attenuation would be very helpful for doing sidechain gating.
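The core gating decision can be sketched in a few lines. This is a minimal static sketch under assumed names (`gateGain` is illustrative, not a proposed API); a real node would also smooth the gain using the attack/release/hold settings:

```javascript
// Static noise-gate decision: signals below the threshold are attenuated,
// signals above it pass unchanged. Levels are in dB; the returned value is
// a linear gain factor. A real implementation would smooth this gain over
// time (attack/release/hold) instead of switching instantaneously.
function gateGain(levelDb, thresholdDb, attenuationDb) {
  return levelDb < thresholdDb ? Math.pow(10, attenuationDb / 20) : 1;
}
```

The attenuation value this function computes is also what the suggested sidechain output would carry.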
Originally reported on W3C Bugzilla ISSUE-17701 Thu, 05 Jul 2012 15:03:59 GMT
Reported by Olivier Thereaux
Assigned to
The example at the end of the section on AudioParam Automation gives examples for setValueAtTime, linearRampToValueAtTime, exponentialRampToValueAtTime and setValueCurveAtTime, but not setTargetValueAtTime, which is unfortunate since that one seems to be the hardest to comprehend.
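The curve that setTargetValueAtTime follows is a first-order exponential approach toward the target value. The sketch below models that curve in plain JS (the function name and signature are illustrative, not the API itself):

```javascript
// v(t) = target + (v0 - target) * exp(-(t - startTime) / timeConstant)
// Models the exponential-approach automation curve: the parameter never
// quite reaches the target, but gets exponentially closer over time.
function targetValueAt(t, v0, target, startTime, timeConstant) {
  if (t <= startTime) return v0;
  return target + (v0 - target) * Math.exp(-(t - startTime) / timeConstant);
}

// After one time constant the parameter has covered about 63% of the
// distance to the target; after five it is essentially there.
const v = targetValueAt(1.0, 1.0, 0.0, 0.0, 0.2); // five time constants in
```

An example along these lines in the spec would make the timeConstant parameter much easier to understand.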
Originally reported on W3C Bugzilla ISSUE-21580 Thu, 04 Apr 2013 14:36:31 GMT
Reported by Olivier Thereaux
Assigned to
The example in "Modular Routing" still uses the createGainNode() method. We want to limit instances of this "old" name to the alternate names section.
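For code that has to run on both old and new implementations, a feature-detecting shim covers the rename (`makeGain` is an illustrative helper, not part of the API):

```javascript
// Prefer the current createGain() and fall back to the legacy
// createGainNode() on older implementations of the spec.
function makeGain(ctx) {
  return typeof ctx.createGain === "function"
    ? ctx.createGain()
    : ctx.createGainNode();
}
```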
Originally reported on W3C Bugzilla ISSUE-21518 Tue, 02 Apr 2013 12:42:10 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
AudioContext.createBuffer (synchronous) will be deprecated in favor of decodeAudioData (asynchronous).
Originally reported on W3C Bugzilla ISSUE-20750 Thu, 24 Jan 2013 02:07:38 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
We should probably raise an exception and the exception type should be specified. This applies both to the bufferSize argument and when both numberOfInputChannels and numberOfOutputChannels passed are zero.
Originally reported on W3C Bugzilla ISSUE-17534 Mon, 18 Jun 2012 11:27:42 GMT
Reported by Marcus Geelnard (Opera)
Assigned to
The JavaScriptAudioNode does not have the ability to dynamically change its number of input/output channels after creation.
This makes it impossible to re-implement nodes such as AudioGainNode (depends on number of input channels) and ConvolverNode (depends on number of AudioContext output channels - according to [1]).
[1] https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#Convolution-reverb-effect
Originally reported on W3C Bugzilla ISSUE-20842 Thu, 31 Jan 2013 23:26:08 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
The spec doesn't describe at all how the constructor arguments of OfflineAudioContext are supposed to change the behavior of the context. This is highly under-specified...
Originally reported on W3C Bugzilla ISSUE-21445 Sat, 30 Mar 2013 16:58:54 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
Currently the spec doesn't say much about how this node should be implemented.
Originally reported on W3C Bugzilla ISSUE-18676 Fri, 24 Aug 2012 10:33:30 GMT
Reported by Jussi Kalliokoski
Assigned to
Proposed originally in http://lists.w3.org/Archives/Public/public-audio/2012JulSep/0614.html
A bit of clarification though, this method should bring the nodes to a state that is effectively the same as if they had not received any input data yet.
Originally reported on W3C Bugzilla ISSUE-21240 Sun, 10 Mar 2013 17:10:27 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
We need to specify what happens when these values are invalid, for example, negative, or greater than the length of the buffer.
Currently, WebKit ignores offset if it is negative, and in that case still respects duration. If a value larger than the length of the buffer is passed as offset, then WebKit ignores both offset and duration. It would probably make much more sense for these kinds of invalid values to throw DOM_SYNTAX_ERR, as they are probably not what the author intended to pass in.
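The throwing behavior the report suggests can be sketched as a plain validator (the function name and the exact checks are illustrative of the suggestion, not spec text):

```javascript
// Sketch of the validation suggested for
// AudioBufferSourceNode.start(when, offset, duration): throw on values
// that cannot be meaningful instead of silently ignoring them.
function checkStartArgs(bufferDuration, offset, duration) {
  if (offset < 0 || offset > bufferDuration) {
    throw new Error("SYNTAX_ERR: offset out of range");
  }
  if (duration !== undefined && (duration < 0 || duration > bufferDuration)) {
    throw new Error("SYNTAX_ERR: duration out of range");
  }
}
```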
Originally reported on W3C Bugzilla ISSUE-20841 Thu, 31 Jan 2013 19:57:26 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
The spec currently says:
"maxNumberOfChannels is the maximum number of channels that this hardware is capable of supporting. If this value is 0, then this indicates that maxNumberOfChannels may not be changed."
I believe the second maxNumberOfChannels should be numberOfChannels.
Originally reported on W3C Bugzilla ISSUE-21537 Tue, 02 Apr 2013 15:58:47 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
Originally reported on W3C Bugzilla ISSUE-21515 Tue, 02 Apr 2013 12:30:25 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
The conformance section should note that the keywords MUST, MAY, and SHOULD are used per RFC 2119 and constitute normative statements.
The group did not express a strong preference for either of the two options discussed; the choice will be left to the discretion of the editor.
Originally reported on W3C Bugzilla ISSUE-20372 Wed, 12 Dec 2012 23:08:33 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
That would allow people to pass in undefined for example, which would hopefully make for an easier to use API.
Originally reported on W3C Bugzilla ISSUE-21533 Tue, 02 Apr 2013 15:13:13 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
Proposal for a real-time recorderNode. Already possible with scriptProcessorNode, but a dedicated node would be more convenient.
Originally reported on W3C Bugzilla ISSUE-21520 Tue, 02 Apr 2013 14:41:45 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
Change:
"Audio file data can be in any of the formats supported by the audio element"
To:
"...can be accepted in formats containing only audio data (w/o video)"
This is to avoid the overhead of dealing with video containers that have an audio track.
Originally reported on W3C Bugzilla ISSUE-21542 Tue, 02 Apr 2013 16:08:36 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
Originally reported on W3C Bugzilla ISSUE-21426 Thu, 28 Mar 2013 17:05:04 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
For example, when the input channel count increases, it's not clear whether we need to play back silence or an upmixed version of the delayed buffer. The latter might not be efficient to implement because it might require a large amount of work when processing the first buffer after the input channel count changes.
Originally reported on W3C Bugzilla ISSUE-21548 Tue, 02 Apr 2013 16:20:21 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
This is a placeholder to keep track of all the sections in the spec which are considered "developer documentation", to be split out from the spec and into a primer/developer doc type document.
Originally reported on W3C Bugzilla ISSUE-21446 Sat, 30 Mar 2013 22:06:05 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
The current AnalyserNode is under-spec'ed. We need to provide more information on what an implementation needs to do.
Originally reported on W3C Bugzilla ISSUE-20229 Tue, 04 Dec 2012 08:46:43 GMT
Reported by Li Yin
Assigned to
From the spec, it says "stop must only be called one time and only after a call to start or stop, or an exception will be thrown."
It's confusing to me: if stop can be called only one time, it should be impossible for stop to be called after stop. In offline mode, from web developers' point of view, stop can be called multiple times.
So maybe it will be more reasonable if we describe it like this:
start can be called only when playbackState is UNSCHEDULED_STATE, or InvalidStateError exception will be thrown.
stop can be called only when playbackState is SCHEDULED_STATE or PLAYING_STATE, if not, InvalidStateError exception will be thrown.
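The proposed wording amounts to a small state machine, which can be sketched directly (state names follow the spec's playbackState constants; the helper functions are illustrative):

```javascript
// State machine matching the proposed wording: start() is legal only in
// UNSCHEDULED_STATE, stop() only in SCHEDULED_STATE or PLAYING_STATE;
// anything else is an InvalidStateError.
const UNSCHEDULED_STATE = 0;
const SCHEDULED_STATE = 1;
const PLAYING_STATE = 2;
const FINISHED_STATE = 3;

function applyStart(playbackState) {
  if (playbackState !== UNSCHEDULED_STATE) throw new Error("InvalidStateError");
  return SCHEDULED_STATE;
}

function applyStop(playbackState) {
  if (playbackState !== SCHEDULED_STATE && playbackState !== PLAYING_STATE) {
    throw new Error("InvalidStateError");
  }
  return FINISHED_STATE;
}
```

Expressed this way, "stop after stop" is naturally illegal because the first stop moves the node out of the states where stop is permitted.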
Originally reported on W3C Bugzilla ISSUE-21539 Tue, 02 Apr 2013 16:03:27 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
When an AudioNode is connected to an AudioParam and disconnect() is called, what should the behaviour be? How will it affect the value of the AudioParam. This needs to be specified in greater detail.
Originally reported on W3C Bugzilla ISSUE-17794 Tue, 17 Jul 2012 18:34:19 GMT
Reported by Chris Wilson
Assigned to
This can be described as "What happens when a playing node is temporarily disconnected?" - or, conversely, "If you play a node, and no one is listening (aka connected), does it really play?".
This first came to my attention when I was working with Z Goddard on the Fieldrunners article for HTML5Rocks (http://www.html5rocks.com/en/tutorials/webaudio/fieldrunners/) - particularly, read the section entitled Pausing Sounds. In short - they'd noticed that if you disconnected an audio connection, it paused the audio "stream". I thought this seemed pretty wrong - knowing what I knew about how automation on AudioParams works - in discussions with Chris Rogers, he confirmed this wasn't his expected behavior.
My mental model of connections as an API user still really wants to be "they're just like plugging 1/4" audio cables between hardware units," despite knowing that is not the case here; I would expect if a node was playing and I disconnected its graph, then replugged it 0.5 sec later, it would be 0.5 sec further along - i.e., I would expect the behavior to be the same as if I had connected the node to a zero-gain gain node connected to the audiocontext.destination.
Originally reported on W3C Bugzilla ISSUE-21532 Tue, 02 Apr 2013 15:11:49 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
The section on OfflineAudioContext states that "rendering/mixing-down is faster than real-time". Need to specify that the rendering should be "as fast as possible", with no relation to real time.
Originally reported on W3C Bugzilla ISSUE-21545 Tue, 02 Apr 2013 16:14:51 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26:
Originally reported on W3C Bugzilla ISSUE-21530 Tue, 02 Apr 2013 15:07:03 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
Need to add details to the spec around startRendering().
How do multiple offline/online contexts interact?
Originally reported on W3C Bugzilla ISSUE-17542 Tue, 19 Jun 2012 08:30:43 GMT
Reported by Marcus Geelnard (Opera)
Assigned to
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#ConvolverNode
Currently, the number of output channels of the ConvolverNode is controlled by the number of output channels of the AudioContext (although it's not very clear in the spec).
I think it would be a better idea to be able to control the number of output channels of the ConvolverNode upon construction rather than relying on the AudioDestinationNode.
It would give you more freedom to do custom processing, and makes the actual usage of the impulse response channels much more user controllable.
The last point is important, since the number of output channels controls which matrixing operation will be used.
Originally reported on W3C Bugzilla ISSUE-21345 Wed, 20 Mar 2013 06:04:34 GMT
Reported by Wei James
Assigned to
When changing accessories, the maximum number of channels can change, which has an impact on virtualization and 3D positioning; you wouldn't use the same settings and algorithms when switching from a headset to speakers.
If you are switching from local speakers to headphones, you are really sending the same stream to the same low-level driver, and the switch is typically handled in the audio codec hardware. You will have continuity of playback by construction, and the only time you'd need to reconfigure the graph is if you have any sort of 3D positioning.
But if the new output is HDMI, Bluetooth A2DP, or USB, there will be a delay and volume ramps when switching, and it'd be perfectly acceptable to stop and reconfigure without any impact on user experience. It would be interesting to capture this difference in the notification.
Originally reported on W3C Bugzilla ISSUE-21526 Tue, 02 Apr 2013 14:57:19 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
Remove the sentence:
The decodeAudioData() method is preferred over the createBuffer() from ArrayBuffer method because it is asynchronous and does not block the main JavaScript thread.
From the section on decodeAudioData
Originally reported on W3C Bugzilla ISSUE-20822 Tue, 29 Jan 2013 22:46:33 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
The spec currently says:
"The value parameter is the value the parameter will exponentially ramp to at the given time. An exception will be thrown if this value is less than or equal to 0, or if the value at the time of the previous event is less than or equal to 0."
We need to clarify what exception gets raised in these cases.
Originally reported on W3C Bugzilla ISSUE-21527 Tue, 02 Apr 2013 14:58:44 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
The XHR spec should have an entry in the web audio API references table, and we should point all references in the prose to that entry.
Originally reported on W3C Bugzilla ISSUE-21311 Sat, 16 Mar 2013 17:58:03 GMT
Reported by Joe Berkovitz / NF
Assigned to
Reference from mailing list:
post: http://lists.w3.org/Archives/Public/public-audio/2013JanMar/0395.html
author: Russell McClellan [email protected]
"[OfflineAudioContext] really should provide some way to receive data block-by-block rather than in a single "oncomplete" callback. Otherwise, the memory footprint grows quite quickly with the rendering time. I don't think this would be a major burden to implementors, and it would make the API tremendously more useful. Currently it's just not feasible to mix down even a minute or so. If this is ever going to be used for musical applications, this has to change."
Chris Rogers stated in teleconference 14 Mar 2013 that it is in fact feasible to mix down typical track lengths of several minutes with the single oncomplete call. A discussion of block size suggested that any breaking of audio rendering into chunks should be fairly large to avoid overhead of switching threads and passing data.
Originally reported on W3C Bugzilla ISSUE-21344 Wed, 20 Mar 2013 06:00:18 GMT
Reported by Wei James
Assigned to
Should it behave the same as setting the value attribute? Or should the value just be ignored if it is invalid?
Originally reported on W3C Bugzilla ISSUE-21546 Tue, 02 Apr 2013 16:16:51 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
Document the initial time constant and algorithm for dezippering, and allow it to be disabled. (This explains how to make it "sound good".)
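One common dezippering implementation is a one-pole lowpass on the parameter value. The sketch below shows that possibility only; it is not the algorithm the spec mandates (that is exactly what this issue asks to be documented):

```javascript
// One-pole lowpass "dezippering" sketch: each step, the internal value
// moves a fixed fraction of the remaining distance toward the most
// recently set target, so abrupt value changes never jump instantaneously.
function dezipperStep(current, target, smoothingCoeff) {
  return current + smoothingCoeff * (target - current);
}

// Repeated application converges on the target; a smoothingCoeff of 1
// would correspond to dezippering being disabled (instant jump).
```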
Originally reported on W3C Bugzilla ISSUE-21535 Tue, 02 Apr 2013 15:17:16 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
This is a placeholder for discussion on where block size limits should be defined in the spec, and whether or not the 128 sample value is appropriate.
Originally reported on W3C Bugzilla ISSUE-19991 Sat, 17 Nov 2012 17:12:35 GMT
Reported by Jussi Kalliokoski
Assigned to
Currently there are several annoyances to scheduling in the Web Audio API. For example, if you want to play back a dynamic sequence, you would power it up with a setTimeout/setInterval, both of which are throttled to once per second when in a background tab. Now, if you set up a timer that happens once per second what happens if a reflow or other event delays the timer? To protect against this, you can schedule events for more than a second at a time, but it's a tradeoff for responsiveness of the application. Responsiveness or robustness is not a nice tradeoff to make.
A suggested approach to this problem has been to add a callbackAtTime() method to the AudioContext, but I fear that introducing yet another timer mechanism to the main thread won't help much. Say you setup a callback to trigger one second from the current time. Should it
a) Fire before the clock actually hits the specified time to be a bit more sure to make it in time?
b) Fire exactly when the clock actually hits the specified time? In this case, the desired target is most likely missed.
c) Fire after the time? This is a ridiculous idea. :D
Anyway, even that would be susceptible to being delayed by other main-thread events like reflows etc., making it not much more reliable than setTimeout(). I think we're going to get a lot of "no" from other working groups and browser vendors if callbackAtTime() had no throttling rules, when browsers have finally, painfully put those restrictions in place for existing main-thread timer callbacks, so I don't think we'd get even that advantage.
Hence I'd suggest specifying access to the AudioContext interface from Web Workers, where one doesn't need to worry about main thread events delaying anything, nor about timer throttling.
For the time being, the Workers would obviously support fewer features (supporting MediaStreamSourceNode and MediaElementSourceNode in the Workers would require transferring these entities to the Worker as well). One option would of course be that AudioContexts would be defined as Transferables, as well as AudioNodes, letting graphs be shared across threads. This would probably be the best way to achieve this, provided we can eliminate race conditions by having value setters and getters exclusive to the thread that currently has ownership of each node. But there aren't many race-critical features like this in the Web Audio API, which makes it a prime candidate for being a Transferable.
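Until Worker access exists, the usual mitigation for the throttling problem described above is lookahead scheduling: on each (possibly late) timer tick, hand everything due within the next lookahead window to the sample-accurate audio clock. A pure-logic sketch with an injected clock and illustrative names:

```javascript
// Lookahead scheduling sketch: split pending events into those that fall
// within the next `lookahead` seconds (to be scheduled now on the
// sample-accurate audio clock) and those that can wait for a later tick.
// A late or throttled timer tick then causes no dropout, as long as the
// delay stays under the lookahead.
function dueEvents(events, now, lookahead) {
  const due = events.filter((e) => e.time < now + lookahead);
  const pending = events.filter((e) => e.time >= now + lookahead);
  return { due, pending };
}
```

The lookahead size is exactly the responsiveness-vs-robustness tradeoff the report describes: a larger window survives longer timer delays but makes already-scheduled events harder to cancel.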
Originally reported on W3C Bugzilla ISSUE-21519 Tue, 02 Apr 2013 14:41:41 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
Add an optional 4th argument for decodeAudioData that disables automatic sample rate conversion.
Originally reported on W3C Bugzilla ISSUE-20698 Thu, 17 Jan 2013 14:15:09 GMT
Reported by Joe Berkovitz / NF
Assigned to
Use case:
If one needs to display a visual cursor in relationship to some onscreen representation of an audio timeline (e.g. a cursor on top of music notation or DAW clips) then knowing the real time coordinates for what is coming out of the speakers is essential.
However on any given implementation an AudioContext's currentTime may report a time that is somewhat ahead of the time of the actual audio signal emerging from the device, by a fixed amount. If a sound is scheduled (even very far in advance) to be played at time T, the sound will actually be played when AudioContext.currentTime = T + L where L is a fixed number.
On Jan 16, 2013, at 2:05 PM [email protected] wrote:
It's problematic to incorporate scheduling other real-time events (even knowing precisely "what time it is" from the drawing function) without a better understanding of the latency.
The idea we reached (I think Chris proposed it, but I can't honestly remember) was to have a performance.now()-reference clock time on AudioContext that would tell you when the AudioContext.currentTime was taken (or when that time will occur, if it's in the future); that would allow you to synchronize the two clocks. The more I've thought about it, the more I quite like this approach - having something like AudioContext.currentSystemTime in window.performance.now()-reference.
On Jan 16, 2013, at 3:18 PM, Chris Rogers [email protected] wrote:
the general idea is that the underlying different platforms/OSs can have very different latency characteristics, so I think you're looking for a way to query the system to know what it is. I think that something like AudioContext.presentationLatency is what we're looking for. Presentation latency is the time difference between when you tell an event to happen and the actual time when you hear it. So, for example, with source.start(0), you would hope to hear the sound right now, but in reality will hear it with some (hopefully) small delay. One example where this could be useful is if you're trying to synchronize a visual "playhead" to the actual audio being scheduled...
I believe the goal for any implementation should be to achieve as low a latency as possible, one which is on-par with desktop/native audio software on the same OS/hardware that the browser is run on. That said, as with other aspects of the web platform (page rendering speed, cache behavior, etc.) performance is something which is tuned (and hopefully improved) over time for each browser implementation and OS.
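The cursor-sync math under discussion is simple once a latency value is exposed. This sketch assumes a hypothetical presentationLatency number (proposed above, not yet in the spec) and an illustrative helper name:

```javascript
// The signal heard right now corresponds to a context time slightly in
// the past: what left the graph presentationLatency seconds ago is what
// is just now emerging from the speakers.
function audibleTime(currentTime, presentationLatency) {
  return currentTime - presentationLatency;
}

// A visual playhead should be drawn at audibleTime(...), not at
// currentTime, so it lines up with the audio actually being heard.
```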
Originally reported on W3C Bugzilla ISSUE-18510 Thu, 09 Aug 2012 16:16:41 GMT
Reported by Tony Ross [MSFT]
Assigned to
The decodeAudioData method on AudioContext is stated to support any of the formats supported by the audio element, but unlike the audio element it doesn't allow the author to state the format of the audio data (since the ArrayBuffer is already a step removed from the XMLHttpRequest likely used to fetch the data).
We should fix this by adding an (ideally required) contentType argument to decodeAudioData to communicate the format of the audio in the provided ArrayBuffer.
Originally reported on W3C Bugzilla ISSUE-21417 Wed, 27 Mar 2013 23:05:37 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
Note: crogers suggests that once the UA calls the event handler with a given size, it should always call it with the same buffer size.
Originally reported on W3C Bugzilla ISSUE-17533 Mon, 18 Jun 2012 11:22:09 GMT
Reported by Marcus Geelnard (Opera)
Assigned to
The number of inputs and outputs of a JavaScriptAudioNode cannot be specified.
Without this ability, it is impossible to re-implement nodes such as AudioChannelSplitter.
Originally reported on W3C Bugzilla ISSUE-21538 Tue, 02 Apr 2013 16:00:50 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
In the section "The connect to AudioParam method", add detail about connecting an AudioNode to a non-AudioNode. An explanation of why you would do this (an LFO example) could be added to the graph routing introduction.
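In the LFO use case, an OscillatorNode connected to an AudioParam adds its output to the param's intrinsic value. The pure function below models the resulting instantaneous value; the browser-only routing it models (node and variable names illustrative) would be roughly: lfo -> lfoGain -> osc.frequency.

```javascript
// Instantaneous value of a parameter modulated by a sine LFO: the LFO's
// output (scaled to `depth` by a gain node) is summed with the param's
// base value, producing e.g. vibrato when applied to a frequency param.
function modulatedValue(baseValue, depth, lfoFrequency, t) {
  return baseValue + depth * Math.sin(2 * Math.PI * lfoFrequency * t);
}
```

A worked version of this in the graph routing introduction would make the "connect to AudioParam" section much easier to motivate.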
Originally reported on W3C Bugzilla ISSUE-19885 Wed, 07 Nov 2012 00:51:17 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
Currently the spec doesn't provide much information on what the algorithm behind DynamicsCompressorNode should look like, which is not very helpful for implementers.
Originally reported on W3C Bugzilla ISSUE-19871 Tue, 06 Nov 2012 01:43:29 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to
We should probably throw INDEX_SIZE_ERR.
Originally reported on W3C Bugzilla ISSUE-21528 Tue, 02 Apr 2013 15:01:18 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26
(As minuted. Needs clarification)
Originally reported on W3C Bugzilla ISSUE-21512 Tue, 02 Apr 2013 12:21:56 GMT
Reported by Olivier Thereaux
Assigned to
Per discussion at Audio WG f2f 2013-03-26:
The Introduction section mentions the use cases and requirements document. The link should be pointing to http://www.w3.org/TR/webaudio-usecases/ and that spec/Note added to the list of references.