qmuntal / gltf
:eyeglasses: Go library for encoding glTF 2.0 files
Home Page: https://www.khronos.org/gltf/
License: BSD 2-Clause "Simplified" License
Line 17 in 58807b5
If you change the last line to:
if err := gltf.Save(doc, "./example.gltf"); err != nil {
panic(err)
}
You get
panic: gltf: Invalid buffer.uri value '' [recovered]
panic: gltf: Invalid buffer.uri value ''
It seems .glb and .gltf aren't treated the same? Any tips to resolve the above?
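For what it's worth, a .glb embeds the buffer payload in its binary chunk, while a .gltf expects each buffer to be resolvable through its uri, which is why an empty uri only fails in the second case. One common workaround (a sketch of the general glTF technique, not an API of this library) is to embed the bytes as a base64 data URI before calling Save:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// embedBufferURI builds a data URI for a buffer payload. In a .gltf file the
// buffer bytes live outside the JSON, so an empty uri is invalid; a data URI
// keeps the document self-contained (at the cost of ~33% size overhead).
func embedBufferURI(data []byte) string {
	return "data:application/octet-stream;base64," +
		base64.StdEncoding.EncodeToString(data)
}

func main() {
	fmt.Println(embedBufferURI([]byte{1, 2, 3}))
}
```

Assigning such a string to the buffer's URI field (assuming its Data field holds the same bytes) should satisfy the uri validation when saving as .gltf.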
Is it possible to use this library to apply Draco compression to an existing gltf file?
Take a look at the glTF sample file called Box With Spaces. This file has a uri reference to Box With Spaces.bin. But this library URL-encodes the URI, so the spaces become %20, and it then fails because it cannot find a file named Box%20With%20Spaces.bin.
The URL encoding happens because of a round-trip through net/url.Parse.
A solution would be to only use the parsed URL if the scheme is not empty, i.e. the URL is http/file/etc.
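The proposed check can be sketched as follows (a standalone illustration of the idea, not the library's actual code):

```go
package main

import (
	"fmt"
	"net/url"
)

// resolveURI only trusts the round-trip through url.Parse when the URI has an
// explicit scheme; a bare relative path like "Box With Spaces.bin" is kept
// verbatim so the file can still be found on disk.
func resolveURI(uri string) string {
	if u, err := url.Parse(uri); err == nil && u.Scheme != "" {
		return u.String() // http://, file://, data:, etc.
	}
	return uri // relative file path: leave it untouched
}

func main() {
	fmt.Println(resolveURI("Box With Spaces.bin"))     // unchanged
	fmt.Println(resolveURI("https://example.com/a b")) // percent-encoded
}
```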
Hi,
This is a minor issue.
While using the library to parse some animations, I noticed that the Input and Output fields of the AnimationSampler object are defined as pointers to uint32, which is the usual approach when defining optional fields.
However, looking at the spec, it seems that these are required:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#reference-animation-sampler
Unless I am missing something, maybe you could consider making them non-pointer in some future iteration. Though to be honest, it's not really that big of an issue as is.
I am using your project to create a glTF file. I have Draco-precompressed geometry data such as a.drc, and I referenced it in a buffer in the document. When I save the document, the a.drc file gets overwritten and its size becomes zero.
This is how I define my buffer
buffer := gltf.Buffer{
URI: "a.drc",
ByteLength: 123,
}
This is how I save the doc
if err := gltf.Save(doc, path, false); err != nil {
panic(err)
}
Hi,
We found some interesting behavior of the glTF document unmarshaling.
According to the official documentation, the accessor type can only be one of seven values.
But if we try to unmarshal and then marshal a document with an incorrect accessor type, we get unexpected results.
In the code, we can see the following
const (
// AccessorScalar corresponds to a single dimension value.
AccessorScalar AccessorType = iota
// AccessorVec2 corresponds to a two dimensions array.
AccessorVec2
// AccessorVec3 corresponds to a three dimensions array.
AccessorVec3
// AccessorVec4 corresponds to a four dimensions array.
AccessorVec4
// AccessorMat2 corresponds to a 2x2 matrix.
AccessorMat2
// AccessorMat3 corresponds to a 3x3 matrix.
AccessorMat3
// AccessorMat4 corresponds to a 4x4 matrix.
AccessorMat4
)
which means AccessorScalar is 0.
According to this block of code
// UnmarshalJSON unmarshal the accessor type with the correct default values.
func (a *AccessorType) UnmarshalJSON(data []byte) error {
var tmp string
err := json.Unmarshal(data, &tmp)
if err == nil {
*a = map[string]AccessorType{
"SCALAR": AccessorScalar,
"VEC2": AccessorVec2,
"VEC3": AccessorVec3,
"VEC4": AccessorVec4,
"MAT2": AccessorMat2,
"MAT3": AccessorMat3,
"MAT4": AccessorMat4,
}[tmp]
}
return err
}
the accessor type SCALAR will return the value 0 and true as the second return value (which is not handled here).
But if we pass an incorrect accessor type (like CUSTOM_TYPE) to UnmarshalJSON, the map[string]AccessorType lookup will also return the value 0 (and false for the second return value, ok).
UnmarshalJSON doesn't validate the second return value of the map lookup, which is why every incorrect accessor type yields the same result as SCALAR.
Unfortunately, this means an incorrect accessor type will always be marshaled back as SCALAR.
Could we fix it? Maybe we can check the second return value of the map[string]AccessorType lookup and return an error if we get false.
Also, I wrote some test cases to show the behavior described above.
func TestAccessorType_UnmarshalJSON(t *testing.T) {
{ // correct unmarshalling to 2 (for VEC3)
typeStr := []byte(`"VEC3"`)
var accType gltf.AccessorType = 0
err := json.Unmarshal(typeStr, &accType)
if err != nil {
t.Errorf("Error while unmarshaling accessor type: %s, %s", string(typeStr), err)
}
if accType != gltf.AccessorVec3 {
t.Errorf("Expected: %d, got: %d", gltf.AccessorVec3, accType)
}
}
{ // when the accessor type string is incorrect, the value will always be 0 -> as a result, it will be marshaled to SCALAR
typeStr := []byte(`"CUSTOM_TYPE"`)
var accType gltf.AccessorType = 100
err := json.Unmarshal(typeStr, &accType)
if err != nil {
t.Errorf("Unexpected error while unmarshaling accessor type: %s, %s", string(typeStr), err)
}
if accType != gltf.AccessorScalar {
t.Errorf("Expected: %d, got: %d", gltf.AccessorScalar, accType)
}
}
}
I've prepared the appropriate PR.
Thank you in advance!
Hi,
First of all, great library, thanks for creating it.
This is both an issue and a support request, I guess. This one might be hard to explain and argue for, so bear with me.
It seems that the Node type does not conform well to the glTF specification. It defines Matrix, Translation, Rotation and Scale as required (i.e. non-pointer fields), but the specification clearly states that they are not required:
https://www.khronos.org/registry/glTF/specs/2.0/glTF-2.0.html#reference-node
My initial expectation was that these values would be zeroes if not specified in the glTF files. This was based on the following methods and their implementations:
However, it turned out (and this was tricky to spot and caught me off guard) that these fields will only be zeroes if they are specified in the glTF file as zeroes.
So as a result I now have the following questions:
- Should I use the Matrix field or the TRS fields?
- Should I combine the Matrix field with the TRS fields (converted to a Matrix) and hope that there isn't a non-compliant exporter out there that produces both (resulting in a double transform)?
- Should I use ScaleOrDefault and the like? It seems that just using Scale directly should be better. In fact, it would allow me to handle situations where a zero scale is specified in order to make a node invisible (described in the spec here).

Maybe there is something I am missing, since I am still fairly new to the API. And I also understand that the current design makes it more user-friendly, though personally the hidden behavior was confusing for me.
Something that can be considered for a backward-incompatible version in the future is to have the Matrix, Translation, Rotation and Scale fields be pointer types, and have the respective MatrixOrDefault, RotationOrDefault, ScaleOrDefault and TranslationOrDefault produce default transforms when the field is nil.
Hello. I'm new to glTF.
https://github.com/qmuntal/gltf/blob/6f371950e7fa5384366436658da419b8f27ee3ce/modeler/write.go
I read the code.
I think almost every modeler Write* method needs to call ensurePadding.
Its absence produces poor results in my converter.
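For context, the padding in question is just aligning the buffer to a 4-byte boundary before appending the next block of data, since glTF requires accessor byte offsets to be multiples of the component size. A minimal sketch of an ensurePadding-style helper:

```go
package main

import "fmt"

// pad4 appends zero bytes until len(data) is a multiple of 4, which is what
// an ensurePadding-style helper must do before each modeler Write* call so
// that the next accessor's byteOffset stays properly aligned.
func pad4(data []byte) []byte {
	if rem := len(data) % 4; rem != 0 {
		data = append(data, make([]byte, 4-rem)...)
	}
	return data
}

func main() {
	fmt.Println(len(pad4(make([]byte, 6)))) // padded up to 8
	fmt.Println(len(pad4(make([]byte, 8)))) // already aligned: stays 8
}
```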
Thanks for reading.
Sorry for my bad English.
This is similar to #15, except that that was about a way to create a glTF document, whereas I would like a way to read a glTF document without any knowledge of glTF.
The ideal API for me would take a document, a mesh index, and return vertices, normals, uvs, material indices, etc.
You could call the package scanner, reader, inspector, etc.
I'm trying to create a mesh from scratch and save it. I started with the example code in the readme:
doc := gltf.NewDocument()
doc.Meshes = []*gltf.Mesh{{
Name: "Pyramid",
Primitives: []*gltf.Primitive{{
Indices: gltf.Index(modeler.WriteIndices(doc, []uint16{0, 1, 2})),
Attributes: map[string]uint32{
gltf.POSITION: modeler.WritePosition(doc, [][3]float32{{0, 0, 0}, {0, 10, 0}, {0, 0, 10}}),
gltf.COLOR_0: modeler.WriteColor(doc, [][3]uint8{{255, 0, 0}, {0, 255, 0}, {0, 0, 255}}),
},
}},
}}
I added:
doc.Nodes = []*gltf.Node{{Name: "Pyramid", Mesh: gltf.Index(0)}}
gltf.Save(doc, "./test.gltf")
The output test.gltf is:
{"accessors":[{"bufferView":0,"componentType":5123,"count":3,"type":"SCALAR"},{"bufferView":1,"componentType":5126,"count":3,"type":"VEC3","max":[0,10,10],"min":[0,0,0]},{"bufferView":2,"componentType":5121,"normalized":true,"count":3,"type":"VEC3"}],"asset":{"generator":"qmuntal/gltf","version":"2.0"},"buffers":[{"byteLength":56}],"bufferViews":[{"buffer":0,"byteLength":6,"target":34963},{"buffer":0,"byteOffset":8,"byteLength":36,"target":34962},{"buffer":0,"byteOffset":44,"byteLength":12,"byteStride":4,"target":34962}],"meshes":[{"name":"Pyramid","primitives":[{"attributes":{"COLOR_0":2,"POSITION":1},"indices":0}]}],"nodes":[{"name":"Pyramid","mesh":0}],"scene":0,"scenes":[{"name":"Root Scene"}]}
The buffer here has "byteLength":56 but no "uri".
If instead of Save() I use SaveBinary(), the output test.glb does have data.
I'm using gltf v0.21.1 and go 1.17.
I tried to open the file example.glb, generated by the example from the documentation https://pkg.go.dev/github.com/qmuntal/[email protected]/modeler, with threejs and babylonjs, but without success.
I tried to embed a texture image into a .glb using gltf.Modeler, but it generated a broken file.
In modeler.go:
buffer.ByteLength += uint32(len(buffer.Data))
I think = would be correct, not +=.
Getting an EOF error when opening a .glb file converted from .gltf using gltf.SaveBinary().
Sample input .gltf file: https://github.com/qmuntal/gltf/files/9627951/simple.gltf.txt
Sample script:
package main

import (
"log"
"github.com/qmuntal/gltf"
)
func main() {
const simpleGltf = "simple.gltf"
const outputGlb = "simple.glb"
srcDoc, err := gltf.Open(simpleGltf)
if nil != err {
log.Fatal(err)
}
err = gltf.SaveBinary(srcDoc, outputGlb)
if nil != err {
log.Fatal(err)
}
tgtDoc, err2 := gltf.Open(outputGlb)
if nil != err2 {
log.Fatal(err2)
}
log.Println("SUCCESS!")
log.Printf("DEBUG: %#v", tgtDoc)
}
NB: the error is seen using the latest version of the gltf package (v0.22.1), but not an older version (v0.18.3).
The output files differ slightly; it looks like the old version of the package was adding some nil "padding" at the end.
When the Sampler's MagFilter, MinFilter, WrapS and WrapT fields are zero values, they won't be included in the encoded JSON document.
These fields' types have MarshalJSON implemented, but IIUC omitempty is evaluated before MarshalJSON, so the values are omitted (because they're zero) and the encoded JSON won't have them.
I think that there are 2 options for resolving this issue.
What do you think?
I'll try to make a PR after you decide.
Example pseudocode as follows:
Array-of-Structures example
attrs := []struct {
Position [3]float `gltf:"required"`
TexCoord_0 [2]float // No need for interface{}
Color_0 [3]uint8
Color_1 [4]float // Additional colors supported seamlessly
CustomMetallicRoughness [2]float // Marshals/unmarshals as "_METALLICROUGHNESS"
// Occlusion float // panics, custom attributes must be prefixed with "Custom"
}{}
Structure-of-Arrays example
attrs := struct {
Position [][3]float `gltf:"required"`
Normal [][3]float
}{}
Array-of-Pointer-to-Structures example
// modeler.ReadAttributes() should internally allocate an Array-of-Structures
// to which the returned Array-of-Pointer-to-Structures all point to, to reduce allocator/GC pressure
attrs := []*struct {
Position [3]float `gltf:"required"`
Normal [3]float
}{}
if err := modeler.ReadAttributes(doc, doc.Meshes[0].Primitives[0].Attributes, &attrs); err != nil {
// ...
}
// Maybe an interleaved option?
if attributesMap, err := modeler.WriteAttributes(doc, attrs); err == nil {
doc.Meshes[0].Primitives[0].Attributes = attributesMap
} else {
// ...
}
I saw somewhere on the original roadmap that support for validations would be added; is that still the case?
Hello!
First of all, congratulations for the project.
However, I am trying to use it to add an imported image, and I am lost in the object hierarchy. I've tried to read the tests, but they don't have a specific item for images, and I couldn't find an example for this.
Would you mind creating and merging a simple example that imports and uses an image in a scene in a GLB doc?
With a basic example I guess I would be able to move forward, and later I will be more than happy to send PRs with other examples.
Thanks in advance.
I'm just trying to use this project, but the sample code in readme.md doesn't work. I'm new to Go and glTF. Maybe the demo code should be this?
doc := &gltf.Document{
Scene: 0,
Asset: gltf.Asset{Generator: "qmuntal/gltf"},
Scenes: []gltf.Scene{
{
Extras: 8.0, Extensions: gltf.Extensions{"a": "b"}, Name: "s_1",
},
},
}
if err := gltf.Save(doc, "./a.gltf", false); err != nil {
panic(err)
}
Hi,
In my work we have to use the BufferView name field.
According to the official documentation, BufferView can contain an optional name field.
I found that the corresponding struct doesn't contain such a field.
Could we please add a name field to the BufferView struct, like in the sample below?
// BufferView is a view into a buffer generally representing a subset of the buffer.
type BufferView struct {
Extensions Extensions `json:"extensions,omitempty"`
Extras interface{} `json:"extras,omitempty"`
Buffer uint32 `json:"buffer"`
ByteOffset uint32 `json:"byteOffset,omitempty"`
ByteLength uint32 `json:"byteLength" validate:"required"`
ByteStride uint32 `json:"byteStride,omitempty" validate:"omitempty,gte=4,lte=252"`
Target Target `json:"target,omitempty" validate:"omitempty,oneof=34962 34963"`
Name string `json:"name,omitempty"`
}
For that, I prepared the appropriate PR.
Thank you in advance!
Hi qmuntal,
I'm pretty new to this low level of glTF editing.
I spent two days trying to find the right parameters for the modeler's WriteIndices.
All I'm trying to do now is write a simple triangle:
package main
import (
"github.com/qmuntal/gltf"
"github.com/qmuntal/gltf/modeler"
)
func drawTriangle() {
doc := gltf.NewDocument()
positionAccessor := modeler.WritePosition(doc, [][3]float32{{0, 0, 0}, {0, 10, 0}, {0, 0, 10}})
indicesAccessor := modeler.WriteIndices(doc, []uint8{ 0, 1, 0,
0, 0, 1,
0, 0, 1 })
colorIndices := modeler.WriteColor(doc, [][3]uint8{{50, 155, 255}, {0, 100, 200}, {255, 155, 50}})
doc.Meshes = []*gltf.Mesh{{
Name: "Pyramid",
Primitives: []*gltf.Primitive{{
Indices: gltf.Index(indicesAccessor),
Attributes: map[string]uint32{
"POSITION": positionAccessor,
"COLOR_0": colorIndices,
},
}},
}}
doc.Nodes = []*gltf.Node{{Name: "Root", Mesh: gltf.Index(0)}}
doc.Scenes[0].Nodes = append(doc.Scenes[0].Nodes, 0)
if err := gltf.SaveBinary(doc, "./example.glb"); err != nil {
panic(err)
}
}
func main (){
drawTriangle()
}
I tried two of the examples in the repo (the second one in the md file and example_test.go), but they didn't work.
The pyramid one in the md file works fine, and this code is trying to replicate your example, but I don't know what data I'm supposed to pass as indices.
Thanks for these contributions. Lots of respect for your code.
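For reference (plain Go, not specific to this library): index data just selects vertices from the attribute arrays, three indices per triangle, so a single triangle over three positions only needs the indices 0, 1, 2. A stdlib-only illustration of how a renderer walks them:

```go
package main

import "fmt"

// Walks index data the way a glTF consumer would: every group of three
// indices picks three entries from the position array to form one triangle.
func main() {
	positions := [][3]float32{{0, 0, 0}, {0, 10, 0}, {0, 0, 10}}
	indices := []uint16{0, 1, 2} // one triangle using each vertex once
	for i := 0; i < len(indices); i += 3 {
		fmt.Println("triangle:",
			positions[indices[i]], positions[indices[i+1]], positions[indices[i+2]])
	}
}
```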
Hello, firstly thank you for building this module; it's been really useful in a project I've been playing with.
My apologies if there is something in the documentation I've overlooked but I can't find a way to remove data that is no longer used.
For example, in my use case I'm loading a source glTF file and making some modifications to the data using the modeler package (such as reading, editing and then writing vertex positions, as well as replacing a texture assigned to a material). I'd like to be able to remove the old, no-longer-referenced vertex and image data to keep the resulting file size as small as possible.
I'd love to contribute to the project too, so if there's any way I could help with the implementation of the above, I'd be more than happy to. However, I am very new to glTF; I've only known about the format's existence for a few weeks.
I would like to see an API that lets someone supply a list of vertices, zero or more material definitions, and one or more lists of vertex indices to define triangles, each with an associated material reference, and have all the nuts and bolts of assembling the buffer, bufferviews, and accessors done automatically. Simply calling .Save on that type should allow the user to then persist this model to disk.
I don't know if that is within the scope of this library, but I felt it needed suggesting in case it is.
doc := gltf.NewDocument()
positionAccessor := modeler.WritePosition(doc, [][3]float32{{43, 43, 0}, {83, 43, 0}, {63, 63, 40}, {43, 83, 0}, {83, 83, 0}})
indicesAccessor := modeler.WriteIndices(doc, []uint16{0, 1, 2, 3, 1, 0, 0, 2, 3, 1, 4, 2, 4, 3, 2, 4, 1, 3})
colorIndices := modeler.WriteColor(doc, [][3]uint8{{50, 155, 255}, {0, 100, 200}, {255, 155, 50}, {155, 155, 155}, {0, 0, 0}})
doc.Meshes = []*gltf.Mesh{{
Name: "Pyramid",
Primitives: []*gltf.Primitive{
{
Indices: gltf.Index(indicesAccessor),
Attributes: map[string]uint32{
gltf.POSITION: positionAccessor,
gltf.COLOR_0: colorIndices,
},
},
},
}}
doc.Nodes = []*gltf.Node{{Name: "Root", Mesh: gltf.Index(0)}}
doc.Scenes[0].Nodes = append(doc.Scenes[0].Nodes, 0)
//if err := gltf.SaveBinary(doc, "./output/example.glb"); err != nil {
// panic(err)
//}
if err := gltf.Save(doc, "./output/example.gltf"); err != nil {
panic(err)
}
I got this code from here and am using it to learn this package.
When I save this doc as a .glb, it works fine, but not when saving as a .gltf: I get panic: gltf: Invalid buffer.uri value ''.
So I am wondering: do I have to set a buffer URI for the document when saving as .gltf?
If I do, then why was .glb OK?
I was wondering why the modeler has a limit of 32 bits for indices and positions.
Right now the JSON schema for numbers artificially limits itself to 32-bit precision. It'd be nice to be able to pull full precision from the JSON for applications like geospatial rendering.
https://github.com/qmuntal/gltf/blob/master/gltf.go#L176C19-L176C19
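To illustrate the precision concern (plain Go, unrelated to the library's API): a coordinate of geospatial magnitude cannot round-trip through float32, because its ~7 significant decimal digits cannot hold centimeter detail at meter scales of several million.

```go
package main

import "fmt"

func main() {
	// A UTM-style northing in meters: float32 loses the fractional part
	// at this magnitude, while float64 preserves it.
	coord := 4649776.22
	fmt.Println(float64(float32(coord)) == coord) // false: precision lost
}
```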
At a cursory glance, this would remove some redundancies in specifying Index/Position/Color/etc multiple times.
modeler.Read*()
primitive := doc.Meshes[0].Primitives[0]
-position, err := modeler.ReadPosition(doc, doc.Accessors[primitive.Attributes[gltf.POSITION]], nil)
+position, err := modeler.ReadPosition(doc, primitive, nil)
-color, err := modeler.ReadColor(doc, doc.Accessors[primitive.Attributes[gltf.COLOR_0]], nil)
+color, err := modeler.ReadColor(doc, primitive, 0, nil)
modeler.Write*()
primitive := &gltf.Primitive{}
doc.Meshes = []*gltf.Mesh{{
Name: "Pyramid",
Primitives: []*gltf.Primitive{primitive},
}}
-primitive.Indices = gltf.Index(modeler.WriteIndices(doc, indices))
+modeler.WriteIndices(doc, primitive, indices)
-primitive.Attributes[gltf.POSITION] = modeler.WritePosition(doc, position)
+modeler.WritePosition(doc, primitive, position)
-primitive.Attributes[gltf.COLOR_0] = modeler.WriteColor(doc, color)
+modeler.WriteColor(doc, primitive, 0, color)
Hi,
We've found one more interesting behavior of the glTF document unmarshaling.
In my project, I have to hold a very big unmarshalled glTF in a gltf.Document struct in memory.
After loading the glTF into gltf.Document structs, the application consumed quite a lot of memory.
I've profiled this case and found the reason. While handling glTF, the decoder (decode.go) calls the following function for each buffer:
for i := externalBufferIndex; i < len(doc.Buffers); i++ {
if err := d.decodeBuffer(doc.Buffers[i]); err != nil {
return err
}
}
In the function decodeBuffer we can see the following lines of code:
} else if err = validateBufferURI(buffer.URI); err == nil {
buffer.Data = make([]byte, buffer.ByteLength)
err = d.ReadHandler.ReadFullResource(buffer.URI, buffer.Data)
}
which allocate a block of memory (a byte slice) for each asset in the glTF's buffers according to buffer.ByteLength, and pass this byte slice to the ReadFullResource function.
I have pretty huge glTF documents with many assets (in total, a few gigabytes) which are stored in the cloud. That's why I skip the execution of the ReadFullResource function by overriding it via gltf.NewDecoder(...).WithReadHandler(CustomReadHandler{}); the custom ReadFullResource function just returns nil.
I wanted to free the memory allocated for buffer.Data in the custom ReadFullResource function, but the byte slice (from buffer.Data) is passed to the ReadFullResource function by value (a new variable is created that refers to the same block of memory).
That's why in my case, after executing the line
err = d.ReadHandler.ReadFullResource(buffer.URI, buffer.Data)
buffer.Data still has a non-empty value (actually all zeros) with length and capacity equal to the corresponding buffer.ByteLength.
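This is standard Go slice semantics: the callee receives a copy of the slice header, so assigning nil inside it cannot release the caller's reference, while a *[]byte parameter can. A stdlib-only illustration:

```go
package main

import "fmt"

// clearByValue receives a copy of the slice header; assigning nil to it
// has no effect on the caller's variable, which keeps the memory alive.
func clearByValue(data []byte) { data = nil }

// clearByPointer receives the address of the caller's slice header, so it
// can drop the caller's reference and let the memory be collected.
func clearByPointer(data *[]byte) { *data = nil }

func main() {
	buf := make([]byte, 4)
	clearByValue(buf)
	fmt.Println(buf == nil) // false: caller still holds the memory

	clearByPointer(&buf)
	fmt.Println(buf == nil) // true: the caller's reference is gone
}
```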
Also, I've written a test case to show the behavior described above.
type mockSkipReadHandler struct{}
func (m mockSkipReadHandler) ReadFullResource(_ string, _ []byte) error {
return nil
}
func TestDecoder_decodeBuffer_SkipReader(t *testing.T) {
type args struct {
buffer *Buffer
}
tests := []struct {
name string
d *Decoder
args args
want []byte
wantErr bool
length int
}{
{"skipReader", NewDecoder(nil).WithReadHandler(&mockSkipReadHandler{}), args{&Buffer{ByteLength: 10000, URI: "a.bin"}}, make([]byte, 10000), false, 10000},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := tt.d.decodeBuffer(tt.args.buffer); err != nil {
t.Errorf("[SKIP-test] Decoder.decodeBuffer() in error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(tt.args.buffer.Data, tt.want) {
t.Errorf("[SKIP-test] Decoder.decodeBuffer() buffer = %v, want %v", string(tt.args.buffer.Data), tt.want)
}
if len(tt.args.buffer.Data) != tt.length {
t.Errorf("[SKIP-test] Decoder.decodeBuffer() length = %v, want %v", len(tt.args.buffer.Data), tt.length)
}
if cap(tt.args.buffer.Data) != tt.length {
t.Errorf("[SKIP-test] Decoder.decodeBuffer() capacity = %v, want %v", cap(tt.args.buffer.Data), tt.length)
}
})
}
}
If I comment out these two lines of code (lines 181 and 182 in decode.go), skipping the memory allocation for buffer.Data and the call to the ReadFullResource function, I get impressive memory results.
Could I suggest a little change so that anyone who has a problem like mine can solve it via a custom ReadFullResource function? Maybe pass the buffer.Data variable by reference (like err = d.ReadHandler.ReadFullResource(buffer.URI, &buffer.Data) in decode.go), so that it is possible to free the memory (like *data = nil in the custom ReadFullResource function)?
If you agree, could I create the appropriate PR?
Thank you in advance!
Hi,
I'm very new to Go and gltf. Is there any chance of you (or someone else) adding a "Hello World" example to display a rotating cube? The program would read a gltf file (e.g. Box.gltf) using gltf.Open and extract the necessary vertices to display a rotating cube using the simplest vertex and frag shaders.
My main difficulty is going from opening a gltf.Document to displaying something. An example would be nice.
Thanks
Andre
KhronosGroup/glTF-Validator reports errors on a .glb file (see attachment test.glb.gz) generated by this package:

| Error | Message | Pointer |
|---|---|---|
| GLB_CHUNK_LENGTH_UNALIGNED | Length of 0x00000002 chunk is not aligned to 4-byte boundaries. | |
| GLB_CHUNK_TOO_BIG | Chunk (0x00000002) length (1179937895) does not fit total GLB length. | |
| GLB_UNEXPECTED_END_OF_CHUNK_DATA | Unexpected end of chunk data. | |

| Warning | Message | Pointer |
|---|---|---|
| GLB_UNKNOWN_CHUNK_TYPE | Unknown GLB chunk type: 0x00000002. | |
This one is a bit of a nitpick, but I assume the lightspuntual package was intended to be lightspunctual (a c is missing)?