GraphNeuralNetworks.jl's Introduction

GraphNeuralNetworks.jl


GraphNeuralNetworks.jl is a graph neural network library written in Julia and based on the deep learning framework Flux.jl.

Among its features:

  • Implements common graph convolutional layers.
  • Supports computations on batched graphs.
  • Easy to define custom layers.
  • CUDA support.
  • Integration with Graphs.jl.
  • Examples of node-, edge-, and graph-level machine learning tasks.
  • Support for heterogeneous and temporal graphs.

Installation

GraphNeuralNetworks.jl is a registered Julia package. You can easily install it through the package manager:

pkg> add GraphNeuralNetworks

Usage

Usage examples can be found in the examples and notebooks folders. Also, make sure to read the documentation for a comprehensive introduction to the library.
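
As a quick orientation, here is a minimal sketch of the typical workflow (a random graph with node features fed through a small GNNChain); it follows the API used throughout the examples below but is not taken verbatim from the docs:

using Flux, GraphNeuralNetworks, Statistics

g = rand_graph(10, 30, ndata = rand(Float32, 3, 10))  # 10 nodes, 30 edges, 3 features per node

model = GNNChain(GCNConv(3 => 8, relu),
                 GCNConv(8 => 8, relu),
                 GlobalPool(mean),   # aggregate node features into a graph-level feature
                 Dense(8, 1))

y = model(g, g.ndata.x)  # forward pass: a 1×1 graph-level prediction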

Citing

If you use GraphNeuralNetworks.jl in a scientific publication, we would appreciate the following reference:

@misc{Lucibello2021GNN,
  author       = {Carlo Lucibello and other contributors},
  title        = {GraphNeuralNetworks.jl: a geometric deep learning library for the Julia programming language},
  year         = 2021,
  url          = {https://github.com/CarloLucibello/GraphNeuralNetworks.jl}
}

Acknowledgments

GraphNeuralNetworks.jl is largely inspired by PyTorch Geometric, Deep Graph Library, and GeometricFlux.jl.

GraphNeuralNetworks.jl's People

Contributors

aarsebail, abieler, achiverram28, animiral, asinghvi17, askorupka, aurorarossi, bicycle1885, carlolucibello, dependabot[bot], dsantra92, eahenle, github-actions[bot], graidl, melioristic, mplemay, natema, oysteinsolheim, pevnak, pitmonticone, pri1311, rbsparky, svilupp, tclements, umbriquse, yichengdwu

GraphNeuralNetworks.jl's Issues

add examples

  • Semi-supervised node classification with the Cora dataset
  • Supervised graph classification with the TUDataset

Merging multiple feature arrays

I'm a bit confused about using multiple node feature arrays per graph. Using multiple node feature arrays makes it possible to keep different node features separate (e.g. x and y values); however, passing such a graph through a layer produces an error. Is the intended use to keep all features in a single array? Couldn't all feature arrays be merged?

This works

julia> l = GCNConv(2=>1)
julia> g = rand_graph(4, 6, ndata=(x = ones(2,4),))
julia> l(g)
GNNGraph:
    num_nodes = 4
    num_edges = 6
    ndata:
        x => (1, 4)

This doesn't

julia> g = rand_graph(4, 6, ndata=(x = ones(4), y = zeros(4)))  
julia> l(g)  
┌ Error: Multiple feature arrays, access directly through g.ndata
└ @ GraphNeuralNetworks.GNNGraphs ~/.julia/packages/GraphNeuralNetworks/KNr8R/src/GNNGraphs/query.jl:321
ERROR: MethodError: no method matching (::GCNConv{Matrix{Float32}, Vector{Float32}, typeof(identity)})(::GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}, ::Nothing)
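
A possible workaround, sketched here under the assumption that the two features can live in one array, is to merge them along the feature dimension before constructing the graph:

julia> x1 = ones(1, 4); x2 = zeros(1, 4);
julia> g = rand_graph(4, 6, ndata=(x = vcat(x1, x2),))  # a single 2×4 feature array
julia> l = GCNConv(2=>1)
julia> l(g)  # works, since there is only one feature array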

implement graph concatenation

When training on multiple small graphs, typically one batches several graphs together into a larger graph for efficiency.
This operation is called blockdiag in SparseArrays and LightGraphs.jl.

For FeaturedGraphs, node and edge features should be vertically concatenated in the resulting graph. I'm not sure how we should handle global features; maybe we should just require them to be == nothing for all graphs as a start.
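
To make the intended operation concrete, here is a minimal sketch assuming the graphs are given as sparse adjacency matrices and node features are stored as (features × nodes) matrices; it only illustrates the block-diagonal structure, not the package API:

using SparseArrays

A1 = sprand(4, 4, 0.3); X1 = rand(3, 4)  # graph 1: adjacency and node features
A2 = sprand(6, 6, 0.3); X2 = rand(3, 6)  # graph 2

Abatch = blockdiag(A1, A2)  # block-diagonal adjacency of the batched graph
Xbatch = hcat(X1, X2)       # node features concatenated along the node dimension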

Problem with InlineStrings.jl

How can I work around the issue in InlineStrings (JuliaStrings/InlineStrings.jl#21)?
I can't make any sense of the suggested fix.

For example:

julia> using Flux
julia> using GraphNeuralNetworks

julia> g = rand_graph(2,2)
GNNGraph:
    num_nodes = 2
    num_edges = 2

julia> Flux.batch([g,g])
GNNGraph:
    num_nodes = 4
    num_edges = 4
    num_graphs = 2

julia> using InlineStrings

julia> Flux.batch([g,g])
ERROR: MethodError: defalg(::Vector{Union{}}) is ambiguous. Candidates:
  defalg(v::AbstractArray{<:Union{Missing, Number}}) in Base.Sort at sort.jl:658
  defalg(::AbstractArray{<:Union{Missing, String1, String15, String3, String7}}) in InlineStrings at /home/oystein/.julia/packages/InlineStrings/aWvyB/src/InlineStrings.jl:698
Possible fix, define
  defalg(::AbstractArray{<:Missing})
Stacktrace:
 [1] sort!(v::Vector{Union{}})
   @ Base.Sort ./sort.jl:711
 [2] sort(v::Vector{Union{}}; kws::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ Base.Sort ./sort.jl:770
 [3] sort(v::Vector{Union{}})
   @ Base.Sort ./sort.jl:770
 [4] cat_features(x1::NamedTuple{(), Tuple{}}, x2::NamedTuple{(), Tuple{}})
   @ GraphNeuralNetworks.GNNGraphs ~/.julia/packages/GraphNeuralNetworks/KNr8R/src/GNNGraphs/utils.jl:22
 [5] blockdiag(g1::GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}, g2::GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}})
   @ GraphNeuralNetworks.GNNGraphs ~/.julia/packages/GraphNeuralNetworks/KNr8R/src/GNNGraphs/transform.jl:177
 [6] batch(gs::Vector{GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}})
   @ GraphNeuralNetworks.GNNGraphs ~/.julia/packages/GraphNeuralNetworks/KNr8R/src/GNNGraphs/transform.jl:256
 [7] top-level scope
   @ REPL[12]:1
 [8] top-level scope
   @ ~/.julia/packages/CUDA/iDsKe/src/initialization.jl:52

Failed to compile PTX code

After updating to the latest version, GPU execution stopped working for me. I have not created a minimal reproducible example yet, but this is the error I'm facing.

Failed to compile PTX code (ptxas exited with code 255)
ptxas /tmp/jl_rh605v.ptx, line 313; error   : Instruction 'atom.cas.b16.global' requires .target sm_70 or higher
ptxas fatal   : Ptx assembly aborted due to errors
If you think this is a bug, please file an issue and attach /tmp/jl_rh605v.ptx
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:33
  [2] cufunction_compile(job::GPUCompiler.CompilerJob)
    @ CUDA ~/.julia/packages/CUDA/bki2w/src/compiler/execution.jl:399
  [3] cached_compilation(cache::Dict{UInt64, Any}, job::GPUCompiler.CompilerJob, compiler::typeof(CUDA.cufunction_compile), linker::typeof(CUDA.cufunction_link))
    @ GPUCompiler ~/.julia/packages/GPUCompiler/1Ajz2/src/cache.jl:90
  [4] cufunction(f::typeof(GraphNeuralNetworks.scatter_scalar_kernel!), tt::Type{Tuple{typeof(+), CUDA.CuDeviceVector{UInt16, 1}, Int64, CUDA.CuDeviceVector{Int64, 1}}}; name::Nothing, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ CUDA ~/.julia/packages/CUDA/bki2w/src/compiler/execution.jl:297
  [5] cufunction(f::typeof(GraphNeuralNetworks.scatter_scalar_kernel!), tt::Type{Tuple{typeof(+), CUDA.CuDeviceVector{UInt16, 1}, Int64, CUDA.CuDeviceVector{Int64, 1}}})
    @ CUDA ~/.julia/packages/CUDA/bki2w/src/compiler/execution.jl:291
  [6] macro expansion
    @ ~/.julia/packages/CUDA/bki2w/src/compiler/execution.jl:102 [inlined]
  [7] scatter!(op::Function, dst::CUDA.CuArray{UInt16, 1, CUDA.Mem.DeviceBuffer}, src::Int64, idx::CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer})
    @ GraphNeuralNetworks ~/.julia/packages/GraphNeuralNetworks/Hv1up/src/utils.jl:52
  [8] degree(g::GraphNeuralNetworks.GNNGraphs.GNNGraph{Tuple{CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, Nothing}}, T::Type{UInt16}; dir::Symbol, edge_weight::Nothing)
    @ GraphNeuralNetworks.GNNGraphs ~/.julia/packages/GraphNeuralNetworks/Hv1up/src/GNNGraphs/query.jl:212
  [9] (::GraphNeuralNetworks.GCNConv{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, typeof(NNlib.relu)})(g::GraphNeuralNetworks.GNNGraphs.GNNGraph{Tuple{CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, Nothing}}, x::CUDA.CuArray{UInt16, 2, CUDA.Mem.DeviceBuffer}, edge_weight::Nothing)
    @ GraphNeuralNetworks ~/.julia/packages/GraphNeuralNetworks/Hv1up/src/layers/conv.jl:95
 [10] (::GraphNeuralNetworks.GCNConv{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, typeof(NNlib.relu)})(g::GraphNeuralNetworks.GNNGraphs.GNNGraph{Tuple{CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, Nothing}}, x::CUDA.CuArray{UInt16, 2, CUDA.Mem.DeviceBuffer})
    @ GraphNeuralNetworks ~/.julia/packages/GraphNeuralNetworks/Hv1up/src/layers/conv.jl:80
 [11] (::GraphNeuralNetworks.GCNConv{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, typeof(NNlib.relu)})(g::GraphNeuralNetworks.GNNGraphs.GNNGraph{Tuple{CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, Nothing}})
    @ GraphNeuralNetworks ~/.julia/packages/GraphNeuralNetworks/Hv1up/src/layers/basic.jl:12
 [12] applylayer
    @ ~/.julia/packages/GraphNeuralNetworks/Hv1up/src/layers/basic.jl:121 [inlined]
 [13] applychain
    @ ~/.julia/packages/GraphNeuralNetworks/Hv1up/src/layers/basic.jl:133 [inlined]
 [14] (::GraphNeuralNetworks.GNNChain{Tuple{GraphNeuralNetworks.GCNConv{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, typeof(NNlib.relu)}, GraphNeuralNetworks.GCNConv{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, typeof(NNlib.relu)}, GraphNeuralNetworks.GCNConv{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, typeof(NNlib.relu)}, GraphNeuralNetworks.GCNConv{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, typeof(NNlib.relu)}, GraphNeuralNetworks.GCNConv{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, typeof(NNlib.relu)}, GraphNeuralNetworks.GCNConv{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, typeof(NNlib.relu)}, GraphNeuralNetworks.GCNConv{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, typeof(NNlib.relu)}, Flux.BatchNorm{typeof(NNlib.relu), CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, Float32, CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}}}})(g::GraphNeuralNetworks.GNNGraphs.GNNGraph{Tuple{CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, Nothing}})
    @ GraphNeuralNetworks ~/.julia/packages/GraphNeuralNetworks/Hv1up/src/layers/basic.jl:140

Missing functionality compared to DGL

Checklist of stuff we miss compared to Deep Graph Library.
PRs are welcome!

Conv Layers

  • GraphConv (called GCNConv here)
  • EdgeWeightNorm
  • RelGraphConv
  • TAGConv
  • GATConv
  • EdgeConv
  • SAGEConv
  • SGConv
  • APPNPConv
  • GINConv
  • GatedGraphConv
  • GMMConv (#147)
  • ChebConv
  • AGNNConv
  • NNConv
  • AtomicConv
  • CFConv
  • DotGatConv
  • TWIRLSConv
  • TWIRLSUnfoldingAndAttention
  • GCN2Conv

Dense Conv Layers

  • DenseGraphConv
  • DenseSAGEConv
  • DenseChebConv

Global Pooling Layers

  • SumPooling (GlobalPool(+) here)
  • AvgPooling (GlobalPool(mean) here)
  • MaxPooling (GlobalPool(max) here)
  • SortPooling
  • WeightAndSum
  • GlobalAttentionPooling
  • Set2Set
  • SetTransformerEncoder
  • SetTransformerDecoder

Batching and Reading Out Ops

https://docs.dgl.ai/en/0.6.x/api/python/dgl.html#batching-and-reading-out-ops

  • batch. Use Flux.batch or SparseArrays.blockdiag
  • unbatch
  • readout_nodes (called reduce_nodes here)
  • readout_edges (called reduce_edges here)
  • sum_nodes # use reduce_nodes(+, g, x)
  • sum_edges # use reduce_edges(+, g, x)
  • mean_nodes
  • mean_edges
  • max_nodes
  • max_edges
  • softmax_nodes
  • softmax_edges
  • broadcast_nodes
  • broadcast_edges
  • topk_nodes
  • topk_edges

Adjacency Related Utilities

  • khop_adj
  • laplacian_lambda_max

nn.functional

https://docs.dgl.ai/api/python/nn.functional.html

  • edge_softmax (softmax_edge_neighbors here)

optim

https://docs.dgl.ai/api/python/dgl.optim.html

  • Sparse Adam
  • Sparse AdaGrad

nn Utility Modules

  • Sequential (GNNChain here)
  • WeightBasis
  • KNNGraph
  • SegmentedKNNGraph

nn NodeEmbedding Module

  • NodeEmbedding

Sampling and Stochastic training

.....

Distributed Training

....

propagate() is 20x slower than built-in sparse matmul

With the well-known graph-matrix duality (see the GraphBLAS intro, Fig. 1), simple graph message passing kernels are equivalent to sparse matrix-vector multiplication (SpMV) or sparse matrix-matrix multiplication (SpMM). However, I notice that propagate() is more than 20x slower than the built-in A * B for an equivalent operation. I did the same test with DGL and did not observe such a drastic slowdown.

To reproduce

using SparseArrays
using GraphNeuralNetworks
using BenchmarkTools
import Random: seed!

n = 1024
seed!(0)
A = sprand(n, n, 0.01)
b = rand(1, n)
B = rand(100, n)

g = GNNGraph(
    A,
    ndata=(; b=b, B=B),
    edata=(; A=reshape(A.nzval, 1, :)),
    graph_type=:coo  # changing to :sparse has little effect on performance
)

function spmv(g)
    propagate(
        (xi, xj, e) -> e .* xj ,  # same as e_mul_xj
        g, +; xj=g.ndata.b, e=g.edata.A
        )
end

function spmm(g)
    propagate(
        (xi, xj, e) -> e .* xj ,  # same as e_mul_xj
        g, +; xj=g.ndata.B, e=g.edata.A
        )
end

isequal(spmv(g),  b * A)  # true
@btime spmv(g)  # ~5 ms
@btime b * A  # ~32 us

isequal(spmm(g), B * A)  # true
@btime spmm(g)  # ~9 ms
@btime B * A  # ~400 us

Such a performance gap can't be explained by storing the sparse matrix in COO (GNN libraries' default) vs CSR (SciPy's default) vs CSC (Julia's default). In the code below, changing the SciPy matrix format has only a minor effect on speed. Also, the speeds of DGL and SciPy are similar.

Compare with DGL and SciPy

import numpy as np
import scipy.sparse as sp
import torch

import dgl
import dgl.function as fn

n = 1024

np.random.seed(0)
A = sp.random(n, n, density=0.01, format='csc')  # changing format to `coo` or `csr` affects performance, but not much
b = np.random.rand(1, n)
B = np.random.rand(100, n)

g = dgl.from_scipy(A)
g.edata['A'] = torch.tensor(A.data[:, np.newaxis])
g.ndata['b'] = torch.tensor(b.T)
g.ndata['B'] = torch.tensor(B.T)

def spmv(g):
    with g.local_scope():
        g.update_all(fn.e_mul_u('A', 'b', 'm'), fn.sum('m', 'bA'))
        return g.ndata['bA']
    
def spmm(g):
    with g.local_scope():
        g.update_all(fn.e_mul_u('A', 'B', 'M'), fn.sum('M', 'BA'))
        return g.ndata['BA']

np.array_equal(spmv(g).numpy().T, b @ A)  # True
%timeit spmv(g)  # ~200 us
%timeit b @ A  # ~70 us

np.array_equal(spmm(g).numpy().T, B @ A)  # True
%timeit spmm(g)  # ~900 us
%timeit B @ A  # ~900 us

Effect of fusion

DGL's update_all fuses the message and reduction kernels. To mimic the two-stage propagate and see whether fusion causes the performance difference:

def spmv_twostage(g):
    with g.local_scope():
        g.apply_edges(fn.e_mul_u('A', 'b', 'm'))
        g.update_all(
            fn.copy_e('m', 'm'),
            fn.sum('m', 'bA')
        )
        return g.ndata['bA']

%timeit spmv_twostage(g)  # ~240 us; just 20% slower

The unfused version is only slightly slower; it cannot explain the 5 ms vs 200 µs performance gap.

There must be other causes of inefficiency. I'd like to figure them out and bring the performance at least close to DGL. (I use DGL a lot, but certain projects favor an all-Julia implementation, and your package seems a good option with clean syntax 🙂)

Package version

Jl:

  • GraphNeuralNetworks.jl 0.3.8
  • Julia 1.7

Py:

  • DGL 0.7.1
  • PyTorch 1.9.1

outputsize for GNNChain

This is a feature request: it'd be nice to extend the functionality of Flux.outputsize to GNNChains. I imagine this could be applied either to a WithGraph or to a GNNGraph together with a tuple of input sizes. Here's a sketch of an MWE from the docs:

using Flux, Graphs, GraphNeuralNetworks

din, d, dout = 3, 4, 2 
g = rand_graph(10, 30)
X = randn(Float32, din, 10)
inputsize = size(X) 

model = GNNChain(GCNConv(din => d),
                 BatchNorm(d),
                 x -> relu.(x),
                 GCNConv(d => d, relu),
                 Dropout(0.5),
                 Dense(d, dout))
wg = WithGraph(model, g)

@assert GraphNeuralNetworks.outputsize(model, g, inputsize) == size(model(g,X))
@assert GraphNeuralNetworks.outputsize(wg, inputsize) == size(wg(X))

GATv2Conv show method errors

The show method for the GATv2Conv layer is throwing an error in the REPL:

julia> GATv2Conv(128=>128,relu)
Error showing value of type GATv2Conv{Float32, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Flux.Zeros}, Vector{Float32}, Matrix{Float32}}:
ERROR: type GATv2Conv has no field weight_i

Should be a pretty simple fix at

out, in = size(l.weight_i)

Flux.batch Overloading for Generators

We came across an instance where the batching function was used on a generator instead of a vector. Do you think GraphNeuralNetworks could also overload the batching function for generators alongside vectors?
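
A possible interim workaround, assuming the generator can be materialized in memory, is to collect it into a vector before batching:

using Flux, GraphNeuralNetworks

gs = (rand_graph(4, 6) for _ in 1:8)  # a generator of graphs
gbatch = Flux.batch(collect(gs))      # collect to a Vector first, then batch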

Gradient of edge weights is nothing with fused e_mul_xj

#107 breaks Zygote autodiff. Zygote.gradient() returns nothing for the fused kernel, while it returns the correct gradient for the unfused one. This bug further breaks GNN training, with hard-to-understand errors like MethodError: no method matching vec(::Nothing).

To reproduce

using GraphNeuralNetworks
using SparseArrays
import Random: seed!
using Zygote

n = 32
seed!(0)
A = sprand(n, n, 0.1)
b = rand(1, n)
g = GNNGraph(A)
A_val = reshape(A.nzval, 1, :)

"""SpMV followed by a scalar loss function"""
function forward_fused(g, b, A_val)
    out = propagate(
        e_mul_xj, g, +; xj=b, e=A_val
        )
    return sum(abs2, out)
end

function forward_unfused(g, b, A_val)
    out = propagate(
        (xi, xj, e) -> e .* xj, g, +; xj=b, e=A_val
        )
    return sum(abs2, out)
end

forward_fused(g, b, vec(A_val)) == forward_unfused(g, b, A_val)  # true, forward passes agree

grad_builtin = gradient(A -> sum(abs2, b * A), A)[1];  # returns a sparse CSC matrix containing the gradient

grad_gnn1 = gradient(
    A_vals -> forward_unfused(g, b, A_vals), 
    A_val
)[1]

isequal(vec(grad_gnn1), grad_builtin.nzval)  # true, gradients agree with reference

# edge features not flattened, so the “fused” function does not actually invoke the fused kernel
grad_gnn2 = gradient(
    A_vals -> forward_fused(g, b, A_vals), 
    A_val
)[1]

isequal(vec(grad_gnn2), grad_builtin.nzval)   # true, gradients agree with reference

# passing flattened edge feature, activating fusion
grad_gnn3 = gradient(
    A_vals -> forward_fused(g, b, A_vals), 
    vec(A_val)
)[1]  # bug: returns nothing

Package version

  • GraphNeuralNetworks.jl 0.3.10 (from git master)
  • Zygote.jl 0.6.33

Implement more pooling operators

This is the list of pooling operators in PyTorch Geometric:

  • global_add_pool (GlobalPool(+) here)
  • global_mean_pool (GlobalPool(mean) here)
  • global_max_pool (GlobalPool(max) here)
  • global_sort_pool
  • GlobalAttention (GlobalAttentionPool here)
  • Set2Set
  • GraphMultisetTransformer

define a `message_and_aggregate` method

In order to avoid feature allocations on each edge, we should define a message_and_aggregate function that fuses together the compute_message and aggregate_neighbors steps.

The operations to be fused are:

s, t = edge_index(g)               # source and target node indices of each edge
xi = gather(x, t)                  # features of the target (receiver) nodes
xj = gather(x, s)                  # features of the source (sender) nodes
m = compute_message(l, xi, xj, e)  # one message per edge
scatter(aggr, m, t)                # aggregate messages onto the target nodes
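
For intuition, when the message is simply copy_xj and the aggregation is +, the fused operation is equivalent to a single sparse matrix product, which avoids materializing the per-edge messages. A minimal sketch (the function name fused_copyxj_sum is illustrative, not the package API):

using GraphNeuralNetworks

# Illustrative fusion of "copy source features" + "sum over incoming edges".
function fused_copyxj_sum(g::GNNGraph, x::AbstractMatrix)
    A = adjacency_matrix(g)  # sparse num_nodes × num_nodes adjacency
    return x * A             # column j collects the sum of x over j's in-neighbors
end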

Differences to GeometricFlux.jl?

Here's the inevitable question ;)

What are the differences (philosophical, implementation etc) between this and geometric flux?

Are you covering a smaller scope? I think graphs are a subset of geometric deep learning.

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Explainer vs GeometricFlux

Hello,

I'm an experienced Julia developer, and I am getting into GNNs. I am looking at your package and GeometricFlux, and I almost defaulted to GeometricFlux based on popularity, but it appears you have many nice features that I might prefer.

Can you provide a "simple explanation" of why this package vs GeometricFlux (and vice versa)?

Great work by the way!

Implement add_reverse_edges

One utility function we could add is add_reverse_edges(g), which adds the missing reverse edges to a graph, making it bidirected.

An option to consider is avoiding duplicated memory consumption for edge features by using views.

Related to #101
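
A naive sketch of the idea, ignoring edge features and not deduplicating edges that are already bidirected (the function name is illustrative):

using GraphNeuralNetworks

function add_reverse_edges_sketch(g::GNNGraph)
    s, t = edge_index(g)  # COO source and target vectors
    return GNNGraph(vcat(s, t), vcat(t, s), num_nodes = g.num_nodes)
end

g = rand_graph(4, 3)
gbi = add_reverse_edges_sketch(g)  # 6 edges: each original edge plus its reverse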

Failure to combine `SparseDiffTools.autoback_hesvec` and `GCNConv`

Hello! Nice work on the library; it is very usable. I'm trying to calculate the hessian-vector product of a loss function involving GNNGraph datapoints and a GNNChain model. I've been using the SparseDiffTools.jl function autoback_hesvec for this, which implements ForwardDiff.jl over Zygote.jl for the hessian-vector calculation. However, this function is failing in the GraphNeuralNetworks.jl setting. The other hessian-vector functions in SparseDiffTools.jl do work, and an analogously-constructed calculation using only Flux works.

using GraphNeuralNetworks, Flux, Graphs, ForwardDiff, Random, SparseDiffTools


function gnn_test()
    Random.seed!(1234)

    g = GNNGraph(erdos_renyi(10,  30), ndata=rand(Float32, 3, 10), gdata=rand(Float32, 2))

    m = GNNChain(GCNConv(3 => 2, tanh), GlobalPool(+))
    ps, re = Flux.destructure(m)  # primal vector and restructure function
    ts = rand(Float32, size(ps))  # tangent vector

    loss(_ps) = Flux.Losses.mse(re(_ps)(g, g.ndata.x), g.gdata.u)

    numback_hesvec(loss, ps, ts) |> println  # works
    numback_hesvec(loss, ps, ts)  |> println  # works
    numauto_hesvec(loss, ps, ts)  |> println  # works
    autoback_hesvec(loss, ps, ts) |> println  # fails
end

function flux_test()
    Random.seed!(1234)

    x = rand(Float32, 10, 3)
    y = rand(Float32, 2, 3)

    m = Chain(Dense(10, 4, tanh), Dense(4, 2))
    ps, re = Flux.destructure(m)  # primal vector and restructure function
    ts = rand(Float32, size(ps))  # tangent vector

    loss(_ps) = Flux.Losses.mse(re(_ps)(x), y)

    numback_hesvec(loss, ps, ts) |> println  # works
    numback_hesvec(loss, ps, ts)  |> println  # works
    numauto_hesvec(loss, ps, ts)  |> println  # works
    autoback_hesvec(loss, ps, ts) |> println  # works
end

The full error message:

ERROR: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1})
Closest candidates are:
  (::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat at /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/base/rounding.jl:200
  (::Type{T})(::T) where T<:Number at /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/base/boot.jl:770
  (::Type{T})(::AbstractChar) where T<:Union{AbstractChar, Number} at /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/base/char.jl:50
  ...
Stacktrace:
  [1] convert(#unused#::Type{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1})
    @ Base ./number.jl:7
  [2] setindex!(A::Vector{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1}, i1::Int64)
    @ Base ./array.jl:903
  [3] (::ChainRulesCore.ProjectTo{SparseArrays.SparseMatrixCSC, NamedTuple{(:element, :axes, :rowval, :nzranges, :colptr), Tuple{ChainRulesCore.ProjectTo{Float64, NamedTuple{(), Tuple{}}}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}, Vector{Int64}, Vector{UnitRange{Int64}}, Vector{Int64}}}})(dx::Matrix{ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1}})
    @ ChainRulesCore ~/.julia/packages/ChainRulesCore/uxrij/src/projection.jl:580
  [4] #1335
    @ ~/.julia/packages/ChainRules/3HAQW/src/rulesets/Base/arraymath.jl:37 [inlined]
  [5] unthunk
    @ ~/.julia/packages/ChainRulesCore/uxrij/src/tangent_types/thunks.jl:197 [inlined]
  [6] wrap_chainrules_output
    @ ~/.julia/packages/Zygote/FPUm3/src/compiler/chainrules.jl:104 [inlined]
  [7] map
    @ ./tuple.jl:223 [inlined]
  [8] wrap_chainrules_output
    @ ~/.julia/packages/Zygote/FPUm3/src/compiler/chainrules.jl:105 [inlined]
  [9] ZBack
    @ ~/.julia/packages/Zygote/FPUm3/src/compiler/chainrules.jl:204 [inlined]
 [10] Pullback
    @ ~/.julia/packages/GraphNeuralNetworks/HAl1C/src/msgpass.jl:189 [inlined]
 [11] (::typeof(∂(propagate)))(Δ::Matrix{ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1}})
    @ Zygote ~/.julia/packages/Zygote/FPUm3/src/compiler/interface2.jl:0
 [12] Pullback
    @ ~/.julia/packages/GraphNeuralNetworks/HAl1C/src/msgpass.jl:68 [inlined]
 [13] (::typeof(∂(#propagate#84)))(Δ::Matrix{ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1}})
    @ Zygote ~/.julia/packages/Zygote/FPUm3/src/compiler/interface2.jl:0
 [14] Pullback
    @ ~/.julia/packages/GraphNeuralNetworks/HAl1C/src/msgpass.jl:68 [inlined]
 [15] (::typeof(∂(propagate##kw)))(Δ::Matrix{ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1}})
    @ Zygote ~/.julia/packages/Zygote/FPUm3/src/compiler/interface2.jl:0
 [16] Pullback
    @ ~/.julia/packages/GraphNeuralNetworks/HAl1C/src/layers/conv.jl:103 [inlined]
 [17] (::typeof(∂(λ)))(Δ::Matrix{ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1}})
    @ Zygote ~/.julia/packages/Zygote/FPUm3/src/compiler/interface2.jl:0
 [18] Pullback
    @ ~/.julia/packages/GraphNeuralNetworks/HAl1C/src/layers/conv.jl:80 [inlined]
 [19] Pullback
    @ ~/.julia/packages/GraphNeuralNetworks/HAl1C/src/layers/basic.jl:125 [inlined]
 [20] (::typeof(∂(applylayer)))(Δ::Matrix{ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1}})
    @ Zygote ~/.julia/packages/Zygote/FPUm3/src/compiler/interface2.jl:0
 [21] Pullback
    @ ~/.julia/packages/GraphNeuralNetworks/HAl1C/src/layers/basic.jl:137 [inlined]
 [22] (::typeof(∂(applychain)))(Δ::Matrix{ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1}})
    @ Zygote ~/.julia/packages/Zygote/FPUm3/src/compiler/interface2.jl:0
 [23] Pullback
    @ ~/.julia/packages/GraphNeuralNetworks/HAl1C/src/layers/basic.jl:139 [inlined]
 [24] (::typeof(∂(λ)))(Δ::Matrix{ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1}})
    @ Zygote ~/.julia/packages/Zygote/FPUm3/src/compiler/interface2.jl:0
 [25] Pullback
    @ ~/JuliaProjects/GraphNetworkLayers/test/fwd.jl:15 [inlined]
 [26] (::typeof(∂(λ)))(Δ::ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1})
    @ Zygote ~/.julia/packages/Zygote/FPUm3/src/compiler/interface2.jl:0
 [27] (::Zygote.var"#57#58"{typeof(∂(λ))})(Δ::ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1})
    @ Zygote ~/.julia/packages/Zygote/FPUm3/src/compiler/interface.jl:41
 [28] gradient(f::Function, args::Vector{ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1}})
    @ Zygote ~/.julia/packages/Zygote/FPUm3/src/compiler/interface.jl:76
 [29] (::SparseDiffTools.var"#78#79"{var"#loss#5"{Flux.var"#66#68"{GNNChain{Tuple{GCNConv{Matrix{Float32}, Vector{Float32}, typeof(tanh)}, GlobalPool{typeof(+)}}}}, GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}})(x::Vector{ForwardDiff.Dual{ForwardDiff.Tag{DataType, Float32}, Float32, 1}})
    @ SparseDiffTools ~/.julia/packages/SparseDiffTools/9lSLn/src/differentiation/jaches_products_zygote.jl:39
 [30] autoback_hesvec(f::Function, x::Vector{Float32}, v::Vector{Float32})
    @ SparseDiffTools ~/.julia/packages/SparseDiffTools/9lSLn/src/differentiation/jaches_products_zygote.jl:41
 [31] gnn_test()
    @ Main [script location]
 [32] top-level scope
    @ REPL[8]:1
 [33] top-level scope
    @ ~/.julia/packages/CUDA/bki2w/src/initialization.jl:52

Problem with GNNChain and NNConv

The following example raises a MethodError:

using Flux
using GraphNeuralNetworks

ndata = rand(2,3)
edata = rand(2,3)

n_in=2
n_out=4

g = GNNGraph([1,1,2], [2, 3, 3], ndata=ndata, edata=edata)
edge_model = Dense(2,n_in*n_out, relu)
gnn_model = GNNChain(
    NNConv(n_in=>n_out, edge_model, relu)
)
gnn_model(g)

with error message

ERROR: MethodError: no method matching (::NNConv)(::GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}, ::Matrix{Float64})
Closest candidates are:
(::NNConv)(::GNNGraph, ::AbstractMatrix{T} where T, ::Any) at /home/oystein/JuliaProjects/GraphNeuralNetworks.jl/src/layers/conv.jl:480
(::NNConv)(::GNNGraph) at /home/oystein/JuliaProjects/GraphNeuralNetworks.jl/src/layers/conv.jl:495
(::GNNLayer)(::GNNGraph) at /home/oystein/JuliaProjects/GraphNeuralNetworks.jl/src/layers/basic.jl:12

Am I missing something here? Calling an individual layer with g is ok.
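
Based on the method signatures shown in the error above, a possible workaround is to call the layer directly with both node and edge features, reusing g, edge_model, n_in, and n_out from the example (g.edata.e assumes the default edge-feature key):

l = NNConv(n_in => n_out, edge_model, relu)
l(g, g.ndata.x, g.edata.e)  # pass node and edge features explicitly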

aggregate_neighbors() is 100x slower than equivalent sparse matrix operation

Although #106 has been solved by fusion in #108, the slowness of the unfused implementation (apply_edges + aggregate_neighbors) was not clearly understood. Realistic GNN models contain mixed calls to message functions, reduce functions, and neural network layers, so they don't always have the nice form required for #108 to apply.

Profiling with ProfileSVG.jl shows that 60% of time was spent on aggregate_neighbors:

(profile flame graph: prof_spmm)

Neighbor reduction with + is equivalent to either:

  • the SpMV e * A, where e is a row vector of ones, or
  • the column sum of the sparse matrix A

Either way turns out to be more than 100x faster than aggregate_neighbors.

Reproducible example

using SparseArrays
using GraphNeuralNetworks
import Random: seed!
using BenchmarkTools

n = 1024
seed!(0)
A = sprand(n, n, 0.01)

g = GNNGraph(
    A,
    edata=(; A=reshape(A.nzval, 1, :)),
    graph_type=:coo
)

out = aggregate_neighbors(g, +, g.edata.A)
@btime aggregate_neighbors(g, +, g.edata.A)  # ~4 ms

e = ones(1, n)
e * A == out  # true
@btime e * A  # ~20 us

sum(A, dims=1) ≈ out  # true
@btime sum(A, dims=1)  # ~10 us

This example uses only a single edge feature. Multiple edge features would correspond to a 3D sparse tensor, which is not supported by SparseArrays.jl -- TACO could be used in that case.

Package version

GINConv not working on GPU when not all nodes are connected

using GraphNeuralNetworks
using Flux

nn = GNNChain(GINConv(identity, 0))
x = GNNGraph(collect(1:6), collect(1:6), num_nodes = 6, ndata= rand(1, 6))
x2 = GNNGraph(collect(1:5), collect(1:5), num_nodes = 6, ndata= rand(1, 6))
println("CPU")
@show nn(x)
@show nn(x2)
println("GPU")
x = Flux.gpu(x)
x2 = Flux.gpu(x2)
nn = Flux.gpu(nn)
@show nn(x)
@show nn(x2)
Output:

CPU
nn(x) = GNNGraph:
    num_nodes = 6
    num_edges = 6
    ndata:
        x => (1, 6)
nn(x2) = GNNGraph:
    num_nodes = 6
    num_edges = 5
    ndata:
        x => (1, 6)
GPU
nn(x) = GNNGraph:
    num_nodes = 6
    num_edges = 6
    ndata:
        x => (1, 6)
ERROR: LoadError: DimensionMismatch("dimensions must match: a has dims (Base.OneTo(1), Base.OneTo(6)), b has dims (Base.OneTo(1), Base.OneTo(5)), mismatch at 2")
Stacktrace:
 [1] promote_shape
   @ ./indices.jl:178 [inlined]
 [2] promote_shape
   @ ./indices.jl:169 [inlined]
 [3] +(A::CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, B::CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer})
   @ Base ./arraymath.jl:38
 [4] (::GINConv{Int64})(g::GNNGraph{Tuple{CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, Nothing}}, x::CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer})
   @ GraphNeuralNetworks ~/.julia/packages/GraphNeuralNetworks/KNr8R/src/layers/conv.jl:585
 [5] (::GINConv{Int64})(g::GNNGraph{Tuple{CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, Nothing}})
   @ GraphNeuralNetworks ~/.julia/packages/GraphNeuralNetworks/KNr8R/src/layers/basic.jl:12
 [6] applylayer
   @ ~/.julia/packages/GraphNeuralNetworks/KNr8R/src/layers/basic.jl:121 [inlined]
 [7] applychain
   @ ~/.julia/packages/GraphNeuralNetworks/KNr8R/src/layers/basic.jl:133 [inlined]
 [8] (::GNNChain{Tuple{GINConv{Int64}}})(g::GNNGraph{Tuple{CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, Nothing}})
   @ GraphNeuralNetworks ~/.julia/packages/GraphNeuralNetworks/KNr8R/src/layers/basic.jl:140
 [9] top-level scope
   @ show.jl:1047
in expression starting at /home/casperp/Documents/testJulia/main.jl:16

Question about temporal graph neural networks

I was wondering whether there is a wish to also include temporal graph neural network architectures in this repo, or whether that is preferred to be a separate package?

There are PyTorch Geometric and PyTorch Geometric Temporal, which are currently separate packages if I remember correctly, but I wonder if it wouldn't be more natural to have these together in GraphNeuralNetworks.jl?

Thoughts?

conflict with CSV and GNNGraphs when running Flux.batch

The issue comes from
sort(collect(keys(x1))) == sort(collect(keys(x2))) || @error "cannot concatenate feature data with different keys"

because Julia does not know which defalg method to use for the sorting algorithm: InlineStrings (loaded by CSV) defines a defalg method that creates an ambiguity with Base's on this line.

Adding this line:

Base.Sort.defalg(x::AbstractArray{<:Missing}) = QuickSort

resolved the ambiguity and allowed Flux.batch to run.

Roadmap to merge GeometricFlux.jl and GraphNeuralNetworks.jl

I think we should work together and avoid redundant work. There's no need to compete in the same community.

Could you list the major differences of GraphNeuralNetworks.jl from GeometricFlux.jl?
I am curious about how we can redesign GeometricFlux.jl while keeping the strengths of GraphNeuralNetworks.jl.

I am thinking of migrating GraphSignals.jl to FluxML, and you could put your design there.

Include undirected graphs

Would it be a lot of work to include the possibility of having undirected graphs? Or is there a simple way to achieve this?

batching scales quadratically

GraphNeuralNetworks.batch scales quadratically in run time and memory use with the number of graphs to batch. Here is a MWE with benchmarks:

using BenchmarkTools
using GraphNeuralNetworks

g1 = rand_graph(4, 6, ndata=ones(8, 4));
g2 = rand_graph(7, 4, ndata=zeros(8, 7));
GraphNeuralNetworks.batch([g1, g2]);

for ngraphs in 2 .^ (8:10)
    gs = [rand_graph(4, 6, ndata=ones(8, 4)) for _ in 1:ngraphs]
    println("\n=======================\nBatchsize = $ngraphs graphs\n=======================\n")    
    b = @benchmark GraphNeuralNetworks.batch($gs)
    display(b)
end

=======================
Batchsize = 256 graphs
=======================

BenchmarkTools.Trial: 1532 samples with 1 evaluation.
 Range (min … max):  2.583 ms … 6.482 ms  ┊ GC (min … max):  0.00% … 54.02%
 Time  (median):     2.712 ms             ┊ GC (median):     0.00%
 Time  (mean ± σ):   3.260 ms ± 1.278 ms  ┊ GC (mean ± σ):  16.20% ± 19.62%

  ▄█▂▁
  ████▃▂▂▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▄▄▄▃ ▂
  2.58 ms        Histogram: frequency by time       6.33 ms <

 Memory estimate: 13.08 MiB, allocs estimate: 17790.

=======================
Batchsize = 512 graphs
=======================

BenchmarkTools.Trial: 510 samples with 1 evaluation.
 Range (min … max):   8.219 ms … 11.230 ms  ┊ GC (min … max):  0.00% … 21.96%
 Time  (median):     10.632 ms              ┊ GC (median):    20.63%
 Time  (mean ± σ):    9.790 ms ±  1.241 ms  ┊ GC (mean ± σ):  13.99% ± 10.61%

  ▁█                                                    ▁▂▁
  ███▆▄▂▂▁▁▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▄▆▆▄▄▆███▅▄ ▃
  8.22 ms         Histogram: frequency by time        11.1 ms <

 Memory estimate: 50.14 MiB, allocs estimate: 36053.

=======================
Batchsize = 1024 graphs
=======================

BenchmarkTools.Trial: 150 samples with 1 evaluation.
 Range (min … max):  32.179 ms … 37.269 ms  ┊ GC (min … max): 11.89% … 16.35%
 Time  (median):     32.532 ms              ┊ GC (median):    12.07%
 Time  (mean ± σ):   33.527 ms ±  1.463 ms  ┊ GC (mean ± σ):  12.63% ±  1.19%

  ▃█▇▆
  ████▅▄▅▃▃▃▃▁▁▁▁▃▃▁▁▁▁▁▃▃▃▁▃▃▁▃▄▃▆▃▅▃▄▃▃▄▅▃▃▁▁▄▁▃▁▁▃▁▁▁▃▃▃▁▃ ▃
  32.2 ms         Histogram: frequency by time          37 ms <

 Memory estimate: 196.20 MiB, allocs estimate: 73941.

I came up with a fix to the current implementation and will submit a PR soon.

Weights not included in GNNGraph made from SimpleWeightedDiGraph

Hello, just found your sweet package!
I ran into a minor issue when wrapping a SimpleWeightedDiGraph with a GNNGraph. The weights from the DiGraph are not included in the wrapped graph.

Below is an MWE of this behavior. I'm not certain if this was by design.

using Graphs, GraphNeuralNetworks, SimpleWeightedGraphs
function randGraph(graphSize::Int)
       graph = rand(graphSize, graphSize)
       foreach(enumerate(eachcol(graph))) do (idx, col)
             graph[idx, :] .= col
             graph[idx, idx] = 0
       end
       return graph
end
a = randGraph(10)
g = SimpleWeightedDiGraph(a)
b = GNNGraph(g)
b.graph

RETURNS

([1, 1, 1, 1, 1, 1, 1, 1, 1, 2  …  9, 10, 10, 10, 10, 10, 10, 10, 10, 10], [2, 3, 4, 5, 6, 7, 8, 9, 10, 1  …  10, 1, 2, 3, 4, 5, 6, 7, 8, 9], nothing)

Custom Function GPU Compatibility Issue: Indexing

Hello, I am curious if you know of a way to access the nodal information returned from a GNNChain without using indices, or at least in a way that is GPU/CUDA friendly.

Below is the function in question; the issue comes from creating the vectors v and p.

function Network.forward(nn::SimpleGNN, state)
  c = nn.common.(state)
  applyV(graph) = nn.vhead(graph, graph.ndata.x)
  resultv = applyV.(c)
  v = [resultv[ind][indDepth] for indDepth in 1:1, ind in 1:length(state)]
  applyP(graph) = nn.phead(graph)
  resultp = applyP.(c)
  p = [resultp[ind].ndata.x[indDepth] for indDepth in 1:state[1].num_nodes, ind in 1:length(state) ]
  return (p, v)
end
modelP = GNNChain(Dense(innerSize, 1),softmax)
modelV = GNNChain( GlobalPool(mean),  # aggregate node-wise features into graph-wise features
                              Dense(innerSize, 1),
                              softmax);

In this case modelP is the nn.phead function call and modelV is the nn.vhead function call.
