
dolfinx's People

Contributors

alogg, anderslogg, aslakbergersen, blechta, chrisrichardson, cianwilson, cmaurini, ettieunwin, francesco-ballarin, funsim, garth-wells, igorbaratta, ivanyashchuk, jhale, jobh, johanhake, johannesring, jorgensd, kent-and, meg-simula, michalhabera, mmorandi, mscroggs, nate-sime, nschloe, quang-ha, skavhaug, solveigm, tianyikillua, tormodlandet

dolfinx's Issues

Separate cpp.Expression and ufl.Coefficient

From @blechta

Expressions have an element associated with them, which enables any expression to be used immediately in integrals. This poses several problems:

  1. Not all expressions are intended to be used in integrals, so why should they carry an element?
  2. The attached element has no relevance outside of integrals, which is confusing.
  3. The degree kwarg being a shortcut for a Lagrange element is overly confusing.

The proposal is to separate Expression from ufl.Coefficient in the following spirit:

expression = Expression("x[0]*x[0]")
expression.eval(...)  # ok
u = interpolate(expression, some_function_space)  # ok
bc = DirichletBC(space, expression, where)  # ok
expression*dx  # fails: not a UFL expression

Expression(expression, element=...)*dx  # ok
create_pointwise_expression(expression, ...)*dx  # ok

The last two lines could be renamed to something more sensible, e.g.,

Coefficient(expression, element)*dx  # ok

See #2 for discussion about pointwise expressions.

Add support for h5py

Allow use of h5py (http://www.h5py.org/) for advanced manipulation of HDF5 files.

Maybe it can be supported in a similar way to petsc4py. It would make the DOLFIN-side wrapping of HDF5 simpler.
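
For illustration, the kind of post-processing h5py makes easy (the file name and dataset path below are assumptions, not a fixed DOLFIN layout):

import h5py

# Open an HDF5 file written by DOLFIN and inspect/read its datasets
with h5py.File("u.h5", "r") as f:
    f.visit(print)                        # list every group/dataset path
    vec = f["/VisualisationVector/0"][:]  # read one dataset as a numpy array
print(vec.shape)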

Avoid MPI all_to_all when finalising sparsity pattern

la::SparsityPattern::apply communicates off-diagonal entries in the sparsity pattern. It would be more efficient and quite straightforward to perform point-to-point communication using information from the IndexMaps stored by the SparsityPattern.
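
A rough mpi4py sketch of the communication idea only (the entry data and neighbour lists here are illustrative; in DOLFIN the owners and neighbours would come from the IndexMaps):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.rank, comm.size

# Illustrative: (row, col) entries destined for the rank that owns them
entries_for = {(rank + 1) % size: [(0, 1), (2, 3)]}
neighbours = [(rank - 1) % size]  # ranks known (via IndexMap) to send to us

# Point-to-point instead of all-to-all
reqs = [comm.isend(data, dest=dst, tag=0) for dst, data in entries_for.items()]
received = [comm.recv(source=src, tag=0) for src in neighbours]
for r in reqs:
    r.wait()
print(rank, received)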

New ffc-x interface for integrals

From @chrisrichardson

At the moment, ffc generates tabulate_tensor() code for integrals in several ufc::foo_integral classes, for cell, interior_facet, exterior_facet, vertex etc. The class also contains an enabled_coefficients() function (can we eliminate it?)

Should we use a common function signature for all integrals?
e.g.
void tabulate_tensor(double *A, const double* w, const double* coords, const int* entity_indices);

(For "macro" integrals over multiple cells, the data can be stacked for each argument.)
We can store the integrals as std::function<> or as raw C function pointers (which may be more efficient).
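
As a rough illustration of the common-signature idea (a ctypes stand-in with a dummy kernel, not FFC-generated code):

import ctypes

# The proposed common signature:
#   void tabulate_tensor(double* A, const double* w,
#                        const double* coords, const int* entity_indices)
TabulateTensor = ctypes.CFUNCTYPE(
    None,
    ctypes.POINTER(ctypes.c_double),  # A: element tensor (output)
    ctypes.POINTER(ctypes.c_double),  # w: coefficient values
    ctypes.POINTER(ctypes.c_double),  # coords: cell coordinates
    ctypes.POINTER(ctypes.c_int))     # entity_indices

@TabulateTensor
def dummy_kernel(A, w, coords, entity_indices):
    A[0] = 1.0  # fill the (one-entry) element tensor

# Kernels for cell/facet/vertex integrals can all be stored and
# dispatched through the one type:
A = (ctypes.c_double * 1)()
dummy_kernel(A, None, None, None)
print(A[0])  # 1.0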

[x] signature for cell
[x] signature for interior_facet
[x] signature for exterior_facet
[x] signature for vertex
[ ] signature for custom integral

Use LocalAssembler in Assembler

The signatures of typical LocalAssembler methods look like they would fit right in with the assembly logic.

For example:

void LocalAssembler::assemble_cell(
    Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>& A,
    UFC& ufc, const Eigen::Ref<const EigenRowArrayXXd>& coordinate_dofs,
    const mesh::Cell& cell, const mesh::MeshFunction<std::size_t>* cell_domains)

This could replace some of the assembly logic and DRY out the code.

Sort out naming in XDMFFile method names

From @blechta

It has been agreed in a discussion that the following renaming scheme is favourable:

class XDMFFile:
    read_checkpoint() -> read()
    write_checkpoint() -> write()
    write() -> write_vertex_values()

Arguably, write_vertex_values() could be removed once support for the write_checkpoint() format in ParaView is packaged and distributed widely. Users could instead perform a more explicit operation involving some kind of interpolation or projection.

See https://bitbucket.org/fenics-project/dolfinx/issues/27/sort-out-naming-in-xdmffile for comments.

Reduce time for XDMF tests

Tests in test.XDMF.py are far too slow: they perform Python loops over the entities of non-trivial meshes.

Distinguish between abstract Expression and a FE function

DOLFIN doesn't clearly distinguish between an abstract expression and a function in a finite element space.

We should have:

  • An abstract Expression class (given x, returns a value(s)), and
  • A way to put the Expression into a FE/quadrature space.

The problem leads to differences between how a coefficient is evaluated inside a form integral and how it is evaluated at some point x. It is particularly confusing on the Python side.

Support assembly of nested matrices

The new assembler should be simple and preserve symmetry. It should also be flexible in how boundary conditions can be applied.

Task list:

  • Assemble M x N block matrices, where each block is a form
  • Support assembling the block matrix into PETSc monolithic and MatNest formats (see the sketch after this list)
  • Support cell integrals
  • Support exterior facet integrals
  • Support interior facet integrals
  • Support multiple domains (different integrals)
  • Improve efficiency of boundary condition look-ups
  • Avoid exposing any UFC objects - pass function pointers if necessary
  • Add support for assembly of scalars (functionals)
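
For the storage-format part, a petsc4py sketch of the MatNest target (illustrative diagonal blocks, not blocks assembled from forms):

from petsc4py import PETSc

def block(n, value):
    M = PETSc.Mat().createAIJ([n, n])
    M.setUp()
    for i in range(n):
        M.setValue(i, i, value)
    M.assemble()
    return M

A00, A11 = block(2, 1.0), block(3, 2.0)
# MatNest keeps the blocks separate (useful for fieldsplit preconditioners);
# a monolithic Mat would instead merge them into one 5x5 AIJ matrix.
A_nest = PETSc.Mat().createNest([[A00, None], [None, A11]])
print(A_nest.getSize())  # (5, 5)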

Pointwise expressions

From @blechta

From time to time a desire appears for pointwise expressions, i.e., expressions which get evaluated in forms at quadrature points rather than interpolated into a local FE space (typically Lagrange). Absurdly, FEniCS has supported pointwise expressions for a long time through the quadrature element, see https://bitbucket.org/snippets/blechta/Rezng6.

I argue that implementation through the quadrature element is the right approach. It fits the pipeline:

  • UFL: representation, preprocessing, including degree estimation,
  • tabulate_tensor interface: no need for a special kernel interface for pointwise expressions; the expression is passed as expansion coefficients of a quadrature element; recall that we want a simple C replacement of UFC, so this is the right approach,
  • C++ DOLFIN: nothing new is needed on the C++ side; dolfin::Expression suffices,
  • Python DOLFIN: it is possible to have a pointwise expression which is both cpp.Expression and ufl.Coefficient on a quadrature element, but it is debatable how confusing and sustainable this multiple inheritance is, see #1.

Assuming the quadrature element approach is the right one, a range of implementations is available, offering more or less magic and varying difficulty of implementation. The dilemma comes from the fact that

e = PointwiseExpression("cos(x[0])")
e*u*v*dx

does not make sense, as the quadrature degree is not specified. This can be resolved in several ways, all with advantages and disadvantages.

1. Soft, trivial approach: provide documentation and test coverage for the existing functionality. With that, users and developers would know how to specify pointwise expressions:

qe = FiniteElement('Quadrature', mesh.ufl_cell(), 6, quad_scheme="default")
my_pointwise_expression = MyExpr("x[0]*x[0]", element=qe)

# Compatible quadrature specification is needed
integral = my_pointwise_expression*other_coefficients*dx(metadata={'quadrature_degree': 6})

The advantage is that there is no magic (no tricky UFL preprocessing is needed, as the element is fully specified from the beginning). The disadvantage is that some people might find it ugly and off-putting.

2. A less magic, user-friendly factory. It turns out this is easy to support:

my_pointwise_expression = create_pointwise_expression("x[0]*x[0]", quadrature_degree=6)
integral = my_pointwise_expression*other_coefficients*dx

Here the user calls the factory function with a compiled or Python expression and the total quadrature degree of the integral in which they intend to use the expression. Arguably this is already too much from a mathematical perspective, because it moves quadrature details from the measure into the expression. (A less invasive approach that still requires degree specification in the measure is possible.)

3. A very magic, super-friendly factory. It turns out it is possible to implement:

# Expression contributes to total quadrature by 2
my_pointwise_expression = create_pointwise_expression("x[0]*x[0]", degree=2)
integral = my_pointwise_expression*other_coefficients*dx  # quad degree is 2 + estimated degree of other_coefficients

# Expression does not have a degree; quadrature degree is specified in the measure
my_pointwise_expression = create_pointwise_expression("x[0]*x[0]")
integral = my_pointwise_expression*other_coefficients*dx(metadata={'quadrature_degree': 6})

Both of these can be supported simultaneously, and there is a working sketch. Unfortunately, it turns out this is very problematic if the form consists of multiple integrals (possibly with different quadrature degrees). To support this properly, a major change in UFL form preprocessing and in the form compilers would be needed (coefficient replace maps would need to be per integral rather than per form).

Approach 2 is easily possible now, but might require the same UFL preprocessing overhaul if support for mixed cells is wanted.

Remove dependency on python-pkgconfig

The Python/JIT code uses the package https://pypi.org/project/pkgconfig/ to get build info from dolfin.pc. This was helpful for getting started, but it is an extra dependency for something we could do ourselves.

It would be good to implement the limited required functionality in DOLFIN, and at the same time review the caching of the build info to avoid unnecessary calls to pkg-config, i.e. call it once when first needed and store the data.
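
A minimal sketch of the caching side, assuming we still shell out to the pkg-config binary once and memoise the result (the helper name is hypothetical):

import functools
import subprocess

@functools.lru_cache(maxsize=None)
def pkgconfig_info(package="dolfin"):
    # Called once per package; subsequent calls return the cached dict
    def query(flag):
        return subprocess.check_output(["pkg-config", flag, package],
                                       universal_newlines=True).split()
    return {"cflags": query("--cflags"),
            "libs": query("--libs"),
            "version": query("--modversion")[0]}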

Remove GenericFunction::compute_vertex_values

Now that we have vectorised versions of GenericFunction::eval, it might be better to just get the vertex coords and call the vectorised eval function.

Need to decide on Function::compute_vertex_values.

From @chrisrichardson

I'm removing the call to the generated code ufc::finite_element::interpolate_vertex_values from Function.cpp, but I guess there is a more general question: can we delete compute_vertex_values altogether?
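
A sketch of the replacement (the geometry accessor and the vectorised eval signature below are assumptions):

import numpy as np
from dolfin import *

mesh = UnitSquareMesh(MPI.comm_world, 3, 3)
mesh.geometry.coord_mapping = fem.create_coordinate_map(mesh.ufl_domain())
V = FunctionSpace(mesh, "CG", 1)
f = interpolate(Expression("x[0]", degree=1), V)

x = mesh.geometry.points            # vertex coordinates
values = np.empty((x.shape[0], 1))
f.eval(values, x)                   # vectorised eval at all vertices at once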

Support evaluate_reference_basis in TensorProduct elements

Consider the following

from dolfin import *
mesh = UnitSquareMesh(MPI.comm_world, 3, 3, cell_type=CellType.Type.quadrilateral)
mesh.geometry.coord_mapping = fem.create_coordinate_map(mesh)
V = FunctionSpace(mesh, "CG", 1)
f = interpolate(Expression("x[0]", degree=1), V)
print(f.vector().get_local())
V.set_x(f.vector(), 1.0, 0)
print(f.vector().get_local())

which yields

[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]

I've tried to track down the cause of the problem:

Many aspects of dolfinx require ufc_coordinate_mapping::compute_physical_coordinates which in turn requires ufc_finite_element::evaluate_reference_basis_ffc_element. These are populated by code automatically generated by FFC. However, in the case of tensor product elements:

ufc_finite_element::evaluate_reference_basis
ufc_finite_element::evaluate_reference_basis_derivatives
ufc_finite_element::transform_reference_basis_derivatives

are unsupported by FIAT.tensor_product.TensorProductElement(.get_coeffs) and therefore populated with

// evaluate_reference_basis_derivatives: Function is not supported/implemented.
return -1;

Currently, no error is thrown in dolfinx when -1 is returned, and the code silently yields incorrect answers.

Support periodic meshes via mesh topology/geometry

Periodic boundary conditions are currently handled via the dofmap construction. This has some drawbacks:

  • The dofmap code is complicated
  • Facet integrals on periodic boundaries are not supported
  • Mesh partitioning does not account for periodicity (probably not a big deal)

It would be simpler to support periodic meshes in which the topology naturally accounts for the periodicity.

  • Work out a format to save/load periodic meshes from File (XDMF) whilst maintaining visualisation integrity

Function::eval does not return correct result for vector spaces

Consider the following: interpolate x[0] on a scalar space and (x[0], x[1]) on a vector function space, and print the values at the point (0.2, 0.1):

from dolfin import *

coords = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
topology = [[0, 1, 2]]

mesh = Mesh(MPI.comm_self, cpp.mesh.CellType.Type.triangle, coords, topology, [], cpp.mesh.GhostMode.none)
cmap = fem.create_coordinate_map(mesh.ufl_domain())
mesh.geometry.coord_mapping = cmap

V = FunctionSpace(mesh, "CG", 1)
u = Function(V)
u.interpolate(Expression("x[0]", degree=1))
print("u: ", u([0.2, 0.1]))

Vvec = VectorFunctionSpace(mesh, "CG", 1)
uvec = Function(Vvec)
uvec.interpolate(Expression(("x[0]", "x[1]"), degree=1))
print("uvec: ", uvec([0.2, 0.1]))

The above code yields the output

u:  [0.2]
uvec:  [[0.1 0.1]]

which is incorrect: uvec should be [0.2 0.1].

Support adaptive methods

This will be a high-level description of what is needed to support adaptive methods.

Links to related, more specific issues will be collected here.

Add check for FFCX return/error codes

Now that FFCX-generated code is C (rather than C++), it does not throw C++ exceptions but returns error codes. The return codes need to be checked by DOLFIN.

A related issue is #67.

Clean up PETScKrylovSolver parameters in favour of PETScOptions.set()

There are remnants of old vanilla-dolfin-style Krylov solver parameters and settings, e.g.:

// Map from method string to PETSc (for subset of PETSc solvers)
const std::map<std::string, const KSPType> _methods = {
    {"default", ""},       {"cg", KSPCG},       {"gmres", KSPGMRES},
    {"minres", KSPMINRES}, {"tfqmr", KSPTFQMR}, {"richardson", KSPRICHARDSON},
    {"bicgstab", KSPBCGS},

I propose we remove these in favour of having the user call PETScOptions.set().
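
For example (the option names are standard PETSc; PETScOptions is the existing DOLFIN wrapper around the global options database, picked up when KSPSetFromOptions is called on the solver):

from dolfin import PETScOptions

PETScOptions.set("ksp_type", "gmres")
PETScOptions.set("pc_type", "hypre")
PETScOptions.set("ksp_rtol", 1.0e-8)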

There are some "nice to have" methods of PETScKrylovSolver which configure the underlying KSP in ways that (as far as I'm aware) cannot be set through PETScOptions, e.g. set_reuse_preconditioner().

Optimise XDMFFile::write_checkpoint for mesh sharing

The old XDMFFile::write takes into account the parameters functions_share_mesh and rewrite_mesh, but these are not implemented for write_checkpoint. They specify whether the mesh is shared across functions (if there are multiple in the file) and across timesteps.

In ParaView, if multiple functions belong to the same dataset and each has its own mesh, "(partial)" is appended to their names. Some filters and functionality are limited in that case: one cannot warp by one (partial) vector field and show another (partial) scalar field on the warped surface. (It is possible with an ugly filter workaround.)

Such an optimisation would also significantly reduce IO time and disk use.

Implement iteration over MeshTopology/Connectivity

TopologyComputation works on a Mesh and not just the MeshTopology. It would be cleaner if it worked on MeshTopology/MeshConnectivity alone.

A missing ingredient for this is iteration over a mesh topology/connectivity without needing a Mesh.

Set Dirichlet dofs to negative in dofmap

This makes it easy to detect Dirichlet dofs, and PETSc will not assemble negative index entries.

This approach should make assemblers:

  • simpler
  • faster(?)
  • able to assemble the matrix and vector separately while preserving symmetry, with a predictable '1' on the diagonal of the matrix (at present the value depends on the number of connected cells)

A downside is that it relies on the PETSc convention of ignoring entries with negative indices, demonstrated below.
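
The convention can be checked directly with petsc4py (setValues drops entries with negative row/column indices):

from petsc4py import PETSc

A = PETSc.Mat().createAIJ([4, 4])
A.setUp()
# Row -1 is silently ignored; only row 0 is inserted
A.setValues([0, -1], [0, 1], [1.0, 2.0, 3.0, 4.0])
A.assemble()
print(A.getValue(0, 1))  # 2.0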

XDMFFile does not close correctly at end of python session

Consider the following

from dolfin import *

mesh = UnitSquareMesh(MPI.comm_world, 1, 1)
mesh.geometry.coord_mapping = fem.create_coordinate_map(mesh.ufl_domain())

V = FunctionSpace(mesh, "CG", 1)

uu = Function(V)
XDMFFile("out1.xdmf").write(uu, float(100.0), XDMFFile.Encoding.HDF5)

xdmf2 = XDMFFile("out2.xdmf")
xdmf2.write(uu, float(100.0), XDMFFile.Encoding.HDF5)

Hopefully the error is reproducible. out1.xdmf reads fine in ParaView, whilst out2.xdmf has a corrupt HDF5 file.

I can fix the problem by adding XDMF::close to the python layer and explicitly calling xdmf2.close() at the end of the code.

Review extraction of sub-dofmaps for correctness and performance

There is some dubious construction/copying of member data when creating sub-dofmaps, especially with regard to the IndexMaps (from which the block size is extracted) and the ufc-to-reordered map, both of which are simply copied from the parent. Possible, simple improvements would be to:

  • Throw an error when asking for the block size of a dofmap view
  • Share more data between parent and sub-views using std::shared_ptr

Performance regression in serial mesh library

DOLFIN:

time python3 -c"from dolfin import *; UnitCubeMesh(MPI.comm_world, 100, 100, 100)"

real    0m0.867s
user    0m0.761s
sys    0m0.106s

DOLFIN-X:

time python3 -c"from dolfin import *; UnitCubeMesh(MPI.comm_world, 100, 100, 100)"

real    0m15.661s
user    0m14.440s
sys    0m1.223s

Avoid MPI_Alltoall call in mesh code

The mesh distribution code makes a number of calls to MPI_Alltoall. These should be avoided to improve scaling.

  • test MPI Put/Get implementation on HPC systems

Fix assembly of elements that need orientation information

Removing mesh.order() means that the alignment of dofs between cells is no longer given by the global vertex indices.

  • adjust code in DofMapBuilder for higher order (P3 and higher) elements
  • figure out how to support elements which need a global direction on facets (Hdiv, Hcurl)
  • possibly by using global cell/vertex indices across boundary

Be careful with std::vector<> (and Eigen types) as arguments in pybind wrappers (big data)

There are two strong reasons why std::vector<> should not be an argument in pybind wrappers (at least for big arrays).

  1. Any Python object that has __getitem__ and __len__ methods is cast element-by-element to std::vector. Since most of our Python objects have these methods, they can be used in places with a completely different meaning, creating big confusion for users in the best case and hidden bugs in the worst case.
  2. We store big data in Python as numpy.ndarray. A pybind wrapper that expects std::vector has to convert it, and this conversion is inefficient. A wrapper that instead expects numpy's py::array_t and converts to std::vector inside the binding is much faster (roughly 10x).

This relates to the subscriptable behaviour of PETScVector and the inefficiency of PETScVector::set_local() called from Python on a PETScVector.

Both effects are demonstrated by the following code.
Note: the keyword py::array::c_style is required, otherwise passing a sliced numpy array gives wrong results.

import dolfin as d
import numpy
import time

base_code = '''
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/numpy.h>
namespace py = pybind11;

#include <vector>
#include <iostream>

void numpy_to_std(std::vector<double>& vals)
{
    std::cout << "numpy_to_std called " << std::endl;
}

void std_to_std(std::vector<double>& vals)
{
    std::cout << "std_to_std called " << std::endl;
}

PYBIND11_MODULE(SIGNATURE, m)
{
    m.def("numpy_to_std", [](py::array_t<double, py::array::c_style> vals){
        // Init std::vector based on pybind array
        std::vector<double> values(vals.data(), vals.data() + vals.size());
        numpy_to_std(values);
    });
    m.def("std_to_std", &std_to_std);
}
'''

compiled = d.compile_cpp_code(base_code)

# Create big numpy data array
N = 1E+8
vec = numpy.random.rand(int(N))

# This binding is efficient
# no implicit conversion left to pybind
t0 = time.time()
compiled.numpy_to_std(vec)
print(time.time() - t0)

# This binding is inefficient
t0 = time.time()
compiled.std_to_std(vec)
print(time.time() - t0)


class BadObject(object):
    def __getitem__(self, pos):
        return 1.0

    def __len__(self):
        return 1


bo = BadObject()
# No error here, very confusing
compiled.std_to_std(bo)

Output:

numpy_to_std called 
0.29850053787231445
std_to_std called 
3.804774045944214
std_to_std called 

Support complex-valued equations

Collect here more specific sub-issues that are required to support complex-valued equations.

  • Get dolfin to compile with petsc-complex
  • Get a working linear algebra for complex valued vectors/matrices (templating?)
  • Investigate I/O and how to represent complex values in XDMF

Remove bounding box tree from Mesh

The design with the bounding box tree being a member of Mesh creates a number of issues around mutability, lack of low level control, memory, mixing data structures with algorithms, etc.

The bounding box tree should be removed as a member of Mesh to permit proper control.

Re-implement higher order geometry

From @chrisrichardson

MeshGeometry needs some way to reference quadratic, cubic, etc. geometry, accessed cell-wise. Whilst it could be represented as a Function, this would place a circular dependency on Mesh.

A simplified dofmap (providing cellwise dofs on a Vector Lagrange basis) may be one option.

  • support for Mesh constructor with Tri_6 and Tet_10 topology in parallel
  • attach correct CoordinateMapping object to Mesh and check it matches with Mesh
  • figure out the ordering for points on edges, faces, cells etc. (currently using VTK ordering as in XDMF)
    • gmsh has a complete scheme for any order Lagrange...
  • consider adding a mapping for vertices too.

Custom integrals in FEniCSx

How will custom integrals be implemented? I imagine there's a lot of coordination required with this issue.

When I say "custom integrals" I mean: quadrature rules that are computed at solver runtime on a per-cell basis. This encompasses cut-cell, multimesh, and user-supplied per-cell rules. Hopefully there's a solution that can work with all these cases.

Maybe it looks like a map from cells to quadrature rules that can be supplied either by a function that runs at assembly-time, or by the user in the python interface?

Dolfin reading valid XDMF from mesh generators

gmsh output

<?xml version="1.0" ?>
<Xdmf Version="3.0">
  <Domain>
    <Grid Name="Grid">
      <Geometry Type="XYZ">
        <DataItem DataType="Float" Dimensions="13 3" Format="HDF" Precision="8">square_p2_0.h5:/data0</DataItem>
      </Geometry>
      <Topology NumberOfElements="4" Type="Triangle_6">
        <DataItem DataType="Int" Dimensions="4 6" Format="HDF" Precision="4">square_p2_0.h5:/data1</DataItem>
      </Topology>
    </Grid>
  </Domain>
</Xdmf>

dolfin requires for input:

<?xml version="1.0" ?>
<Xdmf Version="3.0">
  <Domain>
    <Grid Name="Grid">
      <Geometry GeometryType="XYZ">
        <DataItem DataType="Float" Dimensions="13 3" Format="HDF" Precision="8">square_p2_0.h5:/data0</DataItem>
      </Geometry>
      <Topology NumberOfElements="4" TopologyType="Triangle_6">
        <DataItem DataType="Int" Dimensions="4 6" Format="HDF" Precision="4">square_p2_0.h5:/data1</DataItem>
      </Topology>
    </Grid>
  </Domain>
</Xdmf>

Both are valid XDMF (http://www.xdmf.org/index.php/XDMF_Model_and_Format).

The difference is <Geometry Type=...> vs <Geometry GeometryType=...>, and <Topology Type=...> vs <Topology TopologyType=...>.

Minor updates are needed for dolfin to interpret both; a sketch of the tolerant lookup follows.
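
A sketch of accepting either attribute spelling (DOLFIN's C++ reader uses pugixml; ElementTree stands in here for illustration):

import xml.etree.ElementTree as ET

xdmf_text = '<Xdmf><Domain><Grid><Topology Type="Triangle_6"/></Grid></Domain></Xdmf>'
topo = ET.fromstring(xdmf_text).find(".//Topology")
# Accept the new attribute name, falling back to the old one
cell_type = topo.get("TopologyType") or topo.get("Type")
print(cell_type)  # Triangle_6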

Check where 'assert' should be changed to an exception

In many places assert is used instead of throwing an exception. This is likely due to the old dolfin_error being clunky to use (taking too many arguments).

We need to review the asserts and throw exceptions where appropriate. A good example is checking the return codes of file operations, e.g. calls to HDF5 functions.

Compute cell volume using quadrature

With the advent of high-order geometry representation, it will be useful (necessary?) to introduce MeshEntity volume computation by quadrature. A sketch of the idea follows.
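
A minimal sketch for a straight-sided triangle (for curved, high-order cells, |det J| of the coordinate map would be evaluated at each quadrature point rather than once per cell):

import numpy as np

def cell_volume(verts, weights):
    # Affine map: constant Jacobian, so one |det J| per cell suffices
    J = np.column_stack([verts[1] - verts[0], verts[2] - verts[0]])
    return abs(np.linalg.det(J)) * sum(weights)

# Quadrature weights on the reference triangle sum to its area, 1/2
verts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
print(cell_volume(verts, [0.5]))  # 1.0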
