
IntervalLinearAlgebra.jl's People

Contributors

dkarrasch, jorgepz, lucaferranti, mforets, orkolorko


IntervalLinearAlgebra.jl's Issues

implement verified solver of scalar linear system

Given a square scalar linear system Ax = b, implement verified_linear_solver(A, b) (feel free to suggest other names), which gives a rigorous bound on the solution x. Use the epsilon-inflation method described in section 5.5 of Horáček's paper, which is Algorithm 10.7 in Rump's paper "Verification methods: Rigorous results using floating-point arithmetic".
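
A minimal sketch of the epsilon-inflation iteration (the function name, inflation factors and iteration limit are illustrative choices, not a final API):

using IntervalArithmetic, LinearAlgebra

function verified_linear_solve(A::AbstractMatrix{Interval{T}},
                               b::AbstractVector{Interval{T}}; maxiter=20) where T
    R = inv(mid.(A))          # approximate inverse of the midpoint matrix
    x0 = R * mid.(b)          # approximate floating-point solution
    C = I - R * A             # interval iteration matrix
    z = R * (b - A * x0)      # enclosure of the residual
    x = z
    e = eps(T)
    for _ in 1:maxiter
        y = x .* (0.9..1.1) .+ ((-e)..e)    # epsilon inflation
        x = z + C * y
        all(isinterior.(x, y)) && return x0 .+ x   # verified enclosure of the solution
    end
    return nothing            # verification failed
end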

first release

Time to start thinking about the first release. This is a meta-issue collecting things to do before the release.

DOCUMENTATION

  • #37
  • preconditioning explanation #65
  • introduction tutorial for interval linear systems #69

FEATURES

OTHER

  • move package to juliaintervals
  • install julia registrator
  • register to zenodo (zenodo enabled for this repository)

docs didn't deploy

I set up the deploy key and the secret called DOCUMENTER_KEY, but it doesn't work. I'll go through the instructions again and see if I missed something.

Oettli-Präger solver should not need a list of variables as input

Currently it is called as oettli(A, b, X, vars; tol=0.01).

The list of variables vars is used only internally to construct the separators, so the user should not need to pass it manually; the function should generate the variables itself. It should do something similar to @polyvar x[1:10] in DynamicPolynomials.jl, but probably simpler and cheaper.
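
A rough sketch of the idea, assuming the separators can be built from Symbolics.jl-style variables (the helper name _default_vars is illustrative):

using Symbolics

_default_vars(n::Int) = Symbolics.variables(:x, 1:n)   # returns [x₁, …, xₙ]

# the user-facing signature could then become oettli(A, b, X; tol=0.01), with
# vars = _default_vars(size(A, 2)) generated internally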

[enhancement]: format references with APA style

Feature description

This is small and low-priority (but also easy to do): it would be nice to standardise the references on the references page to follow a common style (e.g. APA).

Minimum working example

Additional information

Taking parametric interval linear systems seriously

Plain interval linear systems (currently the main functionality of this package) are quite useless on their own. In most (probably all) real applications you have parametric interval linear systems (PILS, like the beer 🍺), that is, systems of the form

A(p)x = b(p)

where p is a vector of intervals (a range for each parameter). Treating a PILS like a plain interval linear system gives poor results because of the dependency problem. The next big milestone of this package is to take parametric interval linear systems seriously and produce a state-of-the-art toolset for them. This would greatly increase the uniqueness and value of the package. This meta-issue collects different scenarios and references related to PILS.

Symmetric systems

  • Interval linear systems with symmetric matrices, skew-symmetric matrices and dependencies in the right hand side (this could be a good starting point to get the ball rolling)

Linear/Affine dependency on the parameters

  • I. Skalna, A Method for Outer Interval Solution of Systems of Linear Equations Depending Linearly on Interval Parameters, Reliable Computing, Volume 12, Number 2, April, 2006, pp. 107–120 #115
  • Popova, E. D. (2004). Parametric interval linear solver. Numerical Algorithms, 37(1), 345-356.
  • Skalna, I., & Hladík, M. (2017). A new method for computing a p-solution to parametric interval linear systems with affine-linear and nonlinear dependencies. BIT Numerical Mathematics, 57(4), 1109-1136.
  • Hladík, M. (2012). Enclosures for the solution set of parametric interval linear systems. International Journal of Applied Mathematics and Computer Science, 22, 561-574.
  • Popova, E., & Krämer, W. (2007). Inner and outer bounds for the solution set of parametric linear systems. Journal of Computational and Applied Mathematics, 199(2), 310-316.
  • Skalna, I., & Hladík, M. (2021). On preconditioning and solving an extended class of interval parametric linear systems. Numerical Algorithms, 87(4), 1535-1562.

Nonlinear dependency

  • Skalna, I. (2009, September). Direct method for solving parametric interval linear systems with non-affine dependencies. In International conference on parallel processing and applied mathematics (pp. 485-494). Springer, Berlin, Heidelberg.
  • Skalna, I. (2009, September). A global optimization method for solving parametric linear systems whose input data are rational functions of interval parameters. In International Conference on Parallel Processing and Applied Mathematics (pp. 475-484). Springer, Berlin, Heidelberg.

Applications

  • #102
  • Electric circuits. When solving resistive electrical circuits with e.g. nodal / mesh / modified nodal analysis, the problem is to solve a linear system that depends linearly on the resistances or conductances, hence linear PILS could be a good tool for circuit analysis with uncertainty in the parameters. This also generalizes to AC RLC-circuits: one obtains a linear system with linear dependency on the impedances or admittances, which are complex numbers, but a general complex linear system can be rewritten as two linear PILS, hence the final system would still be a linear PILS.
  • Complex ILS. A complex interval linear system can be rewritten as a real ILS with linear dependency (see the sketch after this list).
  • Least squares. The linear least-squares solution of an overdetermined ILS can be found by solving a symmetric ILS.
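
A sketch of the complex-to-real rewriting mentioned in the complex ILS bullet: for (Ar + i·Ai)(xr + i·xi) = br + i·bi one gets the 2n×2n real block system [Ar -Ai; Ai Ar][xr; xi] = [br; bi]; the fact that Ar and Ai each appear twice in the block matrix is exactly the linear dependency that makes this a PILS rather than a plain ILS. The function name below is illustrative.

using IntervalArithmetic

function realify(A::AbstractMatrix{Complex{Interval{T}}},
                 b::AbstractVector{Complex{Interval{T}}}) where T
    Ar, Ai = real.(A), imag.(A)
    Abig = [Ar -Ai; Ai Ar]        # Ar and Ai repeated: linear parameter dependency
    bbig = [real.(b); imag.(b)]
    return Abig, bbig
end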

Data structure / interface

Let us first focus on symmetric and linear PILS. In a linear PILS we have

A(p) = A0 + A1*p1 + A2*p2 + .... + An*pn

cc @mforets @dpsanders @schillic @jorgepz

is the current CI overkill

Currently the CI is testing on all main OSes (Windows, Linux, macOS) and on both 64- and 32-bit architectures (no 32-bit for macOS), for the latest stable and nightly, giving in total 3*2*2 - 1 = 11 checks plus the documentation.

I wonder whether this is necessary or whether it is overkill. Maybe test just the 3 OSes on 64 bits (latest stable and nightly)?

cc @mforets @dpsanders

`list_orthants` should return an iterator

Feature description

The function list_orthants at the moment returns a vector of vectors, hence it allocates and is not very efficient. It should return an iterator, similar to what DiagDirection in LazySets.jl does (cc @mforets ). Since LazySets.jl is an optional dependency used only in LinearOettliPrager, it would be good to have a more efficient version of list_orthants in IntervalLinearAlgebra.jl.

Minimum working example

I'm not sure what would be a smart way to go, maybe define

struct Orthants
n::Int
end

and implement the iterator interface for it; a rough sketch is given below
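
A rough sketch of the iterator interface for the Orthants struct above (the bit-encoding of the sign vectors is just one possible choice, not a final design):

Base.length(O::Orthants) = 2^O.n
Base.eltype(::Type{Orthants}) = Vector{Int}

# encode each orthant by an integer 0 … 2^n - 1; bit i gives the sign of component i
function Base.iterate(O::Orthants, state=0)
    state >= 2^O.n && return nothing
    signs = [(state >> (i - 1)) & 1 == 1 ? -1 : 1 for i in 1:O.n]
    return signs, state + 1
end

With this, collect(Orthants(2)) gives [[1, 1], [-1, 1], [1, -1], [-1, -1]], but iterating lazily avoids allocating the whole vector of vectors up front.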

Additional information

Add more benchmark models

The task is to extend perf/benchmarks.jl with more Ax = b instances. Maybe check Horacek's thesis.

think of an interface / data structures to handle linear PILS

Let us first focus on symmetric and linear PILS. In a linear PILS we have

A(p) = A0 + A1*p1 + A2*p2 + ... + An*pn

A few alternatives off the top of my head

Alternative 1

struct IntervalAffineArray{T, MT <: AbstractVecOrMat{T}}
  coeffs::Vector{MT}
  params::Vector{Interval{T}}
end

Alternative 2

using Symbolics

struct IntervalParametricArray{T, MT <: AbstractVecOrMat{Expr}}
  A::MT
  params::Vector{Interval{T}}
end

I guess this would generalize to more complex expressions, but it would require analyzing the expressions in the matrix to figure out which case we are in, i.e. for the linear case we would need a function (a rough sketch follows the examples below)

is_affine_multivariate_polynomial(ex::Expr)::Bool

e.g.

is_affine_multivariate_polynomial(x^2 + y) == false
is_affine_multivariate_polynomial(x + y + z + 1) == true
is_affine_multivariate_polynomial(x*y + x + y) == false
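
A rough sketch of such a check working directly on plain Expr (a real implementation would more likely operate on Symbolics/ModelingToolkit expressions; the helper names are illustrative and only :call expressions are handled):

is_affine(x::Number) = true
is_affine(x::Symbol) = true

contains_var(x::Number) = false
contains_var(x::Symbol) = true
contains_var(ex::Expr) = any(contains_var, ex.args[2:end])

function is_affine(ex::Expr)
    ex.head === :call || return false
    op, args = ex.args[1], ex.args[2:end]
    op in (:+, :-) && return all(is_affine, args)
    # a product is affine only if at most one factor contains a variable
    op === :* && return count(contains_var, args) <= 1 && all(is_affine, args)
    return false
end

is_affine(:(x^2 + y))        # false
is_affine(:(x + y + z + 1))  # true
is_affine(:(x*y + x + y))    # false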

Maybe, if we are planning to focus on linear PILS for now, it could be good to go for option 1 (unless that is worse than option 2 anyway).

in both cases, the solve interface could be

AbstractParametricIntervalLinearSolver <: AbstractIntervalLinearSolver

solve(Ap, bp, method::AbstractParametricIntervalLinearSolver) = ....

[enhancement]: spectral decomposition of interval matrices

Feature description

The approach to compute the spectral decomposition of interval matrices described here could be interesting to implement; it would also be interesting to study how large intervals it can handle.

Minimum working example

  • overload eigvals and eigvecs

eigenvalues of interval matrices

Feature description

Implement the algorithms described in this paper to find a bound on the eigenvalue set of interval matrices.

Minimum working example

I'm thinking of an interface like:

function eigvals(A::AbstractMatrix{IntervalOrComplexInterval}, method=someDefaultMethod)
  .....
end

function eigvals(A::SymmetricMatrix{IntervalOrComplexInterval}, method=someDefaultMethodForSymmetricMatrices)
....
end

For a start, the methods could be:

  • Rohn method for symmetric matrices
  • Hertz method for symmetric matrices (exponential complexity, maybe later; would probably benefit from #76)
  • Rohn method for general real matrices
  • Hladík method for general complex matrices

In a first version, it could be non-rigorous (solve real symmetric eigenvalue problems with eigen). Later, it could have rigorous and non-rigorous versions (after #68 is merged).

Additional information

Support Gaussian elimination for static matrices

Feature description

At the moment Gaussian elimination supports only mutable arrays (because it calls rref). It would be nice to also support static arrays; probably the static matrix should be converted to a mutable one to use rref (see the sketch after the example below).

Minimum working example

A = @SMatrix [2..4 -1..2;-2..1 2..4]
b = @SVector [-2..2, -2..2]
solve(A, b, GaussianElimination)
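
A possible sketch of the conversion approach (the kernel name _gauss_elim is hypothetical, standing in for whatever rref-based routine the package uses internally):

using StaticArrays

function _gauss_elim_static(A::SMatrix{N,N,T}, b::SVector{N,T}) where {N,T}
    Am, bm = Matrix(A), Vector(b)   # rref mutates, so work on mutable copies
    x = _gauss_elim(Am, bm)         # hypothetical mutable-array kernel
    return SVector{N,T}(x)          # hand back a static result
end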

Additional information

Custom printing for algorithm structures

It's a minor thing, but adding custom printing to the algorithm structures makes it easy to know what the parameters represent. Example here.

Note also that the algorithm parameters should be mentioned in their respective docstrings.

? Jacobi

search: Jacobi

Solves the linear system using Jacobi method. See section 5.7.4 of [1](page 52)

It is not clear what [1] is, but we can add a hyperlink to the reference once the docs are set up.
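
A hedged sketch of the custom printing idea for one solver; the field names max_iterations and atol are assumptions for illustration, not necessarily the actual fields:

function Base.show(io::IO, ::MIME"text/plain", J::Jacobi)
    print(io, "Jacobi linear solver\n",
              "  max_iterations: ", J.max_iterations, "\n",  # assumed field name
              "  atol: ", J.atol)                            # assumed field name
end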

[feature request]: FEM minimal example problem/test

Feature description

A minimal working example of the package functions applied to a linear static FEM problem. I offer myself as a contributor!

  • add a fixed-parameter truss example
  • add an interval-parameter truss example

[enhancement]: Ship a correctly rounded threaded OpenBLAS as an Artifact

Feature description

I think it would be a good idea to ship a version of OpenBLAS with the CONSISTENT_FPCSR=1 flag enabled together with the library as an Artifact, or compile during installation.

The main reason is that the system (or Julia) OpenBLAS distribution may not have this flag enabled.
While Julia may be started with only 1 thread, unless explicitly stated otherwise, OpenBLAS may run with multiple threads enabled and have a different rounding mode on each thread.

Currently, a fix that allows consistent rounding is to start Julia with

OPENBLAS_NUM_THREADS=1

but this affects performance.

See
Julia Threads + BLAS Threads
Using directed rounding in Octave/Matlab

reexport IntervalArithmetic.jl

At least for the time being, should we reexport IntervalArithmetic.jl? This way it would be enough to do

julia> using IntervalLinearAlgebra

instead of

julia> using IntervalLinearAlgebra, IntervalArithmetic
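
A minimal sketch of how this could look in src/IntervalLinearAlgebra.jl, e.g. with Reexport.jl:

module IntervalLinearAlgebra

using Reexport
@reexport using IntervalArithmetic

# ... rest of the package ...

end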

[enhancement]: Add hertz method for eigenvalues of symmetric interval matrices

Feature description

Add the Hertz method to compute the exact hull of the eigenvalues of symmetric interval matrices. It has exponential complexity; the alternative is the Rohn method (currently used), which is faster but can return a strictly larger enclosure of the eigenvalues. A non-rigorous sketch of the Hertz idea follows the example below.

Minimum working example

struct Hertz end
struct Rohn end

function eigenbox(A::Symmetric, ::Type{Hertz}) end
function eigenbox(A::Symmetric, ::Type{Rohn}) end

function eigenbox(A, method)
  # construct symmetric interval eigenvalue problem
  eigenbox(As, method)
end
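
A non-rigorous sketch of the Hertz idea, assuming the standard characterization (max eigenvalue over the vertex matrices Ac + Dz*AΔ*Dz, min over Ac - Dz*AΔ*Dz); it solves floating-point symmetric eigenvalue problems with eigmax/eigmin and is exponential in the dimension:

using IntervalArithmetic, LinearAlgebra

function hertz_eigenbounds(A::AbstractMatrix{Interval{T}}) where T
    Ac, Ad = mid.(A), radius.(A)   # midpoint and radius matrices
    n = size(A, 1)
    λlo, λhi = Inf, -Inf
    for k in 0:(2^n - 1)
        z = [(k >> (i - 1)) & 1 == 1 ? -1.0 : 1.0 for i in 1:n]  # sign vector
        Dz = Diagonal(z)
        λhi = max(λhi, eigmax(Symmetric(Ac + Dz * Ad * Dz)))
        λlo = min(λlo, eigmin(Symmetric(Ac - Dz * Ad * Dz)))
    end
    return λlo..λhi
end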

Additional information

solvers types names

The current type system for the solvers (including the changes in the latest PRs)

  • LinearSolver

    • DirectSolver
      • GaussElimination
      • HansenBliekRohn
      • LinearOettliPrager
    • IterativeSolver
      • GaussSeidel
      • Jacobi
      • Krawczyk
    • OettliPrager (solves nonlinear equalities using ICP.jl)

I think there's room for improvement; here's a proposal:

  • AbstractIntervalLinearSolver

    • AbstractDirectIntervalLinearSolver
      • GaussElimination
      • HansenBliekRohn
      • LinearOettliPrager
    • AbstractIterativeIntervalLinearSolver
      • GaussSeidel
      • Jacobi
      • LinearKrawczyk (to distinguish from Krawczyk exported in IntervalRootFinding.jl)
      • NonLinearOettliPrager

What do you think? Feel free to comment with other proposals or suggestions (the proposal is also written out as code below).
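
Written out as code, the proposed hierarchy would look roughly like this (concrete structs shown without their fields):

abstract type AbstractIntervalLinearSolver end
abstract type AbstractDirectIntervalLinearSolver <: AbstractIntervalLinearSolver end
abstract type AbstractIterativeIntervalLinearSolver <: AbstractIntervalLinearSolver end

struct GaussElimination <: AbstractDirectIntervalLinearSolver end
struct HansenBliekRohn <: AbstractDirectIntervalLinearSolver end
struct LinearOettliPrager <: AbstractDirectIntervalLinearSolver end

struct GaussSeidel <: AbstractIterativeIntervalLinearSolver end
struct Jacobi <: AbstractIterativeIntervalLinearSolver end
struct LinearKrawczyk <: AbstractIterativeIntervalLinearSolver end
struct NonLinearOettliPrager <: AbstractIterativeIntervalLinearSolver end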

interval*floating point falls back to generic algorithm in LinearAlgebra.jl

Bug description

Multiplying an interval matrix with a floating point matrix uses the generic method in LinearAlgebra.jl instead of the Rump multiplication in the package.

Minimum (non-)working example

julia> A = [1..2 3..4;5..6 7..8]
2×2 Matrix{Interval{Float64}}:
 [1, 2]  [3, 4]
 [5, 6]  [7, 8]

julia> B = [1 2;3 4]
2×2 Matrix{Int64}:
 1  2
 3  4

julia> @which A*B
*(A::AbstractMatrix{T} where T, B::AbstractMatrix{T} where T) in LinearAlgebra at C:\Users\lucaa\AppData\Local\Programs\Julia-1.6.1\share\julia\stdlib\v1.6\LinearAlgebra\src\matmul.jl:151

Expected behavior

It should dispatch to the method in IntervalLinearAlgebra.jl. It should be fixable by just adding

@eval *(A::AbstractMatrix{Interval{T}} where T, B::AbstractMatrix{T} where T) =
        *($type, A, B)

@eval *(A::AbstractMatrix{T} where T, B::AbstractMatrix{Interval{T}} where T) =
        *($type, A, B)

in the definition of set_multiplication_mode

Version info

  • IntervalLinearAlgebra.jl version: main-branch
  • System information:
Julia Version 1.6.1
Commit 6aaedecc44 (2021-04-23 05:59 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.1 (ORCJIT, skylake)
Environment:
  JULIA_EDITOR = code.cmd -g
  JULIA_NUM_THREADS =

Related issues

Additional information

Add any other useful information

Difference between static / regular arrays

julia> A = @SMatrix [2..4 -2..1; -1..2 2..4]
julia> b = @SVector [-2..2, -2..2];

julia> Am = Matrix(A)
julia> bm = Vector(b);

julia> solve(A, b, Jacobi())
solve(A, b, Jacobi()) = Interval{Float64}[[-14.0001, 14.0001], [-14.0001, 14.0001]]

julia> solve(Am, bm, Jacobi())
solve(Am, bm, Jacobi()) = Interval{Float64}[[-45.7501, 45.7501], [-45.0795, 45.0795]]

restructure tests

Instead of having all tests in one file, they should be split over several files. Ideally, the structure of test/ should mirror the structure of src/.
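
For example, runtests.jl could just include one file per source file (the file names below are only illustrative):

using Test, IntervalLinearAlgebra

@testset "IntervalLinearAlgebra.jl" begin
    include("test_classify.jl")
    include("test_multiplication.jl")
    include("test_solvers.jl")
    include("test_eigenvalues.jl")
end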

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

generic solve interface

Now that we have different preconditioning mechanisms and solvers, it's time to polish the generic solve interface

  • Extend CommonSolve.jl #42
  • At the moment the solution using Oettli-Präger has its own standalone solver oettli; I guess that should also be a solver like the others #41
  • implement generic interface with different methods that decide the preconditioning based on the matrix type and the algorithms, e.g. #42
function _default_precondition(A, method::AbstractSolver)
    # ....
end

# specific default preconditioning methods
function _default_precondition(A, method::GaussElimination)
    if is_sdd_matrix(A) || is_M_matrix(A)
        return NoPrecondition()
    else
        return InverseMidpoint()
    end
end

function _default_solver()
    GaussElimination()
end

function solve(A, b,
               solver::AbstractSolver=_default_solver(),
               precondition::AbstractPrecondition=_default_precondition(A, solver))
   
    # precondition
    Ap, bp = precondition(A, b)

    # compute solution
    return solver(Ap, bp)
end

[enhancement]: Bypassing uncomputability issues

Feature description

I'm new to interval computing. I'm a PhD student at Imperial College London. I have a suggestion, and I'm wondering how you handle this.

Certain operations on matrices are uncomputable. For instance, eigendecomposition of even "nice" matrices like symmetric matrices is not computable. What "uncomputable" means in practice is that "forwards numerical stability" is not attainable. The notion of "backwards numerical stability" may still be attainable and can suffice for certain use cases. Unfortunately, interval arithmetic in the naive sense cannot "handle" backwards stability. But this limitation can be bypassed.

My suggestion concerns eigendecomposition. The API I propose has three functions:

  • eigendecomp : Mat(Complex) -> (Mat(Complex), DiagMat(Complex)),
  • inv_eigendecomp : (Mat(Complex), DiagMat(Complex)) -> Mat(Complex)
  • lift : (Complex -> Complex) -> ((Mat(Complex), DiagMat(Complex)) -> (Mat(Complex), DiagMat(Complex))).

eigendecomp(M) should produce a pair of matrices (P, D) such that the matrix P is exact (and not interval valued!!) and D is interval-valued. P and D should be chosen so that P D P^-1 contains M as snugly as possible. The actual eigenvalues and eigenvectors of M don't matter, because the eigenvectors of M in particular are not computable.

I also suggest that the purpose of eigendecomposition is to lift functions of type Complex -> Complex to complex matrices. I suspect that the eigenvalues and eigenvectors are perhaps not entirely relevant, in practice.
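
The proposal written out as Julia signatures (a sketch only; nothing here exists in the package, and inv_eigendecomp / lift below are just the obvious compositions):

using LinearAlgebra: Diagonal, diag

# eigendecomp(M) -> (P, D): P exact floating point, D interval-valued,
# chosen so that P * D * inv(P) encloses M
eigendecomp(M::AbstractMatrix{<:Complex}) = error("not implemented yet")

# recombine the factors
inv_eigendecomp(P, D) = P * D * inv(P)

# lift a scalar function f : ℂ → ℂ to act on the decomposition via the diagonal part
lift(f) = (P, D) -> (P, Diagonal(f.(diag(D))))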

[bug]: issues with complex interval matrices multiplication

Bug description

At the moment, multiplying complex interval matrices falls back to the "default" multiplication in LinearAlgebra.jl instead of using the Rump fast multiplication, which should be added for complex interval matrices.

Minimum (non-)working example

julia> typeof(A)
Matrix{Complex{Interval{Float64}}} (alias for Array{Complex{Interval{Float64}}, 2})

julia> @which A * A
*(A::AbstractMatrix{T} where T, B::AbstractMatrix{T} where T) in LinearAlgebra at C:\Users\lucaa\AppData\Local\Programs\Julia-1.6.1\share\julia\stdlib\v1.6\LinearAlgebra\src\matmul.jl:151

julia> typeof(P)
Matrix{Interval{Float64}} (alias for Array{Interval{Float64}, 2})

julia> typeof(D)
Diagonal{ComplexF64, Vector{ComplexF64}}

julia> P * D
ERROR: MethodError: no method matching *(::IntervalLinearAlgebra.MultiplicationType{:fast}, ::Matrix{Interval{Float64}})
Closest candidates are:
  *(::IntervalLinearAlgebra.MultiplicationType{:fast}, ::AbstractArray{Interval{T}, 2}, ::AbstractArray{Interval{T}, 2}) where T<:Real at c:\Users\lucaa\projects\IntervalLinearAlgebra\src\multiplication.jl:46
  *(::IntervalLinearAlgebra.MultiplicationType{:fast}, ::AbstractArray{Interval{T}, 2}, ::AbstractMatrix{T}) where T<:Real at c:\Users\lucaa\projects\IntervalLinearAlgebra\src\multiplication.jl:101
  *(::IntervalLinearAlgebra.MultiplicationType{:fast}, ::AbstractMatrix{T}, ::AbstractArray{Interval{T}, 2}) where T<:Real at c:\Users\lucaa\projects\IntervalLinearAlgebra\src\multiplication.jl:76
  ...
Stacktrace:
 [1] *(::IntervalLinearAlgebra.MultiplicationType{:fast}, ::Matrix{Interval{Float64}}, ::Diagonal{ComplexF64, Vector{ComplexF64}})
   @ Base .\operators.jl:560
 [2] *(A::Matrix{Interval{Float64}}, B::Diagonal{ComplexF64, Vector{ComplexF64}})
   @ IntervalLinearAlgebra c:\Users\lucaa\projects\IntervalLinearAlgebra\src\multiplication.jl:37
 [3] top-level scope
   @ REPL[86]:1

Expected behavior

It should use the multiplication algorithm defined in the package.
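
A hedged sketch of how the complex interval case could reuse the real fast multiplication (the function name is illustrative, and this covers only the complex-interval times complex-interval case, not the mixed P * D case from the example above):

using IntervalArithmetic

function _complex_mul(A::AbstractMatrix{Complex{Interval{T}}},
                      B::AbstractMatrix{Complex{Interval{T}}}) where T
    Ar, Ai = real.(A), imag.(A)
    Br, Bi = real.(B), imag.(B)
    # each of the four real interval products below can go through the fast (Rump) routine
    return complex.(Ar * Br - Ai * Bi, Ar * Bi + Ai * Br)
end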

Related issues

Additional information

I am starting to think I should define an IntervalMatrix type here and define the operations on it; this would solve this issue, #72, and other possible issues I haven't noticed yet.

implement different preconditioning methods

For example, sometimes preconditioning by Diagonal(Ac)^-1 gives better results than Ac^-1. It might be good to offer different preconditioning mechanisms.

I don't know whether there are some criteria or heuristic techniques to choose the preconditioning method for a given system.
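
A sketch of the two preconditioners mentioned above (function names are illustrative):

using IntervalArithmetic, LinearAlgebra

# multiply the system by the inverse of the midpoint matrix Ac
function precondition_inverse_midpoint(A, b)
    R = inv(mid.(A))
    return R * A, R * b
end

# multiply the system by the inverse of the diagonal of the midpoint matrix
function precondition_inverse_diagonal(A, b)
    R = inv(Diagonal(mid.(A)))
    return R * A, R * b
end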

[bug] generation of documentation freezes

Bug description

The documentation generation process freezes on Ubuntu and on Arch. The build folder is not filled with the files; only image files are copied to the assets and applications folders.

Minimum (non-)working example

(@v1.7) pkg> activate .
  Activating project at `~/work/IntervalLinearAlgebra.jl`

julia> include("docs/make.jl")
[ Info: generating markdown page from `~/work/IntervalLinearAlgebra.jl/docs/literate/applications/FEM_example.jl`
[ Info: writing result to `~/work/IntervalLinearAlgebra.jl/docs/src/applications/FEM_example.md`
[ Info: SetupBuildDirectory: setting up build directory.
[ Info: Doctest: running doctests.
[ Info: ExpandTemplates: expanding markdown templates.

Expected behavior

Documentation HTML files should be generated in the build folder.

Version info

(IntervalLinearAlgebra) pkg> st
     Project IntervalLinearAlgebra v0.1.4
      Status `~/work/IntervalLinearAlgebra.jl/Project.toml`
  [38540f10] CommonSolve v0.2.0
  [d1acc4aa] IntervalArithmetic v0.20.3
  [189a3867] Reexport v1.2.2
  [ae029012] Requires v1.3.0
  [90137ffa] StaticArrays v1.3.3
  [37e2e46d] LinearAlgebra
julia> versioninfo()
Julia Version 1.7.0
Commit 3bf9d17731 (2021-11-30 12:12 UTC)
Platform Info:
 OS: Linux (x86_64-pc-linux-gnu)
 CPU: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz

[enhancement]: inv and det (needed by IntervalRootFinding)

I am currently procrastinating on my thesis by trying to update IntervalRootFinding.jl to the newest changes in IntervalArithmetic.

It appears that some basic LinearAlgebra operations do not work.

In particular, det and inv of interval matrices use forbidden operations (isfinite and <). Would this package be a good place to have those? IntervalRootFinding would then depend on IntervalLinearAlgebra (which would make sense; we already have our own version of Gaussian elimination for some reason).

@lucaferranti @OlivierHnt

[enhancement]: A is not a square matrix

Feature description

Is there a way to solve the case where $A \in \mathbb{R}^{m \times n}$, $m > n$, is not a square matrix (the number of rows is greater than the number of columns)? A possible approach is sketched after the example below.

Minimum working example

using IntervalLinearAlgebra, LazySets, Plots
A = [2..4 -1..1;-1..1 2..4; -0.5..0.5 1..2]
b = [-2..2; -1..1; -0.1..0.1]
Xenclose = solve(A, b)
polytopes = solve(A, b, LinearOettliPrager())
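
One possible (hedged) approach, following the least-squares idea from the PILS issue above: form the symmetric normal equations AᵀA x = Aᵀb and solve that square system. Note that doing this naively with interval arithmetic ignores the dependencies between the entries of AᵀA and Aᵀb, so the enclosure can be quite pessimistic; a parametric (symmetric PILS) formulation would be tighter.

using IntervalLinearAlgebra

A = [2..4 -1..1; -1..1 2..4; -0.5..0.5 1..2]
b = [-2..2; -1..1; -0.1..0.1]

At = Matrix(A')     # materialize the transpose
An = At * A         # 2×2 square (symmetric) interval matrix
bn = At * b
x_enclose = solve(An, bn)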

Additional information

Is there any relevant literature to refer to?

Standardize docstrings

At the moment each docstring has its own style.

  • Follow this format
    • utils docstrings #43
    • classify docstrings #43
    • hull docstrings #48
    • oettli docstrings #41
    • precondition docstrings #48
    • solve docstrings #49
    • rref docstrings #43
  • Move references.md under docs and use @refs in docstrings #40
  • Fix aesthetic issues in the documentation (e.g. the !!! note on the homepage is not rendered correctly) #50
  • Since the package isn't released yet, comment out docs stable badge #40
  • Restructure API page in documentation #50

[bug]: don't use subset to check if interval vector is in the interior of the other

Bug description

Verified floating point algorithms (such as epsilon inflation) need to check that one interval vector is in the interior of another. Currently, this is done with all(x .⊂ y), but that only checks that it is a proper subset.

Minimum (non-)working example

Expected behavior

use all(isinterior.(x, y)) instead.

Version info

  • IntervalLinearAlgebra.jl version: 0.1.1
  • System information:
Julia Version 1.6.1
Commit 6aaedecc44 (2021-04-23 05:59 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.1 (ORCJIT, skylake)
Environment:
  JULIA_EDITOR = code.cmd -g

Related issues

See related issue in IntervalArithmetic.jl here

Additional information

Add any other useful information
