
.NET Performance

(Build status badges for the main, release/7.0, and release/6.0 branches: public and internal CI.)

This repo contains benchmarks used for testing the performance of all .NET Runtimes: .NET Core, Full .NET Framework, Mono, and NativeAOT.

Finding these benchmarks in a separate repository might be surprising. Performance in a given scenario may be impacted by changes in seemingly unrelated components. Using this central repository ensures that measurements are made in comparable ways across all .NET runtimes and repos. This consistency lets engineers make progress and ensures the customer scenarios are protected.

Documentation

Contributing to Repository

This project has adopted the code of conduct defined by the Contributor Covenant to clarify expected behavior in our community. For more information, see the .NET Foundation Code of Conduct.

performance's People

Contributors

adamsitnik, agocke, billwert, caaavik-msft, carlossanlop, cincuranet, cshung, dakersnar, danmoseley, dependabot[bot], dotnet-maestro[bot], drewscoggins, eiriktsarpalis, ivdiazsa, jorive, kotlarmilos, kunalspathak, loopedbard3, lxiamail, michellemcdaniel, mrsharm, nategraf, ooooolivia, radical, stephentoub, steveharter, tannergooding, tohron, vsadov, wfurt


performance's Issues

Support for SupplementalTestData

For some benchmarks we use test data that is not included in the repository because of its size (e.g. a 50 MB text file). We store these files in https://github.com/dotnet/corefx-testdata and include them via a SupplementalTestData directive in our benchmark csproj file: https://github.com/dotnet/corefx/blob/master/src/System.Text.RegularExpressions/tests/Performance/System.Text.RegularExpressions.Performance.Tests.csproj#L20

We should support these directives here in the performance repository and make sure that they are only included when needed. This is another example of why a csproj per source assembly is desirable.
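For illustration, such a directive is an MSBuild item in the benchmark csproj. The item name comes from the linked corefx project; the include path below is a made-up placeholder, not the real layout:

```xml
<!-- Hypothetical example path; only the SupplementalTestData item name
     comes from the linked corefx csproj. -->
<ItemGroup>
  <SupplementalTestData Include="path\to\corefx-testdata\LargeTestFile.txt" />
</ItemGroup>
```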

cc @danmosemsft @adamsitnik

Port and cleanup Perf.TypeDescriptorTests

  • Port
  • compare the results for every type used as an argument, remove the types with similar results
  • move the unit testing part to CoreFX unit tests project
        [InlineData(typeof(bool), typeof(BooleanConverter))]
        [InlineData(typeof(byte), typeof(ByteConverter))]
        [InlineData(typeof(SByte), typeof(SByteConverter))]
        [InlineData(typeof(char), typeof(CharConverter))]
        [InlineData(typeof(double), typeof(DoubleConverter))]
        [InlineData(typeof(string), typeof(StringConverter))]
        [InlineData(typeof(short), typeof(Int16Converter))]
        [InlineData(typeof(int), typeof(Int32Converter))]
        [InlineData(typeof(long), typeof(Int64Converter))]
        [InlineData(typeof(float), typeof(SingleConverter))]
        [InlineData(typeof(UInt16), typeof(UInt16Converter))]
        [InlineData(typeof(UInt32), typeof(UInt32Converter))]
        [InlineData(typeof(UInt64), typeof(UInt64Converter))]
        [InlineData(typeof(object), typeof(TypeConverter))]
        [InlineData(typeof(void), typeof(TypeConverter))]
        [InlineData(typeof(DateTime), typeof(DateTimeConverter))]
        [InlineData(typeof(DateTimeOffset), typeof(DateTimeOffsetConverter))]
        [InlineData(typeof(Decimal), typeof(DecimalConverter))]
        [InlineData(typeof(TimeSpan), typeof(TimeSpanConverter))]
        [InlineData(typeof(Guid), typeof(GuidConverter))]
        [InlineData(typeof(Array), typeof(ArrayConverter))]
        [InlineData(typeof(ICollection), typeof(CollectionConverter))]
        [InlineData(typeof(Enum), typeof(EnumConverter))]
        [InlineData(typeof(SomeEnum), typeof(EnumConverter))]
        [InlineData(typeof(SomeValueType?), typeof(NullableConverter))]
        [InlineData(typeof(int?), typeof(NullableConverter))]
        [InlineData(typeof(ClassWithNoConverter), typeof(TypeConverter))]
        [InlineData(typeof(BaseClass), typeof(BaseClassConverter))]
        [InlineData(typeof(DerivedClass), typeof(DerivedClassConverter))]
        [InlineData(typeof(IBase), typeof(IBaseConverter))]
        [InlineData(typeof(IDerived), typeof(IBaseConverter))]
        [InlineData(typeof(ClassIBase), typeof(IBaseConverter))]
        [InlineData(typeof(ClassIDerived), typeof(IBaseConverter))]
        [InlineData(typeof(Uri), typeof(UriTypeConverter))]
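For context, a minimal sketch of the API these cases exercise (this is an illustration, not the benchmark itself). Types resolved through the same converter-lookup path can likely be collapsed into one representative case:

```csharp
using System;
using System.ComponentModel;

class Program
{
    static void Main()
    {
        // TypeDescriptor.GetConverter performs the converter lookup that the
        // benchmark measures; the cases above each map a type to its converter.
        TypeConverter converter = TypeDescriptor.GetConverter(typeof(int));
        Console.WriteLine(converter.GetType().Name); // prints Int32Converter
    }
}
```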

Redesign LINQ benchmarks

This is a follow-up to #90.

  1. ~~CoreFX benchmarks call .ToArray() everywhere, while they should just iterate over the enumeration (like the CoreCLR ones)~~ fixed in #127
  2. Cast_ToBaseClass and Cast_SameType ignore the provided size and iteration count and use hardcoded values. We don't know whether this is by design or a bug. (cc @jorive)
  3. Some benchmarks use iteration and others iterationCount as argument names; this should be unified.
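A minimal illustration of the distinction in point 1, with a made-up query and sizes: calling .ToArray() measures allocation and copying on top of the query, while plain iteration measures only enumeration cost.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        IEnumerable<int> query = Enumerable.Range(0, 100).Where(i => i % 2 == 0);

        // Materializing adds array allocation + copy to the measurement.
        int[] materialized = query.ToArray();

        // Iterating measures only the cost of enumerating the query.
        int sink = 0;
        foreach (int value in query)
            sink += value;

        Console.WriteLine(sink); // prints 2450 (sum of even numbers below 100)
    }
}
```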

Repo structure change proposal

We are soon going to open this repo, and I think that we should change the folder structure before we do that.

Currently we have:

├───.vscode
├───docs
├───scripts
│   └───build
└───src
    ├───ArtifactsUploader // an internal tool
    ├───benchmarks // bdn microbenchmarks
    ├───common // single file
    ├───coreclr // old xunit microbenchmarks
    ├───CoreFx // old xunit microbenchmarks
    ├───dmlib
    ├───docker
    └───scenarios // end-to-end benchmarks

My proposal for now:

├───.vscode
├───build // common with static code analysis rules moved to build
├───docs
├───scripts
│   └───build
└───src
    ├───benchmarks
    │   ├───micro // currently in "benchmarks"
    │   ├───end-to-end // currently in "scenarios"
    │   └───other
    │       ├───containers // currently in "docker"
    │       └───dmlib // currently in "dmlib"
    └───tools
        └───ArtifactsUploader

In the future I would like to add Java and C++ benchmarks to compare against our competition:

├───.vscode
├───build
├───docs
├───scripts
│   └───build
└───src
    ├───benchmarks
    │   ├───micro
    │   ├───end-to-end
    │   ├───competition
    │   │   ├───java
    │   │   └───cpp
    │   └───other
    │       ├───containers
    │       └───dmlib
    └───tools
        └───ArtifactsUploader

@jorive @brianrob what do you think? I want to get acceptance before I start working on the PR ;p

Make it possible to run with Mono

It would be amazing for this benchmarking suite to run on top of Mono. This would allow us to more easily compare performance across the different runtimes, as well as help the Mono team identify where we should focus our performance work.

What should we do on our end to make it easier for you?

Thank you!

Investigate bimodal SpectralNorm_3

SpectralNorm_3 is a bimodal benchmark.

Example histograms from BenchmarkDotNet:

-------------------- Histogram --------------------
[0.786 ms ; 1.033 ms) | @
[1.033 ms ; 1.466 ms) | @@@@@@@@@@@@@@@@@@
[1.466 ms ; 1.807 ms) | @@@
[1.807 ms ; 2.240 ms) | @@@@@@@
[2.240 ms ; 2.875 ms) | @@@@@
[2.875 ms ; 3.288 ms) |
[3.288 ms ; 3.721 ms) | @@@@@@
---------------------------------------------------
-------------------- Histogram --------------------
[0.942 ms ; 1.237 ms) | @@@
[1.237 ms ; 1.614 ms) | @@@@@@@@@@
[1.614 ms ; 1.892 ms) | @
[1.892 ms ; 2.269 ms) | @@@@@@@
[2.269 ms ; 2.563 ms) | @@@
[2.563 ms ; 2.940 ms) | @@@@@@@@@
[2.940 ms ; 3.354 ms) | @@@@
[3.354 ms ; 3.731 ms) | @@@
---------------------------------------------------

Sample results from xunit-performance (please look at the Min and Max value):

DotNetBenchmark-spectralnorm-3.dll      Metric    Unit  Iterations  Average   STDEV.S  Min       Max
BenchmarksGame.SpectralNorm_3.RunBench  Duration  msec  6           1848.838  380.029  1517.790  2584.247
BenchmarksGame.SpectralNorm_3.RunBench  Duration  msec  6           1928.860  173.474  1831.114  2272.792

We can get the disasm with BDN, but we need #40 to be implemented first to get profiles.

Investigate multimodal BinaryTrees_5

BinaryTrees_5 is a bimodal benchmark.

Example histograms from BenchmarkDotNet:

-------------------- Histogram --------------------
[119.310 ms ; 146.287 ms) | @@@
[146.287 ms ; 182.663 ms) |
[182.663 ms ; 210.865 ms) | @@
[210.865 ms ; 237.841 ms) | @@@@@
[237.841 ms ; 256.492 ms) | @
[256.492 ms ; 283.468 ms) | @@@@@@@@@@@
[283.468 ms ; 316.363 ms) | @@@@@@@@@@@@@@
[316.363 ms ; 346.171 ms) | @@@@
---------------------------------------------------
-------------------- Histogram --------------------
[113.076 ms ; 131.348 ms) | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[131.348 ms ; 151.367 ms) | @@@@
[151.367 ms ; 162.228 ms) |
[162.228 ms ; 180.499 ms) | @@
[180.499 ms ; 191.304 ms) |
[191.304 ms ; 209.576 ms) | @@@@@@@
[209.576 ms ; 220.639 ms) |
[220.639 ms ; 238.910 ms) | @@
[238.910 ms ; 261.127 ms) | @
---------------------------------------------------

Sample results from xunit-performance (please look at the Min value):

DotNetBenchmark-binarytrees-5.dll      Metric    Unit  Iterations  Average   STDEV.S  Min       Max
BenchmarksGame.BinaryTrees_5.RunBench  Duration  msec  7           1604.557  564.717  1040.190  2265.712
BenchmarksGame.BinaryTrees_5.RunBench  Duration  msec  5           2122.072  138.067  1935.307  2304.632

Port System.Runtime.Performance.Tests

  • Port
  • Make sure we don't benchmark empty loops
for (int i = 0; i < 10000; i++)
{
    new Guid(guidStr); new Guid(guidStr); new Guid(guidStr);
    new Guid(guidStr); new Guid(guidStr); new Guid(guidStr);
    new Guid(guidStr); new Guid(guidStr); new Guid(guidStr);
}
  • Make sure we don't run more permutations than needed:
[Benchmark]
[InlineData("a", 0)]
[InlineData("  ", 0)]
[InlineData("  ", 1)]
[InlineData("TeSt!", 0)]
[InlineData("TeSt!", 2)]
[InlineData("TeSt!", 3)]
[InlineData("I think Turkish i \u0131s TROUBL\u0130NG", 0)]
[InlineData("I think Turkish i \u0131s TROUBL\u0130NG", 18)]
[InlineData("I think Turkish i \u0131s TROUBL\u0130NG", 22)]
[InlineData("dzsdzsDDZSDZSDZSddsz", 0)]
[InlineData("dzsdzsDDZSDZSDZSddsz", 7)]
[InlineData("dzsdzsDDZSDZSDZSddsz", 10)]
[InlineData("a\u0300\u00C0A\u0300A", 0)]
[InlineData("a\u0300\u00C0A\u0300A", 3)]
[InlineData("a\u0300\u00C0A\u0300A", 4)]
[InlineData("Foo\u0400Bar!", 0)]
[InlineData("Foo\u0400Bar!", 3)]
[InlineData("Foo\u0400Bar!", 4)]
[InlineData("a\u0020a\u00A0A\u2000a\u2001a\u2002A\u2003a\u2004a\u2005a", 0)]
[InlineData("a\u0020a\u00A0A\u2000a\u2001a\u2002A\u2003a\u2004a\u2005a", 3)]
[InlineData("\u4e33\u4e65 Testing... \u4EE8", 0)]
public static object[][] UInt64Values => new[]
{
    new object[] { 214748364LU },
    new object[] { 2LU },
    new object[] { 21474836LU },
    new object[] { 21474LU },
    new object[] { 214LU },
    new object[] { 2147LU },
    new object[] { 214748LU },
    new object[] { 21LU },
    new object[] { 2147483LU },
    new object[] { 922337203685477580LU },
    new object[] { 92233720368547758LU },
    new object[] { 9223372036854775LU },
    new object[] { 922337203685477LU },
    new object[] { 92233720368547LU },
    new object[] { 9223372036854LU },
    new object[] { 922337203685LU },
    new object[] { 92233720368LU },
    new object[] { 0LU }, // min value
    new object[] { 18446744073709551615LU }, // max value
    new object[] { 2147483647LU }, // int32 max value
    new object[] { 9223372036854775807LU }, // int64 max value
    new object[] { 1000000000000000000LU }, // quintillion
    new object[] { 4294967295000000000LU }, // uint.MaxValue * Billion
    new object[] { 4294967295000000001LU }, // uint.MaxValue * Billion + 1
};
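As a sketch of the first point, one way to keep the Guid constructor observable so the loop cannot be treated as dead code (the loop count and GUID string here are illustrative):

```csharp
using System;

class Program
{
    static void Main()
    {
        string guidStr = "ca761232-ed42-11ce-bacd-00aa0057b223";

        Guid last = Guid.Empty;
        for (int i = 0; i < 10000; i++)
        {
            // Keep the result live so the parse is actually measured.
            last = new Guid(guidStr);
        }

        Console.WriteLine(last == new Guid(guidStr)); // prints True
    }
}
```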

2.1 vs 2.2

I took all the benchmarks we have and executed them using all the goodness we have here in the perf repo. I will be posting the results below.

Info:

BenchmarkDotNet=v0.11.1.812-nightly, OS=Windows 10.0.17134.345 (1803/April2018Update/Redstone4)
Intel Xeon CPU E5-1650 v4 3.60GHz, 1 CPU, 12 logical and 6 physical cores
Frequency=3507496 Hz, Resolution=285.1037 ns, Timer=TSC
  [Host] : .NET Core 2.1.5 (CoreCLR 4.6.26919.02, CoreFX 4.6.26919.02), 64bit RyuJIT
  2.1    : .NET Core 2.1.5 (CoreCLR 4.6.26919.02, CoreFX 4.6.26919.02), 64bit RyuJIT
  2.2    : .NET Core 2.2.0-rtm-27029-02 (CoreCLR 4.6.27029.01, CoreFX 4.6.27029.02), 64bit RyuJIT

Road to BenchmarkDotNet

This is a list of requirements that need to be met before we switch from xunit-performance to BenchmarkDotNet.

Missing features:

Performance (the tool needs to be fast to run as part of CI):

  • dotnet/BenchmarkDotNet#606 "Improve Memory Diagnoser" - BDN required one extra process run to get the memory statistics; I have changed the architecture to require only one extra iteration. We need one extra iteration because for desktop .NET we are using AppDomain.MonitoringIsEnabled, which adds extra overhead. So we run the benchmarks without overhead, measure time, enable monitoring, and run one extra iteration to get the memory statistics. (fixed by @adamsitnik, was part of the 0.10.12 release)
  • dotnet/BenchmarkDotNet#543 "Run Disassembly Diagnoser without extra run" - BDN required one extra process run to get the disassembly; I have changed the architecture to synchronize the parent and child processes and get the disassembly after running the benchmarks, but before quitting the process. (fixed by @adamsitnik, was part of the 0.10.12 release)
  • dotnet/BenchmarkDotNet#699 "Generate one executable per runtime settings" - BDN used to build an extra exe per benchmark, which was taking a lot of time. I have changed the architecture: now it groups the benchmarks by runtime settings (framework/JIT/GC etc.) and builds in parallel one exe for the entire group of benchmarks. For the BenchmarkDotNet.Samples project with 650 benchmarks it used to take 1h to build the extra exes; now it's 13s on my PC. (fixed by @adamsitnik, will be part of the 0.11.00 release)
  • dotnet/BenchmarkDotNet#704 "Add an optional way to configure storage to remember the PerfectInvocationCount, IterationCount and UnrollFactor" - By default BDN uses a heuristic to find the perfect invocation and iteration counts. For CI scenarios we should remember these values; I estimate that it should save us 20-25% of the time.
  • dotnet/BenchmarkDotNet#716 Allow benchmarks to optionally run in parallel.

Private runtimes support (BenchmarkDotNet compiles new exe, so it needs to know how to work with private builds):

  • dotnet/BenchmarkDotNet#648 "BenchmarkDotNet requires dotnet cli toolchain to be installed" - Prior to joining MS I had no idea that you could run .NET Core apps without adding the dotnet cli to the PATH. Now it's optional, and the user can provide the path to the dotnet cli which should be used. (fixed by @adamsitnik, was part of the 0.10.13 release)
  • dotnet/BenchmarkDotNet#643 "BenchmarkDotNet should respect LangVersion project setting" - just copy it to the auto-generated project. (fixed by @adamsitnik, was part of the 0.10.13 release)
  • dotnet/BenchmarkDotNet#706 "Support private builds of .NET Runtime" - @vitek-karas needed to measure the perf difference after his recent NGEN changes. He wanted to compare the existing .NET Framework with his private build of the CLR. This feature simply sends the provided version as the COMPLUS_Version env var to the benchmarked process and makes it possible to benchmark private desktop CLR builds. (fixed by @adamsitnik, was part of the 0.10.14 release)
  • dotnet/BenchmarkDotNet#700 "Support private CoreCLR and CoreFX builds" - users can now use ANY CoreCLR and CoreFX builds for benchmarking. It uses the dotnet cli to publish a self-contained app; works on Windows, Linux, and Mac. (fixed by @adamsitnik, will be part of the 0.11.00 release)
  • dotnet/BenchmarkDotNet#718 CoreRT support.

Verification:

I am tagging the Perf Team and the people who are interested in the progress:
@jorive @valenis @adiaaida @DrewScoggins @brianrob
@ViktorHofer @danmosemsft @eerhardt
@AndyAyersMS @JosephTremoulet
@davidfowl @DamianEdwards

Port and improve Perf_Marvin

  • Port from xunit-performance to BenchmarkDotNet
  • investigate why the benchmarks are not calling the Marvin API from CoreFX but instead have their own copy of the hashing algorithm. IMHO it creates a need to synchronize the code between repositories and makes it possible to miss a regression if somebody changes the implementation in CoreFX
  • reduce the number of permutations; it's more a unit test than a benchmark today.

Set PERFSNAKE Machines to Auto-Connect to Jenkins

Right now, most PERFSNAKE machines don't auto-connect to Jenkins, which means that manual intervention is needed whenever machines go down for any reason.

In order for us to increase the number of performance runs for both daily builds and PRs, we need to make sure that our machine pool is resilient to restarts and outages.

@anscoggi, @adiaaida can one of you take this?

Required for https://github.com/dotnet/coreclr/issues/15175.

cc: @dotnet/rap-team

Expand the Perf test coverage for the System.Numerics.Vector types

The current perf test coverage for the System.Numerics.Vector types is fairly limited (this extends to types such as Matrix4x4, Quaternion, and Plane as well).

Given that these are meant to be fairly core high-performance types (used in things such as Multimedia-based applications), it is important that we have a high amount of perf-test coverage over the current implementations.

Ideas for reducing the time required to run all benchmarks

@jorive I am going to add all the ideas I have here, and one day we can start the improvements from this list.

  1. PingPong benchmark from System.Threading.Channels.Tests takes 2.5s to execute. It's executed for 3 channels, up to 20 times for each of them. 2.5 x 3 x 20 = 150 seconds. The solution would be to change the inner iteration count from 1_000_000 to 1_000 or any other smaller value. fixed in #126

Port System.Memory.Performance.Tests

There are a LOT of Memory benchmarks.

  • Port
  • Make sure we don't measure empty loops:
  • Take a short look at CoreCLR Span benchmarks, create new issue for removing duplicated benchmarks
for (int i = 0; i < Benchmark.InnerIterationCount; i++)
{
      Span<char> span = memory.Span;
}
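A sketch of how such a loop can be kept honest by consuming the span, so the body cannot be eliminated (the buffer size and loop count here are made up):

```csharp
using System;

class Program
{
    static void Main()
    {
        Memory<char> memory = new char[64];

        char sink = '\0';
        for (int i = 0; i < 1000; i++)
        {
            Span<char> span = memory.Span;
            sink = span[0]; // consume the span so the loop body is not dead code
        }

        Console.WriteLine((int)sink); // prints 0 (fresh char[] is zero-filled)
    }
}
```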

Redesign EnumPerf.ObjectGetType

The JIT is smart and knows that enum.GetType() is a constant, so it optimizes it away. Unfortunately, it keeps the empty loop.

public static Color blackColor = Color.Black;

[Benchmark]
public Type ObjectGetType()
{
    Type tmp = null;

    for (int i = 0; i < InnerIterationCount; i++)
        tmp = blackColor.GetType();

    return tmp;
}
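For reference, a minimal standalone repro of the call in question (the enum definition here is a stand-in for the benchmark's Color type):

```csharp
using System;

enum Color { Black, White }

class Program
{
    static Color blackColor = Color.Black;

    static void Main()
    {
        // GetType() on an enum field returns its runtime type; since the result
        // never changes, the JIT can treat it as a constant, which is why the
        // loop body above can be optimized down to nothing.
        Console.WriteLine(blackColor.GetType().Name); // prints Color
    }
}
```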

