Comments (20)
Please make the "run" instance variable in ThreadTest (https://github.com/evanfoster/kata-java-benchmarks/blob/main/ThreadTest.java#L11) volatile and update the numbers, otherwise there will be unpredictable delays in getting your threads to terminate.
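The effect of a missing `volatile` can be shown with a minimal standalone sketch (this is not the actual ThreadTest code; class and variable names here are illustrative). Without `volatile`, the JIT may hoist the flag read out of the worker's loop, so the worker can keep spinning long after the main thread clears it:

```java
// Minimal sketch (not the actual ThreadTest): a worker loops on a shared
// flag. Declaring the flag volatile guarantees the worker observes the
// main thread's write promptly; without it, termination can be delayed
// unpredictably because the read may be hoisted out of the loop.
public class VolatileFlagDemo {
    private static volatile boolean run = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long spins = 0;
            while (run) {
                spins++; // busy work standing in for the benchmark body
            }
            System.out.println("worker stopped after " + spins + " spins");
        });
        worker.start();
        Thread.sleep(100);  // let the worker run briefly
        run = false;        // visible to the worker because 'run' is volatile
        worker.join(5000);  // should return almost immediately
        System.out.println("worker alive: " + worker.isAlive());
    }
}
```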
from runtime.
I remember running into the ulimit discrepancy in the past, and having to correct that in manifests, e.g. for Jenkins. I will try to remember what I traced that back to and put a link here. Assigning to myself for now.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-kata
  labels:
    app: jenkins
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  replicas: 8
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      runtimeClassName: kata
      containers:
      - name: jenkins
        image: jenkins/jenkins
        command: [ "bash" ]
        args: [ "-c", "ulimit -n 5000; ulimit -a; /usr/local/bin/jenkins.sh" ]
        resources:
          limits:
            memory: "3000Mi"
          requests:
            memory: "2000Mi"
        env:
        - name: JAVA_OPTS
          value: -Djenkins.install.runSetupWizard=false
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
/cc @c3d
I should also mention that the thread fairness seems pretty wacky. It seems like worker threads are given too much precedence. The worker threads seem to do more work in Kata, but creating new threads takes far, far longer, like the main thread isn't getting cycles.
EDIT: The above is just conjecture, however.
How many CPUs does the pod see -- I'm not referring to CPU requests or limits -- inside the Kata vs. runc container? The sysctl output from the runc container shows 32 CPUs under kernel.sched_domain, but the Kata sysctl output shows nothing.
In Kata, the pod sees whatever the limit is (so 8 in one test, 1 in the other). In runc, the pod sees the actual number of CPU cores on the host (32 in Azure, I think 128 in AWS). Not sure why kernel.sched_domain isn't showing anything inside of Kata. That's quite weird...
Interesting that there are differing values in ulimit, i.e.:
runc:
process 1048576
nofiles 1048576
kata:
process 7947
nofiles 1073741816
I'm wondering if it'd make sense to update your CRI-O configuration to set this so it's consistent?
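For cross-checking the two runtimes, the limits the JVM process actually inherits can be read from inside either container without going through the shell. A minimal sketch (Linux-only, reading `/proc/self/limits`, which is where the process/nofiles numbers above ultimately come from):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch: print the two limits at issue here ("Max processes" and
// "Max open files") straight from /proc/self/limits. Running this in
// the kata and runc pods would show the same discrepancy as ulimit -a.
public class ShowLimits {
    public static void main(String[] args) throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/self/limits"))) {
            if (line.startsWith("Max processes") || line.startsWith("Max open files")) {
                System.out.println(line.trim());
            }
        }
    }
}
```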
Interesting that there are differing values in ulimit, i.e.:
runc: process 1048576, nofiles 1048576
kata: process 7947, nofiles 1073741816
I'm wondering if it'd make sense to update your CRI-O configuration to set this so it's consistent?
Hmm. Both tests were run sequentially on the same node with the same CRI-O config, just with a different runtimeClassName. Not sure what's causing the discrepancy there.
It would be really interesting to see the output of lscpu in both containers, in addition to the contents of /sys/fs/cgroup/cpu/cpu.shares.
EDIT: This is from the test that only exposes 1 CPU core (test-thread-creation-teardown). I will note that OpenJDK checks the cgroup before checking the number of processors. If there's a limit applied at the cgroup level, it ignores the number of procs.
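The JVM's view of this can be verified directly from inside each pod. A minimal sketch (note: the exact container-awareness behavior, e.g. whether cpu.shares influences the count, varies by OpenJDK version, so treat the comments as assumptions rather than guarantees):

```java
// Sketch: what the JVM itself reports. In container-aware OpenJDK builds
// (controlled by -XX:+UseContainerSupport), availableProcessors() reflects
// the effective cgroup CPU limit when one is set, and otherwise the online
// CPU count -- which is why kata (2 vCPUs) and runc (32 host CPUs) can
// report very different values for the same pod spec.
public class CpuReport {
    public static void main(String[] args) {
        System.out.println("availableProcessors = "
                + Runtime.getRuntime().availableProcessors());
    }
}
```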
Kata:
root@thread-creation-teardown-test-kata:/# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.824
BogoMIPS: 4589.64
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat umip md_clear arch_capabilities
root@thread-creation-teardown-test-kata:/# cat /sys/fs/cgroup/cpu/cpu.shares
1024
runc:
root@thread-creation-teardown-test-runc:/# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.825
BogoMIPS: 4589.65
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 51200K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt md_clear
root@thread-creation-teardown-test-runc:/# cat /sys/fs/cgroup/cpu/cpu.shares
1024
This is from the test that only exposes 1 CPU core (test-thread-creation-teardown). I will note that OpenJDK checks the cgroup before checking the number of processors. If there's a limit applied at the cgroup level, it ignores the number of procs.
Would you like the results from the 8 core test (thread-yield-test)?
EDIT: Here's the same data from the 8 core test (thread-yield-test):
Kata:
root@thread-yield-test-kata:/# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 9
On-line CPU(s) list: 0-8
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 9
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.824
BogoMIPS: 4589.64
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat umip md_clear arch_capabilities
root@thread-yield-test-kata:/# cat /sys/fs/cgroup/cpu/cpu.shares
8192
runc:
root@thread-yield-test-runc:/# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.825
BogoMIPS: 4589.65
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 51200K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt md_clear
root@thread-yield-test-runc:/# cat /sys/fs/cgroup/cpu/cpu.shares
8192
@mnmehta I've updated the test. I'm spinning up my Azure and AWS test clusters now. I'll provide updated results once I have them.
Please make the "run" instance variable in ThreadTest (https://github.com/evanfoster/kata-java-benchmarks/blob/main/ThreadTest.java#L11) volatile and update the numbers, otherwise there will be unpredictable delays in getting your threads to terminate.
Hey @mnmehta, apologies that this took so long to generate! Had a couple of rough days.
Here's the raw data (which includes things like ulimit and sysctl output): https://gist.github.com/evanfoster/6deab087f8c18f208c460daf715ddc10
Here are the results after pulling them out and cleaning them up (thread creation and teardown only includes the worst case result):
AWS Kata (thread creation and teardown):
{
  "threadCreationTest2": "start",
  "repeats" : 2,
  "threads" : 1024,
  "sleep" : 2,
  "priority" : 10,
  "setupTotal" : 905,
  "teardownTotal" : 72,
  "overallTotal" : 977
}
AWS runc (thread creation and teardown):
{
  "threadCreationTest2": "start",
  "repeats" : 2,
  "threads" : 1024,
  "sleep" : 2,
  "priority" : 10,
  "setupTotal" : 1693,
  "teardownTotal" : 614,
  "overallTotal" : 2307
}
Azure Kata (thread creation and teardown):
{
  "threadCreationTest2": "start",
  "repeats" : 2,
  "threads" : 1024,
  "sleep" : 2,
  "priority" : 10,
  "setupTotal" : 503381,
  "teardownTotal" : 120,
  "overallTotal" : 503501
}
Azure runc (thread creation and teardown):
{
  "threadCreationTest2": "start",
  "repeats" : 2,
  "threads" : 1024,
  "sleep" : 2,
  "priority" : 10,
  "setupTotal" : 2508,
  "teardownTotal" : 199,
  "overallTotal" : 2707
}
AWS Kata (thread yield test):
threads total_ms perthread_ms total_iterations
1 1.0 1.0 0
2 0.0 0.0 2
4 0.0 0.0 5
8 1.0 0.125 23
16 49.0 3.0625 24584
32 161.0 5.03125 83143
64 1495.0 23.359375 753366
128 6027.0 47.0859375 3030165
256 19410.0 75.8203125 9701221
512 54141.0 105.744140625 27024516
1024 207794.0 202.923828125 103460392
2048 422438.0 206.2685546875 211768041
AWS runc (thread yield test):
threads total_ms perthread_ms total_iterations
1 1.0 1.0 0
2 0.0 0.0 2
4 0.0 0.0 5
8 1.0 0.125 26
16 1.0 0.0625 844
32 3.0 0.09375 4471
64 6.0 0.09375 20267
128 1356.0 10.59375 759518
256 5298.0 20.6953125 2571878
512 31896.0 62.296875 15690242
1024 121082.0 118.244140625 56947795
2048 611689.0 298.67626953125 319274922
Azure Kata (thread yield test):
threads total_ms perthread_ms total_iterations
1 3.0 3.0 0
2 1.0 0.5 4
4 3.0 0.75 200
8 11.0 1.375 1740
16 39.0 2.4375 20267
32 322.0 10.0625 158514
64 1990.0 31.09375 945239
128 4957.0 38.7265625 2446591
256 21184.0 82.75 10028896
512 45128.0 88.140625 21688878
1024 182810.0 178.525390625 86612100
2048 784528.0 383.0703125 370664012
Azure runc (thread yield test):
threads total_ms perthread_ms total_iterations
1 1.0 1.0 0
2 0.0 0.0 2
4 1.0 0.25 4
8 2.0 0.25 27
16 3.0 0.1875 877
32 13.0 0.40625 22037
64 560.0 8.75 274929
128 2986.0 23.328125 1407261
256 10999.0 42.96484375 5223520
512 32288.0 63.0625 15509476
1024 88801.0 86.7197265625 43347168
2048 270199.0 131.93310546875 128721491
It looks like that did make a difference when running on AWS bare metal at high thread counts. Azure looks to be relatively unchanged.
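For readers skimming the tables, the yield test boils down to N threads spinning on Thread.yield() for a measurement window. A minimal standalone sketch of the pattern (this is not the actual benchmark code; thread counts and window length are illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the thread-yield pattern (not the actual benchmark):
// start N threads that call Thread.yield() in a loop until a deadline,
// then report wall time and total loop iterations, analogous to the
// total_ms / total_iterations columns above. Heavy contention on the
// scheduler is exactly what this stresses.
public class YieldSketch {
    public static void main(String[] args) throws InterruptedException {
        int n = args.length > 0 ? Integer.parseInt(args[0]) : 8;
        long windowNanos = 200_000_000L; // 200 ms measurement window
        AtomicLong iterations = new AtomicLong();
        long start = System.nanoTime();
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            workers[i] = new Thread(() -> {
                long local = 0;
                while (System.nanoTime() - start < windowNanos) {
                    Thread.yield(); // hand the CPU back to the scheduler
                    local++;
                }
                iterations.addAndGet(local);
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        long totalMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("threads=" + n + " total_ms=" + totalMs
                + " total_iterations=" + iterations.get());
    }
}
```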
@c3d @RobertKrawitz Any other ideas on this issue?
@c3d your pod definition doesn't have a CPU request/limit?
@evanfoster could you provide lscpu output for all four combinations (AWS kata, AWS runc, Azure kata, Azure runc)? There are two other things I'm thinking about:
- Difference in the number of cores
- Difference in the CPU flags possibly resulting in different instructions being used (e.g. for synchronization)
Hey @RobertKrawitz,
I now include lscpu output in every test I run. The test output is super verbose (https://gist.github.com/evanfoster/6deab087f8c18f208c460daf715ddc10), so let me extract it for easier reading:
AWS bare metal -- Kata
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.990
BogoMIPS: 4999.98
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku avx512_vnni md_clear arch_capabilities
AWS bare metal -- runc
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2417.827
CPU max MHz: 3500.0000
CPU min MHz: 1200.0000
BogoMIPS: 5000.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Azure nested -- Kata
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.686
BogoMIPS: 4589.37
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat umip md_clear arch_capabilities
Azure nested -- runc
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.688
BogoMIPS: 4589.37
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 51200K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt md_clear
So those are big differences in the machine sizes (virtual or otherwise). The Kata pods are getting 2 CPUs, which is what I expect for a Kata pod with no CPU request (one core for the pod, one core for the rest of the VM). Can you use a much higher CPU request on the Kata pods to see if that makes a difference?
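A sketch of what that could look like in the Jenkins deployment earlier in the thread (values purely illustrative, not a recommendation):

```yaml
# Illustrative only: an explicit CPU request/limit so the Kata guest VM
# is sized with more vCPUs than the 1 + 1 default.
resources:
  requests:
    cpu: "8"
    memory: "2000Mi"
  limits:
    cpu: "8"
    memory: "3000Mi"
```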
Sure thing! I just need to spin up my clusters again. It'll be a bit, but I'll do that and post the full results.
This issue is being automatically closed as Kata Containers 1.x has now reached EOL (End of Life). This means it is no longer being maintained.
Important:
All users should switch to the latest Kata Containers 2.x release to ensure they are using a maintained release that contains the latest security fixes, performance improvements and new features.
This decision was discussed by the @kata-containers/architecture-committee and has been announced via the Kata Containers mailing list:
- http://lists.katacontainers.io/pipermail/kata-dev/2020-November/001601.html
- http://lists.katacontainers.io/pipermail/kata-dev/2021-April/001843.html
- http://lists.katacontainers.io/pipermail/kata-dev/2021-May/001896.html
If you believe this issue still applies to Kata Containers 2.x, please open an issue against the Kata Containers 2.x repository, pointing to this one, providing details to allow us to migrate it.