containers / composefs
A file system for mounting container images
License: GNU General Public License v2.0
This way we don't have to fix all distcheck issues on release day.
Basically...we're now shipping support in ostree for embedding whiteouts. This allows c/image (podman etc.) to be directly pointed at this alternative root in a read-only fashion. Container images shipped this way are "lifecycle bound" with the host (and gain benefits of dedup actually and the efficient ostree on-the-wire deltas).
But this only works because ostree itself doesn't use overlayfs (ostree actually predates overlayfs).
In a composefs future, because overlayfs doesn't nest, we're going to need to figure out how to handle this fun special case.
In a unified storage world things are inherently better here, but hard-requiring that would actually be an "API break".
I guess another way to say this is that everyone turning on the ostree composefs support is just going to break if they have nested containers today.
With the work in #58 we can do binary search on the dir chunks, so we don't have to load all chunks for a lookup. If we also add the name offsets in the dentry so we can do random access in that table we can do binary search also inside a chunk.
This would make name lookup faster.
do we need to care about it?
I keep accidentally running mkcomposefs --digest-store=path ...
on e.g. systems that don't support fs-verity (or in similar situations), and then get surprised when reading files off the mountpoint fails (with a logged message about a wrong/missing fs-verity digest). Then I keep having to look up what the option is to disable verity checks.
I wonder if we should change the behaviour of open, when the inode has a recorded fs-verity digest. What about this set of behaviours:
If the digest=xxx mount option is specified (i.e. the image itself is verity-protected), then we require:
Otherwise by default we require:
Then we add two new options: one to require all backing files to have a fs-verity digest if the inode has one recorded, and another to require all image inodes to have a fs-verity digest. (Basically, one switch for each of the digest=xxx requirements.) Then we drop the old noverity option.
This patch-set adds support to erofs for bloom filters to speed up negative xattr lookups:
https://lists.ozlabs.org/pipermail/linux-erofs/2023-July/008565.html
This change is backwards compatible in the sense that we could add these to the erofs images we generate and old kernels would still work with the images. It would be good to get this into our erofs image format before we formally lock down the v0 binary format.
Does it really make sense to have nanosecond mtime precision in something like composefs? If we drop it that saves some space.
This code:
https://github.com/giuseppe/composefs/blob/b6c3e3524f96b155d1e2f7b237a8246557cb6349/kernel/cfs.c#L525
Only checks for O_WRONLY (==1), and fails to detect (and deny) the case of O_RDWR (==2), which allows the fs user to modify the backing file.
Migrating this from ostreedev/ostree#2879 (comment)
Today, our signature verification logic relies on the in-kernel fsverity signature handling.
In the primary original use case for fsverity (e.g. Android), signatures on the files are verified in userspace before they're processed. A whole problem with using fsverity outside of Android is that other Linux systems don't ship apps as single .zip files with a single trusted process launcher.
But composefs is a way to sign and manage filesystem trees - and the fsverity maintainer is arguing that it makes more sense for us to do signature verification in userspace, instead of going through the Linux kernel's fsverity "automatic" flow using CONFIG_FS_VERITY_BUILTIN_SIGNATURES.
I think I lean in that direction too. At least, we should support external/userspace signatures and not require CONFIG_FS_VERITY_BUILTIN_SIGNATURES. Which I guess we basically do now, because we could just document how to use whatever tools (e.g. openssl) in combination with calling FS_IOC_MEASURE_VERITY on the erofs to verify the signature before mounting.
(This topic relates to the question of how opinionated this project is, which relates to #125 )
Hmm. One tricky thing here is that if we say that the signed object is the fsverity digest (as we do now), that then does really commit us to fsverity for the erofs metadata file. But long term...it may actually make sense to cut the backing filesystem out of the flow for the erofs metadata (i.e. not use loopback files...)? In a non-loopback world, perhaps we actually use dm-verity for the erofs metadata? I guess nothing really stops us today actually from setting up dm-verity on the loopback and using its signature tooling...although that cuts strongly against the "block device is hidden" argument.
Well, anyways I guess the bottom line here is that in theory, we do support "external signatures" today. But we should document it. And then a debate is whether to keep the current signature code which the fsverity maintainer argues against.
To make it easier to track things, let's use this issue to track the current state (and to have discussions about it).
Current state:
For the use of data-only lower layers (to hide all possible files in the basedir) we need the lazy lower-data support, which was added in 6.5.
To use the LCFS_MOUNT_FLAGS_REQUIRE_VERITY (or -o verity) option you need the overlay verity patches, which were added in 6.6-rc1.
Overlayfs also requires erofs support for chunked files; this was added in Linux 5.15.
To be able to store overlayfs lower directories (nested overlayfs) some overlay patches will be needed; these are being discussed on the list.
This issue is probably caused by a Linux kernel regression.
Steps to reproduce the issue:
sudo -i
mkdir basedir
mkdir workdir
mkdir upper,dir
mkdir mnt
mkcomposefs ./basedir example.cfs
# mount.composefs -t composefs \
-o 'basedir=./basedir,workdir=./workdir,upperdir=./upper\,dir' example.cfs ./mnt
mount.composefs: Failed to mount composefs example.cfs: No such file or directory
#
Describe the results you received:
Command in step 7 fails.
Describe the results you expected:
I had expected the command in step 7 to succeed.
About the system
# uname -r
6.6.0-0.rc2.20230919git2cf0f7156238.21.fc40.aarch64
# rpm-ostree status
State: idle
Deployments:
● fedora:fedora/aarch64/coreos/rawhide
Version: 40.20230921.91.0 (2023-09-21T14:07:56Z)
Commit: d8eab688f9726a1aac5d55922a0d205c03fbf243d30d7e9c4e280f0190a2abe0
GPGSignature: Valid signature by 115DF9AEF857853EE8445D0A0727707EA15B79CC
#
Additional note 1
This issue is probably caused by a Linux kernel regression
Quote: "Up to and including kernel 6.4.15, it was possible to have commas in
the lowerdir/upperdir/workdir paths used by overlayfs, provided they were
escaped with backslashes:"
See
https://lore.kernel.org/all/[email protected]/
Additional note 2
I could not reproduce the bug on an older Fedora CoreOS version.
About the older system:
[core@localhost ~]$ rpm-ostree status
State: idle
AutomaticUpdatesDriver: Zincati
DriverState: inactive
Deployments:
● fedora:fedora/aarch64/coreos/next
Version: 38.20230310.1.0 (2023-03-10T22:51:50Z)
Commit: b0fdf736cdbbd3971380d5549635e30155f07af6100925d987de623b4722637f
GPGSignature: Valid signature by 6A51BBABBA3D5467B6171221809A8D7CEB10B464
[core@localhost ~]$ uname -r
6.2.2-301.fc38.aarch64
[core@localhost ~]$
For some reason selinux decides that composefs doesn't support xattrs for storing the selinux context, so we never even get called to read security.selinux. We need to figure out why.
The xattr data is already uniqueified on disk, so with a simple hash-table and some refcounting we could share the loaded data between inodes in ram too.
On musl, stdio is implemented using readv/writev: https://wiki.musl-libc.org/functional-differences-from-glibc.html
Since these syscalls aren't included in the seccomp filter, this breaks stdio; and since that also breaks stderr, the program fails to print an error and silently exits.
It would be useful to have a section about how the composefs approach, in its theoretical fully Secure Boot-chained configuration, compares against other approaches like IMA and dm-verity.
Somewhere under https://github.com/containers/composefs#filesystem-integrity perhaps?
Let's assume 1.0 is released, and we discover something like a notable performance issue with the 1.0 format. Or maybe it's actually broken in an important corner case on big-endian (s390x) - something like that.
Say this is important enough to do a 1.1.
I think the way this would need to work is basically that we add support for e.g. --format=1.1 to the CLI/API - and then we generate both digests.
We need to think through and verify a scenario like this would work:
Right?
While preparing packaging for 0.9.0 over at NixOS/nixpkgs#256892 I ran into a few potential portability issues while trying various cross-compilations and thought to report them.
- #include <endian.h> in libcomposefs/lcfs-fsverity.c - not supported by darwin or freebsd
- #include <error.h> in tools/* - gnu extension, not available when using musl
- os.setxattr in tests/gendir - throws an exception if the file system the builder is running on does not support xattrs:
composefs> Traceback (most recent call last):
composefs>   File "/build/source/tests/./gendir", line 193, in <module>
composefs>     make_dir(root, dirs)
composefs>   File "/build/source/tests/./gendir", line 163, in make_dir
composefs>     set_user_xattr(path)
composefs>   File "/build/source/tests/./gendir", line 104, in set_user_xattr
composefs>     os.setxattr(path, name, value, follow_symlinks=False)
composefs> OSError: [Errno 95] Operation not supported: b'/build/lcfs-test.pTv96t/root'
While the file system itself only works on Linux, it would be nice if the tooling to create or inspect a composefs image worked on other unix variants.
In my recent work on fileformat changes I dropped all trailing zeros in strings from the formats, as we typically have lengths already in the format. The remaining place we still have it is in the symlink target payload. We should drop that for consistency.
Right now we're serializing inodes in reverse order with the root being last. This means that the files near the root directory get spread out a lot. It would be better if the root inode was first (although it does make the generation code a bit harder).
In cfs_show_options() we use base= for this, but the option parser uses basedir=.
I would like to add a fs-verity digest to struct lcfs_backing_s, in combination with a mount option to enforce verification of the backing files having that fs-verity digest. In addition I would like to add a mount option that specifies a fs-verity digest of the descriptor file, which if set fails the mount if the file doesn't have the digest.
With this, one could mount a descriptor + a set of backing files which are all fs-verified, and we're guaranteed to verify each read of both metadata and file contents. The plan is to then have ostree generate a descriptor like this during commit and put the fs-verity digest of it into the commit object metadata. Then the ostree client could regenerate the descriptor, enable fs-verity on all files and be able to trust that we get the right files on each read.
In lcfs_c_string it checks if (off >= ctx->descriptor_len), but then computes data = (char *)(ctx->descriptor + ctx->vdata_off + off). For an off just below descriptor_len, this points vdata_off bytes past the end of the descriptor.
Be wary of just checking vdata_off + off >= descriptor_len though, as that can overflow if off is very large.
A 700 MB cs9 root filesystem generates a 3.3 MB metadata file, yet all the offsets in the file are 64-bit. If we drop this to 32-bit we can still support metadata files up to 4 GB, roughly a thousand times larger than needed.
I'm not sure if this is a useful change though; it won't really make the files a lot smaller.
In this:
I think you handle the "no such name" case wrong. You're supposed to return -ENODATA for that.
If the payload is not set, or it is zero-length, we should always encode it as an offset=0, len=0 vdata.
I'm the maintainer of TorizonCore (https://www.toradex.com/operating-systems/torizon-core), an open-source and container-based Linux distribution built with OE that leverages OSTree.
I am very interested in composefs, especially for the root-of-trust implementation, and I am willing to help with testing and the OSTree integration.
I am following the mailing list discussion and reading all documentation.
One question I have is about authenticity checks. As far as I understood, composefs leverages fs-verity for integrity checks, but authentication is not supported. Is that right?
I was wondering if it would be possible to sign the descriptor digest so we could trust it by just checking the signature, avoiding the need to encode it inside an early boot stage (kernel/ramdisk). That would make it easier to implement remote updates with verified boot/chain-of-trust support.
are override_creds/revert_creds needed?
I've copied it from overlay but it is probably not necessary
When building the project on a Linux Mint 20.3 (based on Ubuntu Focal) I get the following errors:
...
CC composefs/libcomposefs/libcomposefs_la-lcfs-writer.lo
CC composefs/libcomposefs/libcomposefs_la-lcfs-mount.lo
composefs/libcomposefs/lcfs-mount.c:112:13: warning: ‘struct mount_attr’ declared inside parameter list will not be visible outside of this definition or declaration
112 | struct mount_attr *attr, size_t usize)
| ^~~~~~~~~~
composefs/libcomposefs/lcfs-mount.c: In function ‘setup_loopback’:
composefs/libcomposefs/lcfs-mount.c:273:9: error: variable ‘loopconfig’ has initializer but incomplete type
273 | struct loop_config loopconfig = { 0 };
| ^~~~~~~~~~~
composefs/libcomposefs/lcfs-mount.c:273:36: warning: excess elements in struct initializer
273 | struct loop_config loopconfig = { 0 };
| ^
composefs/libcomposefs/lcfs-mount.c:273:36: note: (near initialization for ‘loopconfig’)
composefs/libcomposefs/lcfs-mount.c:273:21: error: storage size of ‘loopconfig’ isn’t known
273 | struct loop_config loopconfig = { 0 };
| ^~~~~~~~~~
composefs/libcomposefs/lcfs-mount.c:303:20: error: ‘LOOP_CONFIGURE’ undeclared (first use in this function)
303 | if (ioctl(loopfd, LOOP_CONFIGURE, &loopconfig) < 0) {
| ^~~~~~~~~~~~~~
composefs/libcomposefs/lcfs-mount.c:303:20: note: each undeclared identifier is reported only once for each function it appears in
composefs/libcomposefs/lcfs-mount.c:273:21: warning: unused variable ‘loopconfig’ [-Wunused-variable]
273 | struct loop_config loopconfig = { 0 };
| ^~~~~~~~~~
composefs/libcomposefs/lcfs-mount.c: In function ‘lcfs_mount_erofs’:
composefs/libcomposefs/lcfs-mount.c:384:10: error: variable ‘attr’ has initializer but incomplete type
384 | struct mount_attr attr = {
| ^~~~~~~~~~
composefs/libcomposefs/lcfs-mount.c:385:5: error: ‘struct mount_attr’ has no member named ‘attr_set’
385 | .attr_set = MOUNT_ATTR_IDMAP,
| ^~~~~~~~
composefs/libcomposefs/lcfs-mount.c:385:16: error: ‘MOUNT_ATTR_IDMAP’ undeclared (first use in this function); did you mean ‘MOUNT_ATTR_NODEV’?
385 | .attr_set = MOUNT_ATTR_IDMAP,
| ^~~~~~~~~~~~~~~~
| MOUNT_ATTR_NODEV
composefs/libcomposefs/lcfs-mount.c:385:16: warning: excess elements in struct initializer
composefs/libcomposefs/lcfs-mount.c:385:16: note: (near initialization for ‘attr’)
composefs/libcomposefs/lcfs-mount.c:386:5: error: ‘struct mount_attr’ has no member named ‘userns_fd’
386 | .userns_fd = state->options->idmap_fd,
| ^~~~~~~~~
composefs/libcomposefs/lcfs-mount.c:386:17: warning: excess elements in struct initializer
386 | .userns_fd = state->options->idmap_fd,
| ^~~~~
composefs/libcomposefs/lcfs-mount.c:386:17: note: (near initialization for ‘attr’)
composefs/libcomposefs/lcfs-mount.c:384:21: error: storage size of ‘attr’ isn’t known
384 | struct mount_attr attr = {
| ^~~~
composefs/libcomposefs/lcfs-mount.c:390:17: error: invalid application of ‘sizeof’ to incomplete type ‘struct mount_attr’
390 | sizeof(struct mount_attr));
| ^~~~~~
composefs/libcomposefs/lcfs-mount.c:384:21: warning: unused variable ‘attr’ [-Wunused-variable]
384 | struct mount_attr attr = {
| ^~~~
make[2]: *** [Makefile:5304: composefs/libcomposefs/libcomposefs_la-lcfs-mount.lo] Error 1
CC src/rofiles-fuse/rofiles_fuse-main.o
CC src/libostree/tests_test_rollsum_cli-ostree-rollsum.o
...
Basically, the build fails for two reasons:
- linux/mount.h misses struct mount_attr and the macro MOUNT_ATTR_IDMAP; this happens despite the fact that the "new mount API" has been detected.
- linux/loop.h misses struct loop_config and the macro LOOP_CONFIGURE.
I was wondering if it would make sense to solve these issues in the upstream project to allow the build on machines having older Linux headers. If it does, I have a patch that tries to tackle them and I could create a PR for it. What do you guys think?
In ostree I implemented statfs() to forward e.g. remaining disk space to the backing fs.
I'm not sure whether this is the right thing to do, but I'm adding it here for discussion.
For some use-cases, it would be useful to have an API to recreate an lcfs_node tree from a .cfs image. My main motivation for this would be RAUC's artifact updates, where we could use this to stream missing objects from the remote update bundle.
I think you should change:
void *lcfs_get_vdata(struct lcfs_context_s *ctx,
const struct lcfs_vdata_s *vdata)
To:
void *lcfs_get_vdata(struct lcfs_context_s *ctx,
struct lcfs_vdata_s vdata)
In other words, pass the struct by value. If you're not used to this style it may seem a bit inefficient, but the ABI actually allows the optimizer to pass the struct members in individual registers, so it is highly efficient.
I've been thinking more about the ostree/composefs integration and longer term, I think composefs should have its own opinionated management tooling for backing store files and checkouts.
Basically we move the "GC problem" from higher level tools into a shared composefs layer - and that will greatly "thin out" what ostree needs to do, and the same for container/storage type things. And more generally, it would help drive unifying these two things which I think we do want long term. Related to this, a mounted composefs shouldn't have backing store files deleted underneath it.
Maybe we could get away with having this just be a directory, e.g. /composefs (like /ostree/repo) or perhaps /usr/.composefs. Call this a $composefsdir.
Vaguely thinking perhaps we could have then $composefsdir/roots.d with namespaced subdirectories, like $composefsdir/roots.d/ostree and $composefsdir/roots.d/containers. Finally there'd be $composefsdir/files which would hold the regular files.
Then we'd have a CLI tool like /usr/libexec/composefsctl --root /composefs gc that would iterate over all composefs filesystems and GC any unreferenced regular files. In order to ensure GC doesn't race with additions we'd also need "API" operations like /usr/libexec/composefsctl add container/foo.composefs that did locking. And a corresponding composefsctl delete.
When I build composefs against musl, the mtimes in the images generated by composefs-from-json are different: for me they differ by 2h. I believe this is related to e062b81.
Now that we set SB_RDONLY we can drop a whole bunch of unnecessary no-op methods that just return EROFS.
It would be great if mkcomposefs supported multi-threading. Processing a lot of files takes a long time, even when my machine has 31 idle CPU threads.
I think we just need some minor changes and then set a flag for this.
(This issue is somewhat half baked, but there's some valid discussion to be had of composefs+IMA)
While we decided to remove composefs' builtin signature verification using the fs-verity mechanism, for systems deploying with IMA, it could make a lot of sense to use IMA to sign and verify the composefs metadata file.
I don't think initially this needs actual code changes here, it's basically documenting:
evmctl ima_sign /path/to/composefs.img
evmctl ima_verify /target/composefs.img
mount.composefs /target/composefs.img
This is just reusing IMA as a mechanism to sign files in a generic fashion. Verification happens in userspace.
In this scenario we aren't using fsverity on the composefs image itself...which would definitely be better. To do that though, we'd need to use IMA to sign the expected digest instead.
Now I guess things get more interesting here as one could imagine a deeper integration with IMA policies (and IMA measurement in general) a bit like what happens with devicemapper ima.
I think doing that would require driving some of the current mount.composefs logic into the Linux kernel though. Which I guess in the end brings us back almost full circle, except instead of using fsverity's signature support we'd be using IMA's signature support.
Is there any reason for us to not depend on libfsverity? Why are we carrying a reimplementation of things like the digest computation?
So...basically today for RHEL9 we use XFS by default, which doesn't yet support fs-verity.
One thing I could imagine doing is basically adding support for IMA in all the places we support fs-verity - on the cfs image, and the backing store.
I think because of how IMA works we might not even need the verity=require flow that was needed for overlayfs+fs-verity.
This is a half baked thought. I don't know if it's worth it.
As mentioned here: ostreedev/ostree#2640 (comment)
ignition would like to be able to track which block devices are used by a mount. We can't currently always do this; for example, the mount source is a relative pathname. We should maybe make this an absolute pathname?
There might be similar issues with the basedir.
In the file tools/mkcomposefs.c, incorrect length parameters are given to strncat(), see for instance
strncpy(tmppath, dst_base, sizeof(tmppath) - 1);
strncat(tmppath, "/.tmpXXXXXX", sizeof(tmppath) - 1);
After booting an OSTree based filesystem with composefs, I could not run sudo:
$ sudo ls /
-sh: /mnt/usr/bin/sudo: Permission denied
After some investigation, I discovered that the problem was the file permissions. The permission bits below (4111) work with OSTree hard links, but don't work with composefs.
$ ls -l /usr/bin/sudo
---s--x--x 1 root root 189676 Jan 1 1970 /usr/bin/sudo
After regenerating the image with a+r for the sudo binary, it worked.
Is this expected?
Steps to reproduce the issue:
$ ls
$ mkdir dir1
$ echo a > dir1/file1
$ str=$(python3 -c "print(10000*'A')")
$ mkcomposefs --by-digest --digest-store=$str dir1 outfile
$ echo $?
0
Describe the results you received:
$ mkcomposefs --by-digest --digest-store=$str dir1 outfile
$ echo $?
0
Describe the results you expected:
$ mkcomposefs --by-digest --digest-store=$str dir1 outfile
mkcomposefs: cannot fill payload: File name too long
$ echo $?
1
This is not guaranteed to be set. Some filesystems will return DT_UNKNOWN, and we must then do a stat to see what type of file it is.
It says
License: GPLv2+
But it is LGPLv2.1+ for the library, and GPLv3+ for the tools.
We currently only implement cfs_listxattr in cfs_file_inode_operations, so we can't enumerate xattrs for other kinds of inodes.
After a lot of debate, it seems like we will be focusing on the "erofs+overlayfs" flow. There are positives and negatives to this.
This issue is about one of the negative things we lose with this combination, which is that we need to make a loopback device.
In our usage, the loopback device is an implementation detail of "composefs". However, its existence leaks out to the rest of the system: e.g. it shows up in lsblk, there are objects in /sys for it, etc.
One thing I'd bikeshed here is that perhaps using the new mount API we could add something like this
diff --git a/libcomposefs/lcfs-mount.c b/libcomposefs/lcfs-mount.c
index ea2c2e9..b9d608d 100644
--- a/libcomposefs/lcfs-mount.c
+++ b/libcomposefs/lcfs-mount.c
@@ -393,7 +393,7 @@ static int lcfs_mount_erofs(const char *source, const char *target,
return -errno;
}
- res = syscall_fsconfig(fd_fs, FSCONFIG_SET_STRING, "source", source, 0);
+ res = syscall_fsconfig(fd_fs, FSCONFIG_SET_FD, "loop-file", src_fd, 0);
if (res < 0)
return -errno;
So instead of passing the /dev/loopX pathname, we just give an open fd to the kernel (to erofs) and internally it creates the loopback setup. But the key here is that this block device would be exclusively owned by the erofs instance; it wouldn't be visible to userspace.
We're using struct timespec for the mtime and ctime. However this has format:
struct timespec64 {
time64_t tv_sec; /* seconds */
long tv_nsec; /* nanoseconds */
};
long is 32-bit on 32-bit architectures, so there is a hole in this struct there.
We should probably use our own struct here, packed with fixed-size fields.
Hi! I noticed that one of the nice features of composefs, the ability to store and share content at a file-level, aligns with a similar feature found in the BitTorrent v2 protocol.
This allows sharing of content files between images, even if the metadata (like the timestamps or file ownership) vary between images.
— https://github.com/containers/composefs#usecase-container-images
The fact that each file has its own hash tree, and that its leaves are defined to be 16 kiB, means that files with identical content will always have the same merkle root. This enables finding matches of the same file across different torrents.
From what I've seen, composefs' current/proposed image transfer protocol is roughly one http request per file from a dedicated host. I suspect a P2P-based image downloading system would achieve greater total throughput, lower latency, and better resilience for many installations.
The basic idea would be to make something similar to uber/kraken or dragonfly, both p2p-based docker image transfer registries, but for composefs images.
Has something like this been considered before?