fuse-archive's Introduction

Fuse-Archive

fuse-archive is a program that serves an archive or compressed file (e.g. foo.tar, foo.tar.gz, foo.xz or foo.zip) as a read-only FUSE file system.

It is similar to mount-zip and fuse-zip but speaks a larger range of archive or compressed file formats.

It is similar to archivemount but can be much faster (see the Performance section below) although it can only mount read-only, not read-write.

Build

$ git clone https://github.com/google/fuse-archive.git
$ cd fuse-archive
$ make

On a Debian system, you may first need to install some dependencies:

$ sudo apt install libarchive-dev libfuse-dev

Performance

Create a single .tar.gz file that is 256 MiB decompressed and 255 KiB compressed (the file just contains repeated 0x00 NUL bytes):

$ truncate --size=256M zeroes
$ tar cfz zeroes-256mib.tar.gz zeroes

Create a mnt directory:

$ mkdir mnt

fuse-archive timings:

$ time fuse-archive zeroes-256mib.tar.gz mnt
real    0m0.443s

$ dd if=mnt/zeroes of=/dev/null status=progress
524288+0 records in
524288+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.836048 s, 321 MB/s

$ fusermount -u mnt

archivemount timings:

$ time archivemount zeroes-256mib.tar.gz mnt
real    0m0.581s

$ dd if=mnt/zeroes of=/dev/null status=progress
268288512 bytes (268 MB, 256 MiB) copied, 569 s, 471 kB/s
524288+0 records in
524288+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 570.146 s, 471 kB/s

$ fusermount -u mnt

Here, fuse-archive takes about the same time as archivemount to scan the archive, bind the mountpoint and daemonize, but it is roughly 700× faster (0.83s vs 570s) at copying out the decompressed contents. This is because fuse-archive does not use archivemount's quadratic-complexity algorithm.
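
The difference comes down to arithmetic. Below is a rough cost model, an illustration of the complexity argument rather than either tool's actual code: if the decoder has to restart from the beginning of the compressed stream for every read that is not strictly sequential, reading an N-chunk file chunk by chunk performs 1 + 2 + ... + N = N(N+1)/2 chunk decodes in total, i.e. O(N^2), whereas keeping the decoder state between reads decodes each chunk exactly once, i.e. O(N).

// Rough cost model only; not fuse-archive or archivemount code.
#include <cstdint>
#include <iostream>

int main() {
  // The 256 MiB file above, read in 128 KiB chunks.
  const std::uint64_t chunks = 256ULL * 1024 * 1024 / (128 * 1024);
  const std::uint64_t sequential = chunks;                  // decode each chunk once
  const std::uint64_t restart = chunks * (chunks + 1) / 2;  // re-decode the whole prefix per read
  std::cout << "sequential decode: " << sequential << " chunk decodes\n"
            << "restart per read:  " << restart << " chunk decodes ("
            << restart / chunks << "x more work)\n";
}

For the file above this is roughly a 1000x difference in decode work, the same order of magnitude as the measured gap between 0.83s and 570s.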

Disclaimer

This is not an official Google product. It is just code that happens to be owned by Google.


Updated in May 2022.

fuse-archive's People

Contributors

fdegros, nigeltao


fuse-archive's Issues

Discussion of performance

Hello again and thank you for fixing my last issue even on New Year's Day :).

I tried to include fuse-archive in my performance comparison and I noticed some things, which beget questions. First, here are my preliminary results as tested on an encrypted Intel SSD 660p 2TB:

[Chart: archivemount-comparison]

  • In my opinion, the 58x speedup over archivemount mentioned in the GitHub README is misleading, because at that point the mount point is not yet ready for use! I do not see the significance of whether the process daemonizes before or after the bulk of the processing has been done. To measure the time for fuse-archive to finish mounting (upper right plot), I had to wrap fuse-archive in a custom fusearchive script, which looks like this: fusearchive() { fuse-archive "$@" && stat -- "${@: -1}" &>/dev/null; }, because fuse-archive daemonizes even when the mount point is not ready yet. I wait for it to be ready by simply stat-ing the mount point, which hangs until then. I also see an indication that the quadratic scaling issue has been fixed, in the tests for the archives with 0B files (plus-sign markers, all line styles): starting from around 300k files, archivemount (blue lines) begins, very reproducibly, to take quadratically more time, whereas fuse-archive (green lines) stays O(n) and becomes considerably faster than archivemount.
  • I could not reproduce a general 682x speedup for copying out the contents of an uncompressed file (lower left chart). fuse-archive does generally seem to be a bit faster, especially in the compressed tests, but that is not even a full order of magnitude, and it still seems to be O(number of files) even for uncompressed archives, which is something that can be improved upon, as I did in ratarmount. I think that in your speedup benchmarks you might simply have compared getting the contents of a file at the beginning of an archive, which is faster, against a file at the end of an archive! In my benchmarks, I test 5 relatively random files and show the median, min and max with the error bars. I am going to improve on this by using tar's --sort=name option, because then I can infer a file's position in the archive from its name and can test with, e.g., only the last file in the archive to trigger the worst-case scenario.
  • The time for a simple find inside the mount point is similar to ratarmount, which is an order of magnitude slower than archivemount. This is surprising but exciting, because I have been trying to track down why ratarmount is slower than archivemount in this metric ever since I started working on ratarmount! Do you have an idea what you might be doing differently from archivemount? Could it be because archivemount uses the FUSE low-level interface? In the develop branch I am already speeding up ratarmount quite a bit over the released version by implementing the readdir variant that returns not only the names but also the attributes for each file, but even that was not enough to fully close the gap to archivemount. (Does fuse-archive implement the readdir variant that returns the attributes for each file instead of just the name? If not, that might explain why ratarmount is a bit faster in this metric even though it is Python-based; see the sketch just below this list.)
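
For reference, here is a minimal sketch of that "readdir with attributes" variant as it looks under the fuse3 high-level API, where the filler callback accepts a populated struct stat together with FUSE_FILL_DIR_PLUS. This is an illustration under that assumption, not fuse-archive's actual code, and lookup_entries() is a hypothetical stand-in for the mounter's in-memory directory index.

#define FUSE_USE_VERSION 31
#include <fuse.h>

#include <cerrno>
#include <cstddef>
#include <cstring>
#include <sys/stat.h>

struct Entry {
  const char* name;
  struct stat st;
};

// Hypothetical stand-in for the index built while scanning the archive; here
// it exposes a single fixed file at the root.
static const Entry* lookup_entries(const char* path, std::size_t* count) {
  static Entry root[1];
  if (std::strcmp(path, "/") != 0) {
    *count = 0;
    return nullptr;
  }
  root[0].name = "example.txt";
  root[0].st.st_mode = S_IFREG | 0444;
  root[0].st.st_size = 42;
  root[0].st.st_blocks = 1;
  *count = 1;
  return root;
}

static int my_readdir(const char* path, void* buf, fuse_fill_dir_t filler,
                      off_t /*offset*/, struct fuse_file_info* /*fi*/,
                      enum fuse_readdir_flags /*flags*/) {
  std::size_t n = 0;
  const Entry* entries = lookup_entries(path, &n);
  if (!entries) return -ENOENT;
  const auto no_flags = static_cast<fuse_fill_dir_flags>(0);
  filler(buf, ".", nullptr, 0, no_flags);
  filler(buf, "..", nullptr, 0, no_flags);
  for (std::size_t i = 0; i < n; ++i) {
    // FUSE_FILL_DIR_PLUS marks the attributes as complete, so the kernel can
    // answer the stat() calls issued by find or ls -l without an extra
    // getattr round trip per entry.
    if (filler(buf, entries[i].name, &entries[i].st, 0, FUSE_FILL_DIR_PLUS))
      break;
  }
  return 0;
}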

The benchmark script can be found here. I did not yet push the changes necessary for also benchmarking fuse-archive.

C implementation

Is there any possibility of providing a C implementation?
Both libarchive and libfuse have C implementations; the current C++ one is maybe not so friendly for integrated usage.

Cannot Mount Archive Generated From 7-Zip-ZSTD

Hi, I can't mount archives generated by 7-Zip-ZSTD.
The error shown is:

sick@sick-virtualbox:~$ fuse-archive sharedvm/test_compress_zstd.7z $(mktemp -d)
fuse-archive: libarchive error for sharedvm/test_compress_zstd.7z: Unknown codec ID: 4f71101
sick@sick-virtualbox:~$ fuse-archive sharedvm/test_compress_brotli.7z $(mktemp -d)
fuse-archive: libarchive error for sharedvm/test_compress_brotli.7z: Unknown codec ID: 4f71102
sick@sick-virtualbox:~$ fuse-archive sharedvm/test_compress_lz4.7z $(mktemp -d)
fuse-archive: libarchive error for sharedvm/test_compress_lz4.7z: Unknown codec ID: 4f71104

The test archive contains some lorem ipsum text from LaTeX and an image generated with ImageMagick.

test_compress_test.zip

Newa Our Master.

Unable to compile it on Ubuntu 22.04

Hi,

I am unable to compile fuse-archive on Ubuntu 22.04; I get this error:

make: pkg-config: No such file or directory
make: pkg-config: No such file or directory
mkdir -p out
g++   src/main.cc   -o out/fuse-archive
In file included from /usr/include/fuse/fuse.h:26,
                 from /usr/include/fuse.h:9,
                 from src/main.cc:39:
/usr/include/fuse/fuse_common.h:33:2: error: #error Please add -D_FILE_OFFSET_BITS=64 to your compile flags!
   33 | #error Please add -D_FILE_OFFSET_BITS=64 to your compile flags!
      |  ^~~~~
make: *** [Makefile:25: out/fuse-archive] Error 1

I have to say that I was able to compile it (v0.1.13) before upgrading Ubuntu to 22.04.

Thanks for the help.

Empty folders not mounted

I am noticing that if a tar.gz archive contains empty folders, these are not mounted.
Is this behavior intended or am I missing something?

Here is how to reproduce it:

  1. Create a test archive:
$ mkdir test-fuse-archive
$ mkdir -p test-fuse-archive/dir1/dir1.1
$ mkdir -p test-fuse-archive/dir2/dir2.1
$ echo "existing file" > test-fuse-archive/dir2/dir2.1/file

$ find test-fuse-archive 
test-fuse-archive
test-fuse-archive/dir2
test-fuse-archive/dir2/dir2.1
test-fuse-archive/dir2/dir2.1/file
test-fuse-archive/dir1
test-fuse-archive/dir1/dir1.1
  2. The empty folder is properly extracted by tar:
$ mkdir out_tgz && tar xvaf test_fuse_archive.tgz -C out_tgz
$ find out_tgz 
out_tgz
out_tgz/test-fuse-archive
out_tgz/test-fuse-archive/dir2
out_tgz/test-fuse-archive/dir2/dir2.1
out_tgz/test-fuse-archive/dir2/dir2.1/file
out_tgz/test-fuse-archive/dir1
out_tgz/test-fuse-archive/dir1/dir1.1
  3. fuse-archive mounts neither the empty folder dir1.1 nor its parent folder dir1:
git clone https://github.com/google/fuse-archive.git -b v0.1.14
cd fuse-archive
make
./out/fuse-archive ~/test_fuse_archive.tgz /mnt/tmp
find /mnt/tmp
/mnt/tmp
/mnt/tmp/test-fuse-archive
/mnt/tmp/test-fuse-archive/dir2
/mnt/tmp/test-fuse-archive/dir2/dir2.1
/mnt/tmp/test-fuse-archive/dir2/dir2.1/file

Feel free to close the issue if this is the intended behavior.

Security Policy violation Binary Artifacts

This issue was automatically created by Allstar.

Security Policy Violation
Project is out of compliance with Binary Artifacts policy: binaries present in source code

Rule Description
Binary Artifacts are an increased security risk in your repository. Binary artifacts cannot be reviewed, allowing the introduction of possibly obsolete or maliciously subverted executables. For more information see the Security Scorecards Documentation for Binary Artifacts.

Remediation Steps
To remediate, remove the generated executable artifacts from the repository.

Artifacts Found

  • test/data/archive.iso

Additional Information
This policy is drawn from Security Scorecards, which is a tool that scores a project's adherence to security best practices. You may wish to run a Scorecards scan directly on this repository for more details.


Allstar has been installed on all Google managed GitHub orgs. Policies are gradually being rolled out and enforced by the GOSST and OSPO teams. Learn more at http://go/allstar

This issue will auto resolve when the policy is in compliance.

Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer.

Support for indexed tar.xz file from pixz tool

I have already tested several FUSE tools for compressed files, including the well-known archivemount.
I have a lot of log files in 7z format.
Finally, I tried the excellent pixz and its lzofs FUSE counterpart.
They are great, but they do not give transparent access to the tar contents.

For a solid-compressed 7z file, I know that fuse-archive has to index the archive before mounting it, and I need to wait for that.
But for a pixz tar file this is not necessary, because it already has an internal index.

So, is it possible to use the pixz internal index when mounting?

Note:
The pixz code for building and accessing the index can be found in the common.c and write.c files on GitHub.
The functions are named *_file_index_*.

fuse-archive reports the number of blocks taken by each file as zero

fuse-archive reports the number of blocks taken by each file as zero, which gives some confusing sizes with some tools.

Example:

$ fuse-archive "Big One.zip" mnt

$ ls -ls mnt
total 0
0 -r--r--r-- 1 fdegros primarygroup 6777995272 Mar 26  2020 'Big One.txt'

$ stat "mnt/Big One.txt" 
  File: mnt/Big One.txt
  Size: 6777995272      Blocks: 0          IO Block: 4096   regular file
Device: 39h/57d Inode: 2           Links: 1
Access: (0444/-r--r--r--)  Uid: (270632/ fdegros)   Gid: (89939/primarygroup)
Access: 1970-01-01 10:00:00.000000000 +1000
Modify: 2020-03-26 00:00:44.000000000 +1100
Change: 1970-01-01 10:00:00.000000000 +1000
 Birth: -

$ du mnt
0       mnt

$ df mnt
Filesystem     1K-blocks  Used Available Use% Mounted on
fuse-archive           0     0         0    - /usr/local/google/home/fdegros/Downloads/mnt
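
A plausible remedy, sketched under the assumption that getattr already knows each entry's uncompressed size (an illustration, not necessarily the change that was actually made): fill in st_blocks using the POSIX convention of 512-byte units.

#include <sys/stat.h>

// Illustration only: populate the size-related stat fields so that du, ls -s
// and similar tools show sensible numbers. st_blocks is counted in 512-byte
// units regardless of st_blksize.
static void set_size_fields(struct stat* st, off_t uncompressed_size) {
  st->st_size = uncompressed_size;
  st->st_blocks = (uncompressed_size + 511) / 512;  // round up to whole blocks
  st->st_blksize = 4096;                            // preferred I/O size
}

With this, the 6777995272-byte file above would report 13238273 blocks instead of 0, and du would show the file's actual size.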

fuse-archive silently fails for paths beginning with './'

Create a tar and try to mount it in this manner:

touch foo
tar -cf invalid-pathname.tar ./foo
mkdir mounted
fuse-archive invalid-pathname.tar mounted
ls -lA mounted  # "total 0", no files are shown but `mount` shows it as mounted
fusermount -u mounted

Try to gather information by keeping it in the foreground:

fuse-archive -f invalid-pathname.tar mounted

Output:

fuse-archive: archive entry in invalid-pathname.tar has invalid pathname: ./foo

The tar works without problems with archivemount.

migrate to fuse3

fuse2 doesn't seem to be maintained.

TODO: read about fuse3 and understand what would be the gain for fuse-archive.
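
For orientation, a sketch of two of the high-level callback signature changes a fuse2-to-fuse3 port would have to absorb, based on the libfuse 3 headers; this is an illustration, not a migration plan. One of the gains is the high-level "readdir plus" mode (FUSE_FILL_DIR_PLUS) mentioned in the performance discussion above.

#define FUSE_USE_VERSION 31
#include <fuse.h>

// fuse2: int (*getattr)(const char*, struct stat*);
// fuse3: an extra fuse_file_info* argument, which may be NULL when the kernel
//        queries by path rather than by open handle.
int my_getattr(const char* path, struct stat* st, struct fuse_file_info* fi);

// fuse2: void* (*init)(struct fuse_conn_info*);
// fuse3: an extra fuse_config*, through which settings such as uid/gid
//        overrides and kernel caching behaviour are exposed to the filesystem.
void* my_init(struct fuse_conn_info* conn, struct fuse_config* cfg);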

No way to get fuse-archive version

Looks like there's no way to get the current version (0.1.9) by running the executable:

# fuse-archive --version
fuse-archive: missing archive_filename argument

# fuse-archive --version myzip.zip /mnt
FUSE library version: 2.9.9
fusermount3 version: 3.9.0
using FUSE kernel interface version 7.19
fuse: failed to unmount /mnt: Invalid argument

Even looking through the source code, I couldn't find any references to the version.
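
One common libfuse idiom for this, sketched as an assumption rather than as a description of how fuse-archive handles it, is to intercept --version with fuse_opt_parse before the archive and mount-point arguments are validated. The version string below is a placeholder.

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <fuse_opt.h>

#include <cstdio>
#include <cstdlib>

enum { KEY_VERSION };

static const struct fuse_opt g_version_opts[] = {
    FUSE_OPT_KEY("-V", KEY_VERSION),
    FUSE_OPT_KEY("--version", KEY_VERSION),
    FUSE_OPT_END,
};

static int version_opt_proc(void* /*data*/, const char* /*arg*/, int key,
                            struct fuse_args* /*out_args*/) {
  if (key == KEY_VERSION) {
    std::printf("fuse-archive version X.Y.Z\n");  // placeholder version string
    // Alternatively, forward --version to libfuse with fuse_opt_add_arg() so
    // that the FUSE library version is printed as well.
    std::exit(EXIT_SUCCESS);
  }
  return 1;  // keep all other arguments (archive name, mount point, -o ...)
}

// In main(), before any other argument handling:
//   struct fuse_args args = FUSE_ARGS_INIT(argc, argv);
//   fuse_opt_parse(&args, nullptr, g_version_opts, version_opt_proc);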

decompressing individual .gz files in a directory (compression without tar envelope)

Hello,

this is more a question than issue. First, let me describe my use-case:

I have a directory full of .gz files with vastly differing sizes - from 159 bytes to 4.4 GiB per file:

$ ls -lSh /data/czds/zonefiles/
-rw-r--r-- 1 pspacek pspacek  4.4G Feb 23 14:04 com.txt.gz
...
-rw-r--r-- 1 pspacek pspacek   159 Feb 23 14:26 xn--cg4bki.txt.gz

The use case would be to "mount" the source directory and then transparently decompress the .gz files in it, so I can run a utility on it that requires seeking in the files (and thus cannot simply read gzip output piped in via stdin).

This is not supported (I did not expect it to work, but was curious about error handling :-)):

$ ./my-fuse-archive /data/czds/zonefiles /tmp
fuse-archive: could not open /data/czds/zonefiles: fuse-archive: could not read archive file

To my surprise it somewhat works with a single .gz file:

$ ./my-fuse-archive /data/czds/zonefiles/com.txt.gz /tmp/mount
$ ls -l /tmp/f
total 0
-r--r--r-- 1 pspacek pspacek 24301854277 Feb 23 02:33 com.zone.46894
$ file /tmp/mount/com.zone.46894 
/tmp/mount/com.zone.46894: ASCII text, with very long lines (302)

File sizes & also md5 sums are all okay:

$ pigz -c -d /data/czds/zonefiles/com.txt.gz | wc -c
24301854277

$ time pigz -c -d /mnt/experiment/czds/new/zonefiles/com.txt.gz | md5sum
44461e319488dc9eca92444f68dd0019  -

real	0m47.204s
user	1m39.729s
sys	0m14.170s

$ time md5sum /tmp/mount/com.zone.46894
44461e319488dc9eca92444f68dd0019  /tmp/mount/com.zone.46894

real	0m51.924s
user	0m29.740s
sys	0m5.372s

Okay, cool, so at first glance it seems I should use your software as it is: just loop through the list of files and mount each file separately. A bunch of symlinks would then solve the naming, etc.

Nevertheless, I have a couple of questions for you:

  • Where does the name in the mount point come from? It does not seem to be in the archive (or maybe gzip just does not output it?):
$ gzip -l com.txt.gz 
         compressed        uncompressed  ratio uncompressed_name
         4699561414          2827017797 -66.2% com.txt

Another randomly selected .gz file contains a file named data, which is unrelated to the original file name net.txt.gz.

  • The first ls operation on the mount takes ages, a time comparable to decompressing the whole archive, presumably because it looks for the end of the first gzip stream to see whether another gzip stream follows. Is this intentional? Would you be willing to add an option that treats .gz files as one-item archives and thus makes the initial listing fast?
$ time ls /tmp/mount
com.zone.46894

real	0m34.677s
user	0m0.001s
sys	0m0.000s
  • Would you be interested in an option that uses the original file name, without the .gz suffix, for the names in the mount?

  • And finally, assuming the answers above were mostly "yes", are you interested in a more complete feature-request description for mounting directories and transparently decompressing the files in them? I could write it down if it is not a waste of time.

Thank you very much for your work on this project!

BTW, you did impressive work on decompression speed (or on the selection of decompressor): this mount hack decompresses the file about 4x faster than stock gzip and pigz!

Nonstandard-C++ in src/main.cc

my_operations is initialized using non-standard C++ syntax. Why not just initialize with a function instead?

struct fuse_operations init_my_operations() {
  struct fuse_operations opers = {};
  opers.getattr = my_getattr;
  opers.readlink = my_readlink;
  opers.open = my_open;
  opers.read = my_read;
  opers.release = my_release;
  opers.readdir = my_readdir;
  opers.init = my_init;
  opers.destroy = my_destroy;
  return opers;
}

static struct fuse_operations my_operations = init_my_operations();
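
For reference, the syntax in question appears to be C-style designated initializers, which GCC accepts in C++ as an extension and which C++20 standardized in a restricted form (designators must follow the members' declaration order, with no nesting). With a C++20 compiler the aggregate could therefore also be written directly; a sketch, reusing the callback names above:

// C++20 designated initializers; the designators must appear in the same
// order as the members are declared in struct fuse_operations.
static const struct fuse_operations my_operations = {
    .getattr = my_getattr,
    .readlink = my_readlink,
    .open = my_open,
    .read = my_read,
    .release = my_release,
    .readdir = my_readdir,
    .init = my_init,
    .destroy = my_destroy,
};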

Feature request: mount directories containing archive files

Use case

I have a directory tree containing both a large number of bare files, and a large number of archives. I would like to mount a FUSE filesystem that gives a read-only view of this directory tree, but wherein all archives are instead exposed as directories, as if each had been separately mounted via fuse-archive.

For example, if I have the following tree:

dir/archive1.tar.gz  # contains contents1.txt and contents2.txt
dir/file1.txt
dir/inner/archive2.tar.gz  # contains contents3.txt and contents4.txt
dir/inner/file2.txt

and I mount it with fuse-archive dir mnt, I expect to see the following in the mount point:

mnt/archive1.tar.gz/contents1.txt
mnt/archive1.tar.gz/contents2.txt
mnt/file1.txt
mnt/inner/archive2.tar.gz/contents3.txt
mnt/inner/archive2.tar.gz/contents4.txt
mnt/inner/file2.txt

Performance and limitations

I imagine that when the contents of a directory are listed, each file in the backing directory will need at least some inspection to determine whether it is an unpackable archive, and this necessarily incurs some performance penalty (which I'm willing to accept). I expect the most effective solution to this problem is for users to avoid large, flat directories.

Provided that users exercise reasonable restraint in making parallel accesses, it should be possible to mount and use a directory tree containing, say, a million archives. For example, it should be possible to run find on the mount point, with performance at least equal to (and ideally much better than) finding all files, checking which ones are archives, sequentially mounting each archive individually, and running find on the resulting mount points.

Access to each of the archives should be lazy, and the initial mounting process should not require recursively scanning the entire directory tree. In particular, even if the directory tree contains a million archives, there should be no appreciable difference in performance between:

  1. mounting a single archive deep in the directory tree, and then accessing a single file within it, or
  2. mounting the top of a large directory tree, and then accessing a single file within a single archive deep in that tree

Recursive unpacking is not required for my use case; however, I can imagine other users may want this option.

Considered alternatives

For a small number of archives, it might be possible to use overlayfs, unionfs, or similar with a startup script that simply enumerates archives in the backing directory and mounts them individually. However, as the number of archives increases, this solution would probably also incur linear growth in startup time, in number of processes, in number of mounts, and in memory usage.

It might also be possible for this script to use autofs mounts for each archive, allowing the fuse-archive processes to be created lazily. Unfortunately, this probably doesn't solve the issue with startup time or the number of mounts, and I expect that it would be quite difficult to get auto-unmounting right.

Prior art

  • ratarmount supports this use case, though I believe only for a subset of the archive types that fuse-archive supports.
  • rar2fs supports this use case for rar files specifically.
  • #5 touches on this use case, but IMHO the discussion there seemed a bit more focused on some gzip-specific questions that aren't really on-topic for this feature request (though some may come up when considering edge cases).

Hard links not mounted

Hard links produce an error and are not mounted.
The issue is likely related to how libarchive handles hard links.

How to reproduce:

  1. Creating a tar.gz archive with two hard-linked files:
$ echo "testing fs-archive with hard links" > file1

$ ln file1 file2

$ stat file1 file2 
  File: file1
  Size: 35        	Blocks: 8          IO Block: 4096   regular file
Device: 254,0	Inode: 794969      Links: 2
Access: (0644/-rw-r--r--)  Uid: ( 1000/  eugene)   Gid: ( 1000/  eugene)
Access: 2023-07-31 16:12:00.330384852 +0200
Modify: 2023-07-31 16:12:00.330384852 +0200
Change: 2023-07-31 16:12:07.123700568 +0200
 Birth: 2023-07-31 16:12:00.330384852 +0200
  File: file2
  Size: 35        	Blocks: 8          IO Block: 4096   regular file
Device: 254,0	Inode: 794969      Links: 2
Access: (0644/-rw-r--r--)  Uid: ( 1000/  eugene)   Gid: ( 1000/  eugene)
Access: 2023-07-31 16:12:00.330384852 +0200
Modify: 2023-07-31 16:12:00.330384852 +0200
Change: 2023-07-31 16:12:07.123700568 +0200
 Birth: 2023-07-31 16:12:00.330384852 +0200

$ tar cvaf test_fuse_archive_hard_links.tar.gz file1 file2
file1
file2
  2. Mounting with the latest fuse-archive fails:
$ git clone https://github.com/google/fuse-archive.git -b v0.1.14
$ cd fuse-archive
$ make all
$ ./out/fuse-archive ~/test_fuse_archive_hard_links.tar.gz /mnt/tmp
fuse-archive: irregular non-link file in /home/eugene/test_fuse_archive_hard_links.tar.gz: /file2
$ ls /mnt/tmp 
file1
  3. Re-extracting the tar.gz works properly and hard links are preserved:
$ mkdir ~/out && tar xvaf ~/test_fuse_archive_hard_links.tar.gz -C ~/out
$ stat ~/out/file1 ~/out/file2 
  File: /home/eugene/out/file1
  Size: 35        	Blocks: 8          IO Block: 4096   regular file
Device: 254,0	Inode: 15874535    Links: 2
Access: (0644/-rw-r--r--)  Uid: ( 1000/  eugene)   Gid: ( 1000/  eugene)
Access: 2023-07-31 16:49:52.049826264 +0200
Modify: 2023-07-31 16:12:00.000000000 +0200
Change: 2023-07-31 16:49:52.049826264 +0200
 Birth: 2023-07-31 16:49:52.049826264 +0200
  File: /home/eugene/out/file2
  Size: 35        	Blocks: 8          IO Block: 4096   regular file
Device: 254,0	Inode: 15874535    Links: 2
Access: (0644/-rw-r--r--)  Uid: ( 1000/  eugene)   Gid: ( 1000/  eugene)
Access: 2023-07-31 16:49:52.049826264 +0200
Modify: 2023-07-31 16:12:00.000000000 +0200
Change: 2023-07-31 16:49:52.049826264 +0200
 Birth: 2023-07-31 16:49:52.049826264 +0200
  4. By debugging further, it is clear that the "regular" bit flag is not set for the second file:
diff --git a/src/main.cc b/src/main.cc
index 6afb15f..6c77912 100644
--- a/src/main.cc
+++ b/src/main.cc
@@ -1130,6 +1130,8 @@ insert_leaf(struct archive* a,
 
   std::string symlink;
   mode_t mode = archive_entry_mode(e);
+  mode_t filetype = archive_entry_filetype(e);
+  syslog(LOG_INFO, "AE_IFREG: 0x%04x, file: %s, mode 0x%04x, filetype: 0x%04x\n", AE_IFREG, redact(pathname.c_str()), mode, filetype);
   if (S_ISLNK(mode)) {
     const char* s = archive_entry_symlink_utf8(e);
     if (!s) {
./out/fuse-archive ~/test_fuse_archive_hard_links.tar.gz /mnt/tmp                                                                                                                                          130
fuse-archive: AE_IFREG: 0x8000, file: /file1, mode 0x81a4, filetype: 0x8000
fuse-archive: AE_IFREG: 0x8000, file: /file2, mode 0x01a4, filetype: 0x0000
fuse-archive: irregular non-link file in /home/eugene/test_fuse_archive_hard_links.tar.gz: /file2

A possible solution has already been submitted in PR #11.
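
As background, a sketch of how libarchive exposes such entries, not necessarily the fix adopted in PR #11: a hard link in a tar shows up as an entry whose pathname is the link name and whose archive_entry_hardlink() value is the path of the earlier entry it links to; as the debug output above shows, such an entry can come back without the regular-file type bit set, which is why the "irregular non-link file" check rejects it.

#include <archive.h>
#include <archive_entry.h>

// Sketch: recognise a hard-link entry so it is not rejected as an "irregular
// non-link file". The returned target is the path of the entry stored earlier
// in the archive that this entry links to.
static bool is_hard_link(struct archive_entry* e) {
  const char* target = archive_entry_hardlink_utf8(e);
  return target != nullptr && *target != '\0';
}

A mounter could then point the new node at the node already created for that target path instead of treating the entry as an irregular file.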
