lmctfy - Let Me Contain That For You

Note

We have been collaborating with Docker on libcontainer and are in the process of porting the core lmctfy concepts and abstractions to libcontainer. We are not actively developing lmctfy further and have moved our efforts to libcontainer. In the future, we hope to replace the core of lmctfy with libcontainer.

Introduction

lmctfy (pronounced l-m-c-t-fi, IPA: /ɛlɛmsitifаɪ/) is the open source version of Google’s container stack, which provides Linux application containers. These containers allow for the isolation of resources used by multiple applications running on a single machine. This gives the applications the impression of running exclusively on a machine. The applications may be container-aware and thus be able to create and manage their own subcontainers.

The project aims to provide the container abstraction through a high-level API built around user intent. The containers created are themselves container-aware within the hierarchy and can be delegated to be managed by other user agents.

lmctfy was designed and implemented with specific use-cases and configurations in mind and may not work out of the box for all use-cases and configurations. We do aim to support more use-cases and configurations, so please feel free to contribute patches or send e-mail to the mailing list so that we may incorporate them into the roadmap.

lmctfy is released as both a C++ library and a CLI.

Current Status

lmctfy is currently stalled as we migrate the core concepts to libcontainer and build a standard container management library that can be used by many projects.

lmctfy is beta software and may change as it evolves. The latest release version is 0.5.0. It currently provides isolation for CPU, memory, and devices. It also allows for the creation of Virtual Hosts which are more heavily isolated containers giving the impression of running as an independent host.

Getting Started

This section describes building the CLI, running all unit tests, and initializing the machine. The CLI Commands section provides some examples of CLI operations and C++ Library describes the use of the underlying library.

Dependencies

lmctfy depends on the following libraries and expects them to be available on the system:

Additionally, to build lmctfy you also need:

  • make
  • go compiler
  • g++ or clang version with C++11 support (tested with g++-4.7 and clang-3.2)

We've tested the setup on Ubuntu 12.04+. We are happy to accept patches that add support for other setups.

Building the CLI

To build the lmctfy CLI:

make -j <number of threads> lmctfy

The CLI should now be available at: bin/lmctfy/cli/lmctfy

Building the C++ Library

To build the lmctfy library:

make -j <number of threads> liblmctfy.a

The library should now be available at: bin/liblmctfy.a.

Running Unit Tests

To build and run all unit tests:

make -j <number of threads> check

Initialization

lmctfy has been tested on Ubuntu 12.04+ and on the Ubuntu 3.3 and 3.8 kernels. lmctfy runs best when it owns all containers in a machine, so it is not recommended to run lmctfy alongside LXC or another container system (although, given some configuration, it can be made to work).

In order to run lmctfy we must first initialize the machine. This only needs to happen once and is typically done when the machine first boots. If the cgroup hierarchies are already mounted, then an empty config is enough and lmctfy will auto-detect the existing mounts:

lmctfy init ""

If the cgroup hierarchies are not mounted, those must be specified so that lmctfy can mount them. The current version of lmctfy needs the following cgroup hierarchies: cpu, cpuset, cpuacct, memory, and freezer. cpu and cpuacct are the only hierarchies that can be co-mounted; all others must be mounted individually. For details on configuration specifications, take a look at InitSpec in lmctfy.proto. An example configuration mounting all of the hierarchies in /sys/fs/cgroup:

lmctfy init "
  cgroup_mount:{
    mount_path:'/sys/fs/cgroup/cpu'
    hierarchy:CGROUP_CPU hierarchy:CGROUP_CPUACCT
  }
  cgroup_mount:{
    mount_path:'/sys/fs/cgroup/cpuset' hierarchy:CGROUP_CPUSET
  }
  cgroup_mount:{
    mount_path:'/sys/fs/cgroup/freezer' hierarchy:CGROUP_FREEZER
  }
  cgroup_mount:{
    mount_path:'/sys/fs/cgroup/memory' hierarchy:CGROUP_MEMORY
  }"

The machine should now be ready to use lmctfy for container operations.

Container Names

Container names closely mimic filesystem paths since they express a hierarchy of containers (i.e.: containers can be inside other containers; these are called subcontainers or child containers).

Allowable characters for container names are:

  • Alpha numeric ([a-zA-Z0-9]+)
  • Underscores (_)
  • Dashes (-)
  • Periods (.)

An absolute path is one that is defined from the root (/) container (i.e.: /sys/subcont). Container names can also be relative (i.e.: subcont). In general and unless otherwise specified, regular filesystem path rules apply.

Examples:

   /           : Root container
   /sys        : the "sys" top level container
   /sys/sub    : the "sub" container inside the "sys" top level container
   .           : the current container
   ./          : the current container
   ..          : the parent of the current container
   sub         : the "sub" subcontainer (child container) of the current container
   ./sub       : the "sub" subcontainer (child container) of the current container
   /sub        : the "sub" top level container
   ../sibling  : the "sibling" child container of the parent container

CLI Commands

Create

To create a container run:

lmctfy create <name> <specification>

Please see lmctfy.proto for the full ContainerSpec.

Example (create a memory-only container with 100MB limit):

lmctfy create memory_only "memory:{limit:100000000}"

Destroy

To destroy a container run:

lmctfy destroy <name>

List

To list all containers in a machine, ask to recursively list from root:

lmctfy list containers -r /

You can also list only the current subcontainers:

lmctfy list containers

Run

To run a command inside a container run:

lmctfy run <name> <command>

Examples:

lmctfy run test "echo hello world"
lmctfy run /test/sub bash
lmctfy run -n /test "echo hello from a daemon"

Other

Use lmctfy help to see the full command listing and documentation.

C++ Library

The library comprises the ::containers::lmctfy::ContainerApi factory, which creates, gets, destroys, and detects ::containers::lmctfy::Container objects that can be used to interact with individual containers. Full documentation for the lmctfy C++ library can be found in lmctfy.h.

Roadmap

The lmctfy project proposes a container stack with two major layers we’ll call CL1 and CL2. CL1 encompasses the driver and enforcement of the container policy set by CL2. CL1 creates and maintains the container abstraction for higher layers, and it should be the only layer that directly interacts with the kernel to manage containers. CL2 is what develops and sets container policy; it uses CL1 to enforce the policy and manage containers. For example: CL2 (a daemon) implements a policy that the amount of CPU and memory used by all of a machine’s containers must not exceed the amount of available CPU and memory (as opposed to overcommitting memory in the machine). To enforce that policy it uses CL1 (library/CLI) to create containers with memory limits that add up to the machine’s available memory. Alternate policies may involve overcommitting a machine’s resources by X% or creating levels of resources with different guarantees for quality of service.

The lmctfy project currently provides the CL1 component. The CL2 is not yet implemented.

CL1

CL1 currently provides robust CPU and memory isolation only. Our roadmap includes support for the following:

  • Disk IO Isolation: The specification is mostly complete, we’re missing the controller and resource handler.
  • Network Isolation: The specification and cgroup implementation are still up in the air.
  • Support for Root File Systems: Specifying and building root file systems.
  • Disk Images: Being able to import/export a container’s root file system image.
  • Checkpoint Restore: Being able to checkpoint and restore containers on different machines.

CL2

The most basic CL2 would use a container policy that ensures the fair sharing of a machine’s resources without allowing overcommitment. We aim to eventually implement a CL2 that provides different levels of guaranteed quality of service. In this scheme some levels are given stronger quality of service than others. The following CL2 features are supported in our roadmap:

  • Monitoring and statistics support.
  • Admission control and feasibility checks.
  • Quality of Service guarantees and enforcement.

We have started work on CL2 under the cAdvisor project.

Kernel Support

lmctfy was originally designed and implemented around a custom kernel with a set of patches on top of a vanilla Linux kernel. As such, some features work best in conjunction with those kernel patches. However, lmctfy should work without them. It should detect available kernel support and adapt accordingly. We’ve tested lmctfy on vanilla Ubuntu 3.3 and 3.8 kernels. Please report any issues you find with other kernel versions.

Some of the relevant kernel patches:

  • CPU latency: This adds the cpu.lat cgroup file to the cpu hierarchy. It bounds the CPU wakeup latency a cgroup can expect.
  • CPU histogram accounting: This adds the cpuacct.histogram cgroup file to the cpuacct hierarchy. It provides various histograms of CPU scheduling behavior.
  • OOM management: Series of patches to enforce priorities during out of memory conditions.

Contributing

Interested in contributing to the project? Feel free to send a patch or take a look at our roadmap for ideas on areas of contribution. Follow Getting Started above and it should get you up and running. If not, let us know so we can help and improve the instructions. There is some documentation on the structure of lmctfy in the primer.

Mailing List

The project mailing list is [email protected]. The list will be used for announcements, discussions, and general support. You can subscribe via groups.google.com.

lmctfy's People

Contributors

abhishekrai, adamvduke, ezhuk, jakewharton, jeffreyroberts, jonjonsonjr, kyurtsever, nipun-sehrawat, philips, rainbowrun, rjnagal, vishh, vmarmol, zohaib1020


lmctfy's Issues

Issues with Debian Wheezy

$ sudo lmctfy init "
>   cgroup_mount:{
>     mount_path:'/dev/cgroup/cpu'
>     hierarchy:CGROUP_CPU hierarchy:CGROUP_CPUACCT
>   }
>   cgroup_mount:{
>     mount_path:'/dev/cgroup/cpuset' hierarchy:CGROUP_CPUSET
>   }
>   cgroup_mount:{
>     mount_path:'/dev/cgroup/freezer' hierarchy:CGROUP_FREEZER
>   }
>   cgroup_mount:{
>     mount_path:'/dev/cgroup/memory' hierarchy:CGROUP_MEMORY
>   }"
Command exited with error message: 9: Failed to mount hierarchy with ID "memory" at "/dev/cgroup/memory"
$ lmctfy  -v
lmctfy version 0.1-0
$ uname -a
Linux byok 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux

I'm running on Google Compute Engine. ([email protected] if you wanna talk off bug about how to replicate this).

Create proper 0.2.0 release

In order for github to give you a proper tarball, you need to bake a release/create a tag. This would make it easier to update the package in linux distributions (such as Gentoo). Thanks.

Using memory controller when swap accounting is disabled

We are seeing that on some kernels, when swap accounting is not enabled, lmctfy is unable to create any containers having memory limit. For example:


$ lmctfy create /test/abhishek ' cpu { limit: 1000 } memory { limit: -1 }'
Command exited with error message: 14: Failed to write "-1" to file "/sys/fs/cgroup/memory/test/abhishek/memory.memsw.limit_in_bytes" for hierarchy 7

$ echo -1 >/sys/fs/cgroup/memory/test/abhishek/memory.memsw.limit_in_bytes 
bash: echo: write error: Operation not supported

$ uname -a
Linux abhishek-samsung 3.8.0-44-generic #66~precise1-Ubuntu SMP Tue Jul 15 04:01:04 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

At ThoughtSpot, we are seeing this on all 3.8 kernels that we have in our dev environment. However, this works fine on 3.5 and 3.11 kernels which are the other two common kernels we have in our dev environment.

The root cause for this "operation not supported" seems to come from this code from mm/memcontrol.c which is returning EOPNOTSUPP.


static int mem_cgroup_write(struct cgroup *cont, struct cftype *cft,
                            const char *buffer)
{
        struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
        enum res_type type;
        int name;
        unsigned long long val;
        int ret;

        type = MEMFILE_TYPE(cft->private);
        name = MEMFILE_ATTR(cft->private);

        if (!do_swap_account && type == _MEMSWAP)
                return -EOPNOTSUPP;

Rebooting the kernel with swap accounting enabled fixes the problem. In our environment, we always try to place a limit on memory, even if -1 in some cases, and not being able to do that is a limitation. I was curious to see how Docker handles this, and from the docs it looks like Docker also recommends swap accounting to be enabled, but requires it only if the user wants swap accounting.

We can work around this issue by enabling swap accounting on all machines, but filing it to see if there is a better approach. I noticed some comments in the code about "feature detection", and in some sense, this is a problem of feature detection where we have an invalid expectation about the kernel features.

One possible fix for this particular issue is to make the following change in MemoryResourceHandler::Update(). Does this look reasonable?


  // Set the swap limit if it was specified. The default is -1 if it was not
  // specified during a replace.
  // TODO(zohaib): swap_limit must be greater than or equal to the limit. We
  // need to check that this is true.
  if (memory_spec.has_swap_limit()) {
     RETURN_IF_ERROR(
         memory_controller_->SetSwapLimit(Bytes(memory_spec.swap_limit())));
   } else if (policy == Container::UPDATE_REPLACE) {
-    Status status = memory_controller_->SetSwapLimit(Bytes(-1));
-    // This may not be supported in all kernels so don't fail if it is not
-    // supported and not specified.
-    if (!status.ok() && status.error_code() != ::util::error::NOT_FOUND) {
-      return status;
+    // When swap limit is not specified, set it only when swap accounting is
+    // enabled.  Since swap accounting cannot be modified without rebooting the
+    // kernel, it's OK to not update the swap limit when swap accounting is
+    // disabled regardless of the previous value.
+    Status swap_accounting_enabled = memory_controller_->GetSwapLimit();
+    if (swap_accounting_enabled.ok()) {
+      Status status = memory_controller_->SetSwapLimit(Bytes(-1));
+      // This may not be supported in all kernels so don't fail if it is not
+      // supported and not specified.
+      if (!status.ok() && status.error_code() != ::util::error::NOT_FOUND) {
+        return status;
+      }
     }
   }

lmctfy init fails

(Cross posting from https://groups.google.com/d/topic/lmctfy/W_JoHrrOGwE/discussion because this seems to be the right place)

Hello,
I built the latest release of lmctfy (0.5) and tried creating a cgroup hierarchy using the following config:
cgroup_mount:{
mount_path:'/sys/fs/cgroup/freezer'
hierarchy:CGROUP_FREEZER
}
cgroup_mount:{
mount_path:'/sys/fs/cgroup/cpu'
hierarchy:CGROUP_CPU
hierarchy:CGROUP_CPUACCT
}
cgroup_mount:{
mount_path:'/sys/fs/cgroup/cpuset'
hierarchy:CGROUP_CPUSET
}
cgroup_mount:{
mount_path:'/sys/fs/cgroup/memory'
hierarchy:CGROUP_MEMORY
}

The command I run to initialize cgroups is (the machine doesn't have any cgroups mounted already):
sudo ./lmctfy init -c ../../../cgroup_config/config.ascii

It fails with the following message:
Command exited with error message: 5: lmctfy requires a canonical tasks cgroup hierarchy, none were found
try using --stderrthreshold to get more info

Reading through the source code of lmctfy_impl.cc, it seems that freezer cgroup is expected to be mounted as the canonical cgroup task tracking hierarchy. Could someone point out if I am missing some steps?

Please note that this problem was not present in lmctfy 0.3.1 release.

Thanks,
Nipun

Full ContainerSpec documentation

Hi there,

I'm having trouble exploring the available fields for a ContainerSpec. I managed to set memory limit from an example, but couldn't find an obvious place in the docs or source where they are listed. At first glance it looks like each resource handler adds fields in a reserved part of the spec?

Failure to create container can leave behind state requiring manual intervention

Consider the following sequence of events:


$ lmctfy create /test/abhishek 'cpu { limit: 1000 } memory { limit: -1 }'
Command exited with error message: 14: Failed to write "-1" to file "/sys/fs/cgroup/memory/test/abhishek/memory.memsw.limit_in_bytes" for hierarchy 7

The "/test" container being created above spans the CPU and memory resource controllers. Here, lmctfy was successful in creating the state under the freezer and cpu controllers, but had an error in creating the memory controller state. Ignore the cause of the failure for the purpose of this issue.

Here's what the cgroup file system looks like at this point:


$ find /sys/fs/cgroup/ -name abhishek
/sys/fs/cgroup/memory/test/abhishek

So, there is a leftover directory for test/abhishek under memory controller, but not under others. This confuses subsequent commands. For example, destroy throws the following error:


$ lmctfy destroy -f /test/abhishek
Command exited with error message: 5: Can't get non-existent container "/test/abhishek"

Trying to re-run the create command throws up a different error this time:


$ lmctfy create /test/abhishek ' cpu { limit: 1000 } memory { limit: 1000 }'
Command exited with error message: 6: Expected cgroup "/sys/fs/cgroup/memory/test/abhishek" to not exist.

To summarize, manual intervention is necessary to recover from this state, which is unfortunate. I realize the manual intervention may be necessary in some other cases as well if say the application unexpectedly crashes in an intermediate state. Either we should fix it, or provide a "cleanup" operation which, given a container, cleans up all its state, including any such leftover state. The "destroy" operation is unable to do this as it only acts if the container's node under "freezer" exists.

All of this weirdness is because of the leftover state of memory controller after the failure of the first create operation. Deleting this directory brings us back to the original create error again:


$ rmdir /sys/fs/cgroup/memory/test/abhishek
$ lmctfy create /test/abhishek ' cpu { limit: 1000 } memory { limit: 1000 }'
Command exited with error message: 14: Failed to write "-1" to file "/sys/fs/cgroup/memory/test/abhishek/memory.memsw.limit_in_bytes" for hierarchy 7

From the code for the "create" operation, it looks like lmctfy is smart about cleaning up some state. In particular, ContainerApiImpl::Create() uses UniqueDestroyPtr<> for resource handlers to ensure that if some resource handlers were successfully created, before one failed, state of the previously initialized resource handlers gets cleaned up.

But, within the context of the same resource handler, this is not true. In the following code, perhaps we could inject a Destroy() call if the Update() call fails?


StatusOr<ResourceHandler *> CgroupResourceHandlerFactory::Create(
    const string &container_name,
    const ContainerSpec &spec) {
  // Create the ResourceHandler for the container.
  StatusOr<ResourceHandler *> statusor_handler =
      CreateResourceHandler(container_name, spec);
  if (!statusor_handler.ok()) {
    return statusor_handler.status();
  }
  unique_ptr<ResourceHandler> handler(statusor_handler.ValueOrDie());

  // Run the create action before applying the update.
  RETURN_IF_ERROR(handler->CreateResource(spec));

  // Prepare the container by doing a replace update.
  Status status = handler->Update(spec, Container::UPDATE_REPLACE);
  if (!status.ok()) {
    return status;
  }

  return handler.release();
}

Duplicate SplitStringUsing etc.

The following 2 functions exist in both lmctfy/strings/strutil.cc and lmctfy/strings/split.cc

SplitStringUsing(std::string const&, char const*, std::vector<std::string, std::allocator<std::string> >*)

SplitStringAllowEmpty(std::string const&, char const*, std::vector<std::string, std::allocator<std::string> >*)

install script fails

Hello guys,

Tried to start an install on ubuntu 12.04 and the whole thing fails at grr_config_updater initialize step in the script.

 Initialize the configuration, building clients and setting options.

Traceback (most recent call last):
File "/usr/bin/grr_config_updater", line 9, in
load_entry_point('grr==0.2', 'console_scripts', 'grr_config_updater')()
File "/usr/lib/python2.7/dist-packages/grr/lib/distro_entry.py", line 46, in ConfigUpdater
from grr.tools import config_updater
File "/usr/lib/python2.7/dist-packages/grr/tools/config_updater.py", line 18, in
from grr.lib import server_plugins
File "/usr/lib/python2.7/dist-packages/grr/lib/server_plugins.py", line 34, in
from grr.server import server_plugins
File "/usr/lib/python2.7/dist-packages/grr/server/server_plugins.py", line 13, in
from grr.lib import export
File "/usr/lib/python2.7/dist-packages/grr/lib/export.py", line 15, in
from google.protobuf import message_factory
ImportError: cannot import name message_factory

Any idea how I can solve this?

Thanks

swap_limit semantics are confusing

Hello,

MemorySpec allows users to set 'swap_limit', whose value ultimately maps to memory.memsw.limit_in_bytes. This is a bit confusing as these two entities represent different things: swap_limit gives the impression that one can specify the limit of swap space available to the application, whereas memory.memsw.limit_in_bytes value is the sum of memory and swap usage.

So, if I were to create a container with 50M memory and no swap, its config would currently look like:
memory {
limit: 50M
swap: 50M
}

whereas, the following captures the intent better:
memory {
limit: 50M
swap: 0
}

What do folks think? I can make the required change, if this change in semantics is acceptable to the authors. We can deprecate the proto field and introduce a new one that specifies just the swap limit.

Thanks,
Nipun

Namespacing

I couldn't find a way to apply namespaces to containers. I'm not sure if that would be part of the spec, similar to resource limits, or rather an argument to each individual run? Either way, I can't find an example in the docs or the source.

Thanks!

Cannot create containers

When trying to create a container I get the following error

$ sudo lmctfy create test "memory: {limit: 1000000}"
Command exited with error message: 5: Expected cgroup "/sys/fs/cgroup/test" to exist.
try using --stderrthreshold to get more info

That message is weird, as it is part of the Get method.

Even if I manually create /sys/fs/cgroup/test then I get the following error:

$ sudo lmctfy create test "memory: {limit: 1000000}"
Command exited with error message: 6: Can't create existing container "/test"
try using --stderrthreshold to get more info

Note: I'm running this on Debian and followed what was suggested in #8. However, my init command did not succeed with the default parameters and I had to initialize it with init "", which worked.

The output of /proc/cgroups is:

$ cat /proc/cgroups 
#subsys_name    hierarchy   num_cgroups enabled
cpuset  1   3   1
cpu 1   3   1
cpuacct 1   3   1
memory  1   3   1
devices 1   3   1
freezer 1   3   1
net_cls 1   3   1
blkio   1   3   1
perf_event  1   3   

Mounted cgroups:

$ cat /proc/mounts | grep cgroup
cgroup /sys/fs/cgroup cgroup rw,relatime,perf_event,blkio,net_cls,freezer,devices,memory,cpuacct,cpu,cpuset 0 0
