googleComputeEngineR

googleComputeEngineR provides an R interface to the Google Cloud Compute Engine API, for launching virtual machines. It aims to make the deployment of cloud resources for R as painless as possible, and includes some special templates to launch R-specific resources such as RStudio, Shiny, and OpenCPU with a few lines from your local R session.

See all documentation on the googleComputeEngineR website

TL;DR - Creating an RStudio server VM

  1. Configure a Google Cloud Project with billing.
  2. Download a service account key JSON file.
  3. Put your default project, zone and JSON file location in your .Renviron.
  4. Run library(googleComputeEngineR) and auto-authenticate.
  5. Run vm <- gce_vm(template = "rstudio", name = "rstudio-server", username = "mark", password = "mark1234") (or other credentials) to start up an RStudio Server.
  6. Wait for it to install, then log in via the returned URL.
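As a sketch, step 3 might look like the following in your .Renviron; the paths and IDs shown are placeholders, and the project value should be the Project ID, not the display name:

```
# ~/.Renviron (illustrative values only)
GCE_AUTH_FILE="/path/to/service-account-key.json"
GCE_DEFAULT_PROJECT="my-project-id"
GCE_DEFAULT_ZONE="europe-west1-b"
```

After restarting R, library(googleComputeEngineR) should pick these up and auto-authenticate as in step 4.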

A video guide to setup and launching an RStudio server has been kindly created by Donal Phipps and is embedded below.

<iframe width="560" height="315" src="https://www.youtube.com/embed/1oM0NZbRhSI?rel=0" frameborder="0" allowfullscreen></iframe>

Thanks to

  • Scott Chamberlin for the analogsea package for launching Digital Ocean VMs, which inspired the SSH connector functions for this one.
  • Winston Chang for the harbor package, where the docker functions come from. If harbor is published to CRAN, it will become a dependency of this package.
  • Henrik Bengtsson for help integrating the fantastic future package, which allows asynchronous R functions to run on GCE clusters.
  • Carl Boettiger and Dirk Eddelbuettel for rocker, which provides the Docker containers behind some of the R templates used in this package.

Install

CRAN version:

install.packages("googleComputeEngineR")

Development version:

if (!require("ghit")) {
    install.packages("ghit")
}
ghit::install_github("cloudyr/googleComputeEngineR")


Issues

Question: GCP preemptible VMs

This is an awesome package! Really appreciate all the work you have put into this. I was just curious - is it possible to use the current package to create preemptible VMs?
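For reference, this is possible by passing a scheduling block through to gce_vm() (a sketch based on the usage shown in a later issue on this page; the gce_vm() call itself requires auth and a configured project to actually launch anything):

```r
## scheduling block requesting a preemptible (cheaper, may be reclaimed) VM
preemptible <- list(preemptible = TRUE)

## usage sketch -- needs live credentials:
# library(googleComputeEngineR)
# vm <- gce_vm(name = "cheap-worker",
#              template = "r-base",
#              predefined_type = "n1-standard-1",
#              scheduling = preemptible)
```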

Thanks so much,
Mike

WISH: Functions for setting up and returning an up-and-running instance

When I was trying out the package, I found myself creating the following utility functions:

gce_vm <- function(name, ...) {
  tryCatch(
    gce_get_instance(name),
    error = function(ex) gce_vm_template(name = name, ...)
  )
}

gce_vm_waitfor <- function(vm, max_tries = 100L, delta = 1.0) {
  ips <- gce_get_external_ip(vm)
  if (is.null(ips)) {
    gce_vm_start(vm)
    for (kk in seq_len(max_tries)) {
      vm <- gce_get_instance(vm$name)
      ips <- gce_get_external_ip(vm)
      if (!is.null(ips)) break
      Sys.sleep(delta)
    }
  }
  invisible(vm)
}

These would allow me to do:

if (!exists("vm")) {
  vm <- gce_vm(name = "r-demo", template = "r-base", predefined_type = "f1-micro")
  vm <- gce_vm_waitfor(vm)
  ## gce_ssh_setup(instance = vm, ...)
}
print(vm)


library("future")
cl <- as.cluster(vm)
plan(cluster, workers = cl)
x %<-% { Sys.info() }
print(x)

so that if the instance was already running and I reran my script, it would pick up the running instance. If not running, it would launch one and wait for it to be fully up and running.

I can imagine this is a common use pattern; does a function for this already exist in the package, or would it make sense to add something like this to the API?

Permission issue for new Rstudio users

Need to make the new user's folder permissions accessible to the rstudio user

gce_check_container(vm, "rstudio")

ERROR system error 13 (Permission denied) [path=/home/newuser/.rstudio, target-dir=]; OCCURRED AT: rstudio::core::Error rstudio::core::FilePath::createDirectory(const string&) const /home/ubuntu/rstudio/src/cpp/core/FilePath.cpp:795; LOGGED FROM: int main(int, char* const*) /home/ubuntu/rstudio/src/cpp/session/SessionMain.cpp:3303

Windows users can't SSH into VM

First: Great package. Really exciting stuff.

On a Windows 10 machine:

I'm having trouble accessing the VM created by either of the RStudio server templates (rstudio-hadleyverse and rstudio). After creating either one, I am able to SSH via the browser, SSH via gcloud, and SSH via putty to the appropriate user@externalIP (after adding the appropriate keys and checking to make sure the public key is in both the project metadata store and on the VM's authorized keys list itself), but I cannot access RStudio Server at port 8787 of the external IP, nor can I run commands like gce_push_registry.

Public SSH key uploaded to instance
Warning: Permanently added '104.197.245.222' (RSA) to the list of known hosts.
Permission denied (publickey).
Error: ssh failed
ssh -o BatchMode=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=C:\Users\Nick\AppData\Local\Temp\RtmpU5tD1m/hosts -i C:\Users\Nick\.ssh\google_compute_engine.ppk [email protected] "docker commit rstudio gcr.io/sandbox-157602/my_rstudio"
In addition: Warning message:
running command 'ssh -o BatchMode=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=C:\Users\Nick\AppData\Local\Temp\RtmpU5tD1m/hosts -i C:\Users\Nick\.ssh\google_compute_engine.ppk [email protected] "docker commit rstudio gcr.io/sandbox-157602/my_rstudio"' had status 255

Similarly, as I'm sure you could guess, I cannot connect from the command line; simple commands don't work (verbose output included):

C:\Users\Nick\AppData\Local\Google\Cloud SDK>ssh -vT -i C:\Users\Nick\.ssh\google_compute_engine.ppk [email protected] "echo foo"
OpenSSH_7.3p1 Microsoft_Win32_port_with_VS, OpenSSL 1.0.2d 9 Jul 2015
debug1: Connecting to 104.197.245.222 [104.197.245.222] port 22.
debug1: socket:460, io:00000257265B2EE0, fd:3
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: open - handle:00000000000001D0, io:00000257265B3880, fd:4
debug1: close - io:00000257265B3880, type:2, fd:4, table_index:4
debug1: key_load_public: No such file or directory
debug1: identity file C:\Users\Nick\.ssh\google_compute_engine.ppk type -1
debug1: open - CreateFile ERROR:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\Users\Nick\.ssh\google_compute_engine.ppk-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.3p1 Microsoft_Win32_port_with_VS Nov 29 2016
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.3
debug1: match: OpenSSH_7.3 pat OpenSSH* compat 0x04000000
debug1: Authenticating to 104.197.245.222:22 as 'nickshffer'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: [email protected]
debug1: kex: host key algorithm: rsa-sha2-512
debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none
debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ssh-rsa SHA256:7Tlsx+RlXnmXw8ldXRJ/8Rb6E1HCFpGtl8RqA/cO39Y
debug1: Host '104.197.245.222' is known and matches the RSA host key.
debug1: Found key in C:\Users\Nick/.ssh/known_hosts:1
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS received
debug1: close - io:00000257265B3670, type:2, fd:4, table_index:4
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: C:\Users\Nick\.ssh\google_compute_engine.ppk
debug1: open - handle:00000000000001D0, io:00000257265B3040, fd:4
debug1: close - io:00000257265B3040, type:2, fd:4, table_index:4
Enter passphrase for key 'C:\Users\Nick\.ssh\google_compute_engine.ppk':
debug1: No more authentication methods to try.
Permission denied (publickey).

Any thoughts?

Travis covr check timeout

The tests are all successful, but the final step of uploading the results to codecov.io fails.

I can't see what was introduced to make the timeout occur; maybe it's just going over the 20-minute limit. The tests themselves avoid the timeout via the below in .travis.yml:

script: 
  - |
    R CMD build .
    travis_wait 40 R CMD check googleComputeEngineR*tar.gz
after_failure:
- find *Rcheck -name '*.fail' -print -exec cat '{}' \;

Run functions in the cloud

Use an OpenCPU instance, load the local function + data (via googleCloudStorageR? or SSH/SCP), create a package, load into OpenCPU.

Run function via calling OpenCPU instance, get result JSON and parse back into R

Reproducibility

Make sure you can load, for example, RStudio, set up all packages and code, save that version of the Docker image, and reboot it as needed on a bigger VM etc.
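A hedged sketch of that workflow using the package's registry helpers; the save name and machine type are placeholders, and the exact arguments of gce_push_registry()/gce_tag_container() should be checked against the package documentation:

```r
## placeholder name for the image saved to the Google Container Registry
save_name <- "my-rstudio"

## usage sketch (requires a live, configured VM):
# gce_push_registry(vm, save_name = save_name, container_name = "rstudio")
# bigger_vm <- gce_vm(name = "rstudio-big",
#                     template = "rstudio",
#                     dynamic_image = gce_tag_container(save_name),
#                     predefined_type = "n1-highmem-8")
```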

message() instead of cat()

Some of the below should probably be using message() instead of cat():

$ grep -F "cat(" R/*.R
R/container.R:  cat("\n## ", paste0(template, " running at ", ip,ip_suffix),"\n")
R/container.R:  cat("\n You may need to wait a few minutes for the inital docker container to download and install before logging in.\n")
R/networks.R:    cat("\n External IP for instance", as.gce_instance_name(instance), " : ", ip, "\n")
R/operations.R:  cat("\nStarting operation...\n")
R/operations.R:      if(verbose) cat("\nOperation running...\n")
R/operations.R:      if(verbose) cat("\nChecking operation...\n")
R/operations.R:    cat("\nOperation complete in", 
[...]
R/utilities.R:    cat(prefix, x, "\n")

Otherwise they'll clutter up the output of, for instance, dynamic reports (which typically capture stdout).
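A minimal, self-contained illustration of the difference: message() signals a condition on stderr, so callers can silence it with suppressMessages() without losing results printed to stdout.

```r
## cat() writes to stdout (captured by reports); message() goes to stderr
say_status <- function(x) {
  message("Operation running...")   # suppressible status chatter
  cat("result:", x, "\n")           # actual output
  invisible(x)
}

## the status message can be silenced independently of the result:
out <- capture.output(suppressMessages(say_status(42)))
## out contains only the "result: 42" line
```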

Add firewall rule function

I finally managed to test the library from Mac and get RStudio Server running today, but before that I stumbled upon the firewall problem. GCE doesn't allow HTTP access by default, so I had to add this manually, but I thought it could be included in the package so as to make the whole "2 lines of R, get the IP and log in" concept more real ; )

I wrote the function below:

gce_add_firewall_rule <- function(name,
                                  protocol,
                                  ports,
                                  sourceRanges,
                                  project = gce_get_global_project()) {
  url <-
    sprintf("https://www.googleapis.com/compute/v1/projects/%s/global/firewalls",
            project)
  
  the_rule <- jsonlite::toJSON(list(
    name = name,
    allowed = list(list(IPProtocol = protocol, ports = list(ports))),
    sourceRanges = I(sourceRanges)),
    auto_unbox = TRUE
  )
  
  f <- gar_api_generator(
    url,
    "POST",
    customConfig = list(
      httr::add_headers("Content-type" = "application/json")
    ))

  invisible(f(the_body = the_rule))
}

It seems to do the job in a basic scenario: gce_add_firewall_rule("allow-http", "tcp", "80", "0.0.0.0/0"). It doesn't work with multiple ports, but this JSON stuff is killing me, so I'll have to give up for now.
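For what it's worth, the multiple-ports case may come down to jsonlite's auto_unbox: the API expects "ports" to be a JSON array of strings, and wrapping the vector in I() (instead of list(ports)) keeps it an array without extra nesting. A sketch of just the body construction, assuming jsonlite:

```r
library(jsonlite)

## build the firewall-rule body; I() keeps "ports" a JSON array of
## strings even for a single port, and avoids [[...]] nesting for many
make_rule_body <- function(name, protocol, ports, sourceRanges) {
  toJSON(list(
    name = name,
    allowed = list(list(IPProtocol = protocol, ports = I(ports))),
    sourceRanges = I(sourceRanges)
  ), auto_unbox = TRUE)
}

body <- make_rule_body("allow-web", "tcp", c("80", "443"), "0.0.0.0/0")
## parse back to check the shape of the generated JSON
parsed <- fromJSON(body, simplifyVector = FALSE)
```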

feature request to change size of boot disk on gce_vm_create

Related to issue #36, seems like an easier route to achieve that goal would be to add a parameter for the diskSizeGb in the gce_vm_create function.

I have a test implementation of this in a branch that is working in my limited tests.

Currently implemented as:

# optional parameter 
build_vm <- gce_vm_create('my-build-image3', disk_size_gb = 20)
build_vm <- gce_vm_create('my-build-image4')

This seems like a much cleaner way to increase the size of the boot disk, rather than creating the disk separately from the VM and passing in a reference.

Jules' updates

The rstudio-hadleyverse doesn't have the libxml2-dev .deb package. Had to install that first via shell.

The last vm2 <- gce_vm(name = "rstudio-big",... command needs a username/password line as well (but it warns you all right).

Restart VMs causes conflict of docker names

/usr/bin/docker: Error response from daemon: Conflict. The name "/rstudio" is already in use by container e043696ddfefec5502deaa1ea0345509f35a378074b1a24cbc25846d7de9d1a9. You have to remove (or rename) that container to be able to reuse that name..

Have to run

docker_cmd(vm, "rm rstudio")
gce_ssh(vm, "sudo systemctl start rstudio.service")

...to restart the image, or

docker_cmd(vm, "start rstudio")

...to rerun the same container.
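The two cases can be wrapped in a small helper; this is a sketch only, the helper name is hypothetical, and it assumes docker_cmd()/gce_ssh() error on failure as in the snippets above:

```r
## hypothetical helper: bring the "rstudio" container back up after a
## VM restart, clearing a stale container of the same name if needed
restart_rstudio <- function(vm) {
  started <- tryCatch({
    docker_cmd(vm, "start rstudio")       # reuse the existing container
    TRUE
  }, error = function(e) FALSE)
  if (!started) {
    docker_cmd(vm, "rm rstudio")          # remove the name conflict
    gce_ssh(vm, "sudo systemctl start rstudio.service")
  }
  invisible(vm)
}
```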

rstudio cloudinit template fails

Images run but fail with

s6-mkdir: warning: unable to mkdir /var/run/s6: Permission denied

View stuff via:

> sudo journalctl -u cloudservice
> docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
39f68803aad6        rocker/rstudio      "/init"             45 seconds ago      Exited (1) 39 seconds ago                       rstudio
> docker logs 39f68803aad6

Cannot build VMs from Windows 10

Hey,

after following the documentation (creating VM and adding valid SSH keys) and trying to use docker_build() (for a Shiny app, but that's not important, I guess), I was getting this:
Error in cli_tools() : ssh, scp not found on your computer Install the missing tool(s) and try again

I dived into the code, went to analogsea docs and found these issues/solutions by Hadley - pachadotdev/analogsea#81 pachadotdev/analogsea#88

After adding "C:\Program Files\RStudio\bin\msys-ssh-1000-18" to the PATH variable and getting excited, I went to another error:
Permission denied (publickey).
Error: ssh failed
ssh -o BatchMode=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=C:\Users\Tomek\AppData\Local\Temp\Rtmpsdfqdz/hosts -i C:\Users\Tomek\Desktop\Projekty\shiny\my-ssh.ppk [email protected] "mkdir -p -m 0755 buildimage"
In addition: Warning message:
running command 'ssh -o BatchMode=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=C:\Users\Tomek\AppData\Local\Temp\Rtmpsdfqdz/hosts -i C:\Users\Tomek\Desktop\Projekty\shiny\my-ssh.ppk [email protected] "mkdir -p -m 0755 buildimage"' had status 255

This seems to be similar to another analogsea issue: pachadotdev/analogsea#114

I'll be able to test the package on Mac next week and while I expect it to work, it would be great to find a way to fix the SSH connection on Windows.

Not able to use a custom disk_source in `gce_vm_create`

I'm trying to launch a gcloud VM with a larger boot disk, so that I can build my images without running out of disk space.

So my naive first attempt was to create a boot disk from the source image & pass that option to gce_vm_create:

boot_disk <- gce_make_boot_disk(
  diskType = 'PERSISTENT',
  sourceImage = 'https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-8',
  diskSizeGb = 30,
  diskName = 'boot')

build_vm <- gce_vm_create('my-build-image', disk_source = boot_disk)

Here is the output:

> boot_disk <- gce_make_boot_disk(diskType = 'PERSISTENT', sourceImage = 'https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-8', diskSizeGb = 30, diskName = 'boot')
> build_vm <- gce_vm_create('my-build-image', disk_source = boot_disk)
Error in gce_vm_create("my-build-image", disk_source = boot_disk) : 
  Can specify only one of 'image_project' or 'disk_source' arguments.

I then tried various ways of nullifying the 'image_project' parameter, none of which were successful:

> build_vm <- gce_vm_create('my-build-image', disk_source = boot_disk, image_project = NA)
Error in if (nchar(image_project) > 0) { : 
  missing value where TRUE/FALSE needed
> build_vm <- gce_vm_create('my-build-image', disk_source = boot_disk, image_project = '')
Error in gce_vm_create("my-build-image", disk_source = boot_disk, image_project = "") : 
  Need to specify either an image_project or a disk_source

Finally, I thought maybe I could resize the boot disk after VM creation:

build_vm <- gce_vm(name = vm_name, 
                     template = template, 
                     predefined_type = 'n1-standard-2')
  
build_vm_disk <- gce_get_disk(disk = vm_name)
## resize disk 

But then I saw that the function (disks.resize) implementing the compute.disks.resize API method was commented-out in R/disks.R.

Please let me know if I am missing something - and/or if there is another way to pass the disk_source to gce_vm_create. Again thanks for the great packages!

Question: Massively parallel processing

So I am going through the Massively parallel processing tutorial, which is awesome and could really help me with my workflows. However, I keep getting an error when I use the plan function.

> plan(cluster, workers = fiftyvms)
Error: check_ssh_set(x) is not TRUE

Any idea on how to avoid this error? I am able to ssh into each of the instances using the gce_ssh function, so I think everything is fine with my ssh keys, but I could be missing something.

Thanks,
Mike

# Libraries
library(googleComputeEngineR)
library(future)

## auto auth to GCE via environment file arguments

## create 3 preemptible VMs as they are much cheaper
vm_names <- paste0("cpu", 1:3)

## specify the cheapest VMs that may get turned off
preemptible <- list(preemptible = TRUE)

## start up the VMs with R base on them (can customise via Dockerfiles)
fiftyvms <- lapply(vm_names, gce_vm, predefined_type = "n1-standard-1", template = "r-base", scheduling = preemptible)

# Check ssh on each of machines
> gce_ssh(fiftyvms[[1]], "echo foo")
2017-06-11 11:07:17> Public SSH key uploaded to instance
Warning: Permanently added 'XX.XXX.XXX.XXX' (RSA) to the list of known hosts.
foo
[1] TRUE
> gce_ssh(fiftyvms[[2]], "echo foo")
2017-06-11 11:07:26> Public SSH key uploaded to instance
Warning: Permanently added 'XX.XXX.XXX.XXX' (RSA) to the list of known hosts.
foo
[1] TRUE
> gce_ssh(fiftyvms[[3]], "echo foo")
2017-06-11 11:07:37> Public SSH key uploaded to instance
Warning: Permanently added 'XX.XXX.XXX.XXX' (RSA) to the list of known hosts.
foo
[1] TRUE

## once all launched, add to cluster
exists("ssh",fiftyvms[[1]])
plan(cluster, workers = fiftyvms)
Error: check_ssh_set(x) is not TRUE
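One workaround that may help (a sketch; it assumes gce_ssh_setup() is what records the SSH settings that check_ssh_set() tests for, which the successful gce_ssh() calls above suggest) is to set SSH explicitly on each VM object before building the cluster:

```r
## hypothetical helper: make sure every VM object carries SSH settings
## before the list is handed to future's plan()
setup_cluster_ssh <- function(vms) {
  lapply(vms, function(vm) gce_ssh_setup(vm))
}

## usage sketch (requires live VMs):
# fiftyvms <- setup_cluster_ssh(fiftyvms)
# plan(cluster, workers = fiftyvms)
```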

Cannot add a shiny app to shiny VM

While trying to add a Shiny app to a GCE VM spun up from the shiny template, I get an error

> vm <- gce_vm(name = "shiny-app", 
+              template = "shiny", 
+              predefined_type = "n1-standard-1")
VM running
> gce_shiny_addapp(instance = "shiny-app", 
+               shinyapp = 'home/Documents/Github/CampaignPlanner/AB_test/')
Error in as.environment(where) : 
  no item called "shiny-app" on the search list

containerit - Automatic generation of Dockerfiles of your R environment, oh my!

This promises some automation heaven
http://o2r.info/2017/05/30/containerit-package/

  1. Make your R code
  2. Run https://github.com/o2r-project/containerit to generate a Dockerfile with your system dependencies
  3. Use docker_build or build triggers to create image on Google container registry
  4. Launch VM using dynamic_image

This is quite exciting. Also works with Rmd and Shiny files.
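A rough sketch of the four steps above; the image name and file paths are placeholders, and the exact docker_build()/containerit signatures should be checked against their documentation before use:

```r
## placeholder target for the Google Container Registry
target_image <- "gcr.io/my-project/my-r-image"

## usage sketch (needs containerit installed plus a build VM):
# library(containerit)
# df <- dockerfile(from = "my_analysis.R")   # steps 1-2: capture dependencies
# write(df, file = "Dockerfile")
#
# docker_build(vm, ".", new_image = target_image)                      # step 3
# gce_vm("my-vm", template = "rstudio", dynamic_image = target_image)  # step 4
```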

Related issues
#41
https://cloudyr.github.io/googleComputeEngineR/articles/single-scheduler.html

Don't get the same shell when accessing the VM via SSH

I would be very grateful for any concrete tip to solve this:

  1. Create a VM with gce_vm()
  2. Log into the VM via gce_ssh_browser()
    I expected to have Linux commands available like apt-get, but instead get: command not available.

How can I get the "normal" Linux shell that you get when creating the VM within the Google Compute Engine UI ("create vm instance" | "SSH")?

This is important because some things install more easily on the Linux command line; e.g. for installing devtools, you need to do:
sudo apt-get -y install libcurl4-gnutls-dev libxml2-dev libssl-dev

Java not included

This is not a bug, just a question. I need Java on my googleComputeEngineR Compute Engine instance so that I can use rJava. What is the recommended way to do this? When I try to install Java manually, the /usr filesystem is read-only, so it doesn't seem like I can add /usr/bin/java, which rJava requires. I appreciate any help or advice that you can provide.

default image_project `cos-cloud` missing cron (or apt-get)

The example given at https://cloudyr.github.io/googleComputeEngineR/articles/single-scheduler.html fails because gce_vm_container (in container.R) defaults to the image_family cos-cloud (Container-Optimized OS), and this parameter cannot be overridden.
The Google Container-Optimized OS does not support package managers like apt-get (https://cloud.google.com/container-optimized-os/docs/resources/faq#what_is_the_software_package_manager_for_container-optimized_os). Therefore cron is not installed during docker_build in the example, which means scheduling via cronR is impossible.

I have replaced the default image_project with debian-cloud and the default image_family with debian-8 (valentinumbach@7584784). Now cron is installed during docker_build, but I cannot get RStudio Server to run on the Debian image. Possibly Ubuntu would be a better choice? (This setup worked for me: https://github.com/grantmcdermott/rstudio-compute-engine).

DOCS: Project name vs Project ID

So, I'm fairly new to GCP, so forgive if this is a FAQ, but I just created a new GCP project named 'research-2016'. This resulted in:

  • Project name: research-2016
  • Project ID: research-2016-149512
  • Project number: 627573128191

Given the above, it is not immediately clear from the docs what GCE_DEFAULT_PROJECT should be, but from trial and error I learned that it should be the Project ID. For the longest time I tried with the Project name, but got stuck with errors such as:

> d <- gce_get_project()
Request Status Code: 403
Error in checkGoogleAPIError(req) : 
  JSON fetch error: Access Not Configured. Compute Engine API has not been used in project 578809911671 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute_component/overview?project=578809911671 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.

Using the Project ID things work as expected. May I suggest to rename GCE_DEFAULT_PROJECT to GCE_DEFAULT_PROJECT_ID to make this non-ambiguous?

Also, when I have the above set up, I get:

> library(googleComputeEngineR)
Setting scopes to https://www.googleapis.com/auth/cloud-platform
If you need additional scopes set do so via options(googleAuthR.scopes.selected = c('scope1', 'scope2')) before loading library and include one required scope.
Successfully authenticated via ~/.ssh/GoogleComputeEngine/research-2016-00a0b75271f4.json
Set default project name to 'research-2016-149512'
Set default zone to 'us-west1-a'

Note how the message mentions the project name; should it say Project ID instead?

PS. I didn't notice this in my previous test setup, because there the project name and the project ID were identical.
