appscale / gts


AppScale is an easy-to-manage serverless platform for building and running scalable web and mobile applications on any infrastructure.

Home Page: https://www.appscale.com/community/

License: Apache License 2.0

Ruby 0.70% Shell 0.18% Python 87.89% HTML 1.23% CSS 1.18% JavaScript 4.38% Java 0.56% PHP 2.97% Makefile 0.06% Smarty 0.01% TeX 0.01% Emacs Lisp 0.01% C 0.70% Batchfile 0.03% Assembly 0.01% Roff 0.07% Vim Script 0.01% Cap'n Proto 0.01% Erlang 0.03% TSQL 0.01%

gts's Introduction

AppScale GTS


AppScale GTS is an open source serverless platform for building and running scalable web and mobile applications on any infrastructure.

The platform enables developers to focus solely on business logic in order to rapidly build scalable apps, cleanly separating it from deployment and scaling logic. It allows operations to provide a consistent, tunable environment that can simplify running and maintaining apps on multiple infrastructures. The business will benefit from faster time-to-market, reduced operational costs, maximized application lifetime, and the flexibility to integrate with new or existing technologies.

AppScale GTS is open source and modeled on the Google App Engine APIs, allowing developers to automatically deploy and scale unmodified Google App Engine applications over public and private cloud systems and on-premise clusters. It currently supports Python, Go, PHP, and Java applications. The software was developed by AppScale Systems, Inc., based in Santa Barbara, California, and Google. In 2019 the company ended commercial support for AppScale GTS; however, the source code remains available in this GitHub repository.

Why Use AppScale GTS?

The goal of AppScale GTS is to provide developers with a rapid, API-driven development platform that can run applications on any cloud infrastructure. AppScale GTS decouples application logic from its service ecosystem to give developers and cloud administrators control over application deployment, data storage, resource use, backup, and migration.

I Want ...

Documentation

Community and Support

Join the Community Google Group for announcements, help, and to discuss cloud research.


gts's Issues

In Euca, nginx incorrectly writes SSL configuration files

In Eucalyptus deployments, nginx uses the public IP address in the upstream configuration for the SSL port (8060 for the AppLoadBalancer). Since the public IP on port 8060 isn't reachable (it is blocked by the firewall), users see 403 Forbidden errors. Need to change nginx.rb (which writes our nginx config files) to use the private IP address, which is reachable, instead of the public IP.
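
A rough sketch of the fix as plain Ruby. The method name below is an assumption for illustration, not the actual nginx.rb API; the point is simply that the SSL upstream should point at the private IP rather than the public one.

    # Hypothetical helper for nginx.rb: build the SSL upstream block from the
    # node's private IP so the AppLoadBalancer stays reachable on port 8060.
    def self.ssl_upstream_config(app_name, private_ip, port=8060)
      "upstream #{app_name}_ssl {\n" +
      "  server #{private_ip}:#{port};\n" +
      "}\n"
    end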

Investigate NDB on AppScale

In theory the Next DB (NDB) module, a Datastore v2 API, should work on AppScale since it's a wrapper around Datastore v1, but we should still verify it ourselves and see if the sample apps they provide work for us.

Too many apicheckers and data generated by the apichecker

We have 3x apicheckers running on each node, which may be too many. Also, each apichecker writes data into the datastore that builds up over time and is never garbage collected. Either GC it, or reuse the datastore keys so that there is only one copy that keeps getting overwritten.

AppController crashes in response to failed ZooKeeper operations

If a ZooKeeper write fails, we throw a FailedZooKeeperOperationException. A stack trace showing this is:

/root/appscale/AppController/lib/zkinterface.rb:638:in `set': Failed to set path /appcontroller/state with data 
{:rc=>-7, :stat=>#<ZookeeperStat::Stat:0x7fc864a1e0a8 @exists=false>, :req_id=>11026} (FailedZooKeeperOperationException)
        from /root/appscale/AppController/lib/zkinterface.rb:178:in `write_appcontroller_state'
        from /root/appscale/AppController/djinn.rb:1461:in `backup_appcontroller_state'
        from /root/appscale/AppController/djinn.rb:795:in `job_start'
        from /root/appscale/AppController/djinnServer.rb:120

This crashes the AppController, and when it restarts, it removes all of the nginx config files on this machine, which makes users unable to access their applications. Because the AppController then fails to restart correctly, users are never able to access their applications again.

AppScale fails to successfully terminate on EC2

Start command:
/appscale-run-instances --machine ami-a1c772c8 --scp ~/appscale --min 2 --max 2 --test -v --infrastructure ec2 --keyname appscaleraj008 --group appscaleraj008 --force --file sample-apps/python/guestbook/

End command:
./appscale-terminate-instances --keyname appscaleraj008
About to terminate instances spawned via ec2 with keyname 'appscaleraj008'...
Unable to contact shadow node, shutting down via tools...
Client.InvalidInstanceID.Malformed: Invalid id: "r-81d7b6e7" (expecting "i-...")

Client.InvalidGroup.NotFound: The security group 'appscale' does not exist

The tools use the 'appscale' security group when it should be 'appscaleraj008', and they have trouble parsing the output of the EC2 tools.

Autoscaler scales beyond maximum number of nodes requested

The AppController uses ZooKeeper to detect if other nodes have failed, and spawns new nodes to take their place in those scenarios. However, it does not obey the --max flag that the user gives us, which indicates the total maximum number of virtual machines in use in a single AppScale deployment. Need to change the AppController to actually obey this limit.
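
A minimal sketch of the missing check, with the option and instance variable names assumed for illustration:

    # Only spawn a replacement node if doing so keeps us at or under --max.
    def can_spawn_another_node?
      @nodes.length < @options['max_images'].to_i  # option name is assumed
    end

    # In the failure-recovery path:
    # spawn_replacement_node if can_spawn_another_node?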

Multinode Eucalyptus deployment fails to send access key

When starting a multinode deployment on Eucalyptus, the access key fails to be copied over to the slave nodes. The infrastructure log on the head node complains about not having the access key, and it is therefore unable to start subsequent nodes.

Channel API does not work in Eucalyptus

Receiving works (connection to ejabberd/nginx is fine) but sending gives the following exception:
File "/root/appscale/AppServer/google/appengine/api/xmpp/xmpp_service_real.py", line 240, in _Dynamic_SendChannelMessage
client.auth(my_jid.getNode(), self.uasecret, resource=my_jid.getResource())
File "/usr/share/pyshared/xmpp/client.py", line 214, in auth
AttributeError: Client instance has no attribute 'Dispatcher'

Tested and works fine in a Xen deployment.

Advanced placement in Eucalyptus is broken

Used the following ips.yaml:

:master: node-1
:appengine:
- node-2
- node-3
- node-4
- node-5
- node-6
- node-7
:database:
- node-8
- node-9
- node-10
- node-11
- node-12

The logs show the head database node trying to start up the slaves, but it fails to do so because it is not able to SSH into the other nodes. This may be because the head database node is not also the master node, which already has the SSH keys set up.

RabbitMQ doesn't work for App Engine apps in EC2

When Google App Engine apps start in Amazon EC2 in a standard two node deployment, they throw this stack trace:

WARNING  2012-05-17 08:17:34,392 rdbms_mysqldb.py:90] The rdbms API is not available because the MySQLdb library could not be loaded.
ERROR    2012-05-17 08:17:37,717 base_connection.py:119] BlockingConnection: Socket Error on 3: 104
ERROR    2012-05-17 08:17:40,720 base_connection.py:119] BlockingConnection: Socket Error on 3: 104
ERROR    2012-05-17 08:17:40,721 dev_appserver_main.py:688] <class 'pika.exceptions.ChannelClosed'>:

Right now we just see this for Sisyphus, as it's the first App Engine app to start and it blocks all others from starting.

AppController does not properly revive upon failure

If the AppController crashes, then god revives it. However, the AppController does not properly start up in these scenarios and crashes once more. The problem appears to be ZooKeeper-related - need to get the stack trace it prints and fix it.

Bad applications crash appscale

If an application has a bad app.yaml file (one that throws an exception when parsed), AppScale will become unresponsive. Instead, swap out their app with one that shows an error message telling the developer to fix the issue and re-upload the application.

No way to add nodes via IP address

If a node fails, there currently is no way to add it back into an AppScale deployment once it powers back on. Similarly, there is no way for a system administrator to add nodes at will to an AppScale deployment given only their IP addresses (assuming SSH keys are synced).

epmd complains that ejabberd/rabbitmq are already running

god appears to believe that ejabberd and rabbitmq have failed and continuously tries to restart them, even though they may already be running. Need to check syslog for epmd to see when this occurs and if god can be instructed to correctly view these processes.

Advanced placement unable to determine the correct number of nodes

euca-delete-group appscale3 ; ./appscale-run-instances --infrastructure euca --test -v --appengine 20 --table hbase --machine emi-B9EB3C99 --scp --instance_type m1.xlarge --group appscale3 --force --force --file ~/querybenchmark/ --ips ips.yaml --keyname appscale049 -n 3
GROUP appscale3
About to start AppScale over a cloud environment with the euca tools with instance type m1.xlarge.
./../lib/../lib/common_functions.rb:1205:in `generate_node_layout': There were errors with the yaml file: The provided replication factor is too high. The replication factor (-n flag) cannot be greater than the number of database nodes. (BadConfigurationException)
        from ./../lib/appscale_tools.rb:283:in `run_instances'
        from ./appscale-run-instances:14

The number of database nodes is 6 but it only allows for 1x replication.

Getting ZooKeeper lock times out in multi-node EC2 deployment

Not sure if this is reproducible yet, but on the 1.6-rc1 public AMI (ami-e4a3048d), a two-node deployment appears to start up fine, but when the first node tries to grab the ZooKeeper lock, it hangs indefinitely. Need to find out why it hangs and ideally add some kind of timeout.

collectd may be writing to disk too often

syslog appears to contain a lot of output from collectd, which could be slowing down disk access on our machines. Need to investigate whether this is a problem and, if so, how to remedy it.

"Quiet mode" for logging

As far as producing output and logs is concerned, we currently have two options:

  1. -v flag - produces the maximum amount of output / logs
  2. no -v flag - produces the regular amount of output / logs

There currently is no option to produce the minimum amount of output. We need a "quiet" flag that fills this gap. At the least, the AppController needs to change appscale/AppController/lib/godinterface.rb, erasing this line:

w.log = "/var/log/appscale/#{WATCH}-#{port}.log"

to prevent god from writing logs for each process it monitors.
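
Alternatively, a quiet flag could keep the line but make it conditional. A minimal sketch, assuming a hypothetical 'quiet' option:

    # Only ask god to write per-process logs when not running in quiet mode.
    unless options['quiet']
      w.log = "/var/log/appscale/#{WATCH}-#{port}.log"
    end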

Add a tool to manually add nodes to a running AppScale deployment

Right now if the user indicates that a certain number of nodes should be used within AppScale, the system uses that many nodes. If the user wants to add another node later, this is currently not a trivial task (and it should be!). In EC2 and Eucalyptus, the user can write a Neptune script to do this, but it's far from fully tested and this can really be done better than the current implementation.

In summary, we need a new tool, along the lines of appscale-add-nodes, that accepts as inputs:

  • The number of nodes to add, or IP addresses that correspond to VMs that have AppScale installed.
  • The roles that each node should take on.
  • The keyname, used to indicate which AppScale deployment nodes should be added to.

Consider adding a tool to manually remove nodes upon completion of this tool (and throughout this tool's design).
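
A hypothetical sketch of how such a tool could be driven from Ruby; the class, method, and option names are illustrative only and not an existing API:

    # Add two already-imaged machines to a running deployment as AppServers.
    options = {
      'ips'     => ['192.168.1.10', '192.168.1.11'],  # or a node count instead
      'roles'   => ['appengine'],
      'keyname' => 'appscale-production'              # selects the deployment
    }
    AppScaleTools.add_nodes(options)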

Cassandra Connection Timeout

Under high load, Cassandra faces connection issues with the pycassa interface:
INFO:pycassa.pool:Connection 66232208 (:9160) in None (id = 51385744) failed: timed out

ERROR:root:Uncaught exception POST / (IP)
HTTPRequest(protocol='http', host='IP:8888', method='POST', uri='/', version='HTTP/1.0', remote_ip='IP', body='\x12\x0cdatastore_v3\x1a\x08RunQuery"@\n\x12appscalebenchmark4\x1a\x08TestItem#0\x03r\x0f\x1a\x05prop1 \x00_\x04\x08\xda\xe4)$\x80\x01d\xc8\x01\x01\xea\x01\x04100K', headers={'X-Real-Ip': 'IP', 'Content-Length': '90', 'Accept-Encoding': 'identity', 'Appdata': 'appscalebenchmark4', 'Protocolbuffertype': 'Request', 'X-Forwarded-For': 'IP', 'Host': 'IP:8888'})
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/tornado/web.py", line 688, in _execute
    getattr(self, self.request.method.lower())(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/tornado/web.py", line 774, in wrapper
    return method(self, *args, **kwargs)
  File "/root/appscale/AppDB/datastore_server.py", line 1813, in post
    self.remote_request(app_id, http_request_data)
  File "/root/appscale/AppDB/datastore_server.py", line 1613, in remote_request
    http_request_data)
  File "/root/appscale/AppDB/datastore_server.py", line 1710, in run_query
    app_datastore._Dynamic_Run_Query(app_id, query, clone_qr_pb)
  File "/root/appscale/AppDB/datastore_server.py", line 1551, in _Dynamic_Run_Query
    result = self.__GetQueryResults(query)
  File "/root/appscale/AppDB/datastore_server.py", line 1539, in __GetQueryResults
    results = strategy(self, query, filter_info, order_info)
  File "/root/appscale/AppDB/datastore_server.py", line 1121, in __SinglePropertyQuery
    startrow)
  File "/root/appscale/AppDB/datastore_server.py", line 1252, in __ApplyFilters
    end_inclusive=end_inclusive)
  File "/root/appscale/AppDB/cassandra/cassandra_interface.py", line 224, in range_query
    for key in keyslices:
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.3.0-py2.6.egg/pycassa/columnfamily.py", line 797, in get_range
    key_slices = self.pool.execute('get_range_slices', cp, sp, key_range, cl)
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.3.0-py2.6.egg/pycassa/pool.py", line 572, in execute
    return getattr(conn, f)(*args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.3.0-py2.6.egg/pycassa/pool.py", line 145, in new_f
    return new_f(self, *args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.3.0-py2.6.egg/pycassa/pool.py", line 145, in new_f
    return new_f(self, *args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.3.0-py2.6.egg/pycassa/pool.py", line 145, in new_f
    return new_f(self, *args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.3.0-py2.6.egg/pycassa/pool.py", line 145, in new_f
    return new_f(self, *args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.3.0-py2.6.egg/pycassa/pool.py", line 145, in new_f
    return new_f(self, *args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.3.0-py2.6.egg/pycassa/pool.py", line 140, in new_f
    (self._retry_count, exc.__class__.__name__, exc))
MaximumRetryException: Retried 6 times. Last failure was timeout: timed out
ERROR:root:500 POST / (IP) 3649.25ms

Uploaded Apps clutter /tmp

When running the tools, uploaded apps get placed in /tmp/<random_string>. Over time this clutters up /tmp, and it's not simple to clear out all the apps because a random string is used for the directory name.
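
One possible approach, sketched with standard-library Ruby: give the upload directory a recognizable prefix so stale copies are easy to find and sweep later.

    require 'tmpdir'

    # Creates something like /tmp/appscale-app-20121026-1234-abcd rather than a
    # purely random name, so a cleanup pass can remove /tmp/appscale-app-*.
    app_dir = Dir.mktmpdir('appscale-app-')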

EC2 startup: unable to log into slaves when the keyname has a "-" in it

Starting AppScale with the following command, using master tools and master appscale (with reporting of API status turned on, rather than just returning):
./appscale-run-instances --machine ami-a1c772c8 --scp ~/appscale --min 2 --max 2 --test -v --infrastructure ec2 --keyname appscale-raj-002 --group appscale-raj-002 --force

Once AppScale reports that it is up, log into the head node and then:
ssh appscale-image1

My result is:
The authenticity of host 'appscale-image1 (10.202.21.128)' can't be established.
RSA key fingerprint is 22:b6:03:f0:9f:6c:12:90:dc:95:9c:b3:cc:82:e6:b6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'appscale-image1' (RSA) to the list of known hosts.
Permission denied (publickey).

AppController erases nginx configuration files on bootup

When the AppController starts, it erases all of the files in /etc/nginx/sites-enabled and reloads nginx, which disables access to hosted applications. This is done to ensure that we start up in a clean, known state (in case the user forgot to run appscale-terminate-instances), but it causes the above problem if the AppController gets auto-revived by god.

Need to investigate whether this erasure should be done, or if making the AppController revive properly (#33) fixes this problem.

Make Infrastructure support into a Factory

Right now we don't support cloud infrastructures in a very pluggable way. In InfrastructureManager/lib/helper_functions.rb we basically have a big if/else clause with shell calls for each cloud infrastructure (e.g., EC2, Eucalyptus). Need to refactor this into an InfrastructureFactory, determine which methods are necessary to support a new cloud infrastructure in AppScale, and add tests accordingly.
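
A rough sketch of the proposed factory; the per-cloud class names are assumptions for illustration:

    class InfrastructureFactory
      # Map each supported --infrastructure value to its implementation class.
      INFRASTRUCTURES = {
        'ec2'  => EC2Infrastructure,        # assumed class
        'euca' => EucalyptusInfrastructure  # assumed class
      }

      def self.get_infrastructure(name)
        klass = INFRASTRUCTURES[name]
        raise "Unsupported infrastructure: #{name}" if klass.nil?
        klass.new
      end
    end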

NeptuneManager doesn't add/remove roles correctly

The NeptuneManager adds and removes roles by directly calling an AppController's add/remove role method, which starts the necessary role but doesn't update the ZooKeeper metadata information, causing that node to undo those changes when it contacts ZooKeeper for the latest list of roles. Need to just update ZooKeeper accordingly.

Add a top-level Rakefile that can be used to run tests on any component

Right now it's pretty hard for a non-expert user to run tests, generate documentation, and produce code coverage stats, as this requires intimate knowledge of how each component (written in Python or Ruby) does this. A Rakefile that contains all this information would do wonders for new users, as well as for automating the process. At a minimum, the Rakefile should:

  • Namespace each component, so 'rake appcontroller' does "everything" for the AppController only.
  • Have the same set of tasks for each component, so we can say "rake appcontroller:test" or "rake neptunemanager:test" without having to know exactly how these components implement testing.
  • Have TODOs and aborts for each task that isn't implemented (e.g., there aren't any tests for the UserAppServer, so that should complain and abort).
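
A minimal Rakefile sketch following that layout; the shell commands and task bodies are placeholders, not the components' real test entry points:

    namespace :appcontroller do
      desc 'Run the AppController test suite'
      task :test do
        sh 'cd AppController && rake test'  # placeholder command
      end
    end

    namespace :userappserver do
      desc 'Run the UserAppServer test suite'
      task :test do
        abort 'TODO: the UserAppServer has no tests yet'
      end
    end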

ZooKeeper Ephemeral locks periodically time out

Each AppController uses an ephemeral lock in ZooKeeper to indicate that its node is alive. However, the way that we currently use these locks may not be correct, because we see AppControllers losing their ephemeral locks periodically. This causes other AppControllers to think that node is dead, and spawn up additional nodes to take its place.

Minimum number of AppServers should be a constant

Right now we assume that the minimum number of AppServers per node should be 1, but to support use cases where this isn't the case, we should extract it to a class constant in appscale/AppController/djinn.rb. In particular, the below code should be changed to replace all instances of 1 with the new constant.

    if time_since_last_decision > SCALEDOWN_TIME_THRESHOLD and
      !@app_info_map[app_name][:appengine].nil? and
      appservers_running > 1

      Djinn.log_debug("Removing an AppServer on this node for #{app_name}")
      remove_appserver_process(app_name)
      @last_decision[app_name] = Time.now.to_i
    elsif !@app_info_map[app_name][:appengine].nil? and
      appservers_running <= 1

      Djinn.log_debug("Only 1 AppServer is running - don't kill it")

Failure to delete app data from ZooKeeper prevents app removal

Tried to upload the guestbook app and then remove it, but the AppController wasn't able to remove the guestbook app's data from ZooKeeper:

(controller-17443.log)

 [Fri Oct 26 17:40:05 +0000 2012] Is app guestbook running? true
[Fri Oct 26 17:40:06 +0000 2012] [ZK] trying to delete /apps/guestbook
[Fri Oct 26 17:40:06 +0000 2012] Delete failed - {:rc=>-111, :req_id=>201}

This causes the AppController to keep checking that everyone has stopped hosting the app, and it does so forever. Failed deletes should be retried with a backoff to reduce the likelihood of this failure.
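
A minimal retry-with-backoff sketch for the failing delete; the ZKInterface method name and the retry limit are assumptions for illustration:

    retries = 0
    begin
      ZKInterface.delete_app_entry(app_name)  # assumed wrapper for the ZK delete
    rescue FailedZooKeeperOperationException
      retries += 1
      raise if retries > 5
      Kernel.sleep(2 ** retries)  # back off 2, 4, 8, 16, 32 seconds
      retry
    end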

Tar file containing app is left in app folder

When the user gives us their application, we tar it up and pass it to each AppController. The AppController then untars it in /var/apps/name-of-app/app. If the user has specified that the app folder should be served, it could be possible to download the tar file containing the whole app. We should move this tar file up one folder to avoid this scenario.
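
A hedged sketch of the fix, with the tarball filename assumed: move the archive one level above the directory that can be served.

    require 'fileutils'

    tar = "/var/apps/#{app_name}/app/#{app_name}.tar.gz"  # assumed tarball name
    FileUtils.mv(tar, "/var/apps/#{app_name}/#{app_name}.tar.gz") if File.exist?(tar)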

Make the NeptuneManager run outside of AppScale

The NeptuneManager makes it easy to connect arbitrary applications to cloud services (compute, storage, and queue), but only runs within AppScale. It doesn't look like there's anything AppScale-intrinsic that prevents us from using the NeptuneManager outside of AppScale (or even running it outside of AppScale but connecting to an AppScale deployment) - can we make the NeptuneManager a stand-alone service?

InfrastructureManager is not using given user's credentials

When running in multinode setups on Eucalyptus, the InfrastructureManager does not appear to be setting the given credentials in the environment, causing errors like this to occur:

euca-describe-instances 2>&1
describe-instances says [EC2_ACCESS_KEY environment variable must be set.
Connection failed
]

Need to properly propagate those credentials and test accordingly.

MongoDB runs upon boot

Remove MongoDB from starting up at boot time, and only run it when MongoDB is the chosen DB.

SOAP calls to NeptuneManager time out

When running MapReduce jobs via the NeptuneManager, Neptune calls fail with the following stack trace:

/home/cgb/neptune/bin/../lib/neptune_manager_client.rb:95:in `make_call': We saw an unexpected error of the type Errno::ETIMEDOUT with the following message: (NeptuneManagerException)
Connection timed out - connect(2).
        from /home/cgb/neptune/bin/../lib/neptune_manager_client.rb:111:in `start_neptune_job'
        from /home/cgb/neptune/bin/../lib/neptune.rb:562:in `run_job'
        from /home/cgb/neptune/bin/../lib/neptune.rb:88:in `neptune'
        from ./run_mapreduce.rb:10
        from /home/cgb/neptune/bin/neptune:21:in `load'
        from /home/cgb/neptune/bin/neptune:21

Persistence

The data store should be persistent by default and flushable on command.

Autoscaler scales down when --appengine is used

The autoscaler will scale down the number of AppServers on a node even if the --appengine flag is used. The correct behavior is to neither scale up nor scale down, but rather to leave the number of application servers the same.
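
A minimal sketch of the guard, with the option name assumed: bail out of the scaling decision entirely when the AppServer count is pinned.

    def scale_appservers(app_name)
      # --appengine pins the number of AppServers, so neither add nor remove.
      return if @options['appengine']
      # ... existing autoscaling logic ...
    end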

Can't Start After Reboot...

Hi Guys,

Thanks for all the hard work on AppScale. I am currently having an issue with 1.6 RC 1. I restarted the server without terminating the instance, and AppScale would not start afterwards. I just kept getting the message 'validate_run_instances_options': An AppScale instance is already running.

When I try to terminate it says it was not running.

After much mucking around, I found the trick:

You have to force it to run with the --force parameter (I did this without the app file just to be safe).

It will hang just after "App Controller has just started".

At this point, use Ctrl-C to kill the start-up.

Then use the terminate command to terminate AppScale, and voila.

You should be able to start the server again.

Can you please fix the issue with shutting down?

It is a real pain having to remember to log into the server and terminate the instance before rebooting.

Kind regards,
Will
