
Actuator

Actuator allows you to use Python to declaratively describe system infra, configuration, and execution requirements, and then provision them in the cloud.

  1. Intro
  2. Installing
  3. Basic
  4. IDE install
  5. Requirements
  6. Python version
  7. Core packages
  8. Tutorial
  9. Documentation
  10. Hadoop Example
  11. Roadmap (yet to come)
  12. Contact
  13. Acknowledgements

Current status

  • 12 Feb 2015: Actuator can provision a limited set of items against Openstack clouds. It can create instances, networks, subnets, routers (plus router gateways and interfaces), and floating IPs. Not all options available via the Python Openstack client libraries are supported for each provisionable. Namespace models can drive the variable aspects of infra models successfully, and acquire information from the infra model such as IPs of a provisioned server. These can then be accessed by the configuration model, which supports a small set of Ansible modules (specifically, ping, command, shell, script, and copy), as well as a task that can process a template file through the namespace before it gets copied to a remote machine. Environment variables are populated from the namespace model for each configuration activity run on a remote system. Due to the direct dependency on Ansible, Actuator must itself run on a *nix box. A number of features have been added since the October status to make the environment more expressive.

Actuator seeks to provide an end-to-end set of tools for spinning up systems in the cloud, from provisioning the infra, defining the names that govern operation, configuring the infra for the software that is to be run, and then executing that system's code on the configured infra.

It does this by providing facilities that allow a system to be described as a collection of models in a declarative fashion directly in Python code, in a manner similar to various declarative systems for ORMs (Elixir being a prime example). Being in Python, these models:

  • can be very flexible and dynamic in their composition
  • can be integrated with other Python packages
  • can be authored and browsed in existing IDEs
  • can be debugged with standard tools
  • can be used in a variety of ways
  • and can be factored into multiple modules of reusable sets of declarative components

And while each model provides capabilities on its own, models can be inter-related not only to exchange information, but to allow instances of one model to tailor the content of others.

Actuator uses a Python class as the basis for defining a model, and the class serves as a logical description of the item being modeled; for instance a collection of infrastructure components for a system. These model classes can have both static and dynamic aspects, and can themselves be easily created within a factory function to make the classes' content highly variable.
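The factory-function idea can be sketched in plain Python. This is illustrative only: it uses the built-in `type()` rather than Actuator's real model metaclasses, and the attribute names are invented for the example.

```python
# Illustrative sketch: building model-like classes in a factory function so
# the classes' content varies per call. Actuator's actual metaclass machinery
# is not used here; each attribute merely stands in for a declared component.

def make_model_class(name, grid_size):
    # One "node" attribute per requested grid slot.
    attrs = {"node_%d" % i: "server-%d" % i for i in range(grid_size)}
    return type(name, (object,), attrs)

SmallGrid = make_model_class("SmallGrid", 2)
BigGrid = make_model_class("BigGrid", 5)
```

Because the class is assembled at call time, two invocations with different arguments yield structurally different model classes from the same code.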

Actuator models can be related to each other so that their structure and data can inform and sometimes drive the content of other models.

The best way to try Actuator out is to create a virtual Python environment with virtualenv and then use pip to install Actuator into it (virtualenv will take care of installing pip for you). After you fetch virtualenv and install it into your global Python 2.7, you can create an "Actuator test" (at) environment under your home directory with the following command:

~/tmp$ virtualenv --no-site-packages ~/at

You then need to activate the environment to work in it; do that with the following shell command:

~/tmp$ source ~/at/bin/activate

This will change your shell command prompt to now be prepended with the name of your virtual environment, in this case '(at)'. Clone the Actuator project and cd into the project root (where the setup.py file is). There, run the following pip command to install Actuator into your virtual environment:

(at)~/tmp/actuator/$ pip install .

Now, while in your virtual environment, you can start Python and import Actuator:

(at)~/tmp/actuator$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56) 
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from actuator import *
>>> 

When you're done playing around with Actuator, remember to deactivate your virtual env:

(at)~/tmp/actuator$ deactivate
~/tmp/actuator$

To get the full value of Actuator, you'll want to use it from an IDE. Once you have a virtual environment set up, most IDEs provide a way to add additional Python interpreters to choose from when you start a project. You can add the interpreter from the virtual environment created above, and the IDE will then know all about Actuator.

The details vary from IDE to IDE, but there's lots of help on the web for this process. For instance, here are directions for adding interpreters to Eclipse using the PyDev plugin:

http://pydev.org/manual_101_interpreter.html

Python version

Actuator has been developed against Python 2.7. Support for 3.x will come later.

Core packages

Actuator requires the following packages:

  • networkx, 1.9 minimum
  • ipaddress, 1.0.4 minimum
  • fake_factory (to support running tests), 0.4.2 minimum
  • ansible, 1.7.2 minimum. Currently required for configuration tasks, but other config systems will be supported in the future
  • subprocess32, 3.2.6 minimum. MUST BE IMPORTED BEFORE ANY ANSIBLE MODULES
  • python_novaclient, 2.18.1 minimum (for Openstack)
  • python_neutronclient, 2.3.7 minimum (for Openstack)
  • nose, 1.3.4 minimum, for testing
  • coverage, 3.7.1 minimum, for testing
  • epydoc, 3.0.1 minimum, documentation generation

You can find a discussion of the basic concepts and an overview of the use of Actuator here.

You can find the epydoc-generated docs here and the source html can be found here.

A more significant example of Actuator's use can be found in the examples directory. It is a set of models that describe setting up a Hadoop cluster with an arbitrary number of slave nodes. You can see the readme and associated example files here.

You can write to me with questions at [email protected].

The following projects and people have provided inspiration, ideas, or approaches that are used in Actuator.

  • Elixir: Actuator's declarative style has been informed by Elixir's declarative ORM approach. Additionally, Actuator uses a similar mechanism to Elixir's for its "with_" functions that provide modifications to a modeling class (such as with_variables() and with_components()).
  • Celery: Actuator has re-used some of Celery's notation for describing dependencies between tasks and other entities.
  • John Nolan, who provided a sounding board for ideas and spent time pairing on an initial implementation.

Issues

Improve error reporting.

Currently, bad returns from underlying provisioning systems such as Openstack aren't handled very well. A better general error reporting system needs to be introduced and better use of it must be made from any actuator provisioner object.

Need a worklist generator as an alternative to actually performing the work

The worklist generator should be able to intercept actions (tasks) and record somewhere the work that would need to be done to actually perform the task. The tricky part of this is to somehow convey where you include data from a previous task in the input of a new task. This should go into a file in a format that suitably represents the parallelism in the work.

Add a nexus property to the CallContext

Instead of having one model hold a direct reference to another (namespace to infra, for example), all sibling models should be accessed through the nexus object. To make this work properly, though, we need to add a specific property to the ContextExpr that recognizes nexus accesses and presents a group of attributes allowing access to the specific models in the model group.

For example, it should be possible to access the infra model from a context expression in the namespace model with a construct like so:

ctxt.nexus.inf.some_thing

Where the 'inf' attribute (or property, whichever is needed) results in a nexus lookup of the infra model in the nexus.

For a class IClass that is a kind of InfraModel, the following should be true:

infra = IClass("infra")
ctxt.nexus.inf.some_thing is infra.some_thing

To keep the expressions from getting too unwieldy before getting to the proper attribute, we might want to consider abbreviating 'nexus' to 'nex', and using short names for the models: infra model is 'inf', namespace model is 'ns', config model is 'cfg', and execution model is 'exe'. So a short version of the above would be:

ctxt.nex.inf.some_thing

Perhaps these short forms would be made available as synonyms alongside the full names.
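The lookup-with-synonyms behavior can be sketched with a small stand-in class. This is a hedged illustration of the proposal, not the real ContextExpr machinery; the model kinds and short names follow the suggestion above.

```python
# Hypothetical sketch of the proposed nexus lookup: 'inf', 'ns', 'cfg', and
# 'exe' are treated as synonyms for the full model names. The real actuator
# ContextExpr resolution is not modeled here.

class Nexus(object):
    _synonyms = {"inf": "infra", "ns": "namespace",
                 "cfg": "config", "exe": "execution"}

    def __init__(self):
        self._models = {}

    def register(self, kind, model):
        self._models[kind] = model

    def __getattr__(self, name):
        # Map a short name to its full model kind, then look it up.
        kind = self._synonyms.get(name, name)
        try:
            return self._models[kind]
        except KeyError:
            raise AttributeError(name)

class FakeInfra(object):
    some_thing = "a-component"

nexus = Nexus()
infra = FakeInfra()
nexus.register("infra", infra)
```

With this shape, `nexus.inf.some_thing is infra.some_thing` holds, matching the identity requirement stated for IClass above.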

Allow the infra model to access the namespace for various values

This may be a little tricky, but the motivation is that without it there are cases where values are actually determined in the infra model rather than the namespace, and we'd prefer that all values originate in the namespace since it has an in-built ability to change those values.

A good example of this is setting up SecurityGroupRules that need to open ports to the outside world. The port value is needed (as an int) in the SecurityGroupRule object, but the value is often also needed as the value of a Var as the port number may need to be processed into a template file. Actuator currently supports this, but there's a problem: the data only flows from the infra model to the namespace model, so if you want to change the port for some reason, you need to actually change the infra model, as opposed to just supplying an override (or new Var) to the namespace.

If the infra model can be set up to get values from the namespace, we can then allow all such tune-able values to be supplied from the namespace, which is more consistent.

One issue here is that you may not want to expose the structure of the namespace back to the infra model, and hence implicitly binding a namespace Role to some infra object is a questionable practice. It may be that we only allow this at the class level, as that's structure-free and just needs to be able to resolve the name. OTOH, we might want to allow people to do whatever wacky thing they want and simply state that best practice is to only bind Vars at the namespace model level.

Another approach is to have another namespace that drives these values in, but mechanically it probably isn't very different from using the primary namespace for a system, so it isn't clear that taking this approach makes things much easier.

Improve argument error detection

Look into generic ways to ensure that arguments are valid prior to actually submitting them to a provisioning system. This may prove tricky in all cases since the arguments may not be "fixed" until just before being used. Still, checks need to be provided to give the best local error detection possible.

Expanded namespace capabilities

We need to add some capabilities to the NamespaceSpec object. It needs to be able to support:

  • Respond with a list of infra components from a particular infra model instance against a namespace model instance
  • Provide a list of Vars (and their values) from the perspective of any namespace component
  • Provide a list of Vars per namespace component whose value can't be determined at a given point in time
  • Provide a list of components in the namespace

Implement a de-provisioner that works on the provisioning record

Need to implement a de-provisioner that can take the returned provisioning record and release all the items referred to by the record. This will help not only the normal lifecycle use case, but will shorten the time to release partially-provisioned capacity if a failure occurs during provisioning and someone wants to start over.

Investigate multiple security groups per server

Although the nova api allows a list of multiple security groups to be specified, it appears it only associates the first with the server. This needs to be investigated further, but after the alpha is finished.

Add ability to access Vars by name on any VariableContainer

It would be useful to be able to create references for Vars directly on a special attribute of any VariableContainer object (the namespace class or any of the namespace objects) for either use in other portions of the model or possibly in integration circumstances.

For example, suppose you had:

class NS(Namespace):
  with_variables(Var("WIBBLE", "wobble"))
  role1 = Role("somename", variables=[Var("WIBBLE", "don't wobble")])

And further, suppose that every VariableContainer had a special attribute 'v', on which you could use a Var name as an attribute and get a reference back that would let you acquire the value. Something like this:

NS.v.WIBBLE
NS.role1.v.WIBBLE

On classes, these should return a ref object on which value() can be invoked to get the value:

NS.v.WIBBLE.value() == "wobble"
NS.role1.v.WIBBLE.value() == "don't wobble"

And which can be turned into instance refs on an instance of the namespace class in the usual way. Invoking value() on the instance ref may yield a different value if an override or alternate value for the var is supplied.

Also, accessing the Var by name via the context should also work:

ctxt.model.v.WIBBLE
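The proposed `v` attribute can be sketched with stand-in classes. This is a hedged illustration only: `_VarRef` and `_VarAccess` are invented names, and the real actuator VariableContainer, override, and reference machinery are not modeled.

```python
# Hypothetical sketch of the 'v' attribute: each attribute access returns a
# small ref object whose value() resolves the Var, checking the container
# first and then falling back to its parent. Not the real actuator classes.

class _VarRef(object):
    def __init__(self, container, name):
        self.container, self.name = container, name

    def value(self):
        return self.container.find_variable(self.name)

class _VarAccess(object):
    def __init__(self, container):
        self._container = container

    def __getattr__(self, name):
        return _VarRef(self._container, name)

class VariableContainer(object):
    def __init__(self, variables=None, parent=None):
        self._vars = dict(variables or {})
        self._parent = parent
        self.v = _VarAccess(self)

    def find_variable(self, name):
        if name in self._vars:
            return self._vars[name]
        if self._parent is not None:
            return self._parent.find_variable(name)
        raise KeyError(name)

# Mirrors the NS example above: the role's Var shadows the namespace's.
ns = VariableContainer({"WIBBLE": "wobble"})
role1 = VariableContainer({"WIBBLE": "don't wobble"}, parent=ns)
```

Here `ns.v.WIBBLE.value()` yields "wobble" while `role1.v.WIBBLE.value()` yields the role's override, matching the behavior described above.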

Tearing down with a single worker thread can cause teardown to fail

The number of threads used to initiate a system with the orchestrator is the same number used for teardown. This can pose a problem when there are too few threads for the number of resources to be managed.

For example, the following simple infrastructure model:

class SimpleInfra(InfraModel):
    with_infra_options(long_names=True)
    fip_pool = "public"  # this is the name of the pool at TryStack

    #
    # connectivity
    network = Network("simple_network")
    subnet = Subnet("simple_subnet", ctxt.model.network, "192.168.10.0/24",
                    dns_nameservers=['8.8.8.8'])
    router = Router("simple_router")
    gateway = RouterGateway("simple_gateway", ctxt.model.router, "public")
    interface = RouterInterface("simple_interface",
                                ctxt.model.router,
                                ctxt.model.subnet)

    #
    # security
    secgroup = SecGroup("simple_secgroup")
    ping_rule = SecGroupRule("ping_rule", ctxt.model.secgroup,
                             ip_protocol="icmp", from_port=-1, to_port=-1)
    ssh_rule = SecGroupRule("ssh_rule", ctxt.model.secgroup,
                            ip_protocol="tcp", from_port=22, to_port=22)
    kp = KeyPair("simple_kp", "simple_kp", pub_key_file=pubkey)

    #
    # server
    server = Server("simple_server", "ubuntu14.04-LTS", "m1.small",
                    nics=[ctxt.model.network],
                    security_groups=[ctxt.model.secgroup], key_name=ctxt.model.kp,
                    availability_zone="nova")

    #
    # external access
    fip = FloatingIP("simple_fip", ctxt.model.server,
                     ctxt.model.server.iface0.addr0,
                     pool=fip_pool)

...when managed by a single thread can result in the teardown failing. The reason is that we traverse the resource graph in reverse dependency order; however, there are often implicit dependencies that we can't see. For example, nothing explicitly depends on the router interface, but implicitly the floating IP does, hence if we attempt to tear down the interface before we tear down the FIP we'll fail. The current implementation retries the teardown until the performance limit is reached, and at that point we give up on trying to tear the system down.

This particular example can be corrected by upping the number of worker threads above, say, 3; this means that all initially eligible tasks can be performed, and if one fails, the item it implicitly depends on will succeed, so when the failed item retries, its dependency is gone and it can proceed.

This is a workaround at best; this solution doesn't generalize very well, especially with a static number of worker threads.

There are two approaches to fixing this: first, vary the number of worker threads to be tuned to the number of outstanding tasks to perform (something I've wanted to do anyway). Second, push failed tasks back into the work queue. This would allow a low number of threads to attempt other tasks before coming back to the original task, at which point the originally failed task might succeed.

This latter approach will need to capture the retry count externally, as well as the time to wait before a retry, so that a retried task can be sure to wait out the remaining wait period before retrying.
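The requeue approach can be sketched in a few lines. This is a hedged single-threaded illustration with externally held retry state, not actuator's actual task engine; `run_tasks` and the fake teardown callback are invented for the example.

```python
import collections
import time

# Sketch of the second approach: failed tasks go back into the work queue,
# with the retry count and earliest-retry time tracked outside the task.

def run_tasks(tasks, perform, max_retries=3, retry_wait=0.0):
    """tasks: iterable of task ids; perform(task) -> True on success."""
    queue = collections.deque(tasks)
    retries = collections.defaultdict(int)
    not_before = {}
    abandoned = []
    while queue:
        task = queue.popleft()
        if time.time() < not_before.get(task, 0.0):
            queue.append(task)  # not yet eligible; let another task run first
            continue
        if perform(task):
            continue
        retries[task] += 1
        if retries[task] >= max_retries:
            abandoned.append(task)  # give up on this task entirely
        else:
            not_before[task] = time.time() + retry_wait
            queue.append(task)  # requeue so other tasks get a chance first
    return abandoned

# Mimic the teardown scenario above: the interface teardown only succeeds
# once the FIP that implicitly depends on it is gone.
removed = set()

def fake_teardown(task):
    if task == "interface" and "fip" not in removed:
        return False
    removed.add(task)
    return True

leftover = run_tasks(["interface", "fip"], fake_teardown)
```

Even with a single worker, the interface task fails once, the FIP teardown runs, and the requeued interface task then succeeds.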

Initiate AWS support

Need to begin AWS support in Actuator, probably in preference to supporting more OpenStack resources.

Make model instances persistable/re-createable

This is a prelude to providing a persistence system, but for now we want to be able to create an external representation of a model that can be re-created from that representation. Making this JSON is probably the best thing as it will work with various doc stores as well as web browsers. This should include any graphs constructed for task management (although graphs should be able to be independently persisted).
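A minimal JSON round trip can illustrate the goal. This is a sketch under the assumption that a model's state can be flattened to plain data; real actuator models would also need their task graphs persisted, and `SimpleModel` is invented for the example.

```python
import json

# Illustrative round trip only: flatten a simple model instance to JSON and
# rebuild an equivalent instance from that representation.

class SimpleModel(object):
    def __init__(self, name, components):
        self.name = name
        self.components = dict(components)

    def to_json(self):
        return json.dumps({"name": self.name, "components": self.components},
                          sort_keys=True)

    @classmethod
    def from_json(cls, text):
        data = json.loads(text)
        return cls(data["name"], data["components"])

original = SimpleModel("infra", {"server": "m1.small",
                                 "network": "simple_network"})
restored = SimpleModel.from_json(original.to_json())
```

JSON keeps the representation usable by document stores and browsers alike, as noted above.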

Look into supplemental dependencies to help with infra de-provisioning

So it turns out that there are implicit dependencies between certain Openstack resources. For instance, there's a dependency between a floating ip and a router interface, although neither needs to explicitly reference the other in their respective creation methods. This has caused some issues when deprovisioning as often the resource that is being deprovisioned thinks it is still relied upon by some other resource (although generally this is just a consequence of timings or a limited number of worker threads). This has been solved for now by upping the retry count for task reversing. However, it isn't clear that this will generalize, and so another solution would be useful. One approach would be to allow the user to optionally supply references to additional dependencies; these will be taken into account when constructing the graph and doing the work. Making them optional provides an "out" in the case where multiple attempts don't resolve the problem.

Make parallelism smarter

We need to make parallelism in task execution smarter: if there are a lot of tasks ready to run, we should be able to increase the number of threads available to run them up to some threshold value. Similarly, if there aren't many tasks to perform, then the number of threads should drop to a smaller level.

Need a model classmethod that returns all the components in the model

This should be both the structuring objects as well as top-level defined objects. It may be that the structural objects will need a way to return their components so someone can look inside.

While the above provides a view of the topology of a model, there should also be a separate call that returns the non-structural components alone. This may use the first call to get everything, and then dive into the structural components to get anything they contain.

Implement provisioning resumption after partial provision

Provide the ability to take an InfraSpec instance and a provisioning record for a provisioning run that was only partially successful and resume the provisioning process, skipping the items named in the provisioning record and continuing with the things that hadn't been provisioned yet.

Change provisioner to work on something besides an InfraSpec

Currently, the provisioner takes an InfraSpec-derived class instance and asks it for the provisionables it contains. However, because computing provisionables from the perspective of a namespace can yield a subset of these (due to the exclusion capability namespaces contain), there's a problem: static provisionables at the class level are always reported by the "provisionable()" method, regardless of the exclusions desired. Keeping these always excluded would require a deeper integration of the concept of exclusion into the rest of the infra model, and that's probably the wrong choice.

Instead, we should look at alternate ways of providing provisionables to the provisioner. A couple of possibilities are:

  • Generalize the "provision" method so that either a list of provisionables or an infra object can be supplied (probably via kwargs); the namespace could provide a list of provisionables, or possibly the provisioner could recognize it's a namespace and call a different method
  • Develop an independent interface, implemented by both the provisioner and the namespace, that yields provisionables, and supply that to the provisioner

This issue will come up again when a fuller notion of "substitution of hosts" arrives, as there may be dependencies on the host in the model being substituted, and we have to figure out what that means (for instance, if a FloatingIP would normally be generated for a dynamic host, what happens to it when a static host is allocated?)

Fix up task.status management for ConfigTasks

Since ConfigTasks are separate from the underlying Ansible binding (and questionably so), the task's .perform() method isn't used; instead, a processor object for each task class has been defined that carries out the work needed. The problem is that by bypassing the perform() method, we lose the management of the task's status, and so can't take advantage of its built-in status management. Probably the right thing to do here is to merge the processors with the tasks, move the tasks into the Ansible sub-package, and then have perform() and reverse() do the right thing by the status attribute. It all works now, but it's a problem waiting to happen.

Support for more Ansible modules

Candidates are:

  • acl
  • fetch
  • template
  • lineinfile
  • replace
  • synchronize
  • xattr
  • selection of monitoring modules
  • selection of network modules
  • windows modules

get_path() on a ref doesn't return the full path for nested ConfigModels

If you use ConfigClassTask to wrap a ConfigModel, and ask a task in the wrapped model for its path, you only get the path within the wrapped ConfigModel, not the path all the way up to the ConfigModel that wraps the inner model. This makes logging less helpful: if the ConfigClassTask is the template for a MultiTask, then you can't determine which instance of the template is creating the log messages (strictly speaking you can, as the id differentiates the instances, but it's less helpful than providing a full path). This needs to be addressed in a nice way.

Create a new "with_infra_components" function that attaches components to a class

In order to promote a different kind of reuse, create a "with_infra_components" function that works with InfraSpecMeta to allow a group of components to be defined with keyword args to the function. Each key will become an attribute on the new class object, with the corresponding value. This can be called multiple times to additively provide new components to an InfraSpec class.

Add an option to generate "fully qualified" logicalNames

Some arrangements of MultiComponents and ComponentGroups can result in lots of sibling provisionables having the same name. This can lead to confusion when looking at a dashboard to see all provisioned items, for instance when looking at the Openstack web UI and seeing multiple instances with the same name which are in different logical subtrees of an actuator InfraSpec. An option needs to be introduced at the InfraSpec level that supports taking the user-provided logical name for a provisionable and appending the path to that item within the InfraSpec. This will allow it to be uniquely identified when viewed in a report or dashboard UI.
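The name-qualification idea might look like this. The separator and mangling scheme are assumptions for illustration; the real option would be driven by the model's own path machinery.

```python
# Sketch of the "fully qualified" name idea: append the item's path within
# the model to its user-supplied logical name, so that sibling provisionables
# with the same logical name become distinguishable on a dashboard.
# The '-' separator is an assumption for this example.

def qualified_name(logical_name, path):
    """path: sequence of attribute names from the model root to the item."""
    return "-".join(list(path) + [logical_name])

# Two siblings that share a logical name get distinct qualified names:
name_a = qualified_name("worker", ["cluster", "slaves", "0"])
name_b = qualified_name("worker", ["cluster", "slaves", "1"])
```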

Make a doc store for persisted models

Once models can represent themselves in a persistable form, build a repository on something like Mongo or another NoSQL DB to store/retrieve the models. This may also need to be able to store the completion records so that deco'ing can happen with respect to the model instance later.

Consider making the orchestrator block until all tasks complete in an abort

Currently, as soon as a task gives up altogether, the orchestrator quits. However, other tasks may be in progress in other threads, and if so they will continue even though the orchestrator has returned. This has a couple of knock-on effects: first, log messages will still get emitted for the still-running tasks as they finish. Second, if any of the still-running tasks aborts, it will add tracebacks to the aborted tasks collection, which the user of the orchestrator may no longer be looking at; in that case not all orchestration problems will get reported.

Either the orchestrator needs to block until all tasks are complete, or, if it returns as soon as an abort has been detected, it needs to provide a way to let the user know when all tasks are complete.

Create a conditional task

We need a task that wraps one or two other tasks and a test of some kind. If the test returns True, then one task is run. If false, either nothing happens or the other task is run. Something like this:

CondTask(name, predicate, true_task, else_task=None)

where:

  • name is the conditional task name
  • predicate is a callable that takes a CallContext and returns either True or False
  • true_task is some kind of task that is run if predicate returns true
  • else_task defaults to None, but can be a task that will be run if predicate is false

CondTask only fails if one of its wrapped tasks fails.
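A minimal sketch of those semantics, with stand-ins for the real task and CallContext machinery (callables and a plain dict here), might look like:

```python
# Hedged sketch of the CondTask described above; Task and CallContext are
# stand-ins (plain callables and a dict), not actuator's real classes.

class CondTask(object):
    def __init__(self, name, predicate, true_task, else_task=None):
        self.name = name
        self.predicate = predicate
        self.true_task = true_task
        self.else_task = else_task

    def perform(self, ctxt):
        # Run true_task when the predicate holds; otherwise run else_task
        # if one was supplied, else do nothing.
        if self.predicate(ctxt):
            return self.true_task(ctxt)
        if self.else_task is not None:
            return self.else_task(ctxt)

log = []
task = CondTask("maybe-ping",
                predicate=lambda ctxt: ctxt["reachable"],
                true_task=lambda ctxt: log.append("ping"),
                else_task=lambda ctxt: log.append("skip"))
task.perform({"reachable": True})
task.perform({"reachable": False})
```

Failure propagation (CondTask failing only when a wrapped task fails) falls out naturally here, since any exception from the wrapped callable propagates through perform().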

Allow refs to IPAddressables to be used as cidr arg in SecGroupRule

Currently, only CIDR strings can be used for the cidr argument to the SecGroupRule constructor. This is fine if the CIDR is known in advance, but doesn't work if the CIDR is coming from another dynamically provisioned host and you want this expressed as a model dependency. So we need to allow the cidr argument to be a reference to a host, not just a string. The following need to be added to address this in total:

  • Add a method to IPAddressable that retrieves the CIDR string from the object (issue: different methods for v4 and v6?)
  • Change the docstring on SecGroupRule to indicate that a Server or FloatingIP can be used here
  • In the ProvisionSecGroupRuleTask class, modify depends_on_list to potentially return the cidr argument if it is some kind of reference (this will ensure that the proper dependency performance order is computed)
  • In ProvisionSecGroupRuleTask, also modify the _perform method to call the new CIDR method on the underlying resource if it's a reference to an IPAddressable; otherwise do the usual thing to get the value.

Sort out NetworkX's integration with DataViz in order to draw task graphs

As a starting point, we should be able to take a supplied NetworkX DiGraph and get DataViz to render it. Colour should be used to indicate progress on the work in the graph, and node labels indicating the task to be performed should be provided. We should be able to get our hands on the graph any time we want during an orchestration run and create a visualization of the state.

Allocating floating IPs on OpenStack may only partially succeed, not be deco'd on de-provision

Allocating a floating IP for a host on Openstack is a two-part process: first, the IP must be allocated, and second, it must be associated with a server. It is currently possible to allocate the IP but fail to associate it with a server (if the IP was connected to a network that doesn't have a route to it), which causes the overall operation to fail. However, the FIP remains allocated, even though the process is considered to have failed.

What needs to happen is to treat this in a more transactional fashion: if the association fails, then the FIP should be released, and allow the failure to halt the provisioning process. This will ensure that everything will be in a consistent state such that teardown will release all allocated resources.
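The transactional shape can be sketched as follows. This is an illustration only: `FakeClient` and its method names are stand-ins, not the nova client API.

```python
# Sketch of the transactional fix: if association fails, release the freshly
# allocated floating IP before letting the failure propagate, so nothing
# leaks. FakeClient is a stand-in, not the real nova client.

class AssociationError(Exception):
    pass

def provision_fip(client, server):
    fip = client.allocate_fip()
    try:
        client.associate(fip, server)
    except AssociationError:
        client.release_fip(fip)  # undo the allocation so teardown stays clean
        raise
    return fip

class FakeClient(object):
    def __init__(self, assoc_ok):
        self.assoc_ok = assoc_ok
        self.allocated = []

    def allocate_fip(self):
        fip = "10.0.0.%d" % (len(self.allocated) + 1)
        self.allocated.append(fip)
        return fip

    def associate(self, fip, server):
        if not self.assoc_ok:
            raise AssociationError("no route to server")

    def release_fip(self, fip):
        self.allocated.remove(fip)

# Simulate the failure case: association fails, the FIP is released,
# and the error still halts provisioning.
client = FakeClient(assoc_ok=False)
try:
    provision_fip(client, "simple_server")
except AssociationError:
    pass
```

After the failed run, `client.allocated` is empty: the overall operation failed, but left no orphaned FIP behind.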

Fix argument validation for ContextExpr

Currently, ContextExpr argument validation (and associated tests) are disabled since there are difficulties finding references for the 'container' component object (the AbstractModelReference._inv_cache isn't easy to populate with all the proper entities such that the container can be found). This is a useful feature that isn't working just right and needs to be fixed.

Define semantics for inheritance

A few use cases for model inheritance have come up, so it probably makes sense to come up with a mechanism that makes model inheritance work properly for a specific set of semantics.

One use case is transparent switching of infra models that may be based on either static or dynamic resources. The case is that for debugging, you want to use the same logical model that is used for prod (which is dynamic) but instead use a model that is partly or entirely static. Or, you may want to go the other way around.

Another use case is to allow the inclusion of boilerplate components, such as standard network resources. This is in opposition to a composition model of accruing semantics.

This may need to be activated explicitly in a "with_options()" call for the model, as it isn't clear whether or not we want this to just be turned on all the time. OTOH, it isn't clear then what it means to use Python inheritance without Actuator inheritance.

In general, a sketch of the desired semantics are as follows:

  • Type safety is the responsibility of the user. If a base class defines a component 'wibble' and a derived class re-defines 'wibble' with an incompatible component, then the behavior is undefined.
  • Components defined on the base class are useable by the derived class as if the derived class had defined them.
  • Derived classes can override the definition of a base class component, with the new definition taking precedence
  • Additional components can be made available in derived classes
  • Other models that may use this hierarchy should be coded in terms of the components in the base class, but the instance given to the model may be that of a derived class (if it isn't, errors are thrown).
  • References to the base class should be satisfy-able on a derived class, subject to the notes on overriding components noted above
  • Multiple inheritance is supported; multiple models of the same kind (for example, infra models) can serve as the base classes for a new infra model

Look to add __call__() to _ValueAccessMixin

This class was added to sorta-normalize all reference objects in how one gets the actual underlying value. However, for VarReference objects, you may also just 'call' the reference to get the value, which is a shortcut for value(). However, if we add __call__() to the base class, _ValueAccessMixin, lots of the test code now fails. This is because the attributes become callable, and we use that test all over the place to detect context expressions (we don't expect references to be callable). So if we want to add __call__() to _ValueAccessMixin, we have to find all the places where there's a test to see if a ref is callable and change the test to something more specific.

Create a Openstack "keypair" resource

A new keypair resource will help with the setup process. The keypair resource should include the following args:

  • Actuator name (the 'name' parameter)
  • Openstack name (this won't ever get mangled)
  • local public key file path, or:
  • local key string (used instead of the file path)
  • force: if True, indicates the public key should be updated regardless of whether it already exists

The provisioning time semantics are:

  • check if the key already exists using the openstack name of the key
  • If so
    • If force == True, delete the existing key
    • If force != True, return done
  • Send the public key to openstack
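These semantics can be sketched with a stand-in client. The method names on `FakeOpenstack` are assumptions for illustration, not the real novaclient API.

```python
# Sketch of the provisioning-time semantics above. FakeOpenstack and its
# method names are stand-ins invented for this example, not the real API.

def provision_keypair(client, os_name, pub_key, force=False):
    if client.key_exists(os_name):
        if not force:
            return "done"           # key already present, nothing to do
        client.delete_key(os_name)  # force: replace the existing key
    client.upload_key(os_name, pub_key)
    return "uploaded"

class FakeOpenstack(object):
    def __init__(self, existing=None):
        self.keys = dict(existing or {})

    def key_exists(self, name):
        return name in self.keys

    def delete_key(self, name):
        del self.keys[name]

    def upload_key(self, name, pub_key):
        self.keys[name] = pub_key
```

Without force, an existing key is left untouched; with force, the old key is deleted and the new public key is sent up.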

What to do about substituting hosts?

This will serve as an issue to capture a running train of thoughts regarding the proper handling of substituting a static host for a dynamic one in a model. Some of the issues are:

  1. We need a standard "host" or "server" object that can be used across all provisioners, and can thus be supplied by a static substitution host
  2. How do we effectively substitute one host for another in the model in order to satisfy dependencies, both within the model and in a namespace?
  3. What do we do with items that may not be required when we substitute? For instance, what do we do with a floating ip that is normally associated with a host in the model but isn't needed when we provide a statically allocated substitute?

Add an argument debugger to the utils

In support of better debugging, add an argument debugger function to the utils module. This should let the model author break when an argument for a model component is being evaluated.
