
aminator's People

Contributors

asher, bmoyles, bunjiboys, coryb, garethbowles, inthecloud247, kvick, martopoulos, mrowe, mtripoli, phyrex1an, tigeli, tjbaker, viglesiasce, willtrking


aminator's Issues

--boto-debug broken

EC2CloudPlugin._connect() tests the wrong thing to determine if boto debug logging should be enabled. It tests against the plugin config's boto_debug setting when it should be checking against the context's boto_debug setting.

cc/ @kvick
cc/ @bmoyles
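
A minimal sketch of the fix, assuming the context exposes boto_debug as described above (the standalone helper is hypothetical, not aminator's actual code):

    import logging

    def configure_boto_debug(context):
        """Enable boto debug logging based on the *context* setting,
        which is what EC2CloudPlugin._connect() should be testing."""
        if getattr(context, 'boto_debug', False):
            logging.getLogger('boto').setLevel(logging.DEBUG)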

logging config not working as intended

The intended behavior of aminate is to write general progress to stdout and detailed information about each step of the aminate process to a separate log file for each aminate invocation. The base config sets up the stdout console handler, but once aminator.config.log_per_package() is called to create the detailed log file, console output stops. log_per_package() calls logging.config.dictConfig() using one of the configs listed in the default logging config. Each of these configs specifies a root logger handler, so each call to log_per_package() clobbers the existing handlers at the root level.

log_per_package() should add or append the handler.
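
A minimal sketch of the append approach, building the file handler directly rather than round-tripping through dictConfig() (handler details are illustrative, not the actual log_per_package() signature):

    import logging

    def add_per_package_handler(logfile):
        """Attach a per-invocation file handler to the root logger
        without clobbering the existing console handler."""
        handler = logging.FileHandler(logfile)
        handler.setLevel(logging.DEBUG)
        handler.setFormatter(logging.Formatter(
            '%(asctime)s %(name)s: %(levelname)s: %(message)s'))
        # addHandler appends, so stdout progress output keeps flowing
        logging.getLogger().addHandler(handler)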

chroot_unmount() may fail

chroot_mount() mounts the volume, then bind mounts BIND_DIRS. chroot_unmount() reverses this process, but it may fail if, during provisioning, something else is mounted under the volume like, say, proc/sys/fs/binfmt_misc:

/dev/sdg1 /aminator/oven/sdg1 ext3 rw,data=ordered 0 0
/proc /aminator/oven/sdg1/proc proc rw 0 0
/sys /aminator/oven/sdg1/sys sysfs rw 0 0
none /aminator/oven/sdg1/proc/sys/fs/binfmt_misc binfmt_misc rw 0 0

debug logs:

2013-03-09 22:36:04 aminator.volumemanager: DEBUG: unmounting /dev/sdg1 from /aminator/oven/sdg1
2013-03-09 22:36:04 aminator.utils: DEBUG: lsof -X /aminator/oven/sdg1
2013-03-09 22:36:04 aminator.utils: DEBUG: umount /aminator/oven/sdg1/proc
2013-03-09 22:36:04 aminator.utils: DEBUG: umount: /aminator/oven/sdg1/proc: device is busy
2013-03-09 22:36:04 aminator.utils: DEBUG: umount: /aminator/oven/sdg1/proc: device is busy
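
One possible fix, sketched under the assumption that current mounts can be read from /proc/mounts: unmount everything at or below the volume root in deepest-first order, so nested mounts like proc/sys/fs/binfmt_misc go before /proc.

    import subprocess

    def unmount_tree(root):
        """Unmount every mount point at or below *root*, deepest first."""
        with open('/proc/mounts') as mounts:
            points = [line.split()[1] for line in mounts]
        nested = [p for p in points if p == root or p.startswith(root + '/')]
        for point in sorted(nested, key=len, reverse=True):
            subprocess.check_call(['umount', point])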

remove_provision_config doesn't consider edge case

Module aminator.linux.util has faulty logic in the remove_provision_config method.

Specifically, on line 385, the function checks for the presence of the backup file. However, this already assumes that a backup was necessarily created. Since a file configured under provision_config_files in the provisioner config may be copied to the chroot'd volume without requiring the file to exist in the chroot'd volume already, the assumption that the backup file must exist (otherwise, exit on error) does not consider the case where no backup was created in the first place.

The easy fix is to return True on line 404, adjust the log level to INFO or WARN, and notify the user. Failing because the backup could not be restored is unnecessary.

A more permanent fix may be to maintain a list of backups that were created out of necessity, and remove only those backup files, as sketched below.
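
A sketch of that more permanent fix (function names are illustrative): record only the backups that were actually created, and consult that list on removal.

    import os
    import shutil

    created_backups = []

    def backup_provision_config(path):
        """Back up *path* only if it already exists in the chroot'd
        volume, and remember that we did so."""
        if os.path.exists(path):
            shutil.copy2(path, path + '.bak')
            created_backups.append(path)

    def remove_provision_config(path):
        """Restore a backup only if one was created; otherwise just
        remove the copied-in file instead of failing."""
        if path in created_backups:
            shutil.move(path + '.bak', path)
        elif os.path.exists(path):
            os.remove(path)
        return True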

add interactive mode

After the chroot and provision step, it can be invaluable to inspect the volume to ensure that the provisioner did the right thing; this would speed up dev/testing.

@corby has added a command-line option in the past that he should be able to resurrect.

apt_chef provisioner not checking the status of download_file

in apt_chef.py:

                download_file(context.chef.chef_package_url, local_chef_package_file,
                              context.package.get('timeout', 1), verify_https=context.get('verify_https', False))

in aminator.utils#download_file:

    if response.status_code != 200:
        return False

a 404 gets swallowed when it should kill the job.
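
A sketch of the missing check, reusing the call from the snippet above (log/return handling is assumed to match the surrounding provisioner method):

    result = download_file(context.chef.chef_package_url, local_chef_package_file,
                           context.package.get('timeout', 1),
                           verify_https=context.get('verify_https', False))
    if not result:
        log.critical('Failed to download {0}'.format(context.chef.chef_package_url))
        return False  # kill the job rather than continue without the package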

Creation of new Provisioners shouldn't need to duplicate distro specific chroot configs and methods

/cc @bmoyles @mtripoli

This will likely require separating the volume construction/chroot process from the commands executed against the volume.

The primary goal is that we can create new Provisioners (e.g. ChefProvisioner) that don't also need to understand the mechanics of constructing a chrooted volume for a distro. Attempts to create a Chef provisioner are running into the problem of having to duplicate Apt/Yum provisioner logic to provide chroot and activate/deactivate mechanics.

A desirable secondary goal is that we can support multiple commands executed against the chrooted volume in one amination. This could easily shake out once the volume creation step is decoupled from the command executed on the volume.

e.g. a highly contrived example that installs chef and cookbooks from native packages, then runs chef:

aminate -B ami-1efed35b -r us-west-1 -e ec2_apt_linux pkg_install=chef-10.26 pkg_install=my_cookbooks run="/opt/chef/bin/chef-solo -c /var/chef/solo.rb -j node.json"

Finalizer set

cc/ @kvick
cc/ @mtripoli

Based on Tuesday's talk, I propose FinalizerSet to contain multiple finalizers and hold the finalize() method. A finalizer set itself doesn't need to be a plugin point--it's merely a container that exposes a finalize() method. The pluggable bits are the finalizers themselves.
An environment will consist of the same components (cloud, blockdev, vol, provision) with finalizers replaced by a finalizer set.
A finalizer set is a named, ordered sequence of finalizers.
When the set's finalize() is called, the finalizers in the FinalizerSet are executed in order.

Consider a finalizer set containing the following finalizers:
register_ebs_ami
register_s3_ami
tag_artifacts
distribute_images

We'd need to find the right place to add the snapshot... Given that the context contains enough metadata by the time we reach the finalizer stage, I think @kvick is spot on with having the exit for the volume manager handle the snapshot, and we finalize once we exit that context (but only finalize if we succeeded provisioning, natch)

Then, finalize() would simply iterate the registered and configured finalizers. A failure to finalize may or may not signal a failure in the overall operation. (Consider an asgard_poke finalizer that simulates the old nacPoke behavior--it's not doomsday if that fails, it's a nice-to-have--Asgard will find the AMI, just not necessarily immediately)

We may even be able to run some finalizers concurrently--ebs and s3 registration, for example, should be able to run in parallel. tag_artifacts, on the other hand, really only makes sense after registration, and distribution only makes sense after tags have been applied (especially as distribute_images likely has to distribute tags as well)
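
A minimal sketch of the container described above (plugin lookup, context plumbing, and any concurrency are omitted):

    class FinalizerSet(object):
        """Named, ordered sequence of finalizer plugins. Not itself a
        plugin point -- it merely exposes finalize()."""

        def __init__(self, name, finalizers):
            self.name = name
            self.finalizers = list(finalizers)  # order matters

        def finalize(self):
            for finalizer in self.finalizers:
                if not finalizer.finalize():
                    # whether a failure here dooms the amination could be
                    # a per-finalizer policy (asgard_poke is a soft failure)
                    return False
            return True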

/var/log/aminator not created when using python setup.py install

After running aminator for the first time, I see this. Creating the directory manually works.

[root@euca-192-168-105-169 aminator]# aminate --debug -B emi-8B8240AD -n first-try -r eucalyptus git
INFO:aminator.core:Aminator starting...
DEBUG:aminator.core:Loading default configuration
2013-03-22 15:26:00 [INFO] Aminator 1.0.0 default configuration loaded
2013-03-22 15:26:00 [INFO] Loaded plugin aminator.plugins.volume.linux
2013-03-22 15:26:00 [INFO] Loaded plugin aminator.plugins.finalizer.tagging_ebs
2013-03-22 15:26:00 [INFO] Loaded plugin aminator.plugins.blockdevice.linux
2013-03-22 15:26:00 [INFO] Loaded plugin aminator.plugins.cloud.ec2
2013-03-22 15:26:00 [INFO] Loaded plugin aminator.plugins.provisioner.yum
2013-03-22 15:26:00 [INFO] Loaded plugin aminator.plugins.provisioner.apt
2013-03-22 15:26:00 [INFO] Configuring per-package logging
2013-03-22 15:26:00 [INFO] Detailed per_package output to /var/log/aminator/git-201303222226.log
Traceback (most recent call last):
  File "/usr/bin/aminate", line 9, in <module>
    load_entry_point('aminator==1.0.0', 'console_scripts', 'aminate')()
  File "/usr/lib/python2.6/site-packages/aminator-1.0.0-py2.6.egg/aminator/cli.py", line 51, in run
    sys.exit(Aminator(debug=args.debug).aminate())
  File "/usr/lib/python2.6/site-packages/aminator-1.0.0-py2.6.egg/aminator/core.py", line 52, in __init__
    log_per_package(self.config, 'per_package')
  File "/usr/lib/python2.6/site-packages/aminator-1.0.0-py2.6.egg/aminator/config.py", line 161, in log_per_package
    dictConfig(per_package_config.toDict())
  File "/usr/lib/python2.6/site-packages/logutils-0.3.3-py2.6.egg/logutils/dictconfig.py", line 573, in dictConfig
    dictConfigClass(config).configure()
  File "/usr/lib/python2.6/site-packages/logutils-0.3.3-py2.6.egg/logutils/dictconfig.py", line 365, in configure
    '%r: %s' % (name, e))
ValueError: Unable to configure handler 'per_package': [Errno 2] No such file or directory: '/var/log/aminator/git-201303222226.log'
[root@euca-192-168-105-169 aminator]#
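
A likely fix (a sketch; the helper is hypothetical): have log_per_package() create the log directory before handing the handler config to dictConfig().

    import errno
    import os

    def ensure_log_dir(logfile):
        """Create the directory for *logfile* if the install didn't,
        race-safely (Python 2.6 lacks os.makedirs(exist_ok=True))."""
        try:
            os.makedirs(os.path.dirname(logfile))
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise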

KeyError: /dev/sda1

If you follow the tutorial on the wiki (removing the -p option, manually mkdir /var/aminator, /var/aminator/lock, /var/log/aminator, /var/log/aminator/lock), the script is looking for /dev/sda1, which doesn't exist.

File "/usr/lib/python2.6/site-packages/aminator/plugins/cloud/ec2.py", line 141, in allocate_base_volume
rootdev = context.base_ami.block_device_mapping[context.base_ami.root_device_name]
KeyError: u'/dev/sda1'

References:
#26
#42

Support for running an arbitrary script for the provisioning step

I see there are provisioners for apt, yum, and chef. This seems overly constrained. Why not allow an arbitrary script to be executed as the provisioning step (similar to user-data when launching an EC2 instance)? This would allow flexibility with provisioning and obviate the need (assuming my understanding of aminator is correct) for specific provisioning plugins.
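
A heavily simplified sketch of what such a provisioner might look like (the interface shown is assumed, not aminator's actual plugin API; chroot setup/teardown is handled elsewhere, per the refactoring issue above):

    import subprocess

    class ScriptProvisioner(object):
        """Run an arbitrary user-supplied script inside the chroot,
        much like EC2 user-data at launch."""

        def __init__(self, mountpoint, script):
            self._mountpoint = mountpoint
            self._script = script

        def provision(self):
            return subprocess.call(['chroot', self._mountpoint,
                                    '/bin/sh', '-c', self._script]) == 0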

disable service startup in debian provisioning

Similar to the way we disable/enable service startup in CentOS, we need the same mechanism for debian via policy-rc.d.

I've got a possible solution that refactors the provisioners a bit.

I also removed the short_circuit in the provision method, as it seemed redundant: _configure_chroot_mounts should handle this.

ubuntu amination fails in _deactivate_provisioning_service_block

The symptom is that we can't aminate on Ubuntu as we always fail in the following block.

I believe the issue is that the policy_file is missing in the aminator.plugins.provisioner.apt.yml.

    def _deactivate_provisioning_service_block(self):
        """
        Prevent packages installing in the chroot from starting
        For debian based distros, we add /usr/sbin/policy-rc.d
        """
        log.debug('in _deactivate_provisioning_service_block')
        config = self._config.plugins[self.full_name]
        log.debug("config = %s", config)
        path = self._mountpoint + config.get('policy_file_path', '')
        log.debug("path = %s", path)
        filename = path + "/" + config.get('policy_file')
        log.debug("filename = %s", filename)
        if not os.path.isdir(path):
            log.debug("creating %s", path)
            os.makedirs(path)
            log.debug("created %s", path)
        with open(filename, 'w') as f:
            log.debug("writing %s", filename)
            f.write(config.get('policy_file_content'))
            log.debug("wrote %s", filename)

Change the -e (executor) option to -o (owner) instead

-e executor seems very obscure to me, and I wouldn't think of it as the user. I know it represents the user executing the script, but that term usually refers to processes and other machine things that execute stuff.

-o owner, I think, better represents the result: the owning user of the resulting AMI.

Socialization

I cannot find a Google Group, mailing list or IRC channel for aminator.

pip-python install fails on CentOS 6.3

Using the CentOS 6.3 x86_64 AMI provided by CentOS (from the AWS Marketplace) and the python-pip package from EPEL, the pip method of installing aminator doesn't work.

[root@ip-10-245-32-239 ~]# pip-python install git+https://github.com/Netflix/aminator.git#egg=aminator
Exception:
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/pip/basecommand.py", line 124, in main
    self.run(options, args)
  File "/usr/lib/python2.6/site-packages/pip/commands/install.py", line 170, in run
    InstallRequirement.from_line(name, None))
  File "/usr/lib/python2.6/site-packages/pip/req.py", line 101, in from_line
    return cls(req, comes_from, url=url)
  File "/usr/lib/python2.6/site-packages/pip/req.py", line 41, in __init__
    req = pkg_resources.Requirement.parse(req)
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 2531, in parse
    reqs = list(parse_requirements(s))
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 2456, in parse_requirements
    line, p, specs = scan_list(VERSION,LINE_END,line,p,(1,2),"version spec")
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 2424, in scan_list
    raise ValueError("Expected "+item_name+" in",line,"at",line[p:])
ValueError: ('Expected version spec in', 'git+https://github.com/Netflix/aminator.git#egg=aminator', 'at', '+https://github.com/Netflix/aminator.git#egg=aminator')

Storing complete log in /root/.pip/pip.log

"-p" switch not supported!

When following the wiki instructions to create an Asgard AMI (https://github.com/Netflix/aminator/wiki/Aminate-an-Asgard-AMI), it says to enter the command

aminate -p asgard -B ami-86e15bef

which fails because it doesn't recognize the -p switch. Either the documentation is incorrect or the program is wrong.

Also, perhaps related: in aminate we specify the rpm to run, in this case asgard. Perhaps the documentation could explain what is expected, and what files it will look for on the file system.

Support AMIs with a partition table

I am trying to use aminator to bake an AMI based on an internally-built AMI. For reasons I don't entirely understand, this AMI's root volume contains a partition table:

[email protected] ~ # fdisk -l /dev/xvdf1

Disk /dev/xvdf1: 10.7 GB, 10737418240 bytes

      Device Boot      Start         End      Blocks   Id  System
/dev/xvdf1p1               1       15625      999984   83  Linux

When aminator tries to mount the volume, it fails because:

aminator.exceptions.VolumeException: Unable to mount /dev/xvdf1 at /var/aminator/volumes/xvdf1: mount: you must specify the filesystem type

Has anyone else seen this?

I'm not sure how best to handle it. We can look at changing the way we make the base AMI to avoid the problem, but it seems like aminator should be able to deal with it. Perhaps the linux blockdevice plugin should check the attached volume device for a partition table and, if it finds one, mount the appropriate minor device rather than the raw block device?
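
A sketch of that check, trusting the kernel's partition device nodes rather than parsing the partition table itself (device naming assumed from the fdisk output above):

    import os

    def resolve_mountable_device(device):
        """If *device* (e.g. /dev/xvdf1) exposes a first-partition node
        (e.g. /dev/xvdf1p1), mount that instead of the raw device."""
        first_partition = device + 'p1'
        if os.path.exists(first_partition):
            return first_partition
        return device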

EC2Plugin - retry on InvalidAMIID.NotFound

After receiving an AMI id from a successful RegisterImage EC2 API call (register_image()), a subsequent DescribeImages EC2 API call (ami.update()) may fail with InvalidAMIID.NotFound. In _register_image:

    @registration_retry(tries=3, delay=1, backoff=1)
    def _register_image(self, **ami_metadata):
        context = self._config.context
        ami = Image(connection=self._connection)
        ami.id = self._connection.register_image(**ami_metadata)
        if ami.id is None:
            return False
        else:
            ami.update()
            log.info('AMI registered: {0} {1}'.format(ami.id, ami.name))
            context.ami.image = self._ami = ami
            return True

Ref. https://aws.amazon.com/support/case?caseId=93449351&language=en
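
One hedged approach: retry ami.update() itself, catching boto's EC2ResponseError and inspecting its error code (the helper name and retry parameters here are illustrative):

    from time import sleep
    from boto.exception import EC2ResponseError

    def update_with_retry(ami, tries=3, delay=1):
        """Poll DescribeImages until the freshly registered AMI is
        visible, tolerating the eventually-consistent NotFound."""
        for _ in range(tries):
            try:
                ami.update()
                return True
            except EC2ResponseError as e:
                if e.error_code != 'InvalidAMIID.NotFound':
                    raise
                sleep(delay)
        return False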

Block device mapping not consistent

With the newer Linux kernels, even though you attach a device as /dev/sd[x], the device will show up as /dev/xvd[x] (with a symlink for the originally requested device).

In most cases the x value will match (for example, requesting /dev/sdf will result in /dev/xvdf), but this is not guaranteed.

This patch changes the logic in plugins/cloud/ec2.py from a simple string replace to actually checking whether the requested device name is a symlink. If it is, it returns the real block device using os.path.realpath; if not, it simply returns the name back. This means that no matter what the device eventually gets mapped as, as long as the requested device node is created, we will be able to find the correct block device.

diff --git a/aminator/plugins/cloud/ec2.py b/aminator/plugins/cloud/ec2.py
index a5fef19..808fe6b 100644
--- a/aminator/plugins/cloud/ec2.py
+++ b/aminator/plugins/cloud/ec2.py
@@ -23,6 +23,7 @@ aminator.plugins.cloud.ec2
 ==========================
 ec2 cloud provider
 """
+import os
 import logging
 from time import sleep

@@ -170,7 +171,7 @@ class EC2CloudPlugin(BaseCloudPlugin):
     def attach_volume(self, blockdevice, tag=True):
         self.allocate_base_volume(tag=tag)
         # must do this as amazon still wants /dev/sd*
-        ec2_device_name = blockdevice.replace('xvd', 'sd')
+        ec2_device_name = os.path.realpath(blockdevice) if os.path.islink(blockdevice) else blockdevice
         log.debug('Attaching volume {0} to {1}:{2}({3})'.format(self._volume.id, self._instance.id, ec2_device_name,
                                                                 blockdevice))
         self._volume.attach(self._instance.id, ec2_device_name)

ec2 plugin should wait for new volume to become available before attempting to attach

https://github.com/Netflix/aminator/blob/master/aminator/plugins/cloud/ec2.py
allocate_base_volume should spin towards the end of its execution until the newly created volume transitions to the available state.

Without this, if the volume isn't in the available state when we issue the attach request, boto will throw an exception we're not looking for (and that we'd have to parse apart, similar to _registration_retry).

This plagues both the old proto-aminator bakery code as well as aminator.
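
A sketch of the wait loop, relying on boto's Volume.update() returning the current status (timeout values are illustrative):

    from time import sleep

    def wait_for_available(volume, timeout=120, interval=2):
        """Block until the new EBS volume leaves 'creating', so the
        subsequent attach request doesn't race it."""
        waited = 0
        while volume.update() != 'available':
            if waited >= timeout:
                return False
            sleep(interval)
            waited += interval
        return True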

support partitioned AMI creation

Aminator currently assumes a partition-less disk. Some users have requested something like this in the past (#59). This may be a duplicate, so feel free to close this one and reopen #59.

The end goal is that we can aminate a volume that contains a partition table. We may require some additional configs to know where/how to mount the volume.

Add optional post-exec processing

Add an optional post-exec step which receives the newly created AMI ID, etc. in its context. Add a predefined plugin to initiate a rolling deploy via Asgard.

This would make creating an automated CDO deployment chain very easy.

Only load specified plugins

When aminator starts, it loads all the plugins it has, which can cause confusion when listing the help and looking at the configuration options.

Note: using eucalyptus as an example, not that we have problems with the generously donated plugin :)

Observed behavior: when running aminator with a specific -e environment, we still get eucalyptus configs and help options even though we specified EC2.

aminate -e ec2_yum_linux

in the logs
...
aminator.plugins.finalizer.tagging_ebs_euca: !bunch.Bunch
creator: aminator
default_block_device_map:
...

and in the help

--ec2-endpoint EC2_ENDPOINT

Expected behavior: The configs and loaded plugins should only be those specified in the -e file

/cc @mtripoli @pas256 @cquinn @bunjiboys @bmoyles @viglesiasce

Add command-line overrides

The goal is a simple(r) way of overriding plugins or attributes via the command-line so that it's not necessary to create YAML configs for each combination you need to run.

One initial thought is that we create a -o option taking a key/value pair, similar to Java's -D override, e.g. aminate -oaminator.plugins.blockdevice.linux.device_prefixes=[xvd, sd]

This would override any of the config values normally specified in the YAML (so it will need to run after plugins are loaded), as sketched below.

@mtripoli may be able to add additional context for a better example

/cc @pas256 @bunjiboys @cquinn @bmoyles @mtripoli
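
A sketch of how such an override might be applied to the loaded config (the dotted-path handling here is illustrative, not a proposed implementation):

    def apply_override(config, override):
        """Apply a '-o a.b.c=value' style override to a nested,
        dict-like config after plugins have loaded."""
        path, _, value = override.partition('=')
        keys = path.split('.')
        node = config
        for key in keys[:-1]:
            node = node[key]
        node[keys[-1]] = value

    # e.g. apply_override(config,
    #     'aminator.plugins.blockdevice.linux.device_prefixes=[xvd, sd]')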

appversion tag should include job and build number

For Netflix, we need the appversion tag to be correctly generated for asgard/other tools to do the right thing.

Looking at the current aminator code, it looks like the tag stops at the build number.

Expected

appversion = helloworld-1.0.0-1733109.h513/MY-HELLO-JOB/513

Observed

(from ami-44ffd201 in us-west-1)

appversion = helloworld-1.0.0-1733109.h513

I'm using this as an opportunity to learn py.test. Once I have it figured out, I'll need to loop back to make sure we've made it configurable (i.e. not hard-coded) so others can create expressive tags as well.
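
For reference, a sketch of the expected format from the example above (the field names are assumptions about where each component comes from):

    def format_appversion(package, version, commit, build_job, build_number):
        """e.g. helloworld-1.0.0-1733109.h513/MY-HELLO-JOB/513"""
        return '{0}-{1}-{2}.h{3}/{4}/{3}'.format(
            package, version, commit, build_number, build_job)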

Package fetching is failing for non-nflx repositories

I'm working on getting amination of 3rd-party packages to work on CentOS (public AMI from CentOS) and am pretty close. I've published an asgard RPM on bintray and have a Chef recipe to set up the aminating environment on CentOS.

When I try to run it, it fails due to the lookup against nflx.mirrors. Shouldn't this work as long as it can resolve the package? Alternatively, could we support aminating from a local package? I can do a yum downloadonly and pass in the package if that would help.

[/etc/yum.repos.d/bintray-netflixoss-netflix-asgard-rpm.repo]
[bintraybintray-netflixoss-netflix-asgard-rpm]
name=bintray-netflixoss-netflix-asgard-rpm
baseurl=http://dl.bintray.com/content/netflixoss/netflix-asgard-rpm
gpgcheck=0
enabled=1

[aminator@ip-10-190-90-39 ~]$ aminate -p asgard -B ami-86e15bef
2013-03-13 21:47:40 - looking up base AMI with ID ami-86e15bef
2013-03-13 21:47:40 - base_ami = CentOS-6-x86_64-20121031-ebs-adc4348e-1dc3-41df-b833-e86ba57a33d6-ami-8a8932e3.1(ami-86e15bef)
2013-03-13 21:47:40 - creator = aminator
2013-03-13 21:47:40 - pkg = asgard
2013-03-13 21:47:40 - ami_suffix = 201303132147
2013-03-13 21:47:40 - looking up package asgard
Traceback (most recent call last):
  File "/usr/bin/aminate", line 9, in <module>
    load_entry_point('aminator==0.9.0', 'console_scripts', 'aminate')()
  File "/usr/lib/python2.6/site-packages/aminator-0.9.0-py2.6.egg/aminator/cli.py", line 139, in run
    aminate_request = AminateRequest(pkg, baseami, ami_suffix, creator, ami_name=ami_name)
  File "/usr/lib/python2.6/site-packages/aminator-0.9.0-py2.6.egg/aminator/core.py", line 40, in __init__
    self.fetcher = PackageFetcher(pkg)
  File "/usr/lib/python2.6/site-packages/aminator-0.9.0-py2.6.egg/aminator/packagefetcher.py", line 50, in __init__
    with open('/etc/yum.repos.d/nflx.mirrors') as mirrors:
IOError: [Errno 2] No such file or directory: '/etc/yum.repos.d/nflx.mirrors'
[aminator@ip-10-190-90-39 ~]$

aminate exits with "successful termination" when provisioning fails

[...]
2013-06-12 15:11:53 [CRITICAL] Provisioning failed!
2013-06-12 15:12:10 [INFO] Amination complete!
$ echo $?
0

I believe this is due to some confusion about method return values in the provisioning call chain.

In environment.py, the provisioning loop returns False when provisioning fails:

    def provision(self):
        [...]
                        success = provisioner.provision()
                        if not success:
                            log.critical('Provisioning failed!')
                            return False
                    success = finalizer.finalize()
                    if not success:
                        log.critical('Finalizing failed!')
                        return False
        return None

But the call site in core.py seems to expect provision() to return a "truthy" value for failures:

    def aminate(self):
        with self.environment(self.config, self.plugin_manager) as env:
            error = env.provision()
            if not error:
                log.info('Amination complete!')
        return error

Further, the outermost call site in cli.py passes the return value of aminate() directly to sys.exit(), which interprets False as exit status 0.

It's not clear to me which one of these three places the code is "wrong"... but clearly they are wrong in aggregate. :)
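
One way to reconcile the three call sites (a sketch; any convention works so long as all three layers agree): let provision() answer "did it succeed?" and translate to an exit code only at the outermost layer.

    # environment.py: return True on success instead of the trailing
    # `return None`, keeping the existing False-on-failure returns.

    # core.py
    def aminate(self):
        with self.environment(self.config, self.plugin_manager) as env:
            success = env.provision()
            if success:
                log.info('Amination complete!')
        return success

    # cli.py: map the boolean to a conventional process exit status
    sys.exit(0 if Aminator(debug=args.debug).aminate() else 1)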

Create a chef-solo provisioner

Create a chef-solo provisioner that will run using local cookbooks.

This will require that chef-solo is already installed (from the base AMI) and that the cookbooks are present on the filesystem in a well-known location, typically delivered via a custom application package (e.g. nflx-base).

cc/ @bmoyles
cc/ @mtripoli
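
A rough sketch of the provision step (the chef-solo and solo.rb paths follow the contrived example earlier in this list; the node.json path is an assumption, and none of these come from aminator config):

    import subprocess

    def run_chef_solo(mountpoint):
        """Invoke the pre-installed chef-solo inside the chroot against
        cookbooks already delivered to the volume."""
        return subprocess.call(
            ['chroot', mountpoint, '/opt/chef/bin/chef-solo',
             '-c', '/var/chef/solo.rb', '-j', '/var/chef/node.json']) == 0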
