
Single-beat

Single-beat is a nice little application that ensures only one instance of your process runs across your servers.

Some processes, such as celery beat (or a daily mail sender, an orphan-file cleaner, etc.), need to run on only one server. But if that server goes down, you have to go and start the process on another server, and so on.

As we all hate doing things manually, single-beat automates this.

How

We use Redis as a lock server and wrap your process with single-beat on each server.

On the first server:

single-beat celery beat

On the second server:

single-beat celery beat

On the third server:

single-beat celery beat

Only one of them actually spawns the process; the others just wait until the running one dies, then one of them takes over.
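
Under the hood this is a simple Redis TTL lock with a heartbeat. Here is a minimal sketch of the pattern in Python with redis-py, using the default key name and timings described under Configuration below; it is only an illustration, not single-beat's actual code:

import os
import socket
import time

import redis

rds = redis.Redis()
key = "SINGLE_BEAT_celery"  # SINGLE_BEAT_<identifier>
value = "0:%s:%s" % (socket.gethostname(), os.getpid())  # fence token, host, pid

# Try to acquire the lock with an initial TTL (SINGLE_BEAT_INITIAL_LOCK_TIME).
if rds.set(key, value, nx=True, ex=10):
    # We hold the lock: spawn the child, then refresh the key every
    # SINGLE_BEAT_HEARTBEAT_INTERVAL with a TTL of SINGLE_BEAT_LOCK_TIME.
    while True:
        time.sleep(1)
        rds.set(key, value, ex=5)
else:
    # Another node holds the lock: stay in the waiting state and check again.
    pass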

Installation

sudo pip install single-beat

Configuration

You can configure single-beat with environment variables, like

SINGLE_BEAT_REDIS_SERVER='redis://redis-host:6379/1' single-beat celery beat

  • SINGLE_BEAT_REDIS_SERVER

    the Redis host URL; we pass it to redis-py's from_url

  • SINGLE_BEAT_REDIS_PASSWORD

    the Redis password, for use in Sentinel scenarios (Sentinel setups ignore SINGLE_BEAT_REDIS_SERVER)

  • SINGLE_BEAT_REDIS_SENTINEL

    use Redis Sentinel to select the Redis host. Sentinels are defined as a semicolon-separated list of hostname:port pairs, e.g. 192.168.1.10:26379;192.168.1.11:26379;192.168.1.12:26379

  • SINGLE_BEAT_REDIS_SENTINEL_MASTER (default mymaster)

  • SINGLE_BEAT_REDIS_SENTINEL_DB (default 0)

  • SINGLE_BEAT_REDIS_SENTINEL_PASSWORD

  • SINGLE_BEAT_IDENTIFIER

    by default we use your process name as the identifier, like

    single-beat celery beat

    all single-beat processes check a key named SINGLE_BEAT_celery. In some cases you might need to give another identifier, e.g. your project name:

    SINGLE_BEAT_IDENTIFIER='celery-beat' single-beat celery beat

  • SINGLE_BEAT_LOCK_TIME (default 5 seconds)

  • SINGLE_BEAT_INITIAL_LOCK_TIME (default 2 * SINGLE_BEAT_LOCK_TIME seconds)

  • SINGLE_BEAT_HEARTBEAT_INTERVAL (default 1 second)

    when your process starts, we set a key in the Redis server with a 10-second expiration (INITIAL_LOCK_TIME). Other single-beat processes check whether that key exists; if it does, they won't spawn children.

    We then keep updating that key every second (HEARTBEAT_INTERVAL), setting it with a TTL of 5 seconds (LOCK_TIME). This is the pattern sketched in the How section above.

    These defaults should work, but you might want to use more relaxed intervals, like:

    SINGLE_BEAT_LOCK_TIME=300 SINGLE_BEAT_HEARTBEAT_INTERVAL=60 single-beat celery beat

  • SINGLE_BEAT_HOST_IDENTIFIER (default socket.gethostname)

    we store the host name and the process id as the lock key's value (a fence token, the host identifier and the child pid, colon-separated), so you can check where your process lives:

    SINGLE_BEAT_IDENTIFIER='celery-beat' single-beat celery beat
    (env)$ redis-cli
    redis 127.0.0.1:6379> keys *
    1) "_kombu.binding.celeryev"
    2) "celery"
    3) "_kombu.binding.celery"
    4) "SINGLE_BEAT_celery-beat"
    redis 127.0.0.1:6379> get SINGLE_BEAT_celery-beat
    "0:aybarss-MacBook-Air.local:43213"
    redis 127.0.0.1:6379>

    SINGLE_BEAT_HOST_IDENTIFIER='192.168.1.1' SINGLE_BEAT_IDENTIFIER='celery-beat' single-beat celery beat
    (env)$ redis-cli
    redis 127.0.0.1:6379> keys *
    1) "SINGLE_BEAT_celery-beat"
    redis 127.0.0.1:6379> get SINGLE_BEAT_celery-beat
    "0:192.168.1.1:43597"

  • SINGLE_BEAT_LOG_LEVEL (default warn)

    Change the log level to debug if you want to see the heartbeat messages.

  • SINGLE_BEAT_WAIT_MODE (default heartbeat)

  • SINGLE_BEAT_WAIT_BEFORE_DIE (default 60 seconds)

    single-beat has two different modes:

    - heartbeat (default)
    - supervised

    In heartbeat mode, single-beat is responsible for everything: spawning the process, checking its status, publishing its state, etc. In supervised mode, single-beat starts, checks whether the child is already running somewhere, waits for a while (WAIT_BEFORE_DIE) and then exits, so that supervisord (or another process manager) picks it up and restarts single-beat.

    on the first server:

    SINGLE_BEAT_LOG_LEVEL=debug SINGLE_BEAT_WAIT_MODE=supervised SINGLE_BEAT_WAIT_BEFORE_DIE=10 SINGLE_BEAT_IDENTIFIER='celery-beat' single-beat celery beat -A example.tasks
    DEBUG:singlebeat.beat:timer called 0.100841999054 state=WAITING
    [2014-05-05 16:28:24,099: INFO/MainProcess] beat: Starting...
    DEBUG:singlebeat.beat:timer called 0.999553918839 state=RUNNING
    DEBUG:singlebeat.beat:timer called 1.00173187256 state=RUNNING
    DEBUG:singlebeat.beat:timer called 1.00134801865 state=RUNNING

    this will heartbeat every second; on your second server:

    SINGLE_BEAT_LOG_LEVEL=debug SINGLE_BEAT_WAIT_MODE=supervised SINGLE_BEAT_WAIT_BEFORE_DIE=10 SINGLE_BEAT_IDENTIFIER='celery-beat' single-beat celery beat -A example.tasks
    DEBUG:singlebeat.beat:timer called 0.101243019104 state=WAITING
    DEBUG:root:already running, will exit after 10 seconds

    so if you put this in your supervisord config:

    [program:celerybeat]
    environment=SINGLE_BEAT_IDENTIFIER="celery-beat",SINGLE_BEAT_REDIS_SERVER="redis://localhost:6379/0",SINGLE_BEAT_WAIT_MODE="supervised",SINGLE_BEAT_WAIT_BEFORE_DIE="10"
    command=single-beat celery beat -A example.tasks
    numprocs=1
    stdout_logfile=./logs/celerybeat.log
    stderr_logfile=./logs/celerybeat.err
    autostart=true
    autorestart=true
    startsecs=10

    supervisord will keep restarting single-beat, so it will try to spawn celery beat again every 10 seconds (WAIT_BEFORE_DIE).

CLI

Single-beat also has a simple CLI that tells you where your process is running, and that can pause single-beat, restart your process, etc.

"info" will show where the process is running. By default, the first part of each node identifier is the IP address the node uses to connect to Redis.

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli info
127.0.0.1:95779 | WAITING |
127.0.0.1:95776 | RUNNING | pid: 95778
127.0.0.1:95784 | WAITING |

"stop" will stop the child process, so any node can pick it up again:

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli stop
127.0.0.1:95776 | PAUSED | killed
127.0.0.1:95779 | WAITING |
127.0.0.1:95784 | WAITING |

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli info
127.0.0.1:95776 | WAITING |
127.0.0.1:95779 | WAITING |
127.0.0.1:95784 | RUNNING | pid: 95877

"restart" will restart the child process on the active node.

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli info
127.0.0.1:95776 | WAITING |
127.0.0.1:95779 | WAITING |
127.0.0.1:95784 | RUNNING | pid: 95877

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli restart
127.0.0.1:95776 | WAITING |
127.0.0.1:95779 | WAITING |
127.0.0.1:95784 | RESTARTING | killed

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli info
127.0.0.1:95776 | WAITING |
127.0.0.1:95779 | WAITING |
127.0.0.1:95784 | RUNNING | pid: 95905

"pause" will kill the child and put all single-beat nodes into the paused state. This is useful when deploying, to ensure that no old version of the code is running while the deploy is in progress. After the deploy you can "resume", and a node will pick up the child again.

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli info
127.0.0.1:95776 | WAITING |
127.0.0.1:95779 | WAITING |
127.0.0.1:95784 | RUNNING | pid: 95905

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli pause
127.0.0.1:95776 | PAUSED |
127.0.0.1:95779 | PAUSED |
127.0.0.1:95784 | PAUSED | killed

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli info
127.0.0.1:95776 | PAUSED |
127.0.0.1:95779 | PAUSED |
127.0.0.1:95784 | PAUSED |

"resume" will put the single-beat nodes back into the waiting state, so one of them will pick up the child:

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli info
127.0.0.1:95776 | PAUSED |
127.0.0.1:95779 | PAUSED |
127.0.0.1:95784 | PAUSED |

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli resume
127.0.0.1:95776 | WAITING |
127.0.0.1:95784 | WAITING |
127.0.0.1:95779 | WAITING |

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli info
127.0.0.1:95776 | WAITING |
127.0.0.1:95784 | WAITING |
127.0.0.1:95779 | RUNNING | pid: 96025

"quit" will terminate all child processes and then the single-beat processes themselves, so no live single-beat nodes remain. It's useful as a kind of hand brake, and can also be handy with blue/green or canary-style deployments.

(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli quit
127.0.0.1:95784 | RUNNING |

(venv3) $
(venv3) $ SINGLE_BEAT_IDENTIFIER=echo SINGLE_BEAT_LOG_LEVEL=critical SINGLE_BEAT_REDIS_SERVER=127.0.0.1 single-beat-cli info

Usage Patterns

You can see an example of usage with supervisord at example/celerybeat.conf.

Why

There are other solutions, but they are either complicated or require you to modify the process. I couldn't find a simpler solution for celery/celery#251 that didn't involve modifying my tasks or adding locks to them.

You can also check uWSGI's Legion support, which can do the same thing.

Credits

Contributors

alibozorgkhan, chripede, demohu-22, edmund-wagner, fredpalmer, g-cassie, johnjameswhitman, lowks, robvdl, s4ke, truetug, trunneml, ybrs, ybrsmm


Issues

Should SIGTERM immediately expire the lock?

First off, thanks! This works great.

One thing I noticed when trying this out is that during a redeploy, we have to wait until the lock has timed out before the next single-beat process can pick up the slack. While I applaud the simplicity of this approach, it can be an issue for deployments that use the lock time of 300 with an interval of 60 suggested in the README.

Would it be feasible to have the current beat process, on receiving a SIGTERM, release the lock in Redis immediately after its child terminates?
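
For illustration, a minimal sketch of the suggested behaviour; the key name and Redis client here are assumptions, not single-beat's actual code:

import signal
import sys

import redis

rds = redis.Redis()
LOCK_KEY = "SINGLE_BEAT_celery-beat"  # assumed identifier

def on_sigterm(signum, frame):
    # ...terminate the child process first, then drop the lock so another
    # node can take over immediately instead of waiting for the TTL to expire.
    rds.delete(LOCK_KEY)
    sys.exit(0)

signal.signal(signal.SIGTERM, on_sigterm)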

Setup CI

Disclaimer: I have not set up any CI tooling on GitHub before.

I think it would be great if we could add some CI tooling to make sure that the tests run on each PR. What are our options here?

replace pyuv with tornado, make single-beat redis sentinel aware

I'm thinking about replacing pyuv with tornado for a few reasons, mainly because of the Redis Sentinel setup: if the master goes down, a slave takes over, but single-beat needs to be updated/restarted, which requires manual intervention.

For that to work, I need to subscribe to Sentinel's pub/sub channel asynchronously, but there is no async Redis client for the pyuv loop. So simply replacing it with tornado and using https://github.com/leporo/tornado-redis might be a better solution.

0.4.2 release

Hi, would you be able to organise a release at some point?

It would be nice to get the last few changes released to PyPI (PR#25 and PR#27).

I know there was also the logger flush thing, but I can live with that; I remember you left a comment on the issue about making it optional, so it can be done later.

Anyway, I would like to see another release of what we have in master now.

single-beat depends on a vulnerable version of tornado

We just got a trigger from our security scanner that tornado <= 6.1 has a CVE.

single-beat depends on tornado<6, which causes it to install 5.1.1.

Can we please raise the upper limit to <7.0.0? Is there any reason it's pinned to that older version?

Happy to create a PR for it.

In Process.spawn_process use self.args instead of sys.argv

Because the args are passed to the Process constructor, spawn_process should use self.args instead of sys.argv; otherwise I cannot pass custom args unless I also override sys.argv.

I have my own Python script that constructs Process and calls run() itself; that is how I found out that spawn_process does not use self.args.
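
For illustration, a sketch of the usage described above (the script name is hypothetical, and the constructor and run() are as described in this issue, not verified against a particular release):

from singlebeat.beat import Process

# The custom command is passed to the Process constructor and stored as self.args...
process = Process(["python", "my_job.py", "--custom-flag"])

# ...but spawn_process currently builds the child command from sys.argv, so
# the args above are ignored unless sys.argv is overridden as well.
process.run()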

Do you want me to send a patch?

Redis returns a bytes-like object instead of str

Hi!
When I run the code in our environment we get this error:

ERROR:tornado.application:Exception in callback <bound method Process.timer_cb of <singlebeat.beat.Process object at 0x7f0f8b7bcdd8>>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/tornado-5.1.1-py3.6-linux-x86_64.egg/tornado/ioloop.py", line 1229, in _run
    return self.callback()
  File "/usr/local/lib/python3.6/site-packages/single_beat-0.3.0-py3.6.egg/singlebeat/beat.py", line 139, in timer_cb
    fn()
  File "/usr/local/lib/python3.6/site-packages/single_beat-0.3.0-py3.6.egg/singlebeat/beat.py", line 120, in timer_cb_running
    redis_fence_token = rds.get("SINGLE_BEAT_{identifier}".format(identifier=self.identifier)).split(":")[0]
TypeError: a bytes-like object is required, not 'str'

I think it's probably because rds.get(...) returns bytes instead of str under Python 3.
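
One possible fix, for illustration: split with a bytes separator, or decode the value first.

# rds.get() returns bytes under Python 3, so split with a bytes separator:
raw = rds.get("SINGLE_BEAT_{identifier}".format(identifier=self.identifier))
redis_fence_token = raw.split(b":")[0]

# ...or decode to str and keep splitting on a str separator:
redis_fence_token = raw.decode("utf-8").split(":")[0]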

License Info in setup.py

Hi,

single-beat is really cool, and I discovered one small thing:
When listing all the packages used in a project with pip-licenses, the single-beat lib shows up with license "unknown".
Perhaps you want to add this info to setup.py to make it complete.
More info: https://pypi.org/project/pip-licenses/

Greetings,

Felix

Fence token not always set

I'm running on Heroku.

2019-01-13T16:20:08.876536+00:00 app[worker.1]: ERROR:tornado.application:Exception in callback <bound method Process.timer_cb of <singlebeat.beat.Process object at 0x7f2d46a19080>>
2019-01-13T16:20:08.876546+00:00 app[worker.1]: Traceback (most recent call last):
2019-01-13T16:20:08.876552+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/tornado/ioloop.py", line 1229, in _run
2019-01-13T16:20:08.876554+00:00 app[worker.1]: return self.callback()
2019-01-13T16:20:08.876555+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/singlebeat/beat.py", line 139, in timer_cb
2019-01-13T16:20:08.876557+00:00 app[worker.1]: fn()
2019-01-13T16:20:08.876559+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/singlebeat/beat.py", line 120, in timer_cb_running
2019-01-13T16:20:08.876560+00:00 app[worker.1]: redis_fence_token = rds.get("SINGLE_BEAT_{identifier}".format(identifier=self.identifier)).split(b":")[0]
2019-01-13T16:20:08.876752+00:00 app[worker.1]: AttributeError: 'NoneType' object has no attribute 'split'
~ $ pip3 show single-beat
Name: single-beat
Version: 0.3.1
Summary: ensures only one instance of your process across your servers
Home-page: https://github.com/ybrs/single-beat
Author: None
Author-email: None
License: UNKNOWN
Location: /app/.heroku/python/lib/python3.6/site-packages
Requires: redis, tornado
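
For illustration, one way to guard against the missing key (a sketch, not the project's actual fix):

raw = rds.get("SINGLE_BEAT_{identifier}".format(identifier=self.identifier))
if raw is None:
    # The lock key expired or was never set: treat the lock as lost
    # instead of crashing on None.split().
    redis_fence_token = None
else:
    redis_fence_token = raw.split(b":")[0]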

Drop tornado dependency

With our changes in 0.5.0 we could remove the tornado dependency altogether, as we only use it for small parts of the logic. We opted not to do that for 0.5.0 because it would have meant too many changes at once. It would still be useful, though, as it would no longer block the upgrade path of other dependencies just because single-beat is installed and requires an "old" version of tornado.

show logging from underlying celery beat instance

Is stdout for the child beat instance only printed on termination of the parent single-beat instance?

I couldn't see a way to show this output, meaning my journalctl logs don't show task scheduling.

Is there a way to always print celery output to stdout in real time?

SINGLE_BEAT_WAIT_MODE=supervised is buggy when restarting "too fast"

I was using supervisord, and with the SINGLE_BEAT_WAIT_MODE=supervised environment variable, if I restart my supervisord process "too fast", the child is terminated and does not come back. This is with just one instance.

sudo supervisorctl restart beat   # but the child disappears now

But this would work fine

sudo supervisorctl stop beat
# wait a second...
sudo supervisorctl start beat

By using SINGLE_BEAT_WAIT_MODE=heartbeat the issue went away, so it seems there is something not quite right with SINGLE_BEAT_WAIT_MODE=supervised.

When refreshing the lock, single-beat should check whether the lock is still held.

Hello, I was looking for an HA solution for celery beat and found this project. Thanks for your work; it is useful to me.

But while reading the code, I found one possible problem with how the lock is refreshed/extended.

The lock should be checked before it is extended, so a Lua script should be used to check and extend the lock in one step; this ensures atomicity.

Finally, we also need to check the result of the refresh/extend: it is possible that it failed, meaning the lock was lost or there was a network error.


Like this (from redbeat):
https://github.com/sibson/redbeat/blob/5d1d5c154d2e080d3135afee235321abf5d2da5b/redbeat/schedulers.py#L37

    local token = redis.call('get', KEYS[1])
    if not token or token ~= ARGV[1] then
        return 0
    end
    local expiration = redis.call('pttl', KEYS[1])
    if not expiration then
        expiration = 0
    end
    if expiration < 0 then
        return 0
    end
    redis.call('pexpire', KEYS[1], ARGV[2])
    return 1

or like this (from a golang redlock lib):
https://github.com/go-redsync/redsync/blob/311e82d385dcf0b228cb1b4974b690f8bb2dc127/mutex.go#L254

	if redis.call("GET", KEYS[1]) == ARGV[1] then
		return redis.call("PEXPIRE", KEYS[1], ARGV[2])
	else
		return 0
	end
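
For illustration, such a script could be registered and invoked with redis-py along these lines (the key name and token value are assumptions):

import redis

# Extend the lock only if we still hold it (compare, then pexpire, atomically).
EXTEND_LOCK = """
if redis.call("GET", KEYS[1]) == ARGV[1] then
    return redis.call("PEXPIRE", KEYS[1], ARGV[2])
else
    return 0
end
"""

rds = redis.Redis()
extend_lock = rds.register_script(EXTEND_LOCK)

my_token = b"0:myhost:12345"  # the value written when the lock was acquired

# Returns 1 only while we still hold the lock; 0 means it was lost.
if not extend_lock(keys=["SINGLE_BEAT_celery-beat"], args=[my_token, 5000]):
    # The lock expired or was taken over: stop the child process here.
    pass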

When the Redis Sentinel connection times out, the program crashes

If Redis Sentinel mode is used and a network failure causes a Redis connection timeout, the program crashes.
The program needs to handle the various network issues involved in connecting to Redis gracefully, instead of crashing outright.

[screenshot: singlebeat-crash2]
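
For illustration, a sketch of handling connection errors around Sentinel with redis-py instead of letting them propagate (hosts and the master name are placeholders):

import time

from redis.exceptions import ConnectionError, TimeoutError
from redis.sentinel import Sentinel

sentinel = Sentinel([("192.168.1.10", 26379)], socket_timeout=1.0)

while True:
    try:
        master = sentinel.master_for("mymaster", socket_timeout=1.0)
        master.ping()
        break
    except (ConnectionError, TimeoutError):
        # Network trouble talking to Sentinel/Redis: back off and retry
        # instead of crashing the whole process.
        time.sleep(1)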

feature: remote control

Just an issue to track myself.

When single-beat runs on more than a few servers, the biggest issues are: how can I restart it, where are the logs now, is it really working, did it spawn, etc.

So the next version will have some kind of CLI with a tail command, plus stop/restart/pause commands.

License file?

It would be great to add a LICENSE file to clarify under what conditions this can be used. BSD, MIT, etc.

error in single-beat setup command

Trying to install, I get this error:

...
Collecting simplejson==3.8.2 (from -r requirements.txt (line 181))
  Downloading simplejson-3.8.2.tar.gz (76kB)
Collecting single-beat==0.1.6 (from -r requirements.txt (line 182))
  Downloading single-beat-0.1.6.tar.gz
    Complete output from command python setup.py egg_info:
    error in single-beat setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers; Invalid requirement, parse error at "'< 1.0.0'"

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-kEWaXB/single-beat
ERROR: Service 'testweb' failed to build: The command '/bin/sh -c pip install -r requirements.txt -U' returned a non-zero code: 1

Why buffer subprocess output rather than write directly to stdout/stderr?

It appears that the buffering of the subprocess output interferes with shell redirection. Observe the lack of output in either case:

single-beat python test/long_waiting_process.py | less
single-beat python test/long_waiting_process.py > log

I'm curious what the rationale is for buffering this output rather than sending it directly to sys.stdout / sys.stderr via the patch below:

diff --git a/singlebeat/beat.py b/singlebeat/beat.py
index b07c7de..82c1658 100644
--- a/singlebeat/beat.py
+++ b/singlebeat/beat.py
@@ -194,12 +194,6 @@ class Process(object):
         if self.state == State.PAUSED:
             self.state = State.WAITING
 
-    def stdout_read_cb(self, data):
-        sys.stdout.write(data)
-
-    def stderr_read_cb(self, data):
-        sys.stderr.write(data)
-
     async def timer_cb_paused(self):
         pass
 
@@ -324,27 +318,13 @@ class Process(object):
             await self.timer_cb()
             await asyncio.sleep(config.HEARTBEAT_INTERVAL)
 
-    async def _read_stream(self, stream, cb):
-        decoder = codecs.getincrementaldecoder('utf-8')(errors='strict')
-
-        while True:
-            line = await stream.read(100)
-            if line:
-                cb(decoder.decode(line))
-            else:
-                break
-
     async def spawn_process(self):
-        cmd = self.args
-        env = os.environ
-
         self.state = State.RUNNING
+
         try:
             self.sprocess = await asyncio.create_subprocess_exec(
-                *cmd,
-                env=env,
-                stdout=asyncio.subprocess.PIPE,
-                stderr=asyncio.subprocess.PIPE
+                *self.args,
+                env=os.environ
             )
         except FileNotFoundError:
             """
@@ -353,13 +333,9 @@ class Process(object):
             """
             logger.exception("file not found")
             return self.child_exit_cb(1)
+
         try:
-            await asyncio.wait(
-                [
-                    asyncio.create_task(self._read_stream(self.sprocess.stdout, self.forward_stdout)),
-                    asyncio.create_task(self._read_stream(self.sprocess.stderr, self.forward_stderr)),
-                ]
-            )
+            await self.sprocess.wait()
             self.child_exit_cb(self.sprocess.returncode)
         except SystemExit as e:
             os._exit(e.code)
@@ -488,12 +464,6 @@ class Process(object):
         async for msg in self.async_redis.listen():
             self.pubsub_callback(msg)
 
-    def forward_stdout(self, buf):
-        self.stdout_read_cb(buf)
-
-    def forward_stderr(self, buf):
-        self.stderr_read_cb(buf)
-
 
 async def run_process():
     process = Process(sys.argv[1:])
