
pyleus's People

Contributors

bgirardeau, ecanzonieri, hellp, imcom, johanneshk, jswetzen, justquick, mdaniel, melvinross, msmakhlouf, mzbyszynski, patricklucas, poros, tomelm


pyleus's Issues

Add option to specify nimbus thrift port

Ugh I'm not sure how we missed this, but there is no way to specify the Nimbus thrift port if it is not the default.

If you use the Storm CLI to interact with a cluster, you specify the host with -c nimbus.host=10.1.2.3 like we do, but to specify the port, the option is -c nimbus.thrift.port=12345.

We have two options: combine them into a single host/port pair and pass both when both are specified, requiring only the host; or have two separate options, e.g. --nimbus-host and --nimbus-port. One could also argue for mirroring the arguments the Storm CLI itself uses and calling the latter --nimbus-thrift-port instead.
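A minimal sketch of the second approach (the flag names are illustrative, not the final pyleus options), mapping separate host/port flags onto the Storm CLI's -c overrides; 6627 is Storm's default nimbus.thrift.port:

```python
import argparse

# Illustrative flag names; not necessarily what pyleus will ship.
parser = argparse.ArgumentParser(prog="pyleus submit")
parser.add_argument("-n", "--nimbus-host", dest="nimbus_host")
parser.add_argument("-p", "--nimbus-port", dest="nimbus_port", type=int, default=6627)

args = parser.parse_args(["-n", "10.1.2.3", "-p", "12345"])

# Translate into the -c overrides the Storm CLI expects.
storm_args = [
    "-c", "nimbus.host=%s" % args.nimbus_host,
    "-c", "nimbus.thrift.port=%d" % args.nimbus_port,
]
```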

Since this is at odds with #32, I'll hold off on merging that for now.

pyleus build fails on OS X

I believe this has to do with filesystem case-insensitivity.

(venv)140-sfoengwifi54-227:~ plucas$ pyleus -v build dev/oss/pyleus/examples/word_count/pyleus_topology.yaml
Traceback (most recent call last):
  File "/Users/plucas/venv/bin/pyleus", line 6, in <module>
    main()
  File "/Users/plucas/venv/lib/python2.7/site-packages/pyleus/cli/cli.py", line 52, in main
    args.func(args)
  File "/Users/plucas/venv/lib/python2.7/site-packages/pyleus/cli/commands/subcommand.py", line 105, in run_subcommand
    self.run(configs)
  File "/Users/plucas/venv/lib/python2.7/site-packages/pyleus/cli/commands/build_subcommand.py", line 40, in run
    build_topology_jar(configs)
  File "/Users/plucas/venv/lib/python2.7/site-packages/pyleus/cli/build.py", line 275, in build_topology_jar
    verbose=configs.verbose,
  File "/Users/plucas/venv/lib/python2.7/site-packages/pyleus/cli/build.py", line 183, in _create_pyleus_jar
    zip_file.extractall(tmp_dir)
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/zipfile.py", line 1036, in extractall
    self.extract(zipinfo, path, pwd)
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/zipfile.py", line 1024, in extract
    return self._extract_member(member, path, pwd)
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/zipfile.py", line 1075, in _extract_member
    os.mkdir(targetpath)
OSError: [Errno 17] File exists: '/var/folders/fc/0l70g4297v19fn7wxml_72l1sh2cw1/T/tmplB1YHA/META-INF/license'
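The collision is between jar entries whose paths differ only by case (e.g. META-INF/LICENSE vs. META-INF/license), which cannot coexist on OS X's default case-insensitive filesystem. A hedged sketch of one possible fix, skipping case-colliding members instead of calling extractall directly:

```python
import zipfile

def extract_case_safe(zip_path, dest):
    """Extract a jar/zip, skipping members whose paths collide with an
    earlier member when compared case-insensitively, which would otherwise
    make zipfile.extractall fail on a case-insensitive filesystem."""
    seen = set()
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            # Normalize: lowercase and drop the trailing slash directories carry.
            key = info.filename.lower().rstrip("/")
            if key in seen:
                continue  # first occurrence wins
            seen.add(key)
            zf.extract(info, dest)
```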

Don't _ensure_storm_path_in_configs for 'pyleus build'

A Storm installation is not strictly required if you are just building a topology. For example, you might want to just build a topology JAR and move that around yourself before actually submitting it.

It should be fairly simple to only do this check when the storm CLI is required.

Stdin and stdout pyleus debug mode

18/02/2014

As suggested by @patricklucas, it would be nice to have a way to read a bunch of lines from stdin and feed them to a bolt as if they were tuples. Then the bolt should emit on stdout and become an awesome debug mechanism. In the end, you might really be able to execute your whole topology locally just by piping bolts!

18/02/2014

This needs a bit of iteration. We should focus on defining a simple interface, defining a pseudo-language for writing debug streams, and finding a way to pass context and conf.

09/07/2014 - from a first review:

These changes allow "hacky" stdin/stdout debugging of a wide range of bolts and spouts.

cat debug_input.txt | python component.py --debug (or <debug_input.txt, of course) will print on stdout all the messages emitted by the component. Multiple output streams shouldn't be a problem.

Here debug_input.txt is a file containing the JSON storm-multilang-compliant representation of the component's incoming tuples.

Options can be passed in JSON format:
cat input3.log | python bandwith_monitoring/traffic_aggregator.py --debug --options '{"time_window":2, "threshold":100}'

Conf is created with the tick_freq_secs attribute set to 1, so that the property we defined for the conf dictionary does not fail when called.

Tick tuples and next commands for spouts can be specified in the input file.
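For reference, a sketch of what one entry in such a debug input file might look like, assuming the JSON flavor of the Storm multilang protocol (each message is a JSON object followed by a line containing only "end"); the field values here are made up:

```python
import json

# Hypothetical tuple message for debug_input.txt; all values are illustrative.
tuple_msg = {
    "id": "-6955786537413359385",   # tuple id, used for ack/fail
    "comp": "line-spout",           # component that produced the tuple
    "stream": "default",            # stream it was emitted on
    "task": 9,                      # task id of the producer
    "tuple": ["an example line"],   # the tuple values themselves
}

# One entry in the debug stream: the JSON message plus the "end" delimiter.
entry = json.dumps(tuple_msg) + "\nend\n"
```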

10/07/2014

I made up my mind that the easiest way to help people write debug streams and, at the same time, avoid crazy changes to components is to provide another pyleus command able to produce a Storm-protocol-compliant debug stream. From the user's point of view, it may be a bit uncomfortable to "compile" a hand-made stream every time something changes, but if you imagine an invocation like this, it makes much more sense:

pyleus input debug_stream.txt | python spout.py --debug | python bolt1.py --debug | python bolt2.py --debug

Here is the list of benefits:

  • The changes done for debugging are minimal and do not impact performance (except for an if self.debug in emit; we can duplicate the function if you think that's a big loss, since it's called for potentially thousands of tuples per second)
  • Courageous users can still write their input streams by hand and cover all the weird cases they like (multiple streams, different task ids, etc.). Eventually the tool will evolve enough to define keywords for all these cases.
  • It would be way easier to change if we move to a binary encoded format (users will still need a plain text way to specify streams)
  • I do not have to reinvent self.read_msg, self._is_cmd, self._is_taskid, duplicate code, and do very weird stuff. That means the code will be less error-prone.

10/07/2014 - from a second review

Limitations so far:

  • Tick tuple frequency is located in the yaml file and is not accessible from the bolt. It defaults to 1 for every component in debug mode.
  • Impossible to concatenate components. This comes from three facts:
    1. components send messages like "sync", "ack", "fail". We can remove those messages by muting sync, ack, and fail (either with monkey patching or by checking self.debug) in debug mode, but users looking at the output of a component may be interested in these kinds of messages. A different solution is modifying all the reading functions to filter them in debug mode (it's annoying and error-prone, but we can do that)
    2. components send the command "emit" instead of the tuple messages that will be used as input for downstream components. Again, we can modify either the emit function or the reading functions to account for that.
    3. this is kinda unsolvable with our current infrastructure. Components requiring a tick tuple won't receive it if they are downstream. Only users can put tick tuples manually into input streams. The solution may be writing an internal timer that sends tick tuples. However, it is very unlikely that users will provide input streams long enough to last a second or more...

In addition, solving these issues comes really close to rewriting Storm in Python for local runs... (the Ruby wrapper for Storm actually does that)

11/10/2014 - from the aforementioned review
@patricklucas:

Do you have documentation for this input "language"? One idea is to declare this feature "experimental" and fill in the blanks later.

@poros:

I moved this ticket out of version 0.2, since the branch is quite old and I still need to cope with the messagepack addition. As soon as I get a chance to work on this feature again, I'll write documentation, too. Do you want me to close the review and re-open it when the time comes?

Make 'pyleus build' faster

@patricklucas:

Waiting for a topology to build during development is tedious, let's try to speed it up.

  • Re-use pyleus_venv from previous build if requirements.txt hasn't changed
  • Otherwise cache wheels (curdling?)
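A sketch of the first bullet, assuming the reuse decision hinges on a checksum of requirements.txt stored next to the venv (the stamp file name is invented):

```python
import hashlib
import os

def requirements_digest(path):
    """Checksum of requirements.txt; an unchanged digest means the cached
    pyleus_venv from the previous build can be reused."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def venv_is_reusable(venv_dir, requirements_path):
    """True if a cached venv exists and was built from an identical requirements.txt."""
    stamp = os.path.join(venv_dir, ".requirements.sha256")
    if not (os.path.isdir(venv_dir) and os.path.exists(stamp)):
        return False
    with open(stamp) as f:
        return f.read().strip() == requirements_digest(requirements_path)

def stamp_venv(venv_dir, requirements_path):
    """Record the digest after a successful build."""
    with open(os.path.join(venv_dir, ".requirements.sha256"), "w") as f:
        f.write(requirements_digest(requirements_path))
```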

Moved from PYLEUS-35

Add cprofile feature to pyleus

@patricklucas:

We could add an option to components profile_probability that causes profiles to automatically be captured with the specified probability and emitted via a special stream.

Additionally, add a top-level option profile_enabled so it can be toggled easily.
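A sketch of how profile_probability could work: wrap a component's tuple handler so a random fraction of calls runs under cProfile; the emit_profile callback standing in for the proposed special stream is an assumption:

```python
import cProfile
import io
import pstats
import random

def maybe_profile(process_tuple, probability, emit_profile=print):
    """Wrap a tuple handler; with the given probability, profile a call and
    hand the formatted stats to emit_profile (a stand-in for emitting them
    on a dedicated profile stream)."""
    def wrapper(tup):
        if random.random() >= probability:
            return process_tuple(tup)  # fast path: no profiling overhead
        profiler = cProfile.Profile()
        try:
            return profiler.runcall(process_tuple, tup)
        finally:
            out = io.StringIO()
            pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
            emit_profile(out.getvalue())
    return wrapper
```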

Moved from PYLEUS-81

Set output_field of a bolt using configuration file?

Hi,

I'm trying to solve the following problem with pyleus: I have a processor bolt that processes data (wow). Depending on options in some config file, this processing bolt should emit data on different streams. I have multiple processing bolts in my topology, all configured in different ways.
Essentially, the processing bolt matches several queries against input streams in different places of the topology. The queries are user-specified. Each match of a query should be emitted on its own stream (such that downstream components only get those matches to which they subscribed).

Problem: The definition of the output_fields is static and the same for all instances of the processor bolt. This would not be a problem if I could either specify the output_fields at runtime, once the processor has parsed its configuration, or put the output stream configuration into pyleus_topology.yaml. Neither is possible. I wonder if you have an idea how to tackle this problem.

Long story short: Is there a way to set the output fields of a component more flexibly? Preferably I would like to set output fields in pyleus_topology.yaml on a 'per component' basis.

A workaround may be to define a number of dummy output_fields in the processor bolt and use these to communicate a varying number of query matches.

Another workaround: each downstream component gets all matches and has to filter for the interesting ones. Of course, this produces unnecessary communication…

Right now I have my own config file and use a script to create the input file for pyleus.

exclamation_topology java.lang.RuntimeException

I got a runtime exception when I submit exclamation_topology to my Storm cluster, in both spouts and bolts.

java.lang.RuntimeException: Error when launching multilang subprocess Traceback (most recent call last): File "/usr/lib64/python2.6/runpy.py", line 122, in _run_module_as_main "main", fname,

python: 2.6.6
Java:1.6
storm:0.9.2

chdir to the topology dir during build

It should be valid for a user to have a relative path to a package in their requirements.txt, but currently if they run pyleus build from outside the topology directory, the command will fail.

chdir-ing to the topology directory before running virtualenv commands should fix this.
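A minimal sketch of that fix: a context manager that switches into the topology directory for the duration of the virtualenv commands and always restores the previous working directory:

```python
import os
from contextlib import contextmanager

@contextmanager
def chdir(path):
    """Temporarily change the working directory, restoring it afterwards,
    so relative paths in requirements.txt resolve against the topology dir."""
    saved = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(saved)
```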

Add support for Trident topologies

By using Storm's guaranteed message processing mechanics one can achieve fault tolerance, but updates to a store may be duplicated when a retry happens after a failure.

Applications that require exactly-once semantics, like an update to a Cassandra column, would greatly benefit from support for writing pyleus topologies that use the Trident API.

Make components "print" safe

Components shouldn't crash when people call "print" in their components. Let's redirect print to log streams.

Investigate rebuilding the virtualenv at runtime

There is a class of known problems with moving around virtualenvs, not the least of which is that it doesn't work if the same python version isn't installed in the same place between the build machine and Storm machine.

It is possible that doing the following instead of including a compiled virtualenv in the JAR won't incur that much overhead:

  • download all dependency packages into the tmpdir
  • Build a virtualenv and install the dependencies from disk so pyleus can run --describe on each component
  • Include the sdists/whls instead of the virtualenv in the JAR
  • Before starting up, build a virtualenv from the included sdists/whls
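The two halves of that plan might look roughly like this, driving pip through subprocess (pip's download flags have changed across versions, so treat the exact invocations as illustrative):

```python
import subprocess
import sys

def download_dependencies(requirements, dest_dir):
    """Build machine: fetch sdists/wheels into a directory that goes into the jar."""
    subprocess.check_call([sys.executable, "-m", "pip", "download",
                           "-r", requirements, "-d", dest_dir])

def build_runtime_venv(venv_dir, requirements, packages_dir):
    """Storm machine: create a venv before startup and install offline from
    the packages bundled in the jar (no network access needed)."""
    subprocess.check_call([sys.executable, "-m", "virtualenv", venv_dir])
    subprocess.check_call([venv_dir + "/bin/pip", "install", "--no-index",
                           "--find-links", packages_dir, "-r", requirements])
```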

a complete kafka spout example needed

The README mentions that you can

use the Kafka spout built into Storm with only a YAML change

What does this mean? How do I use it? Can you show me a complete Kafka spout example using pyleus?

Thanks!

Error when running examples or dummy sample

I created the structure for the dummy sample and copied the code, tried to submit it to a single-node cluster on my local machine, and I keep getting these errors in the logs:

2014-12-15T14:42:14.739+0300 my-first-bolt [INFO] No handlers could be found for logger "pyleus.storm.component"

2014-12-15T14:42:14.743+0300 b.s.d.executor [ERROR]
java.lang.Exception: Shell Process Exception: Traceback (most recent call last):
File "/home/mak/qarti/apache-storm-0.9.3/storm-local/supervisor/stormdist/my_first_topology-1-1418643533/resources/pyleus_venv/lib/python2.7/site-packages/pyleus/storm/component.py", line 233, in run
self.run_component()
File "/home/mak/qarti/apache-storm-0.9.3/storm-local/supervisor/stormdist/my_first_topology-1-1418643533/resources/pyleus_venv/lib/python2.7/site-packages/pyleus/storm/bolt.py", line 46, in run_component
self._process_tuple(tup)
File "/home/mak/qarti/apache-storm-0.9.3/storm-local/supervisor/stormdist/my_first_topology-1-1418643533/resources/pyleus_venv/lib/python2.7/site-packages/pyleus/storm/bolt.py", line 164, in _process_tuple
self.process_tuple(tup)
File "/home/mak/qarti/apache-storm-0.9.3/storm-local/supervisor/stormdist/my_first_topology-1-1418643533/resources/my_first_topology/dummy_bolt.py", line 11, in process_tuple
sentence, name = tup.values
ValueError: need more than 0 values to unpack

at backtype.storm.task.ShellBolt.handleError(ShellBolt.java:188) [storm-core-0.9.3.jar:0.9.3]
at backtype.storm.task.ShellBolt.access$1100(ShellBolt.java:69) [storm-core-0.9.3.jar:0.9.3]
at backtype.storm.task.ShellBolt$BoltReaderRunnable.run(ShellBolt.java:331) [storm-core-0.9.3.jar:0.9.3]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]

2014-12-15T14:42:14.747+0300 b.s.t.ShellBolt [ERROR] Halting process: ShellBolt died.
java.io.IOException: Broken pipe
at java.io.FileOutputStream.writeBytes(Native Method) ~[na:1.7.0_72]
at java.io.FileOutputStream.write(FileOutputStream.java:345) ~[na:1.7.0_72]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[na:1.7.0_72]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[na:1.7.0_72]
at java.io.DataOutputStream.flush(DataOutputStream.java:123) ~[na:1.7.0_72]
at com.yelp.pyleus.serializer.MessagePackSerializer.writeMessage(MessagePackSerializer.java:208) ~[stormjar.jar:na]
at com.yelp.pyleus.serializer.MessagePackSerializer.writeBoltMsg(MessagePackSerializer.java:181) ~[stormjar.jar:na]
at backtype.storm.utils.ShellProcess.writeBoltMsg(ShellProcess.java:106) ~[storm-core-0.9.3.jar:0.9.3]
at backtype.storm.task.ShellBolt$BoltWriterRunnable.run(ShellBolt.java:361) ~[storm-core-0.9.3.jar:0.9.3]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
2014-12-15T14:42:14.747+0300 b.s.d.executor [ERROR]
java.io.IOException: Broken pipe
at java.io.FileOutputStream.writeBytes(Native Method) ~[na:1.7.0_72]
at java.io.FileOutputStream.write(FileOutputStream.java:345) ~[na:1.7.0_72]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[na:1.7.0_72]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[na:1.7.0_72]
at java.io.DataOutputStream.flush(DataOutputStream.java:123) ~[na:1.7.0_72]
at com.yelp.pyleus.serializer.MessagePackSerializer.writeMessage(MessagePackSerializer.java:208) ~[stormjar.jar:na]
at com.yelp.pyleus.serializer.MessagePackSerializer.writeBoltMsg(MessagePackSerializer.java:181) ~[stormjar.jar:na]
at backtype.storm.utils.ShellProcess.writeBoltMsg(ShellProcess.java:106) ~[storm-core-0.9.3.jar:0.9.3]
at backtype.storm.task.ShellBolt$BoltWriterRunnable.run(ShellBolt.java:361) ~[storm-core-0.9.3.jar:0.9.3]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
2014-12-15T14:42:14.775+0300 b.s.t.ShellBolt [ERROR] Halting process: ShellBolt died.
java.io.EOFException: null
at org.msgpack.io.StreamInput.readByte(StreamInput.java:60) ~[stormjar.jar:na]
at org.msgpack.unpacker.MessagePackUnpacker.getHeadByte(MessagePackUnpacker.java:66) ~[stormjar.jar:na]
at org.msgpack.unpacker.MessagePackUnpacker.trySkipNil(MessagePackUnpacker.java:396) ~[stormjar.jar:na]
at org.msgpack.template.MapTemplate.read(MapTemplate.java:59) ~[stormjar.jar:na]
at org.msgpack.template.MapTemplate.read(MapTemplate.java:27) ~[stormjar.jar:na]
at org.msgpack.template.AbstractTemplate.read(AbstractTemplate.java:31) ~[stormjar.jar:na]
at org.msgpack.MessagePack.read(MessagePack.java:527) ~[stormjar.jar:na]
at org.msgpack.MessagePack.read(MessagePack.java:496) ~[stormjar.jar:na]
at com.yelp.pyleus.serializer.MessagePackSerializer.readMessage(MessagePackSerializer.java:198) ~[stormjar.jar:na]
at com.yelp.pyleus.serializer.MessagePackSerializer.readShellMsg(MessagePackSerializer.java:74) ~[stormjar.jar:na]
at backtype.storm.utils.ShellProcess.readShellMsg(ShellProcess.java:99) ~[storm-core-0.9.3.jar:0.9.3]
at backtype.storm.task.ShellBolt$BoltReaderRunnable.run(ShellBolt.java:318) ~[storm-core-0.9.3.jar:0.9.3]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
2014-12-15T14:42:14.775+0300 b.s.d.executor [ERROR]
java.io.EOFException: null
at org.msgpack.io.StreamInput.readByte(StreamInput.java:60) ~[stormjar.jar:na]
at org.msgpack.unpacker.MessagePackUnpacker.getHeadByte(MessagePackUnpacker.java:66) ~[stormjar.jar:na]
at org.msgpack.unpacker.MessagePackUnpacker.trySkipNil(MessagePackUnpacker.java:396) ~[stormjar.jar:na]
at org.msgpack.template.MapTemplate.read(MapTemplate.java:59) ~[stormjar.jar:na]
at org.msgpack.template.MapTemplate.read(MapTemplate.java:27) ~[stormjar.jar:na]
at org.msgpack.template.AbstractTemplate.read(AbstractTemplate.java:31) ~[stormjar.jar:na]
at org.msgpack.MessagePack.read(MessagePack.java:527) ~[stormjar.jar:na]
at org.msgpack.MessagePack.read(MessagePack.java:496) ~[stormjar.jar:na]
at com.yelp.pyleus.serializer.MessagePackSerializer.readMessage(MessagePackSerializer.java:198) ~[stormjar.jar:na]
at com.yelp.pyleus.serializer.MessagePackSerializer.readShellMsg(MessagePackSerializer.java:74) ~[stormjar.jar:na]
at backtype.storm.utils.ShellProcess.readShellMsg(ShellProcess.java:99) ~[storm-core-0.9.3.jar:0.9.3]
at backtype.storm.task.ShellBolt$BoltReaderRunnable.run(ShellBolt.java:318) ~[storm-core-0.9.3.jar:0.9.3]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
2014-12-15T14:42:14.778+0300 b.s.d.worker [INFO] Shutting down worker my_first_topology-1-1418643533 9b4f9ba9-1497-4122-940d-59730ebdd72c 6702
2014-12-15T14:42:14.778+0300 b.s.d.worker [INFO] Shutting down receive thread
2014-12-15T14:42:14.783+0300 o.a.s.c.r.ExponentialBackoffRetry [WARN] maxRetries too large (300). Pinning to 29
2014-12-15T14:42:14.783+0300 b.s.u.StormBoundedExponentialBackoffRetry [INFO] The baseSleepTimeMs [100] the maxSleepTimeMs [1000] the maxRetries [300]
2014-12-15T14:42:14.784+0300 b.s.m.n.Client [INFO] New Netty Client, connect to localhost, 6702, config: , buffer_size: 5242880
2014-12-15T14:42:14.785+0300 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-localhost/127.0.0.1:6702... [0]
2014-12-15T14:42:14.785+0300 b.s.m.loader [INFO] Shutting down receiving-thread: [my_first_topology-1-1418643533, 6702]
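The Python traceback above shows tup.values arriving empty while the example bolt unconditionally runs sentence, name = tup.values, and that ValueError takes down the whole ShellBolt. A hedged guard one could add to the bolt (the helper name is invented):

```python
import logging

log = logging.getLogger(__name__)

def safe_unpack(values, expected):
    """Return values when it has the expected arity, else None (and log a
    warning), instead of letting ValueError kill the shell process."""
    if len(values) != expected:
        log.warning("skipping tuple with unexpected shape: %r", values)
        return None
    return values

# Inside process_tuple(self, tup) it would be used like:
#     unpacked = safe_unpack(tup.values, 2)
#     if unpacked is None:
#         return
#     sentence, name = unpacked
```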

Using Pyleus with PyPy throws "Acked a non-existent or already acked/failed id" error

I have a topology that works perfectly with CPython, but in order to achieve better performance I'm testing pyleus with PyPy, and I've stumbled on a very strange bug. When running this topology with PyPy, after a while pyleus breaks with this error:

99517 [Thread-11-commit] ERROR backtype.storm.util - Async loop died!
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException: Acked a non-existent or already acked/failed id: -1661235904777352941
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.daemon.executor$fn__5641$fn__5653$fn__5700.invoke(executor.clj:746) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.util$async_loop$fn__457.invoke(util.clj:431) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
    at java.lang.Thread.run(Unknown Source) [na:1.7.0_72]
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Acked a non-existent or already acked/failed id: -1661235904777352941
    at backtype.storm.task.ShellBolt.execute(ShellBolt.java:157) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.daemon.executor$fn__5641$tuple_action_fn__5643.invoke(executor.clj:631) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.daemon.executor$mk_task_receiver$fn__5564.invoke(executor.clj:399) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.disruptor$clojure_handler$reify__745.onEvent(disruptor.clj:58) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    ... 6 common frames omitted
Caused by: java.lang.RuntimeException: Acked a non-existent or already acked/failed id: -1661235904777352941
    at backtype.storm.task.ShellBolt.handleAck(ShellBolt.java:186) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.task.ShellBolt.access$200(ShellBolt.java:64) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.task.ShellBolt$1.run(ShellBolt.java:111) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    ... 1 common frames omitted
99518 [Thread-11-commit] ERROR backtype.storm.daemon.executor -
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException: Acked a non-existent or already acked/failed id: -1661235904777352941
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.daemon.executor$fn__5641$fn__5653$fn__5700.invoke(executor.clj:746) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.util$async_loop$fn__457.invoke(util.clj:431) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
    at java.lang.Thread.run(Unknown Source) [na:1.7.0_72]
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Acked a non-existent or already acked/failed id: -1661235904777352941
    at backtype.storm.task.ShellBolt.execute(ShellBolt.java:157) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.daemon.executor$fn__5641$tuple_action_fn__5643.invoke(executor.clj:631) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.daemon.executor$mk_task_receiver$fn__5564.invoke(executor.clj:399) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.disruptor$clojure_handler$reify__745.onEvent(disruptor.clj:58) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    ... 6 common frames omitted
Caused by: java.lang.RuntimeException: Acked a non-existent or already acked/failed id: -1661235904777352941
    at backtype.storm.task.ShellBolt.handleAck(ShellBolt.java:186) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.task.ShellBolt.access$200(ShellBolt.java:64) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    at backtype.storm.task.ShellBolt$1.run(ShellBolt.java:111) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
    ... 1 common frames omitted

As far as I can tell this is the only problem preventing pyleus from working with PyPy. Does anyone use PyPy with Pyleus, or know what causes this error?

Worth noticing that this error goes away when I reduce the size of each tuple that goes through Storm.

Bolt and SimpleBolt responsibilities discussion

As pointed out in #76, we may need to reconsider which behaviors we want to implement in Bolt and SimpleBolt, and even whether having a SimpleBolt class makes sense at all.

These changes may not be backward-compatible, but I am personally not too concerned about this, since we are still in beta. However, avoiding this entirely would be even better :)

Here the list of behaviors https://github.com/apache/storm/blob/master/storm-core/src/multilang/py/storm.py implements:

  • Auto anchoring in BasicBolt (our SimpleBolt) (overrides user's anchors)
  • Auto ack after processing a tuple in BasicBolt (prevents the user from calling ack())
  • Auto fail in case of exception in BasicBolt (prevents the user from calling fail())
  • Auto sync in case of heartbeat in Bolt (and, to be fair, replicated in BasicBolt)

We also want to add to this list our process_tick().
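For reference, the BasicBolt-style auto-ack/auto-fail contract boils down to something like this (a sketch for discussion, not the pyleus source):

```python
class SimpleBoltSketch(object):
    """Illustrative BasicBolt semantics: process the tuple, ack on success,
    fail on exception; the user never calls ack()/fail() directly, and emits
    inside process_tuple would be auto-anchored to the input tuple."""

    def _process_tuple(self, tup):
        try:
            self.process_tuple(tup)  # user code
        except Exception:
            self.fail(tup)           # auto fail
            raise
        else:
            self.ack(tup)            # auto ack
```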

It's also worth mentioning how our Streamparse cousins have tackled the problem (class variables):
https://github.com/Parsely/streamparse/blob/master/streamparse/bolt.py#L43

With this post I just wanted to detail the current situation, not my own biased ideas. Feel free to comment.

'pip install pyleus' installs pyleus-base.jar under /usr/local on some systems

I believe we ran into this at some point in the past and solved it, or so we thought.

I can reproduce this under docker:

$ docker run --rm -t -i ubuntu:trusty /bin/bash
root@6bd01ed07d4a:/# apt-get update
...
root@6bd01ed07d4a:/# apt-get install -y python-pip
...
root@6bd01ed07d4a:/# pip install pyleus
...
root@6bd01ed07d4a:/# ls /usr/share/pyl^C
root@6bd01ed07d4a:/# ls /usr/local/share/pyleus/
pyleus-base.jar

Everything works fine if you install and run Pyleus in a virtualenv, though:

root@6bd01ed07d4a:/root# virtualenv venv
New python executable in venv/bin/python
Installing setuptools, pip...done.
root@6bd01ed07d4a:/root# venv/bin/pip install pyleus
...
root@6bd01ed07d4a:/root# venv/bin/python -c 'import pyleus; print pyleus.BASE_JAR_PATH'
/root/venv/share/pyleus/pyleus-base.jar
root@6bd01ed07d4a:/root# ls -lh /root/venv/share/pyleus/pyleus-base.jar
-rw-r--r-- 1 root root 19M Oct 17 20:58 /root/venv/share/pyleus/pyleus-base.jar

MsgPack serializer in pyleus topology is mutating float data

I change the exclamation example topology to emit the following dict:

{'start_time': 1413404322.1652811}

and the bolt to simply log out the value

I notice MsgPack is mutating 1413404322.1652811 to 1413404288.0

When I change the topology.yaml to use the 'json' serializer, the expected value comes up.
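The reported value is consistent with the timestamp being serialized as a single-precision (32-bit) float somewhere in the MessagePack path rather than as a double: at that magnitude a 32-bit float can only resolve multiples of 128 seconds. This can be reproduced with struct alone:

```python
import struct

v = 1413404322.1652811

# Round-trip through a 32-bit float: precision is lost exactly as reported.
as_single = struct.unpack("<f", struct.pack("<f", v))[0]

# Round-trip through a 64-bit double: the timestamp survives intact.
as_double = struct.unpack("<d", struct.pack("<d", v))[0]
```
Here as_single comes back as 1413404288.0, exactly the mutated value from the report, while as_double preserves the original timestamp.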

Add dependency on simplejson for Python <2.7

From my comment on #18:

Since pyleus now depends on msgpack, and we already depend on argparse for Python <2.7, I think we should just add a dependency on simplejson in the same manner.

We could import it in the newly-added compat.py so we don't have to litter try: import simplejson as json everywhere.
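The proposed compat.py shim is a one-liner pattern:

```python
# Sketch of the proposed pyleus/compat.py: prefer simplejson (a hard
# dependency on Python < 2.7), falling back to the stdlib module.
try:
    import simplejson as json
except ImportError:
    import json

# Call sites would then do: from pyleus.compat import json
```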

Improve pom.xml

There is almost certainly a better way to accomplish the construction of the base jar.

Evaluate other options of building a jar-with-dependencies, and determine the origin of all the files that currently end up in META-INF and which caused #22.

logging example: time gets truncated

Hi,
I'm quite new to pyleus and GitHub, so I hope what I do and write here makes sense...

  • I think there is a mistake in 'line_spout.py': Line 14 should read:
log = logging.getLogger("logging_example.line_spout")
  • Running this example gives me the following output:
2014-11-13 18:58:55,514 logging_example.line_spout INFO Emitted:   1415901535.511
2014-11-13 18:58:55,516 logging_example.logger_bolt INFO Received: 1415901568.0
2014-11-13 18:59:00,583 logging_example.line_spout INFO Emitted:   1415901540.582474
2014-11-13 18:59:00,583 logging_example.logger_bolt INFO Received: 1415901568.0

Debug output looks as follows:

21558 [Thread-9-line-spout] INFO  backtype.storm.daemon.task - Emitting: line-spout default [1.41590157E9]
21559 [Thread-11-logger-bolt] INFO  backtype.storm.daemon.executor - Processing received message source: line-spout:2, stream: default, id: {}, [1.41590157E9]

As you can see, the received timestamps differ from the emitted ones. They get truncated in a strange way and the numbers don't fit together.

A simple workaround is to encapsulate the time using json. Is something going wrong when communicating the values between Storm and Python, or am I on the wrong track?

'pyleus build -s' will remove the local pyleus-base.jar

After running pyleus build -s xxxx/pyleus_topology.yaml once, pyleus-base.jar is removed.

Rerunning pyleus build then shows:

pyleus build: error: [JarError] Base jar not found

Remote pdb debugging

It would be really great to be able to start pdb sessions and debug components remotely.
We can use as base the code that I used when I hunted down that memory leak in the Python official implementation of the multilang protocol.

Migrated from PYLEUS-125

Problem for the example of exclamation_topology

I have modified it as follows:

 def process_tuple(self, tup):
     log.debug(tup)
     if tup.values != []:
         word = tup.values[0] + '!!!'
         log.debug(word)
         self.emit((word,), anchors=tup)

My question: if the class derives from SimpleBolt, it only logged 2 lines; if the class derives from Bolt, it logged for about 30 seconds. I love Pyleus for writing topologies for Storm. Pyleus's ideas let me focus on the work to be solved. But now I don't know why the bolt only works for a little while. Please help me!

pyleus list command is too verbose

When the verbose option is not used, I would expect to see only the list of active topologies and not the storm command executed under the hood.

pyleus 0.2.4 bug: Num workers, Num executors in Storm UI

$ git clone https://github.com/Yelp/pyleus.git
$ pyleus build pyleus/examples/exclamation_topology/pyleus_topology.yaml
$ pyleus local exclamation_topology.jar

That works well. However, when

$ pyleus submit -n NIMBUS_HOST exclamation_topology.jar

I submit the jar to the storm cluster.

Num workers is 16, Num executors is 1.

That may be a bug.


And also, I want to roll back to 0.2.2, but when I specify pyleus==0.2.2 in requirements.txt, it won't work.

$  pyleus --verbose build  pyleus/examples/exclamation_topology/pyleus_topology.yaml

It shows:

New python executable in /tmp/tmpuYS5z3/resources/pyleus_venv/bin/python
Installing setuptools, pip...done.
Downloading/unpacking pyleus==0.2.4
......

The pyleus version is pinned and I can't change it.

Has this happened to anyone else?

Be smarter about what to include in the jar

@patricklucas:

We exclude the Pyleus jar itself, but it's easy to accidentally include a lot of things that don't need to be there.

For example, if I'm building jars for three environments, each one will contain the jar from the other two environments.
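One possible shape for the fix, sketched here with hypothetical names (this is not Pyleus's actual jar-building code): filter the files walked into the jar through a glob-based exclusion list, so stray jars from other environments never get bundled.

```python
import fnmatch
import os

# Illustrative exclusion patterns; the real list would be configurable.
EXCLUDE_PATTERNS = ["*.jar", "*.pyc", ".git*"]

def files_to_bundle(topology_dir):
    """Yield paths under topology_dir whose names match no exclusion pattern."""
    for root, _dirs, files in os.walk(topology_dir):
        for name in files:
            if any(fnmatch.fnmatch(name, pat) for pat in EXCLUDE_PATTERNS):
                continue
            yield os.path.join(root, name)
```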

Moved from PYLEUS-75

Relates to #52

Documentation should be clearer about prerequisites

There are some pieces we left out of the documentation that leave potential users without the big picture required to actually deploy a Pyleus topology.

In no particular order, some of these caveats that we should make clear are:

  • A Storm release is required for Pyleus to work properly
    • Since Pyleus does not reimplement the Storm thrift protocol, users must have a Storm release and reference it in their ~/.pyleus.conf. This is not necessary if the storm cmd exists on the user's PATH.
    • The Quick Start section of the docs mentions this, but the README.rst does not.
  • The Pyleus Kafka spout is simply a wrapper around the Java Kafka spout, and requires only a pyleus_topology.yaml addition.
  • Building a virtualenv to bundle dependencies does not mean Pyleus bundles python; the same version of Python must be available in the same place on the Storm cluster as on the machine where the Pyleus topology was built.

Support direct grouping

This is a fairly serious change to how we have handled grouping and output streams until now; it is worth a sub-task.

@poros:

This change will make things kind of complicated. I will have to replace the dictionary of lists we use for output_fields with a dictionary of dictionaries that also contains the boolean value "direct". In addition, I will have to change all the Java to account for a Map inside a Map and change all the stream declaration calls. Validation will change too (that is nasty).

The example bolt that Storm was using for direct streams is a 404 now. It seems to me that very few people are using this feature. I believe it may be worth skipping for version 0.2 and reconsidering it when (and if) we rewrite TopologyBuilder in pure Python. Without the Python-JSON-Java interface, everything should be a little easier.
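The data-structure change described above might look like this (a sketch with illustrative field names, not Pyleus internals):

```python
# Current shape: stream name -> list of output fields.
output_fields = {
    "default": ["word", "count"],
}

# Proposed shape: stream name -> dict carrying the fields plus a
# "direct" flag, so direct streams can be declared per stream.
output_fields = {
    "default": {"fields": ["word", "count"], "direct": False},
}
```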

@patricklucas:

That sounds like a reasonable plan to me. Let's remember to make a note in the docs before release.

Migrated from PYLEUS-94.

Make pyleus flake8-friendly

Fix up all the minor flake8 issues, then swap out pyflakes for flake8 in test-requirements.txt and invoke it in tox.ini.
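The swap might look like this in tox.ini (a sketch; the actual env names and requirements files in the repo may differ):

```ini
[testenv]
deps =
    -rtest-requirements.txt
commands =
    py.test tests
    flake8 pyleus tests
```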

Migrated from PYLEUS-149.

Is pyleus 0.2.4 compatible with Storm 0.9.3?

Today I set up a Storm 0.9.3 cluster and ran:

$ git clone https://github.com/Yelp/pyleus.git
$ pyleus build pyleus/examples/exclamation_topology/pyleus_topology.yaml
$ pyleus --verbose local exclamation_topology.jar

Then this occurs:

......
9491 [Thread-11-exclaim2] INFO  backtype.storm.task.ShellBolt - Start checking heartbeat...
9492 [Thread-11-exclaim2] INFO  backtype.storm.daemon.executor - Prepared bolt exclaim2:(3)
10502 [Thread-22] ERROR backtype.storm.daemon.executor -
java.lang.Exception: Shell Process Exception: Traceback (most recent call last):
  File "/tmp/d05ae596-2b00-4eaa-9c4f-17c7f6ee5667/supervisor/stormdist/exclamation_topology-1-1418406574/resources/pyleus_venv/lib/python2.7/site-packages/pyleus/storm/component.py", line 233, in run
    self.run_component()
  File "/tmp/d05ae596-2b00-4eaa-9c4f-17c7f6ee5667/supervisor/stormdist/exclamation_topology-1-1418406574/resources/pyleus_venv/lib/python2.7/site-packages/pyleus/storm/bolt.py", line 46, in run_component
    self._process_tuple(tup)
  File "/tmp/d05ae596-2b00-4eaa-9c4f-17c7f6ee5667/supervisor/stormdist/exclamation_topology-1-1418406574/resources/pyleus_venv/lib/python2.7/site-packages/pyleus/storm/bolt.py", line 164, in _process_tuple
    self.process_tuple(tup)
  File "/tmp/d05ae596-2b00-4eaa-9c4f-17c7f6ee5667/supervisor/stormdist/exclamation_topology-1-1418406574/resources/exclamation_topology/exclamation_bolt.py", line 18, in process_tuple
    word = tup.values[0] + "!!!"
IndexError: list index out of range

    at backtype.storm.task.ShellBolt.handleError(ShellBolt.java:188) [storm-core-0.9.3.jar:0.9.3]
    at backtype.storm.task.ShellBolt.access$1100(ShellBolt.java:69) [storm-core-0.9.3.jar:0.9.3]
    at backtype.storm.task.ShellBolt$BoltReaderRunnable.run(ShellBolt.java:331) [storm-core-0.9.3.jar:0.9.3]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
10503 [Thread-20] ERROR backtype.storm.daemon.executor -
java.lang.Exception: Shell Process Exception: Traceback (most recent call last):
  File "/tmp/d05ae596-2b00-4eaa-9c4f-17c7f6ee5667/supervisor/stormdist/exclamation_topology-1-1418406574/resources/pyleus_venv/lib/python2.7/site-packages/pyleus/storm/component.py", line 233, in run
    self.run_component()
  File "/tmp/d05ae596-2b00-4eaa-9c4f-17c7f6ee5667/supervisor/stormdist/exclamation_topology-1-1418406574/resources/pyleus_venv/lib/python2.7/site-packages/pyleus/storm/bolt.py", line 46, in run_component
    self._process_tuple(tup)
  File "/tmp/d05ae596-2b00-4eaa-9c4f-17c7f6ee5667/supervisor/stormdist/exclamation_topology-1-1418406574/resources/pyleus_venv/lib/python2.7/site-packages/pyleus/storm/bolt.py", line 164, in _process_tuple
    self.process_tuple(tup)
  File "/tmp/d05ae596-2b00-4eaa-9c4f-17c7f6ee5667/supervisor/stormdist/exclamation_topology-1-1418406574/resources/exclamation_topology/exclamation_bolt.py", line 18, in process_tuple
    word = tup.values[0] + "!!!"
IndexError: list index out of range

    at backtype.storm.task.ShellBolt.handleError(ShellBolt.java:188) [storm-core-0.9.3.jar:0.9.3]
    at backtype.storm.task.ShellBolt.access$1100(ShellBolt.java:69) [storm-core-0.9.3.jar:0.9.3]
    at backtype.storm.task.ShellBolt$BoltReaderRunnable.run(ShellBolt.java:331) [storm-core-0.9.3.jar:0.9.3]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
10509 [ProcessThread(sid:0 cport:-1):] INFO  org.apache.storm.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14a3f9ea9e1000b type:create cxid:0x33 zxid:0x22 txntype:-1 reqpath:n/a Error Path:/storm/errors/exclamation_topology-1-1418406574 Error:KeeperErrorCode = NodeExists for /storm/errors/exclamation_topology-1-1418406574
10519 [Thread-22] ERROR backtype.storm.task.ShellBolt - Halting process: ShellBolt died.
java.io.EOFException: null
    at org.msgpack.io.StreamInput.readByte(StreamInput.java:60) ~[exclamation_topology.jar:na]
    at org.msgpack.unpacker.MessagePackUnpacker.getHeadByte(MessagePackUnpacker.java:66) ~[exclamation_topology.jar:na]
    at org.msgpack.unpacker.MessagePackUnpacker.trySkipNil(MessagePackUnpacker.java:396) ~[exclamation_topology.jar:na]
    at org.msgpack.template.MapTemplate.read(MapTemplate.java:59) ~[exclamation_topology.jar:na]
    at org.msgpack.template.MapTemplate.read(MapTemplate.java:27) ~[exclamation_topology.jar:na]
    at org.msgpack.template.AbstractTemplate.read(AbstractTemplate.java:31) ~[exclamation_topology.jar:na]
    at org.msgpack.MessagePack.read(MessagePack.java:527) ~[exclamation_topology.jar:na]
    at org.msgpack.MessagePack.read(MessagePack.java:496) ~[exclamation_topology.jar:na]
    at com.yelp.pyleus.serializer.MessagePackSerializer.readMessage(MessagePackSerializer.java:198) ~[exclamation_topology.jar:na]
    at com.yelp.pyleus.serializer.MessagePackSerializer.readShellMsg(MessagePackSerializer.java:74) ~[exclamation_topology.jar:na]
    at backtype.storm.utils.ShellProcess.readShellMsg(ShellProcess.java:99) ~[storm-core-0.9.3.jar:0.9.3]
    at backtype.storm.task.ShellBolt$BoltReaderRunnable.run(ShellBolt.java:318) ~[storm-core-0.9.3.jar:0.9.3]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
10519 [Thread-22] ERROR backtype.storm.daemon.executor -
java.io.EOFException: null
    at org.msgpack.io.StreamInput.readByte(StreamInput.java:60) ~[exclamation_topology.jar:na]
    at org.msgpack.unpacker.MessagePackUnpacker.getHeadByte(MessagePackUnpacker.java:66) ~[exclamation_topology.jar:na]
    at org.msgpack.unpacker.MessagePackUnpacker.trySkipNil(MessagePackUnpacker.java:396) ~[exclamation_topology.jar:na]
    at org.msgpack.template.MapTemplate.read(MapTemplate.java:59) ~[exclamation_topology.jar:na]
    at org.msgpack.template.MapTemplate.read(MapTemplate.java:27) ~[exclamation_topology.jar:na]
    at org.msgpack.template.AbstractTemplate.read(AbstractTemplate.java:31) ~[exclamation_topology.jar:na]
    at org.msgpack.MessagePack.read(MessagePack.java:527) ~[exclamation_topology.jar:na]
    at org.msgpack.MessagePack.read(MessagePack.java:496) ~[exclamation_topology.jar:na]
    at com.yelp.pyleus.serializer.MessagePackSerializer.readMessage(MessagePackSerializer.java:198) ~[exclamation_topology.jar:na]
    at com.yelp.pyleus.serializer.MessagePackSerializer.readShellMsg(MessagePackSerializer.java:74) ~[exclamation_topology.jar:na]
    at backtype.storm.utils.ShellProcess.readShellMsg(ShellProcess.java:99) ~[storm-core-0.9.3.jar:0.9.3]
    at backtype.storm.task.ShellBolt$BoltReaderRunnable.run(ShellBolt.java:318) ~[storm-core-0.9.3.jar:0.9.3]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
10520 [Thread-20] ERROR backtype.storm.task.ShellBolt - Halting process: ShellBolt died.
java.io.EOFException: null
    at org.msgpack.io.StreamInput.readByte(StreamInput.java:60) ~[exclamation_topology.jar:na]
    at org.msgpack.unpacker.MessagePackUnpacker.getHeadByte(MessagePackUnpacker.java:66) ~[exclamation_topology.jar:na]
    at org.msgpack.unpacker.MessagePackUnpacker.trySkipNil(MessagePackUnpacker.java:396) ~[exclamation_topology.jar:na]
    at org.msgpack.template.MapTemplate.read(MapTemplate.java:59) ~[exclamation_topology.jar:na]
    at org.msgpack.template.MapTemplate.read(MapTemplate.java:27) ~[exclamation_topology.jar:na]
    at org.msgpack.template.AbstractTemplate.read(AbstractTemplate.java:31) ~[exclamation_topology.jar:na]
    at org.msgpack.MessagePack.read(MessagePack.java:527) ~[exclamation_topology.jar:na]
    at org.msgpack.MessagePack.read(MessagePack.java:496) ~[exclamation_topology.jar:na]
    at com.yelp.pyleus.serializer.MessagePackSerializer.readMessage(MessagePackSerializer.java:198) ~[exclamation_topology.jar:na]
    at com.yelp.pyleus.serializer.MessagePackSerializer.readShellMsg(MessagePackSerializer.java:74) ~[exclamation_topology.jar:na]
    at backtype.storm.utils.ShellProcess.readShellMsg(ShellProcess.java:99) ~[storm-core-0.9.3.jar:0.9.3]
    at backtype.storm.task.ShellBolt$BoltReaderRunnable.run(ShellBolt.java:318) ~[storm-core-0.9.3.jar:0.9.3]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
10521 [Thread-20] ERROR backtype.storm.daemon.executor -
java.io.EOFException: null
    at org.msgpack.io.StreamInput.readByte(StreamInput.java:60) ~[exclamation_topology.jar:na]
    at org.msgpack.unpacker.MessagePackUnpacker.getHeadByte(MessagePackUnpacker.java:66) ~[exclamation_topology.jar:na]
    at org.msgpack.unpacker.MessagePackUnpacker.trySkipNil(MessagePackUnpacker.java:396) ~[exclamation_topology.jar:na]
    at org.msgpack.template.MapTemplate.read(MapTemplate.java:59) ~[exclamation_topology.jar:na]
    at org.msgpack.template.MapTemplate.read(MapTemplate.java:27) ~[exclamation_topology.jar:na]
    at org.msgpack.template.AbstractTemplate.read(AbstractTemplate.java:31) ~[exclamation_topology.jar:na]
    at org.msgpack.MessagePack.read(MessagePack.java:527) ~[exclamation_topology.jar:na]
    at org.msgpack.MessagePack.read(MessagePack.java:496) ~[exclamation_topology.jar:na]
    at com.yelp.pyleus.serializer.MessagePackSerializer.readMessage(MessagePackSerializer.java:198) ~[exclamation_topology.jar:na]
    at com.yelp.pyleus.serializer.MessagePackSerializer.readShellMsg(MessagePackSerializer.java:74) ~[exclamation_topology.jar:na]
    at backtype.storm.utils.ShellProcess.readShellMsg(ShellProcess.java:99) ~[storm-core-0.9.3.jar:0.9.3]
    at backtype.storm.task.ShellBolt$BoltReaderRunnable.run(ShellBolt.java:318) ~[storm-core-0.9.3.jar:0.9.3]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
10526 [Thread-7] INFO  backtype.storm.daemon.nimbus - Shutting down master

......
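Note that the Python traceback above appears to come from the example bolt itself, not from Storm 0.9.3: tup.values is empty, so tup.values[0] raises IndexError, and the process death then surfaces as the EOFException in the Java reader thread. The bolt logic needs a guard before indexing; the core of it can be sketched as a standalone function:

```python
def exclaim(values):
    """Return the exclaimed word, or None when the tuple carries no values."""
    if not values:
        return None
    return values[0] + "!!!"
```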

Bootstrap of bolts/spouts environment through scripts

@anusha-r:

MRJob allows a bunch of options for users to customize the execution environment - eg: users can specify commands to be executed at startup, set env variables, upload python archives that have to be added to PYTHONPATH and so on.

For the hadoop environment, MRJob simply invokes an autogenerated script that performs all these tasks that the user specifies, and then invokes the actual python module to run.
Would it be possible to support something similar in Pyleus? For instance, the user could specify a shell script or some command to run at startup, before the python module is invoked. This will ensure changes to python path happen before we enter python land.

I have a partial workaround now, to execute a setup script during bolt initialization using call(..) API, but having pyleus let me customize the way my bolt module is invoked would be a better fix.
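One way such support could look, sketched with entirely hypothetical names (Pyleus does not generate this today): emit a small shell wrapper that sources a user setup script and adjusts PYTHONPATH before exec-ing the real component module.

```python
import os
import sys

# Hypothetical wrapper template; the setup-script name and layout are
# illustrative only.
WRAPPER_TEMPLATE = """#!/bin/sh
set -e
[ -f ./setup_env.sh ] && . ./setup_env.sh
export PYTHONPATH="$PWD/extra_libs:$PYTHONPATH"
exec {python} -m {module} "$@"
"""

def write_wrapper(path, module, python=sys.executable):
    """Generate the bootstrap script that Storm would invoke instead of
    calling the Python module directly."""
    with open(path, "w") as f:
        f.write(WRAPPER_TEMPLATE.format(python=python, module=module))
    os.chmod(path, 0o755)
    return path
```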

Moved from PYLEUS-86

pyleus build failed

I ran the Quick Start on my Mac and the build failed.

Mac environment:
Python 2.7.8
Oracle JDK, java version "1.7.0_51"

The exception stack trace is as follows:

pyleus.exception.ConfigurationError: [ConfigurationError] Unable to locate Storm executable. 
Please either install Storm or specify its path in your configuration file

thanks a lot ~
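This error means Pyleus could not find the storm command on your PATH. Installing a Storm release and pointing Pyleus at it in ~/.pyleus.conf should fix it; a minimal config might look like this (the path is illustrative; check the Pyleus docs for the exact option name on your version):

```ini
[storm]
storm_cmd_path: /opt/apache-storm-0.9.3/bin/storm
```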
