
Kaldi GStreamer server


This is a real-time full-duplex speech recognition server, based on the Kaldi toolkit and the GStreamer framework and implemented in Python.

Advertisement

The Laboratory of Language Technology at Tallinn University of Technology is looking for a PhD student to work on speech recognition, with a focus on lightly code-switched speech (e.g., Finnish containing many English technical terms). More information is available about the topic, how to apply, requirements, and funding.

NB! The position is still open!

Features

  • Full duplex communication based on websockets: speech goes in, partial hypotheses come out (think of Android's voice typing)
  • Very scalable: the server consists of a master component and workers; one worker is needed per concurrent recognition session; workers can be started and stopped independently of the master on remote machines
  • Can do speech segmentation, i.e., a long speech signal is broken into shorter segments based on silences
  • Supports arbitrarily long speech input (e.g., you can stream live speech into it)
  • Supports Kaldi's GMM and "online DNN" models
  • Supports rescoring of the recognition lattice with a large language model
  • Supports persisting the acoustic model adaptation state between requests
  • Supports a virtually unlimited set of audio codecs (in practice, those supported by GStreamer)
  • Supports rewriting raw recognition results using external programs (can be used for converting words to numbers, etc.)
  • Python, Java, Javascript, Haskell clients are available

English demo that uses the server: http://bark.phon.ioc.ee/dictate/

Estonian demo: http://bark.phon.ioc.ee/dikteeri/

Changelog

  • 2019-06-17: The postprocessing mechanism doesn't work properly with Tornado 5+. Use Tornado 4.5.3 if you need it.

  • 2018-04-25: Server should now work with Tornado 5 (thanks to @Gastron). If using Python 2, you might need to install the futures package (pip install futures).

  • 2017-12-27: Somewhat big changes in the way the post-processor is invoked. The problem was that in some use cases, the program used for post-processing decoded sentences can take a lot of time (say, 0.5 seconds). Under the previous architecture, the post-processor was invoked synchronously, meaning that decoding was suspended during that time. This change fixes that. (A minimal post-processor sketch follows this changelog.)

  • 2017-06-28: The sample client program can now accept audio from stdin. This can be used to test the server with a live microphone, e.g.: arecord -f S16_LE -r 16000 | python kaldigstserver/client.py -r 32000 -. Thanks to @wkuna!

  • 2016-11-28: Server now supports serving requests using SSL. SSL is automatically turned on when the certfile and keyfile command line arguments are specified.

  • 2016-10-14: Support for nnet3 (including 'chain') models, thanks to @yifan! Not tested very carefully. Set the decoder->nnet-mode property to 3 to use nnet3 models.

  • 2016-10-06: added a sample conf for Librispeech models and the corresponding model download script (thanks to @skoocda)

  • 2015-12-04: added a link to the Dockerfile.

  • 2015-06-30: server now uses the recently added "full final results" functionality of gst-kaldi-nnet2-online. Full results can include things like n-best hypotheses, word and phone alignment information, and possibly other things in the future. You have to upgrade gst-kaldi-nnet2-online (when using this plugin instead of the GMM-based Kaldi GStreamer plugin) prior to using this. Also added a sample full results post-processing script sample_full_post_processor.py (see sample_english_nnet2.yaml on how to use it).
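For reference, a (partial-result) post-processor is simply an external program that reads one raw hypothesis per line from stdin and writes the rewritten line to stdout; the sample configs use a one-line Perl filter that appends a period. A minimal Python sketch of the same idea (illustrative, not part of the repo):

#!/usr/bin/env python
# Minimal post-processor sketch: read raw hypotheses from stdin,
# append a final period, write the result to stdout.
# Output must be flushed line by line, since the worker reads it interactively.
import sys

for line in sys.stdin:
    sys.stdout.write(line.strip() + ".\n")
    sys.stdout.flush()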

Installation

Docker

Building Kaldi and all the other packages required by this software can be quite complicated. Instead of building all the prerequisites manually, one could use the Dockerfile created by José Eduardo Silva: https://github.com/jcsilva/docker-kaldi-gstreamer-server.

Requirements

Python 2.7 with the following packages:

  • Tornado (see the changelog above for notes on Tornado 4 vs. 5 compatibility)
  • ws4py (see the version note below)

NB!: The server doesn't work quite correctly with ws4py 0.3.5 because of a bug I reported here: Lawouach/WebSocket-for-Python#152. Use ws4py 0.3.2 instead. To install ws4py 0.3.2 using pip, run:

pip install ws4py==0.3.2

In addition, you need Python 2.x bindings for gobject-introspection libraries, provided by the python-gi package on Debian and Ubuntu.

Kaldi

Download and compile Kaldi (http://kaldi.sourceforge.net). Also compile the online extensions (make ext) and the Kaldi GStreamer plugin (see README in Kaldi's src/gst-plugin directory).
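Roughly, the build sequence looks like this (a sketch; consult Kaldi's own INSTALL files for the authoritative steps, and note that paths may differ on your machine):

# Build Kaldi's third-party tools and core libraries first.
cd kaldi-trunk/tools && make
cd ../src && ./configure && make depend && make
# Build the online extensions and the GStreamer plugin used by this server.
make ext
cd gst-plugin && make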

Acoustic and language models for Kaldi

You need GMM-HMM-based acoustic and n-gram language models (actually their FST cascade) for your language.

Working (but not very accurate) recognition models are available for English and Estonian in the test/models/ directory. English models are based on Voxforge acoustic models and the CMU Sphinx 2013 general English trigram language model (http://cmusphinx.sourceforge.net/2013/01/a-new-english-language-model-release/). The language models were heavily pruned so that the resulting FST cascade would be less than the 100 MB GitHub file size limit.

Update: the server also supports Kaldi's new "online2" online decoder that uses DNN-based acoustic models with i-vector input. See below on how to use it. According to experiments on two Estonian online decoding setups, the DNN-based models give a relative error reduction of about 20% or more compared to GMM-based models (e.g., WER dropped from 13% to 9%).

Running the server

Running the master server

The following starts the main server on localhost:8888

python kaldigstserver/master_server.py --port=8888

Running workers

The master server doesn't perform speech recognition itself; it simply delegates client recognition requests to workers. You need one worker per recognition session, so the number of running workers should be at least the number of potential concurrent recognition sessions. The good news is that workers are fully independent and do not even have to run on the same machine, offering practically unlimited parallelism.

There are two decoders that a worker can use: one based on the Kaldi onlinegmmdecodefaster GStreamer plugin, and one based on the newer kaldinnet2onlinedecoder plugin. The former supports GMM models; the latter needs "online2" DNN-based models with i-vector input.

To run a worker, first write a configuration file. A sample configuration that uses the English GMM-HMM models that come with this project is available in sample_worker.yaml. A sample worker that uses "online2" DNN-based models is in sample_english_nnet2.yaml.
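For orientation, the main configuration keys (visible in the worker log reproduced later in this document) look roughly like this. This is a condensed, illustrative sketch of an nnet2 config, not a verbatim copy of sample_english_nnet2.yaml:

# Illustrative worker config sketch; see sample_english_nnet2.yaml for the real file.
use-nnet2: True
decoder:
    use-threaded-decoder: True
    model: test/models/english/tedlium_nnet_ms_sp_online/final.mdl
    word-syms: test/models/english/tedlium_nnet_ms_sp_online/words.txt
    fst: test/models/english/tedlium_nnet_ms_sp_online/HCLG.fst
    mfcc-config: test/models/english/tedlium_nnet_ms_sp_online/conf/mfcc.conf
    ivector-extraction-config: test/models/english/tedlium_nnet_ms_sp_online/conf/ivector_extractor.conf
    do-endpointing: True
    chunk-length-in-secs: 0.25
silence-timeout: 10
out-dir: tmp
post-processor: "perl -npe 'BEGIN {use IO::Handle; STDOUT->autoflush(1);} s/(.*)/\\1./;'"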

Using the 'onlinegmmdecodefaster' based worker

Before starting a worker, make sure that the GST plugin path includes Kaldi's src/gst-plugin directory (which should contain the file libgstkaldi.so), something like:

export GST_PLUGIN_PATH=~/tools/kaldi-trunk/src/gst-plugin

Test if it worked:

gst-inspect-1.0 onlinegmmdecodefaster

This should print information about Kaldi's GStreamer plugin.

Now, you can start a worker:

python kaldigstserver/worker.py -u ws://localhost:8888/worker/ws/speech -c sample_worker.yaml

The -u ws://localhost:8888/worker/ws/speech argument specifies the address of the main server that the worker should connect to. Make sure you use the same port as in the server invocation.

You can start any number of worker processes; just run the same command again for each additional worker.

It might be a good idea to use supervisord to start and stop the main server and several workers. A sample supervisord configuration file is in etc/english-supervisord.conf.
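A sample supervisord config ships as etc/english-supervisord.conf; a minimal configuration along the same lines (paths, port, and process count are illustrative assumptions) would start one master and two workers:

[program:master]
command=python kaldigstserver/master_server.py --port=8888

[program:worker]
command=python kaldigstserver/worker.py -u ws://localhost:8888/worker/ws/speech -c sample_worker.yaml
numprocs=2
process_name=%(program_name)s_%(process_num)02d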

Server usage

A sample implementation of the client is in kaldigstserver/client.py.

If you started the server/worker as described above, you should be able to test the installation by invoking:

python kaldigstserver/client.py -r 32000 test/data/english_test.raw

Expected output:

THE. ONE TWO THREE FOUR FIVE SIX SEVEN EIGHT.

Expected output when using the DNN-based online models based on Fisher:

one two or three you fall five six seven eight. yeah.

The -r 32000 in the last command tells the client to send audio to the server at 32000 bytes per second. The raw sample audio file uses a sample rate of 16 kHz with 16-bit samples, which results in a byte rate of 32000.
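In general, the byte rate for raw PCM is just the product of sample rate, sample width, and channel count; a quick sanity check in Python (a sketch, not part of the repo):

# Byte rate to pass to client.py's -r flag for raw PCM audio.
sample_rate = 16000  # samples per second
sample_width = 2     # bytes per sample (16-bit)
channels = 1         # mono
print(sample_rate * sample_width * channels)  # 32000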

You can also send ogg audio:

python kaldigstserver/client.py -r 4800 test/data/english_test.ogg

The rate in the last command is 4800: the bit rate of the Ogg file is about 37.5 kbit/s, which corresponds to a byte rate of roughly 4800.

Using the 'kaldinnet2onlinedecoder' based worker

The DNN-based online decoder requires a newer GStreamer plugin that is not in the Kaldi codebase and has to be compiled separately. It's available at https://github.com/alumae/gst-kaldi-nnet2-online. Clone it, e.g., under ~/tools/gst-kaldi-nnet2-online. Follow the instructions and compile it. This should result in a file ~/tools/gst-kaldi-nnet2-online/src/libgstkaldionline2.so.

Also, download the DNN-based models for English, trained on the TEDLIUM speech corpus and combined with a generic English language model provided by Cantab Research. Run download-tedlium-nnet2.sh under test/models to download the models (note: the download is about 1.5 GB):

cd test/models 
./download-tedlium-nnet2.sh
cd ../../

Before starting a worker, make sure that the GST plugin path includes the path where the libgstkaldionline2.so library you compiled earlier resides, something like:

export GST_PLUGIN_PATH=~/tools/gst-kaldi-nnet2-online/src

Test if it worked:

gst-inspect-1.0 kaldinnet2onlinedecoder

This should print information about the new Kaldi GStreamer plugin.

Now, you can start a worker:

python kaldigstserver/worker.py -u ws://localhost:8888/worker/ws/speech -c sample_english_nnet2.yaml

As the acoustic models are trained on TED data, we also test on TED data. The file test/data/bill_gates-TED.mp3 contains about one minute of a TED talk by Bill Gates. It's encoded as a 64 kbit/s MP3, so let's send it to the server at 64*1024/8 = 8192 bytes per second:

python kaldigstserver/client.py -r 8192 test/data/bill_gates-TED.mp3

Recognized words should start appearing at the terminal. The final result should be something like:

when i was a kid the disaster we worry about most was a nuclear war. that's why we had a bear like this down our basement filled with cans of food and water. nuclear attack came we were supposed to go downstairs hunker down and eat out of that barrel. today the greatest risk of global catastrophe. don't look like this instead it looks like this. if anything kills over ten million people in the next few decades it's most likely to be a highly infectious virus rather than a war. not missiles that microbes now part of the reason for this is that we have invested a huge amount in nuclear deterrence we've actually invested very little in a system to stop an epidemic. we're not ready for the next epidemic.

Compare that to the original transcript in test/data/bill_gates-TED.txt:

When I was a kid, the disaster we worried about most was a nuclear war. That's why we had a barrel like this down in our basement, filled with cans of food and water. When the nuclear attack came, we were supposed to go downstairs, hunker down, and eat out of that barrel. Today the greatest risk of global catastrophe doesn't look like this. Instead, it looks like this. If anything kills over 10 million people in the next few decades, it's most likely to be a highly infectious virus rather than a war. Not missiles, but microbes. Now, part of the reason for this is that we've invested a huge amount in nuclear deterrents. But we've actually invested very little in a system to stop an epidemic. We're not ready for the next epidemic.

Retrieving and sending adaptation state

If you use the 'kaldinnet2onlinedecoder' based worker, you can retrieve the adaptation state after the decoding session finishes, and send the previously retrieved adaptation state when starting a new session.

The 'kaldinnet2onlinedecoder' worker always sends the adaptation state, encoded in a JSON container, once the session ends. The client can store it in a file; this functionality is implemented by client.py. Assuming you started the server and a worker as in the last example, you can do:

python kaldigstserver/client.py -r 32000 --save-adaptation-state adaptation-state.json test/data/english_test.wav

The adaptation-state.json file will contain something like this:

{"type": "string+gzip+base64", "value": "eJxlvUuPdEmSHbavXx...", "time": "2014-11-14T11:08:49"}

As you can see, the adaptation state is not human-readable; it's gzipped and base64-encoded text data.
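If you want to inspect the state, reverse the two encoding steps. A minimal sketch (assuming the "string+gzip+base64" type shown above; the wbits value lets zlib auto-detect a gzip or plain zlib header):

# Decode the "string+gzip+base64" adaptation state for inspection.
import base64
import json
import zlib

with open("adaptation-state.json") as f:
    state = json.load(f)

raw = base64.b64decode(state["value"])
text = zlib.decompress(raw, 32 + zlib.MAX_WBITS)  # auto-detect gzip/zlib header
print(text[:200])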

To start another decoding session using the saved adaptation state, you can do something like this:

python kaldigstserver/client.py -r 32000 --send-adaptation-state adaptation-state.json test/data/english_test.wav

Alternative usage through an HTTP API

One can also use the server through a very simple HTTP-based API: simply send audio via a PUT or POST request to http://server:port/client/dynamic/recognize and read the JSON output. Note that the JSON output is structured differently from the output of the websocket-based API. This interface is compatible with the one implemented by http://github.com/alumae/ruby-pocketsphinx-server.

The HTTP API supports chunked transfer encoding, which means the server can read and decode an audio stream before it is complete.

Example:

Send audio to server:

curl -T test/data/english_test.wav "http://localhost:8888/client/dynamic/recognize"

Output:

{"status": 0, "hypotheses": [{"utterance": "one two or three you fall five six seven eight. [noise]."}], "id": "7851281f-e187-4c24-9b58-4f3a5cba3dce"}

Send audio using chunked transfer encoding at an audio byte rate; you can see from the worker logs that decoding starts as soon as the first chunks have been received:

curl -v -T test/data/english_test.raw -H "Content-Type: audio/x-raw-int; rate=16000" --header "Transfer-Encoding: chunked" --limit-rate 32000  "http://localhost:8888/client/dynamic/recognize"

Output (like before):

{"status": 0, "hypotheses": [{"utterance": "one two or three you fall five six seven eight. yeah."}], "id": "4e4594ee-bdb2-401f-8114-41a541d89eb8"}

Websocket-based client-server protocol

Opening a session

To open a session, connect to the specified server websocket address (e.g. ws://localhost:8888/client/ws/speech). The server assumes by default that incoming audio is sent as 16 kHz, mono, 16-bit little-endian data. This can be overridden using the 'content-type' request parameter. The content type has to be specified in GStreamer 1.0 caps format, e.g., to send 44100 Hz mono 16-bit data, use: "audio/x-raw, layout=(string)interleaved, rate=(int)44100, format=(string)S16LE, channels=(int)1". This needs to be url-encoded of course, so the actual request is something like:

ws://localhost:8888/client/ws/speech?content-type=audio/x-raw,+layout=(string)interleaved,+rate=(int)44100,+format=(string)S16LE,+channels=(int)1
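Building that URL programmatically is straightforward; a sketch (Python 2, matching the server's requirements; the exact escaping doesn't matter as long as it decodes back to the caps string):

# Build a session URL with a url-encoded GStreamer caps content-type.
from urllib import quote_plus  # Python 3: from urllib.parse import quote_plus

caps = ("audio/x-raw, layout=(string)interleaved, "
        "rate=(int)44100, format=(string)S16LE, channels=(int)1")
print("ws://localhost:8888/client/ws/speech?content-type=" + quote_plus(caps))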

Audio can also be encoded using any codec recognized by GStreamer (assuming the needed packages are installed on the server). GStreamer should recognize the container and codec automatically from the stream, so you don't have to specify the content type. E.g., to send audio encoded with the Speex codec in an Ogg container, use the following URL to open the session (the server should automatically recognize the codec):

ws://localhost:8888/client/ws/speech

Sending audio

Speech should be sent to the server in raw blocks of data, using the encoding specified when the session was opened. It is recommended that a new block be sent at least 4 times per second (less frequent blocks would increase the recognition lag). Blocks do not have to be of equal size.

After the last block of speech data, a special 3-byte ANSI-encoded string "EOS" ("end-of-stream") needs to be sent to the server. This tells the server that no more speech is coming and recognition can be finalized.

After sending "EOS", client has to keep the websocket open to receive recognition results from the server. Server closes the connection itself when all recognition results have been sent to the client. No more audio can be sent via the same websocket after an "EOS" has been sent. In order to process a new audio stream, a new websocket connection has to be created by the client.

Reading results

Server sends recognition results and other information to the client using the JSON format. The response can contain the following fields:

  • status -- response status (integer), see codes below
  • message -- (optional) status message
  • result -- (optional) recognition result, containing the following fields:
    • hypotheses -- recognized words, a list with each item containing the following:
      • transcript -- recognized words
      • confidence -- (optional) confidence of the hypothesis (float, 0..1)
    • final -- true when the hypothesis is final, i.e., doesn't change any more

The following status codes are currently in use:

  • 0 -- Success. Usually used when recognition results are sent.
  • 1 -- No speech. Sent when the incoming audio contains a large portion of silence or non-speech.
  • 2 -- Aborted. Recognition was aborted for some reason.
  • 9 -- Not available. Used when all recognizer processes are currently in use and recognition cannot be performed.

The websocket is always closed by the server after sending a non-zero status update.

Examples of server responses:

{"status": 9}
{"status": 0, "result": {"hypotheses": [{"transcript": "see on"}], "final": false}}
{"status": 0, "result": {"hypotheses": [{"transcript": "see on teine lause."}], "final": true}}

The server segments incoming audio on the fly. For each segment, many non-final hypotheses are sent, followed by one final hypothesis. Non-final hypotheses are used to present partial recognition results to the client. A sequence of non-final hypotheses is always followed by a final hypothesis for that segment; after sending it, the server starts decoding the next segment, or closes the connection if all audio sent by the client has been processed.

The client is responsible for presenting the results to the user in a way suitable for the application.
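In code, that usually means showing non-final transcripts as volatile text and appending only final ones; a rough sketch of handling one server message (names are illustrative):

# Accumulate the full transcript from a stream of server responses.
import json

final_transcript = []

def handle_response(message):
    response = json.loads(message)
    if response["status"] != 0:
        print("Error: %s" % response.get("message", response["status"]))
        return
    result = response.get("result")
    if result and result["final"]:
        # Keep the best (first) hypothesis of each finished segment.
        final_transcript.append(result["hypotheses"][0]["transcript"])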

Client software

A Javascript client is available here: https://kaljurand.github.io/dictate.js

A Haskell client is available here: https://github.com/alx741/kaldi-gstreamer-server-haskell-client

Citing

If you use this software for research, you can cite the paper where this software is described (available here: http://ebooks.iospress.nl/volumearticle/37996):

@inproceedings{alumae2014,
  author={Tanel Alum\"{a}e},
  title="Full-duplex Speech-to-text System for {Estonian}",
  booktitle="Baltic HLT 2014",
  year=2014,
  address="Kaunas, Lithuania"
}

Of course, you should also acknowledge Kaldi, which does all the hard work.

Contributors

alumae, alx741, bilguun0203, dwks, fanskyer, gastron, ghwn, gunthercox, jbender, jcsilva, kaljurand, mgoldey, naxingyu, rohithkodali, tudt1997


Issues

No such file or directory when running master_server.py

I ran into a small snag when running master_server.py. Apparently the readme file could not be found. I fixed the issue by getting the absolute path to the readme based on the path to the current directory.

I am not sure if anyone else has run into this. I would gladly send a pull request if you are interested.

 class MainHandler(tornado.web.RequestHandler):
     def get(self):
+        current_directory = os.path.dirname(os.path.abspath(__file__))
+        parent_directory = os.path.join(current_directory, os.pardir)
+        readme = os.path.join(parent_directory, "README.md")
+        self.render(readme)
-        self.render("../README.md")

Traceback

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1443, in _execute
    result = method(*self.path_args, **self.path_kwargs)
  File "kaldigstserver/master_server.py", line 76, in get
    self.render("../README.md")
  File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 699, in render
    html = self.render_string(template_name, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 803, in render_string
    t = loader.load(template_name)
  File "/usr/local/lib/python2.7/dist-packages/tornado/template.py", line 424, in load
    self.templates[name] = self._create_template(name)
  File "/usr/local/lib/python2.7/dist-packages/tornado/template.py", line 451, in _create_template
    with open(path, "rb") as f:
IOError: [Errno 2] No such file or directory: '/home/gcox/GitHub/kaldi-gstreamer-server/templates/../README.md'

re: issues with new update

Hi Alumae,

I have seen a couple of bugs with the new update.

  1. The server cannot process more than one decoding session per worker; after one decoding has been processed, the worker does not accept new connections from the server.

  2. We are not getting the correct JSON format for the usage:

curl -T 752706.wav "http://localhost:8888/client/dynamic/recognize"

{"status": 0, "hypotheses": [{"utterance": ""}], "id": "6ceaf16d-e754-488f-a2a8-3d9c4e63affa"}

or

curl -v -T 20141014165818Z.wav -H "Content-Type: audio/x-raw-int; rate=16000" --header "Transfer-Encoding: chunked" --limit-rate 32000 "http://localhost:8888/client/dynamic/recognize"

{"status": 0, "hypotheses": [{"utterance": ""}], "id": "6ceaf16d-e754-488f-a2a8-3d9c4e63affa"}

As seen, we are not getting any hypothesis after the new update, but I can see the result on the server.

Adding New LM to TEDLIUM nnet2

I'd like to create a new LM to use with the TEDLIUM nnet2 AM, but don't think I can do so without the following:

  • phones dir
  • lexicon.txt
  • tree
  • L.fst
  • L_disambig.fst

Any chance you'd be able to release those?
Thanks,
Tali

Silence detection

Hi,

I am using the TED DNN model. I tried running a file through the decoder and it worked.
However, the utterances (or segments) are not split based on silences; sometimes it even breaks in the middle of a spoken sentence. I think it breaks when it reaches a certain word limit.

do-endpointing is set to True, and I tried playing with endpointing-silence-phones (currently 1:2:3:4:5), but no luck.

implementing "continuous" decoder client

I am implementing a continuous decoder client in iOS.

It takes microphone input and uses the websocket protocol to send it to the GStreamer decoder.

A few questions:

  1. I notice I get many "final": false responses that have the same hypothesis. Is there an option to have the server not send out identical responses? Do you recommend implementing this?

  2. Will the server break up utterances based on silence? If so, how do you set the appropriate parameters?

  3. Related: will the server send a final result even if I do not send the EOS string?

It appears that a new connection is allowed after the client closes its connection but before the worker has received EOS from the decoder

It appears that the worker is made available, and a new client connection is allowed, when a previous client session has closed its connection but the worker is still waiting for the final results and EOS from the decoder.

This new connection will generally not lead to a good decoding session. The worker should probably stay unavailable until it sends out the final results / adaptation state and then closes the connection.

different results for same input

Hi

When using the client.py script repeatedly on the same wave file, I get different results.

Most of the time the difference is in the likelihood field of the returned hypotheses; they vary a little from one run to the next (on the same decoder).

Occasionally, the likelihood differences affect the sort order, so on subsequent calls the most likely hypothesis changes.

Is this expected? Thanks.

run 1:
RESPONSE:{u'status': 0, u'segment-start': 0.0, u'segment-length': 4.52, u'total-length': 4.52, u'result': {u'hypotheses': [
{u'likelihood': 145.337, u'transcript': u'minus one twelve point ten minus one eleven point ten'},
{u'likelihood': 143.347, u'transcript': u'hi minus one twelve point ten minus one eleven point ten'},
{u'likelihood': 140.417, u'transcript': u'minus one twelve point ten minus one eleven point eight ten'},
{u'likelihood': 138.427, u'transcript': u'hi minus one twelve point ten minus one eleven point eight ten'}], u'final': True}, u'segment': 0, u'id': u'ceaeba27-d042-4aef-b576-781fc0eeb252'}

run 2:
RESPONSE: {u'status': 0, u'segment-start': 0.0, u'segment-length': 4.52, u'total-length': 4.52, u'result': {u'hypotheses': [
{u'likelihood': 143.316, u'transcript': u'hi minus one twelve point ten minus one eleven point ten'},
{u'likelihood': 140.427, u'transcript': u'minus one twelve point ten minus one eleven point ten'},
{u'likelihood': 138.364, u'transcript': u'hi minus one twelve point ten minus one eleven point eight ten'},
{u'likelihood': 135.475, u'transcript': u'minus one twelve point ten minus one eleven point eight ten'}], u'final': True}, u'segment': 0, u'id': u'8fb2a55e-0d24-4707-8f1f-37e90c038643'}

appending result

Hi alumae,

Is there any way to pause and resume recognition while keeping the same UUID?

Gstreamer appsrc

Hi,

I'm currently using modified Librispeech models and I wanted to do some online experiments with kaldinnet2onlinedecoder, but when starting the worker, GStreamer appsrc ends up with an error I can't figure out how to fix:

File "kaldigstserver/worker.py", line 368, in <module>
    main()
  File "kaldigstserver/worker.py", line 348, in main
    decoder_pipeline = DecoderPipeline2(conf)
  File "/vol/experiments/fboye/kaldi-gstreamer-server/kaldigstserver/decoder2.py", line 24, in __init__
    self.create_pipeline(conf)
  File "/vol/experiments/fboye/kaldi-gstreamer-server/kaldigstserver/decoder2.py", line 60, in create_pipeline
    self.appsrc.set_property("is-live", True)
AttributeError: 'NoneType' object has no attribute 'set_property'

I thought it was a problem with GStreamer plugins or with the online tools from Kaldi, but that doesn't seem to be the case: all the tests worked and all decoder properties were set.
Do you have any idea how to fix that?

Thank you in advance.

P.S.: the post-processor and full-post-processor properties are only used with the GMM-model decoder, am I right?
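For reference, Gst.ElementFactory.make() returns None when the requested element is not registered with GStreamer, so the traceback above means the appsrc element could not be created. A quick diagnostic sketch (the element names are ones the worker pipeline typically needs; adjust as required):

# Check that the GStreamer elements the worker needs can be created.
# Gst.ElementFactory.make() returns None for unregistered elements.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
for name in ("appsrc", "decodebin", "kaldinnet2onlinedecoder"):
    element = Gst.ElementFactory.make(name, None)
    print("%s: %s" % (name, "OK" if element is not None else "MISSING"))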

worker.py could not decode at second time

Thank you for your quick response before, @alumae .

I met another problem. When I run client.py as in the example, the first time works fine. But if I run the same script again, it cannot decode and the worker doesn't return data.

I started a worker like this:

python kaldigstserver/worker.py -u ws://localhost:8888/worker/ws/speech -c sample_worker.yaml

tested like this:

python kaldigstserver/client.py -r 32000 test/data/english_test.raw

The output was the same as you posted.

But after I ran the same script again, it didn't finish.

The master_server log:

    INFO 2015-02-26 22:00:40,776 New worker available <__main__.WorkerSocketHandler object at 0x7f9178fcc090>
    INFO 2015-02-26 22:01:27,459 64c34db5-eddc-4972-a587-3cf27c6a166e: OPEN
    INFO 2015-02-26 22:01:27,459 64c34db5-eddc-4972-a587-3cf27c6a166e: Request arguments: content-type="audio/x-raw, layout=(string)interleaved, rate=(int)16000, format=(string)S16LE, channels=(int)1"
    INFO 2015-02-26 22:01:27,459 64c34db5-eddc-4972-a587-3cf27c6a166e: Using worker <__main__.DecoderSocketHandler object at 0x7f9178fcc210>
    INFO 2015-02-26 22:01:27,459 64c34db5-eddc-4972-a587-3cf27c6a166e: Using content type: audio/x-raw, layout=(string)interleaved, rate=(int)16000, format=(string)S16LE, channels=(int)1
    INFO 2015-02-26 22:01:27,674 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:27,927 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:28,180 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:28,432 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:28,685 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:28,937 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:29,188 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:29,440 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:29,691 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:29,943 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:30,054 64c34db5-eddc-4972-a587-3cf27c6a166e: Sending event {u'status': 0, u'result': {u'hypotheses': [{u'transcript': u'THE.'}], u'final': False}} to client
    INFO 2015-02-26 22:01:30,055 64c34db5-eddc-4972-a587-3cf27c6a166e: Sending event {u'status': 0, u'result': {u'hypotheses': [{u'transcript': u'THE.'}], u'final': True}} to client
    INFO 2015-02-26 22:01:30,195 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:30,446 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:30,698 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:30,949 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:31,061 64c34db5-eddc-4972-a587-3cf27c6a166e: Sending event {u'status': 0, u'result': {u'hypotheses': [{u'transcript': u'ONE.'}], u'final': False}} to client
    INFO 2015-02-26 22:01:31,201 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:31,267 64c34db5-eddc-4972-a587-3cf27c6a166e: Sending event {u'status': 0, u'result': {u'hypotheses': [{u'transcript': u'ONE TWO.'}], u'final': False}} to client
    INFO 2015-02-26 22:01:31,267 64c34db5-eddc-4972-a587-3cf27c6a166e: Sending event {u'status': 0, u'result': {u'hypotheses': [{u'transcript': u'ONE TWO THREE.'}], u'final': False}} to client
    INFO 2015-02-26 22:01:31,453 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:31,704 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:31,957 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:32,209 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:32,364 64c34db5-eddc-4972-a587-3cf27c6a166e: Sending event {u'status': 0, u'result': {u'hypotheses': [{u'transcript': u'ONE TWO THREE FOUR.'}], u'final': Fa... to client
    INFO 2015-02-26 22:01:32,364 64c34db5-eddc-4972-a587-3cf27c6a166e: Sending event {u'status': 0, u'result': {u'hypotheses': [{u'transcript': u'ONE TWO THREE FOUR FIVE.'}], u'final... to client
    INFO 2015-02-26 22:01:32,365 64c34db5-eddc-4972-a587-3cf27c6a166e: Sending event {u'status': 0, u'result': {u'hypotheses': [{u'transcript': u'ONE TWO THREE FOUR FIVE SIX.'}], u'f... to client
    INFO 2015-02-26 22:01:32,461 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:32,712 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:32,964 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:33,034 64c34db5-eddc-4972-a587-3cf27c6a166e: Sending event {u'status': 0, u'result': {u'hypotheses': [{u'transcript': u'ONE TWO THREE FOUR FIVE SIX SEVEN.'}... to client
    INFO 2015-02-26 22:01:33,035 64c34db5-eddc-4972-a587-3cf27c6a166e: Sending event {u'status': 0, u'result': {u'hypotheses': [{u'transcript': u'ONE TWO THREE FOUR FIVE SIX SEVEN EI... to client
    INFO 2015-02-26 22:01:33,035 64c34db5-eddc-4972-a587-3cf27c6a166e: Sending event {u'status': 0, u'result': {u'hypotheses': [{u'transcript': u'ONE TWO THREE FOUR FIVE SIX SEVEN EI... to client
    INFO 2015-02-26 22:01:33,217 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:33,469 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:33,720 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'str'>) of length 4608 to worker
    INFO 2015-02-26 22:01:33,720 64c34db5-eddc-4972-a587-3cf27c6a166e: Forwarding client message (<type 'unicode'>) of length 3 to worker
    INFO 2015-02-26 22:01:33,783 Worker <__main__.WorkerSocketHandler object at 0x7f9178fcc090> leaving
    INFO 2015-02-26 22:01:33,784 64c34db5-eddc-4972-a587-3cf27c6a166e: Handling on_connection_close()
    INFO 2015-02-26 22:01:33,784 64c34db5-eddc-4972-a587-3cf27c6a166e: Closing worker connection
    INFO 2015-02-26 22:01:34,786 New worker available <__main__.WorkerSocketHandler object at 0x7f9178fcc290>
    INFO 2015-02-26 22:01:49,265 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: OPEN
    INFO 2015-02-26 22:01:49,266 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Request arguments: content-type="audio/x-raw, layout=(string)interleaved, rate=(int)16000, format=(string)S16LE, channels=(int)1"
    INFO 2015-02-26 22:01:49,266 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Using worker <__main__.DecoderSocketHandler object at 0x7f9178fcc610>
    INFO 2015-02-26 22:01:49,266 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Using content type: audio/x-raw, layout=(string)interleaved, rate=(int)16000, format=(string)S16LE, channels=(int)1
    INFO 2015-02-26 22:01:49,477 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:49,729 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:49,982 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:50,235 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:50,488 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:50,739 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:50,992 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:51,245 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:51,498 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:51,751 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:52,003 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:52,256 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:52,509 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:52,762 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:53,015 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:53,266 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:53,519 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:53,772 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:54,025 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:54,278 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:54,531 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:54,783 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:55,036 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:55,288 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 8000 to worker
    INFO 2015-02-26 22:01:55,539 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'str'>) of length 4608 to worker
    INFO 2015-02-26 22:01:55,539 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Forwarding client message (<type 'unicode'>) of length 3 to worker

The worker log:

2015-02-26 22:00:40 -    INFO:   __main__: Opening websocket connection to master server
2015-02-26 22:00:40 -    INFO:   __main__: Opened websocket connection to server
2015-02-26 22:01:27 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Setting caps to audio/x-raw, layout=(string)interleaved, rate=(int)16000, format=(string)S16LE, channels=(int)1
2015-02-26 22:01:27 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Connecting audio decoder
2015-02-26 22:01:27 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Started timeout guard
2015-02-26 22:01:27 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Initialized request
2015-02-26 22:01:27 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Connected audio decoder
2015-02-26 22:01:30 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Got word: THE
2015-02-26 22:01:30 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing partial result..
2015-02-26 22:01:30 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing done.
2015-02-26 22:01:30 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Got word: <#s>
2015-02-26 22:01:30 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing final result..
2015-02-26 22:01:30 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing done.
2015-02-26 22:01:31 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Got word: ONE
2015-02-26 22:01:31 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing partial result..
2015-02-26 22:01:31 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing done.
2015-02-26 22:01:31 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Got word: TWO
2015-02-26 22:01:31 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing partial result..
2015-02-26 22:01:31 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing done.
2015-02-26 22:01:31 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Got word: THREE
2015-02-26 22:01:31 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing partial result..
2015-02-26 22:01:31 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing done.
2015-02-26 22:01:32 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Got word: FOUR
2015-02-26 22:01:32 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing partial result..
2015-02-26 22:01:32 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing done.
2015-02-26 22:01:32 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Got word: FIVE
2015-02-26 22:01:32 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing partial result..
2015-02-26 22:01:32 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing done.
2015-02-26 22:01:32 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Got word: SIX
2015-02-26 22:01:32 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing partial result..
2015-02-26 22:01:32 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing done.
2015-02-26 22:01:33 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Got word: SEVEN
2015-02-26 22:01:33 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing partial result..
2015-02-26 22:01:33 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing done.
2015-02-26 22:01:33 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Got word: EIGHT
2015-02-26 22:01:33 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing partial result..
2015-02-26 22:01:33 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing done.
2015-02-26 22:01:33 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Got word: <#s>
2015-02-26 22:01:33 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing final result..
2015-02-26 22:01:33 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Postprocessing done.
2015-02-26 22:01:33 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Pushing EOS to pipeline
2015-02-26 22:01:33 -    INFO:    decoder: 64c34db5-eddc-4972-a587-3cf27c6a166e: Pipeline received eos signal
2015-02-26 22:01:33 -    INFO:   __main__: 64c34db5-eddc-4972-a587-3cf27c6a166e: Adaptation state not supported by the decoder, not sending it.
2015-02-26 22:01:34 -    INFO:   __main__: Opening websocket connection to master server
2015-02-26 22:01:34 -    INFO:   __main__: Opened websocket connection to server
2015-02-26 22:01:49 -    INFO:    decoder: 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Setting caps to audio/x-raw, layout=(string)interleaved, rate=(int)16000, format=(string)S16LE, channels=(int)1
2015-02-26 22:01:49 -    INFO:   __main__: 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Started timeout guard
2015-02-26 22:01:49 -    INFO:   __main__: 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Initialized request
2015-02-26 22:01:55 -    INFO:    decoder: 0fe2739b-c4f8-4185-9d77-3c2e134cf3c6: Pushing EOS to pipeline

In the worker's log, "Pushing EOS to pipeline" is printed and nothing is processed after that.

thanks.

Worker websocket closes prematurely after silence timeout reached

I have a 10 second silence timeout configured, which kicks in as expected; however, when the worker sends the final status message to the client, the websocket seems to already be closed. It looks a lot like the websocket is being closed in the middle of the worker running finish_request, meaning guard_timeout is unable to send a STATUS_NO_SPEECH event (https://github.com/alumae/kaldi-gstreamer-server/blob/master/kaldigstserver/worker.py#L76) and neither the client nor the master server receives it. The last event received by the client (browser) is the adaptation state success message (https://github.com/alumae/kaldi-gstreamer-server/blob/master/kaldigstserver/worker.py#L257).

worker-1.log:

   DEBUG 2015-10-11 05:40:18,230 Starting up worker 
2015-10-11 05:40:18 -    INFO:   decoder2: Creating decoder using conf: {'post-processor': "perl -npe 'BEGIN {use IO::Handle; STDOUT->autoflush(1);} s/(.*)/\\1./;'", 'logging': {'version': 1, 'root': {'level': 'DEBUG', 'handlers': ['console']}, 'formatters': {'simpleFormater': {'datefmt': '%Y-%m-%d %H:%M:%S', 'format': '%(asctime)s - %(levelname)7s: %(name)10s: %(message)s'}}, 'disable_existing_loggers': False, 'handlers': {'console': {'formatter': 'simpleFormater', 'class': 'logging.StreamHandler', 'level': 'DEBUG'}}}, 'use-nnet2': True, 'full-post-processor': './sample_full_post_processor.py', 'decoder': {'ivector-extraction-config': 'test/models/english/tedlium_nnet_ms_sp_online/conf/ivector_extractor.conf', 'num-nbest': 10, 'lattice-beam': 6.0, 'acoustic-scale': 0.083, 'do-endpointing': True, 'beam': 10.0, 'max-active': 10000, 'fst': 'test/models/english/tedlium_nnet_ms_sp_online/HCLG.fst', 'mfcc-config': 'test/models/english/tedlium_nnet_ms_sp_online/conf/mfcc.conf', 'use-threaded-decoder': True, 'traceback-period-in-secs': 0.25, 'model': 'test/models/english/tedlium_nnet_ms_sp_online/final.mdl', 'word-syms': 'test/models/english/tedlium_nnet_ms_sp_online/words.txt', 'endpoint-silence-phones': '1:2:3:4:5:6:7:8:9:10', 'chunk-length-in-secs': 0.25}, 'silence-timeout': 10, 'out-dir': 'tmp', 'use-vad': False}
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: Attempt to add property Gstkaldinnet2onlinedecoder::endpoint-silence-phones after class was initialised
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
[... the same pair of lines repeats for dozens of other Gstkaldinnet2onlinedecoder properties: endpoint-rule1..5-*, feature-type, mfcc/plp/fbank/pitch configs, i-vector settings, and beam/lattice/decoding options ...]
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'delta'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'max-mem'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'phone-determinize'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'word-determinize'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'minimize'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'beam'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'max-active'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'min-active'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'lattice-beam'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'prune-interval'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'determinize-lattice'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'beam-delta'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'hash-ratio'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'acoustic-scale'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: Attempt to add property Gstkaldinnet2onlinedecoder::pad-input after class was initialised
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: Attempt to add property Gstkaldinnet2onlinedecoder::max-nnet-batch-size after class was initialised
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
2015-10-11 05:40:18 -    INFO:   decoder2: Setting decoder property: ivector-extraction-config = test/models/english/tedlium_nnet_ms_sp_online/conf/ivector_extractor.conf
2015-10-11 05:40:18 -    INFO:   decoder2: Setting decoder property: num-nbest = 10
2015-10-11 05:40:18 -    INFO:   decoder2: Setting decoder property: lattice-beam = 6.0
2015-10-11 05:40:18 -    INFO:   decoder2: Setting decoder property: acoustic-scale = 0.083
2015-10-11 05:40:18 -    INFO:   decoder2: Setting decoder property: do-endpointing = True
2015-10-11 05:40:18 -    INFO:   decoder2: Setting decoder property: beam = 10.0
2015-10-11 05:40:18 -    INFO:   decoder2: Setting decoder property: max-active = 10000
2015-10-11 05:40:18 -    INFO:   decoder2: Setting decoder property: fst = test/models/english/tedlium_nnet_ms_sp_online/HCLG.fst
2015-10-11 05:40:31 -    INFO:   decoder2: Setting decoder property: mfcc-config = test/models/english/tedlium_nnet_ms_sp_online/conf/mfcc.conf
2015-10-11 05:40:31 -    INFO:   decoder2: Setting decoder property: traceback-period-in-secs = 0.25
2015-10-11 05:40:31 -    INFO:   decoder2: Setting decoder property: model = test/models/english/tedlium_nnet_ms_sp_online/final.mdl
2015-10-11 05:40:31 -    INFO:   decoder2: Setting decoder property: word-syms = test/models/english/tedlium_nnet_ms_sp_online/words.txt
2015-10-11 05:40:32 -    INFO:   decoder2: Setting decoder property: endpoint-silence-phones = 1:2:3:4:5:6:7:8:9:10
2015-10-11 05:40:32 -    INFO:   decoder2: Setting decoder property: chunk-length-in-secs = 0.25
2015-10-11 05:40:32 -    INFO:   decoder2: Created GStreamer elements
2015-10-11 05:40:32 -   DEBUG:   decoder2: Adding <__main__.GstAppSrc object at 0x7fe68e966230 (GstAppSrc at 0x16609a0)> to the pipeline
2015-10-11 05:40:32 -   DEBUG:   decoder2: Adding <__main__.GstDecodeBin object at 0x7fe68e966410 (GstDecodeBin at 0x16580b0)> to the pipeline
2015-10-11 05:40:32 -   DEBUG:   decoder2: Adding <__main__.GstAudioConvert object at 0x7fe68e966320 (GstAudioConvert at 0x167bcb0)> to the pipeline
2015-10-11 05:40:32 -   DEBUG:   decoder2: Adding <__main__.GstAudioResample object at 0x7fe68e9662d0 (GstAudioResample at 0x1688f10)> to the pipeline
2015-10-11 05:40:32 -   DEBUG:   decoder2: Adding <__main__.GstTee object at 0x7fe68e9663c0 (GstTee at 0x1638170)> to the pipeline
2015-10-11 05:40:32 -   DEBUG:   decoder2: Adding <__main__.GstQueue object at 0x7fe68e966280 (GstQueue at 0x168e1d0)> to the pipeline
2015-10-11 05:40:32 -   DEBUG:   decoder2: Adding <__main__.GstFileSink object at 0x7fe68e966370 (GstFileSink at 0x1692830)> to the pipeline
2015-10-11 05:40:32 -   DEBUG:   decoder2: Adding <__main__.GstQueue object at 0x7fe68e966460 (GstQueue at 0x168e4c0)> to the pipeline
2015-10-11 05:40:32 -   DEBUG:   decoder2: Adding <__main__.Gstkaldinnet2onlinedecoder object at 0x7fe68e919370 (Gstkaldinnet2onlinedecoder at 0x16bc030)> to the pipeline
2015-10-11 05:40:32 -   DEBUG:   decoder2: Adding <__main__.GstFakeSink object at 0x7fe68e9193c0 (GstFakeSink at 0x16cb200)> to the pipeline
2015-10-11 05:40:32 -    INFO:   decoder2: Linking GStreamer elements
LOG (ComputeDerivedVars():ivector-extractor.cc:182) Computing derived variables for iVector extractor
LOG (ComputeDerivedVars():ivector-extractor.cc:203) Done.
2015-10-11 05:40:32 -    INFO:   decoder2: Setting pipeline to READY
2015-10-11 05:40:32 -    INFO:   decoder2: Set pipeline to READY
2015-10-11 05:40:32 -    INFO:   __main__: Opening websocket connection to master server
2015-10-11 05:40:32 -    INFO:   __main__: Opened websocket connection to server
2015-10-11 05:40:49 -   DEBUG:   __main__: <undefined>: Got message from server of type <class 'ws4py.messaging.TextMessage'>
2015-10-11 05:40:49 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Initializing request
2015-10-11 05:40:49 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Setting caps to audio/x-raw, layout=(string)interleaved, rate=(int)16000, format=(string)S16LE, channels=(int)1
2015-10-11 05:40:49 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Connecting audio decoder
2015-10-11 05:40:49 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Started timeout guard
2015-10-11 05:40:49 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Initialized request
2015-10-11 05:40:49 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:40:49 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Connected audio decoder
2015-10-11 05:40:50 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:50 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:50 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:50 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:50 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:50 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:50 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:50 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:50 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:50 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:40:50 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:50 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:40:50 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:50 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the
2015-10-11 05:40:50 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Postprocessing (final=False) result..
2015-10-11 05:40:50 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Postprocessing done.
2015-10-11 05:40:51 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:51 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:40:51 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:51 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the
2015-10-11 05:40:51 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:51 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:51 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:51 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the
2015-10-11 05:40:51 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:51 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:51 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:51 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the
2015-10-11 05:40:51 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:40:51 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:51 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:40:51 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:51 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the test
2015-10-11 05:40:51 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Postprocessing (final=False) result..
2015-10-11 05:40:51 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Postprocessing done.
2015-10-11 05:40:52 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:52 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:52 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:52 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the test
2015-10-11 05:40:52 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:52 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:40:52 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:52 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the test
2015-10-11 05:40:52 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:52 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:40:52 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:52 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the test
2015-10-11 05:40:52 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:40:52 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:52 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:52 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:52 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the test
2015-10-11 05:40:53 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:53 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:53 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:53 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got final result: the test
2015-10-11 05:40:53 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got full final result: {"status": 0, "total-length": 2.86, "result": {"final": true, "hypotheses": [{"transcript": "the test", "likelihood": 69.3736}, {"transcript": "a test", "likelihood": 67.7495}, {"transcript": "to test", "likelihood": 66.6572}, {"transcript": "test", "likelihood": 65.2246}, {"transcript": "but test", "likelihood": 64.5778}, {"transcript": "i test", "likelihood": 64.3957}, {"transcript": "one test", "likelihood": 64.1008}, {"transcript": "and test", "likelihood": 63.7347}, {"transcript": "that test", "likelihood": 63.4901}, {"transcript": "we test", "likelihood": 63.4253}]}, "segment-start": 0.0, "segment-length": 2.86}
2015-10-11 05:40:53 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Before postprocessing: {u'status': 0, u'total-length': 2.86, u'result': {u'hypotheses': [{u'likelihood': 69.3736, u'transcript': u'the test'}, {u'likelihood': 67.7495, u'transcript': u'a test'}, {u'likelihood': 66.6572, u'transcript': u'to test'}, {u'likelihood': 65.2246, u'transcript': u'test'}, {u'likelihood': 64.5778, u'transcript': u'but test'}, {u'likelihood': 64.3957, u'transcript': u'i test'}, {u'likelihood': 64.1008, u'transcript': u'one test'}, {u'likelihood': 63.7347, u'transcript': u'and test'}, {u'likelihood': 63.4901, u'transcript': u'that test'}, {u'likelihood': 63.4253, u'transcript': u'we test'}], u'final': True}, u'segment-length': 2.86, u'segment-start': 0.0}
2015-10-11 05:40:53 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Postprocessing done.
2015-10-11 05:40:53 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: After postprocessing: {u'status': 0, u'total-length': 2.86, u'result': {u'hypotheses': [{u'likelihood': 69.3736, u'confidence': 1.6240999999999985, u'transcript': u'the test.'}], u'final': True}, u'segment-length': 2.86, u'segment-start': 0.0}
2015-10-11 05:40:53 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:53 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:40:53 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:53 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:53 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:40:53 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:53 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:40:54 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:54 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:54 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:54 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:54 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:54 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:54 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:54 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:40:54 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:54 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:54 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:40:54 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:54 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:40:55 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:55 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:55 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:55 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:55 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:55 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:55 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the
2015-10-11 05:40:55 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Postprocessing (final=False) result..
2015-10-11 05:40:55 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Postprocessing done.
2015-10-11 05:40:55 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the
2015-10-11 05:40:55 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the
2015-10-11 05:40:55 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:55 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:40:55 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:55 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the
2015-10-11 05:40:55 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:55 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:55 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:55 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:40:55 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got partial result: the
2015-10-11 05:40:56 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:56 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:40:56 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:56 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:56 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:40:56 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:56 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:56 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:56 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:56 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:56 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:56 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:56 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:40:57 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:57 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:57 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:57 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:57 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5946 to pipeline
2015-10-11 05:40:57 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:57 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:57 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:57 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:57 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:57 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:57 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:57 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:40:57 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got final result: the act
2015-10-11 05:40:57 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got full final result: {"status": 0, "total-length": 5.63, "result": {"final": true, "hypotheses": [{"transcript": "the act", "likelihood": 42.8241}, {"transcript": "the", "likelihood": 42.7886}, {"transcript": "the and", "likelihood": 42.7387}, {"transcript": "a", "likelihood": 42.1951}, {"transcript": "the eye", "likelihood": 41.987}, {"transcript": "the that", "likelihood": 41.8408}, {"transcript": "a i", "likelihood": 41.573}, {"transcript": "the past", "likelihood": 41.3998}, {"transcript": "the <unk>", "likelihood": 41.0507}, {"transcript": "a and", "likelihood": 40.9692}]}, "segment-start": 2.86, "segment-length": 2.77}
2015-10-11 05:40:57 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Before postprocessing: {u'status': 0, u'total-length': 5.63, u'result': {u'hypotheses': [{u'likelihood': 42.8241, u'transcript': u'the act'}, {u'likelihood': 42.7886, u'transcript': u'the'}, {u'likelihood': 42.7387, u'transcript': u'the and'}, {u'likelihood': 42.1951, u'transcript': u'a'}, {u'likelihood': 41.987, u'transcript': u'the eye'}, {u'likelihood': 41.8408, u'transcript': u'the that'}, {u'likelihood': 41.573, u'transcript': u'a i'}, {u'likelihood': 41.3998, u'transcript': u'the past'}, {u'likelihood': 41.0507, u'transcript': u'the <unk>'}, {u'likelihood': 40.9692, u'transcript': u'a and'}], u'final': True}, u'segment-length': 2.77, u'segment-start': 2.86}
2015-10-11 05:40:57 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Postprocessing done.
2015-10-11 05:40:57 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: After postprocessing: {u'status': 0, u'total-length': 5.63, u'result': {u'hypotheses': [{u'likelihood': 42.8241, u'confidence': 0.03549999999999898, u'transcript': u'the act.'}], u'final': True}, u'segment-length': 2.77, u'segment-start': 2.86}
2015-10-11 05:40:57 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:57 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:40:57 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:58 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:58 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:58 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:58 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:58 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:40:58 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:58 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:58 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:58 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:58 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:40:58 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:58 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:40:58 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:59 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:59 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:40:59 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:59 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:59 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:40:59 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:59 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:40:59 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:40:59 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:40:59 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:41:00 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:00 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:00 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:00 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:00 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:00 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:00 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:00 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:00 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:00 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:41:00 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:00 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5946 to pipeline
2015-10-11 05:41:00 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:01 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:01 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:01 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:01 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:01 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:01 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:01 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:01 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:01 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:01 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:41:01 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:01 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:41:01 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:02 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:02 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:41:02 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:02 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:02 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:02 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:02 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:02 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:41:02 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:02 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:41:02 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:02 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:02 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:03 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:03 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:03 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:03 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:03 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:41:03 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:03 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:03 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:41:03 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:03 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:41:03 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:03 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:03 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:04 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:04 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:04 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:04 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:04 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:41:04 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:04 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:04 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:41:04 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:04 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:04 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:04 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:04 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:41:05 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:05 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:05 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:05 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:05 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:41:05 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:05 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:05 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:41:05 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:05 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:05 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:05 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:05 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:41:06 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:06 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:41:06 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:06 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:06 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:06 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:06 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:06 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:06 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:06 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:06 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:41:06 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:06 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:41:07 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:07 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:41:07 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:07 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:07 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:07 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:07 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:07 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:07 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:07 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:07 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:41:07 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:07 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Checking that decoder hasn't been silent for more than 10 seconds
2015-10-11 05:41:08 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:08 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8918 to pipeline
2015-10-11 05:41:08 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:08 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:08 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:08 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:08 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:08 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 8916 to pipeline
2015-10-11 05:41:08 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:08 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Got message from server of type <class 'ws4py.messaging.BinaryMessage'>
2015-10-11 05:41:08 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer of size 5944 to pipeline
2015-10-11 05:41:08 -   DEBUG:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pushing buffer done
2015-10-11 05:41:08 - WARNING:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: More than 10 seconds from last decoder hypothesis update, cancelling
2015-10-11 05:41:08 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Master disconnected before decoder reached EOS?
2015-10-11 05:41:08 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Sending EOS to pipeline in order to cancel processing
2015-10-11 05:41:08 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Cancelled pipeline
2015-10-11 05:41:08 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Waiting for EOS from decoder
2015-10-11 05:41:09 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Pipeline received eos signal
2015-10-11 05:41:09 -    INFO:   decoder2: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Resetting decoder state
2015-10-11 05:41:09 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Sending adaptation state to client...
2015-10-11 05:41:09 -   DEBUG:      ws4py: Closing message received (1000) ''
2015-10-11 05:41:09 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Websocket closed() called
2015-10-11 05:41:09 -   DEBUG:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Websocket closed() finished
2015-10-11 05:41:09 -    INFO:   decoder2: <undefined>: Resetting decoder state
2015-10-11 05:41:09 -    INFO:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Finished waiting for EOS
2015-10-11 05:41:09 - WARNING:   __main__: c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Failed to send error event to master
2015-10-11 05:41:10 -    INFO:   __main__: Opening websocket connection to master server
2015-10-11 05:41:10 -    INFO:   __main__: Opened websocket connection to server
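
The worker.log above ends with the silence watchdog firing: no hypothesis update arrived for more than 10 seconds after the second final result, so the worker sent EOS to the pipeline, reset the decoder state, failed to send the error event to the master, and reconnected. Note also that the "Got full final result" lines are plain JSON. As a minimal sketch (in Python, with the message structure assumed from the log lines above rather than from a formal schema), the best hypothesis can be pulled out of such a message like this:

```python
import json

def best_transcript(message):
    """Extract the top hypothesis from a 'full final result' JSON message.

    Structure assumed from the worker.log output above: top-level "status",
    a "result" object with "final" and a "hypotheses" list, plus
    "segment-start"/"segment-length" timing fields.
    """
    event = json.loads(message)
    if event.get("status") != 0:   # non-zero status would signal an error
        return None
    result = event["result"]
    if not result.get("final"):    # ignore partial hypotheses
        return None
    # Hypotheses appear in descending likelihood order in the logs above;
    # with num-nbest = 10 set at startup, up to ten alternatives arrive.
    hyp = result["hypotheses"][0]
    return hyp["transcript"], hyp.get("likelihood")

raw = ('{"status": 0, "result": {"final": true, "hypotheses": '
       '[{"transcript": "the test", "likelihood": 69.3736}]}, '
       '"segment-start": 0.0, "segment-length": 2.86}')
print(best_transcript(raw))  # ('the test', 69.3736)
```

Both final results in the log follow this shape; the first decodes "the test" with ten n-best alternatives, the second "the act".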

server.log:

   DEBUG 2015-10-11 05:40:18,221 Starting up server 
    INFO 2015-10-11 05:40:32,545 New worker available <__main__.WorkerSocketHandler object at 0x7f5681ab8150> 
    INFO 2015-10-11 05:40:32,570 New worker available <__main__.WorkerSocketHandler object at 0x7f5681ab8090> 
    INFO 2015-10-11 05:40:49,802 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: OPEN 
    INFO 2015-10-11 05:40:49,802 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Request arguments: content-type="audio/x-raw, layout=(string)interleaved, rate=(int)16000, format=(string)S16LE, channels=(int)1" 
    INFO 2015-10-11 05:40:49,803 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Using worker <__main__.DecoderSocketHandler object at 0x7f5681ab8650> 
    INFO 2015-10-11 05:40:49,803 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Using content type: audio/x-raw, layout=(string)interleaved, rate=(int)16000, format=(string)S16LE, channels=(int)1 
    INFO 2015-10-11 05:40:50,374 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:50,457 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:50,666 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:50,892 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:40:50,895 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Sending event {u'status': 0, u'segment': 0, u'result': {u'hypotheses': [{u'transcript': u'the.'}], u'final': Fa... to client 
    INFO 2015-10-11 05:40:51,172 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:40:51,428 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:51,678 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:51,908 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:40:51,910 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Sending event {u'status': 0, u'segment': 0, u'result': {u'hypotheses': [{u'transcript': u'the test.'}], u'final... to client 
    INFO 2015-10-11 05:40:52,185 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:52,461 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:40:52,660 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:40:52,985 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:53,193 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:53,353 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Sending event {u'status': 0, u'segment-start': 0.0, u'segment-length': 2.86, u'total-length': 2.86, u'result': ... to client 
    INFO 2015-10-11 05:40:53,509 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:40:53,670 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:40:54,034 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:54,203 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:54,530 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:40:54,708 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:40:55,082 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:55,209 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:55,254 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Sending event {u'status': 0, u'segment': 1, u'result': {u'hypotheses': [{u'transcript': u'the.'}], u'final': Fa... to client 
    INFO 2015-10-11 05:40:55,579 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:40:55,718 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:56,131 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:40:56,193 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:40:56,655 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:56,738 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:57,179 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:57,239 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5946 to worker 
    INFO 2015-10-11 05:40:57,479 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:57,733 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:57,887 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Sending event {u'status': 0, u'segment-start': 2.86, u'segment-length': 2.77, u'total-length': 5.63, u'result':... to client 
    INFO 2015-10-11 05:40:57,955 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:40:58,235 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:58,488 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:40:58,753 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:58,965 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:40:59,276 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:40:59,499 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:40:59,774 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:41:00,002 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:00,324 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:00,507 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:00,822 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5946 to worker 
    INFO 2015-10-11 05:41:01,012 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:01,373 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:01,517 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:01,871 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:41:02,018 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:41:02,421 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:02,496 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:41:02,946 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:03,033 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:03,470 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:41:03,526 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:41:03,995 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:04,078 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:04,267 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:41:04,547 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:41:04,799 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:05,050 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:05,277 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:41:05,567 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:41:05,807 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:06,064 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:41:06,312 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:06,616 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:06,815 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:41:07,113 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:41:07,323 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:07,665 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:07,801 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:41:08,189 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8918 to worker 
    INFO 2015-10-11 05:41:08,336 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:08,714 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
    INFO 2015-10-11 05:41:08,814 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 5944 to worker 
    INFO 2015-10-11 05:41:09,102 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Sending event {u'status': 0, u'adaptation_state': {u'type': u'string+gzip+base64', u'id': u'c0b969e3-7035-40a8-... to client 
    INFO 2015-10-11 05:41:09,103 Worker <__main__.WorkerSocketHandler object at 0x7f5681ab8090> leaving 
    INFO 2015-10-11 05:41:09,237 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Forwarding client message (<type 'str'>) of length 8916 to worker 
   ERROR 2015-10-11 05:41:09,238 Uncaught exception in /client/ws/speech 
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tornado/websocket.py", line 415, in _run_callback
    callback(*args, **kwargs)
  File "/usr/lib/speakeasy/lib/kaldi-gstreamer-server/kaldigstserver/master_server.py", line 323, in on_message
    self.worker.write_message(message, binary=True)
  File "/usr/local/lib/python2.7/dist-packages/tornado/websocket.py", line 213, in write_message
    raise WebSocketClosedError()
WebSocketClosedError
    INFO 2015-10-11 05:41:09,238 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Handling on_connection_close() 
    INFO 2015-10-11 05:41:09,238 c0b969e3-7035-40a8-97e6-c5b0ed1b01b4: Closing worker connection 
    INFO 2015-10-11 05:41:10,106 New worker available <__main__.WorkerSocketHandler object at 0x7f56818664d0> 

adding extra parameters

Hi Tanel,

Is there any way that we can send some extra parameters using curl?
For example: using a particular language model for specific speakers, based on a speaker ID and language model name sent to the server.
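
One possible approach — a sketch, not a supported feature of the server: extra key-value pairs can ride along as URL query parameters, but master_server.py would have to be modified to read them and route the request to a worker with the matching model. Assuming the requests library, with speaker-id and lm as hypothetical parameter names:

import requests  # assumed to be installed; any HTTP client works

# 'speaker-id' and 'lm' are hypothetical parameters: master_server.py
# would need to be modified to read them and pick a matching worker.
url = "http://localhost:8888/client/dynamic/recognize?speaker-id=spk1&lm=callcenter"
with open("test/data/english_test.wav", "rb") as f:
    response = requests.put(url, data=f)
print response.text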

Concept of kaldi-gstreamer-server

Hi Sirs,

I am a beginner with Kaldi.
I have been a little confused about kaldi-gstreamer-server recently.
After I generate the models and other files...

  1. What is the next step I need to take? Apply the models to kaldi-gstreamer-server (via the yaml file)?
  2. Can the server be created by myself, or should I use this one?
  3. What is the relationship between the Kaldi models and the gstreamer server?
  4. Is there any information I can reference if I want to know how to apply Kaldi models to the server?

Maybe my concepts about kaldi-gstreamer-server are not correct; please give me some suggestions or information. Thanks very much.

Start/end times for utterances

Thanks for this great project. When sending an audio file to ".../client/dynamic/recognize", the response JSON contains an id field. Is it possible to get the start and end times of the utterance using the id, or in any other way?

I've noticed that the server logs contain segment-start and segment-length; can they be sent to the client somehow?

e.g.
INFO 2016-08-14 13:53:28,938 30a81637-769f-4fe0-9be2-2bb7cfb25062: Receiving event {u'status': 0, u'segment-start': 42.68000030517578, u'segment-length': 15.819999694824219, u'tota... from worker
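
One way to get at these today — a sketch assuming the segment-start and segment-length fields shown in the log above are present on final results — is to compute the utterance times client-side from the events the websocket interface already delivers:

import json

def utterance_times(event_json):
    # Start/end of the utterance in seconds, relative to the beginning
    # of the stream, from the fields that accompany final results.
    event = json.loads(event_json)
    start = event.get("segment-start", 0.0)
    return start, start + event.get("segment-length", 0.0)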

Hi Tanel,

Is there any way that we can write out the lattice for every file we pass to the server?

online demo does not work

Hi,

Online demo does not work on my computer. The microphone does not seem to be detected, and nothing happens.
Please fix the issue :)

Thanks, Samuel

Segmentation

Hi

I want to know how I can use the segmentation feature.

Thanks
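
Segmentation is controlled by the endpointing properties of the decoder: with do-endpointing enabled, the decoder emits a final result and starts a new segment whenever it detects a long enough silence. As a sketch, the relevant keys of the decoder section (values copied from the sample config further down this page) expressed as a Python dict:

# conf["decoder"] keys that switch segmentation on.
decoder_conf = {
    "do-endpointing": True,
    "endpoint-silence-phones": "1:2:3:4:5:6:7:8:9:10",  # model-dependent
}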

Output timing information

The recognition results from the server should include start and end times for each word and/or utterance

obtaining final lattices

Hello, I was wondering if there is a way to get the full lattice as output from this program, specifically for the purposes of doing KWS later on. Something like the output given by a command along the lines of:

online2-wav-nnet3-latgen-faster --online=true --frame-subsampling-factor=3 --config=$CONFIG \
    $GRAPH_DIR/HCLG.fst "ark:echo utterance-id1 utterance-id1|" "scp:echo utterance-id1 1.wav |" ark:1.lat

Thank you for your help!

How to return words in nnet results?

I have this working with the Fisher nnet model and would like to return the word-level results as depicted in the gst-kaldi-nnet2-online structured results.

I've made the following changes to the config file and the post-processor (see below), but I'm failing to get anything back. I can see in the worker log that the most recent segment is being returned with the expected data, but it's not being returned to the client (both the Python and HTTP clients just stall).

fisher_english_nnet2.yaml

use-nnet2: True
decoder:
    # All the properties nested here correspond to the kaldinnet2onlinedecoder GStreamer plugin properties.
    # Use gst-inspect-1.0 ./libgstkaldionline2.so kaldinnet2onlinedecoder to discover the available properties
    use-threaded-decoder:  true
    model : test/models/english/fisher_nnet_a_gpu_online/final.mdl
    fst : test/models/english/fisher_nnet_a_gpu_online/HCLG.fst
    word-syms : test/models/english/fisher_nnet_a_gpu_online/words.txt
    feature-type : mfcc
    mfcc-config : test/models/english/fisher_nnet_a_gpu_online/conf/mfcc.conf
    ivector-extraction-config : test/models/english/fisher_nnet_a_gpu_online/conf/ivector_extractor.fixed.conf
    max-active: 10000
    beam: 11.0
    lattice-beam: 5.0
    do-endpointing : true
    endpoint-silence-phones : "1:2:3:4:5:6:7:8:9:10"
    chunk-length-in-secs: 0.2
    #acoustic-scale: 0.083
    #traceback-period-in-secs: 0.2
    #num-nbest: 10
    #Additional functionality that you can play with:
    #lm-fst:  test/models/english/fisher_nnet_a_gpu_online/G.fst
    #big-lm-const-arpa: test/models/english/fisher_nnet_a_gpu_online/G.carpa
    phone-syms: test/models/english/fisher_nnet_a_gpu_online/phones.txt
    word-boundary-file: test/models/english/fisher_nnet_a_gpu_online/word_boundary.int
    do-phone-alignment: true
out-dir: tmp

use-vad: False
silence-timeout: 10

# Just a sample post-processor that appends "." to the hypothesis
post-processor: perl -npe 'BEGIN {use IO::Handle; STDOUT->autoflush(1);} s/(.*)/\1./;'

# A sample full post-processor that adds a confidence score to the 1-best hyp and deletes the other n-best hyps
full-post-processor: ./post_processor.py

logging:
    version : 1
    disable_existing_loggers: False
    formatters:
        simpleFormater:
            format: '%(asctime)s - %(levelname)7s: %(name)10s: %(message)s'
            datefmt: '%Y-%m-%d %H:%M:%S'
    handlers:
        console:
            class: logging.StreamHandler
            formatter: simpleFormater
            level: DEBUG
    root:
        level: DEBUG
        handlers: [console]

post_processor.py

import sys
import json
import logging
from math import exp

def post_process_json(json_str):
    try:
        event = json.loads(json_str)
        if "result" in event:
            if len(event["result"]["hypotheses"]) > 1:
                likelihood1 = event["result"]["hypotheses"][0]["likelihood"]
                likelihood2 = event["result"]["hypotheses"][1]["likelihood"]
                confidence = likelihood1 - likelihood2
                confidence = 1 - exp(-confidence)
            else:
                confidence = 1.0e+10

            event["result"]["hypotheses"][0]["confidence"] = confidence
            event["result"]["hypotheses"][0]["transcript"] += "."
            del event["result"]["hypotheses"][1:]
        return json.dumps(event)
    except:
        exc_type, exc_value, exc_traceback = sys.exc_info()
        logging.error("Failed to process JSON result: %s : %s" % (exc_type, exc_value))
        return json_str


if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG, format="%(levelname)8s %(asctime)s %(message)s ")

    lines = []
    while True:
        l = sys.stdin.readline()
        if not l: break # EOF
        if l.strip() == "":
            if len(lines) > 0:
                result_json = post_process_json("".join(lines))
                print result_json
                print
                sys.stdout.flush()
                lines = []
        else:
            lines.append(l)

    if len(lines) > 0:
        result_json = post_process_json("".join(lines))
        print result_json
        lines = []
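
For what it's worth, the script above can be exercised outside the server by feeding it one JSON event followed by a blank line, which matches its read loop (events are paragraph-separated on stdin and stdout). A quick test harness — a sketch assuming the script is saved as post_processor.py in the current directory:

import json
import subprocess

p = subprocess.Popen(["python", "post_processor.py"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
event = {"status": 0,
         "result": {"final": True,
                    "hypotheses": [{"transcript": "ONE TWO", "likelihood": 100.0},
                                   {"transcript": "WON TOO", "likelihood": 98.5}]}}
out, _ = p.communicate(json.dumps(event) + "\n\n")
print out  # expect a single hypothesis carrying a "confidence" field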

starting a worker with the kaldinnet2onlinedecoder fails.

I'm running the master server on Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-85-generic x86_64). I am using Python 2.7.6.

The command and output are below. I looked in the source code of decoder2.py; it looks like the GstElement is 'NoneType'.

I did a little experiment in ipython and noticed that the following calls have different results.

tee = Gst.ElementFactory.make("tee", "tee")
print tee
<__main__.GstTee object at 0x7f45edcbfb90 (GstTee at 0x235a2d0)>

however,

asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
print asr
None

I have pygobject==3.12.0

Command:

python kaldigstserver/worker.py -u ws://localhost:8888/worker/ws/speech -c sample_english_nnet2.yaml

Output and Error message:

DEBUG 2016-07-22 11:48:56,068 Starting up worker
2016-07-22 11:48:56 - INFO: decoder2: Creating decoder using conf: {'post-processor': "perl -npe 'BEGIN {use IO::Handle; STDOUT->autoflush(1);} s/(.*)/\1./;'", 'logging': {'version': 1, 'root': {'level': 'DEBUG', 'handlers': ['console']}, 'formatters': {'simpleFormater': {'datefmt': '%Y-%m-%d %H:%M:%S', 'format': '%(asctime)s - %(levelname)7s: %(name)10s: %(message)s'}}, 'disable_existing_loggers': False, 'handlers': {'console': {'formatter': 'simpleFormater', 'class': 'logging.StreamHandler', 'level': 'DEBUG'}}}, 'use-nnet2': True, 'full-post-processor': './sample_full_post_processor.py', 'decoder': {'ivector-extraction-config': 'test/models/english/tedlium_nnet_ms_sp_online/conf/ivector_extractor.conf', 'num-nbest': 10, 'lattice-beam': 6.0, 'acoustic-scale': 0.083, 'do-endpointing': True, 'beam': 10.0, 'max-active': 10000, 'fst': 'test/models/english/tedlium_nnet_ms_sp_online/HCLG.fst', 'mfcc-config': 'test/models/english/tedlium_nnet_ms_sp_online/conf/mfcc.conf', 'use-threaded-decoder': True, 'traceback-period-in-secs': 0.25, 'model': 'test/models/english/tedlium_nnet_ms_sp_online/final.mdl', 'word-syms': 'test/models/english/tedlium_nnet_ms_sp_online/words.txt', 'endpoint-silence-phones': '1:2:3:4:5:6:7:8:9:10', 'chunk-length-in-secs': 0.25}, 'silence-timeout': 10, 'out-dir': 'tmp', 'use-vad': False}
None
Traceback (most recent call last):
  File "kaldigstserver/worker.py", line 368, in <module>
    main()
  File "kaldigstserver/worker.py", line 348, in main
    decoder_pipeline = DecoderPipeline2(conf)
  File "/home/sbraden/workspace/kaldi-gstreamer-server/kaldigstserver/decoder2.py", line 24, in __init__
    self.create_pipeline(conf)
  File "/home/sbraden/workspace/kaldi-gstreamer-server/kaldigstserver/decoder2.py", line 54, in create_pipeline
    self.asr.set_property("use-threaded-decoder", conf["decoder"]["use-threaded-decoder"])
AttributeError: 'NoneType' object has no attribute 'set_property'
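
Gst.ElementFactory.make() returning None generally means GStreamer cannot find the plugin, typically because GST_PLUGIN_PATH does not point at the directory containing the compiled gst-kaldi-nnet2-online plugin. A quick diagnostic sketch (the .so name is an assumption):

import os
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
if Gst.ElementFactory.find("kaldinnet2onlinedecoder") is None:
    # Plugin not registered: check that GST_PLUGIN_PATH points at the
    # directory holding libgstkaldinnet2onlinedecoder.so.
    print "kaldinnet2onlinedecoder not found; GST_PLUGIN_PATH=%s" % \
          os.environ.get("GST_PLUGIN_PATH")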

Getting word alignments

I'm able to get phone alignments, but how do I get word alignments? Is there a variable similar to do-phone-alignment?
Thanks,
Tali
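
Word alignment in the full results appears to depend on the decoder being given word-boundary information; the fisher config earlier on this page sets the relevant properties. As a sketch (treating word-boundary-file as the switch is an assumption based on that config):

decoder_conf = {
    # With a word-boundary file, the full final results should include
    # word-level alignments; phone alignment is controlled separately.
    "word-boundary-file": "test/models/english/fisher_nnet_a_gpu_online/word_boundary.int",
    "phone-syms": "test/models/english/fisher_nnet_a_gpu_online/phones.txt",
    "do-phone-alignment": True,
}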

re: worker closed issue

Hi Alumae,

The connection issue is resolved when I run the server on a local PC instead of the Azure cloud. It worked fine for a few days, but then it threw this exception again:

INFO 2015-07-21 11:35:36,189 200 PUT /client/dynamic/recognize (175.101.19.134) 29925.95ms
INFO 2015-07-21 11:35:36,189 Everything done
INFO 2015-07-21 11:35:37,208 New worker available <__main__.WorkerSocketHandler object at 0x7f7dc570c150>
INFO 2015-07-21 12:02:45,748 d591db57-bbc3-43bf-a9b5-2bf4b2d208a8: OPEN: user='none', content='none'
INFO 2015-07-21 12:02:45,748 d591db57-bbc3-43bf-a9b5-2bf4b2d208a8: Using worker <__main__.HttpChunkedRecognizeHandler object at 0x7f7dc570c650>
INFO 2015-07-21 12:02:45,748 Worker <__main__.WorkerSocketHandler object at 0x7f7dc61b76d0> leaving
INFO 2015-07-21 12:02:45,748 d591db57-bbc3-43bf-a9b5-2bf4b2d208a8: Receiving 'close' from worker
ERROR 2015-07-21 12:02:46,736 Uncaught exception
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tornado-4.2-py2.7-linux-x86_64.egg/tornado/http1connection.py", line 561, in _read_fixed_body
    yield gen.maybe_future(delegate.data_received(body))
  File "/usr/local/lib/python2.7/dist-packages/tornado-4.2-py2.7-linux-x86_64.egg/tornado/httpserver.py", line 282, in data_received
    return self.delegate.data_received(chunk)
  File "/usr/local/lib/python2.7/dist-packages/tornado-4.2-py2.7-linux-x86_64.egg/tornado/web.py", line 1974, in data_received
    return self.handler.data_received(data)
  File "/home/user/speech/server/kaldi-gstreamer-server/kaldigstserver/master_server.py", line 142, in data_received
    self.worker.write_message(chunk, binary=True)
  File "/usr/local/lib/python2.7/dist-packages/tornado-4.2-py2.7-linux-x86_64.egg/tornado/websocket.py", line 213, in write_message
    raise WebSocketClosedError()
WebSocketClosedError

warning while generating word alignments

Hi,

I'm getting an error when I try to get the word alignments. It does not always happen: it throws a warning and doesn't provide the word alignment for that particular segment, while for the other segments it works fine.

Here is the warning:

Lattice has input epsilons and/or non-deterministic (in Mohri's sense) --i.e., lattice is not deterministic. Word alignment may be slow and/or blow up memory

When I decode the same file with SGMM-based models (the offline approach), it didn't throw any error and provided all word alignments perfectly.
What is the possible cause of the error, and how can I solve it?

International Output

Hello,

Does this speech recognition stream server support international languages, such as Arabic?

AttributeError: 'DecoderPipeline' object has no attribute 'get_adaptation_state'

When I ran client.py, the result was fine.

The result is:

THE.
ONE TWO THREE FOUR FIVE SIX SEVEN EIGHT.

But after printing the result, client.py did not finish.
I found that the worker has an error.

The error is:

Traceback (most recent call last):
  File "/home/auc/kaldi-trunk/kaldi-gstreamer-server/kaldigstserver/decoder.py", line 140, in _on_eos
    self.eos_handler[0](self.eos_handler[1])
  File "kaldigstserver/worker.py", line 194, in _on_eos
    self.send_adaptation_state()
  File "kaldigstserver/worker.py", line 209, in send_adaptation_state
    adaptation_state = self.decoder_pipeline.get_adaptation_state()
AttributeError: 'DecoderPipeline' object has no attribute 'get_adaptation_state'

Could you solve this problem?

Thanks.

Worker crashes on second execution with a message

Client is executed twice as follows:
$python kaldigstserver/client.py -r 32000 test/data/english_test.raw
ONE TWO THREE FOUR FIVE SIX SEVEN EIGHT.
ONE TWO THREE FOUR FIVE SIX SEVEN EIGHT.
$ python kaldigstserver/client.py -r 32000 test/data/english_test.raw

-- the second client execution crashes the worker with the following message
note: online-feat-input was modified to print Frame, feat_offset_ and feat.NumRows()
Kaldi version: 4441

WARNING (IsValidFrame():online-feat-input.cc:531) Unexpected point reached in code: Frame :624, feat_offset:107, Feature Matrix Rows:30 possibly you are skipping frames?
ERROR (CacheFrame():online-decodable.cc:47) Request for invalid frame (you need to check IsLastFrame, or, for frame zero, check that the input is valid.
terminate called after throwing an instance of 'std::runtime_error'
what(): ERROR (CacheFrame():online-decodable.cc:47) Request for invalid frame (you need to check IsLastFrame, or, for frame zero, check that the input is valid.

ImportError: No module named repository

I get the following error when I try to start the sample worker:

moshe@cloud-server-04:~/kaldi-gstreamer-server/trunk$ uname -a
Linux cloud-server-04 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt9-3~deb8u1 (2015-04-24) x86_64 GNU/Linux
moshe@cloud-server-04:~/kaldi-gstreamer-server/trunk$ python kaldigstserver/worker.py -u ws://localhost:8888/worker/ws/speech -c sample_worker.yaml
Traceback (most recent call last):
  File "kaldigstserver/worker.py", line 9, in <module>
    from gi.repository import GObject
ImportError: No module named repository

thanks
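
This usually means the PyGObject bindings are missing or incomplete (gi exists but gi.repository does not). A minimal check, with the Debian/Ubuntu package names given as an assumption:

try:
    from gi.repository import GObject
except ImportError as e:
    # On Debian/Ubuntu the usual fix is installing the PyGObject
    # introspection packages, e.g. python-gi and gir1.2-gstreamer-1.0.
    raise SystemExit("PyGObject bindings missing: %s" % e)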

When would we expect multiple "final" events?

Hi,

In master_server.py, in the send_event function, there's a check for if len(event["result"]["hypotheses"]) > 0 and event["result"]["final"]:, and if so, the first hypothesis's transcript is appended to the eventual final_hyp. Why is this append necessary? If the result is marked as final, why wouldn't it just be used as the final hyp?

The real motive for the question is that I'm trying to return more than just the top transcript (e.g. n-best, or confidence scores), so I was thinking of having final_hyp just be event["result"]["hypotheses"] directly, and then building the response from that data. But if there's an underlying reason for the transcript-appending functionality, then maybe that indicates my thinking is not right.

Thanks!
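
The append exists because one request can span several segments (the server does endpointing), and each segment produces its own final result; the transcript of the whole request is the concatenation of the per-segment finals. A sketch of collecting the full n-best per segment instead of only the top transcript (field names follow the events shown in the logs on this page):

final_segments = []

def on_final_event(event):
    # Each 'final' event closes one segment; keep its whole n-best list
    # instead of only hypotheses[0]["transcript"].
    if "result" in event and event["result"].get("final"):
        final_segments.append(event["result"]["hypotheses"])

# The 1-best transcript of the whole request is then:
# " ".join(seg[0]["transcript"] for seg in final_segments)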

Decoder (worker) gets into an endless loop

If the connection is opened for dictation and used for a few minutes, the worker gets into an endless loop with the message: "Waiting for decoder EOS"

It appears that the fault is intermittent, and therefore I don't have an exact recipe to replicate this issue. If you have ideas on how to simulate the scenario, I am all ears.

re: Confidence scores

Hi Alumae,

Thanks a ton for the project you have created; I started using it a couple of weeks ago. I am stuck at getting confidence scores: how can I get confidence scores for every word in the JSON file?

Support client side VAD

To support client-side VAD, the server should send a FINAL hyp after a specified timeout, even if it has not been receiving any audio (since the client strips silence, the server may never detect enough of it).

Online dictation with Microphone

I work with kaldi-gstreamer-server, and the system transcribes different input files correctly.

Now I would like to transcribe in online mode.
Is it possible to activate streaming audio from the microphone in the client.py script,
for example like the script run_gui.py in voxforge/gst_demo?
Thanks in advance

Matteo
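
One way to stream a live microphone from Python is to read raw PCM with PyAudio and push binary chunks over the websocket, ending with the "EOS" marker that client.py also sends. A sketch assuming the pyaudio and websocket-client packages (client.py itself uses ws4py), and assuming the raw-audio caps string below is accepted by the server:

import urllib
import pyaudio
import websocket  # the 'websocket-client' package

caps = ("audio/x-raw, layout=(string)interleaved, rate=(int)16000, "
        "format=(string)S16LE, channels=(int)1")
ws = websocket.create_connection(
    "ws://localhost:8888/client/ws/speech?" + urllib.urlencode({"content-type": caps}))

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=16000,
                 input=True, frames_per_buffer=4000)
try:
    for _ in range(4 * 10):                # ~10 seconds; loop forever in a real client
        ws.send_binary(stream.read(4000))  # 0.25 s of 16-bit mono audio
    ws.send("EOS")
    while True:
        print ws.recv()                    # JSON events with partial/final hypotheses
except websocket.WebSocketConnectionClosedException:
    pass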

problem with GST

Hi
Could you please help me with this? I started a worker with sample_worker.yaml:
python kaldigstserver/worker.py -u ws://localhost:8888/worker/ws/speech -c sample_worker.yaml
and then used sample_client.py to test it. I changed the 50th line in sample_client.py to this:
ws = MyClient('ws://localhost:8888/client/ws/speech?%s' ...

Here is the output of the worker:

2014-06-09 12:38:11 - INFO: decoder: Setting decoder property: word-syms = test/models/english/voxforge/tri2b_mmi_b0.05/words.txt
2014-06-09 12:38:11 - INFO: decoder: Setting decoder property: model = test/models/english/voxforge/tri2b_mmi_b0.05/final.mdl
2014-06-09 12:38:11 - INFO: decoder: Setting decoder property: lda-mat = test/models/english/voxforge/tri2b_mmi_b0.05/final.mat
2014-06-09 12:38:11 - INFO: decoder: Setting decoder property: fst = test/models/english/voxforge/tri2b_mmi_b0.05/HCLG.fst
2014-06-09 12:38:11 - INFO: decoder: Setting decoder property: silence-phones = 1:2:3:4:5
2014-06-09 12:38:11 - INFO: decoder: Created GStreamer elements
2014-06-09 12:38:11 - INFO: decoder: Linking GStreamer elements
2014-06-09 12:38:11 - INFO: decoder: Setting pipeline to READY
2014-06-09 12:38:15 - INFO: decoder: Set pipeline to READY
2014-06-09 12:38:15 - INFO: main: Opening websocket connection to master server
2014-06-09 12:38:15 - INFO: main: Opened websocket connection to server
2014-06-09 12:38:29 - INFO: main: d9407236-33ad-43b6-8df4-a93fe85b4a9b: Started timeout guard
2014-06-09 12:38:29 - INFO: main: d9407236-33ad-43b6-8df4-a93fe85b4a9b: Initialized request

(python:15676): GStreamer-CRITICAL **: gst_event_new_caps: assertion 'gst_caps_is_fixed (caps)' failed

(python:15676): GStreamer-CRITICAL **: gst_pad_push_event: assertion 'GST_IS_EVENT (event)' failed
2014-06-09 12:38:29 - ERROR: decoder: (GError('Internal data flow error.',), 'gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:pipeline0/GstAppSrc:appsrc:\nstreaming task paused, reason not-negotiated (-4)')
2014-06-09 12:38:34 - INFO: decoder: d9407236-33ad-43b6-8df4-a93fe85b4a9b: Pushing EOS to pipeline

Thank you very much.

error occurs in server log when trying to send chunk using java client api

Hello, I used your Java client application to send an audio file for transcription. On the server side, it prints the log below:

INFO 2015-07-04 18:16:23,776 New worker available <__main__.WorkerSocketHandler object at 0x7f67300b8c10>
ERROR 2015-07-04 18:16:27,708 Uncaught exception in /worker/ws/speech
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tornado/websocket.py", line 415, in _run_callback
    callback(*args, **kwargs)
  File "kaldigstserver/master_server.py", line 260, in on_message
    assert self.client_socket is not None
AssertionError
INFO 2015-07-04 18:16:27,709 Worker <__main__.WorkerSocketHandler object at 0x7f67300b8c10> leaving

Core dumped while running the nnet example

I received a core dumped termination error while running the sample nnet worker (sample_english_nnet2.yaml) as in the README.

self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
2015-05-18 12:35:05 -    INFO:   decoder2: Setting decoder property: big-lm-const-arpa = test/models/english/tedlium_nnet_ms_sp_online/G.carpa
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)

Is it a specific memory issue on my machine?

Final transcript has segment number missing

Since the creation of the JSON for the final result was moved to the plugin, it is missing the segment number, while the partial transcripts contain the segment number. This is a little confusing, and probably we would need to keep two separate counters so that both have segment numbers. Maybe the partial-result creation should also be moved to the plugin.
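
Until that is fixed, a client can stamp the missing segment numbers on itself by counting final results — a workaround sketch that mirrors the numbering the partial results already carry:

segment_counter = [0]

def annotate(event):
    # Partial results already carry 'segment'; add it to finals and
    # bump the counter whenever a final result closes a segment.
    if "result" in event:
        event.setdefault("segment", segment_counter[0])
        if event["result"].get("final"):
            segment_counter[0] += 1
    return event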

AttributeError: 'NoneType' object has no attribute 'binary_message'

Exception in thread Thread-28:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/e/ashwinraju/kaldi_asr/asr/py-kaldi/tests/client.py", line 58, in send_data_to_ws
    self.send_data(block)
  File "/e/ashwinraju/kaldi_asr/asr/py-kaldi/tests/client.py", line 22, in rate_limited_function
    ret = func(*args,**kargs)
  File "/e/ashwinraju/kaldi_asr/asr/py-kaldi/tests/client.py", line 43, in send_data
    self.send(data, binary=True)
  File "/usr/local/lib/python2.7/dist-packages/ws4py/websocket.py", line 257, in send
    message_sender = self.stream.binary_message if binary else self.stream.text_message
AttributeError: 'NoneType' object has no attribute 'binary_message'

I am calculating predictions for a list of test samples using multiprocessing. I am able to predict the output, but I am still getting this AttributeError. Any idea how to avoid this error?
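
The traceback shows a send() racing against a connection that has already been torn down: once the socket is terminated, ws4py sets self.stream to None, so the next send fails exactly like this. A defensive sketch for the sending code (using ws4py's terminated attribute as the guard here is an assumption):

def send_data(self, block):
    # Stop sending instead of crashing once the connection is gone.
    if self.terminated or self.stream is None:
        return
    self.send(block, binary=True)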

issue running threaded nnet2

Using the sample_english_nnet2.yaml as-is, and then adding
use-threaded-decoder : True
to the decoder section,
I am running this on a WAV file that is continuously being written to storage, and I get this error message:

KALDI_ASSERT: at BestPathEnd:lattice-faster-online-decoder.cc:667, failed: NumFramesDecoded() > 0 && "You cannot call BestPathEnd if no frames were decoded."
Stack trace is:
kaldi::KaldiGetStackTrace()
kaldi::KaldiAssertFailure_(char const*, char const*, int, char const*)
kaldi::LatticeFasterOnlineDecoder::BestPathEnd(bool, float*) const
kaldi::LatticeFasterOnlineDecoder::GetBestPath(fst::VectorFst<fst::ArcTpl<fst::LatticeWeightTpl<float> > >*, bool) const
kaldi::SingleUtteranceNnet2DecoderThreaded::GetBestPath(bool, fst::VectorFst<fst::ArcTpl<fst::LatticeWeightTpl<float> > >*, float*) const
.
.
.
/usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0(+0x94a16) [0x7fd78e00fa16]
/lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x712b8) [0x7fd78fb1f2b8]
/lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x70925) [0x7fd78fb1e925]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x80a5) [0x7fd791d670a5]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fd791a94cfd]

It alternates between this message and
"You cannot call FinalizeDecoding() and then call BestPathEnd() with use_final_probs == false" (the assertion just before it in the file).

Do you have any recommendations on getting this to work?
