Comments (5)
Please clarify your issue. It's difficult to understand where and when this problem occurs.
from kaldi-gstreamer-server.
I have tried this and got the same kind of error in the decoder, while following "Using the 'onlinegmmdecodefaster' based worker":
python kaldigstserver/worker.py -u ws://localhost:8888/worker/ws/speech -c sample_english_nnet2.yaml
DEBUG 2018-01-28 12:35:12,871 Starting up worker
2018-01-28 12:35:12 - INFO: decoder2: Creating decoder using conf: {'post-processor': "perl -npe 'BEGIN {use IO::Handle; STDOUT->autoflush(1);} s/(.*)/\1./;'", 'logging': {'version': 1, 'root': {'level': 'DEBUG', 'handlers': ['console']}, 'formatters': {'simpleFormater': {'datefmt': '%Y-%m-%d %H:%M:%S', 'format': '%(asctime)s - %(levelname)7s: %(name)10s: %(message)s'}}, 'disable_existing_loggers': False, 'handlers': {'console': {'formatter': 'simpleFormater', 'class': 'logging.StreamHandler', 'level': 'DEBUG'}}}, 'use-nnet2': True, 'full-post-processor': './sample_full_post_processor.py', 'decoder': {'ivector-extraction-config': 'test/models/english/tedlium_nnet_ms_sp_online/conf/ivector_extractor.conf', 'num-nbest': 10, 'lattice-beam': 6.0, 'acoustic-scale': 0.083, 'do-endpointing': True, 'beam': 10.0, 'max-active': 10000, 'fst': 'test/models/english/tedlium_nnet_ms_sp_online/HCLG.fst', 'mfcc-config': 'test/models/english/tedlium_nnet_ms_sp_online/conf/mfcc.conf', 'use-threaded-decoder': True, 'traceback-period-in-secs': 0.25, 'model': 'test/models/english/tedlium_nnet_ms_sp_online/final.mdl', 'word-syms': 'test/models/english/tedlium_nnet_ms_sp_online/words.txt', 'endpoint-silence-phones': '1:2:3:4:5:6:7:8:9:10', 'chunk-length-in-secs': 0.25}, 'silence-timeout': 10, 'out-dir': 'tmp', 'use-vad': False}
Traceback (most recent call last):
  File "kaldigstserver/worker.py", line 366, in <module>
    main()
  File "kaldigstserver/worker.py", line 346, in main
    decoder_pipeline = DecoderPipeline2(conf)
  File "/media/thesis/73EF5F223191438D/kaldi-gstreamer-server/kaldigstserver/decoder2.py", line 24, in __init__
    self.create_pipeline(conf)
  File "/media/thesis/73EF5F223191438D/kaldi-gstreamer-server/kaldigstserver/decoder2.py", line 53, in create_pipeline
    self.asr.set_property("use-threaded-decoder", conf["decoder"]["use-threaded-decoder"])
AttributeError: 'NoneType' object has no attribute 'set_property'
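The `AttributeError` means `self.asr` is `None`: `Gst.ElementFactory.make` returns `None` when GStreamer cannot find the requested plugin (here, the Kaldi `kaldinnet2onlinedecoder` element, typically because `libgstkaldinnet2onlinedecoder.so` is not on `GST_PLUGIN_PATH`). A minimal sketch of a defensive check that turns the opaque `AttributeError` into an actionable message; the helper name and the injected `factory_make` parameter are illustrative, not part of the project's code:

```python
def make_element(factory_make, plugin_name):
    """Create a GStreamer element, failing loudly if the plugin is missing.

    factory_make is expected to behave like Gst.ElementFactory.make:
    it returns None when the named element factory cannot be found.
    """
    element = factory_make(plugin_name)
    if element is None:
        # A None result here is almost always a plugin-path problem,
        # not a configuration problem.
        raise RuntimeError(
            "GStreamer plugin %r not found -- check that the Kaldi "
            "GStreamer plugin is built and on GST_PLUGIN_PATH" % plugin_name)
    return element
```

In `decoder2.py`, applying such a guard right after the element is created (before any `set_property` call) would make the missing-plugin case immediately obvious in the log.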
Has anyone solved this problem?
@Alif112 Have you solved this problem?
@wujsy It was a long time ago and I don't remember much, but I can assure you that using Docker will help a lot. The Docker image was bug-free.
Related Issues (20)
- python kaldigstserver/client.py -r 32000 test/data/english_test.raw gives only THE. as output
- single word audio file gives multiple results, how to choose the correct result? HOT 1
- gstkaldinnet2onlinedecoder vs online2-tcp-nnet3-decoder-faster HOT 7
- The pretrained Chinese model can not process audio file with 48khz sample rate HOT 1
- Error switching between Audio types
- Error when running python kaldigstserver/worker.py on sample chinese HOT 1
- How to get phone alignment and word alignment information
- decoder with CSJ -> worker: segmentation fault (core dumped) HOT 2
- Enable Multiple channels listening HOT 1
- setting up the server for http api call HOT 2
- INTEL MKL ERROR: /opt/intel/mkl/lib/intel64/libmkl_avx2.so: undefined symbol: mkl_sparse_optimize_bsr_trsm_i8. Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so. HOT 2
- server can not get EOS HOT 1
- When I run it in Docker with the Chinese model, I get: 2021-04-16 05:52:17 - INFO: __main__: 7404beee-0d39-4d67-963c-01c58da10193: Waiting for EOS from decoder 2021-04-16 05:52:18 - INFO: __main__: 7404beee-0d39-4d67-963c-01c58da10193: Waiting for EOS from decoder HOT 2
- How to run multiple models in a single machine
- How can I save the incoming audio stream to wav file ?
- Invalid parameters supplied to OnlineLdaInput
- cannot download tedlium_nnet_ms_sp_online.tgz HOT 3
- worker process killed when worker replications reach to 3
- Poor performance with nnet3 TDNN-F model
- Any updates of year 2023???