marian-nmt / marian
Fast Neural Machine Translation in C++
Home Page: https://marian-nmt.github.io
License: Other
Specifying --n-best with mini-batch sizes larger than 1 fails with a CUDA error in nth_element.
It outputs three lines, then exits with no error message. The script can be found in /home/bhaddow/experiments/wmt17/backtrans/bench/translate-gpu.sh
Configuration (tried changing thread and batch settings)
relative-paths: yes
beam-size: 5
normalize: yes
gpu-threads: 4
cpu-threads: 2
#mini-batch: 60
#maxi-batch: 1200
scorers:
F0:
path: model.npz
type: Nematus
weights:
F0: 1.0
source-vocab: vocab.en.json
target-vocab: vocab.cs.json
Add the APE feature to the CPU version; currently it is only supported in the GPU version. Should be easy to do.
Usable in-application BPE support: specify the BPE code location in the config file or via a command-line switch, and perform proper pre- and post-processing without depending on external scripts.
Make sure no buffering occurs.
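The de-BPE half of that post-processing is a simple string operation; a minimal sketch, assuming the usual subword-nmt convention of marking non-final subword units with a trailing "@@" (the helper name is illustrative, not part of the codebase):

```python
import re

def debpe(line: str) -> str:
    """Merge BPE subword units back into full words.

    Assumes the subword-nmt convention: a token ending in "@@"
    is joined with the following token.
    """
    # "tab@@ let" -> "tablet"; a stray trailing "@@" is also dropped.
    return re.sub(r"@@ ", "", line).replace("@@", "")
```

This mirrors the `sed 's/\@\@ //g'` step used in the shell pipelines elsewhere in these issues.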
Make it compile if python development libraries are not installed.
make crashes when linking the CXX executable ../bin/amun
I use CMake 3.5.1, GCC/G++ 4.9, Boost 1.54
Do you have any idea what this could be related to? I'm very sorry for this newbie question!
[ 98%] Linking CXX executable ../bin/amun
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp: In function ‘History TranslationTask(const string&, size_t)’:
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:35:50: error: no matching function for call to ‘Search::Decode(Sentence)’
return search->Decode(Sentence(taskCounter, in));
^
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:35:50: note: candidate is:
In file included from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:11:0:
/home/sariyildiznureddin/amunmt/src/./common/search.h:14:34: note: boost::shared_ptr Search::Decode(const Sentences&)
boost::shared_ptr Decode(const Sentences& sentences);
^
/home/sariyildiznureddin/amunmt/src/./common/search.h:14:34: note: no known conversion for argument 1 from ‘Sentence’ to ‘const Sentences&’
In file included from /home/sariyildiznureddin/amunmt/src/./common/search.h:8:0,
from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:11:
/home/sariyildiznureddin/amunmt/src/./common/history.h: In lambda function:
/home/sariyildiznureddin/amunmt/src/./common/history.h:20:5: error: ‘History::History(const History&)’ is private
History(const History &) = delete;
^
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:65:48: error: within this context
[=]{ return TranslationTask(s, i); }
^
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:65:48: error: use of deleted function ‘History::History(const History&)’
In file included from /home/sariyildiznureddin/amunmt/src/./common/search.h:8:0,
from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:11:
/home/sariyildiznureddin/amunmt/src/./common/history.h:20:5: note: declared here
History(const History &) = delete;
^
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp: In function ‘boost::python::list translate(boost::python::list&)’:
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:74:44: error: no matching function for call to ‘Printer(History, size_t, std::stringstream&)’
Printer(result.get(), lineCounter++, ss);
^
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:74:44: note: candidates are:
In file included from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:12:0:
/home/sariyildiznureddin/amunmt/src/./common/printer.h:12:6: note: template void Printer(const History&, OStream&)
void Printer(const History& history, OStream& out) {
^
/home/sariyildiznureddin/amunmt/src/./common/printer.h:12:6: note: template argument deduction/substitution failed:
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:74:44: note: candidate expects 2 arguments, 3 provided
Printer(result.get(), lineCounter++, ss);
^
In file included from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:12:0:
/home/sariyildiznureddin/amunmt/src/./common/printer.h:65:6: note: template void Printer(const Histories&, OStream&)
void Printer(const Histories& histories, OStream& out) {
^
/home/sariyildiznureddin/amunmt/src/./common/printer.h:65:6: note: template argument deduction/substitution failed:
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:74:44: note: cannot convert ‘std::future<_Res>::get() with _Res = History’ (type ‘History’) to type ‘const Histories&’
Printer(result.get(), lineCounter++, ss);
^
In file included from /home/sariyildiznureddin/amunmt/src/./common/search.h:8:0,
from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:11:
/usr/include/c++/4.9/future: In instantiation of ‘_Res std::future<_Res>::get() [with _Res = History]’:
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:74:24: required from here
/home/sariyildiznureddin/amunmt/src/./common/history.h:20:5: error: ‘History::History(const History&)’ is private
History(const History &) = delete;
^
In file included from /home/sariyildiznureddin/amunmt/src/./common/threadpool.h:35:0,
from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:10:
/usr/include/c++/4.9/future:700:58: error: within this context
return std::move(this->_M_get_result()._M_value());
^
/usr/include/c++/4.9/future:700:58: error: use of deleted function ‘History::History(const History&)’
In file included from /home/sariyildiznureddin/amunmt/src/./common/search.h:8:0,
from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:11:
/home/sariyildiznureddin/amunmt/src/./common/history.h:20:5: note: declared here
History(const History &) = delete;
^
In file included from /home/sariyildiznureddin/amunmt/src/./common/threadpool.h:35:0,
from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:10:
/usr/include/c++/4.9/future: In instantiation of ‘static std::__future_base::_Task_setter<_Res_ptr> std::__future_base::_S_task_setter(_Res_ptr&, _BoundFn&&) [with _Res_ptr = std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>; _BoundFn = std::_Bind_simple<std::reference_wrapper<std::_Bind<translate(boost::python::list&)::<lambda()>()> >()>; typename _Res_ptr::element_type::result_type = History]’:
/usr/include/c++/4.9/future:1318:70: required from ‘void std::__future_base::_Task_state<_Fn, _Alloc, _Res(_Args ...)>::_M_run(_Args ...) [with _Fn = std::_Bind<translate(boost::python::list&)::<lambda()>()>; _Alloc = std::allocator; _Res = History; _Args = {}]’
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:85:1: required from here
/usr/include/c++/4.9/future:539:57: error: could not convert ‘std::ref(_Tp&) with _Tp = std::_Bind_simple<std::reference_wrapper<std::_Bind<translate(boost::python::list&)::<lambda()>()> >()>’ from ‘std::reference_wrapper<std::_Bind_simple<std::reference_wrapper<std::_Bind<translate(boost::python::list&)::<lambda()>()> >()> >’ to ‘std::function<History()>’
return _Task_setter<_Res_ptr>{ __ptr, std::ref(__call) };
^
In file included from /home/sariyildiznureddin/amunmt/src/./common/search.h:8:0,
from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:11:
/usr/include/c++/4.9/functional: In instantiation of ‘_Res std::function<_Res(_ArgTypes ...)>::operator()(_ArgTypes ...) const [with _Res = History; _ArgTypes = {}]’:
/usr/include/c++/4.9/future:1241:6: required from ‘_Ptr_type std::__future_base::_Task_setter<_Res_ptr, _Res>::operator()() [with _Ptr_type = std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>; _Res = History]’
/usr/include/c++/4.9/functional:2024:10: required from ‘static _Res std::_Function_handler<_Res(_ArgTypes ...), _Functor>::_M_invoke(const std::_Any_data&, _ArgTypes ...) [with _Res = std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter>; _Functor = std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>, History>; _ArgTypes = {}]’
/usr/include/c++/4.9/functional:2428:19: required from ‘std::function<_Res(_ArgTypes ...)>::function(_Functor) [with _Functor = std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>, History>; = void; _Res = std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter>; _ArgTypes = {}]’
/usr/include/c++/4.9/future:1319:2: required from ‘void std::__future_base::_Task_state<_Fn, _Alloc, _Res(_Args ...)>::_M_run(_Args ...) [with _Fn = std::_Bind<translate(boost::python::list&)::<lambda()>()>; _Alloc = std::allocator; _Res = History; _Args = {}]’
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:85:1: required from here
/home/sariyildiznureddin/amunmt/src/./common/history.h:20:5: error: ‘History::History(const History&)’ is private
History(const History &) = delete;
^
In file included from /usr/include/boost/system/error_code.hpp:23:0,
from /usr/include/boost/chrono/detail/system.hpp:12,
from /usr/include/boost/chrono/system_clocks.hpp:64,
from /usr/include/boost/chrono/chrono.hpp:13,
from /usr/include/boost/timer/timer.hpp:14,
from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:4:
/usr/include/c++/4.9/functional:2440:71: error: within this context
return _M_invoker(_M_functor, std::forward<_ArgTypes>(__args)...);
^
/usr/include/c++/4.9/functional:2440:71: error: use of deleted function ‘History::History(const History&)’
In file included from /home/sariyildiznureddin/amunmt/src/./common/search.h:8:0,
from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:11:
/home/sariyildiznureddin/amunmt/src/./common/history.h:20:5: note: declared here
History(const History &) = delete;
^
/usr/include/c++/4.9/future: In instantiation of ‘void std::__future_base::_Result<_Res>::_M_set(_Res&&) [with _Res = History]’:
/usr/include/c++/4.9/future:1241:6: required from ‘_Ptr_type std::__future_base::_Task_setter<_Res_ptr, _Res>::operator()() [with _Ptr_type = std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>; _Res = History]’
/usr/include/c++/4.9/functional:2024:10: required from ‘static _Res std::_Function_handler<_Res(_ArgTypes ...), _Functor>::_M_invoke(const std::_Any_data&, _ArgTypes ...) [with _Res = std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter>; _Functor = std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>, History>; _ArgTypes = {}]’
/usr/include/c++/4.9/functional:2428:19: required from ‘std::function<_Res(_ArgTypes ...)>::function(_Functor) [with _Functor = std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>, History>; = void; _Res = std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter>; _ArgTypes = {}]’
/usr/include/c++/4.9/future:1319:2: required from ‘void std::__future_base::_Task_state<_Fn, _Alloc, _Res(_Args ...)>::_M_run(_Args ...) [with _Fn = std::_Bind<translate(boost::python::list&)::<lambda()>()>; _Alloc = std::allocator; _Res = History; _Args = {}]’
/home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:85:1: required from here
/home/sariyildiznureddin/amunmt/src/./common/history.h:20:5: error: ‘History::History(const History&)’ is private
In file included from /home/sariyildiznureddin/amunmt/src/./common/threadpool.h:35:0,
from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:10:
/usr/include/c++/4.9/future:238:4: error: within this context
::new (_M_storage._M_addr()) _Res(std::move(__res));
^
/usr/include/c++/4.9/future:238:4: error: use of deleted function ‘History::History(const History&)’
In file included from /home/sariyildiznureddin/amunmt/src/./common/search.h:8:0,
from /home/sariyildiznureddin/amunmt/src/python/amunmt.cpp:11:
/home/sariyildiznureddin/amunmt/src/./common/history.h:20:5: note: declared here
History(const History &) = delete;
^
make[2]: *** [src/CMakeFiles/amunmt.dir/python/amunmt.cpp.o] Error 1
make[1]: *** [src/CMakeFiles/amunmt.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
/tmp/ccwELx9c.ltrans1.ltrans.o: In function ‘std::_Function_handler<void (), std::reference_wrapper<std::_Bind_simple<std::reference_wrapper<std::_Bind<main::{lambda()#2} ()> > ()> > >::_M_invoke(std::_Any_data const&) [clone .lto_priv.954]’:
<artificial>:(.text+0x2ad4): undefined reference to ‘TranslationTask(boost::shared_ptr, unsigned long, unsigned long)’
/tmp/ccwELx9c.ltrans1.ltrans.o: In function ‘std::_Function_handler<void (), std::reference_wrapper<std::_Bind_simple<std::reference_wrapper<std::_Bind<main::{lambda()#1} ()> > ()> > >::_M_invoke(std::_Any_data const&) [clone .lto_priv.952]’:
<artificial>:(.text+0x2b84): undefined reference to ‘TranslationTask(boost::shared_ptr, unsigned long, unsigned long)’
collect2: error: ld returned 1 exit status
make[2]: *** [bin/amun] Error 1
make[1]: *** [src/CMakeFiles/amun.dir/all] Error 2
make: *** [all] Error 2
Hi,
I got errors when compiling the code. I am using
and
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
The errors are:
collect2: error: ld returned 1 exit status
make[2]: *** [bin/atools] Error 1
make[1]: *** [src/3rd_party/fast_align/CMakeFiles/atools.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 77%] Built target cpumode
make: *** [all] Error 2
Thanks,
Jian
-DNOCUDA=ON is weird.
Once it was possible to use KenLM as one of the features. We should re-enable that. It did not help with anything, but there may be interesting applications yet undiscovered. The old code is still there.
I would like to get rid of these. Any objections? Or can we at least put them somewhere other than the top-level folder, where they have the same name as the project? This is confusing; I constantly find myself checking what's in there because they sound important.
Related, but not a duplicate of #28.
Current behaviour:
amunmt/build $ ctest
*********************************
No test configuration file found!
*********************************
Usage
ctest [options]
Expected behaviour:
kenlm/build $ ctest
Test project /home/kpu/kenlm/build
Start 1: bit_packing_test
1/28 Test #1: bit_packing_test ................. Passed 0.13 sec
Start 2: integer_to_string_test
2/28 Test #2: integer_to_string_test ........... Passed 0.11 sec
Start 3: joint_sort_test
3/28 Test #3: joint_sort_test .................. Passed 0.04 sec
Start 4: multi_intersection_test
4/28 Test #4: multi_intersection_test .......... Passed 0.01 sec
Start 5: pcqueue_test
5/28 Test #5: pcqueue_test ..................... Passed 0.01 sec
Start 6: probing_hash_table_test
6/28 Test #6: probing_hash_table_test .......... Passed 0.01 sec
Start 7: read_compressed_test
7/28 Test #7: read_compressed_test ............. Passed 0.33 sec
Start 8: sized_iterator_test
8/28 Test #8: sized_iterator_test .............. Passed 0.02 sec
Start 9: sorted_uniform_test
9/28 Test #9: sorted_uniform_test .............. Passed 0.04 sec
Start 10: string_stream_test
10/28 Test #10: string_stream_test ............... Passed 0.00 sec
Start 11: tokenize_piece_test
11/28 Test #11: tokenize_piece_test .............. Passed 0.00 sec
Start 12: file_piece_test
12/28 Test #12: file_piece_test .................. Passed 0.12 sec
Start 13: io_test
13/28 Test #13: io_test .......................... Passed 0.13 sec
Start 14: sort_test
14/28 Test #14: sort_test ........................ Passed 5.47 sec
Start 15: stream_test
15/28 Test #15: stream_test ...................... Passed 0.16 sec
Start 16: rewindable_stream_test
16/28 Test #16: rewindable_stream_test ........... Passed 0.12 sec
Start 17: left_test
17/28 Test #17: left_test ........................ Passed 0.18 sec
Start 18: partial_test
18/28 Test #18: partial_test ..................... Passed 0.05 sec
Start 19: model_test
19/28 Test #19: model_test ....................... Passed 2.27 sec
Start 20: model_buffer_test
20/28 Test #20: model_buffer_test ................ Passed 0.08 sec
Start 21: adjust_counts_test
21/28 Test #21: adjust_counts_test ............... Passed 0.02 sec
Start 22: corpus_count_test
22/28 Test #22: corpus_count_test ................ Passed 0.07 sec
Start 23: backoff_reunification_test
23/28 Test #23: backoff_reunification_test ....... Passed 0.05 sec
Start 24: bounded_sequence_encoding_test
24/28 Test #24: bounded_sequence_encoding_test ... Passed 0.60 sec
Start 25: normalize_test
25/28 Test #25: normalize_test ................... Passed 0.02 sec
Start 26: tune_derivatives_test
26/28 Test #26: tune_derivatives_test ............ Passed 0.02 sec
Start 27: tune_instances_test
27/28 Test #27: tune_instances_test .............. Passed 0.08 sec
Start 28: merge_vocab_test
28/28 Test #28: merge_vocab_test ................. Passed 0.11 sec
100% tests passed, 0 tests failed out of 28
Total Test time (real) = 10.39 sec
"A C++ decoder for neural machine translatiion models" -> "A C++ decoder for neural machine translation models"
Is it possible to translate a file consisting of one sentence per line?
I tried
./amun -c config.ens.yml -i source.txt > target.txt
but ended up with no translation. target.txt was created but didn't contain any content.
For reference: this seems to be an issue with Boost rather than with Amun, appearing when using CUDA 8.0. Trying to update Boost to see if this fixes things.
We should do this soon; it's been over two weeks since some of them were merged with master, and conflicts will start to grow.
Add layer normalization to master; we have it in a branch already. It should automatically recognize whether layer normalization was used (based on the presence of certain parameters). It should retain compatibility with Marian; maintaining compatibility with Nematus is currently not required.
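For reference, the per-vector computation that layer normalization adds is small; a plain-Python sketch (the gain/bias names here are illustrative, not the parameter names used in the model files):

```python
import math

def layer_norm(x, gain=None, bias=None, eps=1e-6):
    """Layer-normalize one hidden-state vector: subtract the mean,
    divide by the standard deviation, then apply an optional learned
    gain and bias (the extra parameters whose presence in a model
    file would signal that layer normalization was used)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var + eps)
    y = [(v - mean) / std for v in x]
    if gain is not None:
        y = [g * v for g, v in zip(gain, y)]
    if bias is not None:
        y = [v + b for v, b in zip(y, bias)]
    return y
```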
It seems amun is logging "best translation" twice?
When intercepting both, stderr and stdout, I get the following:
Best translation: matrix tablet enabling extended disposable syringe
matrix tablet enabling extended disposable syringe
Best translation 0 : matrix tablet enabling extended disposable syringe
First line is stderr, second stdout, third stderr again. Also note the trailing newline.
Trying to compile current version in master:
[ 89%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/decoder/amun_generated_encoder_decoder.cu.o
/home/jose/amunmt/src/3rd_party/spdlog/spdlog.h:64:317: error: converting to ‘const milliseconds {aka const std::chrono::duration<long int, std::ratio<1l, 1000l> >}’ from initializer list would use explicit constructor ‘constexpr std::chrono::duration<_Rep, _Period>::duration(const _Rep2&) [with _Rep2 = long int; <template-parameter-2-2> = void; _Rep = long int; _Period = std::ratio<1l, 1000l>]’
void set_async_mode(size_t queue_size, const async_overflow_policy overflow_policy = async_overflow_policy::block_retry, const std::function<void()>& worker_warmup_cb = nullptr, const std::chrono::milliseconds& flush_interval_ms = std::chrono::milliseconds::zero(), const std::function<void()>& worker_teardown_cb = nullptr);
^
/home/jose/amunmt/src/3rd_party/spdlog/async_logger.h:47:308: error: converting to ‘const milliseconds {aka const std::chrono::duration<long int, std::ratio<1l, 1000l> >}’ from initializer list would use explicit constructor ‘constexpr std::chrono::duration<_Rep, _Period>::duration(const _Rep2&) [with _Rep2 = long int; <template-parameter-2-2> = void; _Rep = long int; _Period = std::ratio<1l, 1000l>]’
/home/jose/amunmt/src/3rd_party/spdlog/async_logger.h:55:307: error: converting to ‘const milliseconds {aka const std::chrono::duration<long int, std::ratio<1l, 1000l> >}’ from initializer list would use explicit constructor ‘constexpr std::chrono::duration<_Rep, _Period>::duration(const _Rep2&) [with _Rep2 = long int; <template-parameter-2-2> = void; _Rep = long int; _Period = std::ratio<1l, 1000l>]’
/home/jose/amunmt/src/3rd_party/spdlog/details/async_log_helper.h:120:375: error: converting to ‘const milliseconds {aka const std::chrono::duration<long int, std::ratio<1l, 1000l> >}’ from initializer list would use explicit constructor ‘constexpr std::chrono::duration<_Rep, _Period>::duration(const _Rep2&) [with _Rep2 = long int; <template-parameter-2-2> = void; _Rep = long int; _Period = std::ratio<1l, 1000l>]’
CMake Error at amun_generated_loader_factory.cpp.o.cmake:262 (message):
Error generating file
/home/jose/amunmt/build/src/CMakeFiles/amun.dir/common/./amun_generated_loader_factory.cpp.o
src/CMakeFiles/amun.dir/build.make:7813: recipe for target 'src/CMakeFiles/amun.dir/common/amun_generated_loader_factory.cpp.o' failed
make[2]: *** [src/CMakeFiles/amun.dir/common/amun_generated_loader_factory.cpp.o] Error 1
I'm running Ubuntu 16.04 with g++ 5.4.0.
Seems to be related to this: http://stackoverflow.com/questions/26947704/implicit-conversion-failure-from-initializer-list
There are loads of tabs in the code. Let's get rid of those.
Hi all,
After running the amunmt_server.py, how do I send translation requests and get back translations?
This is what it looks like when I run the .py with my configs.
sariyildiznureddin@instance-2:~/amunmt/scripts$ python amunmt_server.py -c config.ens.yml
[Thu Jan 26 09:21:00 2017] (I) Options:
allow-unk: false
batch-size: 1
beam-size: 12
bpe:
- /home/sariyildiznureddin/de-en/deen.bpe
bunch-size: 1
cpu-threads: 8
devices: [0]
gpu-threads: 0
maxi-batch: 1
mini-batch: 1
n-best: false
no-debpe: false
normalize: false
relative-paths: true
return-alignment: false
scorers:
F0:
path: /home/sariyildiznureddin/de-en/model-ens1.npz
type: Nematus
F1:
path: /home/sariyildiznureddin/de-en/model-ens2.npz
type: Nematus
F2:
path: /home/sariyildiznureddin/de-en/model-ens3.npz
type: Nematus
F3:
path: /home/sariyildiznureddin/de-en/model-ens4.npz
type: Nematus
show-weights: false
softmax-filter:
[]
source-vocab:
- /home/sariyildiznureddin/de-en/vocab.de.json
target-vocab: /home/sariyildiznureddin/de-en/vocab.en.json
weights:
F0: 1
F1: 1
F2: 1
F3: 1
wipo: false
[Thu Jan 26 09:21:02 2017] (I) Loading scorers...
[Thu Jan 26 09:21:02 2017] (I) Loading model /home/sariyildiznureddin/de-en/model-ens1.npz
[Thu Jan 26 09:21:02 2017] (I) Loading model /home/sariyildiznureddin/de-en/model-ens2.npz
[Thu Jan 26 09:21:03 2017] (I) Loading model /home/sariyildiznureddin/de-en/model-ens3.npz
[Thu Jan 26 09:21:03 2017] (I) Loading model /home/sariyildiznureddin/de-en/model-ens4.npz
[Thu Jan 26 09:21:04 2017] (I) Reading from stdin
[Thu Jan 26 09:21:04 2017] (I) using bpe: /home/sariyildiznureddin/de-en/deen.bpe
[Thu Jan 26 09:21:04 2017] (I) De-BPE output
Please let me know if these kinds of questions are out of scope for this forum. I know it's actually not a system issue but a gap in my knowledge.
Dear AmuNMT team,
I've tried to adapt a model by continuing training on an in-domain set. When I try to use that model for translation in AmuNMT, I get the following error (decoding with Nematus works well):
[Fri Feb 17 18:38:01 2017] (I) Loading scorers...
[Fri Feb 17 18:38:01 2017] (I) Loading model /home/sariymx/en-de/klin.npz
terminate called after throwing an instance of 'std::runtime_error'
what(): load_the_npy_file: failed fread
This is what the adapted model json looks like:
{
"prior_model": null,
"map_decay_c": 0.0,
"mrt_samples": 100,
"dropout_hidden": 0.2,
"decoder": "gru_cond",
"sort_by_length": true,
"decay_c": 0.0,
"dropout_source": 0.1,
"model_version": 0.1,
"tie_decoder_embeddings": false,
"domain_interpolation_indomain_datasets": [
"indomain.en",
"indomain.fr"
],
"max_epochs": 1,
"dispFreq": 1000,
"finetune_only_last": false,
"domain_interpolation_min": 0.1,
"overwrite": false,
"validFreq": 10000,
"mrt_samples_meanloss": 10,
"dropout_embedding": 0.2,
"external_validation_script": null,
"mrt_ml_mix": 0,
"clip_c": 1.0,
"n_words_src": 85000,
"saveto": "model-ens4.npz",
"dropout_target": 0.1,
"objective": "CE",
"valid_batch_size": 80,
"n_words": 85000,
"mrt_alpha": 0.005,
"tie_encoder_decoder_embeddings": false,
"optimizer": "adadelta",
"alpha_c": 0.0,
"mrt_loss": "SENTENCEBLEU n=4",
"batch_size": 80,
"use_domain_interpolation": false,
"lrate": 0.0001,
"valid_datasets": null,
"pos_win": 10,
"encoder": "gru",
"shuffle_each_epoch": true,
"mrt_reference": false,
"dim": 1024,
"use_dropout": false,
"datasets": [
"data/corpus.bpe.en",
"data/corpus.bpe.de"
],
"dim_word": 500,
"sampleFreq": 10000,
"patience": 10,
"maxibatch_size": 20,
"finetune": false,
"factors": 1,
"dictionaries": [
"vocab.en.json",
"vocab.de.json"
],
"reload_": true,
"maxlen": 50,
"finish_after": 10000000,
"domain_interpolation_inc": 0.1,
"dim_per_factor": [
500
],
"saveFreq": 30000
}
My configs are:
allow-unk: false
batch-size: 1
beam-size: 12
bpe:
- /home/sariymx/en-de/ende.bpe
bunch-size: 1
cpu-threads: 16
devices: [0]
gpu-threads: 0
n-best: false
no-debpe: false
normalize: true
relative-paths: true
return-alignment: false
scorers:
F0:
path: /home/sariymx/en-de/klin.npz
type: Nematus
show-weights: false
softmax-filter:
[]
source-vocab:
- /home/sariymx/en-de/vocab.en.json
target-vocab: /home/sariymx/en-de/vocab.de.json
weights:
F0: 1
wipo: false
The command I used in Nematus, which worked, was:
THEANO_FLAGS=mode=FAST_RUN,floatX=float32,device=$device,on_unused_input=warn python $nematus/nematus/translate.py \
-m klin.npz \
-k 12 -n -p 1 --suppress-un
Thanks for taking the time, and for your great work!
Hi there!
When I translate one string, amunmt always translates two strings/lines in a list - even when I "line.rstrip()" my input string. Therefore, the overall search takes twice the time.
As a test hack (not the actual fix), I have changed this line:
from
for(int i = 0; i < boost::python::len(in); ++i) {
to
for(int i = 0; i < (boost::python::len(in) - 1); ++i) {
Now translation is much faster and I no longer get that second, unnecessary translation, which only returns some kind of useless token anyway.
Maybe at some point the string is not correctly converted to a list of strings of length 1 when there is only one line to translate?
Best wishes,
Simon
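If the cause is the trailing newline producing an empty extra entry in the sentence list, the proper fix would be input normalization rather than dropping the last element. A sketch (hypothetical helper, not the actual code):

```python
def to_lines(text):
    """Split input text into sentence lines without producing a
    spurious empty trailing entry.

    "a\n".split("\n") yields ["a", ""] -- the extra empty string is
    exactly the kind of phantom second sentence described above --
    while "a\n".splitlines() yields just ["a"].
    """
    return [line for line in text.splitlines() if line.strip()]
```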
Add safe softmax to amun, possibly as a default option that can be disabled.
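For reference, the "safe" (max-shifted) softmax is algebraically identical to the naive version but avoids overflow on large logits; a minimal sketch:

```python
import math

def safe_softmax(logits):
    """Numerically stable softmax: subtracting the maximum logit
    leaves the result mathematically unchanged, but keeps exp()
    from overflowing on large scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

A naive `exp(x) / sum(exp(x))` would overflow for logits around 1000; the shifted version handles them without changing any probability.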
This doesn't work
import sys,libamunmt
libamunmt.init("-f config.yml")
for line in sys.stdin:
    print libamunmt.translate([line])[0].strip()
leads to
RuntimeError: /home/germann/amunmt/src/common/god.cpp:267 in amunmt::DeviceInfo amunmt::God::GetNextDevice() const threw amunmt::util::Exception because `ret.threadInd >= gpuThreads'.
Too many GPU threads
Crashes on empty line. This was fixed once but seems to have reappeared.
README.md says that to turn on desegmentation of the output, one should set debpe to true. However, commit ebe3a0f changed debpe to no-debpe, and hence debpe: true is now the default behavior, which can be disabled with no-debpe: true.
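Assuming the post-ebe3a0f option name, the behaviour described above would be controlled like this (a config sketch, not verified against the current README):

```yaml
# De-BPE of the output is now on by default; to disable it:
no-debpe: true
```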
Would it be possible to perform R2L rescoring with amunmt? How could I integrate amunmt into Rico's r2l translation script?
#!/bin/bash
# this sample script translates a test set, including
# preprocessing (tokenization, truecasing, and subword segmentation),
# and postprocessing (merging subword units, detruecasing, detokenization).
# instructions: set paths to mosesdecoder, subword_nmt, and nematus,
# then run "./translate.sh < input_file > output_file"
# suffix of source language
SRC=en
# suffix of target language
TRG=de
# path to moses decoder: https://github.com/moses-smt/mosesdecoder
mosesdecoder=/home/sariyildiznureddin/mosesdecoder
# path to subword segmentation scripts: https://github.com/rsennrich/subword-nmt
subword_nmt=/home/sariyildiznureddin/subword-nmt
# path to nematus ( https://www.github.com/rsennrich/nematus )
nematus=/home/sariyildiznureddin/nematus
# theano device
device=cpu
# temporary file (needed for r2l rescoring)
tmpfile=`mktemp`
# preprocess
$mosesdecoder/scripts/tokenizer/normalize-punctuation.perl -l $SRC | \
$mosesdecoder/scripts/tokenizer/tokenizer.perl -l $SRC -penn | \
$mosesdecoder/scripts/recaser/truecase.perl -model truecase-model.$SRC | \
$subword_nmt/apply_bpe.py -c $SRC$TRG.bpe > $tmpfile
# translate
cat $tmpfile | THEANO_FLAGS=mode=FAST_RUN,floatX=float32,device=$device,on_unused_input=warn python $nematus/nematus/translate.py \
-m model-ens1.npz model-ens2.npz model-ens3.npz model-ens4.npz \
-k 50 -n -p 1 --n-best --suppress-unk | \
# reverse
python r2l/reverse_nbest.py | \
# rescore with r2l model
THEANO_FLAGS=mode=FAST_RUN,floatX=float32,device=$device,on_unused_input=warn python $nematus/nematus/rescore.py \
-m r2l/model-ens1.npz r2l/model-ens2.npz r2l/model-ens3.npz r2l/model-ens4.npz -s $tmpfile -b 80 -n | \
python r2l/rerank.py | \
# restore original word order
python r2l/reverse.py | \
# postprocess
sed 's/\@\@ //g' | \
$mosesdecoder/scripts/recaser/detruecase.perl | \
$mosesdecoder/scripts/tokenizer/detokenizer.perl -l $TRG
rm $tmpfile
l2r worked well:
#!/bin/sh
# this sample script translates a test set, including
# preprocessing (tokenization, truecasing, and subword segmentation),
# and postprocessing (merging subword units, detruecasing, detokenization).
# instructions: set paths to mosesdecoder, subword_nmt, and nematus,
# then run "./translate.sh < input_file > output_file"
# suffix of source language
SRC=en
# suffix of target language
TRG=de
# path to moses decoder: https://github.com/moses-smt/mosesdecoder
mosesdecoder=/home/sariyildiznureddin/mosesdecoder
# preprocess
$mosesdecoder/scripts/tokenizer/normalize-punctuation.perl -l $SRC | \
$mosesdecoder/scripts/tokenizer/tokenizer.perl -l $SRC | \
$mosesdecoder/scripts/recaser/truecase.perl -model truecase-model.$SRC | \
# translate
/home/sariyildiznureddin/amunmt/build/bin/amun -c /home/sariyildiznureddin/amunmt/build/bin/config.ens.yml | \
sed 's/\@\@ //g' | \
$mosesdecoder/scripts/recaser/detruecase.perl | \
$mosesdecoder/scripts/tokenizer/detokenizer.perl -l $TRG -penn
Our system was the APE shared-task winner at WMT 2016; it would be good to have that working with the current commits.
https://github.com/amunmt/amunmt/wiki/AmuNMT-for-Automatic-Post-Editing
The AmuNMT project uses many third-party libraries, e.g. Blaze, cnpy, etc. It's worth moving all these dependencies into the 3rd_party directory.
The task consists of the following steps:
Add command line switch to change between CPU and GPU mode. Currently this can only be done in the configuration file by specifying the model type as Nematus (gpu) or Nematus.CPU (cpu, obviously).
The DL4MT tutorial has a GRU-based RNN-LM. It would be quite easy to add this to the code base and use shallow fusion. Should be there for the sake of completeness. Possible other candidates might be Faster RNN-LM (https://github.com/yandex/faster-rnnlm) models.
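Shallow fusion itself is a small change to the scoring step: at each decoding step, interpolate the translation-model and RNN-LM log-probabilities for each candidate target word. A toy sketch (the weight beta and the function names are illustrative, not taken from the codebase):

```python
def shallow_fusion(tm_logprobs, lm_logprobs, beta=0.1):
    """Shallow fusion: score each candidate word by the translation-model
    log-probability plus a weighted language-model log-probability.
    beta is the LM interpolation weight, a tunable hyperparameter.
    Words unknown to the LM get a very low LM score."""
    return {w: tm_logprobs[w] + beta * lm_logprobs.get(w, -1e9)
            for w in tm_logprobs}

def best_word(tm_logprobs, lm_logprobs, beta=0.1):
    # Pick the candidate with the highest fused score.
    scores = shallow_fusion(tm_logprobs, lm_logprobs, beta)
    return max(scores, key=scores.get)
```

In a beam-search decoder the fused scores would replace the plain translation-model scores when expanding each hypothesis.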
I've experienced an issue with cmake not being able to find Threads. It seems that CMake uses short C programs to test things. If CMakeLists.txt states that only C++ is used for the project, without also listing C, then some of those short tests incorrectly fail, and cmake then thinks those things aren't found.
Changing PROJECT(amunmt CXX) to PROJECT(amunmt C CXX) seems to help on ubuntu 14.04 with cmake 3.5.2
Hi there!
To reproduce this issue:
# Paths are relative to config file location
relative-paths: no
# performance settings
beam-size: 7
devices: [7] #array of gpu devices
normalize: yes
#threads-per-device: 1
#threads: 1
#mode: CPU
gpu-threads: 1
cpu-threads: 0
# scorer configuration
scorers:
F0:
path: <path_to_model>
type: Nematus
# scorer weights
weights:
F0: 1.0
# vocabularies
source-vocab: <vocab>
target-vocab: <vocab>
from websocket import create_connection
import time
start_time = time.time()
with open("<input_file>") as f:
ws = create_connection("ws://localhost:8080/translate")
for line in f:
print("Translating the following line:")
print(line.rstrip())
ws.send(line)
print("Target translation:")
result=ws.recv()
print(result)
ws.close()
Start server
python scripts/amunmt_server.py -c config_gpu.yml -p 8080
Run "latency_client.py"
python latency_client.py
See it work.
Change config.yml: "beam-size: 6"
...
beam-size: 6
...
Start server
Run "latency_client.py"
Server-side error:
terminate called after throwing an instance of 'thrust::system::system_error'
what(): cudaFree in free: an illegal memory access was encountered
Client-side error:
websocket._exceptions.WebSocketConnectionClosedException: Connection is already closed.
Thanks!
Simon
Why is the output of nematus and amunmt different even though I use the same model?
Nematus translates
A win like this might be close.
into
Ein Sieg wie dieser könnte nahe sein - which is a correct translation.
Amunmt translates the same sentence into
Ein Sieg wie dieser könnte schließen. - schließen would rather make sense in the context of closing a door.
This is my Nematus config:
THEANO_FLAGS=mode=FAST_RUN,floatX=float32,device=$device,on_unused_input=warn python $nematus/nematus/translate.py \
-m model-ens4.npz \
-k 50 -n -p 1 --suppress-unk
And this is my Amunmt config:
allow-unk: false
batch-size: 1
beam-size: 50
bpe:
- /home/sariyildiznureddin/en-de/ende.bpe
bunch-size: 1
cpu-threads: 4
devices: [0]
gpu-threads: 0
n-best: false
no-debpe: false
normalize: false
relative-paths: true
return-alignment: false
scorers:
  F0:
    path: /home/sariyildiznureddin/en-de/model-ens4.npz
    type: Nematus
show-weights: false
softmax-filter: []
source-vocab:
  - /home/sariyildiznureddin/en-de/vocab.en.json
target-vocab: /home/sariyildiznureddin/en-de/vocab.de.json
weights:
  F0: 1
wipo: false
Dear AmuNMT Team,
Would it be possible to create a REST API so that translations could be queried like this: http://localhost:8045/translate?q=hello+world ?
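A minimal sketch of such an endpoint, using only the Python standard library. The `translate` function here is a hypothetical stub standing in for the real amunmt Python bindings (the real server would call something like `nmt.translate`, as scripts/amunmt_server.py does); the port 8045 matches the example URL above.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def translate(text):
    # Hypothetical stand-in: the real server would call the amunmt Python
    # bindings here instead of echoing the input back.
    return "<translation of: %s>" % text

class TranslateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/translate":
            self.send_error(404)
            return
        query = parse_qs(url.query)
        source = query.get("q", [""])[0]  # "?q=hello+world" decodes to "hello world"
        body = translate(source).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# To serve requests:
# HTTPServer(("localhost", 8045), TranslateHandler).serve_forever()
```

A GET request to /translate?q=hello+world would then return the translation of "hello world" as plain text.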
Hi assignees,
Would be cool if you guys could take a look at the website (https://amunmt.github.io/) and complain here if something is weird or wrong.
Option --softmax-filter is causing segfaults. This seems to be new, either due to batching or due to the safe softmax.
Hi there!
We are experiencing an issue with running the AmuNMT server on one GPU and sending a file for translation to it line by line. To reproduce this issue, please perform the following steps:
Start Ubuntu VM with 1 GPU (Nvidia K80)
Create a config.yml similar to the following:
relative-paths: no
beam-size: 12
devices: [0]
normalize: yes
threads-per-device: 1
threads: 1
gpu-threads: 1
cpu-threads: 0
scorers:
  F0:
    path: <path_to_your_model>
    type: Nematus
weights:
  F0: 1.0
source-vocab: <path_to_your_json_file>
target-vocab: <path_to_your_json_file>
Verify that translation from stdin works: ./build/bin/amun -c config.yml < testfile.en
Then start the server: python scripts/amunmt_server.py -c config.yml -p 8080
[Fri Feb 17 12:32:51 2017] (I) Options:
allow-unk: false
beam-size: 12
cpu-threads: 0
devices: [0]
gpu-threads: 1
max-length: 500
maxi-batch: 1
mini-batch: 1
n-best: false
no-debpe: false
normalize: yes
relative-paths: no
return-alignment: false
scorers:
  F0:
    path: <path_to_your_model>
    type: Nematus
show-weights: false
softmax-filter: []
source-vocab: <path_to_your_json_file>
target-vocab: <path_to_your_json_file>
threads: 1
threads-per-device: 1
weights:
  F0: 1.0
wipo: false
[Fri Feb 17 12:32:52 2017] (I) Loading scorers...
[Fri Feb 17 12:32:52 2017] (I) Loading model <path_to_your_model> onto gpu0
[Fri Feb 17 12:32:56 2017] (I) Reading from stdin
from websocket import create_connection
import time

with open("testfile.en") as f:
    for line in f:
        ws = create_connection("ws://localhost:8080/translate")
        ws.send(line)
        result = ws.recv()
        print(result)
        ws.close()
        time.sleep(5)
[Fri Feb 17 12:35:52 2017] (I) Setting CPU thread count to 0
[Fri Feb 17 12:35:52 2017] (I) Setting GPU thread count to 1
[Fri Feb 17 12:35:52 2017] (I) Total number of threads: 1
Batch 0.0: Search took 0.071s
Best translation: ...
Batch 0.0: Search took 0.016s
Best translation: ...
[Fri Feb 17 12:35:57 2017] (I) Setting CPU thread count to 0
[Fri Feb 17 12:35:57 2017] (I) Setting GPU thread count to 1
[Fri Feb 17 12:35:57 2017] (I) Total number of threads: 1
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/bottle.py", line 862, in _handle
    return route.call(**args)
  File "/usr/local/lib/python2.7/dist-packages/bottle.py", line 1740, in wrapper
    rv = callback(*a, **ka)
  File "scripts/amunmt_server.py", line 27, in handle_websocket
    trans = nmt.translate(message.split('\n'))
RuntimeError: /data/amunmt/src/common/god.cpp:267 in amunmt::DeviceInfo amunmt::God::GetNextDevice() const threw amunmt::util::Exception because `ret.threadInd >= gpuThreads'.
Too many GPU threads
Based on my observations, a whole maxi-batch is handled by a single GPU. That seems suboptimal; I would rather distribute the mini-batches. For instance, if a document fits into two maxi-batches, only two GPUs are used despite four being specified in the config.
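The distribution being suggested can be sketched as follows. This is an illustrative sketch, not amun's actual scheduler; the sentence list, mini-batch size, and device IDs are made up.

```python
from itertools import cycle

def split_minibatches(sentences, mini_batch_size):
    """Split a maxi-batch into mini-batches of at most mini_batch_size sentences."""
    return [sentences[i:i + mini_batch_size]
            for i in range(0, len(sentences), mini_batch_size)]

def assign_round_robin(mini_batches, devices):
    """Distribute mini-batches over all devices, rather than handing a whole
    maxi-batch to a single GPU."""
    assignment = {d: [] for d in devices}
    for device, batch in zip(cycle(devices), mini_batches):
        assignment[device].append(batch)
    return assignment

# One maxi-batch of 8 sentences, mini-batch size 2, four GPUs as in the report:
maxi_batch = ["sent%d" % i for i in range(8)]
work = assign_round_robin(split_minibatches(maxi_batch, 2), devices=[0, 1, 2, 3])
# Every device now gets one mini-batch, instead of one GPU doing all the work.
```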
If I wanted to test Rico's model on a non-gpu instance would this be the setup to use:
relative-paths: yes
beam-size: 12
devices: [0]
normalize: yes
gpu-threads: 0
scorers:
  F0:
    path: model.npz
    type: Nematus
weights:
  F0: 1.0
source-vocab: vocab.en.json
target-vocab: vocab.de.json
bpe: ende.bpe
debpe: true
cpu-threads: 8
gpu-threads: 0
devices: [1, 0]
When running ./bin/amun -c config.yml <<< "This is a test ." , I end up with Segmentation fault (core dumped) .
Would it be possible to run all four models model-ens{1,2,3,4}?
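Running all four models as an ensemble would presumably mean listing one scorer per model with its own weight, following the scorer/weights layout used elsewhere in this config. A sketch (the paths and the uniform weights are assumptions):

```
scorers:
  F0:
    path: model-ens1.npz
    type: Nematus
  F1:
    path: model-ens2.npz
    type: Nematus
  F2:
    path: model-ens3.npz
    type: Nematus
  F3:
    path: model-ens4.npz
    type: Nematus
weights:
  F0: 0.25
  F1: 0.25
  F2: 0.25
  F3: 0.25
```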
I created an Eclipse project using the "import existing project option".
Project -> Properties -> C/C++ Build warns
Orphaned configuration. No base extension cfg exists for com.nvidia.cuda.ide.seven_five.configuration.debug.2085852150
Tool Settings is empty and I cannot edit anything in this tab.
Should this be solved on your side or am I missing a plugin or something similar?
Command line options should have higher priority. This used to work once.
A lot of people use tie_embeddings, which seems to help in some cases.
A model trained with tie_embeddings does not contain (at least) ff_logit_W, which should probably point to Wemb_dec instead (to be confirmed by looking into the Nematus code).
Compile log:
2016-12-01 16:32:56 ☆ mayer in /disk/scratch_ssd/lidong/nmt/amunmt/build
± |master ✓| → cmake ..
-- The C compiler identification is GNU 4.9.2
-- The CXX compiler identification is GNU 4.9.2
-- Check for working C compiler: /opt/rh/devtoolset-3/root/usr/bin/gcc
-- Check for working C compiler: /opt/rh/devtoolset-3/root/usr/bin/gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /opt/rh/devtoolset-3/root/usr/bin/g++
-- Check for working CXX compiler: /opt/rh/devtoolset-3/root/usr/bin/g++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /opt/cuda-7.5.18 (found version "7.5")
-- Compiling with CUDA support
CMake Warning at /afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:743 (message):
Imported targets not available for Boost version 106200
Call Stack (most recent call first):
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:842 (_Boost_COMPONENT_DEPENDENCIES)
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:1395 (_Boost_MISSING_DEPENDENCIES)
CMakeLists.txt:39 (find_package)
CMake Warning at /afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:743 (message):
Imported targets not available for Boost version 106200
Call Stack (most recent call first):
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:842 (_Boost_COMPONENT_DEPENDENCIES)
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:1395 (_Boost_MISSING_DEPENDENCIES)
CMakeLists.txt:39 (find_package)
CMake Warning at /afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:743 (message):
Imported targets not available for Boost version 106200
Call Stack (most recent call first):
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:842 (_Boost_COMPONENT_DEPENDENCIES)
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:1395 (_Boost_MISSING_DEPENDENCIES)
CMakeLists.txt:39 (find_package)
CMake Warning at /afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:743 (message):
Imported targets not available for Boost version 106200
Call Stack (most recent call first):
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:842 (_Boost_COMPONENT_DEPENDENCIES)
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:1395 (_Boost_MISSING_DEPENDENCIES)
CMakeLists.txt:39 (find_package)
CMake Warning at /afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:743 (message):
Imported targets not available for Boost version 106200
Call Stack (most recent call first):
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:842 (_Boost_COMPONENT_DEPENDENCIES)
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:1395 (_Boost_MISSING_DEPENDENCIES)
CMakeLists.txt:39 (find_package)
CMake Warning at /afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:743 (message):
Imported targets not available for Boost version 106200
Call Stack (most recent call first):
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:842 (_Boost_COMPONENT_DEPENDENCIES)
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:1395 (_Boost_MISSING_DEPENDENCIES)
CMakeLists.txt:39 (find_package)
CMake Warning at /afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:743 (message):
Imported targets not available for Boost version 106200
Call Stack (most recent call first):
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:842 (_Boost_COMPONENT_DEPENDENCIES)
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindBoost.cmake:1395 (_Boost_MISSING_DEPENDENCIES)
CMakeLists.txt:39 (find_package)
-- Boost version: 1.62.0
-- Found the following Boost libraries:
-- system
-- filesystem
-- program_options
-- timer
-- iostreams
-- python
-- thread
-- Found PythonLibs: /usr/lib64/libpython2.7.so (found suitable version "2.7.9", minimum required is "2.7")
-- Found Python
-- Found ZLIB: /afs/inf.ed.ac.uk/user/s14/s1478528/usr/lib/libz.so (found version "1.2.8")
-- Found Git: /usr/bin/git (found version "1.7.1")
-- Git version: ef30275
-- Could NOT find SparseHash (missing: SPARSEHASH_INCLUDE_DIR)
-- Configuring done
-- Generating done
-- Build files have been written to: /disk/scratch_ssd/lidong/nmt/amunmt/build
2016-12-01 16:33:00 ☆ mayer in /disk/scratch_ssd/lidong/nmt/amunmt/build
± |master ✓| → make -j
Scanning dependencies of target cpumode
Scanning dependencies of target libcnpy
Scanning dependencies of target atools
Scanning dependencies of target fast_align
Scanning dependencies of target extract_lex
Scanning dependencies of target libcommon
Scanning dependencies of target libyaml-cpp
[ 1%] Building CXX object src/3rd_party/CMakeFiles/libcnpy.dir/cnpy/cnpy.cpp.o
[ 2%] Building CXX object src/3rd_party/fast_align/CMakeFiles/atools.dir/src/alignment_io.cc.o
[ 3%] Building CXX object src/3rd_party/fast_align/CMakeFiles/atools.dir/src/atools.cc.o
[ 4%] Building CXX object src/3rd_party/fast_align/CMakeFiles/fast_align.dir/src/fast_align.cc.o
[ 5%] Building CXX object src/3rd_party/fast_align/CMakeFiles/fast_align.dir/src/ttables.cc.o
[ 6%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/regex_yaml.cpp.o
[ 7%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/emitfromevents.cpp.o
[ 8%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/nodebuilder.cpp.o
[ 10%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/parser.cpp.o
[ 11%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/stream.cpp.o
[ 12%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/exp.cpp.o
[ 13%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/nodeevents.cpp.o
[ 14%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/scantoken.cpp.o
[ 15%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/emitter.cpp.o
[ 16%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/null.cpp.o
[ 17%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/singledocparser.cpp.o
[ 18%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/memory.cpp.o
[ 20%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/simplekey.cpp.o
[ 21%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/ostream_wrapper.cpp.o
[ 22%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/scantag.cpp.o
[ 23%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/binary.cpp.o
[ 24%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/contrib/graphbuilder.cpp.o
[ 25%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/emitterutils.cpp.o
[ 26%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/node_data.cpp.o
[ 27%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/node.cpp.o
[ 28%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/contrib/graphbuilderadapter.cpp.o
[ 30%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/scanner.cpp.o
[ 31%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/emit.cpp.o
[ 32%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/emitterstate.cpp.o
[ 33%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/parse.cpp.o
[ 34%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/convert.cpp.o
[ 35%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/tag.cpp.o
[ 36%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/scanscalar.cpp.o
/disk/scratch_ssd/lidong/nmt/amunmt/src/3rd_party/fast_align/src/ttables.cc: In member function ‘void TTable::DeserializeLogProbsFromText(std::istream*, Dict&)’:
/disk/scratch_ssd/lidong/nmt/amunmt/src/3rd_party/fast_align/src/ttables.cc:20:12: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (ie >= static_cast<int>(ttable.size())) ttable.resize(ie + 1);
^
[ 37%] Building CXX object src/3rd_party/yaml-cpp/CMakeFiles/libyaml-cpp.dir/directives.cpp.o
[ 38%] Linking CXX executable ../../../bin/fast_align
[ 38%] Built target libcnpy
[ 40%] Linking CXX executable ../../../bin/atools
[ 41%] Building CXX object src/3rd_party/extract_lex/CMakeFiles/extract_lex.dir/extract-lex-main.cpp.o
[ 42%] Building CXX object src/3rd_party/extract_lex/CMakeFiles/extract_lex.dir/utils.cpp.o
[ 43%] Building CXX object src/3rd_party/extract_lex/CMakeFiles/extract_lex.dir/exception.cpp.o
[ 43%] Built target libyaml-cpp
[ 44%] Building CXX object src/CMakeFiles/libcommon.dir/common/git_version.cpp.o
[ 45%] Building CXX object src/CMakeFiles/libcommon.dir/common/config.cpp.o
[ 46%] Building CXX object src/CMakeFiles/libcommon.dir/common/exception.cpp.o
[ 47%] Building CXX object src/CMakeFiles/libcommon.dir/common/filter.cpp.o
[ 48%] Building CXX object src/CMakeFiles/libcommon.dir/common/god.cpp.o
[ 50%] Building CXX object src/CMakeFiles/libcommon.dir/common/logging.cpp.o
[ 51%] Building CXX object src/CMakeFiles/libcommon.dir/common/printer.cpp.o
[ 52%] Building CXX object src/CMakeFiles/libcommon.dir/common/history.cpp.o
[ 53%] Building CXX object src/CMakeFiles/libcommon.dir/common/loader.cpp.o
[ 54%] Building CXX object src/CMakeFiles/libcommon.dir/common/scorer.cpp.o
[ 55%] Building CXX object src/CMakeFiles/libcommon.dir/common/sentence.cpp.o
[ 56%] Building CXX object src/CMakeFiles/libcommon.dir/common/utils.cpp.o
[ 57%] Building CXX object src/CMakeFiles/libcommon.dir/common/search.cpp.o
[ 58%] Building CXX object src/CMakeFiles/libcommon.dir/common/vocab.cpp.o
[ 60%] Building CXX object src/CMakeFiles/libcommon.dir/common/processor/bpe.cpp.o
[ 61%] Building CXX object src/CMakeFiles/cpumode.dir/cpu/mblas/matrix.cpp.o
[ 62%] Building CXX object src/CMakeFiles/cpumode.dir/cpu/mblas/phoenix_functions.cpp.o
[ 63%] Building CXX object src/CMakeFiles/cpumode.dir/cpu/dl4mt/decoder.cpp.o
[ 65%] Building CXX object src/CMakeFiles/cpumode.dir/cpu/dl4mt/encoder.cpp.o
[ 65%] Building CXX object src/CMakeFiles/cpumode.dir/cpu/dl4mt/gru.cpp.o
[ 66%] Building CXX object src/CMakeFiles/cpumode.dir/cpu/dl4mt/model.cpp.o
[ 67%] Building CXX object src/CMakeFiles/cpumode.dir/cpu/decoder/encoder_decoder.cpp.o
[ 67%] Built target atools
[ 68%] Linking CXX executable ../../../bin/extract_lex
[ 68%] Built target fast_align
[ 68%] Built target extract_lex
[ 68%] Built target libcommon
[ 68%] Built target cpumode
[ 70%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/decoder/amun_generated_encoder_decoder.cu.o
[ 71%] Building NVCC (Device) object src/CMakeFiles/amunmt.dir/common/amunmt_generated_loader_factory.cpp.o
[ 72%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/decoder/amun_generated_ape_penalty.cu.o
[ 73%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/decoder/amun_generated_ape_penalty.cu.o
[ 74%] Building NVCC (Device) object src/CMakeFiles/amunmt.dir/gpu/mblas/amunmt_generated_matrix_functions.cu.o
[ 75%] Building NVCC (Device) object src/CMakeFiles/amun.dir/common/amun_generated_loader_factory.cpp.o
[ 76%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/amun_generated_npz_converter.cu.o
[ 77%] Building NVCC (Device) object src/CMakeFiles/amunmt.dir/gpu/decoder/amunmt_generated_ape_penalty.cu.o
[ 78%] Building NVCC (Device) object src/CMakeFiles/amunmt.dir/gpu/dl4mt/amunmt_generated_gru.cu.o
[ 80%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/mblas/amun_generated_nth_element.cu.o
[ 81%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/dl4mt/amun_generated_gru.cu.o
[ 82%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/dl4mt/amun_generated_gru.cu.o
[ 83%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/mblas/amun_generated_nth_element.cu.o
[ 84%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/amun_generated_npz_converter.cu.o
[ 85%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/decoder/amun_generated_encoder_decoder.cu.o
[ 86%] Building NVCC (Device) object src/CMakeFiles/amunmt.dir/gpu/dl4mt/amunmt_generated_encoder.cu.o
[ 87%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/dl4mt/amun_generated_encoder.cu.o
[ 88%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/dl4mt/amun_generated_encoder.cu.o
[ 90%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/mblas/amun_generated_matrix_functions.cu.o
[ 91%] Building NVCC (Device) object src/CMakeFiles/amunmt.dir/gpu/decoder/amunmt_generated_encoder_decoder.cu.o
[ 92%] Building NVCC (Device) object src/CMakeFiles/amunmt.dir/gpu/amunmt_generated_npz_converter.cu.o
[ 93%] Building NVCC (Device) object src/CMakeFiles/amun.dir/common/amun_generated_loader_factory.cpp.o
[ 94%] Building NVCC (Device) object src/CMakeFiles/amun.dir/gpu/mblas/amun_generated_matrix_functions.cu.o
[ 95%] Building NVCC (Device) object src/CMakeFiles/amunmt.dir/gpu/mblas/amunmt_generated_nth_element.cu.o
Error copying file (if different) from "/disk/scratch_ssd/lidong/nmt/amunmt/build/src/CMakeFiles/amun.dir/gpu/amun_generated_npz_converter.cu.o.depend.tmp" to "/disk/scratch_ssd/lidong/nmt/amunmt/build/src/CMakeFiles/amun.dir/gpu/amun_generated_npz_converter.cu.o.depend".
CMake Error at amun_generated_npz_converter.cu.o.cmake:229 (message):
Error generating
/disk/scratch_ssd/lidong/nmt/amunmt/build/src/CMakeFiles/amun.dir/gpu/./amun_generated_npz_converter.cu.o
make[2]: *** [src/CMakeFiles/amun.dir/gpu/amun_generated_npz_converter.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
CMake Error at /afs/inf.ed.ac.uk/user/s14/s1478528/usr/share/cmake-3.6/Modules/FindCUDA/make2cmake.cmake:48 (file):
file failed to open for reading (No such file or directory):
/disk/scratch_ssd/lidong/nmt/amunmt/build/src/CMakeFiles/amun.dir/gpu/decoder/amun_generated_encoder_decoder.cu.o.NVCC-depend
CMake Error at amun_generated_encoder_decoder.cu.o.cmake:219 (message):
Error generating
/disk/scratch_ssd/lidong/nmt/amunmt/build/src/CMakeFiles/amun.dir/gpu/decoder/./amun_generated_encoder_decoder.cu.o
make[2]: *** [src/CMakeFiles/amun.dir/gpu/decoder/amun_generated_encoder_decoder.cu.o] Error 1
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
/afs/inf.ed.ac.uk/user/s14/s1478528/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp(75): warning: variable "tmp" was set but never used
make[1]: *** [src/CMakeFiles/amunmt.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
Scanning dependencies of target amun
[ 96%] Building CXX object src/CMakeFiles/amun.dir/common/decoder_main.cpp.o
[ 97%] Linking CXX executable ../bin/amun
[ 97%] Built target amun
make: *** [all] Error 2
Add option to collect word alignment based on attention scores.
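Collecting hard word alignments from attention is essentially an argmax over the soft attention matrix: for each target word, pick the source position with the highest attention weight. A sketch in plain Python (the toy matrix below is made up; amun would supply the real per-step attention weights):

```python
def hard_alignments(attention):
    """attention[t][s] is the attention weight on source word s when emitting
    target word t (each row roughly sums to 1). Returns, for each target word,
    the index of the most-attended source word."""
    return [max(range(len(row)), key=row.__getitem__) for row in attention]

# Toy attention matrix: 3 target words over 4 source words.
attn = [
    [0.70, 0.10, 0.10, 0.10],  # target word 0 attends mostly to source 0
    [0.10, 0.20, 0.60, 0.10],  # target word 1 -> source 2
    [0.05, 0.05, 0.10, 0.80],  # target word 2 -> source 3
]
print(hard_alignments(attn))  # [0, 2, 3]
```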
On meili:
/home/abmayne/software/amunmt/build/bin/amun -m /fs/gna0/rsennrich/wmt16_systems/en-ru/model.npz -s /fs/gna0/rsennrich/wmt16_systems/en-ru/vocab.en.json -t /fs/gna0/rsennrich/wmt16_systems/en-ru/vocab.ru.json -n -d 1 --mini-batch 100 --maxi-batch 1000 -b 5 < /home/abmayne/experiments/wmt17/backtrans/en-ru/test
[Tue Feb 28 15:54:59 2017] (I) Loading scorers...
[Tue Feb 28 15:54:59 2017] (I) Loading model /fs/gna0/rsennrich/wmt16_systems/en-ru/model.npz onto gpu1
[Tue Feb 28 15:55:05 2017] (I) Reading from stdin
[Tue Feb 28 15:55:05 2017] (I) Setting CPU thread count to 0
[Tue Feb 28 15:55:05 2017] (I) Setting GPU thread count to 1
[Tue Feb 28 15:55:05 2017] (I) Total number of threads: 1
[Tue Feb 28 15:55:05 2017] (I) Reading input
terminate called after throwing an instance of 'thrust::system::system_error'
what(): cudaFree in free: an illegal memory access was encountered
Aborted (core dumped)
When I debugged the process, it crashed in Search::Process while allocating nextStates, i.e. common/search.cpp line 131 in current master.
After @hieuhoang's CPU-GPU merge, the Python bindings disappeared. We have to get them working again.
The cpu branch contains a small Python RESTful server that uses the bindings. It seems like a good test for checking that everything works correctly.
As said in issue #27, too many broken things are committed to master without checking.