
mobilenet-mxnet's People

Contributors

keyky


mobilenet-mxnet's Issues

Operator ChannelwiseConvolution is not registered

Dear author, thanks for your work.
I hit an error when using your model to fine-tune on my own data:

mxnet.base.MXNetError: Failed loading Op conv2_1_dw of type ChannelwiseConvolution: [19:46:46] D:\Program Files (x86)\Jenkins\workspace\mxnet\mxnet\nnvm\src\core\op.cc:55: Check failed: op != nullptr Operator ChannelwiseConvolution is not registered

So what is the reason for this? Why is ChannelwiseConvolution not registered?
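ChannelwiseConvolution is a custom operator that exists only in the MXNet fork bundled with this repository, so a stock MXNet build cannot load the symbol. Since depthwise convolution is just a grouped convolution, one possible workaround (a sketch, not the author's confirmed fix; the attribute key and layout depend on your MXNet version) is to rewrite the saved symbol JSON so those nodes use the stock Convolution operator with num_group set to the channel count:

```python
import json

def replace_channelwise(symbol_json):
    """Rewrite ChannelwiseConvolution nodes to use the stock Convolution op.

    Depthwise convolution is a grouped convolution with one group per input
    channel, so num_group is set to num_filter. The attribute key varies
    across MXNet JSON versions ('attrs', 'attr', or 'param'); adjust to
    match your mobilenet-symbol.json.
    """
    graph = json.loads(symbol_json)
    for node in graph["nodes"]:
        if node.get("op") == "ChannelwiseConvolution":
            node["op"] = "Convolution"
            for key in ("attrs", "attr", "param"):
                if key in node:
                    node[key]["num_group"] = node[key].get("num_filter", "1")
    return json.dumps(graph)
```

After patching, load the JSON with mx.sym.load and re-bind the saved parameters; the weight shapes should be unchanged, but verify against your checkpoint.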

Redundant code?

    conv2_1_sep_bn = mx.symbol.BatchNorm(name='conv2_1_sep_bn', data=conv2_1_sep, use_global_stats=False, fix_gamma=False, eps=0.000100)
    conv2_1_sep_scale = conv2_1_sep_bn
    relu2_1_sep = mx.symbol.Activation(name='relu2_1_sep', data=conv2_1_sep_scale, act_type='relu')

Just curious -- what's the benefit of including the line conv2_1_sep_scale = conv2_1_sep_bn?

In other words, would it be equivalent to have the following?

    conv2_1_sep_bn = mx.symbol.BatchNorm(name='conv2_1_sep_bn', data=conv2_1_sep, use_global_stats=False, fix_gamma=False, eps=0.000100)
    relu2_1_sep = mx.symbol.Activation(name='relu2_1_sep', data=conv2_1_sep_bn, act_type='relu')
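
Yes, the two snippets are equivalent. The extra assignment is almost certainly an artifact of automatic conversion from Caffe, where BatchNorm and Scale are separate layers; MXNet's BatchNorm already learns gamma/beta, so the converter just aliases the Scale output to the BatchNorm symbol. In Python, that assignment only binds a second name to the same object, so no extra node is added to the graph. A minimal sketch:

```python
# Assignment in Python binds a new name; it does not copy the object.
# So `conv2_1_sep_scale = conv2_1_sep_bn` makes both names refer to the
# same symbol, and the two snippets build identical graphs.
class Symbol:
    pass

conv2_1_sep_bn = Symbol()            # stands in for mx.symbol.BatchNorm(...)
conv2_1_sep_scale = conv2_1_sep_bn   # alias only; no new graph node
assert conv2_1_sep_scale is conv2_1_sep_bn
```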

execute failed

Hello, I ran score.py as follows:

1. Cloned this mobilenet-mxnet project.
2. cd'd into mxnet.
3. Built it with: make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
4. Ran: python score.py --prefix=mobilenet --epoch=0 --data-val=/data/ILSVRC2012_img_test.rec

At the last step it failed:
(mobilenet) [hanqing@localhost mobilenet-mxnet]$ python score.py --prefix=mobilenet --epoch=0 --data-val=/data/ILSVRC2012_img_test.rec
[21:56:23] src/io/iter_image_recordio_2.cc:135: ImageRecordIOParser2: /data/ILSVRC2012_img_test.rec, use 4 threads for decoding..
[21:56:24] src/nnvm/legacy_json_util.cc:190: Loading symbol saved by previous version v0.9.5. Attempting to upgrade...
[21:56:24] src/nnvm/legacy_json_util.cc:198: Symbol successfully upgraded!
[21:56:25] src/operator/././cudnn_algoreg-inl.h:65: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
[22:01:59] include/dmlc/././logging.h:304: [22:01:59] src/recordio.cc:125: Check failed: pbegin_ <= pend_ Invalid RecordIO Format

Stack trace returned 5 entries:
[bt] (0) /home/hanqing/anaconda2/envs/mobilenet/lib/python2.7/site-packages/mxnet-0.10.1-py2.7.egg/mxnet/libmxnet.so(_ZN4dmlc19RecordIOChunkReader10NextRecordEPNS_10InputSplit4BlobE+0xd86) [0x7f3747c89406]
[bt] (1) /home/hanqing/anaconda2/envs/mobilenet/lib/python2.7/site-packages/mxnet-0.10.1-py2.7.egg/mxnet/libmxnet.so(+0x10366ed) [0x7f37478986ed]
[bt] (2) /lib64/libgomp.so.1(+0x16435) [0x7f372e6e9435]
[bt] (3) /lib64/libpthread.so.0(+0x7dc5) [0x7f375a691dc5]
[bt] (4) /lib64/libc.so.6(clone+0x6d) [0x7f3759cb776d]

terminate called after throwing an instance of 'dmlc::Error'
what(): [22:01:59] src/recordio.cc:125: Check failed: pbegin_ <= pend_ Invalid RecordIO Format

Stack trace returned 5 entries:
[bt] (0) /home/hanqing/anaconda2/envs/mobilenet/lib/python2.7/site-packages/mxnet-0.10.1-py2.7.egg/mxnet/libmxnet.so(_ZN4dmlc19RecordIOChunkReader10NextRecordEPNS_10InputSplit4BlobE+0xd86) [0x7f3747c89406]
[bt] (1) /home/hanqing/anaconda2/envs/mobilenet/lib/python2.7/site-packages/mxnet-0.10.1-py2.7.egg/mxnet/libmxnet.so(+0x10366ed) [0x7f37478986ed]
[bt] (2) /lib64/libgomp.so.1(+0x16435) [0x7f372e6e9435]
[bt] (3) /lib64/libpthread.so.0(+0x7dc5) [0x7f375a691dc5]
[bt] (4) /lib64/libc.so.6(clone+0x6d) [0x7f3759cb776d]

Aborted (core dumped)

Is there anything I am doing wrong?
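"Invalid RecordIO Format" usually means the .rec file is corrupt, truncated, or not actually a RecordIO file (for example, a partial download or a raw tar renamed to .rec). As a quick sanity check, a sketch assuming dmlc-core's RecordIO layout, in which every record starts with the 4-byte little-endian magic word 0xced7230a:

```python
import struct

RECORDIO_MAGIC = 0xCED7230A  # kMagic in dmlc-core's recordio

def looks_like_recordio(path):
    """Check that a file starts with the dmlc RecordIO magic word."""
    with open(path, "rb") as f:
        head = f.read(4)
    if len(head) < 4:
        return False
    (magic,) = struct.unpack("<I", head)
    return magic == RECORDIO_MAGIC
```

If the check fails, rebuild the validation set with MXNet's im2rec tool; also note that ILSVRC2012_img_test has no labels, so the validation set (ILSVRC2012 val) is what score.py normally expects.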

About training speed

I am training with mxnet-mobilenet, but on my GTX 1080 Ti with 224x224 input the training speed is only about 50 samples/second. Is something wrong with my training setup? I cannot find the reason. Looking forward to your reply.
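One likely cause: MXNet builds of that era had no fused cuDNN kernel for depthwise (channelwise) convolution, so MobileNet often trains far below what its small FLOP count suggests. To rule out the data pipeline, it helps to time the training step alone. A minimal sketch, where train_step is a hypothetical stand-in for your forward/backward/update call:

```python
import time

def measure_throughput(train_step, batch_size, iters=50, warmup=5):
    """Return samples/second for a callable that runs one training batch."""
    for _ in range(warmup):          # warm-up: exclude one-off setup costs
        train_step()
    start = time.perf_counter()
    for _ in range(iters):
        train_step()
    elapsed = time.perf_counter() - start
    return batch_size * iters / elapsed
```

If this number matches what you see with the real data iterator attached, the bottleneck is the convolution kernels rather than I/O.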

Is this a MXNet version implementation of mobilenet?

I'm a little confused about README.md.

You wrote: "This model is converted from MobileNet-Caffe." However, I found the operator you implemented (channelwise_convolution.cc, channelwise_convolution.cu) in mxnet@c492b5b.
Also, is this channelwise convolution the same as the depth-wise convolution in the original paper?

Have highly optimized GEMM?

The content below is from the original paper:

Unstructured sparse matrix operations are not typically faster than dense matrix operations until a very high level of sparsity. 

Our model structure puts nearly all of the computation into dense 1×1 convolutions. This can be implemented with highly optimized general matrix multiply (GEMM) functions. Often convolutions are implemented by a GEMM but require an initial reordering in memory called im2col in order to map it to a GEMM. For instance, this approach is used in the popular Caffe package [15].

1×1 convolutions do not require this reordering in memory and can be implemented directly with GEMM which is one of the most optimized numerical linear algebra algorithms.
MobileNet spends 95% of it’s computation time in 1 × 1 convolutions which also has 75% of the parameters.

So I want to ask: does this repo use highly optimized GEMM?
Also, does MobileNet need im2col before the standard and depth-wise convolutions?
Thanks!
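The paper's claim about 1x1 convolutions can be checked directly: a 1x1 convolution over a C_in x H x W tensor is exactly a (C_out x C_in) by (C_in x HW) matrix multiply, so no im2col reordering is needed. Whether this repo hits optimized GEMM depends on its BLAS backend, but the math itself is a NumPy one-liner:

```python
import numpy as np

def conv1x1_as_gemm(x, w):
    """1x1 convolution implemented as a single GEMM, no im2col.

    x: input feature map, shape (c_in, h, w)
    w: 1x1 kernels, shape (c_out, c_in)
    """
    c_in, h, wd = x.shape
    y = w @ x.reshape(c_in, h * wd)   # the GEMM: (c_out, c_in) x (c_in, h*w)
    return y.reshape(-1, h, wd)

# Cross-check against an explicit per-pixel computation.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4, 5))
w = rng.standard_normal((8, 3))
ref = np.einsum("oc,chw->ohw", w, x)  # sum over channels at every pixel
assert np.allclose(conv1x1_as_gemm(x, w), ref)
```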

about channelwise convolution

I noticed that you use ChannelwiseConvolution in the mobilenet-symbol.json file and in mobilenet-faster.py. I am using MXNet version 0.11.1; do I have to replace ChannelwiseConvolution with the depthwise Convolution instead? Your update says that MXNet 0.11.1 supports depthwise convolution.
Thanks for your reply!
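For reference, depthwise ("channelwise") convolution applies one filter per input channel, i.e. a grouped convolution with num_group equal to the channel count. In stock MXNet 0.11+ that should be expressible as mx.sym.Convolution(..., num_filter=C, num_group=C) rather than the custom operator (worth verifying against your version). A NumPy sketch of the operation itself:

```python
import numpy as np

def depthwise_conv_valid(x, k):
    """Depthwise ('channelwise') convolution, 'valid' padding, stride 1.

    x: (c, h, w) input; k: (c, kh, kw), one filter per channel.
    Equivalent to a grouped convolution with num_group == c.
    """
    c, h, w = x.shape
    _, kh, kw = k.shape
    out = np.zeros((c, h - kh + 1, w - kw + 1))
    for ch in range(c):                       # each channel independently
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[ch, i, j] = np.sum(x[ch, i:i + kh, j:j + kw] * k[ch])
    return out
```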
