
fisherpruning's People

Contributors

jshilong


fisherpruning's Issues

How to compute the speedup on GPUs?

Hi, I wonder how to compute the speedup on GPUs. Is it the total inference time on the CPU divided by that on the GPU? If so, how do I measure the total time on the CPU with mmdetection — do I need to put all the data on the CPU for retraining, or only during inference?
(screenshot attached)
Looking forward to your reply!
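For what it's worth, speedup is more commonly measured by timing the baseline and the pruned model on the *same* device and taking the ratio. A minimal sketch of that measurement (generic PyTorch timing on CPU, with stand-in conv layers instead of the full detectors; on GPU you would bracket the timer with torch.cuda.synchronize()):

```python
import time
import torch
import torch.nn as nn

def mean_latency(model, x, warmup=3, iters=10):
    """Average forward-pass latency in seconds (CPU timing)."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):      # warm up caches / allocator first
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        return (time.perf_counter() - start) / iters

x = torch.randn(1, 3, 64, 64)
baseline = nn.Conv2d(3, 64, 3, padding=1)  # stand-in for the full model
pruned = nn.Conv2d(3, 32, 3, padding=1)    # stand-in for the pruned model
speedup = mean_latency(baseline, x) / mean_latency(pruned, x)
print(f"speedup: {speedup:.2f}x")
```

With this convention no CPU/GPU cross-division is involved; the two models only need identical inputs and the same device.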

The batch size and number of GPUs during pruning

Hi, I have mostly finished the pruning code for Faster R-CNN. What batch size and how many GPUs did you use when pruning Faster R-CNN — is it 16 images on 8 2080Ti GPUs, the same as for all detection models? Thank you very much!

Question on backbone

Does your framework focus only on ResNet-based backbones, or can it be adapted to other models such as YOLOv4 or YOLOv5?

Resume pruning

Hi, thanks for this awesome work.
I'm wondering whether there is an option to resume pruning. I tried setting pruning=True and deploy_from=checkpoint to achieve this, but it didn't work.
I then modified the code at

if not self.pruning:

so that the model loads the mask and checkpoint when pruning. At the same time, I commented out

add_pruning_attrs(m, pruning=self.pruning)

to prevent the model from resetting the mask to 1. Although this recovers the information about the pruned channels, pruning still proceeds according to the initial model.

Is there anything else I need to change?

How to prune the FC layer?

Hi, I tried to finish the code for pruning Faster R-CNN, but I don't know how to prune the FC layers in roi_head. I implemented the twostage_wrapper function as shown below.
(screenshot attached)
But I got an error.
(screenshot attached)
Could you help me see where the problem is? Thanks!

The finetuned ATSS model file is corrupted.

Hi, the provided finetuned ATSS model can't be tested; the error is shown below, so the file may be incomplete. The finetuned RetinaNet and PAA models test fine.
(screenshot of size-mismatch error attached)

About depthwise conv

Hi,
my network contains depthwise convolution layers, and they are not detected in self.conv2ancest.
Does the project support depthwise convolution?

Thanks.
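For reference, a depthwise convolution can be recognized by its `groups` attribute: each input channel gets its own group. A minimal check, using standard torch.nn.Conv2d attributes rather than this repo's data structures:

```python
import torch.nn as nn

def is_depthwise(conv: nn.Conv2d) -> bool:
    """A depthwise conv has one group per input channel
    (PyTorch also requires out_channels % groups == 0)."""
    return conv.groups == conv.in_channels and conv.groups > 1

dw = nn.Conv2d(32, 32, 3, padding=1, groups=32)  # depthwise
pw = nn.Conv2d(32, 64, 1)                        # pointwise (groups=1)
print(is_depthwise(dw), is_depthwise(pw))        # True False
```

A pruning pass that only handles `groups == 1` convs would need an extra branch for this case, since pruning a depthwise conv's input channels also prunes its output channels.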

Welcome update to OpenMMLab 2.0

I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 2.0 repos branches:

Repo              OpenMMLab 1.0 branch   OpenMMLab 2.0 branch
MMEngine          —                      0.x
MMCV              1.x                    2.x
MMDetection       0.x, 1.x, 2.x          3.x
MMAction2         0.x                    1.x
MMClassification  0.x                    1.x
MMSegmentation    0.x                    1.x
MMDetection3D     0.x                    1.x
MMEditing         0.x                    1.x
MMPose            0.x                    1.x
MMDeploy          0.x                    1.x
MMTracking        0.x                    1.x
MMOCR             0.x                    1.x
MMRazor           0.x                    1.x
MMSelfSup         0.x                    1.x
MMRotate          1.x                    1.x
MMYOLO            —                      0.x

Attention: please create a new virtual environment for OpenMMLab 2.0.

Finetuning speed is too slow.

Hi, I'm very interested in your work. When I finetune your pruned RetinaNet model, finetuning takes about 2 days, which is very different from the 7 hours reported in the paper, as shown in the figure below.
(screenshot attached)
A single 1080Ti is used and I didn't use Slurm. The finetuning command I used is python -u tools/train.py configs/retina/retina_finetune.py --work-dir ./ --gpu-ids 0

About register_backward_hook

In PyTorch >= 1.8, register_backward_hook can't be used directly, and register_full_backward_hook always reports an error like "using input grad before output grad ...". Do you know how to replace register_backward_hook in newer PyTorch versions?
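In generic PyTorch code (not necessarily this repo's exact hooks), register_full_backward_hook takes the same (module, grad_input, grad_output) callback as the deprecated register_backward_hook; the stricter semantics mainly forbid touching gradients out of order inside the hook. A minimal migration sketch:

```python
import torch
import torch.nn as nn

grads = {}

def save_grad(module, grad_input, grad_output):
    # grad_output[0] is the gradient w.r.t. the module's output;
    # only read/store it here, don't feed it back into autograd
    grads[module] = grad_output[0].detach()

layer = nn.Linear(4, 2)
# Deprecated since 1.8:  layer.register_backward_hook(save_grad)
handle = layer.register_full_backward_hook(save_grad)

x = torch.randn(3, 4, requires_grad=True)
layer(x).sum().backward()
print(grads[layer].shape)  # gradient w.r.t. the (3, 2) output
handle.remove()
```

Hooks that previously modified grad_input in place are the usual source of the "using input grad before output grad" complaint; returning a new tuple from the hook instead is the documented pattern.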

Question on Algorithm 1 (Layer Grouping)

In ResNeXt blocks, conv2 is a grouped convolution and should be grouped with its child conv3.
However, if conv2 is not yet in any group when we assign a group to conv3 — e.g. conv2 comes after conv3 in \mathbb{L} — conv3 will not be assigned to the same group as conv2. Instead, it will be assigned to a new group, and conv2 will never join that group, since conv2 and conv3 have no shared parent.
(screenshot of Algorithm 1 attached)
When we implement the make_groups function exactly as in this algorithm, it indeed fails to group conv2 and conv3 in the ResNeXt-50 blocks, even though the ancestors it finds are correct.

Could you please help check this problem?

For now, in this repo we have tentatively changed this function as shown below, and it works.
(screenshot attached)

How to compute the acts of the FC layer?

Hi, I'm trying to finish the pruning code for FC layers. How should the acts of an FC layer be computed — is it n × oc, where n is the batch size and oc is the number of output channels?
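If acts counts output activations, as it usually does for conv layers (n·h·w·oc), then the FC analogue would indeed be batch size times output features, since an FC layer has no spatial dimensions. A tiny sanity check of that arithmetic (this mirrors the common definition, not necessarily this repo's exact counter):

```python
def conv_acts(n, out_h, out_w, out_channels):
    # one activation per spatial position per output channel per sample
    return n * out_h * out_w * out_channels

def fc_acts(n, out_features):
    # no spatial dims, so the conv formula degenerates to n * oc
    return n * out_features

print(conv_acts(2, 7, 7, 256))  # 25088
print(fc_acts(2, 1024))         # 2048
```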

Question on FLOPs during pruning

Hi, is it normal that the FLOPs become very small while pruning RetinaNet? And does it mean I should start finetuning once the FLOPs have dropped by about 50%?
(screenshot attached)
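For reference when sanity-checking the printed numbers: per-layer conv FLOPs are commonly estimated as k²·c_in·c_out·h_out·w_out / groups (some counters double this to count multiplies and adds separately). A quick sketch of that estimate — an assumption about the convention, not this repo's exact counter:

```python
def conv2d_flops(c_in, c_out, k, out_h, out_w, groups=1):
    # multiply-accumulates per output element, times the output size
    return (k * k * c_in // groups) * c_out * out_h * out_w

# e.g. the 7x7 stem conv of a ResNet-50 backbone: 3->64, 112x112 output
print(conv2d_flops(3, 64, 7, 112, 112))  # 118013952, about 118M MACs
```

Summing this over all remaining channels after each pruning step should track the number the tool reports, up to the multiply-vs-MAC factor of two.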

An issue about running the code

I created a virtual environment, open-mmlab, with python=3.7, pytorch=1.3.0, cudatoolkit=10.0, mmcv=1.3.16, and mmdet=2.13.0.
However, it still cannot run:

File "tools/train.py", line 16, in <module>
    from mmdet.apis import set_random_seed, train_detector
ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory

Some questions about the paper and code

Thanks for the great work. I have some questions about the code:

1. In the paper you mention coupled channels. My understanding is that C2/C5 (described in Figure 4) receive the same input. Is there any other meaning besides this?

(screenshot attached)

2. Can your code handle the concat operation that is common in YOLO? Since the input comes from two ancestors, it seems the channel numbers cannot be matched up in **def construct_outchannel_masks(self)** to construct the mask.

(screenshot attached)

Hope to get your reply.
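On question 2: the output mask of a concat node is usually just its inputs' masks concatenated along the channel dimension, since concat neither mixes nor drops channels. A minimal sketch of that idea, using a hypothetical helper rather than the repo's construct_outchannel_masks:

```python
import torch

def concat_out_mask(masks):
    # a concat layer keeps channel i of input j iff that input's mask
    # keeps it, so the output mask is simply the masks joined in order
    return torch.cat(masks, dim=0)

mask_a = torch.tensor([1, 0, 1], dtype=torch.bool)  # ancestor 1: 3 channels
mask_b = torch.tensor([1, 1], dtype=torch.bool)     # ancestor 2: 2 channels
out = concat_out_mask([mask_a, mask_b])
print(out.tolist())  # [True, False, True, True, True]
```

The corresponding bookkeeping is that a channel index in the concat output maps back to exactly one ancestor, offset by the channel counts of the ancestors before it.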
