
repnet-pytorch's People

Contributors

confifu


repnet-pytorch's Issues

model overfitting

Hello, when I train the model, after 35 epochs the validation loss curve looks as follows:

[Screenshot from 2021-06-03 10-10-50: validation loss curve]

Can you give me some tips?
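
A rising validation loss while the training loss keeps falling is the classic overfitting signature. One generic mitigation is early stopping on the validation loss; the sketch below is not from this repo, and all names in it are placeholders:

    import random

    # Minimal early-stopping sketch: stop once the validation loss has not
    # improved for `patience` consecutive epochs.
    best, patience, bad_epochs = float("inf"), 5, 0
    for epoch in range(100):
        val_loss = random.random()  # stand-in for a real validation pass
        if val_loss < best - 1e-4:
            best, bad_epochs = val_loss, 0  # checkpoint the model here
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                print(f"early stop at epoch {epoch}, best val loss {best:.4f}")
                break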

getCombinedDataset

Hello, is there any updated version of the file 'Dataset.py'?

When I tried to run trainingLoop.py, it raised the following error:

    Traceback (most recent call last):
      File "trainingLoop.py", line 26, in <module>
        testDatasetC = getCombinedDataset('countix/countix_test.csv',
    TypeError: getCombinedDataset() got an unexpected keyword argument 'frame_per_vid'

It seems that the original getCombinedDataset function only accepts three arguments, i.e.,

def getCombinedDataset(dfPath, videoDir, videoPrefix).

But the call to this function in trainingLoop.py passes five arguments:

getCombinedDataset('countix/countix_test.csv', 'testvids', 'test', frame_per_vid=frame_per_vid, multiple=multiple).

I've also looked over the definitions of the other dataset-related classes like SyntheticDataset and BlenderDataset; unfortunately, none of them is a suitable replacement.
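
Until Dataset.py is updated, a thin shim could reconcile the call site with the three-argument definition. This is only a sketch: it assumes the two extra keyword arguments can safely be ignored, which may not hold if the training loop depends on them.

    # Hypothetical shim around the current three-argument implementation.
    from Dataset import getCombinedDataset as _getCombinedDataset

    def getCombinedDataset(dfPath, videoDir, videoPrefix,
                           frame_per_vid=64, multiple=False):
        # frame_per_vid and multiple are accepted but dropped here; if the
        # training loop relies on them, Dataset.py itself needs updating.
        return _getCombinedDataset(dfPath, videoDir, videoPrefix)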

Request for the training dataset

I was looking for the dataset associated with this project, but the download link wasn't working. Could you provide a working link to access the dataset? Thank you for your time; I look forward to hearing from you.

Can you provide a test demo?

I trained for 15 epochs, and the test results are relatively poor. Is there something wrong with my test setup? I hope you can provide a test demo. Thank you.

Is retraining the parameters of the ResNet Bottom needed?

Hi there, I've got a question.

It seems that in trainingLoop.py (the training_loop function), you pass all the parameters into the Adam optimizer:

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr = lr)

To make sure, I've tested with the following code:

    for name, param in model.named_parameters():
        if param.requires_grad:
            print(name)

And I got the following output (I've changed some variable names in the original RepNet class):

base_model.original_model.conv1.weight
base_model.original_model.bn1.weight
base_model.original_model.bn1.bias
base_model.original_model.layer1.0.conv1.weight
base_model.original_model.layer1.0.bn1.weight
base_model.original_model.layer1.0.bn1.bias
base_model.original_model.layer1.0.conv2.weight
base_model.original_model.layer1.0.bn2.weight
base_model.original_model.layer1.0.bn2.bias
base_model.original_model.layer1.0.conv3.weight
base_model.original_model.layer1.0.bn3.weight
base_model.original_model.layer1.0.bn3.bias
base_model.original_model.layer1.0.downsample.0.weight
base_model.original_model.layer1.0.downsample.1.weight
base_model.original_model.layer1.0.downsample.1.bias
base_model.original_model.layer1.1.conv1.weight
base_model.original_model.layer1.1.bn1.weight
base_model.original_model.layer1.1.bn1.bias
base_model.original_model.layer1.1.conv2.weight
base_model.original_model.layer1.1.bn2.weight
base_model.original_model.layer1.1.bn2.bias
base_model.original_model.layer1.1.conv3.weight
base_model.original_model.layer1.1.bn3.weight
base_model.original_model.layer1.1.bn3.bias
base_model.original_model.layer1.2.conv1.weight
base_model.original_model.layer1.2.bn1.weight
base_model.original_model.layer1.2.bn1.bias
base_model.original_model.layer1.2.conv2.weight
base_model.original_model.layer1.2.bn2.weight
base_model.original_model.layer1.2.bn2.bias
base_model.original_model.layer1.2.conv3.weight
base_model.original_model.layer1.2.bn3.weight
base_model.original_model.layer1.2.bn3.bias
base_model.original_model.layer2.0.conv1.weight
base_model.original_model.layer2.0.bn1.weight
base_model.original_model.layer2.0.bn1.bias
base_model.original_model.layer2.0.conv2.weight
base_model.original_model.layer2.0.bn2.weight
base_model.original_model.layer2.0.bn2.bias
base_model.original_model.layer2.0.conv3.weight
base_model.original_model.layer2.0.bn3.weight
base_model.original_model.layer2.0.bn3.bias
base_model.original_model.layer2.0.downsample.0.weight
base_model.original_model.layer2.0.downsample.1.weight
base_model.original_model.layer2.0.downsample.1.bias
base_model.original_model.layer2.1.conv1.weight
base_model.original_model.layer2.1.bn1.weight
base_model.original_model.layer2.1.bn1.bias
base_model.original_model.layer2.1.conv2.weight
base_model.original_model.layer2.1.bn2.weight
base_model.original_model.layer2.1.bn2.bias
base_model.original_model.layer2.1.conv3.weight
base_model.original_model.layer2.1.bn3.weight
base_model.original_model.layer2.1.bn3.bias
base_model.original_model.layer2.2.conv1.weight
base_model.original_model.layer2.2.bn1.weight
base_model.original_model.layer2.2.bn1.bias
base_model.original_model.layer2.2.conv2.weight
base_model.original_model.layer2.2.bn2.weight
base_model.original_model.layer2.2.bn2.bias
base_model.original_model.layer2.2.conv3.weight
base_model.original_model.layer2.2.bn3.weight
base_model.original_model.layer2.2.bn3.bias
base_model.original_model.layer2.3.conv1.weight
base_model.original_model.layer2.3.bn1.weight
base_model.original_model.layer2.3.bn1.bias
base_model.original_model.layer2.3.conv2.weight
base_model.original_model.layer2.3.bn2.weight
base_model.original_model.layer2.3.bn2.bias
base_model.original_model.layer2.3.conv3.weight
base_model.original_model.layer2.3.bn3.weight
base_model.original_model.layer2.3.bn3.bias
base_model.original_model.layer3.0.conv1.weight
base_model.original_model.layer3.0.bn1.weight
base_model.original_model.layer3.0.bn1.bias
base_model.original_model.layer3.0.conv2.weight
base_model.original_model.layer3.0.bn2.weight
base_model.original_model.layer3.0.bn2.bias
base_model.original_model.layer3.0.conv3.weight
base_model.original_model.layer3.0.bn3.weight
base_model.original_model.layer3.0.bn3.bias
base_model.original_model.layer3.0.downsample.0.weight
base_model.original_model.layer3.0.downsample.1.weight
base_model.original_model.layer3.0.downsample.1.bias
base_model.original_model.layer3.1.conv1.weight
base_model.original_model.layer3.1.bn1.weight
base_model.original_model.layer3.1.bn1.bias
base_model.original_model.layer3.1.conv2.weight
base_model.original_model.layer3.1.bn2.weight
base_model.original_model.layer3.1.bn2.bias
base_model.original_model.layer3.1.conv3.weight
base_model.original_model.layer3.1.bn3.weight
base_model.original_model.layer3.1.bn3.bias
base_model.original_model.layer3.2.conv1.weight
base_model.original_model.layer3.2.bn1.weight
base_model.original_model.layer3.2.bn1.bias
base_model.original_model.layer3.2.conv2.weight
base_model.original_model.layer3.2.bn2.weight
base_model.original_model.layer3.2.bn2.bias
base_model.original_model.layer3.2.conv3.weight
base_model.original_model.layer3.2.bn3.weight
base_model.original_model.layer3.2.bn3.bias
base_model.original_model.layer3.3.conv1.weight
base_model.original_model.layer3.3.bn1.weight
base_model.original_model.layer3.3.bn1.bias
base_model.original_model.layer3.3.conv2.weight
base_model.original_model.layer3.3.bn2.weight
base_model.original_model.layer3.3.bn2.bias
base_model.original_model.layer3.3.conv3.weight
base_model.original_model.layer3.3.bn3.weight
base_model.original_model.layer3.3.bn3.bias
base_model.original_model.layer3.4.conv1.weight
base_model.original_model.layer3.4.bn1.weight
base_model.original_model.layer3.4.bn1.bias
base_model.original_model.layer3.4.conv2.weight
base_model.original_model.layer3.4.bn2.weight
base_model.original_model.layer3.4.bn2.bias
base_model.original_model.layer3.4.conv3.weight
base_model.original_model.layer3.4.bn3.weight
base_model.original_model.layer3.4.bn3.bias
base_model.original_model.layer3.5.conv1.weight
base_model.original_model.layer3.5.bn1.weight
base_model.original_model.layer3.5.bn1.bias
base_model.original_model.layer3.5.conv2.weight
base_model.original_model.layer3.5.bn2.weight
base_model.original_model.layer3.5.bn2.bias
base_model.original_model.layer3.5.conv3.weight
base_model.original_model.layer3.5.bn3.weight
base_model.original_model.layer3.5.bn3.bias
base_model.original_model.layer4.0.conv1.weight
base_model.original_model.layer4.0.bn1.weight
base_model.original_model.layer4.0.bn1.bias
base_model.original_model.layer4.0.conv2.weight
base_model.original_model.layer4.0.bn2.weight
base_model.original_model.layer4.0.bn2.bias
base_model.original_model.layer4.0.conv3.weight
base_model.original_model.layer4.0.bn3.weight
base_model.original_model.layer4.0.bn3.bias
base_model.original_model.layer4.0.downsample.0.weight
base_model.original_model.layer4.0.downsample.1.weight
base_model.original_model.layer4.0.downsample.1.bias
base_model.original_model.layer4.1.conv1.weight
base_model.original_model.layer4.1.bn1.weight
base_model.original_model.layer4.1.bn1.bias
base_model.original_model.layer4.1.conv2.weight
base_model.original_model.layer4.1.bn2.weight
base_model.original_model.layer4.1.bn2.bias
base_model.original_model.layer4.1.conv3.weight
base_model.original_model.layer4.1.bn3.weight
base_model.original_model.layer4.1.bn3.bias
base_model.original_model.layer4.2.conv1.weight
base_model.original_model.layer4.2.bn1.weight
base_model.original_model.layer4.2.bn1.bias
base_model.original_model.layer4.2.conv2.weight
base_model.original_model.layer4.2.bn2.weight
base_model.original_model.layer4.2.bn2.bias
base_model.original_model.layer4.2.conv3.weight
base_model.original_model.layer4.2.bn3.weight
base_model.original_model.layer4.2.bn3.bias
base_model.original_model.fc.weight
base_model.original_model.fc.bias
conv3D.weight
conv3D.bias
bn1.weight
bn1.bias
sims.bn.weight
sims.bn.bias
mha_sim.in_proj_weight
mha_sim.in_proj_bias
mha_sim.out_proj.weight
mha_sim.out_proj.bias
conv3x3.weight
conv3x3.bias
bn2.weight
bn2.bias
input_projection.weight
input_projection.bias
ln1.weight
ln1.bias
transEncoder1.trans_encoder.layers.0.self_attn.in_proj_weight
transEncoder1.trans_encoder.layers.0.self_attn.in_proj_bias
transEncoder1.trans_encoder.layers.0.self_attn.out_proj.weight
transEncoder1.trans_encoder.layers.0.self_attn.out_proj.bias
transEncoder1.trans_encoder.layers.0.linear1.weight
transEncoder1.trans_encoder.layers.0.linear1.bias
transEncoder1.trans_encoder.layers.0.linear2.weight
transEncoder1.trans_encoder.layers.0.linear2.bias
transEncoder1.trans_encoder.layers.0.norm1.weight
transEncoder1.trans_encoder.layers.0.norm1.bias
transEncoder1.trans_encoder.layers.0.norm2.weight
transEncoder1.trans_encoder.layers.0.norm2.bias
transEncoder1.trans_encoder.norm.weight
transEncoder1.trans_encoder.norm.bias
transEncoder2.trans_encoder.layers.0.self_attn.in_proj_weight
transEncoder2.trans_encoder.layers.0.self_attn.in_proj_bias
transEncoder2.trans_encoder.layers.0.self_attn.out_proj.weight
transEncoder2.trans_encoder.layers.0.self_attn.out_proj.bias
transEncoder2.trans_encoder.layers.0.linear1.weight
transEncoder2.trans_encoder.layers.0.linear1.bias
transEncoder2.trans_encoder.layers.0.linear2.weight
transEncoder2.trans_encoder.layers.0.linear2.bias
transEncoder2.trans_encoder.layers.0.norm1.weight
transEncoder2.trans_encoder.layers.0.norm1.bias
transEncoder2.trans_encoder.layers.0.norm2.weight
transEncoder2.trans_encoder.layers.0.norm2.bias
transEncoder2.trans_encoder.norm.weight
transEncoder2.trans_encoder.norm.bias
fc1_1.weight
fc1_1.bias
ln1_2.weight
ln1_2.bias
fc1_2.weight
fc1_2.bias
fc1_3.weight
fc1_3.bias
fc2_1.weight
fc2_1.bias
ln2_2.weight
ln2_2.bias
fc2_2.weight
fc2_2.bias
fc2_3.weight
fc2_3.bias

This output shows that the trainable parameters include the entire original ResNet.

I then looked into the code provided by the RepNet authors; they randomly initialize the weights of their ResNet backbone, so it is unclear whether they use pre-trained weights or retrain the original ResNet.

Is it really necessary to retrain the original ResNet model?
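
If the backbone is supposed to stay frozen, a minimal sketch is below. The parameter-name prefix comes from the listing above; the learning rate is a placeholder, not a value from this repo.

    import torch

    def freeze_backbone(model, prefix="base_model.", lr=1e-4):
        # Freeze every parameter belonging to the pretrained ResNet backbone.
        for name, param in model.named_parameters():
            if name.startswith(prefix):
                param.requires_grad = False
        # Same filtering as the optimizer line quoted above: frozen
        # parameters are excluded automatically.
        return torch.optim.Adam(
            (p for p in model.parameters() if p.requires_grad), lr=lr
        )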

synthetic dataset repFrames length is 0

repFrames = frames[begNoRepDur : -endNoRepDur]

SyntheticDataset.py, lines 125-149:

When randint(0, noRepDur) returns noRepDur, begNoRepDur == noRepDur ---> endNoRepDur == 0 ---> repFrames = frames[begNoRepDur : -0] evaluates as frames[begNoRepDur : 0] (in Python, -0 == 0) ---> repFrames has length 0.

        begNoRepDur = randint(0, noRepDur)
        endNoRepDur = noRepDur - begNoRepDur
        totalDur = noRepDur + repDur

        startFrame = randint(0, total - (clipDur + noRepDur))
        cap.set(cv2.CAP_PROP_POS_FRAMES, startFrame)

        frames = []
        while cap.isOpened():
            ret, frame = cap.read()
            if ret is False or len(frames) == clipDur + noRepDur:
                break
            frame = cv2.resize(frame, (112, 112), interpolation=cv2.INTER_AREA)
            frames.append(frame)

        cap.release()

        numBegNoRepFrames = begNoRepDur*64//totalDur
        periodLength = np.zeros((64, 1))
        begNoRepFrames = self.getNFrames(frames[:begNoRepDur], numBegNoRepFrames)
        finalFrames = begNoRepFrames

        repFrames = frames[begNoRepDur : -endNoRepDur]  # empty when endNoRepDur == 0
        repFrames.extend(repFrames[::-1])
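
A self-contained demonstration of the failure mode, with one possible fix: compute an explicit end index instead of a negative one (the values below are dummies, not from the dataset code).

    # Reproduce the bug: a -0 end index makes the slice empty.
    frames = list(range(10))
    begNoRepDur, endNoRepDur = 4, 0  # the failing case: endNoRepDur == 0

    buggy = frames[begNoRepDur:-endNoRepDur]               # frames[4:0] -> []
    fixed = frames[begNoRepDur:len(frames) - endNoRepDur]  # frames[4:10]

    print(len(buggy), len(fixed))  # prints: 0 6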

RepNet: dataset for training (process finished with exit code 137)

Saurabh Kumar,

I am Sravan Kumar, new to deep learning and computer vision.
I tried to train the RepNet model with 20% of the Countix dataset, but training always fails with: process finished with exit code 137 (interrupted by signal 9: SIGKILL).
Can you please suggest how to overcome this?
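
Exit code 137 means the process was killed with SIGKILL, which on Linux is most often the out-of-memory killer. A generic way to lower peak memory is to shrink the batch size and worker count; the dataset below is a dummy stand-in, not this repo's loader.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Dummy stand-in for Countix: 8 clips of 64 RGB frames at 112x112.
    dataset = TensorDataset(torch.zeros(8, 64, 3, 112, 112))

    # batch_size=1 and num_workers=0 minimise peak host memory, which is
    # usually what triggers the OOM kill behind exit code 137.
    loader = DataLoader(dataset, batch_size=1, num_workers=0, pin_memory=False)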

I found some problems in the network model

Hello, Confifu,

I noticed that your period-length prediction fully connected layer differs from the one in the RepNet paper (see page 14 of the original paper). Because the networks are different, the loss functions and labels are different as well. After running experiments, we found this has a great impact on the results.

Did you get correct results, and could you share them with me?

Share weights

Hello, could you share the weight files that you have trained?
