
cloth-segmentation's People

Contributors

levindabhi


cloth-segmentation's Issues

final_label = first_channel + second_channel * 2 + third_channel * 3 conflict_mask = (final_label <= 3).astype("uint8") final_label = (conflict_mask) * final_label + (1 - conflict_mask) * 1 target_tensor = torch.as_tensor(final_label, dtype=torch.int64)

Hi,
I am very puzzled about the meaning of this code in "data/aligned_dataset.py".

`final_label = first_channel + second_channel * 2 + third_channel * 3
conflict_mask = (final_label <= 3).astype("uint8")
final_label = (conflict_mask) * final_label + (1 - conflict_mask) * 1
target_tensor = torch.as_tensor(final_label, dtype=torch.int64)`

If I want to add another label, is this OK?

`final_label = first_channel + second_channel * 2 + third_channel * 3 + fourth_channel * 4
conflict_mask = (final_label <= 4).astype("uint8")  # TODO
final_label = (conflict_mask) * final_label + (1 - conflict_mask) * 1
target_tensor = torch.as_tensor(final_label, dtype=torch.int64)`

I would appreciate your response. Thank you.
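
For what it's worth, here is a small numpy sketch of how that composition behaves once a fourth channel is added (the arrays are toy masks, not the repo's data); note the caveat about overlapping channels in the comments.

import numpy as np

# Toy binary masks, one per class.
first_channel  = np.array([[1, 0, 0, 0]])
second_channel = np.array([[0, 1, 0, 0]])
third_channel  = np.array([[0, 0, 1, 0]])
fourth_channel = np.array([[0, 0, 0, 1]])

final_label = (first_channel + second_channel * 2
               + third_channel * 3 + fourth_channel * 4)
conflict_mask = (final_label <= 4).astype("uint8")
final_label = conflict_mask * final_label + (1 - conflict_mask) * 1
print(final_label)  # [[1 2 3 4]] -- one integer label per class, 0 stays background

# Caveat: the threshold only catches overlaps whose sum exceeds 4. A pixel
# where channels 1 and 3 overlap also sums to 4 and is silently read as
# class 4, just as overlapping channels 1 and 2 can masquerade as class 3
# in the original three-channel version.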

Model isn't accurate

A contour of another class gets added, even on high-quality pictures. I've changed the colour labels, but the same bug also appears with the original colours.

RuntimeError: CUDA out of memory.

Hello,
Thank you for providing the source code of this project.
I need to run the code on my own machine, which has a 6 GB GPU.
When I do, I get the following error:

RuntimeError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 6.00 GiB total capacity; 5.30 GiB already allocated; 0 bytes free; 5.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Can you please help me resolve this error?
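
A few mitigations that usually help on a 6 GB card, sketched under the assumption that you are running inference with infer.py (if you are training, the first lever is a smaller batch size in the training options); `net` and `image_tensor` stand in for the loaded model and the preprocessed input:

import os
# Ask the caching allocator to limit split sizes, as the error message
# suggests; this must be set before torch initialises CUDA.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# `net` and `image_tensor` are assumed to be the loaded U2NET and a
# preprocessed 1xCxHxW input, as in infer.py.
net.eval()
with torch.no_grad():                                   # drop autograd buffers
    with torch.autocast("cuda", dtype=torch.float16):   # roughly halves activation memory
        output = net(image_tensor.to("cuda"))[0]
torch.cuda.empty_cache()

Reducing the 768x768 resize used at inference also helps, since activation memory grows roughly with the number of input pixels.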

[bug] labels should start from 1

Very nice code!
I found a small bug that results in one cloth category (cape) being missed and some categories being slightly confused.

In data/aligned_dataset.py at line 82 you add 1 to each label:

labels.append(int(label) + 1)

However, in lines 121-123, you didn't do the same:

upperbody = [0, 1, 2, 3, 4, 5]
lowerbody = [6, 7, 8]
wholebody = [9, 10, 11, 12]

Hope it helps.

Best
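
If it helps, the grouping consistent with that +1 offset would presumably be:

# data/aligned_dataset.py, lines 121-123, shifted to match the +1 applied at line 82
upperbody = [1, 2, 3, 4, 5, 6]
lowerbody = [7, 8, 9]
wholebody = [10, 11, 12, 13]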

Poor segmentation results on multi-person images

I've found that segmentation on images with multiple people works quite poorly, e.g. the result in the top-right corner of the image below.
[image]

So I tried the following approach: first detect person bounding boxes with Detectron2, feed each cropped person through the network one by one, and finally merge the results.

import cv2
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

def ClothSegMultiGen(self, img_cv, size=-1):
    img = Image.fromarray(cv2.cvtColor(img_cv, cv2.COLOR_BGR2RGB))
    w, h = img.size

    body_boxes, _, sub_bodys = self.face_analysis.DetectronV2BodyBox(img_cv)
    # initialise the per-class score map with the smallest possible value
    total_rate = np.full((4, img_cv.shape[0], img_cv.shape[1]), -np.inf)
    if len(sub_bodys) != 0:
        for i in range(len(sub_bodys)):
            # per-person score map, also initialised to -inf
            total_sub_rate = np.full((4, img_cv.shape[0], img_cv.shape[1]), -np.inf)

            left, top, right, bottom = (
                body_boxes[i][0],
                body_boxes[i][1],
                body_boxes[i][0] + body_boxes[i][2],
                body_boxes[i][1] + body_boxes[i][3],
            )
            sub_img = img.crop((left, top, right, bottom))
            sub_img, sub_rate, sub_img_color = self.ClothSegGen(sub_img, 640)
            if i == 0:
                total_rate[:, top:bottom, left:right] = sub_rate
            else:
                total_sub_rate[:, top:bottom, left:right] = sub_rate
                # elementwise maximum of the two score maps
                total_rate = maxTwoNumpy(total_sub_rate, total_rate)

        output_img = np.argmax(total_rate, axis=0)
        output_img_color = self.indexColor(output_img, w, h)
    else:
        output_img, _, output_img_color = self.ClothSegGen(img, 640)
    # return the merged label map and its colourised version
    return output_img, output_img_color


def ClothSegGen(self, img_cv, size=-1):  # size is the target length of the shorter side
    img = Image.fromarray(cv2.cvtColor(img_cv, cv2.COLOR_BGR2RGB))
    w, h = img.size
    if size != -1:
        if w > h:
            h_out = size
            w_out = w * h_out // h
        else:
            w_out = size
            h_out = h * w_out // w
        img = img.resize((w_out, h_out))

    image_tensor = self.transform_rgb(img)
    image_tensor = torch.unsqueeze(image_tensor, 0)
    print("input size fed to the cloth-segmentation network:", image_tensor.shape)
    output_tensor = self.net(image_tensor.to(self.device))
    output_tensor = F.log_softmax(output_tensor[0], dim=1)

    output_tensor_ori = output_tensor.clone()
    # torch.max(...)[1] returns only the indices of the per-pixel maxima
    output_tensor = torch.max(output_tensor_ori, dim=1, keepdim=True)[1]

    output_tensor = torch.squeeze(output_tensor, dim=0)
    output_tensor = torch.squeeze(output_tensor, dim=0)
    output_arr = output_tensor.cpu().numpy()
    output_img = Image.fromarray(output_arr.astype("uint8"), mode="L")

    output_img_color = self.indexColor(output_img, w, h)

    # extract the raw per-class scores separately
    # (note: these are log-probabilities, not values in [0, 1])
    output_tensor0 = output_tensor_ori.clone()
    output_tensor0 = torch.squeeze(output_tensor0, dim=0)
    output_rate = output_tensor0.cpu().numpy()  # 4 x h x w

    return output_img, output_rate, output_img_color

However, this approach has a serious merging problem: the previous person box ends up overwriting the next one (for example, at the junction between the two people on the left of the lower picture above). Could this be caused by log_softmax?
Previously, when the scores were sigmoid probabilities, the same merging strategy fused the results correctly.
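
For reference, a minimal sketch of a merge that treats every crop the same way (including the first) with an elementwise maximum; `merge_crop_scores` is a hypothetical helper. With log_softmax outputs, the scores from different crops are at least on the same scale, so the -inf initialisation only matters for pixels that no crop covers.

import numpy as np

def merge_crop_scores(full_scores, crop_scores, box):
    """Merge per-class scores from one person crop into the full-image map
    by taking an elementwise maximum over the overlapping region.

    full_scores: (C, H, W) array initialised to -inf
    crop_scores: (C, h, w) scores for the crop, already resized to the box
    box: (left, top, right, bottom) in full-image coordinates
    """
    left, top, right, bottom = box
    region = full_scores[:, top:bottom, left:right]
    full_scores[:, top:bottom, left:right] = np.maximum(region, crop_scores)
    return full_scores

# After merging every crop this way, the label map is the per-pixel argmax:
# label_map = np.argmax(full_scores, axis=0)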

Inference error on trained checkpoints

Hi,
I ran the training script following your instructions and it worked very well, thank you. However, I'm getting an error when attempting to use my newly trained weights.

I changed this line in infer.py:

checkpoint_path = os.path.join("trained_checkpoint", "cloth_segm_u2net_latest.pth")
to
checkpoint_path = "results/training_cloth_segm_u2net_exp1/checkpoints/itr_00100000_u2net.pth"

And I can see the file sizes of the checkpoints aren't the same:

original:
$ ls -al trained_checkpoint/cloth_segm_u2net_latest.pth
-rw-r--r-- 1 user user 176625341 Mar 12 21:23 trained_checkpoint/cloth_segm_u2net_latest.pth

newly trained:
$ ls -al results/training_cloth_segm_u2net_exp1/checkpoints/itr_00100000_u2net.pth
-rw-r--r-- 1 user user 176607205 Mar 14 09:09 results/training_cloth_segm_u2net_exp1/checkpoints/itr_00100000_u2net.pth

The error I'm getting seems to have dropped the stageX prefixes from the layer names in the state dict. Any ideas?

Traceback (most recent call last):
  File "/nas/nns/fashion_seg/cloth_segmentation/infer.py", line 60, in <module>
    net = load_checkpoint_mgpu(net, checkpoint_path)
  File "/nas/nns/fashion_seg/cloth_segmentation/utils/saving_utils.py", line 29, in load_checkpoint_mgpu
    model.load_state_dict(new_state_dict)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1671, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for U2NET:
	Missing key(s) in state_dict: "stage1.rebnconvin.conv_s1.weight", "stage1.rebnconvin.conv_s1.bias", "stage1.rebnconvin.bn_s1.weight", "stage1.rebnconvin.bn_s1.bias", "stage1.rebnconvin.bn_s1.running_mean", "stage1.rebnconvin.bn_s1.running_var", "stage1.rebnconv1.conv_s1.weight", "stage1.rebnconv1.conv_s1.bias", "stage1.rebnconv1.bn_s1.weight", "stage1.rebnconv1.bn_s1.bias", "stage1.rebnconv1.bn_s1.running_mean", "stage1.rebnconv1.bn_s1.running_var", "stage1.rebnconv2.conv_s1.weight", "stage1.rebnconv2.conv_s1.bias", "stage1.rebnconv2.bn_s1.weight", "stage1.rebnconv2.bn_s1.bias", "stage1.rebnconv2.bn_s1.running_mean", "stage1.rebnconv2.bn_s1.running_var", "stage1.rebnconv3.conv_s1.weight", "stage1.rebnconv3.conv_s1.bias", "stage1.rebnconv3.bn_s1.weight", "stage1.rebnconv3.bn_s1.bias", "stage1.rebnconv3.bn_s1.running_mean", "stage1.rebnconv3.bn_s1.running_var", "stage1.rebnconv4.conv_s1.weight", "stage1.rebnconv4.conv_s1.bias", "stage1.rebnconv4.bn_s1.weight", "stage1.rebnconv4.bn_s1.bias", "stage1.rebnconv4.bn_s1.running_mean", "stage1.rebnconv4.bn_s1.running_var", "stage1.rebnconv5.conv_s1.weight", "stage1.rebnconv5.conv_s1.bias", "stage1.rebnconv5.bn_s1.weight", "stage1.rebnconv5.bn_s1.bias", "stage1.rebnconv5.bn_s1.running_mean", "stage1.rebnconv5.bn_s1.running_var", "stage1.rebnconv6.conv_s1.weight", "stage1.rebnconv6.conv_s1.bias", "stage1.rebnconv6.bn_s1.weight", "stage1.rebnconv6.bn_s1.bias", "stage1.rebnconv6.bn_s1.running_mean", "stage1.rebnconv6.bn_s1.running_var", "stage1.rebnconv7.conv_s1.weight", "stage1.rebnconv7.conv_s1.bias", "stage1.rebnconv7.bn_s1.weight", "stage1.rebnconv7.bn_s1.bias", "stage1.rebnconv7.bn_s1.running_mean", "stage1.rebnconv7.bn_s1.running_var", "stage1.rebnconv6d.conv_s1.weight", "stage1.rebnconv6d.conv_s1.bias", "stage1.rebnconv6d.bn_s1.weight", "stage1.rebnconv6d.bn_s1.bias", "stage1.rebnconv6d.bn_s1.running_mean", "stage1.rebnconv6d.bn_s1.running_var", "stage1.rebnconv5d.conv_s1.weight", "stage1.rebnconv5d.conv_s1.bias", "stage1.rebnconv5d.bn_s1.weight", "stage1.rebnconv5d.bn_s1.bias", "stage1.rebnconv5d.bn_s1.running_mean", "stage1.rebnconv5d.bn_s1.running_var", "stage1.rebnconv4d.conv_s1.weight", "stage1.rebnconv4d.conv_s1.bias", "stage1.rebnconv4d.bn_s1.weight", "stage1.rebnconv4d.bn_s1.bias", "stage1.rebnconv4d.bn_s1.running_mean", "stage1.rebnconv4d.bn_s1.running_var", "stage1.rebnconv3d.conv_s1.weight", "stage1.rebnconv3d.conv_s1.bias", "stage1.rebnconv3d.bn_s1.weight", "stage1.rebnconv3d.bn_s1.bias", "stage1.rebnconv3d.bn_s1.running_mean", "stage1.rebnconv3d.bn_s1.running_var", "stage1.rebnconv2d.conv_s1.weight", "stage1.rebnconv2d.conv_s1.bias", "stage1.rebnconv2d.bn_s1.weight", "stage1.rebnconv2d.bn_s1.bias", "stage1.rebnconv2d.bn_s1.running_mean", "stage1.rebnconv2d.bn_s1.running_var", "stage1.rebnconv1d.conv_s1.weight", "stage1.rebnconv1d.conv_s1.bias", "stage1.rebnconv1d.bn_s1.weight", "stage1.rebnconv1d.bn_s1.bias", "stage1.rebnconv1d.bn_s1.running_mean", "stage1.rebnconv1d.bn_s1.running_var", "stage2.rebnconvin.conv_s1.weight", "stage2.rebnconvin.conv_s1.bias", "stage2.rebnconvin.bn_s1.weight", "stage2.rebnconvin.bn_s1.bias", "stage2.rebnconvin.bn_s1.running_mean", "stage2.rebnconvin.bn_s1.running_var", "stage2.rebnconv1.conv_s1.weight", "stage2.rebnconv1.conv_s1.bias", "stage2.rebnconv1.bn_s1.weight", "stage2.rebnconv1.bn_s1.bias", "stage2.rebnconv1.bn_s1.running_mean", "stage2.rebnconv1.bn_s1.running_var", "stage2.rebnconv2.conv_s1.weight", "stage2.rebnconv2.conv_s1.bias", "stage2.rebnconv2.bn_s1.weight", "stage2.rebnconv2.bn_s1.bias", 
"stage2.rebnconv2.bn_s1.running_mean", "stage2.rebnconv2.bn_s1.running_var", "stage2.rebnconv3.conv_s1.weight", "stage2.rebnconv3.conv_s1.bias", "stage2.rebnconv3.bn_s1.weight", "stage2.rebnconv3.bn_s1.bias", "stage2.rebnconv3.bn_s1.running_mean", "stage2.rebnconv3.bn_s1.running_var", "stage2.rebnconv4.conv_s1.weight", "stage2.rebnconv4.conv_s1.bias", "stage2.rebnconv4.bn_s1.weight", "stage2.rebnconv4.bn_s1.bias", "stage2.rebnconv4.bn_s1.running_mean", "stage2.rebnconv4.bn_s1.running_var", "stage2.rebnconv5.conv_s1.weight", "stage2.rebnconv5.conv_s1.bias", "stage2.rebnconv5.bn_s1.weight", "stage2.rebnconv5.bn_s1.bias", "stage2.rebnconv5.bn_s1.running_mean", "stage2.rebnconv5.bn_s1.running_var", "stage2.rebnconv6.conv_s1.weight", "stage2.rebnconv6.conv_s1.bias", "stage2.rebnconv6.bn_s1.weight", "stage2.rebnconv6.bn_s1.bias", "stage2.rebnconv6.bn_s1.running_mean", "stage2.rebnconv6.bn_s1.running_var", "stage2.rebnconv5d.conv_s1.weight", "stage2.rebnconv5d.conv_s1.bias", "stage2.rebnconv5d.bn_s1.weight", "stage2.rebnconv5d.bn_s1.bias", "stage2.rebnconv5d.bn_s1.running_mean", "stage2.rebnconv5d.bn_s1.running_var", "stage2.rebnconv4d.conv_s1.weight", "stage2.rebnconv4d.conv_s1.bias", "stage2.rebnconv4d.bn_s1.weight", "stage2.rebnconv4d.bn_s1.bias", "stage2.rebnconv4d.bn_s1.running_mean", "stage2.rebnconv4d.bn_s1.running_var", "stage2.rebnconv3d.conv_s1.weight", "stage2.rebnconv3d.conv_s1.bias", "stage2.rebnconv3d.bn_s1.weight", "stage2.rebnconv3d.bn_s1.bias", "stage2.rebnconv3d.bn_s1.running_mean", "stage2.rebnconv3d.bn_s1.running_var", "stage2.rebnconv2d.conv_s1.weight", "stage2.rebnconv2d.conv_s1.bias", "stage2.rebnconv2d.bn_s1.weight", "stage2.rebnconv2d.bn_s1.bias", "stage2.rebnconv2d.bn_s1.running_mean", "stage2.rebnconv2d.bn_s1.running_var", "stage2.rebnconv1d.conv_s1.weight", "stage2.rebnconv1d.conv_s1.bias", "stage2.rebnconv1d.bn_s1.weight", "stage2.rebnconv1d.bn_s1.bias", "stage2.rebnconv1d.bn_s1.running_mean", "stage2.rebnconv1d.bn_s1.running_var", "stage3.rebnconvin.conv_s1.weight", "stage3.rebnconvin.conv_s1.bias", "stage3.rebnconvin.bn_s1.weight", "stage3.rebnconvin.bn_s1.bias", "stage3.rebnconvin.bn_s1.running_mean", "stage3.rebnconvin.bn_s1.running_var", "stage3.rebnconv1.conv_s1.weight", "stage3.rebnconv1.conv_s1.bias", "stage3.rebnconv1.bn_s1.weight", "stage3.rebnconv1.bn_s1.bias", "stage3.rebnconv1.bn_s1.running_mean", "stage3.rebnconv1.bn_s1.running_var", "stage3.rebnconv2.conv_s1.weight", "stage3.rebnconv2.conv_s1.bias", "stage3.rebnconv2.bn_s1.weight", "stage3.rebnconv2.bn_s1.bias", "stage3.rebnconv2.bn_s1.running_mean", "stage3.rebnconv2.bn_s1.running_var", "stage3.rebnconv3.conv_s1.weight", "stage3.rebnconv3.conv_s1.bias", "stage3.rebnconv3.bn_s1.weight", "stage3.rebnconv3.bn_s1.bias", "stage3.rebnconv3.bn_s1.running_mean", "stage3.rebnconv3.bn_s1.running_var", "stage3.rebnconv4.conv_s1.weight", "stage3.rebnconv4.conv_s1.bias", "stage3.rebnconv4.bn_s1.weight", "stage3.rebnconv4.bn_s1.bias", "stage3.rebnconv4.bn_s1.running_mean", "stage3.rebnconv4.bn_s1.running_var", "stage3.rebnconv5.conv_s1.weight", "stage3.rebnconv5.conv_s1.bias", "stage3.rebnconv5.bn_s1.weight", "stage3.rebnconv5.bn_s1.bias", "stage3.rebnconv5.bn_s1.running_mean", "stage3.rebnconv5.bn_s1.running_var", "stage3.rebnconv4d.conv_s1.weight", "stage3.rebnconv4d.conv_s1.bias", "stage3.rebnconv4d.bn_s1.weight", "stage3.rebnconv4d.bn_s1.bias", "stage3.rebnconv4d.bn_s1.running_mean", "stage3.rebnconv4d.bn_s1.running_var", "stage3.rebnconv3d.conv_s1.weight", "stage3.rebnconv3d.conv_s1.bias", 
"stage3.rebnconv3d.bn_s1.weight", "stage3.rebnconv3d.bn_s1.bias", "stage3.rebnconv3d.bn_s1.running_mean", "stage3.rebnconv3d.bn_s1.running_var", "stage3.rebnconv2d.conv_s1.weight", "stage3.rebnconv2d.conv_s1.bias", "stage3.rebnconv2d.bn_s1.weight", "stage3.rebnconv2d.bn_s1.bias", "stage3.rebnconv2d.bn_s1.running_mean", "stage3.rebnconv2d.bn_s1.running_var", "stage3.rebnconv1d.conv_s1.weight", "stage3.rebnconv1d.conv_s1.bias", "stage3.rebnconv1d.bn_s1.weight", "stage3.rebnconv1d.bn_s1.bias", "stage3.rebnconv1d.bn_s1.running_mean", "stage3.rebnconv1d.bn_s1.running_var", "stage4.rebnconvin.conv_s1.weight", "stage4.rebnconvin.conv_s1.bias", "stage4.rebnconvin.bn_s1.weight", "stage4.rebnconvin.bn_s1.bias", "stage4.rebnconvin.bn_s1.running_mean", "stage4.rebnconvin.bn_s1.running_var", "stage4.rebnconv1.conv_s1.weight", "stage4.rebnconv1.conv_s1.bias", "stage4.rebnconv1.bn_s1.weight", "stage4.rebnconv1.bn_s1.bias", "stage4.rebnconv1.bn_s1.running_mean", "stage4.rebnconv1.bn_s1.running_var", "stage4.rebnconv2.conv_s1.weight", "stage4.rebnconv2.conv_s1.bias", "stage4.rebnconv2.bn_s1.weight", "stage4.rebnconv2.bn_s1.bias", "stage4.rebnconv2.bn_s1.running_mean", "stage4.rebnconv2.bn_s1.running_var", "stage4.rebnconv3.conv_s1.weight", "stage4.rebnconv3.conv_s1.bias", "stage4.rebnconv3.bn_s1.weight", "stage4.rebnconv3.bn_s1.bias", "stage4.rebnconv3.bn_s1.running_mean", "stage4.rebnconv3.bn_s1.running_var", "stage4.rebnconv4.conv_s1.weight", "stage4.rebnconv4.conv_s1.bias", "stage4.rebnconv4.bn_s1.weight", "stage4.rebnconv4.bn_s1.bias", "stage4.rebnconv4.bn_s1.running_mean", "stage4.rebnconv4.bn_s1.running_var", "stage4.rebnconv3d.conv_s1.weight", "stage4.rebnconv3d.conv_s1.bias", "stage4.rebnconv3d.bn_s1.weight", "stage4.rebnconv3d.bn_s1.bias", "stage4.rebnconv3d.bn_s1.running_mean", "stage4.rebnconv3d.bn_s1.running_var", "stage4.rebnconv2d.conv_s1.weight", "stage4.rebnconv2d.conv_s1.bias", "stage4.rebnconv2d.bn_s1.weight", "stage4.rebnconv2d.bn_s1.bias", "stage4.rebnconv2d.bn_s1.running_mean", "stage4.rebnconv2d.bn_s1.running_var", "stage4.rebnconv1d.conv_s1.weight", "stage4.rebnconv1d.conv_s1.bias", "stage4.rebnconv1d.bn_s1.weight", "stage4.rebnconv1d.bn_s1.bias", "stage4.rebnconv1d.bn_s1.running_mean", "stage4.rebnconv1d.bn_s1.running_var", "stage5.rebnconvin.conv_s1.weight", "stage5.rebnconvin.conv_s1.bias", "stage5.rebnconvin.bn_s1.weight", "stage5.rebnconvin.bn_s1.bias", "stage5.rebnconvin.bn_s1.running_mean", "stage5.rebnconvin.bn_s1.running_var", "stage5.rebnconv1.conv_s1.weight", "stage5.rebnconv1.conv_s1.bias", "stage5.rebnconv1.bn_s1.weight", "stage5.rebnconv1.bn_s1.bias", "stage5.rebnconv1.bn_s1.running_mean", "stage5.rebnconv1.bn_s1.running_var", "stage5.rebnconv2.conv_s1.weight", "stage5.rebnconv2.conv_s1.bias", "stage5.rebnconv2.bn_s1.weight", "stage5.rebnconv2.bn_s1.bias", "stage5.rebnconv2.bn_s1.running_mean", "stage5.rebnconv2.bn_s1.running_var", "stage5.rebnconv3.conv_s1.weight", "stage5.rebnconv3.conv_s1.bias", "stage5.rebnconv3.bn_s1.weight", "stage5.rebnconv3.bn_s1.bias", "stage5.rebnconv3.bn_s1.running_mean", "stage5.rebnconv3.bn_s1.running_var", "stage5.rebnconv4.conv_s1.weight", "stage5.rebnconv4.conv_s1.bias", "stage5.rebnconv4.bn_s1.weight", "stage5.rebnconv4.bn_s1.bias", "stage5.rebnconv4.bn_s1.running_mean", "stage5.rebnconv4.bn_s1.running_var", "stage5.rebnconv3d.conv_s1.weight", "stage5.rebnconv3d.conv_s1.bias", "stage5.rebnconv3d.bn_s1.weight", "stage5.rebnconv3d.bn_s1.bias", "stage5.rebnconv3d.bn_s1.running_mean", "stage5.rebnconv3d.bn_s1.running_var", 
"stage5.rebnconv2d.conv_s1.weight", "stage5.rebnconv2d.conv_s1.bias", "stage5.rebnconv2d.bn_s1.weight", "stage5.rebnconv2d.bn_s1.bias", "stage5.rebnconv2d.bn_s1.running_mean", "stage5.rebnconv2d.bn_s1.running_var", "stage5.rebnconv1d.conv_s1.weight", "stage5.rebnconv1d.conv_s1.bias", "stage5.rebnconv1d.bn_s1.weight", "stage5.rebnconv1d.bn_s1.bias", "stage5.rebnconv1d.bn_s1.running_mean", "stage5.rebnconv1d.bn_s1.running_var", "stage6.rebnconvin.conv_s1.weight", "stage6.rebnconvin.conv_s1.bias", "stage6.rebnconvin.bn_s1.weight", "stage6.rebnconvin.bn_s1.bias", "stage6.rebnconvin.bn_s1.running_mean", "stage6.rebnconvin.bn_s1.running_var", "stage6.rebnconv1.conv_s1.weight", "stage6.rebnconv1.conv_s1.bias", "stage6.rebnconv1.bn_s1.weight", "stage6.rebnconv1.bn_s1.bias", "stage6.rebnconv1.bn_s1.running_mean", "stage6.rebnconv1.bn_s1.running_var", "stage6.rebnconv2.conv_s1.weight", "stage6.rebnconv2.conv_s1.bias", "stage6.rebnconv2.bn_s1.weight", "stage6.rebnconv2.bn_s1.bias", "stage6.rebnconv2.bn_s1.running_mean", "stage6.rebnconv2.bn_s1.running_var", "stage6.rebnconv3.conv_s1.weight", "stage6.rebnconv3.conv_s1.bias", "stage6.rebnconv3.bn_s1.weight", "stage6.rebnconv3.bn_s1.bias", "stage6.rebnconv3.bn_s1.running_mean", "stage6.rebnconv3.bn_s1.running_var", "stage6.rebnconv4.conv_s1.weight", "stage6.rebnconv4.conv_s1.bias", "stage6.rebnconv4.bn_s1.weight", "stage6.rebnconv4.bn_s1.bias", "stage6.rebnconv4.bn_s1.running_mean", "stage6.rebnconv4.bn_s1.running_var", "stage6.rebnconv3d.conv_s1.weight", "stage6.rebnconv3d.conv_s1.bias", "stage6.rebnconv3d.bn_s1.weight", "stage6.rebnconv3d.bn_s1.bias", "stage6.rebnconv3d.bn_s1.running_mean", "stage6.rebnconv3d.bn_s1.running_var", "stage6.rebnconv2d.conv_s1.weight", "stage6.rebnconv2d.conv_s1.bias", "stage6.rebnconv2d.bn_s1.weight", "stage6.rebnconv2d.bn_s1.bias", "stage6.rebnconv2d.bn_s1.running_mean", "stage6.rebnconv2d.bn_s1.running_var", "stage6.rebnconv1d.conv_s1.weight", "stage6.rebnconv1d.conv_s1.bias", "stage6.rebnconv1d.bn_s1.weight", "stage6.rebnconv1d.bn_s1.bias", "stage6.rebnconv1d.bn_s1.running_mean", "stage6.rebnconv1d.bn_s1.running_var", "stage5d.rebnconvin.conv_s1.weight", "stage5d.rebnconvin.conv_s1.bias", "stage5d.rebnconvin.bn_s1.weight", "stage5d.rebnconvin.bn_s1.bias", "stage5d.rebnconvin.bn_s1.running_mean", "stage5d.rebnconvin.bn_s1.running_var", "stage5d.rebnconv1.conv_s1.weight", "stage5d.rebnconv1.conv_s1.bias", "stage5d.rebnconv1.bn_s1.weight", "stage5d.rebnconv1.bn_s1.bias", "stage5d.rebnconv1.bn_s1.running_mean", "stage5d.rebnconv1.bn_s1.running_var", "stage5d.rebnconv2.conv_s1.weight", "stage5d.rebnconv2.conv_s1.bias", "stage5d.rebnconv2.bn_s1.weight", "stage5d.rebnconv2.bn_s1.bias", "stage5d.rebnconv2.bn_s1.running_mean", "stage5d.rebnconv2.bn_s1.running_var", "stage5d.rebnconv3.conv_s1.weight", "stage5d.rebnconv3.conv_s1.bias", "stage5d.rebnconv3.bn_s1.weight", "stage5d.rebnconv3.bn_s1.bias", "stage5d.rebnconv3.bn_s1.running_mean", "stage5d.rebnconv3.bn_s1.running_var", "stage5d.rebnconv4.conv_s1.weight", "stage5d.rebnconv4.conv_s1.bias", "stage5d.rebnconv4.bn_s1.weight", "stage5d.rebnconv4.bn_s1.bias", "stage5d.rebnconv4.bn_s1.running_mean", "stage5d.rebnconv4.bn_s1.running_var", "stage5d.rebnconv3d.conv_s1.weight", "stage5d.rebnconv3d.conv_s1.bias", "stage5d.rebnconv3d.bn_s1.weight", "stage5d.rebnconv3d.bn_s1.bias", "stage5d.rebnconv3d.bn_s1.running_mean", "stage5d.rebnconv3d.bn_s1.running_var", "stage5d.rebnconv2d.conv_s1.weight", "stage5d.rebnconv2d.conv_s1.bias", "stage5d.rebnconv2d.bn_s1.weight", 
"stage5d.rebnconv2d.bn_s1.bias", "stage5d.rebnconv2d.bn_s1.running_mean", "stage5d.rebnconv2d.bn_s1.running_var", "stage5d.rebnconv1d.conv_s1.weight", "stage5d.rebnconv1d.conv_s1.bias", "stage5d.rebnconv1d.bn_s1.weight", "stage5d.rebnconv1d.bn_s1.bias", "stage5d.rebnconv1d.bn_s1.running_mean", "stage5d.rebnconv1d.bn_s1.running_var", "stage4d.rebnconvin.conv_s1.weight", "stage4d.rebnconvin.conv_s1.bias", "stage4d.rebnconvin.bn_s1.weight", "stage4d.rebnconvin.bn_s1.bias", "stage4d.rebnconvin.bn_s1.running_mean", "stage4d.rebnconvin.bn_s1.running_var", "stage4d.rebnconv1.conv_s1.weight", "stage4d.rebnconv1.conv_s1.bias", "stage4d.rebnconv1.bn_s1.weight", "stage4d.rebnconv1.bn_s1.bias", "stage4d.rebnconv1.bn_s1.running_mean", "stage4d.rebnconv1.bn_s1.running_var", "stage4d.rebnconv2.conv_s1.weight", "stage4d.rebnconv2.conv_s1.bias", "stage4d.rebnconv2.bn_s1.weight", "stage4d.rebnconv2.bn_s1.bias", "stage4d.rebnconv2.bn_s1.running_mean", "stage4d.rebnconv2.bn_s1.running_var", "stage4d.rebnconv3.conv_s1.weight", "stage4d.rebnconv3.conv_s1.bias", "stage4d.rebnconv3.bn_s1.weight", "stage4d.rebnconv3.bn_s1.bias", "stage4d.rebnconv3.bn_s1.running_mean", "stage4d.rebnconv3.bn_s1.running_var", "stage4d.rebnconv4.conv_s1.weight", "stage4d.rebnconv4.conv_s1.bias", "stage4d.rebnconv4.bn_s1.weight", "stage4d.rebnconv4.bn_s1.bias", "stage4d.rebnconv4.bn_s1.running_mean", "stage4d.rebnconv4.bn_s1.running_var", "stage4d.rebnconv3d.conv_s1.weight", "stage4d.rebnconv3d.conv_s1.bias", "stage4d.rebnconv3d.bn_s1.weight", "stage4d.rebnconv3d.bn_s1.bias", "stage4d.rebnconv3d.bn_s1.running_mean", "stage4d.rebnconv3d.bn_s1.running_var", "stage4d.rebnconv2d.conv_s1.weight", "stage4d.rebnconv2d.conv_s1.bias", "stage4d.rebnconv2d.bn_s1.weight", "stage4d.rebnconv2d.bn_s1.bias", "stage4d.rebnconv2d.bn_s1.running_mean", "stage4d.rebnconv2d.bn_s1.running_var", "stage4d.rebnconv1d.conv_s1.weight", "stage4d.rebnconv1d.conv_s1.bias", "stage4d.rebnconv1d.bn_s1.weight", "stage4d.rebnconv1d.bn_s1.bias", "stage4d.rebnconv1d.bn_s1.running_mean", "stage4d.rebnconv1d.bn_s1.running_var", "stage3d.rebnconvin.conv_s1.weight", "stage3d.rebnconvin.conv_s1.bias", "stage3d.rebnconvin.bn_s1.weight", "stage3d.rebnconvin.bn_s1.bias", "stage3d.rebnconvin.bn_s1.running_mean", "stage3d.rebnconvin.bn_s1.running_var", "stage3d.rebnconv1.conv_s1.weight", "stage3d.rebnconv1.conv_s1.bias", "stage3d.rebnconv1.bn_s1.weight", "stage3d.rebnconv1.bn_s1.bias", "stage3d.rebnconv1.bn_s1.running_mean", "stage3d.rebnconv1.bn_s1.running_var", "stage3d.rebnconv2.conv_s1.weight", "stage3d.rebnconv2.conv_s1.bias", "stage3d.rebnconv2.bn_s1.weight", "stage3d.rebnconv2.bn_s1.bias", "stage3d.rebnconv2.bn_s1.running_mean", "stage3d.rebnconv2.bn_s1.running_var", "stage3d.rebnconv3.conv_s1.weight", "stage3d.rebnconv3.conv_s1.bias", "stage3d.rebnconv3.bn_s1.weight", "stage3d.rebnconv3.bn_s1.bias", "stage3d.rebnconv3.bn_s1.running_mean", "stage3d.rebnconv3.bn_s1.running_var", "stage3d.rebnconv4.conv_s1.weight", "stage3d.rebnconv4.conv_s1.bias", "stage3d.rebnconv4.bn_s1.weight", "stage3d.rebnconv4.bn_s1.bias", "stage3d.rebnconv4.bn_s1.running_mean", "stage3d.rebnconv4.bn_s1.running_var", "stage3d.rebnconv5.conv_s1.weight", "stage3d.rebnconv5.conv_s1.bias", "stage3d.rebnconv5.bn_s1.weight", "stage3d.rebnconv5.bn_s1.bias", "stage3d.rebnconv5.bn_s1.running_mean", "stage3d.rebnconv5.bn_s1.running_var", "stage3d.rebnconv4d.conv_s1.weight", "stage3d.rebnconv4d.conv_s1.bias", "stage3d.rebnconv4d.bn_s1.weight", "stage3d.rebnconv4d.bn_s1.bias", 
"stage3d.rebnconv4d.bn_s1.running_mean", "stage3d.rebnconv4d.bn_s1.running_var", "stage3d.rebnconv3d.conv_s1.weight", "stage3d.rebnconv3d.conv_s1.bias", "stage3d.rebnconv3d.bn_s1.weight", "stage3d.rebnconv3d.bn_s1.bias", "stage3d.rebnconv3d.bn_s1.running_mean", "stage3d.rebnconv3d.bn_s1.running_var", "stage3d.rebnconv2d.conv_s1.weight", "stage3d.rebnconv2d.conv_s1.bias", "stage3d.rebnconv2d.bn_s1.weight", "stage3d.rebnconv2d.bn_s1.bias", "stage3d.rebnconv2d.bn_s1.running_mean", "stage3d.rebnconv2d.bn_s1.running_var", "stage3d.rebnconv1d.conv_s1.weight", "stage3d.rebnconv1d.conv_s1.bias", "stage3d.rebnconv1d.bn_s1.weight", "stage3d.rebnconv1d.bn_s1.bias", "stage3d.rebnconv1d.bn_s1.running_mean", "stage3d.rebnconv1d.bn_s1.running_var", "stage2d.rebnconvin.conv_s1.weight", "stage2d.rebnconvin.conv_s1.bias", "stage2d.rebnconvin.bn_s1.weight", "stage2d.rebnconvin.bn_s1.bias", "stage2d.rebnconvin.bn_s1.running_mean", "stage2d.rebnconvin.bn_s1.running_var", "stage2d.rebnconv1.conv_s1.weight", "stage2d.rebnconv1.conv_s1.bias", "stage2d.rebnconv1.bn_s1.weight", "stage2d.rebnconv1.bn_s1.bias", "stage2d.rebnconv1.bn_s1.running_mean", "stage2d.rebnconv1.bn_s1.running_var", "stage2d.rebnconv2.conv_s1.weight", "stage2d.rebnconv2.conv_s1.bias", "stage2d.rebnconv2.bn_s1.weight", "stage2d.rebnconv2.bn_s1.bias", "stage2d.rebnconv2.bn_s1.running_mean", "stage2d.rebnconv2.bn_s1.running_var", "stage2d.rebnconv3.conv_s1.weight", "stage2d.rebnconv3.conv_s1.bias", "stage2d.rebnconv3.bn_s1.weight", "stage2d.rebnconv3.bn_s1.bias", "stage2d.rebnconv3.bn_s1.running_mean", "stage2d.rebnconv3.bn_s1.running_var", "stage2d.rebnconv4.conv_s1.weight", "stage2d.rebnconv4.conv_s1.bias", "stage2d.rebnconv4.bn_s1.weight", "stage2d.rebnconv4.bn_s1.bias", "stage2d.rebnconv4.bn_s1.running_mean", "stage2d.rebnconv4.bn_s1.running_var", "stage2d.rebnconv5.conv_s1.weight", "stage2d.rebnconv5.conv_s1.bias", "stage2d.rebnconv5.bn_s1.weight", "stage2d.rebnconv5.bn_s1.bias", "stage2d.rebnconv5.bn_s1.running_mean", "stage2d.rebnconv5.bn_s1.running_var", "stage2d.rebnconv6.conv_s1.weight", "stage2d.rebnconv6.conv_s1.bias", "stage2d.rebnconv6.bn_s1.weight", "stage2d.rebnconv6.bn_s1.bias", "stage2d.rebnconv6.bn_s1.running_mean", "stage2d.rebnconv6.bn_s1.running_var", "stage2d.rebnconv5d.conv_s1.weight", "stage2d.rebnconv5d.conv_s1.bias", "stage2d.rebnconv5d.bn_s1.weight", "stage2d.rebnconv5d.bn_s1.bias", "stage2d.rebnconv5d.bn_s1.running_mean", "stage2d.rebnconv5d.bn_s1.running_var", "stage2d.rebnconv4d.conv_s1.weight", "stage2d.rebnconv4d.conv_s1.bias", "stage2d.rebnconv4d.bn_s1.weight", "stage2d.rebnconv4d.bn_s1.bias", "stage2d.rebnconv4d.bn_s1.running_mean", "stage2d.rebnconv4d.bn_s1.running_var", "stage2d.rebnconv3d.conv_s1.weight", "stage2d.rebnconv3d.conv_s1.bias", "stage2d.rebnconv3d.bn_s1.weight", "stage2d.rebnconv3d.bn_s1.bias", "stage2d.rebnconv3d.bn_s1.running_mean", "stage2d.rebnconv3d.bn_s1.running_var", "stage2d.rebnconv2d.conv_s1.weight", "stage2d.rebnconv2d.conv_s1.bias", "stage2d.rebnconv2d.bn_s1.weight", "stage2d.rebnconv2d.bn_s1.bias", "stage2d.rebnconv2d.bn_s1.running_mean", "stage2d.rebnconv2d.bn_s1.running_var", "stage2d.rebnconv1d.conv_s1.weight", "stage2d.rebnconv1d.conv_s1.bias", "stage2d.rebnconv1d.bn_s1.weight", "stage2d.rebnconv1d.bn_s1.bias", "stage2d.rebnconv1d.bn_s1.running_mean", "stage2d.rebnconv1d.bn_s1.running_var", "stage1d.rebnconvin.conv_s1.weight", "stage1d.rebnconvin.conv_s1.bias", "stage1d.rebnconvin.bn_s1.weight", "stage1d.rebnconvin.bn_s1.bias", "stage1d.rebnconvin.bn_s1.running_mean", 
"stage1d.rebnconvin.bn_s1.running_var", "stage1d.rebnconv1.conv_s1.weight", "stage1d.rebnconv1.conv_s1.bias", "stage1d.rebnconv1.bn_s1.weight", "stage1d.rebnconv1.bn_s1.bias", "stage1d.rebnconv1.bn_s1.running_mean", "stage1d.rebnconv1.bn_s1.running_var", "stage1d.rebnconv2.conv_s1.weight", "stage1d.rebnconv2.conv_s1.bias", "stage1d.rebnconv2.bn_s1.weight", "stage1d.rebnconv2.bn_s1.bias", "stage1d.rebnconv2.bn_s1.running_mean", "stage1d.rebnconv2.bn_s1.running_var", "stage1d.rebnconv3.conv_s1.weight", "stage1d.rebnconv3.conv_s1.bias", "stage1d.rebnconv3.bn_s1.weight", "stage1d.rebnconv3.bn_s1.bias", "stage1d.rebnconv3.bn_s1.running_mean", "stage1d.rebnconv3.bn_s1.running_var", "stage1d.rebnconv4.conv_s1.weight", "stage1d.rebnconv4.conv_s1.bias", "stage1d.rebnconv4.bn_s1.weight", "stage1d.rebnconv4.bn_s1.bias", "stage1d.rebnconv4.bn_s1.running_mean", "stage1d.rebnconv4.bn_s1.running_var", "stage1d.rebnconv5.conv_s1.weight", "stage1d.rebnconv5.conv_s1.bias", "stage1d.rebnconv5.bn_s1.weight", "stage1d.rebnconv5.bn_s1.bias", "stage1d.rebnconv5.bn_s1.running_mean", "stage1d.rebnconv5.bn_s1.running_var", "stage1d.rebnconv6.conv_s1.weight", "stage1d.rebnconv6.conv_s1.bias", "stage1d.rebnconv6.bn_s1.weight", "stage1d.rebnconv6.bn_s1.bias", "stage1d.rebnconv6.bn_s1.running_mean", "stage1d.rebnconv6.bn_s1.running_var", "stage1d.rebnconv7.conv_s1.weight", "stage1d.rebnconv7.conv_s1.bias", "stage1d.rebnconv7.bn_s1.weight", "stage1d.rebnconv7.bn_s1.bias", "stage1d.rebnconv7.bn_s1.running_mean", "stage1d.rebnconv7.bn_s1.running_var", "stage1d.rebnconv6d.conv_s1.weight", "stage1d.rebnconv6d.conv_s1.bias", "stage1d.rebnconv6d.bn_s1.weight", "stage1d.rebnconv6d.bn_s1.bias", "stage1d.rebnconv6d.bn_s1.running_mean", "stage1d.rebnconv6d.bn_s1.running_var", "stage1d.rebnconv5d.conv_s1.weight", "stage1d.rebnconv5d.conv_s1.bias", "stage1d.rebnconv5d.bn_s1.weight", "stage1d.rebnconv5d.bn_s1.bias", "stage1d.rebnconv5d.bn_s1.running_mean", "stage1d.rebnconv5d.bn_s1.running_var", "stage1d.rebnconv4d.conv_s1.weight", "stage1d.rebnconv4d.conv_s1.bias", "stage1d.rebnconv4d.bn_s1.weight", "stage1d.rebnconv4d.bn_s1.bias", "stage1d.rebnconv4d.bn_s1.running_mean", "stage1d.rebnconv4d.bn_s1.running_var", "stage1d.rebnconv3d.conv_s1.weight", "stage1d.rebnconv3d.conv_s1.bias", "stage1d.rebnconv3d.bn_s1.weight", "stage1d.rebnconv3d.bn_s1.bias", "stage1d.rebnconv3d.bn_s1.running_mean", "stage1d.rebnconv3d.bn_s1.running_var", "stage1d.rebnconv2d.conv_s1.weight", "stage1d.rebnconv2d.conv_s1.bias", "stage1d.rebnconv2d.bn_s1.weight", "stage1d.rebnconv2d.bn_s1.bias", "stage1d.rebnconv2d.bn_s1.running_mean", "stage1d.rebnconv2d.bn_s1.running_var", "stage1d.rebnconv1d.conv_s1.weight", "stage1d.rebnconv1d.conv_s1.bias", "stage1d.rebnconv1d.bn_s1.weight", "stage1d.rebnconv1d.bn_s1.bias", "stage1d.rebnconv1d.bn_s1.running_mean", "stage1d.rebnconv1d.bn_s1.running_var", "side1.weight", "side1.bias", "side2.weight", "side2.bias", "side3.weight", "side3.bias", "side4.weight", "side4.bias", "side5.weight", "side5.bias", "side6.weight", "side6.bias", "outconv.weight", "outconv.bias". 
	Unexpected key(s) in state_dict: "rebnconvin.conv_s1.weight", "rebnconvin.conv_s1.bias", "rebnconvin.bn_s1.weight", "rebnconvin.bn_s1.bias", "rebnconvin.bn_s1.running_mean", "rebnconvin.bn_s1.running_var", "rebnconvin.bn_s1.num_batches_tracked", "rebnconv1.conv_s1.weight", "rebnconv1.conv_s1.bias", "rebnconv1.bn_s1.weight", "rebnconv1.bn_s1.bias", "rebnconv1.bn_s1.running_mean", "rebnconv1.bn_s1.running_var", "rebnconv1.bn_s1.num_batches_tracked", "rebnconv2.conv_s1.weight", "rebnconv2.conv_s1.bias", "rebnconv2.bn_s1.weight", "rebnconv2.bn_s1.bias", "rebnconv2.bn_s1.running_mean", "rebnconv2.bn_s1.running_var", "rebnconv2.bn_s1.num_batches_tracked", "rebnconv3.conv_s1.weight", "rebnconv3.conv_s1.bias", "rebnconv3.bn_s1.weight", "rebnconv3.bn_s1.bias", "rebnconv3.bn_s1.running_mean", "rebnconv3.bn_s1.running_var", "rebnconv3.bn_s1.num_batches_tracked", "rebnconv4.conv_s1.weight", "rebnconv4.conv_s1.bias", "rebnconv4.bn_s1.weight", "rebnconv4.bn_s1.bias", "rebnconv4.bn_s1.running_mean", "rebnconv4.bn_s1.running_var", "rebnconv4.bn_s1.num_batches_tracked", "rebnconv5.conv_s1.weight", "rebnconv5.conv_s1.bias", "rebnconv5.bn_s1.weight", "rebnconv5.bn_s1.bias", "rebnconv5.bn_s1.running_mean", "rebnconv5.bn_s1.running_var", "rebnconv5.bn_s1.num_batches_tracked", "rebnconv6.conv_s1.weight", "rebnconv6.conv_s1.bias", "rebnconv6.bn_s1.weight", "rebnconv6.bn_s1.bias", "rebnconv6.bn_s1.running_mean", "rebnconv6.bn_s1.running_var", "rebnconv6.bn_s1.num_batches_tracked", "rebnconv7.conv_s1.weight", "rebnconv7.conv_s1.bias", "rebnconv7.bn_s1.weight", "rebnconv7.bn_s1.bias", "rebnconv7.bn_s1.running_mean", "rebnconv7.bn_s1.running_var", "rebnconv7.bn_s1.num_batches_tracked", "rebnconv6d.conv_s1.weight", "rebnconv6d.conv_s1.bias", "rebnconv6d.bn_s1.weight", "rebnconv6d.bn_s1.bias", "rebnconv6d.bn_s1.running_mean", "rebnconv6d.bn_s1.running_var", "rebnconv6d.bn_s1.num_batches_tracked", "rebnconv5d.conv_s1.weight", "rebnconv5d.conv_s1.bias", "rebnconv5d.bn_s1.weight", "rebnconv5d.bn_s1.bias", "rebnconv5d.bn_s1.running_mean", "rebnconv5d.bn_s1.running_var", "rebnconv5d.bn_s1.num_batches_tracked", "rebnconv4d.conv_s1.weight", "rebnconv4d.conv_s1.bias", "rebnconv4d.bn_s1.weight", "rebnconv4d.bn_s1.bias", "rebnconv4d.bn_s1.running_mean", "rebnconv4d.bn_s1.running_var", "rebnconv4d.bn_s1.num_batches_tracked", "rebnconv3d.conv_s1.weight", "rebnconv3d.conv_s1.bias", "rebnconv3d.bn_s1.weight", "rebnconv3d.bn_s1.bias", "rebnconv3d.bn_s1.running_mean", "rebnconv3d.bn_s1.running_var", "rebnconv3d.bn_s1.num_batches_tracked", "rebnconv2d.conv_s1.weight", "rebnconv2d.conv_s1.bias", "rebnconv2d.bn_s1.weight", "rebnconv2d.bn_s1.bias", "rebnconv2d.bn_s1.running_mean", "rebnconv2d.bn_s1.running_var", "rebnconv2d.bn_s1.num_batches_tracked", "rebnconv1d.conv_s1.weight", "rebnconv1d.conv_s1.bias", "rebnconv1d.bn_s1.weight", "rebnconv1d.bn_s1.bias", "rebnconv1d.bn_s1.running_mean", "rebnconv1d.bn_s1.running_var", "rebnconv1d.bn_s1.num_batches_tracked", ".rebnconvin.conv_s1.weight", ".rebnconvin.conv_s1.bias", ".rebnconvin.bn_s1.weight", ".rebnconvin.bn_s1.bias", ".rebnconvin.bn_s1.running_mean", ".rebnconvin.bn_s1.running_var", ".rebnconvin.bn_s1.num_batches_tracked", ".rebnconv1.conv_s1.weight", ".rebnconv1.conv_s1.bias", ".rebnconv1.bn_s1.weight", ".rebnconv1.bn_s1.bias", ".rebnconv1.bn_s1.running_mean", ".rebnconv1.bn_s1.running_var", ".rebnconv1.bn_s1.num_batches_tracked", ".rebnconv2.conv_s1.weight", ".rebnconv2.conv_s1.bias", ".rebnconv2.bn_s1.weight", ".rebnconv2.bn_s1.bias", ".rebnconv2.bn_s1.running_mean", 
".rebnconv2.bn_s1.running_var", ".rebnconv2.bn_s1.num_batches_tracked", ".rebnconv3.conv_s1.weight", ".rebnconv3.conv_s1.bias", ".rebnconv3.bn_s1.weight", ".rebnconv3.bn_s1.bias", ".rebnconv3.bn_s1.running_mean", ".rebnconv3.bn_s1.running_var", ".rebnconv3.bn_s1.num_batches_tracked", ".rebnconv4.conv_s1.weight", ".rebnconv4.conv_s1.bias", ".rebnconv4.bn_s1.weight", ".rebnconv4.bn_s1.bias", ".rebnconv4.bn_s1.running_mean", ".rebnconv4.bn_s1.running_var", ".rebnconv4.bn_s1.num_batches_tracked", ".rebnconv3d.conv_s1.weight", ".rebnconv3d.conv_s1.bias", ".rebnconv3d.bn_s1.weight", ".rebnconv3d.bn_s1.bias", ".rebnconv3d.bn_s1.running_mean", ".rebnconv3d.bn_s1.running_var", ".rebnconv3d.bn_s1.num_batches_tracked", ".rebnconv2d.conv_s1.weight", ".rebnconv2d.conv_s1.bias", ".rebnconv2d.bn_s1.weight", ".rebnconv2d.bn_s1.bias", ".rebnconv2d.bn_s1.running_mean", ".rebnconv2d.bn_s1.running_var", ".rebnconv2d.bn_s1.num_batches_tracked", ".rebnconv1d.conv_s1.weight", ".rebnconv1d.conv_s1.bias", ".rebnconv1d.bn_s1.weight", ".rebnconv1d.bn_s1.bias", ".rebnconv1d.bn_s1.running_mean", ".rebnconv1d.bn_s1.running_var", ".rebnconv1d.bn_s1.num_batches_tracked", ".rebnconv5.conv_s1.weight", ".rebnconv5.conv_s1.bias", ".rebnconv5.bn_s1.weight", ".rebnconv5.bn_s1.bias", ".rebnconv5.bn_s1.running_mean", ".rebnconv5.bn_s1.running_var", ".rebnconv5.bn_s1.num_batches_tracked", ".rebnconv4d.conv_s1.weight", ".rebnconv4d.conv_s1.bias", ".rebnconv4d.bn_s1.weight", ".rebnconv4d.bn_s1.bias", ".rebnconv4d.bn_s1.running_mean", ".rebnconv4d.bn_s1.running_var", ".rebnconv4d.bn_s1.num_batches_tracked", ".rebnconv6.conv_s1.weight", ".rebnconv6.conv_s1.bias", ".rebnconv6.bn_s1.weight", ".rebnconv6.bn_s1.bias", ".rebnconv6.bn_s1.running_mean", ".rebnconv6.bn_s1.running_var", ".rebnconv6.bn_s1.num_batches_tracked", ".rebnconv5d.conv_s1.weight", ".rebnconv5d.conv_s1.bias", ".rebnconv5d.bn_s1.weight", ".rebnconv5d.bn_s1.bias", ".rebnconv5d.bn_s1.running_mean", ".rebnconv5d.bn_s1.running_var", ".rebnconv5d.bn_s1.num_batches_tracked", ".rebnconv7.conv_s1.weight", ".rebnconv7.conv_s1.bias", ".rebnconv7.bn_s1.weight", ".rebnconv7.bn_s1.bias", ".rebnconv7.bn_s1.running_mean", ".rebnconv7.bn_s1.running_var", ".rebnconv7.bn_s1.num_batches_tracked", ".rebnconv6d.conv_s1.weight", ".rebnconv6d.conv_s1.bias", ".rebnconv6d.bn_s1.weight", ".rebnconv6d.bn_s1.bias", ".rebnconv6d.bn_s1.running_mean", ".rebnconv6d.bn_s1.running_var", ".rebnconv6d.bn_s1.num_batches_tracked", "eight", "ias", ".weight", ".bias". 
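
The pattern of the unexpected keys (names such as "rebnconvin.…" and ".rebnconvin.…", and fragments like "eight" and "ias") suggests the loader is unconditionally dropping the first seven characters of every key, which is what a `k[7:]` "strip module." loop does when the checkpoint was not saved from a DataParallel-wrapped model. A minimal sketch of a prefix-aware loader (`load_checkpoint_flexible` is a hypothetical name, not part of the repo):

from collections import OrderedDict
import torch

def load_checkpoint_flexible(model, checkpoint_path, device="cpu"):
    """Load a checkpoint whether or not it was saved from a DataParallel
    model: strip the 'module.' prefix only when it is actually present,
    instead of blindly dropping the first seven characters of every key."""
    state_dict = torch.load(checkpoint_path, map_location=device)
    new_state_dict = OrderedDict()
    for k, v in state_dict.items():
        name = k[len("module."):] if k.startswith("module.") else k
        new_state_dict[name] = v
    model.load_state_dict(new_state_dict)
    return model

This should load checkpoints regardless of whether they were saved with or without DataParallel.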

Getting the predictions

When I run infer.py it segments the image, but I don't get any of the detected objects or classes and always have to look at the image itself. Is there a way I can get the segmented classes (e.g. upper, lower, etc.)?
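
A minimal sketch of reading the classes back out of the saved mask, assuming the image written by infer.py is the raw label map (one integer per pixel) and that the pretrained checkpoint uses 0 = background, 1 = upper body, 2 = lower body, 3 = full body; the file path is illustrative:

import numpy as np
from PIL import Image

CLASS_NAMES = {1: "upper body", 2: "lower body", 3: "full body"}

mask = np.array(Image.open("output_images/example.png"))  # illustrative path
present = [CLASS_NAMES.get(int(c), f"class {c}") for c in np.unique(mask) if c != 0]
print("Classes present:", present)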

Proposal: Pack model for Hugging Face inference

Clothes segmentation is an important task in the field of computer vision and has various applications such as fashion analysis, virtual try-on, and image editing. However, there are currently limited pre-trained models available for clothes segmentation on the Hugging Face platform.

I propose adding a pre-trained clothes segmentation model to the Hugging Face platform.

Model very slow on CPU

Hi, thanks for the excellent clothes segmentation model.
But the model is very slow when running inference on CPU. In the Colab demo it takes 14 s for one inference without a GPU. Even after converting the model to ONNX and OpenVINO formats, I only got it down to 2 s per inference.
Any ideas why it's so slow?
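
For reference, a small CPU-inference sketch of the usual levers (fewer pixels, an explicit thread count, no autograd); `net` and `image_tensor` stand in for the loaded U2NET and a preprocessed 1xCxHxW tensor as in infer.py, and the 384x384 size is illustrative, so the accuracy/speed trade-off needs to be checked on your data:

import torch
import torch.nn.functional as F

torch.set_num_threads(4)        # match your physical core count
net.eval()
with torch.no_grad():           # no autograd bookkeeping at inference time
    small = F.interpolate(image_tensor, size=(384, 384),
                          mode="bilinear", align_corners=False)
    logits = net(small)[0]      # first output head, as in infer.py
    mask = torch.argmax(logits, dim=1, keepdim=True)
    # upsample the label map back to the original resolution if needed
    mask = F.interpolate(mask.float(), size=image_tensor.shape[2:],
                         mode="nearest").long()

Inference cost scales roughly with the number of input pixels, so halving each side is roughly a 4x reduction in work.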

How can upper-body and lower-body segmentation be separated out?

Hi,
Thanks for sharing this amazing work. I would like to know, once I have generated the output results, how I can separate the upper-body and lower-body clothes from the image and process these two objects individually for further tasks. Also, is there a JSON file generated as output that specifies the objects detected in the image?

Please help.
Thanks
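
A minimal sketch of cutting the garments out and writing a JSON summary, assuming the saved segmentation output is the raw label map (0 = background, 1 = upper body, 2 = lower body, 3 = full body); file names are illustrative, and the mask is resized back to the photo size in case it was saved at 768x768:

import json
import numpy as np
from PIL import Image

image = np.array(Image.open("input_images/person.jpg").convert("RGB"))
mask = np.array(Image.open("output_images/person.png").resize(
    (image.shape[1], image.shape[0]), Image.NEAREST))

names = {1: "upper_body", 2: "lower_body", 3: "full_body"}
summary = {}
for label, name in names.items():
    region = mask == label
    if not region.any():
        continue
    cutout = np.where(region[..., None], image, 255)   # white background elsewhere
    Image.fromarray(cutout.astype(np.uint8)).save(f"{name}.png")
    ys, xs = np.where(region)
    summary[name] = {"bbox": [int(xs.min()), int(ys.min()),
                              int(xs.max()), int(ys.max())],
                     "pixels": int(region.sum())}

with open("detections.json", "w") as f:
    json.dump(summary, f, indent=2)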

TypeError: Caught TypeError in DataLoader worker process 0. and TypeError: 'float' object cannot be interpreted as an integer

Traceback (most recent call last):
  File "C:\dl\u2net-cloth-segmentation-main\train.py", line 182, in <module>
    training_loop(opt)
  File "C:\dl\u2net-cloth-segmentation-main\train.py", line 111, in training_loop
    data_batch = next(get_data)
  File "C:\dl\u2net-cloth-segmentation-main\data\custom_dataset_data_loader.py", line 50, in sample_data
    for batch in loader:
  File "C:\Anaconda3\envs\UnetTorch\lib\site-packages\torch\utils\data\dataloader.py", line 530, in __next__
    data = self._next_data()
  File "C:\Anaconda3\envs\UnetTorch\lib\site-packages\torch\utils\data\dataloader.py", line 1224, in _next_data
    return self._process_data(data)
  File "C:\Anaconda3\envs\UnetTorch\lib\site-packages\torch\utils\data\dataloader.py", line 1250, in _process_data
    data.reraise()
  File "C:\Anaconda3\envs\UnetTorch\lib\site-packages\torch\_utils.py", line 457, in reraise
    raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "C:\Anaconda3\envs\UnetTorch\lib\site-packages\torch\utils\data\_utils\worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "C:\Anaconda3\envs\UnetTorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Anaconda3\envs\UnetTorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\dl\u2net-cloth-segmentation-main\data\aligned_dataset.py", line 74, in __getitem__
    sub_mask = self.rle_decode(
  File "C:\dl\u2net-cloth-segmentation-main\data\aligned_dataset.py", line 164, in rle_decode
    img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
TypeError: 'float' object cannot be interpreted as an integer

environment:
python = 3.9.0
pytorch =1.11.0

Cuda compilation tools, release 11.3, V11.3.58
Build cuda_11.3.r11.3/compiler.29745058_0

Why do I get this error when I try to reproduce the training? Thank you.
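
The traceback points at np.zeros(shape[0] * shape[1], dtype=np.uint8) in rle_decode receiving float values for the shape, most likely because the height/width come out of the annotation CSV as floats. A sketch of the usual fix, casting before allocating (the decoding details here follow a generic RLE decoder and are illustrative; the int cast is the point, and the reshape order should match the repo's original function):

import numpy as np

def rle_decode(mask_rle, shape):
    """Sketch of data/aligned_dataset.py's rle_decode with the fix applied:
    the height/width read from the annotation CSV can arrive as floats, so
    cast them to int before sizing the flat buffer."""
    height, width = int(shape[0]), int(shape[1])           # the actual fix
    s = str(mask_rle).split()
    starts, lengths = [np.asarray(x, dtype=int) for x in (s[0::2], s[1::2])]
    starts -= 1                                            # RLE runs are 1-based
    ends = starts + lengths
    img = np.zeros(height * width, dtype=np.uint8)
    for lo, hi in zip(starts, ends):
        img[lo:hi] = 1
    return img.reshape((height, width))                    # keep the original reshape order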

IsADirectoryError: [Errno 21] Is a directory: 'input_images/.ipynb_checkpoints'

IsADirectoryError                         Traceback (most recent call last)
<ipython-input> in <module>()
58 pbar = tqdm(total=len(images_list))
59 for image_name in images_list:
---> 60 img = Image.open(os.path.join(image_dir, image_name)).convert('RGB')
61 img_size = img.size
62 img = img.resize((768, 768), Image.BICUBIC)

/usr/local/lib/python3.7/dist-packages/PIL/Image.py in open(fp, mode)
2841
2842 if filename:
-> 2843 fp = builtins.open(filename, "rb")
2844 exclusive_fp = True
2845

IsADirectoryError: [Errno 21] Is a directory: 'input_images/.ipynb_checkpoints'
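
Colab creates a hidden .ipynb_checkpoints directory inside input_images, and os.listdir returns it along with the images. A minimal sketch that filters the listing down to regular, non-hidden files before the loop (paths follow the notebook; everything else stays as in the original cell):

import os
from PIL import Image
from tqdm import tqdm

image_dir = "input_images"
images_list = sorted(os.listdir(image_dir))
# keep only regular, non-hidden files (drops .ipynb_checkpoints and the like)
images_list = [
    f for f in images_list
    if not f.startswith(".") and os.path.isfile(os.path.join(image_dir, f))
]

pbar = tqdm(total=len(images_list))
for image_name in images_list:
    img = Image.open(os.path.join(image_dir, image_name)).convert("RGB")
    img_size = img.size
    img = img.resize((768, 768), Image.BICUBIC)
    # ... rest of the original inference loop ...
    pbar.update(1)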
