
vedaseg's People

Contributors

darththomas, hxcai, media-smart, mileistone, yuxinzou


vedaseg's Issues

How to train with custom dataset?

I want to train with a custom dataset, and I have some questions:
(1) My custom dataset has two folders, images and labels, where each label image is an RGB image that uses a different color for each object class. Should I organize this dataset in the Pascal VOC format?
(2) I need to adapt voc_unet.py for the custom dataset. Pascal VOC uses ignore_label for object boundaries; how should I set ignore_label for my own dataset?
(3) How should I set crop_size_h, crop_size_w = 513, 513? My custom dataset has an image size of 512x512.
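
For context, this is roughly the config fragment I have in mind, assuming my config keeps the same variable names as configs/voc_unet.py (the values and paths below are only placeholders, and the RGB labels would first be converted to single-channel index masks as in Pascal VOC):

# Hypothetical fragment of a custom config derived from configs/voc_unet.py.
nclasses = 4                         # number of classes in my custom dataset
ignore_label = 255                   # pixels painted 255 in the index masks are skipped by the loss
crop_size_h, crop_size_w = 512, 512  # would matching the native 512x512 size be fine?
dataset_root = 'data/my_dataset'     # hypothetical layout: images/ and labels/ under this folder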
Thanks!

Suggest to loosen the dependency on albumentations

Hi, your project vedaseg requires "albumentations==0.4.1" in its dependencies. After analyzing the source code, we found that some other versions of albumentations are also suitable without affecting your project, namely albumentations 0.4.0. Therefore, we suggest loosening the dependency on albumentations from "albumentations==0.4.1" to "albumentations>=0.4.0,<=0.4.1" to avoid possible conflicts when importing more packages or for downstream projects that may use vedaseg.

May I open a pull request to loosen the dependency on albumentations?

By the way, could you please tell us whether such dependency analysis could make maintaining dependencies easier during your development?



For your reference, here are the details of our analysis.

Your project vedaseg (commit id: fa4ff42) directly uses 5 APIs from the package albumentations.

albumentations.augmentations.functional.scale, albumentations.core.transforms_interface.to_tuple, albumentations.core.transforms_interface.DualTransform.__init__, albumentations.core.composition.Compose.__init__, albumentations.augmentations.transforms.PadIfNeeded.__init__

From these, 15 functions are then indirectly called, including 14 of albumentations' internal APIs and 1 external API, as follows (omitting some repeated function occurrences).

[/Media-Smart/vedaseg]
+--albumentations.augmentations.functional.scale
|      +--albumentations.augmentations.functional.resize
|      |      +--albumentations.augmentations.functional._maybe_process_in_chunks
|      |      |      +--albumentations.augmentations.functional.get_num_channels
|      |      |      +--numpy.dstack
+--albumentations.core.transforms_interface.to_tuple
+--albumentations.core.transforms_interface.DualTransform.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
+--albumentations.core.composition.Compose.__init__
|      +--albumentations.core.composition.BaseCompose.__init__
|      |      +--albumentations.core.composition.Transforms.__init__
|      |      |      +--albumentations.core.composition.Transforms._find_dual_start_end
|      |      |      |      +--albumentations.core.composition.Transforms._find_dual_start_end
|      +--albumentations.augmentations.bbox_utils.BboxProcessor.__init__
|      |      +--albumentations.core.utils.DataProcessor.__init__
|      +--albumentations.core.composition.BboxParams.__init__
|      |      +--albumentations.core.utils.Params.__init__
|      +--albumentations.augmentations.keypoints_utils.KeypointsProcessor.__init__
|      |      +--albumentations.core.utils.DataProcessor.__init__
|      +--albumentations.core.composition.KeypointParams.__init__
|      |      +--albumentations.core.utils.Params.__init__
|      +--albumentations.core.composition.BaseCompose.add_targets
+--albumentations.augmentations.transforms.PadIfNeeded.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__

We scanned albumentations versions 0.4.0 and 0.4.1; the changed functions (diffs listed below) have no intersection with any function or API mentioned above (whether directly or indirectly called by this project).

diff: 0.4.1(original) 0.4.0
['albumentations.augmentations.transforms.Resize.apply_to_keypoint', 'albumentations.augmentations.transforms.RandomGridShuffle.__init__', 'albumentations.augmentations.transforms.RandomGridShuffle', 'albumentations.augmentations.transforms.Resize']

As for other packages, the only external API called by albumentations in the call graph above is from numpy, and the dependency on that package also stays the same across our suggested versions, thus avoiding any external conflict.

Therefore, we believe it is quite safe to loosen your dependency on albumentations from "albumentations==0.4.1" to "albumentations>=0.4.0,<=0.4.1". This will improve the applicability of vedaseg and reduce the possibility of further dependency conflicts with other projects/packages.

AttributeError: 'SyncBatchNorm' object has no attribute '_specify_ddp_gpu_num'

How can I solve this problem?

(vedaseg) E:\00_Public_Project\vedaseg>python tools/train.py configs/voc_deeplabv3plus.py
2021-09-13 14:58:24,709 - INFO - Set cudnn deterministic False
2021-09-13 14:58:24,710 - INFO - Set cudnn benchmark True
2021-09-13 14:58:24,710 - INFO - Set seed 0
2021-09-13 14:58:24,711 - INFO - Build model
Traceback (most recent call last):
File "tools/train.py", line 47, in
main()
File "tools/train.py", line 42, in main
runner = TrainRunner(train_cfg, inference_cfg, common_cfg)
File "tools..\vedaseg\runners\train_runner.py", line 16, in init
super().init(inference_cfg, base_cfg)
File "tools..\vedaseg\runners\inference_runner.py", line 21, in init
self.model = self._build_model(inference_cfg['model'])
File "tools..\vedaseg\runners\inference_runner.py", line 39, in _build_model
model = build_model(cfg)
File "tools..\vedaseg\models\builder.py", line 10, in build_model
encoder = build_encoder(cfg.get('encoder'))
File "tools..\vedaseg\models\encoders\builder.py", line 9, in build_encoder
backbone = build_from_cfg(cfg['backbone'], BACKBONES, default_args)
File "tools..\vedaseg\utils\registry.py", line 51, in build_from_cfg
return build_from_registry(cfg, src, default_args=default_args)
File "tools..\vedaseg\utils\registry.py", line 84, in build_from_registry
return obj_cls(**args)
File "tools..\vedaseg\models\encoders\backbones\resnet.py", line 315, in init
act_cfg=act_cfg)
File "tools..\vedaseg\models\encoders\backbones\resnet.py", line 181, in init
self._make_stem_layer()
File "tools..\vedaseg\models\encoders\backbones\resnet.py", line 270, in _make_stem_layer
self.bn1 = self._norm_layer(self.inplanes)
File "tools..\vedaseg\models\utils\norm.py", line 81, in build_norm_layer
layer._specify_ddp_gpu_num(1) # noqa
File "C:\ProgramData\Anaconda3\envs\vedaseg\lib\site-packages\torch\nn\modules\module.py", line 1131, in getattr
type(self).name, name))
AttributeError: 'SyncBatchNorm' object has no attribute '_specify_ddp_gpu_num'
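
A possible workaround (a hedged sketch, not an official fix): newer PyTorch versions removed the private SyncBatchNorm._specify_ddp_gpu_num hook, so the call in vedaseg/models/utils/norm.py could be guarded and skipped when the hook is absent:

import torch.nn as nn

# Example layer standing in for the one built by build_norm_layer.
layer = nn.SyncBatchNorm(64)

# Only call the private hook if this PyTorch version still provides it;
# on recent PyTorch, SyncBatchNorm works without it.
if hasattr(layer, '_specify_ddp_gpu_num'):
    layer._specify_ddp_gpu_num(1)  # noqa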

Benchmark on COCO Dataset

Hi all,
Thanks for adding training code for the COCO dataset.
I am training U-Net for segmentation on the COCO dataset. Could you please share the benchmark for COCO in terms of mIoU? I want to verify whether my training can reach it.

Single image inference time

Hello

Thanks for sharing your work. Could you provide information on the inference time for a single image?

Thanks in advance.

AssertionError: Default process group is not initialized

I am trying to train using "python tools/train.py configs/voc_unet.py"
I get an error saying AssertionError: Default process group is not initialized
Can you please help me resolve this? Do I need to change anything in the config file?

Traceback (most recent call last):
File "tools/train.py", line 47, in
main()
File "tools/train.py", line 42, in main
runner = TrainRunner(train_cfg, inference_cfg, common_cfg)
File "tools/../vedaseg/runner/train_runner.py", line 20, in init
train_cfg['data']['train'])
File "tools/../vedaseg/runner/base.py", line 91, in _build_dataloader
'sampler') is not None else None
File "tools/../vedaseg/dataloaders/samplers/builder.py", line 6, in build_sampler
sampler = build_from_cfg(cfg, SAMPLERS, default_args)
File "tools/../vedaseg/utils/registry.py", line 50, in build_from_cfg
return build_from_registry(cfg, src, default_args=default_args)
File "tools/../vedaseg/utils/registry.py", line 83, in build_from_registry
return obj_cls(**args)
File "/home/rajrup/miniconda3/envs/vedaseg/lib/python3.7/site-packages/torch/utils/data/distributed.py", line 43, in init
num_replicas = dist.get_world_size()
File "/home/rajrup/miniconda3/envs/vedaseg/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 582, in get_world_size
return _get_group_size(group)
File "/home/rajrup/miniconda3/envs/vedaseg/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 196, in _get_group_size
_check_default_pg()
File "/home/rajrup/miniconda3/envs/vedaseg/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 187, in _check_default_pg
"Default process group is not initialized"
AssertionError: Default process group is not initialized
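
For what it's worth, the trace suggests the config asks for a DistributedSampler while torch.distributed was never initialized. A hedged sketch of the two usual ways around this (the dict keys below are assumptions inferred from the traceback, not verified against the repo):

import torch.distributed as dist

# train_cfg is the config dict loaded in tools/train.py.
# Option 1: drop the distributed sampler so plain single-process training works
# (hypothetical keys, inferred from the traceback above).
train_cfg['data']['train'].pop('sampler', None)

# Option 2: keep the sampler but initialize a one-process group first,
# so dist.get_world_size() succeeds.
dist.init_process_group(
    backend='gloo',                       # or 'nccl' when training on GPUs
    init_method='tcp://127.0.0.1:29500',
    rank=0,
    world_size=1,
)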

Windows

Hello, I want to run this project on the Windows platform. Do you have any installation tutorial for Windows? Thank you!

About accuracy

Hello~
It seems that there is still a gap between the current mIoU and the values reported in papers. Would you consider improving the mIoU? Thanks.

Implementation Error of ResNet BasicBlock

Hi, I was trying to train with a resnet34 backbone and found a mismatch when loading the pretrained model.
I found an error in the implementation:

On the left is the implementation from this repo, which is wrong; on the right is the correct one.
[screenshot comparing the two BasicBlock implementations]
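
For reference, this is the torchvision-style BasicBlock I would expect (a simplified sketch; the real vedaseg block takes extra norm/activation config arguments, so names here are only illustrative):

import torch.nn as nn

class BasicBlock(nn.Module):
    # Standard ResNet basic block: two 3x3 convolutions, no channel expansion.
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super().__init__()
        # Only the first convolution applies the stride.
        self.conv1 = nn.Conv2d(inplanes, planes, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(planes, planes, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        if self.downsample is not None:
            identity = self.downsample(x)
        return self.relu(out + identity)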

How to pretrain?

If I want to train models on my own VOC-format dataset, starting from the existing pretrained models, should I just modify this line?

resume = "./deeplabv3_resnet101_voc_epoch_50.pth"

backbone checkpoints

Thank you for your great work. Could you share a link to download the backbone checkpoints?

Bad mIoU when using many GPUs

I use the default deeplabv3plus config to train, and only modify the number of GPUs used. I noticed that the mIoU in the validation set drops significantly when the number of GPUs exceeds 4, as follows:

1 gpu: 0.7729
2 gpus: 0.7750
4 gpus: 0.7478
8 gpus: 0.5373

I guess it is caused by the batch normalization. Maybe sync BN will make a difference.
Things are quite different in object detection, e.g. mmdetection, where basic BN is used. The performance does not vary too much when I change the number of GPUs.
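
If it helps, a minimal sketch of what I mean by sync BN, using PyTorch's built-in converter (the model below is just a stand-in for the one built from the config, and synchronization only takes effect when training with one process per GPU under torch.distributed):

import torch.nn as nn

# Stand-in for the segmentation model built from the deeplabv3plus config.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())

# Replace every BatchNorm layer with its synchronized counterpart so statistics
# are computed over all GPUs instead of each GPU's small per-device batch.
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)

# SyncBatchNorm only synchronizes once the model is wrapped in
# DistributedDataParallel with an initialized process group, e.g.:
# model = nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[local_rank])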

Validating Problem

Hello, I am validating the model, but it fails with the error shown below. How can I fix this? Thanks!

2021-08-15 16:53:59,781 - INFO - Start validating
Traceback (most recent call last):
  File "tools/train.py", line 47, in <module>
    main()
  File "tools/train.py", line 43, in main
    runner()
  File "tools/../vedaseg/runners/train_runner.py", line 148, in __call__
    res = self._val()
  File "tools/../vedaseg/runners/train_runner.py", line 111, in _val
    for idx, (image, mask) in enumerate(self.val_dataloader):
  File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 838, in _next_data
    return self._process_data(data)
  File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/_utils.py", line 394, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 1.
Original Traceback (most recent call last):
  File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 79, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 79, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 641 and 691 in dimension 2 at /pytorch/aten/src/TH/generic/THTensor.cpp:612

ImportError: cannot import name 'weak_module'

I get the error:

ImportError: cannot import name 'weak_module'

when I run the following command: python tools/trainval.py configs/deeplabv3plus.py. My PyTorch version is 1.3.0.

Reason

After reading the PyTorch source code, I found that weak_script_method is in _jit_internal.py in version v1.1.0.
But since version v1.2.0, PyTorch has removed the function.
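
One possible workaround (only a sketch; I have not checked the exact import site in this repo) is to fall back to no-op decorators when the old names are gone, assuming the failing import looks like "from torch._jit_internal import weak_module, weak_script_method":

# Hedged compatibility shim: on PyTorch >= 1.2 these decorators were removed,
# so define no-ops that leave the decorated class/method unchanged.
try:
    from torch._jit_internal import weak_module, weak_script_method
except ImportError:
    def weak_module(cls):
        return cls

    def weak_script_method(fn):
        return fn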

AttributeError: module 'albumentations.augmentations.functional' has no attribute 'scale'

DESCRIPTION

vedaseg training fails with AttributeError: module 'albumentations.augmentations.functional' has no attribute 'scale'.

REPRODUCE PROCEDURE

Use the current PyPI version of albumentations and execute training.
I'm using the following software stack versions.

docker image: pytorch/pytorch:1.7.1-cuda11.0-cudnn8-devel
torch: 1.7.1
torchvision: 0.8.2
conda: 4.10.3
Python: 3.8.10 conda origin
imgaug: 0.4.0
albumentations: 1.1.0 (current PyPI version)

ANALYSIS and SUGGESTED RESOLUTION

It looks like the scale() method in albumentations.augmentations.functional no longer exists in albumentations 1.1.0.
The method exists at least up to 0.5.1, and after downgrading albumentations the training process worked.

Thus, I think it's better nowadays to pin the albumentations version in requirements.txt as:

albumentations==0.5.1

rather than:

albumentations>=0.4.1
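
As an alternative to pinning, an import-time fallback might also work; this is only a hedged sketch, assuming newer albumentations releases (>= 1.0) still ship an equivalent scale() in their geometric module, which I have not verified against 1.1.0:

# Possible tweak for vedaseg/transforms/transforms.py: keep the old module if it
# still exposes scale(), otherwise fall back to the geometric functional module.
import albumentations.augmentations.functional as F

if not hasattr(F, 'scale'):
    # newer albumentations moved the resize/scale helpers here
    import albumentations.augmentations.geometric.functional as F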

LOG

Below is an excerpt from the stack trace I got.

Original Traceback (most recent call last):
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/work/vedaseg/tools/../vedaseg/datasets/voc.py", line 39, in __getitem__
    image, mask = self.process(img, [mask])
  File "/work/vedaseg/tools/../vedaseg/datasets/base.py", line 16, in process
    augmented = self.transform(image=image, masks=masks)
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/albumentations/core/composition.py", line 210, in __call__
    data = t(force_apply=force_apply, **data)
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/albumentations/core/transforms_interface.py", line 97, in __call__
    return self.apply_with_params(params, **kwargs)
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/albumentations/core/transforms_interface.py", line 112, in apply_with_params
    res[key] = target_function(arg, **dict(params, **target_dependencies))
  File "/work/vedaseg/tools/../vedaseg/transforms/transforms.py", line 22, in apply
    return F.scale(image, scale, interpolation=self.interpolation)
AttributeError: module 'albumentations.augmentations.functional' has no attribute 'scale'

Support more datasets

Nice job!

  • I was wondering whether you would support more datasets, like Cityscapes and COCO, since these datasets are also widely used in related papers.

  • Besides, would you continue to maintain this repo, just like MMDetection, so we can use it without worrying that it would be abandoned suddenly?

Thanks!
