
deepcrack's Introduction

DeepCrack: Learning Hierarchical Convolutional Features for Crack Detection

We provide the code, the datasets, and the pretrained models.

Zou Q, Zhang Z, Li Q, Qi X, Wang Q and Wang S, DeepCrack: Learning Hierarchical Convolutional Features for Crack Detection, IEEE Transactions on Image Processing, vol. 28, no. 3, pp. 1498-1512, 2019. [PDF]

  • Abstract: Cracks are typical line structures that are of interest in many computer-vision applications. In practice, many cracks, e.g., pavement cracks, show poor continuity and low contrast, which bring great challenges to image-based crack detection by using low-level features. In this paper, we propose DeepCrack, an end-to-end trainable deep convolutional neural network for automatic crack detection by learning high-level features for crack representation. In this method, multi-scale deep convolutional features learned at hierarchical convolutional stages are fused together to capture the line structures. More detailed representations are made in larger-scale feature maps and more holistic representations are made in smaller-scale feature maps. We build DeepCrack net on the encoder-decoder architecture of SegNet and pairwisely fuse the convolutional features generated in the encoder network and in the decoder network at the same scale. We train DeepCrack net on one crack dataset and evaluate it on three others. The experimental results demonstrate that DeepCrack achieves an F-measure over 0.87 on average on the three challenging datasets and outperforms the current state-of-the-art methods.

Network Architecture

[Figure: DeepCrack network architecture]
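
The figure above corresponds to the encoder-decoder design summarized in the abstract: at each of the five scales, the encoder feature and the decoder feature are fused into a single-channel side output, and the upsampled side outputs are combined into the final prediction. The sketch below is only an illustration of that idea; it is not the repository's model/deepcrack.py, and the channel counts, kernel sizes (1×1 in the paper, reportedly 3×3 in the released code), and upsampling mode are assumptions.

# Minimal sketch of DeepCrack-style pairwise feature fusion (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseBlock(nn.Module):
    """Fuse encoder and decoder features of the same scale into a 1-channel side output."""
    def __init__(self, enc_channels, dec_channels):
        super().__init__()
        self.conv = nn.Conv2d(enc_channels + dec_channels, 1, kernel_size=1)

    def forward(self, enc_feat, dec_feat, out_size):
        fused = self.conv(torch.cat([enc_feat, dec_feat], dim=1))
        # Upsample the side output back to the input resolution.
        return F.interpolate(fused, size=out_size, mode='bilinear', align_corners=False)

class FusionHead(nn.Module):
    """Combine the per-scale side outputs into the final crack map (logits)."""
    def __init__(self, num_scales=5):
        super().__init__()
        self.final = nn.Conv2d(num_scales, 1, kernel_size=1)

    def forward(self, side_outputs):
        return self.final(torch.cat(side_outputs, dim=1))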

Some Results

[Figure: example crack detection results]

DeepCrack Datasets

Four datasets are used by DeepCrack. CrackTree260 is used for training, and the other three are used for testing.

CrackTree260 dataset

  • It contains 260 road pavement images - an expansion of the dataset used in [CrackTree, PRL, 2012]. These pavement images are captured by an area-array camera under visible-light illumination. We use all 260 images for training. Data augmentation has been performed to enlarge the training set: we rotate the images at 9 different angles (from 0 to 90 degrees at an interval of 10 degrees), flip each rotated image in the vertical and horizontal directions, and crop 5 subimages of size 512×512 from each flipped image (4 at the corners and 1 in the center). After augmentation, we get a training set of 35,100 images in total (a hedged sketch of this pipeline is given below).
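
A minimal sketch of this augmentation pipeline, assuming Pillow and placeholder file names; the exact angle set and naming scheme are one reading of the description above, chosen so that 9 rotations × 3 flip states × 5 crops = 135 variants per image, which matches the reported total of 260 × 135 = 35,100:

from PIL import Image, ImageOps

def five_crops(img, size=512):
    # 4 corner crops + 1 center crop, each size x size.
    w, h = img.size
    boxes = [(0, 0), (w - size, 0), (0, h - size), (w - size, h - size),
             ((w - size) // 2, (h - size) // 2)]
    return [img.crop((x, y, x + size, y + size)) for x, y in boxes]

def augment_one(image_path, out_prefix):
    # NOTE: the ground-truth image must be transformed with exactly the same operations.
    img = Image.open(image_path)
    count = 0
    for angle in range(0, 90, 10):                      # 9 angles: 0, 10, ..., 80
        rotated = img.rotate(angle)
        for f, variant in enumerate([rotated, ImageOps.mirror(rotated), ImageOps.flip(rotated)]):
            for c, crop in enumerate(five_crops(variant)):
                crop.save(f"{out_prefix}_rot{angle}_flip{f}_crop{c}.jpg")
                count += 1
    return count                                        # 9 * 3 * 5 = 135 variants per image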

CRKWH100 dataset

  • It contains 100 road pavement images captured by a line-array camera under visible-light illumination. The line-array camera captures the pavement at a ground sampling distance of 1 millimeter.

CrackLS315 dataset

  • It contains 315 road pavement images captured under laser illumination. These images are also captured by a line-array camera, at the same ground sampling distance.

Stone331 dataset

  • It contains 331 images of stone surfaces. When a stone is cut, cracks may occur on the cutting surface. These images are captured by an area-array camera under visible-light illumination. We provide a mask for the stone-surface area in each image, so that performance evaluation can be constrained to the stone surface (a sketch of mask-constrained evaluation is given below).
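
A minimal sketch of how such a mask can constrain evaluation, assuming binarized prediction, ground-truth, and mask images loaded with OpenCV (file names and thresholds are placeholders):

import cv2
import numpy as np

pred = cv2.imread("pred.png", cv2.IMREAD_GRAYSCALE) > 127   # predicted crack map (binarized)
gt   = cv2.imread("gt.png",   cv2.IMREAD_GRAYSCALE) > 127   # ground truth
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE) > 0     # stone-surface mask

pred_m, gt_m = pred[mask], gt[mask]                         # keep only pixels on the stone surface
tp = np.logical_and(pred_m, gt_m).sum()
precision = tp / max(pred_m.sum(), 1)
recall = tp / max(gt_m.sum(), 1)
print(precision, recall)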

Download:

You can download the four datasets from the following links:

CrackTree260 & GT dataset: https://1drv.ms/f/s!AittnGm6vRKLyiQUk3ViLu8L9Wzb 

CRKWH100 dataset: https://1drv.ms/f/s!AittnGm6vRKLtylBkxVXw5arGn6R 
CRKWH100 GT: https://1drv.ms/f/s!AittnGm6vRKLglyfiCw_C6BDeFsP

CrackLS315 dataset: https://1drv.ms/f/s!AittnGm6vRKLtylBkxVXw5arGn6R 
CrackLS315 GT: https://1drv.ms/u/s!AittnGm6vRKLg0HrFfJNhP2Ne1L5?e=WYbPvF

Stone331 dataset: https://1drv.ms/f/s!AittnGm6vRKLtylBkxVXw5arGn6R 
Stone331 GT: https://1drv.ms/f/s!AittnGm6vRKLwiL55f7f0xdpuD9_
Stone331 Mask: https://1drv.ms/u/s!AittnGm6vRKLxmFB78iKSxTzNLRV?e=9Ph5aP

You can also download the datasets from Baidu Pan: https://pan.baidu.com/s/1PWiBzoJlc8qC8ffZu2Vb8w (passcode: zfoo)

Results:

Some results on our datasets: [Figures: example detection results]

Set up

Requirements

PyTorch 1.0.2 or above
Python 3.6
CUDA 10.0
We ran our experiments on an Intel Xeon CPU, 64 GB RAM, and two GeForce GTX TITAN X GPUs.

Pretrained Models

Pretrained PyTorch models are available at
https://drive.google.com/file/d/1OO3OAzR4yxYh_UBR9Nu7hV3XayfKVyO-/view?usp=sharing
or at https://pan.baidu.com/s/1WsIwVnDgtRBpJF8ktlN84A (passcode: 27py).
Download them and put them into "./codes/checkpoints/".

Please note that, as this model was trained with PyTorch, its performance differs slightly from that of the original version built on Caffe.

Training

Before training, edit config.py to adapt the paths to your environment, including "train_path" (pointing to train_index.txt) and "pretrained_path".
Choose the model and adjust arguments such as the class weights, batch size, and learning rate in config.py.
Then simply run:

python train.py 
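
For reference, the relevant part of config.py might look roughly like the following. The field names follow the README and the issue reports further down; the values are assumptions, so verify everything against the released config.py:

# config.py sketch -- verify field names and defaults against the released file.
train_path = './data/train_index.txt'                        # each line: "<image path> <ground-truth path>"
pretrained_path = './checkpoints/DeepCrack_CT260_FT1.pth'    # set to '' to train from scratch
checkpoint_path = './checkpoints'                            # where new checkpoints are saved
train_batch_size = 8                                         # default reported in the issues below
val_batch_size = 8
lr = 1e-3                                                    # assumed value; adjust as needed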

Test

To evaluate the performance of a pretrained model, put the pretrained model listed above (or your own model) into "./codes/checkpoints/" and set "pretrained_path" in config.py; then set "test_path" (pointing to test_index.txt) and "save_path" (for the saved results).
Choose the model to be evaluated, and then simply run:

python test.py
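
Judging from the train_example.txt line quoted in the issues below, each line of an index file contains an image path and the corresponding ground-truth path separated by a space; test_index.txt appears to follow the same convention. A hypothetical example with placeholder paths:

./data/test_images/0001.jpg ./data/test_gt/0001.png
./data/test_images/0002.jpg ./data/test_gt/0002.png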

Citation:

If you use our code or datasets in your research, please cite:

@article{zou2018deepcrack,
  title={Deepcrack: Learning Hierarchical Convolutional Features for Crack Detection},
  author={Zou, Qin and Zhang, Zheng and Li, Qingquan and Qi, Xianbiao and Wang, Qian and Wang, Song},
  journal={IEEE Transactions on Image Processing},
  volume={28},
  number={3},
  pages={1498--1512},
  year={2019},
}

The CrackTree260 dataset was constructed based on the CrackTree206 dataset. For details, you can refer to

@article{zou2012cracktree,
  title={CrackTree: Automatic crack detection from pavement images},
  author={Zou, Qin and Cao, Yu and Li, Qingquan and Mao, Qingzhou and Wang, Song},
  journal={Pattern Recognition Letters},
  volume={33},
  number={3},
  pages={227--238},
  year={2012},
  publisher={Elsevier}
}

Copyright:

These datasets were collected for academic research.

Contact:

For any problems with the datasets or code, please contact Dr. Qin Zou ([email protected]).

deepcrack's People

Contributors

qinnzou


deepcrack's Issues

AP

Is there any code available for computing AP (average precision)?

model is missing

Thanks for sharing your great work. Can you update the content inside model/? And if possible, please upload the pretrained weights to Google Drive or another place, because I cannot access the Baidu server. Thank you.

ZeroDivisionError: division by zero

Hi, I recently tried to run DeepCrack on my own target, but I ran into a problem. The error message is shown below.
It seems that it did not read my mask data; however, I arranged my train.txt and val.txt as described in README.md, and I really can't find which part is wrong. Does anyone know how to fix this problem?

-------------------------my error message-----------------------------------
Setting up a new session...
Without the incoming socket you cannot receive events from the server or register event handlers to your Visdom client.
Epoch 1 --- Training --- :: 0% 0/520 [00:00<?, ?it/s]/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:3635: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode)
Epoch 1 --- Evaluation --- :: 0% 0/520 [01:06<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 199, in
main()
File "train.py", line 127, in main
trainer.acc_op(val_pred[0], val_target)
File "/content/drive/MyDrive/Colab_Notebooks/DeepCrack/0411/DeepCrack_master/codes/trainer.py", line 102, in acc_op
mask > 0].numel()
ZeroDivisionError: division by zero

Testing the pre-trained model

I'm trying to test the pretrained model on some images. What is the expected input format in test_index.txt?
A line in train_example.txt looks like this:
/home/yueyuanhao/deepcrack/data/CrackTree/crack_train_image/6192_rot0_crop1_mirror0.jpg /home/yueyuanhao/deepcrack/data/CrackTree/crack_train_mask/6192_rot0_crop1_mirror0.png

Do we need a pair of paths (image and label) for testing as well, or just a single image path?

I tested with space-separated image paths, but it gives the following error:

RuntimeError: CUDA out of memory. Tried to allocate 378.00 MiB (GPU 0; 10.76 GiB total capacity; 9.19 GiB already allocated; 54.44 MiB free; 9.59 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Question about checkpoint path No such file or directory

Hi, I recently tried running DeepCrack and ran into a problem with the checkpoint file. The README.md says I should revise "pretrained_path"; however, I only find "pretrained_model" and no "pretrained_path", and I think that may be why I get the following error.
I have checked the checkpoint folder, and it is true that the .pth file does not exist there, but I can't find out why...
Does anyone know how to fix this problem?

In my config.py, I modified the paths as follows. Maybe someone can tell me if I forgot to change anything.
checkpoint_path = '/content/drive/MyDrive/Colab_Notebooks/DeepCrack/0411/DeepCrack_master/codes/'
log_path = 'log'
pretrained_model = ''
save_format = ''
--------------------my error message----------------------
Traceback (most recent call last):
File "train.py", line 199, in
main()
File "train.py", line 172, in main
cfg.name, epoch, cfg.save_pos_acc, cfg.save_acc))
File "/content/drive/MyDrive/Colab_Notebooks/DeepCrack/0411/DeepCrack_master/codes/tools/checkpointer.py", line 114, in save
torch.save(self._get_state(obj), new_filename, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 377, in save
with _open_file_like(f, 'wb') as opened_file:
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 231, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 212, in init
super(_open_file, self).init(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/content/drive/MyDrive/Colab_Notebooks/DeepCrack/0411/DeepCrack_master/codes/0428/checkpoints/0428_0428_epoch(1)_acc(0.00000/0.98645)_0000001_2022-04-28-02-27-52.pth'

two questions for training step

I wanted to run your code and then modify it to pursue my project objectives.
At the training step, you wrote "Before training, change the paths including "train_path" (for train_index.txt) and "pretrained_path" in config.py to adapt to your environment." I can't find them in config.py; which paths do you mean?
My second question is about the datasets: I downloaded them, but I don't know where to put them. In the "data" folder?

thanks for your response in advance

train errors: Sizes of tensors must match except in dimension 1

When I run python train.py to train a model from scratch, I hit this problem:

'''shell
Epoch 1 --- Training --- :: 0%| | 0/34 [00:02<?, ?it/s]
torch.Size([1, 512, 37, 50]) torch.Size([1, 512, 37, 50])
fuse5 torch.Size([1, 1, 592, 800])
fuse4 torch.Size([1, 1, 600, 800])
fuse3 torch.Size([1, 1, 600, 800])
fuse2 torch.Size([1, 1, 600, 800])
fuse1 torch.Size([1, 1, 600, 800])
Traceback (most recent call last):
File "train.py", line 200, in
main()
File "train.py", line 67, in main
pred = trainer.train_op(data, target)
File "/ssd10/exec/zhangjie07/2023/cnen_online/tmp/DeepCrack/DeepCrack/DeepCrack-master/codes/trainer.py", line 40, in train_op
pred_output, pred_fuse5, pred_fuse4, pred_fuse3, pred_fuse2, pred_fuse1, = self.model(input)
File "/ssd8/exec/zhangjie07/2023/ALLMs/code/huggingface/transformers_venv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/ssd8/exec/zhangjie07/2023/ALLMs/code/huggingface/transformers_venv/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 171, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/ssd8/exec/zhangjie07/2023/ALLMs/code/huggingface/transformers_venv/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 181, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/ssd8/exec/zhangjie07/2023/ALLMs/code/huggingface/transformers_venv/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 89, in parallel_apply
output.reraise()
File "/ssd8/exec/zhangjie07/2023/ALLMs/code/huggingface/transformers_venv/lib/python3.7/site-packages/torch/_utils.py", line 543, in reraise
raise exception
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/ssd8/exec/zhangjie07/2023/ALLMs/code/huggingface/transformers_venv/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
output = module(*input, **kwargs)
File "/ssd8/exec/zhangjie07/2023/ALLMs/code/huggingface/transformers_venv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/ssd10/exec/zhangjie07/2023/cnen_online/tmp/DeepCrack/DeepCrack/DeepCrack-master/codes/model/deepcrack.py", line 166, in forward
output = self.final(torch.cat([fuse5,fuse4,fuse3,fuse2,fuse1],1))
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 592 but got size 600 for tensor number 1 in the list.

'''

My training dataset is CrackTree260; the index file looks like this:

'''shell
DeepCrack-datasets/CrackTree260/CrackTree260/6223.jpg DeepCrack-datasets/CrackTree260/CrackTree260_gt/gt/6223.bmp
DeepCrack-datasets/CrackTree260/CrackTree260/6224.jpg DeepCrack-datasets/CrackTree260/CrackTree260_gt/gt/6224.bmp
DeepCrack-datasets/CrackTree260/CrackTree260/6225.jpg DeepCrack-datasets/CrackTree260/CrackTree260_gt/gt/6225.bmp
'''

How can I solve this problem? Thanks!
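
Not an official answer, but a likely cause is that the raw CrackTree260 images (800×600) have a height that is not a multiple of 32, so the most downsampled side output comes back at 592 instead of 600 after upsampling. One workaround, under that assumption, is to pad each image and label to the next multiple of 32 before feeding them to the network:

import torch.nn.functional as F

def pad_to_multiple(x, multiple=32):
    # x: tensor of shape (N, C, H, W); pad right and bottom so H and W are multiples of `multiple`.
    h, w = x.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    return F.pad(x, (0, pad_w, 0, pad_h))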

License?

Hello there,

Which license does this project use?

err when calculate loss in val_op in trainer

Hi,

I found an error in val_op when the loss is calculated:
you divide the loss by cfg.train_batch_size (which is defined as 8 in the config),
but this step evaluates the validation set, so it should be divided by cfg.val_batch_size.

Questions about calculating accuracy in training

In the paper, it is mentioned that adding a sigmoid to the feature map output by the network converts it into a crack prediction map. Then I looked at the code: the loss is computed with BCEWithLogitsLoss, i.e., the sigmoid is applied to the network output inside the loss. The visualization also applies a sigmoid to the output before showing it as a prediction map. However, when the accuracy is calculated, the network output is not passed through a sigmoid. What is the reason?
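
One possible reading (not an authors' answer): BCEWithLogitsLoss applies the sigmoid internally, and for accuracy the sigmoid can be skipped as long as the threshold is adjusted, because sigmoid(x) > 0.5 exactly when x > 0. A quick check:

# Thresholding raw logits at 0 gives the same binary prediction as
# thresholding sigmoid probabilities at 0.5.
import torch

logits = torch.randn(4, 1, 8, 8)
assert torch.equal(torch.sigmoid(logits) > 0.5, logits > 0)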

Question about F-measure result.

Hi, I have a question about the code you provided: the metrics shown during training and testing are 'accuracy', 'positive sample accuracy', and 'negative sample accuracy', and these metrics also decide which checkpoint is saved, is that right? But the paper says the model achieves an F-measure over 0.87 on the three test datasets. How can the F-measure be calculated from those two or three metrics? Or is the provided code not complete? Just curious.
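
For reference, the F-measure can be computed afterwards from a thresholded prediction and the ground truth; the sketch below is a strict pixel-level version and omits any spatial tolerance the paper's evaluation protocol may use when matching crack pixels:

import numpy as np

def f_measure(pred, gt, threshold=0.5):
    # pred: float array of crack probabilities; gt: binary ground-truth array.
    pred_bin = pred >= threshold
    gt_bin = gt > 0
    tp = np.logical_and(pred_bin, gt_bin).sum()
    precision = tp / max(pred_bin.sum(), 1)
    recall = tp / max(gt_bin.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)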

Batch Normalization

In model.py, batch normalization is not used even though it is mentioned in the paper.

data annotation

Hi, I'd like to know whether the cracks are annotated as lines (e.g., polylines in ESRI ArcGIS) or polygons. It seems that some thin cracks are unsuitable for annotation as polygons.
thanks a lot!

how to generate "*_example.txt"

Hi, thanks for your work. Could you please provide the script for generating the '*_example.txt' files?
I cannot understand the content of the .txt files.
Thanks in advance.
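
Not the authors' script, but a minimal sketch of how such an index file can be generated by pairing image and ground-truth files by name (directory layout and extensions are assumptions):

import os

image_dir, gt_dir, out_file = 'data/images', 'data/gt', 'train_index.txt'
with open(out_file, 'w') as f:
    for name in sorted(os.listdir(image_dir)):
        stem, _ = os.path.splitext(name)
        gt_path = os.path.join(gt_dir, stem + '.png')    # assumed ground-truth extension
        if os.path.exists(gt_path):
            # One "<image path> <ground-truth path>" pair per line.
            f.write(f"{os.path.join(image_dir, name)} {gt_path}\n")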

label tool

Could you tell me which labeling tool you used?

Sigmoid Function

Hi, if you want to use the sigmoid activation function in the last layer, shouldn't it be added in model.py? What is the purpose of applying torch.sigmoid to every side output in the training and validation loop? I removed visdom and fit this model just like other CNN models such as UNet, and I also don't understand the code in the screenshot. Can you please explain this?

[screenshot]

test_example file

I want to know whether the test_example.txt file also needs both the image path and the .png ground-truth path.

Code of Average Precision

Hello!
Thank you for providing the code. Could you please also provide the code to calculate the Average Precision (area under the P-R curve)? I have the predicted segmentation masks and the ground truth.
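
Not an official answer, but given a continuous prediction map and a binary ground truth, the average precision over all pixels can be computed with scikit-learn:

import numpy as np
from sklearn.metrics import average_precision_score

def average_precision(pred_prob, gt):
    # pred_prob: float array in [0, 1]; gt: binary array of the same shape.
    return average_precision_score(gt.reshape(-1) > 0, pred_prob.reshape(-1))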

test accuracy measurement

In trainer.py, you define acc_op, which is important for measuring accuracy, but I found that it is only used at training time. So how did you measure the F1 score at test time?

Test crack detection on my own images issue

Hello,

First, thank you for this very helpful code. I have already developed a Python script to detect road cracks using morphology methods, and I want to compare it with your method.

The issue when I launch the test.py script is as follows:
File "C:\Program Files (x86)\ia\DeepCrack-master\codes\data\dataset.py", line 33, in __call__ if len(lab.shape) != 2: AttributeError: 'NoneType' object has no attribute 'shape'

How can I solve it please?

If I want to test my own images, do I have to create my own val_example.txt (test_example.txt) with the name and the path to my test images?

I also want to know whether label images are necessary to test my own images.

Thanks in advance.
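
Not an official answer, but that 'NoneType' error usually means cv2.imread returned None for one of the paths in the index file (for example, a wrong or missing label path). A quick way to check every path in an index file before running test.py:

import cv2

with open('test_example.txt') as f:
    for line in f:
        for path in line.split():
            if cv2.imread(path) is None:
                print('Cannot read:', path)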

A question about network structure

Hi, I found that the network structure in the paper is inconsistent with the code: a 1×1 convolution is used in the feature-fusion part of the paper, but the code uses a 3×3 convolution. May I ask which one is correct?

RuntimeError: CUDA out of memory

Hi,

I have an issue running test.py with the pretrained model. Every time I run it I get the following error:
RuntimeError: CUDA out of memory. Tried to allocate 120.00 MiB (GPU 0; 4.00 GiB total capacity; 2.48 GiB already allocated; 105.14 MiB free; 2.51 GiB reserved in total by PyTorch)

Note that I have reduced the batch size as suggested in other topics, but I still get the same error. Do you have a solution, please?

Thanks in advance
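
Not an official fix, but two things usually help here: making sure inference runs under torch.no_grad() so activations are not kept for backpropagation, and reducing the input image size. A sketch of the former ('model' and 'img' are placeholders, and this repository's model may return several side outputs):

import torch

def predict(model, img):
    # img: (1, 3, H, W) tensor already on the same device as the model.
    model.eval()
    with torch.no_grad():
        out = model(img)
        out = out[0] if isinstance(out, (tuple, list)) else out
        return torch.sigmoid(out)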

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 2560 and 1440 in dimension 2 at C:\w\1\s\tmp_conda_3.6_045031\conda\conda-bld\pytorch_1565412750030\work\aten\src\TH/generic/THTensor.cpp:689

Hello, @qinnzou
I am facing this problem; is it possible to get any clarification on what I should do?

python train.py
Setting up a new session...
Without the incoming socket you cannot receive events from the server or register event handlers to your Visdom client.
Loaded checkpoint: D:\test\deepcrack\DeepCrack-master\codes\checkpoints\DeepCrack_CT260_FT1.pth
Epoch 1 --- Training --- :: 0%| | 0/202 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 197, in
main()
File "train.py", line 64, in main
for idx, (img, lab) in bar:
File "C:\Users\MadPC.conda\envs\deepcrack\lib\site-packages\tqdm\std.py", line 1178, in iter
for obj in iterable:
File "C:\Users\MadPC.conda\envs\deepcrack\lib\site-packages\torch\utils\data\dataloader.py", line 819, in next
return self._process_data(data)
File "C:\Users\MadPC.conda\envs\deepcrack\lib\site-packages\torch\utils\data\dataloader.py", line 846, in _process_data
data.reraise()
File "C:\Users\MadPC.conda\envs\deepcrack\lib\site-packages\torch_utils.py", line 369, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "C:\Users\MadPC.conda\envs\deepcrack\lib\site-packages\torch\utils\data_utils\worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "C:\Users\MadPC.conda\envs\deepcrack\lib\site-packages\torch\utils\data_utils\fetch.py", line 47, in fetch
return self.collate_fn(data)
File "C:\Users\MadPC.conda\envs\deepcrack\lib\site-packages\torch\utils\data_utils\collate.py", line 80, in default_collate
return [default_collate(samples) for samples in transposed]
File "C:\Users\MadPC.conda\envs\deepcrack\lib\site-packages\torch\utils\data_utils\collate.py", line 80, in
return [default_collate(samples) for samples in transposed]
File "C:\Users\MadPC.conda\envs\deepcrack\lib\site-packages\torch\utils\data_utils\collate.py", line 56, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 2560 and 1440 in dimension 2 at C:\w\1\s\tmp_conda_3.6_045031\conda\conda-bld\pytorch_1565412750030\work\aten\src\TH/generic/THTensor.cpp:689
