yalaudah / facies_classification_benchmark

This repository includes the PyTorch code and the data needed to reproduce the results of our paper "A Machine Learning Benchmark for Facies Classification" (published in the SEG Interpretation journal, August 2019).

License: MIT License

Python 100.00%
benchmark dataset deep-learning facies facies-classification geophysics interpretation machine-learning machine-learning-benchmark seismic

facies_classification_benchmark's People

Contributors

motazalfarraj, yalaudah


facies_classification_benchmark's Issues

Issue in section_test.py

In section_test.py, line 28:

splits = [args.split if 'both' not in args.split else 'test1', 'test2']

when args.split = "test2", the output split will be ["test2", "test2"]
when args.split = "test1", the output split will be ["test1", "test2"]

@yalaudah

Is it possible to generate horizons and detect faults with deep learning in Python?

Hello! I have really enjoyed your work on facies and machine learning. I am currently doing my undergraduate thesis for a Geophysical Engineering degree, and my project is based on using supervised and unsupervised neural networks to recognize facies and petrophysical attributes. At the moment I am working with the OpendTect and H&R programs, but I am looking for a Python library that can automate horizon generation and fault detection in seismic data (which I am working with in the time domain). I would really appreciate any suggestions you may have. @yalaudah

SEGY version of raw data

Hi Alaudah,

Thanks for sharing the data and results. I am wondering if you have a SEG-Y version of the raw seismic data and labels instead of the NumPy format.

Thanks.
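If only the NumPy volumes are available, a rough workaround might be to write them out with segyio (the data path and segyio usage below are my assumptions, not part of this repository):

import numpy as np
import segyio

# Rough sketch, not from the repository: export the training volume to a basic
# SEG-Y file. Trace headers come out generic, so real inline/crossline numbering
# and coordinates would still need to be set separately.
seismic = np.load('data/train/train_seismic.npy').astype(np.float32)  # path assumed
segyio.tools.from_array('train_seismic.segy', seismic)                # requires segyio >= 1.9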

picture resize

Thank you for your wonderful work. If I want to change the size of the input images to even numbers, for example 255×701 -> 256×768, so that it is more convenient to use U-Net and other networks, how should I adjust it? I look forward to your reply.
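One option I am considering (a sketch of my own, not from the repository) is to pad each section instead of resizing it, so no labels get interpolated:

import numpy as np

# Sketch: edge-pad a 255x701 section up to 256x768 so both spatial dimensions are
# divisible by the down-sampling factors of a typical U-Net. The padded region can
# simply be cropped away again after prediction.
section = np.zeros((255, 701), dtype=np.float32)      # placeholder for one seismic section
target_h, target_w = 256, 768
padded = np.pad(section,
                ((0, target_h - section.shape[0]),    # 1 extra row
                 (0, target_w - section.shape[1])),   # 67 extra columns
                mode='edge')
print(padded.shape)                                   # (256, 768)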

patch_train_val.txt missing

I've tried running the training part as suggested in the README, but it seems the training/validation split files are missing from the data/repository.

Could you add these files, or point me to where I can obtain these?

Traceback (most recent call last):
  File "patch_train.py", line 375, in <module>
    train(args)
  File "patch_train.py", line 84, in train
    split_train_val(args, per_val=args.per_val)
  File "patch_train.py", line 68, in split_train_val
    pjoin('data', 'splits', loader_type + '_train_val.txt'), 'w')
FileNotFoundError: [Errno 2] No such file or directory: 'data/splits/patch_train_val.txt'
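For what it's worth, the traceback shows open(..., 'w') failing, which usually means the data/splits directory itself does not exist; creating it first (my assumption, not a confirmed fix) should let the script write the split file on its own:

import os

# Assumed workaround: split_train_val() writes patch_train_val.txt itself, but
# open(..., 'w') cannot create the missing 'data/splits' directory, so create it first.
os.makedirs(os.path.join('data', 'splits'), exist_ok=True)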

How to get 'data\\splits\\section_train_val.txt', please?

Hello, thanks for your great code for seismic facies classification. It is very useful for my research.

But when I try it, a problem appears, as shown in the screenshot below.
[screenshot of the error]

I think I am missing a file named "data\splits\section_train_val.txt", but I don't know how to get it.

Please help me!
Looking forward to your reply.

Problem with TensorboardX add_image writer

I am getting a KeyError: ((1, 1, 99), '|u1') followed by TypeError: Cannot handle this data type.
The full traceback is:

Epoch [1/101] training Loss: 1.8266
Traceback (most recent call last):
  File "/home/akshita/anaconda2/envs/Faster_RCNN/lib/python3.6/site-packages/PIL/Image.py", line 2460, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
KeyError: ((1, 1, 99), '|u1')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "patch_train.py", line 365, in <module>
    train(args)
  File "patch_train.py", line 204, in train
    writer.add_image('train/original_label',correct_label_decoded, epoch + 1)
  File "/home/akshita/anaconda2/envs/Faster_RCNN/lib/python3.6/site-packages/tensorboardX/writer.py", line 412, in add_image
    self.file_writer.add_summary(image(tag, img_tensor), global_step, walltime)
  File "/home/akshita/anaconda2/envs/Faster_RCNN/lib/python3.6/site-packages/tensorboardX/summary.py", line 205, in image
    image = make_image(tensor, rescale=rescale)
  File "/home/akshita/anaconda2/envs/Faster_RCNN/lib/python3.6/site-packages/tensorboardX/summary.py", line 243, in make_image
    image = Image.fromarray(tensor)
  File "/home/akshita/anaconda2/envs/Faster_RCNN/lib/python3.6/site-packages/PIL/Image.py", line 2463, in fromarray
    raise TypeError("Cannot handle this data type")
TypeError: Cannot handle this data type

I have tried the code with torch 0.4.1 and 1.0.
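A workaround that seems to help in similar cases (my assumption, not a confirmed fix for this code) is to pass add_image an explicit HxWxC uint8 array and declare its layout, which newer tensorboardX versions support via the dataformats argument:

import numpy as np
from tensorboardX import SummaryWriter

# Sketch: give add_image an unambiguous HxWxC uint8 image and state the layout
# explicitly, so PIL never receives an oddly shaped array such as (1, 1, 99).
writer = SummaryWriter()
label_rgb = np.random.randint(0, 255, size=(99, 99, 3), dtype=np.uint8)  # dummy decoded label
writer.add_image('train/original_label', label_rgb, 1, dataformats='HWC')
writer.close()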

Data shapes not matching

Hello!

Do the split sizes described in the paper match the shapes of the data files?
In the train/test split section you state:

  1. Training set: Inline range: [300-701] (402 inlines) and crosslines [300-1000] (701 crosslines)
  2. Testing set 1: Inline range: [100-299] (200 inlines) and crossline range: [300-1000] (701 crosslines)
  3. Testing set 2: Inline range: [100-701] (602 inlines) and crossline range: [1001-1200] (200 crosslines)

But I just opened the corresponding label files, whose shapes give me:

  1. (401,701,255)
  2. (200,701,255)
  3. (601,200,255)

Are the inline counts wrong for volumes 1 and 3, or am I missing something?

Thank you!
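For reference, this is how I checked the shapes (the paths are my assumption of the repository's data layout; adjust them if yours differs):

import numpy as np

# Quick check of the label volume shapes.
for name in ['train/train_labels.npy',
             'test_once/test1_labels.npy',
             'test_once/test2_labels.npy']:
    labels = np.load(f'data/{name}')
    print(name, labels.shape)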

_pickle.UnpicklingError: pickle data was truncated

Traceback (most recent call last):
  File "patch_train.py", line 373, in <module>
    train(args)
  File "patch_train.py", line 177, in train
    for i, (images, labels) in enumerate(trainloader):
  File "D:\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 355, in __iter__
    return self._get_iterator()
  File "D:\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 301, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "D:\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 914, in __init__
    w.start()
  File "D:\Python\Python38\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "D:\Python\Python38\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Python\Python38\lib\multiprocessing\context.py", line 326, in _Popen
    return Popen(process_obj)
  File "D:\Python\Python38\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\Python\Python38\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
OSError: [Errno 22] Invalid argument
PS D:\dz_yalaudah\facies_classification_benchmark-master> Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\Python\Python38\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "D:\Python\Python38\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
_pickle.UnpicklingError: pickle data was truncated
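A workaround that seems common on Windows (my assumption, not a confirmed fix): the spawn start method pickles the whole dataset to every DataLoader worker, which can fail for large in-memory seismic volumes, so disabling worker processes avoids the pickling step entirely. A self-contained sketch with a dummy dataset:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Sketch: num_workers=0 keeps data loading in the main process, so nothing has to
# be pickled to spawned worker processes on Windows.
dummy_set = TensorDataset(torch.zeros(8, 1, 99, 99), torch.zeros(8, 99, 99).long())
trainloader = DataLoader(dummy_set, batch_size=4, shuffle=True, num_workers=0)
for images, labels in trainloader:
    print(images.shape, labels.shape)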
