capsuleendoscope / endoslam

206 stars · 7 watchers · 46 forks · 259.32 MB

EndoSLAM Dataset and an Unsupervised Monocular Visual Odometry and Depth Estimation Approach for Endoscopic Videos: Endo-SfMLearner

Home Page: https://data.mendeley.com/datasets/cd2rtzm23r/1

License: MIT License

MATLAB 24.26% C++ 4.19% M 0.02% Python 65.89% Shell 1.80% Jupyter Notebook 3.85%
capsule-endoscopy visual-odometry monocular-depth-estimation pose-estimation slam-dataset

endoslam's People

Contributors

capsuleendoscope · guliz-gokceler-2014400192 · kutsev-ozyoruk · luisperdigoto


endoslam's Issues

Unity RGB & Depthmaps misalignment

Hi,

I've noticed that in the Unity sequences the RGB and depth-map image pairs are misaligned.

For example, load one pair with the code below and you'll notice that the edges in the two images do not line up.
I've attached a gif file for illustration.

import cv2
import matplotlib.pyplot as plt

# Read the depth PNG unchanged (keeps the original bit depth) and take one channel
depthmap = cv2.imread('Data/EndoSlam/UnityCam/Colon/Pixelwise Depths/aov_image_0050.png', -1)[..., 0]
# Convert BGR (OpenCV default) to RGB so matplotlib shows the colours correctly
rgb = cv2.cvtColor(cv2.imread('Data/EndoSlam/UnityCam/Colon/Frames/image_0050.png', -1), cv2.COLOR_BGR2RGB)

plt.imshow(depthmap)
plt.show()

plt.imshow(rgb)
plt.show()


Could you please look into this issue?
Thank you

Run depth estimation

Hi, could you please explain how to run Endo-SfMLearner to extract depth information from an endoscopic image? I'm a bit confused about what inputs the shell scripts need; specifically, what is gt_depth for eval_depth.py?
Thanks
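For context, gt_depth in SfMLearner-style evaluation scripts typically points at the ground-truth depth maps loaded as a single array; the exact format expected here is not confirmed by the repository. A sketch of stacking per-frame depth maps into such an array (the `scale` argument, standing in for a raw-count-to-metric conversion, is an assumption):

```python
# Hypothetical sketch: stack per-frame ground-truth depth maps into one
# (N, H, W) float32 array, the shape depth-evaluation code commonly
# consumes. The scale factor (raw PNG counts -> metric units) is an
# illustrative assumption, not a value from the dataset.
import numpy as np

def stack_depth_maps(depth_maps, scale=1.0):
    """Stack per-frame depth maps into a float32 (N, H, W) array."""
    return np.stack([np.asarray(d, dtype=np.float32) * scale
                     for d in depth_maps])

# In practice each map would come from
# cv2.imread(path, cv2.IMREAD_UNCHANGED); dummy arrays stand in here.
gt = stack_depth_maps([np.ones((4, 4)), 2 * np.ones((4, 4))])
```

The resulting array could then be saved with `np.save` and handed to whatever path eval_depth.py expects for gt_depth.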

Data split is not clear

How do you organise the training, validation, and testing sets for the synthetic Unity dataset?
Could you provide more details?

Pretrained models

Dear Team,

The link to the pretrained models seems to be broken. Would it be possible to update it?

Evaluate Pose Estimation for Unity Data

The Unity ground-truth poses are given as quaternions, while the pose-estimation script test-vo.py outputs rotations and translations. Is there a ready-made function that converts the quaternions to rotation matrices so the two can be compared?
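For what it's worth, the conversion is only a few lines of NumPy. A sketch, assuming Unity's (x, y, z, w) quaternion ordering (verify the ordering against the dataset's pose files before trusting the result):

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (qx, qy, qz, qw) to a 3x3 rotation matrix.

    The (x, y, z, w) ordering matches Unity's Quaternion, but check it
    against the dataset's pose files -- other conventions put w first.
    """
    x, y, z, w = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
```

If SciPy is available, `scipy.spatial.transform.Rotation.from_quat` does the same conversion (also with (x, y, z, w) ordering).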

Issues of camera calibration files and image datasets of Pillcam

Hi,

There are two issues I am concerned about:

(1) According to the README file under the folder EndoSfMLearner, there should be a cam.txt containing the camera calibration parameters. I know that I can get the parameters from the folder Calibration, but I have no idea what the format of cam.txt is. Also, only the calibrations of MiroCam and PillCam are provided. Where are the calibration parameters for LowCam and HighCam?

(2) Downloaded from the dropbox link, only images of HighCam, LowCam, and MiroCam are provided under the folder Cameras. Where can I find images of PillCam?

Am I correct that, given the limited data, I can only reproduce the MiroCam experiments, provided I get its cam.txt format correct?

Data used for training and validation in the paper

Could you please provide the exact subsets of the data that were used for training the Endo-SfMLearner models? I would be very grateful if you pointed me to the subsets used for training and validation, and also to those behind the trajectory plots predicted by the pose-estimation network.

The paper states: "The training and validation dataset consist of 2039 and 509 colon images generated in the Unity simulation environment, respectively." Can you please provide this subset?

Thank you.
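Until the exact file lists are published, a deterministic random split in the stated 2039/509 proportions is a common stand-in (it will not match the paper's subset; the seed and file naming below are illustrative assumptions):

```python
# Deterministic train/val split in the paper's 2039/509 proportions.
# This reproduces the *sizes* only -- the authors' actual subset is unknown.
import random

def split_frames(frames, n_train=2039, n_val=509, seed=0):
    """Shuffle frame names deterministically, then split into train/val."""
    frames = sorted(frames)                 # fixed starting order
    random.Random(seed).shuffle(frames)     # seeded, reproducible shuffle
    return frames[:n_train], frames[n_train:n_train + n_val]

# e.g. frames = sorted(os.listdir('UnityCam/Colon/Frames'))
```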

Confusion About data available and Code Structure

Thank you for your wonderful work!

I have access to the EndoSLAM dataset and am trying to replicate your published results. I have found that the code seems to point (by default) to the KITTI and Cityscapes datasets, which are not the subject of the study.

Am I missing something? How do I use the code, and specifically the data loaders, on the EndoSLAM dataset? Or do I have to write a new data loader?

Thank you very much.
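For reference, SfMLearner-derived loaders such as sequence_folders.py crawl a data root of scene folders, each holding ordered frames plus a cam.txt with the 3x3 intrinsics. A sketch of arranging frames that way (the function name, paths, and zero-padded naming scheme are illustrative assumptions, not the repository's own tooling):

```python
# Hypothetical helper: lay out one scene folder the way an
# SfMLearner-style sequence loader expects (ordered frames + cam.txt).
from pathlib import Path
import shutil

import numpy as np

def make_scene(root, name, frame_paths, intrinsics):
    """Copy frames into root/name/ and write the cam.txt the loader reads."""
    scene = Path(root) / name
    scene.mkdir(parents=True, exist_ok=True)
    for i, src in enumerate(sorted(frame_paths)):
        shutil.copy(src, scene / f"{i:06d}.png")  # zero-padded frame order
    np.savetxt(scene / "cam.txt", np.asarray(intrinsics).reshape(3, 3))
```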

accuracy of endoscope calibration

Hi,

I read in your paper that the reprojection error of your endoscope calibration exceeds 0.2. Isn't that a bit large for an endoscope? Have any of you tried calibration methods other than the Camera Calibration Toolbox?

thank you!

Does anyone have a good reproduction result?

I tried to train a model on the provided virtual sequence data and found very poor performance and unexpected loss values in the log. Could anyone give me more details about training the Endo-SfM model?

scale in compute_photo_and_geometry_loss

Hello, thanks for publishing the code!
In the function compute_photo_and_geometry_loss, the code that handles scaling is commented out, and the camera intrinsics are not adjusted to the scale.
Is this an error or the intended behaviour?
Thanks
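For background, when a multi-scale photometric loss warps images at 1/2, 1/4, ... resolution, the intrinsics are usually rescaled with them: fx and cx divide by the horizontal factor, fy and cy by the vertical one. A minimal illustration of that adjustment (not the repository's code):

```python
# Rescale a 3x3 intrinsic matrix for a uniformly downscaled image.
import numpy as np

def scale_intrinsics(K, factor):
    """Return intrinsics for an image downscaled by `factor` (2 -> half size)."""
    K = np.asarray(K, dtype=np.float64).copy()
    K[0, :] /= factor  # fx, skew, cx
    K[1, :] /= factor  # fy, cy
    return K           # bottom row [0, 0, 1] stays untouched
```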

Full dataset access

Hi all, in a few weeks the full dataset will be released as open source. Thank you for your interest.

Originally posted by @CapsuleEndoscope in #7 (comment)

Hi team, following up on this comment: has there been any progress on releasing the PillCam data?

train matter

/home/dell/anaconda3/envs/endoslam/bin/python /home/dell/EndoSLAM/EndoSfMLearner/train.py --name /home/dell/EndoSLAM/EndoSfMLearner/train
=> will save everything to /home/dell/EndoSLAM/EndoSfMLearner/train/03-21-22:36
=> fetching scenes in '/home/dell/EndoSLAM/EndoSfMLearner/Data_Path'
Traceback (most recent call last):
  File "/home/dell/EndoSLAM/EndoSfMLearner/train.py", line 456, in <module>
    main()
  File "/home/dell/EndoSLAM/EndoSfMLearner/train.py", line 113, in main
    dataset=args.dataset
  File "/home/dell/EndoSLAM/EndoSfMLearner/datasets/sequence_folders.py", line 33, in __init__
    self.crawl_folders(sequence_length)
  File "/home/dell/EndoSLAM/EndoSfMLearner/datasets/sequence_folders.py", line 42, in crawl_folders
    intrinsics = np.genfromtxt(scene/'cam.txt').astype(np.float32).reshape((3, 3))
ValueError: cannot reshape array of size 1 into shape (3,3)

Can you tell me how to fix the cam.txt?
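For what it's worth, that ValueError means np.genfromtxt found a single value (or nothing parseable) in cam.txt, while the loader expects nine numbers forming the 3x3 intrinsic matrix. A sketch of writing a well-formed cam.txt (the fx/fy/cx/cy values below are placeholders, not a real calibration):

```python
import numpy as np

# Placeholder intrinsics -- substitute the values from your calibration.
K = np.array([[156.0,   0.0, 160.0],   # fx,  0, cx
              [  0.0, 155.0, 160.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
np.savetxt("cam.txt", K)

# The loader's own read (from sequence_folders.py) should now succeed:
check = np.genfromtxt("cam.txt").astype(np.float32).reshape((3, 3))
```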

rotation problem in UnityCam/Colon/pose

Hello, when I use the poses in UnityCam/Colon/pose, I find that the rotations of all the cameras are identical. Is this correct, or is it a problem with my understanding?

Data missing from Dropbox

I tried downloading the dataset from Dropbox, but all the folders seem to be empty. Do you know why that could be the case?

A question about the area coverage in the VR-Caps paper

The paper describes an area-coverage algorithm, but the description is not clear. When you test the deep reinforcement learning (DRL) algorithm, how do you count the number of vertices seen by the capsule camera? And how do you know how many vertices the stomach mesh has in total?

Scale of the depth map

Hello,

I have a quick question about the depth map. Is there a known scale that maps the colours of the depth map to mm or cm?

Thanks in advance!

UnityCam - normalization and unit measurements of depth map

Hi, first I want to say thanks for the complete and novel synthetic dataset.

I'm writing to request more details about the normalization, measurement units, and range of the depth ground truths, specifically for the UnityCam images. I read the articles related to the database, but didn't find this information.

How should I interpret the relationship between the gray-level values and depth? Does each level correspond to 1 millimetre (my guess), or 1 centimetre?

Or have these depth maps been normalized to some minimum and maximum depth values?

Thank you for your time; I look forward to your answer.
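One way to frame the question: if the PNGs store depth normalised linearly over the camera's clipping range (an assumption to verify against the Unity camera settings), recovering metric depth only needs the near and far clipping distances. A sketch with illustrative values:

```python
# Hypothetical linear un-normalisation of an 8-bit depth map.
# near/far are illustrative placeholders, not the dataset's values.
import numpy as np

def gray_to_depth(gray, near=0.0, far=200.0):
    """Map 8-bit gray levels linearly to depth in mm."""
    return near + (np.asarray(gray, dtype=np.float32) / 255.0) * (far - near)
```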

endoSfm_depth_tests.ipynb

Hello,
I am trying to run the endoSfm_depth_tests.ipynb notebook, but the predicted depth image it outputs is very small compared to the input and ground-truth images (screenshot attached).
