
seeingthroughfog's People

Contributors

fheide, manonthegithub, mariobijelic, martinhahner


seeingthroughfog's Issues

Which dataset is used to evaluate the work?

Hi, many thanks for your great work!

As for the quantitative detection AP listed in your paper, which dataset is used? The daytime split, the night split, or both?
And in the training stage, are both the daytime and the night data used?

Thanks.

Radar point cloud format and coordinate frames

Hi! Radar targets have the following information: rcsLog, azimuthAngle_sc, rVelOverGroundOdo_sc, x_sc, y_sc, x_cc, y_cc and rDist_sc. I was wondering what the coordinate frames sc and cc stand for.

In particular, I'm not sure whether I should use x/y_cc or x/y_sc when comparing to posx and posy from the labels in gt_labels/cam_left_labels (e.g. to count how many radar points are inside a bounding box).

In tools/DatasetViewer/lib/read.py, the function load_calib_data mentions four coordinate frames: zero, camera, velodyne and radar. Can I assume that all label parameters are given in camera coordinates? And what is this zero reference frame?

Thanks for the dataset and for the support!
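For what it's worth, once the right coordinate pair is confirmed, counting radar points inside a label's bird's-eye-view footprint can be sketched like this. The function name is mine, and it assumes posx/posy/rotz/length/width describe the box footprint in the same frame as the chosen radar x/y, which is exactly the open question above:

```python
import numpy as np

def points_in_bev_box(px, py, label):
    """Count points inside a label's bird's-eye-view footprint.

    label: dict with posx, posy (box center), length, width, and rotz
    (yaw in radians). Whether px/py should be x_cc/y_cc or x_sc/y_sc
    is an assumption the maintainers would need to confirm.
    """
    # shift into the box frame, then rotate by -yaw
    dx = np.asarray(px, dtype=float) - label['posx']
    dy = np.asarray(py, dtype=float) - label['posy']
    c, s = np.cos(-label['rotz']), np.sin(-label['rotz'])
    lx = c * dx - s * dy
    ly = s * dx + c * dy
    inside = (np.abs(lx) <= label['length'] / 2) & (np.abs(ly) <= label['width'] / 2)
    return int(inside.sum())
```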

errors while unzipping files in FogchamberDataset and SeeingThroughFogCompressed

Hi,
unfortunately, I cannot successfully unzip the files within FogchamberDataset and SeeingThroughFogCompressed.

To make sure that the download was not corrupted on my side, I already downloaded FogchamberDataset.zip and SeeingThroughFogCompressed.zip multiple times from your server.

While unzipping FogchamberDataset.zip and SeeingThroughFogCompressed.zip themselves works fine,
unzipping the files within their extracted subfolders doesn't work.

When trying to unzip those, I get the following message for the archives in FogchamberDataset:

4 archives were successfully processed.
7 archives had fatal errors.

and this for the archives in SeeingThroughFogCompressed:

10 archives were successfully processed.
1 archive had warnings but no fatal errors.
32 archives had fatal errors.

I think you should be able to reproduce a "fatal error", e.g. by running
unzip lidar_hdl64_strongest.zip in FogchamberDataset/lidar_hdl64_strongest.


This is the trace I get:

Archive:  lidar_hdl64_strongest.zip
warning [lidar_hdl64_strongest.zip]:  zipfile claims to be last disk of a multi-part archive;
  attempting to process anyway, assuming all parts have been concatenated
  together in order.  Expect "errors" and warnings...true multi-part support
  doesn't exist yet (coming soon).
file #1:  bad zipfile offset (local header sig):  4
file #2:  bad zipfile offset (local header sig):  84
file #3:  bad zipfile offset (local header sig):  1292455
  [... the same "bad zipfile offset (local header sig)" error repeats for files #4 through #115 ...]
file #116:  bad zipfile offset (local header sig):  146680238
error: invalid zip file with overlapped components (possible zip bomb)

This is my setup:

$ hostnamectl

  Operating System: Debian GNU/Linux 9 (stretch)
            Kernel: Linux 4.9.0-13-amd64
      Architecture: x86-64

$ unzip --version

UnZip 6.00 of 20 April 2009, by Debian. Original by Info-ZIP.

Question about the supplement document

[figure from the supplementary document]

First of all, I was impressed by this great research.
I would like to implement and use these models myself, but I do not know how.

Did you label the fog points separately by creating such an image?
I'd like to hear a specific explanation.

Also, what should I do if I want to use those models?
In addition, I wonder whether the LiDAR-only SSD used a 2D or a 3D detector. If there is reference code, could you point me to it?

I'll wait for your answer. Thank you in advance.

Lidar height data

In the supplemental material, you mention height as part of the LIDAR input. Do you have a script to compute this from the LIDAR data? Currently, it seems like there is only depth and pulse intensity. Thanks!
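In case it helps while waiting for an answer: if the .bin files follow the 5-float32-per-point layout discussed in other issues here, one plausible reading is that "height" is simply the per-point z coordinate in the sensor frame. A minimal sketch under that assumption (the function name is mine, not the repository's):

```python
import numpy as np

def lidar_features(bin_path):
    """Return per-point depth, intensity and height from a DENSE lidar .bin.

    Assumes 5 float32 values per point (x, y, z, intensity, extra) and
    reads 'height' as the sensor-frame z coordinate.
    """
    pts = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 5)
    depth = np.linalg.norm(pts[:, :3], axis=1)  # Euclidean range
    intensity = pts[:, 3]
    height = pts[:, 2]                          # z in the sensor frame
    return depth, intensity, height
```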

labels, 27th element

read.py only retrieves up to the 26th element of the labels (i.e. 'visibleLidar': kitti_properties[25]). What is the 27th element (kitti_properties[26]) supposed to be? visibleRadar?

Foggification lost points

Hi,

To better understand the proposed foggification, I had a look at the code but couldn't find where the lost points are actually discarded. The probabilities are computed but then only used for selecting the points that are scattered.
To be precise, I'm talking about this method:

def haze_point_cloud(pts_3D, Radomized_beta, args):
    #print 'minmax_values', max(pts_3D[:, 0]), max(pts_3D[:, 1]), min(pts_3D[:, 1]), max(pts_3D[:, 2]), min(pts_3D[:, 2])
    n = []
    # foggyfication should be applied to sequences to ensure time correlation inbetween frames
    # vectorze calculation
    # print pts_3D.shape
    if args.sensor_type=='VelodyneHDLS3D':
        # Velodyne HDLS643D
        n = 0.04
        g = 0.45
        dmin = 2 # Minimal detectable distance
    elif args.sensor_type=='VelodyneHDLS2':
        #Velodyne HDL64S2
        n = 0.05
        g = 0.35
        dmin = 2
    d = np.sqrt(pts_3D[:,0] * pts_3D[:,0] + pts_3D[:,1] * pts_3D[:,1] + pts_3D[:,2] * pts_3D[:,2])
    detectable_points = np.where(d>dmin)
    d = d[detectable_points]
    pts_3D = pts_3D[detectable_points]
    beta_usefull = Radomized_beta.get_beta(pts_3D[:,0], pts_3D[:, 1], pts_3D[:, 2])
    dmax = -np.divide(np.log(np.divide(n,(pts_3D[:,3] + g))),(2 * beta_usefull))
    dnew = -np.log(1 - 0.5) / (beta_usefull)
    probability_lost = 1 - np.exp(-beta_usefull*dmax)
    lost = np.random.uniform(0, 1, size=probability_lost.shape) < probability_lost
    if Radomized_beta.beta == 0.0:
        dist_pts_3d = np.zeros((pts_3D.shape[0], 5))
        dist_pts_3d[:, 0:4] = pts_3D
        dist_pts_3d[:, 4] = np.zeros(np.shape(pts_3D[:, 3]))
        return dist_pts_3d, []
    cloud_scatter = np.logical_and(dnew < d, np.logical_not(lost))
    random_scatter = np.logical_and(np.logical_not(cloud_scatter), np.logical_not(lost))
    idx_stable = np.where(d<dmax)[0]
    old_points = np.zeros((len(idx_stable), 5))
    old_points[:,0:4] = pts_3D[idx_stable,:]
    old_points[:,3] = old_points[:,3]*np.exp(-beta_usefull[idx_stable]*d[idx_stable])
    old_points[:, 4] = np.zeros(np.shape(old_points[:,3]))
    cloud_scatter_idx = np.where(np.logical_and(dmax<d, cloud_scatter))[0]
    cloud_scatter = np.zeros((len(cloud_scatter_idx), 5))
    cloud_scatter[:,0:4] = pts_3D[cloud_scatter_idx,:]
    cloud_scatter[:,0:3] = np.transpose(np.multiply(np.transpose(cloud_scatter[:,0:3]), np.transpose(np.divide(dnew[cloud_scatter_idx],d[cloud_scatter_idx]))))
    cloud_scatter[:,3] = cloud_scatter[:,3]*np.exp(-beta_usefull[cloud_scatter_idx]*dnew[cloud_scatter_idx])
    cloud_scatter[:, 4] = np.ones(np.shape(cloud_scatter[:, 3]))
    # Subsample random scatter depending on the noise in the lidar
    random_scatter_idx = np.where(random_scatter)[0]
    scatter_max = np.min(np.vstack((dmax, d)).transpose(), axis=1)
    drand = np.random.uniform(high=scatter_max[random_scatter_idx])
    # scatter outside min detection range and do some subsampling. Not all points are randomly scattered.
    # Fraction of 0.05 is found empirically.
    drand_idx = np.where(drand>dmin)
    drand = drand[drand_idx]
    random_scatter_idx = random_scatter_idx[drand_idx]
    # Subsample random scattered points to 0.05%
    print(len(random_scatter_idx), args.fraction_random)
    subsampled_idx = np.random.choice(len(random_scatter_idx), int(args.fraction_random*len(random_scatter_idx)), replace=False)
    drand = drand[subsampled_idx]
    random_scatter_idx = random_scatter_idx[subsampled_idx]
    random_scatter = np.zeros((len(random_scatter_idx), 5))
    random_scatter[:,0:4] = pts_3D[random_scatter_idx,:]
    random_scatter[:,0:3] = np.transpose(np.multiply(np.transpose(random_scatter[:,0:3]), np.transpose(drand/d[random_scatter_idx])))
    random_scatter[:,3] = random_scatter[:,3]*np.exp(-beta_usefull[random_scatter_idx]*drand)
    random_scatter[:, 4] = 2*np.ones(np.shape(random_scatter[:, 3]))
    dist_pts_3d = np.concatenate((old_points, cloud_scatter,random_scatter), axis=0)
    color = []
    return dist_pts_3d, color

In the end, old_points are returned although only the points with a distance larger than dmax were removed. Or am I misunderstanding the algorithm?

I also found this related issue: #21
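To make the question concrete: the behaviour the reporter seems to expect would combine the distance check with the sampled lost mask when selecting the stable points. A sketch of that idea with made-up inputs (this is my own illustration, not the repository's code):

```python
import numpy as np

def keep_stable_points(pts, d, dmax, beta, rng):
    """Keep points within dmax that were also not sampled as lost."""
    # probability that a return is lost entirely, as in haze_point_cloud
    probability_lost = 1 - np.exp(-beta * dmax)
    lost = rng.uniform(size=d.shape) < probability_lost
    # contrast with idx_stable = np.where(d < dmax)[0] alone, which ignores 'lost'
    keep = (d < dmax) & ~lost
    return pts[keep]
```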

Have a question about thermal data

Hello, I have a question about your dataset (this is not a real issue).
The DENSE dataset is super attractive, so I really want to use it.
BTW, your dataset includes thermal data in 8 bit; do you also have 14-bit thermal data?
14-bit thermal data contains more detail, so I am really curious about this.
If you have it, could you share it?

P.S. The DENSE dataset is really cool. I'm trying to calibrate the thermal data with the stereo images for my research.

OpenGL issue. AttributeError: 'list' object has no attribute 'x'

I am trying to run DatasetViewer.

Even though the window with the images is running, I am getting many of these errors in the console:

Traceback (most recent call last):
  File "/Users/kirill/.conda/envs/SeeingThroughFog/lib/python3.6/site-packages/pyqtgraph/opengl/GLViewWidget.py", line 189, in paintGL
    self.setModelview()
  File "/Users/kirill/.conda/envs/SeeingThroughFog/lib/python3.6/site-packages/pyqtgraph/opengl/GLViewWidget.py", line 142, in setModelview
    m = self.viewMatrix()
  File "/Users/kirill/.conda/envs/SeeingThroughFog/lib/python3.6/site-packages/pyqtgraph/opengl/GLViewWidget.py", line 152, in viewMatrix
    tr.translate(-center.x(), -center.y(), -center.z())
AttributeError: 'list' object has no attribute 'x'

Looks like something is working incorrectly. No idea how to fix it. I have tried a few versions of pyopengl, but it didn't help.

How to get Depth

Thanks for the great work!

I want to know how the depth maps are obtained in supplementary 5.1.

Request for the Detailed Network Script or Architecture

Hello,

I read your paper and would like to reproduce it. However, the details of the architecture in Figure 4 are not clear. Could you please release the network script? If not, could you please provide a table of detailed network layers?

Thank you.

where is the train & test code ?

Hi, thanks for nice research and code upload.
I viewed the repository, but couldn't find the main training & evaluation code.
Where is it? I'm sorry if I missed something.

Not able to install conda env

Hello,

I tried the command conda env create -f environment.yml; it is not working and repeatedly says to restart the kernel after package installations.

[Feature request] Documentation for dataset and directory structure

Currently, the dataset is given with no documentation on what each folder contains. Can we add to the README.md a short description of what the data in each folder represents, and whether it is used in the paper or not? This would allow students and researchers to use this dataset without matching back and forth between the paper/exploratory github code and the dataset (which currently involves a bit of educated guesswork).

For example, right now, I don't know what the difference is between the lidar_hdl64_strongest and lidar_hdl64_strongest_stereo_left. Likewise, gated_full_rect and gatedX_rect. This documentation would elucidate these types of questions.

I think a good method to document this would simply be a table format like this:

| Folder | Used for Training | Used for Testing | Description |
| --- | --- | --- | --- |
| cam_stereo_left | yes | yes | 8-bit RGB images captured from the left stereo camera. cam_stereo_left_lut is derived from this... |
| fir_axis | no | no | 8-bit thermal infrared images |
| gated0_rect | yes | yes | Gated infrared images. Different from gated1_rect, gated2_rect and gated_full_rect in these ways: ... |
| ... | ... | ... | ... |
| FOLDER_X | ? | ? | Description of FOLDER_X |

If any of the authors would be interested in spending 15-20 min guiding me through this dataset, I would be more than happy to submit a PR for this addition myself.

SeeingThroughFogData/cam_stereo_left

Dear people,

Thank you for the nice project and sharing the dataset.

I am trying to load the DENSE dataset for a sensor fusion project. However, according to this website, the data should be under

SeeingThroughFogData
|-- cam_stereo_left

but I cannot find any data.

How can I load the following data:

|-- SeeingThroughFogData
    |-- cam_stereo_left
        |-- *.tiff
        |-- ...
    |-- cam_stereo_left_lut
        |-- *.png
        |-- ...
    |-- lidar_hdl64_last
        |-- *.bin
        |-- ...
    |-- lidar_hdl64_strongest
        |-- *.bin
        |-- ...
    |-- ...

Regards, and thank you

Training object detector on lidar

Has anyone successfully trained an object detector (yolov3, ssd, etc.) on just lidar_hdl64_last_stereo_left or lidar_hdl64_strongest_stereo_left? I'm getting pretty poor results, which I believe is due to the sparsity of the points. Anyone got tips for training?

Lidar Data Format and Foggification Parameters for VLP32 Lidar

Hello,

I downloaded some HDL64 and VLP32 lidar data from the DENSE dataset. Each data file consists of 5 columns, where the first 3 columns are x/y/z coordinates and the 4th column is intensity. What is the meaning of the 5th column?

I have a VLP32 lidar and would like to apply foggification to its data frames. Have you measured the parameters g and n for the VLP32 lidar? If not, is it possible to derive them from matching data frames of the HDL64 and VLP32 lidars and the known parameters of the HDL64 lidar?

Thank you and looking forward to your reply.
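For the relation between these parameters and point drop-out: the released foggification code computes a maximum viewing distance per point from n, g and the attenuation coefficient beta. Rearranged as a small helper (the function name is mine):

```python
import numpy as np

def max_viewing_distance(intensity, beta, n, g):
    # relation used in haze_point_cloud: dmax = -ln(n / (i + g)) / (2 * beta)
    return -np.log(n / (np.asarray(intensity) + g)) / (2 * beta)
```

With matched HDL64/VLP32 frames, one could in principle fit n and g for the VLP32 by regressing the observed drop-out against this relation, but that is speculation on my part, not something the authors have documented.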

3D information missing from labels

Some labels apparently don't have the information to draw the 3D bounding box in lidar coordinates (height, width, length, posx, posy, posz, and so on).

So instead of looking like this
PassengerCar 0.00 1 -1 218.00 258.00 313.00 309.00 1.38 1.81 4.16 -11.19 1.31 64.94 -4.676 0.000 0.011 -3.106 1.00 0.0053058781 -0.0000952959 -0.9998246710 0.0179573322 True True True False
they look like this
PassengerCar 0.00 -1 -1 388.62 243.29 445.67 290.16 -1.00 -1.00 -1.00 -1000.00 -1000.00 -1000.00 -1.000 -1.000 -1.000 -1.000 1.00 -1000.0000000000 -1000.0000000000 -1000.0000000000 -1000.0000000000 -1 -1 -1 -1
(it's all -1s and -1000s instead of actual information)

(the lines above are from /gt_labels/gated_labels_TMP/2018-02-12_15-59-37_00070.txt as an example)

Does anyone know if this is a bug or if this information is simply unavailable?

Thanks in advance!
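Until the maintainers answer, a pragmatic workaround is to skip labels without 3D information. Since the placeholder labels use -1/-1000 throughout, checking a single location field is enough; the field index below assumes the KITTI-style layout visible in the two example lines above:

```python
def has_3d_info(label_line):
    """Return True if a label line carries real 3D box information.

    Assumes KITTI-style fields: 8-10 are h/w/l, 11-13 are posx/posy/posz;
    placeholder labels store -1000 there, so checking posx suffices.
    """
    fields = label_line.split()
    return float(fields[11]) > -999.0
```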

Can't fully download the dataset

Hello! Thanks for your works.

I tried to download SeeingThroughFogCompressed via Chrome or wget.
However, the download was reported as complete after only about 5% of the dataset had been downloaded.
I have tried many times, but it didn't work.
Is there any way to reliably download the dataset?

Unable to download the dataset

Hi,

Great work!
I'm interested in experimenting on the dataset, and I've got a link to download it.
However, I can't download any file. It always shows empty files. (It worked a few months ago but not now.)
Can someone help me with this? Thanks!

Lidar to cam projections

Hello!

I am trying to project lidar data onto the camera images. I have seen the calibration files and your code for reading them, but it is still not easy to understand how the projection should work. The best I could make so far is shown below, but something is definitely missing :).

[two projection screenshots]

the right one seems to look like this:

[screenshot]

I am doing:

scan = load_velodyne_scan(scan_path)
velodyne_to_camera, camera_to_velodyne, P, R, vtc, radar_to_camera, zero_to_camera = load_calib_data(calib_root, name_camera_calib, tftree)

ps = project_3d_to_2d(scan[:,:3].transpose(), vtc)

The first image is what I get when I use vtc on "lidar_hdl64_strongest";
the second is when I use vtc on "lidar_vlp32_strongest", which seems much better, but still with some shift.

Can you please give some more details on how to make projection with the calibration you have?

Thank you a lot in advance!
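For reference, the standard pinhole projection that project_3d_to_2d presumably performs is: append a homogeneous 1, multiply by the 3x4 (or top 3 rows of a 4x4) matrix, and divide by the third coordinate. A self-contained sketch of that step (my own implementation, not the repository's):

```python
import numpy as np

def project_3d_to_2d(pts, proj):
    """pts: 3xN points in the source frame; proj: 3x4 (or 4x4) projection matrix."""
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    uvw = np.asarray(proj)[:3] @ pts_h
    in_front = uvw[2] > 0              # keep only points in front of the camera
    uv = uvw[:2, in_front] / uvw[2, in_front]
    return uv, in_front
```

Note that forgetting the z > 0 mask projects points behind the camera into the image, which produces exactly the kind of mirrored/shifted clutter shown in the screenshots.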

Gated to RGB homography

In the supplementary material, it says that a homography map between gated and RGB was calculated to warp gated into the RGB plane. Is this transformation matrix provided anywhere?

It also says that RGB was cropped to the gated FOV. Was there any resizing done prior to cropping, or was it just a simple crop?

Corresponding yaw angle in lidar frame : 3D BB GTs

Hi,
Since the 3D BB annotations are in the camera coordinate system, could you please let me know how to get the corresponding yaw angle
for the objects in the lidar frame? I would like to know if there is any alignment difference in the forward-looking axes of the camera and lidar, i.e. the constant angle that needs to be added to 'rotz' in the annotations. Also, please let me know what 'orient3d' means in the annotations.

Best regards,
George
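Not an authoritative answer, but if the labels follow the KITTI convention (camera: x right, y down, z forward; lidar: x forward, y left, z up), the heading conversion would be the usual one sketched below. Whether DENSE uses exactly this convention is an assumption the authors would need to confirm:

```python
import numpy as np

def camera_roty_to_lidar_yaw(rot_y):
    # under the KITTI convention, a heading rot_y about the camera y axis
    # maps to a lidar yaw of -rot_y - pi/2 (e.g. rot_y = -pi/2 -> yaw = 0)
    yaw = -rot_y - np.pi / 2
    # wrap the result into (-pi, pi]
    return (yaw + np.pi) % (2 * np.pi) - np.pi
```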

Dataset download

Hello :)

Where can we download the Seeing through fog dataset instead of using the official website?

I tried to register but received the following e-mail:

Dear Sir or Madam,

Thank you very much for your message. I am currently out of office. 
I will reply as soon as possible. In urgent cases please contact my secretariat
or other known contact persons. 

Best regards

Klaus Dietmayer

Network Architecture Code

Hi,
First of all, I have to say your work is really fascinating, and the data you've provided is rich and useful.
Moreover, I am highly interested in working on a topic relevant to this paper, so the network architecture would be extremely helpful for its progress. I saw other comments with the same request and your response about improving it first. I believe it would be great if you shared the network architecture code even before your intended improvements, for the many interested researchers.
Thanks!

bool instead of float

Those 3 lines should be changed to bool(kitti_properties[*]),
otherwise displaying the labels won't work.

'visibleRGB': float(kitti_properties[23]),

'visibleGated': float(kitti_properties[24]),

'visibleLidar': float(kitti_properties[25]),

Also, I noticed that there is another bool value, kitti_properties[26], in the annotation files.
Does this last value stand for visibleRadar, or what does it represent?
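A side note on the cast: depending on how the flags are serialized ('True'/'False' strings in some label files, numeric -1 in others), a plain bool() needs care, because bool('False') is True in Python (any non-empty string is truthy). A defensive parser sketch (my own helper, not from read.py):

```python
def parse_visibility(token):
    """Parse a visibility flag that may be 'True'/'False' or numeric."""
    if token in ('True', 'False'):
        # compare the string explicitly; bool('False') would be True
        return token == 'True'
    return float(token) > 0  # numeric encoding, e.g. -1 for 'not set'
```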

Cannot Download Dataset

Hi, thanks for your impressive work! The dataset is well collected, with novel preprocessing tools provided.

I would like to use the dataset in my research work. I registered, and it said the download link would be sent to my email, but I did not receive any link or email. I have also checked the junk folder, so it was not missed there. Is there any problem? Thanks.

Btw, if the network mentioned in the paper "Seeing through Fog" could be released, it would be even more helpful for us to follow your work.

Looking forward to your reply, thanks!

Training SSD Model

Hello,

I am very impressed by your work in the SeeingThroughFog paper. I am working on training an SSD model using only camera stereo images to learn from your work. I have found 14 different classes in the dataset ('LargeVehicle', 'PassengerCar_is_group', 'RidableVehicle', 'Pedestrian', 'Pedestrian_is_group', 'PassengerCar', 'Vehicle', 'Obstacle', 'RidableVehicle_is_group', 'Vehicle_is_group', 'LargeVehicle_is_group', 'DontCare', 'train', 'person'). I was trying to understand the label classes, but I have a few questions. How many classes did you use for training? What is the difference between the label classes 'PassengerCar' and 'Vehicle'? What exactly does the label 'Vehicle_is_group' mean?

Also, while generating TFRecord files for the lidar and radar data, did you give the input as a projected 2D image?

Thank you very much. Looking forward to hearing from you back!

conda environment not working

Hi Mario,
when I want to use your conda environment, I unfortunately run into the following error.

Collecting pyqt5==5.14.1 (from -r /scratch_net/hox/mhahner/repositories/SeeingThroughFog/condaenv.6lohyfv7.requirements.txt (line 13))
  Using cached https://files.pythonhosted.org/packages/3a/fb/eb51731f2dc7c22d8e1a63ba88fb702727b324c6352183a32f27f73b8116/PyQt5-5.14.1.tar.gz
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/home/mhahner/scratch/apps/anaconda3/envs/LabelTool/lib/python3.7/tokenize.py", line 447, in open
        buffer = _builtin_open(filename, 'rb')
    FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-install-ptb8ajni/pyqt5/setup.py'

    ----------------------------------------

Pip subprocess error:
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-ptb8ajni/pyqt5/

CondaEnvException: Pip failed

Removing the following line from the yaml file also did not help.

prefix: /home/linux-user/miniconda3

It should be removed in general, I think, because it is user-specific, no?

Also, the file itself should be renamed to environment.yaml to be consistent with the readme.

I am using the latest conda 4.8.3.

P.S.: Thanks for this contribution to the research community on adverse weather.

scenes without labels

I suggest removing the following scenes from the split files because they do not contain any labels.

2018-02-09_14-51-35,00500
2018-02-05_13-01-37,00300
2018-02-09_09-45-12,00100
2018-02-05_13-01-37,00100
2018-02-05_12-48-21,00200
2018-02-09_14-51-35,00600
2018-02-04_14-31-18,00100
2018-02-06_17-24-50,00000
2018-02-09_18-49-42,00200
2018-02-03_23-02-24,00100
2018-12-10_07-23-55,00500
2018-12-09_10-45-12,00200
2018-10-08_08-10-40,01310
2018-03-15_09-29-41,00050
2018-03-15_09-28-52,00200
2018-10-08_08-10-40,01300
2018-03-15_09-42-29,00100
2018-03-15_09-42-29,00510
2018-10-29_15-09-31,01400
2019-01-09_10-55-25,01400
2018-10-29_16-00-52,02300
2018-10-29_15-37-43,01400
2018-10-08_08-10-40,02550
2018-10-29_16-00-52,00880
2018-10-29_16-00-52,02440
2018-10-29_15-46-53,02600
2018-10-29_15-02-37,00160
2018-12-10_10-51-06,00170
2018-02-04_11-24-11,00000
2019-01-09_14-50-13,00800
2018-12-10_10-51-06,00150
2018-02-04_11-27-05,00200
2018-02-07_12-04-45,00200
2019-01-09_10-55-25,02200
2018-02-06_14-33-00,00600
2019-01-09_11-15-11,03800
2018-12-10_10-51-06,00130
2018-02-07_11-55-33,00500
2019-01-09_13-42-11,00000
2019-01-09_14-54-03,02900
2019-01-09_10-55-25,02000
2018-02-12_11-00-35,00560
2018-02-04_21-47-40,00100
2018-02-04_21-22-55,00300
2018-02-04_21-37-51,00200
2018-02-07_18-20-02,00320
2018-02-07_18-20-02,00310
2018-02-07_18-20-02,00300
2018-02-05_22-27-19,00200
2018-02-12_09-17-17,00600
2018-12-10_08-22-28,00800
2018-12-10_09-39-47,00000
2018-02-05_22-27-19,00300
2018-02-12_08-57-46,00700
2018-02-12_08-57-46,00100
2018-12-17_19-33-05,02800
2018-02-08_17-42-24,00400
2018-02-12_06-46-13,00400
2018-12-17_07-36-29,01240
2018-12-11_14-42-41,00200
2018-12-17_19-27-00,01200
2018-12-10_10-04-35,00500
2018-02-04_11-15-24,00400
2018-10-08_08-10-40,00620
2018-12-10_10-51-06,00140

Attributes for each point in Lidar pcd files

Hi! I was checking the attributes in the lidar pcd files. I understood that 5 attributes are stored for each point. Other than (x, y, z), could you please let me know what the other two attributes are exactly? I noticed that one of the attributes has range [0, 255], and the other [0, 63].

Thanks and regards,
George Sebastian
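While waiting for a definitive answer: for a 64-beam HDL64, a plausible guess is that the [0, 255] attribute is the reflectance/intensity and the [0, 63] one the laser (ring) id, but that is only a guess on my part. The ranges themselves are easy to verify per file:

```python
import numpy as np

def column_ranges(bin_path):
    """Per-column min/max of a lidar .bin with 5 float32 values per point."""
    pts = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 5)
    return pts.min(axis=0), pts.max(axis=0)
```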

Specific data folders used for training and validation + Slow download speed

I was wondering which specific folders (cam_stereo_left, gated0_raw, etc...) contain the RGB, LIDAR, Gated, and Radar training/validation data in the paper. The overall download size is very large and the download speed is very slow, so I would like to avoid downloading irrelevant data.

Is there any README file or documentation describing what I'm asking?

Also, my download speed is roughly 1500 KB/s. Total download time for SeeingThroughFogCompressed.zip is projected to be a couple days. Is this to be expected? I'm downloading from the US west coast. Is there any quicker way to get the data?

Matching lidar strongest and last returns

Hi,

since the strongest and last returns are stored in different files with different numbers of points per point cloud, I'm having trouble matching them together, e.g. to find out what the corresponding last return is for a strongest return yielded by the same laser beam. Is it possible to somehow reconstruct this information?
I converted the position data to polar coordinates but noticed the points are not evenly distributed, and one series of points might overlap with another (see below), so matching them by polar-angle similarity won't work properly either.

(attached image: polar-coordinate plot of the returns)
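One possible workaround (a sketch, not an official answer): if the fifth per-point attribute really is the laser/ring index, both returns of one beam share the same ring and nearly the same azimuth, so a per-ring nearest-azimuth match avoids the overlap problem of global angle matching:

```python
import numpy as np

def match_returns(strongest, last, max_az_diff=0.002):
    """Greedy per-ring matching of strongest to last returns.

    strongest, last: Nx5 arrays assumed to hold x, y, z, intensity, ring.
    Returns an index array into `last` (-1 where no last return lies
    within `max_az_diff` radians of azimuth on the same ring).
    """
    az_s = np.arctan2(strongest[:, 1], strongest[:, 0])
    az_l = np.arctan2(last[:, 1], last[:, 0])
    match = np.full(len(strongest), -1, dtype=int)
    for ring in np.unique(strongest[:, 4]):
        idx_s = np.where(strongest[:, 4] == ring)[0]
        idx_l = np.where(last[:, 4] == ring)[0]
        if len(idx_l) == 0:
            continue
        order = np.argsort(az_l[idx_l])
        sorted_az = az_l[idx_l][order]
        pos = np.searchsorted(sorted_az, az_s[idx_s])
        for i, p in zip(idx_s, pos):
            # compare against both neighbours in the sorted azimuth list
            cands = [c for c in (p - 1, p) if 0 <= c < len(sorted_az)]
            best = min(cands, key=lambda c: abs(sorted_az[c] - az_s[i]))
            if abs(sorted_az[best] - az_s[i]) <= max_az_diff:
                match[i] = idx_l[order][best]
    return match
```

This assumes the ring index survives in both files; if it does not, the ring can sometimes be recovered from the elevation angle of each point.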

FIR data labels

Hi, I was wondering if there are labels for the FIR data. I haven't found any in the folders. We've tried overlaying the RGB labels after normalizing them to the FIR image size. However, it seems like there is an offset, perhaps due to the placement of the FIR cameras.

How did you guys handle this in the paper?
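One pragmatic heuristic (a sketch, not what the paper does): rescale the RGB boxes to the FIR resolution, then correct a constant pixel shift caused by the different camera placement, with the shift hand-tuned on a few frames. A proper solution would reproject via the FIR intrinsics/extrinsics and depth:

```python
def rgb_box_to_fir(box, rgb_wh, fir_wh, offset=(0, 0)):
    """Rescale an (x1, y1, x2, y2) box from the RGB image to the FIR image.

    offset is a hand-tuned (dx, dy) pixel shift compensating the
    different mounting position of the FIR camera (hypothetical values;
    this ignores parallax, which grows for close objects).
    """
    sx = fir_wh[0] / rgb_wh[0]
    sy = fir_wh[1] / rgb_wh[1]
    dx, dy = offset
    x1, y1, x2, y2 = box
    return (x1 * sx + dx, y1 * sy + dy, x2 * sx + dx, y2 * sy + dy)
```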

Releasing of Raw Radar Spectrum

Hello,

I downloaded the DENSE dataset but found that its radar data only consists of discrete points. Do you have a plan to release the raw radar data as shown in Fig. 3 of your main paper?

Thank you

How to decompress SeeingThroughFogCompressed

I have downloaded parts 01 to 18; how should I unpack them?

My file directory is as follows:
SeeingThroughFogCompressed.z01 SeeingThroughFogCompressed.z08 SeeingThroughFogCompressed.z15
SeeingThroughFogCompressed.z02 SeeingThroughFogCompressed.z09 SeeingThroughFogCompressed.z16
SeeingThroughFogCompressed.z03 SeeingThroughFogCompressed.z10 SeeingThroughFogCompressed.z17
SeeingThroughFogCompressed.z04 SeeingThroughFogCompressed.z11 SeeingThroughFogCompressed.z18
SeeingThroughFogCompressed.z05 SeeingThroughFogCompressed.z12 SeeingThroughFogCompressed.zip
SeeingThroughFogCompressed.z06 SeeingThroughFogCompressed.z13
SeeingThroughFogCompressed.z07 SeeingThroughFogCompressed.z14

Domain Adaptation using SeeingThroughFog

Dear Mario Bijelic, thank you for your great work! Recently, I have been trying to use CyCADA to realize domain adaptation from clear winter captures to adverse weather scenes, as shown on your official website. However, my CyCADA model generates snow poorly after training on SeeingThroughFog for 70 epochs, so I would like to ask for your help. Could you tell me about your training steps and training parameter settings?

(attached image: 1662179604106)

Lidar foggification related question

I have reviewed the supplementary documents and the relevant code, but I have a question: Algorithm 1 only shows the area where the red box is located; maybe some code is missing.
(attached screenshot: Snipaste_2020-11-02_10-57-31)

UnboundLocalError: local variable 'timedelays' referenced before assignment

Where can I find/get timestamps.json for --path_timestamps?

parser.add_argument('--path_timestamps', default='./timestamps.json', help='Prevent Label Changes')

Problem when reading matching file for AdverseWeather2Algolux!

python-BaseException
Traceback (most recent call last):
  File "/scratch_net/hox/mhahner/apps/pycharm-2020.1.2/plugins/python/helpers/pydev/pydevd.py", line 1448, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/scratch_net/hox/mhahner/apps/pycharm-2020.1.2/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/scratch_net/hox/mhahner/repositories/SeeingThroughFog/tools/DatasetViewer/DataViewer_V2.py", line 886, in <module>
    main(args)
  File "/scratch_net/hox/mhahner/repositories/SeeingThroughFog/tools/DatasetViewer/DataViewer_V2.py", line 867, in main
    DatasetViewer(root_dir, topics, timedelays, can_speed_topic, can_steering_angle_topic,
UnboundLocalError: local variable 'timedelays' referenced before assignment
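The error means the code path that assigns `timedelays` is skipped when `timestamps.json` cannot be read, so the name is never bound. A defensive sketch (not the repository's actual fix) that always binds the variable:

```python
import json
import os

def load_timedelays(path_timestamps):
    """Return the time-delay mapping from timestamps.json,
    or an empty dict if the file is missing, so callers
    never hit an UnboundLocalError."""
    if os.path.isfile(path_timestamps):
        with open(path_timestamps) as f:
            return json.load(f)
    return {}
```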

Dataset format

Hi thanks for this dataset!

I am trying to figure out where RGB images and 2D bounding box labels are located in the download folder.
I see the following contents after downloading SeeingThroughFogCompressed.

calib_cam_stereo_left.json
calib_gated_bwv.json
calib_tf_tree_full.json
filtered_relevant_can_data.zip
gated_full_acc_rect.zip
gt_labels_cmore_copied_together.zip
labeltool_labels.zip
lidar_hdl64_last_gated.zip
lidar_hdl64_last_stereo_left.zip
lidar_hdl64_strongest_gated.zip
lidar_hdl64_strongest_stereo_left.zip
radar_targets.zip
road_friction.zip
weather_station.zip

Converting to KITTI format

Hi,
If I would like to generate the calib files for each sample like in KITTI:

P0: 7.070493000000e+02 0.000000000000e+00 6.040814000000e+02 0.000000000000e+00 0.000000000000e+00 7.070493000000e+02 1.805066000000e+02 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 0.000000000000e+00

P1: 7.070493000000e+02 0.000000000000e+00 6.040814000000e+02 -3.797842000000e+02 0.000000000000e+00 7.070493000000e+02 1.805066000000e+02 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 0.000000000000e+00

P2: 7.070493000000e+02 0.000000000000e+00 6.040814000000e+02 4.575831000000e+01 0.000000000000e+00 7.070493000000e+02 1.805066000000e+02 -3.454157000000e-01 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 4.981016000000e-03

P3: 7.070493000000e+02 0.000000000000e+00 6.040814000000e+02 -3.341081000000e+02 0.000000000000e+00 7.070493000000e+02 1.805066000000e+02 2.330660000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 3.201153000000e-03

R0_rect: 9.999128000000e-01 1.009263000000e-02 -8.511932000000e-03 -1.012729000000e-02 9.999406000000e-01 -4.037671000000e-03 8.470675000000e-03 4.123522000000e-03 9.999556000000e-01

Tr_velo_to_cam: 6.927964000000e-03 -9.999722000000e-01 -2.757829000000e-03 -2.457729000000e-02 -1.162982000000e-03 2.749836000000e-03 -9.999955000000e-01 -6.127237000000e-02 9.999753000000e-01 6.931141000000e-03 -1.143899000000e-03 -3.321029000000e-01

Tr_imu_to_velo: 9.999976000000e-01 7.553071000000e-04 -2.035826000000e-03 -8.086759000000e-01 -7.854027000000e-04 9.998898000000e-01 -1.482298000000e-02 3.195559000000e-01 2.024406000000e-03 1.482454000000e-02 9.998881000000e-01 -7.997231000000e-01

would I have to use the functions in DataViewer_V2.py to generate these values? Does the camera_to_velodyne matrix returned by the load_calib_data function correspond to Tr_imu_to_velo?

If I am only using the lidar files from lidar_hdl64_last_stereo_left, will I be able to iterate over the files and have a corresponding camera calib file and tf tree?

Also, is there a reason some lidar files are .npz and some are .bin?

Thank you for the dataset!
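As a sketch of the export step (assuming you have already assembled the 3×4 projection matrix P2, the 3×3 rectification R0_rect, and the 3×4 velodyne-to-camera extrinsics as NumPy arrays from the JSON calibration files; how to obtain them from this dataset's calibration is not confirmed here):

```python
import numpy as np

def write_kitti_calib(path, P2, R0_rect, Tr_velo_to_cam):
    """Write a KITTI-style calib file.

    P2: 3x4 projection matrix, R0_rect: 3x3, Tr_velo_to_cam: 3x4.
    P0/P1/P3 are filled with P2 since only one camera is exported;
    Tr_imu_to_velo is a placeholder identity transform.
    """
    def fmt(name, mat):
        return name + ": " + " ".join("%.12e" % v for v in np.asarray(mat).flatten())

    lines = [fmt("P0", P2), fmt("P1", P2), fmt("P2", P2), fmt("P3", P2),
             fmt("R0_rect", R0_rect), fmt("Tr_velo_to_cam", Tr_velo_to_cam),
             fmt("Tr_imu_to_velo", np.hstack([np.eye(3), np.zeros((3, 1))]))]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```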

Using Friction Data

@MarioBijelic
First of all, thank you for the dataset in adverse weather. I am planning to make use of the friction data provided in the dataset; however, is there a way to map the friction data to location or GPS data?
Also, I assume that the friction data corresponds to the friction values under the vehicle and is time-synchronized with the image and Velodyne data under the same name.

Reading Cam Labels

Hello,

I am referring to label files from: cam_left_labels_TMP/

It would be helpful to get information about what these labels denote and how to read these files.

(attached screenshot: Issue_Labels)
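The label files are space-separated and KITTI-like. A hedged parser sketch assuming the standard KITTI column order (this dataset appends extra columns, which are kept unparsed here; the exact meaning of the extras is not confirmed in this thread):

```python
def parse_label_line(line):
    """Parse one KITTI-style label line (assumed column order).

    Columns 0-14: class, truncation, occlusion, alpha,
    bbox (x1, y1, x2, y2), dimensions (h, w, l),
    location (x, y, z) in the camera frame, rotation_y.
    Any further columns are dataset-specific extras.
    """
    f = line.split()
    return {
        "type": f[0],
        "truncated": float(f[1]),
        "occluded": int(float(f[2])),
        "alpha": float(f[3]),
        "bbox": [float(v) for v in f[4:8]],
        "dimensions": [float(v) for v in f[8:11]],   # h, w, l
        "location": [float(v) for v in f[11:14]],    # x, y, z
        "rotation_y": float(f[14]),
        "extra": f[15:],
    }
```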
