
3dfeatnet's People

Contributors

gtinchev, yewzijian


3dfeatnet's Issues

How to convert point clouds in other formats

Hi,

Thanks for sharing your code, awesome work.

Just a short question - what software/method do you use to convert point clouds from different formats into the no-header, Nx6x4 bytes .bin file (as per this thread)?

Thanks a lot!
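In case it helps others landing here: a minimal numpy sketch of writing and reading such a headerless N x 6 float32 .bin file (the loader and file names are placeholders; only the no-header, 6 columns x 4 bytes layout comes from the thread linked above):

    import numpy as np

    # Hypothetical source: any (N, 6) float array, e.g. x, y, z plus three
    # extra columns, loaded however your original format requires.
    cloud = np.loadtxt("scan.txt", dtype=np.float32)
    assert cloud.ndim == 2 and cloud.shape[1] == 6

    # Write a headerless binary file: N * 6 float32 values, row-major.
    cloud.astype(np.float32).tofile("scan.bin")

    # Reading it back mirrors the expected format:
    recovered = np.fromfile("scan.bin", dtype=np.float32).reshape(-1, 6)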

segfault when training or inferring

When I run inference.sh, I get the following output:

    gpu: 0
    output_dir: ./example_data/results
    num_samples: 64
    checkpoint: ./ckpt/checkpoint.ckpt
    base_scale: 2.0
    data_dir: ./example_data
    num_points: -1
    model: 3DFeatNet
    feature_dim: 32
    randomize_points: True
    use_keypoints_from: None
    data_dim: 6
    max_keypoints: 1024
    min_response_ratio: 0.01
    nms_radius: 0.5
2019-12-04 14:52:33,039 [DEBUG] __main__ - In compute_descriptors()
2019-12-04 14:52:33,039 [INFO] __main__ - Computed descriptors will be saved to ./example_data/results
2019-12-04 14:52:33,039 [INFO] __main__ - Found 4 bin files in directory: ./example_data, each assumed to be of dim 6
2019-12-04 14:52:33,039 [INFO] Feat3dNet - Model parameters: {'num_samples': 64, 'NoRegress': False, 'Attention': True, 'BaseScale': 2.0, 'feature_dim': 32, 'num_clusters': -1}
Segmentation fault

With

Python 3.5.3
tensorflow-gpu 1.14.0
cuda 10.0

What might be the problem?

About the "keypoints -> clusters" procession during training

Hi,
Thank you for open-sourcing your code. While reading it, I found that the "keypoints -> clusters" grouping is performed twice, once in the feature detection module and once in the feature extraction module. Why not just take the clusters from the feature detection module directly as the input to the feature extraction module?
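To make the question concrete, here is a rough numpy/scipy sketch of the grouping step I am referring to (a simplified stand-in for the repo's custom TF ops, with made-up parameter values):

    import numpy as np
    from scipy.spatial import cKDTree

    def group_neighbours(points, keypoints, radius=2.0, nsample=64):
        """Ball-query style grouping: for each keypoint, gather up to nsample
        points within radius (a simplified stand-in for the QueryBallPoint op)."""
        tree = cKDTree(points[:, :3])
        return [points[tree.query_ball_point(kp, r=radius)[:nsample]]
                for kp in keypoints[:, :3]]

    points = np.random.rand(10000, 6).astype(np.float32)
    keypoints = points[np.random.choice(len(points), 512, replace=False)]
    clusters_for_detection = group_neighbours(points, keypoints)    # grouping inside the detector
    clusters_for_description = group_neighbours(points, keypoints)  # same grouping again in the descriptor

The two calls compute the same clusters, which is why I am asking whether the second one could reuse the first.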

Data preprocessing scripts issue

Hi, I am following your guide to preprocess the Oxford dataset for training and I am running into several issues; are you sure the GitHub version is the tested one?

The issues include:

  • not transposing reflectance matrix resulting in dimension error, i.e. scripts_data_processing/oxford/internal/BuildPointclouds.m should have line
    pcloud = pointCloud(pcloud', 'Intensity', reflectance');
    instead of
    pcloud = pointCloud(pcloud', 'Intensity', reflectance);
  • some of the datasets suggested in the datasets_train.txt file violate the assertion
    assert(startIdx + 5000 > length(ins_positions))
  • sometimes endIdx gets too large and an array-out-of-bounds error occurs (though I guess this one can be caught and the corresponding frames ignored)

Keypoint/Description combination evaluation

Hi,

I'm trying to generate Fig 4. based on the data you have provided (your keypoint extraction + your descriptor).
Am I correct in assuming the data for testing is the test models you have provided with the ground truth alignment?
Also, have you used the provided checkpoint for the evaluation?
Would you provide the script for generating the Precision/Meter curve?

Please correct me if I'm wrong - as far as I understand the meters in Fig 4. refer to the distance between the selected nearest neighbour in the second cloud and the projected ground truth location of the original keypoint.

Therefore, in order to generate the plot you need to:

1. Detect keypoints in all clouds from **test_models**
2. For each line in groundtruth.txt:
        for kp in (each keypoint in left cloud):
            nn_second_kp = the keypoint in the second cloud whose descriptor is closest in feature space
            projected_gt = project kp given gt transform
            distance_in_3d_space = norm(projected_gt - nn_second_kp)
            correct_match = 1

If that's the case, when would you consider a match incorrect?
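To make this concrete, here is a minimal sketch of the evaluation as I understand it (the array shapes, the 4x4 homogeneous T_gt convention and the thresholds are my assumptions, not your evaluation script):

    import numpy as np

    def match_precision(kp1, desc1, kp2, desc2, T_gt, thresholds=(0.5, 1.0, 2.0)):
        """Fraction of cloud-1 keypoints whose nearest neighbour in descriptor
        space (among cloud-2 keypoints) lies within each distance threshold of
        the ground-truth-projected location."""
        # Nearest neighbour in feature space for every keypoint of cloud 1
        dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
        nn_xyz = kp2[np.argmin(dists, axis=1), :3]

        # Project cloud-1 keypoints into cloud 2 with the ground-truth transform
        kp1_h = np.hstack([kp1[:, :3], np.ones((len(kp1), 1))])
        projected = (T_gt @ kp1_h.T).T[:, :3]

        err = np.linalg.norm(projected - nn_xyz, axis=1)
        return [float(np.mean(err < t)) for t in thresholds]

In that reading, a match is "incorrect" simply when err exceeds the distance on the x-axis of Fig 4, which is the part I would like confirmed.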

Many thanks,
Georgi

Non-Maximal Suppression

Hi, thank you for sharing your code. When I read your paper, I got confused about the non-maximal suppression in the inference stage. What do you mean by

We apply non-maximal suppression over a fixed radius r_nms around each point, and keep the remaining M points with the highest attention weights.

I know how to do non-maximal suppression in object detection, but I cannot understand the steps here. In my understanding, points in the same cluster share the same feature vector and attention weight, so how can you apply NMS, and what is its effect?
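For reference, here is how I currently picture that step, as a small numpy/scipy sketch over per-point attention scores (my own reading of the quoted sentence, not your code; the radius and keypoint count are example values):

    import numpy as np
    from scipy.spatial import cKDTree

    def nms_keypoints(xyz, attention, nms_radius=0.5, max_keypoints=1024):
        """Radius-based NMS on per-point attention: a point survives only if no
        higher-scoring point lies within nms_radius of it, then the top
        max_keypoints survivors are kept."""
        order = np.argsort(-attention)              # highest attention first
        tree = cKDTree(xyz)
        suppressed = np.zeros(len(xyz), dtype=bool)
        keep = []
        for i in order:
            if suppressed[i]:
                continue
            keep.append(i)
            # suppress all lower-scoring points within the NMS radius
            suppressed[tree.query_ball_point(xyz[i], r=nms_radius)] = True
        return np.array(keep[:max_keypoints])

Is this roughly what the paper means, and does it imply the attention weight is defined per point rather than per cluster?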

Thank you for your time !

Problems executing train.py

Traceback (most recent call last):
  File "train.py", line 327, in <module>
    train()
  File "train.py", line 99, in train
    train_data = DataGenerator(train_file, num_cols=args.data_dim)
  File "/home/3DFeatNet/data/datagenerator.py", line 21, in __init__
    self.load_metadata(filename)
  File "/home/3DFeatNet/data/datagenerator.py", line 34, in load_metadata
    fname, positives, negatives = [l.strip() for l in line.split('|')]
ValueError: not enough values to unpack (expected 3, got 1)

May I ask how to add parameters to make the program run normally?
Thanks a lot.
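For reference, the traceback implies that load_metadata expects every line of the training file to split into exactly three '|'-separated fields (filename, positives, negatives). Something like the following would at least parse, though the file name and the index lists here are made-up placeholders:

    # Made-up example line; only the three-field '|' structure is implied by
    # the traceback, the actual content of each field may differ.
    line = "some_dir/pointcloud_0001.bin | 2 3 5 | 17 42 98"
    fname, positives, negatives = [l.strip() for l in line.split('|')]
    print(fname, positives, negatives)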

Error when running inference_example.sh

When I run inference_example.sh, I get the following error. I have created a new folder called ckpt.

 File "3DFeatNet-master/venv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

NotFoundError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./ckpt/checkpoint.ckpt
         [[Node: save/RestoreV2_3 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_3/tensor_names, save/RestoreV2_3/shape_and_slices)]]
         [[Node: save/RestoreV2_24/_53 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_172_save/RestoreV2_24", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]

The training step works well. By the way, what loss value is recommended for deciding when to stop training?
Thank you so much for your help.

how to correctly evaluate registration

Hi, Zi Jian,

I'm currently comparing registration performance with 3DFeatNet. I find that directly using ransacRt runs only 35 iterations and gives a bad result on both sides.

So I'm a little curious how to use your script correctly for registration. Maybe it's related to the 99% confidence setting or something?
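For context on why the 35 iterations worry me: with the standard adaptive RANSAC termination, the number of iterations needed for confidence p, inlier ratio w and minimal sample size s is N = log(1 - p) / log(1 - w^s). A quick check (the inlier ratios below are just example values, not measured on this data):

    import math

    def ransac_iterations(p=0.99, w=0.3, s=3):
        """Iterations needed to draw at least one all-inlier minimal sample of
        size s with confidence p, given inlier ratio w."""
        return math.ceil(math.log(1 - p) / math.log(1 - w ** s))

    print(ransac_iterations(w=0.5))   # 35  -> consistent with stopping at 35 iterations if w is estimated near 0.5
    print(ransac_iterations(w=0.3))   # 169 -> many more iterations needed for a lower inlier ratio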

error occurred during training with TF 1.15.0

Hi,
I tried to train with TF 1.14, but it does not work; a "Segmentation Fault" is reported. I then tried TF 1.15, but it still cannot train the network; the error is as follows:

tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'FarthestPointSample' used by node detection/FarthestPointSample (defined at /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/ops.py:1748) with these attrs: [npoint=512]
    Registered devices: [CPU, XLA_CPU, XLA_GPU]
    Registered kernels: device='GPU'

It seems the FarthestPointSample (FPS) op can't run properly, but I can't fix it. I saw your reply in another issue saying that you can successfully train with TF 1.15; my environment is as follows:
python==3.5
TF==1.15.0
cuda==10.0
cudnn==7.6.0
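One thing I notice in the error is that the registered devices are [CPU, XLA_CPU, XLA_GPU] with no plain GPU, while the FarthestPointSample kernel is registered only for device='GPU'. A quick check of whether TF 1.15 actually sees the GPU (standard TF 1.x calls):

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    # If no '/device:GPU:0' appears here, the GPU-only custom ops
    # (FarthestPointSample, QueryBallPoint, ...) cannot be placed anywhere.
    print(tf.test.is_gpu_available())
    print([d.name for d in device_lib.list_local_devices()])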

No OpKernel was registered to support Op 'QueryBallPoint' with these attrs.

Hi,
Thanks for the great paper and the great code.

When I try to run inference_example.sh, I get the following error:

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'QueryBallPoint' with these attrs.  Registered devices: [CPU], Registered kernels:
  device='GPU'

         [[Node: description/layer1/QueryBallPoint = QueryBallPoint[nsample=64, radius=2, _device="/device:GPU:0"](strided_slice, detection/Identity)]]

I have compiled tf_grouping with the -lcudart argument, since without it I was getting an undefined symbol error, i.e.
g++ -std=c++11 tf_grouping.cpp tf_grouping_g.cu.o -o tf_grouping_so.so -shared -fPIC -I $TF_INC -I /usr/local/cuda-10.0/include -I $TF_INC/external/nsync/public -lcudart -L /usr/local/cuda-10.0/lib64/ -L$TF_LIB -ltensorflow_framework -O2 -D_GLIBCXX_USE_CXX11_ABI=0
Could you help me solve this, please?
I am on Ubuntu 18.04, Python 3.6, tf 1.4.0 and g++ 7.5.0.

Trained models used in the paper

Hi, I noticed that the pretrained model you made available outputs a vector with 32 features, while the registration tests shown in the supplementary material used feature vectors with higher dimensionality.
Did you notice any difference in performance when using a higher dimensionality? If so, could you make those pretrained models available?
Thank you

question about the NMS module

I have tested the model you sent me, and I am somewhat confused by the NMS module in the inference code. Why do you need NMS to process the data, and what is the difference between the attention and the feature in the code? I found that the output of the NMS module is not the same size as the input. I want to visualize the whole point cloud after registration, not just part of it. Can you help me with this? Looking forward to your reply.
Thanks!

Datagenerator.py error while reshaping pointcloud data

I am starting training on the Oxford dataset. Unfortunately, I received this error while reshaping the numpy array of the loaded point cloud.
I performed all the described preprocessing on the point clouds; could something have gone wrong in the process? I imagine you assumed that the data should have this shape before training.

Thanks in advance for your help.

Traceback (most recent call last):
  File "/mnt/gaiagpfs/users/homedirs/ccimarelli/opt/pycharm-community-2018.2.3/helpers/pydev/pydevd.py", line 1664, in <module>
    main()
  File "/mnt/gaiagpfs/users/homedirs/ccimarelli/opt/pycharm-community-2018.2.3/helpers/pydev/pydevd.py", line 1658, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/mnt/gaiagpfs/users/homedirs/ccimarelli/opt/pycharm-community-2018.2.3/helpers/pydev/pydevd.py", line 1068, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/mnt/gaiagpfs/users/homedirs/ccimarelli/opt/pycharm-community-2018.2.3/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/work/users/ccimarelli/git/3DFeatNet/train.py", line 327, in <module>
    train()
  File "/work/users/ccimarelli/git/3DFeatNet/train.py", line 151, in train
    augmentation=train_augmentations)
  File "/mnt/gaiagpfs/users/workdirs/ccimarelli/git/3DFeatNet/data/datagenerator.py", line 74, in next_triplet
    anchor = self.get_point_cloud(i_anchor)
  File "/mnt/gaiagpfs/users/workdirs/ccimarelli/git/3DFeatNet/data/datagenerator.py", line 112, in get_point_cloud
    num_cols=self.num_cols)
  File "/mnt/gaiagpfs/users/workdirs/ccimarelli/git/3DFeatNet/data/datagenerator.py", line 175, in load_point_cloud
    model = np.reshape(model, (-1, num_cols))
  File "/opt/apps/resif/data/production/v1.1-20180718/default/software/lib/TensorFlow/1.8.0-foss-2018a-Python-3.6.4-CUDA-9.1.85/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 279, in reshape
    return _wrapfunc(a, 'reshape', newshape, order=order)
  File "/opt/apps/resif/data/production/v1.1-20180718/default/software/lib/TensorFlow/1.8.0-foss-2018a-Python-3.6.4-CUDA-9.1.85/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 51, in _wrapfunc
    return getattr(obj, method)(*args, **kwds)
ValueError: cannot reshape array of size 100120 into shape (6)
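As a sanity check on my side: the failing call is np.reshape(model, (-1, num_cols)) with num_cols = 6, which only works if the number of float32 values in the .bin file is a multiple of 6. A quick check (the file path is a placeholder; float32 is an assumption based on the repo's data format):

    import numpy as np

    bin_path = "some_frame.bin"                      # placeholder path
    model = np.fromfile(bin_path, dtype=np.float32)  # how load_point_cloud presumably reads it
    print(model.size, model.size % 6)                # remainder must be 0 for reshape(-1, 6)
    # In my case the array has 100120 values and 100120 % 6 == 4, so the
    # reshape fails; 100120 % 4 == 0, so the file may in fact contain 4
    # columns (e.g. x, y, z, intensity) rather than 6.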

ISS+FPFH+ransac code

Hello! I want to reproduce Fig. 6 on my data, but I can't compute correct ISS+FPFH features.
Could you share your ISS+FPFH code?
Thank you!
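In the meantime, here is a rough sketch of ISS keypoints plus FPFH descriptors using Open3D rather than your code (API names are from recent Open3D releases; the radii are example values that need tuning):

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("cloud.pcd")          # placeholder input file
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))

    # ISS keypoint detection
    keypoints = o3d.geometry.keypoint.compute_iss_keypoints(
        pcd, salient_radius=0.5, non_max_radius=0.5)

    # FPFH descriptors on the full cloud; descriptors at the keypoints can then
    # be looked up by nearest neighbour in 3D if needed.
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=2.5, max_nn=100))
    print(len(keypoints.points), fpfh.data.shape)       # fpfh.data is 33 x N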
