
sgmnet's People

Contributors

vdvchen


sgmnet's Issues

SGMNet+SP error

@vdvchen Hi, when running SGMNet+SP I hit an error. It seems to be because SuperPoint descriptors are 256-dimensional while the SGMNet weights are 128-dimensional. Would you mind sending a SuperPoint-based SGMNet checkpoint to my email, [email protected]?

aug_desc1, aug_desc2 = x1_pos_embedding + desc1, x2_pos_embedding + desc2
RuntimeError: The size of tensor a (128) must match the size of tensor b (256) at non-singleton dimension 1
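A sketch of the dimension mismatch and one possible workaround: project SuperPoint's 256-d descriptors down to 128-d before the addition. The Conv1d below is untrained, so this only fixes shapes; real use would need a fine-tuned or retrained checkpoint such as the SP weights requested above.

```python
import torch

desc_sp = torch.randn(1, 256, 1000)   # SuperPoint descriptors: (B, 256, N)
pos_emb = torch.randn(1, 128, 1000)   # positional embedding of a 128-channel model

# Direct addition fails exactly as in the traceback above:
try:
    _ = pos_emb + desc_sp
except RuntimeError as e:
    print(e)  # e.g. "The size of tensor a (128) must match ... (256) ..."

# A 1x1 conv acts as a per-keypoint linear map from 256 to 128 channels.
proj = torch.nn.Conv1d(256, 128, kernel_size=1)
aug = pos_emb + proj(desc_sp)
print(aug.shape)  # torch.Size([1, 128, 1000])
```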

Questions about downloading SGMNet datasets

Hi! I have recently been having trouble downloading the SGMNet training data, specifically these three archives:
1. gl3d_cams
2. gl3d_depths
3. gl3d_ct
After running the corresponding bash command for each, the download fails: the download URL it points to is no longer reachable. Could you please share new, working links? Thank you very much!

How many epochs did you use for training?

Hi! Could you answer a couple of questions?

  1. How many epochs did you use for training SGMNet? And where is this parameter set in the code?
  2. When are checkpoints saved? I couldn't work it out from the code.

CPU support

Hi @vdvchen ,
I tried running the code, but this implementation does not seem to support the CPU.
How can I run it on the CPU?
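A minimal sketch of CPU-only PyTorch inference: load a checkpoint with `map_location` so CUDA-saved tensors land on the CPU. The toy model and `/tmp` path are illustrative; in this repo, any hard-coded `.cuda()` calls would also need to be replaced with `.to(device)`.

```python
import torch

device = torch.device('cpu')

# Save a toy checkpoint, standing in for the released SGMNet weights.
model = torch.nn.Linear(4, 2)
torch.save(model.state_dict(), '/tmp/toy_ckpt.pth')

# map_location remaps any GPU-saved tensors onto the CPU at load time.
state = torch.load('/tmp/toy_ckpt.pth', map_location=device)
model.load_state_dict(state)
model.to(device)
model.eval()

with torch.no_grad():
    y = model(torch.randn(1, 4))
print(y.device)  # cpu
```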

Hi! Could you provide more details on the dataset for training?

Thanks for your kind support last time, and thank you very much for sharing the training script; it is quite interesting to me.
Here I would like to kindly ask about the data for training,
as I have tried to follow the instructions to download the data from https://github.com/lzx551402/GL3D

  • Which of the three archives listed at https://github.com/lzx551402/GL3D#downloads should be downloaded: (1) gl3d_imgs, (2) gl3d_raw_imgs, or (3) gl3d_blended_images? Or all of them?

  • I have downloaded gl3d_raw_imgs, but I received the error below. Does this mean I did not download it correctly, or that I downloaded the wrong archive?

  • My settings in the gl3d.yaml file are as follows. Should rawdata_dir be the cloned directory of https://github.com/lzx551402/GL3D ? I am sorry, as this is not what you wrote in the instructions; I assumed it might be the GL3D clone because dump.py also looks for GL3D/data/list/comb/imageset_train.txt:

data_name: gl3d_train
rawdata_dir: /mnt/HDD4TB2/GL3D   
feature_dump_dir: /mnt/HDD4TB3/SGMNet/gl3d_desc_dir
dataset_dump_dir: /mnt/HDD4TB3/SGMNet/gl3d_dataset_dir

The error:

python dump.py --config_path configs/gl3d.yaml
dump.py:20: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
Formatting data...
  0%|                                                                                                                                                     | 0/109 [00:00<?, ?it/s]
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/mnt/HDD4TB3/SGMNet/datadump/dumper/gl3d_train.py", line 147, in format_seq
    pair_list=np.loadtxt(os.path.join(seq_dir,'geolabel','common_track.txt'),dtype=float)[:,:2].astype(int)
  File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/site-packages/numpy/lib/npyio.py", line 1067, in loadtxt
    fh = np.lib._datasource.open(fname, 'rt', encoding=encoding)
  File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/site-packages/numpy/lib/_datasource.py", line 193, in open
    return ds.open(path, mode, encoding=encoding, newline=newline)
  File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/site-packages/numpy/lib/_datasource.py", line 533, in open
    raise IOError("%s not found." % path)
OSError: /mnt/HDD4TB2/GL3D/data/586326ad712e276146904571/geolabel/common_track.txt not found.
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "dump.py", line 27, in <module>
    dataset.format_dump_data()
  File "/mnt/HDD4TB3/SGMNet/datadump/dumper/gl3d_train.py", line 244, in format_dump_data
    pool.map(self.format_seq,indices)
  File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
OSError: /mnt/HDD4TB2/GL3D/data/586326ad712e276146904571/geolabel/common_track.txt not found.
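One quick way to narrow this down is to scan rawdata_dir for the geolabel/common_track.txt files the dumper expects (presumably provided by a separate archive such as gl3d_ct rather than by gl3d_raw_imgs). The layout below is inferred from the traceback; the paths and sequence id are illustrative, and the fake layout exists only so the check runs end-to-end.

```python
import os

# Stand-ins for the real rawdata_dir and the sequence ids listed in
# GL3D/data/list/comb/imageset_train.txt (all illustrative).
rawdata_dir = '/tmp/GL3D_demo'
seqs = ['586326ad712e276146904571']

# Build a fake layout so this check runs end-to-end.
for s in seqs:
    os.makedirs(os.path.join(rawdata_dir, 'data', s, 'geolabel'), exist_ok=True)
    open(os.path.join(rawdata_dir, 'data', s, 'geolabel', 'common_track.txt'), 'w').close()

# The actual check: which sequences are missing the geolabel file?
missing = [s for s in seqs
           if not os.path.isfile(os.path.join(rawdata_dir, 'data', s,
                                              'geolabel', 'common_track.txt'))]
print(missing)  # [] once the geolabel data is in place
```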

Could you please provide more guidance on reproducing Aachen for SIFT?

Hi! I would like to ask for guidance on reproducing the Aachen results for SIFT and SGMNet+SIFT.

As a starter, I tried running MNN+SIFT on Aachen day-night, but I get much worse results with 8196 keypoints: 23.5 / 33.7 / 41.8.
I suspect this may be because I am using SIFT instead of RootSIFT, but I am not sure which other settings might differ.

So, since I saw the results of SGMNet+SIFT as well as MNN+SIFT both in the papers and at https://www.visuallocalization.net/details/17655/, I am wondering how to reproduce them (especially Table 4 in the paper).

Could you please share the setting? I would like to learn how to get a similar result...

So I would like to scope down my question as follows:

  • Firstly, is it RootSIFT rather than SIFT, i.e. self.root == True (in https://github.com/vdvchen/SGMNet/blob/main/components/extractors.py#L43 )? Is that correct? Do you further normalize the features (to unit norm along the feature dimension)?
  • Did you restrict the image size? For example, D2-Net restricts the maximum size to 1600. Did you do something similar?
  • For MNN+SIFT, is any threshold applied to the MNN matching?
  • Is the setting for SGMNet+SIFT on Aachen Day-Night similar to the following? Also, in the submission name ...rootsift8k_upright_512_0.2_SGMNet..., what do the 512 and 0.2 mean?
matcher:
  name: SGM
  model_dir: ../weights/sgm/root
  seed_top_k: [256,256]
  seed_radius_coe: 0.01
  net_channels: 128
  layer_num: 9
  head: 4
  seedlayer: [0,6]
  use_mc_seeding: True
  use_score_encoding: False
  conf_bar: [1.11,0.1] #set to [1,0.1] for sp
  sink_iter: [10,100]
  detach_iter: 1000000
  p_th: 0.2
  • Also, is the setting for SG+SIFT for Aachen Day-Night similar to the following?
matcher:
  name: SG
  model_dir: ../weights/sg/root
  net_channels: 128
  layer_num: 9
  head: 4
  use_score_encoding: True
  sink_iter: [100]
  p_th: 0.2
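For context on the RootSIFT question above: RootSIFT L1-normalizes each SIFT descriptor and then takes an element-wise square root, which leaves the result with unit L2 norm by construction. A minimal sketch follows; whether this is exactly what self.root == True does in components/extractors.py is an assumption worth confirming.

```python
import numpy as np

def root_sift(desc, eps=1e-7):
    # L1-normalize each row (SIFT descriptors are non-negative), then sqrt.
    desc = desc / (desc.sum(axis=1, keepdims=True) + eps)
    return np.sqrt(desc)

sift = np.random.rand(5, 128).astype(np.float32)  # toy non-negative descriptors
rs = root_sift(sift)
print(np.linalg.norm(rs, axis=1))  # each row is ~1.0
```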

Descriptor

Hello, I read your paper. Your comparative experiments include a SIFT+SuperGlue combination; how do you resolve the mismatch between the descriptor dimension and the matching network's input dimension?
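One generic workaround for such a mismatch (not necessarily what the paper does) is to PCA-project the higher-dimensional descriptors down to the matcher's channel count and re-normalize. A sketch with synthetic data; a real pipeline would fit the PCA on a held-out descriptor sample.

```python
import numpy as np

rng = np.random.default_rng(0)
desc = rng.standard_normal((1000, 256)).astype(np.float32)  # toy 256-d descriptors

mean = desc.mean(axis=0)
# Principal axes from the SVD of the centered sample.
_, _, vt = np.linalg.svd(desc - mean, full_matrices=False)
proj = vt[:128].T                                           # 256 -> 128 basis

reduced = (desc - mean) @ proj
reduced /= np.linalg.norm(reduced, axis=1, keepdims=True)   # back to unit norm
print(reduced.shape)  # (1000, 128)
```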

Can not find hdf5 file

When I run

python dump.py --config_path configs/gl3d.yaml

I encounter the following issue; it seems the hdf5 file cannot be found.
How can I solve it?

Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/home/sshuang/SGMNet/datadump/dumper/gl3d_train.py", line 192, in format_seq
    with h5py.File(os.path.join(self.config['feature_dump_dir'],fea_path1),'r') as fea1,
  File "/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py", line 394, in __init__
    swmr=swmr)
  File "/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py", line 170, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 85, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = '/home/sshuang/SGMNet/datadump/dump_desc/000000000000000000000009/00000007.jpg_sp_500.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "dump.py", line 27, in <module>
    dataset.format_dump_data()
  File "/home/sshuang/SGMNet/datadump/dumper/gl3d_train.py", line 244, in format_dump_data
    pool.map(self.format_seq,indices)
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 266, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 644, in get
    raise self._value
OSError: Unable to open file (unable to open file: name = '/home/sshuang/SGMNet/datadump/dump_desc/000000000000000000000009/00000007.jpg_sp_500.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
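The traceback suggests that format_seq expects one <image>_<suffix>.hdf5 file per image under feature_dump_dir, written by a prior feature-extraction step. A quick scan for missing files can tell you which extractions to re-run; the directory, sequence id, and suffix below are illustrative stand-ins taken from the error message, and the fake layout exists only so the check runs end-to-end.

```python
import os

# Illustrative stand-ins for feature_dump_dir, a sequence id, and the
# per-image feature-file suffix seen in the traceback.
feature_dump_dir = '/tmp/dump_desc_demo'
seq = '000000000000000000000009'
images = ['00000007.jpg', '00000008.jpg']
suffix = '_sp_500.hdf5'

# Fake layout: only the first image has a dumped feature file.
os.makedirs(os.path.join(feature_dump_dir, seq), exist_ok=True)
open(os.path.join(feature_dump_dir, seq, images[0] + suffix), 'w').close()

# Scan for images whose features were never dumped.
missing = [img for img in images
           if not os.path.isfile(os.path.join(feature_dump_dir, seq, img + suffix))]
print(missing)  # ['00000008.jpg'] -> re-run feature extraction for these
```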

Hi! Could you provide more details on the dataset for training?

Why can't I find the hdf5 file? Does the code not generate hdf5 files in the folder?

Traceback (most recent call last):
  File "dump.py", line 27, in <module>
    dataset.format_dump_data()
  File "/data5/huZhao/code/SGMNet/datadump/dumper/gl3d_train.py", line 244, in format_dump_data
    pool.map(self.format_seq,indices)
  File "/data5/huZhao/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/data5/huZhao/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
FileNotFoundError: [Errno 2] Unable to open file (unable to open file: name = '/data5/huZhao/code/GL3D-2/dump_desc_dir/000000000000000000000009/00000010.jpg_root_1000.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

A request for a setting of SGMNet+SP

Hi! Thank you so much for releasing the code.
Your paper is very impressive and contains so many interesting findings.

Here, I would like to kindly ask for the following setting in using SGMNet.

  1. SGMNet+SP
  2. SGMNet+SP-10 sink

I have tried running your work on HPatches; it seems to work pretty well with the SIFT settings (after switching the dimension to 256).
However, I don't know what the proper settings are.
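For what it's worth, extrapolating from the SGMNet+SIFT config quoted in the Aachen issue above, a SuperPoint setting might look like the following. Everything except conf_bar: [1,0.1] (taken from the "set to [1,0.1] for sp" comment in that config) is a guess that would need confirmation from the authors.

```yaml
matcher:
  name: SGM
  model_dir: ../weights/sgm/sp   # hypothetical: path to an SP-trained checkpoint
  seed_top_k: [256,256]
  seed_radius_coe: 0.01
  net_channels: 256              # guess: match SuperPoint's 256-d descriptors
  layer_num: 9
  head: 4
  seedlayer: [0,6]
  use_mc_seeding: True
  use_score_encoding: False
  conf_bar: [1,0.1]              # per the comment in the SIFT config
  sink_iter: [10,100]
  detach_iter: 1000000
  p_th: 0.2
```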
