
lamar-benchmark's Introduction


LaMAR
Benchmarking Localization and Mapping for Augmented Reality

Paul-Edouard Sarlin* · Mihai Dusmanu*
Johannes L. Schönberger · Pablo Speciale · Lukas Gruber · Viktor Larsson · Ondrej Miksik · Marc Pollefeys


ECCV 2022

LaMAR includes multi-sensor streams recorded by AR devices along hundreds of unconstrained trajectories captured over 2 years in 3 large indoor+outdoor locations.

This repository hosts the source code for LaMAR, a new benchmark for localization and mapping with AR devices in realistic conditions. The contributions of this work are:

  1. A dataset: multi-sensor data streams captured by AR devices and laser scanners
  2. scantools: a processing pipeline to register different user sessions together
  3. A benchmark: a framework to evaluate algorithms for localization and mapping

See our ECCV 2022 tutorial for an overview of LaMAR and of the state of the art of localization and mapping for AR.

Overview

This codebase is composed of the following modules:

  • lamar: evaluation pipeline and baselines for localization and mapping
  • scantools: data API, processing tools and pipeline
  • ScanCapture: a data recording app for Apple devices

Data format

We introduce a new data format, called Capture, to handle multi-session and multi-sensor data recorded by different devices. A Capture object corresponds to a capture location. It is composed of multiple sessions, each of which corresponds to a data recording by a given device. Each session stores the raw sensor data, calibration, poses, and all assets generated during processing.

from scantools.capture import Capture
capture = Capture.load('data/CAB/')
print(capture.sessions.keys())
session = capture.sessions[session_id]  # each session has a unique id
print(session.sensors.keys())  # each sensor has a unique id
print(session.rigs)  # extrinsic calibration between sensors
keys = session.trajectories.key_pairs()  # all (timestamp, sensor_or_rig_id)
T_w_i = session.trajectories[keys[0]]  # first pose, from sensor/rig to world
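
Since key_pairs() returns (timestamp, sensor_or_rig_id) tuples, iterating over all poses of a session follows directly from the API above (a small sketch):

for ts, sensor_or_rig_id in session.trajectories.key_pairs():
    T_w_i = session.trajectories[ts, sensor_or_rig_id]  # pose at this key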

More details are provided in the specification document CAPTURE.md.

Installation

1️⃣ Install the core dependencies: COLMAP, hloc, and pyceres (building pyceres requires Ceres ≥ 2.1).

2️⃣ Install the LaMAR libraries and pull the remaining pip dependencies:

python -m pip install -e .

3️⃣ Optional: the processing pipeline additionally relies on heavier dependencies not required for benchmarking:

  • Pip dependencies: python -m pip install -e .[scantools]
  • raybender for raytracing
  • pcdmeshing for pointcloud meshing

4️⃣ Optional: if you wish to contribute, install the development tools as well:

python -m pip install -e .[dev]

Docker images

The Dockerfile provided in this project has multiple stages, two of which are scantools and lamar.

Building the Docker Images

You can build the Docker images for these stages using the following commands:

# Build the 'scantools' stage
docker build --target scantools -t lamar:scantools -f Dockerfile ./

# Build the 'lamar' stage
docker build --target lamar -t lamar:lamar -f Dockerfile ./

Pulling the Docker Images from GitHub Docker Registry

Alternatively, if you don't want to build the images yourself, you can pull them from the GitHub Docker Registry using the following commands:

# Pull the 'scantools' image
docker pull ghcr.io/microsoft/lamar-benchmark/scantools:latest

# Pull the 'lamar' image
docker pull ghcr.io/microsoft/lamar-benchmark/lamar:latest
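
Either image can then be used to run the pipeline. A minimal sketch (the mount points and the in-container working directory /lamar are assumptions; adjust them to the actual image layout):

docker run --rm -it \
    -v $PWD/data:/lamar/data \
    -v $PWD/outputs:/lamar/outputs \
    -w /lamar \
    ghcr.io/microsoft/lamar-benchmark/lamar:latest \
    python -m lamar.run --scene CAB --ref_id map --query_id query_val_phone \
        --retrieval fusion --feature superpoint --matcher superglue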

Benchmark

1️⃣ Obtain the evaluation data: visit the dataset page and place the 3 scenes in ./data:

data/
├── CAB/
│   └── sessions/
│       ├── map/                # mapping session
│       ├── query_hololens/     # HoloLens test queries
│       ├── query_phone/        # Phone test queries
│       ├── query_val_hololens/ # HoloLens validation queries
│       └── query_val_phone/    # Phone validation queries
├── HGE/
│   └── ...
└── LIN/
    └── ...

Each scene contains a mapping session and queries for each device type. We provide a small set of validation queries with known ground-truth poses such that they can be used for developing algorithms and tuning parameters. We keep private the ground-truth poses of the test queries.

2️⃣ Run the single-frame evaluation with the strongest baseline:

python -m lamar.run \
	--scene $SCENE --ref_id map --query_id $QUERY_ID \
	--retrieval fusion --feature superpoint --matcher superglue

where $SCENE is in {CAB,HGE,LIN} and $QUERY_ID is in {query_phone,query_hololens} for testing and in {query_val_phone,query_val_hololens} for validation. All outputs are written to ./outputs/ by default. For example, to localize validation Phone queries in the CAB scene:

python -m lamar.run \
	--scene CAB --ref_id map --query_id query_val_phone \
	--retrieval fusion --feature superpoint --matcher superglue

This executes two steps:

  1. Create a sparse 3D map using the mapping session via feature extraction, pair selection, feature matching, triangulation
  2. Localize each image of the sequence via feature extraction, pair selection, feature matching, absolute pose estimation

3️⃣ Obtain the evaluation results:

  • validation queries: the script prints the localization recall.
  • test queries: until the benchmark leaderboard is up and running, please send the predicted pose files to [email protected]. ⚠️ We will accept at most 2 submissions per user per week.

4️⃣ Workflow: the benchmarking pipeline is designed such that

  • the mapping and localization process is split into modular steps listed in lamar/tasks/
  • outputs like features and matches are cached and re-used over multiple similar runs (see the example layout below)
  • changing a configuration entry automatically triggers the recomputation of all downstream steps that depend on it
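
For illustration, here is the layout of cached outputs for the example run above, as reported by the pipeline logs (one directory level per configuration entry; names vary with the chosen options):

outputs/
└── CAB/                                                       # scene
    └── mapping/
        └── map/                                               # ref_id
            └── triangulation/
                └── superpoint/                                # local feature
                    └── fusion-netvlad-ap-gem-10_frustum_pose-120-20-250/   # pair selection
                        └── superglue/                         # matcher
                            └── sfm_empty/                     # empty COLMAP model used for triangulation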

Other evaluation options


Using radio signals for place recognition:

python -m lamar.run [...] --use_radios

Localization with sequences of 10 seconds instead of single images:

python -m lamar.run [...] --sequence_length_seconds 10

Adding your own algorithms


To add a new local feature, a new global feature for image retrieval, or a new local feature matcher: feature extraction, retrieval, and matching are handled through hloc, so new models should first be added there.

To add a new pose solver: create a new class that inherits from lamar.tasks.pose_estimation.SingleImagePoseEstimation:

class MyPoseEstimation(SingleImagePoseEstimation):
    # unique name used to select this solver in the evaluation configuration
    method = {'name': 'my_estimator'}

    def run(self, capture):
        # given the Capture object, estimate and return the query poses
        ...

Processing pipeline

Each step of the pipeline corresponds to a runfile in scantools/run_*.py that can be used as follows:

  • executed from the command line: python -m scantools.run_phone_to_capture [--args]
  • imported as a library:
from scantools import run_phone_to_capture
run_phone_to_capture.run(...)

We provide pipeline scripts that execute all necessary steps.

With the release of the raw data (see below), anyone is able to run the processing pipeline without access to capture devices.

Here are runfiles that could be handy for importing and exporting data:

  • run_phone_to_capture: convert a ScanCapture recording into a Capture session
  • run_navvis_to_capture: convert a NavVis recording into a Capture session
  • run_session_to_kapture: convert a Capture session into a Kapture instance
  • run_capture_to_empty_colmap: convert a Capture session into an empty COLMAP model
  • run_image_anonymization: anonymize faces and license plates using the Brighter.AI API
  • run_radio_anonymization: anonymize radio signal IDs
  • run_combine_sequences: combine multiple sequence sessions into a single session
  • run_qrcode_detection: detect QR codes in images and store their poses

Raw data

We also release the raw original data, as recorded by the devices (HoloLens, phones, NavVis scanner), with minimal post-processing. Like the evaluation data, the raw data is accessed through the dataset page. More details are provided in the specification document RAW-DATA.md.

Release plan

We are still in the process of fully releasing LaMAR. Here is the release plan:

  • LaMAR evaluation data and benchmark
  • Ground truthing pipeline
  • iOS capture app
  • Full raw data
  • Leaderboard and evaluation server
  • 3D dataset viewer

BibTeX citation

Please consider citing our work if you use any code from this repo or ideas presented in the paper:

@inproceedings{sarlin2022lamar,
  author    = {Paul-Edouard Sarlin and
               Mihai Dusmanu and
               Johannes L. Schönberger and
               Pablo Speciale and
               Lukas Gruber and
               Viktor Larsson and
               Ondrej Miksik and
               Marc Pollefeys},
  title     = {{LaMAR: Benchmarking Localization and Mapping for Augmented Reality}},
  booktitle = {ECCV},
  year      = {2022},
}

Legal Notices

Microsoft and any contributors grant you a license to the Microsoft documentation and other content in this repository under the Creative Commons Attribution 4.0 International Public License, see the LICENSE file, and grant you a license to any code in the repository under the MIT License, see the LICENSE-CODE file.

Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653.

Privacy information can be found at https://privacy.microsoft.com/en-us/

Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel or otherwise.

lamar-benchmark's People

Contributors

foreveryounggithub · joshuaoreilly · microsoftopensource · mihaidusmanu · pablospe · sarlinpe · skydes · soeroesg


lamar-benchmark's Issues

Question about GNC: where was it implemented?

In the function optimize_sequence_pose_graph_gnc, the comment says "Apply Graduated Non-Convexity (GNC) to the robust cost of the localization prior. Loose implementation of the paper: Graduated Non-Convexity for Robust Spatial Perception: From Non-Minimal Solvers to Global Outlier Rejection". I'm curious whether you actually implemented GNC. It seems like you only update mu in the while loop but don't actually use it in the graph optimization.
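
For reference, the weight schedule of GNC with a Geman-McClure surrogate from that paper looks roughly as follows (a minimal sketch with illustrative names; this is not the LaMAR implementation):

import numpy as np

def gnc_gm_weights(residuals, c, mu_update=1.4):
    """Per-measurement weights for GNC with the Geman-McClure cost.
    residuals: residuals of the robustified term; c: inlier threshold."""
    mu = 2.0 * np.max(residuals) ** 2 / c ** 2  # start near the convex surrogate
    weights = np.ones_like(residuals, dtype=float)
    while mu > 1.0:
        # closed-form weight update for the current surrogate
        weights = (mu * c ** 2 / (residuals ** 2 + mu * c ** 2)) ** 2
        # the weighted graph problem should be re-solved here and the residuals
        # recomputed before the next update -- the step the issue asks about
        mu = max(1.0, mu / mu_update)
    return weights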

Paper link

Hi Sarlin,

Very excited to see your new work!
And could you please publish a link to this paper?

Thanks a lot for your attention.

Question about the trajectories.txt

Hey there,

Thanks for the dataset. Just a clarification question regarding the trajectories.txt files included in the released dataset.

The map, query_val_XXX, and query_XXX sessions all contain a trajectories.txt file. If I understand correctly, map/trajectories.txt and query_val_XXX/trajectories.txt contain the ground-truth poses of the corresponding sessions.

What is the content of query_XXX/trajectories.txt? Are these the poses reported by the device's own odometry system?

Thanks
Weiwu

CVisG?

Hey, amazing work here! Huge project!!

I saw the slide on CVisG and the small call-out in the README. Has CVisG been released? Will it be its own repo, or will it be included here?

Question about evaluation results for test queries

Hi there,

I checked the default queries.txt in the capture folder (e.g. LIN). It contains ~1000 queries. I think it's a subset of the images in the raw_data folders.

In the benchmark submission, will you only accept poses for images in queries.txt, or do you also accept poses for images that are not included in queries.txt but exist in query_phone?

Thanks!

Question about the coordinates of LaMaR dataset

Thanks for your excellent dataset! I have a question about the coordinate convention of the dataset. Using the ground-truth poses of the "HGE", "sessions", "map", "ios_2022-01-25_14.34.24_002" sequence, I drew the cameras in Open3D, and they all appear to be looking up. When I draw the cameras after running COLMAP, the viewing direction looks correct. So I'm wondering what the coordinate convention of the ground-truth poses is; I can't find the definition in the paper or the repo. Thanks!
Ground-truth poses look like this:
Screenshot from 2023-05-27 15-17-08
COLMAP poses look like this:
Screenshot from 2023-05-27 15-25-09
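
A likely explanation, given the data-format section above: the Capture format stores poses from sensor/rig to world, whereas COLMAP stores world-to-camera poses, so the two conventions are inverses of each other. A minimal conversion sketch, assuming 4x4 homogeneous matrices:

import numpy as np

def world_to_camera(T_w_c: np.ndarray) -> np.ndarray:
    # T_w_c: camera-to-world (the Capture convention); returns world-to-camera
    # (the COLMAP convention) so the two visualizations can be compared.
    R, t = T_w_c[:3, :3], T_w_c[:3, 3]
    T_c_w = np.eye(4)
    T_c_w[:3, :3] = R.T
    T_c_w[:3, 3] = -R.T @ t
    return T_c_w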

ScanCapture pose format

Hello, I captured several datasets using the ScanCapture app, but I don't know the definition of the image poses in the poses.txt file. Could you explain it in detail? Thank you!

Images in the current evaluation

At some timesteps in the HoloLens sequences there are only images for a subset of the cameras (hetlf, hetll, hetrf, hetrr). For these timesteps, are there additional images that will eventually be released?

Also, the paper says that the query images are sampled every 1 s / 1 m / 20°, although I assume the sequences were recorded at a higher frame rate. Will the full video sequences eventually be released? These intermediate frames could help localization during fast motion, even if they aren't used for evaluation.

gt pose error

Hello, I would like to use your great benchmark to test my program, but I just found some errors in the GT poses, like the following:
I think trajectories.txt in each folder is the GT pose, right? But the poses in 'query_val_phone' are not the same as the poses in 'map', even though they are the same images. So is trajectories.txt in the 'query_val_phone' folder not the GT pose?

截屏2022-11-29 22 20 23

截屏2022-11-29 22 20 58

RecursionError: maximum recursion depth exceeded while calling a Python object

Hi there, I encountered a RecursionError while trying to localize a single image using the superpoint features in the session query_val_phone. The error occurred at [2023/02/22 14:44:51 lamar.tasks.pose_estimation INFO] during the localization process. The full error message is: "RecursionError: maximum recursion depth exceeded while calling a Python object". I'm not sure what's causing the error and would appreciate any help in resolving it.

(lamar) root@ba188978fb0b:/home/public/lamar-benchmark# python -m lamar.run --scene CAB --ref_id map --query_id query_val_phone --retrieval fusion --feature superpoint --matcher superglue
[2023/02/22 07:20:48 scantools.utils.io INFO] Optional dependency not installed: open3d
[2023/02/22 07:20:48 scantools.utils.io INFO] Optional dependency not installed: plyfile
[2023/02/22 07:20:51 scantools INFO] Loading Capture from /home/public/lamar-benchmark/data/CAB.
[2023/02/22 07:21:59 lamar.tasks.feature_extraction INFO] Extraction local features superpoint for session map.
[2023/02/22 07:22:00 hloc INFO] Extracting local features with configuration:
{'model': {'max_keypoints': 2048, 'name': 'superpoint', 'nms_radius': 3},
'preprocessing': {'grayscale': True, 'resize_max': 1024}}
[2023/02/22 07:22:35 hloc INFO] Skipping the extraction.
[2023/02/22 07:22:35 lamar.tasks.pair_selection INFO] Selecting image pairs with fusion-netvlad-ap-gem-10_frustum_pose-120-20-250 for sessions (map, map).
[2023/02/22 07:23:12 lamar.tasks.pair_selection INFO] Filtering pairs by frustums.
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 49/49 [07:32<00:00, 9.23s/it]
[2023/02/22 07:30:54 lamar.tasks.pair_selection INFO] Filtering pairs by poses.
100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [01:12<00:00, 8.06s/it]
[2023/02/22 07:32:09 lamar.tasks.pair_selection INFO] Computing pairs from visual similarity.
[2023/02/22 07:32:09 lamar.tasks.feature_extraction INFO] Extraction local features netvlad for session map.
[2023/02/22 07:32:09 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'netvlad'}, 'preprocessing': {'resize_max': 640}}
[2023/02/22 07:32:27 hloc INFO] Skipping the extraction.
[2023/02/22 07:32:27 lamar.tasks.feature_extraction INFO] Extraction local features netvlad for session map.
[2023/02/22 07:32:27 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'netvlad'}, 'preprocessing': {'resize_max': 640}}
[2023/02/22 07:32:45 hloc INFO] Skipping the extraction.
[2023/02/22 07:33:11 lamar.tasks.feature_extraction INFO] Extraction local features ap-gem for session map.
[2023/02/22 07:33:11 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'dir'}, 'preprocessing': {'resize_max': 640}}
/opt/conda/envs/lamar/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator PCA from version 0.20.2 when using version 1.1.3. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
warnings.warn(
=> loading checkpoint '/root/.cache/torch/hub/dirtorch/Resnet-101-AP-GeM.pt' (current_iter 296)
100%|████████████████████████████████████████████████████████████████████████████████████████| 33587/33587 [1:16:44<00:00, 7.29it/s]
[2023/02/22 08:50:07 hloc INFO] Finished exporting features.
[2023/02/22 08:50:07 lamar.tasks.feature_extraction INFO] Extraction local features ap-gem for session map.
[2023/02/22 08:50:07 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'dir'}, 'preprocessing': {'resize_max': 640}}
[2023/02/22 08:50:24 hloc INFO] Skipping the extraction.
[2023/02/22 08:51:40 lamar.tasks.feature_matching INFO] Matching local features with superglue for sessions (map, map).
[2023/02/22 08:51:40 lamar.tasks.feature_matching WARNING] Existing matches will be overwritten.
[2023/02/22 08:51:40 hloc INFO] Matching local features with configuration:
{'model': {'name': 'superglue', 'sinkhorn_iterations': 5, 'weights': 'outdoor'}}
Loaded SuperGlue model ("outdoor" weights)
100%|██████████████████████████████████████████████████████████████████████████████████████| 220311/220311 [4:48:09<00:00, 12.74it/s]
[2023/02/22 13:39:53 hloc INFO] Finished exporting matches.
[2023/02/22 13:39:53 lamar.tasks.mapping INFO] Mapping session map via triangulation of features superpoint.
[2023/02/22 13:39:57 scantools INFO] Writing COLMAP empty .bin reconstruction to /home/public/lamar-benchmark/outputs/CAB/mapping/map/triangulation/superpoint/fusion-netvlad-ap-gem-10_frustum_pose-120-20-250/superglue/sfm_empty.
[2023/02/22 13:40:01 hloc INFO] Importing features into the database...
100%|█████████████████████████████████████████████████████████████████████████████████████████| 33587/33587 [00:34<00:00, 968.04it/s]
[2023/02/22 13:40:36 hloc INFO] Importing matches into the database...
100%|██████████████████████████████████████████████████████████████████████████████████████| 335870/335870 [05:06<00:00, 1094.61it/s]
[2023/02/22 13:45:46 hloc INFO] Performing geometric verification of the matches...
100%|██████████████████████████████████████████████████████████████████████████████████████████| 33587/33587 [29:50<00:00, 18.75it/s]
[2023/02/22 14:15:37 hloc INFO] mean/med/min/max valid matches 77.72/93.44/0.00/100.00%.
[2023/02/22 14:15:37 hloc INFO] Running 3D triangulation...
[2023/02/22 14:35:16 hloc INFO] Finished the triangulation with statistics:
Reconstruction:
num_reg_images = 33587
num_cameras = 6799
num_points3D = 1931868
num_observations = 10034627
mean_track_length = 5.19426
mean_observations_per_image = 298.765
mean_reprojection_error = 1.52068
[2023/02/22 14:35:23 lamar.tasks.feature_extraction INFO] Extraction local features superpoint for session query_val_phone.
[2023/02/22 14:35:23 hloc INFO] Extracting local features with configuration:
{'model': {'max_keypoints': 2048, 'name': 'superpoint', 'nms_radius': 3},
'preprocessing': {'grayscale': True, 'resize_max': 1024}}
Loaded SuperPoint model
100%|██████████████████████████████████████████████████████████████████████████████████████████████| 396/396 [00:18<00:00, 21.11it/s]
[2023/02/22 14:35:42 hloc INFO] Finished exporting features.
[2023/02/22 14:35:42 lamar.tasks.pair_selection INFO] Selecting image pairs with fusion-netvlad-ap-gem-10 for sessions (query_val_phone, map).
[2023/02/22 14:35:42 lamar.tasks.pair_selection INFO] Computing pairs from visual similarity.
[2023/02/22 14:35:42 lamar.tasks.feature_extraction INFO] Extraction local features netvlad for session query_val_phone.
[2023/02/22 14:35:42 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'netvlad'}, 'preprocessing': {'resize_max': 640}}
100%|██████████████████████████████████████████████████████████████████████████████████████████████| 396/396 [00:37<00:00, 10.44it/s]
[2023/02/22 14:36:33 hloc INFO] Finished exporting features.
[2023/02/22 14:36:33 lamar.tasks.feature_extraction INFO] Extraction local features netvlad for session map.
[2023/02/22 14:36:33 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'netvlad'}, 'preprocessing': {'resize_max': 640}}
[2023/02/22 14:36:50 hloc INFO] Skipping the extraction.
[2023/02/22 14:37:04 lamar.tasks.feature_extraction INFO] Extraction local features ap-gem for session query_val_phone.
[2023/02/22 14:37:04 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'dir'}, 'preprocessing': {'resize_max': 640}}
=> loading checkpoint '/root/.cache/torch/hub/dirtorch/Resnet-101-AP-GeM.pt' (current_iter 296)
100%|██████████████████████████████████████████████████████████████████████████████████████████████| 396/396 [01:18<00:00, 5.05it/s]
[2023/02/22 14:38:25 hloc INFO] Finished exporting features.
[2023/02/22 14:38:25 lamar.tasks.feature_extraction INFO] Extraction local features ap-gem for session map.
[2023/02/22 14:38:26 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'dir'}, 'preprocessing': {'resize_max': 640}}
[2023/02/22 14:38:43 hloc INFO] Skipping the extraction.
[2023/02/22 14:38:57 lamar.tasks.feature_matching INFO] Matching local features with superglue for sessions (query_val_phone, map).
[2023/02/22 14:38:57 lamar.tasks.feature_matching WARNING] Existing matches will be overwritten.
[2023/02/22 14:38:57 hloc INFO] Matching local features with configuration:
{'model': {'name': 'superglue', 'sinkhorn_iterations': 5, 'weights': 'outdoor'}}
Loaded SuperGlue model ("outdoor" weights)
100%|████████████████████████████████████████████████████████████████████████████████████████████| 3960/3960 [05:53<00:00, 11.21it/s]
[2023/02/22 14:44:51 hloc INFO] Finished exporting matches.
[2023/02/22 14:44:51 lamar.tasks.pose_estimation INFO] Localizing (single_image) session query_val_phone with features superpoint.
0%| | 0/396 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/opt/conda/envs/lamar/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/lamar/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/public/lamar-benchmark/lamar/run.py", line 154, in <module>
    results_ = run(**args)
  File "/home/public/lamar-benchmark/lamar/run.py", line 120, in run
    pose_estimation = PoseEstimation(
  File "/home/public/lamar-benchmark/lamar/tasks/pose_estimation.py", line 109, in __init__
    self.poses = self.run(capture)
  File "/home/public/lamar-benchmark/lamar/tasks/pose_estimation.py", line 177, in run
    map_(_worker_fn, range(len(keys)))
  File "/opt/conda/envs/lamar/lib/python3.8/site-packages/tqdm/contrib/concurrent.py", line 94, in thread_map
    return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
  File "/opt/conda/envs/lamar/lib/python3.8/site-packages/tqdm/contrib/concurrent.py", line 76, in _executor_map
    return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
  File "/opt/conda/envs/lamar/lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/opt/conda/envs/lamar/lib/python3.8/concurrent/futures/_base.py", line 619, in result_iterator
    yield fs.pop().result()
  File "/opt/conda/envs/lamar/lib/python3.8/concurrent/futures/_base.py", line 444, in result
    return self.__get_result()
  File "/opt/conda/envs/lamar/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/opt/conda/envs/lamar/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/public/lamar-benchmark/lamar/tasks/pose_estimation.py", line 165, in _worker_fn
    pose, _ = estimate_camera_pose(
  File "/home/public/lamar-benchmark/lamar/utils/localization.py", line 69, in estimate_camera_pose
    matches_2d3d = recover_matches(query, ref_key_names)
  File "/home/public/lamar-benchmark/lamar/tasks/pose_estimation.py", line 119, in recover_matches_2d3d
    return recover_matches_2d3d(
  File "/home/public/lamar-benchmark/lamar/utils/localization.py", line 35, in recover_matches_2d3d
    valid, p3ds, p3d_ids = mapping.get_points3D(ref_key, matches[:, 1])
  File "/home/public/lamar-benchmark/lamar/tasks/mapping.py", line 114, in get_points3D
    if len(image.points2D) > 0:
RecursionError: maximum recursion depth exceeded while calling a Python object

Raw depth data

Very good work! May I ask whether the raw depth data of the devices is available yet in the current release?

Rendering navvis mesh to dense depth maps in other (iOS/hl) sessions

Hi LAMAR dataset authors,

Thanks for making and releasing this dataset.

I am wondering whether it is possible to render dense depth maps for an iOS session using the mesh provided in a NavVis session, i.e. by projecting the mesh using the iOS trajectories. If I understand correctly, it seems like some registration files (dir location1/registration) and alignment files (dirs hololens1/sessions/proc/alignment and phone1/sessions/proc/alignment) are not provided in the current raw data release?

The planned data structure in CAPTURE.md is shown below:

location1/                                  # a Capture directory
├── sessions/                               # a collection of Sessions 
│   ├── navvis1/                            # NavVis Session #1
│   │   ├── sensors.txt                     # list of all sensors with specs
│   │   ├── rigs.txt                        # rigid geometric relationship between sensors
│   │   ├── trajectories.txt                # pose for each (timestamp, sensor)
│   │   ├── images.txt                      # list of images with their paths
│   │   ├── pointclouds.txt                 # list of point clouds with their paths
│   │   ├── raw_data/                       # root path of images, point clouds, etc.
│   │   │   ├── images_undistorted/
│   │   │   └── pointcloud.ply
│   │   └── proc/                           # root path of processed assets
│   │       ├── meshes/                     # a collections of meshes
│   │       ├── depth_renderings.txt        # a list of rendered depth maps, one per image
│   │       ├── depth_renderings/           # root path for the depth maps
│   │       ├── alignment_global.txt        # global transforms between sessions
│   │       ├── alignment_trajectories.txt  # transform of each pose to a global reference
│   │       └── overlaps.h5                 # overlap matrix from this session to others
│   ├── hololens1/
│   │   ├── sensors.txt
│   │   ├── rigs.txt
│   │   ├── trajectories.txt
│   │   ├── images.txt
│   │   ├── depths.txt                      # list of depth maps with their paths
│   │   ├── bluetooth.txt                   # list of bluetooth measurements
│   │   ├── wifi.txt                        # list of wifi measurements
│   │   ├── raw_data/
│   │   │   ├── images/
│   │   │   └── depths/
│   │   └── proc/
│   │       └── alignment/
│   └── phone1/
│       └── ...
├── registration/                           # the data generated during alignment
│   ├── navvis2/
│   │   └── navvis1/                        # alignment of navvis2 w.r.t navvis1
│   │       └─ ...                          # intermediate data for matching/registration
│   └── hololens1/
│   │   └── navvis1/
│   └── phone1/
│       └── navvis2/
└── visualization/                          # root path of visualization dumps
    └─ ...                                  # all the data dumped during processing (TBD)

Some extra context:
I am currently using scantools/run_sequence_rerendering.py and my plan is to

  1. [A2A] render the mesh from navvis session A to dense depth maps using trajectories in navvis session A;
  2. [A2B] render the mesh from navvis session A to dense depth maps using trajectories in navvis session B;
  3. [A2C] render the mesh from navvis session A to dense depth maps using trajectories in iOS session C.

I managed to get A2A working perfectly and A2B working okay (there seem to be some occasional surface-normal direction issues), but I am stuck at step A2C. I am wondering if I could get some advice or example code?

Best,
Zirui

Question about "Save optimized anchor poses instead of instantaneous camera poses"

Thanks for your great job!

I find that ARKit does relocalization/loop detection while tracking the camera pose, which may cause sudden "jumps" in the camera pose. So I'm interested in how to save optimized anchor poses instead of instantaneous camera poses.

I found some links that might help: ARKit Loop Closure / Jumps in camera pose, and ARWorld loading works differently on iOS 15.

Based on the description in these links (I'm not very familiar with ARKit), the following is a solution that I guess might be feasible:

  1. Create a new ARAnchor in front of the camera for every frame, with a fixed/known relative pose between the ARAnchor and the frame.
  2. While ARKit does relocalization/loop detection, it will update the ARAnchor poses (I'm not sure).
  3. Save the optimized ARAnchor/ARFrame poses.

My question is whether the above plan is feasible. If there is a corresponding ARAnchor for each ARFrame, will it increase the amount of computation, making it impossible to scan large scenes? Can the optimized ARAnchor poses achieve an effect similar to the loop detection in VINS?

Is it convenient to share some ideas and experimental results?

Thank you very much!

lamar.tasks.pose_estimation | RecursionError: maximum recursion depth exceeded while calling a Python object

Run the single-frame evaluation

python -m lamar.run \
	--scene CAB --ref_id map --query_id query_val_phone \
	--retrieval fusion --feature superpoint --matcher superglue
[2023/01/18 19:50:03 scantools INFO] Loading Capture from /data/project/lamar-benchmark/data/CAB.
[2023/01/18 19:50:29 lamar.tasks.feature_extraction INFO] Extraction local features superpoint for session map.
[2023/01/18 19:50:29 hloc INFO] Extracting local features with configuration:
{'model': {'max_keypoints': 2048, 'name': 'superpoint', 'nms_radius': 3},
 'preprocessing': {'grayscale': True, 'resize_max': 1024}}
Loaded SuperPoint model
100%|████████████████████████████████████████████████████████████████████████████████████████████| 33587/33587 [06:18<00:00, 88.79it/s]
[2023/01/18 19:56:49 hloc INFO] Finished exporting features.
[2023/01/18 19:56:49 lamar.tasks.pair_selection INFO] Selecting image pairs with fusion-netvlad-ap-gem-10_frustum_pose-120-20-250 for sessions (map, map).
[2023/01/18 19:57:02 lamar.tasks.pair_selection INFO] Filtering pairs by frustums.
100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 49/49 [02:08<00:00,  2.61s/it]
[2023/01/18 19:59:13 lamar.tasks.pair_selection INFO] Filtering pairs by poses.
100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:40<00:00,  4.55s/it]
[2023/01/18 19:59:56 lamar.tasks.pair_selection INFO] Computing pairs from visual similarity.
[2023/01/18 19:59:56 lamar.tasks.feature_extraction INFO] Extraction local features netvlad for session map.
[2023/01/18 19:59:56 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'netvlad'}, 'preprocessing': {'resize_max': 640}}
100%|████████████████████████████████████████████████████████████████████████████████████████████| 33587/33587 [09:24<00:00, 59.46it/s]
[2023/01/18 20:09:26 hloc INFO] Finished exporting features.
[2023/01/18 20:09:26 lamar.tasks.feature_extraction INFO] Extraction local features netvlad for session map.
[2023/01/18 20:09:26 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'netvlad'}, 'preprocessing': {'resize_max': 640}}
[2023/01/18 20:09:37 hloc INFO] Skipping the extraction.
[2023/01/18 20:09:49 lamar.tasks.feature_extraction INFO] Extraction local features ap-gem for session map.
[2023/01/18 20:09:49 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'dir'}, 'preprocessing': {'resize_max': 640}}
/.virtualenvs/lamar-benchmark-ScoJAjWC/lib/python3.8/site-packages/sklearn/base.py:288: UserWarning: Trying to unpickle estimator PCA from version 0.20.2 when using version 1.2.0. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
  warnings.warn(
=> loading checkpoint '/.cache/torch/hub/dirtorch/Resnet-101-AP-GeM.pt' (current_iter 296)
100%|████████████████████████████████████████████████████████████████████████████████████████████| 33587/33587 [20:00<00:00, 27.97it/s]
[2023/01/18 20:29:53 hloc INFO] Finished exporting features.
[2023/01/18 20:29:53 lamar.tasks.feature_extraction INFO] Extraction local features ap-gem for session map.
[2023/01/18 20:29:53 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'dir'}, 'preprocessing': {'resize_max': 640}}
[2023/01/18 20:30:03 hloc INFO] Skipping the extraction.
[2023/01/18 20:30:40 lamar.tasks.feature_matching INFO] Matching local features with superglue for sessions (map, map).
[2023/01/18 20:30:40 lamar.tasks.feature_matching WARNING] Existing matches will be overwritten.
[2023/01/18 20:30:40 hloc INFO] Matching local features with configuration:
{'model': {'name': 'superglue', 'sinkhorn_iterations': 5, 'weights': 'outdoor'}}
Loaded SuperGlue model ("outdoor" weights)
100%|████████████████████████████████████████████████████████████████████████████████████████| 220298/220298 [1:45:55<00:00, 34.67it/s]
[2023/01/18 22:16:35 hloc INFO] Finished exporting matches.
[2023/01/18 22:16:35 lamar.tasks.mapping INFO] Mapping session map via triangulation of features superpoint.
[2023/01/18 22:16:38 scantools INFO] Writing COLMAP empty .bin reconstruction to /data/project/lamar-benchmark/outputs/CAB/mapping/map/triangulation/superpoint/fusion-netvlad-ap-gem-10_frustum_pose-120-20-250/superglue/sfm_empty.
[2023/01/18 22:16:42 hloc INFO] Importing features into the database...
100%|██████████████████████████████████████████████████████████████████████████████████████████| 33587/33587 [00:16<00:00, 2042.11it/s]
[2023/01/18 22:17:00 hloc INFO] Importing matches into the database...
100%|████████████████████████████████████████████████████████████████████████████████████████| 335870/335870 [02:19<00:00, 2408.33it/s]
[2023/01/18 22:19:22 hloc INFO] Performing geometric verification of the matches...
100%|████████████████████████████████████████████████████████████████████████████████████████████| 33587/33587 [12:07<00:00, 46.19it/s]
[2023/01/18 22:31:30 hloc INFO] mean/med/min/max valid matches 77.72/93.44/0.00/100.00%.
[2023/01/18 22:31:31 hloc INFO] Running 3D triangulation...
[2023/01/18 22:42:35 hloc INFO] Finished the triangulation with statistics:
Reconstruction:
        num_reg_images = 33587
        num_cameras = 6799
        num_points3D = 1931307
        num_observations = 10029980
        mean_track_length = 5.19336
        mean_observations_per_image = 298.627
        mean_reprojection_error = 1.52069
[2023/01/18 22:42:37 lamar.tasks.feature_extraction INFO] Extraction local features superpoint for session query_val_phone.
[2023/01/18 22:42:37 hloc INFO] Extracting local features with configuration:
{'model': {'max_keypoints': 2048, 'name': 'superpoint', 'nms_radius': 3},
 'preprocessing': {'grayscale': True, 'resize_max': 1024}}
Loaded SuperPoint model
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 396/396 [00:14<00:00, 27.14it/s]
[2023/01/18 22:42:52 hloc INFO] Finished exporting features.
[2023/01/18 22:42:52 lamar.tasks.pair_selection INFO] Selecting image pairs with fusion-netvlad-ap-gem-10 for sessions (query_val_phone, map).
[2023/01/18 22:42:52 lamar.tasks.pair_selection INFO] Computing pairs from visual similarity.
[2023/01/18 22:42:52 lamar.tasks.feature_extraction INFO] Extraction local features netvlad for session query_val_phone.
[2023/01/18 22:42:52 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'netvlad'}, 'preprocessing': {'resize_max': 640}}
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 396/396 [00:13<00:00, 30.41it/s]
[2023/01/18 22:43:09 hloc INFO] Finished exporting features.
[2023/01/18 22:43:09 lamar.tasks.feature_extraction INFO] Extraction local features netvlad for session map.
[2023/01/18 22:43:09 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'netvlad'}, 'preprocessing': {'resize_max': 640}}
[2023/01/18 22:43:20 hloc INFO] Skipping the extraction.
[2023/01/18 22:43:27 lamar.tasks.feature_extraction INFO] Extraction local features ap-gem for session query_val_phone.
[2023/01/18 22:43:27 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'dir'}, 'preprocessing': {'resize_max': 640}}
=> loading checkpoint '/.cache/torch/hub/dirtorch/Resnet-101-AP-GeM.pt' (current_iter 296)
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 396/396 [00:26<00:00, 14.98it/s]
[2023/01/18 22:43:54 hloc INFO] Finished exporting features.
[2023/01/18 22:43:54 lamar.tasks.feature_extraction INFO] Extraction local features ap-gem for session map.
[2023/01/18 22:43:54 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'dir'}, 'preprocessing': {'resize_max': 640}}
[2023/01/18 22:44:05 hloc INFO] Skipping the extraction.
[2023/01/18 22:44:11 lamar.tasks.feature_matching INFO] Matching local features with superglue for sessions (query_val_phone, map).
[2023/01/18 22:44:11 lamar.tasks.feature_matching WARNING] Existing matches will be overwritten.
[2023/01/18 22:44:11 hloc INFO] Matching local features with configuration:
{'model': {'name': 'superglue', 'sinkhorn_iterations': 5, 'weights': 'outdoor'}}
Loaded SuperGlue model ("outdoor" weights)
100%|██████████████████████████████████████████████████████████████████████████████████████████████| 3960/3960 [01:53<00:00, 34.99it/s]
[2023/01/18 22:46:05 hloc INFO] Finished exporting matches.
[2023/01/18 22:46:05 lamar.tasks.pose_estimation INFO] Localizing (single_image) session query_val_phone with features superpoint.
  0%|                                                                                                          | 0/396 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/data/project/lamar-benchmark/lamar/run.py", line 154, in <module>
    results_ = run(**args)
  File "/data/project/lamar-benchmark/lamar/run.py", line 120, in run
    pose_estimation = PoseEstimation(
  File "/data/project/lamar-benchmark/lamar/tasks/pose_estimation.py", line 109, in __init__
    self.poses = self.run(capture)
  File "/data/project/lamar-benchmark/lamar/tasks/pose_estimation.py", line 177, in run
    map_(_worker_fn, range(len(keys)))
  File "/.virtualenvs/lamar-benchmark-ScoJAjWC/lib/python3.8/site-packages/tqdm/contrib/concurrent.py", line 94, in thread_map
    return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
  File "/.virtualenvs/lamar-benchmark-ScoJAjWC/lib/python3.8/site-packages/tqdm/contrib/concurrent.py", line 76, in _executor_map
    return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
  File "/.virtualenvs/lamar-benchmark-ScoJAjWC/lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/usr/lib/python3.8/concurrent/futures/_base.py", line 619, in result_iterator
    yield fs.pop().result()
  File "/usr/lib/python3.8/concurrent/futures/_base.py", line 444, in result
    return self.__get_result()
  File "/usr/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/data/project/lamar-benchmark/lamar/tasks/pose_estimation.py", line 165, in _worker_fn
    pose, _ = estimate_camera_pose(
  File "/data/project/lamar-benchmark/lamar/utils/localization.py", line 69, in estimate_camera_pose
    matches_2d3d = recover_matches(query, ref_key_names)
  File "/data/project/lamar-benchmark/lamar/tasks/pose_estimation.py", line 119, in recover_matches_2d3d
    return recover_matches_2d3d(
  File "/data/project/lamar-benchmark/lamar/utils/localization.py", line 35, in recover_matches_2d3d
    valid, p3ds, p3d_ids = mapping.get_points3D(ref_key, matches[:, 1])
  File "/data/project/lamar-benchmark/lamar/tasks/mapping.py", line 114, in get_points3D
    if len(image.points2D) > 0:
RecursionError: maximum recursion depth exceeded while calling a Python object

I added the code below, but a segmentation fault (core dumped) occurs.

import sys
sys.setrecursionlimit(10**6)

pip list

Package                  Version     Editable project location
------------------------ ----------- ------------------------------------------------
addict                   2.4.0
astroid                  2.5
asttokens                2.2.1
attrs                    22.2.0
autopep8                 2.0.1
backcall                 0.2.0
beautifulsoup4           4.11.1
certifi                  2022.12.7
charset-normalizer       3.0.1
click                    8.1.3
comm                     0.1.2
ConfigArgParse           1.5.3
contourpy                1.0.7
coverage                 7.0.5
cycler                   0.11.0
dash                     2.7.1
dash-core-components     2.0.0
dash-html-components     2.0.0
dash-table               5.0.0
debugpy                  1.6.5
decorator                5.1.1
entrypoints              0.4
exceptiongroup           1.1.0
executing                1.2.0
fastjsonschema           2.16.2
filelock                 3.9.0
Flask                    2.2.2
fonttools                4.38.0
gdown                    4.6.0
h5py                     3.7.0
hloc                     1.3         /data/project/Hierarchical-Localization
idna                     3.4
importlib-metadata       6.0.0
importlib-resources      5.10.2
iniconfig                2.0.0
ipykernel                6.20.2
ipython                  8.8.0
ipywidgets               8.0.4
isort                    4.3.21
itsdangerous             2.1.2
jedi                     0.18.2
Jinja2                   3.1.2
joblib                   1.2.0
jsonschema               4.17.3
jupyter_client           7.4.9
jupyter_core             5.1.3
jupyterlab-widgets       3.0.5
kiwisolver               1.4.4
kornia                   0.6.9
lazy-object-proxy        1.9.0
MarkupSafe               2.1.2
matplotlib               3.6.3
matplotlib-inline        0.1.6
mccabe                   0.6.1
nbformat                 5.5.0
nest-asyncio             1.5.6
numpy                    1.24.1
nvidia-cublas-cu11       11.10.3.66
nvidia-cuda-nvrtc-cu11   11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11        8.5.0.96
open3d                   0.16.0
opencv-python            4.7.0.68
packaging                23.0
pandas                   1.5.2
parso                    0.8.3
pexpect                  4.8.0
pickleshare              0.7.5
Pillow                   9.4.0
pip                      22.3.1
pkgutil_resolve_name     1.3.10
platformdirs             2.6.2
plotly                   5.12.0
pluggy                   1.0.0
plyfile                  0.7.4
prompt-toolkit           3.0.36
psutil                   5.9.4
ptyprocess               0.7.0
pure-eval                0.2.2
pyceres                  0.0.0
pycodestyle              2.10.0
pycolmap                 0.3.0
Pygments                 2.14.0
pylint                   2.5.0
pyparsing                3.0.9
pyquaternion             0.9.9
pyrsistent               0.19.3
PySocks                  1.7.1
pytest                   7.2.1
pytest-cov               4.0.0
python-dateutil          2.8.2
pytz                     2022.7.1
PyYAML                   6.0
pyzmq                    25.0.0
requests                 2.28.2
scikit-learn             1.2.0
scipy                    1.10.0
setuptools               66.0.0
six                      1.16.0
soupsieve                2.3.2.post1
stack-data               0.6.2
tenacity                 8.1.0
threadpoolctl            3.1.0
toml                     0.10.2
tomli                    2.0.1
torch                    1.13.1
torchvision              0.14.1
tornado                  6.2
tqdm                     4.64.1
traitlets                5.8.1
typing_extensions        4.4.0
urllib3                  1.26.14
wcwidth                  0.2.6
Werkzeug                 2.2.2
wheel                    0.38.4
widgetsnbextension       4.0.5
wrapt                    1.12.1
zipp                     3.11.0

Access to point cloud data

Hello,

First of all, congrats on the development of this benchmark, really impressive!
I was wondering, do you have a timeline for roughly when we could get access to the ground-truth point cloud data?

Thanks.

Ceres version >= 2.1 required when installing python requirements on Ubuntu 20.04

Setup

Ubuntu 20.04 with hloc and COLMAP installed as per installation instructions

Problem

Running python -m pip install -r requirements/lamar.txt returns the following error:

...
CMake Error at CMakeLists.txt:10 (message):
        Ceres version >= 2.1 required.
...

However, the most recent version of Ceres packaged for Ubuntu 20.04 is 1.14.0 (Ubuntu 22.04's packaged version is also too old).

Potential Solution

Build Ceres from the latest stable release (at the time of writing, 2.1.0), making sure to install it with sudo make install, then rerun python -m pip install -r requirements/lamar.txt.
For me at least, pyceres now installs properly and LaMAR runs, although having two separate versions of Ceres on the machine could cause issues later on.
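
For reference, a typical from-source build of Ceres 2.1.0 follows the standard CMake flow (Ceres' own dependencies, e.g. Eigen and glog, must be installed first; adjust paths and job count to your machine):

git clone --branch 2.1.0 https://github.com/ceres-solver/ceres-solver.git
mkdir ceres-solver/build && cd ceres-solver/build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j8
sudo make install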

ScanCapture app export scene mesh

Thank you for the excellent app and scripts. When scanning a scene, it would help to show the real-time ARKit mesh, to avoid missing parts of the scene, and to add that mesh to the exported data.

3D Laser Scans for Mapping Session

Hello,

I noticed that the 3D laser scans used to obtain the ground-truth mapping poses weren't included in the evaluation data.

Is there a plan to release these database scans (similar to InLoc), or are we expected to reconstruct a 3D map by running a mapping algorithm on the provided mapping images and poses (e.g. COLMAP MVS)?

Thanks for the clarification.

HL2 IMU Data Release

I didn't see any HoloLens 2 IMU data in the evaluation release. Did I miss it, or is the plan to include it in the full release?

Thank you.

About 3D Models

Hello, sorry to bother you.
I would like to ask whether your dataset can be used to generate a 3D point cloud model of the scene. If so, how do I do that?

Does the trajectories.txt file contain the GT poses?

Hello, for the HGE data folder, the map folder contains the mapping data used to build the initial model.
There is a trajectories.txt file that contains camera poses. Are those poses the ground-truth data for the map model?
Are they also in metric units?

Effect of pnp_error_multiplier

Hi,

I'd like to share my findings on the pnp_error_multiplier parameter in pose estimation. I ran an experiment on the CAB dataset using fusion image retrieval (NetVLAD + AP-GeM), DISK feature extraction, and the LightGlue matcher. Using the Python Open3D library, I visualized the pose estimation results on the CAB phone query data (validation data). I found that the value of pnp_error_multiplier has a substantial effect on the pose estimation outcome: the results vary considerably depending on its value. The same happens when I run the experiments on a custom dataset.

I'm not familiar with this parameter or its impact on pose estimation. Could you kindly explain why this happens and point me to any resources that would help me learn more about it? Thank you very much.

Visualization Result - https://docs.google.com/document/d/1d9DjTCQMIn7Sf36a-cpOcsu8q-ZHK_nqcodjRF0EnYM/edit?usp=sharing

pnp_error_multiplier value (0.0005)
{'Rt_thresholds': [(1, 0.1), (5, 1.0)], 'recall': [0.020202020202020204, 0.08838383838383838]}

pnp_error_multiplier value (3.0)
{'Rt_thresholds': [(1, 0.1), (5, 1.0)], 'recall': [0.3484848484848485, 0.5]}
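
As background, a toy sketch of why such a parameter matters (illustrative only, not the LaMAR implementation; that pnp_error_multiplier scales the RANSAC reprojection threshold exactly this way is an assumption):

import numpy as np

# In RANSAC-based PnP, a candidate pose is scored by how many 2D-3D matches
# reproject within a pixel threshold. If the multiplier scales that threshold,
# it directly changes the inlier set and therefore which pose wins.
reproj_errors = np.array([0.5, 1.2, 3.0, 8.0, 25.0])  # pixels, one per match

for multiplier in (0.0005, 3.0):  # the two values reported above
    threshold = multiplier * 4.0  # hypothetical base threshold of 4 px
    inliers = reproj_errors < threshold
    print(f'multiplier={multiplier}: threshold={threshold}px, inliers={inliers.sum()}')
# A tiny multiplier leaves almost no inliers, so pose estimation becomes
# unstable; a large one admits more matches, including some outliers.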
