
Comments (4)

HHHendricks commented on June 28, 2024

I am getting the following error:
F1104 16:50:47.200454 1123 sfm.cc:473] Check failed: bundle_adjuster.Solve(&reconstruction)
*** Check failure stack trace: ***
@ 0x7fd90184d0cd google::LogMessage::Fail()
@ 0x7fd90184ef33 google::LogMessage::SendToLog()
@ 0x7fd90184cc28 google::LogMessage::Flush()
@ 0x7fd90184f999 google::LogMessageFatal::~LogMessageFatal()
@ 0x55760f8ec2dd (unknown)
@ 0x55760f8ec886 (unknown)
@ 0x55760f8a0573 (unknown)
@ 0x7fd8fd2fcc87 __libc_start_main
@ 0x55760f8aa44a (unknown)
Aborted (core dumped)
colmap model_converter --input_path=/home/jxh/dense_depth_priors_nerf-master/scannet_dir/recon/sparse_train/0 --output_path=/home/jxh/dense_depth_priors_nerf-master/scannet_dir/recon/sparse_train/0 --output_type=TXT
F1104 16:50:47.313038 1128 reconstruction.cc:745] cameras, images, points3D files do not exist at /home/jxh/dense_depth_priors_nerf-master/scannet_dir/recon/sparse_train/0
*** Check failure stack trace: ***
@ 0x7fd3284700cd google::LogMessage::Fail()
@ 0x7fd328471f33 google::LogMessage::SendToLog()
@ 0x7fd32846fc28 google::LogMessage::Flush()
@ 0x7fd328472999 google::LogMessageFatal::~LogMessageFatal()
@ 0x55a43e0571cd (unknown)
@ 0x55a43df9e68e (unknown)
@ 0x55a43df73573 (unknown)
@ 0x7fd323f1fc87 __libc_start_main
@ 0x55a43df7d44a (unknown)
Aborted (core dumped)
May I ask how to resolve this?


barbararoessle commented on June 28, 2024

Hi, it can be difficult to run SfM on very few images as in the example scenes. What usually works for me is to run COLMAP in a loop until all images have been registered. Still, it is important to visually check that the resulting camera poses are plausible (one way to do this is shown in the note after the script).
As a reference, I attach a script that first runs SfM on all images (train and test) to obtain camera poses. Then, it repeats feature matching and point triangulation on the train images alone to get the sparse reconstruction for NeRF optimization.
The script should lie in a directory containing images_all and images_train, where images_all contains all train and test images and images_train contains just the train images.
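For orientation, a sketch of the directory layout the script expects and produces (the script filename is arbitrary; the recon subdirectories are created by the script itself):

<data_dir>/
    run_colmap.py                (this script, any name)
    images_all/                  (all train and test images)
    images_train/                (train images only)
    recon/                       (created by the script)
        db_all.db, db.db
        sparse/0, sparse_y_down/0, sparse_z_up/0
        constructed_sparse_train/0
        sparse_train/0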

import os
import shutil
import subprocess

import sqlite3

def list_missing_rgb(rgb_dir, sparse_dir):
    expected_files = os.listdir(rgb_dir)
    found_files = []
    for line in open(os.path.join(sparse_dir, "images.txt")):
        for f in expected_files:
            if " " + f in line:
                found_files.append(f)
                break
    print("Missing: ")
    for exp_f in expected_files:
        if exp_f not in found_files:
            print(exp_f)

data_dir = os.path.dirname(os.path.abspath(__file__))
verbose = False
rgb_all_dir = os.path.join(data_dir, "images_all")
rgb_train_dir = os.path.join(data_dir, "images_train")

success = False

# run colmap sfm in a loop on all images (train and test) until all images are successfully registered
while not success:
    # delete previous failed reconstruction
    recon_dir = os.path.join(data_dir, "recon")
    if os.path.exists(recon_dir):
        shutil.rmtree(recon_dir)

    # run colmap with all images creating database db_all.db
    db_all = os.path.join(recon_dir, "db_all.db")
    sparse_dir = os.path.join(recon_dir, "sparse")
    os.makedirs(sparse_dir, exist_ok=True)
    extract_cmd = "colmap feature_extractor  --database_path {} --image_path {} --ImageReader.single_camera 1 --ImageReader.camera_model SIMPLE_PINHOLE".format(db_all, rgb_all_dir)
    match_cmd = "colmap exhaustive_matcher --database_path {}  --SiftMatching.guided_matching 1".format(db_all)
    mapper_cmd = "colmap mapper --database_path {} --image_path {} --output_path {} --Mapper.multiple_model 0".format(db_all, rgb_all_dir, sparse_dir)
    sparse_dir = os.path.join(sparse_dir, "0")
    convert_cmd = "colmap model_converter --input_path={} --output_path={} --output_type=TXT".format(sparse_dir, sparse_dir)
    colmap_cmds = [extract_cmd, match_cmd, mapper_cmd, convert_cmd]

    number_input_images = len(os.listdir(rgb_all_dir))

    for cmd in colmap_cmds:
        process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
        for line in process.stdout:
            if verbose:
                print(line)
        process.wait()

    # check completeness of reconstruction
    number_lines = sum(1 for line in open(os.path.join(sparse_dir, "images.txt")))
    number_reconstructed_images = (number_lines - 4) // 2 # 4 lines of comments, 2 lines per reconstructed image
    print("Expect {} images in the reconstruction, got {}".format(number_input_images, number_reconstructed_images))
    if number_input_images == number_reconstructed_images:
        success = True
    else:
        list_missing_rgb(rgb_all_dir, sparse_dir)

# transform the reconstruction such that z-axis points up
sparse_dir = os.path.join(recon_dir, "sparse", "0")
in_sparse_dir = sparse_dir
out_sparse_dir = os.path.join(recon_dir, "sparse{}".format("_y_down"), "0")
os.makedirs(out_sparse_dir, exist_ok=True)
align_cmd = "colmap model_orientation_aligner --input_path={} --output_path={} --image_path={} --max_image_size={}".format(in_sparse_dir, out_sparse_dir, rgb_all_dir, 640)
in_sparse_dir = out_sparse_dir
out_sparse_dir = os.path.join(recon_dir, "sparse{}".format("_z_up"), "0")
os.makedirs(out_sparse_dir, exist_ok=True)
trafo_cmd = "colmap model_transformer --input_path={} --output_path={} --transform_path=/home/barbara/data/y_down_to_z_up.txt".format(in_sparse_dir, out_sparse_dir)
convert_cmd = "colmap model_converter --input_path={} --output_path={} --output_type=TXT".format(out_sparse_dir, out_sparse_dir)
colmap_cmds = [align_cmd, trafo_cmd, convert_cmd]
for cmd in colmap_cmds:
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    for line in process.stdout:
        if verbose:
            print(line)
    process.wait()

# extract features of train images into database db.db
db = os.path.join(recon_dir, "db.db")
extract_cmd = "colmap feature_extractor  --database_path {} --image_path {} --ImageReader.single_camera 1 --ImageReader.camera_model SIMPLE_PINHOLE".format(db, rgb_train_dir)
process = subprocess.Popen(extract_cmd, shell=True, stdout=subprocess.PIPE)
for line in process.stdout:
    if verbose:
        print(line)
process.wait()

# copy sparse reconstruction from all images
constructed_sparse_train_dir = os.path.join(recon_dir, "constructed_sparse_train", "0")
os.makedirs(constructed_sparse_train_dir, exist_ok=True)
camera_txt = os.path.join(constructed_sparse_train_dir, "cameras.txt")
images_txt = os.path.join(constructed_sparse_train_dir, "images.txt")
points3D_txt = os.path.join(constructed_sparse_train_dir, "points3D.txt")
shutil.copyfile(os.path.join(out_sparse_dir, "cameras.txt"), camera_txt)
open(images_txt, 'a').close()
open(points3D_txt, 'a').close()

# keep poses of the train images in images.txt and adapt their id to match the id in database db.db
train_files = os.listdir(rgb_train_dir)
db_cursor = sqlite3.connect(db).cursor()
name2dbid = dict((n, id)  for n, id in db_cursor.execute("SELECT name, image_id FROM images"))
with open(os.path.join(out_sparse_dir, "images.txt"), 'r') as in_f:
    in_lines = in_f.readlines()
for line in in_lines:
    split_line = line.split(" ")
    line_to_write = None
    if "#" in split_line[0]:
        line_to_write = line
    else:
        for train_file in train_files:
            if " " + train_file in line:
                db_id = name2dbid[train_file]
                split_line[0] = str(db_id)
                line_to_write = " ".join(split_line) + "\n"
                break
    if line_to_write is not None:
        with open(images_txt, 'a') as out_f:
            out_f.write(line_to_write)

# run exhaustive matcher and point triangulator on the train images
match_cmd = "colmap exhaustive_matcher --database_path {}  --SiftMatching.guided_matching 1".format(db)
sparse_train_dir = os.path.join(recon_dir, "sparse_train", "0")
os.makedirs(sparse_train_dir, exist_ok=True)
triangulate_cmd = "colmap point_triangulator --database_path {} --image_path {} --input_path {} --output_path {}".format(db, rgb_train_dir, \
    constructed_sparse_train_dir, sparse_train_dir)
convert_cmd = "colmap model_converter --input_path={} --output_path={} --output_type=TXT".format(sparse_train_dir, sparse_train_dir)
colmap_cmds = [match_cmd, triangulate_cmd, convert_cmd]
for cmd in colmap_cmds:
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    for line in process.stdout:
        if verbose:
            print(line)
    process.wait()
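As mentioned above, it is important to check visually that the registered camera poses are plausible. One way to do this (a sketch, assuming a COLMAP build with the GUI enabled; paths follow the naming used in the script) is to import the final model into the COLMAP GUI from the data directory:

colmap gui --database_path recon/db_all.db --image_path images_all --import_path recon/sparse_z_up/0

The same can be done for the train-only reconstruction by pointing --database_path at recon/db.db, --image_path at images_train and --import_path at recon/sparse_train/0.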

Attachment: y_down_to_z_up.txt (the transform file referenced by the model_transformer command above)


nanxiangriluo commented on June 28, 2024

Hi, thanks for sharing the code! I want to ask for details about generating poses. I fail to obtain poses for some images when running COLMAP on each scene with train and test images together. Is that because the feature extractor should be run on all ScanNet images? If so, can you share the scannet_sift_database.db? Otherwise, can you provide the code for generating camera poses? Many thanks!

Have you solved this problem?


loveaca-sunlight commented on June 28, 2024

(quoting the first comment) I am getting the following error: F1104 16:50:47.200454 1123 sfm.cc:473] Check failed: bundle_adjuster.Solve(&reconstruction) ... May I ask how to resolve this?

I am running into the same problem, have you solved it yet?
PS: Sorry, I checked your GitHub account. Are you a Chinese researcher? If so, could we connect on WeChat (18852056170) to discuss?

