hjwdzh / framenet
FrameNet: Learning Local Canonical Frames of 3D Surfaces from a Single RGB Image
License: MIT License
Hi Jingwei,
Your research work, FrameNet, is very interesting.
While trying to set up the demo by running "python AttachTexture.py", I see the error below (on Ubuntu 18.04):
"OSError: ./cpp/build/libDirection.so: undefined symbol: _ZN2cv6String10deallocateEv"
So far I have tried OpenCV 4.5.4 (the latest version) and 3.4.10; both show the same error.
Could you tell me which exact version of OpenCV you have installed?
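One way to narrow this down is to demangle the unresolved symbol, which shows which OpenCV ABI the prebuilt libDirection.so expects. A minimal check, assuming binutils' c++filt is available:

```shell
# Demangle the unresolved symbol reported by dlopen:
c++filt _ZN2cv6String10deallocateEv
# -> cv::String::deallocate()
# cv::String has its own implementation only in OpenCV 3.x; in OpenCV 4.x
# it is an alias for std::string, so this symbol no longer exists there.
# Check which OpenCV version the environment actually resolves to:
python -c "import cv2; print(cv2.__version__)"
```

If the demangled name is a 3.x-only API, the error under 4.5.4 is expected; the failure under 3.4.10 may mean the loader is still picking up a different libopencv_core (worth checking LD_LIBRARY_PATH), though that part is a guess.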
I see the train-test split is not the one provided in the ScanNet dataset (neither v1 nor v2). Can you please elaborate on how this split was chosen?
Hi, I noticed that in AffineTestsDataset()
the test samples are drawn every 200 steps. I'm wondering whether the quantitative results in the paper are evaluated on the whole test split or only on the samples taken every 200 frames (322 images in total, I think)? Thanks in advance.
Lines 122 to 131 in fe5cc45
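The 322 figure can be sanity-checked arithmetically; a hypothetical sketch, assuming a test split of roughly 64,400 frames (the frame count is my assumption, not taken from the repo):

```shell
# Hypothetical check (frame count is assumed, not from the repo):
# taking every 200th frame of a ~64,400-frame split yields 322 images.
seq 0 200 64399 | wc -l
# prints 322
```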
Hi, could you describe a bit how you compute the normals for the ScanNet data?
Dear authors, thank you for your amazing work.
I tried the AR demo and found that orient-X_pred and orient-Y_pred are provided for each image (or video frame). I'm wondering what the pipeline is for preprocessing an image so that it is ready for the AR demo.
Thanks.
Hi Jingwei, I was trying to compile the renderer but kept ending up with the error below. Do you have any ideas on how I can solve this? Should I change OPENCV_INCLUDE_DIR
and OPENCV_LIBRARY_DIR?
Any suggestions would be much appreciated.
Traceback (most recent call last):
File "visualize_field.py", line 4, in <module>
import Render.render as render
File "/home/qimin/Projects/Cameralocalization/src/Render/render.py", line 4, in <module>
Render = cdll.LoadLibrary('./Render/libRender.so')
File "/home/qimin/anaconda3/envs/framenet/lib/python3.7/ctypes/__init__.py", line 442, in LoadLibrary
return self._dlltype(name)
File "/home/qimin/anaconda3/envs/framenet/lib/python3.7/ctypes/__init__.py", line 364, in __init__
self._handle = _dlopen(self._name, mode)
OSError: ./Render/libRender.so: undefined symbol: _ZN2cv7imwriteERKNS_6StringERKNS_11_InputArrayERKSt6vectorIiSaIiEE
Here is the output of pkg-config --libs opencv
-L/usr/local/lib -lopencv_stitching -lopencv_superres -lopencv_videostab -lopencv_aruco -lopencv_bgsegm -lopencv_bioinspired -lopencv_ccalib -lopencv_dnn_objdetect -lopencv_dpm -lopencv_face -lopencv_photo -lopencv_freetype -lopencv_fuzzy -lopencv_hfs -lopencv_img_hash -lopencv_line_descriptor -lopencv_optflow -lopencv_reg -lopencv_rgbd -lopencv_saliency -lopencv_stereo -lopencv_structured_light -lopencv_phase_unwrapping -lopencv_surface_matching -lopencv_tracking -lopencv_datasets -lopencv_text -lopencv_dnn -lopencv_plot -lopencv_xfeatures2d -lopencv_shape -lopencv_video -lopencv_ml -lopencv_ximgproc -lopencv_calib3d -lopencv_features2d -lopencv_highgui -lopencv_videoio -lopencv_flann -lopencv_xobjdetect -lopencv_imgcodecs -lopencv_objdetect -lopencv_xphoto -lopencv_imgproc -lopencv_core
and pkg-config --cflags opencv
-I/usr/local/include/opencv -I/usr/local/include
So I modified compile.sh as follows:
export PATH=$PATH:/usr/local/cuda-10.1/bin
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64
export OPENCV_INCLUDE_DIR=/usr/local/include
export OPENCV_LIBRARY_DIR=/usr/local/lib
export CFLAGS="-I/usr/local/include/opencv -I$OPENCV_INCLUDE_DIR"
export DFLAGS="-L$OPENCV_LIBRARY_DIR -lopencv_core -lopencv_highgui"
g++ -std=c++11 -c main.cpp $CFLAGS -O2 -o main.o -fPIC
g++ -std=c++11 main.o $CFLAGS $DFLAGS -O2 -o libRender.so -shared -fPIC
#g++ -std=c++11 main.o buffer.o loader.o render.o $CFLAGS $DFLAGS -o render -lcudart
#rm *.o
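A possible cause, offered as a guess rather than a confirmed fix: the unresolved symbol demangles to cv::imwrite, and since OpenCV 3.0 imwrite lives in the opencv_imgcodecs module, which the DFLAGS line above does not link. A sketch of the check and the adjusted flags:

```shell
# Demangle the unresolved symbol to confirm what is missing:
c++filt _ZN2cv7imwriteERKNS_6StringERKNS_11_InputArrayERKSt6vectorIiSaIiEE
# -> cv::imwrite(...). Since OpenCV 3.0 imwrite is in opencv_imgcodecs,
# but DFLAGS above links only opencv_core and opencv_highgui.
export DFLAGS="-L$OPENCV_LIBRARY_DIR -lopencv_core -lopencv_highgui -lopencv_imgcodecs -lopencv_imgproc"
# Then rerun the second g++ line from compile.sh with the new DFLAGS.
```

Linking opencv_imgproc as well is a precaution, since imgcodecs depends on it in some builds.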
This is great work!
I want to try it on some of the demo images downloaded by the script /demo/download.sh,
but I cannot get results as good as the *-orient-X/Y_pred.png
files provided in the selected
folder...
Could you provide a more convenient script to reproduce these results?
Hi,
I find that the network architecture in this code differs from the one in the paper.
The paper says that FrameNet uses DORN as its backbone, followed by a U-Net-style architecture.
But in this code there is only the DORN backbone, with no U-Net-style architecture.
Why is this?
Hi Jingwei, thanks for the amazing code. I have a few questions about the dataset and train_affine_dorn. Would it be possible to release the depth.png files? What is args.horizontal
in train_affine_dorn.py
used for? Thank you.