
complexgen's People

Contributors

guohaoxiang


complexgen's Issues

Trying to allocate about 5000 GiB

Hello!
I'm interested in your great work and am trying to run your code.
However, I'm having trouble with the following error, which says I'm trying to allocate about 5000 GiB.
That number seems far too large.
Do you have any idea what might cause this, e.g. something about the data or model size?

Environment:

  • Docker image: pytorch/pytorch:1.7.0-cuda11.0-cudnn8-devel
  • GPU: GeForce RTX 3090, 24GB
root@oucyz:/workspace# scripts/train_small.sh
not detected /blob directory, execute locally
Utilize 1 gpus
/root/.local/lib/python3.8/site-packages/MinkowskiEngine/__init__.py:36: UserWarning: The environment variable `OMP_NUM_THREADS` not set. MinkowskiEngine will automatically set `OMP_NUM_THREADS=16`. If you want to set `OMP_NUM_THREADS` manually, please export it on the command line before running a python script. e.g. `export OMP_NUM_THREADS=12; python your_program.py`. It is recommended to set it below 24.
  warnings.warn(
using instance norm
2022-11-07 09:07:10.839856: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-07 09:07:11.057767: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-11-07 09:07:11.771248: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2022-11-07 09:07:11.771496: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2022-11-07 09:07:11.771546: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
not detected /blob directory, execute locally
Utilize 1 gpus
using instance norm
2022-11-07 09:07:13.650011: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-07 09:07:13.897804: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-11-07 09:07:14.678678: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2022-11-07 09:07:14.678788: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2022-11-07 09:07:14.678800: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
load data from data/train_small
packed pkl folder detected, will load from packed pkl file
Successfully Loaded from 19 files:19
max number of corners in single sample: 32
2 curves at least
448 valid curves total
250 valid corners total
225 patches total
min and max points in single patch: 512 512
0 open shapes
squared curve length statistics: 448 3.682233176771585e-05 9.269843007646973 0.3276165163696903
patch area statistics: 225 0.000722008498996729 1.3858339398102544 0.1676870428241
normal is included in input signal
load data from data/train_small
packed pkl folder detected, will load from packed pkl file
Successfully Loaded from 19 files:19
max number of corners in single sample: 32
2 curves at least
448 valid curves total
250 valid corners total
225 patches total
min and max points in single patch: 512 512
0 open shapes
squared curve length statistics: 448 3.682233176771585e-05 9.269843007646973 0.3276165163696903
patch area statistics: 225 0.000722008498996729 1.3858339398102544 0.1676870428241
normal is included in input signal
number of params: 22057152 87052323
Try to restore from checkpoint
  0%|                                                                                                                                                                                                                   | 0/5 [00:00<?, ?it/s]Start Training
train data size 19
/workspace/data_loader_abc.py:248: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points2sparse_voxel" failed type inference due to: No implementation of function Function(<function norm at 0x7fb7228699d0>) found for signature:

 >>> norm(array(float32, 2d, A), axis=Literal[int](1), keepdims=Literal[bool](True))

There are 2 candidate implementations:
  - Of which 2 did not match due to:
  Overload in function 'norm_impl': File: numba/np/linalg.py: Line 2351.
    With argument(s): '(array(float32, 2d, A), axis=int64, keepdims=bool)':
   Rejected as the implementation raised a specific error:
     TypingError: got an unexpected keyword argument 'axis'
  raised from /root/.local/lib/python3.8/site-packages/numba/core/typing/templates.py:791

During: resolving callee type: Function(<function norm at 0x7fb7228699d0>)
During: typing of call at /workspace/data_loader_abc.py (255)


File "data_loader_abc.py", line 255:
def points2sparse_voxel(points_with_normal, voxel_dim, feature_type, with_normal, pad1s):
    <source elided>
    voxel_coord = np.clip(np.floor(points / voxel_length).astype(np.int32), 0, voxel_dim-1)
    points_normal_norm = linalg.norm(points_with_normal[:,3:], axis=1, keepdims=True)
    ^

  @numba.jit()
/root/.local/lib/python3.8/site-packages/numba/core/object_mode_passes.py:151: NumbaWarning: Function "points2sparse_voxel" was compiled in object mode without forceobj=True.

File "data_loader_abc.py", line 249:
@numba.jit()
def points2sparse_voxel(points_with_normal, voxel_dim, feature_type, with_normal, pad1s):
^

  warnings.warn(errors.NumbaWarning(warn_msg,
/root/.local/lib/python3.8/site-packages/numba/core/object_mode_passes.py:161: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "data_loader_abc.py", line 249:
@numba.jit()
def points2sparse_voxel(points_with_normal, voxel_dim, feature_type, with_normal, pad1s):
^

  warnings.warn(errors.NumbaDeprecationWarning(msg,
  0%|                                                                                                                                                                                                                   | 0/5 [00:05<?, ?it/s]
Traceback (most recent call last):
  File "Minkowski_backbone.py", line 4582, in <module>
    mp.spawn(pipeline_abc,
  File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
    while not context.join():
  File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 118, in join
    raise Exception(msg)
Exception:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/workspace/Minkowski_backbone.py", line 4370, in pipeline_abc
    patch_loss_dict, patch_matching_indices = patch_loss_criterion(patch_predictions, target_patches_list)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/workspace/Minkowski_backbone.py", line 2532, in forward
    losses.update(self.get_loss(loss, outputs, targets, indices, num_corners))
  File "/workspace/Minkowski_backbone.py", line 2495, in get_loss
    return loss_map[loss](outputs, targets, indices, num_patches, **kwargs)
  File "/workspace/Minkowski_backbone.py", line 2220, in loss_geometry
    loss_geom[uclose_id] = emd_by_id(target_patch_points_batch[uclose_id], src_patch_points[uclose_id], self.emd_idlist_u, points_per_patch_dim)
RuntimeError: CUDA out of memory. Tried to allocate 4966.70 GiB (GPU 0; 23.69 GiB total capacity; 9.58 GiB already allocated; 12.36 GiB free; 9.62 GiB reserved in total by PyTorch)

oucyz
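
A back-of-envelope check of what an allocation this size implies (a sketch; the tensor shapes inside emd_by_id are not visible in the log, so the [B, N, M] interpretation below is an assumption):

# How many float32 elements does the requested 4966.70 GiB correspond to?
bytes_requested = 4966.70 * 1024**3
n_elements = bytes_requested / 4          # float32 is 4 bytes
print(f"{n_elements:.2e}")                # ~1.33e12 elements

# A pairwise cost matrix of shape [B, N, M], as an EMD computation might
# build, reaches this size when B * N * M ~ 1.33e12 -- far beyond any
# plausible batch of 512-point patches, which points to an unintended
# broadcast. Printing the shapes of target_patch_points_batch and
# src_patch_points at Minkowski_backbone.py:2220 should narrow it down.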

About generating the ground truth data.

Hi and thank you for the great work!

To create a new dataset, I'm wondering how to sample from the B-rep surfaces (which I get from .step files). I've read through the codebases of ABCDataset and ParseNet but found nothing.

Hoping for your reply!

Data preprocessing codes

Dear Authors,

Many thanks for this excellent work and for providing the code. I would like to know whether you are planning to release the code for preparing the data from the original STEP and mesh files of the ABC dataset. I would like to train on another dataset, and I suspect that without this code it would not be possible.

Many thanks in advance!

About data cleaning

Hi and thank you for the great work!
May I ask if you could provide the code you used in the data cleaning process? I saw the data cleaning steps in the supplementary materials, but I don't understand how some operations are implemented, such as "Parametric representations of patches do not match their discrete mesh representation" or "Over-segmentation of patches (curves) by junctions whose neighboring patches (curves) have identical types and geometry".

A ValueError in visualization of .complex file

Thanks for your great work!

I've come across a problem running your code. When I execute "./scripts/extraction_default.sh", a ValueError is raised reporting an incorrect dimension.
[screenshot 2022-12-12 133852]
I'd really like to know how to solve this; looking forward to your help.

Preprocessing point cloud ply file into pkl file

Hi and thank you for your great work!

I recently came across your work and am trying to build upon your code.

For testing, I want to run the model on my own point cloud .ply files, but the dataset loader seems to require preprocessed .pkl files.

Could you tell me how to make a .pkl file from a point cloud .ply file?

Regards,
Eunji.
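
A minimal sketch of one way to pack a .ply into the expected .pkl layout, based on the list-of-dicts format described in #3 and the threads below. The exact set of required keys and the normalization convention are assumptions to verify against data_loader_abc.py:

import pickle
import numpy as np
from plyfile import PlyData

ply = PlyData.read("my_cloud.ply")        # hypothetical input file
v = ply["vertex"]
points = np.stack([v["x"], v["y"], v["z"]], axis=1).astype(np.float32)

# Center and scale into [-0.5, 0.5]^3. A single scale factor is used here;
# per-axis scaling is the alternative debated in the normalization issue below.
points -= (points.min(0) + points.max(0)) / 2
points /= (points.max(0) - points.min(0)).max()

# List-of-dicts layout per #3; 'filename' is the only other key the loader
# is reported to touch (and indexing it on a list is what #7 trips over).
data_list = [{"filename": "my_cloud", "surface_points": points}]
with open("packed_my_cloud.pkl", "wb") as f:
    pickle.dump(data_list, f)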

Questions about inputting my own point cloud

Hi, haoxiang,

thanks for sharing such great work. I'm trying to follow what you mentioned in #3 to input my own point cloud, and I've run into some problems.

I got the same error after dumping the data_list into the pkl file as Eusford did in #7:
[image]

As you said, the data in my own pkl file should be a list, not a dictionary. However, when data_loader_abc.py loads my pkl file, the script tries to index it with "filename", as the error shows. If the data is a list, the index should be a number rather than the string "filename"; I think this is why Eusford and I hit the same error. Am I approaching this the wrong way?

Then I tried your train_small/*.pkl file and changed only "surface_points" to my own point cloud, since you said only this part is used for inference. But when I visualized the prediction output, I found the old "curves" and "patches" still affecting my point cloud; the output is shown below:
[image]
So is it correct to replace only "surface_points" and keep the old "curves" and "patches"?

I'd appreciate it if you could help me with these two questions!
Best regards,
Enyang Feng
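
A minimal sketch of the replacement step discussed in this thread (the paths are illustrative; whether the stale 'curves'/'patches' ground-truth fields leak into inference is exactly the open question here):

import pickle
import numpy as np

# Load one of the provided packed pkl files and swap in our own points.
with open("data/train_small/packed_000000.pkl", "rb") as f:   # illustrative path
    data_list = pickle.load(f)

my_points = np.load("my_points.npy").astype(np.float32)       # (N, 3), pre-normalized

# Replace only the input signal. 'curves' and 'patches' are ground-truth
# fields used by the training losses; for pure inference they should be
# irrelevant, but as this thread shows, that is worth verifying.
data_list[0]["surface_points"] = my_points

with open("data/train_small/packed_mine.pkl", "wb") as f:
    pickle.dump(data_list, f)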

Download link

Hello! I was very excited to read your paper. It is awesome work!
However, I'm having trouble downloading from pan.baidu: it doesn't seem to work outside China. It would be great if you could provide an additional link on another cloud storage service (for example Google Drive), so the data is accessible from other countries as well.
Thank you!

Gurobi-less version

Is it possible to make a Gurobi-less version, since Gurobi is only freely available for academic research? This doesn't fit well with your open-sourcing of the code. (By the way, thanks for that.)

Input .ply format in "Phase 1: ComplexNet prediction"

Hello, I'm trying to run your code.

I built the environment following the configuration in README.md and successfully ran "Phase 1: ComplexNet prediction". I noticed that in the provided test script, the network input is in .pkl format, which contains information beyond the .ply point cloud (line, patch_points, etc.). Is input in .ply format supported? (I noticed that your output_data includes _input.ply, but I didn't find this part in the code.)

Tips on improving partial results

Hello,

I would like to know if there are any tips for improving our results with this model.

[image]

[image]

I have followed the steps as we discussed yesterday in #12, but using the partial checkpoint since our data is incomplete.

[image]
[image]

I would like to know if this is the best we can do without training. I know that trimming isn't performed due to numerical instability as discussed in a previous issue.

Also note that I was not able to perform the geometric refinement step.

cedric@BOKCHOY:~/Programming/complexgen$ python scripts/geometric_refine.py
00953239_0_extraction.complex
corner: 27 curve: 45 patch: 21
patch close: 0
patch close: 1
patch close: 0
patch close: 0
id: 3 flip uv: 0
patch close: 0
patch close: 0
patch close: 0
patch close: 0
patch close: 0
patch close: 0
patch close: 0
patch close: 1
patch close: 1
patch close: 0
patch close: 0
patch close: 0
patch close: 1
patch close: 0
id: 17 flip uv: 0
patch close: 0
patch close: 1
patch close: 0
terminate called after throwing an instance of 'std::runtime_error'
  what():  PLY parser: Could not open file data/partial/test_point_clouds/00953239_0_10000.ply
Aborted (core dumped)

So I just skipped this step. (I made very sure I ran test_partial.sh and then extraction_partial.sh.) I had fixed this error before by downloading the .ply files from the OneDrive, but they are not available for the partial/ folder.

I have attached our original .ply, the point cloud inside a .pkl, and the results in a .zip:
to_complexgen.zip

Where is the file packed_000000.pkl_10000.ply used in GeometricRefinement?

I followed the steps to get this default object using the pre-trained model. I'm stuck at the step

python .\scripts\geometric_refine.py

as it requires a point cloud:

what(): PLY parser: Could not open file data/default/test_point_clouds/packed_000000.pkl_10000.ply

If I ignore this step I can still get the .obj, just of lower quality.

[image]

[image]

This is the line I ran to get the .obj file:
python gen_vis_result.py -i ../experiments/default/test_obj/packed_000000.pkl_extraction.complex

I expected to find a file named packed_000000.pkl_10000.ply somewhere in the .zip from OneDrive, but I cannot find this file.

In the folder default\data\default\test_point_clouds\, sorted alphabetically, I can find

[image]

And in reverse alphabetical order

[image]

Do you have instructions on how to obtain this file?

packed_000000.pkl_extraction.obj.zip
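
The filename suggests geometric_refine.py expects a 10000-point sampling of the input cloud under data/default/test_point_clouds/. A hedged sketch of one way to generate such a file from the packed .pkl, assuming the script only needs raw xyz positions (unverified) and that the key names match the format discussed in #3:

import pickle
import numpy as np

with open("data/default/packed_000000.pkl", "rb") as f:   # illustrative path
    points = np.asarray(pickle.load(f)[0]["surface_points"], dtype=np.float32)

# Sample exactly 10000 points (with replacement if the cloud is smaller),
# as the expected filename packed_000000.pkl_10000.ply suggests.
idx = np.random.choice(len(points), 10000, replace=len(points) < 10000)
pts = points[idx]

with open("data/default/test_point_clouds/packed_000000.pkl_10000.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(pts)}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("end_header\n")
    for x, y, z in pts:
        f.write(f"{x} {y} {z}\n")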

About how to make patches trimmed in visualization

Hi and thank you for the great work!
I have a question about "Phase 3: Geometric refinement" and visualization. The article says:

Model visualization. The optimized B-Rep model can be converted to specific formats for CAD software consumption. To visualize the whole models, we develop a simple procedure that extracts mesh models from the B-Rep chain complex: we use curve loops to cut their incident patches triangulated and obtain a collection of trimmed patches. The final patches, curves and corners form a mesh model that follows the predicted topology and fits to the input geometry. Examples of diverse complexities are shown in Figs. 1, 6, 8 and 10 and the supplemental document.

As shown in the figure below, after I run the third stage, the edges of some patches have not been cut in my visualization results. You can see that the two planes at the center and bottom of the nut have excess parts.
[image]

I want to know how to get the same result as shown in your paper.
[image]

Thanks!
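
As a rough illustration of the trimming idea in the quoted passage (not the authors' actual procedure, just a simplified stand-in): in a patch's UV parameterization, triangles whose centroids fall outside the closed curve loop can be discarded, leaving a trimmed patch.

import numpy as np
from matplotlib.path import Path

def trim_patch(uv_vertices, triangles, loop_uv):
    """Keep only triangles whose centroid lies inside the closed curve loop.

    uv_vertices: (V, 2) patch vertices in UV space
    triangles:   (T, 3) vertex indices
    loop_uv:     (L, 2) closed boundary loop in the same UV space
    """
    centroids = uv_vertices[triangles].mean(axis=1)      # (T, 2)
    inside = Path(loop_uv).contains_points(centroids)    # (T,) bool mask
    return triangles[inside]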

Instructions link on how to build the geometric refinement project is invalid

In the README you mention a link to instructions on how to build the geometric refinement part on Linux.
However, the link provided isn't valid anymore; the page is not found.

Could you please update the link, or the README itself, with instructions on how to compile under Linux?

Thanks a lot!

sphere_94.obj

Hello!
I'm trying to visualize the "output of each phase" files. In gen_vis_result.py, inside complexjson_to_obj, load_obj_simple is called to load the file sphere_94.obj (line 250). I could not find this file anywhere, and without it I cannot visualize the json files from your outputs. What can be done?
P.S. 1: At lines 291 and 298 of gen_vis_result.py, I got the error "AttributeError: 'list' object has no attribute 'shape'".
P.S. 2: When building the Docker image for Phase 1, the apt-get update command may fail (line 4). To solve this, the command "rm /etc/apt/sources.list.d/cuda.list" can be used, as described in NVIDIA/nvidia-docker#619.
Thanks, Vage

Is the input model normalized on each axis individually?

Thanks for sharing the code with us. I have a question about the normalization process.

I saw that the input point cloud is normalized into [-0.5, 0.5]^3 along each axis individually. Does that make sense? The point cloud after normalization will be somewhat flattened; for example, a cylinder might become an elliptical cylinder.

However, the network still predicts it as a cylinder, even though it cannot know the actual shape before normalization. Does that mean that no matter how elliptical the input point cloud is, the network will always predict a cylinder?
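
For concreteness, the two conventions being contrasted (a sketch; which one the pipeline actually applies is the question):

import numpy as np

def normalize_per_axis(points):
    # Each axis is scaled to [-0.5, 0.5] independently: a shape whose
    # bounding box is not a cube gets squashed, so a cylinder can become
    # an elliptical cylinder.
    lo, hi = points.min(0), points.max(0)
    return (points - (lo + hi) / 2) / (hi - lo)

def normalize_uniform(points):
    # One scale factor for all axes: the shape fits in [-0.5, 0.5]^3 but
    # keeps its aspect ratio, so a cylinder stays circular.
    lo, hi = points.min(0), points.max(0)
    return (points - (lo + hi) / 2) / (hi - lo).max()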

Different point clouds generate the same obj file

Hi Haoxiang, I tested my point clouds using your work, but found that all input point clouds turn into the same *.obj file.

obj.zip

I guess it's because of the method you mentioned in issue #3, which I also have a question about. It says to replace the content of data_list[0]['surface_points'] with my own point cloud, but the other fields in data_list[0] (curves, patches and filename) are not covered, so I didn't modify those parts. I wonder whether my incorrect outputs are related to this, and how to deal with the problem.

Thanks!
