
dfc2019's Issues

about track4 pointnet2 training data

Hello,
I retrained pointnet2 with the provided code on the 99 training point clouds produced by create_train_dataset.py and tested on the 11 validation point clouds produced by the same script, but my result is about 6% lower than that of the provided trained model. Does the baseline use a different train/validation split?

Thank you very much!

Track 1-3 Metrics

A memory error is preventing the 3D component of the Track 1-3 metrics from being released. Updated metrics are on the way.

Unrectified images and corrected RPC for Track2

Where can I get access to unrectified images and corrected RPC or affine camera models?

Page 4 of the publication states that "Metadata such as RPC, epipolar rectifying homographies, and collection dates are retained for each stereo pair", but I am not able to find RPC parameters in the METADATA.json files or in any of the .tif files.

Where can I download the whole dataset, please?

Hello, it has been several days since the training and validation datasets were released, but I still have not been able to download them, even though I registered on the official website to make the links visible, as instructed. For the first few days, the links contained in the BT (torrent) files appeared to be invalid, since the resources could not be reached, and today the links to the BT files and the network disks are gone from the official page entirely.

Given this, could you please point me to another way to download the dataset? Apologies for any confusion caused by my English.

Including RGB in the .las

@pubgeo I am using this version of Pointnet++, and my dataset contains xyzRGB values. However, when I try to write the RGB values into the .las file using laspy, it does not work.
Can you help, please?
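
For reference, the following is a minimal, untested sketch of writing RGB into a LAS file with laspy 2.x. It assumes a point format that carries colour fields (e.g. point format 3) and 8-bit RGB values that must be scaled up to the 16-bit LAS colour channels; the array names are placeholders, not part of the dfc2019 code.

    import numpy as np
    import laspy

    # Placeholder data: xyz is (N, 3) float, rgb is (N, 3) 8-bit colour.
    xyz = np.random.rand(100, 3)
    rgb = np.random.randint(0, 256, size=(100, 3))

    # Point format 3 (LAS 1.2) includes red/green/blue fields.
    header = laspy.LasHeader(point_format=3, version="1.2")
    las = laspy.LasData(header)

    las.x, las.y, las.z = xyz[:, 0], xyz[:, 1], xyz[:, 2]

    # LAS colour channels are 16-bit, so scale the 8-bit values up.
    las.red = rgb[:, 0].astype(np.uint16) * 256
    las.green = rgb[:, 1].astype(np.uint16) * 256
    las.blue = rgb[:, 2].astype(np.uint16) * 256

    las.write("points_with_rgb.las")

If the file was created with a point format that has no colour fields (e.g. format 0 or 1), laspy will not carry red/green/blue attributes, which is one common reason RGB cannot be written.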

AttributeError: module 'tensorflow._api.v2.image' has no attribute 'resize_bilinear'

After setting up anaconda with conda env create --name dfc2019 --file=dev-gpu.yml
and running

conda activate dfc2019
cd track3
python ./mvs/test-mvs.py

I receive the following error message:

Traceback (most recent call last):
  File "./mvs/test-mvs.py", line 693, in <module>
    predictor.build_seg_model(seg_weights_file)
  File "./mvs/test-mvs.py", line 103, in build_seg_model
    self.seg_model = build_icnet(self.height, self.width, self.bands, self.num_categories + 1,
  File "/mnt/Data-512GB/libraries_ml_geo/dfc2019/dfc2019/track3/mvs/model_icnet.py", line 52, in build_icnet
    y = Lambda(lambda x: tf.image.resize_bilinear(x, size=(int(x.shape[1])//2, int(x.shape[2])//2)), name='data_sub2')(inp)
  File "/home/sebastian/miniconda3/envs/dfc2019/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 922, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/home/sebastian/miniconda3/envs/dfc2019/lib/python3.8/site-packages/tensorflow/python/keras/layers/core.py", line 888, in call
    result = self.function(inputs, **kwargs)
  File "/mnt/Data-512GB/libraries_ml_geo/dfc2019/dfc2019/track3/mvs/model_icnet.py", line 52, in <lambda>
    y = Lambda(lambda x: tf.image.resize_bilinear(x, size=(int(x.shape[1])//2, int(x.shape[2])//2)), name='data_sub2')(inp)
AttributeError: module 'tensorflow._api.v2.image' has no attribute 'resize_bilinear'
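
tf.image.resize_bilinear was removed from the TensorFlow 2.x API, so the Lambda layer in model_icnet.py fails under a TF 2 environment. This is not an official fix from the repo, but one common workaround is to call tf.image.resize with bilinear interpolation instead (tf.compat.v1.image.resize_bilinear is another option). A minimal sketch, assuming a fixed input size as in the original model:

    import tensorflow as tf
    from tensorflow.keras.layers import Input, Lambda

    # Fixed spatial size, as in the original model where height/width are known.
    inp = Input(shape=(1024, 1024, 3))

    # TF 2.x replacement for tf.image.resize_bilinear: tf.image.resize with
    # method="bilinear" halves the spatial resolution, as in the data_sub2 layer.
    y = Lambda(
        lambda x: tf.image.resize(
            x, size=(int(x.shape[1]) // 2, int(x.shape[2]) // 2),
            method="bilinear"),
        name="data_sub2")(inp)

Note that the two ops are not guaranteed to produce bit-identical output, so results with pretrained weights may differ slightly.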

An error in Track4

interface.py needs to "import provider", but in the Docker image the Python version is 3.5, which is below the minimum requirement (Python >= 3.6) of "provider". Has anyone else run into the same problem?

Weight files download

When I run git lfs clone on this repo, the result is:
This repository is over its data quota. Account responsible for
LFS bandwidth should purchase more data packs to restore access.

Is there any other way to download the weight file? Thank you!

ValueError('all input arrays must have the same shape') in create_train_dataset.py

I am trying to run pointnet2 on my own dataset, which has a '_PC3.txt' file with xyzRGB values and a corresponding '_CLS.txt' file. I get an error like this when I run create_train_dataset.py:
[screenshot: ValueError: all input arrays must have the same shape]
I checked the shapes of the CLS and PC3 files and they are the same. Is this a problem caused by multi-threading? If so, how can I fix it?
I also tried in a Linux environment, but it gave the same error.
Please help.
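
"all input arrays must have the same shape" is the error numpy raises when np.stack is given arrays whose shapes do not all match, so the mismatch is likely between the per-block arrays being combined rather than between the PC3 and CLS files as a whole. A minimal, standalone illustration (not code from create_train_dataset.py):

    import numpy as np

    # Two blocks with the same number of points stack cleanly into (2, 4096, 6).
    a = np.zeros((4096, 6))
    b = np.zeros((4096, 6))
    print(np.stack([a, b]).shape)   # (2, 4096, 6)

    # A block with a different point count reproduces the reported error.
    c = np.zeros((1000, 6))
    try:
        np.stack([a, c])
    except ValueError as err:
        print(err)                  # all input arrays must have the same shape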
