
polygonization-by-frame-field-learning's Introduction

Polygonal Building Segmentation by Frame Field Learning

We add a frame field output to an image segmentation neural network to improve segmentation quality and provide structural information for the subsequent polygonization step.


Figure 1: Close-up of our additional frame field output on a test image.



Figure 2: Given an overhead image, the model outputs an edge mask, an interior mask, and a frame field for buildings. The total loss includes terms that align the masks and frame field to ground truth data as well as regularizers to enforce smoothness of the frame field and consistency between the outputs.



Figure 3: Given classification maps and a frame field as input, we optimize skeleton polylines to align to the frame field using an Active Skeleton Model (ASM) and detect corners using the frame field, simplifying non-corner vertices.

This repository contains the official code for the paper:

Polygonal Building Segmentation by Frame Field Learning
Nicolas Girard, Dmitriy Smirnov, Justin Solomon, Yuliya Tarabalka
CVPR 2021
[paper, video]

Setup

Git submodules

This project uses several git submodules that should be cloned as well.

To clone the repository including its submodules, execute:

git clone --recursive --jobs 8 <URL to Git repo>

If you have already cloned the repository and now want to load its submodules, execute:

git submodule update --init --recursive --jobs 8

or:

git submodule update --recursive

For more explanations about using submodules and git, see SUBMODULES.md.

Venv

As of December 2022, venv is probably the best way to set up a virtual environment with all required dependencies using the provided requirements.txt. I use Python 3.10 and PyTorch 1.13.
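
For example, something along the following lines should work (adjust the Python executable to your system; these exact commands are not prescribed by the repository):

python3.10 -m venv venv
source venv/bin/activate
pip install -r requirements.txt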

Docker

OLD SOLUTION: use the old Docker image provided in the docker folder (see the README inside that folder). However, it builds an old environment that is not guaranteed to work with the updated code.

Once the docker container is built and launched, execute the setup.sh script inside to install required packages.

The environment in the container is now ready for use.

Conda environment

OLD SOLUTION: install all dependencies in a conda environment. I provide my environment specifications in environment.yml, which you can use to create your own environment with:

conda env create -f environment.yml

Data

Several datasets are used in this work. We typically put all datasets in a "data" folder, which we link to the "/data" folder in the container (with the -v argument when running the container). Each dataset has its own sub-folder, usually named with a short version of that dataset's name. Each dataset sub-folder should contain a "raw" folder with all the original folders and files of the dataset. When pre-processing data, "processed" folders will be created alongside the "raw" folder.

For example, here is a working file structure inside the container:

/data 
|-- AerialImageDataset
     |-- raw
         |-- train
         |   |-- aligned_gt_polygons_2
         |   |-- gt
         |   |-- gt_polygonized
         |   |-- images
         `-- test
             |-- aligned_gt_polygons_2
             |-- images
`-- mapping_challenge_dataset
     |-- raw
         |-- train
         |   |-- images
         |   |-- annotation.json
         |   `-- annotation-small.json
         `-- val
              `-- ...

If, however, you would like to use a different folder for the datasets (for example when not using Docker), you can change the dataset paths in the config files: modify the "data_dir_candidates" list in the config so it only includes your path. The training script checks this list of paths one at a time and picks the first one that exists. It then appends the "data_root_partial_dirpath" directory to reach the dataset.
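
For illustration, the lookup logic described above amounts to something like the following sketch (the key names are taken from the description above; this is not the repository's actual code):

import os

def resolve_data_dir(config):
    # Check each candidate path in order and keep the first one that exists.
    for data_dirpath in config["data_dir_candidates"]:
        if os.path.isdir(data_dirpath):
            # Append the dataset-specific sub-directory to reach the dataset.
            return os.path.join(data_dirpath, config["data_root_partial_dirpath"])
    raise FileNotFoundError("None of the data_dir_candidates paths exist")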

You can find some of the data we used in this shared "data" folder: https://drive.google.com/drive/folders/19yqseUsggPEwLFTBl04CmGmzCZAIOYhy?usp=sharing.

Inria Aerial Image Labeling Dataset

Link to the dataset: https://project.inria.fr/aerialimagelabeling/

For the Inria dataset, the original ground truth is just a collection of raster masks. As our method requires polygon annotations in order to compute the ground truth angle for the frame field, we made two versions of the dataset:

The Inria OSM dataset has aligned annotations pulled from OpenStreetMap.

The Inria Polygonized dataset has polygon annotations obtained from using our frame field polygonization algorithm on the original raster masks. This was done by running the polygonize_mask.py script like so: python polygonize_mask.py --run_name inria_dataset_osm_mask_only.unet16 --filepath ~/data/AerialImageDataset/raw/train/gt/*.tif

You can find this new ground truth for both cases in the shared "data" folder (https://drive.google.com/drive/folders/19yqseUsggPEwLFTBl04CmGmzCZAIOYhy?usp=sharing).

Running the main.py script

Execute the main.py script to train a model, test a model, or run a model on your own image. See the script's help with:

python main.py --help

The script can be launched on multiple GPUs for multi-GPU training and evaluation. Simply set the --gpus argument to the number of GPUs you want to use. However, for the first launch of the script on a particular dataset (when it pre-processes the data), it is best to leave it at 1, as multi-GPU synchronization is not implemented for dataset pre-processing.

For example, to train a model with a certain config file: python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained, which trains the UNet-ResNet101 on the CrowdAI Mapping Challenge dataset. The batch size can be adjusted with: python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained -b <new batch size>

When training is done, the script can be launched in eval mode to evaluate the trained model: python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained --mode eval. Depending on the eval parameters of the config file, running this will output results on the test dataset.

Finally, if you wish to compute AP and AR metrics with the COCO API, you can run: python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained --mode eval_coco.
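
For background, AP/AR with the COCO API are typically computed with pycocotools roughly as follows. This is a generic sketch, not this repository's eval_coco implementation, and the file names are placeholders:

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations and model predictions in COCO format (placeholder paths).
coco_gt = COCO("annotation.json")
coco_dt = coco_gt.loadRes("predicted_annotations.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="segm")  # segmentation-mask IoU
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP and AR at the standard COCO IoU thresholds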

Launch inference on one image

Make sure the run folder has the correct structure:

Polygonization-by-Frame-Field-Learning
|-- frame_field_learning
|   |-- runs
|   |   |-- <run_name> | <yyyy-mm-dd hh:mm:ss>
|   |   `-- ...
|   |-- inference.py
|   `-- ...
|-- main.py
|-- README.md (this file)
`-- ...

Execute the main.py script like so (filling in values for the run_name and in_filepath arguments): python main.py --run_name <run_name> --in_filepath <your_image_filepath>

The outputs will be saved next to the input image.

Download trained models

We provide already-trained models so you can run inference right away. Download here: https://drive.google.com/drive/folders/1poTQbpCz12ra22CsucF_hd_8dSQ1T3eT?usp=sharing. Each model was trained in a "run", whose folder (named with the format <run_name> | <yyyy-mm-dd hh:mm:ss>) you can download at the provided link. You should then place those runs in a folder named "runs" inside the "frame_field_learning" folder like so:

Polygonization-by-Frame-Field-Learning
|-- frame_field_learning
|   |-- runs
|   |   |-- inria_dataset_polygonized.unet_resnet101_pretrained.leaderboard | 2020-06-02 07:57:31
|   |   |-- mapping_dataset.unet_resnet101_pretrained.field_off.train_val | 2020-09-07 11:54:48
|   |   |-- mapping_dataset.unet_resnet101_pretrained.train_val | 2020-09-07 11:28:51
|   |   `-- ...
|   |-- inference.py
|   `-- ...
|-- main.py
|-- README.md (this file)
`-- ...

Because Google Drive reformats folder names, you have to rename the run folders as above.

Cite:

If you use this code for your own research, please cite:

@InProceedings{Girard_2021_CVPR,
    author    = {Girard, Nicolas and Smirnov, Dmitriy and Solomon, Justin and Tarabalka, Yuliya},
    title     = {Polygonal Building Extraction by Frame Field Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {5891-5900}
}

polygonization-by-frame-field-learning's People

Contributors

benjamin-loison, lydorn


polygonization-by-frame-field-learning's Issues

Training flowline

Could you please tell me the training procedure for the Inria dataset from scratch? I have aerial images and corresponding masks in .png format right now.

Eval_coco error: Problem in running contour metrics

Error Description

When running eval_coco mode on the CrowdAI mapping dataset, I encounter the same error on both pre-trained UNet-ResNet101 models, whether the frame field is computed or not. A fragment of the log is as follows:

INFO: Running contour metrics
TopologyException: unable to assign free hole to a shell at 236 299
Contour metrics:   0%|                                                                                             | 16/60317 [00:02<2:28:21,  6.77it/s]
Traceback (most recent call last):
  File "main.py", line 400, in <module>
    main()
  File "main.py", line 396, in main
    launch_eval_coco(args)
  File "main.py", line 381, in launch_eval_coco
    eval_coco(config)
  File "/file/Polygonization-by-Frame-Field-Learning/eval_coco.py", line 71, in eval_coco
    eval_one_partial(annotation_filename)
  File "/file/Polygonization-by-Frame-Field-Learning/eval_coco.py", line 139, in eval_one
    max_angle_diffs = contour_eval.evaluate(pool=pool)
  File "/file/Polygonization-by-Frame-Field-Learning/eval_coco.py", line 210, in evaluate
    measures_list.append(compute_contour_metrics(args))
  File "/file/Polygonization-by-Frame-Field-Learning/eval_coco.py", line 156, in compute_contour_metrics
    fixed_gt_polygons = polygon_utils.fix_polygons(gt_polygons, buffer=0.0001)  # Buffer adds vertices but is needed to repair some geometries
  File "/file/Polygonization-by-Frame-Field-Learning/lydorn_utils/lydorn_utils/polygon_utils.py", line 1649, in fix_polygons
    polygons_geom = shapely.ops.unary_union(polygons)  # Fix overlapping polygons
  File "/opt/conda/lib/python3.7/site-packages/shapely/ops.py", line 161, in unary_union
    return geom_factory(lgeos.methods['unary_union'](collection))
  File "/opt/conda/lib/python3.7/site-packages/shapely/geometry/base.py", line 73, in geom_factory
    raise ValueError("No Shapely geometry can be created from null value")
ValueError: No Shapely geometry can be created from null value

Locate bug

The error occurs while computing the contour metrics, after the COCO stats are correctly computed. Processing is interrupted at the 16th image (counting from 0), while executing the function shapely.ops.unary_union(polygons) from polygon_utils.fix_polygons(gt_polygons, buffer=0.0001).

Going a step further, the code fails on the 4th polygon (counting from 0) of the ground truth annotation for the 16th image. I visualized the polygon and found that it has a topology error, i.e. a self-intersection (circled in blue in the screenshot attached to the original issue). polygon.is_valid returns False for this polygon while it returns True for the others.

Attempt to fix the bug

The change is made in the function fix_polygons.

def fix_polygons(polygons, buffer=0.0):

    #### added by myself ####
    for i in range(len(polygons)):
        polygons[i] = polygons[i].buffer(0)
    #### adding done ####

    polygons_geom = shapely.ops.unary_union(polygons)  # Fix overlapping polygons
    polygons_geom = polygons_geom.buffer(buffer)  # Fix self-intersecting polygons and other things
    ...

I first repair the self-intersecting polygons by applying buffer(0) to every individual polygon in the variable polygons, and then the original operations fix overlapping polygons and remaining self-intersections as before.

This change enables the code to continue metrics computation.
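
For reference, here is a small self-contained illustration of the buffer(0) repair described above (the polygons are made up for the example; this is not the repository's code):

import shapely.ops
from shapely.geometry import Polygon

polygons = [
    Polygon([(0, 0), (0, 1), (1, 1), (1, 0)]),          # valid square
    Polygon([(2, 0), (4, 2), (4, 0), (2, 2), (2, 1)]),  # self-intersecting ring
]
print([p.is_valid for p in polygons])        # [True, False]
repaired = [p.buffer(0) for p in polygons]   # buffer(0) rebuilds valid geometry
print([p.is_valid for p in repaired])        # [True, True]
merged = shapely.ops.unary_union(repaired)   # now safe to merge
print(merged.is_valid)                       # True

Note that buffer(0) can discard part of the area of some invalid shapes, so it is worth comparing areas before and after the repair when the evaluation depends on them.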

My Issue

  1. Have you run into the same problem while evaluating the models on the CrowdAI mapping dataset? And what's your solution?
  2. Will my change influence the evaluation results?

List index out of range when launching inference on one image on Windows 10

When I run: python main.py --run_name mapping_dataset.unet_resnet101_pretrained.train_val --in_filepath F:/gaojituxiangchuli/Polygonization-by-Frame-Field-Learning-master/frame_field_learning/runs/1.jpg
I get:

Traceback (most recent call last):
  File "F:\gaojituxiangchuli\Polygonization-by-Frame-Field-Learning-master\main.py", line 387, in launch_inference_from_filepath
    run_dirpath = frame_field_learning.local_utils.get_run_dirpath(args.runs_dirpath, run_name)
  File "F:\gaojituxiangchuli\Polygonization-by-Frame-Field-Learning-master\frame_field_learning\local_utils.py", line 81, in get_run_dirpath
    run_dirpath = run_utils.setup_run_dir(runs_dir, run_name, check_exists=True)
  File "F:\gaojituxiangchuli\Polygonization-by-Frame-Field-Learning-master\lydorn_utils\run_utils.py", line 580, in setup_run_dir
    filtered_existing_run_timestamps = [filtered_existing_run_dirname.split(" | ")[1] for
  File "F:\gaojituxiangchuli\Polygonization-by-Frame-Field-Learning-master\lydorn_utils\run_utils.py", line 580, in <listcomp>
    filtered_existing_run_timestamps = [filtered_existing_run_dirname.split(" | ")[1] for
IndexError: list index out of range

Could you please give me some hint? Thank you very much!

Error in inference

While running inference, I get an error for the two trained models listed below, but it works fine for the Inria model. Please help me resolve this.

mapping_dataset.unet_resnet101_pretrained.train_val
mapping_dataset.unet_resnet101_pretrained.field_off.train_val

Traceback (most recent call last):
  File "main.py", line 374, in <module>
    main()
  File "main.py", line 364, in main
    launch_inference_from_filepath(args)
  File "main.py", line 175, in launch_inference_from_filepath
    inference_from_filepath(config, args.in_filepath, backbone)
  File "/app/Polygonization-by-Frame-Field-Learning/frame_field_learning/inference_from_filepath.py", line 45, in inference_from_filepath
    tile_data = inference.inference(config, model, sample, compute_polygonization=True)
  File "/app/Polygonization-by-Frame-Field-Learning/frame_field_learning/inference.py", line 28, in inference
    inference_no_patching(config, model, tile_data)
  File "/app/Polygonization-by-Frame-Field-Learning/frame_field_learning/inference.py", line 51, in inference_no_patching
    pred, batch = network_inference(config, model, batch)
  File "/app/Polygonization-by-Frame-Field-Learning/frame_field_learning/inference.py", line 17, in network_inference
    pred, batch = model(batch, tta=config["eval_params"]["test_time_augmentation"])
KeyError: 'test_time_augmentation'

Polygonize mask - Inria dataset

From your answer to the "training flowline" issue, I have understood the following:

  1. Masks in .png format can be converted to .geojson files, which can then be used for training
  2. The polygonize_mask.py script can be used to convert image masks to polygon masks in .geojson format

When I run the polygonize_mask.py script, it asks for a run_name in order to convert binary masks into polygon masks. Can you please provide trained weights for this operation?

If I want to train from scratch, what kind of data should be prepared?

Hello there:

Great job! A novel idea in polygonal building extraction as far as I know.
In traditional segmentation, training data includes two kinds of data: one is the image (usually an RGB image), the other is a one-channel label (the same size as the image, but with only one channel, where every pixel stores a number for a certain class).
In your case (this repo), I'm confused about what kind of data should be prepared. There is no doubt that the image will be one of them, but what about the others? There are no details in the paper.
In the readme, I found the directory structure. I think "images" is for images, just like in segmentation, but what about "gt"?
Is this "gt" the same as the label in segmentation? What is "gt_polygonized"? I found only *.geojson files in that folder, what are they for? I found *.npy files in aligned_gt_polygons_2 and gt_polygons, what are these files for?
Waiting for your kind reply, thank you in advance!

distance transform

Hi, may I ask how to calculate the distance to the second nearest building? I'm not quite familiar with that. I know that with cv2.distanceTransform we can calculate the distance to the nearest building.
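
One common way to get the distance to the second nearest building, sketched here with scipy.ndimage in place of cv2.distanceTransform (this is an illustration, not code from this repository), is to compute one distance map per building instance and take the two smallest values per pixel:

import numpy as np
from scipy import ndimage

def distances_to_two_nearest_buildings(binary_mask):
    # Label individual building instances in the binary mask.
    labels, num_buildings = ndimage.label(binary_mask)
    assert num_buildings >= 2, "need at least two buildings"
    # One distance map per building: per-pixel distance to that building's pixels.
    dist_maps = np.stack([
        ndimage.distance_transform_edt(labels != i)
        for i in range(1, num_buildings + 1)
    ], axis=0)
    # Partial sort along the building axis: index 0 = nearest, index 1 = second nearest.
    two_smallest = np.partition(dist_maps, 1, axis=0)
    return two_smallest[0], two_smallest[1]

This computes one distance transform per building, so for large tiles it is usually applied per crop or restricted to a neighborhood of each pixel.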

error in running main.py

When I run main.py, I get the following error. I am afraid to modify the code because it might cause more errors. Please help me solve this problem, thanks!

Traceback (most recent call last):
  File "I:/project/Polygonization-by-Frame-Field-Learning-master/main.py", line 375, in <module>
    main()
  File "I:/project/Polygonization-by-Frame-Field-Learning-master/main.py", line 367, in main
    launch_train(args)
  File "I:/project/Polygonization-by-Frame-Field-Learning-master/main.py", line 230, in launch_train
    torch.multiprocessing.spawn(train_process, nprocs=args.gpus, args=(config, shared_dict, barrier))
  File "E:\Anaconda3\envs\pytorch\lib\site-packages\torch\multiprocessing\spawn.py", line 200, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "E:\Anaconda3\envs\pytorch\lib\site-packages\torch\multiprocessing\spawn.py", line 158, in start_processes
    while not context.join():
  File "E:\Anaconda3\envs\pytorch\lib\site-packages\torch\multiprocessing\spawn.py", line 119, in join
    raise Exception(msg)
Exception:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "E:\Anaconda3\envs\pytorch\lib\site-packages\torch\multiprocessing\spawn.py", line 20, in _wrap
    fn(i, *args)
  File "I:\project\Polygonization-by-Frame-Field-Learning-master\child_processes.py", line 19, in train_process
    root_dir_candidates = [os.path.join(data_dirpath, config["dataset_params"]["root_dirname"]) for data_dirpath in config["data_dir_candidates"]]
  File "I:\project\Polygonization-by-Frame-Field-Learning-master\child_processes.py", line 19, in <listcomp>
    root_dir_candidates = [os.path.join(data_dirpath, config["dataset_params"]["root_dirname"]) for data_dirpath in config["data_dir_candidates"]]
TypeError: list indices must be integers or slices, not str

Error during run after halfway through training

Training with the Inria dataset config runs halfway and creates .pt files for half of the image folders, but then stops abruptly with the following errors.

Any kind of help if someone has this issue would be appreciated. Thank you.

(screenshots of the error output are attached to the original issue)

Problem when install environment on Ubuntu 18.04 by environment.yml

Hi, I tried to install the conda environment using the environment.yml you provided on Ubuntu 18.04, and I got the following error:
(base) root@deeplearn-light-sunx:/home/sunx/code/Polygonization-by-Frame-Field-Learning# conda env create -f environment.yml
Collecting package metadata (repodata.json): done
Solving environment: done

Downloading and Extracting Packages
ffmpeg-4.1.3 | 75.8 MB | ##################################### | 100%
[... download progress bars for the remaining conda packages omitted; all packages downloaded successfully ...]
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Installing pip dependencies: / Ran pip subprocess with arguments:
['/root/anaconda3/envs/frame_field/bin/python', '-m', 'pip', 'install', '-U', '-r', '/home/sunx/code/Polygonization-by-Frame-Field-Learning/condaenv.bjguh6zr.requirements.txt']
Pip subprocess output:
Collecting calmsize==0.1.3
Downloading calmsize-0.1.3.tar.gz (3.7 kB)
Collecting coverage==5.1
Downloading coverage-5.1-cp38-cp38-manylinux1_x86_64.whl (229 kB)
Collecting cython==0.29.17
Downloading Cython-0.29.17-cp38-cp38-manylinux1_x86_64.whl (2.0 MB)
Collecting descartes==1.0.1
Downloading descartes-1.0.1.tar.gz (3.3 kB)
Collecting gym==0.17.1
Downloading gym-0.17.1.tar.gz (1.6 MB)
Collecting imagecodecs==2020.2.18
Downloading imagecodecs-2020.2.18-cp38-cp38-manylinux2014_x86_64.whl (19.2 MB)

Pip subprocess error:
ERROR: Could not find a version that satisfies the requirement kornia==0.1.2+7bcb521 (from -r /home/sunx/code/Polygonization-by-Frame-Field-Learning/condaenv.bjguh6zr.requirements.txt (line 7)) (from versions: 0.1.3.post2, 0.1.4, 0.1.4.post2, 0.2.0, 0.2.1, 0.2.2, 0.3.0, 0.3.1, 0.3.2, 0.4.0, 0.4.1)
ERROR: No matching distribution found for kornia==0.1.2+7bcb521 (from -r /home/sunx/code/Polygonization-by-Frame-Field-Learning/condaenv.bjguh6zr.requirements.txt (line 7))
failed

CondaEnvException: Pip failed

What else can I do to solve it?

Running pretrained model issue

I've encountered a size mismatch issue when trying to run the authors' pretrained model on a single image. Could someone who has successfully run the model share their environment.yml file and the running command? Many thanks! (An error screenshot is attached to the original issue.)

Problems regarding transfer learning

Hello,

and thank you for making your work available, it's been very helpful!

I'm trying to use the pretrained model for the Inria dataset on a custom dataset and continue the training process. I'm almost certain that I have preprocessed the images correctly with the corresponding config files. The training process runs smoothly and I experience no errors. The problem arises when I test the model after training: the model outputs results that are significantly worse than the pretrained model's output on the custom test data.

To make sure that the custom dataset wasn't the problem, I tried to resume training on the original Inria dataset with the .geojson files provided in the Google Drive folder and the pretrained model. The same problem arises; it looks like the model doesn't continue from the downloaded checkpoints. I've printed a variety of weights from selected layers before and after calling the load_checkpoint function in trainer.py to make sure that the weights do in fact load, and they do.

Do you have any idea where something might go wrong? Are there any variables in the config files that need to be changed in order to enable transfer learning (to my understanding, no, but I'm asking just to be on the safe side)? I'm totally in the dark as to what goes wrong and where to look.

Thanks in advance!
Maria

How can I get the geojson files to train my own dataset?

Hello! Thank you for making your work available, it is very helpful! I have a certain issue with one of the experiments I'm running using your method.

  1. How can I get the geojson files to train my own dataset? I trained the model with my own data, and the results were all black. How did you get the geojson file to train your own dataset?
  2. How do I get the pixelsize and the per-channel mean and standard deviation used in pytorch_lydorn/torch_lydorn/torchvision/datasets/inria_aerial.py? (See the sketch after this list.)
    Looking forward to your reply, thank you very much!
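
Regarding the second point, the per-channel mean and standard deviation (in the 0-1 range, like the values hard-coded in inria_aerial.py) can be estimated with a simple accumulation pass over the images. A rough sketch, not the repository's code, with a placeholder glob pattern:

import glob

import numpy as np
from PIL import Image

def channel_mean_std(image_glob):
    # Accumulate per-channel sums and squared sums over all pixels of all images.
    sums = np.zeros(3)
    sq_sums = np.zeros(3)
    count = 0
    for path in glob.glob(image_glob):
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
        pixels = img.reshape(-1, 3)
        sums += pixels.sum(axis=0)
        sq_sums += (pixels ** 2).sum(axis=0)
        count += pixels.shape[0]
    mean = sums / count
    std = np.sqrt(sq_sums / count - mean ** 2)
    return mean, std

# Example with a hypothetical path:
# mean, std = channel_mean_std("/data/AerialImageDataset/raw/train/images/*.tif")

The pixelsize entry, on the other hand, is just the ground resolution of the imagery in meters per pixel (0.3 for the Inria tiles).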

How to use my own data to train?

As the title says: if I want to use my own data, just like the mapping challenge, what should I do to achieve it?
Do I need to put an annotation.json in the folder? How do I build an annotation like that?

eval_coco error ValueError: No Shapely geometry can be created from null value

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/fei.qi/frame_field_learning/frame_field_attention_learning/eval_coco.py", line 156, in compute_contour_metrics
    fixed_gt_polygons = polygon_utils.fix_polygons(gt_polygons, buffer=0.0001)  # Buffer adds vertices but is needed to repair some geometries
  File "/home/fei.qi/frame_field_learning/frame_field_attention_learning/lydorn_utils/polygon_utils.py", line 1645, in fix_polygons
    polygons_geom = shapely.ops.unary_union(polygons)  # Fix overlapping polygons
  File "/opt/conda/lib/python3.7/site-packages/shapely/ops.py", line 161, in unary_union
    return geom_factory(lgeos.methods['unary_union'](collection))
  File "/opt/conda/lib/python3.7/site-packages/shapely/geometry/base.py", line 73, in geom_factory
    raise ValueError("No Shapely geometry can be created from null value")
ValueError: No Shapely geometry can be created from null value

Colab Clone Problem

When cloning the repository to Google Colab, the utility folders (lydorn_utils and pytorch_lydorn) are empty. Has anyone faced the same problem and found a solution?

Local usage mode

Can this code run in train and inference mode with a local installation? I mean, without putting it inside Docker containers.

Input and weights data type mismatch

ERROR: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

INPUT data:

{'image': tensor([[[[104, 104, 109,  ...,  26,  26,  26],
          [104, 104, 110,  ...,  26,  26,  26],
          [104, 105, 110,  ...,  26,  26,  26],
          ...,
          [198, 199, 198,  ..., 172, 175, 177],
          [197, 197, 198,  ..., 173, 176, 177],
          [197, 197, 197,  ..., 175, 177, 177]],

         [[111, 110, 114,  ...,  30,  30,  30],
          [111, 111, 115,  ...,  30,  30,  30],
          [112, 112, 115,  ...,  30,  30,  30],
          ...,
          [163, 164, 163,  ..., 146, 149, 151],
          [162, 162, 163,  ..., 147, 150, 151],
          [162, 162, 162,  ..., 149, 151, 151]],

         [[ 80,  79,  83,  ...,  41,  41,  41],
          [ 80,  80,  84,  ...,  41,  41,  41],
          [ 81,  81,  84,  ...,  41,  41,  41],
          ...,
          [135, 136, 135,  ..., 131, 134, 136],
          [134, 134, 135,  ..., 132, 135, 136],
          [134, 134, 134,  ..., 134, 136, 136]]]], device='cuda:0',
       dtype=torch.uint8), 'image_mean': tensor([[0.3452, 0.3205, 0.2841]], device='cuda:0', dtype=torch.float64), 'image_std': tensor([[0.1883, 0.1543, 0.1420]], device='cuda:0', dtype=torch.float64)}
       
       
I have tried multiple things like .to('cuda:0'), .to_gpu(), etc., but nothing worked.

Can someone please help me resolve this?

Polygonise mask on custom dataset

Hi @Lydorn !
I've been finding this repo really useful in my research and have a certain issue with one of the experiments I'm running using your method. I am trying to use your frame field learning approach on a custom version of the Inria dataset where each image tile has been split into a hundred 500x500 patches. When I try to run the polygonize_mask.py script on this dataset using one of your pretrained runs, I get the following error:

RuntimeError: The size of tensor a (1024) must match the size of tensor b (500) at non-singleton dimension 3

I am using the 'inria_dataset_osm_mask_only.unet16' run for this. Do you have any suggestions on how I could resolve this? Or would I need to train a different model from scratch to allow processing my modified patch size? Looking forward to your reply. Thank you.

share weights

Thanks for the excellent work! I would like to check how modules such as polygonize_asm.py work, but I don't have the weights/checkpoints to do that.

Error during Model Evaluation: ValueError: need at least one array to concatenate

After training on the Inria dataset using the polygonized_unetresnet_leaderboard run, I tried to evaluate the model using --mode eval with the same config. The error occurs in the tensorpoly.py file inside the transforms folder.

File "Polygonization-by-Frame-Field-Learning/pytorch_lydorn/torch_lydorn/torchvision/transforms/tensorpoly.py", line 78, in polygons_to_tensorpoly
pos = np.concatenate(polygon_list, axis=0)
File "<array_function internals>", line 180, in concatenate
ValueError: need at least one array to concatenate

Here, the polygon list is empty; there are no polygons, so there are no arrays to concatenate. This happens in the polygons_to_tensorpoly function, where the author left a comment at line 50, "# TODO: If there are no polygons", but it has not been implemented. I would like to ask if anyone has come across this problem. I would be glad to learn more about it.

Thank you.

problem while using docker run

As instructed, I created the Docker image, but when I run it using the following command, it shows an error:

docker run --rm -it --init --gpus 1 --ipc=host --network=host-e -v "./data:/data" lydorn/frame-field-learning

Unable to find image 'frame-field-learning:latest' locally

But when I run docker ps -a it shows the image

Question about a training bug

Hello, there is an error in the draw_linear_ring function of torch_lydorn/torchvision/transforms/angle_field_init.py.
Traceback info: ValueError: diff requires input that is at least one dimensional

ASM and ACM polygonization issue

When we do polygonization using the ASM or ACM method, the polygon results for some buildings look strange: they show a saw-tooth edge across all faces of the building.

Details:

  1. It happens for some random patches (not all)
  2. No error occurs with the simple polygonization method

Kindly tell me the way to fix this problem as soon as you return to your repository :octocat:

The frame_field_learning package is not installed!

When I run the command "python main.py --help", it will always show the error:

ERROR: The frame_field_learning package is not installed! Execute script setup.sh to install local dependencies such as frame_field_learning in develop mode.

But after I execute the setup.sh script, it still shows the same error.

Could you please give me some hints on how to deal with this issue?

Thank you very much.

Bug in seg_coco

In frame_field_learning/save_utils.py line 236, I noticed that using the key "image_id" on the sample dictionary is invalid. Instead, I replaced it with "number" as a workaround.

Several different classes

Greetings! Thanks for the great work.
I would like to clarify: if I have a dataset with several different classes in it, how can I modify your framework so that it does segmentation and polygonization for all of them?

Process hangs when evaluating the trained model on the Inria dataset

Hi, I installed the environment on Ubuntu 18.04. I first ran the command:

python main.py --config configs/config.inria_dataset_osm_aligned.unet_resnet101_pretrained
After training finished, I ran:
python main.py --config configs/config.inria_dataset_osm_aligned.unet_resnet101_pretrained --mode eval
and the program hangs there with the following output:
INFO: Loading defaults from configs/config.defaults.inria_dataset_osm_aligned.json
INFO: Loading defaults from configs/config.defaults.json
INFO: Loading defaults from configs/loss_params.json
INFO: Loading defaults from configs/optim_params.json
INFO: Loading defaults from configs/polygonize_params.json
INFO: Loading defaults from configs/dataset_params.inria_dataset_osm_aligned.json
INFO: Loading defaults from configs/eval_params.inria_dataset.json
INFO: Loading defaults from configs/eval_params.defaults.json
INFO: Loading defaults from configs/backbone_params.unet_resnet101.json
GPU 0 -> Using data from /gimastorage/Xiaoyu/data/AerialImageDataset
INFO: annotations will be loaded from disk
# --- Start evaluating ---#
Saving eval outputs to /gimastorage/Xiaoyu/data/AerialImageDataset/eval_runs/inria_dataset_osm_aligned.unet_resnet101_pretrained | 2020-12-05 09:55:09
Loading best val checkpoint: /home/sunx/Polygonization-by-Frame-Field-Learning/frame_field_learning/runs/inria_dataset_osm_aligned.unet_resnet101_pretrained | 2020-12-05 09:55:09/checkpoints/checkpoint.best_val.epoch_000001.tar
Eval test: 0%| | 0/34 [00:00<?, ?it/s]Traceback (most recent call last):

It stays stuck; if I stop the process, it gives the following errors:
Process SpawnProcess-2:
Traceback (most recent call last):
File "/home/sunx/Polygonization-by-Frame-Field-Learning/main.py", line 387, in
Traceback (most recent call last):
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/sunx/Polygonization-by-Frame-Field-Learning/child_processes.py", line 75, in eval_process
evaluate(gpu, config, shared_dict, barrier, eval_ds, backbone)
File "/home/sunx/Polygonization-by-Frame-Field-Learning/frame_field_learning/evaluate.py", line 62, in evaluate
evaluator.evaluate(split_name, eval_ds)
File "/home/sunx/Polygonization-by-Frame-Field-Learning/frame_field_learning/evaluator.py", line 85, in evaluate
inference.inference_with_patching(self.config, self.model, tile_data)
File "/home/sunx/Polygonization-by-Frame-Field-Learning/frame_field_learning/inference.py", line 79, in inference_with_patching
assert len(tile_data["image"].shape) == 4 and tile_data["image"].shape[0] == 1,
AssertionError: When using inference with patching, tile_data should have a batch size of 1, with image's shape being (1, C, H, W), not torch.Size([6, 3, 725, 725])

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 26, in _wrap
sys.exit(1)
SystemExit: 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/process.py", line 318, in _bootstrap
util._exit_function()
main() File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/util.py", line 334, in _exit_function
p.join()
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/process.py", line 149, in join
res = self._popen.wait(timeout)

File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/popen_fork.py", line 47, in wait
return self.poll(os.WNOHANG if timeout == 0.0 else 0)
File "/home/sunx/Polygonization-by-Frame-Field-Learning/main.py", line 381, in main
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
Traceback (most recent call last):
launch_eval(args)
File "/home/sunx/Polygonization-by-Frame-Field-Learning/main.py", line 321, in launch_eval
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/sunx/Polygonization-by-Frame-Field-Learning/lydorn_utils/lydorn_utils/async_utils.py", line 8, in async_func_wrapper
if not out_queue.empty():
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/queues.py", line 123, in empty
return not self._poll()
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/connection.py", line 424, in _poll
r = wait([self], timeout)
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/connection.py", line 924, in wait
selector.register(obj, selectors.EVENT_READ)
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/selectors.py", line 352, in register
key = super().register(fileobj, events, data)
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/selectors.py", line 244, in register
self._fd_to_key[key.fd] = key
KeyboardInterrupt
torch.multiprocessing.spawn(eval_process, nprocs=args.gpus, args=(config, shared_dict, barrier))
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
while not spawn_context.join():
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 75, in join
ready = multiprocessing.connection.wait(
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/multiprocessing/connection.py", line 930, in wait
ready = selector.select(timeout)
File "/home/sunx/anaconda3/envs/frame_field1/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt
Eval test: 0%| | 0/34 [13:02<?, ?it/s]

Process finished with exit code 130

I looked at the code of the inference file:

def inference_with_patching(config, model, tile_data):
    assert len(tile_data["image"].shape) == 4 and tile_data["image"].shape[0] == 1, \
        f"When using inference with patching, tile_data should have a batch size of 1, " \
        f"with image's shape being (1, C, H, W), not {tile_data['image'].shape}"

Here the assert requires the data to have a certain shape (batch size 1), which is different from what my patched test data has.

I ran the eval command twice; the output above is from the second time, so there is no log about the patching process. The first time, it patches the test data first.

Another thing I did was reduce the data size by changing the code inside inria_aerial.py:

CITY_METADATA_DICT = {
    "bellingham": {
        "fold": "test",
        "pixelsize": 0.3,
        "numbers": list([2, 3]),
        "mean": [0.3766195, 0.391402, 0.32659722],
        "std": [0.18134978, 0.16412577, 0.16369793],
    },
    "austin": {
        "fold": "train",
        "pixelsize": 0.3,
        "numbers": list(range(1, 2)),
        "mean": [0.39584444, 0.40599795, 0.38298687],
        "std": [0.17341954, 0.16856597, 0.16360443],
    },
}

Annotation for the frame field

I have the angles for each pixel stored in a .npy file to be used as ground truth, as mentioned in the paper.

My question is: is this directly used in the network, or is there a need to compute a frame field to be fed into the network? If so, what should the format be?

I would appreciate any kind of help. Thank you.

How to open .npy file as shown in the paper ?

As the title says: I ran an inference and got a "crossfield" directory with an output "X.npy" inside. I guess this is the frame field from the paper, but how do I display it? I tried using numpy but it didn't work. Which function did you use?
Could you give me some hint? Thank you very much!

question about weighted loss

Hi, I saw your paper and something is unclear to me: when you did the edge segmentation task, were you using dice loss + weighted cross-entropy (2 classes: edge and background)? For the weighted BCE, were you using the pixel frequency of each class as the weights?

Hyperparameters for Training/evaluation

Hello, I have opened a new thread to ask about parameter details; I will close the old issue.
My full image size is 1024x1024. In "dataset_params" you configured "data_patch_size": 725 and "input_patch_size": 512, but the Inria dataset image resolution is 5000x5000.
How did you choose 725 and 512? Is it just to reduce GPU memory allocation?

ERROR in installing frame field learning package

I'm running the code on a Windows 10 system. I successfully executed bash setup.sh.
After that, I ran python setup.py install and got this output:

Installed d:\anaconda3\envs\topo\lib\site-packages\frame_field_learning-0.0.1-py3.6.egg
Processing dependencies for frame-field-learning==0.0.1
Finished processing dependencies for frame-field-learning==0.0.1

But when I run python main.py --run_name "frame_field_learning/runs/mapping_dataset.unet_resnet101_pretrained.field_off.train_val | 2020-09-07 11:54:48" --in_filepath grid_100-100_3698.0.tif
This throws an error
ERROR: The frame_field_learning package is not installed! Execute script setup.sh to install local dependencies such as frame_field_learning in develop mode.

Please reply @Lydorn
Thanks in Advance

No output after running an Inference on an image.

After I run "python main.py --run_name mapping_dataset.unet_resnet101_pretrained.train_val --in_filepath /home/yqs/Polygonization-by-Frame-Field-Learning/frame_field_learning/runs/images/1.jpg"
The shell shows me:
Infer images: 100%|███████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.13s/it, status=Saving output]
But I cannot find the output image anywhere...
Could you please give me some hint? Thank you very much!

Questions for pretrained models

Hi,

Thank you for sharing this outstanding work with the public; your contribution is much appreciated.
When I tried to use your shared pretrained weights to perform inference, I found a very interesting thing. The pretrained weights from "inria_dataset_polygonized.unet_resnet101_pretrained.leaderboard | 2020-06-02 07:57:31" produce more regularized building footprints (see the figure attached to the original issue), but the other three pretrained models, such as "mapping_dataset.unet_resnet101_pretrained.train_val | 2020-09-07 11:28:51", cannot do the same. So I'm wondering whether you performed any regularization operations when training the "inria_dataset_polygonized.unet_resnet101_pretrained.leaderboard | 2020-06-02 07:57:31" model. Why do the building boundaries look so regular?

Thanks

Kornia and PyTorch dependency issue in Ubuntu

Hi, I tried installing the necessary dependencies using environment.yml and it throws the error discussed here (kornia/kornia#1290).

I tried applying the fix which was to install Kornia from source using the command pip install git+https://github.com/kornia/kornia

However, it ended up installing torch==1.9.1 as a dependency (environment.yml in this repo has torch==1.4), which breaks the entire dependency tree.

Is there a way to get the most recent version of Kornia that's compatible with this code and with torch==1.4.0, or any possible workarounds?
