utilities's Introduction

This repository is no longer being updated. Future development of code tools for geospatial machine learning analysis will be done at https://github.com/cosmiq/solaris.

SpaceNet Utilities

This repository contains three Python packages: geoTools, evalTools, and labelTools. The geoTools package is intended to assist in preprocessing the SpaceNet satellite imagery data corpus, hosted on SpaceNet on AWS, into a format that is consumable by machine learning algorithms. The evalTools package is used to evaluate the effectiveness of object detection algorithms using ground truth. The labelTools package assists in transferring GeoJSON labels into common label schemes for machine learning frameworks. This is version 3.0, which has been updated with more capabilities to allow for computer vision applications using remote sensing data.

Download Instructions

Further download instructions for the SpaceNet Dataset can be found here

Installation Instructions

Several packages require binaries to be installed before pip installing the other packages. Conda is a simple way to install these packages and their dependencies.

  • Install GDAL binaries and scripts
conda install -c conda-forge gdal
conda install -c conda-forge rtree
conda install -c conda-forge pyproj
conda install -c conda-forge geopandas
conda install -c conda-forge shapely
conda install -c conda-forge rasterio
  • Pip install from GitHub
    git clone -b spacenetV3 https://github.com/SpaceNetChallenge/utilities.git
    cd utilities
    pip install -e .
    

or

    pip install --upgrade git+https://github.com/SpaceNetChallenge/utilities.git

Evaluation Metric

The evaluation metric for this competition is an F1 score with the matching algorithm inspired by Algorithm 2 in the ILSVRC paper applied to the detection of building footprints. For each building there is a geospatially defined polygon label to represent the footprint of the building. A SpaceNet entry will generate polygons to represent proposed building footprints. Each proposed building footprint is either a “true positive” or a “false positive”.

  • The proposed footprint is a “true positive” if it is the closest proposal (measured by IoU) to a labeled polygon AND the IoU between the proposal and the label is above the prescribed threshold of 0.5.
  • Otherwise, the proposed footprint is a “false positive”.

There is at most one “true positive” per labeled polygon. The measure of proximity between labeled polygons and proposed polygons is the Jaccard similarity or the “Intersection over Union (IoU)”, defined as:

    IoU(A, B) = area(A ∩ B) / area(A ∪ B)

The value of IoU is between 0 and 1, where closer polygons have higher IoU values.

The F1 score is the harmonic mean of precision and recall, combining the accuracy in the precision measure and the completeness in the recall measure. For this competition, the number of true positives and false positives are aggregated over all of the test imagery and the F1 score is computed from the aggregated counts.

For example, suppose there are N polygon labels for building footprints that are considered ground truth and suppose there are M proposed polygons by an entry in the SpaceNet competition. Let tp denote the number of true positives of the M proposed polygons. The F1 score is calculated as follows:

    precision = tp / M,   recall = tp / N,   F1 = 2 × (precision × recall) / (precision + recall)

The F1 score is between 0 and 1, where larger numbers are better scores.
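
Below is a minimal sketch (assumed for illustration; it is not the official SpaceNet scorer) of the IoU matching and aggregate F1 described above, using shapely. The toy polygons, the greedy matching loop, and the variable names are all assumptions.

    # Toy illustration of IoU-based matching and aggregate F1 (not the official scorer).
    from shapely.geometry import Polygon

    def iou(poly_a, poly_b):
        """Intersection over Union (Jaccard similarity) of two polygons."""
        union_area = poly_a.union(poly_b).area
        return poly_a.intersection(poly_b).area / union_area if union_area > 0 else 0.0

    truth = [Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])]        # N ground-truth labels
    proposals = [Polygon([(1, 1), (11, 1), (11, 11), (1, 11)])]    # M proposed footprints

    # Greedy matching: a proposal is a true positive if its best-matching,
    # still-unclaimed label has IoU above the 0.5 threshold.
    matched, tp = set(), 0
    for prop in proposals:
        scores = [(iou(prop, t), i) for i, t in enumerate(truth) if i not in matched]
        if scores:
            best_iou, best_i = max(scores)
            if best_iou > 0.5:
                tp += 1
                matched.add(best_i)

    precision = tp / len(proposals)                                # tp / M
    recall = tp / len(truth)                                       # tp / N
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")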

Hints:

  • The images provided could contain anywhere from zero to multiple buildings.
  • All proposed polygons should be legitimate (they should have an area, they should have points that at least make a triangle instead of a point or a line, etc).
  • Use the metric implementation code to self-evaluate. To run the metric you can use the following command:
python python/evaluateScene.py /path/to/SpaceNetTruthFile.csv \
                               /path/to/SpaceNetProposalFile.csv \
                               --resultsOutputFile /path/to/SpaceNetResults.csv

Using SpaceNet Utilities to Process Imagery and Vector Data

The SpaceNet imagery provided for this challenge must be processed and transformed into a deep-learning-compatible format. The SpaceNet utilities help to achieve this transformation. A traditional implementation strategy may look similar to this:

  1. Chipping and clipping SpaceNet imagery into smaller areas to allow for deep learning consumption (create_spacenet_AOI.py)

  2. Split imagery and vector datasets (such as building or road labels) into training, testing, and validation datasets randomly (splitAOI_Train_Test_Val.py)

  3. Easily add or update your vector datasets to seamlessly match existing SpaceNet imagery chips. (externalVectorProcessing.py)

  4. Translate SpaceNet image chips and vector data into various machine learning and deep learning consumable formats such as PASCAL VOC2012, DarkNet, or Semantic Boundaries Dataset (SBD). (createDataSpaceNet.py)

  5. Evaluate your deep learning outputs against validation datasets to determine your results' accuracy and estimate the amount of commission and omission errors occurring. (evaluateScene.py)

  6. Various other maintenance utility scripts to enhance ease of use.

Chipping Imagery Code

The script create_spacenet_AOI.py is used to create a SpaceNet competition dataset (chips) from a larger imagery dataset. Its base function is to create an N x N meters (or N x N pixels) chip with associated object labels (such as buildings or roads). The script will only create chips in areas where labeled items exist, thus saving space and reducing computational intensity.

The script requires a few pre-processing steps. A recommended process is:

  1. Build a VRT file to point to the source imagery. A VRT is essentially a virtual mosaic that links all the files but does not build an entirely new (and monstrous) mosaic. gdalbuildvrt (http://www.gdal.org/gdalbuildvrt.html) is one of the easiest ways to do this. A separate VRT should be built for each type of imagery data you plan to use (e.g., Pan, Multi-spectral).

  2. Build two CSV pointer files that point to the specific locations of your imagery VRTs and your labeled vector data (buildings, roads, etc.). Each file has two columns with NO headers. A scripted sketch of both steps follows the example CSVs below.

    Example raster CSV:
   
    Column A:   Column B:
    PAN         C:/SpaceNet/Imagery/Vegas_PAN.vrt
    MUL         C:/SpaceNet/Imagery/Vegas_MUL.vrt
    MUL-PS      C:/SpaceNet/Imagery/Vegas_MUL-PS.vrt
 
    Example vector CSV:
    
    Column A:   Column B:
    Buildings   C:/SpaceNet/Vector/Vegas_BuildingLabels.geojson

Script Inputs:

  1. CSV of raster imagery VRT locations
  2. CSV of vector labels (geojson)
  3. SRC_Outline: the outline of the area where labeling has occurred, in geojson format
  4. Other optional inputs are also available; more information on these can be gleaned by looking at the raw code itself or using the -h help flag.

The script will then chip and clip the source SpaceNet imagery and vector labels. An example of running the script is as follows:

python create_spacenet_AOI.py --srcOutline /data/vectorData/Shanghai_AOI_Fixed.geojson --outputDirectory /data/output --AOI_Name Shanghai --AOI_Num 4 --createSummaryCSV --featureName Building /data/AOI_4_Shanghai_srcRasterList.csv /data/AOI_4_Shanghai_srcVectorList.csv 

This will output chipped imagery into your outputDirectory folder for further usage.

Data Transformation Code

To make the SpaceNet dataset easier to use we have created the tool createDataSpaceNet.py. This tool currently supports the creation of datasets with annotations in three formats:

  1. PASCAL VOC2012
  2. Darknet
  3. Semantic Boundaries Dataset (SBD)

It will create the appropriate annotation files and a summary trainval.txt and test.txt in the outputDirectory.

Create a PASCAL VOC2012 Compatible Dataset

The final product will have image dimensions of 400 x 400 pixels:

python python/createDataSpaceNet.py /path/to/spacenet_sample/AOI_2_Vegas_Train/ \
           --srcImageryDirectory RGB-PanSharpen \
           --outputDirectory /path/to/spacenet_sample/annotations/ \
           --annotationType PASCALVOC2012 \
           --imgSizePix 400

Changing the raster format

Some GIS images have 16-bit pixel values, which OpenCV has trouble with. createDataSpaceNet.py can convert a 16-bit GeoTIFF to an 8-bit GeoTIFF or an 8-bit JPEG (a rough sketch of the rescaling appears after the commands below).

To create the 8-bit GeoTIFF:

python python/createDataSpaceNet.py /path/to/spacenet_sample/AOI_2_Vegas_Train/ \
           --srcImageryDirectory RGB-PanSharpen \
           --outputDirectory /path/to/spacenet_sample/annotations/ \
           --annotationType PASCALVOC2012 \
           --convertTo8Bit \
           --outputFileType GTiff \
           --imgSizePix 400
    

To create the 8-bit JPEG:

python python/createDataSpaceNet.py /path/to/spacenet_sample/AOI_2_Vegas_Train/ \
           --srcImageryDirectory RGB-PanSharpen \
           --outputDirectory /path/to/spacenet_sample/annotations/ \
           --annotationType PASCALVOC2012 \
           --convertTo8Bit \
           --outputFileType JPEG \
           --imgSizePix 400
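
For reference, the conversion is essentially a rescaling of 16-bit intensities into the 0-255 range. The sketch below is an assumption for illustration, not the script's exact method: a simple percentile-based linear stretch with rasterio and numpy, with placeholder file names.

    # Sketch: linear 2-98 percentile stretch from 16-bit to 8-bit (illustrative only).
    import numpy as np
    import rasterio

    with rasterio.open("RGB-PanSharpen_AOI_2_Vegas_img1.tif") as src:
        data = src.read().astype("float64")            # shape: (bands, rows, cols)
        profile = src.profile

    lo, hi = np.percentile(data[data > 0], (2, 98))    # ignore zero-valued nodata pixels
    scaled = np.clip((data - lo) / (hi - lo), 0, 1) * 255

    profile.update(dtype="uint8")
    with rasterio.open("RGB-PanSharpen_AOI_2_Vegas_img1_8bit.tif", "w", **profile) as dst:
        dst.write(scaled.astype("uint8"))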

For more features:

python python/createDataSpaceNet.py -h

Use our Docker Container

We have created two Dockerfiles at /docker/standalone/cpu and /docker/standalone/gpu. These Dockerfiles will build a Docker container with all packages necessary to run the package.

More documentation to follow.

Dependencies

All dependencies can be found in the Dockerfile.

License

See LICENSE.

utilities's People

Contributors

dlindenbaum, jshermeyer, lncohn, nrweir, toddstavish, williemaddox


utilities's Issues

Unable to download dataset from aws

I'm unable to download most of the SpaceNet datasets. For example, when I try to download the Las Vegas Building Training Dataset using the command
aws s3api get-object --bucket spacenet-dataset --key AOI_2_Vegas/AOI_2_Vegas_Train.tar.gz --request-payer requester AOI_2_Vegas_Train.tar.gz, the download stops partway (~70%) and I get the error HTTPSConnectionPool(host='spacenet-dataset.s3.amazonaws.com', port=443): Read timed out. It's happening for most of the datasets. I've tried tweaking the parameters as mentioned here but with no luck. Kindly guide me.

evaluate buildings detection from spaceNet

Hello,
The code provides scripts for result evaluation, like the following:

python python/evaluateScene.py /path/to/SpaceNetTruthFile.csv \
                               /path/to/SpaceNetProposalFile.csv \
                               --resultsOutputFile /path/to/SpaceNetResults.csv

I am confused about how to evaluate, because my algorithm's output is not a CSV table but a label image in which white pixels belong to buildings and dark pixels belong to background.
I can also convert the label images to geojson files using the GDAL Polygonize function (sketched below), but there are two issues:

  1. How do I assign the ID for each building polygon? In the "SpaceNetTruthFile.csv", each building polygon has a building ID; how do I make sure the buildings in my result have the same IDs as in the "SpaceNetTruthFile.csv"? At least the correctly detected building polygons should have the correct IDs.
  2. When two or more buildings are close to each other, the detected result shows them connected (see the bottom white area of the attached figure). The polygonize function will convert this white area into one polygon, but the ground truth is two separate polygons, which makes the IoU value small and the detection is considered wrong. How should this case be handled? I don't think the detection is wrong; the buildings are just too close.
    (attached screenshot: rgb-pansharpen_aoi_2_vegas_8bit_img4_blob_0)
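
For reference, a minimal sketch (an assumption, not code from this repository) of the polygonize step mentioned above, using gdal.Polygonize; the file names and the "DN" attribute field are illustrative.

    # Sketch: convert a binary prediction mask to GeoJSON building polygons.
    from osgeo import gdal, ogr, osr

    src_ds = gdal.Open("prediction_mask.tif")           # hypothetical label image
    src_band = src_ds.GetRasterBand(1)
    srs = osr.SpatialReference(wkt=src_ds.GetProjection())

    drv = ogr.GetDriverByName("GeoJSON")
    dst_ds = drv.CreateDataSource("prediction_polygons.geojson")
    dst_layer = dst_ds.CreateLayer("buildings", srs=srs)
    dst_layer.CreateField(ogr.FieldDefn("DN", ogr.OFTInteger))

    # Using the band itself as the mask skips zero-valued (background) pixels.
    gdal.Polygonize(src_band, src_band, dst_layer, 0, [], callback=None)
    dst_ds = None                                        # flush the GeoJSON to disk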

coreLabelTools.py function createRasterFromGDF: use np.zeros instead of np.empty

coreLabelTools.py function createRasterFromGDF currently does this for an empty geojson (=no buildings):

burned = np.empty(rst.shape, dtype='uint8')

This gives a somewhat random array depending on what was in memory.
But if I understand correctly, what we want is an array with all zeroes:

burned = np.zeros(rst.shape, dtype='uint8')

evalTools.py

Hi,
I used your code to evaluate my results and noticed a bug: when I use the iou function, the intersection between the ground-truth polygon and the predicted polygon sometimes returns a GEOMETRYCOLLECTION, and in this case the code fails and gives a score of 0.
I added these lines, in case they are of interest:

elif intersection_result.GetGeometryName() == 'GEOMETRYCOLLECTION':
    maxlist = []
    for geoCol in intersection_result:
        if geoCol.GetGeometryName() == 'POLYGON' or \
                        geoCol.GetGeometryName() == 'MULTIPOLYGON':
            intersection_area = geoCol.GetArea()
            union_area = test_poly.Union(truth_polys[fid]).GetArea()
            maxlist.append(intersection_area / union_area)
    iou_list.append(np.max(maxlist))

COCO annotations?

Currently it is possible to utilise spacenet utilities to convert spacenet annotations to Pascal VOC/Darknet annotation formats. Are there any plans to introduce a functionality that will allow us to convert annotations into the MS COCO format?

Building footprint data and its source

What is the source of the ground-truth building footprints provided in the SpaceNet data on AWS? Is it data from the relevant national mapping agency of each sample dataset? Or is it produced from some internal DigitalGlobe building database, or manually digitised from the same imagery? What QA has been done on it? I'm most interested in the Shanghai AOI, which is a WV-3 dataset.

Spacenet availability on AWS

Is spacenet still available on AWS?
Command:
aws s3 ls spacenet-dataset
returns:

An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied

createDataSpaceNet.py crashed when transforming data to PASCAL VOC2012

I used createDataSpaceNet.py to transform SpaceNet data to PASCAL VOC2012, and the script crashed with the following message:
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
The code crashes at this position in "labelTools.py":

        featureDefn = innerBufferLayer.GetLayerDefn()
        bufferDist = srcRaster.GetGeoTransform()[1]*bufferSizePix
        for idx, feature in enumerate(source_layer):
            ingeom = feature.GetGeometryRef()  # crashed at second loop iteration
            geomBufferOut = ingeom.Buffer(bufferDist) 
            geomBufferIn  = ingeom.Buffer(-bufferDist)
            print(geomBufferIn.ExportToWkt())
            print(geomBufferIn.IsEmpty())
            print(geomBufferIn.IsSimple())

It worked fine in the first loop iteration but crashed in the second. The image I used is
AOI_2_Vegas_Train/RGB-PanSharpen/RGB-PanSharpen_AOI_2_Vegas_img1.tif'

I also tried to use other images, some of them work well, but some of them failed.

The parameters I used are: aws_SpaceNet/un_gz/AOI_2_Vegas_Train --srcImageryDirectory RGB-PanSharpen --outputDirectory aws_SpaceNet/voc_format/AOI_2_Vegas_Train/ --annotationType PASCALVOC2012 --imgSizePix 400 --convertTo8Bit --spacenetVersion 2
The GDAL version I used is 2.1.3; I don't know whether this matters. I upgraded to this version because of the requirements of other packages.

I don't know what's going on; many thanks for any suggestions.

AOI_2_Vegas_Train.tar.gz cannot be unzipped correctly

When I unzip the file AOI_2_Vegas_Train.tar.gz, I run into a problem. I use the Ubuntu command "tar zxvf AOI_2_Vegas_Train.tar.gz". It shows the error information below:
"gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
"

Utilities to rotate images and recompute the pixel-level bounding polygons.

Hello:

I'm interested in augmenting the dataset via rotation.

I want to start with the original untiled images, rotate them (from the center of the image), then retile them. Along with that step, I'd like the pixel-level annotations for the bounding polygons to be adjusted accordingly.

Will you make available utilities to rotate images and recompute the pixel-level bounding polygons?

-Auro

Dataset for height of all objects

Hi,
I am exploring the SpaceNet dataset. I am interested in ground truth labels for heights of objects off of the ground. Do we have ground truth labels for that in the dataset?
Thanks!

Write pytest tests for all functions

Write pytest tests for all functions in the module

This is marked as hard not due to the intrinsic difficulty, but due to the amount of thought that will need to go into identifying all possible edge cases that could interfere with each function's performance. For example, if a function reads in a geojson-formatted file, can it handle an empty geojson (which throws an error in gdal/fiona)?

One could start by writing tests to cover the "normal operation" cases to ensure functions perform as expected.

Useful references to get started:

Missing scene metadata / .imd files

Hi,

I've been looking over the Spacenet data for Shanghai. The WV3 metadata doesn't appear to be included (the .IMD files). For my classification research I need to be able to calculate TOA radiance (and potentially reflectance) using the metadata file. Is it possible for the IMD files to be provided please?

Thanks

outputFileType does not convert to JPEG

$ python ./utilities/python/createDataSpaceNet.py AOI_2_Vegas_Train/ --srcImageryDirectory RGB-PanSharpen --annotationType DARKNET --outputFileType JPEG

Does everything correctly, except that the files remain as .tif.

error with SBD data

I tried to run the command

python python/createDataSpaceNet.py PATH/AOI_5_Khartoum_Train  
--srcImageryDirectory RGB-PanSharpen  
--outputDirectory PATH/AOI_5_Khartoum_Train/annotations 
 --annotationType SBD --outputFileType JPEG 

There is a problem at line 118 in createDataSpaceNet.py, because bboxresize is passed to the geoJsonToSBD function and that parameter does not exist there.

Specify errors for bare excepts

There are bare except statements in the code which caused me problems on several occasions as I tried to debug. These should have specific errors indicated to ensure accurate behavior. @dlindenbaum, I can try to fix these (or at least ID the excepts that need to be targeted), but I'm not going to know what each one is supposed to be catching so might need help.

Cannot access public dataset SPACENET

On http://spacenet-dataset.s3.amazonaws.com/ , public s3 bucket is arn:aws:s3:::spacenet-dataset.
However, when I try to access the bucket via https at http://spacenet-dataset.s3.amazonaws.com/, it gives the following error:

<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>80AA03A4A2B74B07</RequestId>
<HostId>
vpeLzorRhAVQAc5WIy7aC9atGuyrgnBw+xdRR8GklvasUI4jgQ+Bd8Yl6Jo9QU5rQnbcMOkwxME=
</HostId>
</Error>

Is this dataset no longer public ?

Pascal format files can't be converted correctly

I tried to use the createDataSpaceNet script to convert the original geojson labels to Pascal format with the following command:

python python/createDataSpaceNet.py /media/lok/dataset/lok/data/spacenet/data/AOI_2_Vegas_Train/
--outputDirectory /media/lok/dataset/lok/data/spacenet/data/VOC/AOI_2_Vegas_Train/
--convertTo8Bit

It works on the AOI_3 and AOI_4 datasets. But when it comes to the AOI_2 and AOI_5 datasets, the ground truth contains at most one building footprint; the others are all missing. Can you help fix this bug?

Thanks for your time!

Do the SpaceNet utilities support the Off Nadir Dataset (AOI 6 Atlanta)?

Just wanted to know whether the utilities support the SpaceNet Off Nadir dataset (AOI 6 Atlanta). I tried converting the off-nadir annotations into the Darknet format but was unable to do so because createDataSpaceNet.py requires, among other things, matching image and geojson pairs, which is not the case for the Off Nadir dataset.

I would highly appreciate some clarification on this matter.

Update README.md

Update README.md

We need to read through the README carefully and see what is deprecated going from the "old" master to v3/dev. For example, the paths to some of the scripts referenced in the README have changed, and I'm not sure if the CLI remains as described, etc.

Other tasks:

  • include install instructions relevant to 4.0.0
  • include links to readthedocs when that's live
  • include a "version history" section where we can indicate what has changed between releases (? this can also be done as its own .txt file in the repo if it will be too unwieldy)
  • probably more I'm not thinking of right now

createDataSpaceNet.py appends extra sub folder...

Running this:
python ./utilities/python/createDataSpaceNet.py AOI_2_Vegas_Train --srcImageryDirectory RGB-PanSharpen --annotationType DARKNET --outputFileType JPEG
Returns this...
fullpathImageDirectory = AOI_2_Vegas_Train/AOI_2_Vegas_Train/RGB-PanSharpen

I found this on line 275-276 of the createDataSpaceNet.py script:
for aoiSubDir in listOfAOIs:
    fullPathSubDir = os.path.join(srcSpaceNetDirectory, aoiSubDir)
Where listOfAOIs is defined on line 251 as:
listOfAOIs = [srcSpaceNetDirectory]

This appears to be a mistake in the code, but maybe I'm doing something wrong in the command line call?

Permissions on the spacenet AWS S3 bucket

I'm having trouble accessing data from the SpaceNet AWS public dataset. From previous similar issues there might be a permissions issue on the spacenet bucket.

I've verified that my access key ID, secret access key, and region are configured correctly and was able to successfully list one of the Landsat open dataset buckets at s3://landsat-pds

I've attached the list command I use:
aws s3 ls s3://spacenet-dataset/ --request-payer requester

And the error message that is returned:
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied

NULL Pointer error when trying to process a geojson file with no building footprints

I ran the code below to convert the Spacenet 2 Khartoum dataset to SBD format.

python2.7 createDataSpaceNet.py /path/spacenet/AOI_5_Khartoum_Train/ \
           --srcImageryDirectory RGB-PanSharpen \
           --outputDirectory /path/spacenet/AOI_5_Khartoum_Train/sbd_labels \
           --annotationType SBD \
           --convertTo8Bit \
           --outputFileType JPEG \
           --imgSizePix 650

On buildings_AOI_5_Khartoum_img3.geojson the script crashed and I got the traceback below indicating a NULL pointer.

Traceback (most recent call last):
  File "createDataSpaceNet.py", line 317, in <module>
    bboxResize= args.boundingBoxResize
  File "createDataSpaceNet.py", line 110, in processChipSummaryList
    entry = lT.geoJsonToSBD(annotationName_cls, annotationName_inst, chipSummary['geoVectorName'], chipSummary['rasterSource'])
  File "/path/spacenet_utilities/python/spaceNetUtilities/labelTools.py", line 1066, in geoJsonToSBD
    my_inst_segmentation = createInstanceSegmentation(my_raster_source, my_vector_source)
  File "/path/spacenet_utilities/python/spaceNetUtilities/labelTools.py", line 1016, in createInstanceSegmentation
    cell_array[i] = createSegmentationByFeatureIndex(i, rasterSrc, vectorSrc, npDistFileName='', units='pixels')
  File "/path/spacenet_utilities/python/spaceNetUtilities/labelTools.py", line 1004, in createSegmentationByFeatureIndex
    dist_trans_by_feature = createDistanceTransformByFeatureIndex(feature_index, rasterSrc, vectorSrc, npDistFileName='', units='pixels')
  File "/path/spacenet_utilities/python/spaceNetUtilities/labelTools.py", line 950, in createDistanceTransformByFeatureIndex
    Feature_Layer.CreateFeature(my_feature)
  File "/path/python2.7/site-packages/osgeo/ogr.py", line 1727, in CreateFeature
    return _ogr.Layer_CreateFeature(self, *args)
ValueError: Received a NULL pointer.

The error seems to be caused by the geojson file having no building footprints in it. So the script passes this empty set of footprints to OGR and OGR interprets that as a NULL. I think the solution would be to check for an empty value before passing the value to OGR.
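
A minimal sketch of such a guard (an assumption, not the repository's actual fix; the helper name safe_create_feature is hypothetical):

    # Sketch: skip empty GeoJSON layers / missing features before handing them to OGR.
    from osgeo import ogr

    def safe_create_feature(feature_layer, my_feature):
        """Create the feature only if OGR actually returned one."""
        if my_feature is None:
            # Empty GeoJSON (no building footprints): nothing to rasterize.
            return False
        feature_layer.CreateFeature(my_feature)
        return True

    ds = ogr.Open("buildings_AOI_5_Khartoum_img3.geojson")   # the empty chip from this issue
    layer = ds.GetLayer()
    if layer.GetFeatureCount() == 0:
        print("No footprints in this chip; skipping instance segmentation.")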

error on running createDataSpaceNet.py

File "/Users/lilimeng/Desktop/Price_prediction/SpaceNet/utilities/python/spaceNetUtilities/labelTools.py", line 4, in
import geoTools as gT
ImportError: No module named 'geoTools'

Implement deprecation cycle

copied from scikit-image's CONTRIBUTING.txt:

Deprecation cycle

If the behavior of the library has to be changed, a deprecation cycle must be
followed to warn users.

A deprecation cycle is not necessary when:

  • adding a new function, or
  • adding a new keyword argument to the end of a function signature, or
  • fixing what was buggy behaviour.

A deprecation cycle is necessary for any breaking API change, meaning a
change where the function, invoked with the same arguments, would return a
different result after the change. This includes:

  • changing the order of arguments or keyword arguments, or
  • adding arguments or keyword arguments to a function, or
  • changing a function's name or submodule, or
  • changing the default value of a function's arguments.
Usually, our policy is to put in place a deprecation cycle over two releases.

For the sake of illustration, we consider the modification of a default value in
a function signature. In version N (therefore, next release will be N+1), we
have

.. code-block:: python

    def a_function(image, rescale=True):
        out = do_something(image, rescale=rescale)
        return out

that has to be changed to

.. code-block:: python

    def a_function(image, rescale=None):
        if rescale is None:
            warn('The default value of rescale will change to False in version N+3')
            rescale = True
        out = do_something(image, rescale=rescale)
        return out

and in version N+3

.. code-block:: python

    def a_function(image, rescale=False):
        out = do_something(image, rescale=rescale)
        return out
Here is the process for a 2-release deprecation cycle:

  1. In the signature, set default to None, and modify the docstring to specify that it's True.
  2. In the function, if rescale is set to None, set to True and warn that the default will change to False in version N+3.
  3. In doc/release/release_dev.rst, under deprecations, add "In a_function, the rescale argument will default to False in N+3."
  4. In TODO.txt, create an item in the section related to version N+3 and write "change rescale default to False in a_function".
Note that the 2-release deprecation cycle is not a strict rule and in some
cases, the developers can agree on a different procedure upon justification
(like when we can't detect the change, or it involves moving or deleting an
entire function for example).

Implement continuous integration

Implement continuous integration

At one point in the past we had CircleCI set up for this repository. @dlindenbaum, do you know what the status on that was?

Ideally we should get this up and running again for at least python 2.7 and 3.6 with the versions of dependencies we want (see the related issue) with testing (see #95). We should probably also automatically trigger for PRs.

@dlindenbaum let me know if you want to take this one, or I can have a shot at figuring out how to do it.

error while running createDataSpaceNet.py

Hi,
I ran this line in Python:
createDataSpaceNet.py ~/spacenet-data/AOI_5_Khartoum_Train/
and the code exits like this in the terminal:
Segmentation fault (core dumped)
and like this in PyCharm:
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
When I debug, I see that the problem occurs at this line in labelTools.py:
gdal.RasterizeLayer(target_ds, [1], innerBufferLayer, burn_values=[100])
Have you seen something like this before?
Thanks

rtree, centerline not installed

I am using Ubuntu 16.04 and v3 of this project. When running the utilities package with python 3 (default version when installing python on Ubuntu is 3.6.7), I get the error that the versions are not compatible (package is written in python 2.7 I assume?) just like this:

python3 createDataSpaceNet.py /LabData/AOI_2_Vegas_Roads_Train/
Traceback (most recent call last):
File "createDataSpaceNet.py", line 4, in
from spaceNetUtilities import labelTools as lT
File "/home/marcin/utilities-3.0/python/spaceNetUtilities/labelTools.py", line 381
print imageId
^
SyntaxError: Missing parentheses in call to 'print'

And when I force python 2.7 installation and then switch to python 2.7 I still get the error like this one:

python createDataSpaceNet.py /LabData/AOI_2_Vegas_Roads_Train/
Traceback (most recent call last):
File "createDataSpaceNet.py", line 4, in
from spaceNetUtilities import labelTools as lT
File "/home/marcin/utilities-3.0/python/spaceNetUtilities/labelTools.py", line 4, in
import geoTools as gT
File "/home/marcin/utilities-3.0/python/spaceNetUtilities/geoTools.py", line 17, in
import centerline
ImportError: No module named centerline

I tried installing centerline separately but I would still get an error with rtree/geopandas/osgeo or simply GDAL package.

Write docstrings for all functions in inferenceTools

Write docstrings for all functions

We need to begin writing docstrings for all of the functions in the package.

Important components:

  • Single-line (<80 character) one-line summary.
  • Description of each argument, including type, possible value(s), whether or not the argument is required, and any other important information for the user to understand how to use that argument.
  • Description of what the function returns, including type and any other information required to understand the outputs.
  • Any further description needed to understand usage.

Example:

def rescale_image(image_input, lower_limit=0, upper_limit=255):
    """Re-scale image intensities.
    
    Arguments:
    ------------
    image_input (numpy array of ints, required): An image in numpy array format. Values should be 
        integers.
    lower_limit (int, optional): Lower limit for original pixel intensity values. Defaults to 0.
    upper_limit(int, optional): Upper limit for original pixel intensity values. Defaults to 255.

    Returns:
    ---------
    A numpy array of the same dtype with values in the range (lower_limit, upper_limit)
    rescaled to [0, 255].
    """
    
    (Function defined here)

Format: If authors want to add docstrings in whatever format you like, feel free, but these will eventually need to be formatted according to Sphinx formats, so if you put in the work to use that style you'll be cutting out future re-structuring effort.

An error occurred (AccessDenied) when calling the GetObject operation: Access Denied

I've been trying to download the pre-trained models of a listed solution to the road detection challenge, i.e., the one by selim_sef. The following is the script given in this git repository:

mkdir trained_models
aws s3 sync s3://spacenet-dataset/SpaceNet_Roads_Competition/Pretrained_Models/04-selim_sef/ trained_models/

which results in

An error occurred (AccessDenied) when calling the GetObject operation: Access Denied

Then I ran the following command to check the files present in this folder on AWS:

aws s3 ls s3://spacenet-dataset/SpaceNet_Roads_Competition/Pretrained_Models/04-selim_sef/

and it successfully showed all files.

But when I tried to download a single file using this command:

aws s3api get-object --bucket spacenet-dataset --key SpaceNet_Roads_Competition/Pretrained_Models/04-selim_sef/000_paris_linknet_inception.h5 --request-payer requester 000_paris_linknet_inception.h5

it still gives me the same error.

Implement doctest backbone

Doctest implementation

Per #95, I'd like to implement doctest to run examples in the docstrings of the code, as this will make it easier to enable testing while including examples. There's some core functionality that will need to be implemented for each .py file to make this happen - see the link above.

Example code for converting from lat/lon to x/y

I need help with converting geojson lat/lon coordinates to x/y pixels. Is there a simple example?

The helper function is available in geoTools.py; what's missing is an example of how to put it together.

The example can take as input any tif file and its corresponding geojson file and return the pixel coordinates of the bounding-boxes.

Thank you.
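
A minimal, self-contained sketch of how such a conversion can be put together (assuming GDAL's Python bindings; this is not the repository's helper, and the file name and coordinates are placeholders):

    # Sketch: map a lon/lat coordinate from a geojson label to pixel col/row in a GeoTIFF.
    from osgeo import gdal, osr

    ds = gdal.Open("RGB-PanSharpen_AOI_2_Vegas_img1.tif")
    geo_transform = ds.GetGeoTransform()

    # Transform lon/lat (EPSG:4326) into the raster's projected coordinate system.
    raster_srs = osr.SpatialReference(wkt=ds.GetProjection())
    wgs84 = osr.SpatialReference()
    wgs84.ImportFromEPSG(4326)
    to_raster = osr.CoordinateTransformation(wgs84, raster_srs)

    lon, lat = -115.2718, 36.1215                      # example coordinate from a geojson
    # Note: GDAL 3 defaults to lat/lon axis order for EPSG:4326; GDAL 2.x expects lon/lat.
    x, y, _ = to_raster.TransformPoint(lon, lat)

    # Invert the geotransform to go from projected coordinates to pixel indices.
    inv_gt = gdal.InvGeoTransform(geo_transform)
    col, row = gdal.ApplyGeoTransform(inv_gt, x, y)
    print(int(col), int(row))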
