utae-paps's Introduction

Panoptic Segmentation of Satellite Image Time Series with Convolutional Temporal Attention Networks (ICCV 2021)

This repository is the official implementation of Panoptic Segmentation of Satellite Image Time Series with Convolutional Temporal Attention Networks.


Updates

  • 27.06.2022 Major Bugfix 🪲 A bug in the panoptic metrics was artificially driving the Recognition Quality down. The bug is now fixed and the metrics have been updated here and on arXiv. Across experiments, fixing this bug improved PQ by ~2-3 points on PASTIS. See this issue for more details.

Contents

This repository contains the following PyTorch code:

  • Implementation of the U-TAE spatio-temporal encoding architecture for satellite image time series (UTAE)
  • Implementation of the Parcels-as-Points module for panoptic segmentation of agricultural parcels (PaPs)
  • Code for reproduction of the paper's results for panoptic and semantic segmentation.

Results

Our models achieve the following performance on the PASTIS benchmark:

PASTIS - Panoptic segmentation

Our spatio-temporal encoder U-TAE combined with our PaPs instance segmentation module achieves 43.8 Panoptic Quality (PQ) on PASTIS for panoptic segmentation. When replacing U-TAE with a convolutional LSTM, the performance drops to 35.6 PQ.

| Model name          | SQ   | RQ   | PQ   |
|---------------------|------|------|------|
| U-TAE + PaPs (ours) | 81.5 | 53.2 | 43.8 |
| UConvLSTM + PaPs    | 80.2 | 43.9 | 35.6 |

PASTIS - Semantic segmentation

Our spatio-temporal encoder U-TAE yields a semantic segmentation score of 63.1 mIoU on PASTIS, achieving an improvement of approximately 5 points compared to the best existing methods that we re-implemented (Unet-3d, Unet+ConvLSTM and Feature Pyramid+Unet). See the paper for more details.

| Model name    | #Params | OA    | mIoU  |
|---------------|---------|-------|-------|
| U-TAE (ours)  | 1.1M    | 83.2% | 63.1% |
| Unet-3d       | 1.6M    | 81.3% | 58.4% |
| Unet-ConvLSTM | 1.5M    | 82.1% | 57.8% |
| FPN-ConvLSTM  | 1.3M    | 81.6% | 57.1% |

Requirements

PASTIS Dataset download

The dataset is freely available for download here.

Python requirements

To install requirements:

pip install -r requirements.txt

(torch_scatter is required for the panoptic experiments. Installing this library requires a little more effort; see the official repo.)
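
A quick way to check that torch_scatter is installed against the right torch/CUDA build is a one-off scatter call (a minimal sketch; scatter is part of torch_scatter's public API):

import torch
from torch_scatter import scatter

# Sum-reduce src into two groups given by index: expect tensor([3., 7.])
src = torch.tensor([1.0, 2.0, 3.0, 4.0])
index = torch.tensor([0, 0, 1, 1])
print(scatter(src, index, reduce="sum"))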

Inference with pre-trained models

Panoptic segmentation

Pre-trained weights of U-TAE+PaPs are available here.

To perform inference with the pre-trained model on the test set of PASTIS, run:

python test_panoptic.py --dataset_folder PATH_TO_DATASET --weight_folder PATH_TO_WEIGHT_FOLDER --res_dir OUTPUT_DIR

Semantic segmentation

Pre-trained weights of U-TAE are available here.

To perform inference with the pre-trained model on the test set of PASTIS, run:

python test_semantic.py --dataset_folder PATH_TO_DATASET --weight_folder PATH_TO_WEIGHT_FOLDER --res_dir OUTPUT_DIR

Training models from scratch

Panoptic segmentation

To reproduce the main result for panoptic segmentation (with U-TAE+PaPs), run the following:

python train_panoptic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR

Options are also provided in train_panoptic.py to reproduce the other results of Table 2:

python train_panoptic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_NoCNN --no_mask_conv
python train_panoptic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_UConvLSTM --backbone uconvlstm
python train_panoptic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_shape24 --shape_size 24

Note: By default this script runs the 5 folds of the cross-validation, which can be quite long (~12 hours per fold on a Tesla V100). Use the --fold argument to execute only one of the 5 folds (e.g., for the 3rd fold: python train_panoptic.py --fold 3 --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR).

Semantic segmentation

To reproduce the results for semantic segmentation (with U-TAE), run the following:

python train_semantic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR

And to obtain the results of the competing methods presented in Table 1:

python train_semantic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_UNET3d --model unet3d
python train_semantic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_UConvLSTM --model uconvlstm
python train_semantic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_FPN --model fpn
python train_semantic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_BUConvLSTM --model buconvlstm
python train_semantic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_ConvGRU --model convgru
python train_semantic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_ConvLSTM --model convlstm

Finally, to reproduce the ablation study presented in Table 1:

python train_semantic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_MeanAttention --agg_mode att_mean
python train_semantic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_SkipMeanConv --agg_mode mean
python train_semantic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_BatchNorm --encoder_norm batch
python train_semantic.py --dataset_folder PATH_TO_DATASET --res_dir OUT_DIR_SingleDate --mono_date "08-01-2019"

Reference

Please cite the following paper if you use U-TAE, PaPs, or the PASTIS benchmark.

@inproceedings{garnot2021panoptic,
  title={Panoptic Segmentation of Satellite Image Time Series with Convolutional Temporal Attention Networks},
  author={Sainte Fare Garnot, Vivien and Landrieu, Loic},
  booktitle={ICCV},
  year={2021}
}

Credits

  • This work was partly supported by ASP, the French Payment Agency.

  • Code for the presented methods and dataset is original code by Vivien Sainte Fare Garnot. Competing methods and some utility functions were adapted from existing repositories, which are credited in the corresponding files.


utae-paps's Issues

Optimization problem: missing m_hat parameters

Hi, how is it going?
@VSainteuf @loicland

In equation (6), which is applied to extract the centerness map, I would like to ask what the estimated parameters $\hat{I}_p$ and $\hat{J}_p$ are.

I'm asking since I implemented this optimization problem and didn't end up with those parameters:

import numpy as np

def loss(params):
    # Gaussian centerness term; note the denominator here is (2*sigma)**2,
    # not the usual 2*sigma**2 of a Gaussian.
    i_measured, j_measured, sigma_h, sigma_w = params
    term_i = np.power(i_measured, 2) / np.power(2 * sigma_h, 2)
    term_j = np.power(j_measured, 2) / np.power(2 * sigma_w, 2)
    return np.exp(-(term_i + term_j))
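
For reference, this is the usual Gaussian form I assumed equation (6) takes (my assumption: $\hat{I}_p$, $\hat{J}_p$ would be the predicted center coordinates, and $\hat{\sigma}_H$, $\hat{\sigma}_W$ the predicted shape parameters):

$$\hat{m}_p(i, j) = \exp\left( -\frac{(i - \hat{I}_p)^2}{2\hat{\sigma}_H^2} - \frac{(j - \hat{J}_p)^2}{2\hat{\sigma}_W^2} \right)$$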

Thank you so much!

Problem with filtering out multiple void instances

I ran some tests with your implementation of the panoptic metric and got a curious result.

I made a simple test case with two instances that overlap the void class -> they should get filtered out.

If I have only one of those instances, it works properly and I don't get a FP.
If I have both instances, they don't get filtered out and I get two FPs.

It might have to do with

torch.unique(void_mask, return_counts=True)

since here you take the unique over what is basically a binary mask, if I understand correctly. I suspect you wanted instance_true[batch_idx] here instead of void_mask.
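
A minimal sketch of what I mean (hypothetical tensors, not the actual shapes from the repo):

import torch

void_mask = torch.tensor([0, 1, 1, 1, 0])      # binary: void vs. not void
instance_true = torch.tensor([0, 1, 1, 2, 2])  # two instances, ids 1 and 2

# On a binary mask, unique can only ever return the values 0 and 1,
# so all per-instance information is lost:
print(torch.unique(void_mask, return_counts=True))
# (tensor([0, 1]), tensor([2, 3]))

# Taking unique over the instance ids inside the void region instead
# gives the per-instance overlap counts:
print(torch.unique(instance_true[void_mask.bool()], return_counts=True))
# (tensor([1, 2]), tensor([2, 1]))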

visualization

Hi, the code can visualize the RGB images. May I know how to visualize the multispectral images from this dataset?

PQ=SQ*RQ not matching from the code

Hi,
I trained an RGB panoptic model (Panoptic-DeepLab) by converting PASTIS data to RGB (only 3 channels), and we used your evaluation code to calculate PQ, SQ, and RQ. The problem is that when we run the test code, the resulting PQ, SQ, and RQ are very different: PQ is not the product of SQ*RQ. In your paper the values match. Does the code have a bug?
In your metrics.py code, it returns SQ.mean(), RQ.mean(), and PQ.mean(), which are means over all classes, and PQ.mean() is not equal to SQ.mean() * RQ.mean(). How do you calculate the metrics reported in your paper?
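
For illustration, here is a hypothetical two-class example of the effect I am seeing: PQ = SQ * RQ holds per class, but the relation breaks after averaging over classes:

from statistics import mean

sq = [0.9, 0.7]
rq = [0.5, 0.8]
pq = [s * r for s, r in zip(sq, rq)]  # per-class PQ = SQ * RQ -> [0.45, 0.56]

print(mean(pq))             # 0.505
print(mean(sq) * mean(rq))  # 0.8 * 0.65 = 0.52, which differs from 0.505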

Question regarding numpy arrays

Very well done job on the paper and the code!

I did have a question about the numpy arrays used for training, specifically the 'heatmap_xx.npy', 'zones_xx.npy', and 'instances_xx.npy' files. I've already found some information about the 'ParcelIDs_xx.npy' and 'target_xx.npy' files, but I was hoping you could provide me with some additional details on how the other files were created.
Thank you!

Pre-Trained Weights

Adding a new folder with the pre-trained weights would help Colab users run the code.

Things and stuff id confusion

Hi @VSainteuf,
I was trying to train a Detectron panoptic segmentation model on the PASTIS dataset. We are evaluating five different panoptic segmentation models on multispectral satellite data, and we chose PASTIS as our primary dataset since it is clean and well curated (thanks for the clean data!). Most of the models we chose can easily be trained from the COCO format, so we decided to build a COCO-format JSON file for PASTIS. One thing confuses us when creating the JSON in COCO panoptic format: COCO requires 'stuff' ids and 'things' ids. From your paper and code we can relate your nomenclature to the stuff id (or category_id), but how should we define the instance id? Looking at your data loader, it seems the 'zone' files act as instance ids. Can you please clarify how to map the 19-class category_id to the things id (or 'id') in the COCO format?

Kind regards

Inference pretrained model

@loicland @VSainteuf Hello, thank you for the work. I want to test the trained model on my own region. How should my input data be formatted? For example, I obtained all cloud-free Sentinel-2 images from 2020-2021 for the area I specified and stacked the different dates with numpy. I now have an image array of size 10800(h) x 10800(w) x 13(ch) x 34(dates). Should I save it as 256(h) x 256(w) x 13(ch) x 34(dates) pieces with numpy.save and use these npy files for inference?
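
Here is a sketch of the tiling I have in mind (the file name mosaic.npy and the h x w x c x t layout are my own; I assume each saved patch should be time-first, i.e. (t, c, h, w), so the tiles are transposed before saving, and edge remainders are simply dropped):

import numpy as np

arr = np.load("mosaic.npy", mmap_mode="r")  # e.g. (10800, 10800, 13, 34)
P = 256
for i in range(0, arr.shape[0] - P + 1, P):
    for j in range(0, arr.shape[1] - P + 1, P):
        tile = np.asarray(arr[i : i + P, j : j + P])  # (256, 256, 13, 34)
        tile = tile.transpose(3, 2, 0, 1)             # -> (34, 13, 256, 256)
        np.save(f"S2_{i}_{j}.npy", tile)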

Target and input mismatch?

Hi, I am getting a target and input mismatch. I'm unsure how the input got to this shape, as it starts with a shape (b,t,c,h,w) of [1,12,15,256,256]. Do you have any suggestions on where to look in the code?

UserWarning: Using a target size (torch.Size([1, 1, 256, 256])) that is different to the input size (torch.Size([1, 20, 256, 256])). This will likely lead to incorrect results due to broadcasting.

Dataset format: Zones

@VSainteuf @watch24hrs-iiitd, hello,

Is it possible to run a new training with a new dataset? I am studying the implementation and I find that train_panoptic.py needs the zone parameters for this. What are zones, and what type of data are they?

Thank you

if mode != "train": with torch.no_grad(): predictions = model( x, batch_positions=dates, pseudo_nms=compute_metrics, heatmap_only=heatmap_only, ) else: zones = y[:, :, :, 2] if config.supmax else None optimizer.zero_grad() predictions = model( x,

Originally posted by @jhonjam in #3 (comment)

A couple of questions about the data set

Hi, I want to replace the original dataset with my own, but I don't know how some of the npy files are generated (e.g. the INSTANCE_Id file; it seems to add some boundaries and determine the background). Could you explain?

Background class supervision problem

From the code in train_semantic.py:

parser.add_argument("--ignore_index", default=-1, type=int)
classes = 20
weights[config.ignore_index] = 0
iou_meter = IoU(
    num_classes=config.num_classes,
    ignore_index=config.ignore_index,
    cm_device=config.device,
)

Is the background class loss excluded during training, and is the background IoU ignored during evaluation?
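
Here is my understanding, as a minimal sketch (hypothetical shapes; this only demonstrates the zero-weight behaviour of nn.CrossEntropyLoss, not your actual training loop):

import torch
import torch.nn as nn

num_classes = 20
weights = torch.ones(num_classes)
weights[-1] = 0  # ignore_index = -1 -> last (background) class weight zeroed
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(2, num_classes, 4, 4)
target = torch.randint(0, num_classes - 1, (2, 4, 4))
target[0, 0, 0] = num_classes - 1  # one background pixel

# Perturbing the logits only at the background pixel leaves the loss unchanged:
perturbed = logits.clone()
perturbed[0, :, 0, 0] += torch.randn(num_classes)
print(torch.allclose(criterion(logits, target), criterion(perturbed, target)))  # True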

Problem With --mono_date Variable.

Hello, how can I initialize the --mono_date variable? When I run the code, an error appears at the condition if "-" in mono_date:

TypeError: argument of type 'NoneType' is not iterable

Do you have any suggestions for solving this problem?
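
One guard I am considering, as a sketch (assuming mono_date is meant to be optional and may also be an integer index rather than a date string):

# Hypothetical guard around the failing condition in src/dataset.py:
if isinstance(mono_date, str) and "-" in mono_date:
    ...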

Thank you in advance.

Runtime error during testing panoptic segmentation (tensors on different devices)

During testing panoptic segmentation, with the following command:

python3 test_panoptic.py --dataset_folder "../PASTIS" --weight_folder "../UTAE_PAPs"

I ran into the following error:

  File "test_panoptic.py", line 142, in <module>
    main(config)
  File "test_panoptic.py", line 125, in main
    device=device,
  File "/utae-paps/train_panoptic.py", line 243, in iterate
    pano_meter.add(predictions, y)
  File "/utae-paps/src/panoptic/metrics.py", line 126, in add
    self.cumulative_ious[i] += torch.stack(ious).sum()
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

It seems that tensor self.cumulative_ious is on cpu while the other tensor is on cuda.

The following change in file /utae-paps/src/panoptic/metrics.py:

   self.cumulative_ious[i] += torch.stack(ious).sum().to(device='cpu')

manages to fix this issue. Kindly confirm the validity of this fix.

Note that I also tried transferring self.cumulative_ious to cuda, but that runs into errors later during execution.
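
For reference, an alternative one-line sketch (attribute names taken from the traceback above) that keeps the sum on whatever device the accumulator already lives on:

# Move the summed IoUs to the accumulator's device instead of hard-coding cpu:
self.cumulative_ious[i] += torch.stack(ious).sum().to(self.cumulative_ious[i].device)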

multi GPU training problem

When I use multiple GPUs for training, the inconsistent size of tensor types in instance_masks leads to data parallelism errors. How can I solve this error?

Config mismatch during testing

For testing panoptic segmentation (using the downloaded UTAE-PAPs weights and the PASTIS dataset from Zenodo), I ran the following command, as mentioned in the README:

python3 test_panoptic.py --dataset_folder "./PASTIS" --weight_folder "./UTAE_PAPs"

Despite dataset_folder being explicitly given as ./PASTIS, it is assigned the value /home/DATA/PASTIS.

On further investigation, in test_panoptic.py,

if __name__ == "__main__":
    test_config = parser.parse_args()

    with open(os.path.join(test_config.weight_folder, "conf.json")) as file:
        model_config = json.loads(file.read())

    config = {**vars(test_config), **model_config}
    config = argparse.Namespace(**config)
    config.fold = test_config.fold
   
    pprint.pprint(config)
    main(config)

Here config.dataset_folder takes the value from model_config (/home/DATA/PASTIS) rather than from test_config (./PASTIS), resulting in the configuration mismatch.
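
A minimal sketch of the merge-order behaviour (hypothetical values):

# In {**a, **b}, keys from b override a, so model_config wins here:
test_cfg = {"dataset_folder": "./PASTIS"}
model_cfg = {"dataset_folder": "/home/DATA/PASTIS"}
merged = {**test_cfg, **model_cfg}
print(merged["dataset_folder"])  # /home/DATA/PASTIS

# Re-applying the command-line value (as is already done for fold) fixes it:
merged["dataset_folder"] = test_cfg["dataset_folder"]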

Data types

Hello!
I have a question about the nature of the data, if you may.
In my case, my input data is GeoTIFF and the labels are GeoJSON.
So my question is: can I use this type of data to train the model, or should I strictly use npy and JSON?
And if I can, is there anything to change in the code?
Thank you!

Question about visualization

Hi,
Thank you very much for the visualization code you shared. I am getting the following error when I run it:

TypeError                                 Traceback (most recent call last)
Input In [50], in <cell line: 8>()
      5 alpha=.5
      8 for b in range(batch_size):
      9     # Plot S2 background
---> 10     im = get_rgb(x,b=b, t_show=t)
     11     axes[b,0].imshow(im)
     12     axes[b,2].imshow(im)

Input In [48], in get_rgb(x, b, t_show)
     17 def get_rgb(x,b=0,t_show=6):
     18     """Gets an observation from a time series and normalises it for visualisation."""
---> 19     im = x[b,t_show,[2,1,0]].cpu().numpy()
     20     mx = im.max(axis=(1,2))
     21     mi = im.min(axis=(1,2))   

TypeError: 'int' object is not subscriptable

How can I fix this error please?

[missing input argument]

Please add:

parser.add_argument(
    "--model",
    default="utae",
    type=str,
    help="Type of architecture to use. Can be one of: (utae/unet3d/fpn/convlstm/convgru/uconvlstm/buconvlstm)",
)

Since we'll get this error:

AttributeError: 'Namespace' object has no attribute 'model'

Data loading logic

Hey!
Nice paper and well-organized code, thanks for sharing it. I read your paper and also checked the code, but I am a bit confused by the way you load data. This is my first encounter with satellite data, so maybe I am asking a silly question. When I checked the data, it is a time series, and that is fine, but how you use a reference date and the mono date is not very clear to me. My questions are:

  1. What does the reference date mean here? When I use a mono date, does that mean I am taking data for only that date?
  2. If I want to use the whole time series, do I need to give both a reference date and a mono date as arguments?
  3. For panoptic segmentation there is no such condition in the dataset wrapper function. Is instance segmentation the same as panoptic? I am a bit confused by that.

I hope you understand my questions.

Inference

Greetings Mr @VSainteuf

During inference, I have been following the instructions for panoptic segmentation.
However, after I ran the command (python test_panoptic.py --dataset_folder data --weight_folder weights/UTAE_PAPs --res_dir Output), it printed this:

Traceback (most recent call last):
  File "test_panoptic.py", line 148, in <module>
    main(config)
  File "test_panoptic.py", line 97, in main
    dt_test = PASTIS_Dataset(**dt_args, folds=test_fold)
  File "/home/nextav/LULC/U-TAE-main/src/dataset.py", line 80, in __init__
    if "-" in mono_date
TypeError: argument of type 'NoneType' is not iterable

So I made a modification; more precisely, I changed the if condition to use mono_date=config.mono_date. I'm sure it's not quite right, but it works :)
After some minutes of inference (the GPU memory reached 10GB), I faced this new error, and the Output folder is still empty:

Step [100/124], Loss: 7.8338, SQ 81.0, RQ 53.0, PQ 43.4
Epoch time : 289.6s
Traceback (most recent call last):
  File "test_panoptic.py", line 148, in <module>
    main(config)
  File "test_panoptic.py", line 134, in main
    save_results(fold + 1, test_metrics, tables, config)
  File "/home/nextav/LULC/U-TAE-main/train_panoptic.py", line 329, in save_results
    with open(
FileNotFoundError: [Errno 2] No such file or directory: 'Output/Fold_1/test_metrics.json'

Do you have any idea where the error could be? Thank you in advance.
