
its_live_production's Introduction

itslive_production

The Inter-mission Time Series of Land Ice Velocity and Elevation (ITS_LIVE) is a NASA MEaSUREs project to provide automated, low latency, global glacier flow and elevation change datasets.

Welcome to the ITS_LIVE production GitHub repository. This repository is intended for internal project use and captures the tools and scripts used to build velocity-pair datacubes and mosaics. For notebooks that show how to access and use ITS_LIVE data products, please refer to the its-live repository.

its_live_production's People

Contributors

agoodm, alex-s-gardner, jhkennedy, markf6, mliukis


its_live_production's Issues

Priority datacube & catalogue tasks

  • @mliukis complete re-projection code (due 4/16/2021)

  • @mliukis to get @markf6's geojson catalogue code running on the new directory structure and to remove duplicate granules (the same images run twice) from the geojson (due 4/30/2021):

    • Allow for new directory structure
    • Remove duplicate optical granules
    • Post skipped and used granule lists to S3
    • Read list of granules from S3
    • Warning for Radar format granules when removing duplicates
    • Wildcard to iterate all sub-directories for nc files to build a list
    • Optional: Dask parallelization to create catalogs
    • Write access to S3 bucket without AWS credentials is broken (contacted Evelin 4/20/2021, JPL Cloud blocker request on 4/21/2021)
  • @mliukis to get datacube code running in batch or lambda for global production (due 5/21/2021)

    • Create Docker image for the code (due 04/30/2021)

    • Create AWS ECR repository and push Docker image (due 04/30/2021)

    • Configure AWS Batch (due 05/21/2021):

      • JPL Cloud Help to create a role: filed Cloud-302 ticket 04/28/2021, filed Critical ticket Cloud-316 on 05/04/2021 about Cloud-302

      • The role "its-live-s3-access" for Batch processing has been created in kh9 account (05/04/2021)

      • Evelin is setting up ECR permissions for account hosting datacube-dev.jpl.nasa.gov EC2 instance so we are able to push Docker images (05/05/2021)
        Abandon this account once new EC2 instance and S3 buckets are available.

      • Danny set up new EC2 instance and S3 bucket (kh9-1) in kh9 account, ran datacube test case OK

      • Configure Batch example job (due 05/27/21)

      • Configure Batch job for the datacube generation (due 06/04/21):

      • Write driver script to spawn Batch jobs to create datacubes (due 06/08/2021)

    • Define datacube polygons (due 5/14/2021)
      @alex-s-gardner: Take the regional shapefile... convert polygon boundaries into region-specific projections... take minimum and maximum extents... round minimums down and maximums up to the nearest 100 km, then find the grid points that fall within the 100 by 100 km rectangle for each cube.
      @markf6: the short version is finding all the 100 x 100 km squares (on an even 100 km grid) that fit in a region, in that region's projection

  • @mliukis to work with ASF to write code to move production files from ASF accounts to the JPL S3 bucket (due 5/7/2021)

    • The datacube-dev role does not allow access to the ASF S3 bucket; sent a request to Evelin (05/05/2021)
    • Transferred test set of 1382 granules to the datacube-dev S3 bucket (05/11/2021)
    • Transferred test set of 1382 granules to the its-live-data S3 bucket, created geojson catalog file (05/13/2021)
    • Transferred 390339 granules to the its-live-data S3 bucket (06/30/2021)
      • Fixed time stamp for all img_pair_info.acquisition_date_img[12]
      • Compress all granules as compression was not used when storing fixed files to S3 bucket
  • During data transfer from ASF, update the image acquisition times for the L8 data, as these can only be added by cross-referencing a database external to the image data (due 05/28/2021)

    • Filed SA team ticket to get public access to s3://usgs-landsat (05/19/2021)
  • @mliukis once production files start to arrive, build and update datacubes

    • Implement cube updates (06/18/2021)
    • Add missing v_error to optical legacy-format granules (06/04/2021) (don't do it: [vx|vy].stable_shift is missing for some granules, and datacube generation runs 5x slower)
    • Set chip_size_height.values = chip_size_width.values for optical legacy granules if chip_size_height.values are not set
    • Add new data variables per Yang's latest code changes (06/09/2021)
  • @mliukis to get projection code running on AWS

    • Create Docker image for the code
    • Optional (or what is the priority? create a new issue if needed): write code to determine which EPSG code to project each granule into; a minimal sketch follows this list.
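
A minimal sketch of how such EPSG selection could work, assuming the project's usual convention of UTM zones at mid-latitudes and polar stereographic codes (EPSG 3413 north, 3031 south) near the poles; the function name and the latitude cutoffs are hypothetical:

def select_epsg(lat, lon):
    """Hypothetical helper: pick a target EPSG code from a granule's center point."""
    if lat > 55.0:       # assumed cutoff for the northern polar stereographic grid (EPSG 3413)
        return 3413
    if lat < -56.0:      # assumed cutoff for the Antarctic polar stereographic grid (EPSG 3031)
        return 3031
    zone = int((lon + 180.0) // 6.0) + 1            # standard UTM zone from longitude
    return (32600 if lat >= 0 else 32700) + zone    # EPSG 326xx north, 327xx south

# Example: a granule centered at 35.9N, 76.5E falls in UTM zone 43N
assert select_epsg(35.9, 76.5) == 32643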

Sentinel-2 granules are missing sensor_img1 and sensor_img2 in img_pair_info

Variable “img_pair_info”
In file “S2B_20190827_L1C_X_S2A_20190901_L1C_G0120V02_P099.nc”
String img_pair_info;
  :mission_img1 = "S";
  :satellite_img1 = "2B";
  :correction_level_img1 = "L1C";
  :acquisition_date_img1 = "20190827T15:13:15.";
  :time_standard_img1 = "UTC";
  :mission_img2 = "S";
  :satellite_img2 = "2A";
  :correction_level_img2 = "L1C";
  :acquisition_date_img2 = "20190901T15:13:11.";
  :time_standard_img2 = "UTC";
  :date_dt = 4.999953703703704; // double
  :date_center = "20190830T03:13:13.";
  :latitude = 69.76; // double
  :longitude = -46.95; // double
  :roi_valid_percentage = 99.4; // double

Per @alex-s-gardner, "there is only one sensor on the S2 satellites so it is safe to use satellite_img1/2... but we should add sensor_img1/2 attributes for consistency"
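
A minimal sketch of patching the missing attributes in place with xarray; the library choice and the output path are assumptions:

import xarray as xr

GRANULE = "S2B_20190827_L1C_X_S2A_20190901_L1C_G0120V02_P099.nc"

with xr.open_dataset(GRANULE) as ds:
    # Sentinel-2 carries a single imaging instrument (MSI), so both images get the same sensor
    ds["img_pair_info"].attrs["sensor_img1"] = "MSI"
    ds["img_pair_info"].attrs["sensor_img2"] = "MSI"
    ds.to_netcdf("patched_" + GRANULE)  # hypothetical output path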

Granules, datacubes, composites and mosaics fixes

The following changes are required for:

Elevation:

  • Fix mapping attributes for ANT_G1920V01_GroundedIceHeight.nc according to NSIDC standard
  • Create premet and spatial files for ingest
  • Fill out data submission form

V02 Granules:

  • Correction for projection distortion in optical and radar data

Datacubes:

  • Add radar variables to the data cubes (M11, M12). Keep them empty for optical image pairs
    • va and vr are already stored in datacubes
  • Add chunking for 1-d data variables
  • Add landice mask to each of the datacubes
  • Add floatingice mask to each of the datacubes
  • Set mid_date to date_center + microseconds(int('YYMMDD')), where YYMMDD is the acquisition_date_img1 date of the granule (see the sketch after this list)
  • If there is no stable_shift then stable_shift_flag = 0 and vy/x_stable_shift should equal zero.
  • Map all possible L89 sensor values ( '8.', '9.', '8.0', '9.0', 8.0, 9.0) to the same string: "8" or "9".
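
A minimal sketch of the mid_date rule from the list above; representing the dates with datetime/timedelta is an assumption about how the datacube code handles them:

from datetime import datetime, timedelta

def layer_mid_date(date_center, acquisition_date_img1):
    # Encode the first image's acquisition date as YYMMDD microseconds, presumably so that
    # layers sharing the same date_center still get distinct mid_date values.
    yymmdd = int(acquisition_date_img1.strftime("%y%m%d"))
    return date_center + timedelta(microseconds=yymmdd)

# Example using the S2 pair shown earlier (date_center 2019-08-30, img1 acquired 2019-08-27)
print(layer_mid_date(datetime(2019, 8, 30, 3, 13, 13), datetime(2019, 8, 27, 15, 13, 15)))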

Composites:

  • Fix v_error: re-compute v_error based on the vx and vy components instead of the vx0 and vy0
    components that were used originally.
  • Change v_error computation to autoRIFT computation: V_error = np.sqrt((vx_error * VX / V)**2 + (vy_error * VY / V)**2)
  • Use analytical solution for v_phase and v_amplitude
  • Fix datecube typo in some composite attributes:
    datecube_created
    datecube_s3
    datecube_updated
    datecube_url
  • Add the EPSG code back to composite filenames so that composites for different EPSG codes don't get the same filename under the same subdirectory. Example of datacubes that result in the same composite filename s3://its-live-data/composites/annual/v02/N40E070/ITS_LIVE_velocity_120m_X650000_Y4750000.zarr:
s3://its-live-data/datacubes/v02/N40E070/ITS_LIVE_vel_EPSG32642_G0120_X650000_Y4750000.zarr
s3://its-live-data/datacubes/v02/N40E070/ITS_LIVE_vel_EPSG32643_G0120_X650000_Y4750000.zarr
  • Add landice to composites if it's not in datacube, propagate the mask to mosaics
  • Add floating ice mask to composites if it's not in datacube, propagate the mask to mosaics
  • Add stable_shift filter
  • Change dtype of count0 data variable to uint32 to avoid overflow
  • Revise dtype of all data variables (see Chad's and Alex's summary spreadsheet on Slack from 8/24/2022)
  • Convert outlier_fraction to percent and set _FillValue=255 when writing to the NetCDF or Zarr store
  • Determine minimum observation threshold to avoid huge v0 values for some composites (examine count0 for such points for existing GRE composites):
    Set model threshold to 30, invalidate all of the return variables from LSQ fit for all values exceeding that threshold
  • New filter to handle "bad" composite within GRE mosaics:
    • For the composite exclude S2 cube layers that contain 23WPN in their file name
    • Consider adding a mission-specific seasonal amplitude check: compute seasonal amplitudes for S1+L8 and S2 separately... if S2_amp > (S1+L8_amp)*2 then exclude S2 from the seasonal fit
    • Add a minimum difference in amplitude before removing S2 data: compute seasonal amplitudes for S1+L8 and S2 separately... if {S2_amp > [2 x (S1_L8)_amp]} & {(S2_amp - [(S1_L8)_amp]) > 2 m/yr} then exclude S2 from the seasonal fit (a sketch appears at the end of this section)
    • Use 2km in-buffer land ice masking:
    • SensorExcludeFilter should only be applied if landice_2km_inbuff == 0
    • The 2nd LSQ S2 filter should only be applied where landice_2km_inbuff == 1
  • Fix outlier_percent when second LSQ fit is applied (don't exclude land ice cells from the count; reset total count only for the cells which use 2nd LSQ fit results)
Existing composites already include some of the fixes mentioned above (the itslive/src/tools/fix_composites_*.py scripts were run):
  • v_error
  • analytical solution for v_phase and v_amplitude
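
A minimal sketch of the mission-specific amplitude check proposed above; the per-cell amplitude arrays and the helper name are assumptions:

import numpy as np

def s2_exclude_mask(s1_l8_amp, s2_amp):
    """Cells where S2 data should be dropped from the seasonal fit (hypothetical helper)."""
    # Exclude S2 only where its amplitude is more than twice the S1+L8 amplitude
    # AND at least 2 m/yr larger, per the minimum-difference condition above.
    return (s2_amp > 2.0 * s1_l8_amp) & ((s2_amp - s1_l8_amp) > 2.0)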

Annual mosaics:

  • Strip zero from data variables names and their metadata in static mosaics

  • Parallelize re-projection code (matrix creation, apply transformation)
    • Using Dask to parallelize processing is slower than not using Dask
    • Try to use https://taichi-lang.org/

  • Increase chunking size when storing to NetCDF (to speed up data access)

  • Add missing sensor_flag data variable to static mosaics

  • Store cumulative attributes from composites in a standalone file per mosaic, so as not to overcrowd the mosaics with metadata:

    composites_software_version
    datacube_software_version
    composites_created
    composites_updated
    datecube_created
    datecube_s3
    datecube_updated
    datecube_url
    geo_polygon
    proj_polygon
    composites_url

  • Re-generate composites for HMA mosaics, build annual HMA mosaics based on "good" composites to verify the code works as expected

Production Runs

Note: these are possible runs; they may not be relevant if we need to re-generate all datacubes and composites.
*** If we need to fix granules due to the map distortion issue, then we need to re-generate all datacubes and composites as well.

  • Re-name "good" composites (not affected by described above issues) to include EPSG code into filename
  • Change dtype of count0 to uint32 for the composites where values don't overflow, re-generate composites for which current count0 overflows

Change simple summation to a root sum of squares for vx_error and vy_error in composites

Consider a change for v3:
the code is written as a simple summation of errors. It might be better to combine the errors as a root sum of squares: sqrt(v[xy]_error**2 + error**2).

# Note: a code is written as a simple summation of errors. It might be better to add it
# as a root sum of squares: sqrt(v[xy]_error**2 + error**2). Something to consider for v3.
for value, error in ITSLiveComposite.CO_REGISTRATION_ERROR.items():
    mask = (stable_shift_values == value)
    self.vx_error[mask] += error
    self.vy_error[mask] += error
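
A minimal sketch of the root-sum-of-squares variant under consideration (same loop as above, only the accumulation changes):

for value, error in ITSLiveComposite.CO_REGISTRATION_ERROR.items():
    mask = (stable_shift_values == value)
    # Combine the co-registration error with the existing error in quadrature
    self.vx_error[mask] = np.sqrt(self.vx_error[mask]**2 + error**2)
    self.vy_error[mask] = np.sqrt(self.vy_error[mask]**2 + error**2)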

Restore M11 and M12 encoding attributes scale_factor and add_offset for V2 S1 granules

When saving corrected or cropped S1 granules to a NetCDF file, the scale_factor and add_offset attributes were lost when the values were stored as the short datatype (these attributes belong to the encoding settings and have to be set explicitly):

  • Need to restore the original M11 and M12 values from the cropped granules that exist in the backup
  • All granules that were affected by the correction will need to have autoRIFT generate input parameter files that store corrected M11 and M12
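
A minimal sketch of how the restoration could look with xarray, assuming the backed-up granules still carry the correct M11/M12 values and encoding; all paths are hypothetical:

import xarray as xr

ds = xr.open_dataset("corrected_granule.nc")           # granule missing the packing attributes
orig = xr.open_dataset("backup/original_granule.nc")   # backed-up granule with intact encoding

for var in ("M11", "M12"):
    # Restore the values on the (cropped) grid from the backup granule...
    ds[var].values = orig[var].sel(x=ds.x, y=ds.y).values
    # ...and carry the packing parameters over explicitly: xarray keeps scale_factor and
    # add_offset in .encoding (not .attrs), so they are dropped on write unless set here.
    ds[var].encoding.update(
        dtype="int16",
        scale_factor=orig[var].encoding["scale_factor"],
        add_offset=orig[var].encoding["add_offset"],
    )

ds.to_netcdf("restored_granule.nc")  # hypothetical output path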

Consider code changes

Composites:

  • Exclude all cells where the lake mask is set: inlandwater == 1 (field in the shapefile)

itslive_ui._get_temporal_coverage

Hi,

At lines 116-117 of itslive.py:

start_date = datetime.strptime(file_components[3], "%Y%m%d").date()
end_date = datetime.strptime(file_components[4], "%Y%m%d").date()

Here is a sample file name from the query URL: LC08_L1TP_082233_20180627_20180704_01_T1_X_LC08_L1TP_082233_20180611_20180615_01_T1_G0240V01_P094.nc. Maybe I am wrong, but it looks like the start_date should be file_components[11] and the end_date should be file_components[3]?
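
For reference, splitting the sample filename on underscores shows where the two acquisition dates sit (this only illustrates the reporter's point; the index choice in itslive.py is for the maintainers to confirm):

name = ("LC08_L1TP_082233_20180627_20180704_01_T1_X_"
        "LC08_L1TP_082233_20180611_20180615_01_T1_G0240V01_P094.nc")
parts = name.split("_")

print(parts[3])   # 20180627 -> acquisition date of the first image
print(parts[4])   # 20180704 -> processing date of the first image, not an acquisition date
print(parts[11])  # 20180611 -> acquisition date of the second image
# So the temporal coverage would run from parts[11] (earlier) to parts[3] (later),
# as the report suggests, rather than parts[3] / parts[4].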

Fix GeoTransform in datacubes

The datacube copies the first granule's mapping, which results in a wrong X and Y origin in GeoTransform. The datacube should instead populate the GeoTransform attribute of mapping with the tile information: [min(x)-120/2, 120, 0, max(y)+120/2, 0, -120]

where the fields are:
GT(0) x-coordinate of the upper-left corner of the upper-left pixel.
GT(1) w-e pixel resolution / pixel width.
GT(2) row rotation (typically zero).
GT(3) y-coordinate of the upper-left corner of the upper-left pixel.
GT(4) column rotation (typically zero).
GT(5) n-s pixel resolution / pixel height (negative value for a north-up image).
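
A minimal sketch of deriving that GeoTransform from the datacube's x/y coordinate arrays at the 120 m grid spacing; variable names are illustrative:

import numpy as np

def cube_geotransform(x, y, grid=120.0):
    # Upper-left corner of the upper-left pixel: half a cell beyond the outermost cell centers
    gt = [float(np.min(x)) - grid / 2, grid, 0.0,
          float(np.max(y)) + grid / 2, 0.0, -grid]
    # GDAL-style GeoTransform is conventionally stored as a space-separated string attribute
    return " ".join(str(v) for v in gt)

# e.g. ds["mapping"].attrs["GeoTransform"] = cube_geotransform(ds.x.values, ds.y.values)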

Roadmap for final V02 data sets

Image pair velocities:

Mosaics:

ITS_LIVE next steps

  • fix filenames for existing V2 S2 image-pair data

  • fix all existing V2 image-pair data

  • Sentinel-1 acquisition attributes (acquisition_img1/2 attribute should be acquisition_date_img1/2)

  • add missing sensor_img1="MSI" and sensor_img2="MSI" attributes for Sentinel-2's img_pair_info

  • fix all references to https://its-live-data.jpl.nasa.gov.s3.amazonaws.com

  • recompute date_center and date_dt attributes of img_pair_info to use time stamp of acquisition_date_img[12]

  • remove vp, vxp, vyp, vp_error layers from S1 layers

  • recompute S1, S2 and L8 stable shift
     

  • update datacube code

  • fix all references to https://its-live-data.jpl.nasa.gov.s3.amazonaws.com

  • add the cube name and URL as global attributes of each datacube

  • add proj_polygon (x, y)

  • add geo_polygon (lon, lat)

  • finish composites code

 

  • develop mosaic code

requests.exceptions.HTTPError: 404 Client Error: Not Found for url

Hello, I would like to inquire if the recent downloads of velocity image pairs have been temporarily closed. I have been trying to download some velocity image pairs of Antarctica using a script that I previously used to download data. However, I encountered the following error: "requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://its-live-data.s3.amazonaws.com/velocity_image_pair/landsatOLI/v02/S70E160/LC08_L1GT_052117_20181015_20200830_02_T2_X_LC08_L1GT_052117_20181031_20201016_02_T2_G0120V02_P064.nc" Thank you for your assistance.

Recompute stable shift and errors for all S1, S2, and L8 V2 image-pair data

Yang has provided this code for patching:

#!/usr/bin/env python3
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#Yang Lei, Jet Propulsion Laboratory
#November 2017

import argparse
import isce
import isceobj
import numpy as np
import shelve
import os
import datetime
from isceobj.Constants import SPEED_OF_LIGHT
import pdb
import logging
from imageMath import IML
from scipy import signal
import scipy.io as sio
import rioxarray
import xarray
from osgeo import osr,gdal


def cmdLineParse():
    '''
    Command line parser.
    '''
    parser = argparse.ArgumentParser(description='Converting netcdf variables to geotiff')
    parser.add_argument('-i', '--nc', dest='nc', type=str, required=True,
                        help='input netcdf file path')
    parser.add_argument('-vx', '--vx', dest='VXref', type=str, required=True,
                        help='input reference velocity x file path')
    parser.add_argument('-vy', '--vy', dest='VYref', type=str, required=True,
                        help='input reference velocity y file path')
    parser.add_argument('-ssm', '--ssm', dest='SSM', type=str, required=True,
                        help='input stable surface mask file path')
    parser.add_argument('-o', '--out', dest='output', type=str, required=True,
                        help='Variable to output')
    parser.add_argument('-e', '--epsg', dest='epsg', type=int, default=3413,
                        help='EPSG code')

    return parser.parse_args()




def v_error_cal(vx_error, vy_error):
    vx = np.random.normal(0, vx_error, 1000000)
    vy = np.random.normal(0, vy_error, 1000000)
    v = np.sqrt(vx**2 + vy**2)
    return np.std(v)




if __name__ == '__main__':

#####Parse command line
inps = cmdLineParse()

xds = xarray.open_dataset(inps.nc)
VX = xds['vx'].data
VY = xds['vy'].data
V = xds['v'].data
V_error = xds['v_error'].data
if xds.attrs['scene_pair_type'] == 'radar':
VR = xds['vr'].data
VA = xds['va'].data
M11 = xds['M11'].data * xds['M11'].dr_to_vr_factor
M12 = xds['M12'].data * xds['M12'].dr_to_vr_factor
VXP = xds['vxp'].data
VYP = xds['vyp'].data
VP = xds['vp'].data
VP_error = xds['vp_error'].data

if np.logical_not(np.isnan(xds['vx'].stable_shift)):
    VX += xds['vx'].stable_shift
    VY += xds['vy'].stable_shift
    if xds.attrs['scene_pair_type'] == 'radar':
        VR += xds['vr'].stable_shift
        VA += xds['va'].stable_shift
        VXP += xds['vxp'].stable_shift
        VYP += xds['vyp'].stable_shift


ds = gdal.Open(inps.VXref)
band = ds.GetRasterBand(1)
tran = ds.GetGeoTransform()
xoff = int(round((np.min(xds['vx'].x.data)-tran[0]-tran[1]/2)/tran[1]))
yoff = int(round((np.max(xds['vx'].y.data)-tran[3]-tran[5]/2)/tran[5]))
xcount = VX.shape[1]
ycount = VX.shape[0]

pdb.set_trace()

VXref = band.ReadAsArray(xoff, yoff, xcount, ycount)
ds = None
band = None
    
ds = gdal.Open(inps.VYref)
band = ds.GetRasterBand(1)
VYref = band.ReadAsArray(xoff, yoff, xcount, ycount)
ds = None
band = None

ds = gdal.Open(inps.SSM)
band = ds.GetRasterBand(1)
SSM = band.ReadAsArray(xoff, yoff, xcount, ycount)
SSM = SSM.astype('bool')
ds = None
band = None

pdb.set_trace()

if xds.attrs['scene_pair_type'] == 'radar':
    VRref = M11 * VXref + M12 * VYref
    VAref = VRref * 0.0


NoDataValue = -32767

#   VX and VY stable shift

stable_count = np.sum(SSM & np.logical_not(np.isnan(VX)))

V_temp = np.sqrt(VXref**2 + VYref**2)
try:
    V_temp_threshold = np.percentile(V_temp[np.logical_not(np.isnan(V_temp))],25)
    SSM1 = (V_temp <= V_temp_threshold)
except IndexError:
    SSM1 = np.zeros(V_temp.shape).astype('bool')
            
stable_count1 = np.sum(SSM1 & np.logical_not(np.isnan(VX)))
            
vx_mean_shift = 0.0
vy_mean_shift = 0.0
vx_mean_shift1 = 0.0
vy_mean_shift1 = 0.0
            
if stable_count != 0:
    temp = VX.copy() - VXref.copy()
    temp[np.logical_not(SSM)] = np.nan
    vx_mean_shift = np.median(temp[np.logical_not(np.isnan(temp))])-0.0
    
    temp = VY.copy() - VYref.copy()
    temp[np.logical_not(SSM)] = np.nan
    vy_mean_shift = np.median(temp[np.logical_not(np.isnan(temp))])-0.0


if stable_count1 != 0:
    temp = VX.copy() - VXref.copy()
    temp[np.logical_not(SSM1)] = np.nan
    vx_mean_shift1 = np.median(temp[np.logical_not(np.isnan(temp))])-0.0

    temp = VY.copy() - VYref.copy()
    temp[np.logical_not(SSM1)] = np.nan
    vy_mean_shift1 = np.median(temp[np.logical_not(np.isnan(temp))])-0.0


if stable_count == 0:
    if stable_count1 == 0:
        stable_shift_applied = 0
    else:
        stable_shift_applied = 2
        VX = VX - vx_mean_shift1
        VY = VY - vy_mean_shift1
else:
    stable_shift_applied = 1
    VX = VX - vx_mean_shift
    VY = VY - vy_mean_shift

#   VX and VY error

if stable_count != 0:
    temp = VX.copy() - VXref.copy()
    temp[np.logical_not(SSM)] = np.nan
    vx_error_mask = np.std(temp[np.logical_not(np.isnan(temp))])
else:
    vx_error_mask = np.nan
if stable_count1 != 0:
    temp = VX.copy() - VXref.copy()
    temp[np.logical_not(SSM1)] = np.nan
    vx_error_slow = np.std(temp[np.logical_not(np.isnan(temp))])
else:
    vx_error_slow = np.nan
if stable_shift_applied == 1:
    vx_error = vx_error_mask
elif stable_shift_applied == 2:
    vx_error = vx_error_slow
else:
    vx_error = xds['vx'].vx_error_modeled


if stable_count != 0:
    temp = VY.copy() - VYref.copy()
    temp[np.logical_not(SSM)] = np.nan
    vy_error_mask = np.std(temp[np.logical_not(np.isnan(temp))])
else:
    vy_error_mask = np.nan
if stable_count1 != 0:
    temp = VY.copy() - VYref.copy()
    temp[np.logical_not(SSM1)] = np.nan
    vy_error_slow = np.std(temp[np.logical_not(np.isnan(temp))])
else:
    vy_error_slow = np.nan
if stable_shift_applied == 1:
    vy_error = vy_error_mask
elif stable_shift_applied == 2:
    vy_error = vy_error_slow
else:
    vy_error = xds['vy'].vy_error_modeled

#   V and V_error

V = np.sqrt(VX**2+VY**2)
V_error = np.sqrt((vx_error * VX / V)**2 + (vy_error * VY / V)**2)


VX[np.isnan(VX)] = NoDataValue
VY[np.isnan(VY)] = NoDataValue
V[np.isnan(V)] = NoDataValue

VX = np.round(np.clip(VX, -32768, 32767)).astype(np.int16)
VY = np.round(np.clip(VY, -32768, 32767)).astype(np.int16)
V = np.round(np.clip(V, -32768, 32767)).astype(np.int16)

v_error = v_error_cal(vx_error, vy_error)
V_error[V==0] = v_error
V_error[np.isnan(V_error)] = NoDataValue
V_error = np.round(np.clip(V_error, -32768, 32767)).astype(np.int16)

# Update nc file

xds['vx'].attrs['vx_error'] = int(round(vx_error*10))/10
if stable_shift_applied == 2:
    xds['vx'].attrs['stable_shift'] = int(round(vx_mean_shift1*10))/10
elif stable_shift_applied == 1:
    xds['vx'].attrs['stable_shift'] = int(round(vx_mean_shift*10))/10
else:
    xds['vx'].attrs['stable_shift'] = np.nan


xds['vx'].attrs['stable_count_mask'] = stable_count
xds['vx'].attrs['stable_count_slow'] = stable_count1
if stable_count != 0:
    xds['vx'].attrs['stable_shift_mask'] = int(round(vx_mean_shift*10))/10
else:
    xds['vx'].attrs['stable_shift_mask'] = np.nan
if stable_count1 != 0:
    xds['vx'].attrs['stable_shift_slow'] = int(round(vx_mean_shift1*10))/10
else:
    xds['vx'].attrs['stable_shift_slow'] = np.nan

if stable_count != 0:
    xds['vx'].attrs['vx_error_mask'] = int(round(vx_error_mask*10))/10
else:
    xds['vx'].attrs['vx_error_mask'] = np.nan
if stable_count1 != 0:
    xds['vx'].attrs['vx_error_slow'] = int(round(vx_error_slow*10))/10
else:
    xds['vx'].attrs['vx_error_slow'] = np.nan



xds['vy'].attrs['vy_error'] = int(round(vy_error*10))/10
if stable_shift_applied == 2:
    xds['vy'].attrs['stable_shift'] = int(round(vy_mean_shift1*10))/10
elif stable_shift_applied == 1:
    xds['vy'].attrs['stable_shift'] = int(round(vy_mean_shift*10))/10
else:
    xds['vy'].attrs['stable_shift'] = np.nan

xds['vy'].attrs['stable_count_mask'] = stable_count
xds['vy'].attrs['stable_count_slow'] = stable_count1
if stable_count != 0:
    xds['vy'].attrs['stable_shift_mask'] = int(round(vy_mean_shift*10))/10
else:
    xds['vy'].attrs['stable_shift_mask'] = np.nan
if stable_count1 != 0:
    xds['vy'].attrs['stable_shift_slow'] = int(round(vy_mean_shift1*10))/10
else:
    xds['vy'].attrs['stable_shift_slow'] = np.nan


if stable_count != 0:
    xds['vy'].attrs['vy_error_mask'] = int(round(vy_error_mask*10))/10
else:
    xds['vy'].attrs['vy_error_mask'] = np.nan
if stable_count1 != 0:
    xds['vy'].attrs['vy_error_slow'] = int(round(vy_error_slow*10))/10
else:
    xds['vy'].attrs['vy_error_slow'] = np.nan

xds['vx'].data = VX
xds['vy'].data = VY
xds['v'].data = V
xds['v_error'].data = V_error


if xds.attrs['scene_pair_type'] == 'radar':

    #   VR and VA stable shift
    
    vr_mean_shift = 0.0
    va_mean_shift = 0.0
    vr_mean_shift1 = 0.0
    va_mean_shift1 = 0.0
    
    if stable_count != 0:
        temp = VR.copy() - VRref.copy()
        temp[np.logical_not(SSM)] = np.nan
        vr_mean_shift = np.median(temp[np.logical_not(np.isnan(temp))])-0.0
        
        temp = VA.copy() - VAref.copy()
        temp[np.logical_not(SSM)] = np.nan
        va_mean_shift = np.median(temp[np.logical_not(np.isnan(temp))])-0.0
    
    if stable_count1 != 0:
        temp = VR.copy() - VRref.copy()
        temp[np.logical_not(SSM1)] = np.nan
        vr_mean_shift1 = np.median(temp[np.logical_not(np.isnan(temp))])-0.0
        
        temp = VA.copy() - VAref.copy()
        temp[np.logical_not(SSM1)] = np.nan
        va_mean_shift1 = np.median(temp[np.logical_not(np.isnan(temp))])-0.0
    
    if stable_count == 0:
        if stable_count1 == 0:
            stable_shift_applied = 0
        else:
            stable_shift_applied = 2
            VR = VR - vr_mean_shift1
            VA = VA - va_mean_shift1
    else:
        stable_shift_applied = 1
        VR = VR - vr_mean_shift
        VA = VA - va_mean_shift
    
    
    #   VR and VA error
    if stable_count != 0:
        temp = VR.copy() - VRref.copy()
        temp[np.logical_not(SSM)] = np.nan
        vr_error_mask = np.std(temp[np.logical_not(np.isnan(temp))])
    else:
        vr_error_mask = np.nan
    if stable_count1 != 0:
        temp = VR.copy() - VRref.copy()
        temp[np.logical_not(SSM1)] = np.nan
        vr_error_slow = np.std(temp[np.logical_not(np.isnan(temp))])
    else:
        vr_error_slow = np.nan
    if stable_shift_applied == 1:
        vr_error = vr_error_mask
    elif stable_shift_applied == 2:
        vr_error = vr_error_slow
    else:
        vr_error = xds['vr'].vr_error_modeled
    
    if stable_count != 0:
        temp = VA.copy() - VAref.copy()
        temp[np.logical_not(SSM)] = np.nan
        va_error_mask = np.std(temp[np.logical_not(np.isnan(temp))])
    else:
        va_error_mask = np.nan
    if stable_count1 != 0:
        temp = VA.copy() - VAref.copy()
        temp[np.logical_not(SSM1)] = np.nan
        va_error_slow = np.std(temp[np.logical_not(np.isnan(temp))])
    else:
        va_error_slow = np.nan
    if stable_shift_applied == 1:
        va_error = va_error_mask
    elif stable_shift_applied == 2:
        va_error = va_error_slow
    else:
        va_error = xds['va'].va_error_modeled
    
    #   V and V_error
    VR[np.isnan(VXP)] = NoDataValue
    VA[np.isnan(VYP)] = NoDataValue


VR = np.round(np.clip(VR, -32768, 32767)).astype(np.int16)
VA = np.round(np.clip(VA, -32768, 32767)).astype(np.int16)

# Update nc file
xds['vr'].attrs['vr_error'] = int(round(vr_error*10))/10
if stable_shift_applied == 2:

Rename V2 datacubes to have names consistent with composites

The naming convention currently used for the datacubes
http://its-live-data.s3.amazonaws.com/datacubes/v2/S50W070/ITS_LIVE_vel_EPSG32718_G0120_X450000_Y4450000.zarr
should be changed to be consistent with the corresponding composites:
http://its-live-data.s3.amazonaws.com/datacubes/v2/S50W070/ITS_LIVE_**velocity**_EPSG32718_**120m**_X450000_Y4450000.zarr

To change global attributes within each of the datacubes in place (thanks to Alex Goodman: I think actually you might not even need to specify mode and the cube will just be writable by default):

import s3fs
import zarr
fs = s3fs.S3FileSystem(key=<aws_access_key_id>, 
                       secret=<aws_secret_access_key>,
                       token=<aws_session_token>)
mapper = fs.get_mapper(<new_s3_url>)
g = zarr.open(mapper, mode='a')
g.attrs['s3'] = <new_s3_url>
g.attrs['url'] = <new_https_url>

Computed X and Y components of v, v0, v_amp for composites should be filtered against large values

Invalid values of the computed v, v0 and v_amplitude are filtered out:

v = np.sqrt(vx0**2 + vy0**2) # velocity magnitude
invalid_mask = (v >= v_limit)
if np.sum(invalid_mask) > 0:
    # Since it's invalid v0, report all output as invalid
    v[invalid_mask] = np.nan

https://github.com/nasa-jpl/its_live_production/blob/dd5dcbd7bb2de88f6f18333b198c3cd12a2d267d/src/itslive_composite.py#L2769C25-L2774

The same filter needs to be added for the corresponding X and Y components (a sketch follows).
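
A minimal sketch of extending the existing magnitude filter so the X and Y components are invalidated together with the magnitude; the helper name and arguments are assumptions:

import numpy as np

def filter_magnitude_and_components(vx, vy, v_limit):
    """Invalidate the magnitude and its X/Y components together (hypothetical helper)."""
    v = np.sqrt(vx**2 + vy**2)  # velocity magnitude, as in the snippet above
    invalid_mask = (v >= v_limit)
    if np.sum(invalid_mask) > 0:
        # Since the magnitude is invalid, report the components as invalid too
        v[invalid_mask] = np.nan
        vx[invalid_mask] = np.nan
        vy[invalid_mask] = np.nan
    return v, vx, vy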

Add new sensor_uid1 & sensor_uid2 data variables to datacubes

Add new sensor_uid1 & sensor_uid2 data variables of uint8 type to datacubes. Both variables should have a description of the unique values; a sketch of the mapping follows the description:

description: unique sensor id: 4 = Landsat 4, 5 = Landsat 5, 6 = Landsat 6, 7 = Landsat 7, 8 = Landsat 8, 9 = Landsat 9, 11 = Sentinel 1A, 12 = Sentinel 1B, 21 = Sentinel 2A, 22 = Sentinel 2B
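
A minimal sketch of how that mapping could be expressed when populating the new variables; the lookup keys (mission/satellite attribute values from img_pair_info) are assumptions about how the granules encode them:

import numpy as np

# Unique sensor ids as listed in the description above
SENSOR_UID = {
    ("L", "4"): 4, ("L", "5"): 5, ("L", "6"): 6, ("L", "7"): 7,
    ("L", "8"): 8, ("L", "9"): 9,
    ("S", "1A"): 11, ("S", "1B"): 12,
    ("S", "2A"): 21, ("S", "2B"): 22,
}

def sensor_uid(mission, satellite):
    # e.g. mission_img1="S", satellite_img1="2B" -> 22 (Sentinel 2B)
    return np.uint8(SENSOR_UID[(mission, satellite)])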

Final roadmap for V2 data products

Finish granule processing:

  • Finish remaining L7 processing @jhkennedy
  • Transfer granules to s3://its-live-data @mliukis

Fix S1 granules:

  • develop and validate S1 correction workflow @mliukis
  • provide list of S1 granules that need correction @mliukis
    Then either:
  • correct granules:
    • create intermediate files for all reference tracks @jhkennedy
    • provide table linking reference tracks to each granule @jhkennedy
    • apply correction to old S1 granules @mliukis
  • or re-process granules
    • reprocess all granules that need correction @jhkennedy
    • delete S1 granules that need correction @mliukis
    • Transfer newly-processed granules to s3://its-live-data @mliukis

Fix Landsat 7

  • provide example data for percent valid scaling fix @jhkennedy
  • develop percent valid scaling fix for Landsat 7 data @markf6
  • provide list of all L7 granules for which the % valid needs to be fixed @jhkennedy
  • apply % valid fix to L7 granules @mliukis
  • @chadagreene to validate

Crop all granules

Build all datacubes

Build all composites

  • Track down large v0 values that appear in mosaics
    The problem was not with re-projection but with the composites code not applying the threshold to the X and Y components of some data variables while applying it to the corresponding magnitude value
  • @mliukis
  • @chadagreene to validate

Build all Mosaics
