
rstoolbox's Introduction

RStoolbox


RStoolbox is an R package providing a wide range of tools for everyday remote sensing processing needs. The available toolset covers many aspects, from data import and pre-processing to data analysis, image classification, and graphical display. RStoolbox builds upon the terra package, which makes it suitable for processing large datasets even on smaller workstations.

Find out more on the RStoolbox webpage.

Installation

The package is available on CRAN and can be installed as usual via

install.packages("RStoolbox")

To install the latest version from GitHub you need to have r-base-dev (Linux) or Rtools (Windows) installed. Then run the following lines:

library(devtools)
install_github("bleutner/RStoolbox")
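If devtools is not installed, the lighter remotes package exposes the same function:

```r
# remotes is a lighter alternative to devtools for installing from GitHub
install.packages("remotes")
remotes::install_github("bleutner/RStoolbox")
```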

Get started

RStoolbox implements a variety of remote sensing methods and workflows. Below are a few examples to get started. Further examples can be found in the documentation of the respective functions.

Example 1: Classifications

The example below shows an unsupervised classification workflow based on kmeans clustering:

library(RStoolbox)

# unsupervised classification with 3 classes
uc <- unsuperClass(lsat, nClasses = 3)

# plot result using ggplot
ggR(uc$map, geom_raster = TRUE, forceCat = TRUE) +
  scale_fill_manual(values = c("darkgreen", "blue", "sandybrown"))

If training data are available, e.g. labeled polygons, RStoolbox can be used to conduct a supervised classification. The workflow below employs randomForest to train a classification model:

library(RStoolbox)
library(caret)
library(randomForest)
library(ggplot2)
library(terra)

# example: training data from digitized polygons
train <- readRDS(system.file("external/trainingPolygons_lsat.rds", package="RStoolbox"))

# plot input data
ggRGB(lsat, r = 3, g = 2, b = 1, stretch = "lin") +
  geom_sf(data = train, aes(fill = class)) + 
  scale_fill_manual(values = c("yellow", "sandybrown", "darkgreen", "blue"))
#> Coordinate system already present. Adding new coordinate system, which will
#> replace the existing one.

# fit random forest (splitting training into 70% training data, 30% validation data)
sc <- superClass(lsat, trainData = train, responseCol = "class",
                 model = "rf", tuneLength = 1, trainPartition = 0.7)

# print model performance and confusion matrix
sc$modelFit
#> [[1]]
#>   TrainAccuracy TrainKappa method
#> 1     0.9992293  0.9988032     rf
#> 
#> [[2]]
#> Cross-Validated (5 fold) Confusion Matrix 
#> 
#> (entries are average cell counts across resamples)
#>  
#>             Reference
#> Prediction   cleared fallen_dry forest water
#>   cleared      141.6        0.0    0.0   0.0
#>   fallen_dry     0.0       22.0    0.0   0.0
#>   forest         0.4        0.0  255.0   0.0
#>   water          0.0        0.0    0.0  99.4
#>                             
#>  Accuracy (average) : 0.9992

# plotting: convert class IDs to class labels (factorize) and plot
r <- as.factor(sc$map)
levels(r) <- data.frame(ID = 1:4, class_supervised = levels(train$class))
ggR(r, geom_raster = TRUE, forceCat = TRUE) + scale_fill_manual(values = c("yellow", "sandybrown", "darkgreen", "blue"))

Created on 2024-04-19 with reprex v2.1.0

Example 2: Spectral Unmixing

RStoolbox offers spectral unmixing by implementing the Multiple Endmember Spectral Mixture Analysis (MESMA) approach for estimating fractions of spectral classes, such as spectra of surfaces or materials, on a sub-pixel scale. The following workflow shows a simple Spectral Mixture Analysis (SMA) with single endmembers per class, extracted from the lsat example image by cell id:

library(RStoolbox)
library(terra)

# to perform an SMA, use a single endmember per class, row by row:
em <- data.frame(lsat[c(5294, 47916)])
rownames(em) <- c("forest", "water")

# unmix the lsat image
probs <- mesma(img = lsat, em = em)
plot(probs)

Alternatively, one can define multiple endmembers per class to conduct a Multiple Endmember Spectral Mixture Analysis (MESMA):

library(RStoolbox)
library(terra)


# to perform a MESMA, use multiple endmembers per class, differentiating them
# by a column named 'class':
em <- rbind(
  data.frame(lsat[c(4155, 17018, 53134, 69487, 83704)], class = "forest"),
  data.frame(lsat[c(22742, 25946, 38617, 59632, 67313)], class = "water")
)

# unmix the lsat image
probs <- mesma(img = lsat, em = em)
plot(probs)

# MESMA can also be performed on more than two endmember classes:
em <- rbind(
  data.frame(lsat[c(4155, 17018, 53134, 69487, 83704)], class = "forest"),
  data.frame(lsat[c(22742, 25946, 38617, 59632, 67313)], class = "water"),
  data.frame(lsat[c(4330, 1762, 1278, 1357, 17414)], class = "shortgrown")
)

# unmix the lsat image
probs <- mesma(img = lsat, em = em)
plot(probs)

Example 3: Cloud Masking

RStoolbox comes with a suite of pre-processing functions, including cloudMask to identify clouds in optical satellite imagery:

library(ggplot2)

# lsat example scene, with two tiny clouds in the east
ggRGB(lsat, stretch = "lin")

# calculate cloud index
cldmsk    <- cloudMask(lsat, blue = 1, tir = 6)
ggR(cldmsk, 2, geom_raster = TRUE) 

# mask by threshold, region-growing around the core cloud pixels
cldmsk_final <- cloudMask(cldmsk, threshold = 0.1, buffer = 5) 

## plot cloudmask 
ggRGB(lsat, stretch = "lin") +
  ggR(cldmsk_final[[1]], ggLayer = TRUE, forceCat = TRUE, geom_raster = TRUE) +
  scale_fill_manual(values = c("red"), na.value = NA)
#> Warning: Removed 88752 rows containing missing values or values outside the scale range
#> (`geom_raster()`).

Example 4: Radiometric and atmospheric correction

With radCor, users can compute radiometric and simple atmospheric corrections (based on dark object subtraction):

library(terra)

# import Landsat meta data
mtlFile  <- system.file("external/landsat/LT52240631988227CUB02_MTL.txt", package="RStoolbox")
metaData <- readMeta(mtlFile)
lsat_t <- stackMeta(mtlFile)

# convert DN to top of atmosphere reflectance and brightness temperature
lsat_ref <- radCor(lsat_t, metaData = metaData, method = "apref")

# correct DN to at-surface reflectance with DOS (Chavez decay model)
lsat_sref <- radCor(lsat_t, metaData = metaData)

# correct DN to at-surface reflectance with simple DOS and automatic haze estimation
hazeDN    <- estimateHaze(lsat_t, hazeBands = 1:4, darkProp = 0.01, plot = FALSE)
lsat_sref <- radCor(lsat_t, metaData = metaData, method = "sdos",
                    hazeValues = hazeDN, hazeBands = 1:4)

# plot result
ggRGB(lsat_sref, r = 3, g = 2, b = 1, stretch = "lin")

Created on 2024-04-19 with reprex v2.1.0

rstoolbox's People

Contributors

16eagle, jjarosch, konstide, mfansler, nfultz


rstoolbox's Issues

Memory management

As a test, I'm running rasterPCA() on a raster image of 8282 x 11329 x 5 on a machine with 64 GB of RAM, and it takes longer than I expected. I can see that the process does not use more than 2.4 GB of RAM. Do you have any kind of memory management limit that could keep rasterPCA() from using as much RAM as it could?
Thanks

How do we parallelize RStoolbox?

I am preparing training data for a keras GAN model, which requires cropping lots of raster images to JPEG images. Does anyone have an idea how to parallelize this process? Is Spark or doParallel the right candidate?
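One hedged option (not from the package authors; the folder name and the cropping extent below are made-up placeholders) is to run the per-image work in a worker pool with the base parallel package. Note that terra raster objects cannot be serialized to workers, so each file is opened by path inside the worker:

```r
library(terra)
library(parallel)

# hypothetical input: a folder of GeoTIFF tiles to crop
r_files <- list.files("tiles", pattern = "\\.tif$", full.names = TRUE)

crop_one <- function(f) {
  r <- rast(f)                       # open the raster inside the worker
  cropped <- crop(r, ext(r) * 0.5)   # placeholder: crop to the central half
  out <- sub("\\.tif$", "_crop.tif", f)
  writeRaster(cropped, out, overwrite = TRUE)
  out
}

# mclapply forks on Linux/macOS; on Windows, use makeCluster() + parLapply()
out_files <- mclapply(r_files, crop_one, mc.cores = 4)
```

The JPEG conversion itself could then happen in the same worker function; doParallel works equally well here, while Spark is likely overkill for a single-machine batch job.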

Some corrected reflectance values of TOA Band 5 seem out of range

I used topCor() on a stack of 62 TOA Band 5 Landsat 8 images and provided 62 MTL files, sorted to match the order of the 62 Band 5 layers.

But 32 of the corrected TOA Band 5 output files are out of range (i.e. reflectance values between -134.381 and 203.657). I did the same topCor() run for TOA Band 4 Landsat 8, and all 62 output files seem to be in the correct reflectance range. Can anyone tell me what went wrong? Advice is very much appreciated. Thanks

My codes are as follows:

dem_kam <-raster("E:/R/DEM/Kam_DEMslopeAspct_rad.tif")
#path to directory with folders
path1 <- "E:/R/C_crrctn/kam_c_crrctn/B5_tif/"
path2 <- "E:/R/C_crrctn/kam_c_crrctn/MTL/"
path3 <- "E:/R/C_crrctn/kam_c_crrctn/B5_C/"

#get file names using list.files() function
toa_stack <-list.files(path1, pattern=".tif$",full.names=T)
#print(path2)
mtl_stack <-list.files(path2,pattern =".txt$",full.names =T)

# loop over every stack in the file list
for (i in 1:length(toa_stack)) {
  print(i)

  # load image i and its metadata file
  img <- raster(toa_stack[i])
  mtl <- file.path(mtl_stack[i])
  print(img)
  print(mtl)

  # calculate corrected B5 for image i
  B5_C <- topCor(img, dem = dem_kam, metaData = mtl, method = "C")

  # export raster
  writeRaster(B5_C, file.path(path3, names(img)), format = "GTiff",
              bylayer = TRUE, overwrite = TRUE)
}

Metadata Errors

Dear bleutner,
this is the error I am getting when I want to stack the MTL file: I can read it perfectly, but when I want to stack it, it gives errors.
mtfile<- system.file("MTL.txt", package = "RStoolbox")
meta<- readMeta("Z:/Band/MTL.txt")

meta

lsat<- stackMeta(meta)

lsat<- stackMeta(mtfile)
Error: Metadata file does not exist. Looking for:

lsat<- stackMeta("Z:/Band/MTL.txt")
Error in .local(.Object, ...) :

Error in .rasterObjectFromFile(x, band = band, objecttype = "RasterLayer", :
Cannot create a RasterLayer object from this file. (file does not exist)

lsat<- stackMeta(MTL.txt)
Error in inherits(file, "ImageMetaData") : object 'MTL.txt' not found
lsat<- stackMeta(meta)
Error in .local(.Object, ...) :

Regards

[getValidation] Adding time execution

Hello,
I have a proposal: in addition to the "resultsValidation" that getValidation provides, it would be better if the execution time (of superClass) were also calculated and added to the validation results.

I need this for an analysis, please.

Thank you :)
Daliben

EVI

Considering that spectralIndices() does not rescale G, C1 and C2, shouldn't the EVI index be computed from TOA or SR values rather than DN?

SI <- spectralIndices(lsat, red = "B3_dn", nir = "B4_dn", blue='B2_dn')
plot(SI)

lsat_sref0<- radCor(lsat, metaData = metaData0, method = "dos")
SI <- spectralIndices(lsat_sref0, red = "B3_sre", nir = "B4_sre",
                      blue = "B2_sre", indices = "EVI")
plot(SI)

A warning could/should be provided, or sre bands computed instead.
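For reference, spectralIndices() does expose a scaleFactor argument intended for indices with additive constants such as EVI. A hedged sketch with the lsat example data; the factor of 255 is an assumption for 8-bit DN data and should be adjusted to the sensor's value range:

```r
library(RStoolbox)

# with scaleFactor, band values are divided into a reflectance-like [0, 1]
# range before the additive EVI coefficients (G, C1, C2, L) are applied
SI <- spectralIndices(lsat, red = "B3_dn", nir = "B4_dn", blue = "B2_dn",
                      indices = "EVI", scaleFactor = 255)
plot(SI)
```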

Error message while computing spectral indices

I created a raster stack (landsat_data) with 6 layers, each of which represents one band. I also saved the names of the layers in a vector (names_landsat). Now I want to calculate the spectral indices with the following code:

    ind <- landsat_data
    ind <- spectralIndices(ind,
                           blue  = as.character(names_landsat[1]),
                           green = as.character(names_landsat[2]),
                           red   = as.character(names_landsat[3]),
                           nir   = as.character(names_landsat[4]),
                           swir1 = as.character(names_landsat[5]),
                           swir2 = as.character(names_landsat[6]))

Unfortunately R returns the following error message:

  Error in if ((maxValue(img[[red]])/scaleFactor > 1.5 | minValue(img[[red]])/scaleFactor <  : 
      missing value where TRUE/FALSE needed
    In addition: Warning messages:
    1: In .local(x, ...) : max value not known, use setMinMax
    2: In .local(x, ...) : min value not known, use setMinMax

I have absolutely no clue what that means, as I am pretty new to this package.
I would be very grateful if somebody could give me any kind of help.
Thanks.

Read EarthExplorer new CSV files

Hi,

Congrats on the tool (any future update for the Sentinels?)! I tried to read and plot the new CSV files from EarthExplorer (Collection 1) but couldn't get "acceptable" results following the code example in the vignette. Attached is the CSV I am trying to read and plot.
LSR_LANDSAT_TM_C1_189326.zip

Best
D.

rescaleImage ignoring ymin and ymax?

Doing:

> ima <- raster(matrix(rnorm(n=10^4,m=0,sd=2)))
> cellStats(ima,range)
[1] -7.437958  7.845097
> q <- quantile(ima,probs=c(0.02,0.98))
> q
       2%       98% 
-4.168026  4.168633 
> ima2 <- rescaleImage(ima,xmin=q[1],xmax=q[2],ymin=0,ymax=255,forceMinMax = TRUE)
> cellStats(ima2,range)
[1] -100.0200  367.4549

I was expecting ima2 to be rescaled from the range -4.168026, 4.168633 to 0, 255.
Am I misunderstanding the docs, or is there an error?
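This looks like expected behaviour rather than a bug: rescaleImage() fits a linear mapping between the x and y ranges and extrapolates values lying outside [xmin, xmax]. A hedged sketch of a workaround, clamping to the quantile range first so that the output stays inside [ymin, ymax]:

```r
library(raster)
library(RStoolbox)

ima <- raster(matrix(rnorm(n = 1e4, mean = 0, sd = 2)))
q   <- quantile(ima, probs = c(0.02, 0.98))

# clip to the 2%/98% quantiles before rescaling
ima_clamped <- clamp(ima, lower = q[1], upper = q[2])
ima2 <- rescaleImage(ima_clamped, xmin = q[1], xmax = q[2],
                     ymin = 0, ymax = 255, forceMinMax = TRUE)
cellStats(ima2, range)  # now bounded by [0, 255]
```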

radCor memory consumption

Memory consumption can be too high due to single-layer processing.

Todo:

  • add manual canProcessInMemory checks for sum of all output layers.
  • add ellipsis filename support

Output eigenvectors in rasterPCA

I often miss the eigenvectors in the rasterPCA() output (prcomp$rotation or princomp$loadings), in addition to the eigenvalues currently provided.
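Worth noting: since rasterPCA() returns the fitted princomp object in its $model slot, the loadings can already be pulled out. A hedged sketch with the lsat example data:

```r
library(RStoolbox)

pca <- rasterPCA(lsat)

# eigenvectors (loadings) of the fitted princomp model
loadings <- unclass(pca$model$loadings)

# eigenvalues (variances of the components)
eigenvalues <- pca$model$sdev^2
```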

Use prcomp() in rasterPCA

rasterPCA is based on princomp(), but (from the princomp help page)
"The calculation is done using eigen on the correlation or covariance matrix, as determined by cor. This is done for compatibility with the S-PLUS result. A preferred method of calculation is to use svd on x, as is done in prcomp."

I thus suggest using prcomp() in rasterPCA()

Covariance matrix is not non-negative definite

Error in princomp.default(covmat = covMat[[1]], cor = spca) :
covariance matrix is not non-negative definite

Error on PCAs of WorldClim data. Tested on 6 different WorldClim datasets and on 5 different machines (3 Windows, 1 Mac, and 1 Linux supercluster).

PCA_he45bi70 <- rasterPCA(he45bi70, spca = TRUE, nComp = 3, maskCheck = F)
Tested with maskCheck = F and T.

Problems installing package on Ubuntu 16.04

I'm not able to install the package on my Ubuntu server. It fails with the following step:

configure: Need to download and build NLopt
trying URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
Error in download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz", :
cannot open URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
In addition: Warning message:
In download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz", :
URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz': status was 'Failure when receiving data from the peer'
Execution halted
Error in file(con, "rb") : cannot open the connection
Calls: untar -> readBin -> file
In addition: Warning message:
In file(con, "rb") :
cannot open file 'nlopt-2.4.2.tar.gz': No such file or directory
Execution halted
configure: Starting to install library to /tmp/Rtmp9oSNpM/R.INSTALL32603998a7/nloptr/nlopt-2.4.2
./configure: line 3325: cd: nlopt-2.4.2: No such file or directory

Thanks

estimateHaze

Dear authors,

first of all congratulations and thank you very much for this very nice package.

I mainly used the package to calculate the atmospheric correction using the DOS method with LandSat8 images from 2014.

In Landsat 8 images the DN values are encoded in 16 bit and thus have the range 0-65535. Because of that, the default value of darkProp = 0.02 does not seem appropriate, since running estimateHaze yields the following warning for each band:

darkProp for band X was chosen too high. It exceeds the value range.


Adjusting the value of darkProp to about 0.000025 fixes the problem and the SHV values are estimated properly.

Nevertheless the provided examples are not working with my images and the manual setting of the darkProp parameter might not always be applicable. Thus, I was wondering if you could add some piece of code to check the DN values and adjust the default parameters somehow.

In addition, I am seeing that the following two corrections give exactly the same result:

Convert DN to top of atmosphere reflectance and brightness temperature

lsat_ref <- radCor(lsat, metaData = metaData, method = "apref")

hazeDN <- estimateHaze(lsat, hazeBands = 1:4, darkProp = 0.000025, plot = TRUE)
lsat_sref <- radCor(lsat, metaData = metaData, method = "sdos",
hazeValues = hazeDN, hazeBands = 1:4)

So it seems that in my case the atmospheric correction is not doing anything, which does not seem right. Do you have an idea where this could be coming from? Maybe it is somehow related to the issue with the very small haze values I mentioned.

Thank you very much in advance for your help,
Hendrik

Bluish colors by default in ggR() if layers = 1:3

I find this weird:

ggR(rlogo) #gray scale
s <- stack(rlogo,rlogo,rlogo)
ggR(s,layer=1:3, geom_raster = TRUE) #blue tones
ggR(s[[1]]) #gray scale
#hue, sat have no effect.
#To get gray tones I have to do:
ggR(s, layer=1:3, geom_raster = TRUE) + scale_fill_gradientn(colours = grey.colors(10))

Why? I find it very inconvenient. Is there any simpler way to avoid the bluish tones?
Thanks

pifMatch Pseudo-Invariant Features based Image Matching

I believe there is an issue with pifMatch returning the adjusted image.
"Value
Returns a RasterStack with the adjusted image (img) by default"

The only return I see when running your default code is the pixel-wise similarity map $simMap

library(gridExtra)
## Import Landsat example data
data(lsat)
## Create fake example data
## In practice this would be an image from another acquisition date
lsat_b <- log(lsat)
## Run pifMatch
lsat_b_adjusted <- pifMatch(lsat_b, lsat)

Spectral Mixture Analysis

Thank you for the spectral unmixing function. However, I think the assignment of the name "mesma" is wrong, since MESMA means that the endmember with the lowest RMSE is chosen from a set of endmembers. I executed the mesma function of RStoolbox for a total of 100 endmembers, and the result is a number of fractions equal to the number of endmembers. I therefore suggest changing the name of the function. For more details see Somers et al. 2011.

If I'm wrong, please let me know.
Best regards,

Upcoming caret release fails test

The current github version of caret induces a (small) test failure for RStoolbox:

* checking tests ...
 ERROR
Running the tests in 'tests/testthat.R' failed.
Last 13 lines of output:
  Loading required package: sp
  [1] "testdata/earthexplorer/EE_LANDSAT_5.txt"
  [2] "testdata/earthexplorer/EE_LANDSAT_7.txt"
  [3] "testdata/earthexplorer/EE_LANDSAT_8.txt"
  1. Failure: regression (@test-validateMap.R#38) --------------------------------
  names(val$performance) not equal to c("RMSE", "Rsquared").
  Lengths differ: 3 vs 2
  
  
  testthat results ================================================================
  OK: 766 SKIPPED: 0 FAILED: 1
  1. Failure: regression (@test-validateMap.R#38) 
  
  Error: testthat unit tests failed
  Execution halted

postResample now reports RMSE, R², and MAE.

Example ENVI spectral libraries

Reading RStoolbox\external

Example data

sliFile <- system.file("external/vegSpec.sli", package="RStoolbox")
sliTmpFile <- paste0(tempdir(),"/vegetationSpectra.sli")

Read spectral library

sli <- readSLI(sliFile)
head(sli)

I changed this to

sfile<- system.file("vegSpec.sli", package = "RStoolbox")
meta<- system.file("C:/RStoolbox/external/landsat/vegSpec.sli")
sliTmpFile<- paste0(tempdir(),"C:/RStoolbox/external/landsat/vegetationSpectra.sli")
sli <- readSLI(meta)
Error in readBin(f, "int", 1000, size = 1, signed = FALSE) :
can only read from a binary connection
In addition: Warning message:
In file(path, "rb", raw = TRUE) :
file("") only supports open = "w+" and open = "w+b": using the former

I would like to know what the problem is.

Error in train.default(x, y, weights = w, ...) : Stopping

Hi,

I have used the caret model "rotationForest" for supervised classification with superClass, and the error message below appears:

Something is wrong; all the Accuracy metric values are missing:
Accuracy Kappa
Min. : NA Min. : NA
1st Qu.: NA 1st Qu.: NA
Median : NA Median : NA
Mean :NaN Mean :NaN
3rd Qu.: NA 3rd Qu.: NA
Max. : NA Max. : NA
NA's :9 NA's :9

Error in train.default(x, y, weights = w, ...) : Stopping

In addition: There were 50 or more warnings (use warnings() to see the first 50)

N.B:

  1. the warnings message is in 'warning.txt'
    warning.txt
  2. I had the same error message by testing the caret model "AdaBoost".
  3. with the model "rf" everything goes well ( no error message).

Thanks in advance.

Daliben

readMeta fails to read Landsat MSS metadata (*MTL.txt) files

Hi,

Thanks bleutner for this package, it has been very helpful so far in processing Landsat scenes!

An issue I have encountered is that readMeta fails to properly load the *MTL.txt metadata files from the various Landsat MSS scenes I have tried to process.

Here is a basic example of trying to load the metadata and the error it throws:

meta <- readMeta("./MSS/1978/LM30520251978217PAC03_MTL.txt")

Error in `rownames<-`(`*tmp*`, value = "B6_dn") : 
attempt to set 'rownames' on an object with no dimensions

and if I use the example you provide in the documentation, readMeta throws a different error because mtlFile is only an empty character vector.

mtlFile <- system.file("./MSS/1978/LM30520251978217PAC03_MTL.txt", package = "RStoolbox")
print(mtlFile)
[1] ""

meta <- readMeta(mtlFile)

Error: Metadata file does not exist. Looking for:

I have attached one of the metadata .txt files as obtained when the MSS scene was downloaded from the USGS. Any suggestions on how to get around this issue if I am trying to run radCor and topCor on the MSS data?

LM30520251978217PAC03_MTL.txt

Thanks,
Andras

Error in if (!is.projected(x)) { : missing value where TRUE/FALSE needed

Hi,

I am currently working on the classification of hyperspectral imagery. I use the superClass function of the RStoolbox package; the image to be classified and its corresponding training data have a coordinate reference equal to NA, as you can see below.

class : RasterStack
dimensions : 1476, 256, 377856, 145 (nrow, ncol, ncell, nlayers)
resolution : 1, 1 (x, y)
extent : 0, 256, 0, 1476 (xmin, xmax, ymin, ymax)
coord. ref. : NA
names : image_brute.1, image_brute.2, image_brute.3, image_brute.4, image_brute.5, image_brute.6, image_brute.7, image_brute.8, image_brute.9, image_brute.10, image_brute.11, image_brute.12, image_brute.13, image_brute.14, image_brute.15, ...
min values : 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
max values : 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, ...


class : SpatialPointsDataFrame
features : 3248
extent : 18, 252, 118, 1093 (xmin, xmax, ymin, ymax)
coord. ref. : NA
variables : 1
names : label
min values : 1

max values : 14

When I run the classification, the following error message appears: "Error in if (!is.projected(x)) { : missing value where TRUE/FALSE needed".
Please help me to resolve this problem.
Thank you very much in advance.
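A hedged workaround (not an official fix): superClass() evidently calls is.projected() on its inputs, which returns NA when no CRS is set. Assigning the same placeholder projected CRS to both the image and the points may let the check pass; image_brute follows the printout above, trainData is a stand-in name for the SpatialPointsDataFrame, and the UTM string is an arbitrary choice for pixel-unit data:

```r
library(raster)
library(sp)

# placeholder projected CRS; any metric CRS works when coordinates are
# plain row/column units, as long as image and points share it
crs_dummy <- CRS("+proj=utm +zone=1 +datum=WGS84 +units=m +no_defs")

crs(image_brute) <- crs_dummy   # the RasterStack shown above
crs(trainData)   <- crs_dummy   # the SpatialPointsDataFrame shown above
```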

New landsat product identifier

Hello,

I have been trying to run RStoolbox on the latest Landsat 8 images (Collection 1 product identifiers have changed) and some of the functions fail with an "invalid layer names" error. I have been trying to change the functions radCor and stackMeta to run on the new datasets (data from the MTL file has longer row names and band names; I've changed them back to B1_dn etc.) and changed the data from the MTL file to look similar to the pre-collection versions of Landsat images. This solved the invalid layer names issue, but now it fails at the .LANDSAT.db object. Is this a known issue, or am I doing something wrong? (My code ran perfectly on the pre-collection versions.) Thanks in advance, L.

Oh and I am running RStudio v1.0.143 with R version 3.3.2 on a Win 10 machine.

The code is simple:

SatFolder is an unchanged, unpacked Landsat folder with all original files (MTL and all bands),
e.g. F:/.../LC81880272017086LGN00

mtlFile<-list.files(SatFolder, pattern = glob2rx("*_MTL.txt"), full.names = TRUE)
metaData <- readMeta(mtlFile)
lsat <- stackMeta(mtlFile)
lsat_ref <- radCor(lsat, metaData = metaData, method = "apref")

readSLI cannot find the hdr file if files are not in the working directory

readSLI cannot find the hdr file if files are not in the working directory
unless hdr file is named *.sli.hdr

> require("RStoolbox")
> #readSLI works fine with the files in the working directory
> list.files(patt=glob2rx("a*"))
[1] "a.hdr" "a.sli" "adir" 
> x <- readSLI("a.sli")

> #but does not work if the files are in another directory
> list.files("adir",patt=glob2rx("a*"))
[1] "a.hdr" "a.sli"
> x <- readSLI(file.path("./adir","a.sli"))
Error: Can't find header file of./adir/a.sli

#instead it also works fine if the hdr is renamed to a.sli.hdr
> file.copy(file.path("./adir","a.hdr"),file.path("./adir","a.sli.hdr") )
[1] TRUE
> x <- readSLI(file.path("./adir","a.sli"))

> sessionInfo()
R version 3.5.2 (2018-12-20)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1

Matrix products: default

locale:
[1] LC_COLLATE=Spanish_Spain.1252  LC_CTYPE=Spanish_Spain.1252    LC_MONETARY=Spanish_Spain.1252
[4] LC_NUMERIC=C                   LC_TIME=Spanish_Spain.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] RStoolbox_0.2.4

Additional facet_wrap arguments to ggR()

I would like to plot multiple layers with ggR():

ggR(lsat, 1:3, geom_raster=TRUE, stretch = "lin") +
    scale_fill_gradientn(colors=grey.colors(100), guide = FALSE) +
    theme(axis.text = element_text(size=5), 
          axis.text.y = element_text(angle=90),
          axis.title=element_blank())

But I would like the subplots to appear in one column rather than one row.
Would it be possible to extend the function with an additional argument (ncol, nrow) that feeds into the facet_wrap() call? If you like, I can also open a pull request for this.
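Until such an argument exists, a hedged workaround is to build one ggR() panel per layer and stack the panels in a single column with gridExtra::grid.arrange():

```r
library(RStoolbox)
library(ggplot2)
library(gridExtra)

# one grey-scale panel per band, arranged in a single column
plots <- lapply(1:3, function(i) {
  ggR(lsat, i, geom_raster = TRUE, stretch = "lin") +
    scale_fill_gradientn(colors = grey.colors(100), guide = "none") +
    theme(axis.title = element_blank())
})
grid.arrange(grobs = plots, ncol = 1)
```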

[superclass_getvalidation] return of final values of model parameters

Hello,
"superclass" uses โ€œtuneLengthโ€ argument to indicate the number of different values to try for each model parameter and uses also the k fold cross validation for resampling during model tuning.
Please, is it possible that ยซget validation function" provides the final values tuned used of the caret model ?
For example: for the model โ€œsvmRadialโ€, the final values of two parameters (sigma)and Cost (C) used on classification task will be returned also as results of getvalidation.
Thanks in advance.

readMeta displays acquisition time in local time zone without conversion

First of all, I thank the author for developing this tool, which is very useful. It would be nice to correct this minor issue in readMeta.R: using as.POSIXlt on the scene_center_time variable without the parameter tz = "GMT" simply attaches the local time zone tag (in my case IST) to the time value.

$ACQUISITION_DATE
[1] "2000-04-10 04:57:51 IST"

This made me convert the time value back to GMT by subtracting 5:30 from the time value displayed by readMeta, which was wrong, as the time value already denoted GMT but was wrongly tagged with the local time zone. (I found the issue with clues from the sun azimuth and elevation angles.) I request you to either add the parameter tz = "GMT" to as.POSIXlt or correct the time value with respect to the time zone of the scenes. Once again, thank you for such a nice tool.
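The suggested fix can be sketched as follows; time_string is a hypothetical value of the kind found in an MTL file:

```r
# without tz, as.POSIXlt() tags the clock time with the local zone
time_string <- "2000-04-10 04:57:51"
as.POSIXlt(time_string)

# with tz = "GMT", the same clock time is interpreted and printed as GMT
as.POSIXlt(time_string, tz = "GMT")
```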

How to avoid excess of white space among plots

I want to plot all bands of a multi-spectral image next to one another. I do it with RStoolbox::ggR() and gridExtra::grid.arrange(), but get lots of wasted blank space:

require(raster)
require(gridExtra)
require(RStoolbox)

t <- theme(
  axis.text = element_blank(),
  axis.ticks = element_blank(),
  axis.title = element_blank())

r <- brick(array(rep(100,6*10^6),dim=c(1000,1000,6)))
a <- list(1:6,TRUE)
for(i in 1:6) a[[i]] <- ggR(r[[i]]) + t
grid.arrange(grobs=a)

Is there a way to avoid so much space between the band plots? Perhaps a way that does not use grid.arrange()?
Thanks
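Much of the blank space comes from each ggplot's default plot margins and axis expansion. A hedged tweak of the example above (the expand settings may trigger "replacing scale" messages from ggR):

```r
require(raster)
require(gridExtra)
require(RStoolbox)
require(ggplot2)
require(grid)

# zero out per-panel margins in addition to hiding the axes
t <- theme(
  axis.text   = element_blank(),
  axis.ticks  = element_blank(),
  axis.title  = element_blank(),
  plot.margin = unit(c(0, 0, 0, 0), "pt"))

r <- brick(array(rep(100, 6 * 10^6), dim = c(1000, 1000, 6)))
a <- lapply(1:6, function(i) {
  ggR(r[[i]]) + t +
    scale_x_continuous(expand = c(0, 0)) +
    scale_y_continuous(expand = c(0, 0))
})
grid.arrange(grobs = a, padding = unit(0, "line"))
```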

readSLI: take byte order of hdr into consideration

When I use readSLI() to read a library with byte order = 1 in its hdr, I get weird values in R, while libraries that have byte order = 0 result in R objects with meaningful values.
byte order = 1 : https://lost-contact.mit.edu/afs/enea.it/software/rsi/envi_3.5/spec_lib/jpl_lib/
byte order = 0: https://lost-contact.mit.edu/afs/enea.it/software/rsi/envi_3.5/spec_lib/usgs_min/

I understand that readSLI() assumes byte order = 0 (least significant byte first (LSF) data); am I wrong?
If I'm not wrong, could the byte order be taken into account by readSLI()?

wish: multivariate stats command

A command for linear regression of x ~ y1 + y2 + ... rasters would be handy, with optional output of residuals (maybe optionally with piecewise regressions, robust linear regressions, etc.), to ease this code snippet:

   library(raster)
   library(RStoolbox)
   
   lsat88 <- brick("p224r63_1988.grd")
   lsat11 <- brick("p224r63_2011.grd")
   
   lsat88.ndvi <- spectralIndices(lsat88, red="B3_sre",nir="B4_sre",indices="ndvi")
   lsat11.ndvi <- spectralIndices(lsat11, red="B3_sre",nir="B4_sre",indices="ndvi")

   lm.lsat <- lm(lsat88.ndvi[]~lsat11.ndvi[], na.action = NULL) # can also be done as multivariate regression: x~y1+y2+y3 with dependent and explanatory variables
   summary(lm.lsat)

   Coefficients:
        Estimate Std. Error t value Pr(>|t|)    
   yc.ndvi[]   0.248039   0.001339   185.3   <2e-16
   ...
   Multiple R-squared:  0.1702,	Adjusted R-squared:  0.1702 
   ...
   
   resid.lsat <- raster(lsat88)	
   resid.lsat[] <- lm.lsat$residuals
   plot(resid.lsat,stretch="lin")

[superClass] Comparison of Classification Accuracies

Hi,
I want to know how I can compare the classification results (accuracy, kappa) of two or three models (such as RF, SVM and LDA, for example) after using superClass. How can I use, for example, box-and-whisker plots, density plots, a scatterplot matrix, or other techniques to illustrate the accuracies estimated by the different models?

Thanks.

Daliben

using factors in caret::confusionMatrix

The version of caret being sent to CRAN (6.0-79) has a change that affects your package. The details are here:

topepo/caret@e4dc43d

In summary, when creating a confusion matrix using two vectors, the input data are now required to be factors with the same levels. If this is an issue, you can get around it by creating a table and using confusionMatrix.table to get around the requirement.

The issues for your respective packages are shown below. When you submit to CRAN, please make sure that the dependency has caret (>= 6.0-79).

Let me know if you have any questions.

Thanks,

Max

```
 ERROR
Running the tests in 'tests/testthat.R' failed.
Last 13 lines of output:
  ── 2. Error: classification, with class mapping (@test-validateMap.R#26)  ──────
  `data` and `reference` should be factors with the same levels.
  1: validateMap(sc$map, valData = poly, nSample = 100, responseCol = "class", classMapping = sc$classMapping) at testthat/test-validateMap.R:26
  2: confusionMatrix(valiSet[[1]][, "prediction"], reference = valiSet[[1]][, "reference"])
  3: confusionMatrix.default(valiSet[[1]][, "prediction"], reference = valiSet[[1]][, 
         "reference"])
  4: stop("`data` and `reference` should be factors with the same levels.", call. = FALSE)
  
  โ•โ• testthat results  โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•
  OK: 926 SKIPPED: 2 FAILED: 2
  1. Error: classification, without class mapping (@test-validateMap.R#19) 
  2. Error: classification, with class mapping (@test-validateMap.R#26) 
  
  Error: testthat unit tests failed
  Execution halted
```
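Max's suggested workaround, sketched with invented prediction/reference vectors (harmonizing the factor levels directly also satisfies the new requirement):

```r
library(caret)

# invented example vectors whose observed classes differ
pred <- c("forest", "water", "forest", "urban")
ref  <- c("forest", "water", "urban",  "urban")

# option 1: coerce both to factors sharing the union of all levels
lev <- union(pred, ref)
cm1 <- confusionMatrix(factor(pred, levels = lev),
                       factor(ref,  levels = lev))

# option 2: build a contingency table first, which dispatches to
# confusionMatrix.table and bypasses the factor-levels check
cm2 <- confusionMatrix(table(factor(pred, levels = lev),
                             factor(ref,  levels = lev)))
```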

Transferring an issue related to a memory leak

Hi,

For one project, I had to deploy an R script using RStoolbox, via an executable Windows batch file, to many users.

I am French, and I noticed that printed accented characters such as é were no longer displayed correctly after RStoolbox was loaded.

I asked about my problem on an R forum, and one person ran some tests to see what was going on.

She wrote the following script, named bugged.R:

cat("\n\nà â ï î é è ô ü ç\n\n")
require("RStoolbox")
cat("\n\nà â ï î é è ô ü ç\n\n")

She tested the code against valgrind
R -d "valgrind --leak-check=full --show-reachable=yes --xml=yes --xml-file=valgrind_out.xml" -f bugged.R

grep '<kind>' valgrind_out.xml valgrind_out_good.xml | uniq

And it gave the following output

valgrind_out.xml:  <kind>Leak_StillReachable</kind>
valgrind_out.xml:  <kind>Leak_DefinitelyLost</kind>
valgrind_out.xml:  <kind>Leak_StillReachable</kind>
valgrind_out.xml:  <kind>Leak_PossiblyLost</kind>
valgrind_out.xml:  <kind>Leak_StillReachable</kind>
valgrind_out_good.xml:  <kind>Leak_StillReachable</kind>
valgrind_out_good.xml:  <kind>Leak_PossiblyLost</kind>
valgrind_out_good.xml:  <kind>Leak_StillReachable</kind>

To quote her, Leak_DefinitelyLost indicates a memory leak, and she advised me to warn you about this issue.

Here is the forum link

RStoolbox is an awesome package

Column names for "x" object

I've made a change to caret:::train that requires the predictor matrix (x) to always have column names and that breaks your test:

  > test_check("RStoolbox")
  Loading required package: sp
  1. Error: predict mlc (@test-mlc.R#16) -----------------------------------------
  Please use column names for `x`
  1: train(mat, y, method = mlcCaret, trControl = trainControl(method = "none")) at testthat/test-mlc.R:16
  2: train.default(mat, y, method = mlcCaret, trControl = trainControl(method = "none"))
  3: stop("Please use column names for `x`", call. = FALSE)
  
  testthat results ================================================================
  OK: 726 SKIPPED: 0 FAILED: 1
  1. Error: predict mlc (@test-mlc.R#16) 

I have a version set to CRAN that found this. Can you update your test (and perhaps relevant parts of the package) to have column names? If this is a big deal, I can just set names if there are none but that seems like a poor solution (but feedback is welcome).

Thanks

cloud masking/ estimation of shadows

Sir,
hope you are doing well. I get the following error while interactively selecting the pixels where clouds exist:

shadow<- cloudShadowMask(lsatt, cldmsk_final, nc=2)
Please select cloud and corresponding shadow pixels (alternating cloud,shadow,cloud,shadow...)

Error in cut.default(me, breaks = mbreax, labels = FALSE, include.lowest = TRUE) :
invalid number of intervals
In addition: Warning message:
In .local(x, size, ...) :
size set to the number of cells within "ext": 47591017

rasterCVA function: error "cannot use this formula, probably because it is not vectorized"

Dear all,
I am trying to use the rasterCVA function on two bands each of two Landsat 5 images (with the same extent), but I receive an error, as shown below:

changeVec <- rasterCVA(LS_Aug_2011[[3:4]], LS_Aug_2010[[3:4]])
Error in (function (x, fun, filename = "", recycle = TRUE, ...) :
  cannot use this formula, probably because it is not vectorized

Here some details about my raster objects:

```
LS_Aug_2011
class : RasterBrick
dimensions : 7421, 8221, 61008041, 7 (nrow, ncol, ncell, nlayers)
resolution : 30, 30 (x, y)
extent : 173985, 420615, 4989285, 5211915 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=utm +zone=35 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0
data source : /home/gce04/C7_Martini_2017/landsat_data/LS_Aug_2011_stack.tif
names : LS_Aug_2011_stack.1, LS_Aug_2011_stack.2, LS_Aug_2011_stack.3, LS_Aug_2011_stack.4, LS_Aug_2011_stack.5, LS_Aug_2011_stack.6, LS_Aug_2011_stack.7
min values : 0, 0, 0, 0, 0, 0, 0
max values : 255, 255, 255, 255, 255, 182, 255

LS_Aug_2010
class : RasterBrick
dimensions : 7421, 8221, 61008041, 7 (nrow, ncol, ncell, nlayers)
resolution : 30, 30 (x, y)
extent : 173985, 420615, 4989285, 5211915 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=utm +zone=35 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0
data source : /home/gce04/C7_Martini_2017/landsat_data/LS_Aug_2010_stack.tif
names : LS_Aug_2010_stack.1, LS_Aug_2010_stack.2, LS_Aug_2010_stack.3, LS_Aug_2010_stack.4, LS_Aug_2010_stack.5, LS_Aug_2010_stack.6, LS_Aug_2010_stack.7
min values : 0, 0, 0, 0, 0, 0, 0
max values : 255, 255, 255, 255, 255, 180, 255
```

Can you help me out with that issue?

Looking forward to comments.

Cheers!

topCor on small raster: Error in .local(x, size, ...) : size <= ncell(x) is not TRUE

Hello,

I'm trying to apply a topographic correction to a stack of MODIS images, and I'm getting an error that I assume has to do with the fixed sample size of 10000 used by .kestimate (my raster stack has only 5046 cells). Is there a straightforward way to pass a different sample size to .kestimate?

Thanks,
Nate

topCor(modis_4band_means, dem = srtm_MODIS, solarAngles = c(1, 0.7), method = "C")

Error in .local(x, size, ...) : size <= ncell(x) is not TRUE

traceback()

8: stop(simpleError(msg, call = sys.call(-1)))
7: stopifnot(size <= ncell(x))
6: .local(x, size, ...)
5: sampleRandom(stack(img, illu, slope, strat), size = 10000)
4: sampleRandom(stack(img, illu, slope, strat), size = 10000)
3: as.data.frame(sampleRandom(stack(img, illu, slope, strat), size = 10000))
2: .kestimate(img, illu, slope, method = "stat")
1: topCor(modis_4band_means, dem = srtm_MODIS, solarAngles = c(1,
0.7), method = "C")

modis_4band_means
class : RasterBrick
dimensions : 58, 87, 5046, 4 (nrow, ncol, ncell, nlayers)
resolution : 288, 288.6207 (x, y)
extent : 694419.2, 719475.2, 2149977, 2166717 (xmin, xmax, ymin, ymax)
crs : +proj=utm +zone=14 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0
source : memory
names : layer.1, layer.2, layer.3, layer.4
min values : 0.015384210, 0.115567346, 0.009112088, 0.042510675
max values : 0.13556559, 0.36798491, 0.08137419, 0.19660914

Error in running RotationForest in caret

cctrl <- trainControl(method = "cv", number = 3, returnResamp = "all",
                      classProbs = TRUE,
                      summaryFunction = twoClassSummary)

fit <- train(Class ~ ., data = training,
             method = "rotationForest",
             trControl = cctrl,
             metric = "ROC",
             preProc = c("center", "scale"))

This gives me the error "Model rotationForest is not in caret's built-in library".

readSLI collapses " " from "spectra names" in hdr files to ""

Currently, *.sli files from EnMapBox come with a "spectra names" field in the corresponding *.hdr file that uses " " separators, such as:
spectra names = {
FX10FX17_Quartz1_small.raw 738.785219770416 358.03143995717653 USER:100025,
FX10FX17_Quartz2_small.raw 615.0409742461194 446.71481591625576 USER:100025,
FX10FX17_Quartz3_small.raw 346.9284422768101 586.9582941771253 USER:100025}
readSLI() collapses the " " separators, and the names become:
[2] "FX10FX17_Quartz1_small.raw738.785219770416358.03143995717653USER:100025"
[3] "FX10FX17_Quartz2_small.raw615.0409742461194446.71481591625576USER:100025"
[4] "FX10FX17_Quartz3_small.raw346.9284422768101586.9582941771253USER:100025"

which is very inconvenient for subsequent processing. Could you use "_" as the separator instead?
"FX10FX17_Quartz1_small.raw_738.785219770416_358.03143995717653_USER:100025"
"FX10FX17_Quartz2_small.raw_615.0409742461194_446.71481591625576_USER:100025"
"FX10FX17_Quartz3_small.raw_346.9284422768101_586.9582941771253_USER:100025"
