bleutner / RStoolbox
Remote Sensing Data Analysis in R
Hi,
I want to know how I can compare the classification results (accuracy, kappa) of two or three models (e.g. RF, SVM and LDA) after using superClass. How can I use, for example, box-and-whisker plots, density plots, a scatterplot matrix or other techniques to illustrate the accuracy estimated by the different models?
Daliben
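One hedged way to compare the models, sketched below with caret directly (superClass is built on caret): fit the candidate models with the same seed and resampling setup, collect them with resamples(), and use caret's lattice plots. The training data frame trainDat and its class column are hypothetical.

```r
# Sketch, assuming a data frame `trainDat` with a factor column `class`
library(caret)

ctrl <- trainControl(method = "cv", number = 5)

set.seed(1); fit_rf  <- train(class ~ ., trainDat, method = "rf",        trControl = ctrl)
set.seed(1); fit_svm <- train(class ~ ., trainDat, method = "svmRadial", trControl = ctrl)
set.seed(1); fit_lda <- train(class ~ ., trainDat, method = "lda",       trControl = ctrl)

# Per-fold Accuracy/Kappa, paired across models by the shared folds
res <- resamples(list(RF = fit_rf, SVM = fit_svm, LDA = fit_lda))
summary(res)
bwplot(res)       # box-and-whisker plots
densityplot(res)  # density plots
splom(res)        # scatterplot matrix
```

summary(diff(res)) additionally gives pairwise comparisons of the resampled accuracies.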
note to self:
I find this weird:
ggR(rlogo) #gray scale
s <- stack(rlogo,rlogo,rlogo)
ggR(s,layer=1:3, geom_raster = TRUE) #blue tones
ggR(s[[1]]) #gray scale
#hue, sat have no effect.
#To get gray tones I have to do:
ggR(s, layer=1:3, geom_raster = TRUE) + scale_fill_gradientn(colours = grey.colors(10))
Why? I find it very inconvenient. Is there any simpler way to avoid the bluish tones?
Thanks
I want to plot all bands of a multi-spectral image next to each other.
I do it with RStoolbox::ggR() and gridExtra::grid.arrange(), but get
lots of wasted blank space:
require(raster)
require(gridExtra)
require(RStoolbox)
t <- theme(
axis.text = element_blank(),
axis.ticks = element_blank(),
axis.title = element_blank())
r <- brick(array(rep(100,6*10^6),dim=c(1000,1000,6)))
a <- vector("list", 6)  # pre-allocate a list for the six plots
for (i in 1:6) a[[i]] <- ggR(r[[i]]) + t
grid.arrange(grobs=a)
Is there a way of avoiding so much space among band plots?
Perhaps a way not using grid.arrange() ?
Thanks
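A possible alternative, sketched here, is to let ggR() facet the layers itself instead of arranging six separate plots, since grid.arrange() adds margins around each grob; whether the remaining spacing is acceptable depends on the theme:

```r
# Sketch: one faceted ggR() call instead of six grobs in grid.arrange()
library(raster)
library(ggplot2)
library(RStoolbox)

r <- brick(array(rep(100, 6 * 10^6), dim = c(1000, 1000, 6)))
ggR(r, layer = 1:6, geom_raster = TRUE) +
  theme(axis.text  = element_blank(),
        axis.ticks = element_blank(),
        axis.title = element_blank())
```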
I am preparing training data for a keras GAN model, which requires cropping lots of raster images to JPEG images. Does anyone have an idea how to parallelise this process? Is Spark or doParallel a candidate?
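doParallel is a reasonable candidate on a single machine; a minimal sketch follows (directory names and the crop window are hypothetical, and writing actual JPEG output would go through the corresponding GDAL driver):

```r
# Sketch: crop each raster on a worker and write it out
library(raster)
library(foreach)
library(doParallel)

files <- list.files("input_rasters", pattern = "\\.tif$", full.names = TRUE)  # hypothetical dir

cl <- makeCluster(parallel::detectCores() - 1)
registerDoParallel(cl)

foreach(f = files, .packages = "raster") %dopar% {
  r <- raster(f)
  cropped <- crop(r, extent(0, 256, 0, 256))  # hypothetical crop window
  writeRaster(cropped, file.path("output", basename(f)), overwrite = TRUE)
}

stopCluster(cl)
```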
rasterPCA is based on princomp(), but (from the princomp help page)
"The calculation is done using eigen on the correlation or covariance matrix, as determined by cor. This is done for compatibility with the S-PLUS result. A preferred method of calculation is to use svd on x, as is done in prcomp."
I thus suggest using prcomp() in rasterPCA().
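The numerical difference is easy to see in base R. Note, though, that prcomp() has no covmat argument, which is presumably why rasterPCA() feeds the precomputed covariance matrix to princomp(); an svd-based variant would have to work on that matrix directly.

```r
# princomp(): eigen() on the covariance matrix; prcomp(): svd() on the data
set.seed(42)
x <- matrix(rnorm(200 * 4), 200, 4)

p_eigen <- princomp(x)
p_svd   <- prcomp(x)

# Loadings agree up to sign ...
max(abs(abs(unclass(p_eigen$loadings)) - abs(p_svd$rotation)))
# ... while the standard deviations differ by the n vs n-1 divisor
p_svd$sdev / p_eigen$sdev  # each ratio is sqrt(200/199)
```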
Hey,
it doesn't work for Landsat 8 Level-1 products. The bit order in the QA file changed compared to the pre-Level-1 products. Could you fix it?
Thanks
Alex
Thank you for the spectral unmixing function. However, I think the assignment of the name "mesma" seems to be wrong, since MESMA means that, among multiple endmember models, the one with the lowest RMSE is chosen. I executed the mesma function of RStoolbox for a total of 100 endmembers and the result is a number of fractions equal to the number of endmembers. I therefore suggest changing the name of the function, because "mesma" means choosing the endmember with the lowest RMSE from a set of endmembers. For more details see Somers et al. 2011.
If I'm wrong, please let me know.
Best regards,
Hello there,
thanks for the good work. I believe I just stumbled upon a typo in the TC greenness coefficient for the green band: https://github.com/bleutner/RStoolbox/blob/b2bbfa198ada2120ad4960b5e02956e8606a165d/data-raw/generate_sysdata.R#L156 I think it's supposed to be -0.1603.
Cheers!
I believe there is an issue with pifMatch not returning the adjusted image as documented.
"Value
Returns a RasterStack with the adjusted image (img) by default"
The only return I see when running your default code is the pixel-wise similarity map $simMap
library(gridExtra)
## Import Landsat example data
data(lsat)
## Create fake example data
## In practice this would be an image from another acquisition date
lsat_b <- log(lsat)
## Run pifMatch
lsat_b_adjusted <- pifMatch(lsat_b, lsat)
Hi,
For one purpose, I had to deploy an R script including RStoolbox with an executable Windows batch file to many users.
I am French, and I noticed that printed accents like é were no longer displayed correctly after RStoolbox was loaded.
I contacted an R forum to explain my problem. One person made tests to see what was going on.
She wrote the following script named bugged.R
cat("\n\nà â ï î é è ô ü ç\n\n")
require("RStoolbox")
cat("\n\nà â ï î é è ô ü ç\n\n")
She tested the code against valgrind
R -d "valgrind --leak-check=full --show-reachable=yes --xml=yes --xml-file=valgrind_out.xml" -f bugged.R
grep '<kind>' valgrind_out.xml valgrind_out_good.xml | uniq
And it gave the following output
valgrind_out.xml: <kind>Leak_StillReachable</kind>
valgrind_out.xml: <kind>Leak_DefinitelyLost</kind>
valgrind_out.xml: <kind>Leak_StillReachable</kind>
valgrind_out.xml: <kind>Leak_PossiblyLost</kind>
valgrind_out.xml: <kind>Leak_StillReachable</kind>
valgrind_out_good.xml: <kind>Leak_StillReachable</kind>
valgrind_out_good.xml: <kind>Leak_PossiblyLost</kind>
valgrind_out_good.xml: <kind>Leak_StillReachable</kind>
To quote her, she said Leak_DefinitelyLost indicates a memory leak, and she advised me to warn you about this issue.
Here is the forum link
RStoolbox is an awesome package
Hi,
Something is wrong; all the Accuracy metric values are missing:
Accuracy Kappa
Min. : NA Min. : NA
1st Qu.: NA 1st Qu.: NA
Median : NA Median : NA
Mean :NaN Mean :NaN
3rd Qu.: NA 3rd Qu.: NA
Max. : NA Max. : NA
NA's :9 NA's :9
Error in train.default(x, y, weights = w, ...) : Stopping
N.B:
Daliben
Feature request: full unmixing workflow
Relevant for sam() and mesma()
Reading RStoolbox\external
sliFile <- system.file("external/vegSpec.sli", package="RStoolbox")
sliTmpFile <- paste0(tempdir(),"/vegetationSpectra.sli")
sli <- readSLI(sliFile)
head(sli)
I change to
sfile<- system.file("vegSpec.sli", package = "RStoolbox")
meta<- system.file("C:/RStoolbox/external/landsat/vegSpec.sli")
sliTmpFile<- paste0(tempdir(),"C:/RStoolbox/external/landsat/vegetationSpectra.sli")
sli <- readSLI(meta)
Error in readBin(f, "int", 1000, size = 1, signed = FALSE) :
can only read from a binary connection
In addition: Warning message:
In file(path, "rb", raw = TRUE) :
file("") only supports open = "w+" and open = "w+b": using the former
I would like to know what the problem is.
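For reference, system.file() expects a path relative to the installed package and returns "" when the file is not found there, so absolute paths like "C:/RStoolbox/..." should not be passed to it. The shipped example works unchanged:

```r
# readSLI() on the spectral library bundled with RStoolbox
library(RStoolbox)
sliFile <- system.file("external/vegSpec.sli", package = "RStoolbox")
sli <- readSLI(sliFile)
head(sli)
```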
Error in princomp.default(covmat = covMat[[1]], cor = spca) :
covariance matrix is not non-negative definite
Error on PCA's of worldclim data. Tested on 6 different worldclim datasets and tested on 5 different machines (3 windows, 1 mac and 1 linux supercluster).
PCA_he45bi70 <- rasterPCA(he45bi70,spca=TRUE, nComp = 3, maskCheck = F)
Tested with maskCheck = FALSE and TRUE.
Hello,
I have a proposal: in addition to the validation results that getValidation provides, it would be useful if the execution time of superClass were measured and added to the validation results.
Thank you :)
Daliben
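Until something like that exists, the run time can be measured around the call itself; img and trainPolys below are hypothetical objects:

```r
# Hedged sketch: time a superClass() run with system.time()
library(RStoolbox)
timing <- system.time(
  sc <- superClass(img, trainData = trainPolys, responseCol = "class")
)
timing["elapsed"]  # wall-clock seconds, could be appended to the validation output
```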
readSLI cannot find the hdr file if files are not in the working directory
unless hdr file is named *.sli.hdr
> require("RStoolbox")
> #readSLI works fine with the files in the working directory
> list.files(patt=glob2rx("a*"))
[1] "a.hdr" "a.sli" "adir"
> x <- readSLI("a.sli")
> #but does not work if the files are in another directory
> list.files("adir",patt=glob2rx("a*"))
[1] "a.hdr" "a.sli"
> x <- readSLI(file.path("./adir","a.sli"))
Error: Can't find header file of./adir/a.sli
#instead it also works fine if the hdr is renamed to a.sli.hdr
> file.copy(file.path("./adir","a.hdr"),file.path("./adir","a.sli.hdr") )
[1] TRUE
> x <- readSLI(file.path("./adir","a.sli"))
> sessionInfo()
R version 3.5.2 (2018-12-20)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
Matrix products: default
locale:
[1] LC_COLLATE=Spanish_Spain.1252 LC_CTYPE=Spanish_Spain.1252 LC_MONETARY=Spanish_Spain.1252
[4] LC_NUMERIC=C LC_TIME=Spanish_Spain.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] RStoolbox_0.2.4
Sir,
I hope you are doing well. This is the error I get while interactively selecting the pixels where clouds exist.
shadow<- cloudShadowMask(lsatt, cldmsk_final, nc=2)
Please select cloud and corresponding shadow pixels (alternating cloud,shadow,cloud,shadow...)
Error in cut.default(me, breaks = mbreax, labels = FALSE, include.lowest = TRUE) :
invalid number of intervals
In addition: Warning message:
In .local(x, size, ...) :
size set to the number of cells within "ext": 47591017
The version of caret being sent to CRAN (6.0-79) has a change that affects your package. The details are here:
In summary, when creating a confusion matrix using two vectors, the input data are now required to be factors with the same levels. If this is an issue, you can get around it by creating a table and using confusionMatrix.table
to get around the requirement.
The issues for your respective packages are shown below. When you submit to CRAN, please make sure that the dependency has caret (>= 6.0-79).
Let me know if you have any questions.
Thanks,
Max
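A minimal sketch of the described workaround, with made-up vectors:

```r
# Tabulating first sidesteps the "factors with the same levels" check
library(caret)
pred <- c("a", "b", "a", "b")
ref  <- c("a", "b", "b", "b")
# confusionMatrix(pred, ref) would require factors with identical levels;
# the table method does not
confusionMatrix(table(pred, ref))
```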
```
ERROR
Running the tests in 'tests/testthat.R' failed.
Last 13 lines of output:
── 2. Error: classification, with class mapping (@test-validateMap.R#26) ──────
`data` and `reference` should be factors with the same levels.
1: validateMap(sc$map, valData = poly, nSample = 100, responseCol = "class", classMapping = sc$classMapping) at testthat/test-validateMap.R:26
2: confusionMatrix(valiSet[[1]][, "prediction"], reference = valiSet[[1]][, "reference"])
3: confusionMatrix.default(valiSet[[1]][, "prediction"], reference = valiSet[[1]][,
"reference"])
4: stop("`data` and `reference` should be factors with the same levels.", call. = FALSE)
── testthat results ───────────────────────────────────────────────────────────
OK: 926 SKIPPED: 2 FAILED: 2
1. Error: classification, without class mapping (@test-validateMap.R#19)
2. Error: classification, with class mapping (@test-validateMap.R#26)
Error: testthat unit tests failed
Execution halted
```
Hi,
Congrats on the tool (any future update for the Sentinels?)! I tried to read and plot the new CSV files from EarthExplorer (Collection 1) but couldn't get "acceptable" results following the code example in the vignette. Attached is the CSV I am trying to read and plot.
LSR_LANDSAT_TM_C1_189326.zip
Best
D.
I often miss eigenvectors in the rasterPCA() output (prcomp's $rotation or princomp's $loadings),
in addition to the eigenvalues currently provided.
I created a raster stack (landsat_data) with 6 layers, each of which represents one band. I also saved the names of the layers in a vector (names_landsat). Now I want to calculate the spectral indices with the following code:
ind <- landsat_data
ind <- spectralIndices(ind,
                       blue  = as.character(names_landsat[1]),
                       green = as.character(names_landsat[2]),
                       red   = as.character(names_landsat[3]),
                       nir   = as.character(names_landsat[4]),
                       swir1 = as.character(names_landsat[5]),
                       swir2 = as.character(names_landsat[6]))
Unfortunately R returns the following error message:
Error in if ((maxValue(img[[red]])/scaleFactor > 1.5 | minValue(img[[red]])/scaleFactor < :
missing value where TRUE/FALSE needed
In addition: Warning messages:
1: In .local(x, ...) : max value not known, use setMinMax
2: In .local(x, ...) : min value not known, use setMinMax
I have absolutely zero clue what that means as I am pretty new to this package.
I would be very grateful if somebody could give me any kind of help.
Thanks.
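The warnings hint that the layers' min/max statistics were never computed; spectralIndices() apparently compares maxValue()/minValue() of the red band against a scale factor, and those are NA here. A hedged guess at a fix is to compute the statistics first:

```r
# Compute per-layer min/max so maxValue()/minValue() are not NA
library(raster)
library(RStoolbox)
landsat_data <- setMinMax(landsat_data)
ind <- spectralIndices(landsat_data,
                       blue  = names_landsat[1], green = names_landsat[2],
                       red   = names_landsat[3], nir   = names_landsat[4],
                       swir1 = names_landsat[5], swir2 = names_landsat[6])
```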
Hello,
I have been trying to run RStoolbox on the latest Landsat 8 images (Collection 1 product identifiers have changed) and some of the functions fail with an "invalid layer names" error. I have been trying to change radCor and stackMeta to run on the new datasets (the MTL file has longer row names and band names; I changed them back to B1_dn etc.) and changed the MTL data to look similar to the pre-Collection versions of Landsat images. This solved the invalid layer names issue, but now it fails at the .LANDSAT.db object. Is this a known issue, or am I doing something wrong? My code ran perfectly on the pre-Collection versions. Thanks in advance, L.
Oh and I am running RStudio v1.0.143 with R version 3.3.2 on a Win 10 machine.
The code is simple:
SatFolder is an unchanged unpacked LS folder with all original files (MTL and all bands)
(e.g. F:/.../LC81880272017086LGN00)
mtlFile<-list.files(SatFolder, pattern = glob2rx("*_MTL.txt"), full.names = TRUE)
metaData <- readMeta(mtlFile)
lsat <- stackMeta(mtlFile)
lsat_ref <- radCor(lsat, metaData = metaData, method = "apref")
Feature request: Wallis filter for local contrast enhancement
memory consumption can be too high due to single layer processing.
Todo:
Equivalent to #42 but with ggRGB. Here I also have the problem of the axis labels:
a <- ggRGB(rlogo,r=1,g=2,b=3)
g <- list(a,a,a,a,a,a)
grid.arrange(grobs=g)
Any solution using RStoolbox?
Resources:
An illumination correction algorithm on Landsat-TM data
http://ieeexplore.ieee.org/document/5653492/
http://www.arcgis.com/home/item.html?id=fceca6e14fcc4d25953958a52f3eba5b
"Improved forest change detection with terrain illumination corrected Landsat images"
https://www.sciencedirect.com/science/article/pii/S0034425713001673?via%3Dihub#f0025
from your webpage http://bleutner.github.io/RStoolbox/rstbx-docu/readMeta.html
mtlFile <- system.file("external/landsat/LT52240631988227CUB02_MTL.txt", package="RStoolbox")
metaData <- readMeta(mtlFile)
summary(metaData)
I change to :
mtfile<- system.file("MTL.txt", package = "RStoolbox")
meta<- readMeta("Directory/MTL.txt")
summary(meta)
A command for linear regression of x ~ y1 + y2 + ... rasters would be handy, with optional output of residuals (and maybe optional piecewise regressions, robust linear regressions, etc.), to ease this code snippet:
library(raster)
library(RStoolbox)
lsat88 <- brick("p224r63_1988.grd")
lsat11 <- brick("p224r63_2011.grd")
lsat88.ndvi <- spectralIndices(lsat88, red="B3_sre",nir="B4_sre",indices="ndvi")
lsat11.ndvi <- spectralIndices(lsat11, red="B3_sre",nir="B4_sre",indices="ndvi")
lm.lsat <- lm(lsat88.ndvi[]~lsat11.ndvi[], na.action = NULL) # can also be done as multivariate regression: x~y1+y2+y3 with dependent and explanatory variables
summary(lm.lsat)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
yc.ndvi[] 0.248039 0.001339 185.3 <2e-16
...
Multiple R-squared: 0.1702, Adjusted R-squared: 0.1702
...
resid.lsat <- raster(lsat88)
resid.lsat[] <- lm.lsat$residuals
plot(resid.lsat,stretch="lin")
First of all, I thank the author for developing this tool, which is very useful. It would be nice to correct a minor issue in readMeta.R: using as.POSIXlt on the scene_center_time variable without the parameter tz = "GMT" simply adds the local time zone tag (in my case IST) to the time value.
$ACQUISITION_DATE
[1] "2000-04-10 04:57:51 IST"
This made me convert the time value back to GMT by subtracting 5:30 from the time value displayed by readMeta, which was wrong, as the time value was already GMT but wrongly tagged with the local time zone. (I found the issue with clues from the sun azimuth and elevation angles.) I request you to either add the parameter tz = "GMT" to as.POSIXlt or correct the time value with respect to the time zone of the scenes. Once again, thank you for such a nice tool.
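In base R terms, the difference looks like this (the timestamp is the one from the report):

```r
# Without tz, the clock time is tagged with the session's local zone;
# with tz = "GMT" the same clock time denotes the intended UTC instant
t_bad  <- as.POSIXlt("2000-04-10 04:57:51")              # tagged local, e.g. "... IST"
t_good <- as.POSIXlt("2000-04-10 04:57:51", tz = "GMT")
format(t_good, usetz = TRUE)  # "2000-04-10 04:57:51 GMT"
```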
Doing:
> ima <- raster(matrix(rnorm(n=10^4,m=0,sd=2)))
> cellStats(ima,range)
[1] -7.437958 7.845097
> q <- quantile(ima,probs=c(0.02,0.98))
> q
2% 98%
-4.168026 4.168633
> ima2 <- rescaleImage(ima,xmin=q[1],xmax=q[2],ymin=0,ymax=255,forceMinMax = TRUE)
> cellStats(ima2,range)
[1] -100.0200 367.4549
I was expecting ima2 (input range -7.437958 to 7.845097) to be rescaled from -4.168026, 4.168633 to 0, 255.
Am I misunderstanding the documentation, or is there an error?
Using a single raster layer as input containing values from 0-228.1111, the output contains 5 classes instead of the requested 2.
Example code:
out <- unsuperClass(subset(focal.stack, names(focal.stack)[i]), nClasses = 2)
I used topCor() on a stack of 62 TOA Band 5 Landsat 8 images, and I provided 62 MTL files sorted to match the order of the stack.
But 32 of the corrected TOA Band 5 output files are out of range (i.e. reflectance values between -134.381 and 203.657). I did the same topCor() for TOA Band 4 Landsat 8 and the 62 output files seem to be in the correct reflectance range. Can anyone tell me what went wrong? Advice is very much appreciated. Thanks
My code is as follows:
dem_kam <-raster("E:/R/DEM/Kam_DEMslopeAspct_rad.tif")
#path to directory with folders
path1 <- "E:/R/C_crrctn/kam_c_crrctn/B5_tif/"
path2 <- "E:/R/C_crrctn/kam_c_crrctn/MTL/"
path3 <- "E:/R/C_crrctn/kam_c_crrctn/B5_C/"
#get file names using list.files() function
toa_stack <-list.files(path1, pattern=".tif$",full.names=T)
#print(path2)
mtl_stack <-list.files(path2,pattern =".txt$",full.names =T)
#Loop over every stack in filelist
for(i in 1:length(toa_stack)) {
print(i)
#stack image i
img <- raster(toa_stack[i])
mtl <- file.path(mtl_stack[i])
print(img)
print(mtl)
#calc corrected B5 for image i
B5_C <- topCor(img, dem = dem_kam, metaData = mtl, method = "C")
#export raster
writeRaster(B5_C, file.path(path3,names(img)), format="GTiff",
bylayer=TRUE, overwrite=T)
}
As a test, I'm running rasterPCA() on a raster image of 8282 x 11329 x 5 on
a machine with 64 GB of RAM, and it takes longer
than I had expected. I can see that the process does not use more than 2.4 GB of RAM.
Do you have any memory management limits that could prevent rasterPCA()
from using as much RAM as it could?
Thanks
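One thing worth checking: the raster package, which rasterPCA() builds on, caps its own memory use and otherwise processes in chunks from disk. The caps can be raised via rasterOptions(); the values below are illustrative, not recommendations:

```r
# Raise raster's in-memory limits before running heavy operations
library(raster)
rasterOptions(maxmemory = 3e10,  # ~30 GB of values kept in RAM (illustrative)
              chunksize = 1e9)   # larger blocks per read (illustrative)
```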
Considering that spectralIndices() does not rescale G, C1 and C2, shouldn't the EVI index be computed from TOA or SR values rather than from DN?
SI <- spectralIndices(lsat, red = "B3_dn", nir = "B4_dn", blue='B2_dn')
plot(SI)
lsat_sref0<- radCor(lsat, metaData = metaData0, method = "dos")
SI <- spectralIndices(lsat_sref0, red = "B3_sre"
, nir = "B4_sre"
, blue='B2_sre', indices = "EVI")
plot(SI)
A warning could/should be provided or sre bands computed instead.
I've made a change to caret:::train that requires the predictor matrix (x) to always have column names, and that breaks your test:
> test_check("RStoolbox")
Loading required package: sp
1. Error: predict mlc (@test-mlc.R#16) -----------------------------------------
Please use column names for `x`
1: train(mat, y, method = mlcCaret, trControl = trainControl(method = "none")) at testthat/test-mlc.R:16
2: train.default(mat, y, method = mlcCaret, trControl = trainControl(method = "none"))
3: stop("Please use column names for `x`", call. = FALSE)
testthat results ================================================================
OK: 726 SKIPPED: 0 FAILED: 1
1. Error: predict mlc (@test-mlc.R#16)
I have a version headed to CRAN that found this. Can you update your test (and perhaps relevant parts of the package) to use column names? If this is a big deal, I can just set names if there are none, but that seems like a poor solution (feedback is welcome).
Thanks
I would like to plot multiple layers with ggR():
ggR(lsat, 1:3, geom_raster=TRUE, stretch = "lin") +
scale_fill_gradientn(colors=grey.colors(100), guide = FALSE) +
theme(axis.text = element_text(size=5),
axis.text.y = element_text(angle=90),
axis.title=element_blank())
But would like the subplots to appear in one column rather than one row.
Would it be possible to extend the function with an additional argument (ncol, nrow) that feeds into facet_wrap()? If you like, I can also issue a pull request for this.
Hi,
class : RasterStack
dimensions : 1476, 256, 377856, 145 (nrow, ncol, ncell, nlayers)
resolution : 1, 1 (x, y)
extent : 0, 256, 0, 1476 (xmin, xmax, ymin, ymax)
coord. ref. : NA
names : image_brute.1, image_brute.2, image_brute.3, image_brute.4, image_brute.5, image_brute.6, image_brute.7, image_brute.8, image_brute.9, image_brute.10, image_brute.11, image_brute.12, image_brute.13, image_brute.14, image_brute.15, ...
min values : 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
max values : 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, ...
class : SpatialPointsDataFrame
features : 3248
extent : 18, 252, 118, 1093 (xmin, xmax, ymin, ymax)
coord. ref. : NA
variables : 1
names : label
min values : 1
When I run the classification, the following error message appears: "Error in if (is.projected(x)) { : missing value where TRUE/FALSE needed".
Please help me to resolve this problem.
Thank you very much in advance.
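A hedged guess at the cause: both objects print coord. ref. : NA, and is.projected() returns NA when there is no CRS, so the if() condition fails. Assigning the data's true CRS to both objects may avoid this; the CRS string and object names below are hypothetical:

```r
# Give both the image stack and the training points a (matching) CRS
library(raster)
crs(img)          <- "+proj=utm +zone=33 +datum=WGS84"  # hypothetical CRS
crs(train_points) <- crs(img)
```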
When I use readSLI() to read a library with byte order = 1 in its hdr, I get weird values in R, while libraries
that have byte order = 0 result in R objects with meaningful values.
byte order = 1 : https://lost-contact.mit.edu/afs/enea.it/software/rsi/envi_3.5/spec_lib/jpl_lib/
byte order = 0: https://lost-contact.mit.edu/afs/enea.it/software/rsi/envi_3.5/spec_lib/usgs_min/
I understand that readSLI() assumes byte order = 0 (least significant byte first (LSF) data ),
am I wrong?
If I'm not wrong, could byte order be taken into account by readSLI() ?
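For comparison, this is how base R reads big-endian binary data (ENVI byte order = 1 means most significant byte first); the file name, data type and count below are hypothetical:

```r
# readBin defaults to the platform's byte order (little-endian on x86),
# so big-endian files need endian = "big"
con <- file("spectra.sli", "rb")  # hypothetical library file
vals <- readBin(con, what = "double", n = 1000, size = 4, endian = "big")
close(con)
```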
Currently, *.sli files from EnMapBox come with "spectra names" field in the corresponding *.hdr file that have " " separators, such as:
spectra names = {
FX10FX17_Quartz1_small.raw 738.785219770416 358.03143995717653 USER:100025,
FX10FX17_Quartz2_small.raw 615.0409742461194 446.71481591625576 USER:100025,
FX10FX17_Quartz3_small.raw 346.9284422768101 586.9582941771253 USER:100025}
readSLI() collapses the " " and names become:
[2] "FX10FX17_Quartz1_small.raw738.785219770416358.03143995717653USER:100025"
[3] "FX10FX17_Quartz2_small.raw615.0409742461194446.71481591625576USER:100025"
[4] "FX10FX17_Quartz3_small.raw346.9284422768101586.9582941771253USER:100025"
which is very inconvenient for further processing. Could you use "_" as the separator?
"FX10FX17_Quartz1_small.raw_738.785219770416_358.03143995717653_USER:100025"
"FX10FX17_Quartz2_small.raw_615.0409742461194_446.71481591625576_USER:100025"
"FX10FX17_Quartz3_small.raw_346.9284422768101_586.9582941771253_USER:100025"
Hello,
"superClass" uses the "tuneLength" argument to indicate the number of different values to try for each model parameter, and it also uses k-fold cross-validation for resampling during model tuning.
Please, would it be possible for the "getValidation" function to provide the final tuned values used by the caret model?
For example: for the model "svmRadial", the final values of the two parameters sigma and cost (C) used in the classification task would also be returned as results of getValidation.
Thanks in advance.
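Until then, a possible workaround: the fitted caret model appears to be stored in the superClass output, so the tuned values can be read from its bestTune slot (object names are hypothetical):

```r
# The caret train object is kept in the superClass result
library(RStoolbox)
sc <- superClass(img, trainData = trainPolys, responseCol = "class",
                 model = "svmRadial", tuneLength = 5)
sc$model$bestTune  # final sigma and C chosen during tuning
```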
Dear Bleutner,
this is the error I am getting when I want to stack the MTL file; I can read it perfectly, but when I want to stack it, it gives errors.
mtfile<- system.file("MTL.txt", package = "RStoolbox")
meta<- readMeta("Z:/Band/MTL.txt")
meta
lsat<- stackMeta(meta)
lsat<- stackMeta(mtfile)
Error: Metadata file does not exist. Looking for:
lsat<- stackMeta("Z:/Band/MTL.txt")
Error in .local(.Object, ...) :
Error in .rasterObjectFromFile(x, band = band, objecttype = "RasterLayer", :
Cannot create a RasterLayer object from this file. (file does not exist)
lsat<- stackMeta(MTL.txt)
Error in inherits(file, "ImageMetaData") : object 'MTL.txt' not found
lsat<- stackMeta(meta)
Error in .local(.Object, ...) :
Regards
Hello,
I'm trying to apply a topographic correction to a stack of MODIS images, and I'm getting an error that I assume has to do with the fixed sample size of 10000 used by .kestimate (my raster stack has 5046 cells). Is there a straightforward way to pass a different sample size to .kestimate?
Thanks,
Nate
topCor(modis_4band_means, dem = srtm_MODIS, solarAngles = c(1, 0.7), method = "C")
Error in .local(x, size, ...) : size <= ncell(x) is not TRUE
traceback()
8: stop(simpleError(msg, call = sys.call(-1)))
7: stopifnot(size <= ncell(x))
6: .local(x, size, ...)
5: sampleRandom(stack(img, illu, slope, strat), size = 10000)
4: sampleRandom(stack(img, illu, slope, strat), size = 10000)
3: as.data.frame(sampleRandom(stack(img, illu, slope, strat), size = 10000))
2: .kestimate(img, illu, slope, method = "stat")
1: topCor(modis_4band_means, dem = srtm_MODIS, solarAngles = c(1,
0.7), method = "C")
modis_4band_means
class : RasterBrick
dimensions : 58, 87, 5046, 4 (nrow, ncol, ncell, nlayers)
resolution : 288, 288.6207 (x, y)
extent : 694419.2, 719475.2, 2149977, 2166717 (xmin, xmax, ymin, ymax)
crs : +proj=utm +zone=14 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0
source : memory
names : layer.1, layer.2, layer.3, layer.4
min values : 0.015384210, 0.115567346, 0.009112088, 0.042510675
max values : 0.13556559, 0.36798491, 0.08137419, 0.19660914
Dear all,
I try to use the rasterCVA function on two bands of two Landsat 5 images (with the same extent), but I receive an error, as seen in the following:
changeVec <- rasterCVA(LS_Aug_2011[[3:4]], LS_Aug_2010[[3:4]])
Error in (function (x, fun, filename = "", recycle = TRUE, ...) : cannot use this formula, probably because it is not vectorized
Here some details about my raster objects:
LS_Aug_2011
class : RasterBrick
dimensions : 7421, 8221, 61008041, 7 (nrow, ncol, ncell, nlayers)
resolution : 30, 30 (x, y)
extent : 173985, 420615, 4989285, 5211915 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=utm +zone=35 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0
data source : /home/gce04/C7_Martini_2017/landsat_data/LS_Aug_2011_stack.tif
names : LS_Aug_2011_stack.1, LS_Aug_2011_stack.2, LS_Aug_2011_stack.3, LS_Aug_2011_stack.4, LS_Aug_2011_stack.5, LS_Aug_2011_stack.6, LS_Aug_2011_stack.7
min values : 0, 0, 0, 0, 0, 0, 0
max values : 255, 255, 255, 255, 255, 182, 255
LS_Aug_2010
class : RasterBrick
dimensions : 7421, 8221, 61008041, 7 (nrow, ncol, ncell, nlayers)
resolution : 30, 30 (x, y)
extent : 173985, 420615, 4989285, 5211915 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=utm +zone=35 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0
data source : /home/gce04/C7_Martini_2017/landsat_data/LS_Aug_2010_stack.tif
names : LS_Aug_2010_stack.1, LS_Aug_2010_stack.2, LS_Aug_2010_stack.3, LS_Aug_2010_stack.4, LS_Aug_2010_stack.5, LS_Aug_2010_stack.6, LS_Aug_2010_stack.7
min values : 0, 0, 0, 0, 0, 0, 0
max values : 255, 255, 255, 255, 255, 180, 255
Can you help me out with that issue?
Looking forward to comments.
Cheers!
The current GitHub version of caret induces a (small) test failure for RStoolbox:
* checking tests ...
ERROR
Running the tests in 'tests/testthat.R' failed.
Last 13 lines of output:
Loading required package: sp
[1] "testdata/earthexplorer/EE_LANDSAT_5.txt"
[2] "testdata/earthexplorer/EE_LANDSAT_7.txt"
[3] "testdata/earthexplorer/EE_LANDSAT_8.txt"
1. Failure: regression (@test-validateMap.R#38) --------------------------------
names(val$performance) not equal to c("RMSE", "Rsquared").
Lengths differ: 3 vs 2
testthat results ================================================================
OK: 766 SKIPPED: 0 FAILED: 1
1. Failure: regression (@test-validateMap.R#38)
Error: testthat unit tests failed
Execution halted
postResample now reports RMSE, R^2, and MAE.
cctrl <- trainControl(method = "cv", number = 3, returnResamp = "all",
classProbs = TRUE,
summaryFunction = twoClassSummary)
fit <- train(Class ~ ., data = training,
method = "rotationForest",
trControl = cctrl1,
metric = "ROC",
preProc = c("center", "scale"))
This gives me the error "Model rotationForest is not in caret's built-in library".
Hi,
Thanks bleutner for this package, it has been very helpful so far in processing Landsat scenes!
An issue I have encountered is that readMeta
fails to properly load the *MTL.txt metadata files from the various Landsat MSS scenes I have tried to process.
Here is a basic example of trying to load the metadata and the error it throws:
meta <- readMeta("./MSS/1978/LM30520251978217PAC03_MTL.txt")
Error in `rownames<-`(`*tmp*`, value = "B6_dn") :
attempt to set 'rownames' on an object with no dimensions
and if I use the example you provide in the documentation, readMeta throws a different error because mtlFile is only an empty character vector.
mtlFile <- system.file("./MSS/1978/LM30520251978217PAC03_MTL.txt", package = "RStoolbox")
print(mtlFile)
[1] ""
meta <- readMeta(mtlFile)
Error: Metadata file does not exist. Looking for:
I have attached one of the metadata .txt files as obtained when the MSS scene was downloaded from the USGS. Any suggestions on how to get around this issue if I am trying to run radCor and topCor on the MSS data?
Thanks,
Andras
Dear authors,
first of all congratulations and thank you very much for this very nice package.
I mainly used the package to calculate the atmospheric correction using the DOS method with LandSat8 images from 2014.
In Landsat 8 images the DN values are encoded in 16 bit and thus have the range 0 - 65535. Because of that, the default value of darkProp = 0.02 seems not to be appropriate, since when running estimateHaze one gets this warning for each band:
darkProp for band X was chosen too high. It exceeds the value range.
Adjusting the value of darkProp to about 0.000025 fixes the problem and SHV values are evaluated properly.
Nevertheless, the provided examples are not working with my images, and manually setting the darkProp parameter might not always be applicable. Thus, I was wondering if you could add some code to check the range of the DN values and adjust the default parameters accordingly.
In addition I am seeing that the following two corrections are giving exactly the same result:
lsat_ref <- radCor(lsat, metaData = metaData, method = "apref")
hazeDN <- estimateHaze(lsat, hazeBands = 1:4, darkProp = 0.000025, plot = TRUE)
lsat_sref <- radCor(lsat, metaData = metaData, method = "sdos",
hazeValues = hazeDN, hazeBands = 1:4)
So it seems that in my case the atmospheric correction is not doing anything, which does not seem right. Do you have an idea where this could be coming from? Maybe it is somehow related to the issue with the very small haze values I mentioned.
Thank you very much in advance for your help,
Hendrik
I'm not able to install the package on my Ubuntu server. It fails at the following step:
configure: Need to download and build NLopt
trying URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
Error in download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz", :
cannot open URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
In addition: Warning message:
In download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz", :
URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz': status was 'Failure when receiving data from the peer'
Execution halted
Error in file(con, "rb") : cannot open the connection
Calls: untar -> readBin -> file
In addition: Warning message:
In file(con, "rb") :
cannot open file 'nlopt-2.4.2.tar.gz': No such file or directory
Execution halted
configure: Starting to install library to /tmp/Rtmp9oSNpM/R.INSTALL32603998a7/nloptr/nlopt-2.4.2
./configure: line 3325: cd: nlopt-2.4.2: No such file or directory
Thanks