cbeleites / hyperspec
R package hyperSpec can now be found at https://github.com/r-hyperspec/hyperSpec
Home Page: http://r-hyperspec.github.io/hyperSpec/
License: GNU General Public License v3.0
9 fields in extra data:
> read.spe("polystyrene.SPE", xaxis = "file")
hyperSpec object
1 spectra
9 data columns
1340 data points / spectrum
wavelength: Raman ~ shift/cm^-1 [numeric] 289.9212 293.0820 ... 3529.283
data: (1 rows x 9 columns)
1. px.y: [integer] 1
2. frame: [integer] 1
3. exposure_sec: [numeric] 0.1
4. LaserWavelen: [numeric] 785
5. accumulCount: [integer] 1
6. numFrames: [integer] 1
7. darkSubtracted: [integer] 0
8. spc: counts [matrix1340] 585 590 ... 586
9. filename: filename [character] polystyrene.SPE
8 fields in extra data; filename is missing!
> read.spe("polystyrene.SPE", xaxis = "px")
hyperSpec object
1 spectra
8 data columns
1340 data points / spectrum
wavelength: pixel number [integer] 1 2 ... 1340
data: (1 rows x 8 columns)
1. px.y: [integer] 1
2. frame: [integer] 1
3. exposure_sec: [numeric] 0.1
4. LaserWavelen: [numeric] 785
5. accumulCount: [integer] 1
6. numFrames: [integer] 1
7. darkSubtracted: [integer] 0
8. spc: counts [matrix1340] 585 590 ... 586
translate from svUnit to testthat
translate from svUnit to testthat
translate to testthat
translate to testthat
translate from svUnit to testthat
translate from svUnit to testthat
translate from svUnit to testthat
If the project has not been saved in Witec Control before exporting Graph ASCII, the Filename field in the Header file is empty.
In the new version of R (3.4.0) a bug was introduced that breaks makefile-based builds of Sweave documents. The problem is that after compiling a PDF file, R/Sweave returns 1 on success instead of 0. This indicates to an external program, in our case make, that an error has occurred, so it stops.
After the update to R 3.4.0, building hyperSpec typically fails on the flu vignette: make just stops after a successful build of the vignette without any meaningful explanation. The error message does not make any sense and looks like this:
...
Output file: flu.pdf
Compacting PDF document
compacted ‘flu.pdf’ from 222Kb to 85Kb
Execution halted
make[1]: *** [flu.pdf] Error 1
make[1]: Leaving directory `hyperspec/Vignettes/flu'
I think the only option right now is to wait until this issue with R is resolved.
S3method(rbind,fill.matrix)
S3method(rbind,hyperSpec)
for new roxygen2 versions > 5.0.1
Workaround: roll back to roxygen2 5.0.1; run in R: devtools::install_version(package = 'roxygen2', version = '5.0.1')
translate to testthat
hyperSpec was tested with file format 2.5, which specifies that the SPE file consists of two parts: a 4100-byte binary header and a data block with spectral values.
The new file format 3.0 is backwards-compatible with version 2.5; the differences most important for us are:
- a float32 number at byte offset 1992 indicates the file format version; it was changed from 2.5 to 3.0
- an XML block with additional data follows the binary block of spectral values
Currently, read.spe cannot read the new file format, as it treats everything after the 4100-byte header, including the XML text, as the binary block of spectral values.
I want to start a short discussion about the best way to circumvent this issue. My proposal:
- add a read.spe.header function to read the file format version and the start of the XML data (offset 678)
- read.spe: determine the size of the spectral data block from the fields xdim / ydim / numFrames / datatype in the header, and stop reading at the end of the data block, before the XML block
- add a read.spe.xml function that checks the file format version and returns the XML data if the format version is >= 3.0. We can add the package XML as an optional dependency; if it is available, the function will return the pretty-printed XML object. I think we should not try to process the XML any further – let the users do what they want with it. Also, I suppose that the XML DOM structure may vary, as it is not rigorously described in the specification.
translate from svUnit to testthat
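To make the first proposal concrete, here is a minimal sketch of the probe a read.spe.header-style function could perform. It only relies on the offsets quoted above (version as float32 at byte 1992, XML start at byte 678); as a simplifying assumption it reads just the lower 32 bits of the XML offset, which suffices for files under 4 GB. The function name is made up for illustration.

```r
# Sketch of a version/offset probe for SPE files.
# Assumptions: float32 format version at byte offset 1992,
# XML-block offset stored at byte offset 678 (only the lower 32 bits
# are read here, enough for files < 4 GB).
read_spe_version <- function(filename) {
  con <- file(filename, open = "rb")
  on.exit(close(con))
  seek(con, where = 1992, origin = "start")
  version <- readBin(con, what = "numeric", n = 1, size = 4, endian = "little")
  seek(con, where = 678, origin = "start")
  xml.offset <- readBin(con, what = "integer", n = 1, size = 4, endian = "little")
  list(version = version, xml.offset = xml.offset)
}
```

For version < 3.0 the XML offset field is unused and read.spe could keep its current behaviour; for version >= 3.0 the binary block ends at xml.offset.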
translate from svUnit to testthat
read.txt.Shimadzu: working with the new UV-VIS-NIR example spectrum
i.e. tests inside an attribute attached to the function.
NIR SRS device by the Indatech company:
http://indatech.eu/hyternity-spatially-resolved-spectroscopy-nir-spectrometer/
spc <- read.txt.Witec ("~/tmp/A1_BR.txt", sep =",")
Minimal working example with chondro.txt from the fileio vignette raw data:
spc <- read.txt.Renishaw ("chondro.txt")
spc.fit.poly.below (spc [800], debuglevel = 2)
reproduces the problem
ggplot2 examples
qplotmixmap
Hi!
I noticed that it's pretty annoying to type new('hyperSpec', ...) every time. How about adding a function as.hyperspec, like:
as.hyperspec <- function(X, wl = NULL, other.data = NULL, use.colnames = TRUE) {
  # if wl is given, just use it; otherwise derive it
  if (is.null(wl)) {
    wl <- 1:ncol(X)  # if use.colnames is FALSE, use 1, 2, 3, ... as wavelengths
    if (use.colnames && !is.null(colnames(X))) {
      # strip the automatically added prefix characters and use the colnames
      wl <- as.numeric(gsub(pattern = '[[:alpha:]]', replacement = '', x = colnames(X)))
    }
  }
  if (is.null(other.data)) {
    new('hyperSpec', spc = as.matrix(X), wavelength = wl)
  } else {
    new('hyperSpec', spc = as.matrix(X), wavelength = wl, data = other.data)
  }
}
Here I use gsub to remove the prefix characters which were added automatically because colnames can't start with numbers. For example:
> A <- matrix(1:8, ncol = 4)
[,1] [,2] [,3] [,4]
[1,] 1 3 5 7
[2,] 2 4 6 8
> data.frame(A)
X1 X2 X3 X4
1 1 3 5 7
2 2 4 6 8
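The prefix-stripping step above can be checked in isolation; a small self-contained sketch:

```r
# data.frame()/make.names() prepend "X" because column names can't start
# with a digit; gsub strips the alphabetic characters to recover the numbers.
cn <- colnames(data.frame(matrix(1:8, ncol = 4)))   # "X1" "X2" "X3" "X4"
wl <- as.numeric(gsub(pattern = "[[:alpha:]]", replacement = "", x = cn))
wl   # 1 2 3 4
```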
translate from svUnit to testthat
... for bug reporting
translate from svUnit to testthat
scan.* import functions to read.*
translate from svUnit to testthat
single NAs cause the whole spectrum to be deleted.
Instead of providing specific functionality in hyperSpec, this can be done via lattice options:
trellis.par.set(regions = list(col = matlab.palette()))
However, this possibility should be explained in the plotting vignette.
translate from svUnit to testthat
translate from svUnit to testthat
Warning messages:
1: In (function (x, y, z, subscripts, at = pretty(z), ..., col.regions = regions$col, :
'x' values are not equispaced; output may be wrong
occurs for non-corrupted data - require an increased debuglevel for this message to be displayed?
Each time I use spc.fit.poly.below, it prints an informative message:
> bl <- spc.fit.poly.below(chondro)
Fitting with npts.min = 15
This message is not really informative (at least for me), but it is very hard to get rid of. It gets especially annoying when using knitr. The only real way to suppress it is to provide the npts.min argument, which is not obvious unless you dig into the source code. The message is produced by the following code from spc.fit.poly.R:
if (is.null (npts.min)){
npts.min <- max (round (nwl(fit.to) * 0.05), 3 * (poly.order + 1))
cat ("Fitting with npts.min = ", npts.min, "\n") # <----------- HERE
} else if (npts.min <= poly.order){
npts.min <- poly.order + 1
warning (paste ("npts.min too small: adjusted to", npts.min))
}
I would suggest printing this message only when debuglevel is set to at least 1, or removing it completely. Any other ideas?
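As a self-contained sketch of the debuglevel variant (with debuglevel, nwl.fit.to and poly.order as plain variables standing in for hyperSpec's option and the function's arguments):

```r
# Gate the message on a debug level; silent by default.
debuglevel <- 0L      # stand-in for hyperSpec's debuglevel option
poly.order <- 1L
nwl.fit.to <- 1340L   # stand-in for nwl(fit.to)
npts.min <- NULL
if (is.null(npts.min)) {
  npts.min <- max(round(nwl.fit.to * 0.05), 3 * (poly.order + 1))
  if (debuglevel >= 1L)
    message("Fitting with npts.min = ", npts.min)
}
npts.min   # 67, computed silently
```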
R searches vignettes by their names. In the case of "introduction", the name can refer to basically any package.
vignette("introduction")
Warning message:
vignette ‘introduction’ found more than once,
using the one found in ‘/home/ximeg/R/x86_64-pc-linux-gnu-library/3.3/dplyr/doc’
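Until the vignette gets a more specific name, the ambiguity can be avoided by qualifying the call with the package argument (guarded here in case hyperSpec is not installed):

```r
# Explicitly name the package, so R does not pick another package's
# "introduction" vignette.
if (requireNamespace("hyperSpec", quietly = TRUE))
  vignette("introduction", package = "hyperSpec")
```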
translate from svUnit to testthat
Vignettes/fileio/Makefile
Suppose I try to set a new spectral matrix and accidentally provide a matrix of a wrong size.
> spc <- chondro[1]
> spc@data$spc <- matrix(1:2, ncol=2)
Error in validObject(x) :
invalid class “hyperSpec” object: Length of wavelength vector differs from number of data points per spectrum.
This results in an error message, which is understandable. But I would expect that the spc object is still intact (because there was an error); however, it was modified by this operation.
> spc
Error in validObject(x) :
invalid class “hyperSpec” object: Length of wavelength vector differs from number of data points per spectrum.
> spc@data$spc
[,1] [,2]
[1,] 1 2
> nwl(spc)
Error in validObject(x) :
invalid class “hyperSpec” object: Length of wavelength vector differs from number of data points per spectrum.
I find this behavior really confusing.
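A defensive pattern for this situation (a sketch; set_spc_safely is a made-up helper, not part of hyperSpec) is to modify a copy and only return it once it validates, so the caller's object can never end up in the half-broken state shown above:

```r
# Modify a copy; validity checking may fire during the slot assignment or
# at the explicit validObject() call, but either way the caller's object
# stays untouched because only the local copy is modified.
set_spc_safely <- function(obj, new.spc) {
  candidate <- obj
  candidate@data$spc <- new.spc
  validObject(candidate)
  candidate
}
```

Usage: spc <- set_spc_safely(spc, new.matrix) then either succeeds completely or errors with spc unchanged.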
translate from svUnit to testthat
Hi,
Recently I updated my RStudio to the latest version (1.0.136), and now the following error keeps popping up when I try to use spc$ instead of spc@data$:
Error in grep(pattern, colnames(x@data), value = TRUE) : argument "pattern" is missing, with no default
The message pops up each time a character is typed, i.e. after typing flu$file the console looks like this:
Error in grep(pattern, colnames(x@data), value = TRUE) :
argument "pattern" is missing, with no default
Error in grep(pattern, colnames(x@data), value = TRUE) :
argument "pattern" is missing, with no default
Error in grep(pattern, colnames(x@data), value = TRUE) :
argument "pattern" is missing, with no default
Error in grep(pattern, colnames(x@data), value = TRUE) :
argument "pattern" is missing, with no default
Error in grep(pattern, colnames(x@data), value = TRUE) :
argument "pattern" is missing, with no default
> flu$file
I've checked this on 3 different computers and it occurs on all of them.
P.S. I also tried to fix it on my own, but the message doesn't occur (and neither does $ work) when I load hyperSpec locally (devtools::load_all(".")). Perhaps I'm still doing something wrong with package building.
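The error text suggests that the completion helper behind $ forwards to grep() without a default pattern; the following is a hypothetical sketch of a fix, with the method body guessed from the error message rather than taken from hyperSpec's sources:

```r
# Give the completion method a default pattern, so completion machinery
# that calls it without one (as RStudio apparently does) no longer errors.
.DollarNames.hyperSpec <- function(x, pattern = "") {
  grep(pattern, colnames(x@data), value = TRUE)
}
```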
translate from svUnit to testthat
Combining two hyperSpec objects with rbind does not work when joining a bigger to a smaller (in terms of number of spectra) object:
Error in rbind(deparse.level, ...) : replacement has length zero
In the other order (smaller to bigger) it works OK.
collapse works correctly in both cases.
Typing spc.nmax = ... for plotting moderate data sets is tedious.
R 3.4.1 ignores the supplied PDFs in inst/doc which were previously used. Instead, the PDFs produced from the stub .Rnw files (which were needed to get the vignettes listed in R) are used.
This is apparently a fix of long-standing but undesired behaviour.
The algorithms which use Vandermonde matrices (spc.fit.poly and spc.rubberband) cannot handle higher-order polynomials if the x-axis does not meet the right criteria. The returned error message is: Error in qr.solve(vdm[use, ], y[use, i]) : singular matrix 'a' in solve.
The hypothesis is that the numerical fitting becomes unstable for an axis containing large numbers far away from 0.
The workaround for this problem is to normalise the wavelength axis before doing the fit and recover it afterwards.
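A self-contained sketch of that workaround, using lm() on a generic raw polynomial rather than hyperSpec's internal qr.solve path: the fit is done on an axis rescaled to [0, 1], where high powers stay well-conditioned, and evaluated back on the original axis.

```r
# High-order polynomial baseline fit on a wavelength axis far from 0:
# rescale the axis to [0, 1] first, so the Vandermonde matrix stays
# well-conditioned (raw powers of ~3000 would overflow to ~1e20).
wl <- seq(2800, 3100, by = 0.5)                 # Raman-like axis, far from 0
y  <- 0.001 * wl + sin(wl / 50)                 # toy "spectrum"
wl.scaled <- (wl - min(wl)) / diff(range(wl))   # now in [0, 1]
fit <- lm(y ~ poly(wl.scaled, degree = 6, raw = TRUE))
baseline <- predict(fit)                        # values on the original axis
```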
linear interpolation with linapprox is not suitable.
I think I am running into a corner case with spc.fit.poly.below when I set npts.min = 2: the function call never ends.
spc.fit.poly.below(spc[39], npts.min = 2)
For higher npts.min values or other spectra in my data it works fine:
> system.time(spc.fit.poly.below(spc[39], npts.min = 3))
user system elapsed
0.006 0.000 0.007
> system.time(spc.fit.poly.below(spc[38], npts.min = 2))
user system elapsed
0.008 0.000 0.008
This "39" spectrum does not look very different from the "38" one.
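Whatever the exact cause, the support-point iteration can be protected with a hard iteration cap. A toy sketch of a fit-below loop with such a cap (not hyperSpec's actual implementation; fit_below_capped and max.iter are made up for illustration):

```r
# Iteratively fit a polynomial to the points at or below the current fit;
# the max.iter cap guarantees termination even if the support set
# oscillates (as a degenerate npts.min may provoke).
fit_below_capped <- function(x, y, poly.order = 1, npts.min = 3, max.iter = 100) {
  use <- rep(TRUE, length(x))
  for (i in seq_len(max.iter)) {
    fit <- lm(y ~ poly(x, degree = poly.order, raw = TRUE), subset = use)
    bl  <- predict(fit, newdata = data.frame(x = x))
    below <- y <= bl
    if (sum(below) < npts.min)                   # keep >= npts.min support points
      below[order(y - bl)[seq_len(npts.min)]] <- TRUE
    if (identical(below, use)) break             # converged
    use <- below
  }
  bl
}
```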
> sessionInfo()
R version 3.4.1 (2017-06-30)
Platform: x86_64-redhat-linux-gnu (64-bit)
Running under: Fedora 24 (Workstation Edition)
Matrix products: default
BLAS/LAPACK: /usr/lib64/R/lib/libRblas.so
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] grid stats graphics grDevices utils datasets methods base
other attached packages:
[1] scidb_2.0.0 bit64_0.9-7 bit_1.1-12 hyperSpec_0.98-20161118 caret_6.0-76 lattice_0.20-35 gWidgetsRGtk2_0.0-84
[8] cairoDevice_2.24 gWidgets_0.0-54 RGtk2_2.20.33 baseline_1.2-1 ggplot2_2.2.1 MASS_7.3-47 Peaks_0.2
loaded via a namespace (and not attached):
[1] Rcpp_0.12.12 RColorBrewer_1.1-2 nloptr_1.0.4 compiler_3.4.1 plyr_1.8.4 iterators_1.0.8 tools_3.4.1 svUnit_0.7-12 lme4_1.1-13
[10] digest_0.6.12 tibble_1.3.3 gtable_0.2.0 nlme_3.1-131 mgcv_1.8-17 rlang_0.1.1 Matrix_1.2-10 foreach_1.4.3 parallel_3.4.1
[19] curl_2.8.1 SparseM_1.77 stringr_1.2.0 MatrixModels_0.4-1 stats4_3.4.1 nnet_7.3-12 data.table_1.10.4 latticeExtra_0.6-28 minqa_1.2.4
[28] reshape2_1.4.2 car_2.1-5 magrittr_1.5 splines_3.4.1 scales_0.4.1 codetools_0.2-15 ModelMetrics_1.1.0 pbkrtest_0.4-7 colorspace_1.3-2
[37] quantreg_5.33 labeling_0.3 stringi_1.1.5 lazyeval_0.2.0 openssl_0.9.6 munsell_0.4.3
translate from svUnit to testthat