
process-rate-estimator's Introduction

Hello there.

process-rate-estimator's People

Contributors

answaltan, damian-oswald

process-rate-estimator's Issues

Create panel tabset for hyperparameters

The hyperparameter visualizations extend too far down the page; a panel tabset would solve this and allow the reader to switch between the visualizations manually.
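
A minimal sketch of the intended structure, assuming the report is rendered with Quarto (the term "panel tabset" suggests so); the tab titles below are placeholders, not the actual hyperparameter names:

::: {.panel-tabset}

## Hyperparameter A

<!-- chunk producing the visualization for hyperparameter A -->

## Hyperparameter B

<!-- chunk producing the visualization for hyperparameter B -->

:::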

Resolve conflict between (semi-)partial R-squared and SRC

The standardized regression coefficients $\beta^\ast$ and the semi-partial $R^2$ ($\Delta R^2$) are closely related, but they are not exactly the same, as described by Chris Novak (2015). The difference between the two lies in the denominator, which depends only on the correlation among the predictors; if the correlation among predictors is low, the two values will be close.

The partial $R^2$ needs to be calculated from a Type-III sum of squares ANOVA; the default in R (sequential, Type-I sums of squares) leads to wrong results.
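
A minimal sketch of the Type-III calculation; the model formula, the data frame pre_data, and the predictor names are placeholders, not the actual PRE model:

library(car)                                   # provides Anova() with Type-III sums of squares

fit <- lm(y ~ x1 + x2 + x3, data = pre_data)   # placeholder model

# Type-III ANOVA table; with factor predictors, sum-to-zero contrasts are required
a3 <- Anova(fit, type = "III")

# Partial R2 per term: SS_term / (SS_term + SS_residual)
ss <- a3[["Sum Sq"]]
ss_res <- ss[rownames(a3) == "Residuals"]
partial_r2 <- ss / (ss + ss_res)
names(partial_r2) <- rownames(a3)
partial_r2[!rownames(a3) %in% c("(Intercept)", "Residuals")]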

Similar issue: https://stats.stackexchange.com/questions/65919/difference-between-effect-size-partial-r2-and-coefficients

Global sensitivity analysis

Write a script which runs the model in order to generate the data for a global sensitivity analysis.

Main parameters of interest:

  • Isotope end members
  • Bulk density
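
A minimal sketch of such a script, assuming a hypothetical wrapper runPRE() that runs the model for one parameter set and returns a scalar summary of the process estimates; the parameter names and sampling ranges are placeholders, not the values from table 6.1:

n <- 500  # number of Monte Carlo model runs

# Placeholder sampling ranges; the actual ranges are listed in table 6.1
parameters <- data.frame(
  eta_SP  = runif(n, min = 25, max = 40),   # isotope end member (example values)
  eta_18O = runif(n, min = 30, max = 50),   # isotope end member (example values)
  BD      = runif(n, min = 1.2, max = 1.7)  # bulk density (example values)
)

# Run the model once per sampled parameter set; runPRE() is hypothetical
output <- sapply(seq_len(n), function(i) {
  runPRE(eta_SP  = parameters$eta_SP[i],
         eta_18O = parameters$eta_18O[i],
         BD      = parameters$BD[i])
})

sa_data <- cbind(parameters, output)

# Standardized regression coefficients as a first global sensitivity measure
sa_std <- as.data.frame(scale(sa_data))   # z-standardize all columns
summary(lm(output ~ eta_SP + eta_18O + BD, data = sa_std))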

Check parameter range for sensitivity analysis

In table 6.1, check the distribution functions of the isotope end member parameters. The range from which these parameters are sampled massively influences their estimated importance for the overall model outcome.

Depth-specific soil parameters

The process rate estimator assumes (for now) that all soil parameters, such as bulk density, are constant regardless of the measurement depth.

However, it is much more realistic that some soil parameters change as a function of the measurement depth.

This could be incorporated by allowing parameters both in scalar form, in which case the value is reused for all depths, and in vector form, with one value provided per depth.
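
A minimal sketch of this idea; expandParameter() is a hypothetical helper, and the depths are example values, not necessarily those of the experiment:

# Accept a soil parameter either as a scalar (reused for all depths)
# or as a vector with one value per measurement depth
expandParameter <- function(x, depths = c(30, 60, 90, 120)) {  # example depths
  if (length(x) == 1) {
    rep(x, length(depths))                  # scalar: recycle across all depths
  } else if (length(x) == length(depths)) {
    x                                       # vector: one value per depth
  } else {
    stop("Parameter must be a scalar or have one value per depth.")
  }
}

expandParameter(1.4)                        # constant bulk density
expandParameter(c(1.3, 1.4, 1.5, 1.6))      # depth-specific bulk density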

Write a chapter on the sensitivity analysis

This should include the following content:

  • Table with the uncertainty values and their sources.
  • Function used to randomly generate uncertainty values (based on `runif`, `rbeta` or `rnorm`; see the sketch below).
  • Exact procedure of the sensitivity analysis.
  • Results: how important is each parameter (most importantly, the isotope end members)?
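
A minimal sketch of such a generator; sampleParameter() is a hypothetical helper, and the distribution arguments in the examples are placeholders, not the values from the uncertainty table:

# Draw n random values for one parameter, with the distribution chosen per parameter
sampleParameter <- function(n, dist = c("uniform", "normal", "beta"), ...) {
  dist <- match.arg(dist)
  switch(dist,
         uniform = runif(n, ...),   # e.g. min/max taken from the uncertainty table
         normal  = rnorm(n, ...),   # e.g. mean/sd taken from the uncertainty table
         beta    = rbeta(n, ...))   # e.g. for parameters bounded on [0, 1]
}

sampleParameter(5, "uniform", min = 25, max = 40)    # isotope end member (example)
sampleParameter(5, "normal", mean = 1.4, sd = 0.1)   # bulk density (example)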

Improve the visualizations of results over time

The visualizations of results over time don't work that well yet. They need to:

  • Be in a wide layout to better match the interactivity.
  • Include x and y axis descriptions with units.
  • Use larger text that better covers the available space (see the sketch below).
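
A rough sketch of the intended styling, assuming the plots are drawn with ggplot2; the data frame results and its columns date and N2O are placeholders for the actual objects, and the unit in the axis label is only an example:

library(ggplot2)

ggplot(results, aes(x = date, y = N2O)) +
  geom_line() +
  labs(
    x = "Date",
    y = expression(N[2]*O~"concentration ["*mu*g~N~g^{-1}*"]")  # placeholder unit
  ) +
  theme_minimal(base_size = 16)   # larger text that fills the panel

If the report is a Quarto document, a wider layout can then be requested through the chunk options fig-width and fig-height.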

Update the parameter ranges in table 6.1

In table 6.1, adjust the parameter ranges to match the conducted sensitivity analysis. The range from which these parameters are sampled massively influences their estimated importance for the overall model outcome.

Conduct uncertainty analysis

Quantify the magnitude of the process estimate uncertainty per column and date.

  • Use the covariance matrices $\Sigma$ of figure 6.2.
  • Apply the Frobenius norm to quantify the magnitude of $\Sigma$ (see the sketch below). The Frobenius norm of a matrix $A$ is defined as:
$$\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2}$$
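
A minimal sketch, assuming the covariance matrices of figure 6.2 are available as a list Sigma (a hypothetical name) with one matrix per column and date:

# Frobenius norm of a single covariance matrix; base R's norm() implements it
frobenius <- function(A) norm(A, type = "F")   # equivalent to sqrt(sum(A^2))

# Magnitude of the process estimate uncertainty per column and date
uncertainty <- sapply(Sigma, frobenius)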

Fix the `getMissing` function

The interpolated data for N2O, SP, and d18O contain missing values for column 6 at depth 90. This leads to errors in the loops when the PRE is run later on.

Here's some code showing the error:

library(PRE)

# Count non-missing d18O values per column and depth in the raw measurements
with(measurements, tapply(d18O, list(column, depth), function(x) sum(!is.na(x))))

# Raw SP measurements over time for column 6 at depth 90
with(measurements[measurements$column == 6 & measurements$depth == 90, ], plot(date, SP))

# Interpolate the missing values
data <- getMissing()

# Column 6 at depth 90 still contains missing values after interpolation
with(data, tapply(d18O, list(column, depth), function(x) sum(!is.na(x))))

# Compare the raw measurements with the interpolated values
with(measurements[measurements$column == 6 & measurements$depth == 90, ], plot(date, SP))
with(data[data$column == 6 & data$depth == 90, ], points(date, SP))

Use emulators for sensitivity analysis

  • As of now, the sensitivity analysis is conducted using linear regression models.
  • Particularly important: mixed-effects models, where the coefficients are estimated conditionally on the column and depth combinations (see the sketch below).
  • Emulators such as random forests or XGBoost could be used as well.
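
A minimal sketch of the mixed-effects idea, assuming the sensitivity-analysis runs are stored in a data frame sa_data; the data frame, its columns, and the use of lme4 are assumptions, not part of the PRE package:

library(lme4)   # used here only for illustration

# Random slopes per column-depth combination, so that the parameter
# coefficients are estimated conditionally on column and depth;
# column and depth are assumed to be factors here
fit <- lmer(output ~ eta_SP + BD + (eta_SP + BD | column:depth), data = sa_data)
summary(fit)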

Add SRC table

  • After figure 6.1, add a table with both the raw coefficients and the standardized regression coefficients (see the sketch below).
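
A minimal sketch of how such a table could be assembled, assuming a fitted linear model fit with only numeric predictors; the model object is a placeholder:

# Standardized regression coefficients: rescale the raw coefficients by the
# standard deviations of predictor and response
b   <- coef(fit)[-1]                # drop the intercept
mf  <- model.frame(fit)
sx  <- sapply(mf[-1], sd)           # sd of each predictor
sy  <- sd(mf[[1]])                  # sd of the response
src <- b * sx / sy

knitr::kable(data.frame(coefficient = b, SRC = src))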
