
maars's Introduction

Hi 👋, I'm Shamindra Shrotriya

  • ๐Ÿ›๏ธ Principal Data Scientist at Walmart (Retail Intelligence)

  • ๐Ÿซ I graduated with a PhD in Statistics & Data Science from Carnegie Mellon University (CMU), advised by Prof. Matey Neykov.

  • 🔭 In my research I enjoy working on location-scale estimation, shape-constrained estimation, and spatiotemporal modeling (wildfire prediction). I really enjoy learning statistical concepts deeply by listening to experts, or by trying to explain them clearly.

  • 📝 I regularly blog (mostly about statistics) on my website

  • 📄 Learn more about my experience in my CV

  • 😄 Pronouns: He/Him/His

Please browse around, and feel free to get in touch for any collaborations.

maars's Issues

FT: Fix `get_confint()` and `confint()` and assertion checking

This is to be done after #55 is fully completed and merged in #56 .

get_confint() and confint() related

  • Write a prototype of the get_confint() tibble with pivot_longer() working with the confidence level. Ensure that this is a valid tidy format for the table: only the conf.low and conf.high rows should have a non-NA level value
  • Get a prototype of the get_confint() tibble with pivot_wider() working with the confidence level
  • Fix get_confint() to return conf.low/conf.high with the corresponding levels
  • Fix confint to use get_confint() and group_split as discussed [here](https://maarshq.slack.com/archives/D01JL7EAW64/p1615811868000700?thread_ts=1615777275.013500&cid=D01JL)
  • Fix confint to print levels
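As a rough illustration of the tidy long format this issue targets, here is a base-R sketch. The estimate/conf.low/conf.high names follow broom conventions; the stat.type/stat.val long-format columns are hypothetical placeholders for whatever get_confint() settles on, and pivot_longer() would produce this shape from a wide table.

```r
# Hedged sketch: the target tidy format for a get_confint()-style table,
# built with base R (no tidyr) for illustration only.
fit <- stats::lm(mpg ~ wt, data = mtcars)
level <- 0.95
est <- stats::coef(fit)
ci <- stats::confint(fit, level = level)

confint_long <- data.frame(
  term      = rep(names(est), each = 3),
  stat.type = rep(c("estimate", "conf.low", "conf.high"), times = length(est)),
  stat.val  = as.vector(rbind(est, ci[, 1], ci[, 2])),
  level     = rep(c(NA, level, level), times = length(est))
)
# Only the conf.low / conf.high rows carry a non-NA level, which keeps the
# table a valid tidy format that pivot_wider() can later reshape.
```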

Assertion Checking

  • @shamindras : Fix assertion handling for lm-only, maars_lm, maars_glm, and glm-only objects. This will replace code like this
  • Do assertion handling for missing parameters, e.g. running just lm_fit %>% maars::comp_var() with no parameters, and handle this elegantly

FT: Modify Assertion checking + Assumptions prototype

Assertion Checking fixes

Per discussion we should implement the following:

  • Update assertion checking for get_summary() and summary()
  • Update assertion checking for get_confint() and confint()
  • Check whether we need to update assertion checking for print()

Update tibble based assumptions

  • @shamindras : Setup a prototype of get_assumptions() to use tibbles
  • @shamindras : Do the same for the variance titles
  • @shamindras : Do the same for the variance emojis
  • @shamindras : Ensure that this flows through to comp_var() and it contains only tibbles
  • @shamindras : Ensure that the flow through to other parts of the code is easier using the revised tibbles

FT: Add function for the multiplier bootstrap of the variance

We need to implement the following estimators of the variance.

  • multiplier bootstrap

We will later create separate issues for the following bootstrap estimators
once we implement the empirical bootstrap variance estimators

  • empirical bootstrap
  • residual bootstrap
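A minimal sketch of the multiplier bootstrap for OLS coefficients, assuming Rademacher multipliers applied to the score X'(w * r); all names here are illustrative, not the package's implementation:

```r
# Sketch of a multiplier (wild) bootstrap variance estimate for OLS:
# each replicate perturbs the score equation with Rademacher weights.
set.seed(42)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
fit <- lm(y ~ x)

X <- model.matrix(fit)
r <- residuals(fit)
bread <- solve(crossprod(X))                 # (X'X)^{-1}

B <- 1000
reps <- replicate(B, {
  w <- sample(c(-1, 1), n, replace = TRUE)   # Rademacher multipliers
  drop(bread %*% crossprod(X, w * r))        # perturbation of beta-hat
})
V_mult <- stats::cov(t(reps))                # bootstrap covariance estimate
se_mult <- sqrt(diag(V_mult))                # multiplier bootstrap SEs
```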

FT: Post-demo fix issues

This is per #55

Note: This is just a temporary issue, set up to prioritize key items after our presentation and demo.
It will be revised by the group after the presentation

Basic Cleanup/bugfix (High priority)

  • Change Significance heading in our summary output to be abbreviated to Signif: to be consistent with lm() output
  • Change column ordering for bootstrap standard errors and t-statistics rigorously
  • Change p-value format to be 2/3 digits
  • Clean R/scripts_and_filters/experiments/ dir, remove old experiments
  • @shamindras : Change license to GPL-v3 per here
  • @shamindras : Clean up script that gets metadata of maars function metadata from pkgdown
  • @shamindras : Remove all base:: prefix use e.g. base::return()
  • @shamindras : Remove the class(object) output in print.maars_lm
  • @shamindras : Ensure all stats functions use the stats:: prefix
  • @shamindras : Fix majority of .data issues
  • @shamindras : We shouldn't name a variable df since this conflicts with stats::df. We should change this after the demo
  • @shamindras : Create the Boston Housing Data per here and sourced from here
  • @ricfog : Look at Boston Housing Data. Note this is now included in our package boston_housing
  • @ricfog : Clean vignette. Leave the plot in the vignette (to be moved elsewhere)
  • @shamindras : Correct Boston Housing Dataset provided with citation. Add unit tests for the corrections!
  • @shamindras : Clean up spelling notes
  • @ricfog : Change fetch_mms_comp_var_attr weights code to use switch based approach
  • @ricfog : Change multiplier weights code to use switch based approach
  • @ricfog : Switch to model.matrix in sandwich variance and use residuals in the computation
  • @shamindras : Update maars to have a package level doc per here. Add @importFrom statements
  • @shamindras : Change dplyr::summarise to dplyr::summarize for spelling consistency
  • @shamindras : Fix NOTE by adding .gitkeep in vignettes to .RBuildignore
  • @shamindras : Fix NOTE in the DESCRIPTION meta-information by expanding it to a couple of sentences. This is a placeholder and we should refine it before the official CRAN release.
  • @shamindras : Consolidate the boston-housing.R and la-county.R files into a single data-maars.R file. Consolidate test files accordingly
  • @shamindras : Make some minor changes to the vignette
  • @shamindras : Add styling to our code using the Makefile and styler::style_dir(here::here('R')) and for tests
  • @shamindras : Have a make style which does both R and tests
  • @shamindras : Make sure styling does not include vignettes
  • @shamindras : Ensure that all url_check() issues are resolved for CRAN
  • Remove DOI entries from inst/REFERENCES.bib since they can cause CRAN url issues e.g. remove this line
  • @shamindras : Add search functionality to our site. This will make it easier for users to find our documentation and vignettes
  • @shamindras : Use MIT License
  • Fix the no visible binding for global variable errors in our code e.g. per here
  • @ricfog : Remove mixture of %>% and base code, and just break pipes into variables.
  • @ricfog @shamindras : Handle explicitly the %>% of . e.g. here
  • @ricfog : Change all attr(obj, "class") <- c("obj_class_name") to be of the form class(obj) <- "obj_class_name" for consistency
  • Replace the superseded dplyr _all scoped verbs with the across function i.e. see here. This is important since we use dplyr internally, so it is helpful to replace superseded functionality
  • @shamindras @ricfog : We shouldn't name variables data since it can be confused with .data, and is a very generic name. Note that data directly conflicts with utils::data which we use in our vignette, for example

FT: speed up the bootstrap functions

  • reuse the QR decomposition from lm in the regressions fitted in the residual bootstrap
  • check whether the speed gap between our comp_boot_emp function and covBS from the sandwich package justifies the use of that package. Note that covBS does not return the coefficient estimates
  • sample rademacher weights with Rcpp in the multiplier bootstrap
  • sample the indices in empirical and multiplier bootstrap in the same way and try sampling the indices in a list
  • in comp_boot_mul_wgt, change if statement to switch
  • transform the dataframe to tibble only once in comp_boot_mul
  • generate the datasets for the empirical bootstrap "on the fly" when calling fit_reg rather than storing them in a list by calling comp_boot_emp_samples
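The if-to-switch change for the multiplier weights could look roughly like this; the weights_type values are assumptions, and the actual set accepted by comp_boot_mul_wgt may differ:

```r
# Hedged sketch: switch()-based dispatch for multiplier weight sampling,
# replacing an if/else chain. Weight names here are illustrative.
sample_mul_weights <- function(n, weights_type = c("rademacher", "gaussian")) {
  weights_type <- match.arg(weights_type)
  switch(weights_type,
         rademacher = sample(c(-1, 1), size = n, replace = TRUE),
         gaussian   = rnorm(n))
}

set.seed(1)
w_rad <- sample_mul_weights(10, "rademacher")
w_gau <- sample_mul_weights(10, "gaussian")
```

match.arg() gives free assertion checking of the weight name, and switch() keeps adding a new weight type to a one-line change.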

FT: write tidy method for maars_lm

  • @ricfog write a tidy method for maars_lm that uses NextMethod and a warning
  • think about what augment and glance mean for a maars_lm object
  • can broom reference the tidy.maars_lm methods?

Cleanup of lm and related utils

  • Include .gitattributes file
  • We need to set.seed in all our vignette chunks. This is for reproducibility, but also to avoid merge conflicts every time we rebuild the package using make build_package
  • roxygen2: Rewrite documentation of the functions making the style more consistent across functions. For example, we could adopt the style used for the quantile function
  • roxygen2: insert dots at end of sentences in Roxygen. See below
  • roxygen2: use @details responsibly. See below
  • roxygen2: replace var_name with \code{var_name}. See below
  • ggplot2: Replace hardcoded values 1.96 with appropriate outputs from statistics functions, in vignettes
  • Check the order in which we have our functions written in each file
  • ggplot2: We should break up our ggplot2 code into separate plots and then combine them using +. This will make it much easier to manage the code and make it readable
  • ggplot2: Need to be consistent with our ggplot2 themes used in our plots. See ggplot2 theme wrapper below
  • ggplot2: Need to be consistent with the ggplot2 font and other settings used in our plots. As a preference we should only use labs for example. See ggplot2 theme wrapper below
  • ggplot2: Add names_prefix = "q" to pivot_wider for the ggplot2 code i.e. to avoid .data$0.275 issue. Note the "q" here stands for quantile.
  • Create a utils-common.R similar to the selectiveInference package, ensure that common functions are put in this file
  • clean up the tests directory i.e. delete unused R test files
  • clean up the R directory i.e. delete unused R files
  • We should add #' @importFrom rlang .data in all our functions under #' @export

FT: Design the maars_lm class

  • function to create maars_lm object from mss_var which calls comp_var
  • write a as.maars_lm function to be run on an lm object
  • write print method for maars_lm
  • write summary method for maars_lm which calls comp_var_summary
    Add the functions to a single file.
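A possible skeleton for the class design above, assuming maars_lm simply prepends a class to the underlying lm object so that NextMethod() can fall through to the lm methods; the function names here are illustrative:

```r
# Sketch of an as.maars_lm-style constructor: wrap an existing lm fit
# by prepending the "maars_lm" class, keeping all lm behavior intact.
as_maars_lm <- function(mod_fit) {
  stopifnot(inherits(mod_fit, "lm"))
  class(mod_fit) <- c("maars_lm", class(mod_fit))  # c("maars_lm", "lm")
  mod_fit
}

# Minimal print method: add a header, then defer to print.lm.
print.maars_lm <- function(x, ...) {
  cat("maars_lm object:\n")
  NextMethod()
}

fit <- as_maars_lm(lm(mpg ~ wt, data = mtcars))
```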

FT: Fix the comp_var function

  • Check that we define/design inputs consistently for comp_var
  • Ensure that the default values for all inputs are set to NULL for the empirical bootstrap, multiplier bootstrap, and residual bootstrap
  • Perform input assertion checking for all inputs consistently for var_comp
  • Create an if-then-else skeleton for comp_var
  • Update roxygen2 documentation for comp_var
  • Add residual bootstrap to comp_var
  • Update tests for comp_var for empirical bootstrap, multiplier bootstrap, residual bootstrap: (i) add tests against the sandwich package; (ii) add suite of tests that compare variance estimates across the three types of bootstrap

FT: Write a sandwich estimator function + Vignette on `lm` object

  • Update version number for our package using usethis::use_version(). Then update NEWS.md with appropriate feature release information
  • Ensure that we have BibTeX citations correct in our roxygen2 comments for our functions
  • Add a unit test where the naive solve-based variance estimator is unstable by construction. See such an example for a linear model here. Our unit test should show that the qr and sandwich::sandwich estimators are equal to each other, but that the naive solve-based estimator does not match to 7 decimal places
  • Write a benchmark using the bench package against the sandwich::sandwich estimator. See here for an example; we would need to create a bench dir and add a dependency on the bench package.
  • Write the unit test to show the numerical instability of the naive sandwich variance estimator implementation using solve. We should show that it does not match the other sandwich estimator to 7 decimal places. This naive estimator can be included in our test-ols-var.R file and not in our maar package
  • Write a unit test to check our sandwich estimator using the manual solve based approach. The solve based approach should be less computationally efficient.
  • Write a unit test to check our sandwich estimator using Arun's regression approach
  • Update our sandwich estimator using Matrix package and the QR decomposition
  • Set a seed for the unit tests
  • Write a unit test to check our sandwich estimator against the sandwich::sandwich estimator
  • Create a file called ols-var.R and an associated test-ols-var.R function
  • Implement a function to calculate the sandwich estimator of the variance
  • Add correct citations for Models As Approximations papers in README.Rmd
  • Add a citation for the package
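To illustrate the qr-versus-solve point above, here is a small sketch comparing the naive solve()-based bread with a QR-based one. On well-conditioned data the two agree; the QR route avoids explicitly forming X'X, whose condition number is the square of that of X.

```r
# Sketch: two ways to compute the "bread" (X'X)^{-1} of the sandwich.
fit <- lm(mpg ~ wt + hp, data = mtcars)
X <- model.matrix(fit)

bread_solve <- solve(crossprod(X))  # naive: forms and inverts X'X directly
R <- qr.R(qr(X))                    # X = QR, so X'X = R'R
bread_qr <- chol2inv(R)             # (R'R)^{-1} from the triangular factor

all.equal(unname(bread_solve), bread_qr)
```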

FT: Setup bootstrap utils and `lm` wrapper

This is for issue #27

From #24

From #23

  • create function for confidence intervals for broom::tidy() types of output

FT: Miscellaneous

Adding in various items here, that don't quite fit in other topics (for now)

Basic Code improvements (Medium High priority)

  • Change maars:::get_plot() to be an external function, it is currently an internal function
  • For the assumption wording in the Well Specified and Residual Bootstrap sections, we need to change Residuals are assumed to be homoscedastic to Errors are assumed to be homoscedastic
  • @ricfog @shamindras - add comment section headers to all of our code e.g. in maars-lm.R we can have comments that say # summary.maars_lm ----. This section header will allow us to use the Alt + Shift + O to do code folding in our functions. Will make our life much easier to navigate within these long files! This should first be done on maars-lm.R and then reviewed, and then rolled out to other files later
  • Rename function names to be consistent i.e. use functions from pkgdown to get metadata
  • Perhaps we do a CRAN check, based on the RStudio checklist?
  • Rename R scripts and corresponding test R scripts
  • Move code and consolidate R scripts i.e. see whether we can reduce the number of R scripts we are using e.g. get_confint.R could perhaps move to ols-summary.R
  • Make documentation standardized across all our user facing functions. Make sure that we meet the RStudio checklist
  • Ensure that our code works on win-builder. This was recommended by Alex R., to check before we submit on CRAN
  • Change this code to directly call get_mms_rnm_cols_suff
  • @shamindras : Mark our experimental functions as such using the lifecycle package
  • Use Student's t distribution in the computation of the confidence interval for the sandwich here similarly to what we do for confidence intervals. Consult @Arun-Kuchibhotla on how to proceed with this. I (@ricfog ) suggest using the coeftest function in the lmtest package for generating the summary tables (i.e., coefficients estimates, standard errors, etc.)

Basic Code improvements (Medium Low priority)

  • We need to change all of our NULL input defaults to be NA. This is much easier to incorporate into cross joins of grids than NULL values, which can't be assigned to vectors (but NA values can be)

  • Currently for comp_var our assertion checking for valid inputs is done within individual variance estimation functions rather than at the start of comp_var. This can result in many unnecessary computations before a known error will occur. For example consider the following case:

    n <- 1e3
    X <- stats::rnorm(n, 0, 1)
    y <- 2 + X * 1 + stats::rnorm(n, 0, 10)
    lm_fit <- stats::lm(y ~ X)
    # Add valid empirical bootstrap, but failing subsampling with
    # m = n + 1 i.e. m > n
    comp_var(mod_fit = lm_fit,
             boot_emp = list(B = 1e3, m = n + 1), # this works
             boot_sub = list(B = 1e3, m = n + 1), # this will fail since m > n
             boot_res = NULL,
             boot_mul = NULL)

The above code returns the correct assertion error for boot_sub, but it runs the large boot_emp computation first! This is rather unfortunate. Perhaps we should do global input assertion checking outside of our boot_ functions rather than inside them, to catch such errors early and avoid the unnecessary computation.
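One way to realize this up-front checking is a single validator run before any bootstrap computation starts. check_boot_specs() and its behavior are hypothetical, mirroring the comp_var() arguments in the example above; note that boot_emp legitimately allows m > n (sampling with replacement), while boot_sub cannot.

```r
# Hedged sketch: validate all bootstrap specs up front, before any
# replication is run, so a bad boot_sub fails immediately.
check_boot_specs <- function(n, boot_emp = NULL, boot_sub = NULL,
                             boot_res = NULL, boot_mul = NULL) {
  specs <- Filter(Negate(is.null),
                  list(boot_emp = boot_emp, boot_sub = boot_sub,
                       boot_res = boot_res, boot_mul = boot_mul))
  for (nm in names(specs)) {
    B <- specs[[nm]][["B"]]
    if (is.null(B) || B < 1) stop(sprintf("`%s`: B must be a positive integer", nm))
  }
  # Subsampling draws without replacement, so m > n can never succeed:
  if (!is.null(boot_sub) && boot_sub[["m"]] > n) {
    stop(sprintf("`boot_sub`: m (%s) must not exceed n (%s)", boot_sub[["m"]], n))
  }
  invisible(TRUE)
}

n <- 1e3
err <- tryCatch(
  check_boot_specs(n,
                   boot_emp = list(B = 1e3, m = n + 1),  # valid: with replacement
                   boot_sub = list(B = 1e3, m = n + 1)), # invalid: m > n
  error = conditionMessage
)
```

Here the boot_sub error surfaces before any empirical bootstrap work is done.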

  • Memory Profiling: Use more memory saving sampling like rsample does. We should use the lobstr::obj_size to do all our memory profiling of our returned maars_lm objects. See the comment by Dr. Silge in the linked post about using this functionality. The end result is our maars_lm objects should not save all the data or indices for bootstrap (ideally) but the pointers to the B replicates of these datasets, ready to be called as needed using lazy evaluation
  • Warnings: Perhaps we use message() instead of warning() given this post

Speed up the bootstrap functions

This is from #41

  • Use the collapse package to speed up sampling
  • reuse the QR decomposition from lm in the regressions fitted in the residual bootstrap
  • check whether the speed gap between our comp_boot_emp function and covBS from the sandwich package justifies the use of that package. Note that covBS does not return the coefficient estimates
  • sample rademacher weights with Rcpp in the multiplier bootstrap
  • sample the indices in empirical and multiplier bootstrap in the same way and try sampling the indices in a list

Ensure our estimator calculates weights correctly

This is from #16

  • See if we can use lm.fit weights correctly in our qr variance sandwich estimator

Improve model diagnostics

This is from #34

  • @ricfog - add more testing for diagnostics
  • @ricfog - add testing. Try the vdiffr package
  • @ricfog - allow user to feed their own set of weights into the function
  • @ricfog - allow users to decide which objects should be added to the plots

Outstanding items from our Wharton demo (03/19/2021) (Low priority)

  • Jeff: Consider using vector input for summary
  • Jeff: Look into stargazer output tables
  • RB: Write down documentation for what the columns return
  • RF/AK: Weights for linear regression, and diagnostics can be done after the paper is written

FT: Fix and consolidate code in `maars-lm` and `ols-summary`

General improvements:

  • we should always return by default all estimates that have been computed through comp_var. Cases like this should be avoided. We can use this code to check which estimates are available in comp_var. If everything is set to FALSE, then all available elements are returned
  • there are multiple (3-5) functions that call the cli package and contain only a few lines of code. We should remove these functions and insert their code directly into the functions where they are actually used. However, before going ahead with this consolidation we should make sure that each of these functions is used only once

Clean/fix/more documentation:

  • potentially change the name of the function check_fn_args_summary because it is very similar to [check_fn_args](https://github.com/shamindras/maars/blob/iss-55-post-demo-fixes/R/utils-common.R#L92). Also check whether this information is correct
  • add more examples in the documentation of get_summary
  • get_assumptions should return a tibble
  • comp_var needs more examples in its documentation
  • remove the lines of code relative to stats::lm in summary.maars_lm because the corresponding output is not printed
  • rename get_mms_summary_print_lm_style and do not print anything within the function. This function should only return the summary table with the significance codes

Consolidation:

  • remove get_mms_rnm_cols_suff. This function was created in an old iteration of the code and is not used anymore
  • remove fetch_mms_comp_var_attr. There is almost no synergy at play in this function, which currently serves as a very simple wrapper for many other functions. This only makes the dependencies between functions we need to be aware of more complicated. We should be calling those specific functions directly (up for a [short] discussion given @shamindras cares about this a lot :) )

ENH: Variance estimators return lists

  • rewrite the sandwich function to return a list
  • rewrite comp_sand_var2 to handle the list outputs generated by the estimators of the variance
  • change the bootstrap functions such that they return a list and compute get_summary within the function
  • update the documentation of comp_var2 and of the estimators of the variance
  • make the names of the lists generated by the estimators of the variance consistent
  • adapt tests to handle the list outputs generated by the variance estimators
  • Adapt tests to handle the new function comp_sand_var2
  • create a function to nicely return a string containing the assumptions behind each computation of the variance (e.g., call it get_assumptions)

Add in Github Actions for benchmarks

  • We should add in Github actions for benchmarks with the bench package e.g. see here for dplyr setup.
  • This is still in an experimental stage, so we should examine the details of the bench package just prior to deploying this feature, particularly the cb_fetch function
  • Need to ensure that this works with bench::press and bench::mark since we use both these functions

Emoji characters on Windows

The special emoji characters used in the specification headers don't render correctly on Windows when printed inside of the cli functions. For example, this is what I see for the well-specified heading:

-- <U+0001F4C9><U+0001F4C8>: Well Specified Model: Assumptions ----

This could be fixed if the emoji were manually passed to cat()

cat("\U0001F4C9\U0001F4C8")

But that might be difficult to coordinate with the cli functions. It also won't work if the user is working with R GUI, R CMD, or another environment without UTF-8 support. I might suggest instead checking if the environment supports UTF-8 characters with the cli::is_utf8_output() function and only including the emoji if that returns TRUE.
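A sketch of the suggested guard, using cli::is_utf8_output() when cli is available; the base-R l10n_info() fallback is an assumption of this sketch, not what the package does:

```r
# Hedged sketch: only emit emoji in headings when the output stream
# supports UTF-8, per the cli::is_utf8_output() suggestion above.
supports_utf8 <- if (requireNamespace("cli", quietly = TRUE)) {
  cli::is_utf8_output()
} else {
  isTRUE(l10n_info()[["UTF-8"]])  # fallback check on the current locale
}

heading <- if (supports_utf8) {
  "\U0001F4C9\U0001F4C8: Well Specified Model: Assumptions"
} else {
  "Well Specified Model: Assumptions"  # plain fallback for non-UTF-8 consoles
}
cat(heading, "\n")
```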

Write lm.fit wrapper

  • Create wrapper for lm.fit that includes our sandwich estimator computation
  • Create wrapper for lm.fit summary to incorporate sandwich estimator output. Make this toggled with existing output. See here for an example

Rename `maar` to `maars`

As Arun mentioned on 11/18/2020, this is to emphasize that approximations is plural, not singular as in approximation (without the s).

DOC: Paper Submission

For our paper submission on overleaf, we need to start making the following changes:

  • Make abstract modular in yaml file and call it 00-abstract.Rmd, and source it like other chunks
  • @shamindras Look into autonaming of chunks in our files. We can try using the namer package to consistently automate this
  • @shamindras In make paper target put paper.pdf, code.R, and code.html in a paper/submission subdirectory
  • @shamindras: In make paper target zip the 3 created files [paper.pdf, code.R, code.html] together in a single submission.tar.gz file in the paper/submission subdirectory
  • @ricfog create modular Rmd's (one for each section) and copy the overleaf into the single files (just do one section for now)
  • @ricfog update scaffolding (e.g., authors and metadata) and use the strict JSS markdown
  • Create a JSS template using rticles package, see here
  • Setup JSS paper in Rmd using the rticles package
  • Add JSS Rmarkdown paper to Rbuildignore
  • Clean up the overleaf dir, and leave only essential maars related files
  • Change maar to maars across document
  • Don't use \texttt in the paper; instead use \pkg{maars}, \proglang{R}, or \code{}
  • Move away from listing for code blocks, to just R console output e.g. see Section 3.2 here
  • Always display code output after every R console code block

ENH: Fix Bootstrap Issues

  • Fix the sqrt(n) scaling in the multiplier bootstrap variance
  • check assumptions for empirical and multiplier bootstrap
  • add additional tests for the bootstrap
  • Fix the sqrt(m) scaling in the empirical bootstrap variance. Check this
  • Change empirical bootstrap to just generate indices inside purrr::map and only store the output glm or lm object
  • add testing for residual bootstrap
  • check the computation of the statistic in the get_summary function here
  • add testing for confidence intervals. Check why this code does not work for stats::glm in the testing here
  • add testing for variance estimation and summaries
  • Add residual bootstrap
  • Remove default value of B in empirical_bootstrap variance
  • Fix the calculation of standard errors i.e. ensure that it is divided by sqrt(n)
  • compute covariance matrix in all types of bootstrap

FT: Subsampling

Implement subsampling within empirical bootstrap:

  • add additional argument boot_sub. The user should be passing in something like boot_sub = list(B=100, m=50)
  • add an additional argument replace which is TRUE by default in both comp_boot_emp_sample and comp_boot_emp
  • in comp_mms_var, add assertion checking for B, m
  • A unit test here is to do n-out-of-n subsampling i.e. return the full data set, and we can then check that we match all estimates identically to lm
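The proposed n-out-of-n unit test rests on the fact that an m = n subsample drawn without replacement is just a row permutation, so the refit must reproduce lm() on the full data exactly. A standalone sketch of that check:

```r
# Sketch of the n-out-of-n subsampling identity check: a permutation of
# the rows leaves the OLS estimates unchanged.
set.seed(1)
n <- 50
dat <- data.frame(x = rnorm(n))
dat$y <- 2 + 3 * dat$x + rnorm(n)

fit_full <- lm(y ~ x, data = dat)

idx <- sample(seq_len(n), size = n, replace = FALSE)  # m = n, no replacement
fit_perm <- lm(y ~ x, data = dat[idx, ])

all.equal(coef(fit_full), coef(fit_perm))
```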

FT: (pseudo) t-test for coefficients

Create

  • a function that tests H_0: R beta = r vs. H_1: R beta \neq r, where beta is the regression coefficient
  • another function (a wrapper of the first) to test H_0: beta = 0 vs. H_1: beta \neq 0, but which also allows the user to specify hypotheses such as H_0: beta(i) >= 0
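A hedged sketch of such a test using an HC0 sandwich covariance, where W = (Rb - r)' [R V R']^{-1} (Rb - r) is referred to a chi-squared distribution with q = rank(R) degrees of freedom. This is the generic Wald construction, not the package's implementation:

```r
# Sketch: Wald test of H_0: R beta = r with a sandwich (HC0) covariance.
set.seed(7)
n <- 500
x <- rnorm(n)
y <- 1 + 0.5 * x + rnorm(n) * (1 + abs(x))  # heteroscedastic errors
fit <- lm(y ~ x)

X <- model.matrix(fit)
u <- residuals(fit)
bread <- solve(crossprod(X))
meat  <- crossprod(X * u)                   # X' diag(u^2) X
V     <- bread %*% meat %*% bread           # HC0 sandwich covariance

R_mat <- matrix(c(0, 1), nrow = 1)          # test the slope coefficient only
r_vec <- 0
b <- coef(fit)

delta <- R_mat %*% b - r_vec
W <- drop(t(delta) %*% solve(R_mat %*% V %*% t(R_mat)) %*% delta)
p_value <- pchisq(W, df = nrow(R_mat), lower.tail = FALSE)
```

The wrapper in the second bullet would default to R = identity and r = 0, while still letting the user pass their own R and r.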

FT: Write vignettes

  • vignette 1: linear regression lesson (from start to end)
  • vignette 2.1: consequence of misspecification on variance estimators and confidence intervals
  • vignette 2.2: empirical coverage and average width of intervals under misspecification
  • vignette 3.1: effect of increasing B in (n-out-of-n) empirical / multiplier / (sqrt(n) out of n) subsampling bootstrap samples on coverage when compared to sandwich
  • vignette 3.2: effect of increasing m in m-out-of-n bootstrap/subsampling on coverage when compared to sandwich.

Items on vignette based on LA county data (partially inherited from #13):

  • remove unnecessary packages from the first snippet
  • solve issues with citations
  • center tables
  • remove explicit calls to packages (e.g., ggplot2:: or dplyr::). Simply load all packages in a snippet in the beginning of the vignette
  • remove latex code (or move it to another vignette)

ENH: miscellaneous

  • list of assumptions to be included in summary for boot_emp, boot_res, etc.
  • think about how to deal with tidy, glance, and augment methods on maars_lm
  • set weights in comp_var for multiplier bootstrap by default to be rademacher
  • fix assumptions in estimators of the variance
  • add documentation in mss_var file
  • add documentation for methods
  • @Arun-Kuchibhotla: Add more specific citations in our documentation

FT: Future Feature requests (AK)

@Arun-Kuchibhotla (2021-03-18)

Also, this reminds me of another important function to be implemented: the usual F-tests for comparing models; the classical ones are implemented as anova() and Anova(). I can never tell the difference between these two. The sandwich-based tests follow readily. Also, see https://bookdown.org/ccolonescu/RPoE4/heteroskedasticity.html
I should mention that some of these robust versions are already implemented in SAS (and SPSS??) and it might be a good idea to compare the numbers to other packages or programs to ensure correctness. This can be done at the very end.

FT: Updates in preparation for upcoming demo

get_summary and summary.maars_lm:

  • We should make sure digits = 3 is reflected in all columns in summary.maars_lm console output
  • Update the assumptions wording "all are valid under i.n.i.d (conservative under i.i.d)" to refer to "independent" only
  • In boot_ functions include more assumptions for given bootstrap models e.g. include B, m for empirical bootstrap, just B for residual bootstrap, and B, weights_type for multiplier bootstrap
  • Remove the left joined summary(maars_lm) version? Keep things minimal, since we have solved all problems with our sequential summary display. For discussion.

plots

  • Write a get_plot function with default options
  • Cook's Distance point scaling should be smaller
  • QQ plot points should be smaller
  • Need to add confidence interval plots
  • Need to add variable QQ plots - from JSM
  • Make plot titles more specific

print.maars_lm

  • We should print the assumptions for all computed maars standard errors, i.e. the sand and well_specified assumptions by default, along with those for any other standard errors

get_assumptions

  • Need to check the format of the printed assumptions

conf_int

  • AK: Mentioned to just show term, conf.low, conf.high. Fix it to be the same as confint(lm)
  • @shamindras Perhaps we can still return the whole tidy tibble? We should be consistent with broom. Should discuss together

data

documentation

  • Change README.Rmd to use devtools rather than remotes as the preferred package installation
    method

summary

  • @ricfog change the statistic based on the F-distribution to one based on the Chi square

slides

  • @shamindras can setup first draft, then share for discussion and iteration

bugfix

  • Fix the no visible binding for global variable errors in our code e.g. per here

vignettes

  • Use the la_county tibble from maars
  • These are important, but we could quickly write these next week once we have our main features done
  • Table 1 from Buja et al.
  • get_plot, get_summary, summary.maars_lm, and get_confint, confint.maars_lm functionality for maars_lm objects

FT: Model diagnostics

  • create function for model diagnostics
  • add option for weights
  • add focal slope diagnostics
  • add nonlinearity detection diagnostics
  • add focal reweighting variable diagnostics
  • add normal QQ plot for bootstrap
  • improve documentation

FT: Add confidence intervals

  • create a function for confidence intervals for bootstrap types of output
  • check the computation of confidence intervals for m-out-of-n bootstrap

FT: Function for confidence intervals

  • write initial function to generate confidence intervals for a maars_lm object
  • allow for the several different types of corrections (e.g., see "Satterthwaite" correction here)
  • create a print method for confidence intervals. confint and get_confint should do the same thing
