glmm-course

Generalized Linear Mixed-Effects Modeling in R

This two-day workshop will focus on generalized linear mixed-effects models (GLMMs; hierarchical/multilevel models) using the R programming language. We will concentrate on practical elements of GLMMs such as choosing a modeling approach, the process of building up and understanding a model, model checking, and plotting and interpreting model output. We will focus mainly on linear mixed-effects models, but we will also cover generalized linear mixed-effect models, variance and correlation structures, and zero-inflated models.

By the end of the two-day workshop, you will be able to develop models using your own data and troubleshoot the main problems that arise in the process. You will also become familiar with a number of R packages that can fit GLMMs (e.g. lme4, nlme, glmmTMB) and R packages to help manipulate and plot your data and models (e.g. dplyr, ggplot2, broom).

Prior to taking this workshop, you should be reasonably comfortable with R and linear regression, and ideally have some experience with GLMs (e.g. logistic regression). Some background with dplyr and ggplot2 would be helpful.

Downloading these notes/exercises

https://github.com/seananderson/glmm-course

Click "Clone or download", then "Download ZIP".

Generating the exercises

Open the file glmm-course.Rproj by double-clicking on it. Run the following:

source("99-make.R")

Then look in the exercises folder. Lines marked with # exercise in the source notes will be left blank in this version for you to fill in.

glmm-course's People

Contributors

jdunic, seananderson


glmm-course's Issues

Add a set of GLM exercises

Something a bit like the variance-structure exercises, but ideally with simple real data sets. Simulated data would also be okay if there is some believable context, as with the random-effects grouping exercises.

Make sure to include exercises that deal with interpreting the coefficients.

Make sure to include some overdispersed examples.
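A minimal sketch of what such an overdispersed example could build on (hypothetical parameter values; the negative binomial simulation via rnbinom() is one of several ways to induce overdispersion):

```r
# Simulate count data where the variance exceeds the mean (overdispersion).
set.seed(42)
n <- 200
x <- runif(n)
mu <- exp(0.5 + 1.2 * x)                 # log-linear mean
y_pois <- rpois(n, lambda = mu)          # Poisson: variance equals the mean
y_nb <- rnbinom(n, mu = mu, size = 1.5)  # neg. binomial: variance = mu + mu^2/size

# y_nb is overdispersed relative to a Poisson with the same mean, which a
# quasi-Poisson or negative binomial GLM (e.g. MASS::glm.nb) can accommodate.
c(mean = mean(y_nb), variance = var(y_nb))
```

Comparing coefficient estimates and standard errors from glm(..., family = poisson) versus a negative binomial fit on y_nb would make a natural exercise question.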

broom package functions fail under R >= 4

Hi Sean,
Upon running your random-slopes and temporal-autocorrelation exercises, I discovered that the broom package throws an error, which can be rectified by loading library(broom.mixed) in its place.
Great course materials, by the way; they really helped with my analysis of a temporally correlated fish-movement dataset.
Cheers,
Ross
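A short sketch of the workaround described above: broom.mixed supplies the tidy()/glance() methods for mixed models that were split out of broom. This uses lme4's built-in sleepstudy data as a stand-in for the course exercises:

```r
library(lme4)
library(broom.mixed)  # provides tidy()/glance() methods for merMod objects

# Random-slopes model on the sleepstudy data shipped with lme4
m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

tidy(m)    # tibble of fixed- and random-effect terms with estimates
glance(m)  # one-row model summary (sigma, logLik, AIC, ...)
```

Loading broom.mixed instead of (or after) broom is all that is needed; the function names are the same.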

zero-inflated code - tibble error

Hey,

I tried to follow the code in the zero-inflated markdown and got a tibble error. Not sure why, since the code never calls a function from tibble directly:

library(ggplot2)

inverse_logit <- function(x) plogis(x)
set.seed(123)
d <- data.frame(f = factor(rep(letters[1:10], each = 10)), x = runif(100))
u <- rnorm(10, sd = 2)  # random intercepts by group
d$eta <- with(d, u[f] + 1 + 4 * x)
pz <- 0.2  # probability of excess zeros
zi <- rbinom(100, size = 1, prob = pz)
d$y <- ifelse(zi, 0, rpois(100, lambda = exp(d$eta)))
d$zi <- zi

ggplot(d, aes(x, y, colour = f, shape = as.factor(zi))) + geom_point() +
    scale_shape_manual(values = c(20, 21))

all works

but

ggplot(d, aes(x, y, colour = f)) + geom_point() + facet_wrap(~zi)

fails, returning Error: 'as_tibble' is not an exported object from 'namespace:tibble'
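(The error most likely reflects a version mismatch between the installed ggplot2 and tibble packages rather than a bug in the markdown; reinstalling or updating both usually resolves it. That is an assumption based on the error message, not something confirmed in this thread.) For reference, the zero-inflated structure simulated above can be recovered with glmmTMB, which is one of the packages the course covers; this sketch re-creates the data so it runs standalone:

```r
library(glmmTMB)

# Re-create the simulated zero-inflated data from the issue above
set.seed(123)
d <- data.frame(f = factor(rep(letters[1:10], each = 10)), x = runif(100))
u <- rnorm(10, sd = 2)
d$eta <- with(d, u[f] + 1 + 4 * x)
zi <- rbinom(100, size = 1, prob = 0.2)
d$y <- ifelse(zi, 0, rpois(100, lambda = exp(d$eta)))

# Zero-inflated Poisson with a random intercept by group
fit <- glmmTMB(y ~ x + (1 | f), ziformula = ~ 1, family = poisson, data = d)

plogis(fixef(fit)$zi)  # estimated probability of excess zeros (true value: 0.2)
```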

Provenance of chopstick data

I'm wondering what the chopstick data really represent. The original reference defines food-pinching efficiency as:

Food-pinching efficiency. The subject would sit on an adjustable seat, and was required to pick up peanuts from a dish (150 mm diameter) in front of the subject (450 mm) to a cup (200 mm high and 70 mm diameter) under the mouth for 1 min. During pinching, the experimenter counted the numbers of peanuts in the cup. The reason for using peanuts was that it was difficult to pick them up and hence was more representative as a measure of the effect of the length of the chopsticks on food-pinching efficiency. Fig. 3 demonstrates the workplace layout and task of the food-pinching.

This would suggest count data to me? Table 5 in the original paper gives mean values by chopstick length that are consistent with the data here, but the values in this data set are continuous rather than counts. I don't have access to the paper on the Elsevier web site, so I can't see if there is supplemental material there (although I kind of doubt it for a paper from 1991). The data sets themselves seem to be floating around in a variety of places:

remotes::install_github("jr-packages/jrModelling")
?jrModelling::chopsticks

credits https://bmdatablog.files.wordpress.com/2016/04/chopsticks.pdf , which refers to https://www.udacity.com/api/nodes/4576183932/supplemental_media/chopstick-effectivenesscsv/download; I think these are the same data referred to here: https://towardsdatascience.com/chopstick-length-analysis-2c4c7e9b6136

(The jrModelling data set only has 2 of the 4 chopstick lengths).

There are only 171 unique values out of 186 rows, so I guess it's possible these data are rescaled versions of integer counts?

Create an entire exercise on interpretation of coefficients

Cover LMs, GLMs, and LMMs with models such as:

y = b0 + b1 * x
log(y) = b0 + b1 * x
log(y) = b0 + b1 * log(x)

Also cover factor/binary predictors and centered/scaled predictors.

Give output or coefficient values for hypothetical models and ask to answer specific questions about interpretation or quantifying values.
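A minimal worked example of the kind of interpretation question such an exercise could ask (the intercept and slope values here are hypothetical, chosen purely for illustration):

```r
# Simulate data from log(y) = b0 + b1 * x with b0 = 0.5, b1 = 0.2
set.seed(1)
x <- runif(100, 0, 10)
y <- exp(0.5 + 0.2 * x + rnorm(100, sd = 0.1))

m <- lm(log(y) ~ x)
b1 <- coef(m)[["x"]]

# On the log scale, a one-unit increase in x multiplies y by exp(b1),
# i.e. an approximate percent change of:
100 * (exp(b1) - 1)  # ~ 22% by construction, since exp(0.2) is about 1.22
```

An exercise could then ask: "by what percentage does y change per unit of x?", or repeat the question with log(x) as the predictor, where b1 becomes an elasticity (percent change in y per percent change in x).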
