Comments (8)
tab_model() exponentiates coefficients by default, for easier interpretation. See also the messages from the underlying functions of the parameters package.
test <- glm(Sepal.Length ~ Petal.Length, family = Gamma(link = "log"), data = iris)
summary(test)
#>
#> Call:
#> glm(formula = Sepal.Length ~ Petal.Length, family = Gamma(link = "log"),
#> data = iris)
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 1.493970 0.013011 114.82 <2e-16 ***
#> Petal.Length 0.070164 0.003136 22.38 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for Gamma family taken to be 0.00456538)
#>
#> Null deviance: 2.97256 on 149 degrees of freedom
#> Residual deviance: 0.67623 on 148 degrees of freedom
#> AIC: 148.13
#>
#> Number of Fisher Scoring iterations: 3
parameters::model_parameters(test)
#> Parameter | Log-Prevalence | SE | 95% CI | t(148) | p
#> -------------------------------------------------------------------------
#> (Intercept) | 1.49 | 0.01 | [1.47, 1.52] | 114.82 | < .001
#> Petal Length | 0.07 | 3.14e-03 | [0.06, 0.08] | 22.38 | < .001
#>
#> Uncertainty intervals (profile-likelihood) and p-values (two-tailed)
#> computed using a Wald t-distribution approximation.
#>
#> The model has a log- or logit-link. Consider using `exponentiate =
#> TRUE` to interpret coefficients as ratios.
parameters::model_parameters(test, exponentiate = TRUE)
#> Parameter | Prevalence Ratio | SE | 95% CI | t(148) | p
#> ---------------------------------------------------------------------------
#> (Intercept) | 4.45 | 0.06 | [4.34, 4.57] | 114.82 | < .001
#> Petal Length | 1.07 | 3.36e-03 | [1.07, 1.08] | 22.38 | < .001
#>
#> Uncertainty intervals (profile-likelihood) and p-values (two-tailed)
#> computed using a Wald t-distribution approximation.
Created on 2024-07-09 with reprex v2.1.1
from sjplot.
This makes a lot of sense, thanks for pointing it out; I missed the incorporation of the parameters package here.
Just in case reviewers ask to see both the coefficient and the ratio, am I correct that there is no way to turn off the exponentiation in the tab_model() command?
Or, alternatively, is there a way to specify other arguments (e.g., ci_method = "satterthwaite") from the parameters package within the tab_model() command, to ensure compatibility with lmer() models?
You can use transform = NULL to turn off the exponentiation. There are plenty of options in the function, which makes it difficult to keep an overview. You can use df.method to request, e.g., the Satterthwaite approximation.
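Putting both suggestions together, a minimal sketch (assuming sjPlot and lme4 are installed; the argument names transform and df.method are the ones mentioned above, and Satterthwaite degrees of freedom for lmer models may additionally require the lmerTest package):

```r
library(sjPlot)
library(lme4)

# Gamma GLM from the reprex above: transform = NULL shows the raw
# log-scale coefficients instead of the default exponentiated ratios
test <- glm(Sepal.Length ~ Petal.Length, family = Gamma(link = "log"), data = iris)
tab_model(test, transform = NULL)

# Mixed model: df.method selects the df approximation,
# e.g. Satterthwaite (availability may depend on lmerTest)
m <- lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris)
tab_model(m, df.method = "satterthwaite")
```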
Thank you so much! Sorry I was unable to find these before; I'm mostly self-taught in R, and trying to branch out is pushing my fluency a bit.
I hope you have a great rest of your week. Thanks for your work on all these amazing tools.
Hello again,
Sorry to bother you, but I found a couple of models, specifically those using heteroscedasticity-corrected SEs, where the p-values are slightly off compared to the coeftest() output. Is there another modification I can't find that I need to add to make the tables match that output? Also, if there is a better place to put these troubleshooting inquiries than this GitHub page, please let me know; I'm just getting used to the GitHub ecosystem.
library(lmtest)   # coeftest()
library(sandwich) # vcovHC()
library(sjPlot)   # tab_model()

head(iris)
test <- glm(Sepal.Length ~ Petal.Length + Species + Sepal.Width + Petal.Width,
            family = Gamma(link = "log"), data = iris)
coeftest(test, vcov = vcovHC(test))
tab_model(test,
          df.method = "wald", transform = NULL,
          vcov.fun = vcovHC(test)) # p-values don't match the coeftest output
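One way to localize a discrepancy like this (a sketch, not from the thread; it assumes the `vcov` argument of `parameters::model_parameters()`, which in recent parameters versions accepts a covariance matrix) is to compare coeftest() against model_parameters() using the identical robust covariance. If those two agree but tab_model() differs, the mismatch sits in the table layer (e.g. how vcov.fun is interpreted or which degrees of freedom are used) rather than in the robust standard errors themselves:

```r
library(lmtest)     # coeftest()
library(sandwich)   # vcovHC()
library(parameters) # model_parameters()

test <- glm(Sepal.Length ~ Petal.Length + Species + Sepal.Width + Petal.Width,
            family = Gamma(link = "log"), data = iris)

# Both calls are handed the same precomputed robust covariance matrix,
# so any difference between them and tab_model() points at the table layer.
coeftest(test, vcov = vcovHC(test))
model_parameters(test, vcov = vcovHC(test))
```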
(One thing I forgot to note: when I rerun the same models as lm() objects, this discrepancy no longer occurs.)
library(lmtest)     # coeftest()
library(sandwich)   # vcovHC()
library(sjPlot)     # tab_model()
library(parameters) # model_parameters()

head(iris)
test2 <- lm(Sepal.Length ~ Petal.Length + Species + Sepal.Width + Petal.Width,
            data = iris)
coeftest(test2, vcov = vcovHC(test2)) # uniformity is good (similar to the normality issue in lm models); heteroscedasticity seems somewhat better but not great; posterior predictive check improved
tab_model(test2,
          df.method = "wald", transform = NULL,
          vcov.fun = vcovHC(test2)) # statistics don't match the summary output
model_parameters(test2, vcov = vcovHC(test2))
Realizing now that these don't actually matter now that I've switched to gamma regression, since homogeneity of variance isn't an assumption of gamma regression. I'll close this, as the rest of the models are squared away. Have a great day!