taddylab / bds
Code and examples from Business Data Science
Pg 143/144 & https://github.com/TaddyLab/BDS/blob/master/examples/paidsearch.R
Text: "Figure 5.3 shows the log difference between average revenues in each group."
Caption: "The log-scale average revenue difference .."
However, in the code, both plots use totalrev and are created before semavg is defined. The total vs. average log differences produce the same pattern on different scales, but this initially confused me as I walked through the code/example.
Relatedly, let's assume the graphs plot the mean instead of the total, so they match the model. The graphs first take the average (or the total, in the current code) and then take the log of that average, i.e. log(mean(revenue)). The model uses y from semavg, which takes the log and then the mean; in the code, y is defined as y=mean(log(revenue)). Whether we use sum or mean in the model, it seems like we would want to take the log after aggregating. This seems especially true if we were going to use sum rather than mean.
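To see concretely that the two orderings differ, here's a toy illustration with made-up numbers (not from the data); by Jensen's inequality the log of the mean is at least the mean of the log:

# toy numbers: the order of log() and mean() matters
rev <- c(100, 200, 400)
log(mean(rev))  # log of the average revenue
mean(log(rev))  # average of the log revenues; smaller, by Jensen's inequality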
Original code, mean(log(revenue)):
library(data.table)
sem <- as.data.table(sem)
sem_avg_log <- sem[,
list(d=mean(1-search.stays.on), y=mean(log(revenue))),
by=c("dma","treatment_period")]
setnames(sem_avg_log, "treatment_period", "t") # names to match slides
sem_avg_log <- as.data.frame(sem_avg_log)
coef(glm(y ~ d*t, data=sem_avg_log))['d:t']
gives -0.006586852
log(mean(revenue)):
sem_log_avg <- sem[,
list(d=mean(1-search.stays.on), y=log(mean(revenue))),
by=c("dma","treatment_period")]
setnames(sem_log_avg, "treatment_period", "t") # names to match slides
sem_log_avg <- as.data.frame(sem_log_avg)
coef(glm(y ~ d*t, data=sem_log_avg))['d:t']
gives -0.005775498
If we were to use sum rather than mean and then take the log, i.e. log(sum(revenue)):
sem_log_sum <- sem[,
list(d=mean(1-search.stays.on), y=log(sum(revenue))),
by=c("dma","treatment_period")]
setnames(sem_log_sum, "treatment_period", "t") # names to match slides
sem_log_sum <- as.data.frame(sem_log_sum)
coef(glm(y ~ d*t, data=sem_log_sum))['d:t']
gives -0.005775498, which is the same as with log(mean(revenue)).
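That match makes sense, I think: log(sum(x)) = log(n) + log(mean(x)), and since every DMA presumably covers the same set of weeks in each period, the log(n) shift is a constant absorbed by the main effects, leaving the d:t interaction unchanged. A quick toy check (made-up numbers):

# log(sum) is log(mean) plus a constant shift of log(n)
x <- c(3, 5, 7, 9)
log(sum(x))                    # 3.178054
log(length(x)) + log(mean(x))  # identical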
If we were to do sum(log(revenue)), which would clearly be wrong because the control is a larger group, then we'd get -0.2534986...
Is there a reason we should specifically use mean(log(revenue)) rather than log(mean(revenue))?
Some of the example code in the repo (and the book) does not work on R 4.0. Instead of separate PRs for individual fixes, maybe I can point you to my forked repo with Jupyter notebooks reproducing the examples chapter by chapter (with minimal comments, mostly reformatting the original repo's inline comments into Markdown cells).
First, thanks for your work, which provides a really useful knowledge source for the data science community. Is there a list of errata already available? Thanks.
Error when running the semiconductor.R code in the out-of-sample prediction experiment. In the loop that runs the experiment, line 72 defines the rcut object, but cutvar is never specified to define the data.
Any clarifications on this?
Thanks
When running line 77 of timespace.R (summary(ARdj <- glm(dja[2:n] ~ d...), RStudio crashes and reports a "Segmentation fault" in the Bash terminal. I read on StackOverflow that this could be due to limited memory. I'm running Crostini/Debian on a Pixelbook with 8 GB RAM and approx. 80 GB free disk space. Can anyone suggest a possible solution or troubleshooting tips?
Hi,
The link to the book's datasets that appears in the Kindle edition (Introduction) is broken.
Should be: http://taddylab.com/bds.html
(LOVE the book!)
First off, thanks for your work and I'm excited for the second edition!
I have been reproducing some examples in the book with the Julia language and came across something that threw me for a loop in the introduction. This is the first time I'd seen the response variable as a matrix with the lm function. When the regression is done with a matrix as the response variable, the lm documentation notes:
"If response is a matrix a linear model is fitted separately by least-squares to each column of the matrix."
This all made sense, but I was getting different coefficients than in the stocks.R script provided in this repo. It turns out that lm will drop records in the response matrix if any of the variables have missing values. Since the GOOGL ticker has missing values from 2010-01-01 until 2014-03-01, it drops those records for all the other tickers as well before fitting the models.
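Here's a minimal toy example (not from the repo) showing the behavior:

# an NA in one response column drops that row for every column
y1 <- c(1, 3, 2, 5)
y2 <- c(2, NA, 6, 8)
x  <- c(1, 2, 3, 4)
coef(lm(cbind(y1, y2) ~ x))  # row 2 is dropped from both fits
coef(lm(y1 ~ x))             # fit alone, y1 keeps all four rows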
So the original plot of coefficients was this: [plot not shown]
If all the available data is used, then the plot becomes: [plot not shown]
This changes the interpretation of the plot slightly for most stocks, but Facebook (FB) is likely the most notable change.
Here's the code I used to create the plots:
library(tidyverse)
# get all stocks data
url_stocks <- "https://raw.githubusercontent.com/TaddyLab/BDS/master/examples/stocks.csv"
stocks <- read.csv(url_stocks)
stocks$RET <- as.numeric(as.character(stocks$RET))
stocks$date <- as.Date(as.character(stocks$date), format="%Y%m%d")
stocks <- stocks %>% filter(TICKER!="" & RET!="")
dups <- which(duplicated(stocks[,c("TICKER","date")]))
stocks <- stocks[-dups,]
stocks$month <- paste(format(stocks$date, "%Y-%m"),"-01",sep="")
stocks$month <- as.Date(stocks$month)
agg <- function(r) prod(1+r, na.rm=TRUE) - 1
mnthly <- stocks %>%
group_by(TICKER, month) %>%
summarize(RET = agg(RET), SNP = agg(sprtrn))
RET <- as.data.frame(mnthly[,-4]) %>% spread(TICKER, RET)
SNP <- as.data.frame(mnthly[,c("month","SNP")])
SNP <- SNP[match(unique(SNP$month),SNP$month),]
RET <- RET %>% select(-MPET)
# get three-month U.S. treasury bills data
url_tbill <- "https://raw.githubusercontent.com/TaddyLab/BDS/master/examples/tbills.csv"
tbills <- read.csv(url_tbill)
tbills$date <- as.Date(tbills$date)
# get big company market cap data
url_bigs <- "https://raw.githubusercontent.com/TaddyLab/BDS/master/examples/bigstocks.csv"
bigs <- read.csv(url_bigs, header = FALSE, as.is = TRUE)
exr <- (as.matrix(RET[,bigs[,1]]) - tbills[,2])
mkt <- (SNP[,2] - tbills[,2])
# regression models from book
capm <- lm(exr ~ mkt)
(ab <- t(coef(capm))[,2:1])
ab <- ab[-9,]
par(mai=c(.8,.8,0,0), xpd=FALSE)
plot(ab, type="n", bty="n", xlab="beta", ylab="alpha")
abline(v=1, lty=2, col=8)
abline(h=0, lty=2, col=8)
text(ab, labels=rownames(ab), cex=bigs[,2]/350, col="navy")
# fit one regression per ticker, using all of that ticker's available data
exrdf <- as.data.frame(exr)
exrdf <- mutate(exrdf, mkt = mkt)
allmods <- exrdf %>%
pivot_longer(-mkt, names_to = "ticker", values_to = "exr") %>%
group_by(ticker) %>%
nest() %>%
mutate(
regmods = map(data, ~ lm(exr ~ mkt, data = .)),
coefs = map(regmods, broom::tidy)
) %>%
unnest(coefs) %>%
select(ticker, term, estimate) %>%
pivot_wider(names_from = term, values_from = estimate) %>%
filter(ticker != "WMT") %>%
ungroup() %>%
select(ticker, mkt, `(Intercept)`) %>%
column_to_rownames("ticker")
# plot new results
par(mai=c(.8,.8,0,0), xpd=FALSE)
plot(allmods, type="n", bty="n", xlab="beta", ylab="alpha")
abline(v=1, lty=2, col=8)
abline(h=0, lty=2, col=8)
text(allmods, labels=rownames(allmods), cex=bigs[,2]/350, col="navy")
Small subscript missing in equation 2.22
In the digital edition (maybe printed also) we have:
lhd = \prod_{i=1}^n p(y_i | x_i) = \prod_{i=1}^n p^{y_i} (1 - p_i)^{1 - y_i}
should be:
lhd = \prod_{i=1}^n p(y_i | x_i) = \prod_{i=1}^n p_i^{y_i} (1 - p_i)^{1 - y_i}
Note the added subscript: p_i^{y_i} instead of p^{y_i}.
New to R :), but I think pg. 25, line ~6 in the browser example should be betas[1:5,] instead of betas[,1:5] to print the first 5 rows of betas.
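For other R newcomers, a quick toy check of the difference:

# toy matrix standing in for betas
betas <- matrix(rnorm(60), nrow=10)
betas[1:5, ]  # first 5 rows (what the example wants to show)
betas[, 1:5]  # first 5 columns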
Hi!
There seems to be an issue with the cross-validation code (starting from line 59). The problem lies in line 72, where the subsetting results in a NULL variable (it uses cutvar, which is not declared anywhere in the code). I resolved the problem by correcting the subsetting bit: I changed data=cutvar to data=SC[,c("FAIL",names(signif))]. It's rather crude, but it works like magic.
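For anyone else hitting this, here's roughly what the corrected call looks like. This is a sketch only; signif holds the covariates that survived the p-value screen earlier in semiconductor.R, and the surrounding loop is unchanged:

# sketch of the fix: subset SC to FAIL plus the significant covariates,
# instead of the undeclared cutvar
rcut <- glm(FAIL ~ ., family="binomial",
            data=SC[, c("FAIL", names(signif))])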
Some little corrections (as of the first printing):
- trucks: also missing a $ in the penultimate example
- cv.gamlr is referenced, but the insert calls cv.glmnet
- tapply is missing a closing parenthesis for ybar_w (see the sketch below)
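An illustrative sketch of that last one, with toy data (the names come from the erratum, not the book's actual line):

y <- c(1, 2, 3, 4)           # toy outcome
w <- c(0, 0, 1, 1)           # toy grouping variable
ybar_w <- tapply(y, w, mean) # group means; note the closing parenthesis
ybar_w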
Hi,
Bought the digital edition of your book and, while not too far into it as for right now, I enjoy it a lot.
I noticed some typos in the Bayesian Inference subsection, specifically for equation 1.10 and the Marginal Likelihood equation.
Given the definition of P(X|Θ) as the probability of X given Θ:
For equation 1.10, in the book (digital) we have:
P(Θ|X) = P(Θ|X)π(Θ)/P(X) ∝ P(Θ|X)π(Θ)
I believe it should instead be:
P(Θ|X) = P(X|Θ)π(Θ)/P(X) ∝ P(X|Θ)π(Θ)
Similarly, for the Marginal Likelihood equation:
P(X) = ∫P(Θ|X)π(Θ)dΘ
I believe it should instead be:
P(X) = ∫P(X|Θ)π(Θ)dΘ
Lines 75 through 84 reference oj$logmove, which is not part of the oj.csv data and isn't established in the previous code.
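If it helps, my guess (only a guess) is that logmove was meant to be the log of unit sales, so something like this may be what those lines expect, assuming the sales column used elsewhere in the chapter:

oj <- read.csv("oj.csv")
oj$logmove <- log(oj$sales)  # hypothetical reconstruction of logmove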
Great book - got both the digital and print version!
Just a couple of typos I found:
- fixed affects (vs. fixed effects): Footnote 14, Chapter 5
- In the code comments under semiconductors, there is a typo in the word "deviance" (line 48 in 128b8ba).
page 33, line -2: "The simple proof for this assumes independence between tests and (see Figure 1.12)" ## and what?
page 34, fig 1.12: FDP was confusing in print (only clarified in the caption), but online it's completely different: it looks truncated and also mentions FDF (so neither FDR nor FDP).
(Bayes' Rule still wrong in 1.10, by the way)
Original code
## read in the data
oj <- read.csv("oj.csv")
head(oj)
levels(oj$brand)
levels(oj$brand) returns NULL.
Modified code
## read in the data
oj <- read.csv("oj.csv", stringsAsFactors=T)
head(oj)
levels(oj$brand)
This works.
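An alternative that only touches the one column, in case you don't want stringsAsFactors=T to change how every other character column is read:

## read in the data, then convert just brand to a factor
oj <- read.csv("oj.csv")
oj$brand <- factor(oj$brand)
levels(oj$brand)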