Comments (7)
Hi, thanks for the good question. We implement the LogNormMix model in a non-standard way, which changes the formula slightly. The standard way to obtain a LogNormMix sample is the following sequence of operations:

x ~ GMM(w, \mu, s^2)
\tau = \exp(x)

What we do instead is:

x ~ GMM(w, \mu, s^2)
y = a x + b
\tau = \exp(y)
The parameters a and b are chosen based on the (training) dataset, such that the distribution of x = (\log \tau - b) / a has zero mean and unit variance. (We found this to speed up training, and we use a similar trick for the other methods as well.) In our code (train.py), the parameter a is called std_out_train and b is called mean_out_train.
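As a rough illustration (not the repository's exact code), here is how such a standardization could be computed from the training inter-event times. The names mean_out_train and std_out_train follow the thread above; everything else is a hypothetical sketch:

```python
import numpy as np

def log_tau_standardization(train_inter_times):
    """Hypothetical sketch: compute b = mean_out_train and a = std_out_train
    so that x = (log(tau) - b) / a has zero mean and unit variance on the
    training set. `train_inter_times` is a list of 1-D arrays of
    inter-event times, one array per training sequence."""
    log_tau = np.log(np.concatenate(train_inter_times))
    mean_out_train = log_tau.mean()  # b in the formulas above
    std_out_train = log_tau.std()    # a in the formulas above
    return mean_out_train, std_out_train
```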
The correct way to compute the mean in this case is E[\tau] = \sum_k w_k \exp(a \mu_k + b + a^2 s_k^2 / 2).
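For instance, here is a minimal NumPy sketch of that formula, assuming w, mu, and s hold the mixture weights, component means, and component scales of the GMM in the standardized space (the variable names are mine, not the repository's):

```python
import numpy as np

def lognormmix_mean(w, mu, s, a, b):
    """E[tau] = sum_k w_k * exp(a * mu_k + b + a**2 * s_k**2 / 2)."""
    return np.sum(w * np.exp(a * mu + b + 0.5 * a**2 * s**2))

# Example with a two-component mixture and affine parameters a, b.
w = np.array([0.3, 0.7])
mu = np.array([-0.5, 0.4])
s = np.array([0.8, 1.1])
print(lognormmix_mean(w, mu, s, a=1.0, b=0.0))
```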
Let me know if this works for you.
@shchur Thank you for your reply, I got it. But I have another question: since x ~ GMM, could I first get the mean of x via E[x] = \sum_k w_k \mu_k, and then use E[y] = a E[x] + b and E[\tau] = \exp(E[y]) to get the mean of \tau?
Unfortunately, it's not that simple. In general, E[f(x)] != f(E[x]). For example, if you have x ~ Normal(\mu, \sigma^2), then \exp(E[x]) = \exp(\mu), but E[\exp(x)] = \exp(\mu + \sigma^2 / 2) (since \exp(x) follows a log-normal distribution).
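A quick Monte Carlo check of this gap (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.2
x = rng.normal(mu, sigma, size=1_000_000)

print(np.exp(x.mean()))           # exp(E[x]) ~ exp(mu)              ≈ 1.65
print(np.exp(x).mean())           # E[exp(x)] ~ exp(mu + sigma^2/2)  ≈ 3.39
print(np.exp(mu + sigma**2 / 2))  # closed-form mean of the log-normal
```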
@shchur OK, I see. Thank you very much!
Besides, when splitting each sequence into train/val/test in your code:

```python
def train_val_test_split_each(self, train_size=0.6, val_size=0.2, test_size=0.2, seed=123)
```

I see

```python
in_train.append(self.in_times[idx][:n_train])
in_val.append(self.in_times[idx][n_train : (n_train + n_val)])
in_test.append(self.in_times[idx][(n_train + n_val):])
```

So the validation segment does not use the historical information from the preceding train segment, right? But I think that during validation/testing, the history from the preceding train segment should also be used; it might be helpful.
In addition, what is the purpose of def step(self, x, h) in RNNLayer? Thank you!
That's indeed what happens. Even though RNNs should in theory be able to capture long-range interactions, we found that this additional history led to absolutely no improvement in performance, so we decided to stick with this version when refactoring the code. This is also consistent with results in other papers (e.g. https://arxiv.org/abs/1905.09690), where they also report that RNNs basically don't learn long-range interactions (or at least that no long-range interactions are necessary for TPP models). Also, you can see that our RNN model basically matches the optimal performance on synthetic datasets (like Hawkes), which means that we don't lose much by discarding the history here.
This function is necessary when generating new sequences with the RNN. Since the parameters of p(\tau_{i+1} | History) depend on \tau_i, we have to process the history one event at a time and cannot use fused RNN kernels (i.e. RNN.forward).
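To make this concrete, here is a hedged sketch of such a generation loop; rnn.step and decoder are hypothetical placeholders standing in for the real modules, not the repository's exact API:

```python
import torch

def generate(rnn, decoder, h0, seq_len):
    """Autoregressively sample seq_len inter-event times.

    `rnn.step(tau, h)` advances the hidden state by one event and
    `decoder(h)` returns the distribution p(tau_{i+1} | history);
    both names are hypothetical placeholders for this sketch.
    """
    h = h0
    taus = []
    tau = torch.zeros(1, 1)        # dummy input before the first event
    for _ in range(seq_len):
        h = rnn.step(tau, h)       # fold tau_i into the history embedding
        tau = decoder(h).sample()  # draw tau_{i+1} from p(. | history)
        taus.append(tau)
    return torch.cat(taus, dim=-1)
```

Because each iteration needs the sample produced by the previous one, the loop is inherently sequential, which is exactly why a single fused RNN.forward over the whole sequence cannot be used here.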
I really appreciate your patience in answering all my questions.