jmtomczak / intro_dgm
"Deep Generative Modeling": Introductory Examples
License: MIT License
Hi, it seems there is a copy-paste typo in fm_example (I'm not sure eval/train are needed there at all):
self.vnet.eval() # set the vector field net to train again
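Presumably the intended pattern is the usual eval-then-train toggle around sampling; a minimal sketch of what the comment seems to suggest (the surrounding sampling code is assumed, not quoted from the repo):

self.vnet.eval()   # set the vector field net to eval mode (e.g., for sampling)
# ... sampling / evaluation code ...
self.vnet.train()  # set the vector field net to train mode again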
I'm checking your most recent update on conditional flow matching, and I see that here you pass the result through a tanh function after the Euler forward steps. I wonder if there is any reason behind it?
https://github.com/jmtomczak/intro_dgm/blob/main/sbgms/fm_example.ipynb
def sample(self, batch_size=64):
    # Euler method
    # sample x_0 first
    x_t = self.sample_base(torch.empty(batch_size, self.D))
    # then go step-by-step to x_1 (data)
    ts = torch.linspace(0., 1., self.T)
    delta_t = ts[1] - ts[0]

    for t in ts[1:]:
        t_embedding = self.time_embedding(torch.Tensor([t]))
        x_t = x_t + self.vnet(x_t + t_embedding) * delta_t
        # Stochastic Euler method
        if self.stochastic_euler:
            x_t = x_t + torch.randn_like(x_t) * delta_t

    x_final = torch.tanh(x_t)  # **here's my question**
    return x_final
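One observable effect of that line, whatever the intent (this is my reading, not the author's stated reason): torch.tanh squashes the final sample into (-1, 1), which would match data normalized to that range. For example:

import torch
x = torch.tensor([-3.0, 0.0, 3.0])
print(torch.tanh(x))  # tensor([-0.9951, 0.0000, 0.9951]): squashed into (-1, 1)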
In the ddgm example, shouldn't the line:

self.p_dnns = p_dnns

be:

self.p_dnns = nn.ModuleList(p_dnns)

My understanding is that, by not placing them in an nn.ModuleList, the p_dnns networks are not registered among the model's learnable parameters and are left unoptimized.
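For reference, a minimal sketch (with a hypothetical Toy module, not the actual ddgm code) showing why a plain Python list of sub-networks is invisible to the optimizer:

import torch.nn as nn

class Toy(nn.Module):
    def __init__(self, use_module_list):
        super().__init__()
        dnns = [nn.Linear(4, 4) for _ in range(3)]
        if use_module_list:
            self.p_dnns = nn.ModuleList(dnns)  # parameters are registered
        else:
            self.p_dnns = dnns  # plain list: parameters are NOT registered

print(len(list(Toy(False).parameters())))  # 0 -> nothing for the optimizer to update
print(len(list(Toy(True).parameters())))   # 6 (weight + bias for each of 3 layers)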
Thanks for an awesome resource by the way!
Hi,
I might be missing a step here, but in your explanation of the reparameterization trick you state the following formula:

[formula image not reproduced]

And in the code I can see:
def reparameterization_gaussian_diffusion(self, x, i):
    return torch.sqrt(1. - self.beta) * x + torch.sqrt(self.beta) * torch.randn_like(x)
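For reference, that line is the standard reparameterized draw for a Gaussian diffusion step (assuming a single shared $\beta$, as in the snippet):

$$ z_i = \sqrt{1-\beta}\, z_{i-1} + \sqrt{\beta}\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0, I), $$

i.e., a sample from $q(z_i \mid z_{i-1}) = \mathcal{N}\!\left(z_i \mid \sqrt{1-\beta}\, z_{i-1},\, \beta I\right)$.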
Hi,
As far as I understand, the option A in arm_example.ipynb refers to causality, so A = True should signal that the model must not take the current time step t as an input. In the code implementation, however, we discard the latest output of the convolution layer. This confuses me, since I thought we were supposed to discard the latest input, i.e., the current time step's input to the model. For example, in the first convolution operation at t = 0, shouldn't we pad left (for 1D) with F-1 elements and feed the convolution the elements at time steps {-(F-1), -(F-2), ..., -1} to produce the output for t = 0?
if self.A:
    # Remember: we cannot depend on the current
    # component; therefore, the last element is removed.
    return conv1d_out[:, :, :-1]
Sorry if I'm off. Just started to learn about these from the book. I will appreciate any help.
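To make the indexing concrete, here is a minimal sketch (dilation omitted; x assumed to have shape (batch, channels, time)) of how the option-A convolution achieves causality: it pads the left by one extra position and then drops the last output, which is equivalent to excluding the current input x[t] from the computation of y[t]:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, A=True):
        super().__init__()
        self.A = A
        # A=True: pad left by the full kernel size; after dropping the last
        # output, y[t] sees only x[t - kernel_size], ..., x[t - 1] (strictly past).
        # A=False: pad left by kernel_size - 1; y[t] also sees the current x[t].
        self.pad = kernel_size if A else kernel_size - 1
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size)

    def forward(self, x):
        x = F.pad(x, (self.pad, 0))  # left padding only
        y = self.conv(x)
        return y[:, :, :-1] if self.A else y

x = torch.randn(1, 1, 10)
print(CausalConv1d(1, 1, 3)(x).shape)  # torch.Size([1, 1, 10])

So shifting the receptive field left by one (extra pad plus dropping the last output) gives the same result as discarding the current input.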