jiwoncpark / breakfastclub
SLAC HXML Reading Group
License: BSD 3-Clause "New" or "Revised" License
Next week, I will present a general method for debiasing a Monte Carlo estimator. I will discuss the work of Don McLeish (2010) and some applications to variational inference (https://arxiv.org/abs/2004.00353). I may also say a few words about debiasing multilevel Monte Carlo (MLMC) methods.
Slides will be shared by email before the presentation.
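For a preview, here is a minimal NumPy sketch of the coupled-sum randomized-truncation estimator at the heart of McLeish's approach, on a toy problem of my choosing (estimating exp(E[X]), whose plug-in estimator is biased at any finite sample size); the geometric halting distribution and doubling sample-size schedule are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def plug_in(samples):
    # Biased at any finite sample size: E[exp(mean)] > exp(E[X]).
    return np.exp(samples.mean())

def debiased_estimate(p_halt=0.4):
    # Random truncation level N with P(N >= n) = (1 - p_halt) ** n.
    N = rng.geometric(p_halt) - 1
    # Level n reuses the first 2 ** (n + 1) samples, coupling the levels.
    samples = rng.normal(loc=1.0, scale=1.0, size=2 ** (N + 1))
    total, prev = 0.0, 0.0
    for n in range(N + 1):
        y_n = plug_in(samples[: 2 ** (n + 1)])
        total += (y_n - prev) / (1 - p_halt) ** n  # weight by 1 / P(N >= n)
        prev = y_n
    return total

# Averaging many replicates recovers exp(E[X]) = e without the plug-in bias.
print(np.mean([debiased_estimate() for _ in range(5_000)]))  # ~2.718
```

The telescoping sum makes the estimator exactly unbiased, at the price of a random (and potentially heavy-tailed) cost per replicate; presumably that trade-off is part of what the talk will cover.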
Greenberg et al. 2019 uses normalizing flows to perform Bayesian inference on simulators with intractable likelihoods. This seems close in spirit to what we're trying to do here, and a good review of this paper would be helpful.
Rezende and Mohamed 2015 is a classic paper on using normalizing flows, rather than e.g. mean-field approximations, for variational inference. Normalizing flows are well suited to problems with degeneracies and outliers, as they represent a more flexible family of posterior distributions than factorized Gaussians. Flow-based models were also heavily endorsed by Francois at the Likelihood-free Inference Workshop! They deserve mention in our DL lit review, I think. (A minimal flow sketch follows the keyword list below.)
Possibly interesting keywords:
RealNVP, NICE, Glow (models with normalizing flows)
MADE, PixelRNN, WaveNet (autoregressive models); MAF, IAF (autoregressive flows)
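For intuition ahead of the lit review, here is a minimal NumPy sketch of the change-of-variables bookkeeping for a single planar flow layer from Rezende and Mohamed 2015; the parameter values are arbitrary illustrative choices.

```python
import numpy as np

def planar_flow(z0, u, w, b):
    """One planar flow layer f(z) = z + u * tanh(w.z + b)
    (Rezende & Mohamed 2015), with its log-density correction."""
    a = np.tanh(w @ z0 + b)
    z1 = z0 + u * a
    psi = (1.0 - a ** 2) * w                  # gradient of tanh(w.z + b)
    log_det = np.log(np.abs(1.0 + u @ psi))   # |det df/dz| for a planar map
    # Base density: D-dimensional standard normal.
    D = z0.size
    log_q0 = -0.5 * (z0 @ z0) - 0.5 * D * np.log(2.0 * np.pi)
    # Change of variables: log q1(f(z0)) = log q0(z0) - log|det df/dz|.
    return z1, log_q0 - log_det

rng = np.random.default_rng(0)
z0 = rng.normal(size=2)
# Invertibility requires w.u >= -1 for tanh planar flows; that holds here.
u, w, b = np.array([0.5, -0.3]), np.array([1.0, 2.0]), 0.1
z1, log_q1 = planar_flow(z0, u, w, b)
print(z1, log_q1)
```

Stacking many such layers (or richer ones like RealNVP couplings and autoregressive flows) is what buys the flexibility over a factorized Gaussian.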
The paper I mentioned today was actually Hermans et al. 2019 (https://arxiv.org/abs/1903.04057), related to SRE. This keeps with our theme of ratio estimators, e.g. SRE and APT (the two being equivalent, as it turns out!). I'm happy to present, but also happy to take volunteers. Anyone? (A minimal sketch of the classifier-based ratio trick follows the links below.)
Paper on geometry and metric for comparing distributions (with a focus on GANs): https://arxiv.org/abs/1712.07822
For further likelihood-free inference work, see:
Likelihood-free MCMC with Amortized Approximate Ratio Estimators: https://arxiv.org/abs/1903.04057
On Contrastive Learning for Likelihood-free Inference: https://arxiv.org/abs/2002.03712
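As a refresher on the ratio trick these papers share: train a classifier to distinguish dependent (theta, x) pairs from shuffled ones, and its logit estimates the likelihood-to-evidence ratio log p(x|theta)/p(x). A toy sketch follows, where the Gaussian simulator and the quadratic feature map are my illustrative choices, not anything from the papers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy simulator: x ~ N(theta, 1), with prior theta ~ N(0, 1).
n = 50_000
theta = rng.normal(size=n)
x = theta + rng.normal(size=n)

# Class 1: dependent (joint) pairs; class 0: shuffled (marginal) pairs.
pairs_joint = np.column_stack([theta, x])
pairs_marg = np.column_stack([rng.permutation(theta), x])
features = np.vstack([pairs_joint, pairs_marg])
labels = np.concatenate([np.ones(n), np.zeros(n)])

# With quadratic features, logistic regression can represent the exact
# log ratio for this Gaussian toy; a neural net plays this role in SRE.
def quad(f):
    t, xx = f[:, 0], f[:, 1]
    return np.column_stack([t, xx, t * xx, t ** 2, xx ** 2])

clf = LogisticRegression(max_iter=1000).fit(quad(features), labels)

# The classifier logit estimates log r(x, theta) = log p(x|theta)/p(x).
log_r = clf.decision_function(quad(np.array([[0.5, 0.7]])))[0]
print(log_r)  # exact log-ratio is ~0.45 for this Gaussian toy
```

In SRE this classifier is a neural network amortized over (theta, x), which is what makes MCMC with the learned ratio cheap.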
Paper: https://arxiv.org/abs/2006.10288
Per-example (individual) model calibration, which gives more reliable uncertainties for individual predictions than average-based calibration does
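To make the distinction concrete, here is a small NumPy/SciPy sketch, with a made-up heteroscedastic toy problem of my choosing, of a forecaster that is roughly calibrated on average yet miscalibrated on identifiable subsets of inputs, which is the failure mode the paper targets.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy data: y | x ~ N(x, x^2), with x ~ Uniform(-1, 1).
n = 10_000
x = rng.uniform(-1, 1, size=n)
y = x + np.abs(x) * rng.normal(size=n)

# A forecaster with the right mean but one averaged noise scale,
# sigma^2 = E[x^2] = 1/3, instead of the per-input scale |x|.
mu, sigma = x, np.full(n, np.sqrt(1 / 3))

# Average calibration: pooled PIT values F_x(y) look roughly uniform.
pit = norm.cdf(y, loc=mu, scale=sigma)
for q in (0.1, 0.5, 0.9):
    print(q, np.mean(pit <= q))   # roughly q: calibrated *on average*

# Individually, though, the model is overconfident where |x| is large.
big = np.abs(x) > 0.8
print(np.mean(pit[big] <= 0.1))   # ~0.2, far from the nominal 0.1
```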
aka the MultiSWAG paper: https://arxiv.org/abs/2002.08791
Related ICML tutorial: https://www.youtube.com/watch?v=E1qhGw8QxqY
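A minimal sketch of the MultiSWAG idea as I understand it: fit a Gaussian to the weight snapshots of each of several independent SGD runs (SWAG), then Monte Carlo average predictions across all the Gaussians. This sketch uses only a diagonal covariance (the paper adds a low-rank term), and `predict` is a hypothetical stand-in for a real model.

```python
import numpy as np

def swag_moments(snapshots):
    """Diagonal SWAG: mean and variance of weight snapshots from one run."""
    W = np.stack(snapshots)                 # (n_snapshots, n_params)
    return W.mean(axis=0), W.var(axis=0)

def multiswag_predict(runs_snapshots, predict, x, n_samples=20, rng=None):
    rng = rng or np.random.default_rng(0)
    preds = []
    for snaps in runs_snapshots:            # one Gaussian per SGD run
        mean, var = swag_moments(snaps)
        for _ in range(n_samples):          # Monte Carlo over weights
            w = rng.normal(mean, np.sqrt(var))
            preds.append(predict(w, x))
    return np.mean(preds, axis=0)           # Bayesian model average

# Toy usage: linear "model", three fake runs of ten weight snapshots each.
rng = np.random.default_rng(1)
predict = lambda w, x: x @ w
runs = [[np.array([1.0, 2.0]) + 0.1 * rng.normal(size=2) for _ in range(10)]
        for _ in range(3)]
print(multiswag_predict(runs, predict, np.array([1.0, 1.0])))
```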
@makagan Want to propose a good starter paper for our first meeting, 0900 PST on 1/28/2020? Sebastian @swagnercarena has agreed to be our first lead reader :-) Others, feel free to chime in as well.
Joshua @joshualin24 has volunteered to present the first half of this very nice (and long!) review paper on normalizing flows for inference. https://arxiv.org/abs/1912.02762 This coming Wednesday, he'll cover up to Section 3, which includes the overview of flows for modeling and inference as well as various types of flows. Comment here if you'd like him to cover a specific subsection!
The Automatic Posterior Transformation paper we discussed:
https://arxiv.org/abs/1905.07488
A paper on sequential neural likelihood estimation:
https://arxiv.org/abs/1805.07226
And probably too long for us, but a recent review on normalizing flows for density estimation and inference:
https://arxiv.org/abs/1912.02762
Maxime could potentially present a practice talk for his master's thesis defense on some empirical Bayes work he has been doing.
I presented the importance-weighted hierarchical variational inference (IWHVI) paper: https://arxiv.org/abs/1905.03290. IWHVI is a new family of tighter variational upper bounds on the marginal log density that generalizes the hierarchical variational model (HVM) upper bound and the semi-implicit variational inference (SIVI) bound. It enjoys the same nice guarantees as the importance-weighted autoencoder (IWAE) bound and lends itself to similar jackknife debiasing estimators.
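For reference, here is the K-sample IWAE bound mentioned above (a lower bound on log p(x)), evaluated on a toy Gaussian model of my choosing where the exact marginal is known for comparison.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy model: z ~ N(0, 1), x | z ~ N(z, 1)  =>  marginal p(x) = N(0, 2).
# Proposal q(z|x) = N(x/2, 1), deliberately not the exact posterior.
x = 1.5
K = 64
z = rng.normal(loc=x / 2, scale=1.0, size=(10_000, K))

log_w = (norm.logpdf(z, 0, 1) + norm.logpdf(x, z, 1)   # log p(x, z)
         - norm.logpdf(z, x / 2, 1))                   # - log q(z|x)

# K-sample IWAE bound: E[log (1/K) sum_k w_k] <= log p(x).
iwae = np.mean(logsumexp(log_w, axis=1) - np.log(K))
print(iwae, norm.logpdf(x, 0, np.sqrt(2)))  # bound vs exact log p(x)
```

As K grows the bound tightens monotonically, which is one of the "nice guarantees" referred to above.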