Contributors

dochvam, paciorek, perrydv

nimbleSMC's Issues

improve EnKF step efficiency

  1. Organize the order of operations in the efx, efy, and Kmat calculations, e.g. compute (xf %*% oneVec) once.
  2. Multiply scalars by the lowest-dimensional objects in the order of operations.
  3. Avoid inverse(x); use solve(x, y) instead? But the inverse is computed in a loop, so this needs more thought (CP could go back to his class notes on this question).
  4. Using matrix multiplication in the scalar case seems wasteful.
  5. No matrix of perturbations is needed in the non-scalar case, and no vector of perturbations in the scalar case; allocate a vector or a one-column matrix instead.
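Point 3 above (direct solve versus explicit inverse) can be sketched as follows. This is a minimal illustration in Python/NumPy for concreteness, not nimble DSL code, and Pxy/Pyy are hypothetical stand-ins for the cross- and observation-covariance matrices in the Kmat calculation, not the actual filter variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
Pxy = rng.standard_normal((n, d))      # stand-in cross-covariance
S = rng.standard_normal((d, d))
Pyy = S @ S.T + d * np.eye(d)          # stand-in observation covariance (SPD)

# Forming the inverse explicitly, as in Kmat <- Pxy %*% inverse(Pyy):
K_inv = Pxy @ np.linalg.inv(Pyy)

# Better: solve Pyy %*% t(K) = t(Pxy) directly; one factorization,
# no explicit inverse, cheaper and numerically better behaved.
K_solve = np.linalg.solve(Pyy, Pxy.T).T

print(np.allclose(K_inv, K_solve))  # same result either way
```

The caveat about the loop still applies: if the same Pyy is solved against many right-hand sides across iterations, factoring it once and reusing the factorization may be the better structure.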

various packaging questions

@perrydv @danielturek @dochvam A few questions about nimbleSMC:

  1. We currently have nimbleSMC at version 0.0.1. I'm inclined to start at either 0.1.0 or 0.10.0, the latter indicating branching off of our main package.
  2. nimbleSMC is its own repository, yet we have the nimble/packages/ structure within the nimble repo, so we could put the package at nimble/packages/nimbleSMC. The packages would be separate, but commits would go to the same repository. I think it makes sense to have a separate repo, but I wanted to check.
  3. Assuming we have a separate repo, my plan is to create nimbleSMC/packages/nimbleSMC to mimic the nimble repo directory structure.
  4. Who should be the maintainer? Ben put himself down, but given that nimbleSMC is so closely aligned with nimble, it probably makes the most sense, for longevity and consistency with nimble itself, for me to do it.
  5. @dochvam, why is devtools needed as a 'Depends'?
  6. Given the new package, we need to move Dao from authorship on nimble to authorship on nimbleSMC. I am doing that.
  7. Help on the SMC samplers is under help(SMCsamplers) (as well as help(RW_PF)). @danielturek, what do you think?

Thoughts?

NA values in block pmcmc test

I'm seeing NA warnings in the block PMCMC test, from the uncompiled Rmcmc$run execution. The test passes, and in general I think things are fine, but I'm confused about why we haven't seen this before. I think the NAs occur with either nimble 0.13.1 or 0.13.2 and with the nimbleSMC on CRAN, so this probably isn't something that just popped up.

A couple odd things:

  • the first call to my_particleFilter$getLastLogLik() returns -Inf. In fact, I'm not even sure whether my_particleFilter$run is run before the first iteration of the sampler is done...
  • the first set of propValueVector values includes a negative value for one of the standard deviations, which I think is ultimately what triggers the NA warnings. But I don't see why this would have changed relative to running this test a while back.

improve numerical robustness in all PF algos

In the auxiliary filter in particular, but presumably in others, we exponentiate the weights and then normalize by the sum of the weights. We also exponentiate and then take the log to get the log-likelihood. This is not robust to numerical underflow. We can presumably address this using the log-sum-exp trick, which avoids exponentiating values of large magnitude.
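The log-sum-exp trick can be sketched as follows, in Python/NumPy for concreteness (the same logic would apply inside the filters). With log-weights around -1000, as can easily happen with many observations, the naive computation underflows to -Inf, which is consistent with the symptom in the user's reprex:

```python
import numpy as np

def log_sum_exp(logw):
    """log(sum(exp(logw))) computed without underflow/overflow."""
    m = np.max(logw)
    if not np.isfinite(m):            # all particles have zero weight
        return m
    return m + np.log(np.sum(np.exp(logw - m)))

logw = np.array([-1000.0, -1001.0, -1002.0])  # plausible particle log-weights

naive = np.log(np.sum(np.exp(logw)))  # exp underflows to 0, so log(0) = -inf
stable = log_sum_exp(logw)            # approx -999.59

# Normalized weights, also computed stably on the log scale
w = np.exp(logw - stable)
print(naive, stable, w.sum())         # w sums to 1
```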

Here's a reprex from a user:

library(nimble)
library(nimbleSMC)

#### Model
SI <- nimbleCode({
  S[1] <- iniState[1]
  I[1] <- iniState[2]
  y[1] ~ dpois(pdet * I[1] + 1e-5)
  for (t in 2:nT) {
    lambda[t] <- beta * I[t-1] / N * S[t-1] + 1e-5
    negdS[t] ~ dpois(lambda[t])
    
    S[t] <- S[t-1] - negdS[t]
    I[t] <- I[t-1] + negdS[t]
    
    y[t] ~ dpois(pdet * I[t] + 1e-5)
    X[1:2,t] <- c(S[t], I[t])
  }
  
  beta ~ dunif(0, 0.3) # Transmissibility
  pdet ~ dunif(0, 1) # Probability of detection
})

states <- c(10000, 5)
constants <- list(
  N = sum(states),
  nT = 50,
  iniState = states
)

y <- c(0, 3, 2, 2, 8, 9, 3, 6, 7, 5, 6, 10, 14, 9, 8, 16, 
       15, 20, 27, 29, 37, 33, 32, 52, 55, 67, 56, 68, 85, 85, 111, 
       124, 145, 169, 206, 203, 260, 277, 320, 335, 414, 438, 539, 621, 
       681, 783, 932, 1015, 1099, 1222)

SIMod <- nimbleModel(code = SI,
                     constants = constants,
                     data = list(y = y),
                     inits = list(beta = 0.2, pdet = 0.5))

my_AuxF <- buildAuxiliaryFilter(SIMod, 'I',
                                control = list(saveAll = TRUE))
SIModC <- compileNimble(SIMod)
my_AuxFC <- compileNimble(my_AuxF, project = SIMod)
logLik <- my_AuxFC$run(m = 10000)
logLik # Always -Inf

slightly different block PMCMC results

I'm getting slightly different results for the block PMCMC example than in the goldfile.

If I go back to nimbleSMC version 0.10.1, even with nimble version 0.10.1, I get the same slightly different results.
I can't figure out how to reproduce the values from the goldfile.

The difference seems likely to just be variation from different random number generation. The MCMC summary stats are similar and the test passes.

deal with `before_chain` for PMCMC samplers

I'm getting this error in the "scalar pmcmc" test when running the nimbleSMC tests with nimble version 1.0.0. I didn't think we needed to modify any of our samplers to handle the addition of before_chain, but I must be missing something.

@danielturek can you take a look? No urgency, unless this suggests we need to modify nimble itself before release.

── Error (Line 36): scalar pmcmc ───────────────────────────────────────────────
Error in `envRefInferField(x, what, getClass(class(x)), selfEnv)`: ‘before_chain’ is not a valid field or method name for reference class “sampler_RW_PF”
Backtrace:
 1. global test_mcmc_internal(...)
 3. Rmcmc$run(numItsR)
 4. samplerFunctions[[i]]$before_chain
 5. methods:::envRefInferField(x, what, getClass(class(x)), selfEnv)

items for next release

  • move CI to GitHub Actions
  • update multiple algorithms related to EFFI discussions
  • see nimble-dev/nimble issue 161
  • see nimble-dev/nimble issue 172
  • after nimbleSMC is updated, open a nimble PR that adds nimbleSMC testing to nimble itself (in GH Actions CI); go back and see whether anything needs updating on the nimbleSMC side for this

make IF2 parameter names clear

Ideally we'd attach the parameter names to the output and to logLik/estimates/estSD, but it's not clear we have a way to manage names in compiled methods outside of a modelValues context.
