
gmjmcmc's People

Contributors

aliaksah, jonlachmann


gmjmcmc's Issues

OpenBLAS threading (feature, not a bug)

When the "normal" gmjmcmc function is running, a single run starts one process that uses 11 threads (unfortunately I don't know why it is exactly 11; see screenshot #1 for the corresponding display).
However, if the gmjmcmc.parallel function is called with a certain number of runs and cores, min(#runs, #cores) processes are started, but each accesses only ONE thread at a time (see screenshot #2 for the display after calling gmjmcmc.parallel with cores = 8 and runs = 20).

What I conclude from this: when the "normal" gmjmcmc function is called, my MacBook automatically uses multiple threads to calculate a single run, whereas gmjmcmc.parallel assigns only a single thread to each run.
It is therefore not surprising that, on my MacBook for example, gmjmcmc.parallel with runs = cores takes significantly longer than the "normal" gmjmcmc function with otherwise identical arguments.

I don't think gmjmcmc.parallel is intended to work this way internally, but unfortunately I can't say more due to my limited knowledge of how R interacts with the CPU. It is also quite possible that this behavior is caused by some specific setting on my MacBook, but I thought you might still be interested, because (at least on my MacBook) it significantly reduces the advantage of gmjmcmc.parallel over the "normal" gmjmcmc function.
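If the slowdown is indeed OpenBLAS multithreading a single run while each gmjmcmc.parallel worker is limited to one BLAS thread, a fair comparison can be forced by pinning the BLAS thread count before the workers start. A minimal base-R sketch (assuming an OpenBLAS-backed R; the small matrix computation is only a stand-in for an actual gmjmcmc run):

```r
library(parallel)

# Ask OpenBLAS for one thread per worker *before* the workers are launched;
# PSOCK workers are fresh R processes and read these variables at startup.
Sys.setenv(OPENBLAS_NUM_THREADS = "1", OMP_NUM_THREADS = "1")

cl <- makeCluster(2)
res <- parLapply(cl, 1:4, function(i) {
  # stand-in for one gmjmcmc run: a small BLAS-heavy computation
  sum(crossprod(matrix(rnorm(100 * 10), 100, 10)))
})
stopCluster(cl)
length(res)  # 4
```

Setting the variables in the parent works because PSOCK workers inherit the environment at launch; left unset, each worker may again grab all available BLAS threads and oversubscribe the CPU.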

[Screenshot 2024-03-25 at 19 52 13]

Implement local optimizers

Simulated annealing, a greedy algorithm, and MT MCMC must be implemented to be able to continue testing.
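As a generic illustration of the first of these (not the package's internal API), a simulated-annealing local optimizer over a binary inclusion vector can be sketched as follows; all names here are hypothetical:

```r
# Flip one inclusion indicator per step; accept worse moves with
# probability exp(delta / temperature) under a 1/t cooling schedule.
simulated_annealing <- function(score, x0, iters = 200, temp0 = 1) {
  x <- x0; best <- x; s <- score(x); s_best <- s
  for (t in seq_len(iters)) {
    temp <- temp0 / t                       # cooling schedule
    cand <- x
    j <- sample(length(x), 1)
    cand[j] <- 1 - cand[j]                  # flip one bit
    s_cand <- score(cand)
    if (s_cand > s || runif(1) < exp((s_cand - s) / temp)) {
      x <- cand; s <- s_cand
    }
    if (s > s_best) { best <- x; s_best <- s }
  }
  list(model = best, score = s_best)
}

# Toy score: prefer models that include exactly the first two covariates.
set.seed(1)
res <- simulated_annealing(function(m) -sum(abs(m - c(1, 1, 0, 0, 0))),
                           x0 = rep(0, 5))
res$score  # best score found (0 at the optimum c(1, 1, 0, 0, 0))
```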

transforms in summary

Use the global transforms when printing features, so the labels are stable across calls.

summary(result)
Importance | Feature
##############################| sigmoid(1+1x5+1x5)
#########################| x2
##############################| x4
##############################| x5
##############################| x6
##############################| x7
##############################| x9
##############################| sigmoid(1+1x3+1x7+1x4+1x5)
##############################| x8
##############################| x1

Best population: 7 log marginal posterior: -7342.925
Report population: 10 log marginal posterior: -7342.925

                feats.strings marg.probs
1          sigmoid(1+1x5+1x5)  1.0000000
2                          x4  1.0000000
3                          x5  1.0000000
4                          x6  1.0000000
5                          x7  1.0000000
6                          x9  1.0000000
7  sigmoid(1+1x3+1x7+1x4+1x5)  1.0000000
8                          x8  1.0000000
9                          x1  1.0000000
10                         x2  0.8519006

If you continue running the code, calling summary(result) a second time yields

summary(result)
Importance | Feature
##############################| p0(1+1x5+1x5)
#########################| x2
##############################| x4
##############################| x5
##############################| x6
##############################| x7
##############################| x9
##############################| p0(1+1x3+1x7+1x4+1x5)
##############################| x8
##############################| x1

Best population: 7 log marginal posterior: -7342.925
Report population: 10 log marginal posterior: -7342.925

           feats.strings marg.probs
1          p0(1+1x5+1x5)  1.0000000
2                     x4  1.0000000
3                     x5  1.0000000
4                     x6  1.0000000
5                     x7  1.0000000
6                     x9  1.0000000
7  p0(1+1x3+1x7+1x4+1x5)  1.0000000
8                     x8  1.0000000
9                     x1  1.0000000
10                    x2  0.8519006
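For what it's worth, one way to make the labels stable would be to freeze the transform names inside the result object at fit time, so that summary() never consults a mutable global transform list (the assumption that such a list causes the drift is only suggested by the output above). A sketch with entirely hypothetical names:

```r
# Store the transform names alongside the fitted features.
make_result <- function(features, transforms) {
  structure(list(features = features, transforms = transforms),
            class = "gmjmcmc_sketch")
}

# A feature encodes its transform by index, so the names stored in the
# result, not the current global options, decide what gets printed.
print_feature <- function(feat, transforms) {
  sprintf("%s(%s)", transforms[feat$transform], feat$inner)
}

res <- make_result(list(list(transform = 1, inner = "1+1x5+1x5")),
                   transforms = c("sigmoid"))
print_feature(res$features[[1]], res$transforms)  # "sigmoid(1+1x5+1x5)"
```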

Allow removing features before starting the first population.

When starting gmjmcmc it should be possible to:

  • Manually specify the set of covariates that should be included in the first population.
  • Randomly select a set number of features for the first population.

This allows the algorithm to work with "wide" data where it is not feasible to include all covariates in the initial population.
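The two options above could look roughly like this (function and argument names are hypothetical, not the package's API):

```r
# Select the indices of the first population: either a user-supplied set,
# a random subset of fixed size, or (default) all covariates.
initial_population <- function(p, keep = NULL, n_random = NULL) {
  if (!is.null(keep)) return(sort(keep))                        # manual choice
  if (!is.null(n_random)) return(sort(sample.int(p, n_random))) # random subset
  seq_len(p)                                                    # default: all
}

set.seed(42)
initial_population(1000, n_random = 10)   # 10 random covariate indices
initial_population(1000, keep = c(3, 7))  # c(3, 7)
```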

Roll own likelihood computation or use BAS

At the moment we are using the standard glm function in R for likelihood computation. It might be more efficient to use the one available from BAS (as Aliaksandr has done before), or to implement a slimmed-down version with additional features that make it more efficient for our purposes.
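For reference, a slimmed Gaussian fit can skip glm()'s formula and model-frame overhead entirely via base R's .lm.fit, here combined with a BIC-style approximation to the log marginal likelihood (a sketch, not BAS's exact g-prior marginals):

```r
# -BIC/2 up to an additive constant, using the fast QR fitter.
log_marginal_bic <- function(y, X) {
  n <- length(y)
  fit <- .lm.fit(cbind(1, X), y)          # no formula / model-frame overhead
  rss <- sum(fit$residuals^2)
  k <- ncol(X) + 1                        # intercept plus covariates
  -0.5 * (n * log(rss / n) + k * log(n))
}

set.seed(1)
X <- matrix(rnorm(200), 100, 2)
y <- drop(X %*% c(1, -1) + rnorm(100))
log_marginal_bic(y, X)  # higher is better; compare across candidate models
```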

Implement a predict function (for GMJMCMC)

Given a result from gmjmcmc, a function for prediction should be implemented that does the following:

  1. For the top N models in every thread, make a prediction.
  2. Weight the predictions w.r.t. the renormalised posteriors.
  3. Weight the aggregated posteriors across the threads (in the same way as when merging results).
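The weighting in steps 2 and 3 could be sketched as follows (the function name and result layout are hypothetical); the same max-subtraction renormalisation applies both within and across threads:

```r
# preds: matrix of per-model predictions (n_obs x n_models)
# log_posts: log marginal posteriors of those models
predict_bma <- function(preds, log_posts) {
  w <- exp(log_posts - max(log_posts))  # renormalise in log space for stability
  w <- w / sum(w)
  drop(preds %*% w)                     # posterior-weighted average prediction
}

preds <- cbind(c(1, 2), c(3, 4))        # two models, two observations
predict_bma(preds, log_posts = c(0, 0)) # equal weights -> c(2, 3)
```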

Implement a function for parallel runs

We should have a function that:

  • Runs multiple threads of gmjmcmc from a single function call
  • Can save the parallel results to files
  • Provides a neat way to pass the results to the merge.results function
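A minimal sketch of such a function using base R's parallel package (names are hypothetical; mclapply forks on Unix, so parLapply would be needed on Windows):

```r
library(parallel)

# Run `fun` `runs` times across `cores` workers; optionally save each run.
run_parallel <- function(fun, runs, cores, save_dir = NULL) {
  results <- mclapply(seq_len(runs), function(i) fun(i), mc.cores = cores)
  if (!is.null(save_dir)) {             # persist each run to its own file
    for (i in seq_along(results))
      saveRDS(results[[i]], file.path(save_dir, sprintf("run_%03d.rds", i)))
  }
  results                               # a list ready for a merge step
}

res <- run_parallel(function(i) i^2, runs = 4, cores = 2)
unlist(res)  # 1 4 9 16
```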

progress in parallel version

I am now running the parallel version on 40 cores. For some reason this takes much longer than the single-core run, although I am using more or less the same parameter settings. Is there any way to get some idea of the algorithm's progress even when using several cores? Perhaps some kind of message from each core when it enters the feature-generation mode of gmjmcmc? Something like

C1P1
C2P1
C3P1
C2P2

where C denotes the core and P the newly generated population.
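Such messages could be emitted like this (a sketch; with a fork cluster, as mclapply uses on Unix, each worker's messages reach the master's console):

```r
# One line per (core, population) event, e.g. "C3P1"; message() writes to
# stderr, so the lines are not mixed into captured function output.
report_progress <- function(core, population) {
  message(sprintf("C%dP%d", core, population))
}

# Simulated sequence of events from two cores generating two populations:
for (p in 1:2) for (core in 1:2) report_progress(core, p)
```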
