Comments (11)
for an example notebook, this shouldn't be there. Instead, it should download the prepared data from wherever we put it.
[ ] dummy Algorithm for users to implement custom reconstruction with
But BSREM is already one? Maybe not with the callbacks though.
from petric.
Notebook should show how to download one data-set.
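For illustration, such a download cell could look roughly like this (the URL and archive name are placeholders, not the agreed location):
# minimal sketch of a download cell; DATA_URL and the file name are placeholders
import os
import zipfile
import urllib.request

DATA_URL = 'https://example.org/petric/one_dataset.zip'  # placeholder, to be replaced
DATA_DIR = os.path.join(os.getcwd(), 'data')
os.makedirs(DATA_DIR, exist_ok=True)
zip_path = os.path.join(DATA_DIR, 'one_dataset.zip')
if not os.path.exists(zip_path):
    urllib.request.urlretrieve(DATA_URL, zip_path)
    with zipfile.ZipFile(zip_path) as z:
        z.extractall(DATA_DIR)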
from petric.
BSREM is a CIL Algorithm, hence callbacks are already usable.
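For example (a minimal sketch; alg stands for any constructed CIL Algorithm instance such as BSREM, and the save interval is arbitrary):
# callbacks work with any cil Algorithm subclass, BSREM included
from cil.optimisation.utilities.callbacks import Callback, TextProgressCallback

class SaveImageCallback(Callback):  # hypothetical example callback
    def __call__(self, algorithm):
        if algorithm.iteration % 10 == 0:
            algorithm.solution.write(f'iter_{algorithm.iteration}.hv')

alg.run(100, callbacks=[TextProgressCallback(), SaveImageCallback()])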
from petric.
This is what I have so far
import os
from sirf import STIR as pet
from sirf.contrib.partitioner import partitioner
from cil.optimisation.functions import SGFunction
from cil.optimisation.algorithms import GD
from cil.utilities.display import show2D
from cil.optimisation.utilities import Sampler
from cil.optimisation.utilities.callbacks import TextProgressCallback, ProgressCallback
# engine's messages go to files, except error messages, which go to stdout
_ = pet.MessageRedirector('info.txt', 'warn.txt')
# Needed for get_subsets()
pet.AcquisitionData.set_storage_scheme('memory')
# fewer messages from STIR and SIRF
pet.set_verbosity(0)
os.chdir('/home/jovyan/work/Challenge24/data')
acquired_data = pet.AcquisitionData('prompts.hs')
additive_term = pet.AcquisitionData('additive.hs')
mult_factors = pet.AcquisitionData('multfactors.hs')
# somewhat crazy initialisation, currently hand-tuned scale
initial_image = pet.ImageData('20170809_NEMA_MUMAP_UCL.hv')+.5
initial_image=initial_image.zoom_image(zooms=(1,1,1), offsets_in_mm=(0,0,0), size=(-1,200,200))
num_subsets = 7
data, acq_models, obj_funs = partitioner.data_partition(acquired_data, additive_term, mult_factors, num_subsets, mode='staggered', initial_image=initial_image)
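# NB: initial_OSEM and add_RDP below are helper functions not shown in this snippet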
osem_sol = initial_OSEM(acquired_data, additive_term, mult_factors, initial_image)
# add RDP prior to the objective functions
step_size = 1e-7
add_regulariser = True
if add_regulariser:
    alpha = 5
    epsilon = initial_image.max()*1e-4
    prior = add_RDP(alpha, epsilon, obj_funs)
    step_size = 1e-10
# set up and run the stochastic gradient descent algorithm
sampler = Sampler.random_without_replacement(len(obj_funs))
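# SIRF objective functions are log-likelihoods (to be maximised), hence the minus sign for gradient descent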
F = - SGFunction(obj_funs, sampler=sampler)
alg = GD(initial=osem_sol, objective_function=F, step_size=step_size)
alg.update_objective_interval = num_subsets
alg.run(num_subsets * 1, callbacks=[TextProgressCallback()])
cmax = .15
im_slice = 70
show2D([osem_sol.as_array()[im_slice,:,:],
        alg.solution.as_array()[im_slice,:,:]],
       title=['OSEM', f"SGD epoch {alg.iteration/num_subsets}"], cmap="PuRd", fix_range=[(0, 0.2), (0, 0.2)])
from petric.
# somewhat crazy initialisation, currently hand-tuned scale
initial_image = pet.ImageData('20170809_NEMA_MUMAP_UCL.hv')+.5
initial_image=initial_image.zoom_image(zooms=(1,1,1), offsets_in_mm=(0,0,0), size=(-1,200,200))
osem_sol = initial_OSEM(acquired_data, additive_term, mult_factors, initial_image)
will need to become
initial_image = pet.ImageData('OSEM_image.hv')
We'll also need
kappa_image = pet.ImageData('penalty_weights.hv')
and construct the prior using that. With that, we can provide a function
def construct_RDP(penalty_strength, initial_image, kappa):
    '''
    Construct the RDP prior, set epsilon etc.
    '''
    prior = pet.RelativeDifferencePrior()
    # a small epsilon makes the prior differentiable at zero
    epsilon = initial_image.max()*1e-4
    prior.set_epsilon(epsilon)
    prior.set_penalisation_factor(penalty_strength)
    prior.set_kappa(kappa)
    prior.set_up(initial_image)
    return prior
As part of the example, we can then have
def add_prior(prior, obj_funs):
    '''
    Add the prior evenly to every objective function.
    WARNING: modifies the prior by dividing its penalty_strength by the number of `obj_funs`
    '''
    prior.set_penalisation_factor(prior.get_penalisation_factor() / len(obj_funs))
    # note: uses initial_image from the enclosing scope
    prior.set_up(initial_image)
    for f in obj_funs:
        f.set_prior(prior)
The add_prior call will have to be part of what they do, so the timings would then need to encompass
num_subsets = 7
data, acq_models, obj_funs = partitioner.data_partition(acquired_data, additive_term, mult_factors, num_subsets, mode='staggered', initial_image=initial_image)
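# the prior is constructed beforehand via construct_RDP(penalty_strength, initial_image, kappa_image) as defined above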
add_prior(prior, obj_funs)
step_size = 1e-10
# set up and run the stochastic gradient descent algorithm
sampler = Sampler.random_without_replacement(len(obj_funs))
F = - SGFunction(obj_funs, sampler=sampler)
alg = GD(initial=osem_sol, objective_function=F, step_size=step_size)
# optional, you might want to disable this
alg.update_objective_interval = num_subsets
# This would need changing for the actual timing
alg.run(num_subsets * 1, callbacks=[TextProgressCallback()])
So, their algo needs to get acquired_data, additive_term, mult_factors, prior and run from there. Maybe we subclass Algorithm to PETRICAlgorithm, which has an __init__ taking all of that, and they have to subclass from that. How that would work with using a given CIL algo, I'm not sure.
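Roughly along these lines (just a sketch of the idea; the class name and constructor signature are not an agreed API):
# rough sketch only; PETRICAlgorithm and its constructor signature are hypothetical
from cil.optimisation.algorithms import Algorithm

class PETRICAlgorithm(Algorithm):
    def __init__(self, acquired_data, additive_term, mult_factors, prior, **kwargs):
        self.acquired_data = acquired_data
        self.additive_term = additive_term
        self.mult_factors = mult_factors
        self.prior = prior
        super().__init__(**kwargs)

class WinningAlgo(PETRICAlgorithm):
    def update(self):
        ...  # participant's reconstruction step

    def update_objective(self):
        ...  # participant's objective evaluation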
from petric.
I think, Python-wise, we could make use of multiple inheritance and define these things with a PETRICInterface; then people would subclass as class WinningAlgo(Algorithm, PETRICInterface). The term "interface" is from Java.
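i.e. roughly (again just a sketch; PETRICInterface is hypothetical):
# rough sketch of the mixin variant; PETRICInterface is hypothetical
from cil.optimisation.algorithms import Algorithm

class PETRICInterface:
    def set_petric_data(self, acquired_data, additive_term, mult_factors, prior):
        self.acquired_data = acquired_data
        self.additive_term = additive_term
        self.mult_factors = mult_factors
        self.prior = prior

class WinningAlgo(Algorithm, PETRICInterface):
    def update(self):
        ...  # reconstruction step using self.acquired_data etc.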
from petric.
The kappa image is described here and I could calculate it. Where is it pre-calculated?
from petric.
I updated the code above and pushed to https://github.com/SyneRBI/SIRF-Contribs/blob/edo_challenge_test1/src/notebooks/challenge_notebook0.py
from petric.
initial_OSEM has to be removed; construct_RDP and the penalty_factor / 'kappa' need to be as in BSREM_common.py (currently still in #16).
from petric.
Should use ISTA. Not pushed yet?
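A minimal sketch of swapping GD for ISTA, reusing the objects defined above (step size and iteration count are placeholders):
# minimal sketch: ISTA with the same stochastic objective plus a nonnegativity constraint
from cil.optimisation.algorithms import ISTA
from cil.optimisation.functions import IndicatorBox

F = -SGFunction(obj_funs, sampler=sampler)   # smooth part: negated log-likelihood
G = IndicatorBox(lower=0)                    # nonnegativity of the image
alg = ISTA(initial=osem_sol, f=F, g=G, step_size=step_size)
alg.run(num_subsets, callbacks=[TextProgressCallback()])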
from petric.
I think we can close this now, with the examples that we have.
from petric.