petric's Introduction

PETRIC: PET Rapid Image reconstruction Challenge

website · wiki · register · leaderboard · discord

Participating

The organisers will provide GPU-enabled cloud runners which have access to larger private datasets for evaluation. To gain access, you must register. The organisers will then create a private team submission repository for you.

Layout

The organisers will import your submitted algorithm from main.py and then run & evaluate it. Please create this file! See the example main_*.py files for inspiration.

SIRF, CIL, and CUDA are already installed (using synerbi/sirf). Additional dependencies may be specified via apt.txt, environment.yml, and/or requirements.txt.

  • (required) main.py: must define a class Submission(cil.optimisation.algorithms.Algorithm) and a (potentially empty) list of submission_callbacks, e.g.:
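    A minimal sketch (the class body here is illustrative; only the `Submission` and `submission_callbacks` names are required):

        from cil.optimisation.algorithms import Algorithm

        class Submission(Algorithm):
            """Your reconstruction algorithm."""
            def __init__(self, data, update_objective_interval: int = 10, **kwargs):
                self.x = data.OSEM_image.clone()  # initial estimate
                super().__init__(update_objective_interval=update_objective_interval, **kwargs)
                self.configured = True  # required by `Algorithm`

            def update(self):
                ...  # one iteration updating self.x

            def update_objective(self):
                return 0  # optional: report an objective value

        submission_callbacks = []  # may be empty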

  • apt.txt: passed to apt install

  • environment.yml: passed to conda install, e.g.:

    name: winning-submission
    channels: [conda-forge, pytorch, nvidia]
    dependencies:
    - cupy
    - cuda-version =11.8
    - pytorch-cuda =11.8
    - tensorflow-gpu
    - cudatoolkit =11.8.0
    - pip
    - pip:
      - git+https://github.com/MyResearchGroup/prize-winning-algos
  • requirements.txt: passed to pip install, e.g.:

    cupy-cuda11x
    torch --index-url https://download.pytorch.org/whl/cu118
    tensorflow[and-cuda]
    git+https://github.com/MyResearchGroup/prize-winning-algos

Tip

You should probably create either an environment.yml or a requirements.txt file (but not both).

You can also find some example notebooks which should help you with your development.

Organiser setup

The organisers will execute (after installing nvidia-docker & downloading https://petric.tomography.stfc.ac.uk/data/ to /path/to/data):

# 1. git clone & cd to your submission repository
# 2. mount `.` to container `/workdir`:
docker run --rm -it --gpus all -p 6006:6006 \
  -v /path/to/data:/mnt/share/petric:ro \
  -v .:/workdir -w /workdir synerbi/sirf:edge-gpu /bin/bash
# 3. optionally, conda/pip/apt install environment.yml/requirements.txt/apt.txt
# 4. install metrics & run your submission
pip install git+https://github.com/TomographicImaging/Hackathon-000-Stochastic-QualityMetrics
python petric.py &
# 5. optionally, serve logs at <http://localhost:6006>
tensorboard --bind_all --port 6006 --logdir ./output
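For step 3, the install commands would be something like the following (a sketch; the organisers' exact invocations may differ):

# inside the container:
xargs -a apt.txt apt-get install -y       # if apt.txt exists
conda env update --file environment.yml   # if environment.yml exists
pip install -r requirements.txt           # if requirements.txt exists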

FAQ

See the wiki/Home and wiki/FAQ for more info.

Tip

petric.py will effectively execute:

from main import Submission, submission_callbacks  # your submission (`main.py`)
from petric import data, metrics  # our data & evaluation
assert issubclass(Submission, cil.optimisation.algorithms.Algorithm)
Submission(data).run(numpy.inf, callbacks=metrics + submission_callbacks)

Warning

To avoid timing out (currently 10 min runtime, will likely be increased a bit for the final evaluation after submissions close), please disable any debugging/plotting code before submitting! This includes removing any progress/logging from submission_callbacks and any debugging from Submission.__init__.

  • data to test/train your Algorithms is available at https://petric.tomography.stfc.ac.uk/data/ and is likely to grow (more info to follow soon)
    • fewer datasets will be available during the submission phase, but more will be available for the final evaluation after submissions close
    • please contact us if you'd like to contribute your own public datasets!
  • metrics are calculated by class QualityMetrics within petric.py
    • this does not contribute to your runtime limit
    • effectively, only Submission(data).run(np.inf, callbacks=submission_callbacks) is timed
  • when using the temporary leaderboard, it is best to:
    • change Horizontal Axis to Relative
    • untick Ignore outliers in chart scaling
    • see the wiki for details

Any modifications to petric.py are ignored.

petric's People

Contributors

casperdcl, kristhielemans, evgueni-ovtchinnikov, paskino, ckolbptb

Stargazers

Georg Schramm, Zi Wang, Evangelos Stamos

Watchers

Matthias J. Ehrhardt, Charalampos Tsoumpas, Daniel Deidda, Vaggelis Papoutsellis

petric's Issues

leaderboard: objective functions are hard to interpret

There are a few issues with how we plot the objective functions via TensorBoard:

  • BSREM maximises the SIRF objective function
  • ISTA minimises minus the SIRF objective function
  • OSEM maximises the SIRF log-likelihood, but update_objective() just does return 0 (as it's not used by the OSEM update())

We will not fix this, as:

  • the objective function is not part of the metrics (#82)
  • participants could use other conventions for the objective function (e.g. the CIL KL implementation uses a different offset)

We will keep the objective function on TensorBoard for now to help participants debug, but they will likely need to select/filter one algorithm at a time to benefit from y-axis autoscaling.

one example notebook

cost function mentioned in main_OSEM.py

NB: It should be `sum(prompts * log(acq_model.forward(self.x)) - self.x * sensitivity)` across all subsets.

The cost function in main_OSEM.py is sum(prompts * log(acq_model.forward(self.x)) - self.x * sensitivity).

Shouldn't that be:
sum(prompts * log(acq_model.forward(self.x)) - acq_model.forward(self.x)) (plus prior)?
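For reference, the two expressions differ only by a constant in $x$: writing the affine model as $\bar{y}(x) = Ax + s$ and taking sensitivity $= A^T 1$ (the backprojection of ones/mult_factors for the full data),

$$ \sum_i \bar{y}_i(x) = \langle A^T 1, x \rangle + \sum_i s_i = \langle \mathrm{sensitivity}, x \rangle + \mathrm{const}, $$

so both forms of the Poisson log-likelihood $\sum_i \left( y_i \log \bar{y}_i(x) - \bar{y}_i(x) \right)$ have the same gradient and the same maximiser.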

metric names in tensorboard

  1. I am struggling to understand what the metrics labelled "AEM" in the tensorboard output represent
    (see below). Do those correspond to the last metric mentioned in the wiki,
(mean(recon,ROI) - mean(ref recon,ROI)) / mean(ref recon,background)

which should be below 0.01? Or are those "unnormalized" absolute errors?

  2. According to the wiki, the RMSEs get normalized by the background ROI mean of the ref recon.
    Are those values also reported in the tensorboard output?

[screenshot: TensorBoard metrics]

Test

Team Name

TestMatthias

GitHub users

No response

Terms

  • (Optional) I want to be eligible for prizes, and will make my code public after the challenge ends.
  • I agree to abide by the rules.

run NEMA reconstruction (BSREM Algorithm)

I'm going through points in #5

Trying to run the NEMA phantom with the BSREM algorithm. The code by @KrisThielemans assumes the following 4 files are available:

acquired_data = STIR.AcquisitionData('prompts.hs')
background = STIR.AcquisitionData('background.hs')
mult_factors = STIR.AcquisitionData('mult_factors.hs')

# somewhat crazy initialisation, currently hand-tuned scale
initial_image = STIR.ImageData('20170809_NEMA_MUMAP_UCL.hv')+.5

However, the prepare_data.py script does not save a background file, but rather an additive file. Is this the same thing, @KrisThielemans @evgueni-ovtchinnikov?

https://github.com/SyneRBI/Challenge24/blob/7d7942e3fc30f3751d03f805f3d30463d68593b4/src/SIRF_data_preparation/prepare_data.py#L22-L25

Which file should I use?

exception handling in petric.main

AssertionError is not caught (nor possibly other exceptions), so use:

-except Exception as exc:
-    print(exc)
+import traceback
+except (AssertionError, Exception):
+    traceback.print_exc()
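Note that AssertionError is already a subclass of Exception, so the substantive change is printing the full traceback; a minimal sketch (`run_submission` is a hypothetical stand-in for whatever petric.main executes):

import traceback

try:
    run_submission()  # hypothetical stand-in
except Exception:  # also catches AssertionError
    traceback.print_exc()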

new registration

Team Name

UCL-EWS

GitHub users

alexdenker

Terms

  • (Optional) I want to be eligible for prizes, and will make my code public after the challenge ends.
  • I agree to abide by the rules.

non-negativity constraint missing?

In the description of the optimization problem in the wiki, the non-negativity constraint is not
explicitly mentioned. I think most people will apply it automatically, but strictly speaking it
is in general not clear that the solution with the best cost is non-negative everywhere.

On the other hand, I think the RDP prior only makes sense for non-negative images, but
this is not obvious.

saved objectives.csv doesn't look good

The content looks like this:

iter,objective
[-1],[-30777588.921137523]
"[-1, 40]","[-30777588.921137523, -30430800.142905027]"
"[-1, 40, 80]","[-30777588.921137523, -30430800.142905027, -30381843.509056322]"
"[-1, 40, 80, 120]","[-30777588.921137523, -30430800.142905027, -30381843.509056322, -30321309.7592065]"
"[-1, 40, 80, 120, 160]","[-30777588.921137523, -30430800.142905027, -30381843.509056322, -30321309.7592065, -30265179.396964855]"

Clearly, we are writing it incorrectly.
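Presumably the intended format is one (iteration, objective) pair per row; a minimal sketch of writing it that way (the `iterations`/`objectives` names are illustrative):

import csv

iterations = [-1, 40, 80, 120, 160]  # e.g. accumulated so far
objectives = [-30777588.92, -30430800.14, -30381843.51, -30321309.76, -30265179.40]

with open("objectives.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["iter", "objective"])
    writer.writerows(zip(iterations, objectives))  # one pair per row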

New Registration

Team Name

MaGeZ

GitHub users

mehrhardt and gschramm

Terms

  • (Optional) I want to be eligible for prizes, and will make my code public after the challenge ends.
  • I agree to abide by the rules.

AttributeError: 'ListmodeToSinograms' object has no attribute 'prompts_and_randoms_from_listmode'

I built SIRF 3.7.0 with the branch SyneRBI/SIRF-SuperBuild#897

Executing https://github.com/SyneRBI/Challenge24/blob/nema-data/src/SIRF_data_preparation/prepare_data.py fails with the following AttributeError, which is indeed correct for ListmodeToSinograms (the method does not exist):

ERROR: ProjDataFromStream: error reading data
0.0

INFO: CListModeDataECAT8_32bit: opening file /home/jovyan/work/Challenge24/src/SIRF_data_preparation/../../../ChallengeData/PET/mMR/NEMA_IQ/20170809_NEMA_60min_UCL.l
Traceback (most recent call last):
  File "/home/jovyan/work/Challenge24/src/SIRF_data_preparation/prepare_data.py", line 25, in <module>
    prepare_challenge_data(data_path, sirf_data_path, challenge_data_path, intermediate_data_path, '20170809_NEMA_',
  File "/home/jovyan/work/Challenge24/lib/sirf_exercises/__init__.py", line 115, in prepare_challenge_data
    prompts, randoms = lm2sino.prompts_and_randoms_from_listmode(listmode_data, 0, 10, acq_data_template)
AttributeError: 'ListmodeToSinograms' object has no attribute 'prompts_and_randoms_from_listmode'

new registration

Team Name

Tomo-Unimib

GitHub users

ColomboMatte0

Terms

  • (Optional) I want to be eligible for prizes, and will make my code public after the challenge ends.
  • I agree to abide by the rules.

QualityMetrics whole/local (internal)

PETRIC/petric.py

Lines 108 to 120 in a9ac16c

def evaluate(self, test_im: STIR.ImageData) -> dict[str, float]:
    assert not any(self.filter.values()), "Filtering not implemented"
    test_im_arr = test_im.as_array()
    whole = {
        "RMSE_whole_object": np.sqrt(
            mse(self.ref_im_arr[self.whole_object_indices], test_im_arr[self.whole_object_indices])) / self.norm,
        "RMSE_background": np.sqrt(
            mse(self.ref_im_arr[self.background_indices], test_im_arr[self.background_indices])) / self.norm}
    local = {
        f"AEM_VOI_{voi_name}": np.abs(test_im_arr[voi_indices].mean() - self.ref_im_arr[voi_indices].mean()) /
        self.norm
        for voi_name, voi_indices in sorted(self.voi_indices.items())}
    return {**whole, **local}

However, the "background" ROI is generally also local, so this distinction makes little sense. It doesn't really matter of course, as this seems just a local implementation, not used anywhere else. Can just be closed without code changes therefore.

conda error when installing deps from environment.yaml

I added an environment.yml (see below) to install a few extra Python packages. Unfortunately, after pushing a tag, the workflow fails with the error (see here):

CondaValueError: could not parse 'name: PETRIC-MAGEZ' in: environment.yml

My environment.yml contains

name: PETRIC-MAGEZ
channels: [conda-forge]
dependencies:
- my_package_1
- my_package_2

I guess I don't understand what the name of the conda env should be.
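For what it's worth, `conda install` expects package specs rather than an environment file, which would explain the parse error; environment files are normally consumed with:

conda env update --file environment.yml   # update an existing environment
conda env create --file environment.yml   # or create a new one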

`import petric` reads all data and other comments

When experimenting on one data-set, it is convenient to import petric to get petric.get_data(). However, currently the import actually reads all (3) data-sets. That takes some time and a lot of memory (and will only get worse). This is due to

PETRIC/petric.py

Lines 171 to 181 in 5875054

if SRCDIR.is_dir():
    data_metrics_pairs = [
        (get_data(srcdir=SRCDIR / "Siemens_mMR_NEMA_IQ", outdir=OUTDIR / "mMR_NEMA"),
         [MetricsWithTimeout(outdir=OUTDIR / "mMR_NEMA", transverse_slice=72, coronal_slice=109)]),
        (get_data(srcdir=SRCDIR / "NeuroLF_Hoffman_Dataset", outdir=OUTDIR / "NeuroLF_Hoffman"),
         [MetricsWithTimeout(outdir=OUTDIR / "NeuroLF_Hoffman", transverse_slice=72)]),
        (get_data(srcdir=SRCDIR / "Siemens_Vision600_thorax",
                  outdir=OUTDIR / "Vision600_thorax"), [MetricsWithTimeout(outdir=OUTDIR / "Vision600_thorax")])]
else:
    log.warning("Source directory does not exist: %s", SRCDIR)
    data_metrics_pairs = [(None, [])]

I suggest converting data_metrics_pairs to a list of triples of srcdir, outdir and metrics (see the sketch at the end of this issue). This will also have the effect that the redirection

PETRIC/petric.py

Lines 155 to 156 in 5875054

_ = STIR.MessageRedirector(str(outdir / 'BSREM_info.txt'), str(outdir / 'BSREM_warnings.txt'),
                           str(outdir / 'BSREM_errors.txt'))

will write files in the relevant outdir when get_data is called (currently, all redirections will be in the folder for the last get_data call).
(By the way, the redirection filenames should not use the BSREM prefix.)

Also, I see no reason to have

PETRIC/petric.py

Lines 182 to 183 in 5875054

# first dataset
data, metrics = data_metrics_pairs[0]
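A minimal sketch of the suggested refactor, deferring get_data until each dataset is actually used (names follow the existing petric.py):

data_dirs_metrics = [
    (SRCDIR / "Siemens_mMR_NEMA_IQ", OUTDIR / "mMR_NEMA",
     [MetricsWithTimeout(outdir=OUTDIR / "mMR_NEMA", transverse_slice=72, coronal_slice=109)]),
    (SRCDIR / "NeuroLF_Hoffman_Dataset", OUTDIR / "NeuroLF_Hoffman",
     [MetricsWithTimeout(outdir=OUTDIR / "NeuroLF_Hoffman", transverse_slice=72)]),
    (SRCDIR / "Siemens_Vision600_thorax", OUTDIR / "Vision600_thorax",
     [MetricsWithTimeout(outdir=OUTDIR / "Vision600_thorax")])]

for srcdir, outdir, metrics in data_dirs_metrics:
    data = get_data(srcdir=srcdir, outdir=outdir)  # MessageRedirector now writes to this outdir
    ...  # run & evaluate on this dataset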

Additional scripts needed

  • create separate script SIRF_data_preparation/download_Siemens_mMR_NEMA_IQ.py to download/mMR_NEMA_IQ
    • download from zenodo (use zenodo_get)
    • copy mMR_template_span11.hs from sirf_data_path = os.path.join(examples_data_path('PET'), 'mMR')
  • create script SIRF_data_preparation/create_initial_images.py to run OSEM and compute kappa, saving to disk (OSEM_image.hv, kappa.hv); takes as arguments
    • a folder-name where all the data is
    • an image-name for image size (for NEMA IQ will be the mumap as in the BSREM notebook)
  • NEMA_IQ specific: run script to generate ROIs
  • create script SIRF_data_preparation/create_converged_image.py to run BSREM for a long time, see #16
    • compute metrics: CIL callback using TensorBoard

Prepared data (sinograms, OSEM_image, kappa, some plots) are at https://petric.tomography.stfc.ac.uk/data/

TODO

  • overall penalty weight is fixed to 1/700, which might not be OK for future data-sets. Save the value somehow (e.g. a simple text file) and read it from the script? (done in #16 as optionally reading a file)
  • Clean-up BSREM scripts https://github.com/SyneRBI/PETRIC/tree/BSREM_scripts - the 3 per-dataset scripts are identical aside from the slice_number for the plot and the num_subsets (should be parameterised?), see #16
    • merge common functionality into one file
    • use new callbacks & tensorboard logging
    • drop old-style interface of Algorithm (algo.x, algo.loss)?
  • End-to-end example
    • example_BSREM.ipynb
      • main.py aka main_BSREM.py (user-configurable) #19
      • petric.py (read-only, we overwrite any user changes)
  • another example but also showing ISTA etc
    • example_ISTA.ipynb, main_ISTA.py
  • Update wiki to give instructions on the notebook and getting data; create a status page (as we'll need to give some more updates later), saying that from today → added here https://github.com/SyneRBI/PETRIC/wiki/Recent-Updates
    • Registration is open
    • 3 example datasets are available (without VOIs), more to come
    • Docker build is edge (no release yet). Manual build is DEVEL_BUILD=ON.
    • Final threshold on metrics TBC
  • Insert "Edo's link" at https://github.com/SyneRBI/PETRIC/wiki
  • Make this repo public
  • Final consistency check
  • Announce

use mode="staggered" in partitioner of main_BSREM.py for consistency

Update1: The difference disappears if I use 1 instead of 7 subsets, so I guess the issue is related to the definition of the subsets. What is the easiest way to see which views are involved in a subset acquisition model / subset obj_func?

Update2: The issue is related to the mode of the partitioner. If set to staggered, the difference also vanishes for more than 1 subset.
It might be better to change the partitioner mode in main_BSREM.py here. main_ISTA.py already uses staggered.

Hi all,

to test if I set up the (data fidelity) subset objective functions correctly, I was comparing the multiplicative OSEM update from main_OSEM.py

$$ x^+ = \frac{x}{A^T 1} A^T \frac{y}{Ax + s} $$

with my additive OSEM update using the subset objective function gradient (see below).

$$ x^+ = x + \frac{x}{A^T 1} \nabla_x \log L(x) $$

Right now, I don't understand why the two give different updates.
The image below shows:

  • multiplicative OSEM update (left)
  • additive OSEM update (middle)
  • difference (on a scale of ±5% of the max of the updated image)

[image: test.png]

# assumed imports, not shown in the original snippet (locations may differ per SIRF version):
import sirf.STIR as STIR
from cil.optimisation.algorithms import Algorithm
from sirf.contrib.partitioner import partitioner
from sirf.contrib.partitioner.partitioner import partition_indices  # assumed location
from petric import Dataset

class Submission(Algorithm):
    """
    OSEM algorithm example.
    NB: In OSEM, the multiplicative term cancels in the back-projection of the quotient of measured & estimated data
    (so this is used here for efficiency). Note that a similar optimisation can be used for all algorithms using the Poisson log-likelihood.
    NB: OSEM does not use `data.prior` and thus does not converge to the MAP reference used in PETRIC.
    NB: this example does not use the `sirf.STIR` Poisson objective function.
    NB: see https://github.com/SyneRBI/SIRF-Contribs/tree/master/src/Python/sirf/contrib/BSREM
    """

    def __init__(
        self,
        data: Dataset,
        num_subsets: int = 7,
        update_objective_interval: int = 10,
        **kwargs
    ):
        """
        Initialisation function, setting up data & (hyper)parameters.
        NB: in practice, `num_subsets` should likely be determined from the data.
        This is just an example. Try to modify and improve it!
        """

        self.subset = 0
        self.x = data.OSEM_image.clone()

        #############################################################################

        self._data_sub, self._acq_models, self._obj_funs = partitioner.data_partition(
            data.acquired_data,
            data.additive_term,
            data.mult_factors,
            num_subsets,
            initial_image=data.OSEM_image,
        )
        # WARNING: modifies prior strength with 1/num_subsets (as currently needed for BSREM implementations)
        #data.prior.set_penalisation_factor(
        #    data.prior.get_penalisation_factor() / num_subsets
        #)
        #data.prior.set_up(data.OSEM_image)

        # for f in self._obj_funs:  # add prior evenly to every objective function
        #    f.set_prior(data.prior)

        #############################################################################

        self._acquisition_models = []
        self._prompts = []
        self._sensitivities = []

        # find views in each subset
        # (note that SIRF can currently only do subsets over views)
        views = data.mult_factors.dimensions()[2]
        partitions_idxs = partition_indices(
            num_subsets, list(range(views)), stagger=True
        )

        # for each subset: find data, create acq_model, and create subset_sensitivity (backproj of 1)
        for i in range(num_subsets):
            prompts_subset = data.acquired_data.get_subset(partitions_idxs[i])
            additive_term_subset = data.additive_term.get_subset(partitions_idxs[i])
            multiplicative_factors_subset = data.mult_factors.get_subset(
                partitions_idxs[i]
            )

            acquisition_model_subset = STIR.AcquisitionModelUsingParallelproj()
            acquisition_model_subset.set_additive_term(additive_term_subset)
            acquisition_model_subset.set_up(prompts_subset, self.x)

            subset_sensitivity = acquisition_model_subset.backward(
                multiplicative_factors_subset
            )
            # add a small number to avoid NaN in division
            subset_sensitivity += subset_sensitivity.max() * 1e-6

            self._acquisition_models.append(acquisition_model_subset)
            self._prompts.append(prompts_subset)
            self._sensitivities.append(subset_sensitivity)

        super().__init__(update_objective_interval=update_objective_interval, **kwargs)
        self.configured = True  # required by Algorithm

    def update(self):
        x_cur = self.x

        denom = self._acquisition_models[self.subset].forward(x_cur) + 1e-4
        # divide measured data by estimate (ignoring mult_factors!)
        quotient = self._prompts[self.subset] / denom

        # mult. OSEM update
        x1 = x_cur * (
            self._acquisition_models[self.subset].backward(quotient)
            / self._sensitivities[self.subset]
        )

        # additive OSEM update using gradient of subset objective function
        x2 = x_cur + (x_cur / self._sensitivities[self.subset]) * self._obj_funs[
            self.subset
        ].gradient(x_cur)

        d = x2.as_array() - x1.as_array()
        sl = x1.shape[0] // 2
        vmax = x1.as_array()[sl, :, :].max()

        import matplotlib.pyplot as plt

        fig, ax = plt.subplots(1, 3, figsize=(15, 5), tight_layout=True)
        ax[0].imshow(x1.as_array()[sl, :, :], vmin=0, vmax=vmax, cmap="Greys")
        ax[1].imshow(x2.as_array()[sl, :, :], vmin=0, vmax=vmax, cmap="Greys")
        ax[2].imshow(d[sl, :, :], vmin=-0.05 * vmax, vmax=0.05 * vmax, cmap="bwr")
        fig.show()
        fig.savefig("test.png")

Move this repo to SyneRBI

For whatever reason I created this repo in the TomographicImaging organisation, but it belongs to the SyneRBI one.

Transfer ownership.

Neighborhood definition

The wiki says:

$N_i$ the neighbourhood of voxel $i$ (here taken as the 8 nearest neighbours in the 3 directions),

What does that mean? I am a bit confused by the definition.
Do we have 8×3 = 24 out of 3×3×3 = 27 nearest neighbors?

A small drawing / sketch could help to clarify that.

Georg

main.py needs updating

This currently doesn't make any sense yet. For instance, it subclasses from GD.

PETRIC/main.py

Line 21 in bd28698

class Submission(GD):

I'm rather lost here. Possibly the Submission class should contain the reading of the data and construction of the prior; see #16 (comment).

new registration

Team Name

IMT-NucI

GitHub users

romarjor

Terms

  • (Optional) I want to be eligible for prizes, and will make my code public after the challenge ends.
  • I agree to abide by the rules.

update_objective_interval = 10

In all demo algorithms, update_objective_interval = 10.

def __init__(self, data: Dataset, num_subsets: int = 7, update_objective_interval: int = 10):

  1. Are we allowed to change that value?
  2. Does that value have an impact on the final evaluation? E.g., is there a wall time penalty for calculating the metrics?

definition of RMSE in wiki vs metrics

I think it is slightly confusing that the wiki says:

RMSE / mean(BG) should be < 0.01

but the corresponding metric in tensorboard
is called:

RMSE

Technically, the definition in the wiki is correct (and could be called "normalized RMSE" or NRMSE).
But I understand that changing the metric names in tensorboard is probably not trivial anymore.

However, it should be clear that the RMSE shown in tensorboard is RMSE / mean(BG) and should be < 0.01.
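That is, the value shown in tensorboard is effectively

$$ \mathrm{RMSE}_{\mathrm{shown}} = \frac{\sqrt{\mathrm{MSE}(\mathrm{recon}, \mathrm{ref})}}{\mathrm{mean}(\mathrm{ref}, \mathrm{BG})} < 0.01, $$

matching the division by self.norm in QualityMetrics.evaluate quoted earlier.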

Finalise challenge data

  • Sort out final software issues and release STIR/CIL/SIRF/SIRF-SuperBuild (or at least tag, but far easier for people if we release)
  • Sort out VOI situation. People generally didn't follow instructions and have given VOIs in separate files, with different names. Our instructions asked them to give the VOIs in a single file with "labels", which I think might be the easiest way to handle it, depending on our callback metrics (Vision600_thorax VOIs currently need manual header manipulation).
  • Run BSREM long enough for all data-sets and upload to share/petric
  • Run e.g. preconditioned ISTA with stochastic gradient (or even SAGA), saving images for comparison
  • Need to be able to generate metrics on saved images (i.e. after running BSREM long enough, we get the converged image; then run an algo, save images and loss function, and check how the metrics and loss function evolve).
  • More data-sets

new registration team Leuven

Team Name

TeamLeuven1

GitHub users

No response

Terms

  • (Optional) I want to be eligible for prizes, and will make my code public after the challenge ends.
  • I agree to abide by the rules.

evaluation phase: skip objective

For the timing of the final evaluation:

  • run(..., update_objective_interval=inf)
    • so we don't slow down algorithms which don't actually need the explicit objective
  • set metrics interval=1
    • so we get fine-grained progress (doesn't contribute to reported time)

submission of several algorithms (using branches)

If we want to submit several algorithms, the README says they have to be in the same repo, but on different branches.
Does that mean that when we create a tag, all remote branches are checked out?

Or do we have to "mark" remote branches that should be run and evaluated?

And is the branch name reflected in the leaderboard?

clarify run-time

Brought up by @ajreader: we need to clarify what is included in the measured run-time, i.e. that loading of initial data (get_data) and our callbacks (including metrics and saving) are not.

new registration

Team Name

JerryHanson

GitHub users

No response

Terms

  • (Optional) I want to be eligible for prizes, and will make my code public after the challenge ends.
  • I agree to abide by the rules.

framework v1

one example notebook

  1. download data https://github.com/TomographicImaging/SyneRBI-Challenge/blob/nema-data/src/SIRF_data_preparation/try_data_preparation.py
  2. run NEMA reconstruction (BSREM Algorithm) https://github.com/SyneRBI/SIRF-Contribs/blob/master/src/notebooks/BSREM_illustration.ipynb
  3. get ROIs https://github.com/SyneRBI/SIRF-Contribs/blob/master/src/Python/sirf/contrib/NEMA/generate_nema_rois.py
  4. run metrics https://github.com/TomographicImaging/Hackathon-000-Stochastic-QualityMetrics/blob/cil_callback/img_quality_cil_stir/image_quality_callback.py
  5. dummy Algorithm for users to implement custom reconstruction with

submissions & leaderboard

  1. template repo (this one)
  2. README instructions
    • must use SIRF projectors
    • must use CIL Algorithm
    • max binary size 1GB (e.g. pretrained CNN weights)
  3. one private fork per team (maybe hosted by us + we manually give write access to GH usernames)
  4. GH workflow to evaluate metrics & update leaderboard
    • on: push: quick basic tests
    • on: tag: full run one dataset
    • after submission deadline: full run on more datasets (maybe only on top 30%)

ideas/TODO

Need to provide FOV mask as part of the input data

Dataset as returned by petric.get_data should have another member FOV_mask, which is a sirf.STIR.ImageData which is zero where the solution has to be zero, and 1 where the solution has to be non-negative.
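A sketch of how such a member could be consumed in a submission's update step (FOV_mask is the proposed member, not yet part of Dataset):

# hypothetical: data.FOV_mask as proposed above (0 outside the FOV, 1 inside)
self.x = self.x * data.FOV_mask  # force the solution to zero outside the FOV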

Regularization level

How are the levels of regularization for ranking chosen (in the "training" and "test" data sets)?
In other words, is it important that the submitted method converges quickly for a wide
range of (reasonable) regularization levels?

monitor / verify usage of local GPU

Hi all,

I was creating a small test script to estimate the computing time needed to
evaluate the complete data forward operator and its adjoint (see below),
based on the main_OSEM example.

When I run the script, which uses STIR.AcquisitionModelUsingParallelproj,
nvidia-smi -l claims that my GPU is not used at all. Is that to be expected?
I am running the docker container as mentioned, with --gpus all.

For the mMR and vision600, I obtain the following timing results for the
complete fwd / backward projections:

mMR: t_fwd = 6.6s, t_back = 10.1s
Vision600: t_fwd = 43s, t_back = 60s

which indeed seem long to me when using parallelproj and my A4500.

# assumed setup, not shown in the original snippet:
from time import time
import sirf.STIR as STIR
from petric import get_data
from sirf.contrib.partitioner.partitioner import partition_indices  # assumed location

num_subsets = 7  # illustrative values
num_rep = 5
# data_dirs_metrics: list of (srcdir, outdir, metrics) triples

for srcdir, outdir, metrics in data_dirs_metrics:
        data = get_data(srcdir=srcdir, outdir=outdir)

        x = data.OSEM_image.clone()

        # find views in each subset
        # (note that SIRF can currently only do subsets over views)
        views = data.mult_factors.dimensions()[2]
        partitions_idxs = partition_indices(
            num_subsets, list(range(views)), stagger=True
        )

        # for each subset: find data, create acq_model, and create subset_sensitivity (backproj of 1)
        i = 0
        prompts_subset = data.acquired_data.get_subset(partitions_idxs[i])
        additive_term_subset = data.additive_term.get_subset(partitions_idxs[i])
        multiplicative_factors_subset = data.mult_factors.get_subset(partitions_idxs[i])

        acquisition_model_subset = STIR.AcquisitionModelUsingParallelproj()
        acquisition_model_subset.set_additive_term(additive_term_subset)
        acquisition_model_subset.set_up(prompts_subset, x)

        print(f"Timing forward projection - for 1 out of {num_subsets} subsets")
        t1 = time()
        for i in range(num_rep):
            x_fwd = acquisition_model_subset.forward(x)
        t2 = time()

        t_fwd_subset = (t2 - t1) / num_rep
        print(f"t fwd subset: {t_fwd_subset:.3f} s")
        print(f"t fwd: {num_subsets*t_fwd_subset:.3f} s")

        print(f"\nTiming back projection - for 1 out of {num_subsets} subsets")
        t1 = time()
        for i in range(num_rep):
            y_back = acquisition_model_subset.backward(multiplicative_factors_subset)
        t2 = time()

        t_back_subset = (t2 - t1) / num_rep
        print(f"t back subset: {t_back_subset:.3f} s")
        print(f"t back: {num_subsets*t_back_subset:.3f} s")

CI test run

Hi all,

I just registered and @casperdcl created a private repo for me :)

To test the CI (and the leaderboard ;) I just pushed a dummy tag.

But the CI run fails with this error message.

2024-07-01T15:14:57.2896052Z Current runner version: '2.317.0'
2024-07-01T15:14:57.2902584Z Runner name: 'synerbi@stfc'
2024-07-01T15:14:57.2903428Z Runner group name: 'Default'
2024-07-01T15:14:57.2904344Z Machine name: 'synerbi'
2024-07-01T15:15:02.0986374Z ##[group]Run petric
2024-07-01T15:15:02.0986969Z �[36;1mpetric�[0m
2024-07-01T15:15:02.1003381Z shell: /usr/bin/bash -e {0}
2024-07-01T15:15:02.1003895Z ##[endgroup]
2024-07-01T15:15:02.7167204Z start gadgetron
2024-07-01T15:15:02.7169997Z /opt/SIRF-SuperBuild /w
2024-07-01T15:15:02.7171157Z /w
2024-07-01T15:15:06.6818484Z 
2024-07-01T15:15:06.6836771Z WARNING: RadioNuclideDB::get_radionuclide: unknown modality. Returning "unknown" radionuclide.
2024-07-01T15:15:06.6921485Z 
2024-07-01T15:15:06.6922930Z WARNING: RadioNuclideDB::get_radionuclide: unknown modality. Returning "unknown" radionuclide.
2024-07-01T15:15:06.7536695Z Traceback (most recent call last):
2024-07-01T15:15:06.7537740Z   File "/opt/conda/lib/python3.10/pathlib.py", line 1175, in mkdir
2024-07-01T15:15:06.7549269Z     self._accessor.mkdir(self, mode)
2024-07-01T15:15:06.7556818Z FileNotFoundError: [Errno 2] No such file or directory: '/o/logs/TeamLeuven1/v0.0/mMR_NEMA'
2024-07-01T15:15:06.7557779Z 
2024-07-01T15:15:06.7558639Z During handling of the above exception, another exception occurred:
2024-07-01T15:15:06.7559363Z 
2024-07-01T15:15:06.7559759Z Traceback (most recent call last):
2024-07-01T15:15:06.7560773Z   File "/opt/conda/lib/python3.10/pathlib.py", line 1175, in mkdir
2024-07-01T15:15:06.7561829Z     self._accessor.mkdir(self, mode)
2024-07-01T15:15:06.7563316Z FileNotFoundError: [Errno 2] No such file or directory: '/o/logs/TeamLeuven1/v0.0'
2024-07-01T15:15:06.7564162Z 
2024-07-01T15:15:06.7564995Z During handling of the above exception, another exception occurred:
2024-07-01T15:15:06.7567213Z 
2024-07-01T15:15:06.7567523Z Traceback (most recent call last):
2024-07-01T15:15:06.7568316Z   File "/w/petric.py", line 158, in <module>
2024-07-01T15:15:06.7569383Z     [MetricsWithTimeout(outdir=OUTDIR / "mMR_NEMA", transverse_slice=72, coronal_slice=109)]),
2024-07-01T15:15:06.7570442Z   File "/w/petric.py", line 85, in __init__
2024-07-01T15:15:06.7571194Z     SaveIters(outdir=outdir),
2024-07-01T15:15:06.7571909Z   File "/w/petric.py", line 36, in __init__
2024-07-01T15:15:06.7572778Z     self.outdir.mkdir(parents=True, exist_ok=True)
2024-07-01T15:15:06.7573728Z   File "/opt/conda/lib/python3.10/pathlib.py", line 1179, in mkdir
2024-07-01T15:15:06.7574700Z     self.parent.mkdir(parents=True, exist_ok=True)
2024-07-01T15:15:06.7575610Z   File "/opt/conda/lib/python3.10/pathlib.py", line 1179, in mkdir
2024-07-01T15:15:06.7576518Z     self.parent.mkdir(parents=True, exist_ok=True)
2024-07-01T15:15:06.7577580Z   File "/opt/conda/lib/python3.10/pathlib.py", line 1175, in mkdir
2024-07-01T15:15:06.7578477Z     self._accessor.mkdir(self, mode)
2024-07-01T15:15:06.7579482Z PermissionError: [Errno 13] Permission denied: '/o/logs/TeamLeuven1'
2024-07-01T15:15:07.4173421Z ##[error]Process completed with exit code 1.

can't find challenge data

I downloaded the GitHub repository of the challenge and a docker container (docker run --rm -it -v data:/mnt/share/petric:ro ghcr.io/synerbi/sirf:edge). NB: I noticed that the usage was much simpler than described by David here. Is this video outdated now?

I attached the image to VS Code. Somehow the preinstalled conda environments didn't work with my interactive Python sessions. This was "easily" fixed by creating a new environment and installing some dependencies (numpy, tensorboardx, deprecation, cil).

When running python petric.py, I received an error which I think is due to the data being missing:

Traceback (most recent call last):
  File "/home/jovyan/work/PETRIC/petric.py", line 139, in <module>
    get_data(srcdir=SRCDIR / "Siemens_mMR_NEMA_IQ", outdir=OUTDIR / "mMR_NEMA")),
  File "/home/jovyan/work/PETRIC/petric.py", line 123, in get_data
    acquired_data = STIR.AcquisitionData(str(srcdir / 'prompts.hs'))
  File "/opt/SIRF-SuperBuild/INSTALL/python/sirf/STIR.py", line 1206, in __init__
    check_status(self.handle)
  File "/opt/SIRF-SuperBuild/INSTALL/python/sirf/Utilities.py", line 451, in check_status
    raise error(errorMsg)
sirf.Utilities.error: ??? "'Error opening file /mnt/share/petric/Siemens_mMR_NEMA_IQ/prompts.hs\\n' exception caught at line 419 of /opt/SIRF-SuperBuild/sources/SIRF/src/xSTIR/cSTIR/cstir.cpp; the reconstruction engine output may provide more information"
