
stir's Introduction

STIR: Software for Tomographic Image Reconstruction.


STIR is Open Source software for use in tomographic imaging. Its aim is to provide a Multi-Platform Object-Oriented framework for all data manipulations in tomographic imaging. Currently, the emphasis is on (iterative) image reconstruction in PET and SPECT, but other application areas and imaging modalities can and might be added.

STIR is the successor of the PARAPET software library which was the result of a (European Union funded) collaboration between 6 different partners. It has since received contributions from numerous institutions and individuals. Check the CREDITS.

Please check the STIR website at http://STIR.sourceforge.net for more information.

This software is distributed under an open source license, see LICENSE.txt for details.

stir's People

Contributors

alalehrn, alexjazz008008, anderbiguri, ashgillman, bathomas, carterbox, casperdcl, danieldeidda, dvolgyes, eliseemond, emikhay, evgueni-ovtchinnikov, gefeichen, gfardell, gschramm, htunne, kristhielemans, mastergari, mehrhardt, nicolejurjew, nikefth, ottavia, paskino, pw351, rebeccagillen, robbietuk, roemicha, samdporter, tniknejad, web-flow


stir's Issues

inconsistent return value of ProjDataInfo::get_bin() and CListEvent::get_bin() when out-of-range

ProjDataInfo::get_bin sets bin_value to -1 if the requested bin is outside the allowed range (and says so in its doxygen).

Some derived CListEvent*::get_bin() implementations set bin_value=0 if out-of-range, others set it to -1. (CListEvent's doxygen only says it will return bin_value<=0, so both are technically correct.)

Luckily, LmToProjData::get_bin checks whether bin_value<=0, so both options are OK.

Nevertheless, it would be better to be consistent. A value of 0 might make most sense as LmToProjData::get_bin will set it to zero (as it has to be able to return negative numbers to cope with delayeds). This would be inconsistent with ProjDataInfo::get_bin's documentation though so might affect someone.

set_get_position fails for ROOT listmode files

Going back to a previous location in a ROOT file currently fails if you first read to the end of the file (bug found with @eliseemond ).

This currently probably only affects you if you use LmToProjData with num_segments_in_memory to reduce memory usage. However, it also breaks the TOF unlister for ROOT (which is currently under development).

Scatter correction cannot handle sinograms in which a few segments have no data

During reconstruction of list-mode data with STIR, I first unlisted the list-mode data to extract an emission sinogram with no axial compression; the extracted sinogram had no values for some segments. This led to an error in scatter correction when running upsample_and_fit_single_scatter with:
--min-scale-factor
--max-scale-factor
--remove-interleaving 0
--output-filename myfile
--data-to-fit emission_sinogram
--data-to-scale normscatter2d
--weights emission_sinogram.hs
--norm norm3d.par

The error reads
ERROR: Problem at segment -63, axial pos 0 in finding sinogram scaling factor.
Weighted data in denominator 0 too small compared to total in sinogram 13981.
Adjust weights?
terminate called after throw

I think this is due to zero values in segments -63 and 63.
I am able to work around this by using a sinogram with axial compression, but then I cannot do a no-span reconstruction.

InputStreamFromROOTFileForCylindricalPET uses wrong conventions for blocks and buckets?

Terminology in STIR (from Scanner.h) :

      \li \c crystal the smallest detection unit
      \li \c block several crystals are grouped in a block, this can be in
          3 dimensions (see layer). This information could be useful for finding the
          geometry of the scanner, but we would plenty more
          size info presumably.
      \li \c layer Some scanners have multiple layers of detectors to give
          Depth Of Interaction information
      \li \c bucket several \c blocks send detected events to one \c bucket.
          This has consequences for the dead-time modelling. For instance,
          one bucket could have much more singles than another, and hence
          presumably higher singles-dead-time.
      \li \c singles_unit (non-standard terminology)
          Most scanners report the singles detected during the acquisition.
          Some scanners (such as GE scanners) report singles for every crystal,
          while others (such as CTI scanners) give only singles for a 
          collection of blocks. A \c singles_unit is then a set of crystals
          for which we can get singles rates.

In GATE CylindricalPET terminology, this seems to lead naturally to

  • Module = bucket
  • submodule = block

with number of modules per RSector=1. (I think GATE would be able to have one extra level compared to STIR).

This seems consistent with InputStreamFromROOTFileForCylindricalPET::get_num_axial_crystals_per_singles_unit. However, this is different from other functions implemented in InputStreamFromROOTFileForCylindricalPET.inl, e.g.

int get_num_dets_per_ring() const
{
    return static_cast<int>( this->crystal_repeater_z * this->module_repeater_z *
                             this->submodule_repeater_z);
}
int get_num_axial_blocks_per_bucket_v() const
{
    return this->module_repeater_z;
}
int get_num_axial_crystals_per_block_v() const
{
    return static_cast<int>(this->crystal_repeater_z *
       this->module_repeater_z);
}

This seems to lead to a strange arrangement. I think the correct way is

int get_num_axial_blocks_per_bucket_v() const
{
    return this->submodule_repeater_z;
}
int get_num_axial_crystals_per_block_v() const
{
    return static_cast<int>(this->crystal_repeater_z );
}

@NikEfth what do you think?

Of course, I wouldn't know what to do with the RSector stuff.

CMake Find ROOT

we're currently using our own FindCERN_ROOT.cmake. However, ROOT supplies its own, see
https://root.cern.ch/how/integrate-root-my-project-cmake

Possibly we should use that file. Trouble is that you need to know where ROOT is before you can use it. In their example they rely on the env variable ROOTSYS. Not sure if that will always be defined. @NikEfth, do you know if this variable is required?

Request: get_total_number_of_events() from listmode files.

A simple function to be added to the CListModeData interface, which will return the total number of events.
I will implement it for ROOT files; for the rest of the listmode data classes it will temporarily throw an error.

I am not adding this feature directly in the listmode reconstruction branch, because I regard it more as a data feature which should have an issue id until the function has been implemented for all listmode classes.

CMake should give error if building python without shared libs

Currently, when configuring, you can select BUILD_SWIG_PYTHON and neglect to select BUILD_SHARED_LIBS. This lets you build all of STIR (up to linking) before you get an error saying you need to build shared libraries. CMake should instead give the error at configure time; this would save some time.

STIR's include-path should be first

At present, include_directories for some dependencies (such as Boost) occurs before the one for STIR (which is in stir_dirs.cmake). This means that if a dependency is found in the same location as an older version of STIR, unexpected compilation problems (or segmentation faults) can occur.

Presumably, the include_directories statement from stir_dirs.cmake needs to be moved first.

See also SyneRBI/SIRF-SuperBuild#28

Weird order of 3D values in Python

When creating a FloatVoxelsOnCartesianGrid with this input file, we get

>>> volume = stir.FloatVoxelsOnCartesianGrid.read_from_file('initial.hv')
>>> volume.shape()
(15, 64, 64)

But the file says that the shape should be x = 64, y = 64, z = 15.

Some testing shows that almost all STIR methods give the results in [z, y, x] order. Is this intended?

speed-up building using object libraries

Since v2.8.8, CMake supports "libraries" with object files, see https://cmake.org/Wiki/CMake/Tutorials/Object_Library

This would enable us to prevent recompilation of the registry files for every executable. We might even use this to reduce the number of libraries and therefore solve #6.

It isn't entirely obvious how to do this while being backwards compatible though. We currently rely on the CMake variable STIR_REGISTRIES to let anyone add to the list of registries (via the STIR_LOCAL mechanism), and that variable is exported as well (while object libraries cannot be exported).

Cannot find numpy

When compiling I get an error:

/.../STIR/build/src/swig/stirPYTHON_wrap.cxx:4212:10: fatal error:
    'numpy/arrayobject.h' file not found

This is because you only look for Python and not NumPy. When building STIR with Python using the Anaconda Python distribution, the Python includes are in

/home/name/anaconda2/include/python2.7

while the numpy includes are located in:

/home/name/anaconda2/lib/python2.7/site-packages/numpy/core/include

You need to add a "find numpy" CMake command that also finds the NumPy headers. We have one in odl, but you could also write your own.

SAFIRCListmodeInputFileFormat::can_read should not write errors

At present, when a non-SAFIR listmode file is used, SAFIRCListmodeInputFileFormat::can_read will write warnings/error messages when it tries to parse the non-SAFIR listmode file.

suggested solution: modify actual_do_parsing to

bool actual_do_parsing(std::istream& input, bool write_warning = true)

and pass it to the parse call. (Obviously for all signatures). Then call it appropriately in can_read.

@jafische could you have a look?

CMake ROOT variable

CMake variable HAS_GLOBAL_ROOT: can we rename this to HAS_GLOBAL_CERN_ROOT, or HAS_CERN_ROOT_CONFIG? Or even better CERN_ROOT_CONFIG (no reason to have the "HAS").

Otherwise the user might think this ROOT variable is something else entirely.

bug in OPENMP in BinNormalisationFromProjData

There is a race condition when specifying a normalisation from projdata. BinNormalisationFromProjData::apply uses get_related_viewgrams, which is not thread-safe.

It would be best to fix this by making STIR IO thread-safe, but that's a lot of work...

This bug affects upsample_and_fit_single_scatter when using a normalisation file

listmode files can start with non-zero time-tag

Timing is currently in terms of the value of the time tag. For Siemens files, the first time tag is usually zero, but in ~1% of the cases not. GE files never start with zero. It would therefore be better to have timing relative to the value of the first time tag. Current work-around: use list_lm_events to find first time tag and create initial “skip-frame” in the frame-defs file.

Number of bins in PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByBin

Listmode reconstruction doesn't support arc-corrected data.
Yet, in the initialisation of the proj_data_info_cyl_uncompressed_ptr,
in PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByBin,
the default_num_arccorrected_bins is used.

I am not sure if this is a bug or whether I have not understood something correctly.

Line:

Link failure with shared libraries on Ubuntu

When building on Ubuntu using GCC, I get the following error:

[ 86%] Building CXX object src/test/CMakeFiles/test_ArcCorrection.dir/test_ArcCorrection.cxx.o
Linking CXX executable test_ArcCorrection
../buildblock/libbuildblock.so: undefined reference to `stir::InputFileFormatRegistry<stir::DiscretisedDensity<3, float> >::find_factory(stir::FileSignature const&, std::string const&) const'
../buildblock/libbuildblock.so: undefined reference to `stir::Array<1, std::complex<float> > stir::fourier_for_real_data<1, float>(stir::Array<1, float> const&, int)'
../buildblock/libbuildblock.so: undefined reference to `stir::Array<3, float> stir::inverse_fourier_for_real_data_corrupting_input<3, float>(stir::Array<3, std::complex<float> >&, int)'
../buildblock/libbuildblock.so: undefined reference to `stir::write_basic_interfile_PDFS_header(std::string const&, std::string const&, stir::ProjDataFromStream const&)'
../buildblock/libbuildblock.so: undefined reference to `stir::OutputFileFormat<stir::DiscretisedDensity<3, float> >::write_to_file(std::string const&, stir::DiscretisedDensity<3, float> const&) const'
../buildblock/libbuildblock.so: undefined reference to `stir::InputFileFormatRegistry<stir::DynamicDiscretisedDensity>::default_sptr()'
../buildblock/libbuildblock.so: undefined reference to `stir::InputFileFormatRegistry<stir::DiscretisedDensity<3, float> >::default_sptr()'
../buildblock/libbuildblock.so: undefined reference to `vtable for stir::InterfilePDFSHeader'
../buildblock/libbuildblock.so: undefined reference to `stir::InterfileHeader::get_exam_info_ptr() const'
../buildblock/libbuildblock.so: undefined reference to `stir::OutputFileFormat<stir::DiscretisedDensity<3, float> >::default_sptr()'
../buildblock/libbuildblock.so: undefined reference to `stir::InputFileFormatRegistry<stir::DynamicDiscretisedDensity>::find_factory(stir::FileSignature const&, std::string const&) const'
../buildblock/libbuildblock.so: undefined reference to `stir::InterfilePDFSHeader::InterfilePDFSHeader()'
../buildblock/libbuildblock.so: undefined reference to `vtable for stir::InterfileHeader'
../buildblock/libbuildblock.so: undefined reference to `stir::Array<2, std::complex<float> > stir::fourier_for_real_data<2, float>(stir::Array<2, float> const&, int)'
../buildblock/libbuildblock.so: undefined reference to `stir::Array<2, float> stir::inverse_fourier_for_real_data_corrupting_input<2, float>(stir::Array<2, std::complex<float> >&, int)'
../buildblock/libbuildblock.so: undefined reference to `void stir::display<float, float, char*>(stir::Array<3, float> const&, stir::VectorWithOffset<float> const&, stir::VectorWithOffset<char*> const&, double, char const*, int)'
../buildblock/libbuildblock.so: undefined reference to `void stir::display<float, float, char const*>(stir::Array<3, float> const&, stir::VectorWithOffset<float> const&, stir::VectorWithOffset<char const*> const&, double, char const*, int)'
../buildblock/libbuildblock.so: undefined reference to `stir::Array<3, std::complex<float> > stir::fourier_for_real_data<3, float>(stir::Array<3, float> const&, int)'
../buildblock/libbuildblock.so: undefined reference to `stir::read_interfile_PDFS(std::string const&, std::_Ios_Openmode)'
../buildblock/libbuildblock.so: undefined reference to `stir::is_interfile_signature(char const*)'
../buildblock/libbuildblock.so: undefined reference to `stir::Array<1, float> stir::inverse_fourier_for_real_data_corrupting_input<1, float>(stir::Array<1, std::complex<float> >&, int)'
../buildblock/libbuildblock.so: undefined reference to `stir::InterfileHeader::get_exam_info_sptr() const'
../buildblock/libbuildblock.so: undefined reference to `stir::InterfileHeader::InterfileHeader()'
collect2: error: ld returned 1 exit status
make[2]: *** [src/test/test_ArcCorrection] Error 1
make[1]: *** [src/test/CMakeFiles/test_ArcCorrection.dir/all] Error 2
make: *** [all] Error 2

Is this a known error?

keyword parsing in .hroot files

CListModeDataROOT initialises the scanner's num_rings and num_detectors_per_ring from the module et al keywords (read by InputStreamFromROOTFileForCylindricalPET et al.), but it initialises the max_ring_difference and num_views parameters for its ProjDataInfoCylindricalNoArcCorr object from independent num_rings and num_detectors_per_ring members (parsed from the corresponding keywords in the .hroot).

This seems very dangerous to me. I think we should remove the explicit keywords from the .hroot file.

@NikEfth, I think this is a left-over from the (currently disabled) code to do axial compression and view-mashing directly when reading the ROOT-file, but that should use different keywords in any case (if we ever enable it, I don't really see why we would, except for testing of the list-mode recon).

Problem while re-reading an RDF file when EOF was reached

Currently, if one wants to limit the number of segments while reading the entire listmode file, some events might not be stored/displayed (via lm_to_projdata, for example). This is crucial for the future addition of the time-of-flight capability, where the listmode file will need to be re-read multiple times.

ProjDataFromStream::set_viewgram etc followed by reading the file can fail

Currently, the stream will only get flushed when the ProjData object gets destructed. This leads to very surprising problems when using Python/MATLAB.

Example:

  1. create an object (e.g. ScatterSimulation) that contains a ProjData object that it writes to.
  2. run its process_data function to create the ProjData
  3. try to read the ProjData from MATLAB/Python (or elsewhere)
    This silently fails (tested on MacOS by @ludovicabrusaferri): ScatterSimulation::process_data wrote the file, but not all content is on disk yet. The result was that not all data in the new ProjData object were filled. (Note that this is surprising; I would have expected the read to fail.)

This can currently only be resolved by quitting Python/MATLAB, by deleting the ScatterSimulation object manually, or by using a ProjDataInMemory object (although presumably even that isn't guaranteed to work).

Recommended solution: flush the stream after every set_viewgram et al. This could slow down IO a bit unfortunately, but the current behaviour is highly undesirable.

move ExamData to buildblock

it is currently in IO.

(arguably best location would be data_buildblock but that will need a lot of work)

Installing the python bindings

Currently the Python bindings are "installed" by copying them to a pre-designated path. In Python, the general pattern is to use setuptools, distutils or something similar to let Python do this automatically.

In my forked repository I made a commit adler-j@a9a6388 that does exactly this. You can use it if you want.

GATE support

I was trying out the STIR's

run_root_GATE.sh

script in recon_test_examples. First, there is an error in the script: it fails to recognize the ROOT flag given out by --input-formats of lm_proj_data. Running lm_proj_data by hand, I get the error that the supplied file is not a ROOT file (while ROOT opens it without a problem). More specifically, I think that CListModeDataROOT.cxx does the incorrect thing when verifying that the input file is of ROOT type: when comparing the first four bytes read from the file, it branches on the equality rather than the inequality of the test string (to be confirmed).

It might be good to correct this. I can do it, but I am not a member of the team yet :-)

Remove set additive and multiplicative functions from iterative reconstruction

virtual void set_additive_proj_data_sptr(const shared_ptr&);
virtual void set_normalisation_sptr(const shared_ptr&);

IterativeReconstruction has no clue about additive data nor BinNormalisation… they are implemented in terms of

this->objective_function_sptr->set_additive_proj_data_sptr(arg);

but GeneralisedObjectiveFunction doesn't have a clue about these either…

So, these additions have made GeneralisedObjectiveFunction and IterativeReconstruction really rather specific to our problems.

Question is if we need them…
Kris

replace std::auto_ptr with std::unique_ptr

std::auto_ptr is deprecated, so we need to replace it. The only places where auto_ptr occurs in the STIR API are:

 std::auto_ptr<DataT> InputFileFormat::read_from_file(…)
 std::auto_ptr<DataT> read_from_file(…)
 std::auto_ptr<SymmetryOperation>
    DataSymmetriesForBins::find_symmetry_operation_from_basic_bin(Bin&) const;
 std::auto_ptr<SymmetryOperation>
    DataSymmetriesForDensels_PET_CartesianGrid::find_symmetry_operation_from_basic_densel(Densel&);

When doing this change, I’m also planning to remove the following class member:

DiscretisedDensity * DiscretisedDensity::read_from_file(const string& filename)

This was there just for backwards compatibility. If you are still using that, you’ll need to replace it with

read_from_file<DiscretisedDensity>(…)

These changes will break your own code if you use any of these functions of course. You should normally also just be able to replace auto_ptr with unique_ptr.

For compatibility with old compilers, I will put a

#ifdef BOOST_NO_CXX11_SMART_PTR
  #define unique_ptr auto_ptr
#endif

in stir/common.h. However, in a future STIR release, we will give up on non-C++11 compilers.

LmToProjData num_events_to_store

The new handling of num_events_to_store isn't quite consistent. stored_num_events is still signed (and I think it's best to keep it that way, as delayeds might be subtracted). I therefore think that we should keep num_events_to_store signed long as well. That would still allow a large number of events (~2.1×10^9); going to unsigned buys us only a factor of 2.

If we really think this number is too small, we could use long long on systems that support it. Does anyone think this is necessary?

Or we could use double (which can represent a large range of integers exactly), but I find that a bit strange and scary: passing a really large number will lead to an infinite loop, as the counter won't increment anymore after a while.

As an additional benefit reverting to a signed type would allow us to avoid backwards-compatibility problems with num_events_to_store=-1 (the previous default).

@NikEfth, what do you think?

ExamData

  • default constructor of ExamData needs to initialise the shared_ptr. @NikEfth you've fixed this in the scatter PR, but I presume this will take more time before we merge it? Best to create a separate PR for ExamData then. (After acceptance of that one, you'll have to merge master back onto scatter, resolving the conflict presumably).
  • ExamData shouldn't be in IO but in buildblock (or potentially data_buildblock, but that'd work only if we'd move ProjData et al to data_buildblock, which we won't do right now...)

LmToProjData crash with arc-corrected template

Most (but not all) scanners have list-mode data that corresponds to non-arccorrected data. Currently, the code crashes (with a segmentation fault) when given a wrong template proj_data. In debug mode, this is caught by an assert resulting in

/src/include/stir/listmode/CListEventCylindricalScannerWithDiscreteDetectors.inl:80:
virtual void stir::CListEventCylindricalScannerWithDiscreteDetectors::get_bin(stir::Bin&, const stir::ProjDataInfo&) const:
Assertion `dynamic_cast<ProjDataInfoCylindricalNoArcCorr const*>(&proj_data_info) != 0' failed.

We need to write an error message. However, this is not easy, as we probably don't want to do this check in stir::CListEventCylindricalScannerWithDiscreteDetectors::get_bin as it'd slow it down.

SAFIR code assumes long is 64-bit

When compiling STIR with Visual Studio with 32-bit tools I get

C:\Users\\krisf\Documents\devel\STIR\src\include\stir/listmode/CListRecordSAFIR.h(156): error C2034: 'stir::CListTimeDataSAFIR::time': type of bit field too small for number of bits
C:\Users\krisf\Documents\devel\STIR\src\include\stir/listmode/CListRecordSAFIR.h(146): warning C4293: '<<': shift count negative or too big, undefined behavior

Both of these are severe problems, not Windows specific. @jafische, can you look at this?

Reconstruct::set_up and ProjectorPair::set_up when using reduced segment range

The following causes a crash or surprising behaviour:

proj_data = ...
shared_ptr p = ...
p->set_up(proj_data.get_proj_data_info_sptr())
recon.set_projector_pair_sptr(p)
recon.set_max_segment_to_process(3) // lower number than in actual data
recon.set_up()
p->forward(image)

This last forward projection will either only forward project the reduced range, or crash when it goes out-of-range.
Reason: recon_set_up() calls p.set_up() with a reduced proj_data_info.

Possible fix: let p.set_up() check whether it has already been set up with a "large enough" range. If so, keep its previous settings.

FindMATLAB.cmake fails for MATLAB 2017a

mex -v now outputs some variables twice, confusing our current FindMATLAB.cmake. We should probably switch to CMake's FindMatlab.cmake (which is now more up-to-date).

LM reconstruction using frame definition file

I recently came across two different problems.
The first is that when I define a frame definition file inside the parameter file, the reconstruction does not work.
This happens because for some reason do_time_frames stays "false". In order to make it work, I add the following at the beginning of "compute_sub_gradient_without_penalty_plus_sensitivity()":

if (this->frame_defs_filename.size() != 0)
{
  this->frame_defs = TimeFrameDefinitions(this->frame_defs_filename);
  this->do_time_frame = true;
}

The second problem happens when I want to reconstruct a frame which is in the middle of the LM file. So when I define frames like:

0 t1
1 t2

t1 has to be skipped and the reconstruction should go from t1 to t2. However, what is happening is that all the events from the beginning of the LM file are being taken into account. I was thinking that the following bit was enough:

if (record.is_time() && end_time > 0.01)
{
  current_time = record.time().get_time_in_secs();
  if (this->do_time_frame && current_time >= end_time)
    break; // get out of while loop
  if (current_time < start_time)
    continue;
}

but it is not so. As a consequence, I added the condition:

if (record.time().get_time_in_secs() <= start_time)
{
  continue;
}

also inside

if (record.is_event() && record.event().is_prompt())
{

This seems to solve the problem.
Kind regards
Daniel

List-Mode File Processing error for Siemens mMR Scanner

We are trying to work with data obtained from a Siemens mMR scanner, in list-mode format. We would like to unlist it using the same procedure we usually follow for all list-mode data, but for some list-mode files the lm_to_projdata frame definition doesn't work.
It only processes very few events for these list-mode files, even though list_lm_events with the coincidence flag reports a huge number of events. The message is as below:

WARNING: KeyParser: keyword 'PET STUDY (Emission data)' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'PET STUDY (Image data)' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'PET STUDY (General)' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'PET data type' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'process status' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'IMAGE DATA DESCRIPTION' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'data offset in bytes' already registered for parsing, overwriting previous value

WARNING: Interfile error: 'number of bytes per pixel' keyword should be set
to a number > 0

INFO: CListModeDataECAT8_32bit: opening file PET_ACQ_542_20161117184346-0.l
LmToProjData NOT Using FRAME_BASED_DT_CORR

Processing time frame 1

Number of prompts stored in this time period : 17949
Number of delayeds stored in this time period: 0

Processing time frame 2

Number of prompts stored in this time period : 1829
Number of delayeds stored in this time period: 0
Last stored event was recorded before time-tick at 36459.7 secs
Total number of counts (either prompts/trues/delayeds) stored: 19778

This took 1.46s CPU time.

We define the frames in a separate frame definition file as follows:
1 30
1 130

This scan is 3600 s long, but the frames we defined are shorter.
We also tried:
1 3600 ; as our frame definition
The message it gives is as follows:

WARNING: KeyParser: keyword 'PET STUDY (Emission data)' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'PET STUDY (Image data)' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'PET STUDY (General)' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'PET data type' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'process status' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'IMAGE DATA DESCRIPTION' already registered for parsing, overwriting previous value

WARNING: KeyParser: keyword 'data offset in bytes' already registered for parsing, overwriting previous value

WARNING: Interfile error: 'number of bytes per pixel' keyword should be set
to a number > 0

INFO: CListModeDataECAT8_32bit: opening file PET_ACQ_542_20161117184346-0.l
LmToProjData NOT Using FRAME_BASED_DT_CORR

Processing time frame 1

Number of prompts stored in this time period : 17949
Number of delayeds stored in this time period: 0
Last stored event was recorded before time-tick at 36459.7 secs
Total number of counts (either prompts/trues/delayeds) stored: 17949

This took 0.7s CPU time.

When we use all the events, it processes everything, but only on one computer; it doesn't work on others, including the HPC.
This happens for some list-mode files only, whereas others are fine!
Very mysterious!

problem with incremental interpolating backprojector

On some systems/compilers, this projector has trouble in the central pixel of each plane, and sometimes on pixels on the 45 degrees (and related) angles. This is due to (unavoidable?) rounding errors that are a consequence of the incremental scheme (voxel location and voxel value are essentially decoupled). The errors tend to disappear in debug mode, so have to do with optimisation in math calculations.

This is a very old bug. It'd probably be best to get rid of this projector and replace it with a non-incremental interpolating one (vastly simpler code as well).

Quick start troubles for Qt Creator (in examples/src/demo1.cxx etc)

Greetings!

I've downloaded a development version of STIR.
With the intention of using Qt Creator, I've tried to create a .cpp project using part of the demo1.cxx code, just to test dependencies:

#include <iostream>
#include "stir/recon_buildblock/BackProjectorByBinUsingInterpolation.h"
#include "stir/recon_buildblock/DataSymmetriesForBins_PET_CartesianGrid.h"
#include "stir/IO/write_to_file.h"
#include "stir/IO/read_from_file.h"
#include "stir/ProjData.h"
#include "stir/DiscretisedDensity.h"
#include "stir/shared_ptr.h"
#include "stir/utilities.h"
#include "stir/Succeeded.h"

int main()
{
    using namespace stir;
    const std::string input_filename =
      ask_filename_with_extension("Input file",".hs");     // input sinogram
    shared_ptr<ProjData>
        proj_data_sptr(ProjData::read_from_file(input_filename));
    return 0;
}

With the following paths to libraries added in the .pro file:

LIBS += -L"/usr/local/lib" \
        -lanalytic_FBP2D \
        -lanalytic_FBP3DRP \
        -lbuildblock \
        -ldata_buildblock \
        -ldisplay \
        -leval_buildblock \
        -lIO \
        -literative_OSMAPOSL \
        -literative_OSSPS \
        -llistmode_buildblock \
        -lmodelling_buildblock \
        -lnumerics_buildblock \
        -lrecon_buildblock \
        -lscatter_buildblock \
        -lShape_buildblock \
        -lspatial_transformation_buildblock

In total, 124 errors appear, starting with:

ExamData.cxx:-1: error: undefined reference to `stir::TimeFrameDefinitions::TimeFrameDefinitions()'
interfile.cxx:-1: error: undefined reference to `stir::TimeFrameDefinitions::get_num_time_frames() const'
interfile.cxx:-1: error: undefined reference to `stir::TimeFrameDefinitions::get_duration(unsigned int) const'
...

It looks like there are problems with ExamData.cxx and interfile.cxx in the libraries.

What should I adjust in order to proceed with using STIR in Qt?
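One possible cause (an assumption, not verified against this setup): if the STIR libraries were built as static archives, GNU ld resolves symbols in a single left-to-right pass, so an alphabetical library order in the .pro file can leave cross-library references (such as the TimeFrameDefinitions symbols above) unresolved. A sketch of a workaround is to group the archives so the linker iterates over them until all symbols are resolved:

```
LIBS += -L"/usr/local/lib" \
        -Wl,--start-group \
        -lanalytic_FBP2D -lanalytic_FBP3DRP -lbuildblock -ldata_buildblock \
        -ldisplay -leval_buildblock -lIO -literative_OSMAPOSL -literative_OSSPS \
        -llistmode_buildblock -lmodelling_buildblock -lnumerics_buildblock \
        -lrecon_buildblock -lscatter_buildblock -lShape_buildblock \
        -lspatial_transformation_buildblock \
        -Wl,--end-group
```

Alternatively, the libraries could be reordered so that each archive appears after every archive that uses its symbols.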

For the examples 'intended for beginning STIR developers' located in STIR/examples/src/, the README appears to be outdated, since not all of the documented steps work. Depending on the compilation options, the files extra_dirs.mk and extra_stir_dirs.cmake need to be moved to STIR/src/local/.

For the first option,

mkdir -p ../src/local
cp extra_dirs.mk ../src/local/
cd ..
make examples

... I obtain the following:

$ make examples
make: Nothing to be done for 'examples'.

and for the second one denoted as:

mkdir -p ../src/local
cp extra_stir_dirs.cmake ../src/local/
cd your-build-dir
# reconfigure your project
ccmake .

...a CMake error appears during configuration:

 CMake Error at CMakeLists.txt:12 (include):
   include could not find load file:

     stir_exe_targets

 CMake Warning (dev) in CMakeLists.txt:
   No cmake_minimum_required command is present.  A line of code such as

     cmake_minimum_required(VERSION 3.5)

   should be added at the top of the file.
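A possible fix, sketched under the assumption that stir_exe_targets.cmake lives in a cmake module directory inside the STIR source tree (the path below is a guess and would need adjusting): include() without a file extension only searches CMAKE_MODULE_PATH and CMake's own module directory, so that path must be added explicitly, and the warning is silenced by declaring a minimum CMake version at the top of the file.

```cmake
# Hypothetical top of the examples' CMakeLists.txt; the module path is an
# assumption about where STIR keeps stir_exe_targets.cmake.
cmake_minimum_required(VERSION 3.5)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/../src/cmake")
include(stir_exe_targets)
```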

Travis CI integration

Travis CI is a continuous integration system that is free for open source projects hosted on GitHub. The basic idea is that every commit is compiled and tested automatically, and the status is reported by e-mail and can be checked online at any time.

It can also test pull requests, which would ensure that only tested changes are merged.

A proposed config file (.travis.yml) will be sent as a pull request.

Travis website: https://travis-ci.org/

test_PoissonLogLikelihoodWithLinearModelForMeanAndProjData (SEGFAULT)

After merging with _dataInput_from_ExamInfo (PR #19) the test_PoissonLogLikelihoodWithLinearModelForMeanAndProjData reports a segmentation fault.

After PR #25 the bug persists.

It probably has to do with the move of the target image from the IterativeReconstruction.h to Reconstruction.h.

I am going to try to tackle it.

Install removes binaries

When running make install, the following type of messages are given repeatedly:

-- Installing: /usr/local/lib/libanalytic_FBP3DRP.so
-- Removed runtime path from "/usr/local/lib/libanalytic_FBP3DRP.so"

This causes python to crash since it cannot access the library:

>>> import stir
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "stir.py", line 28, in <module>
    _stir = swig_import_helper()
  File "stir.py", line 24, in swig_import_helper
    _mod = imp.load_module('_stir', fp, pathname, description)
ImportError: libanalytic_FBP3DRP.so: cannot open shared object file: No such file or directory

A solution is to enable CMAKE_SKIP_RPATH when configuring. I suggest this either be made the default, or that users get a warning if they compile without it.

EDIT: it seems the suggested fix does not help with the python error, which persists.
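Since the CMAKE_SKIP_RPATH suggestion did not help, a run-time workaround can be sketched (assuming the libraries really were installed into /usr/local/lib; this sidesteps the missing run path rather than fixing it):

```shell
# Let the dynamic loader find the STIR shared libraries at run time.
export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}
# Inspect which run-time search paths, if any, are embedded in the library.
readelf -d /usr/local/lib/libanalytic_FBP3DRP.so | grep -E 'RPATH|RUNPATH'
# Re-test the import.
python -c "import stir"
```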

OPENMP problem in FBP2D

Sometimes FBP2D finishes with a memory problem. This happens on the distributed Virtual System.
