avigan / sphere
Reduction and analysis code for the VLT/SPHERE instrument
License: MIT License
Different levels of documentation:
Several possibilities:
Sometimes the files are listed in random order
The current version does not include any correction. It should be straightforward to add with the imutils.scale() method.
Several methods are common between the two instruments and could be factored out into a single module.
The IRDIS module is extremely long with the implementation of the ImagingReduction() and SpectroReduction() classes. It would be much better to split it into different files if possible.
Must be done before calling any of the static calibration methods.
Maybe also check that the SPHERE recipes are installed.
At the moment the pipeline selects the first DIT of the first OBJECT,CENTER for the centring of an OBJECT sequence. This could be improved, e.g. when all DITs of the OBJECT,CENTER are saved independently, or when there are several OBJECT,CENTER.
The user should be warned, e.g. if they try to pre-process the science data before the calibrations have been generated, or to recalibrate the wavelength before the data has been pre-processed, etc.
Automatically save plots
The method should handle:
But issue #6 must be solved first to provide a final, accurate wavelength solution
In sph_ifs_fix_badpix, there is some confusion between whether the region to be searched is from -ext to +ext+1, or -ext//2 to +ext//2+1. Most of the code uses the former, but the definitions of sub_low and sub_high use the latter, leading to inconsistent array lengths and an error.
Can be fixed by removing the "//2" on lines 217 and 218 of IFS.py, assuming that gives the desired definition.
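For illustration, a minimal sketch of the inconsistency, assuming a 1D array and a bad pixel at index i (variable names follow the issue, not the actual code in IFS.py):

import numpy as np

img = np.arange(100)
ext = 2
i = 10

# most of the code searches the full window of width 2*ext+1
window = img[i-ext : i+ext+1]       # length 2*ext+1 = 5

# but sub_low and sub_high are built with ext//2, giving shorter arrays
sub_low  = img[i-ext//2 : i]        # length ext//2 = 1
sub_high = img[i+1 : i+ext//2+1]    # length ext//2 = 1

# mixing values computed on the full window with these shorter arrays yields
# inconsistent lengths; dropping the "//2" makes all windows use the same extent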
Correction needed because of derotator tracking issue identified in May/June 2016, documented in the user manual.
Correction factor (IDL):
;; derotator drift correction
alt_beg   = sxpar_eso(hdr, 'HIERARCH ESO TEL ALT')
drot2_beg = sxpar_eso(hdr, 'HIERARCH ESO INS4 DROT2 BEGIN')
if jul_out lt date_conv('2016-07-12', 'J') then begin
   corr = atan(tan((alt_beg - 2.*drot2_beg)*!pi/180.))*180./!pi
endif else begin
   corr = 0
endelse
The correction factor needs to be added to the parallactic angle value.
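For reference, a minimal Python translation of the IDL snippet above; a sketch using standard astropy header access, not an existing pipeline function:

import numpy as np
from astropy.time import Time

def derotator_drift_correction(hdr):
    # correction to add to the parallactic angle (degrees)
    alt_beg   = hdr['HIERARCH ESO TEL ALT']
    drot2_beg = hdr['HIERARCH ESO INS4 DROT2 BEGIN']

    if Time(hdr['DATE-OBS']) < Time('2016-07-12'):
        # wrap (alt - 2*drot2) into ]-90, +90] deg, as in the IDL code
        return np.degrees(np.arctan(np.tan(np.radians(alt_beg - 2*drot2_beg))))
    return 0.0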
Show information like:
Will simplify the creation of a reduction. Most methods are currently being implemented independently.
To make sure that the user does not ask for images larger than the physical size of the data. Applies to all types of data.
Make sure it works.
Would be useful for images with saturated data or with many uncorrected bad pixels
Related issue: how to deal with data that has no OBJECT,CENTER frames
The current IFS version uses the center of the frame as a first guess of the center, but for IRDIS the first approximation is quite different in the two fields. A common approach could be to use a correlation with a pre-generated pattern, as in the IDL version used in the IRDIS LAM pipeline.
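A minimal sketch of such a correlation-based first guess, assuming a 2D image img and a pre-generated template pattern at the same pixel scale (not code from the IDL pipeline):

import numpy as np
from scipy.signal import fftconvolve

def rough_center(img, pattern):
    # cross-correlation = convolution with the flipped template
    corr = fftconvolve(img, pattern[::-1, ::-1], mode='same')
    cy, cx = np.unravel_index(np.argmax(corr), corr.shape)
    return cx, cy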
Current version does not take into account dithering when combining science data.
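A possible sketch of the missing step, assuming the dithering offsets are stored in the INS1 DITH POSX/POSY keywords (the keyword names and the shift helper are assumptions):

# compensate the dithering offset of each frame before combination
dith_x = hdr['HIERARCH ESO INS1 DITH POSX']
dith_y = hdr['HIERARCH ESO INS1 DITH POSY']
img = shift_image(img, (-dith_x, -dith_y))    # hypothetical shift helper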
vltpf.utils.imutils.fix_badpix throws an error if too few good pixels (i.e. fewer than npix) are found, specifically on line 1039 when it tries to index the first npix pixels. This can occur at the corners of IFS flat frames, where (0, 0) is more than ddmax (= 100) pixels from any good, illuminated pixel, so the array good_pix is empty.
One possible solution is simply to increase ddmax, with ~170 needed to avoid problems on IFS frames. Alternatively, a simple replacement value could be used in such cases: if a pixel is more than 100 px outside the illuminated area, its value really doesn't matter.
Edit: the value actually does matter slightly, as the flat frame is normalised again after the step where I was getting the error. Maybe fill with the frame mean, or fill with np.nan and use np.nanmedian to normalise, perhaps replacing the NaNs with a nicer value later?
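A sketch of the NaN-based workaround (the array and mask are toy stand-ins, not the actual fix_badpix internals):

import numpy as np

flat = np.ones((10, 10))
no_neighbour_mask = np.zeros((10, 10), dtype=bool)
no_neighbour_mask[0, 0] = True      # e.g. a corner pixel far from the illuminated area

flat[no_neighbour_mask] = np.nan    # temporary fill
flat = flat / np.nanmedian(flat)    # normalise ignoring the NaNs
flat[np.isnan(flat)] = 1.0          # replace NaNs with a neutral value afterwards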
Current version is coded for IFS, so the final pupil offset is not right for IRDIS data.
The current order, defined sequentially by the light path in the instrument, makes it difficult to find the information.
Need to test the reduction of a dataset downloaded directly from the archive with the automatic selection of the calibration.
Needed to remove all the temporary data generated by the pre-processing
Will require implementing a mask option in imutils
Two possible optimisations for sigma_filter in imutils.py:
# Current
from astropy.convolution import convolve, Box2DKernel

box2 = box**2
kernel = Box2DKernel(box)
img_clip = (convolve(img, kernel)*box2 - img) / (box2-1)
imdev = (img - img_clip)**2
fact = nsigma**2 / (box2-2)
imvar = fact*(convolve(imdev, kernel)*box2 - imdev)
# Replacement
from scipy.ndimage import uniform_filter

# uniform_filter computes the same zero-padded box mean as the convolution
# above, but with a fast running-sum implementation
box2 = box**2
img_clip = (uniform_filter(img, box, mode="constant")*box2 - img) / (box2-1)
imdev = (img - img_clip)**2
fact = nsigma**2 / (box2-2)
imvar = fact*(uniform_filter(imdev, box, mode="constant")*box2 - imdev)
# Second optimisation: stop iterating as soon as no pixels change
nchange = img.size - nok
if iterate:
    _iters += 1
    if (_iters >= max_iter) or (nchange == 0):
        # return...
An independent repository for imutils, aperture, mft and others will make them easier to manage and use in different projects (pyZELDA, pySPHERE, etc.).
Create setup.py and find out how to make it available through pip.
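A minimal setup.py sketch; the package name, version and dependency list are assumptions:

from setuptools import setup, find_packages

setup(
    name='vltpf',            # assumed package name
    version='0.1',
    description='Reduction and analysis code for the VLT/SPHERE instrument',
    license='MIT',
    packages=find_packages(),
    install_requires=['numpy', 'scipy', 'astropy', 'pandas'],
)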
The interface should be identical to be able to follow the exact same procedure to pre-process and reduce the IRDIS and IFS data sets.
Current version saves the complete frames dataframe, but there is no index to find the corresponding frames in the FITS science cubes.
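One possible sketch, assuming the frames dataframe is a pandas DataFrame (column names are hypothetical): index the rows by file and DIT so that each row maps to a slice of the saved science cubes.

import pandas as pd

frames = pd.DataFrame({'FILE': ['a.fits', 'a.fits', 'b.fits'],
                       'DIT':  [0, 1, 0]})

# (file, DIT) uniquely identifies a frame in the FITS science cubes
frames = frames.set_index(['FILE', 'DIT'])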
Routine should be able to handle:
IRDIS and IFS reduction
ipython notebook
Include a dataset?
Similar to the version in the SPHERE-legacy IDL pipeline
Necessary steps:
Implement a method to extract the information available in the SPARTA files downloaded with the science data from the archive.
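A sketch of what such a method could look like; the extension name below is an assumption and must be checked against real SPARTA files:

from astropy.io import fits

def read_sparta(filename):
    with fits.open(filename) as hdu:
        # atmospheric parameters recorded by the AO system (assumed extension name)
        return hdu['AtmPerfParams'].data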
The current implementation of SPHERE.Dataset will stop when an error occurs in one of the reductions
Whilst the "nocenter" option is correctly adhered to for FLUX and CENTER frames in sph_ifs_combine_data, it is ignored for SCIENCE frames, as the corresponding "if nocenter/else" check is missing.
List them and try to make them compatible between the two instruments to simplify the class implementation.
Current behaviour is to report an error when there is not exactly the expected number of calibration files. It could be improved, e.g. by selecting the most recent ones when more than one is available.
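A sketch of the most-recent selection, using the standard MJD-OBS keyword (the selection criterion itself is an assumption):

from astropy.io import fits

calib_files = ['calib_1.fits', 'calib_2.fits']    # files matching the setup

# keep the most recent calibration when several files match
calib_files.sort(key=lambda f: fits.getheader(f)['MJD-OBS'])
calib = calib_files[-1]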
The current implementation of SPHERE.Dataset only works using XML files downloaded from the ESO archive. Another method to sort FITS files based on the header keywords should be implemented.
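A sketch of header-based sorting using the standard ESO DPR keywords (the grouping logic is an assumption):

import glob
from astropy.io import fits

files_by_type = {}
for filename in glob.glob('raw/*.fits'):
    hdr = fits.getheader(filename)
    dpr_type = hdr.get('HIERARCH ESO DPR TYPE', 'UNKNOWN')    # e.g. OBJECT or FLAT,LAMP
    files_by_type.setdefault(dpr_type, []).append(filename)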
Useful to provide the value when computing astrometry on science data
The reduction configuration is read from disk and can be modified, but it is not used in the analysis yet.
There is no check that the provided reduction path has a raw/ subdirectory
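A minimal sketch of such a check (exact error handling is a design choice):

import os

def check_reduction_path(path):
    raw = os.path.join(path, 'raw')
    if not os.path.isdir(raw):
        raise ValueError('no raw/ subdirectory in reduction path {}'.format(path))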
Enable automatic sorting of all the files downloaded with the calibrations from the ESO archive. The script is currently stored in the private repo sources/projects/shine/misc/sort_archive_data.py