vortex-exoplanet / vip

VIP is a Python package/library for angular, reference-star and spectral differential imaging, for exoplanet and disk detection through high-contrast imaging.

Home Page: http://vip.readthedocs.io/

License: MIT License

Python 96.50% Makefile 0.12% TeX 3.38%
data-processing extrasolar-planets-disks high-contrast-imaging image-processing low-rank-approximation mcmc pca

vip's People

Contributors

avigan, carlos-gg, chdahlqvist, chrisdelax, corentindoco, dfm, eixalde, elenakoko, faustineastro, gillesorban, gruane, henry-ngo, iainhammond, jmilou, justlatour, kanti92, lewis-picker, madelineceleste, mileslucas, nenasedk, r4lv, sand-jrd, vachristiaens, vchristiaens


vip's Issues

Call to pp_subplots in fit_2dgaussian

Hi @CARLGOGO

Just wanted to let you know that the call to pp_subplots in fit_2dgaussian should not use the keyword "colorb" any more. There might be other places where this should be updated.

Cheers,
Olivier.

Unexpected error in vip.pca

The PCA reduction works for most of our data, but fails on one dataset with an error message that I do not understand: ValueError: work array size computation for internal gesdd failed: -10. The error seems to originate in scipy, though.


ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
39
40 # Full Frame ADI PCA
---> 41 image = pca(cube_hpf_0227, angs_0227, ncomp=5, svd_mode='randsvd', full_output=False, mask_center_px=0)
42 display_array_ds9(image)
43

/Users/henryngo/SourceTreeRepos/VIP/vip/pca/pca_fullfr.pyc in pca(cube, angle_list, cube_ref, scale_list, ncomp, ncomp2, svd_mode, scaling, mask_center_px, full_output, verbose, debug)
316 residuals_result = subtract_projection(cube, None, ncomp, scaling,
317 mask_center_px, debug, svd_mode,
--> 318 verbose, full_output)
319 if full_output:
320 residuals_cube = residuals_result[0]

/Users/henryngo/SourceTreeRepos/VIP/vip/pca/pca_fullfr.pyc in subtract_projection(cube, cube_ref, ncomp, scaling, mask_center_px, debug, svd_mode, verbose, full_output)
145 ref_lib = matrix
146
--> 147 V = svd_wrapper(ref_lib, svd_mode, ncomp, debug, verbose)
148
149 if verbose: timing(start_time)

/Users/henryngo/SourceTreeRepos/VIP/vip/pca/utils.pyc in svd_wrapper(matrix, mode, ncomp, debug, verbose, usv)
291 elif mode=='randsvd':
292 U, S, V = randomized_svd(matrix, n_components=ncomp, n_iter=2,
--> 293 transpose='auto', random_state=None)
294 if debug: reconstruction(ncomp, U, S, V, 1)
295 if verbose: print('Done SVD/PCA with randomized SVD')

/Users/henryngo/anaconda/lib/python2.7/site-packages/sklearn/utils/extmath.pyc in randomized_svd(M, n_components, n_oversamples, n_iter, transpose, flip_sign, random_state)
301
302 # compute the SVD on the thin matrix: (k + p) wide
--> 303 Uhat, s, V = linalg.svd(B, full_matrices=False)
304 del B
305 U = np.dot(Q, Uhat)

/Users/henryngo/anaconda/lib/python2.7/site-packages/scipy/linalg/decomp_svd.pyc in svd(a, full_matrices, compute_uv, overwrite_a, check_finite)
97 lwork, info = gesdd_lwork(a1.shape[0], a1.shape[1], compute_uv=compute_uv, full_matrices=full_matrices)
98 if info != 0:
---> 99 raise ValueError('work array size computation for internal gesdd failed: %d' % info)
100 lwork = int(lwork.real)
101

ValueError: work array size computation for internal gesdd failed: -10
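
For what it's worth, here is a minimal pre-flight check one could run before pca(), assuming (unconfirmed) that the failure is triggered by non-finite values in the cube; falling back from the randomized solver to svd_mode='lapack', if svd_wrapper supports it, might also sidestep the gesdd call:

import numpy as np

# Check the cube for NaN/Inf before handing it to pca() (assumed cause, not confirmed)
finite = np.isfinite(cube_hpf_0227)
if not finite.all():
    print('Cube contains %d non-finite values' % (finite.size - finite.sum()))

# Hypothetical workaround: full LAPACK SVD instead of the randomized solver
image = pca(cube_hpf_0227, angs_0227, ncomp=5, svd_mode='lapack',
            full_output=False, mask_center_px=0)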

The true SNR of a companion

This topic was briefly mentioned during a recent telecon. It might be useful to dwell a bit more on it:

The SNR of a companion detection computed with snr_ss() is inconsistent with the SNR of the companion that is estimated with the ratio of the NEGFC contrast to the noise in the local annulus (that can be visualized for example when plotting the companion contrast on top of the 5-sigma contrast curve). Several factors can explain this discrepancy: (i) throughput (only considered in the second case); (ii) a possible underlying speckle contamination (that could artificially enhance the SNR in the first case); (iii) the local noise estimate, which is estimated on the whole annulus after removing the companion itself in the second case, but computed on a truncated annulus in the first case (this effect gets significant only for very close-in companions).

For example, in the case of companion HD 206893 B (Milli+2017, accepted), snr_ss() would yield a SNR of ~10, while based on the NEGFC contrast and noise estimate on the local annulus, the SNR becomes ~6 (for the same PCA-ADI algo and npc used). In the paper, we conservatively quoted the second value.

The first is a pure signal-detection-theory SNR, which can be computed directly on the final PCA-processed frame, while the second also requires the initial cube, to inject negative fake companions. So you'd inherently expect to obtain different values. The question is: which value makes more sense to quote in a paper to reflect the significance of the detection?
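
Schematically, the two quantities being compared are (snr_ss called as elsewhere in VIP ~0.7; the NEGFC-side variables are placeholders):

# (1) Detection-theory SNR, computed directly on the final PCA-processed frame
snr_det = snr_ss(frame_pca, (x_comp, y_comp), fwhm, False)   # last arg: plot

# (2) "Photometric" SNR from NEGFC: the retrieved companion flux, corrected
# for the algorithmic throughput at that radius, over the 1-sigma noise of
# the full annulus with the companion removed (placeholder variable names)
snr_negfc = flux_negfc * throughput_r / noise_annulus_r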

Good way to use VIP

Hello,

I have a question about using the VIP code optimally. If you have a non-detection image (as usual ;)), is it a good idea to proceed in the following way (computation time doesn't matter, as a lot can be parallelized and we have many idle cores)? A pseudo-code sketch follows the list.

For each FWHM-wide annulus:

  1. Insert one fake planet at a random angle.
  2. Optimize the number of PCs for this fake planet on this annulus only (possible in version 0.5.3).
  3. Repeat 1 and 2 many times and take the median of the optimal PCs.
    Now we have an optimal number of PCs for each annulus. So
  4. For each annulus, use the optimal number of PCs found before and do PCA/LLSG (possible in version 0.5.3).
  5. Calculate the contrast curve by injecting and recovering different fakes in each annulus, using the number of PCs found before (done automatically by your pipeline).
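
In pseudo-code, with inject_fake(), optimize_npc() and pca_annular_npc() standing in for the corresponding VIP calls (names and signatures hypothetical):

import numpy as np

n_annuli, n_trials = 9, 50
opt_npc = np.zeros(n_annuli, dtype=int)
for ann in range(n_annuli):                            # one FWHM-wide annulus at a time
    trials = []
    for _ in range(n_trials):
        theta = np.random.uniform(0., 360.)            # step 1: fake planet at a random angle
        cube_fake = inject_fake(cube, angs, psf, ann, theta)
        trials.append(optimize_npc(cube_fake, angs, ann))   # step 2: best npc on this annulus
    opt_npc[ann] = int(np.median(trials))              # step 3: median of the optima
final = pca_annular_npc(cube, angs, ncomp=list(opt_npc))    # step 4: per-annulus PCA/LLSG
# step 5: contrast_curve with ncomp=list(opt_npc), as done by the pipeline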

Thank you,
Stefan

VIP tutorial

Hi @CARLGOGO

There seem to be some issues in the VIP IPython notebook tutorial. Mike ran into trouble when trying it out, and I could reproduce the issues on my machine. Here are the main ones I've spotted:

vip.phot.detection(fr_adi, fwhm, psf, debug=True, mode='log', snr_thresh=5)
--> there shouldn't be a fwhm argument here

Even with this correction, it crashed after displaying the figure:


IndexError                                Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 vip.phot.detection(fr_adi, psf, debug=True, mode='log', snr_thresh=5)

/Users/oabsil/Python/vip/vip/phot/detection.pyc in detection(array, psf, bkg_sigma, mode, matched_filter, mask, snr_thresh, plot, debug, full_output, verbose)
248 threshold=bkg_level,
249 min_sigma=sigma-.5, max_sigma=sigma+.5)
--> 250 coords = coords[:,:2]
251 if coords.shape[0]>0 and verbose: print_coords(coords)
252

IndexError: too many indices for array

vip.phot.snr_student(fr_adi, 63, 62, fwhm, plot=True)
--> does not exist any more

vip.phot.detection(fr_pca, psf, debug=True, mode='log', snr_thresh=5)
--> should be fr_pca1 in input

(I've stopped my exploration here, maybe other bugs remain below)

Could you please update the tutorial?

Cheers,
Olivier.

firstguess and run_mcmc_astrometry with RDI

Hi @CARLGOGO ,

I don't know if you already looked into adding the possibility to run vip.negfc.simplex_opt.firstguess and vip.negfc.mcmc_opt.run_mcmc_astrometry with RDI.
If not, whenever you have time, could you add that functionality?

Thanks,
Maddalena

Non ADI function

Dear Carlos,

I was looking for a non-ADI reduction and could not find this functionality in VIP. Did you implement it?
By non-ADI I mean: no star subtraction, only derotation and stacking of the frames, optionally after subtracting the median in annuli to get rid of the radial gradient in the final image. A minimal sketch of what I mean is below.
This is quite important in a data reduction pipeline, as it very often yields the best reduction in the background-limited region.
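
For reference, a minimal sketch of what I mean (using cube_derotate from the calib module; note that, depending on the version, it may return more than one output):

import numpy as np
import vip

# "non-ADI": no PSF subtraction, just derotate each frame and stack
cube_derot = vip.calib.cube_derotate(cube, angle_list=angs)
final = np.median(cube_derot, axis=0)
# optionally: first subtract the azimuthal median in annuli from each
# frame to remove the radial background gradient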

Julien

Contrast_curve noise per annulus

Hi Carlos,

I ran into another problem when calculating the contrast_curve information.

On line 154 of contrcurve.py, when noise_samp is being defined:

noise_samp = noise_samp[cutin1:]

I run into the following error:

TypeError: only integer scalar arrays can be converted to a scalar index

It seems that cutin1 is a one-element array. The same goes for cutin2, which is defined a couple of lines later. I have attached the full traceback below, with printed values of each array. I am performing this on simulated data, so perhaps that is the reason? In that case, I am unsure how to proceed.
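
A temporary workaround on my side is to collapse the one-element arrays to plain ints before slicing (assuming that is all the code expects):

import numpy as np

cutin1 = int(np.asarray(cutin1).item())   # 1-element array -> scalar int
noise_samp = noise_samp[cutin1:]
# ... and likewise for cutin2 a couple of lines later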

(Screenshots of the full traceback, with the printed values of each array, were attached.)

FWHM as non-integer

Hello,
I have encountered another problem which I am not sure about:
I learned that in order to calculate the contrast curve you should use the exact same settings as in the reduction process. However, when I choose my FWHM to be a float (e.g. 4.78), I cannot use the same parameters for the contrast curve.
The reason is that in contrcurve.py, on l.119 and l.369, the value is rounded (to 5 in the example). This leads to a different reduction. One effect is that the number of annuli changes (which is how I noticed it: you get an error about the number of per-annulus PCs, which should be 9 for fwhm=4.78 but only 7 for fwhm=5 in the new mode where you provide the PCs beforehand).
Commenting out l.119 and l.369 only leads to errors later on, as some routines expect fwhm as an integer and not as a float.
Simply setting FWHM to 5 for the reduction as well cannot be the solution, as you get a non-optimal result (e.g. a wrong protection angle, etc.).

In short: vip.pca.pca_optimize_snr() takes fwhm as a float, while vip.phot.contrast_curve() takes int(np.round(fwhm)) and thereby changes the settings.

Am I somehow misled? I just think it is important to use the same settings for the reduction and for the contrast-curve calculation, or is that not the case here?

Thanks,
Stefan

small sample correction in vip.phot.contrast_curve

Hi Carlos,

since the small sample correction ss_corr is defined as:
ss_corr = np.sqrt(1 + 1/(n_res_els-1))
I think sigma should be multiplied by it, not divided, meaning
sigma_corr = stats.t.ppf(stats.norm.cdf(sigma), n_res_els)*ss_corr
instead of
sigma_corr = stats.t.ppf(stats.norm.cdf(sigma), n_res_els)/ss_corr
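
A quick numerical check of the difference (degrees of freedom as in the code above; n_res_els is only small close to the star):

import numpy as np
from scipy import stats

sigma = 5.0
n_res_els = 8                                  # e.g. a tight annulus at ~2 lambda/D
ss_corr = np.sqrt(1 + 1.0/(n_res_els - 1))
sigma_t = stats.t.ppf(stats.norm.cdf(sigma), n_res_els)
print(sigma_t * ss_corr)   # multiplying raises the threshold further, as expected
print(sigma_t / ss_corr)   # dividing lowers it instead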

Cheers,
Maddalena

Small bugs in phot/fakecomp.py

Hola @CARLGOGO ,

As it is, there are 2 small bugs in function psf_norm (phot/fakecomp.py):

  1. The first is related to the variable "size": if it takes the value None, an error is raised.
    You could add the following line at l.153 to avoid the bug:
    else: psfs = array.copy()
    (this would also require importing copy)
  2. The description of the variable "array" states that it should be the relative path of the PSF FITS image, while it is the PSF array itself that is required.

Saludos,
Valentin

Suggestion for vip.calib.cube_derotate()

Hi @CARLGOGO,

I hope you're doing well.

For the function vip.calib.cube_derotate(), @oabsil and I would suggest adding a shape-matching check between the number of frames in the input array and the number of parallactic angles (PA) given in angle_list. Since the loop runs over the number N of frames, the function consistently raises an error when the number of PAs is smaller than N. However, it runs without raising any error or warning when the number of PAs is greater than N. It might be useful to catch the case where bad frames are removed from the cube while forgetting to remove the corresponding PAs from the PA vector. I would suggest inserting the following code after line 91 of vip.calib.derotation.py:

if not array.shape[0] == angle_list.shape[0]:
    raise TypeError('Pythonic error message of your choice')

which should do the trick.

Cheers,
Olivier

Slicing issue in vip.phot.detection?

Hi all,

When trying to identify blobs in the 2D image, it seems there is a type issue when getting the value at given pixel coordinates. In vip.phot.detection, line 317 (of version 0.7.1), x and y can be floats, and the code tries to get "array[y,x]", which results in an error:

/Users/joolof/anaconda/lib/python2.7/site-packages/vip/phot/detection.py in detection(array, psf, bkg_sigma, mode, matched_filter, mask, snr_thresh, plot, debug, full_output, verbose)
    315         snr = snr_ss(array, (x,y), fwhm, False, verbose=False)
    316         snr_list.append(snr)
--> 317         px_list.append(array[y,x])
    318         if snr >= snr_thresh and array[y,x]>0:
    319             if plot:

IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
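
Casting before indexing would presumably avoid it (untested assumption):

# presumed fix at detection.py l.317 (and the comparison at l.318)
px_list.append(array[int(round(y)), int(round(x))])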

If I recall properly, it was working in earlier versions of the code (maybe 0.5 or 0.6, not sure). I have VIP version 0.7.1 and numpy version 1.12.1.
Let me know if you need more info or if I missed something.
Cheers, and thanks for the awesome package!
Johan

Bug in new annulus mode (v.05.4)?

Hey,
I think there is a bug in the new vip.pca.pca_adi_annular mode where you can give ncomp as a list matching the number of annuli (so that every annulus gets its own number of PCs):
I get an error in utils.py at line 360 [if ncomp > min(matrix.shape[0], matrix.shape[1])]:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

If I print ncomp there, it is the list, which explains the error.

Tracing back led me to: in pca_local there is a function get_eigenvectors that differentiates several cases, e.g. ncomp=None (l.579 ff). I guess there should also be a case for ncomp being a list?
However, I didn't manage to come up with a (simple) solution on my own :(
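
Something along these lines inside get_eigenvectors might do it (hypothetical sketch; ann would be the index of the current annulus):

import numpy as np

if isinstance(ncomp, (list, np.ndarray)):
    ncomp_ann = int(ncomp[ann])   # per-annulus number of PCs
else:
    ncomp_ann = ncomp             # one value for all annuli
# ... then use ncomp_ann in the min(matrix.shape) check and in the SVD call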

Thank you,
Stefan

Negative values in cube resampling

Hi @CARLGOGO

I've just noticed that, when using the function vip.calib.cube_px_resampling on a cube that has zero background and contains no negative values, you get several negative values in the output cube. I assume this is due to the interpolation, but it can have significant side effects if one is not aware of it. Not sure what to do. Maybe just add a warning in the docstring?

Cheers,
Oli.

bug in cube_recenter_dft_upsampling with subi_size=None

There is a minor bug in file "VIP/vip/calib/recentering.py", line 646, in cube_recenter_dft_upsampling:
x[0] = cx

UnboundLocalError: local variable 'cx' referenced before assignment

This happens when the option subi_size is set to None. It can easily be corrected by taking the line
cy, cx = frame_center(array[0])
out of the if condition, like this:

cy, cx = frame_center(array[0])
if subi_size is not None:
    size = int(fwhm*subi_size)
    y1, x1 = _centroid_2dg_frame(array_rec, 0, size, cy_1, cx_1)
    array_rec[0] = frame_shift(array_rec[0], shift_y=cy-y1, shift_x=cx-x1)
    x[0] = cx-x1
    y[0] = cy-y1
else:
    x[0] = cx
    y[0] = cy

Cheers
Julien

Minor bug in the reject_outliers() prototype.

Hi Carlos,
Hi Valentin,

It seems that there is a minor bug at line 577 of vip.calib.badpixremoval.py: in the reject_outliers() function prototype, a non-default argument follows a default argument.
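
In other words, the defaulted parameter has to come last, e.g. (parameter names hypothetical):

def reject_outliers(data, test_value, m=5.0):   # non-default arguments first
    ...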

Cheers,
Olivier

Contrast curve and forward modeling with RDI

Hi Carlos et al.,

It seems like there is no support for RDI in the current contrast curve computation script. Do you plan to add it? It looks like an easy update. Looking at the code, we need to add both an entry for the reference cube and a condition for RDI in the sections at lines 285 to 307 and 363 to 382, right?

I may attempt to do that myself if I find the time.

Best,

Dimitri

classical ADI for nested_negfc_sampling

Hello,
to get companion candidates' (CC) fluxes and positions, as far as I understood, the MCMC sampling was changed to a nested sampling, meaning you call vip.negfc.nested_negfc_sampling.
But now I wonder whether it is possible to use this approach with classical ADI (not PCA). I don't see a keyword or similar to change the algorithm. The reason I do not want to use PCA/LLSG is that the CC is in a corner where one can't create a full annulus.
Thank you,
Stefan

PS: And a rather stupid question: how do you measure the angles in this routine? E.g. what is the angle of x,y = (2,1) wrt (0,0)? ±63° or ±27°? I just want to make sure I measure the right 'speckle' ;)

Removing telescope (headers) related functions

Hi @oabsil @VChristiaens

To be consistent with the idea of having VIP as an observatory/instrument-agnostic package, we should check whether VIP contains functions/modules that are specific to a concrete instrument/observatory. The module vip.fits.ESO_headers.py is one example of the modules/functions that we could remove.

Cheers,
Carlos

Small bug in pca/utils.py

Hola Carlos,

There is a small bug in the scale_cube_for_pca module (that I wrote), throwing an error that pad lengths should be integers when the function is called. To solve it, the 2 following lines:
pad_len_y = (new_y - y)/2
pad_len_x = (new_x - x)/2
have to be changed to:
pad_len_y = (new_y - y)//2
pad_len_x = (new_x - x)//2
The reason for that, and I just learned it, is that when one imports:
from __future__ import division
the result of any division of 2 integers is a float. The result can be forced to be an integer with "//" instead of "/".
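
A quick illustration of the difference:

from __future__ import division

print(7 / 2)    # 3.5 -- true division, even for two ints
print(7 // 2)   # 3   -- floor division, always an int for int operands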

Saludos,
Valentin

Scaling in pca_rdi_annular

Hi all,

I'm a bit puzzled by the scaling steps in pca_rdi_annular, lines 161 and 162:

matrix = scale(matrix, with_mean=True, with_std=True)
data_ref = scale(data_ref, with_mean=True, with_std=True)

With a 35-frame science cube and 40 reference stars (each themselves 35-frame cubes), I make a contrast curve via pca_rdi_annular. For simplicity, I ignore the throughput by setting it to 1.0 everywhere. Attached are the resulting contrast curves with and without the scaling lines. The contrast curve with the scaling is considerably worse.

(Attached: contrast curves with and without the rescaling.)

For what it's worth, I'm in the i' band, with low (<10%) Strehl and no coronagraph.

Unify the input of coordinates as a tuple (X,Y)

Hi @owertz @VChristiaens @oabsil

I noticed that the input of pixel coordinates in functions that require it is not consistent: sometimes as separate parameters Y and X, and sometimes as a single tuple (X,Y). It makes sense to unify this and use tuples (X,Y) in all functions that take pixel coordinates as input.

Cheers,
Carlos

vipDS9

Hi Carlos,

since yesterday I have been having problems displaying FITS files with DS9 through VIP.
I have seen it has changed, and I tried both
the old vip.fits.display_array_ds9() and
vip.fits.vipds9.vipDS9.display(),
but neither worked.
I don't know if this is part of all the broken things I had yesterday, or if I am not using it correctly.
Is it working fine for you?

Thanks,
Maddalena

floats passed as indexes instead of ints

Hey Carlos,

While running register_and_center_via_speckles in NIRC2_preprocessing, I get the error shown in the attached image.

Looks like when it hits vip.var.filters on line 297, the routine median_filter doesn't like the median_size parameter being a float (e.g. 24.0 instead of 24). I've temporarily fixed it by forcing it to be an int, but I thought I'd bring this to your attention in case it should be a float, or is an issue anywhere else in the code.

Cheers,
Rahul

(Error screenshot attached.)

cube_derotate() function recently modified ?

Hi @CARLGOGO , @VChristiaens ,

Could you please tell me whether one of you has recently modified the function cube_derotate() (in vip/calib/derotation.py)? I've tried to run my MCMC stuff, but my chisquare() function (in vip/negfc/func_merit.py) doesn't work anymore. It seems the reason is that cube_derotate() now returns two outputs instead of one. I'm pretty sure that a couple of days ago it returned only a single array-like object.

Thanks in advance,
Olivier

Bug in the function pca_optimize_snr()?

Hi Carlos,

I see that the default value of the optional parameter annulus_width of pca_optimize_snr() is None. However, if one changes the argument mode to 'annular', the algorithm reaches the call to pca_annulus() without annulus_width ever having been assigned a value. In pca_annulus(), this creates an error (as it only accepts a float, not None, for annulus_width).

I suggest changing the default value of annulus_width from None to 2, so that the default annulus width is set to 2*fwhm (or any preferred value), and updating the description of the parameter correspondingly.

Cheers,
Valentin

vip.calib.cube_px_resampling vs vip.calib.frame_px_resampling

Hi @CARLGOGO

It looks like these two functions behave differently regarding flux conservation. For the cube, the flux is preserved, while for the frame, the flux is scaled by the squared pixel-size ratio. The function vip.calib.frame_px_resampling should be modified to behave like the other one.

Cheers,
Olivier

Fourier shift implementation

This would be a nice addition to VIP: having the option to perform geometric transformations on images (shift and rotation) with FFTs instead of conventional interpolation (Hagelberg et al. 2015). The challenge is to implement this efficiently, without impacting the computational cost too much.
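
For the shift part, a minimal sketch based on the Fourier shift theorem (using scipy.ndimage.fourier_shift; the rotation part would still need a dedicated implementation):

import numpy as np
from scipy.ndimage import fourier_shift

def fft_shift(image, shift_y, shift_x):
    # sub-pixel shift of a 2d frame via a phase ramp in the Fourier domain
    ft = np.fft.fft2(image)
    ft_shifted = fourier_shift(ft, (shift_y, shift_x))
    return np.real(np.fft.ifft2(ft_shifted))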

Possible issue with cube_derotate: even and odd values

Hello,

I am using VIP v 0.6.0.

There is an issue with the behavior of cube_derotate: it gives incorrect results on even-sized image cubes. This appears to be due to the center of the image being miscalculated.

The following code demonstrates this problem:

import numpy as np
import time
import skimage.transform
import vip


def psfmodel(frame, xpos, ypos, amplitude, sigma=2, maxsize=10):
    # Gaussian spot evaluated on a small patch around (xpos, ypos)
    out = np.zeros(frame.shape)
    intx = int(np.round(xpos))
    inty = int(np.round(ypos))
    hw = maxsize // 2
    for i in range(intx - hw, intx + hw + 1):
        for j in range(inty - hw, inty + hw + 1):
            out[i, j] = amplitude * np.exp(-1.0 * ((i - xpos)**2 + (j - ypos)**2) / (2 * sigma**2))
    return out


def psfmodel_rth(frame, r, theta, amplitude, sigma=2, maxsize=10):
    # index of the center of the frame -- works for odd and even sizes!
    cx, cy = frame.shape[0] / 2.0 - 0.5, frame.shape[1] / 2.0 - 0.5
    theta_r = np.pi * theta / 180.0
    xpos = cx + r * np.cos(theta_r)
    ypos = cy + r * np.sin(theta_r)
    return psfmodel(frame, xpos, ypos, amplitude, sigma=sigma, maxsize=maxsize)


def vanilla_derotate(cube, angle_list):
    output = np.zeros(cube.shape)
    for idx, angle in enumerate(angle_list):
        output[idx, :, :] = skimage.transform.rotate(cube[idx, :, :], -1.0 * angle)
    return output


if __name__ == "__main__":
    testdict = [('odd', np.zeros((21, 21))),
                ('even', np.zeros((22, 22)))]
    angs = np.arange(360)

    for key, value in testdict:
        print("testing " + key)

        print("Creating rotating psf")
        a = np.array([psfmodel_rth(value, r=5, theta=th, amplitude=1) for th in angs])
        print("Derotating...")
        vip_derot = vip.calib.cube_derotate(a, angle_list=angs, imlib='skimage')
        van_derot = vanilla_derotate(a, angle_list=angs)

        assert np.allclose(vip_derot, van_derot, atol=0.2)
        print("ds9(vip_derot)")
        time.sleep(1)

The code creates a rotating psf in a datacube. In the odd case, the VIP derotate function correctly derotates the cube. In the even case, the psf rotates around where it should be in an epicyclic fashion. You can view this by replacing the print("ds9(vip_derot)") line with a command that opens ds9 with the vip_derot and van_derot data.

This is not a feature of the PSF generating code, as the vanilla derotate function correctly derotates the cube in both cases.

Array slicing in /var/shapes.get_square

I ran into a bug while running contrast_curve.

It seems that when the code reaches get_square in shapes.py, on line 140:

array_view = array[y-wing:y+wing+1, x-wing:x+wing+1].copy()

a TypeError is raised because y, x, and wing are not ints. This seems to be (at least in the case of x and y) because the centers, passed from shapes.frame_center, are calculated using np.ceil, which returns floats and not ints. I can temporarily put fixes into my local copy. I've also attached the full traceback below. I hope I've explained this correctly; let me know if additional information is required.

(Full traceback screenshots attached.)

Small bug in phot/fakecomp.py

Hi Carlos,

There is an extra argument (verbose) in the call to psf_norm in function create_psf_template, which generates an error when the function is called.

Saludos,
Valentin

Problems importing after name change

Congrats on a big change to v0.8! Following the README, I now import vip with

import vip_hci as vip

This imports vip as expected. And I can refer to specific functions or modules such as

vip.fits, or
vip.fits.open_fits()

with no issues. However, when we try to do things like

import vip_hci as vip
from vip.fits import open_fits

we get an error! It says no module named "vip".

Is there something else we need to do to properly alias "vip_hci" as "vip" for these examples?

(The backup plan is to just change all lines like "from vip.fits import open_fits" to "from vip_hci.fits import open_fits", but I thought I should ask if anyone has a better suggestion.)
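
If I understand the import machinery correctly, "import vip_hci as vip" only binds the name "vip" in the importing namespace; "from ... import ..." statements are resolved against the real package name on the import path, so the alias is invisible to them:

import vip_hci as vip                  # binds "vip" locally; sys.modules still says "vip_hci"

vip.fits.open_fits                     # fine: attribute access through the alias
from vip_hci.fits import open_fits     # fine: uses the real package name
# from vip.fits import open_fits      # ImportError: no module named "vip"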

Header keywords 'CD1_1', 'CD1_2', 'CD2_1', 'CD2_2' and 'CUNIT1' are general to any observatory header or not?

One of the functions I want to add to calib/derotation.py computes the list of derotation angles to apply to each frame to put North up.
This function would be based on the header information.
So far, I have only handled ESO headers. I was wondering whether the header keywords 'CD1_1', 'CD1_2', 'CD2_1', 'CD2_2' and 'CUNIT1', which I use in the function, are general to the headers of any observatory, or specific to ESO?

In the latter case, do you think it would be better not to include the function in VIP, as it would not be general enough?

Thanks.
Valentin

PA threshold in annular PCA

Hi @CARLGOGO

I'm a bit confused by the behaviour of the PA threshold when applying pca_local.annular_pca to the 2015-10-24 HR8799 ADI data set (files n0040 to n0125). Here is the list of PAs:

array([ 277.89462, 277.62609, 277.34579, 277.07128, 276.73914,
276.43368, 276.10337, 275.73944, 275.38659, 275.00226,
274.58319, 274.15879, 273.68044, 273.16059, 272.66642,
272.09372, 271.46899, 270.81985, 270.15044, 269.39985,
268.57026, 267.66633, 266.70345, 265.66192, 264.4353 ,
263.10607, 261.67292, 260.00574, 258.25847, 256.16358,
253.82578, 251.02348, 247.98707, 244.23552, 239.8129 ,
234.72543, 228.70085, 221.98772, 213.8008 , 205.02589,
125.88666, 124.71132, 123.55023, 122.56172, 121.63624,
120.79717, 119.99221, 119.30114, 118.67914, 118.0255 ,
117.45395, 116.95032, 116.45874, 115.97881, 115.53998,
115.10231, 114.7175 , 114.36046, 114.00123, 113.6464 ,
113.34686, 113.03144, 112.75133, 112.47832, 112.20291,
111.93917, 111.67862, 111.44675, 111.2186 , 110.99489,
110.77652, 110.56603, 110.37987, 110.1867 , 109.99541,
109.80226, 109.62599])

What I'd like to do is set a PA threshold so that, for the innermost annuli, the first part of the sequence (277° to 205°) is "calibrated" by the second part of the sequence (125° to 109°), which basically means doing some kind of RDI between the two parts of the sequence (before and after transit). Here is the kind of result I get:

PA threshold 57.30 is too big, will be set to -75.72
Annulus 1, PA thresh = -75.72, Inn radius = 0.00, Ann center = 8.00
Done PCA with randsvd for current annulus
Running time: 0:00:00.204388

PA threshold 19.10 is too big, will be set to -75.72
Annulus 2, PA thresh = -75.72, Inn radius = 16.00, Ann center = 24.00
Done PCA with randsvd for current annulus
Running time: 0:00:00.432867

I don't understand why it complains about the PA threshold being too big and changes it to -75.72°. Furthermore, the final result of the reduction looks a bit suspicious to me. Could you look into this behaviour?

Cheers,
Olivier.

Bug in cube_detect_badfr_pxstats

Dear Carlos,

I found a little bug in vip/calib/badframes.py
On line 85, in cube_detect_badfr_pxstats:
mean_smooth = pn.rolling_median(mean_values, window , center='True')
It should be
mean_smooth = pn.rolling_median(mean_values, window , center=True)

Cheers
Julien

Fake companions and reference frame choices

Hi all,

I'm working with the contrast curve code in the new annular RDI mode. It looks like, when the throughput function injects fake companions, the choice of the RDI reference frames is affected relative to the no-fake-companion case. Is this correct?

It seems that the throughput computation should use the same reference frames as the no-fake-companion case.

readme.rst should be README.rst

Hello,

Installing on Ubuntu 14, I get the following:

Traceback (most recent call last):
  File "setup.py", line 10, in <module>
    readme = open('README.rst').read()
IOError: [Errno 2] No such file or directory: 'README.rst'

Did this and sorted it out:
mv readme.rst README.rst

Not a biggy, but thought you should know.

Cheers,
Martyn

Bad throughput in RDI

Hi Carlos,

When examining the throughput curves in all RDI examples, I am puzzled by the fact that the throughput is very low and rises with angular separation. It really looks like an ADI throughput curve, limited by self-attenuation. It's hard to see from the code whether the fake companions are injected into the simulated target cube only.

Could you double check? With RDI, I expect the throughput curve to be fairly flat.

Best,

Dimitri

VORTEX transmission in contrcurve.py

In contrcurve.py the vortex transmission is applied to the final contrast curve and not to the throughput. Therefore it does not appear in the throughput vector or in the corresponding plot (nor in the Student-t corrected contrast curve).
Since at the moment it is applied directly to the curve and not to the throughput, I also think it is misleading to call it transmission, as it is defined as cont_curve_samp = cont_curve_samp*trans_interp (instead of cont_curve_samp = cont_curve_samp/trans_interp).
I would keep calling it transmission, but I would apply it directly to the throughput (thruput_interp_attenuated = thruput_interp * trans_interp), so that both cont_curve_samp and cont_curve_samp_corr are affected.

Cheers,
Maddalena

The call of a module object raises an error in vip/pca/pca_fullfr.py

Hi Carlos,

When get_snr() is called, it seems that, at line 178 of vip/pca/pca_fullfr.py, one tries to call vip.phot.snr, which is a module and not a function or method. As a consequence, it raises a TypeError, because a module object is not callable. I guess one should call vip.phot.snr.snr_ss() or snr_peakstddev() instead. Which of these do you advocate?

For information: vip.pca.pca_optimize_snr() --> grid() --> get_snr() --> phot.snr

Cheers,
Olivier

Some small bugs and suggestions

1/ in frame_center_radon() (calib/recentering.py):

  • The cost_bound variable is only defined when plot=True, but is later used even if plot=False.
  • (added in the edit) When I want to use the wavelet filter, I systematically get an error of this type: "ValueError: Level value of 100 is too high. Maximum allowed is 6." I see that the value of 100 is actually hardcoded at l.432. When I change the value to, say, 5, I get another error stemming from wavelet_denoise(): "AttributeError: 'module' object has no attribute 'thresholding'"

2/ in contrast_curve() (phot/contr_curve.py):

  • l.173-175: when the noise_samp size is < 20 (i.e. when your cropped frames can carry fewer than 20 concentric annuli), an error is raised by the savgol_filter function because the polynomial order (set to 2) ends up greater than or equal to the filter window length (set to 0.1 × the noise_samp size). Maybe we can allow a bit more flexibility? A contrast curve could still make sense with fewer than 20 concentric annuli; there is maybe just no need to use a savgol_filter in that kind of case.
  • When the contrast curve is saved, the pdf appears truncated.
    => simple solution: set figsize to (10,6) instead of (8,4)

3/ in _centroid_2dm_frame() (calib/recentering.py):
I'd remove the condition
"if np.allclose(miny, cy, atol=2) and np.allclose(minx, cx, atol=2):",
which I find arbitrary as an automatic test of whether the data were taken with the AGPM or not! With some recent NACO+AGPM data, half of the frames pass the condition and the other half do not.
It is probably better to implement it in the same way as _centroid_2dg_frame().

4/ in var/shapes.py:
I implemented the equivalents of get_circle and get_annulus, but with ellipses. I found them useful in the case of inclined disks. I'll let you decide whether to include them in VIP or not:

def get_ellipse(array, a, b, PA, output_values=False, cy=None, cx=None,
                output_indices=False):
    """
    Returns a centered elliptical region from a 2d ndarray. All other
    pixels are set to zero.

    Parameters
    ----------
    array : array_like
        Input 2d array or image.
    a : flt
        Semi-major axis.
    b : flt
        Semi-minor axis.
    PA : deg
        The PA of the semi-major axis.
    output_values : {False, True}, optional
        If True returns the values of the pixels in the ellipse.
    cy, cx : int
        Coordinates of the ellipse center.
    output_indices : {False, True}, optional
        If True returns the indices inside the ellipse.

    Returns
    -------
    Depending on output_values, output_indices:
    values : array_like
        1d array with the values of the pixels in the elliptical region.
    array_masked : array_like
        Input array with the elliptical mask applied.
    y, x : array_like
        Coordinates of the pixels in the ellipse.
    """
    if not array.ndim == 2:
        raise TypeError('Input array is not a frame or 2d array.')
    sy, sx = array.shape
    if not cy and not cx:
        cy, cx = frame_center(array, verbose=False)

    # Definition of other parameters of the ellipse
    f = np.sqrt(a**2 - b**2)  # distance between center and foci of the ellipse
    PA_rad = np.deg2rad(PA)
    pos_f1 = (cy + f*np.cos(PA_rad), cx + f*np.sin(PA_rad))  # coords of first focus
    pos_f2 = (cy - f*np.cos(PA_rad), cx - f*np.sin(PA_rad))  # coords of second focus

    # ogrid is a multidim mesh creator (faster than mgrid)
    yy, xx = np.ogrid[:sy, :sx]
    ellipse = dist(yy, xx, pos_f1[0], pos_f1[1]) + dist(yy, xx, pos_f2[0], pos_f2[1])
    ellipse_mask = ellipse < 2*a  # mask of 1's and 0's

    if output_values and not output_indices:
        values = array[ellipse_mask]
        return values
    elif output_indices and not output_values:
        indices = np.array(np.where(ellipse_mask))
        y = indices[0]
        x = indices[1]
        return y, x
    elif output_indices and output_values:
        msg = 'output_values and output_indices cannot be both True.'
        raise ValueError(msg)
    else:
        array_masked = array * ellipse_mask
        return array_masked

def get_ell_annulus(array, a, b, PA, width, output_values=False,
                    output_indices=False, cy=None, cx=None):
    """Returns a centered elliptical annulus from a 2d ndarray. All other
    pixels are set to zero.

    Parameters
    ----------
    array : array_like
        Input 2d array or image.
    a : flt
        Semi-major axis.
    b : flt
        Semi-minor axis.
    PA : deg
        The PA of the semi-major axis.
    width : flt
        The size of the annulus along the semi-major axis; it is
        proportionally thinner along the semi-minor axis.
    output_values : {False, True}, optional
        If True returns the values of the pixels in the annulus.
    output_indices : {False, True}, optional
        If True returns the indices inside the annulus.
    cy, cx : float, optional
        Location of the center of the annulus. If not provided, the
        annulus is assumed to be centered on the frame.

    Returns
    -------
    Depending on output_values, output_indices:
    values : array_like
        1d array with the values of the pixels in the annulus.
    array_masked : array_like
        Input array with the annular mask applied.
    y, x : array_like
        Coordinates of the pixels in the annulus.
    """
    if not array.ndim == 2:
        raise TypeError('Input array is not a frame or 2d array.')
    if cy is None or cx is None:
        cy, cx = frame_center(array)
    sy, sx = array.shape

    width_a = width
    width_b = width * b / a

    # Definition of the big ellipse
    f_big = np.sqrt((a + width_a/2.)**2 - (b + width_b/2.)**2)  # center-to-focus distance
    PA_rad = np.deg2rad(PA)
    pos_f1_big = (cy + f_big*np.cos(PA_rad), cx + f_big*np.sin(PA_rad))  # first focus
    pos_f2_big = (cy - f_big*np.cos(PA_rad), cx - f_big*np.sin(PA_rad))  # second focus

    # Definition of the small ellipse
    f_sma = np.sqrt((a - width_a/2.)**2 - (b - width_b/2.)**2)  # center-to-focus distance
    pos_f1_sma = (cy + f_sma*np.cos(PA_rad), cx + f_sma*np.sin(PA_rad))  # first focus
    pos_f2_sma = (cy - f_sma*np.cos(PA_rad), cx - f_sma*np.sin(PA_rad))  # second focus

    yy, xx = np.ogrid[:sy, :sx]
    big_ellipse = dist(yy, xx, pos_f1_big[0], pos_f1_big[1]) + \
                  dist(yy, xx, pos_f2_big[0], pos_f2_big[1])
    small_ellipse = dist(yy, xx, pos_f1_sma[0], pos_f1_sma[1]) + \
                    dist(yy, xx, pos_f2_sma[0], pos_f2_sma[1])
    ell_ann_mask = (big_ellipse < 2*(a + width/2.)) & \
                   (small_ellipse >= 2*(a - width/2.))  # mask of 1's and 0's

    if output_values and not output_indices:
        values = array[ell_ann_mask]
        return values
    elif output_indices and not output_values:
        indices = np.array(np.where(ell_ann_mask))
        y = indices[0]
        x = indices[1]
        return y, x
    elif output_indices and output_values:
        msg = 'output_values and output_indices cannot be both True.'
        raise ValueError(msg)
    else:
        array_masked = array * ell_ann_mask
        return array_masked

Minor bug in vip.negfc.confidence

Dear Carlos,

When we use the keyword title in the function vip.negfc.confidence, it generates an error at line 933 (and probably 937 too), because the variable fig has no title attribute:

File "/vip/negfc/mcmc_opt.py", line 933, in confidence
fig.title(title+' '+msg.format(mu[j],sigma[j]),

AttributeError: 'Figure' object has no attribute 'title'

This is probably a minor bug. Looking briefly at how to fix it, you can probably use instead:
ax.set_title(title+' '+msg.format(mu[j], sigma[j]))

Cheers
Julien

Flux conservation in PCA

Hi @CARLGOGO ,

Looks like the fluxes are not preserved in the PCA processing. I mean, the flux levels in the final image have nothing to do with the flux levels in the cubes that we feed the PCA with. Preserving the fluxes would be useful for a quick look at the brightness of candidate companions, and will be even more important when computing contrast curves.

Cheers!

Annular PCA in RDI mode

Dear all,

This is more of a suggestion than an issue, but it would be very useful to be able to run annular PCA in RDI mode (currently, only full frame PCA has that option).

Thanks!

Rebecca
