barnabytprowe / great3-public
Public repository for the Third Gravitational Lensing Accuracy Testing Challenge
License: BSD 3-Clause "New" or "Revised" License
I'm curious to know if there is any way to change these settings other than by modifying the file itself prior to importing great3sims. It appears that these values are already in use during module initialization. Does that mean that a script which runs great3sims.run() cannot change the constants.py values programmatically?
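If the values in constants.py are only read when run() executes (rather than being copied into other modules at import time), they can be overridden by assigning to the module's attributes before calling run(). A minimal sketch of that pattern, using a stand-in module rather than great3sims itself; whether this actually works for great3sims depends on exactly when the constants are read:

```python
# Sketch of overriding module-level constants at runtime. "constants" here
# is a stand-in module, not the real great3sims.constants; if other modules
# copy the values at import time, patching after import is too late.
import types

constants = types.ModuleType("constants")
constants.n_subfields = 200  # pretend default baked into the module


def run():
    # Reads the constant at call time, so an earlier patch is picked up
    return constants.n_subfields


print(run())                 # -> 200 (the default)
constants.n_subfields = 10   # monkey-patch before calling run()
print(run())                 # -> 10 (the override)
```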
I am getting the following error from builder.py when I run the following:
great3sims.run("real_galaxy", gal_dir="sample", truth_dir="truth", experiments=["real_galaxy"], obs_type="ground", shear_type=["constant"])
The error appears during the creation of a tar file at the end of the script. I am not sure whether the file in question is not being created, or whether I am running the real_galaxy option incorrectly. I just updated to the latest commit, d4..., just to be sure.
-----------------------------------------------------------------
Packaging data for real_galaxy / ground / constant
Traceback (most recent call last):
File "real.py", line 2, in <module>
great3sims.run("real_galaxy", gal_dir="sample", truth_dir="truth", experiments=["real_galaxy"], obs_type="ground", shear_type=["constant"])
File "/sandbox/lsstshared/pgee/mylsst9/great3-public/great3sims/__init__.py", line 242, in run
builder.packageTruth(subfield_min, subfield_max)
File "/sandbox/lsstshared/pgee/mylsst9/great3-public/great3sims/builder.py", line 1409, in packageTruth
shutil.copy2(infile, outfile)
File "/sandbox/lsstshared/v10_0/Linux64/anaconda/2.1.0/lib/python2.7/shutil.py", line 130, in copy2
copyfile(src, dst)
File "/sandbox/lsstshared/v10_0/Linux64/anaconda/2.1.0/lib/python2.7/shutil.py", line 82, in copyfile
with open(src, 'rb') as fsrc:
IOError: [Errno 2] No such file or directory: '/sandbox/lsstshared/pgee/mylsst9/great3-public/great3sims/real_galaxy/real_galaxy/ground/constant/star_test_images.fits'
---------------------------------------------------------------------
I think one or more of the example scripts might need updating in the light of recent changes to GalSim (specifically to the galsim.Image API: GalSim-developers/GalSim#495).
I know for a fact that coadd_multiepoch.py needs updating, because the following line (and those like it) is out of date:
# Load full image array into an ImageF instance
image = galsim.ImageViewF(
    pyfits.getdata(inprefix+"%03d-%1d.fits" % (field, 0)).astype(numpy.float32), scale=1.)
The galsim.ImageViewF() way of initializing images has gone the way of the dinosaur since GalSim-developers/GalSim#495 was merged in. I think that similar things may be found in the other scripts, but part of this issue will be to check that.
My hacky solution is a try/except block:
# Load full image array into an ImageF instance
try:
    image = galsim.ImageViewF(
        pyfits.getdata(inprefix+"%03d-%1d.fits" % (field, 0)).astype(numpy.float32), scale=1.)
except:  # For those users who have a newer version of GalSim than v1.0.1...
    image = galsim.ImageF(
        pyfits.getdata(inprefix+"%03d-%1d.fits" % (field, 0)).astype(numpy.float32), scale=1.)
I generally don't like to use a bare except:, but to be honest there is very little else here that can go wrong, assuming that correct GREAT3 images are being used, so I don't think it is a real problem. (The Boost.Python-thrown exception claims to be an ArgumentError, but this is not defined at the Python level. I don't think it's a big issue to use the catch-all as above for these simple, single-purpose scripts. If it is a pyfits exception that was thrown in the first place, it will still trigger after the except: anyway.)
Comments welcome.
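One alternative to the bare except: would be to feature-test for the old constructor with hasattr(), so that genuine pyfits or I/O errors still propagate. A sketch, not tested against real GalSim; "galsim" below is a tiny stand-in module mimicking a post-#495 release, just to make the snippet self-contained:

```python
# Sketch: choose the image constructor by feature test rather than a bare
# except. The stand-in module below lacks ImageViewF, like newer GalSim;
# with the real library the same selection line applies unchanged.
import types

galsim = types.ModuleType("galsim")
galsim.ImageF = lambda array, scale: ("ImageF", scale)  # stand-in constructor

# GalSim <= v1.0.1 provides ImageViewF; newer versions only ImageF
make_image = galsim.ImageViewF if hasattr(galsim, "ImageViewF") else galsim.ImageF
image = make_image([[0.0]], 1.0)
print(image)  # -> ('ImageF', 1.0) with the stand-in
```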
This is a question we received via email at GREAT3 HQ:
Would you mind answering two small questions about GREAT3 :-)? Sorry if this is already explained somewhere; maybe I missed something, but I could not find it in the Handbook.
While testing various PSFEx settings (and as expected), we noticed that the results were quite sensitive to the exact extent of the footprint we would use for the PSF model. In PSFEx, the PSF is modelled inside a disk to minimize directional biases when convolving this model with a limited footprint. Now, the GREAT3 point source samples are cropped inside square images. So here come my questions:
- while making the galaxy images, did you use areas of the PSF which are not inside those stellar images, and which we would therefore have to "invent" (or say, "recover")?
- if not, did you simply convolve the galaxy models with the aforementioned squarish PSF footprint, or did you crop the PSF model inside a disk?
Thanks!
According to the guide:
The first star in the catalog is precisely centered within a pixel, and the others have
random sub-pixel offsets.
However, looking at starfield_image-100-0.fits, the first star is actually centered on the stamp which, because of the even stamp size, is actually not centered in a pixel. I think the star images are fine as is, but it would be good to clarify how they are centered in the guide.
A related issue is that the corresponding catalog gives the star center as (23,23), but with the usual convention that the bottom-left corner is (1,1), the center is actually on the corner between (24,24) and (25,25). Presumably the convention here is actually 0-indexed, so that would be good to highlight (or perhaps change?).
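To make the ambiguity concrete, here is where the geometric centre of an even-sized stamp falls under the two indexing conventions (the 48x48 stamp size is an assumption for illustration):

```python
# Sketch: centre of an even-sized postage stamp under the two common pixel
# conventions. The stamp size n = 48 is assumed for illustration only.
n = 48

# 0-indexed pixels run 0..n-1, so the geometric centre is at 23.5,
# i.e. the corner between pixels 23 and 24.
center_0indexed = (n - 1) / 2.0

# 1-indexed (FITS-style) pixels run 1..n, so the centre is at 24.5,
# i.e. the corner between pixels 24 and 25.
center_1indexed = (n + 1) / 2.0

print(center_0indexed, center_1indexed)  # -> 23.5 24.5
```

Under neither convention does the centre of an even-sized stamp land on an integer pixel position, which is why stating the convention explicitly in the guide would help.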
Hiya,
for all the dataset links I've tried on the challenge website, the US mirror site gives a 'page not available' error message. For example ftp://ftp.great3.caltech.edu/pub/great3/data/public/control-ground-constant.tar.gz
Is this a problem on my end (the non-US-mirror links work fine), or do the links need to be updated? I know the US mirror sites were down for a little while, but I thought that was a few weeks ago!
-Debbie
Hi Guys,
I've worked on the variable_PSF/ground/constant branch for quite a while. The interesting thing is that I got roughly the same shear values for different sub-fields in the same field (there are 20 of them in each field). I tried two different ways of reconstructing the PSF field myself, and I also tried to use the PSF reconstructed by the code provided by GREAT3. All of these methods yielded very similar answers consistently, but scored "0". Now I suspect that I downloaded the wrong set of data. Have people ever had this situation? Had the data in the "variable_PSF/ground/constant" branch been updated before? Thanks in advance for your time!!
Best,
Jun
I'm opening this issue after some email discussion with @HironaoMiyatake and Yuki Okura. When running the presubmission script, they're getting a corr2 assertion error (reproduced below), presumably due to a problem with the presubmission script or the data. I don't know the inner workings of corr2 well enough to know how to diagnose this problem, so I thought we would ping @rmjarvis for some help as to possible causes.
The error:
Running presubmission.py on file /data1/GREAT3/control/ground/variable/results/GREAT3.dat
Using variable-shear analysis
Column containing galaxy ids is 0 (default)
Column containing g1 is 1 (default)
Column containing g2 is 2 (default)
Column containing weight is 3
Reading file 1/1 ...
All files read.
Checking for correct galaxy IDs... (this may take a while)
Computing summary statistics for field 0 ...
Error - Assert _w != 0. failed
on line 80 in file Cell.cpp
Traceback (most recent call last):
File "presubmission.py", line 710, in <module>
generate_submission_files(args)
File "presubmission.py", line 580, in generate_submission_files
corr2=args.corr2)
File "presubmission.py", line 338, in print_variable_summary
nbins=nbins, xy_units='degrees', sep_units='degrees', corr2_executable=corr2)
File "presubmission.py", line 281, in run_corr2
results = readfile(m2file)
File "presubmission.py", line 193, in readfile
with open(filename) as f:
IOError: [Errno 2] No such file or directory: '/tmp/tmps_cVkw_temp.m2'
So there is an assertion error (Assert _w != 0. failed) from corr2, and no output file is written. That file therefore does not exist, and the presubmission script fails because it expects to read that output file back in.
(As a general question: would users prefer a more useful error in presubmission.py? I don't mind writing one, but I'm also not sure we want to make a change to the script this close to the challenge end date for something this minor.)
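For reference, a more useful error could be as simple as checking for the file before opening it. A hypothetical sketch; readfile_checked and the message wording are illustrative, not the current presubmission.py code:

```python
# Hypothetical sketch: raise a clearer error when corr2 produced no output
# file, instead of letting open() raise a bare IOError deep in readfile().
import os


def readfile_checked(filename):
    if not os.path.exists(filename):
        raise RuntimeError(
            "corr2 did not write the expected output file %r; this usually "
            "means corr2 itself failed -- check its messages above (e.g. "
            "'Assert _w != 0. failed')." % filename)
    with open(filename) as f:
        return f.read()
```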
There are donut-shaped PSFs in the control-ground-constant images image-002.fits and image-155.fits, which are not common. According to Rachel's email, this is caused by the stochastic aberrations added to the basic optical model. So just proceed with those "donuts".
We're doing some post-GREAT3 tests with the BFD code, and we'd like to make some further GREAT3-like simulations with a modification to the galaxy size cut.
We would like to modify the simulations to make a size cut at a fixed galaxy size across all images. Initially we'd like to do this for just one branch - the control/ground/constant branch.
I think this would require only a small change to the public GREAT3 simulations code - is the best approach to make the modification in the great3-public/great3sims/mass_produce.py script? [If so, could you point us towards the relevant line numbers?]
This Issue is for encouragement purposes!
Beyond reporting bugs or problems with GREAT3, you can use the great3-public issues pages to meet and greet one another, share experiences, and share hints and tips for the challenge.
For any genuine issues (bugs, inconsistencies), please do raise those problems here too...
I think the best way to look for a non-linear shear response would be to rotate the shear estimates to the frame where the true shear is in the +g1 direction. Then the estimated shear (g1hat, g2hat) would rotate to (g+hat, gxhat) in this coordinate system. Then g+hat could be fit as:
g+hat = c+ + m+ * g_true + q+ * g_true^3
It would be weird for gxhat to be inconsistent with zero from symmetry considerations, but you could fit that as well with a similar formula and see what you find.
I think Gary is right that it would also be weird to have a g_true^2 term. I'm not completely convinced that it is entirely unphysical, but it would be non-analytic at the origin, so that would be a bit odd.
STEP1 only had applied shears in the +g1 direction, so they kind of did precisely this. And Gary suggested (in a conversation just now) that probably the quadratic term that was measured there was just fitting the cubic term and erroneously ascribing it to a quadratic. With as few points as they had, that's certainly plausible.
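The fit described above can be sketched with numpy least squares. Synthetic, noiseless data here; the injected bias values (c+ = 1e-3, m+ = 0.02, q+ = 5) are made up purely for illustration:

```python
# Sketch: fit g+hat = c+ + (1 + m+) * g_true + q+ * g_true^3 by least
# squares, omitting the quadratic term as discussed above. The data are
# synthetic and noiseless; the injected biases are arbitrary examples.
import numpy as np

g_true = np.linspace(-0.05, 0.05, 21)
c_in, m_in, q_in = 1e-3, 0.02, 5.0
g_plus_hat = c_in + (1.0 + m_in) * g_true + q_in * g_true**3

# Design matrix with columns [1, g_true, g_true^3]
A = np.vstack([np.ones_like(g_true), g_true, g_true**3]).T
coeffs, _, _, _ = np.linalg.lstsq(A, g_plus_hat, rcond=None)
c_plus, one_plus_m, q_plus = coeffs
print(c_plus, one_plus_m - 1.0, q_plus)  # recovers ~(0.001, 0.02, 5.0)
```

With real submissions one would of course add noise and per-field weights, but the design matrix is the same; dropping the quadratic column implements the symmetry argument above.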
@barnabytprowe , I think that with the new license change, I can replace the GPL license file in here with the BSD one that we've put in the header of each file. Is that correct?
Hi,
Recently, I've tried to submit on the variable psf / space / variable branch using results generated from my usual pipeline. It failed when the presubmission script was checking the IDs of the galaxies. The error raised is the following:
RuntimeError: Unexpected galaxy IDs found. Please check that your galaxy IDS are the same as the catalogs distributed with the Great3 simulations, and that you have passed the correct branch name.
This is apparently method-independent, as both our team (MegaLUT) and the CEA-EPFL team had the same issue.
Cheers
This is just an issue for assistance, discussion, and coordination of some of the fantastic plots and plot ideas that were generated at the GREAT3 hack session 29 May 2014 at CMU.
I've made a directory analysis/ to store the code, and in there you'll also find the .csv file export.csv that @joezuntz put together to store the basic information about GREAT3 submissions. All this information is public anyhow, via the GREAT3 webpages, but this makes it convenient.
Have fun!
Hi there,
I'm trying to reproduce parts of the GREAT3 simulations. I am contemplating the tests/test_run.py file. The settings for this file in the GitHub repository are to run all branches. I decided to restrict myself to CSC. However, the script seems to ignore this, because it still needs the COSMOS files. So that's my first question: how do we make sure we use only what we need?
I decided to download everything while setting up test_run.py to run CSC only. Now the script goes a little bit further. However, it stumbles when executing generateSubfieldParameters() in great3sims/galaxies.py, specifically in the block of code between lines 312 and 317. I get the following error:
bash-4.1$ python test_run.py
Generating metaparameters for control / space / constant
Regenerating metaparameters for all subfields, not just the chosen subset
Generating catalogs for control / space / constant
subfield 0
Traceback (most recent call last):
File "test_run.py", line 113, in <module>
subfield_min=subfield_min, subfield_max=subfield_max, **kwargs)
File "../great3sims/__init__.py", line 188, in run
builder.writeSubfieldCatalog(subfield_index)
File "../great3sims/builder.py", line 382, in writeSubfieldCatalog
effective_seeing)
File "../great3sims/galaxies.py", line 318, in generateCatalog
self.peak_image_pixel_count[self.peak_image_pixel_count == 0.] = 1.e-4
File "/usr/lib64/python2.6/site-packages/pyfits/core.py", line 5507, in __setitem__
self.array.field(indx)[self.row] = value
ValueError: field named peak_image_pixel_count not found
which comes from
mask_catalog = pyfits.getdata(os.path.join(self.gal_dir, self.rgc_mask_file))
[...]
self.peak_image_pixel_count = mask_catalog['peak_image_pixel_count']
where the FITS file to load is called real_galaxy_mask_info.fits, which does exist in the real-galaxy data but doesn't seem to contain this peak_image_pixel_count field. Again a real-galaxy issue. How can I get rid of this error?
Cheers
Thibault
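One quick diagnostic for problems like the above is to list the columns the downloaded catalog actually contains: pyfits.getdata() returns a record array, so its field names can be checked directly. A sketch with a stand-in array; the field name some_other_field is made up:

```python
# Sketch: check which fields a catalog record array really has before
# indexing it. The stand-in array (and its single field name) are
# hypothetical; with the real data, mask_catalog would come from
# pyfits.getdata("real_galaxy_mask_info.fits").
import numpy as np

mask_catalog = np.zeros(3, dtype=[("some_other_field", "f4")])

print(mask_catalog.dtype.names)  # lists the columns actually present
if "peak_image_pixel_count" not in mask_catalog.dtype.names:
    print("peak_image_pixel_count is missing: the mask catalog may be an "
          "older version, so re-downloading the real-galaxy data may help")
```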
This is an email received today, to which the authors very kindly said I could respond publicly here at great3-public:
We, the amalgam@iap team, have been spending a lot of time on the challenge playing around with weighting schemes and so on, in order to understand why we were doing so well on some fields and so badly on some others. We're not doing so badly after all, but I'd like to have some clarification on the way you calculate c and m values before inferring the score, if this is not considered an unfair request.
Is the fit of the g_supplied - g_true regression performed in a least-squares sense, or is it a more robust mean-absolute-deviation minimization? Do you clip some outlier fields off? Why don't you allow us to provide errors (or weights) for each field, if it is done in a least-squares sense? Of course this all applies to the constant shear branches (either control or real_galaxy).
Responses below...
In an email exchange with members of the MaltaOx team this morning, I discovered an issue with the submission_checker.py script for variable_psf-*-constant branches. This therefore also impacts the checking of full-*-constant branches, and is possibly directly relevant to #17.
I'm very sorry that this bug in the submission checker was only discovered so late. It assumes the wrong number of fields for these branches, and this Issue will be to fix this problem so that people submitting to the GREAT3 post-challenge leaderboards will be able to use the submission_checker to verify all branches.
Hi Guys,
Is there a switch for doing bootstrapping in "presubmission.py"? I am dealing with variable shear catalogs. Thanks a lot in advance!
-Jun Zhang
This paragraph:
In the starfield image, only the lower-left PSF image (following standard FITS conventions) is
coadded and output: the other (offset) PSF images are each given a different, unknown offset in
each epoch.
This issue is to revise this!
Looking at the multiepoch-space-(constant,variable) PSF images, the optical PSF model seems to change quite significantly (the number of struts, for example) from epoch to epoch. This does not seem physical, and the logic demonstrated in the example coadd script doesn't seem to account for it. Is this correct?