THIS REPOSITORY IS OBSOLETE: HAZARDLIB HAS BEEN INTEGRATED INTO https://github.com/gem/oq-engine
Openquake Hazard Library
Home Page: http://www.globalquakemodel.org/openquake/
The Zhao et al. (2006) GMPEs have been updated, and I'm opening this issue to request new GSIM classes for the Zhao et al. (2016) GMPEs for subduction zones (interface, slab, crustal, and upper mantle).
The version variable is not aligned with the package version; we want to align them.
In Python 3 the speedups are not loaded, due to changes in extension modules for Py3: https://docs.python.org/3/howto/cporting.html
It will be very common for a MultiMFD to have homogeneous parameters (i.e. equal for all sites). We should make a special case for that in the XML, to save space.
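The homogeneous special case could look like the following minimal sketch (the helper name is illustrative, not the real serializer API): before writing a MultiMFD parameter to XML, collapse it to a single value when it is equal for all sites.

```python
# Hypothetical helper for the proposed special case: when a MultiMFD
# parameter has the same value for every site, serialize it once
# instead of repeating it per site. The name is illustrative only.

def collapse_if_homogeneous(values):
    """Return a single-element list if all per-site values are equal,
    otherwise return the values unchanged."""
    first = values[0]
    if all(v == first for v in values[1:]):
        return [first]
    return values
```

A writer could then emit one value for the homogeneous case and the full per-site list otherwise.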
Now both ComplexFaultSurfaces and SimpleFaultSurfaces store the original edges, so we could use those instead of downsampling. If the original edges are not available, should that be considered an error?
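One of the two options raised above (treating missing edges as an error) could be sketched as follows; the attribute and function names are hypothetical, not the real surface API:

```python
# Hypothetical sketch: prefer the stored original edges and raise if
# they are absent, i.e. the "considered an error" option from the
# issue. "original_edges" is an illustrative attribute name.

def get_edges(surface):
    edges = getattr(surface, "original_edges", None)
    if edges is None:
        raise ValueError("original edges not available on %r" % surface)
    return edges
```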
This is needed for the Japan model. If missing, should it be considered 1?
In the current disaggregation calculator we repeat the same (expensive) calculation twice. First we compute the Rjb distance between each site and the rupture (i.e. a mesh of points) (see https://github.com/gem/oq-hazardlib/blob/master/openquake/hazardlib/calc/disagg.py#L171); then we compute Rjb again to find the lon and lat of the mesh point closest to each site (see https://github.com/gem/oq-hazardlib/blob/master/openquake/hazardlib/calc/disagg.py#L173). During the first call we could easily also obtain the lons and lats and return them if needed. That would reduce the time needed for the rupture-site distance calculation by about 50%.
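A minimal sketch of the single-pass idea, under simplifying assumptions: plain Euclidean distance on projected coordinates stands in for the real Rjb computation, and the function name is illustrative. The point is only that one argmin pass yields both the minimum distance and the coordinates of the closest mesh point.

```python
import numpy

def min_distance_and_closest_point(mesh_xx, mesh_yy, site_x, site_y):
    """Return (min distance, (x, y) of the closest mesh point) in one
    pass, instead of computing the distances twice.

    Simplified sketch: Euclidean distance stands in for Rjb."""
    dists = numpy.sqrt((mesh_xx - site_x) ** 2 + (mesh_yy - site_y) ** 2)
    idx = dists.argmin()  # one argmin gives both pieces of information
    return dists.flat[idx], (mesh_xx.flat[idx], mesh_yy.flat[idx])
```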
This is needed for the Japan model. A GriddedSurface is defined by a list of 3D points. The NRML representation could be
<surface>
    <griddedSurface>
        <gml:posList>
            -124.704 40.363 5.49326 -124.977 41.214 4.98856 -125.14 42.096 4.89734
        </gml:posList>
    </griddedSurface>
</surface>
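The flat gml:posList stream maps to one (lon, lat, depth) triple per grid node. An illustrative parser (not the actual NRML reader) could group it like this:

```python
# Illustrative sketch, not the real NRML reader: the gml:posList flat
# stream of numbers is grouped into (lon, lat, depth) triples, one per
# grid node of the GriddedSurface.

def parse_poslist(text):
    coords = [float(tok) for tok in text.split()]
    assert len(coords) % 3 == 0, "posList must hold lon lat depth triples"
    return [tuple(coords[i:i + 3]) for i in range(0, len(coords), 3)]
```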
At least one GMPE that uses a depth parameter does not require it in the SitesContext (e.g. BooreEtAl2014), while other GMPEs do require it (e.g. AbrahamsonEtAl2014, CampbellBozorgnia2014, ChiouYoungs2014). I would therefore like to request subclasses of these latter GMPEs that do not require the depth parameters. I think this would be a fairly trivial change; in AbrahamsonEtAl2014, it should be as simple as setting z1pt0 = z1pt0ref. Currently we are providing a SitesContext that achieves the same result, but it requires some needless bookkeeping when working with multiple GMPEs that use different equations for z1pt0. This change should also be useful generally, since the depth parameters are often unavailable.
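The requested pattern could look like the following self-contained sketch. This is not the actual openquake API: the stub base class, the z1pt0_ref relation, and all names here are illustrative stand-ins for the real GMPE classes.

```python
# Minimal sketch of the requested subclass pattern (hypothetical names,
# not the real openquake.hazardlib API): a subclass that no longer
# requires z1pt0 and falls back to a reference value instead.

class AbrahamsonEtAl2014Stub:
    """Stand-in for the real GMPE class."""
    REQUIRES_SITES_PARAMETERS = {"vs30", "z1pt0"}

    def get_mean(self, sites):
        # the real GMPE uses z1pt0 in its site-term equations;
        # placeholder arithmetic here
        return sites["vs30"] * 0.0 + sites["z1pt0"]


class AbrahamsonEtAl2014NoZ1pt0(AbrahamsonEtAl2014Stub):
    """Subclass that does not require z1pt0: it substitutes a
    Vs30-dependent reference value, as suggested in the issue."""
    REQUIRES_SITES_PARAMETERS = {"vs30"}

    @staticmethod
    def z1pt0_ref(vs30):
        # hypothetical reference-depth relation; the real model
        # defines its own z1pt0(vs30) equation
        return 1000.0 / vs30

    def get_mean(self, sites):
        sites = dict(sites)  # do not mutate the caller's context
        sites.setdefault("z1pt0", self.z1pt0_ref(sites["vs30"]))
        return super().get_mean(sites)
```

With this, callers never need to fill in z1pt0 themselves, which removes the per-GMPE bookkeeping described above.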
Pull #492 breaks the "mesh" tests on Windows (independently of the numpy/scipy/shapely versions). The errors look like this:
test_get_closest_points_mesh1D (openquake.hazardlib.tests.geo.surface.multi_test.DistancesTestCase) ... ERROR
======================================================================
ERROR: test_get_closest_points_mesh1D (openquake.hazardlib.tests.geo.surface.multi_test.DistancesTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\OQM\lib\openquake\hazardlib\tests\geo\surface\multi_test.py", line 151, in test_get_closest_points_mesh1D
surf = MultiSurface(self.surfaces_mesh1D)
File "C:\OQM\lib\openquake\hazardlib\geo\surface\multi.py", line 105, in __init__
self.edge_set = self._get_edge_set(tol)
File "C:\OQM\lib\openquake\hazardlib\geo\surface\multi.py", line 137, in _get_edge_set
raise ValueError("Surface %s not recognised" % str(surface))
ValueError: Surface <openquake.hazardlib.tests.geo.surface.multi_test.FakeSurface object at 0x0FC2FF70> not recognised
----------------------------------------------------------------------
How to reproduce:
cd lib\openquake\hazardlib
python -m nose -v -a "!slow"
At least we should check how/if this affects the demos; if it does not, the blocker label can be removed.
This would avoid the problem of authors forgetting to update openquake.hazardlib.gsim.rst.
The current version of hazardlib is 0.24; in my opinion, this will greatly reduce the confusion.
I'm creating this issue to request the Atkinson (2008) GMPE. As with some of the other Atkinson and Boore GMPEs currently implemented in openquake, it was updated in this 2011 paper. The reason I'm requesting this GMPE is that it was included in the 2014 US National Seismic Hazard Maps for the CEUS.
Building the oq-hazardlib (commit dc15640) deb fails for Debian Wheezy with the following error:
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/lib/pymodules/python2.6/numpy/core/include -I/usr/include/python2.6 -c speedups/geodeticmodule.c -o build/temp.linux-x86_64-2.6/speedups/geodeticmodule.o -Wall -O2
speedups/geodeticmodule.c:19:20: fatal error: Python.h: No such file or directory
compilation terminated.
Traceback (most recent call last):
File "setup.py", line 98, in <module>
zip_safe=False,
File "/usr/lib/python2.6/distutils/core.py", line 152, in setup
dist.run_commands()
File "/usr/lib/python2.6/distutils/dist.py", line 975, in run_commands
self.run_command(cmd)
File "/usr/lib/python2.6/distutils/dist.py", line 995, in run_command
cmd_obj.run()
File "/usr/lib/python2.6/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python2.6/distutils/cmd.py", line 333, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.6/distutils/dist.py", line 995, in run_command
cmd_obj.run()
File "/usr/lib/python2.6/dist-packages/setuptools/command/build_ext.py", line 46, in run
_build_ext.run(self)
File "/usr/lib/python2.6/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/usr/lib/python2.6/distutils/command/build_ext.py", line 449, in build_extensions
self.build_extension(ext)
File "/usr/lib/python2.6/dist-packages/setuptools/command/build_ext.py", line 182, in build_extension
_build_ext.build_extension(self,ext)
File "/usr/lib/python2.6/distutils/command/build_ext.py", line 499, in build_extension
depends=ext.depends)
File "/usr/lib/python2.6/distutils/ccompiler.py", line 621, in compile
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/usr/lib/python2.6/distutils/unixccompiler.py", line 180, in _compile
raise CompileError, msg
distutils.errors.CompileError: command 'gcc' failed with exit status 1
dh_auto_build: python2.6 setup.py build --force returned exit code 1
make: *** [build] Error 1
This happens because the Build-Depends line in debian/control is missing "python2.6-dev", while python2.6 itself is pulled in by python-nose. Adding the dependency allows the build to succeed, but I don't know whether building against Python 2.6 is correct behaviour.
New issues must be reported to https://github.com/gem/oq-engine/issues
The Rx distance calculator (https://github.com/gem/oq-hazardlib/blob/master/openquake/hazardlib/geo/surface/base.py#L228) provides wrong results for certain fault geometries.
Consider a simple fault surface defined as follows:
from openquake.hazardlib.geo import Line, Point
from openquake.hazardlib.geo.surface import SimpleFaultSurface

fault_trace = Line([Point(0., 0.), Point(0.3, 0.8), Point(1., 1.)])
surf = SimpleFaultSurface.from_fault_data(
    fault_trace=fault_trace,
    upper_seismogenic_depth=0,
    lower_seismogenic_depth=10,
    dip=50.,
    mesh_spacing=2.
)
the Rx distance pattern is ok:
however, if I consider the following fault trace:
fault_trace = Line([Point(0., 0.), Point(0.8, 0.3), Point(1., 1.)])
then the Rx distance pattern is wrong:
By looking at https://github.com/gem/oq-hazardlib/blob/engine-2.2/openquake/hazardlib/stats.py#L35 one can see that there is a special case when the weights are None; here is the code which has been used for 5+ years:
if weights is None:
    # this implementation is an alternative to
    # numpy.array(mstats.mquantiles(curves, prob=quantile, axis=0))[0]
    # more or less copied from the scipy mquantiles function, just special
    # cased for what we need (and a lot faster)
    arr = numpy.array(curves).reshape(len(curves), -1)
    p = numpy.array(quantile)
    m = 0.4 + p * 0.2
    n = len(arr)
    aleph = n * p + m
    k = numpy.floor(aleph.clip(1, n - 1)).astype(int)
    gamma = (aleph - k).clip(0, 1)
    data = numpy.sort(arr, axis=0).transpose()
    qcurve = (1.0 - gamma) * data[:, k - 1] + gamma * data[:, k]
    return qcurve
This code is used in the sampling case. I submit that it is broken: for weights=None one would expect the same result as assigning identical weights to all realizations, consistently with how mean_curve works, but that is not the case. The problem is that quantile_curve(curves, quantile, weights) uses a different algorithm when the weights are not None, so the numbers differ. For instance:
In [1]: import numpy
In [2]: from openquake.hazardlib.stats import quantile_curve
In [3]: quantile = 0.75
In [4]: curves = numpy.array([
[.98161, .97837, .95579],
[.97309, .96857, .93853],
])
In [5]: quantile_curve(curves, quantile, None)
Out[5]: array([ 0.98161, 0.97837, 0.95579])
In [6]: quantile_curve(curves, quantile, [.5, .5])
Out[6]: array([ 0.97735, 0.97347, 0.94716])
Given that we do not have a performance problem (the postprocessing is fast compared to the real computation), I would use the same algorithm in all cases, i.e. the interpolation-based algorithm used in the full-enumeration case with nontrivial weights.
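A sketch of such a unified function, under the assumption that the full-enumeration algorithm is an interpolation on the cumulative weights (the implementation below is illustrative, not the engine's actual code): weights=None is simply treated as equal weights, so both code paths give the same numbers.

```python
import numpy

def quantile_curve(curves, quantile, weights=None):
    """Weighted quantile of an array of curves, computed by
    interpolating on the cumulative weight axis. weights=None is
    treated as equal weights, so sampling and full enumeration share
    one algorithm. Illustrative sketch, not the engine's actual code."""
    arr = numpy.array(curves).reshape(len(curves), -1)
    if weights is None:
        weights = numpy.ones(len(arr)) / len(arr)  # identical weights
    weights = numpy.array(weights)
    result = numpy.zeros(arr.shape[1])
    for i in range(arr.shape[1]):
        data = arr[:, i]
        order = numpy.argsort(data)
        cum_weights = numpy.cumsum(weights[order])
        # interpolate the requested quantile on the cumulative weights
        result[i] = numpy.interp(quantile, cum_weights, data[order])
    return result
```

On the example above this returns array([0.97735, 0.97347, 0.94716]) both for weights=None and for weights=[.5, .5], which is exactly the consistency the issue asks for.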
PS: the code for weights=None was built to be compatible with scipy.mstats.mquantiles(curves, prob=quantile); however, the tests have this comment:
# TODO(LB): Check with our hazard experts to see if this is reasonable
# tolerance. Better yet, get a fresh set of test data. (This test data
# was just copied verbatim from from some old tests in
# `tests/hazard_test.py`.