eida / mediatorws
EIDA NG Mediator/Federator web services
License: GNU General Public License v3.0
StationLite handles Virtual Networks.
This also implies refactoring StationLite harvesting to use SQLAlchemy ORM features.
Hi guys, the bad request error message does not show up well in the browser:
mediator-devel.ethz.ch/fdsnws/dataselect/1/query?net=NL&start=2016-01-01T00:00:00&end=2016-01-01T00:01:00
>>> {"message": "\nError 400: Bad request\n\nBad request\n\nUsage details are available from http://www.fdsn.org/webservices/\n\nRequest:\nhttp://localhost:5000/fdsnws/dataselect/1/query?net=NL&start=2016-01-01T00:00:00&end=2016-01-01T00:01:00\n\nRequest Submitted:\n2017-02-07 10:38:22.295255\n\nService version:\n0.9.1\n"}
I guess the JSON should be parsed.
Best,
Mathijs
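One way to address the rendering problem above would be to deliver error documents as plain text instead of JSON-wrapping them, so the embedded newlines display correctly in a browser. A minimal sketch, assuming a Flask application (the handler name, route, and message body are illustrative, not the actual federator code):

```python
from flask import Flask, abort, make_response

app = Flask(__name__)

# Demo route that triggers the error handler.
@app.route("/query")
def query():
    abort(400)

@app.errorhandler(400)
def handle_bad_request(error):
    # Plain-text FDSN-style error document; newlines render as-is.
    body = (
        "Error 400: Bad request\n\n"
        "Usage details are available from http://www.fdsn.org/webservices/\n"
    )
    response = make_response(body, 400)
    response.headers["Content-Type"] = "text/plain"
    return response
```

With `text/plain` the browser shows the multi-line message directly, no client-side JSON parsing needed.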
While harvesting, stationlite has to cache the stream epoch's `restricted_status=open|closed|partial` property from StationXML. Currently `eida-stationlite-harvest` does not distinguish between the streams' `restricted_status` values.
The evaluation of the `restricted_status` property will be implemented bottom-up, i.e. if a station epoch has a `restricted_status` defined, the parent network epoch's `restricted_status` value is not taken into consideration.
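The bottom-up evaluation described above can be sketched as follows (function and argument names are assumptions for illustration, not the actual stationlite ORM): the most specific epoch level that defines a `restricted_status` wins.

```python
def effective_restricted_status(channel=None, station=None, network=None):
    """Return the most specific restricted_status that is defined.

    Each argument is the restricted_status string of the corresponding
    epoch ('open', 'closed' or 'partial'), or None if undefined.
    """
    # Bottom-up: channel overrides station, which overrides network.
    for status in (channel, station, network):
        if status is not None:
            return status
    # StationXML leaves the default unspecified; treat it as 'open' here.
    return "open"
```

So a channel epoch marked `closed` is treated as closed even if the parent network epoch is `open`.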
Implement a tool verifying the StationLite routing integrity.
Using the configuration variable `EIDA_NODES` from `settings.py` while initializing the StationLite database by means of `eida-stationlite-db-init` is error-prone. Instead, `eida-stationlite-harvest` should receive a list of URLs for `eidaws-routing` localconfig configuration files to be parsed to fill/update the internal SQLite database. Hence, service endpoint URLs are managed dynamically.
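A hypothetical CLI sketch of the behaviour proposed above: the harvester accepts the localconfig URLs on the command line instead of relying on the hardcoded `EIDA_NODES` mapping. Option names and defaults are assumptions, not the actual tool's interface.

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="eida-stationlite-harvest")
    # One or more eidaws-routing localconfig URLs to parse.
    parser.add_argument(
        "routing_urls",
        nargs="+",
        help="URLs of eidaws-routing localconfig files to parse",
    )
    parser.add_argument(
        "--db", default="stationlite.db",
        help="path to the internal SQLite database to fill/update",
    )
    return parser
```

Endpoint URLs then come entirely from the parsed localconfig files, so adding or removing a node requires no code change.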
Hi all,
we have questions about the parsing of arguments by the federator:
1- Does the federator parse the arguments before forwarding the requests to the nodes?
2- If yes, are the FDSN specifications for station and dataselect respected?
And regarding the 'exit codes' of the federator: do you have any documentation?
Below are two queries tested by the script we developed to test the federator and compare results.
It is available here: https://github.com/resif/ws-eida-test
1/ Query with an error in the parameter name ('chanel'):
|-----EIDA:
| Query: http://federator-testing.ethz.ch/fdsnws/dataselect/1/query?starttime=2004-12-23T00:00:00&endtime=2004-12-23T00:01:00&chanel=LHZ ...
| Status code : **200**
| Request duration time : 11.879999999888241
|-----RESIF:
| Query: http://ws.resif.fr/fdsnws/dataselect/1/query?starttime=2004-12-23T00:00:00&endtime=2004-12-23T00:01:00&chanel=LHZ ...
| Status code : **400**
| Request duration time : 0.019999999552965164
| Comparison not possible if the status code differs from 200
... RESIF does not accept 'chanel' and returns 400, but there is data for this request with 'channel' (and 'chanel' is not in the spec)
2/ Query with a wrong parameter (toto):
|-----EIDA:
| Query: http://federator-testing.ethz.ch/fdsnws/dataselect/1/query?toto=_ALPARRAY_FR&channel=H??&starttime=2016-10-01T00:00:00&endtime=2016-10-02T00:00:00 ...
| Status code : **200**
| Request duration time : 114.30000000074506
|-----RESIF:
| Query: http://ws.resif.fr/fdsnws/dataselect/1/query?toto=_ALPARRAY_FR&channel=H??&starttime=2016-10-01T00:00:00&endtime=2016-10-02T00:00:00 ...
| Status code : **400**
| Request duration time : 0.019999999552965164
| Comparison not possible if the status code differs from 200
Thanks
Continued discussion about a proper application logging setup.
Sort mseed data regarding stream epochs. After resolving the SNCL properly by means of the StationLite webservice, federator threads send requests to EIDA nodes with exactly one resolved stream epoch. Hence, the data is written to the output stream sorted by epochs.
Note that this also implies a simplification for the MseedCombiner, i.e. miniseed data does not need to be validated anymore. HTTP will take care of proper flow control.
When initializing the federator served with mod_wsgi, the option `--start-local` (or rather `start_local` within the `eidangws_config` configuration file) should be disabled.
The current splitting mechanism does not handle large requests, e.g. for `fdsnws-dataselect`. Currently federator splitting is implemented by dividing the number of POST-line requests. However, for granular requests, splitting must be implemented by splitting stream epochs with regard to the time constraint parameters.
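The epoch-based splitting described above can be sketched like this (a minimal illustration, not the federator's actual splitter): a single stream epoch is divided into contiguous sub-epochs along the time axis, each of which becomes its own request.

```python
import datetime

def split_stream_epoch(start, end, num_chunks):
    """Split the interval [start, end) into num_chunks contiguous epochs."""
    chunk = (end - start) / num_chunks
    epochs = []
    chunk_start = start
    for _ in range(num_chunks):
        # Clamp to the original end to avoid floating-point drift overshoot.
        chunk_end = min(chunk_start + chunk, end)
        epochs.append((chunk_start, chunk_end))
        chunk_start = chunk_end
    return epochs
```

In practice the number of chunks would be derived from the requested time span (or from observed response sizes) rather than fixed.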
Currently only `format=post` is implemented. For additional output formats see: http://routing.readthedocs.io/en/latest/#output-description-and-format
We should consider having some special treatment for some very common queries, e.g. asking for all stations in EIDA:
http://127.0.0.1:8080/fdsnws/station/1/query?format=text&level=channel
could possibly hit all the endpoints only a single time and simply concatenate the results. I think for some derived products this will be an important query to optimize.
When querying the list of networks in text format there is always a timeout (Error 504). That is weird, because text format does not have any complication like the merging of XML responses, and especially because it should be present in the cache.
mediator-devel.ethz.ch/fdsnws/station/1/query?level=network&format=text
Just to compare, a query like this from a node is instantaneous.
http://geofon.gfz-potsdam.de/fdsnws/station/1/query?level=network&format=text
This is a proposal of @damb and @kaestli from the ETC meeting at Grenoble (09/2018). Comments are welcome.
- `fdsnws/dataselect/auth` (HTTPS): `username:password` to the client; `eida-federator` token -> context specific credentials
- `fdsnws/dataselect/queryauth` (HTTPS):
  - `stationlite` (`restriction=closed`), `eida-federator` token specific; `eida-federator` token:
    - `fdsnws/dataselect/auth` method -> temporary credentials for `fdsnws/dataselect/queryauth` are created (at endpoints)
    - `eida-federator` stores credentials (endpoint specific)
    - `fdsnws/dataselect/queryauth` method with corresponding credentials
  - `fdsnws/dataselect/queryauth` method with credentials passed
  - `stationlite` (`restriction=open`): `fdsnws/dataselect/query` method

When truncating the DB, remove endpoint information not referenced anymore (see #26). To proceed we'd have to add a `lastseen` parameter to the `orm.Endpoint` entity.
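The `lastseen` idea above could look roughly like this with SQLAlchemy (a sketch under assumptions: the class, column, and helper names are illustrative, not the project's actual `orm.Endpoint`): harvesting refreshes the timestamp, and truncation prunes endpoints not seen since a cutoff.

```python
import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Endpoint(Base):
    __tablename__ = "endpoint"
    id = Column(Integer, primary_key=True)
    url = Column(String, nullable=False)
    # Updated each time the harvester sees this endpoint referenced.
    lastseen = Column(DateTime)

def prune_endpoints(session, cutoff):
    """Delete endpoints not seen since `cutoff`; return the number removed."""
    deleted = (
        session.query(Endpoint)
        .filter((Endpoint.lastseen == None) | (Endpoint.lastseen < cutoff))  # noqa: E711
        .delete(synchronize_session=False)
    )
    session.commit()
    return deleted
```

Endpoints with a `NULL` `lastseen` are treated as never-seen and pruned as well; whether that is desirable is a policy decision.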
Hi, there is a difference in requesting from the node directly or through the federator:
mediator-devel.ethz.ch/fdsnws/dataselect/1/query?net=NL&sta=HGN&start=2016-01-01T00:00:00&end=2016-01-02T00:00:00
>>> Content-Length: 422 kB
orfeus-eu.org/fdsnws/dataselect/1/query?net=NL&sta=HGN&start=2016-01-01&end=2016-01-02
>>> Content-Length: 14.1 MB
Federator only seems to return one channel:
NL.HGN.02.BHZ | 2015-12-31T23:59:59.544538Z - 2016-01-01T02:20:30.944538Z | 40.0 Hz, 337257 samples
While our service gives multiple:
NL.HGN.02.BHZ | 2015-12-31T23:59:59.544538Z - 2016-01-01T02:20:30.944538Z | 40.0 Hz, 337257 samples
NL.HGN.02.BHZ | 2016-01-01T02:20:31.044500Z - 2016-01-01T02:20:31.044500Z | 40.0 Hz, 0 samples
NL.HGN.02.BHZ | 2016-01-01T02:20:30.969538Z - 2016-01-01T02:20:55.869538Z | 40.0 Hz, 997 samples
NL.HGN.02.BHZ | 2016-01-01T02:20:56.344500Z - 2016-01-01T02:20:56.344500Z | 40.0 Hz, 0 samples
NL.HGN.02.BHZ | 2016-01-01T02:20:55.894538Z - 2016-01-02T00:00:06.769538Z | 40.0 Hz, 3118036 samples
NL.HGN.02.LHZ | 2015-12-31T23:59:34.069538Z - 2016-01-01T02:21:12.069538Z | 1.0 Hz, 8499 samples
NL.HGN.02.LHZ | 2016-01-01T02:20:43.069500Z - 2016-01-01T02:20:43.069500Z | 1.0 Hz, 0 samples
NL.HGN.02.LHZ | 2016-01-01T02:21:13.069538Z - 2016-01-01T04:21:55.069538Z | 1.0 Hz, 7243 samples
NL.HGN.02.LHZ | 2016-01-01T04:21:17.069500Z - 2016-01-01T04:21:17.069500Z | 1.0 Hz, 0 samples
NL.HGN.02.LHZ | 2016-01-01T04:21:56.069538Z - 2016-01-01T15:21:05.069538Z | 1.0 Hz, 39550 samples
NL.HGN.02.LHZ | 2016-01-01T15:22:21.069500Z - 2016-01-01T15:22:21.069500Z | 1.0 Hz, 0 samples
NL.HGN.02.LHZ | 2016-01-01T15:21:06.069539Z - 2016-01-02T00:00:22.069539Z | 1.0 Hz, 31157 samples
NL.HGN.02.BHE | 2015-12-31T23:59:58.094538Z - 2016-01-02T00:00:00.844538Z | 40.0 Hz, 3456111 samples
NL.HGN.02.LHE | 2015-12-31T23:59:15.069538Z - 2016-01-02T00:02:35.069538Z | 1.0 Hz, 86601 samples
NL.HGN.02.LHN | 2015-12-31T23:57:44.069539Z - 2016-01-02T00:01:31.069539Z | 1.0 Hz, 86628 samples
NL.HGN.02.BHN | 2015-12-31T23:59:55.819538Z - 2016-01-02T00:00:00.144538Z | 40.0 Hz, 3456174 samples
Best,
Mathijs
Hi guys, why is this a bad request:
http://mediator-devel.ethz.ch/fdsnws/dataselect/1/query?net=NL&start=2016-01-01T00:00:00&end=2016-01-01T00:01:00
but when I give it a station the response is fine (albeit incomplete):
http://mediator-devel.ethz.ch/fdsnws/dataselect/1/query?net=NL&sta=HGN&start=2016-01-01T00:00:00&end=2016-01-01T00:01:00
Best,
Mathijs
Hey, I wanted to open some points for discussion and thought GitHub was a good place for it so everyone can follow and join. After going through the ETC minutes again I have one burning question:
Possibly I am entirely overlooking the problem that you are solving, but I think that 1) virtual networks can be resolved by the routing service when all nodes participate (see recent communication on the ETC mailing list about this); 2) expansion to the stream level can be done at request time by going through `routing` -> `station` -> `dataselect`. Naturally it will be slower than a local cache, but it adheres more closely to the philosophy of EIDA, that is, a federated archive. There will be some unnecessary hits on the endpoints (e.g. when querying for a minimum latitude) but that is entirely acceptable IMO. Keeping cache state in sync is extremely difficult and adds a layer of complexity that can be avoided. There may be a mismatch between a data center inventory and the "same" Federator inventory.
With all things considered, I am under the impression that we could support federation in EIDA without a local cache, despite it being marginally slower. Please correct me here if I am wrong. I have some experience and know, as much as you do, that there are some tricky bits without caching that will need to be solved. But here are some ideas to consider:
The plan is to query routing, then station to resolve requests to stream level, and finally query dataselect for each stream (each datacenter in parallel). mSEED per request is buffered in memory (to prevent record mixing) and then forwarded to the user when completed.
For station requests we can query the routing service and then directly the station webservice. We should definitely not split inventory requests by stream as it would take forever to finish. 💡 Did you know: there are 41474 channels in EIDA.
One thing I noticed is that Z3 queries are really slow because of the excessive number of routes returned. But nothing that this particular optimization cannot fix:
- Merge routes per endpoint into a single request (e.g. `station=A001A,A002A,A003A`). Usually this means each Z3 endpoint is hit only once. This can also be applied for networks in certain cases, e.g. `network=N*` can be one query to the ORFEUS Data Center (despite the routing service returning multiple routes for NA, NL, etc.). This is a pretty late-stage optimization and can be implemented in the routing service too. It doesn't have to be the job of the Federator.
- Allow `network=*&station=*` to hit all the endpoints returned by the routing service only once. This could also be hoisted to the routing service: when the network & station parameters are both `*`, only one route per data center is returned. This will make the query from @javiquinte reported in issue #28 almost instant ⚡. A very specific optimization, but really useful nonetheless. A query for all stations within EIDA can be expected often.
To conclude, I hope these are some points to consider when thinking about an alternative federated architecture for the system. Also I want to emphasize that the code right now looks really professional, 👍 for that @damb and co.
Hi guys, submitted this request as POST and it fails. It returns data when I query our own service. Any idea why?
request file:
NL WTSB * BHZ 2015-01-01T00:00:00 2015-01-01T00:01:00
request:
wget --post-file=req.txt http://mediator-devel.ethz.ch/fdsnws/dataselect/1/query
>>> 2017-02-07 11:52:49 ERROR 400: BAD REQUEST.
Best,
Mathijs
Hi All,
We are still testing eida-federator; with the following request, Firefox does not return a result or response.
But it is working for all the other nodes.
For example at RESIF and GFZ:
Could you please have a look ?
Thank you,
Best,
Rima
Gregory
federator returns HTTP status code 422 on validation errors:
damb@ansilta:~/tmp$ wget -O data -v "http://localhost:5000/fdsnws/dataselect/1/query?start=2017-01-01&end=2017-01-02T12:12:12&cha=LH?&sta=BFO,DAVOXa"
--2017-12-08 16:34:37-- http://localhost:5000/fdsnws/dataselect/1/query?start=2017-01-01&end=2017-01-02T12:12:12&cha=LH?&sta=BFO,DAVOXa
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:5000... failed: Connection refused.
Connecting to localhost (localhost)|127.0.0.1|:5000... connected.
HTTP request sent, awaiting response... 422 UNPROCESSABLE ENTITY
2017-12-08 16:34:37 ERROR 422: UNPROCESSABLE ENTITY.
Provide a StationLite + mod_wsgi setup.
Implement strict query parameter parsing. Do not silently ignore unknown query parameter values. See also #43.
Use the widely used requests package instead of using urllib directly. StationLite already has this dependency.
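The swap is essentially a one-for-one replacement of the urllib calls. A sketch, assuming the harvester only performs simple GETs (the function name and parameters are illustrative):

```python
import requests

def fetch(url, params=None, timeout=30):
    """GET `url` and return the response body as bytes."""
    response = requests.get(url, params=params, timeout=timeout)
    response.raise_for_status()  # turn HTTP errors into exceptions
    return response.content
```

requests also handles URL encoding of the query parameters, redirects, and connection pooling (via `requests.Session`) without extra code.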
Should change the MIME type from text/plain to application/xml.
Requests for node INGV return 204:
http://mediator-devel.ethz.ch/fdsnws/station/1/query?network=IV
Maybe because of the `xmlns:ingv` namespace?
Hi guys, I was trying the federator, requesting all LHZ channels from networks FR and IV. Individually it works fine, but when I ask for both at the same time the service response is unpredictable (I'm making the same request multiple times):
Making request 3
nBytes: 1260032
419 Trace(s) in Stream:
FR.AJAC.00.LHZ | 2017-01-01T00:00:00.590339Z - 2017-01-01T00:59:59.590339Z | 1.0 Hz, 3600 samples
...
(417 other traces)
...
IV.ZCCA..LHZ | 2017-01-01T00:01:10.120000Z - 2017-01-01T01:02:59.120000Z | 1.0 Hz, 3710 samples
[Use "print(Stream.__str__(extended=True))" to print all Traces]
Request OK
====
Making request 4
nBytes: 1260032
/Users/Mathijs/Documents/GitHub/obspy/obspy/io/mseed/core.py:413: InternalMSEEDReadingWarning: readMSEEDBuffer(): Record starting at offset 294912 is not valid SEED. The rest of the file will not be read.
warnings.warn(*_i)
28 Trace(s) in Stream:
FR.AJAC.00.LHZ | 2017-01-01T00:00:00.590339Z - 2017-01-01T00:59:59.590339Z | 1.0 Hz, 3600 samples
...
(26 other traces)
...
FR.PAND.00.LHZ | 2017-01-01T00:00:00.945659Z - 2017-01-01T00:00:20.945659Z | 1.0 Hz, 21 samples
[Use "print(Stream.__str__(extended=True))" to print all Traces]
Request OK
===
Making request 5
nBytes: 1260032
Traceback (most recent call last):
File "request.py", line 28, in <module>
print read(io.BytesIO(r.content))
File "<decorator-gen-31>", line 2, in read
File "/Users/Mathijs/Documents/GitHub/obspy/obspy/core/util/decorator.py", line 294, in _map_example_filename
return func(*args, **kwargs)
File "/Users/Mathijs/Documents/GitHub/obspy/obspy/core/stream.py", line 210, in read
stream = _read(pathname_or_url, format, headonly, **kwargs)
File "<decorator-gen-32>", line 2, in _read
File "/Users/Mathijs/Documents/GitHub/obspy/obspy/core/util/decorator.py", line 144, in uncompress_file
return func(filename, *args, **kwargs)
File "/Users/Mathijs/Documents/GitHub/obspy/obspy/core/stream.py", line 273, in _read
headonly=headonly, **kwargs)
File "/Users/Mathijs/Documents/GitHub/obspy/obspy/core/util/base.py", line 466, in _read_from_plugin
list_obj = read_format(filename, **kwargs)
File "/Users/Mathijs/Documents/GitHub/obspy/obspy/io/mseed/core.py", line 412, in _read_mseed
raise _i
obspy.io.mseed.InternalMSEEDReadingError: FR_CHMF_00_LHZ_M: Impossible Steim2 dnib=00 for nibble=10
I think somewhere in the concatenation of the mSEED from different sources there is a problem.
Best,
Mathijs
Add radius search relative to events returned by an event query, for target services station/waveform/quality. Use the existing query parameters min/maxradius in the respective target service namespace.
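A sketch of such a radius filter (not the actual implementation; names and the tuple-based station/event representation are assumptions): a station passes if its great-circle distance in degrees from any event of the event query lies within `[minradius, maxradius]`.

```python
import math

def distance_deg(lat1, lon1, lat2, lon2):
    """Great-circle distance in degrees, assuming a spherical earth."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_d = (math.sin(phi1) * math.sin(phi2)
             + math.cos(phi1) * math.cos(phi2) * math.cos(dlon))
    # Clamp to [-1, 1] to guard against floating-point rounding.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

def within_radius(station, events, minradius=0.0, maxradius=180.0):
    """station and events are (lat, lon) tuples; True if any event matches."""
    return any(
        minradius <= distance_deg(station[0], station[1], ev[0], ev[1]) <= maxradius
        for ev in events
    )
```

FDSN min/maxradius are specified in degrees of arc, which is why the sketch works in degrees rather than kilometres.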
In order to distinguish between `restricted_status=open` and `restricted_status=closed` stream epochs, the new query parameter `access=open|closed|any` will be added to the `stationlite` webservice frontend. By default `access=any`, i.e. both `closed` and `open` routes will be listed.
Automate federator webservice testing procedure. Use Service-Test.
Continued discussion from [email protected].
Dear All,
At RESIF we are testing `eida-federator`, doing both manual and automatic tests.
For our automatic tests we are using a Python script, which we will share next week. For now it seems to work well; however, we see strange differences between `text` format and `xml` format.
For example, with net=Z3, `eida-federator` provides two different results:
federator-testing.ethz.ch/fdsnws/station/1/query?level=network&network=Z3&format=text
#Network|Description|StartTime|EndTime|TotalStations
Z3|AlpArray Seismic Network (AASN) temporary
component|2015-01-01T00:00:00|2020-07-01T00:00:00|29
Z3|AlpArray Seismic Network (AASN) temporary
component|2015-01-01T00:00:00|2020-07-01T00:00:00|66
Z3|AlpArray DSEBRA|1980-01-01T00:00:00||92
Z3|AlpArray Seismic Network (AASN) temporary
component|2015-01-01T00:00:00|2020-07-01T00:00:00|51
Z3|Egelados project, RUB Bochum,
Germany|2005-06-05T00:00:00|2007-04-30T00:00:00|56
Z3|AlpArray backbone temporary
stations|2015-07-01T00:00:00|2020-07-31T00:00:00|68
federator-testing.ethz.ch/fdsnws/station/1/query?level=network&network=Z3&format=xml
<FDSNStationXML schemaVersion="1.0">
<Source>EIDA</Source>
<Created>2018-07-27T10:11:57.586061</Created>
<Network code="Z3"
alternateCode="ALPARRAY"
startDate="2015-07-01T00:00:00.000000"
endDate="2020-07-31T00:00:00.000000"
restrictedStatus="closed">
<Description>AlpArray backbone temporary stations</Description>
<Comment>
<Value>DOI:http://dx.doi.org/10.12686/alparray/z3_2015</Value>
<BeginEffectiveTime>2015-07-01T00:00:00.000000</BeginEffectiveTime>
<EndEffectiveTime>2020-07-31T00:00:00.000000</EndEffectiveTime>
<Author><Name>Resif Information System</Name>
<Agency>Réseau Sismologique et géodésique Français (RESIF)</Agency>
<Email>[email protected]</Email></Author>
</Comment>
<TotalNumberStations>68</TotalNumberStations>
<SelectedNumberStations>306</SelectedNumberStations>
</Network>
</FDSNStationXML>
Best,
Gregory
Hi guys, here is a little issue: starttime and endtime must always be given a time component, or the response will be 204 No Content. It should either just assume 00:00:00 if left blank (preferable) or return 400 Bad Request.
mediator-devel.ethz.ch/fdsnws/dataselect/1/query?net=NL&sta=HGN&start=2016-01-01T00:00:00&end=2016-01-02T00:00:00
>>> returns 200
mediator-devel.ethz.ch/fdsnws/dataselect/1/query?net=NL&sta=HGN&start=2016-01-01&end=2016-01-02
>>> returns 204
Best,
Mathijs
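The preferred behaviour above (pad a date-only value with midnight) can be sketched like this; the helper name is an assumption, and real FDSNWS parsing would also need to accept fractional seconds:

```python
import datetime

def parse_fdsnws_time(value):
    """Accept both '2016-01-01' and '2016-01-01T00:00:00'."""
    for fmt in ("%Y-%m-%dT%H:%M:%S", "%Y-%m-%d"):
        try:
            # A date-only value falls through to the second format and
            # is implicitly padded with 00:00:00.
            return datetime.datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError("invalid FDSNWS time: %r" % value)
```

With this, `start=2016-01-01` and `start=2016-01-01T00:00:00` resolve to the same instant, so both queries above would return 200.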
Federator should make use of `stationlite`.
Hi,
Another remark: I don't know if at RESIF we are doing things right, but we may have special characters in the description fields, for example:
Provence-Alpes-Côte d'Azur
VS
Provence-Alpes-Côte d'Azur
eida-federator :
http://federator-testing.ethz.ch/fdsnws/station/1/query?station=REVF&minlatitude=42&maxlatitude=44&minlongitude=6&maxlongitude=8&format=text&network=FR
#Network|Station|Latitude|Longitude|Elevation|SiteName|StartTime|EndTime
FR|REVF|43.740000|7.367500|700.0|Fort de La Revere, 06059 Eze, Alpes-Maritimes, Provence-Alpes-Côte d'Azur, France|2003-08-06T00:00:00|2500-12-31T23:59:59
#Network|Station|Latitude|Longitude|Elevation|SiteName|StartTime|EndTime
FR|REVF|43.740000|7.367500|700.0|Fort de La Revere, 06059 Eze, Alpes-Maritimes, Provence-Alpes-Côte d'Azur, France|2003-08-06T00:00:00|2500-12-31T23:59:59
Could you please have a look ?
Thank you,
Best,
Rima
Gregory
To be implemented.
Implement proper Flask/Flask-RESTful/webargs error handling. Follow the common HTTP status codes returned by FDSN services (https://www.fdsn.org/webservices/FDSN-WS-Specifications-1.1.pdf).
Try:
http://federator-testing.ethz.ch/fdsnws/station/1/query?net=Nl
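A sketch of the status-code mapping such an error handler would enforce, based on the common codes in the FDSN specification (the dictionary and function are illustrative; e.g. the 422 that webargs emits on validation errors, reported in another issue here, would be translated to a spec-conformant 400):

```python
# Common HTTP status codes per the FDSN web service specification.
FDSN_STATUS = {
    200: "Successful request, results follow",
    204: "Request was properly formatted and submitted but no data matches",
    400: "Bad request due to improper specification, unrecognized parameter, "
         "parameter value out of range, etc.",
    401: "Unauthorized, authentication required",
    403: "Authentication failed or access blocked",
    413: "Request would result in too much data being returned",
    414: "Request URI too large",
    500: "Internal server error",
    503: "Service temporarily unavailable",
}

def to_fdsn_status(internal_code):
    """Map non-FDSN codes (like webargs' 422) to the closest FDSN code."""
    if internal_code in FDSN_STATUS:
        return internal_code
    # Validation-type failures collapse to 400 Bad Request.
    return 400
```

A Flask error handler would call this before rendering the plain-text error document, so clients only ever see spec-defined codes.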
Issue one individual request for each stream epoch (as returned by stationlite).
Considering the following query:
http://www.orfeus-eu.org/eidaws/routing/1/query?net=NL,NA,NO
This returns a single route to one data center. Logically we could write an HTTP redirect header to forward the entire query immediately and cut out the Federator. I'm not sure this is a good idea at all so let us discuss below. This assumes the endpoints & the Federator will both be publicly available!
In any case, in this situation we can directly pipe the result from data center to client with zero parsing efforts. Surely we lose some granular logging on the Federator's end, but the data center will record it regardless.
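The redirect idea can be sketched in a few lines of Flask (everything here is hypothetical: `resolve_routes` stands in for the actual routing lookup, and the route list is hardcoded for the demo):

```python
from flask import Flask, redirect, request

app = Flask(__name__)

def resolve_routes(args):
    # Placeholder: pretend every query resolves to a single ORFEUS route.
    return ["http://www.orfeus-eu.org/fdsnws/dataselect/1/query"]

@app.route("/fdsnws/dataselect/1/query")
def query():
    routes = resolve_routes(request.args)
    if len(routes) == 1:
        # 307 preserves the HTTP method (and, for POST, the request body).
        target = routes[0] + "?" + request.query_string.decode()
        return redirect(target, code=307)
    return "multiple routes: federate as usual", 200
```

As noted above, this only works if the endpoints are publicly reachable by the client; it also moves all access logging to the data center.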
Allow a comma-separated list for parameter eventservice. Throw BadRequest error if this is done for target service 'event', since merging events and identifying duplicates poses problems that are difficult to solve.
Hi @damb, I will be requesting a new machine to deploy the Federator. Currently these are my specifications:
What do you think? Please advise.
The same query as in #28 . The timeout is not there anymore, because the Federator is streaming. Great.
However, the performance is quite poor.
It took almost 7 minutes to return the list of networks in text format.
If I query the endpoints directly the answer comes in less than 1 second! (total time).
javier@sec24c79:~/git/fdsnws_scripts$ curl "http://federator-testing.ethz.ch/fdsnws/station/1/query?level=network&format=text" -o ~/delete.me
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 14879 0 14879 0 0 36 0 --:--:-- 0:06:42 --:--:-- 0
The Federator should kill any pending requests to the end points when the client disconnects from the Federator. Right now it seems like the Federator attempts to finish all requests regardless of whether the client is connected.
Hi, can we get the Dockerfile too, instead of just the image? That would make it easier to do some logging configuration, since we want to write to a file and mount it outside the container for ingestion into ELK.
fdsnws_fetch writes the output data into a file. Until now, federator has simply delivered this file to the client. However, an implementation using streams/pipes is preferred.
Advantages: as soon as data is available, federator delivers it to the client. Also, federator is able to detect when the client terminates a session.
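The streamed delivery can be sketched with a Flask generator response (a minimal illustration: `fetch_chunks` stands in for the per-node download). A useful side effect is that the WSGI server closes the generator when the client disconnects, which is how session termination becomes detectable.

```python
from flask import Flask, Response

app = Flask(__name__)

def fetch_chunks():
    # Stand-in for data arriving incrementally from the endpoints.
    for chunk in (b"chunk1", b"chunk2", b"chunk3"):
        yield chunk

@app.route("/fdsnws/dataselect/1/query")
def query():
    # Each yielded chunk is flushed to the client as it becomes available.
    return Response(fetch_chunks(), mimetype="application/vnd.fdsn.mseed")
```

Compared to the file-based approach, the first bytes reach the client as soon as the first endpoint responds, instead of after the slowest one finishes.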
Implementation of a stationlite webservice frontend. This does not include a proper ORM implementation of stationlite.
A proper `MANIFEST.in` must be created when invoking `$ python setup.py [subsys] install` directly without using the Makefile. Changes should be implemented within `setup.py`.
Continued discussion from `eida_maint` (see below):
@massimo1962 wrote:
Many thanks for your help. Actually, AFAIK stationlite relies on the routing service in order to retrieve the information and store it into a db (sqlite). Consider that I have seen the load db procedure work and fill the db... so I have deduced that the routing behind it worked fine... it seems quite strange.
Moreover, consider that I have already done another installation (without docker) and I have the same problem... o_0
@Jollyfant wrote:
Hi Massimo, I have the same problem. For me requests to the version paths work:
/fdsnws/station/1/version -> 1.1.0
but anything else is 500 Internal Server Error.
In `/var/www/mediatorws/eidangservices/settings.py` I found a setting, `EIDA_FEDERATOR_DEFAULT_ROUTING_URL`, that points to ETH. But this routing (stationLite?) service times out and is not reachable at all. It may be the cause of the server error, so I tried changing it to localhost. But inside the container I cannot seem to reach the stationLite service at all:
curl 0.0.0.0/eidaws/routing/1/version -> 404 Not Found
@massimo1962 wrote:
The point is: the services are up and running, but every query (on the federator) that I do gives me an empty response and error 500. I think it could be something related to the stationlite service or something like that, but at the moment I can't figure out the problem. If someone can help me I will be very happy.
Station requests are resolved to the stream level, and perhaps they should not be. E.g.
http://federator-testing.ethz.ch/fdsnws/station/1/query?net=NL&level=channel&format=text
will take a long time. Imagine doing this for the whole of EIDA.
Update to the most recent marshmallow version (3.0.0b8). According to the CHANGELOG, the changes affect us, since we are using the `load_from` parameter. Such changes are backwards-incompatible.