cheta's Issues

Time-Weighted Mean Values

Currently, the daily and 5-minute means are computed by assigning equal weight to each telemetry point. It would be more accurate to weight each telemetry point by the duration spent at its value. This would prevent large spikes that occur only briefly, but in high-rate telemetry, from overpowering the result.
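
A duration-weighted mean could look like the following sketch (a hypothetical helper, not archive code; it assumes the last sample persists for one more sample interval):

import numpy as np

def time_weighted_mean(times, vals):
    # Weight each sample by the time until the next sample
    dts = np.diff(times)
    dts = np.append(dts, dts[-1])  # assumed duration for the final sample
    return np.sum(vals * dts) / np.sum(dts)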

This phenomenon is currently particularly visible in the reaction wheel torque current daily means, AWDxTQI where x=1-6. This data normally comes down every 32 sec. However, during sim motions when the spacecraft transitions to telemetry format 4, the data comes down every 0.25 sec. Since sim motions also cause high torque currents, these spikes overwhelm the daily mean. For reference, according to the LTT database, the daily mean for AWD1TQI from 2000:001 - 2012:001 should be a tight band roughly in the range of -0.22 to -0.12 A. The current Engineering Archive database shows a band ranging from -0.7 to 0.5 A.

@taldcroft @matthewdahmer @jeanconn

MSIDs to add to engineering archive

These thermal MSIDs are regularly monitored and exist in the TDB, but are not in the engineering archive. Please add these to the archive:

3FASEAAT
3FAPSAT
3FAFLAAT
3FAFLBAT
3FAFLCAT
3FAMTRAT
3TRMTRAT
AC_AC_HT
AC_AC_OT
TFSPCMM
TFSPPCU
TFSPPCM
5EHSE300

Thank You,
Matt

Msidset not compatible with Stats

fetch.Msidset (with filter_bad=True) is not compatible with stat='daily' or stat='5min'. (See error below.) Perhaps this is intentional? If not, I'd like to suggest an enhancement where the stats are available provided there is valid data for each parameter within the statistical timeframe.

Note that it is possible to do:
fetch.MSIDset (for multiple parameters with filter_bad=False)
OR
fetch.Msid (for a single parameter with filter_bad=True).

In [202]: x=fetch.Msidset(['aosares1','aosares2'],'2000:001:00:00:00.000','2000:365:00:00:00.000',stat='daily')

AttributeError                            Traceback (most recent call last)
/home/aarvai/<ipython-input> in <module>()
----> 1 x=fetch.Msidset(['aosares1','aosares2'],'2000:001:00:00:00.000','2000:365:00:00:00.000',stat='daily')

/proj/sot/ska/dev/arch/x86_64-linux_CentOS-5/lib/python2.7/site-packages/Ska.engarchive-0.15-py2.7.egg/Ska/engarchive/fetch.pyc in __init__(self, msids, start, stop, filter_bad, stat)
642 def __init__(self, msids, start, stop=None, filter_bad=True, stat=None):
643 super(Msidset, self).__init__(msids, start, stop=stop,
--> 644 filter_bad=filter_bad, stat=stat)
645
646

/proj/sot/ska/dev/arch/x86_64-linux_CentOS-5/lib/python2.7/site-packages/Ska.engarchive-0.15-py2.7.egg/Ska/engarchive/fetch.pyc in __init__(self, msids, start, stop, filter_bad, stat)
526
527 if filter_bad:
--> 528 self.filter_bad()
529
530 def filter_bad(self):

/proj/sot/ska/dev/arch/x86_64-linux_CentOS-5/lib/python2.7/site-packages/Ska.engarchive-0.15-py2.7.egg/Ska/engarchive/fetch.pyc in filter_bad(self)
561 for msid in msids:
562 if bads is None:
--> 563 bads = msid.bads.copy()
564 else:
565 bads |= msid.bads

AttributeError: 'NoneType' object has no attribute 'copy'
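
The traceback suggests that stats queries carry bads set to None. A minimal guard in the filter_bad() loop quoted above would avoid the crash (a sketch, not necessarily the right fix):

bads = None
for msid in msids:
    if msid.bads is None:
        continue  # stats data carry no bad-quality mask to combine
    if bads is None:
        bads = msid.bads.copy()
    else:
        bads |= msid.bads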

@taldcroft @jeanconn

Improve bad data handling

Currently the mechanism for filtering known bad data with filter_bad_times() is not very convenient and requires user vigilance. It would be better to make this automatic, or to make a tool that fixes the HDF5 archive files.

For reference, AOSARES1 has bad values (> 180) during the 2011:187 recovery. See msid_bad_times.dat.
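
For example, the per-fetch filtering looks something like this (assuming the intervals in msid_bad_times.dat have been loaded as the global bad-times list):

from Ska.engarchive import fetch

dat = fetch.Msid('aosares1', '2011:185', '2011:195')
dat.filter_bad_times()  # filter intervals from the global bad-times list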

BTW, need to install the updated msid_bad_times.dat file from 32f01d to GRETA.

AOSARES1 is wrong during 2011:187 SSM dwell

Telemetry shows a pitch value of around 110 deg when it should be 90 deg. This may be the case during other NSM dwells as well, which has an impact on thermal modeling that uses these long NSM dwells for calibration. Should this be fixed with a tool that patches the HDF5 files, or with a post-fetch hook?

Corruption of archfiles.db3 for prop1eng

Around 2011:293 (Oct 20) there was a corruption in the archfiles.db3 database index that is used to select row ranges based on time. This occurred in prop1eng on GRETA but not on HEAD. The archfiles rowstart and rowstop got out of sync with the HDF5 files. This was associated with a large (-120000) jump in time in TIME.h5. Plotted data looked fine over the interval, so the assumption is that there was a re-ingest but bad quality was set appropriately. A full analysis has not been done, though.

Bad files are in /proj/sot/ska/data/eng_archive/bad/data/prop1eng.

Need to check all other content types by using fetch.filetypes with a representative MSID. Spot check of two other MSIDs showed no problem.

An easy way to test for this corruption is to query over a short date interval, e.g. one day, and check whether any data values are returned. I checked 'tephin' and 'aoattqt1' and did not see this issue.
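
In code, the spot check might look like this ('tephin' here is just an example choice of MSID):

from Ska.engarchive import fetch

dat = fetch.Msid('tephin', '2011:293', '2011:294')  # one-day interval
assert len(dat.vals) > 0, 'no values returned: possible archfiles.db3 corruption'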

@aarvai @jeanconn

Re-define PCAD Derived Parameters DP_ROLL_FSS and DP_PITCH_FSS

Enhancement request:

The existing definitions for DP_ROLL_FSS and DP_PITCH_FSS are approximations. To improve accuracy, please re-define them in 'pcad.py' as follows:

#--------------------------------------------
class DP_PITCH_FSS(DerivedParameterPcad):
    """Sun Pitch Angle from FSS Data in ACA Frame [Deg]

    Defined as the angle between the sun vector and the ACA X-axis.

    When in the FSS FOV per AOSUNPRS:
    Calculated using the FSS alpha and beta angles to compute the sun vector
    in the FSS frame.  The sun vector is then rotated into the ACA frame
    using the rotation matrix (an OBC k-constant).  Pitch is computed using
    the arccos function.

    When NOT in the FSS FOV per AOSUNPRS:
    <data>.bads = 1

    """
    rootparams = ['aoalpang', 'aobetang', 'aosunprs']
    time_step = 1.025
    max_gap = 10.0
    dtype = np.float32

    def calc(self, data):
        in_fss_fov = (data['aosunprs'].vals == 'SUN ')
        data.bads = data.bads | ~in_fss_fov
        # Rotation matrix from the FSS frame to the ACA frame (OBC k-constant)
        A_AF = np.array([[ 9.999990450374580e-01,
                           0.0,
                          -1.382000062241829e-03],
                         [-5.327615067743422e-07,
                           9.999999256947376e-01,
                          -3.854999811959735e-04],
                         [ 1.381999959551952e-03,
                           3.855003493343671e-04,
                           9.999989707322665e-01]])
        # Sun vector in the FSS frame (alpha and beta angles are in degrees)
        sun_fss = np.array([np.tan(np.radians(data['aobetang'].vals)),
                            np.tan(np.radians(data['aoalpang'].vals)),
                            -np.ones_like(data['aoalpang'].vals)])
        # Rotate into the ACA frame (matrix product, not elementwise multiply)
        sun_aca = np.dot(A_AF, sun_fss)
        magnitude = np.sqrt((sun_aca * sun_aca).sum(axis=0))
        data.bads |= magnitude == 0.0
        magnitude[data.bads] = 1.0
        sun_vec_norm = sun_aca / magnitude
        pitch_fss = np.degrees(arccos_clip(sun_vec_norm[0]))
        return pitch_fss


#--------------------------------------------
class DP_ROLL_FSS(DerivedParameterPcad):
    """Off-Nominal Roll Angle from FSS Data in ACA Frame [Deg]

    Defined as the rotation about the ACA X-axis required to align the sun
    vector with the ACA X/Z plane.

    When in the FSS FOV per AOSUNPRS:
    Calculated using the FSS alpha and beta angles to compute the sun vector
    in the FSS frame.  The sun vector is then rotated into the ACA frame
    using the rotation matrix (an OBC k-constant).  Roll is computed using
    the arctan function.

    When NOT in the FSS FOV per AOSUNPRS:
    <data>.bads = 1

    """
    rootparams = ['aoalpang', 'aobetang', 'aosunprs']
    time_step = 1.025
    max_gap = 10.0
    dtype = np.float32

    def calc(self, data):
        in_fss_fov = (data['aosunprs'].vals == 'SUN ')
        data.bads = data.bads | ~in_fss_fov
        # Rotation matrix from the FSS frame to the ACA frame (OBC k-constant)
        A_AF = np.array([[ 9.999990450374580e-01,
                           0.0,
                          -1.382000062241829e-03],
                         [-5.327615067743422e-07,
                           9.999999256947376e-01,
                          -3.854999811959735e-04],
                         [ 1.381999959551952e-03,
                           3.855003493343671e-04,
                           9.999989707322665e-01]])
        # Sun vector in the FSS frame (alpha and beta angles are in degrees)
        sun_fss = np.array([np.tan(np.radians(data['aobetang'].vals)),
                            np.tan(np.radians(data['aoalpang'].vals)),
                            -np.ones_like(data['aoalpang'].vals)])
        # Rotate into the ACA frame (matrix product, not elementwise multiply)
        sun_aca = np.dot(A_AF, sun_fss)
        magnitude = np.sqrt((sun_aca * sun_aca).sum(axis=0))
        data.bads |= magnitude == 0.0
        magnitude[data.bads] = 1.0
        sun_vec_norm = sun_aca / magnitude
        roll_fss = np.degrees(np.arctan2(-sun_vec_norm[1, :], -sun_vec_norm[2, :]))
        return roll_fss

PCAD derived parameters

From @aarvai :

I reviewed G_TREND_24HR and A_SOH this morning and drafted a list of candidates (attached). I was pleasantly surprised that the list wasn't as long as I expected. I didn't include every parameter currently calculated, just those that aren't already easy to calculate from your database. (For example, we currently track some parameters that are simply unit conversions or filtered by PCAD mode.)

iplot() hangs on pan on i686 (32-bit) platforms

When panning out to the full mission data range from a previously zoomed interval, iplot() hangs. Not clear where this is happening or why. Full data range can be plotted initially, but after zooming in then panning out the hang recurs. Only occurs on i686 arch.

Ephemeris data for years into future

I'd love it if the predictive ephemeris pseudo-msids (e.g. solarephem0_x) went out further than the near future, perhaps even years into the future (with the acknowledgement that the fidelity would decrease).

Add DSN monitor data

Not sure if this is even feasible, but Ops indicated it would be very useful.

Overflow encountered in square for AOMANTIM daily

The following warning was issued by update_archive.py:

2012-11-21 06:49:30,164 Updating stats file /proj/sot/ska/data/eng_archive/data/pcad8eng/daily/AOMANTIM.h5
/proj/sot/ska/share/eng_archive/update_archive.py:326: RuntimeWarning: overflow encountered in square
  sigma_sq = np.sum(dts * (vals - out['mean'][i]) ** 2) / sum_dts

The day 2012:324 value is corrupted (2.208e22). I suspect there were no data points, so the daily mean computation failed, but this might be worth investigating or patching to make more robust.
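
A hypothetical patch around the quoted line in update_archive.py (assuming an empty day produces empty vals and dts arrays):

if len(vals) == 0:
    continue  # no samples this day: skip rather than emit a corrupt stats row
vals64 = vals.astype(np.float64)  # compute the square in float64 to avoid overflow
sigma_sq = np.sum(dts * (vals64 - out['mean'][i]) ** 2) / sum_dts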

Since the daily mean of AOMANTIM is not really useful, no immediate corrective action is needed.

@aarvai @emartin496 @jeanconn

Make DP_ optional for derived parameter names

Enhance the eng archive so derived parameter names don't have to start with "DP_". This could be done either in the core derived class code, or more easily as a hook in fetch that tries adding "DP_" to an MSID name if the MSID is not found, as sketched below.
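
A sketch of the fetch-side hook (ALL_MSIDS here is an assumed set of known archive MSID names, not an existing variable):

def resolve_msid(msid):
    # Hypothetical helper: fall back to a DP_ prefix for derived parameters
    if msid.upper() in ALL_MSIDS:
        return msid
    dp_name = 'DP_' + msid.upper()
    if dp_name in ALL_MSIDS:
        return dp_name
    raise ValueError('MSID %s is not in the archive' % msid)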

Provide a convenient way to run update_archive in a loop

Helper scripts like run_rebuild_stats.csh should be replaced by a clean way to run update_archive.py in a loop. This might be done by allowing a --start param in update_archive.py; along with the --max-lookback-time and --date-now params, this defines a loop, as sketched below.
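
A sketch of the driver loop (hypothetical; assumes a daily step and the proposed --start option):

import subprocess
from Chandra.Time import DateTime

date = DateTime('2012:001')   # would come from --start
stop = DateTime('2012:010')   # would come from --date-now
while date.secs < stop.secs:
    subprocess.check_call(['python', 'update_archive.py',
                           '--date-now', date.date,
                           '--max-lookback-time', '5'])
    date = date + 1  # DateTime arithmetic steps in days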

Add MSID descriptions

It would be helpful if the MSID description were available, similar to the way the mnemonic and units are already available as attributes:
.msid
.unit

Predictive Ephemeris Data Missing on 2000:071

Predictive ephemeris data are missing from 2000:071:00:00:00 to 2000:071:12:03:56 UTC. This affects orbitephem0, lunarephem0, and solarephem0, as well as their associated MSIDs (_x, _y, _z, _vx, _vy, and _vz). Also, because of their dependence on ephemeris data, the PCAD derived parameters DP_PITCH, DP_ROLL, and DP_XZ_ANGLE are affected. I believe this is the only time period in the current TLM archive with this issue (missing data for > 301 seconds). Definitive ephemeris is not affected.

For example:

In [57]: x=fetch.Msid('orbitephem0_x','2000:001')

In [58]: dt = diff(x.times)

In [59]: i = dt > 301

In [60]: Chandra.Time.DateTime(x.times[i]).date
Out[60]:
array(['2000:071:00:00:00.000'],
  dtype='|S21')

In [61]: Chandra.Time.DateTime(x.times[1:][i]).date
Out[61]: 
array(['2000:071:12:03:56.000'],
  dtype='|S21')

@taldcroft

Repeated statistical data for derived parameters in engineering archive

It seems as if there are repeated values in the statistical data for at least several of the derived parameters. The repeated values occur from 2000:001 through 2000:059. The code below should reproduce the bug.

import Ska.engarchive.fetch_eng as fetch
from matplotlib.pyplot import plot

data = fetch.Msid('oba_ave', '2000:001', '2000:070', stat='daily')
plot(data.times, data.vals)

You should see a line connecting data from day 59 back to day 1. Further inspection should show that there is repeated data in the "times" array, and corresponding repeated data in all the statistical datasets. This behavior is evident in both the 5 minute and daily statistical data but does not seem to be present in the regular (normal sampling rate) data.

Upgrade TDB to P010

fetch.Msid currently accesses the P009 TDB, although P010 is the current operational version. A newer version would minimize the hand-edits required by new "converters" when switching their tools over to the Eng Archive.

@matthewdahmer

Fetch import depends on non-package files

Importing fetch requires reading filetypes.dat which is installed external to the package. Better to lazy-load any required files and bundle them in the package installation instead of $SKA/data/eng_archive.

This was a problem for building skare on a clean system because xija imported fetch, which failed.

Check integrity shows many apparent misordered archfiles

Running check_integrity.py --check-order shows a number of apparent misorders. At least some of these appear to be reprocessing of the same file (N001 => N002 with same start time, etc). Some cases don't follow this pattern.

Action - check if there is any impact from these cases in eng archive MSID data. Are the redundant points set to bad? Is there an impact on short queries, i.e. bad coarse index from archfiles table?

Logical Intervals Error

Since the release of Ska engineering archive 0.13, I've been trying to work with the logical_intervals function, but keep getting the following error (shown below for the website's example):

In [11]: dat = fetch.MSID('aomanend', '2010:001', '2010:005')

In [12]: manvs = dat.logical_intervals('==', 'NEND')
ERROR: An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line statement', (22, 0))


NameError Traceback (most recent call last)

/home/aarvai/python/<ipython-input> in <module>()

/proj/sot/ska/arch/i686-linux_CentOS-5/lib/python2.7/site-packages/Ska.engarchive-0.13-py2.7.egg/Ska/engarchive/fetch.pyc in logical_intervals(self, op, val)
439 tstop = times[i_ends]
440 intervals = {'datestart': DateTime(tstarts).date,
--> 441 'datestop': DateTime(tstops).date,
442 'duration': times[i_ends] - times[i_starts],
443 'tstart': tstarts,

NameError: global name 'tstops' is not defined

Note the likely cause visible in the traceback: line 439 assigns tstop, but line 441 references tstops. The state_intervals function works great.

Enhancement to Logical_Intervals for MSID Sets

The logical_intervals attribute for MSIDs is currently restricted to evaluating a logical expression for a single MSID (or a single MSID within an MSID set). For example,

x = fetch.Msid('COTLRDSF', '2000:001', stat='5min')
x.logical_intervals('==', 'PCAD')

It would be helpful if this capability was expanded to include booleans and/or logical statements for multiple MSIDs within an MSID set. For example,

x = fetch.Msidset(['aspebx', 'afssab'], '2000:001', stat='5min')
x.logical_intervals((x['aspebx'].vals == 'ON ') & (x['afssab'].vals == 'ON '))

This capability would require that all MSIDs within the set have the same timestamps, either via the interpolate method or the statistics keyword.

I currently have a kludgy workaround where I create a bogus MSID within the set and set its values to the boolean result of the logical statement, as sketched below. However, something more elegant would be nice for formal coding.
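
The workaround looks roughly like this (a sketch; it assumes both MSIDs share timestamps at stat='5min'):

from Ska.engarchive import fetch_eng as fetch

x = fetch.Msidset(['aspebx', 'afssab'], '2000:001', stat='5min')
combo = x['aspebx']  # piggy-back the combined boolean on one MSID
combo.vals = (x['aspebx'].vals == 'ON ') & (x['afssab'].vals == 'ON ')
intervals = combo.logical_intervals('==', True)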

@taldcroft @jeanconn

Expand scope to include all MSIDs

Currently the archive only includes commonly-used MSIDs. However, this limits its functionality for analyses during Safe Mode. If possible, without taking up too much space, it would be very helpful if the archive included all MSIDs.

Data availability differs depending on Fetch start and stop times

This example demonstrates an event that I've seen throughout December 2011 and January 2012 on PM1THV1T:

impska

Collect 2 data sets that differ only by the end date

x1=fetch.Msid('pm1thv1t','2011:350:00:00:00.000','2011:353:00:00:00.000')
x2=fetch.Msid('pm1thv1t','2011:350:00:00:00.000','2011:356:00:00:00.000')

Plot results

figure(1)
plot_cxctime(x1.times, x1.vals)
title('Dataset w/ End Date of 2011:353:00:00:00.000')
print('x1 Last Entry: ' + Chandra.Time.DateTime(x1.times[-1]).date)

figure(2)
plot_cxctime(x2.times, x2.vals)
title('Dataset w/ End Date of 2011:356:00:00:00.000')
print('x2 Last Entry: ' + Chandra.Time.DateTime(x2.times[-1]).date)

The first query produced no data for 2011:352. However, those data are available in the second query.

@taldcroft @jeanconn

AOKALSTR values are strings

In [2]: dat = fetch.Msid('aokalstr', '2012:031:08:00:00', '2012:031:11:00:00')

In [3]: dat.iplot() # FAIL

In [4]: dat.vals
Out[4]:
array(['8 ', '8 ', '8 ', ..., '8 ', '8 ', '8 '],
dtype='|S2')

@aarvai - Does this make any sense to you? Is this MSID a string valued quantity in GRETA?

Delta values

GRETA has a "DELTA" x-list option that outputs timestamps only when a mnemonic changes. This could be a helpful option in the Telemetry Archive for saving memory and decreasing the output array size. The user would just need to be aware that the delta timestamps aren't evenly spaced, but this is already the case when using filter_bad=True.
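
As a sketch, a DELTA-style filter could also be applied post-fetch (hypothetical helper):

import numpy as np

def deltas(times, vals):
    # Keep only the first sample and samples where the value changes
    keep = np.concatenate([[True], vals[1:] != vals[:-1]])
    return times[keep], vals[keep]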

Add SIM derived MSIDs

Add derived MSIDs for SIM telemetry:

3TSCPOS (counts)
3FAPOS (counts)
3TSCMOV
3FAMOV
others??

Need to get hold of the cal curve from 3FAPOS (counts) to mm and invert it, since CXC telemetry provides only the value in mm.
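
The inversion could be a table interpolation with the axes swapped (a sketch with placeholder values, not the real 3FAPOS calibration, and assuming the curve is monotonic):

import numpy as np

counts_tab = np.array([0.0, 1000.0, 2000.0, 3000.0])   # placeholder counts
mm_tab = np.array([-250.0, -100.0, 50.0, 200.0])       # placeholder mm values

def mm_to_counts(mm):
    # Valid because mm_tab is monotonically increasing
    return np.interp(mm, mm_tab, counts_tab)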

cc: @aarvai @jeanconn

Derived parameter update lag

After some investigation with Tom, it appears that the derived parameters are updated in 200,000-second "chunks", which can result in a telemetry latency of ~3 days. While acceptable in most situations, it would be preferable for the max telemetry latency to be closer to ~1 day, as for the other parameters, particularly for the check_fss module (daily FSS trending plots).
@matthewdahmer @jeanconn @taldcroft

Corrupt Data for AIRU1VFT Stats on 2001:227 - 228

When run on the GRETA network (chimchim in skatest), the daily means for AIRU1VFT are incorrect on 2001:227 - 228. Specifically, the daily means are greater than the daily maxes. They are also inconsistent with full-resolution plots of data.

In [89]: x = fetch.Msid('airu1vft', '2001:227', '2001:232')

In [90]: figure()
Out[90]: <matplotlib.figure.Figure at 0x2e0f4990>

In [91]: x.plot()

In [92]: y = fetch.Msid('airu1vft', '2001:227', '2001:232', stat='daily')

In [93]: y.means
Out[93]:
array([ 142.9781189 , 143.74649048, 141.8286438 , 141.83554077,
141.83554077], dtype=float32)

In [94]: y.maxes
Out[94]:
array([ 141.81997681, 141.81997681, 141.81997681, 141.81997681,
141.81997681], dtype=float32)

@taldcroft @matthewdahmer

Improve tutorial docs

Move "source /proj/sot/ska ..." into the initial file setup section.

Explicitly call out opening a new window then typing "ska" to get into Ska.

MSIDset.interpolate behaves unintuitively

From @aarvai

I noticed that the interpolate function acted differently with "bad" data than I expected. I thought that when using .interpolate, the timestamps for all MSIDs within the MSIDset would be set to identical values. However, please see the attached code below. In this case, "good" data for one of the two MSIDs ('dp_pitch_fss') is not available for the first 14 hours (because we're out of the FSS FOV). The interpolate function still set the data sets to the same shape (288 entries), but, as shown by figure 3, their timestamps are far from identical. I didn't want to submit an official bug report in Github until I ran it by you first - is this the intended behavior? Or is this just something to be careful of? (Should we always plot the delta times?) Ideally, I would think that the "interpolated" timestamps should only be in time ranges where all of the data is "good" (here, the last 10 hours).

from matplotlib.pyplot import figure, plot, title
from Ska.Matplotlib import plot_cxctime
from Ska.engarchive import fetch_eng as fetch

x = fetch.Msidset(['aosares1', 'dp_pitch_fss'], '2000:002:00:00:00.000', '2000:003:00:00:00.000')

figure(1)
plot_cxctime(x['aosares1'].times,x['aosares1'].vals,'b')
plot_cxctime(x['dp_pitch_fss'].times,x['dp_pitch_fss'].vals,'r')
title('Plotted w/ Original Timestamps')

figure(2)
x.interpolate(dt=300)
plot_cxctime(x['aosares1'].times,x['aosares1'].vals,'b')
plot_cxctime(x['dp_pitch_fss'].times,x['dp_pitch_fss'].vals,'r')
title('Plotted w/ Interpolated Timestamps')

figure(3)
delta_pitch_fss = x['aosares1'].times - x['dp_pitch_fss'].times
plot(range(len(delta_pitch_fss)),delta_pitch_fss)
title('AOSARES1 times - DP_PITCH_FSS times')

Incorrect Equations for THSMIN and TSSMIN

Due to an error I made in the code that produces the THSMIN and TSSMIN derived parameters, both of these parameters return data identical to their THSMAX and TSSMAX counterparts. The code for these should be:

THSMIN:

class DP_THSMIN(DerivedParameterThermal):
    rootparams = ['OOBTHR02', 'OOBTHR03', 'OOBTHR06', 'OOBTHR07', 'OOBTHR04', 
                  'OOBTHR05']
    time_step = 32.8

    def calc(self, data):
        THSMIN = data[self.rootparams[0]].vals
        for names in self.rootparams[1:]:
            THSMIN = np.min([THSMIN, data[names].vals], axis=0)
        return THSMIN

TSSMIN:

class DP_TSSMIN(DerivedParameterThermal):
    rootparams = ['OOBTHR51', 'OOBTHR50', 'OOBTHR53', 'OOBTHR52', 'OOBTHR54', 
                  'OOBTHR49']
    time_step = 32.8

    def calc(self, data):
        TSSMIN = data[self.rootparams[0]].vals
        for names in self.rootparams[1:]:
            TSSMIN = np.min([TSSMIN, data[names].vals], axis=0)
        return TSSMIN

The only difference is the replacement of the np.max function with the np.min function. I apologize for any inconvenience this causes.

Dist_Sat_Earth missing around 2011:187 safemode

Dist_Sat_Earth is not populated from approximately 2011:187:12:18 UTC through approximately 2011:199:01:18 UTC (~12-day timespan). I would have expected this parameter to be independent of spacecraft (i.e. Safe Mode) issues. (I know Dist_Sat_Earth sounds like a big jump from AOFATTMD, but the two are back to back in my code.)

Add plotting capability for bi-levels

If possible, a plotting capability for bi-levels (i.e., all MSIDs with string values rather than numerical values) would be very helpful for generating a timeline of events during an anomaly. Ideally, the state codes would be printed on the y-axis. Perhaps a special plotting capability (similar to plot_cxctime) would be applicable?
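
A minimal sketch of such a plot, mapping state codes to integers and labeling the y-axis (the MSID here is just an example choice):

import numpy as np
from matplotlib.pyplot import yticks
from Ska.Matplotlib import plot_cxctime
from Ska.engarchive import fetch

dat = fetch.Msid('aopcadmd', '2012:001', '2012:002')
states = np.unique(dat.vals)
codes = dict((state, i) for i, state in enumerate(states))
y = np.array([codes[v] for v in dat.vals])
plot_cxctime(dat.times, y, '-')          # timeline of state changes
yticks(np.arange(len(states)), states)   # print the state codes on the y-axis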

Missing derived MSID data for most recent time values

MSID data for the most recent time values seem to be missing for derived parameters. These data are present for non-derived values. This behavior is not observed for data more than a few days in the past. This may be intended behavior (perhaps due to the need to add a full "chunk" of data to the HDF5 file at a time); if that is the case, I can live with it. This does not need to be a high-priority issue from my perspective.

from Chandra.Time import DateTime
impska

notderiveddata = fetch.Msid('tfutsupn', '2012:250')  
deriveddata = fetch.Msid('OBA_AVE','2012:250')


DateTime(notderiveddata.times[-1]).date
DateTime(deriveddata.times[-1]).date

@taldcroft @aarvai

Add TDB Caution and Warning Limits

Similar to the recent (and very useful) enhancement that added the technical names to fetch.Msid and fetch.Msidset, it would be handy to also include the caution and warning limits. This would allow for automatic limit lines to be added, for example, when generating an LTT plot.

Practically, it may be worth waiting until the release of the P010 database to avoid re-ingesting the data. (Unless that's easily done.)

Fetch TLM for a series of time intervals

This is essentially the same as fetching all telemetry and then selecting the desired intervals, but it avoids fetching more data than is needed. For example, consider fetching ELBV for the 5 minutes post-eclipse: high-rate data is required, but fetching the mission's worth of ELBV is computationally expensive. It would be nice if the time intervals could be defined as a kadi event too.
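
A sketch of what this could look like from the user side (fetch_intervals is a hypothetical helper, not an existing API):

import numpy as np
from Ska.engarchive import fetch

def fetch_intervals(msid, intervals):
    # Fetch each (start, stop) interval separately and concatenate the results
    times, vals = [], []
    for start, stop in intervals:
        dat = fetch.Msid(msid, start, stop)
        times.append(dat.times)
        vals.append(dat.vals)
    return np.concatenate(times), np.concatenate(vals)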
