
mesatesthub's People

Contributors

dependabot[bot], jschwab, wmwolf


mesatesthub's Issues

Do something with submitted `MESA_RUN_OPTIONAL` data in submissions

Submitted test instances now "know" when they came from running optional inlists. This should be saved into the database with the instances (and submissions) so that we can later search on them, so they don't create spurious multiple-checksum errors, so they don't pollute runtime statistics, etc.
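As a sketch of the statistics side, instances flagged as run_optional could simply be excluded when aggregating runtimes (the field names and data shapes here are assumptions, not the real schema):

```python
# Sketch: exclude optional-inlist instances from runtime statistics.
# "runtime_minutes" and "run_optional" are hypothetical field names.

def runtime_stats(instances):
    """Mean runtime over instances that did NOT run optional inlists."""
    runtimes = [i["runtime_minutes"] for i in instances
                if not i.get("run_optional", False)]
    return sum(runtimes) / len(runtimes) if runtimes else None

instances = [
    {"runtime_minutes": 10.0, "run_optional": False},
    {"runtime_minutes": 25.0, "run_optional": True},   # skipped from stats
    {"runtime_minutes": 12.0},                         # defaults to required
]
print(runtime_stats(instances))  # 11.0
```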

Some tests with multiple inlists don't display individual parts

The test ns_h has two parts, but the commit view fails to display options for viewing them individually (e.g., https://testhub.mesastar.org/main/commits/21b2632/test_cases/star/ns_h).

Inspection of the YAML file shows the expected two inlist sections.

---
test_case: ns_h
module: :star
omp_num_threads: 36
run_optional: false
fpe_checks: false
inlists:
    - inlist: inlist_add_he_layer_header
      runtime_minutes:   0.34
      model_number:          108
      star_age:     1.5844043907014474E-08
      num_retries:            0
      log_rel_run_E_err:        -4.0992147704590050
      steps:          108
      retries:            0
      redos:            0
      solver_calls_made:          108
      solver_calls_failed:            0
      solver_iterations:          597
    - inlist: inlist_to_steady_h_burn_header
      runtime_minutes:   1.52
      model_number:          459
      star_age:     6.3376175628057901E-03
      num_retries:            3
      log_rel_run_E_err:        -4.9882129066874148
      steps:          462
      retries:            0
      redos:            0
      solver_calls_made:          462
      solver_calls_failed:            0
      solver_iterations:         2264
mem_rn: 4408860
success_type: :run_test_string
restart_photo: x400
mem_re: 2248712
failure_type: :photo_checksum
outcome: :fail

Display Compilation Status for each Commit in the Commits View

The overview of commits has color-coded backgrounds and badges to display a wealth of information. However, there is no indication of successful/unsuccessful compilations. There should be some indication of commits that have failed compilations on some or all computers. Any failed compilations should probably also be reported in the morning email (#17).

Add exit_code failure type

I am requesting the addition of a new failure type:

Entry in YAML file: "failure_type: :exit_code"
String for TestHub display: "Exit Code failure"
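A minimal sketch of the requested mapping, assuming TestHub keeps a lookup from YAML failure-type strings to display strings. Only :exit_code is the requested addition; the other entry is illustrative, not an exhaustive copy of the real list:

```python
# Sketch: failure_type string -> TestHub display string.
# The mapping contents (other than :exit_code) are illustrative.
FAILURE_TYPE_NAMES = {
    ":photo_checksum": "Photo checksum failure",  # existing type (see YAML above)
    ":exit_code": "Exit Code failure",            # the requested new entry
}

def display_failure(failure_type):
    return FAILURE_TYPE_NAMES.get(failure_type, "Unknown failure")

print(display_failure(":exit_code"))  # Exit Code failure
```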

Allow Walking the Actual Git Tree

Each commit knows what its parents and children are in the database. In theory, we could have the "next" and "previous" buttons that toggle through commits allow choosing between multiple children/parents for merge/branch commits. This is relatively easy. Harder would be to make the list views of commits allow for walking the true structure, since a graph can't really be mapped cleanly to a list.

Any thoughts on what would be a more useful linear organization of commits? Currently we just sort by commit time in a single branch, but after a different branch is merged in, all of its commits become part of the mainline branch, and time-ordering does not guarantee that one commit is a parent of the next in the list.
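One hedged answer: a topological order guarantees every commit appears after all of its parents, which plain time-ordering cannot once a branch is merged in. A sketch using Kahn's algorithm over a hypothetical parent map (not the real database schema):

```python
# Sketch: topologically order commits so each appears after its parents.
from collections import deque

def topo_order(commits):
    """commits: dict sha -> list of parent shas."""
    children = {sha: [] for sha in commits}
    indegree = {sha: len(parents) for sha, parents in commits.items()}
    for sha, parents in commits.items():
        for p in parents:
            children[p].append(sha)
    queue = deque(sorted(s for s, d in indegree.items() if d == 0))
    order = []
    while queue:
        sha = queue.popleft()
        order.append(sha)
        for child in children[sha]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return order

# A branch (b1) merged back into main (m2):
history = {"root": [], "m1": ["root"], "b1": ["root"], "m2": ["m1", "b1"]}
print(topo_order(history))  # every commit follows its parents
```

This is essentially what `git log --topo-order` does; ties between siblings would still need a secondary key such as commit time.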

Multi-inlist test cases don't consistently show up in test case commit view

Some test cases, such as ccsn_IIp and make_o_ne_wd, show multiple inlists in the test case commit view just fine, while others, such as cburn_inward and ns_h, do not.

Primary Suspicion: At first glance, the cases that don't display properly have only two inlists, whereas the ones that work properly have more than two. This might just be a bug where we check whether the number of inlists is > 2 rather than ≥ 2.
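The suspected off-by-one can be illustrated directly (the function names are hypothetical, not taken from the real code):

```python
# Sketch of the suspected bug: per-inlist display should appear for any
# test with two or more parts, but a strict ">" hides two-part tests.
def shows_parts_buggy(inlists):
    return len(inlists) > 2      # suspected current check

def shows_parts_fixed(inlists):
    return len(inlists) >= 2     # proposed fix

two_parts = ["inlist_add_he_layer_header", "inlist_to_steady_h_burn_header"]
print(shows_parts_buggy(two_parts), shows_parts_fixed(two_parts))  # False True
```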

more fine tuning of web interface

Some more fine tuning of the interface: when I select a particular inlist-specific column, the code currently automatically turns on lots of things that are not usually of interest (redos, solver iterations, solver calls made, solver calls failed, log rel run E err). It would be better if those were off by default, leaving only runtime, steps, and retries on by default.

And, by default we now carry forward the total num_retries for the run from one part to the next, so the output I get on the terminal shows the cumulative number of retries, not just the number in that particular part. However, that doesn't match what is being shown for "retries" on the web page. Perhaps you are taking the extra step of subtracting off the starting value in order to show only the number of retries that happened in that part (the logical thing to do for consistency with steps, etc.). In this case, logic be damned: please give me the cumulative total retries so I can easily compare to the current values shown on the terminal.
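To make the two conventions concrete, here is a sketch with purely illustrative numbers (not taken from any real run):

```python
# Sketch: cumulative vs. per-part retry counts. The terminal reports the
# cumulative num_retries carried into each part; the web page appears to
# subtract each part's starting value to get per-part counts.
cumulative = [2, 5, 5]   # total retries reported after parts 1, 2, 3
per_part = [c - (cumulative[i - 1] if i > 0 else 0)
            for i, c in enumerate(cumulative)]
print(per_part)          # [2, 3, 0] -> what the page seems to show
print(cumulative[-1])    # 5         -> the cumulative total requested here
```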

I have never felt any need to look at redos, solver iterations, solver calls made, or solver calls failed. It would be great to have a way to tell the test hub to skip those when showing me the options; currently they just waste a lot of space on the page and make it look unnecessarily messy. Perhaps some sort of "preferences" to indicate which to show?

That might also be a solution to the issue of which things in the list should be on by default when selecting a particular inlist to show: you could just start with everything that is included in the preferences turned on.

For me, you could even take it the next step and get rid of the ability to turn particular items on or off on the webpage; I'll just set preferences to show what I want. Then it is only at the inlist level that we'd have an on/off choice, not for the individual items. That would make the interface much simpler (I like simpler). If you don't like having it that simple, you could provide an "experts only" extra interface for you to enjoy, with lots of buttons and bells and whistles! I'll stick to the simple one.

And, importantly, only show the information for the inlists that are actually selected! Currently the page fills up with all of them whether selected or not.

Add information summarizing computational load

In the mesa dev call today (see late in the recording), there was some discussion about how it would be useful if TestHub could summarize how much compute time is being used. For example, N core-hours were used testing mesa in the last day (over all machines) or helios has contributed M core-hours to testing this week.
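A sketch of one possible estimate, assuming core-hours are approximated as omp_num_threads times total runtime for each submission. The per-inlist field names follow the YAML above; the aggregation shape is hypothetical:

```python
# Sketch: estimate core-hours from fields already present in submissions,
# as omp_num_threads * total runtime_minutes / 60 summed over submissions.
def core_hours(submissions):
    total = 0.0
    for s in submissions:
        minutes = sum(i["runtime_minutes"] for i in s["inlists"])
        total += s["omp_num_threads"] * minutes / 60.0
    return total

# One hypothetical submission, using the ns_h runtimes from the YAML above:
day = [{"omp_num_threads": 36,
        "inlists": [{"runtime_minutes": 0.34}, {"runtime_minutes": 1.52}]}]
print(round(core_hours(day), 3))  # 36 * 1.86 / 60 = 1.116 core-hours
```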

Searching for Commits

Since commits don't have a simple time-ordering associated with their labeling, getting at a particular commit can be challenging. It might be nice to be able to search for commits by author, date, passing status, or commit message. Right now you can hard-code a URL with a valid commit identifier, so if you do know a particular commit, you can get to it.

This would be similar to the search feature on test instances, and perhaps could be ported over to mesa_test as well (you could search commits from the command line).

Branch Searching in Branch Dropdowns

It's unclear just how many branches we will have, but if it gets beyond a small number, sorting through them will be messy. The dropdowns throughout the interface only show unmerged branches (which is good). However, you should still be able to access a merged branch without having to hard-code the URL.

There should be a text box at the beginning of the branch dropdowns that lets you filter all branches by name, including merged branches. This would be similar to how test case dropdowns let you filter by name by typing.
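A sketch of the proposed filter, assuming a case-insensitive substring match over all branches, merged or not (the branch names and data shapes are hypothetical):

```python
# Sketch: filter the full branch list, merged branches included, by a
# typed query, as the test case dropdowns already do.
def filter_branches(branches, query):
    q = query.lower()
    return [b for b in branches if q in b["name"].lower()]

branches = [
    {"name": "main", "merged": False},
    {"name": "fix-ns_h-display", "merged": True},
    {"name": "feature/email", "merged": False},
]
print([b["name"] for b in filter_branches(branches, "ns_h")])
# ['fix-ns_h-display'] -- merged branches stay reachable
```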

Incorrect coloring of mixed commit status

When at least one test is a unanimous failure, a commit should be colored red for failed. The mixed yellow color should be reserved for commits with no unanimous failures.

[Screenshot from 2021-04-24 at 10:14:26]
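A sketch of the proposed rule, assuming per-computer pass/fail results are available for each test case (the data shapes are hypothetical):

```python
# Sketch: red whenever any test case fails on every computer that ran it;
# yellow only for mixed results with no unanimous failure.
def commit_color(results):
    """results: dict test_case -> list of pass booleans, one per computer."""
    if all(all(r) for r in results.values()):
        return "green"
    if any(not any(r) for r in results.values()):
        return "red"      # at least one unanimous failure
    return "yellow"       # mixed results only

print(commit_color({"ns_h": [False, False], "ccsn_IIp": [True, True]}))  # red
print(commit_color({"ns_h": [True, False], "ccsn_IIp": [True, True]}))   # yellow
```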

Reject Test Instance Submissions with "bad" YAML

Test instances that are missing mandatory information (broadly defined), like a test case name or a failure_type (when passed is false), should never make it into the database; they should be spat back to the submitter while the rest of the submission is accepted. We might also spit back unrecognized keys, since they are likely wrong, or at least as a reminder to the developer to make them do something on the backend.

Background

mesa handles creating the YAML files, and mesa_test handles submitting them. In principle, mesa_test could check the YAML files for consistency so bad data never makes it to the server, but that would require mesa_test to be "smart" and, more importantly, kept updated. Kicking this up to the testhub means that mesa_test can remain a blameless messenger between the testing scripts of mesa and the database in the testhub.
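A sketch of the server-side check, with illustrative (not real) key lists:

```python
# Sketch: reject instances missing mandatory fields before they reach the
# database, and flag unrecognized keys. KNOWN_KEYS is illustrative only.
KNOWN_KEYS = {"test_case", "module", "passed", "failure_type",
              "inlists", "omp_num_threads", "run_optional", "outcome"}

def validate_instance(data):
    errors = []
    if not data.get("test_case"):
        errors.append("missing test_case")
    if data.get("passed") is False and not data.get("failure_type"):
        errors.append("failed instance missing failure_type")
    unknown = set(data) - KNOWN_KEYS
    if unknown:
        errors.append("unrecognized keys: " + ", ".join(sorted(unknown)))
    return errors   # empty list -> accept the instance

print(validate_instance({"passed": False, "speling_mistake": 1}))
```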

Bring test case – commit view up to date with the test case history view

Viewing the results of a single test case in the context of one commit shows overall results from all computers that submitted, but it doesn't get granular (showing per-inlist runtimes, custom values, etc.). The test case history view does show these. The two tables should show the same information.

Restore Morning Email

Due to changes in the backend of the testhub, the morning e-mail will not work in the new testhub as is. The "all commits" page will be a decent substitute in the short term, but this is pretty critical.

Indicate computers with partial test suite runs

A computer is listed on the main page under "Computers tested" if it has reported any tests. It would be helpful to have this list separated into computers that have run all tests and those that have run only some tests.
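A sketch of the proposed split, assuming each computer's set of submitted test cases can be compared against the size of the full suite (the data shapes are hypothetical):

```python
# Sketch: partition "Computers tested" into full-suite and partial-suite
# runs by comparing submitted test counts to the suite size.
def split_computers(reported, suite_size):
    """reported: dict computer -> set of test cases it submitted."""
    full = sorted(c for c, t in reported.items() if len(t) >= suite_size)
    partial = sorted(c for c, t in reported.items() if 0 < len(t) < suite_size)
    return full, partial

reported = {"helios": {"ns_h", "ccsn_IIp"}, "laptop": {"ns_h"}}
print(split_computers(reported, suite_size=2))  # (['helios'], ['laptop'])
```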
