cylc / cylc-flow

Cylc: a workflow engine for cycling systems.

Home Page: https://cylc.github.io

License: GNU General Public License v3.0

Shell 23.45% Python 69.98% Emacs Lisp 0.09% Dockerfile 0.05% Jinja 0.01% HTML 6.34% Vim Script 0.08%
workflow-engine python metascheduler workflow-automation workflow-management cycling-workflows job-scheduler scheduling cylc scheduler

cylc-flow's Introduction


Cylc (pronounced silk) is a general purpose workflow engine that also manages cycling systems very efficiently. It is used in production weather, climate, and environmental forecasting on HPC, but is not specialized to those domains.

Quick Start

Installation | Documentation

# install cylc
conda install cylc-flow

# extract an example to run
cylc get-resources examples/integer-cycling

# install and run it
cylc vip integer-cycling  # vip = validate, install and play

# watch it run
cylc tui integer-cycling

The Cylc Ecosystem

  • cylc-flow - The core Cylc Scheduler for defining and running workflows.
  • cylc-uiserver - The web-based Cylc graphical user interface for monitoring and controlling workflows.
  • cylc-rose - Provides integration with Rose.

Migrating From Cylc 7

Migration Guide | Migration Support

Cylc 8 can run most Cylc 7 workflows in compatibility mode with little to no change; see the migration guide for more details.

Quick summary of major changes:

  • Python 2 -> 3.
  • Internal communications converted from HTTPS to ZMQ (TCP).
  • PyGTK GUIs replaced by:
    • Terminal user interface (TUI) included in cylc-flow.
    • Web user interface provided by the cylc-uiserver package.
  • A new scheduling algorithm with support for branched workflows.
  • Command line changes:
    • cylc run <id> -> cylc play <id>
    • cylc restart <id> -> cylc play <id>
    • rose suite-run -> cylc install; cylc play <id>
  • The core package containing the Cylc scheduler program has been renamed cylc-flow.
  • Cylc review has been removed; the Cylc 7 version remains Cylc 8 compatible.

Citations & Publications

DOI JOSS CISE

Copyright and Terms of Use

License

Copyright (C) 2008-2024 NIWA & British Crown (Met Office) & Contributors.

Cylc is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

Cylc is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with Cylc. If not, see <https://www.gnu.org/licenses/>.

Contributing


Contributions welcome.

This repository contains some code that was generated by GitHub Copilot.

cylc-flow's People

Contributors

aosprey, areinecke, arjclark, benfitzpatrick, challurip, datamel, dependabot[bot], dpmatthews, dvalters, dveselov, dwsutherland, gillianomenezes, gmao-cda, hjoliver, jhaiduce, jonnyhtw, jonty-bom, kinow, lhuggett, markgrahamdawson, matthewrmshin, metronnie, oliver-sanders, rosalynhatcher, sadielbartholomew, scottwales, sgaist, thomascolemanbom, tomektrzeciak, wxtim


cylc-flow's Issues

Handle task name spelling and case-sensitivity errors.

A task that is used in the graph but which has no runtime config defined defaults to inheriting the root namespace and, thereby, dummy command scripting. This is a very useful feature for writing quick test suites, for example, or for dummying in new tasks prior to configuring their runtimes.

However, this does mean that a task name spelling error or accidental case change in the graph (e.g. MyTask instead of myTask) automatically defines a new dummy task (MyTask) that will run in the suite in place of, and look very much like, the intended task (myTask).

Currently verbose mode validation does warn of this. For example, if you accidentally use "Foo" instead of "foo" in the graph:

WARNING: task "Foo" is defined only by graph - it will inherit root.
WARNING: task "foo" is disabled - it is not used in the graph.

This is quite common, and is almost always an error, so in addition to the existing warnings validation should probably warn if any task names are detected that differ only by case. It would be impractical to catch other kinds of spelling error directly, but perhaps we could have validation fail graph-only dummy tasks unless the user indicates deliberate use?
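The case-only collision check proposed above could be sketched as follows. This is a minimal illustration, not cylc's actual validation code:

```python
# Sketch: flag task names that differ only by case (e.g. "Foo" vs "foo"),
# which is almost always a spelling error.
from collections import defaultdict

def case_collisions(task_names):
    """Return groups of task names that are equal ignoring case."""
    groups = defaultdict(set)
    for name in task_names:
        groups[name.lower()].add(name)
    return [sorted(g) for g in groups.values() if len(g) > 1]

print(case_collisions(["foo", "Foo", "bar"]))  # [['Foo', 'foo']]
```

Validation could emit a warning (or fail, if the user has not marked the use as deliberate) for each group returned.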

Validation doesn't check graph string syntax.

Users may assume that graph strings are interpreted as a whole rather than line by line. These are legal and equivalent:

graph = "foo => bar => baz"

graph = "foo => bar
=> baz"

graph = """foo => bar
bar => baz"""

But the following causes "cylc graph" to crash:

graph = """foo => bar
=> baz"""

Failure to acquire a task lock results in a subsequent erroneous warning.

Failed to acquire task lock
examples:FFHook:A%2010080806 is already locked!
cylc (scheduler - 2011/07/01 12:49:13): CRITICAL FAILED TO ACQUIRE A TASK LOCK
cylc (scheduler - 2011/07/01 12:49:13): CRITICAL A%2010080806 failed
cylc (scheduler - 2011/07/01 12:49:13): CRITICAL FAILED TO CONNECT TO A LOCKSERVER
cylc (scheduler - 2011/07/01 12:49:13): CRITICAL A%2010080806 failed

The second critical log message is incorrect - the lockserver was contacted.

Menu appearance

The menus could be made more user-friendly by using more icons, separators, potentially submenus, and using check buttons for toggle on/off menu items. The individual task menus could be improved by preserving the underscore in the task name.

Network communication: Pyro4 or another protocol?

Currently cylc uses Pyro3 as a rather minimal object oriented RPC interface, for network communication between cylc clients (tasks, commands, GUIs) and cylc servers (running suites).

Pyro3 is now in maintenance mode; Pyro4, which is compatible with Python 3, is recommended for new projects. In the future we will clearly have to upgrade to Pyro4 or switch to something else, possibly a custom protocol, because cylc's needs in this respect are quite simple.

Unfortunately Pyro3's built-in "connection authentication" is currently critical to cylc's operation (i.e. suite passphrases), and as of mid 2012 Pyro4 does not have any built-in connection authentication. However, a mid-2011 post to the Pyro mailing list by the Pyro maintainer Irmen de Jong suggests that connection authentication is on his to-do list, and that it will likely take a similar form to Pyro3's. Options:

  • wait on Pyro4 connection authentication before switching to Pyro4
  • convert to Pyro4 regardless and build our own authentication mechanism into cylc
  • drop Pyro and build our own network communication and authentication protocols

cylc task-to-suite messaging in HPC facilities

Cylc tasks need to report progress (task started, succeeded or failed, and possibly internal outputs and other messages) back to their parent suite. In an HPC environment you would ideally not run cylc itself on the HPC nodes, but rather on some Linux server or even your own desktop (the "suite host"), with the suite submitting jobs to the remote HPC facility. The running suite then does not use any valuable HPC resource, and cylc suite visualization and the GUI tools do not have to be installed or ported (if that's even possible) to the HPC environment.

However, remote tasks must be able to communicate, by network socket or by passwordless ssh, back to the suite host, and it has recently come to my attention that some (many?) HPC facilities do not allow any network routing back out of the compute nodes, for security reasons and/or to avoid extraneous "network chatter" that could impact compute performance. This would seem to imply that cylc (or indeed any general scheduling tool that tracks the progress of its tasks) has to be run on the HPC host itself, which may be problematic for other reasons (e.g. no long-running jobs allowed?), in addition to the possible inconvenience of not having suite visualization and GUI tools available for cylc users.

This ticket can be used to record any ideas for getting around this problem.

What to do about the cylc central suite database?

The central database was intended as a means of sharing suites between cylc users at a particular site. Currently, however, it is not network aware and is implemented as a very insecure world-writeable common directory space, and is generally only used for making cylc example suites available to users.

I originally intended to put the central db on the network, at the least, but I had never really thought of an effective way to handle suite discovery, meta-data, and revision control. So ...

swap gcylc title strings

When lots of windows are open, it helps to have the suite name before the application name in the gcylc title. This behaviour is more mainstream (e.g. Firefox, gvim).

Hover-over text for task names in LED view

The LED view displays task names vertically to conserve horizontal space. To avoid neck ache, we should display the task name as (horizontal) hover-over text when the mouse is over the task name.

detaching tasks and task execution timeouts

Currently a task execution timer starts when the "task started" message is received, and stops when the task finishes (by success or failure). If an execution timeout is set for the task, the corresponding task event hook is called if the timer exceeds the timeout value before the task finishes.

We may need to do something more complex for detaching tasks, which internally submit secondary jobs after execution starts, because the secondary job can potentially get stuck in an external batch scheduler queue, thus using up "execution time" while nothing is actually executing.

task started triggers

(From Dave Matthews) Cylc currently has task triggers for success or failure [and internal outputs]. The one event it doesn't have a trigger for is task started.

I can imagine cases where you might want a background task (B) to run in parallel to another, resource intensive task (A). For instance, B might post process files as they are produced by A. In this case you want to be able to trigger B when A starts (A:start => B ?) rather than when A is triggered since A might have a long wait in a queue (either a cylc queue or a batch scheduler queue).
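The requested trigger might look like this in the graph. This is a hypothetical suite.rc fragment: the ":start" qualifier is the proposed syntax, not an existing feature at the time of this issue:

```
[scheduling]
    [[dependencies]]
        [[[0,6,12,18]]]
            graph = """
                A:start => B   # proposed: B triggers as soon as A starts
                A => C         # C still waits for A to succeed
            """
```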

Traceback on failure to connect

Currently the cylc task commands fail with a traceback if they fail to connect to the suite. This happens regardless of whether the failure is due to a missing passphrase or because the suite cannot be contacted for some reason.

"Retry on failure" upgrade

Cylc currently supports task retry on failure if you specify a list for "command scripting". This seems to work fine but the interface isn't ideal if your command is multi-line and you just want to repeat the same command (or a minor variant of it). There is also no support for delaying the re-try which is a feature that is widely used in our current system.

For example, there are certain tasks in our trial suites which, experience has shown, will occasionally fail but will often succeed if re-submitted after a suitable delay. This is typically due to a short outage of a service such as an archiving or database system. This is not currently straightforward to configure in cylc (other than via failure triggers which are extremely powerful but require work to set up and are probably overkill for this particular case).

I'd like to propose the following changes to task re-tries:

  1. "command scripting" is always a single string (not a list).

  2. "retry delay" specifies the delay in minutes before each retry. For example, "retry delay = 0, 10, 60" implies the first retry is submitted immediately, the second following a 10 min delay and the third following a 1 hr delay (i.e. total of 4 attempts).

  3. An environment variable CYLC_TRY_NUM (or CYLC_TASK_TRY ?) is defined which can be used to alter behaviour on retry.
    1 => initial attempt, 2 => first retry, etc.
    Note that it would be good if this variable gets incremented for manual retries as well. For instance you might configure your suite such that extra diagnostic output is enabled for particular tasks if they have to be re-run.
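Under this proposal a task definition might look like the following. This is a hypothetical suite.rc fragment illustrating the proposed syntax only; the task and script names are made up:

```
[runtime]
    [[archive]]
        # 1. command scripting is always a single string
        command scripting = """
            put-to-archive "$INPUT_FILE"
        """
        # 2. delays (minutes) before each retry: 4 attempts in total
        retry delay = 0, 10, 60
        # 3. scripts can branch on the proposed try-number variable, e.g.
        #    if [ "$CYLC_TASK_TRY" -gt 1 ]; then set -x; fi
```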

trigger bug: tasks depending on both async and cycling tasks

[cylc-dev] cylc-4.2.2 and earlier: bug warning; Mon 23/04/2012.

If you have a cycling task that depends on BOTH cycling and asynchronous (or special synchronous start-up) tasks, its triggers will not be defined correctly. E.g.:

  1. CyclingX & AsyncY => CyclingZ # BROKEN

This is supposed to mean that CyclingZ depends on both CyclingX and AsyncY in the first cycle (and subsequently depends only on CyclingX). But because of the bug CyclingZ will not wait on CyclingX, at least not in the first cycle (and maybe later too, depending on other triggers?). The workaround is to do this instead:

  1. AsyncY => CyclingX => CyclingZ # WORKAROUND

The only functional difference here is that CyclingX now has to wait on AsyncY in the first cycle, so the suite may exhibit marginally less efficient scheduling at start-up.

If you only use asynchronous (and special synchronous start-up) tasks for non-cycling suite sections that complete before any cycling tasks start (as I have to date) then you won't encounter this bug. That said, case 1 may be a better reflection of the true dependencies for some tasks, so the bug will be fixed.

Enable control of suites from remote hosts

This is already discussed here:
https://github.com/hjoliver/cylc/blob/master/doc/notes/hosts.txt

I'm not sure if the security issue is really relevant (if you are worried about security then you really need to use passphrases).

I propose we add support for remote access for systems with shared filesystems only (which avoids the issue of access to all the suite files, etc). We can think about what to do if you don't have shared filesystems later if there is a strong requirement for this.

Rationalize suite auto-shutdown criteria

"held" tasks are involved in suite shutdown, as well as normal task and suite hold. When a suite is told to shut down after after all currently running tasks have finished (or after a particular stop cycle) any waiting tasks (or any waiting tasks spawned beyond the stop cycle) are put into the held state, and then the suite shuts down when all tasks are either finished or held. The reason for holding waiting tasks prior to shutdown rather than removing them from the suite is that it allows you to change your mind and release them again, and to restart the suite again after shutdown (at which point the held tasks can just be released).

The current implementation leads to some odd behavior in unusual circumstances, however. For example (reported by Dave Matthews) if you hold the last task in the suite, the suite will shut down after the second to last task finishes. And, if you hold the suite and then release every task manually, you can end up with a suite running in the held state and all tasks completed, but the suite does not shut itself down.

Consider automatic task wrapping via the task job script.

Since cylc-3 and the suite.rc dependency graph, the vast majority of tasks now trigger off upstream tasks finishing - so they can be wrapped with 'cylc wrap' and there's no need to put explicit cylc messaging in the task scripts. Consequently wrapping (i.e. automatically providing the task started and finished or failed messages) might as well be the default behaviour. Further, use of pre- and post-command scripting is currently sub-optimal because it executes outside of the task started and finished messages.

Both of these problems could be solved by automatically calling 'cylc task started' at the top of task job scripts, likewise automatically setting error trapping there (for 'cylc task failed'), and calling 'cylc task succeeded' at the end of the job script. The final 'task succeeded' call could be omitted by deliberate choice for the few tasks that need to supply their own finished message (because their initiating script detaches before task processing finishes).

Result: the user needn't specify task wrapping in the suite definition, pre- and post-command scripting are brought inside the task proper, and tasks with internal outputs only need to send their internal output messages (not started and finished messages as well).
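The wrapping described above can be sketched as a generated job-script skeleton. This is a minimal illustration with a stubbed cylc function so it runs standalone; a real job script would call the actual 'cylc task ...' commands:

```shell
#!/usr/bin/env bash
# Stub standing in for the real "cylc task ..." messaging commands.
cylc() { echo "cylc $*"; }

run_task() {
    # Auto-generated prologue: report start and trap failures.
    trap 'cylc task failed "error trapped"; return 1' ERR
    cylc task started

    # Pre-command scripting, user command scripting, and post-command
    # scripting all run here, inside the started/succeeded messages:
    true

    # Auto-generated epilogue: report success (omitted by choice for
    # tasks that detach and send their own completion message).
    trap - ERR
    cylc task succeeded
}

result=$(run_task)
echo "$result"
```

With this structure the user needs no explicit messaging in their own scripting at all.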

This issue partly dates from Jan 2011 (moving start and finish messaging to the task job script to bring pre- and post-command scripting into the task) and partly from June 2011 (when Dave Matthews suggested that this could be used to entirely eliminate the need for explicit task wrapping).

Provide access via cylc to output from spent tasks.

Spent tasks (which are finished and are no longer needed to satisfy the prerequisites of others) are removed from a suite as soon as possible, after which their stdout and stderr files can no longer be accessed via cylc (although they will still be present in the suite's job submission log directory). Viewing task output files via the cylc GUI is very convenient though, so it would be good to allow access to spent task output for longer. This should not be done by simply keeping spent tasks in the main task pool for longer than necessary, however - this would slow cylc down for large suites.

Task re-tries not shown in GUI

If a task is configured to retry then it will immediately return to the waiting state on failure. One problem with this is that it is not obvious in the GUI that there has been a failure, nor any access to the output from the failed task.

It would be good if the GUI could be enhanced somehow to make re-tries more obvious, perhaps making use of the try number (see issue 29)?

Allow tasks with sub-hourly or super-daily cycling intervals.

Currently task cycle time granularity is expressed in terms of valid hours, so a task's cycle time must increment by at least one hour and by at most 24 hours. We don't know of forecasting systems with tasks that need to run more frequently than hourly, but it seems conceivable that this could be required in the future, and super-daily intervals are definitely required for climate applications.

Whilst use of cycle time is deeply embedded in the code base, as far as cylc is concerned it is in fact nothing more than a label attached to each task. Task dependencies are expressed in terms of their respective cycle times, and when a task spawns, its list of valid cycle times determines the cycle time of its successor.

multi-line selection in cylc dbviewer

The gcylc suite database viewer currently allows only single line selections (i.e. a single suite or a single group of suites). This means we cannot, via the GUI, operate on multiple suites at once unless we operate on all members of a common group (command line database operations allow selection of target suites by pattern matching, e.g. all suites starting with the word "test").

It would be good to allow multiple individual suites or groups to be selected at once without regard for their position in the registration hierarchy. And for operations such as copying a suite from one group to another, it would be more intuitive to select the source suite and target group, than to select the source suite and then supply the target group name via dialog box as is done currently.

This should be simple to implement, but would require some additional logic to check that the selections are appropriate to the chosen operation, and to position all selected suites and/or groups properly on the command line of the subprocess that carries out the operation.

Namespace grouping, sorting in Tree view.

We should support grouping tasks within a cycle time in the tree view (list view?), in a similar way to the graph view. We should be able to control which groups to use in the suite.rc file. We should also take the opportunity to allow sorting by other columns such as ETC, Tsubmit.

Job submission log directory for remote tasks.

The suite job submission log directory is currently ignored for remote tasks. The task job script is copied to the user's home directory on the remote platform, via scp, and then executed by the configured job submission method, on the remote platform, using ssh. The job stdout and stderr logs are also dumped in $HOME on the remote platform. This is simply because it is more difficult to check for the existence of, and create if necessary, remote directories, than it is on the local suite host, and the one remote directory whose existence is guaranteed is $HOME.

Consider adding extra commands, pre task execution, to create remote directories if necessary, or using the configured directory and leaving it to the suite owner to ensure that it exists before the suite runs.

Suite hold after next task X.

User request (Phil Andrews): at start-up or runtime, get cylc to hold a suite as soon as a designated task (regardless of its cycle time) finishes. This would be useful for debugging tasks as it allows easier access to task stdout and stderr, via the GUI, than using single task submit outside of the suite.

Currently you can tell a suite at start-up to stop (but not hold) after a designated task finishes, or to hold after all tasks pass a designated cycle time.

Auto generation of passphrases

In order to make the "[cylc] use secure passphrase" option easier to use, I suggest that a random passphrase be generated when a suite is registered and stored in "$CYLC_SUITE_DEF_PATH/.passphrase" (read-protected). Currently you have to set up your own passphrase for each suite in "$HOME/.cylc".
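The suggestion could be sketched like this. The path layout follows the issue; the helper name is made up and this is not cylc's actual registration code:

```python
# Sketch: generate a random passphrase at suite registration and store
# it read-protected in the suite definition directory.
import os
import secrets

def write_passphrase(suite_def_path):
    """Write a random 32-character hex passphrase to .passphrase (mode 0600)."""
    phrase = secrets.token_hex(16)
    path = os.path.join(suite_def_path, ".passphrase")
    # Create the file with owner-only permissions from the start.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(phrase + "\n")
    return phrase
```

Registration would call this once and clients would read the file to authenticate.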

Upgrade the "cylc [util] cycletime" command.

Currently it only handles addition and subtraction of hours, for the original NWP-style hours-of-the-day cycling. It needs to handle days, months, and years too, for the new cycling modules.
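Extended arithmetic could be sketched as follows, using the 10-digit YYYYMMDDHH cycle times from the original cycling scheme. This is an illustration, not the command's actual implementation:

```python
# Sketch: offset a YYYYMMDDHH cycle time by hours, days, months or years.
# Months and years need calendar-aware handling, not fixed-length deltas.
from datetime import datetime, timedelta

FMT = "%Y%m%d%H"

def offset_cycletime(ct, hours=0, days=0, months=0, years=0):
    dt = datetime.strptime(ct, FMT)
    month_index = dt.month - 1 + months + 12 * years
    # Note: replace() raises ValueError for impossible dates,
    # e.g. Jan 31 plus one month; a real implementation must decide
    # how to clamp or reject such results.
    dt = dt.replace(year=dt.year + month_index // 12,
                    month=month_index % 12 + 1)
    dt += timedelta(hours=hours, days=days)
    return dt.strftime(FMT)

print(offset_cycletime("2012010100", hours=-6))  # 2011123118
```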

environment.sh and package library

As discussed by email, I would like to propose the following:

  • Incorporate the functionality of environment.sh into cylc and gcylc, so that the commands can be called with a full path or by setting PATH to point to the containing bin/ directory.
  • Move most of src/ into lib/cylc/ and introduce a cylc python package.
  • Move src/external/ and OrderedDict to lib/.

Restore asynchronous tasks (no cycle time) to cylc.

This was demonstrated in cylc-2, but not yet brought forward to cylc-3. Cylc was (and will be) able to run trees of tasks that are initiated by some random event such as a satellite pass (in parallel if the initiating events come in quick succession).

use of start-up tasks in an asynchronous graph

A start-up task (under [scheduling] -> [[special tasks]]) is a non-cycling synchronous task; i.e. it has an associated cycle time, but it only runs once, at suite start up. This is commonly used for a "prep" task that prepares a suite workspace or similar. If you declare a start-up task and then use it in an asynchronous graph, it results in the following run time error:

Cannot create a consistent method resolution order (MRO) for bases oneoff, async_oneoff

This is because cylc has derived a task type from both the oneoff (synchronous) and async_oneoff classes, which is inconsistent.

Cylc should check for task type inconsistencies like this when validating a suite.

Status for grouped tasks

We should figure out a way to define a status for a group of tasks, so that we can display it in the graph view and potentially other views.

Dependence between suites?

In a parallel trial that feeds off a main operational suite, we typically have one or more clock-triggered tasks that trigger when the operational data is expected to be available; the task implementation then has to wait on the data in case it has not been generated yet. This works perfectly well, but it might be possible to have a new task proxy type that is able to communicate with a remote suite in order to satisfy prerequisites involving remote tasks. This would be more transparent in terms of suite design, and check-and-wait logic would not be needed in the data retrieval tasks (which would no longer need to be clock-triggered either). It would require the remote suite to remember which of its tasks had completed already, unless the communication was two-way so that a task would not be removed from a suite while a downstream remote task still needed its outputs. That is probably not a good idea, though: it could result in an operational suite being held up by a broken remote suite that feeds off it.

Support alternate family trigger semantics

Cylc automatically replaces family names in the graph with trigger expressions involving the family members. E.g. for a family FAM with members m1, m2, m3:

"FAM => foo"  -->  "m1 & m2 & m3 => foo"

i.e. "family succeeded" is equivalent to "all members succeeded". And:

"FAM:fail => foo"  -->  "(m1:fail|m2:fail) & (m1|m1:fail) & (m2|m2:fai ) & (m3|m3:fail) => foo"

i.e. "family failed" is equivalent to "all members finished (either succeeded or failed) and at least one member failed".

If the above meaning of family success or failure is not what you need, then you currently have to use family members in the graph. For instance, to trigger a task after "all members finished and at least one member succeeded":

( m1 | m2 | m3 ) & ( m1 | m1:fail ) & ( m2 | m2:fail ) & ( m3 | m3:fail ) => foo

If cylc allowed us to redefine family success as "all members finished and at least one member succeeded" instead of the default "all members succeeded", then this expression could be written as just "FAM => foo". Similar redefinitions are possible for other meanings of family success and failure (e.g. FAM:fail could mean "all members failed" instead of "at least one member failed").
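The alternate expansion could be generated mechanically. This is an illustrative helper, not cylc code:

```python
# Sketch: expand "FAM => foo" under the alternate semantics
# "all members finished and at least one member succeeded".
def family_finished_one_succeeded(members):
    any_success = " | ".join(members)
    all_finished = " & ".join("( %s | %s:fail )" % (m, m) for m in members)
    return "( %s ) & %s" % (any_success, all_finished)

print(family_finished_one_succeeded(["m1", "m2", "m3"]))
# ( m1 | m2 | m3 ) & ( m1 | m1:fail ) & ( m2 | m2:fail ) & ( m3 | m3:fail )
```

A per-family configuration option could then select which expansion cylc applies when it replaces the family name in the graph.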

A bug - output message does not trigger off and gives strange error.

Hi Hilary, hi everybody else ...

This happens while I'm trying to prepare the c-library ...

Can anyone help?

Thanks a lot,
Luis

  2012/06/24 23:22:59 CRITICAL - [foo%2012010100] -succeeded before all outputs were completed

The corresponding suite.rc is:

    title = "Output file triggering for reverse engineering."
    description = "Test output messaging communication patterns."
    [scheduling]
        initial cycle time = 2012010100
        final cycle time = 2012010200
        [[dependencies]]
            [[[0,6,12,18]]]
                graph = """
                    foo:out1 => baa
                    foo:out2 => bab
                    foo:out3 => bac
                    foo:out4 => bad
                    foo:out5 => bae
                    foo:out6 => baf
                    """
    [runtime]
        [[foo]]
            command scripting = """
                echo This is a model running with hourly output ...
                # use task runtime environment variables here
                sleep 10
                cylc message "$CYLC_TASK_NAME uploaded file set 1 for $CYLC_TASK_CYCLE_TIME"
                sleep 10
                cylc message "$CYLC_TASK_NAME uploaded file set 2 for $CYLC_TASK_CYCLE_TIME"
                sleep 10
                cylc message "$CYLC_TASK_NAME uploaded file set 3 for $CYLC_TASK_CYCLE_TIME"
                sleep 10
                cylc message "$CYLC_TASK_NAME uploaded file set 4 for $CYLC_TASK_CYCLE_TIME"
                sleep 10
                cylc message "$CYLC_TASK_NAME uploaded file set 5 for $CYLC_TASK_CYCLE_TIME"
                sleep 10
                cylc message "$CYLC_TASK_NAME uploaded file set 6 for $CYLC_TASK_CYCLE_TIME"
                sleep 2
                echo model is checkpointing
                """
            [[[outputs]]]
                # use cylc placeholder variables here
                out1 = "<TASK> uploaded file set 1 for <CYLC_TASK_CYCLE_TIME>"
                out2 = "<TASK> uploaded file set 2 for <CYLC_TASK_CYCLE_TIME>"
                out3 = "<TASK> uploaded file set 3 for <CYLC_TASK_CYCLE_TIME>"
                out4 = "<TASK> uploaded file set 4 for <CYLC_TASK_CYCLE_TIME>"
                out5 = "<TASK> uploaded file set 5 for <CYLC_TASK_CYCLE_TIME>"
                out6 = "<TASK> uploaded file set 6 for <CYLC_TASK_CYCLE_TIME>"
        [[baa]]
            command scripting = "echo pp a is triggered ..."
        [[bab]]
            command scripting = "echo pp b is triggered ..."
        [[bac]]
            command scripting = "echo pp c is triggered ..."
        [[bad]]
            command scripting = "echo pp d is triggered ..."
        [[bae]]
            command scripting = "echo pp e is triggered ..."
        [[baf]]
            command scripting = "echo pp f is triggered ..."

gtk access of cli tools

It's a bit disturbing that the cylc CLI tools access gtk functions. For example:

m214089@cinglung% cylc db reg ens /Users/m214089/suites/ensemble
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display
warnings.warn(str(e), _gtk.Warning)

Update graph view after suite shutdown

Currently, the graph view is out-of-date once the suite has finished - task statuses will be shown as they were immediately before shutdown, instead of being removed as in the tree view.

Sub-hourly cycling intervals

This continues on from Issue #2. The internal cycling mechanism has been generalized, and we now have stepped yearly, monthly, and daily cycling, in addition to the original hours-of-the-day cycling.

Task log file name simplification

Hilary,

Currently, the task log root is named using the scheme:

{task}%{cycle}-{ms_since_epoch}

If I remember correctly, while you were at UKMO we discussed shortening the {ms_since_epoch} bit, but I don't think we reached any conclusion.

The main problem with the time stamp is that it makes it quite difficult for people to compare outputs of the same task in 2 different suites.

Now that retry #29 is in, I wonder whether it is possible to change the {ms_since_epoch} bit into the try number. E.g.:

{task}%{cycle}%{try_num}

A quick chat with Dave, and he told me that it would not be good, because a manual retry does not appear to increment the try number. Can the try number be incremented in a manual retry? Or is there a subtle problem we do not understand?

allow suite hold cycle time to be configured in suite.rc

Currently you can specify a hold cycle time on the command line (or via gcylc) at suite start-up. We should be able to do the same in the suite.rc file, as for final cycle time. Holding a suite at a certain point rather than stopping it outright allows users to view final task logs etc. via gcylc.

Task hover-over

Tasks in all three views should have some generic hover-over information such as task name and status - this would really help when zoomed out on large suites in the graph view, and when examining the LED view. Groups should probably show a list of the grouped tasks and their statuses.

gcylc doesn't work well with suites using secure passphrases

If I run a suite using a secure passphrase and then run gcylc there are 2 (presumably related) problems:

  1. gcylc does not display the port so there is no indication that the suite is running.

  2. gcylc issues "Connection Denied" messages approximately once a second until it is closed.

Suite Control GUIs read the suite definition only at startup.

Use of a single suite control GUI instance may span multiple suite runs or restarts, between which changes to the suite.rc file could alter the suite's task content or graph structure. However, the control GUIs currently load the suite.rc file, and hence the task list and graph structure, only when they start up. Consequently the dot panel will display unused names of tasks that no longer exist, and it will not display the names and states of tasks just added to the suite. This is not a major problem, because the dot GUI still displays new tasks in the lower text tree panel, and the graph GUI still displays new "non-graphed" tasks, when they do anything interesting, as disconnected nodes to the right of the main graph. Nevertheless, it would be better to have the control GUIs reload the task list and graph whenever they change.
