So I believe that when writing a background worker, it's rather likely there are some configuration options for it. Naturally we want to describe those in a YANG model and have them show up in CDB. Naturally we want the background worker to react to changes immediately (which requires a CDB subscriber). And naturally we don't want to bore the developer of a background worker with the mundane details of implementing such a subscriber.
background_process already listens to CDB changes, first and foremost to the enabled
leaf for the background worker, which controls whether the background worker child process should run or not. This value, or a change to it, is consumed by the supervisor, which simply kills the child process if it should be stopped or starts it if it should be running. We never pass it as configuration to the background worker child process. Other configuration options, which actually affect the behaviour of the worker process, obviously need to be passed. We already have a second CDB subscriber that listens to changes to the python-vm logging level and notifies a thread in the child process, which then sets the appropriate logging level. So while the log level configuration is passed to the child process, it is passed to a thread that we inject rather than to the bg function that the developer of the background worker implements.
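The enabled-leaf handling amounts to a reconcile step in the supervisor: compare desired state (the leaf) with actual state (is the child alive?) and start or kill accordingly. A minimal sketch with plain multiprocessing — `supervise` and `worker` are my names for illustration, not background_process's actual API:

```python
import multiprocessing
import time

def supervise(enabled, worker, proc):
    """Reconcile desired state (the `enabled` leaf) with actual state
    (whether the child process is alive). Returns the current child."""
    alive = proc is not None and proc.is_alive()
    if enabled and not alive:
        proc = multiprocessing.Process(target=worker)
        proc.start()
    elif not enabled and alive:
        proc.terminate()   # the supervisor just kills the child...
        proc.join()        # ...and reaps it
        proc = None
    return proc

def worker():
    # stand-in for the real bg function
    while True:
        time.sleep(1)
```

The supervisor calls this each time the subscriber reports a change to the leaf; calling it again with an unchanged value is a no-op.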
How should we go about this?
Alternative 1: Leave it to the bg function implementer
We do nothing and let the implementer of the bg function set up their own CDB subscriber.
Alternative 2: Send over queue to bg function
We implement a queue and accept input parameters to the supervisor process describing which configuration paths are interesting. The supervisor creates a CDB subscriber that subscribes to those interesting config paths and then sends any updates across the queue, which should then be emptied by the bg function.
This might impose limitations on how a bg function must be written; for example, we might force the use of a while loop around reads from this queue. Maybe the bg function implementer wants their freedom?
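If we did force such a loop, it could look like this minimal sketch — `apply_config_updates` and `bg_function` are hypothetical names, not part of background_process:

```python
import multiprocessing
import queue

def apply_config_updates(config_q, config, timeout=1.0):
    """Block up to `timeout` for one (path, value) update, then drain
    any further pending updates without blocking."""
    try:
        path, value = config_q.get(timeout=timeout)
    except queue.Empty:
        return config
    config[path] = value
    while True:
        try:
            path, value = config_q.get_nowait()
        except queue.Empty:
            return config
        config[path] = value

def bg_function(config_q):
    # the forced shape: a while loop that interleaves config intake
    # with the actual work
    config = {'/bgworker/period': 10}
    while True:
        apply_config_updates(config_q, config)
        # ... do the actual work using config ...
```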
It should be noted that multiprocessing queues can be select()ed upon, which gives the implementer some more options for avoiding busy waiting.
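Concretely, the selecting can be done with multiprocessing.connection.wait; note that getting at the readable end of a multiprocessing.Queue via its `_reader` attribute is a CPython implementation detail, so this is a sketch rather than a supported recipe:

```python
import multiprocessing
import multiprocessing.connection

def wait_for_update(config_q, timeout):
    """Sleep until the queue has data or `timeout` elapses; no busy wait.
    config_q._reader is a Connection object (CPython internal)."""
    ready = multiprocessing.connection.wait([config_q._reader],
                                            timeout=timeout)
    return bool(ready)
```

This lets the bg function multiplex the config queue with other file descriptors or timers instead of polling in a tight loop.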
Alternative 3: background_process injects config listener thread and magically updates values
As per above, the supervisor establishes a queue and CDB subscriber, sending updates over the queue but the handling of those updates in the child process is done by a thread that background_process injects. What the bg function developer defines is a mapping between YANG paths and Python variables, something like:
```python
config_map = {
    '/bgworker/period': cfg.period,
    '/bgworker/foo': cfg.foo
}
```
So we have a cfg
object that is updated by the config listener thread and that we can read from the main background worker function. Variable updates are atomic, thus threadsafe... I think, except perhaps when the underlying type is 64 bits!? (In CPython the GIL actually makes a single attribute assignment atomic regardless of width, but relying on that detail feels fragile.) Not sure how we could have locks here... maybe the cfg object could be magical, because we don't want the bg function developer to think about locks.
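One way to make the cfg object "magical" is to hide a lock inside attribute access, so the listener thread and the bg function can never tear a value and neither ever touches a lock explicitly. A sketch of that idea — this Config class is my invention, not existing background_process code:

```python
import threading

class Config:
    """Thread-safe attribute bag: every read and write of a config value
    happens under a lock, so neither the listener thread nor the bg
    function has to handle locking themselves."""

    def __init__(self, **defaults):
        # bypass our own __setattr__ while bootstrapping
        object.__setattr__(self, '_lock', threading.Lock())
        object.__setattr__(self, '_values', dict(defaults))

    def __getattr__(self, name):
        # only called for names not found normally, i.e. config values
        with self._lock:
            try:
                return self._values[name]
            except KeyError:
                raise AttributeError(name)

    def __setattr__(self, name, value):
        # the listener thread writes through here on every CDB update
        with self._lock:
            self._values[name] = value
```

The listener thread does `cfg.period = new_value` and the bg function reads `cfg.period`; both go through the lock without either side knowing it exists.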
Thoughts? @mzagozen