
Comments (12)

ocaisa commented on September 27, 2024

There's a long issue discussion on this at #122 (which hopefully includes a solution for you!)


ocaisa commented on September 27, 2024

See #122 (comment)


ocaisa commented on September 27, 2024

Ah, that is now in the docs at https://jobqueue.dask.org/en/latest/advanced-tips-and-tricks.html#how-to-handle-job-queueing-system-walltime-killing-workers
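
For reference, the recipe on that page amounts to giving each worker a --lifetime slightly shorter than the queue walltime and letting adaptive mode replace retiring workers. A minimal sketch (the keyword for passing worker CLI options may differ between dask-jobqueue versions; recent releases use worker_extra_args, older ones used extra):

from dask_jobqueue import SLURMCluster

# Workers retire themselves before SLURM kills the job; adapt() starts replacements.
cluster = SLURMCluster(
    walltime='01:00:00',
    cores=1,
    memory='4GB',
    worker_extra_args=['--lifetime', '55m', '--lifetime-stagger', '4m'],
)
cluster.adapt(minimum=2, maximum=10)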


guillaumeeb commented on September 27, 2024

@berkgercek, hopefully the links provided by @ocaisa give you at least a workaround.

Other than that, I agree that in a simple case with adaptive mode, new Workers should be started if some are lost. We should look at how this is handled in the distributed repository.


jacobtomlinson commented on September 27, 2024

Just a note: you should be able to get the scheduler logs with cluster.get_logs().
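
For example (a quick sketch; the exact keys of the returned mapping may vary between distributed versions):

logs = cluster.get_logs()  # mapping of component name -> log text
print(logs.get('Scheduler', ''))  # scheduler log, if present under this key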


matrach commented on September 27, 2024

It seems that it should be possible to implement the respawning workaround with just cluster.scale(n) by calling:

current = len(cluster.plan)  # remember the intended number of workers
cluster.scale(jobs=len(cluster.scheduler.workers))  # shrink to the workers actually connected
cluster.scale(current)  # scale back up to the original target

However, in the following code responsible for scaling:
https://github.com/dask/distributed/blob/a83d8727567dd3cdc7c6abdc7eda26d1029cd9de/distributed/deploy/spec.py#L512-L524

there is a mismatch between worker names (when using processes > 1):

  • set(self.worker_spec) has keys without a suffix: {'SLURMCluster-611', 'SLURMCluster-631'}
  • v["name"] for v in self.scheduler_info["workers"].values() has a suffix e.g., {'SLURMCluster-592-1', 'SLURMCluster-592-0'}

This mismatch also seems to be responsible for making adapt remove live workers instead of dead ones (not_yet_launched in the linked code).
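
To make the mismatch concrete, here is a hypothetical helper (not part of distributed) that recovers the spec key from a worker name by stripping the per-process suffix, assuming the naming convention shown above:

def spec_key(worker_name):
    # 'SLURMCluster-592-0' -> 'SLURMCluster-592' (strip the '-<process>' suffix)
    return worker_name.rsplit('-', 1)[0]

launched = {spec_key(v['name']) for v in cluster.scheduler_info['workers'].values()}
not_yet_launched = set(cluster.worker_spec) - launched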


maawoo commented on September 27, 2024

I'm running into exactly the same issue. The code that I'm using to call SLURMCluster can be found here. It already includes the workaround mentioned above.

Here is the log output from a recent run with walltime='00:20:00' and '--lifetime', '15m' (otherwise the same settings as the code I linked above). I removed the first lines, which just report the startup of the workers.

2023-11-09 12:34:54,303 - distributed.core - INFO - Starting established connection to tcp://172.18.10.2:41919
2023-11-09 12:46:30,084 - distributed.worker - INFO - Closing worker gracefully: tcp://172.18.1.15:42229. Reason: worker-lifetime-reached
2023-11-09 12:46:30,091 - distributed.worker - INFO - Stopping worker at tcp://172.18.1.15:42229. Reason: worker-lifetime-reached
2023-11-09 12:46:30,097 - distributed.nanny - INFO - Closing Nanny gracefully at 'tcp://172.18.1.15:39303'. Reason: worker-lifetime-reached
2023-11-09 12:46:30,102 - distributed.core - INFO - Connection to tcp://172.18.10.2:41919 has been closed.
2023-11-09 12:46:30,105 - distributed.nanny - INFO - Worker closed
2023-11-09 12:46:32,110 - distributed.nanny - ERROR - Worker process died unexpectedly
2023-11-09 12:46:32,314 - distributed.nanny - INFO - Closing Nanny at 'tcp://172.18.1.15:39303'. Reason: nanny-close-gracefully
2023-11-09 12:46:47,153 - distributed.worker - INFO - Closing worker gracefully: tcp://172.18.1.15:40559. Reason: worker-lifetime-reached
2023-11-09 12:46:47,163 - distributed.worker - INFO - Stopping worker at tcp://172.18.1.15:40559. Reason: worker-lifetime-reached
2023-11-09 12:46:47,166 - distributed.nanny - INFO - Closing Nanny gracefully at 'tcp://172.18.1.15:39757'. Reason: worker-lifetime-reached
2023-11-09 12:46:47,170 - distributed.core - INFO - Connection to tcp://172.18.10.2:41919 has been closed.
2023-11-09 12:46:47,174 - distributed.nanny - INFO - Worker closed
2023-11-09 12:46:49,178 - distributed.nanny - ERROR - Worker process died unexpectedly
2023-11-09 12:46:49,374 - distributed.nanny - INFO - Closing Nanny at 'tcp://172.18.1.15:39757'. Reason: nanny-close-gracefully
2023-11-09 12:49:50,151 - distributed.worker - INFO - Closing worker gracefully: tcp://172.18.1.15:35215. Reason: worker-lifetime-reached
2023-11-09 12:49:50,157 - distributed.worker - INFO - Stopping worker at tcp://172.18.1.15:35215. Reason: worker-lifetime-reached
2023-11-09 12:49:50,161 - distributed.nanny - INFO - Closing Nanny gracefully at 'tcp://172.18.1.15:37005'. Reason: worker-lifetime-reached
2023-11-09 12:49:50,166 - distributed.core - INFO - Connection to tcp://172.18.10.2:41919 has been closed.
2023-11-09 12:49:50,169 - distributed.nanny - INFO - Worker closed
2023-11-09 12:49:52,173 - distributed.nanny - ERROR - Worker process died unexpectedly
2023-11-09 12:49:52,355 - distributed.nanny - INFO - Closing Nanny at 'tcp://172.18.1.15:37005'. Reason: nanny-close-gracefully
2023-11-09 12:50:15,984 - distributed.worker - INFO - Closing worker gracefully: tcp://172.18.1.15:41529. Reason: worker-lifetime-reached
2023-11-09 12:50:15,990 - distributed.worker - INFO - Stopping worker at tcp://172.18.1.15:41529. Reason: worker-lifetime-reached
2023-11-09 12:50:15,994 - distributed.nanny - INFO - Closing Nanny gracefully at 'tcp://172.18.1.15:37845'. Reason: worker-lifetime-reached
2023-11-09 12:50:15,997 - distributed.core - INFO - Connection to tcp://172.18.10.2:41919 has been closed.
2023-11-09 12:50:16,000 - distributed.nanny - INFO - Worker closed
2023-11-09 12:50:18,004 - distributed.nanny - ERROR - Worker process died unexpectedly
2023-11-09 12:50:18,169 - distributed.nanny - INFO - Closing Nanny at 'tcp://172.18.1.15:37845'. Reason: nanny-close-gracefully
2023-11-09 12:50:58,702 - distributed.worker - INFO - Closing worker gracefully: tcp://172.18.1.15:40349. Reason: worker-lifetime-reached
2023-11-09 12:50:58,708 - distributed.worker - INFO - Stopping worker at tcp://172.18.1.15:40349. Reason: worker-lifetime-reached
2023-11-09 12:50:58,712 - distributed.nanny - INFO - Closing Nanny gracefully at 'tcp://172.18.1.15:34527'. Reason: worker-lifetime-reached
2023-11-09 12:50:58,716 - distributed.core - INFO - Connection to tcp://172.18.10.2:41919 has been closed.
2023-11-09 12:50:58,720 - distributed.nanny - INFO - Worker closed
2023-11-09 12:51:00,724 - distributed.nanny - ERROR - Worker process died unexpectedly
2023-11-09 12:51:00,948 - distributed.nanny - INFO - Closing Nanny at 'tcp://172.18.1.15:34527'. Reason: nanny-close-gracefully
2023-11-09 12:51:00,950 - distributed.dask_worker - INFO - End worker

If I then continue my work and end up calling .compute() somewhere, a new SLURM job and new Dask workers are started. So at least I (or my students) don't end up accidentally processing on the cluster's login node...


maawoo commented on September 27, 2024

Spawning of new workers fails with:

cluster.adapt(minimum_jobs=1, 
              maximum_jobs=2, 
              worker_key=lambda state: state.address.split(':')[0], 
              interval='10s')

It does work, however, when using the following:

cluster.adapt(minimum=1, 
              maximum=8,
              worker_key=lambda state: state.address.split(':')[0],  
              interval='10s')

In my case each job spawns 4 workers so maximum=8 is equal to maximum_jobs=2.

Removing worker_key and interval results in an endless loop of workers being spawned and killed, as described in #498.


guillaumeeb commented on September 27, 2024

@matrach you are right about the mismatch in the distributed code in the specific case where we want to scale down not yet launched workers. However, I'm not sure how this relates to this problem where we want to respawn dead workers?

@maawoo the link to your code is dead for me. Regarding the second part, cluster.adapt(minimum_jobs=1, maximum_jobs=2) will be translated into cluster.adapt(minimum=4, maximum=8), which probably causes the issue.

It's important to stress that adaptive mode is known to have issues with dask-jobqueue when starting several Worker processes per job.

Getting back to the original problem, I just tested the following code using dask 2023.6.0:

import time
import numpy as np
from dask_jobqueue import SLURMCluster as Cluster
from dask import delayed
from dask.distributed import Client, as_completed

cluster = Cluster(walltime='00:01:00', cores=1, memory='4gb', account="campus")
cluster.adapt(minimum=2, maximum=4) # FIX

client = Client(cluster)

And I see new workers being created as soon as older ones die, without performing any computations.

I'm going to close this issue as the more specific problems are covered by other ones.


matrach commented on September 27, 2024

@matrach you are right about the mismatch in the distributed code in the specific case where we want to scale down not yet launched workers. However, I'm not sure how this relates to this problem where we want to respawn dead workers?

I've never mentioned such a case. The issue was (is?) that the variable not_yet_launched doesn't contain what its name suggests: with the naming mismatch it always contained all of the workers. Even without the mismatch, it would contain both "not yet launched" and "already dead" workers.

            not_yet_launched = set(self.worker_spec) - {
                v["name"] for v in self.scheduler_info["workers"].values()
            }
            while len(self.worker_spec) > n and not_yet_launched:
                del self.worker_spec[not_yet_launched.pop()]

But set.pop() is allowed to return an arbitrary element, so an implementation that happens to start from the newest entries might always remove "not yet launched" workers instead of "already dead" ones.
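
As an illustration only, a scale-down that prefers jobs with no connected worker (queued or already dead) could look like the sketch below; it reuses the same hypothetical suffix-stripping as above and is not the actual distributed implementation:

def names_to_drop(worker_spec, scheduler_info, target):
    # Spec keys that currently have at least one connected worker (suffixes stripped).
    connected = {v['name'].rsplit('-', 1)[0] for v in scheduler_info['workers'].values()}
    # Prefer entries with no connected worker: queued or already dead jobs.
    candidates = [name for name in worker_spec if name not in connected]
    surplus = max(len(worker_spec) - target, 0)
    return candidates[:surplus]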


guillaumeeb commented on September 27, 2024

I've never mentioned such a case

What I meant was that this is another issue, isn't it?


matrach commented on September 27, 2024

This is related when using adapt: the code above may kill the newly spawned workers instead of removing dead ones, which can lead to thrashing.

