Comments (4)
This is strange: Why does this problem not occur when I use the fork method? It's still a proper subprocess, so shouldn't I get the same error ("daemonic processes are not allowed to have children")?
Maybe because the MultiProcDataset instance is already created and initialized, and thus has already created its subprocesses, and with fork, it will not actually copy (serialize + deserialize) the MultiProcDataset, but rather just use the same instance?
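This difference (fork inherits the live parent state, spawn re-creates it by re-importing the module) can be checked with a small standalone script, independent of RETURNN; a sketch for POSIX systems:

```python
import multiprocessing as mp

# Module-level state, mutated only under the __main__ guard below.
state = {"initialized_in_parent": False}


def report(conn):
    # Send back whether this child sees the parent's mutation.
    conn.send(state["initialized_in_parent"])
    conn.close()


if __name__ == "__main__":
    state["initialized_in_parent"] = True
    for method in ("fork", "spawn"):
        ctx = mp.get_context(method)
        parent_conn, child_conn = ctx.Pipe()
        p = ctx.Process(target=report, args=(child_conn,))
        p.start()
        # fork -> True (child inherits the mutated state),
        # spawn -> False (child re-imports the module; the guarded
        # mutation is not re-run there)
        print(method, parent_conn.recv())
        p.join()
```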
I just verified this: Yes, this is exactly what happens. And then, MultiProcDataset._collect_single_seq here:
def _collect_single_seq(self, seq_idx: int) -> Optional[DatasetSeq]:
    if seq_idx >= self._num_seqs:
        return None
    worker_idx = seq_idx % self.num_workers
    worker = self._worker_parent_conns[worker_idx]
    worker.send(("get_data_seq", {"seq_idx": seq_idx // self.num_workers}))
    msg, data = worker.recv()
    assert msg == "data_seq"
    if data is None:
        return None
    assert isinstance(data, DatasetSeq)
    data.seq_idx = seq_idx
    return data
will use the existing Connection objects, which seems to work fine even after the fork.
But this code is potentially very dangerous, as it will break when there are multiple workers all spamming into the same connection. Also, this whole code only works because MultiProcDataset was already initialized and the subprocesses were started, which is not necessarily true. So we just had some luck that it was working fine so far.
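For reference, the index mapping in _collect_single_seq above is plain round-robin sharding: worker seq_idx % num_workers serves the global index seq_idx under the worker-local index seq_idx // num_workers. A standalone illustration (not RETURNN code):

```python
def shard(seq_idx, num_workers):
    """Map a global sequence index to (worker index, worker-local index)."""
    return seq_idx % num_workers, seq_idx // num_workers


num_workers = 3
for seq_idx in range(7):
    worker_idx, local_idx = shard(seq_idx, num_workers)
    print(f"global {seq_idx} -> worker {worker_idx}, local index {local_idx}")
```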
In any case, using fork is not a good idea, because of #1494 and all the other potential problems that come with fork. So 32328cd should be undone, i.e. we should use something different than the default (fork). But the question is, how do we fix the described issue here ("daemonic processes are not allowed to have children")?
from returnn.
Btw, for reference, what does this error mean? From the Python docs:
When a process exits, it attempts to terminate all of its daemonic child processes.
Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits.
Which is true...
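The restriction is easy to reproduce in isolation; a minimal sketch (not RETURNN code) where a daemonic child tries to start its own child:

```python
import multiprocessing as mp


def _grandchild():
    pass


def _daemon_worker(conn):
    # Process.start() raises an AssertionError when called
    # from inside a daemonic process.
    try:
        mp.Process(target=_grandchild).start()
        conn.send("child started (unexpected)")
    except AssertionError as exc:
        conn.send(str(exc))
    conn.close()


if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe()
    p = mp.Process(target=_daemon_worker, args=(child_conn,), daemon=True)
    p.start()
    print(parent_conn.recv())  # -> daemonic processes are not allowed to have children
    p.join()
```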
Note, for non-daemonic processes, we are responsible for cleaning up. For daemonic processes, this is the logic of the multiprocessing module:
def _exit_function(info=info, debug=debug, _run_finalizers=_run_finalizers,
                   active_children=process.active_children,
                   current_process=process.current_process):
    ...
    for p in active_children():
        if p.daemon:
            info('calling terminate() for daemon %s', p.name)
            p._popen.terminate()

    for p in active_children():
        info('calling join() for process %s', p.name)
        p.join()
    ...

atexit.register(_exit_function)
terminate will send SIGTERM.
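That terminate() delivers SIGTERM on POSIX can be observed via the exit code: a negative exitcode -N means the child was killed by signal N. A minimal sketch:

```python
import multiprocessing as mp
import signal
import time


def _sleeper():
    time.sleep(60)


if __name__ == "__main__":
    p = mp.Process(target=_sleeper)
    p.start()
    p.terminate()  # sends SIGTERM on POSIX
    p.join()
    print(p.exitcode)  # -> -15, i.e. -signal.SIGTERM
```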
One option: We can introduce our own multiprocessing_context type, like "spawn_no_daemon", and implement something like this:
class NoDaemonProcess(multiprocessing.Process):
    # make 'daemon' attribute always return False
    def _get_daemon(self):
        return False

    def _set_daemon(self, value):
        pass

    daemon = property(_get_daemon, _set_daemon)
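Combining this trick with the spawn start method, such a context could look roughly like the following sketch; it builds on CPython's multiprocessing.context.SpawnProcess / SpawnContext base classes and is not the actual RETURNN implementation:

```python
import multiprocessing.context


class NonDaemonicSpawnProcess(multiprocessing.context.SpawnProcess):
    """A spawn-based process whose 'daemon' flag is always False."""

    @property
    def daemon(self):
        return False

    @daemon.setter
    def daemon(self, value):
        # Ignore attempts (e.g. by multiprocessing.Pool or a data
        # loader) to mark this process as daemonic.
        pass


class NonDaemonicSpawnContext(multiprocessing.context.SpawnContext):
    Process = NonDaemonicSpawnProcess
```

A pool or data loader can then be handed such a context (e.g. via a multiprocessing_context argument); its workers stay non-daemonic and are thus allowed to start children of their own.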
from returnn.
So, this daemonic subprocess problem should be fixed now via the introduced NonDaemonicSpawnContext, which we use by default now.
However, the other issue with the config is not yet fixed.
from returnn.
We had mostly the same problem also with MultiProcDataset, see #1384.
from returnn.