speechbrain.utils.distributed module
Guard for running certain operations on main process only
- Authors:
Abdel Heba 2020
Aku Rouhe 2020
Summary
Functions:
- ddp_barrier | In DDP mode, this function will synchronize all processes.
- ddp_init_group | This function will initialize the DDP group if the distributed_launch flag is given on the command line.
- if_main_process | Checks if the current process is the main local process and authorized to run I/O commands.
- main_process_only | Function decorator to ensure the function runs only on the main process.
- run_on_main | Runs a function with DDP (multi-GPU) support.
Reference
- speechbrain.utils.distributed.run_on_main(func, args=None, kwargs=None, post_func=None, post_args=None, post_kwargs=None, run_post_on_main=False)[source]
Runs a function with DDP (multi-GPU) support.
The main function is only run on the main process. A post_func can be specified to run on the non-main processes after func completes. This way, whatever the main function produces can be loaded on the other processes.
- Parameters:
func (callable) – Function to run on the main process.
args (list, None) – Positional args to pass to func.
kwargs (dict, None) – Keyword args to pass to func.
post_func (callable, None) – Function to run after func has finished on the main process. By default, run only on non-main processes.
post_args (list, None) – Positional args to pass to post_func.
post_kwargs (dict, None) – Keyword args to pass to post_func.
run_post_on_main (bool) – Whether to run post_func on main process as well. (default: False)
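A minimal sketch of the typical recipe pattern; prepare_data, load_manifests, and the folder paths are hypothetical placeholders, not part of this module:

    from speechbrain.utils.distributed import run_on_main

    def prepare_data(data_folder, save_folder):
        # Hypothetical preparation step, e.g. downloading a dataset and
        # writing manifest files; it must run exactly once per job.
        ...

    def load_manifests(save_folder):
        # Hypothetical post step: read the files prepare_data produced.
        ...

    # prepare_data runs on the main process only; after it finishes,
    # load_manifests runs on the non-main processes so every process
    # ends up with the prepared data loaded.
    run_on_main(
        prepare_data,
        kwargs={"data_folder": "data/", "save_folder": "results/"},
        post_func=load_manifests,
        post_kwargs={"save_folder": "results/"},
    )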
- speechbrain.utils.distributed.if_main_process()[source]
Checks if the current process is the main local process and authorized to run I/O commands. In DDP mode, the main local process is the one with LOCAL_RANK == 0. In standard (non-distributed) mode, the LOCAL_RANK environment variable is not set, so the process is authorized to run the I/O commands.
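A minimal sketch of guarding a filesystem write with this check; the file name and message are illustrative:

    from speechbrain.utils.distributed import if_main_process

    if if_main_process():
        # Only the main local process writes the log; in DDP mode the
        # other processes skip this block entirely.
        with open("train_log.txt", "a") as log:
            log.write("epoch finished\n")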
- speechbrain.utils.distributed.main_process_only(function)[source]
Function decorator to ensure the function runs only on the main process. This is useful for things like saving to the filesystem or logging to a web address where you only want it to happen on a single process.
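A minimal sketch of the decorator in use; save_tensor and the output path are hypothetical:

    import torch

    from speechbrain.utils.distributed import main_process_only

    @main_process_only
    def save_tensor(path):
        # Runs on the main process only; on other processes the
        # decorated call does nothing, so the file is written once.
        torch.save(torch.zeros(3), path)

    save_tensor("example.pt")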
- speechbrain.utils.distributed.ddp_barrier()[source]
In DDP mode, this function will synchronize all processes. torch.distributed.barrier() will block processes until the whole group enters this function.
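A minimal sketch of using the barrier to order work across processes; the setup step is a hypothetical placeholder:

    from speechbrain.utils.distributed import ddp_barrier, if_main_process

    if if_main_process():
        # Hypothetical one-off setup, e.g. writing a vocabulary file
        # that every process will read afterwards.
        ...

    # In DDP mode, every process blocks here until the whole group
    # arrives, so the setup above is complete before any process
    # reads its output.
    ddp_barrier()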
- speechbrain.utils.distributed.ddp_init_group(run_opts)[source]
This function will initialize the DDP group if the distributed_launch flag is given on the command line.
The DDP group will use the distributed_backend argument to set the DDP communication protocol. The RANK environment variable is used to register each subprocess with the DDP group.
- Parameters:
run_opts (list) – A list of arguments to parse, most often from sys.argv[1:].
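A minimal sketch of the usual initialization at the top of a recipe script, assuming run_opts is obtained from speechbrain.parse_arguments (the common pattern in SpeechBrain recipes); the script would be launched so that the RANK variable is set, e.g. via torch.distributed.launch:

    import sys

    import speechbrain as sb
    from speechbrain.utils.distributed import ddp_init_group

    # parse_arguments reads the command line (including the
    # distributed_launch and distributed_backend options) and returns
    # the run options to forward here.
    hparams_file, run_opts, overrides = sb.parse_arguments(sys.argv[1:])
    ddp_init_group(run_opts)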