datacube.utils.dask.start_local_dask
- datacube.utils.dask.start_local_dask(n_workers=1, threads_per_worker=None, mem_safety_margin=None, memory_limit=None, **kw)
Wrapper around the distributed.Client(..) constructor that deals with memory better. It also configures distributed.dashboard.link to go over the proxy when operating from behind JupyterHub.

- Parameters:
  - n_workers (int) – number of worker processes to launch
  - threads_per_worker (Optional[int]) – number of threads per worker, default is as many as there are CPUs
  - memory_limit (Union[str, int, None]) – maximum memory to use across all workers
  - mem_safety_margin (Union[str, int, None]) – bytes to reserve for the rest of the system, only applicable if memory_limit= is not supplied

Note: if memory_limit= is supplied, it will be parsed and divided equally between workers.
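A minimal usage sketch is shown below. It assumes the function returns the distributed.Client it wraps (per the description above) and that byte-size strings such as "2GB" are accepted for mem_safety_margin; the exact accepted string formats are an assumption.

```python
# Minimal sketch: launch a local dask cluster via datacube's helper and use
# the returned client. Assumes the wrapped distributed.Client is returned
# and that byte-size strings like "2GB" are accepted (assumption).
from datacube.utils.dask import start_local_dask

client = start_local_dask(
    n_workers=4,              # four worker processes
    threads_per_worker=2,     # two threads per worker
    mem_safety_margin="2GB",  # keep ~2GB free for the rest of the system
)

print(client)                 # summary of the local cluster
print(client.dashboard_link)  # dashboard URL (proxied when behind JupyterHub)

# ... run dask-backed datacube computations here ...

client.close()                # shut down the local cluster when done
```

Since mem_safety_margin only takes effect when memory_limit is not supplied, pass one or the other but not both.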