datacube.utils.dask.start_local_dask(n_workers=1, threads_per_worker=None, mem_safety_margin=None, memory_limit=None, **kw)

A wrapper around the distributed.Client(..) constructor that handles memory limits more carefully.

It also configures the client to go through the proxy when operating from behind JupyterHub.

Parameters:

  • n_workers (int) – number of worker processes to launch

  • threads_per_worker (Optional[int]) – number of threads per worker; defaults to the number of CPUs

  • memory_limit (Union[str, int, None]) – maximum memory to use across all workers

  • mem_safety_margin (Union[str, int, None]) – bytes to reserve for the rest of the system; only applies when memory_limit= is not supplied


If memory_limit= is supplied, it is parsed and divided equally between the workers.
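The memory-splitting behaviour described above can be sketched in plain Python. This is a simplified illustration, not the library's implementation: the `parse_bytes` helper below is a hypothetical stand-in for the size-string parsing that dask provides (e.g. `dask.utils.parse_bytes`), shown here so the example is self-contained.

```python
def parse_bytes(s):
    """Convert a size string like '8GB' (or a plain int) to a byte count.

    Simplified stand-in for dask's size-string parsing; decimal units only.
    """
    if isinstance(s, int):
        return s
    units = {"kb": 10**3, "mb": 10**6, "gb": 10**9, "tb": 10**12, "b": 1}
    s = s.strip().lower()
    # Try longer suffixes first so 'gb' is not mistaken for 'b'
    for suffix, factor in sorted(units.items(), key=lambda kv: -len(kv[0])):
        if s.endswith(suffix):
            return int(float(s[: -len(suffix)]) * factor)
    return int(s)


def per_worker_memory_limit(memory_limit, n_workers=1):
    """Divide the total memory_limit equally between n_workers."""
    return parse_bytes(memory_limit) // n_workers


# e.g. a total limit of '8GB' across 4 workers gives each worker 2 GB
print(per_worker_memory_limit("8GB", n_workers=4))  # 2000000000
```

So a call such as `start_local_dask(n_workers=4, memory_limit="8GB")` would, per the behaviour documented above, give each worker a quarter of the total limit.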