- datacube.utils.cog.write_cog(geo_im, fname, overwrite=False, blocksize=None, ovr_blocksize=None, overview_resampling=None, overview_levels=None, use_windowed_writes=False, intermediate_compression=False, **extra_rio_opts)
Save an xarray.DataArray to a file in Cloud Optimized GeoTIFF format.
This function is "Dask aware". If geo_im is a Dask array, the output of this function is a Dask Delayed object, which allows multiple images to be saved concurrently across a Dask cluster. If you are not familiar with Dask this can be confusing: no work is performed until the .compute() method is called, so calling this function with a Dask array returns immediately without writing anything to disk.
If you are using Dask to speed up data loading, follow the example below:
```python
# Example: save red band from first time slice to file "red.tif"
xx = dc.load(.., dask_chunks=dict(x=1024, y=1024))
write_cog(xx.isel(time=0).red, "red.tif").compute()

# or compute the input first instead
write_cog(xx.isel(time=0).red.compute(), "red.tif")
```
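The key point of the Dask-aware behaviour is that nothing is written at call time; the write happens only on .compute(). A minimal pure-Python sketch of that contract (no Dask or datacube required; all names here are illustrative stand-ins, not the library's API):

```python
class Delayed:
    """Toy stand-in for dask.Delayed: wraps a zero-argument function."""
    def __init__(self, fn):
        self._fn = fn

    def compute(self):
        return self._fn()

written = []  # records simulated disk writes so we can see when they happen

def fake_write_cog(is_dask, fname):
    """Mimics write_cog's contract: eager write for plain arrays,
    deferred write (a Delayed) for Dask-backed arrays."""
    def do_write():
        written.append(fname)  # the actual (simulated) disk write
        return fname
    if is_dask:
        return Delayed(do_write)  # deferred: nothing written yet
    return do_write()             # eager path: write immediately

d = fake_write_cog(True, "red.tif")
assert written == []               # call returned, but nothing written
assert d.compute() == "red.tif"    # the write happens here
assert written == ["red.tif"]
```

This is why, with real Dask inputs, you can collect several Delayed results and compute them together to write many files concurrently.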
- Parameters
  - overwrite (bool) – True: replace an existing file; False: abort with an IOError exception
  - nodata – Set the nodata flag to this value if supplied; by default nodata is read from the attributes of the input array
  - use_windowed_writes (bool) – Write the image block by block (might be needed for large images)
  - extra_rio_opts – Any other option is passed through to rasterio.open
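To make the parameter list concrete, here is a hedged sketch of a call. The creation options compress and zlevel are rasterio/GDAL GeoTIFF options chosen for illustration, not defaults of this function, and the nodata value is an assumption:

```python
# write_cog forwards unknown keyword arguments (**extra_rio_opts) to
# rasterio as GeoTIFF creation options. The values below are illustrative:
extra_rio_opts = dict(
    compress="deflate",  # lossless DEFLATE compression for the output
    zlevel=6,            # DEFLATE compression level (1 fastest .. 9 smallest)
)

# Hypothetical full call (requires datacube and a DataArray `geo_im` with a CRS):
#   write_cog(geo_im, "out.tif", overwrite=True, nodata=-9999,
#             **extra_rio_opts)
```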
- Returns
  Path to which output was written
- Return type
  dask.Delayed object if input is a Dask array
This function first generates a temporary, uncompressed in-memory TIFF file to speed things up. It then adds overviews to that file, and only then copies it to the final destination with the requested compression settings. This is necessary to produce a compliant COG: the COG standard requires overviews to be placed before the native-resolution data, and a double pass is currently the only way to achieve this.
This means that this function will use about 1.5 to 2 times the memory taken by geo_im.
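That estimate can be worked through for a concrete array size (the raster shape and dtype below are illustrative, not anything the function assumes):

```python
# Peak-memory back-of-envelope for write_cog, per the note above:
# the uncompressed in-memory copy is roughly the size of geo_im itself,
# so expect a peak of about 1.5-2x the array's own footprint.
ny, nx = 10_000, 10_000          # illustrative raster dimensions
itemsize = 4                     # bytes per pixel for float32
nbytes = ny * nx * itemsize      # 400_000_000 bytes = 400 MB for geo_im
peak_low = int(1.5 * nbytes)     # ~600 MB
peak_high = 2 * nbytes           # ~800 MB
print(f"array: {nbytes/1e6:.0f} MB, peak: {peak_low/1e6:.0f}-{peak_high/1e6:.0f} MB")
# -> array: 400 MB, peak: 600-800 MB
```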