# neuroglancer_precomputed Driver¶

The neuroglancer_precomputed driver provides access to Neuroglancer Precomputed format volumes backed by any supported Key-Value Storage Layer. It supports reading, writing, and creating new volumes.

Multiscale volumes are supported, but each scale must be opened individually.

## JSON Schema¶

object with members:

driver
Required
“neuroglancer_precomputed”
context

Specifies context resources that augment/override the parent context.

dtype

Specifies the data type.

rank
integer [0, ∞)

Specifies the rank of the TensorStore.
transform

Specifies a transform.

kvstore
Required

Specifies the underlying storage mechanism.

path
string (default is “”)

Path within the Key-Value Store specified by kvstore to the multiscale volume.

Example
"path/to/array"

open
boolean

Open an existing TensorStore. If neither open nor create is specified, defaults to true.

create
boolean (default is false)

Create a new TensorStore. Specify true for both open and create to permit either opening an existing TensorStore or creating a new TensorStore if it does not already exist.

delete_existing
boolean (default is false)

Delete any existing data at the specified path before creating a new TensorStore. Requires that create is true, and that open is false.

allow_metadata_mismatch
boolean (default is false)

Allow a mismatch between the existing metadata and the metadata specified for creating a new TensorStore. Requires that create is true.

cache_pool
Context resource (default is “cache_pool”)

Specifies or references a previously defined cache_pool context resource. It is normally more convenient to specify a default cache_pool in the context.

data_copy_concurrency
Context resource (default is “data_copy_concurrency”)

Specifies or references a previously defined data_copy_concurrency context resource. It is normally more convenient to specify a default data_copy_concurrency in the context.

recheck_cached_metadata
Cache revalidation bound (default is “open”)

Time after which cached metadata is assumed to be fresh. Cached metadata older than the specified time is revalidated prior to use. The metadata is used to check the bounds of every read or write operation.

Specifying true means that the metadata will be revalidated prior to every read or write operation. With the default value of “open”, any cached metadata is revalidated when the TensorStore is opened but is not rechecked for each read or write operation.

recheck_cached_data
Cache revalidation bound (default is true)

Time after which cached data is assumed to be fresh. Cached data older than the specified time is revalidated prior to being returned from a read operation. Partial chunk writes are always consistent regardless of the value of this option.

The default value of true means that cached data is revalidated on every read. To enable in-memory data caching, you must both specify a cache_pool context resource with a non-zero total_bytes_limit and also specify false, “open”, or an explicit time bound for recheck_cached_data.
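For example, a spec along the following lines (illustrative bucket and path) enables in-memory caching by combining a cache_pool context resource that has a non-zero total_bytes_limit with a relaxed recheck_cached_data bound:

```json
{
  "driver": "neuroglancer_precomputed",
  "kvstore": {"driver": "gcs", "bucket": "my-bucket"},
  "path": "path/to/volume",
  "context": {"cache_pool": {"total_bytes_limit": 100000000}},
  "recheck_cached_data": "open"
}
```

With "recheck_cached_data": "open", cached chunks are revalidated only when the TensorStore is opened, so repeated reads of the same region are served from memory.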

scale_index
integer [0, ∞)

Zero-based index of the scale to open or create.

When opening an existing volume, this member may be specified to explicitly indicate the scale to open. Otherwise, the first scale matching the resolution constraint, if any, specified by the scale_metadata member, is chosen. To create a new scale, this must either be left unspecified or equal the number of existing scales (which is also the index that will be assigned to the new scale).

multiscale_metadata
Scale-independent metadata.

Specifies the scale-independent metadata of a new volume exactly as in the info file, except that not all members are required. Required when creating a new multiscale volume. When opening an existing volume or creating a new scale within an existing multiscale volume, specifies constraints on the existing metadata.

object with members:

type
“image” | “segmentation”

Specifies the type of volume. This has no bearing on TensorStore itself, but is used by Neuroglancer to pick the default layer type. Required when creating a new multiscale volume.

data_type
“uint8” | “uint16” | “uint32” | “uint64” | “float32”

Specifies the data type. Required when creating a new multiscale volume.

num_channels
integer (-∞, ∞)

Number of channels. Required when creating a new volume.
scale_metadata
Per-scale metadata.

Specifies the per-scale metadata of a new volume as in the info file, except that not all members are required, and the chunk_size member for specifying a single chunk size takes the place of the chunk_sizes member in the info file. Required when creating a new scale. When opening an existing scale, serves to select the scale to open and constrains the existing metadata.

object with members:

key
string

Specifies the scale key (relative to path). When opening an existing scale, may be specified to select the scale by key. When creating a new scale, if not specified, defaults to “<xres>_<yres>_<zres>”, where [xres, yres, zres] is the resolution.

size
array[3] of integer [0, ∞)

Voxel dimensions of the volume (XYZ order). Required when creating a new scale.

Example
[500, 500, 500]

voxel_offset
array[3] of integer (-∞, ∞) (default is [0, 0, 0])

Voxel origin of the volume (XYZ order). If specified, size must also be specified.

chunk_size
array[3] of integer [1, ∞)

Chunk dimensions (XYZ order). Required when creating a new scale. When opening an existing scale with multiple supported chunk_sizes, may be specified to select the chunk size to use. Otherwise, the first supported chunk size is used.

Example
[500, 500, 500]

resolution
array[3] of number (-∞, ∞)

Voxel size in nanometers (XYZ order). When opening an existing scale, may be specified to select the scale by resolution. Required when creating a new scale.

encoding
“raw” | “jpeg” | “compressed_segmentation”

Specifies the chunk encoding. Required when creating a new scale.

jpeg_quality
integer [0, 100] (default is 75)

JPEG encoding quality. Only applies if encoding is “jpeg”. The quality is specified using the IJG (Independent JPEG Group) [0, 100] recommended scale, with 0 being the worst quality (smallest file size) and 100 the best quality (largest file size).

compressed_segmentation_block_size
array[3] of number (-∞, ∞)

Block size for compressed segmentation encoding. If encoding is “compressed_segmentation”, this is required when creating a new scale and serves as a constraint when opening an existing scale. Otherwise, it must not be specified.

sharding
Sharding | null

Specifies use of the unsharded (indicated by null) or sharded format. When creating a new scale, if not specified, the unsharded format is used.
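As an illustration of the default key derivation described above, the key is formed by joining the resolution components with underscores. A minimal sketch (the helper name is hypothetical, not part of any API):

```python
def default_scale_key(resolution):
    # Hypothetical helper: derives the default scale key
    # "<xres>_<yres>_<zres>" from the [xres, yres, zres] resolution.
    return "_".join(str(r) for r in resolution)

print(default_scale_key([4, 4, 40]))  # -> 4_4_40
```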

## Sharding JSON Schema¶

Specifies the sharded format within the scale_metadata property.

object with members:

@type
Required
“neuroglancer_uint64_sharded_v1”

preshift_bits
Required
integer [0, 64]

Number of low-order bits of the chunk ID that do not contribute to the hashed chunk ID.

hash
Required
“identity” | “murmurhash3_x86_128”

Specifies the hash function used to map chunk IDs to shards.

minishard_bits
Required
integer [0, 64]

Number of bits of the hashed chunk ID that determine the minishard number. The number of minishards within each shard is equal to $$2^{\mathrm{minishard\_bits}}$$. The minishard number is equal to bits [0, minishard_bits) of the hashed chunk ID.

shard_bits
Required
integer [0, 64]

Number of bits of the hashed chunk ID that determine the shard number. The number of shards is equal to $$2^{\mathrm{shard\_bits}}$$. The shard number is equal to bits [minishard_bits, minishard_bits+shard_bits) of the hashed chunk ID.

minishard_index_encoding
“raw” | “gzip” (default is “raw”)

Specifies the encoding of the minishard index. Normally “gzip” is a good choice.

data_index_encoding
“raw” | “gzip” (default is “raw”)

Specifies the encoding of the data chunks. Normally “gzip” is a good choice, unless the volume uses “jpeg” encoding.
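To make the bit layout concrete, here is a minimal Python sketch (the function name is hypothetical) of how a chunk ID maps to shard and minishard numbers when hash is “identity”; the “murmurhash3_x86_128” case differs only in the hashing step:

```python
def shard_location(chunk_id, preshift_bits, minishard_bits, shard_bits):
    """Map a uint64 chunk ID to (shard, minishard) numbers, assuming the
    "identity" hash.  Illustrative sketch of the scheme described above."""
    hashed = chunk_id >> preshift_bits  # drop low-order non-contributing bits
    # Minishard number: bits [0, minishard_bits) of the hashed chunk ID.
    minishard = hashed & ((1 << minishard_bits) - 1)
    # Shard number: bits [minishard_bits, minishard_bits + shard_bits).
    shard = (hashed >> minishard_bits) & ((1 << shard_bits) - 1)
    return shard, minishard

print(shard_location(90, preshift_bits=1, minishard_bits=2, shard_bits=3))
# -> (3, 1)
```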

## Example JSON specifications¶

Example: Opening the first (or only) scale of an existing multiscale volume.

```json
{
  "driver": "neuroglancer_precomputed",
  "kvstore": {"driver": "gcs", "bucket": "my-bucket"},
  "path": "path/to/volume"
}
```

Example: Opening an existing scale by index.

```json
{
  "driver": "neuroglancer_precomputed",
  "kvstore": {"driver": "gcs", "bucket": "my-bucket"},
  "path": "path/to/volume",
  "scale_index": 1
}
```

Example: Opening an existing scale by resolution.

```json
{
  "driver": "neuroglancer_precomputed",
  "kvstore": {"driver": "gcs", "bucket": "my-bucket"},
  "path": "path/to/volume",
  "scale_metadata": {
    "resolution": [4, 4, 40]
  }
}
```

Example: Opening an existing scale by key.

```json
{
  "driver": "neuroglancer_precomputed",
  "kvstore": {"driver": "gcs", "bucket": "my-bucket"},
  "path": "path/to/volume",
  "scale_metadata": {
    "key": "4_4_40"
  }
}
```

Example: Creating a new scale in an existing multiscale volume.

```json
{
  "driver": "neuroglancer_precomputed",
  "kvstore": {"driver": "gcs", "bucket": "my-bucket"},
  "path": "path/to/volume",
  "scale_metadata": {
    "size": [40000, 50000, 10000],
    "encoding": "compressed_segmentation",
    "compressed_segmentation_block_size": [8, 8, 8],
    "chunk_size": [64, 64, 64],
    "resolution": [8, 8, 40]
  }
}
```

Example: Creating a new multiscale volume.

```json
{
  "driver": "neuroglancer_precomputed",
  "kvstore": {"driver": "gcs", "bucket": "my-bucket"},
  "path": "path/to/volume",
  "multiscale_metadata": {
    "type": "segmentation",
    "data_type": "uint64",
    "num_channels": 1
  },
  "scale_metadata": {
    "size": [40000, 50000, 10000],
    "encoding": "compressed_segmentation",
    "compressed_segmentation_block_size": [8, 8, 8],
    "chunk_size": [64, 64, 64],
    "resolution": [8, 8, 40]
  }
}
```


## Limitations¶

Resizing is not supported.

Warning

Writing to volumes in the sharded format is supported, but because updating a shard requires rewriting it entirely, write operations may be very inefficient unless special care is taken:

1. It is most efficient to group writes by shard.

2. The temporary memory required to write a shard is 2 to 3 times the size of the shard. It is therefore advisable to keep shards as small as possible (while still avoiding an excessive number of objects in the underlying Key-Value Store).
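The first point can be sketched as follows: a hypothetical helper (not part of TensorStore's API) that buckets pending chunk writes by shard number, assuming the "identity" hash, so that all writes touching one shard can be issued together:

```python
from collections import defaultdict

def group_chunks_by_shard(chunk_ids, preshift_bits, minishard_bits, shard_bits):
    """Group uint64 chunk IDs by shard number (identity hash assumed).
    Illustrative sketch: issuing each group's writes together avoids
    rewriting the same shard repeatedly."""
    groups = defaultdict(list)
    for cid in chunk_ids:
        hashed = cid >> preshift_bits
        shard = (hashed >> minishard_bits) & ((1 << shard_bits) - 1)
        groups[shard].append(cid)
    return dict(groups)

print(group_chunks_by_shard(range(8), 0, 1, 1))
# -> {0: [0, 1, 4, 5], 1: [2, 3, 6, 7]}
```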