Drivers

Chunked storage drivers

json KeyValueStoreBackedChunkDriver : object

Common options supported by all chunked storage drivers.

Extends:
  • TensorStore — Specifies a TensorStore to open/create.

Required members:
driver : string

Driver identifier

Specifies the TensorStore driver.

kvstore : KvStore | KvStoreUrl

Specifies the underlying storage mechanism.
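
For illustration, a minimal spec combining the two required members might look like the following; the "zarr" driver name and the file path are placeholder choices, not defaults:

{
  "driver": "zarr",
  "kvstore": {"driver": "file", "path": "/tmp/dataset/"}
}

The same storage location can equivalently be written as the URL string "file:///tmp/dataset/".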

Optional members:
context : Context

Specifies context resources that augment/override the parent context.

dtype : dtype

Specifies the data type.

rank : integer[0, 32]

Specifies the rank of the TensorStore.

If transform is also specified, the input rank must match. Otherwise, the rank constraint applies to the driver directly.

transform : IndexTransform

Specifies a transform.
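
As a sketch, an identity transform that merely fixes the domain and labels the dimensions could be written as follows (the shape and labels are illustrative; omitting the output member leaves the mapping as the identity):

{
  "transform": {"input_shape": [100, 200], "input_labels": ["y", "x"]}
}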

schema : Schema

Specifies constraints on the schema.

When opening an existing array, specifies constraints on the existing schema; opening will fail if the constraints do not match. Any soft constraints specified in the chunk_layout are ignored. When creating a new array, a suitable schema will be selected automatically based on the specified schema constraints in combination with any driver-specific constraints.
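
For example, the following illustrative constraints require an existing array to have a uint16 data type and the given shape, and guide metadata selection when creating a new one (the values are placeholders):

{
  "schema": {"dtype": "uint16", "domain": {"shape": [1000, 2000]}}
}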

path : string = ""

Additional path within the KvStore specified by kvstore.

This is joined as an additional "/"-separated path component after any path member directly within kvstore. This is supported for backwards compatibility only; the KvStore.path member should be used instead.

Example

"path/to/data"
open : boolean

Open an existing TensorStore. If neither open nor create is specified, open defaults to true.

create : boolean = false

Create a new TensorStore. Specify true for both open and create to permit either opening an existing TensorStore or creating a new TensorStore if it does not already exist.
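
A common open-or-create pattern, sketched with an illustrative zarr spec:

{
  "driver": "zarr",
  "kvstore": "file:///tmp/dataset/",
  "open": true,
  "create": true
}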

delete_existing : boolean = false

Delete any existing data at the specified path before creating a new TensorStore. Requires that create is true, and that open is false.
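
For instance, to discard any existing contents and start fresh (placeholder driver and path):

{
  "driver": "zarr",
  "kvstore": "file:///tmp/dataset/",
  "create": true,
  "delete_existing": true
}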

assume_metadata : boolean = false

Neither read nor write stored metadata. Instead, just assume any necessary metadata based on constraints in the spec, using the same defaults for any unspecified metadata as when creating a new TensorStore. The stored metadata need not even exist. Operations such as resizing that modify the stored metadata are not supported. Requires that open is true and delete_existing is false. This option takes precedence over assume_cached_metadata if that option is also specified.

Warning

This option can lead to data corruption if the assumed metadata does not match the stored metadata, or multiple concurrent writers use different assumed metadata.
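
As a sketch, this option might be paired with a spec that pins down the metadata it would otherwise read, for example via schema constraints (the driver, storage location, and schema values are illustrative):

{
  "driver": "zarr",
  "kvstore": "memory://",
  "open": true,
  "assume_metadata": true,
  "schema": {"dtype": "float32", "domain": {"shape": [256, 256]}}
}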

assume_cached_metadata : boolean = false

Skip reading the metadata when opening. Instead, just assume any necessary metadata based on constraints in the spec, using the same defaults for any unspecified metadata as when creating a new TensorStore. The stored metadata may still be accessed by subsequent operations that need to re-validate or modify the metadata. Requires that open is true and delete_existing is false. The assume_metadata option takes precedence if also specified.

Note

Unlike the assume_metadata option, operations such as resizing that modify the stored metadata are supported (and access the stored metadata).

Warning

This option can lead to data corruption if the assumed metadata does not match the stored metadata, or multiple concurrent writers use different assumed metadata.
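
For instance, a sketch combining this option with an in-memory cache so that metadata fetched by later operations can be reused (the driver, path, and byte limit are illustrative):

{
  "driver": "zarr",
  "kvstore": "file:///tmp/dataset/",
  "open": true,
  "assume_cached_metadata": true,
  "cache_pool": {"total_bytes_limit": 100000000}
}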

cache_pool : ContextResource = "cache_pool"

Cache pool for data.

Specifies or references a previously defined Context.cache_pool. It is normally more convenient to specify a default cache_pool in the context.
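
For example, a pool with a non-zero size limit can be defined once in the context and shared (the driver, path, and 100 MB limit are illustrative):

{
  "context": {"cache_pool": {"total_bytes_limit": 100000000}},
  "driver": "zarr",
  "kvstore": "file:///tmp/dataset/"
}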

metadata_cache_pool : ContextResource

Cache pool for metadata only.

Specifies or references a previously defined Context.cache_pool. If not specified, defaults to the value of cache_pool.
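
As a sketch, metadata can be given its own pool while chunk data uses a larger, size-limited pool; the "cache_pool#metadata" name below is an arbitrary label for a second context resource, and all limits are illustrative:

{
  "context": {
    "cache_pool": {"total_bytes_limit": 100000000},
    "cache_pool#metadata": {"total_bytes_limit": 1000000}
  },
  "metadata_cache_pool": "cache_pool#metadata"
}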

data_copy_concurrency : ContextResource = "data_copy_concurrency"

Specifies or references a previously defined Context.data_copy_concurrency. It is normally more convenient to specify a default data_copy_concurrency in the context.
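
For example, the concurrency limit can be set once in the context (the limit of 4, driver, and path are illustrative):

{
  "context": {"data_copy_concurrency": {"limit": 4}},
  "driver": "zarr",
  "kvstore": "file:///tmp/dataset/"
}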

recheck_cached_metadata : CacheRevalidationBound = "open"

Time after which cached metadata is assumed to be fresh. Cached metadata older than the specified time is revalidated prior to use. The metadata is used to check the bounds of every read or write operation.

Specifying true means that the metadata will be revalidated prior to every read or write operation. With the default value of "open", any cached metadata is revalidated when the TensorStore is opened but is not rechecked for each read or write operation.
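
For instance, to skip metadata revalidation entirely for a dataset whose metadata is known never to change (placeholder driver and path):

{
  "driver": "zarr",
  "kvstore": "file:///tmp/dataset/",
  "recheck_cached_metadata": false
}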

recheck_cached_data : CacheRevalidationBound = true

Time after which cached data is assumed to be fresh. Cached data older than the specified time is revalidated prior to being returned from a read operation. Partial chunk writes are always consistent regardless of the value of this option.

The default value of true means that cached data is revalidated on every read. To enable in-memory data caching, you must both specify a cache_pool with a non-zero total_bytes_limit and also specify false, "open", or an explicit time bound for recheck_cached_data.
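
Putting those two requirements together, a sketch that enables in-memory caching and only revalidates data cached before the TensorStore was opened (the driver, path, and byte limit are illustrative):

{
  "driver": "zarr",
  "kvstore": "file:///tmp/dataset/",
  "cache_pool": {"total_bytes_limit": 100000000},
  "recheck_cached_data": "open"
}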

json CacheRevalidationBound : true | false | "open" | number

Determines under what circumstances cached data is revalidated.

One of:
true

Revalidate cached data at every operation.

false

Assume cached data is always fresh and never revalidate.

"open"

Revalidate cached data older than the time at which the TensorStore was opened.

number

Revalidate cached data older than the specified time in seconds since the unix epoch.
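
For example, an illustrative bound of 1704067200 (2024-01-01 00:00:00 UTC) would treat only data cached before that instant as stale:

"recheck_cached_data": 1704067200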