mlir-hs-0.1.0.0
Safe Haskell: None
Language: Haskell2010

MLIR.AST.Dialect.Linalg

index

The linalg.index operation returns the iteration index of the immediately enclosing linalg structured operation for the iteration dimension dim. The dim attribute specifies the position of the accessed dimension in the indexing map domain.

Example:

#map = affine_map<(i, j) -> (i, j)>
linalg.generic {indexing_maps = [#map, #map],
                iterator_types = ["parallel", "parallel"]}
  outs(%I, %J : memref<?x?xindex>, memref<?x?xindex>) {
  ^bb0(%arg0 : index, %arg1 : index):
  // Access the outer iteration dimension i
  %i = linalg.index 0 : index
  // Access the inner iteration dimension j
  %j = linalg.index 1 : index
  linalg.yield %i, %j : index, index
}

This may lower to IR resembling:

%0 = dim %I, %c0 : memref<?x?xindex>
%1 = dim %I, %c1 : memref<?x?xindex>
scf.for %i = %c0 to %0 step %c1 {
  scf.for %j = %c0 to %1 step %c1 {
    store %i, %I[%i, %j] : memref<?x?xindex>
    store %j, %J[%i, %j] : memref<?x?xindex>
  }
}

pattern Linalg_Index :: Location -> Type -> Int -> AbstractOperation operand Source #

A pattern for linalg.index.

index :: MonadBlockBuilder m => Type -> Int -> m Value Source #

A builder for linalg.index.
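
The following Haskell sketch mirrors the MLIR example above: a region body that fetches the two iteration indices of the enclosing structured op and yields them with the yield builder documented further down. It is a minimal sketch rather than an example from the package; the import locations and the IndexType constructor are assumptions about mlir-hs and may need adjusting.

-- Minimal sketch; module names and IndexType are assumed, not verified.
import MLIR.AST (Type (IndexType))
import MLIR.AST.Builder (EndOfBlock, MonadBlockBuilder)
import MLIR.AST.Dialect.Linalg (index, yield)

-- Region body that yields the outer (i) and inner (j) iteration indices.
indexPair :: MonadBlockBuilder m => m EndOfBlock
indexPair = do
  i <- index IndexType 0  -- outer iteration dimension i
  j <- index IndexType 1  -- inner iteration dimension j
  yield [i, j]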

softmax

linalg.softmax computes a numerically stable version of softmax.

For a given input tensor and a specified dimension d, compute:

1. the max m along that dimension d
2. f(x) = exp(x - m)
3. the sum of f(x) along dimension d to get l(x)
4. the final result f(x) / l(x)

This is an aggregate linalg operation that further reduces to a small DAG of structured operations.

Warning: Regarding the tiling capabilities, the implementation doesn't check that the provided dimensions make sense. It is the responsibility of the transformation calling the tiling to ensure that the provided sizes for each dimension make sense with respect to the semantics of softmax.

softmax :: MonadBlockBuilder m => [Type] -> Value -> Value -> Int -> m Value Source #

A builder for linalg.softmax.
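
A hedged usage sketch follows. It relies only on the signature above; the assumption that the two Value arguments are the input tensor and the destination-style output init, and that the Int is the reduction dimension d, comes from the prose description rather than from the package documentation.

-- Minimal sketch; module names and argument roles are assumed, not verified.
import MLIR.AST (Type)
import MLIR.AST.Builder (MonadBlockBuilder, Value)
import MLIR.AST.Dialect.Linalg (softmax)

-- Softmax along dimension 1 (the trailing dimension of a rank-2 tensor).
softmaxLastDim :: MonadBlockBuilder m => Type -> Value -> Value -> m Value
softmaxLastDim resultTy input outputInit =
  softmax [resultTy] input outputInit 1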

winograd_filter_transform

The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for the batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor.

The algorithm F(m x m, r x r) is

Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A

The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.

This operator is defined to represent the high level concept of filter transformation (G x g x G^T) in the Winograd Conv2D algorithm.

pattern Linalg_WinogradFilterTransform :: Location -> Type -> operand -> operand -> Int -> Int -> AbstractOperation operand Source #

A pattern for linalg.winograd_filter_transform.

winograd_filter_transform :: MonadBlockBuilder m => Type -> Value -> Value -> Int -> Int -> m Value Source #

A builder for linalg.winograd_filter_transform.
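
A hedged call sketch follows. It assumes (this is not confirmed by the package docs) that the two Int arguments correspond to the op's m and r attributes, in that order; for F(4 x 4, 3 x 3) the transformed tile is (m + r - 1) = 6 along each spatial dimension.

-- Minimal sketch; module names and the meaning of the Int arguments are assumptions.
import MLIR.AST (Type)
import MLIR.AST.Builder (MonadBlockBuilder, Value)
import MLIR.AST.Dialect.Linalg (winograd_filter_transform)

-- Filter transformation G x g x G^T for F(4 x 4, 3 x 3), i.e. m = 4, r = 3.
transformFilter :: MonadBlockBuilder m => Type -> Value -> Value -> m Value
transformFilter resultTy filterG outputInit =
  winograd_filter_transform resultTy filterG outputInit 4 3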

winograd_input_transform

Winograd Conv2D algorithm will convert linalg Conv2D operator into batched matrix multiply. Before the matrix multiply, it will convert filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it will convert output to the final result tensor.

The algorithm F(m x m, r x r) is

Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A

The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.

This operator is defined to represent the high level concept of input transformation (B^T x d x B) in the Winograd Conv2D algorithm.

pattern Linalg_WinogradInputTransform :: Location -> Type -> operand -> operand -> Int -> Int -> AbstractOperation operand Source #

A pattern for linalg.winograd_input_transform.

winograd_input_transform :: MonadBlockBuilder m => Type -> Value -> Value -> Int -> Int -> m Value Source #

A builder for linalg.winograd_input_transform.

winograd_output_transform

Winograd Conv2D algorithm will convert linalg Conv2D operator into batched matrix multiply. Before the matrix multiply, it will convert filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it will convert output to the final result tensor.

The algorithm F(m x m, r x r) is

Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A

The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.

This operator is defined to represent the high level concept of output transformation (A^T x y x A) in the Winograd Conv2D algorithm.

pattern Linalg_WinogradOutputTransform :: Location -> Type -> operand -> operand -> Int -> Int -> AbstractOperation operand Source #

A pattern for linalg.winograd_output_transform.

winograd_output_transform :: MonadBlockBuilder m => Type -> Value -> Value -> Int -> Int -> m Value Source #

A builder for linalg.winograd_output_transform.

yield

linalg.yield is a special terminator operation for blocks inside regions in linalg generic ops. It returns values to the immediately enclosing linalg generic op.

Example:

linalg.yield %f0, %f1 : f32, f32

pattern Linalg_Yield :: Location -> [operand] -> AbstractOperation operand Source #

A pattern for linalg.yield.

yield :: MonadBlockBuilder m => [Value] -> m EndOfBlock Source #

A builder for linalg.yield.
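
A hedged sketch of the corresponding builder call, matching the MLIR example above (two f32 results). The EndOfBlock result reflects that yield must be the final action of a region body; the import locations are assumptions.

-- Minimal sketch; module names are assumed, not verified.
import MLIR.AST.Builder (EndOfBlock, MonadBlockBuilder, Value)
import MLIR.AST.Dialect.Linalg (yield)

-- Terminate a linalg region, returning two scalar results to the enclosing op.
yieldTwo :: MonadBlockBuilder m => Value -> Value -> m EndOfBlock
yieldTwo f0 f1 = yield [f0, f1]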

abs

No numeric casting is performed on the input operand.

abs :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.abs.

add

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.add sequence can be lowered to a linalg.generic with different affine maps for the two operands.

add :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.add.

batch_matmul

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

batch_matmul :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.batch_matmul.

batch_matmul_transpose_a

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

batch_matmul_transpose_a :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.batch_matmul_transpose_a.

batch_matmul_transpose_b

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

batch_matmul_transpose_b :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.batch_matmul_transpose_b.

batch_matvec

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

batch_matvec :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.batch_matvec.

batch_mmt4d

Besides the outermost batch dimension having the same semantics as linalg.batch_matmul, the differences from linalg.batch_matmul in the non-batch dimensions are the same as those between linalg.mmt4d and linalg.matmul. See the description of linalg.mmt4d.

batch_mmt4d :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.batch_mmt4d.

batch_reduce_matmul

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

batch_reduce_matmul :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.batch_reduce_matmul.

batch_vecmat

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

batch_vecmat :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.batch_vecmat.

broadcast

Broadcast the input into the given shape by adding dimensions.

Example:

%bcast = linalg.broadcast
    ins(%input:tensor<16xf32>)
    outs(%init:tensor<16x64xf32>)
    dimensions = [1]

ceil

No numeric casting is performed on the input operand.

ceil :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.ceil.

contract

The semantics of contracting inputs A and B on top of C to produce output D is given by

D[H] = (SUM_{(I ∪ J) \ H} A[I] * B[J]) + C[H]

where I, J, and H are tuples of (pairwise distinct) dimension identifiers - meant to range over valid indices - corresponding to the results of the mandatory (projected permutation) indexing_maps for A, B and C. SUM_{dims} means reduce over all valid indices for the dimensions in the set dims (with I, J, and H treated as _sets_ of dim identifiers).

The iteration space consists of all dimensions in I, J and H, i.e. the domain of each of the affine_maps. Like for einsums, the iteration type of each dim is inferred and is either:

  • reduction: the dim is used to index into A and B but not C. Per the above semantics, these dims will be contracted, i.e. reduced over.
  • parallel: the dim is used to index into C and at least one of A and B, and - deriving from matmul terminology - is either an "M-like" dim (if used on A and C), an "N-like" dim (if used on B and C) or a "batch"-dim (if used to index into A, B, and C).

For example, batch-matmul is given by I = ⟨ b, m, k ⟩, J = ⟨ b, k, n ⟩, H = ⟨ b, m, n ⟩ (with k as a contracting reduction-dimension while m, n and b have parallel iteration-type) and gets represented as:

%D = linalg.contract
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, m, k)>,
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%A, %B: tensor<?x?x?xf32>, tensor<?x?x?xf32>)
    outs(%C: tensor<?x?x?xf32>) -> tensor<?x?x?xf32>

Note that by permuting dims in the affine_maps' results, accesses to the inputs and output can be arbitrarily transposed. Similarly, arbitrary broadcasts can be achieved through leaving out dims on either input operand. For example, the following is a variant of batch-matmul with a transposition applied to A while B's 2D matrix gets broadcast along the batch dim:

linalg.contract
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>,
                     affine_map<(batch, m, n, k) -> (k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%A, %B: memref<?x?x?xf32>, memref<?x?xf32>)
    outs(%C: memref<?x?x?xf32>)

Numeric casting is performed on the operands to the inner multiplication, promoting/truncating them to the same data type as the accumulator/output.

TODO: Allow control over the combining/accumulating op and possibly the multiplication op.

contract :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> [Map] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.contract.
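
A hedged wrapper sketch follows, expressing the batch-matmul contraction from the example above. The three indexing maps and the scalar multiply/accumulate region are taken as arguments so that nothing beyond the documented signature is assumed; the import locations are a guess at mlir-hs' module layout.

-- Minimal sketch; module names are assumed, not verified.
import MLIR.AST (Type)
import MLIR.AST.Builder (MonadBlockBuilder, RegionBuilderT, Value)
import MLIR.AST.Dialect.Affine (Map)
import MLIR.AST.Dialect.Linalg (contract)

-- Batch-matmul as a contraction: maps are the indexing maps for A, B and C.
contractBatchMatmul
  :: MonadBlockBuilder m
  => Type -> [Map] -> Value -> Value -> Value
  -> RegionBuilderT m ()    -- scalar multiply/accumulate body
  -> m Value
contractBatchMatmul resultTy maps a b c body =
  contract [resultTy] [a, b] [c] maps body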

conv_1d_ncw_fcw

Layout:
  • Input: NCW.
  • Kernel: FCW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_1d_ncw_fcw :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_1d_ncw_fcw.

conv_1d_nwc_wcf

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_1d_nwc_wcf :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_1d_nwc_wcf.

conv_1d

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_1d :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_1d.

conv_2d_nchw_fchw

Layout:
  • Input: NCHW.
  • Kernel: FCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_2d_nchw_fchw :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d_nchw_fchw.

conv_2d_nchw_fchw_q

Layout:
  • Input: NCHW.
  • Kernel: FCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

conv_2d_nchw_fchw_q :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d_nchw_fchw_q.

conv_2d_ngchw_fgchw

Layout:
  • Input: NGCHW.
  • Kernel: FGCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_2d_ngchw_fgchw :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d_ngchw_fgchw.

conv_2d_ngchw_gfchw

Layout:
  • Input: NGCHW.
  • Kernel: GFCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_2d_ngchw_gfchw :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d_ngchw_gfchw.

conv_2d_ngchw_gfchw_q

Layout:
  • Input: NGCHW.
  • Kernel: GFCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

conv_2d_ngchw_gfchw_q :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d_ngchw_gfchw_q.

conv_2d_nhwc_fhwc

Layout:
  • Input: NHWC.
  • Kernel: FHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_2d_nhwc_fhwc :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d_nhwc_fhwc.

conv_2d_nhwc_fhwc_q

Layout:
  • Input: NHWC.
  • Kernel: FHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

conv_2d_nhwc_fhwc_q :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d_nhwc_fhwc_q.

conv_2d_nhwc_hwcf

Layout:
  • Input: NHWC.
  • Kernel: HWCF.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_2d_nhwc_hwcf :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d_nhwc_hwcf.

conv_2d_nhwc_hwcf_q

Layout:
  • Input: NHWC.
  • Kernel: HWCF.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

conv_2d_nhwc_hwcf_q :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d_nhwc_hwcf_q.

conv_2d_nhwgc_gfhwc

Layout:
  • Input: NHWGC.
  • Kernel: GFHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_2d_nhwgc_gfhwc :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d_nhwgc_gfhwc.

conv_2d_nhwgc_gfhwc_q

Layout:
  • Input: NHWGC.
  • Kernel: GFHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

conv_2d_nhwgc_gfhwc_q :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d_nhwgc_gfhwc_q.

conv_2d

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_2d :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_2d.

conv_3d_ncdhw_fcdhw

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_3d_ncdhw_fcdhw :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_3d_ncdhw_fcdhw.

conv_3d_ndhwc_dhwcf

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_3d_ndhwc_dhwcf :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_3d_ndhwc_dhwcf.

conv_3d_ndhwc_dhwcf_q

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

conv_3d_ndhwc_dhwcf_q :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_3d_ndhwc_dhwcf_q.

conv_3d

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

conv_3d :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.conv_3d.

copy

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

copy :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.copy.

depthwise_conv_1d_ncw_cw

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions.

depthwise_conv_1d_ncw_cw :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.depthwise_conv_1d_ncw_cw.

depthwise_conv_1d_nwc_wc

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions.

depthwise_conv_1d_nwc_wc :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.depthwise_conv_1d_nwc_wc.

depthwise_conv_1d_nwc_wcm

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

depthwise_conv_1d_nwc_wcm :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.depthwise_conv_1d_nwc_wcm.

depthwise_conv_2d_nchw_chw

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions.

depthwise_conv_2d_nchw_chw :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.depthwise_conv_2d_nchw_chw.

depthwise_conv_2d_nhwc_hwc

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions.

depthwise_conv_2d_nhwc_hwc :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.depthwise_conv_2d_nhwc_hwc.

depthwise_conv_2d_nhwc_hwc_q

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

depthwise_conv_2d_nhwc_hwc_q :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.depthwise_conv_2d_nhwc_hwc_q.

depthwise_conv_2d_nhwc_hwcm

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

depthwise_conv_2d_nhwc_hwcm :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.depthwise_conv_2d_nhwc_hwcm.

depthwise_conv_2d_nhwc_hwcm_q

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

depthwise_conv_2d_nhwc_hwcm_q :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.depthwise_conv_2d_nhwc_hwcm_q.

depthwise_conv_3d_ncdhw_cdhw

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions.

depthwise_conv_3d_ncdhw_cdhw :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.depthwise_conv_3d_ncdhw_cdhw.

depthwise_conv_3d_ndhwc_dhwc

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions.

depthwise_conv_3d_ndhwc_dhwc :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.depthwise_conv_3d_ndhwc_dhwc.

depthwise_conv_3d_ndhwc_dhwcm

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

depthwise_conv_3d_ndhwc_dhwcm :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.depthwise_conv_3d_ndhwc_dhwcm.

div

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.

div :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.div.

div_unsigned

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.

div_unsigned :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.div_unsigned.

dot

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

dot :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.dot.

elemwise_binary

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

elemwise_binary :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.elemwise_binary.

elemwise_unary

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

elemwise_unary :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.elemwise_unary.

erf

No numeric casting is performed on the input operand.

erf :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.erf.

exp

No numeric casting is performed on the input operand.

exp :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.exp.

fill

Works for arbitrary ranked output tensors since the operation performs scalar accesses only and is thus rank polymorphic. Numeric casting is performed on the value operand, promoting it to the same data type as the output.

fill :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.fill.

fill_rng_2d

The operation generates pseudo-random numbers using a linear congruential generator. It provides no guarantees regarding the distribution of the generated random numbers. Instead of generating the random numbers sequentially, it instantiates one random number generator per data element and runs them in parallel. The seed operand and the indices of the data element seed the random number generation. The min and max operands limit the range of the generated random numbers.

fill_rng_2d :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.fill_rng_2d.

floor

No numeric casting is performed on the input operand.

floor :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.floor.

generic

Generic Linalg op form where the key properties of the computation are specified as attributes. In pretty form, a linalg.generic op is written as:

  linalg.generic #trait_attribute
      ins(%A, %B : memref<?x?xf32, stride_specification>,
                   memref<?x?xf32, stride_specification>)
      outs(%C : memref<?x?xf32, stride_specification>)
      attrs = {other-optional-attributes}
      {region}
  

Where #trait_attributes is an alias of a dictionary attribute containing:

  • doc [optional]: a documentation string
  • indexing_maps: a list of AffineMapAttr, one AffineMapAttr per input and output view. Such an AffineMapAttr specifies the mapping between the loops and the indexing within each view.
  • library_call [optional]: a StringAttr containing the name of an external library function that the linalg.generic operation maps to. The external library is assumed to be dynamically linked and no strong compile-time guarantees are provided. In the absence of such a library call, linalg.generic will always lower to loops.
  • iterator_types: an ArrayAttr specifying the type of the enclosing loops. Each element of the list represents an iterator of one of the following types: parallel, reduction, window.

Example: Defining a #matmul_trait attribute in MLIR can be done as follows:

#matmul_accesses = [
  (m, n, k) -> (m, k),
  (m, n, k) -> (k, n),
  (m, n, k) -> (m, n)
]
#matmul_trait = {
  doc = "C(m, n) += A(m, k) * B(k, n)",
  indexing_maps = #matmul_accesses,
  library_call = "linalg_matmul",
  iterator_types = ["parallel", "parallel", "reduction"]
}

And can be reused in multiple places as:

linalg.generic #matmul_trait
    ins(%A, %B : memref<?x?xf32, stride_specification>,
                 memref<?x?xf32, stride_specification>)
    outs(%C : memref<?x?xf32, stride_specification>)
    {other-optional-attributes} {
  ^bb0(%a: f32, %b: f32, %c: f32):
    %d = arith.mulf %a, %b: f32
    %e = arith.addf %c, %d: f32
    linalg.yield %e : f32
}

This may lower to either:

call @linalg_matmul(%A, %B, %C) :
  (memref<?x?xf32, stride_specification>,
   memref<?x?xf32, stride_specification>,
   memref<?x?xf32, stride_specification>)
  -> ()

or IR resembling:

scf.for %m = %c0 to %M step %c1 {
  scf.for %n = %c0 to %N step %c1 {
    scf.for %k = %c0 to %K step %c1 {
      %a = load %A[%m, %k] : memref<?x?xf32, stride_specification>
      %b = load %B[%k, %n] : memref<?x?xf32, stride_specification>
      %c = load %C[%m, %n] : memref<?x?xf32, stride_specification>
      %d = arith.mulf %a, %b: f32
      %e = arith.addf %c, %d: f32
      store %e, %C[%m, %n] : memref<?x?xf32, stride_specification>
    }
  }
}

To allow progressive lowering from the value world (a.k.a. tensor values) to the buffer world (a.k.a. memref values), a linalg.generic op allows mixing tensor and buffer operands and tensor results.

%C = linalg.generic #trait_attribute
  ins(%A, %B : tensor<?x?xf32>, memref<?x?xf32, stride_specification>)
  outs(%C : tensor<?x?xf32>)
  {other-optional-attributes}
  {region}
  -> (tensor<?x?xf32>)

log

No numeric casting is performed on the input operand.

log :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.log.

map

Models elementwise operations on tensors in terms of arithmetic operations on the corresponding elements.

Example:

%add = linalg.map
    ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>)
    outs(%init: tensor<64xf32>)
    (%lhs_elem: f32, %rhs_elem: f32) {
      %0 = arith.addf %lhs_elem, %rhs_elem: f32
      linalg.yield %0: f32
    }

A shortened print form is available. It applies to simple maps with one non-yield operation inside the body.

The example above will be printed as:

%add = linalg.map { arith.addf }
    ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>)
    outs(%init: tensor<64xf32>)

map :: MonadBlockBuilder m => [Type] -> [Value] -> Value -> RegionBuilderT m () -> m Value Source #

A builder for linalg.map.
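
A hedged wrapper sketch for the binary case follows. The scalar body (e.g. an addition followed by a yield) is passed in, so only the documented signature is used; the qualified import avoids the clash with Prelude.map, and the module names are assumptions.

-- Minimal sketch; module names are assumed, not verified.
import MLIR.AST (Type)
import MLIR.AST.Builder (MonadBlockBuilder, RegionBuilderT, Value)
import qualified MLIR.AST.Dialect.Linalg as Linalg

-- Elementwise binary map over two inputs into one init tensor.
mapBinary
  :: MonadBlockBuilder m
  => Type -> Value -> Value -> Value
  -> RegionBuilderT m ()    -- scalar body, e.g. arith.addf + linalg.yield
  -> m Value
mapBinary resultTy lhs rhs initTensor body =
  Linalg.map [resultTy] [lhs, rhs] initTensor body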

matmul

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Broadcast and transpose semantics can be applied by specifying the explicit attribute 'indexing_maps' as shown below. This is a list attribute, so the list must include all the maps if specified.

Example Transpose:

linalg.matmul
    indexing_maps = [
      affine_map<(d0, d1, d2) -> (d2, d0)>, // transpose
      affine_map<(d0, d1, d2) -> (d2, d1)>,
      affine_map<(d0, d1, d2) -> (d0, d1)>
    ]
    ins(%arg0, %arg1 : memref<5x3xf32>, memref<5x7xf32>)
    outs(%arg2: memref<3x7xf32>)

Example Broadcast:

linalg.matmul
    indexing_maps = [
      affine_map<(d0, d1, d2) -> (d2)>, // broadcast
      affine_map<(d0, d1, d2) -> (d2, d1)>,
      affine_map<(d0, d1, d2) -> (d0, d1)>
    ]
    ins(%arg0, %arg1 : memref<3xf32>, memref<5x7xf32>)
    outs(%arg2: memref<3x7xf32>)

Example Broadcast and Transpose:

linalg.matmul
    indexing_maps = [
      affine_map<(d0, d1, d2) -> (d2, d0)>, // transpose
      affine_map<(d0, d1, d2) -> (d2)>,     // broadcast
      affine_map<(d0, d1, d2) -> (d0, d1)>
    ]
    ins(%arg0, %arg1 : memref<5x3xf32>, memref<7xf32>)
    outs(%arg2: memref<3x7xf32>)

matmul :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> Maybe [Map] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.matmul.
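
Two hedged wrapper sketches follow, illustrating the Maybe [Map] argument. The assumption (consistent with the MLIR examples above, but not stated in the package docs) is that Nothing keeps the default indexing maps while Just supplies the full list explicitly; import locations are likewise assumptions.

-- Minimal sketch; module names are assumed, not verified.
import MLIR.AST (Type)
import MLIR.AST.Builder (MonadBlockBuilder, RegionBuilderT, Value)
import MLIR.AST.Dialect.Affine (Map)
import MLIR.AST.Dialect.Linalg (matmul)

-- Plain matmul with the default indexing maps.
matmulDefault
  :: MonadBlockBuilder m
  => Type -> Value -> Value -> Value -> RegionBuilderT m () -> m Value
matmulDefault resultTy a b c body =
  matmul [resultTy] [a, b] [c] Nothing body

-- Matmul with explicit indexing maps (e.g. the transpose/broadcast variants);
-- the attribute is all-or-nothing, so all three maps must be given.
matmulExplicit
  :: MonadBlockBuilder m
  => Type -> [Map] -> Value -> Value -> Value -> RegionBuilderT m () -> m Value
matmulExplicit resultTy maps a b c body =
  matmul [resultTy] [a, b] [c] (Just maps) body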

matmul_transpose_a

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

matmul_transpose_a :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.matmul_transpose_a.

matmul_transpose_b

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

matmul_transpose_b :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.matmul_transpose_b.

matvec

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

matvec :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.matvec.

max

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.max sequence can be lowered to a linalg.generic with different affine maps for the two operands.

max :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.max.

min

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.min sequence can be lowered to a linalg.generic with different affine maps for the two operands.

min :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.min.

mmt4d

Differences from linalg.matmul:

  • The right hand side is transposed, whence the 't' in 'mmt'.
  • The input and output tensors have a 4D shape instead of a 2D shape. They are interpreted as 2D matrices with one level of 2D tile subdivision, whence the 2+2=4 dimensions. The inner tile dimensions are identified with '0' suffixes below, for instance the LHS matrix shape (M, K, M0, K0) reads as: MxK tiles, each of shape M0xK0.

mmt4d :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.mmt4d.

mul

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.mul sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mul :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.mul.

negf

No numeric casting is performed on the input operand.

negf :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.negf.

pooling_nchw_max

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nchw_max :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nchw_max.

pooling_nchw_sum

Layout:
  • Input: NCHW.
  • Kernel: HW.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nchw_sum :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nchw_sum.

pooling_ncw_max

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_ncw_max :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_ncw_max.

pooling_ncw_sum

Layout:
  • Input: NCW.
  • Kernel: W.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_ncw_sum :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_ncw_sum.

pooling_ndhwc_max

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_ndhwc_max :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_ndhwc_max.

pooling_ndhwc_min

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_ndhwc_min :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_ndhwc_min.

pooling_ndhwc_sum

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_ndhwc_sum :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_ndhwc_sum.

pooling_nhwc_max

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nhwc_max :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nhwc_max.

pooling_nhwc_max_unsigned

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nhwc_max_unsigned :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nhwc_max_unsigned.

pooling_nhwc_min

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nhwc_min :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nhwc_min.

pooling_nhwc_min_unsigned

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nhwc_min_unsigned :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nhwc_min_unsigned.

pooling_nhwc_sum

Layout:
  • Input: NHWC.
  • Kernel: HW.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nhwc_sum :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nhwc_sum.

pooling_nwc_max

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nwc_max :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nwc_max.

pooling_nwc_max_unsigned

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nwc_max_unsigned :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nwc_max_unsigned.

pooling_nwc_min

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nwc_min :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nwc_min.

pooling_nwc_min_unsigned

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nwc_min_unsigned :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nwc_min_unsigned.

pooling_nwc_sum

Layout:
  • Input: NWC.
  • Kernel: W.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

pooling_nwc_sum :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.pooling_nwc_sum.

powf

Only applies to floating point values.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.powf sequence can be lowered to a linalg.generic with different affine maps for the two operands.

powf :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.powf.

quantized_batch_matmul

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.

quantized_batch_matmul :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.quantized_batch_matmul.

quantized_matmul

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.

quantized_matmul :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.quantized_matmul.

reciprocal

No numeric casting is performed on the input operand.

reciprocal :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.reciprocal.

reduce

Executes combiner on the dimensions of inputs and returns the reduced result. The dimensions attribute needs to list the reduction dimensions in increasing order.

Example:

%reduce = linalg.reduce
    ins(%input:tensor<16x32x64xf32>)
    outs(%init:tensor<16x64xf32>)
    dimensions = [1]
    (%in: f32, %out: f32) {
      %0 = arith.addf %out, %in: f32
      linalg.yield %0: f32
    }

A shortened print form is available. It applies to simple (non-variadic) reduces with one non-yield operation inside the body, and only if the operation takes %out as the first argument.

The example above will be printed as:

%reduce = linalg.reduce { arith.addf }
    ins(%input:tensor<16x32x64xf32>)
    outs(%init:tensor<16x64xf32>)
    dimensions = [1]

round

No numeric casting is performed on the input operand.

round :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.round.

rsqrt

No numeric casting is performed on the input operand.

rsqrt :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.rsqrt.

select

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.select sequence can be lowered to a linalg.generic with different affine maps for the two operands.

select :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.select.

sqrt

No numeric casting is performed on the input operand.

sqrt :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.sqrt.

square

No numeric casting is performed on the input operand.

square :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.square.

sub

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.sub sequence can be lowered to a linalg.generic with different affine maps for the two operands.

sub :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.sub.

tanh

No numeric casting is performed on the input operand.

tanh :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.tanh.

transpose

Permutes the dimensions of input according to the given permutation.

dim(result, i) = dim(input, permutation[i])

This op actually moves data, unlike memref.transpose, which is a metadata-only operation that produces a transposed "view".

Example:

%transpose = linalg.transpose
    ins(%input:tensor<16x64xf32>)
    outs(%init:tensor<64x16xf32>)
    permutation = [1, 0]

vecmat

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

vecmat :: MonadBlockBuilder m => [Type] -> [Value] -> [Value] -> RegionBuilderT m () -> m Value Source #

A builder for linalg.vecmat.