Machine Learning Systems, Part 2 — Tensors
What is a tensor?
A tensor is a general mathematical object that relates objects in a vector space. There are many equivalent definitions of a tensor and appearances of tensors in mathematics. For the purpose of deep learning, a tensor is a mathematical object that represents a multi-dimensional array of values.
Tensors contain sequences of numbers (these might be real numbers, integers, boolean values, or other types) arranged in a structure. This structure includes axes — that is, given an element (as specified by an index), we might move to other elements along one of the axes, as long as we stay within the bounds of the tensor.
Tensors have various properties which define them and the operations that can be performed on them. These properties include:
- Order (sometimes referred to as rank or dimensionality) — this can be thought of as the "number of axes." A vector has order one, a matrix has order two, and a single scalar (e.g. 2.718) has order zero, by convention.
- Shape (sometimes referred to as "dimension," confusingly) — refers to the size, or number of elements, of each axis of a tensor, typically expressed as a tuple¹ of per-axis sizes. A vector with 8 elements has shape $(8)$, an $m \times n$ matrix has shape $(m, n)$, and the scalar 8 is typically described as having an empty shape $()$². Axis sizes are ordered; a tensor with shape $(m, n)$ and a tensor of shape $(n, m)$ are structurally different. The number of values in a tensor's shape is equal to its order/rank.
- When describing the shape of matrices, the convention is to include the number of rows before the number of columns; an $m \times n$ matrix has $m$ rows ($m$ elements along its first axis) and $n$ columns ($n$ elements along its second axis).
- Type — mathematically, the values of which tensors are composed are typically real numbers. In deep learning, with software and hardware constraints, it's typical for tensors to contain floating point values, integers, or other datatypes.
How do we refer to elements in, or index into, tensors? While there are many ways to notate particular elements of tensors³, in deep learning, indices are typically written as subscripts, and a tuple of indices, one along each axis, corresponds to a particular element.
Consider a vector $v$ with elements $v_0, v_1, \dots$ To refer to the first element in $v$, we would write $v_0$ (with zero-indexing⁴), and to refer to the third element, $v_2$. In tensors of order greater than one, such as matrices, each index corresponds to indexing along a particular axis. Consider a matrix $A$ with shape $(m, n)$:
As is the convention with labeling shapes of matrices, we index by row, then column: $A_{0,0}$ (first row, first column), $A_{1,1}$ (second row, second column), and $A_{0,2}$ (first row, third column).
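As a minimal sketch of this indexing convention in numpy (the values and shapes below are arbitrary illustrations, not taken from the text):

```python
import numpy as np

# An order-1 tensor (vector); the values here are arbitrary.
v = np.array([4.0, 7.0, 1.0, 8.0])
print(v[0])  # first element, v_0 -> 4.0
print(v[2])  # third element, v_2 -> 1.0

# An order-2 tensor (matrix) with shape (2, 3).
A = np.array([[1, 2, 3],
              [4, 5, 6]])
print(A[0, 0])  # first row, first column   -> 1
print(A[1, 1])  # second row, second column -> 5
print(A[0, 2])  # first row, third column   -> 3
```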
Why think in tensors, and what do we need them to do?
Tensors are useful for many reasons:
- They provide a mathematically convenient way to represent dense, multi-dimensional quantities, which represent human modalities well.
- We have hardware machinery (e.g. GPUs) which can efficiently operate on tensors.
As we build neural networks, we've coalesced around a schema of operations on tensors. Deep neural models ultimately rely on a surprisingly small set of such operations.
Some Tensor Operations in Deep Learning
Elementwise Operations
Elementwise operations are exactly that: for a given arithmetic operator (e.g. addition), we apply that operation to each pair of corresponding elements in the tensor operands.
Consider two tensors $A$ and $B$, both with shape $(m, n)$:
An elementwise binary function $f$, if applied to $A$ and $B$, yields an output matrix whose entries are $f(A_{i,j}, B_{i,j})$.
In the case of addition, $f(a, b) = a + b$, so the output matrix has entries $A_{i,j} + B_{i,j}$,
which is also an $m \times n$ matrix. Elementwise multiplication is also known as the Hadamard product ($\odot$). Elementwise addition between two matrices is frequently written as $A + B$; elementwise multiplication is sometimes written as $A \odot B$ (to avoid confusion with matrix multiplication, discussed below).
How do elementwise operations affect tensor shape and order? Elementwise operations combine corresponding elements from two or more tensors of the same shape: the resulting tensor is thus the same shape as the input tensors, and order is unaffected.
We'll also discuss broadcasting below, which relaxes the assumption that input tensors in elementwise operations must be of the same shape and allows us to apply elementwise operations across axes in more elegant ways.
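A brief numpy sketch of elementwise operations, assuming two small arbitrary matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])      # shape (2, 2)
B = np.array([[10.0, 20.0], [30.0, 40.0]])  # shape (2, 2)

C = A + B   # elementwise addition
H = A * B   # elementwise (Hadamard) multiplication, not matrix multiplication
print(C)                 # [[11. 22.] [33. 44.]]
print(H)                 # [[ 10.  40.] [ 90. 160.]]
print(C.shape, H.shape)  # (2, 2) (2, 2) -- shape and order are unchanged
```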
Reductions
A reduction is a tensor operation that combines values along a particular axis of a tensor. As part of this operation, the size of the axis in question is "reduced" to 1 (or the axis is removed from the shape entirely if not explicitly kept).
Consider a 3-dimensional tensor of shape $(a, b, c)$. If we were to reduce over the second axis of this tensor, which has size $b$, the shape of the resulting tensor would be $(a, c)$ (or $(a, 1, c)$ if we chose to keep the reduced dimension), as all elements along the second axis have been combined into a single value for each combination of indices along the other axes.
The reduction function used to combine elements of a tensor along a certain axis can be single or multi-valued. Some examples of reduction functions include:
- arithmetic operations — for example, a sum-based reduction ($\sum$) would add up all values along an axis, for each combination of indices along the other axes;
- min/max — which summarize all values along an axis using their minimum or maximum values;
- argmin/argmax – return the indices of the minimum or maximum values along an axis;
- mean/median or other summary statistics;
- logical operations — combining values along an axis with and or or.
A reduction may be performed over arbitrarily many axes — in these cases, all of the values along each axis are combined, and each axis over which the reduction was performed is reduced away in the resulting tensor's shape.
How do reductions affect tensor shape and order? As we've seen above, reductions squash the axes along which they are performed, removing them from the tensor's shape, and keeping the size of all other axes invariant. As such, reductions reduce the order of a tensor by the number of axes over which the reduction is performed.
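A minimal numpy sketch of reductions over one or more axes (the tensor here is an arbitrary example):

```python
import numpy as np

T = np.arange(24).reshape(2, 3, 4)  # an order-3 tensor of shape (2, 3, 4)

s = T.sum(axis=1)                   # sum-reduce over the second axis
print(s.shape)                      # (2, 4) -- the reduced axis is removed

k = T.sum(axis=1, keepdims=True)    # keep the reduced axis with size 1
print(k.shape)                      # (2, 1, 4)

m = T.max(axis=(0, 2))              # reduce over multiple axes at once
print(m.shape)                      # (3,)

i = T.argmax(axis=2)                # indices of maxima along the last axis
print(i.shape)                      # (2, 3)
```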
Contractions
Tensor contractions refer to a general class of operations that combine tensors or distinct axes of a single tensor. In particular, contractions are sums over products of pairs of indices.
Tensor contractions allow us to define a different kind of multiplication than elementwise multiplication⁵, and are a generalization of matrix multiplication. Consider two vectors $u$ and $v$, each with $n$ elements. Performing a tensor contraction on these two vectors involves identifying a common pair of indices across the tensors; in this case, both tensors have only one axis (they're of order one) and therefore a single index each, $u_i$ and $v_i$. The pair of indices is thus over the first and only axis.
The contraction of $u$ and $v$ is written as $\sum_i u_i v_i$ — it is, in the case of vectors, the dot product! The sum over the products of the pairs would be written as $u_0 v_0 + u_1 v_1 + \dots + u_{n-1} v_{n-1}$.
We write one of the vectors vertically for conventional and notational reasons³. We have effectively iterated over $u_i$ and $v_i$; the index $i$ is our contraction index. We have "zipped together" the corresponding terms for those indices via a product, then summed the resulting products together.
We can extend tensor contractions to matrix multiplication as well. In this case, the pairs of indices would involve the 2-tuples $(i, j)$ and $(j, k)$ that describe elements of each matrix; the contraction index is $j$. Making this concrete, consider two matrices $A$ and $B$ of shapes $(m, n)$ and $(n, p)$, respectively.
In this contraction, we'll select our contraction index to be the second index of matrix $A$ and the first index of matrix $B$ (both written $j$ above). That is, we will iterate over $i$, $j$, and $k$, and the pairs of indices will be those that match based on the value of $j$: elements $A_{i,j}$ and $B_{j,k}$ are paired for every matching $j$. Zipping those pairs via products and summing over $j$ gives the entries of the result, $C_{i,k} = \sum_j A_{i,j} B_{j,k}$.
Indeed, this tensor contraction between two order-2 tensors is just matrix multiplication! That said, tensor contractions give us ways to think about higher dimensional generalizations of matrix multiplication to tensors of orders beyond two.
Tensor contractions may occur within a tensor (combining two axes with paired indices via the products of their elements summed together), or between two tensors (as we've seen in the case of the dot and matrix product).⁶
In deep learning, tensor contractions are usually tensor products; they are generally between two different tensors; one set of indices from each tensor is used to construct the index pair.
How do contractions affect tensor shape and order? For two input tensors of orders $m$ and $n$ over which a tensor contraction is performed, the order of the resulting tensor is $m + n - 2$. The contraction removes one "unit" of order by "zipping" indices via an elementwise product, and another unit by summing over those zipped indices.⁷ We can illustrate this formula with the two canonical examples of contractions over two tensors which we've seen here, then with new higher-order examples (a short numpy sketch follows the list):
- Vector dot product — two input vectors each have order one. Accordingly, the order of the resulting tensor is $1 + 1 - 2 = 0$, which is indeed the case; the dot product of two vectors is a scalar, a zero-order tensor!
- Matrix multiplication — two input matrices each have order two. The resulting matrix has order $2 + 2 - 2 = 2$, which is the same order as the original matrices; indeed, matrix multiplication results in an output matrix of order two. In each case, the inner axes (that is, the second axis of the left-hand tensor and the first axis of the right-hand tensor), which must be equal in size, are removed. An $m \times n$ matrix multiplied by an $n \times p$ matrix produces an $m \times p$ matrix.
- Higher-order tensor products — for tensors of higher orders, $m + n - 2$ still applies. Multiplying two order-3 ("3-dimensional") tensors over a single shared axis, the resulting tensor will have order $3 + 3 - 2 = 4$, with the contracted axes vanishing from the final tensor's shape.
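A short numpy sketch of these contractions and the order bookkeeping; np.einsum is used to make the contracted indices explicit (the shapes are arbitrary illustrations):

```python
import numpy as np

# Vector dot product: order 1 + 1 - 2 = 0, a scalar.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
print(np.einsum("i,i->", u, v), u @ v)  # 32.0 32.0

# Matrix multiplication: order 2 + 2 - 2 = 2; (m, n) x (n, p) -> (m, p).
A = np.ones((2, 3))
B = np.ones((3, 4))
C = np.einsum("ij,jk->ik", A, B)
print(C.shape)                          # (2, 4)

# Higher-order contraction over one shared axis: order 3 + 3 - 2 = 4.
X = np.ones((2, 3, 4))
Y = np.ones((4, 5, 6))
Z = np.einsum("abc,cde->abde", X, Y)    # contract the shared size-4 axis
print(Z.shape)                          # (2, 3, 5, 6)
```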
Other Operations Affecting Shape and Ordering
Besides the tensor operations we've seen, other operations can affect the shape of tensors and the ordering of the data that composes them. While these operations might not change the underlying data itself, they affect the layout and organization of tensors, including the relative sizes of axes. Other operations can add new data to tensors by changing their shapes.
Our initial exploration of these operations will assume that tensors are represented contiguously in memory. In practice, this means that the elements of a tensor are sequentially stored, with no gaps. We'll revisit this assumption in the future.
Reshape
Reshape operations take an input tensor and a target shape to which to change the tensor's dimensions. A reshape preserves the ordering of the elements of the tensor. The number of elements implied by the output shape must equal the number of elements in the input tensor, which implicitly restricts the set of possible output shapes.
Concretely, we can describe the space of valid reshape operations for a matrix with shape $(m, n)$. To start, consider a target shape of $(2m, n/2)$, which has twice as many rows and half as many columns. The original matrix has $m \times n$ elements, and the resulting matrix also has $2m \times n/2 = m \times n$ elements. More generally, the set of valid target shapes for a reshape to another order-2 tensor is described by $(m \cdot k, n/k)$ or $(m/k, n \cdot k)$ for some positive integer $k$ for which the divisions are exact. These two reshape operations are illustrated below, with the output matrices defined in terms of indices from the input matrix. The reshape to $(m/k, n \cdot k)$ can be thought of as "flattening" $k$ rows of the input into one, while the reshape into $(m \cdot k, n/k)$ can be thought of as splitting each row into $k$ parts, then stacking them.
Notice that in the operations above, the relative ordering of the elements is kept invariant — only the delineation of axes changes, affecting the number of elements along a particular axis.
Reshape operations can also result in tensors of different orders. We consider a few illustrative cases:
- Flattening — an order-$d$ tensor with $n$ total elements can be flattened into an order-1 tensor of shape $(n)$. This operation turns the tensor into a one-dimensional list of elements.
- Increasing order — given an order-2 tensor of shape $(m, n)$, we can transform such a tensor into, for example, an order-3 tensor of shape $(m/k, k, n)$, which would break up the first dimension (of size $m$) into $k$-sized chunks along a new second dimension. We can both increase and decrease tensor order with reshape operations, arbitrarily (see the sketch below).
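A minimal numpy sketch of these reshapes, using a small $(4, 3)$ matrix as an arbitrary example (numpy's default row-major layout preserves the element ordering):

```python
import numpy as np

A = np.arange(12).reshape(4, 3)   # a (4, 3) matrix with 12 elements

# Reshape to (2, 6): pairs of rows are flattened into one; still 12 elements.
print(A.reshape(2, 6).shape)      # (2, 6)

# Flattening: an order-2 tensor becomes an order-1 tensor of shape (12,).
print(A.reshape(-1).shape)        # (12,)

# Increasing order: split the first axis (size 4) into 2 chunks of size 2.
print(A.reshape(2, 2, 3).shape)   # (2, 2, 3) -- an order-3 tensor

# A target shape with a different element count, e.g. (5, 3), would raise an error.
```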
Indexing
Indexing refers to accessing particular subsets of tensor elements. We might access a particular element, but we might also want to access a subset of the original tensor's values along one or more axes (itself a lower-order tensor).
The indexing notation shown below has become relatively standard across many tensor library implementations, with minor differences. We index into a tensor using square brackets, as if we're indexing into a multi-dimensional array⁸. Each position in our indexing specification corresponds to an axis, which is typically zero-indexed given that we're thinking about tensors as arrays when we represent them computationally. We use specific index values to index along a particular axis.
Slicing refers to indexing along an axis using a range. That is, along one or more axes, a slice includes a contiguous range of indices. For example, given a tensor with shape $(a, b, c)$, we might consider an indexing slice to be $[:, 0:6, :]$, which takes a "slice" of size six (from indices 0 to 5) along the second axis of the tensor. The other two axes are "spanned," meaning we want to access all values along those axes.
In mathematics, we often use Einstein notation for reasoning about axes and indices of tensors.⁹ We'll use some of the conventional syntax from numpy here, as it's most easily applicable to implementations.
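A small numpy sketch of slicing, assuming an arbitrary order-3 tensor:

```python
import numpy as np

T = np.arange(4 * 16 * 8).reshape(4, 16, 8)  # an arbitrary (4, 16, 8) tensor

# Take a slice of size six (indices 0 through 5) along the second axis;
# the first and third axes are spanned in full.
S = T[:, 0:6, :]
print(S.shape)    # (4, 6, 8)

# Indexing with a specific value along every axis yields a single element.
print(T[1, 2, 3])
```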
Operations that Modify Shape or Data
There are also operations which combine or expand tensors, introducing new data. These either combine multiple distinct tensors, or they expand a tensor by copying it in some fashion. In what follows, we'll look at the transposition, concatenation and tiling operations.
Transposition, Reordering, and Permutation
The reordering operation swaps one or more axes of a tensor. Reordering changes the order in which the elements of the tensor are stored in memory, which is something we'll revisit in lectures that follow.
Given a tensor with $d$ dimensions, the reordering operation takes a permutation of those $d$ axes and reorders the tensor's axes according to that permutation. Typically, axes are referred to in a zero-indexed fashion.
As an example, let's consider an order-4 tensor $T$ whose axes we want to reorder. Axes in this case would be notated and indexed as $(0, 1, 2, 3)$. If we wanted to reorder the tensor with a full reversal, we would reorder with the axis argument $(3, 2, 1, 0)$, which implies that what was axis 3 becomes axis 0, what was axis 2 becomes axis 1, and so on.
If we wanted to reorder only the middle two axes of $T$, we might consider a reorder with the axis argument $(0, 2, 1, 3)$. Here, we make the second axis of the new tensor the third axis (index 2) of the old tensor, and the third axis of the new tensor the second axis of the old tensor (index 1).
We can also consider a more basic case: transposition. In most settings, transposing a tensor refers to a reorder operation on an order-2 tensor in which the two axes are flipped. This is equivalent to a reorder operation with the axis argument $(1, 0)$, which swaps the axes of the input matrix. A transpose operation is illustrated below.
Notice that the ordering of the indices is also swapped as it relates to mapping elements of the new tensor to those of the old. An element $A_{i,j}$ in the old tensor becomes $A^{\top}_{j,i}$ in the new tensor.
In general, when we transpose axes of a tensor, the indices that refer to elements of the old tensor are also swapped. Consider the full-reversal reorder of our order-4 tensor above. The input tensor is described by elements like $T_{i,j,k,l}$, with four index subscripts. An element $T_{i,j,k,l}$ of the old tensor corresponds to the element at index $(l, k, j, i)$ of the new tensor; the indices undergo the same permutation as the axes.
Similarly, with our alternative indexing notation discussed previously, reordering swaps the axes we reference when we denote a specific set of values in a position in our indexing array.
Permutation is another way of describing reordering. With reordering we're permuting the axes of an input tensor.
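A brief numpy sketch of transposition and axis permutation, with arbitrary shapes:

```python
import numpy as np

# Order-2 transpose: swap the two axes; element A[i, j] becomes A.T[j, i].
A = np.arange(6).reshape(2, 3)
print(A.T.shape)                             # (3, 2)

# Order-4 reorders, expressed as axis permutations.
T = np.ones((2, 3, 4, 5))
print(np.transpose(T, (3, 2, 1, 0)).shape)   # full reversal -> (5, 4, 3, 2)
print(np.transpose(T, (0, 2, 1, 3)).shape)   # swap the middle axes -> (2, 4, 3, 5)
```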
Concatenation
Tensor concatenation refers to combining multiple tensors together along a certain axis. We might concatenate only two tensors, or arbitrarily many; we concatenate in a certain direction, as described by that axis.
Consider the example below, in which we have two tensors, $A$ and $B$. If we concatenate along the first axis, we simply stack the respective elements of each tensor along it. If we concatenate along the second axis, we concatenate in that direction.
Note that concatenated tensors must have the same shape except along the concatenation dimension, since we're appending values along that dimension and the resulting tensor must be fully filled.
Concatenation can be thought of as an opposite operation to slicing a tensor or splitting a tensor into chunks.
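A minimal numpy sketch of concatenation along different axes, with arbitrary inputs:

```python
import numpy as np

A = np.ones((2, 3))
B = np.zeros((2, 3))

print(np.concatenate([A, B], axis=0).shape)  # (4, 3) -- stacked along the first axis
print(np.concatenate([A, B], axis=1).shape)  # (2, 6) -- stacked along the second axis

# Shapes must match on every axis except the concatenation axis.
C = np.ones((5, 3))
print(np.concatenate([A, C], axis=0).shape)  # (7, 3)
# np.concatenate([A, C], axis=1) would raise an error, since 2 != 5 on axis 0.
```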
Tiling
The tiling operation repeats a tensor along one or more axes. That is, it concatenates copies of a tensor a specified number of times in a particular direction, resulting in a tensor whose shape is a constant multiple of the input shape along the axes over which tiling was performed.
In this example, we tile a matrix over one axis, then over two axes, by repeating its data. We can see that the shape of the resulting tensor grows only along those axes, where the multiple is the number of times we've tiled. First, we tile a tensor with shape $(m, n)$ $k$ times over its second axis, resulting in a tensor of shape $(m, k \cdot n)$.
Second, we demonstrate tiling a tensor over multiple axes simultaneously: we tile a tensor with shape $(m, n)$ $j$ times over its first axis and $k$ times over its second axis, resulting in a tensor of shape $(j \cdot m, k \cdot n)$.
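A short numpy sketch of tiling over one axis and then two axes, using an arbitrary $(2, 3)$ matrix:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)    # shape (2, 3)

# Tile 4 times over the second axis: shape (2, 12).
print(np.tile(A, (1, 4)).shape)

# Tile 2 times over the first axis and 3 times over the second: shape (4, 9).
print(np.tile(A, (2, 3)).shape)
```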
Broadcasting
As an addendum to the previous section discussing elementwise operations and contractions, there are many cases in which we may want to perform an operation on one or more tensors such that we apply that operation equally to multiple elements of the tensor. In these cases, we can broadcast an input tensor, implicitly expanding its shape so that it is compatible with the larger shape of another tensor. Broadcasting is often used for elementwise operations.
Broadcasting semantics are somewhat standardized; those in numpy are most often used, and we'll cover them below.
Elementwise operations on scalars — a simple, useful example of broadcasting is with scalars in elementwise operations. Consider a matrix $A$ of shape $(m, n)$. We often want to perform an operation with $A$ and a scalar: for example, adding 1 to each element of the matrix, $A + 1$. The scalar value 1 has order 0 and empty shape $()$, but we broadcast it to a tensor of shape $(m, n)$ filled with ones; we create two new dimensions with sizes $m$ and $n$.
Tensors can also be broadcast up to larger shapes by adding entire axes. This operation is similar to tiling but is implicit; we conceptually expand a tensor across one or more axes. We apply broadcasting with the following procedure:
- We right-align the shapes of the two input tensors (which may have different orders); missing leading dimensions are treated as having size 1.
- We move from right to left, comparing dimensions pair by pair. A pair of dimensions is eligible to be broadcast if:
  - the sizes of both dimensions are equal, or
  - the size of one of the dimensions is 1.
- If we find two compatible, aligned dimensions, we continue leftwards to check the remaining dimensions.
Concretely, consider a tensor $A$ with shape $(a, b, c)$ and another tensor $B$ with shape $(c)$. The operation $A + B$ can be performed based on the broadcasting rules; the rightmost dimensions are equal. We would add the elements of $B$ to each $c$-sized slice of $A$ (there are $a \times b$ of them, per the shapes of the remaining axes). We would conceptually broadcast $B$ along the first two dimensions of $A$.
A classic example involves operating on tensors representing images, which we've seen have 3 values per pixel (RGB). An image with height $h$ and width $w$ would thus be represented by a tensor of shape $(h, w, 3)$. We may wish to apply an identical transformation to each pixel: that is, to the last dimension (of size 3) for each of the $h \times w$ indices of the outer axes. That is, we have a small operand of shape $(3)$.
As a final example, note that we can broadcast inner dimensions surrounded by complete dimensions. For example: a tensor of shape $(a, 1, c)$ can be broadcast up to a tensor of shape $(a, b, c)$ by conceptually tiling the tensor $b$ times along its second axis.
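A small numpy sketch of these broadcasting cases (the shapes are arbitrary illustrations):

```python
import numpy as np

# Scalar broadcast: the scalar 1 is expanded to the full (2, 3) shape.
A = np.ones((2, 3))
print((A + 1).shape)                 # (2, 3)

# Right-aligned shapes: (4, 5, 3) and (3,) are compatible on the last axis,
# e.g. adding a per-channel offset to every pixel of a tiny h x w x RGB image.
image = np.zeros((4, 5, 3))
per_channel = np.array([0.1, 0.2, 0.3])
print((image + per_channel).shape)   # (4, 5, 3)

# An inner axis of size 1 broadcasts against a larger axis.
X = np.ones((4, 1, 3))
Y = np.ones((4, 7, 3))
print((X + Y).shape)                 # (4, 7, 3)
```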
Broadcasting for tensor contractions — broadcasting is often used for multiplication of tensors that otherwise might not have compatible dimensions for multiplication. Consider two tensors $A$ and $B$, where we're interested in the matrix multiplication $AB$. We only care about the middle (contracted) dimensions of $A$ and $B$; broadcasting occurs behind the scenes with the other dimensions.
Let $A$ have shape $(a_1, \dots, a_{q-2}, m, n)$ and $B$ have shape $(n, p)$, such that $A$ and $B$ have orders $q$ and 2, respectively. The matrix multiplication acts on shapes $(m, n) \times (n, p)$, so as long as the inner dimensions both have size $n$, we have proper shape compatibility. What about the other dimensions $(a_1, \dots, a_{q-2})$? We can see $A$ as many $m \times n$ matrices (exactly $a_1 \times \dots \times a_{q-2}$ of those matrices!). We thus perform that many multiplications of shape $(m, n) \times (n, p)$, and the results, stacked along the other axes of $A$ ($(a_1, \dots, a_{q-2})$), form our result.
What if the tensor $B$ were also higher-dimensional? Let $B$ have order $r$ with shape $(b_1, \dots, b_{r-2}, n, p)$. In this case, we do what we did with $A$ and view $B$ as many $n \times p$ matrices, stacked according to the other axes of $B$ (there are $b_1 \times \dots \times b_{r-2}$ of them). But the leading axes of $A$ and $B$ can also be broadcast to match the sizes of each other's stacks of matrices! Thus, our matrix multiplication is between stacks (often called batches) of matrices of shapes $(m, n)$ and $(n, p)$.
For the full case of arbitrarily ordered tensors $A$ and $B$, the result is the general operation known as batched matrix multiplication (or BMM¹⁰).
Writing down the batch (stack of matrices) and matrix dimensions explicitly, we can generate the following table:

| Tensor | Batch (stack) dimensions | Matrix dimensions |
| --- | --- | --- |
| $A$ | $(a_1, \dots, a_{q-2})$ | $(m, n)$ |
| $B$ | $(b_1, \dots, b_{r-2})$ | $(n, p)$ |
| Result $AB$ | broadcast of $(a_1, \dots, a_{q-2})$ and $(b_1, \dots, b_{r-2})$ | $(m, p)$ |
This approach applies to contractions that combine multiple axes of a tensor; we simply view the dimensions being contracted as tiled or stacked along the other axes of their input tensors.
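A minimal numpy sketch of batched matrix multiplication with broadcasting over the batch dimensions (the shapes are arbitrary; numpy's @ operator treats leading axes as stacks of matrices):

```python
import numpy as np

# A (2, 3)-batch of (4, 5) matrices, multiplied by a single (5, 7) matrix;
# B is broadcast across the batch dimensions of A.
A = np.ones((2, 3, 4, 5))
B = np.ones((5, 7))
print((A @ B).shape)    # (2, 3, 4, 7)

# Batch dimensions themselves broadcast: (2, 1) against (3,) gives (2, 3).
A = np.ones((2, 1, 4, 5))
B = np.ones((3, 5, 7))
print((A @ B).shape)    # (2, 3, 4, 7)
```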
Citation
@misc{kahn2024mlsystems,
title = {Machine Learning Systems},
author = {Jacob Kahn},
year = {2024},
url = {https://jacobkahn.me/writing/tag/machine_learning_systems/},
}
Footnotes
1. Tuples can themselves be thought of as tensors; that is, the shape of a tensor could be represented as a 1-dimensional tensor. In some software implementations of tensors, asking for the shape of a tensor returns another tensor whose entries are the shape, rather than a tuple.
2. As scalars are order/rank-zero tensors, their shapes contain zero elements.
3. Tensor indexing is part of a larger set of indexing machinery that differs between covariant and contravariant tensors. Most tensors referred to in deep neural models are reasoned about in a covariant fashion; lower indices are thus conventionally used.
4. Tensors are typically zero-indexed in mathematical notation.
5. The tensor contraction, with the inner or dot product as its most basic form, can be thought of as a bilinear form that contracts dimensions, allowing values to intermix across dimensions.
6. Matrix multiplication can also be viewed as a tensor product followed by a contraction. A tensor product between two matrices $A$ and $B$ with shapes $(m, n)$ and $(n, p)$, respectively, produces a tensor of shape $(m, n, n, p)$, of order 4; tensor products enumerate every combination of elements of the two input tensors. The tensor contraction over the two middle dimensions, eliminating them, results in an $(m, p)$ tensor.
7. In tensor analysis, for two input tensors of order $m$ and $n$, the order of the resulting tensor is technically $m + n - 2k$, where $k$ is the number of paired indices over which the contraction is performed. These operations are more uncommon in deep learning and are thus omitted here.
8. Syntactically, there are many different conventions for indexing tensors. The indexing implementation in numpy is today often used outside of computational settings; we'll use it here as it translates best to computational implementations of these ideas.
9. Einstein notation for tensors arose from the high-dimensional geometry with which the Theory of General Relativity was developed.
10. BMM is implemented in commonly-used deep learning frameworks, for example, in PyTorch.