Section 8.2 Linear Codes

To gain more knowledge of a particular code and develop more efficient techniques of encoding, decoding, and error detection, we need to impose additional structure on our codes. One way to accomplish this is to require that the code also be a group. A group code is a code that is also a subgroup of \({\mathbb Z}_2^n\text{.}\)

To check that a code is a group code, we need only verify one thing. If we add any two elements in the code, the result must be an \(n\)-tuple that is again in the code. It is not necessary to check that the inverse of the \(n\)-tuple is in the code, since every codeword is its own inverse, nor is it necessary to check that \({\mathbf 0}\) is a codeword. For instance,

\begin{equation*} (11000101) + (11000101) = (00000000)\text{.} \end{equation*}
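The closure check is easy to automate. The following is a minimal sketch in Python (not part of the text); the three-bit codes at the end are hypothetical examples used only to exercise the function.

```python
def add_mod2(x, y):
    """Componentwise sum of two binary n-tuples, with arithmetic in Z_2."""
    return tuple((a + b) % 2 for a, b in zip(x, y))

def is_group_code(code):
    """A code is a group code when the sum of any two codewords is again a codeword.
    Inverses and the zero word come for free, since each word is its own inverse."""
    codewords = {tuple(int(b) for b in w) for w in code}
    return all(add_mod2(x, y) in codewords for x in codewords for y in codewords)

print(is_group_code(["000", "011", "101", "110"]))  # True: closed under addition
print(is_group_code(["000", "011", "101"]))         # False: (011) + (101) = (110) is missing
```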

Example 8.16.

Suppose that we have a code that consists of the following 7-tuples:

\begin{align*} &(0000000) & & (0001111) & & (0010101) & & (0011010)\\ &(0100110) & & (0101001) & & (0110011) & & (0111100)\\ &(1000011) & & (1001100) & & (1010110) & & (1011001)\\ &(1100101) & & (1101010) & & (1110000) & & (1111111)\text{.} \end{align*}

It is a straightforward though tedious task to verify that this code is also a subgroup of \({\mathbb Z}_2^7\) and, therefore, a group code. This code is a single error-detecting and single error-correcting code, but it is a long and tedious process to compute all of the distances between pairs of codewords to determine that \(d_{\min} = 3\text{.}\) It is much easier to see that the minimum weight of all the nonzero codewords is \(3\text{.}\) As we will soon see, this is no coincidence. However, the relationship between weights and distances in a particular code is heavily dependent on the fact that the code is a group.
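For a code this small, both computations can be delegated to a machine. Here is a short Python sketch (not part of the text) that verifies closure and confirms that the minimum distance between distinct codewords and the minimum nonzero weight are both \(3\) for the code of Example 8.16.

```python
# The sixteen codewords of Example 8.16.
codewords = [
    "0000000", "0001111", "0010101", "0011010",
    "0100110", "0101001", "0110011", "0111100",
    "1000011", "1001100", "1010110", "1011001",
    "1100101", "1101010", "1110000", "1111111",
]
words = {tuple(int(b) for b in w) for w in codewords}

def add(x, y):
    """Componentwise addition in Z_2^7."""
    return tuple((a + b) % 2 for a, b in zip(x, y))

# Closure: the sum of any two codewords is again a codeword.
assert all(add(x, y) in words for x in words for y in words)

# Minimum distance over all pairs of distinct codewords.
d_min = min(sum(a != b for a, b in zip(x, y))
            for x in words for y in words if x != y)

# Minimum weight over all nonzero codewords.
w_min = min(sum(x) for x in words if any(x))

print(d_min, w_min)  # both are 3
```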

Suppose that \({\mathbf x}\) and \({\mathbf y}\) are binary \(n\)-tuples. Then the distance between \({\mathbf x}\) and \({\mathbf y}\) is exactly the number of places in which \({\mathbf x}\) and \({\mathbf y}\) differ. But \({\mathbf x}\) and \({\mathbf y}\) differ in a particular coordinate exactly when the sum in that coordinate is \(1\text{,}\) since

\begin{align*} 1 + 1 & = 0\\ 0 + 0 & = 0\\ 1 + 0 & = 1\\ 0 + 1 & = 1\text{.} \end{align*}

Consequently, the weight of the sum is exactly the distance between the two \(n\)-tuples; that is, \(d({\mathbf x}, {\mathbf y}) = w({\mathbf x} + {\mathbf y})\text{.}\)

For a group code, observe that

\begin{align*} d_{\min} & = \min \{ d({\mathbf x},{\mathbf y}) : {\mathbf x}\neq{\mathbf y} \}\\ &= \min \{ d({\mathbf x},{\mathbf y}) : {\mathbf x}+{\mathbf y} \neq {\mathbf 0} \}\\ &= \min\{ w({\mathbf x} + {\mathbf y}) : {\mathbf x}+{\mathbf y}\neq {\mathbf 0} \}\\ & = \min\{ w({\mathbf z}) : {\mathbf z} \neq {\mathbf 0} \}\text{.} \end{align*}

The last equality uses the fact that the code is closed under addition: as \({\mathbf x}\) and \({\mathbf y}\) range over distinct codewords, \({\mathbf z} = {\mathbf x} + {\mathbf y}\) ranges over the nonzero codewords. Hence, the minimum distance of a group code is simply the minimum weight of its nonzero codewords.

Subsection Linear Codes

From Example 8.16, it is now easy to check that the minimum nonzero weight is \(3\text{;}\) hence, the code does indeed detect and correct all single errors. We have now reduced the problem of finding “good” codes to that of generating group codes. One easy way to generate group codes is to employ a bit of matrix theory.

Define the inner product of two binary \(n\)-tuples to be

\begin{equation*} {\mathbf x} \cdot {\mathbf y} = x_1 y_1 + \cdots + x_n y_n\text{,} \end{equation*}

where \({\mathbf x} = (x_1, x_2, \ldots, x_n)^\transpose\) and \({\mathbf y} = (y_1, y_2, \ldots, y_n)^\transpose\) are column vectors. (Since we will be working with matrices, we will write binary \(n\)-tuples as column vectors for the remainder of this chapter.) For example, if \({\mathbf x} = (011001)^\transpose\) and \({\mathbf y} = (110101)^\transpose\text{,}\) then \({\mathbf x} \cdot {\mathbf y} = 0\text{.}\) We can also look at an inner product as the product of a row matrix with a column matrix; that is,

\begin{align*} {\mathbf x} \cdot {\mathbf y} & = {\mathbf x}^\transpose {\mathbf y}\\ & = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}\\ & = x_{1}y_{1} + x_{2}y_{2} + \cdots + x_{n}y_{n}\text{.} \end{align*}
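As a quick illustration, here is a small Python sketch (not part of the text) of this inner product, applied to the vectors from the example above.

```python
def inner_product(x, y):
    """Inner product of two binary n-tuples: multiply coordinatewise and sum in Z_2."""
    return sum(a * b for a, b in zip(x, y)) % 2

x = [0, 1, 1, 0, 0, 1]
y = [1, 1, 0, 1, 0, 1]
print(inner_product(x, y))  # 0, as computed in the text
```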

Example 8.19.

Suppose that the words to be encoded consist of all binary \(3\)-tuples and that our encoding scheme is even-parity. To encode an arbitrary \(3\)-tuple, we add a fourth bit to obtain an even number of \(1\)s. Notice that an arbitrary \(n\)-tuple \({\mathbf x} = (x_1, x_2, \ldots, x_n)^\transpose\) has an even number of \(1\)s exactly when \(x_1 + x_2 + \cdots + x_n = 0\text{;}\) hence, a \(4\)-tuple \({\mathbf x} = (x_1, x_2, x_3, x_4)^\transpose\) has an even number of \(1\)s if \(x_1+ x_2+ x_3+ x_4 = 0\text{,}\) or

\begin{equation*} {\mathbf x} \cdot {\mathbf 1} = {\mathbf x}^\transpose {\mathbf 1} = \begin{pmatrix} x_1 & x_2 & x_3 & x_4 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix} = 0\text{.} \end{equation*}
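A brief Python sketch of this even-parity scheme (not part of the text; the function names are ours) appends the parity bit and tests the condition \({\mathbf x} \cdot {\mathbf 1} = 0\text{.}\)

```python
def encode(bits):
    """Append a parity bit so that the resulting 4-tuple has an even number of 1s."""
    return bits + [sum(bits) % 2]

def is_codeword(word):
    """x is a codeword exactly when x . 1 = 0 in Z_2."""
    return sum(word) % 2 == 0

print(encode([0, 1, 1]))           # [0, 1, 1, 0]
print(is_codeword([0, 1, 1, 0]))   # True
print(is_codeword([0, 1, 1, 1]))   # False: a single bit error is detected
```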

This example leads us to hope that there is a connection between matrices and coding theory.

Let \({\mathbb M}_{m \times n}({\mathbb Z}_2)\) denote the set of all \(m \times n\) matrices with entries in \({\mathbb Z}_2\text{.}\) We do matrix operations as usual except that all our addition and multiplication operations occur in \({\mathbb Z}_2\text{.}\) Define the null space of a matrix \(H \in {\mathbb M}_{m \times n}({\mathbb Z}_2)\) to be the set of all binary \(n\)-tuples \({\mathbf x}\) such that \(H{\mathbf x} = {\mathbf 0}\text{.}\) We denote the null space of a matrix \(H\) by \(\Null(H)\text{.}\)

Example 8.20.

Suppose that

\begin{equation*} H = \begin{pmatrix} 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 \end{pmatrix}\text{.} \end{equation*}

For a \(5\)-tuple \({\mathbf x} = (x_1, x_2, x_3, x_4, x_5)^\transpose\) to be in the null space of \(H\text{,}\) we must have \(H{\mathbf x} = {\mathbf 0}\text{.}\) Equivalently, the following system of equations must be satisfied:

\begin{align*} x_2 + x_4 & = 0\\ x_1 + x_2 + x_3 + x_4 & = 0\\ x_3 + x_4 + x_5 & = 0\text{.} \end{align*}

The set of binary \(5\)-tuples satisfying these equations is

\begin{equation*} (00000) \qquad (11110) \qquad (10101) \qquad (01011)\text{.} \end{equation*}
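This null space can be found by brute force. The following Python sketch (not part of the text) enumerates all binary \(5\)-tuples and keeps those satisfying \(H{\mathbf x} = {\mathbf 0}\text{.}\)

```python
from itertools import product

# The parity-check matrix of Example 8.20.
H = [
    [0, 1, 0, 1, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
]

def in_null_space(x):
    """True when every row of H has inner product 0 with x over Z_2."""
    return all(sum(h * b for h, b in zip(row, x)) % 2 == 0 for row in H)

code = [x for x in product([0, 1], repeat=5) if in_null_space(x)]
print(code)  # the four codewords listed in the text, in lexicographic order
```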

This code is easily determined to be a group code.

In fact, the null space of any matrix \(H \in {\mathbb M}_{m \times n}({\mathbb Z}_2)\) is a group code. Since each element of \({\mathbb Z}_2^n\) is its own inverse, the only thing that really needs to be checked is closure. Let \({\mathbf x}, {\mathbf y} \in \Null(H)\text{.}\) Then \(H{\mathbf x} = {\mathbf 0}\) and \(H{\mathbf y} = {\mathbf 0}\text{.}\) So

\begin{equation*} H({\mathbf x}+{\mathbf y}) = H{\mathbf x} + H{\mathbf y} = {\mathbf 0} + {\mathbf 0} = {\mathbf 0}\text{.} \end{equation*}

Hence, \({\mathbf x} + {\mathbf y}\) is in the null space of \(H\) and therefore must be a codeword.

A code is a linear code if it is determined by the null space of some matrix \(H \in {\mathbb M}_{m \times n}({\mathbb Z}_2)\text{.}\)

Example 8.22.

Let \(C\) be the code given by the matrix

\begin{equation*} H = \begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{pmatrix}\text{.} \end{equation*}

Suppose that the \(6\)-tuple \({\mathbf x} = (010011)^\transpose\) is received. It is a simple matter of matrix multiplication to determine whether or not \({\mathbf x}\) is a codeword. Since

\begin{equation*} H{\mathbf x} = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}\text{,} \end{equation*}

the received word is not a codeword. We must either attempt to correct the word or request that it be transmitted again.
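The same check is easy to carry out by machine. Here is a short Python sketch (not part of the text) that computes \(H{\mathbf x}\) over \({\mathbb Z}_2\) for the received word of Example 8.22 and accepts \({\mathbf x}\) only if the result is the zero vector.

```python
# The parity-check matrix of Example 8.22 and the received word.
H = [
    [0, 0, 0, 1, 1, 1],
    [0, 1, 1, 0, 1, 1],
    [1, 0, 1, 0, 0, 1],
]
x = [0, 1, 0, 0, 1, 1]

# Compute Hx with all arithmetic in Z_2.
syndrome = [sum(h * b for h, b in zip(row, x)) % 2 for row in H]
print(syndrome)                        # [0, 1, 1]
print(all(s == 0 for s in syndrome))   # False: the received word is not a codeword
```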
