8.6 Exercises
1.
Why is the following encoding scheme not acceptable?
Information | \(0\) | \(1\) | \(2\) | \(3\) | \(4\) | \(5\) | \(6\) | \(7\) | \(8\) |
Codeword | \(000\) | \(001\) | \(010\) | \(011\) | \(101\) | \(110\) | \(111\) | \(000\) | \(001\) |
2.
Without doing any addition, explain why the following set of \(4\)-tuples in \({\mathbb Z}_2^4\) cannot be a group code.
3.
Compute the Hamming distances between the following pairs of \(n\)-tuples.
\(\displaystyle (011010), (011100)\)
\(\displaystyle (11110101), (01010100)\)
\(\displaystyle (00110), (01111)\)
\(\displaystyle (1001), (0111)\)
4.
Compute the weights of the following \(n\)-tuples.
\(\displaystyle (011010)\)
\(\displaystyle (11110101)\)
\(\displaystyle (01111)\)
\(\displaystyle (1011)\)
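Hand computations of distance and weight are easy to double-check with a few lines of Python. In the sketch below the function names are illustrative and \(n\)-tuples are written as strings of \(0\)s and \(1\)s; the sample tuples are not the ones from the exercises.

```python
def hamming_distance(x, y):
    """Number of positions in which the binary strings x and y differ."""
    assert len(x) == len(y), "tuples must have the same length"
    return sum(a != b for a, b in zip(x, y))


def weight(x):
    """Number of nonzero entries, i.e. the distance from x to the zero tuple."""
    return x.count("1")


print(hamming_distance("010101", "011001"))  # 2
print(weight("110100"))                      # 3
```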
5.
Suppose that a linear code \(C\) has a minimum weight of \(7\text{.}\) What are the error-detection and error-correction capabilities of \(C\text{?}\)
6.
In each of the following codes, what is the minimum distance for the code? What is the best situation we might hope for in connection with error detection and error correction?
\(\displaystyle (011010) \; (011100) \; (110111) \; (110000)\)
\(\displaystyle (011100) \; (011011) \; (111011) \; (100011) \\ (000000) \; (010101) \; (110100) \; (110011)\)
\(\displaystyle (000000) \; (011100) \; (110101) \; (110001)\)
\(\displaystyle (0110110) \; (0111100) \; (1110000) \; (1111111) \\ (1001001) \; (1000011) \; (0001111) \; (0000000)\)
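The minimum distance of a small code can be found by comparing all pairs of codewords, and a code with minimum distance \(d_{\min}\) can detect up to \(d_{\min} - 1\) errors and correct up to \(\lfloor (d_{\min} - 1)/2 \rfloor\) errors. A short sketch, applied to an illustrative code rather than to the codes above:

```python
from itertools import combinations


def hamming_distance(x, y):
    return sum(a != b for a, b in zip(x, y))


def minimum_distance(code):
    """Smallest Hamming distance between two distinct codewords."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))


code = ["0000", "1100", "0011", "1111"]   # illustrative code, not from the exercise
d = minimum_distance(code)
print(d)              # 2
print(d - 1)          # errors that can be detected
print((d - 1) // 2)   # errors that can be corrected
```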
7.
Compute the null space of each of the following matrices. What type of \((n,k)\)-block codes are the null spaces? Can you find a matrix (not necessarily a standard generator matrix) that generates each code? Are your generator matrices unique?
- \begin{equation*} \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0 \end{pmatrix} \end{equation*}
- \begin{equation*} \begin{pmatrix} 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
- \begin{equation*} \begin{pmatrix} 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 \end{pmatrix} \end{equation*}
- \begin{equation*} \begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \end{pmatrix} \end{equation*}
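The null spaces in this exercise are small enough to find by brute force: test every \(x \in {\mathbb Z}_2^n\) against \(Hx = 0\text{.}\) A sketch of that approach, shown on a hypothetical \(2 \times 4\) matrix rather than on the matrices above:

```python
from itertools import product


def null_space(H):
    """All x in Z_2^n with Hx = 0 (mod 2), for H given as a list of rows."""
    n = len(H[0])
    return [x for x in product([0, 1], repeat=n)
            if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)]


H = [[1, 0, 1, 0],      # hypothetical parity-check matrix
     [0, 1, 0, 1]]
for codeword in null_space(H):
    print(codeword)     # 2**(n - rank H) = 4 codewords here, so a (4, 2)-block code
```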
8.
Construct a \((5,2)\)-block code. Discuss both the error-detection and error-correction capabilities of your code.
9.
Let \(C\) be the code obtained from the null space of the matrix
Decode the message
if possible.
10.
Suppose that a \(1000\)-bit binary message is transmitted. Assume that the probability of a single error is \(p\) and that the errors occurring in different bits are independent of one another. If \(p = 0.01\text{,}\) what is the probability of more than one error occurring? What is the probability of exactly two errors occurring? Repeat this problem for \(p = 0.0001\text{.}\)
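Under the independence assumption, the number of errors is binomial: the probability of exactly \(k\) errors among \(n\) bits is \(\binom{n}{k} p^k (1-p)^{n-k}\text{.}\) A short sketch for checking the arithmetic:

```python
from math import comb


def prob_exactly(n, p, k):
    """Binomial probability of exactly k errors among n independent bits."""
    return comb(n, k) * p**k * (1 - p)**(n - k)


def prob_more_than_one(n, p):
    """Complement of the probability of zero or one error."""
    return 1 - prob_exactly(n, p, 0) - prob_exactly(n, p, 1)


for p in (0.01, 0.0001):
    print(p, prob_more_than_one(1000, p), prob_exactly(1000, p, 2))
```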
11.
Which matrices are canonical parity-check matrices? For those matrices that are canonical parity-check matrices, what are the corresponding standard generator matrices? What are the error-detection and error-correction capabilities of the code generated by each of these matrices?
- \begin{equation*} \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
- \begin{equation*} \begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
- \begin{equation*} \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
- \begin{equation*} \begin{pmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
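Assuming the chapter's convention that a canonical parity-check matrix has the block form \(H = (A \mid I_m)\) and that its standard generator matrix \(G\) is \(I_{n-m}\) stacked above \(A\text{,}\) both can be checked mechanically. A sketch under that assumption, using a hypothetical matrix rather than the ones above:

```python
def is_canonical_parity_check(H):
    """True when the last m columns of the m x n matrix H form the identity I_m."""
    m, n = len(H), len(H[0])
    return n > m and all(H[i][n - m + j] == (1 if i == j else 0)
                         for i in range(m) for j in range(m))


def standard_generator(H):
    """For H = (A | I_m), return G as a list of rows: I_(n-m) stacked above A."""
    m, n = len(H), len(H[0])
    k = n - m
    A = [row[:k] for row in H]
    return [[1 if i == j else 0 for j in range(k)] for i in range(k)] + A


H = [[0, 1, 1, 0],      # hypothetical canonical parity-check matrix
     [1, 1, 0, 1]]
print(is_canonical_parity_check(H))   # True: the last two columns form I_2
print(standard_generator(H))          # [[1, 0], [0, 1], [0, 1], [1, 1]]
```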
12.
List all possible syndromes for the codes generated by each of the matrices in Exercise 8.6.11.
13.
Let
Compute the syndrome caused by each of the following transmission errors.
An error in the first bit.
An error in the third bit.
An error in the last bit.
Errors in the third and fourth bits.
14.
Let \(C\) be the group code in \({\mathbb Z}_2^3\) defined by the codewords \((000)\) and \((111)\text{.}\) Compute the cosets of \(C\) in \({\mathbb Z}_2^3\text{.}\) Why was there no need to specify right or left cosets? Give the single transmission error, if any, to which each coset corresponds.
15.
For each of the following matrices, find the cosets of the corresponding code \(C\text{.}\) Give a decoding table for each code if possible.
- \begin{equation*} \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0 \end{pmatrix} \end{equation*}
- \begin{equation*} \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
- \begin{equation*} \begin{pmatrix} 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 \end{pmatrix} \end{equation*}
- \begin{equation*} \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 & 0 \end{pmatrix} \end{equation*}
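Decoding tables for short codes can be produced by brute force: group the elements of \({\mathbb Z}_2^n\) by syndrome and take an element of minimum weight in each coset as its coset leader. A sketch along these lines, applied to a hypothetical \(2 \times 4\) parity-check matrix rather than to the matrices above:

```python
from itertools import product


def matvec(H, x):
    """Matrix-vector product over Z_2."""
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)


def coset_table(H):
    """Group Z_2^n by syndrome; sort each coset so a minimum-weight leader comes first."""
    n = len(H[0])
    cosets = {}
    for x in product([0, 1], repeat=n):
        cosets.setdefault(matvec(H, x), []).append(x)
    return {s: sorted(words, key=sum) for s, words in cosets.items()}


H = [[1, 0, 1, 0],      # hypothetical parity-check matrix
     [0, 1, 0, 1]]
for syndrome, words in coset_table(H).items():
    print(syndrome, "leader:", words[0], "coset:", words)
```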
16.
Let \({\mathbf x}\text{,}\) \({\mathbf y}\text{,}\) and \({\mathbf z}\) be binary \(n\)-tuples. Prove each of the following statements.
\(\displaystyle w({\mathbf x}) = d( {\mathbf x}, {\mathbf 0})\)
\(\displaystyle d( {\mathbf x}, {\mathbf y}) = d( {\mathbf x} + {\mathbf z}, {\mathbf y} + {\mathbf z} )\)
\(\displaystyle d({\mathbf x}, {\mathbf y}) = w({\mathbf x}- {\mathbf y})\)
17.
A metric on a set \(X\) is a map \(d: X \times X \rightarrow {\mathbb R}\) satisfying the following conditions.
\(d( {\mathbf x}, {\mathbf y}) \geq 0\) for all \({\mathbf x}, {\mathbf y} \in X\text{;}\)
\(d( {\mathbf x}, {\mathbf y}) = 0\) exactly when \({\mathbf x} = {\mathbf y}\text{;}\)
\(d( {\mathbf x}, {\mathbf y})= d( {\mathbf y}, {\mathbf x})\text{;}\)
\(d( {\mathbf x}, {\mathbf y}) \leq d( {\mathbf x}, {\mathbf z}) + d( {\mathbf z}, {\mathbf y})\text{.}\)
In other words, a metric is simply a generalization of the notion of distance. Prove that Hamming distance is a metric on \({\mathbb Z}_2^n\text{.}\) Decoding a message actually reduces to deciding which is the closest codeword in terms of distance.
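Before writing the proof, the four conditions can be checked exhaustively for a small \(n\text{;}\) the sketch below verifies them on all of \({\mathbb Z}_2^4\) (a sanity check, not a proof).

```python
from itertools import product


def d(x, y):
    """Hamming distance between two binary tuples."""
    return sum(a != b for a, b in zip(x, y))


tuples = list(product([0, 1], repeat=4))
assert all(d(x, y) >= 0 for x in tuples for y in tuples)
assert all((d(x, y) == 0) == (x == y) for x in tuples for y in tuples)
assert all(d(x, y) == d(y, x) for x in tuples for y in tuples)
assert all(d(x, y) <= d(x, z) + d(z, y)
           for x in tuples for y in tuples for z in tuples)
print("All four metric conditions hold on Z_2^4.")
```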
18.
Let \(C\) be a linear code. Show that either the \(i\)th coordinates in the codewords of \(C\) are all zeros or exactly half of them are zeros.
19.
Let \(C\) be a linear code. Show that either every codeword has even weight or exactly half of the codewords have even weight.
20.
Show that the codewords of even weight in a linear code \(C\) are also a linear code.
21.
If we are to use an error-correcting linear code to transmit the 128 ASCII characters, what size matrix must be used? What size matrix must be used to transmit the extended ASCII character set of 256 characters? What if we require only error detection in both cases?
22.
Find the canonical parity-check matrix that gives the even parity check bit code with three information positions. What is the matrix for seven information positions? What are the corresponding standard generator matrices?
23.
How many check positions are needed for a single error-correcting code with 20 information positions? With 32 information positions?
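One standard counting argument: with \(r\) check bits and \(k\) information bits, each of the \(k + r\) possible single-bit errors, together with the error-free case, must produce its own syndrome, so a single error-correcting code requires \(2^r \geq k + r + 1\text{.}\) A sketch that finds the smallest such \(r\) (the sample values of \(k\) are illustrative, not the ones in the exercise):

```python
def check_bits_needed(k):
    """Smallest r with 2**r >= k + r + 1, so that every single-bit error in a
    word of length k + r, plus the error-free case, gets a distinct syndrome."""
    r = 1
    while 2**r < k + r + 1:
        r += 1
    return r


for k in (4, 11, 57):   # illustrative values of k
    print(k, "information bits need", check_bits_needed(k), "check bits")
```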
24.
Let \({\mathbf e}_i\) be the binary \(n\)-tuple with a \(1\) in the \(i\)th coordinate and \(0\)'s elsewhere and suppose that \(H \in {\mathbb M}_{m \times n}({\mathbb Z}_2)\text{.}\) Show that \(H{\mathbf e}_i\) is the \(i\)th column of the matrix \(H\text{.}\)
25.
Let \(C\) be an \((n,k)\)-linear code. Define the dual or orthogonal code of \(C\) to be
\begin{equation*} C^\perp = \{ {\mathbf x} \in {\mathbb Z}_2^n : {\mathbf x} \cdot {\mathbf y} = 0 \text{ for all } {\mathbf y} \in C \}\text{.} \end{equation*}
- Find the dual code of the linear code \(C\) where \(C\) is given by the matrix
\begin{equation*} \begin{pmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0 \end{pmatrix}\text{.} \end{equation*}
- Show that \(C^\perp\) is an \((n, n-k)\)-linear code.
- Find the standard generator and parity-check matrices of \(C\) and \(C^\perp\text{.}\) What happens in general? Prove your conjecture.
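For short codes the dual code can be computed directly from the definition: keep every \(n\)-tuple whose inner product with each codeword of \(C\) is \(0\) in \({\mathbb Z}_2\text{.}\) A brute-force sketch, applied to a hypothetical code rather than to the code of this exercise:

```python
from itertools import product


def dot(x, y):
    """Inner product of two binary tuples, reduced mod 2."""
    return sum(a * b for a, b in zip(x, y)) % 2


def dual_code(C):
    """All x in Z_2^n orthogonal to every codeword of C."""
    n = len(C[0])
    return [x for x in product([0, 1], repeat=n)
            if all(dot(x, c) == 0 for c in C)]


C = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]   # hypothetical code
print(dual_code(C))   # [(0, 0, 0), (1, 1, 1)]; note |C| * |dual| = 2**n
```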
26.
Let \(H\) be an \(m \times n\) matrix over \({\mathbb Z}_2\text{,}\) where the \(i\)th column is the number \(i\) written in binary with \(m\) bits. The null space of such a matrix is called a Hamming code.
- Show that the matrix
\begin{equation*} H = \begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 \end{pmatrix} \end{equation*}
generates a Hamming code. What are the error-correcting properties of a Hamming code?
- The column corresponding to the syndrome also marks the bit that was in error; that is, the \(i\)th column of the matrix is \(i\) written as a binary number, and the syndrome immediately tells us which bit is in error. If the received word is \((101011)\text{,}\) compute the syndrome. In which bit did the error occur in this case, and what codeword was originally transmitted?
- Give a binary matrix \(H\) for the Hamming code with six information positions and four check positions. What are the check positions and what are the information positions? Encode the messages \((101101)\) and \((001001)\text{.}\) Decode the received words \((0010000101)\) and \((0000101100)\text{.}\) What are the possible syndromes for this code?
- What is the number of check bits and the number of information bits in an \((m,n)\)-block Hamming code? Give both an upper and a lower bound on the number of information bits in terms of the number of check bits. Hamming codes having the maximum possible number of information bits with \(k\) check bits are called perfect. In a perfect code, every possible syndrome except \({\mathbf 0}\) occurs as a column. If the number of information bits is less than the maximum, then the code is called shortened. In this case, give an example showing that some syndromes can represent multiple errors.
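The defining property of the parity-check matrix of a Hamming code, that its \(i\)th column is \(i\) written in binary, makes syndrome decoding mechanical: read the syndrome as a binary number and, if it is nonzero, flip that bit. A sketch of this construction and decoding step, with illustrative helper names and a hypothetical received word:

```python
def hamming_parity_check(m):
    """m x (2**m - 1) matrix whose i-th column (1-indexed) is i written in binary."""
    n = 2**m - 1
    return [[(i >> (m - 1 - row)) & 1 for i in range(1, n + 1)]
            for row in range(m)]


def syndrome(H, word):
    """Hw over Z_2, returned as a tuple and as the integer it spells in binary."""
    s = tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)
    return s, int("".join(map(str, s)), 2)


def correct_single_error(H, word):
    """Flip the bit named by the syndrome, assuming at most one error occurred."""
    _, i = syndrome(H, word)
    if i == 0:
        return list(word)           # zero syndrome: accept the word as sent
    fixed = list(word)
    fixed[i - 1] ^= 1               # columns are 1-indexed
    return fixed


H = hamming_parity_check(3)                 # parity-check matrix of the (7, 4) Hamming code
received = [1, 0, 1, 1, 1, 1, 0]            # hypothetical received word with an error in bit 5
print(syndrome(H, received))                # ((1, 0, 1), 5)
print(correct_single_error(H, received))    # [1, 0, 1, 1, 0, 1, 0]
```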