$\newcommand{\identity}{\mathrm{id}} \newcommand{\notdivide}{\nmid} \newcommand{\notsubset}{\not\subset} \newcommand{\lcm}{\operatorname{lcm}} \newcommand{\gf}{\operatorname{GF}} \newcommand{\inn}{\operatorname{Inn}} \newcommand{\aut}{\operatorname{Aut}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\cis}{\operatorname{cis}} \newcommand{\chr}{\operatorname{char}} \newcommand{\Null}{\operatorname{Null}} \newcommand{\transpose}{\text{t}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&}$

## Section 17.1 Polynomial Rings

Throughout this chapter we shall assume that $R$ is a commutative ring with identity. Any expression of the form

\begin{equation*} f(x) = \sum^{n}_{i=0} a_i x^i = a_0 + a_1 x +a_2 x^2 + \cdots + a_n x^n, \end{equation*}

where $a_i \in R$ and $a_n \neq 0\text{,}$ is called a polynomial over $R$ with indeterminate $x\text{.}$ The elements $a_0, a_1, \ldots, a_n$ are called the coefficients of $f\text{.}$ The coefficient $a_n$ is called the leading coefficient. A polynomial is called monic if the leading coefficient is 1. If $n$ is the largest nonnegative integer for which $a_n \neq 0\text{,}$ we say that the degree of $f$ is $n$ and write $\deg f(x) = n\text{.}$ If no such $n$ exists—that is, if $f=0$ is the zero polynomial—then the degree of $f$ is defined to be $-\infty\text{.}$ We will denote the set of all polynomials with coefficients in a ring $R$ by $R[x]\text{.}$ Two polynomials are equal exactly when their corresponding coefficients are equal; that is, if we let

\begin{align*} p(x) & = a_0 + a_1 x + \cdots + a_n x^n\\ q(x) & = b_0 + b_1 x + \cdots + b_m x^m, \end{align*}

then $p(x) = q(x)$ if and only if $a_i = b_i$ for all $i \geq 0\text{.}$

To show that the set of all polynomials forms a ring, we must first define addition and multiplication. We define the sum of two polynomials as follows. Let

\begin{align*} p(x) & = a_0 + a_1 x + \cdots + a_n x^n\\ q(x) & = b_0 + b_1 x + \cdots + b_m x^m. \end{align*}

Then the sum of $p(x)$ and $q(x)$ is

\begin{equation*} p(x) + q(x) = c_0 + c_1 x + \cdots + c_k x^k, \end{equation*}

where $c_i = a_i + b_i$ for each $i$ and $k$ is the larger of $m$ and $n$ (any missing coefficients are taken to be zero). We define the product of $p(x)$ and $q(x)$ to be

\begin{equation*} p(x) q(x) = c_0 + c_1 x + \cdots + c_{m + n} x^{m + n}, \end{equation*}

where

\begin{equation*} c_i = \sum_{k = 0}^i a_k b_{i - k} = a_0 b_i + a_1 b_{i -1} + \cdots + a_{i -1} b _1 + a_i b_0 \end{equation*}

for each $i\text{.}$ Notice that in each case some of the coefficients may be zero.
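These two definitions translate directly into code. The following is a minimal sketch, not part of the text, in which a polynomial is represented by its list of coefficients $[a_0, a_1, \ldots, a_n]$; the helper names `poly_add` and `poly_mul` are my own:

```python
def poly_add(p, q):
    """Coefficientwise sum of two coefficient lists [a_0, a_1, ..., a_n].

    Missing coefficients are treated as zero, so the lists may have
    different lengths.
    """
    k = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(k)]

def poly_mul(p, q):
    """Product via the convolution formula c_i = sum_k a_k b_{i-k}."""
    c = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            c[i + j] += a * b
    return c
```

Note that `poly_mul` accumulates every product $a_j b_k$ into position $j + k$, which is exactly the convolution sum above read row by row.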

###### Example 17.1

Suppose that

\begin{equation*} p(x) = 3 + 0 x + 0 x^2 + 2 x^3 + 0 x^4 \end{equation*}

and

\begin{equation*} q(x) = 2 + 0 x - x^2 + 0 x^3 + 4 x^4 \end{equation*}

are polynomials in ${\mathbb Z}[x]\text{.}$ If the coefficient of some term in a polynomial is zero, then we usually just omit that term. In this case we would write $p(x) = 3 + 2 x^3$ and $q(x) = 2 - x^2 + 4 x^4\text{.}$ The sum of these two polynomials is

\begin{equation*} p(x) + q(x)= 5 - x^2 + 2 x^3 + 4 x^4. \end{equation*}

The product,

\begin{equation*} p(x) q(x) = (3 + 2 x^3)( 2 - x^2 + 4 x^4 ) = 6 - 3x^2 + 4 x^3 + 12 x^4 - 2 x^5 + 8 x^7, \end{equation*}

can be calculated either by determining the $c_i$s in the definition or by simply multiplying polynomials in the same way as we have always done.

###### Example 17.2

Let

\begin{equation*} p(x) = 3 + 3 x^3 \qquad \text{and} \qquad q(x) = 4 + 4 x^2 + 4 x^4 \end{equation*}

be polynomials in ${\mathbb Z}_{12}[x]\text{.}$ The sum of $p(x)$ and $q(x)$ is $7 + 4 x^2 + 3 x^3 + 4 x^4\text{.}$ The product of the two polynomials is the zero polynomial, since every coefficient of the product is a multiple of $3 \cdot 4 = 12\text{.}$ This example tells us that we cannot expect $R[x]$ to be an integral domain if $R$ is not an integral domain.
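We can check this computationally. Here is a minimal sketch over ${\mathbb Z}_{12}$ that reduces each convolution coefficient mod 12; the helper name `poly_mul_mod` is my own:

```python
def poly_mul_mod(p, q, n):
    """Product of coefficient lists p and q in Z_n[x]."""
    c = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            c[i + j] = (c[i + j] + a * b) % n
    return c

# p(x) = 3 + 3x^3 and q(x) = 4 + 4x^2 + 4x^4 in Z_12[x]
product = poly_mul_mod([3, 0, 0, 3], [4, 0, 4, 0, 4], 12)
print(product)  # every coefficient is 0: p(x)q(x) is the zero polynomial
```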

Our first task is to show that $R[x]$ is an abelian group under polynomial addition. The zero polynomial, $f(x) = 0\text{,}$ is the additive identity. Given a polynomial $p(x) = \sum_{i = 0}^{n} a_i x^i\text{,}$ the inverse of $p(x)$ is easily verified to be $-p(x) = \sum_{i = 0}^{n} (-a_i) x^i = -\sum_{i = 0}^{n} a_i x^i\text{.}$ Commutativity and associativity follow immediately from the definition of polynomial addition and from the fact that addition in $R$ is both commutative and associative.

To show that polynomial multiplication is associative, let

\begin{align*} p(x) & = \sum_{i = 0}^{m} a_i x^i,\\ q(x) & = \sum_{i = 0}^{n} b_i x^i,\\ r(x) & = \sum_{i = 0}^{p} c_i x^i. \end{align*}

Then

\begin{align*} [p(x) q(x)] r(x) & = \left[ \left( \sum_{i=0}^{m} a_i x^i \right) \left( \sum_{i=0}^{n} b_i x^i \right) \right] \left( \sum_{i = 0}^{p} c_i x^i \right)\\ & = \left[ \sum_{i = 0}^{m+n} \left( \sum_{j = 0}^{i} a_j b_{i - j} \right) x^i \right] \left( \sum_{i = 0}^{p} c_i x^i \right)\\ & = \sum_{i = 0}^{m + n + p} \left[ \sum_{j = 0}^{i} \left( \sum_{k=0}^j a_k b_{j-k} \right) c_{i-j} \right] x^i\\ & = \sum_{i = 0}^{m + n + p} \left(\sum_{j + k + l = i} a_j b_k c_l \right) x^i\\ & = \sum_{i = 0}^{m+n+p} \left[ \sum_{j = 0}^{i} a_j \left( \sum_{k = 0}^{i - j} b_k c_{i - j - k} \right) \right] x^i\\ & = \left( \sum_{i = 0}^{m} a_i x^i \right) \left[ \sum_{i = 0}^{n + p} \left( \sum_{j = 0}^{i} b_j c_{i - j} \right) x^i \right]\\ & = \left( \sum_{i = 0}^{m} a_i x^i \right) \left[ \left( \sum_{i = 0}^{n} b_i x^i \right) \left( \sum_{i = 0}^{p} c_i x^i \right) \right]\\ & = p(x) [ q(x) r(x) ] \end{align*}

The commutativity and distributivity of polynomial multiplication are proved in a similar manner. We shall leave the proofs of these properties as an exercise.
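Index manipulations like the reindexing above are easy to misread, so a numerical spot check is reassuring. The following sketch (the helper name `conv` is my own) verifies associativity on random integer polynomials:

```python
import random

def conv(p, q):
    """Coefficient list of the product, via c_i = sum_k a_k b_{i-k}."""
    c = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            c[i + j] += a * b
    return c

random.seed(0)
for _ in range(100):
    # Three random cubics with coefficients in Z
    p, q, r = ([random.randint(-5, 5) for _ in range(4)] for _ in range(3))
    assert conv(conv(p, q), r) == conv(p, conv(q, r))
```

Both sides collect the same products $a_j b_k c_l$ at position $j + k + l\text{,}$ which is the content of the middle step of the derivation.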

Suppose now that $R$ is an integral domain and that we have two nonzero polynomials

\begin{equation*} p(x) = a_m x^m + \cdots + a_1 x + a_0 \end{equation*}

and

\begin{equation*} q(x) = b_n x^n + \cdots + b_1 x + b_0 \end{equation*}

with $a_m \neq 0$ and $b_n \neq 0\text{.}$ The degrees of $p(x)$ and $q(x)$ are $m$ and $n\text{,}$ respectively. The leading term of $p(x) q(x)$ is $a_m b_n x^{m + n}\text{,}$ which cannot be zero since $R$ is an integral domain; hence, the degree of $p(x) q(x)$ is $m + n\text{,}$ and $p(x)q(x) \neq 0\text{.}$ Since $p(x) \neq 0$ and $q(x) \neq 0$ imply that $p(x)q(x) \neq 0\text{,}$ we know that $R[x]$ must also be an integral domain.

We also want to consider polynomials in two or more variables, such as $x^2 - 3 x y + 2 y^3\text{.}$ Let $R$ be a ring and suppose that we are given two indeterminates $x$ and $y\text{.}$ Certainly we can form the ring $(R[x])[y]\text{.}$ It is straightforward but perhaps tedious to show that $(R[x])[y] \cong (R[y])[x]\text{.}$ We shall identify these two rings by this isomorphism and simply write $R[x,y]\text{.}$ The ring $R[x, y]$ is called the ring of polynomials in two indeterminates $x$ and $y$ with coefficients in $R\text{.}$ We can define the ring of polynomials in $n$ indeterminates with coefficients in $R$ similarly. We shall denote this ring by $R[x_1, x_2, \ldots, x_n]\text{.}$
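The identification of $(R[x])[y]$ with $R[x,y]$ can be made concrete: an element of $(R[x])[y]$ is a polynomial in $y$ whose coefficients are themselves polynomials in $x\text{.}$ The sketch below (the nested-list representation and the helper names `eval_x` and `eval_xy` are my own devices) encodes $x^2 - 3xy + 2y^3$ this way:

```python
def eval_x(p, x):
    """Evaluate a coefficient list [a_0, a_1, ...] at x."""
    return sum(a * x**i for i, a in enumerate(p))

def eval_xy(f, x, y):
    """Evaluate an element of (R[x])[y]: a list of y-coefficients,
    each of which is itself a coefficient list in x."""
    return sum(eval_x(c, x) * y**j for j, c in enumerate(f))

# x^2 - 3xy + 2y^3, written as a polynomial in y with coefficients in Z[x]:
# y^0 has coefficient x^2, y^1 has coefficient -3x, y^3 has coefficient 2.
f = [[0, 0, 1], [0, -3], [0], [2]]
print(eval_xy(f, 2, 3))  # 2^2 - 3*2*3 + 2*3^3 = 40
```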

Fix $\alpha \in R$ and define a map $\phi_{\alpha} : R[x] \rightarrow R$ by $\phi_{\alpha}(p(x)) = p(\alpha)\text{;}$ that is, $\phi_{\alpha}$ substitutes $\alpha$ for the indeterminate $x\text{.}$ We claim that $\phi_{\alpha}$ is a ring homomorphism. Let $p(x) = \sum_{i = 0}^n a_i x^i$ and $q(x) = \sum_{i = 0}^m b_i x^i\text{.}$ It is easy to show that $\phi_{\alpha}(p(x) + q(x)) = \phi_{\alpha}(p(x)) + \phi_{\alpha}(q(x))\text{.}$ To show that multiplication is preserved under the map $\phi_{\alpha}\text{,}$ observe that

\begin{align*} \phi_{\alpha} (p(x) ) \phi_{\alpha} (q(x)) & = p( \alpha ) q(\alpha)\\ & = \left( \sum_{i = 0}^n a_i \alpha^i \right) \left( \sum_{i = 0}^m b_i \alpha^i \right)\\ & = \sum_{i = 0}^{m + n} \left( \sum_{k = 0}^i a_k b_{i - k} \right) \alpha^i\\ & = \phi_{\alpha} (p(x) q(x)). \end{align*}

The map $\phi_{\alpha} : R[x] \rightarrow R$ is called the evaluation homomorphism at $\alpha\text{.}$
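As a sanity check, the homomorphism property can be confirmed numerically. Here is a sketch (the helper names `evaluate` and `conv` are my own) checking $\phi_{\alpha}(p(x) q(x)) = \phi_{\alpha}(p(x)) \, \phi_{\alpha}(q(x))$ in ${\mathbb Z}[x]\text{:}$

```python
def evaluate(p, alpha):
    """phi_alpha: send the coefficient list [a_0, ..., a_n] to p(alpha)."""
    return sum(a * alpha**i for i, a in enumerate(p))

def conv(p, q):
    """Coefficient list of the product p(x)q(x)."""
    c = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            c[i + j] += a * b
    return c

# p(x) = 3 + 2x^3, q(x) = 2 - x^2 + 4x^4, evaluated at alpha = 5
p, q, alpha = [3, 0, 0, 2], [2, 0, -1, 0, 4], 5
assert evaluate(conv(p, q), alpha) == evaluate(p, alpha) * evaluate(q, alpha)
```

Evaluating the product polynomial and multiplying the two evaluations collect the same terms $a_k b_{i-k} \alpha^i\text{,}$ which is exactly the computation in the display above.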