## Math Genius: Cubic Discriminant Uses

The discriminant of the cubic $$ax^3+bx^2+cx+d$$ is

$$\Delta := b^2c^2-4ac^3-4b^3d-27a^2d^2+18abcd$$

And I am aware that you can determine the nature of the roots of a cubic using the method shown below:

$$\Delta > 0$$: the equation has three distinct real roots

$$\Delta = 0$$: the equation has a repeated root and all its roots are real

$$\Delta < 0$$: the equation has one real root and two non-real complex conjugate roots

But I was wondering if one could determine whether a cubic has rational or integer roots, as you can do with the discriminant for quadratics, and if so what the method would be.

I have noticed that with the cubics I have checked: if the discriminant is a perfect square there are 3 integer solutions, although I have not checked many and I am not sure of the reasoning behind it.

Any help would be greatly appreciated.
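The observation about square discriminants is easy to experiment with numerically; a minimal Python sketch (function names are my own):

```python
from math import isqrt

def cubic_discriminant(a, b, c, d):
    """Discriminant of a*x^3 + b*x^2 + c*x + d."""
    return (b**2*c**2 - 4*a*c**3 - 4*b**3*d
            - 27*a**2*d**2 + 18*a*b*c*d)

def is_perfect_square(n):
    return n >= 0 and isqrt(n)**2 == n

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6: three integer roots
D = cubic_discriminant(1, -6, 11, -6)
print(D, is_perfect_square(D))  # 4 True

# x^3 - 3x + 1: square discriminant (81) but no rational roots,
# so a square discriminant alone cannot guarantee integer roots
print(cubic_discriminant(1, 0, -3, 1))  # 81
```

The second example already shows the observation cannot hold as stated: the discriminant can be a perfect square while the roots are irrational, which is exactly the situation the answer below describes.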

When a monic cubic has square discriminant but no rational roots, what we expect is real roots that can be written as (doubled) cosines, or sums of them.

$$x^3 + x^2 - 2x - 1$$
has $$2\cos\frac{2\pi}{7}\,,\quad 2\cos\frac{4\pi}{7}\,,\quad 2\cos\frac{8\pi}{7}$$

$$x^3 - 3x + 1$$
has $$2\cos\frac{2\pi}{9}\,,\quad 2\cos\frac{4\pi}{9}\,,\quad 2\cos\frac{8\pi}{9}$$

$$x^3 + x^2 - 4x + 1$$
has $$2\cos\frac{2\pi}{13} + 2\cos\frac{10\pi}{13}\,,\quad 2\cos\frac{4\pi}{13} + 2\cos\frac{6\pi}{13}\,,\quad 2\cos\frac{8\pi}{13} + 2\cos\frac{12\pi}{13}$$
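These claimed cosine roots are easy to confirm numerically; a small sketch for the first cubic:

```python
import math

def p(x):
    return x**3 + x**2 - 2*x - 1

# the three claimed roots of x^3 + x^2 - 2x - 1
roots = [2*math.cos(2*math.pi*k/7) for k in (1, 2, 4)]
for r in roots:
    assert abs(p(r)) < 1e-12

# their sum should be -1, the negated x^2 coefficient (Vieta)
assert abs(sum(roots) + 1) < 1e-12
print("all three cosines are roots")
```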

## Server Bug Fix: Strategy to calculate $\frac{d}{dx}\left(\frac{x^2-6x-9}{2x^2(x+3)^2}\right)$.

I am asked to calculate the following: $$\frac{d}{dx}\left(\frac{x^2-6x-9}{2x^2(x+3)^2}\right).$$
I simplify this a little bit by moving the constant factor out of the derivative:
$$\frac{1}{2}\,\frac{d}{dx}\left(\frac{x^2-6x-9}{x^2(x+3)^2}\right)$$
But, using the quotient rule, the resulting expressions get really unwieldy:
$$\frac{1}{2}\cdot\frac{(2x-6)\left(x^2(x+3)^2\right) -(x^2-6x-9)\left(2x(2x^2+9x+9)\right)}{\left(x^2(x+3)^2\right)^2}$$

I came up with two approaches (maybe three):

1. Split the terms up like this: $$\frac{1}{2}\left( \frac{(2x-6)\left(x^2(x+3)^2\right)}{\left(x^2(x+3)^2\right)^2} - \frac{(x^2-6x-9)\left(2x(2x^2+9x+9)\right)}{\left(x^2(x+3)^2\right)^2} \right)$$
so that I can simplify the left term to $$\frac{2x-6}{x^2(x+3)^2}.$$
Taking this approach, the right term still doesn’t simplify nicely, and I struggle to combine the two terms into one fraction at the end.

2. The brute-force method: just expand all the expressions in the numerator and denominator, and add/subtract monomials of the same order. This definitely works, but I feel like a stupid robot doing it.

3. The unofficial third method: grab a calculator or a computer algebra system and let it do the hard work.

Is there any strategy apart from my mentioned ones? Am I missing something in my first approach which would make the process go more smoothly?
I am looking for general tips to tackle polynomial fractions such as this one, not a plain answer to this specific problem.

Logarithmic differentiation can also be used to avoid long quotient rules. Take the natural logarithm of both sides of the equation, then differentiate:
$$\frac{y'}{y}=\frac{2x-6}{x^2-6x-9}-\frac{2}{x}-\frac{2}{x+3}$$
Then multiply both sides by $$y$$ and combine the fractions:
$$y'=\frac{-x^3+9x^2+27x+27}{x^3(x+3)^3}$$
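Whichever route is taken, a computer algebra system gives a quick sanity check on the final derivative (this is essentially the "unofficial third method"); a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
y = (x**2 - 6*x - 9) / (2*x**2*(x + 3)**2)

dy = sp.simplify(sp.diff(y, x))

# compare against the closed form (-x^3 + 9x^2 + 27x + 27) / (x^3 (x+3)^3)
expected = (-x**3 + 9*x**2 + 27*x + 27) / (x**3*(x + 3)**3)
assert sp.simplify(dy - expected) == 0
print(dy)
```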

HINT

To begin with, notice that
\begin{align*} x^{2} - 6x - 9 = 2x^{2} - (x^{2} + 6x + 9) = 2x^{2} - (x+3)^{2} \end{align*}
Thus it follows that
\begin{align*} \frac{x^{2} - 6x - 9}{2x^{2}(x+3)^{2}} = \frac{2x^{2} - (x+3)^{2}}{2x^{2}(x+3)^{2}} = \frac{1}{(x+3)^{2}} - \frac{1}{2x^{2}} \end{align*}
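Both steps of this hint are easy to confirm with sympy, and the split form then differentiates by the power rule alone; a small sketch:

```python
import sympy as sp

x = sp.symbols('x')
original = (x**2 - 6*x - 9) / (2*x**2*(x + 3)**2)
split = 1/(x + 3)**2 - 1/(2*x**2)

# the algebraic identity behind the hint
assert sp.simplify(original - split) == 0

# differentiating the split form needs only the power rule
d = sp.diff(split, x)
assert sp.simplify(d - (1/x**3 - 2/(x + 3)**3)) == 0
print(sp.together(d))
```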

In the general case, polynomial long division and the partial fraction method would suffice to solve this kind of problem.

Note that $$x^2-6x-9 = (x-3)^2 - 18$$. So after pulling out the factor of $$\frac12$$, it suffices to compute
$$\frac{d}{dx}\left(\frac{x-3}{x(x+3)}\right)^2$$
and
$$\frac{d}{dx}\left(\frac{1}{x(x+3)}\right)^2.$$
These only require finding the derivative of what’s inside, since the derivative of $$(f(x))^2$$ is $$2f(x)f'(x)$$.

For a final simplification, note that
$$\frac{1}{x(x+3)} = \frac{1}{3}\left(\frac{1}{x} - \frac{1}{x+3}\right),$$
so you’ll only ever need to take derivatives of $$\frac{1}{x}$$ and $$\frac{1}{x+3}$$ to finish, since the $$x-3$$ in the numerator of the first fraction will simplify with these to give an integer plus multiples of these terms.

As a general rule, partial fractions will greatly simplify the work required in similar problems.


## Math Genius: How to prove the Fekete / Markov-Lukasz theorem: a nonnegative univariate polynomial on [-1,1] can be decomposed according to even/odd-ness of degree

I’ve been a bit stuck on these few problems. I’m trying to prove the following statements, but I’m just not sure how to start:

Let $$f\in \mathbb{R}[x]$$ be a univariate polynomial of degree $$d$$ and assume that $$f\geq 0$$ on the domain $$[-1,1]$$. Show that

• if $$d$$ is even, then $$f$$ has decompositions:

(1) $$f=s_0 + (1-x^2)s_1$$ where $$s_0$$ and $$s_1$$ are sums of squares with $$\deg(s_0)\leq d$$ and $$\deg(s_1)\leq d-2$$

(2) $$f=(1+x)s'_0 + (1-x)s'_1$$ where $$s'_0$$ and $$s'_1$$ are sums of squares with $$\deg(s'_0),\deg(s'_1)\leq d$$

• if $$d$$ is odd, then $$f$$ has decompositions:

(1) $$f=(1+x)z_0 + (1-x)z_1$$ where $$z_0$$ and $$z_1$$ are sums of squares with $$\deg(z_0),\deg(z_1)\leq d-1$$

(2) $$f=z'_0 + (1-x^2)z'_1$$ where $$z'_0$$ and $$z'_1$$ are sums of squares with $$\deg(z'_0)\leq d+1$$ and $$\deg(z'_1)\leq d-1$$

I have a feeling I need to use the Goursat transform, which is defined as $$G(f)(x) = (1+x)^d f\left(\frac{1-x}{1+x}\right)$$ for a univariate polynomial $$f\in\mathbb{R}[x]$$ of degree $$d$$. I can (if necessary) use that $$f\geq 0$$ on $$[-1,1]$$ $$\iff$$ $$G(f)\geq 0$$ on $$\mathbb{R}_+$$.
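To get a feel for the Goursat transform, here is a small sympy computation on a toy example (my own illustration, not part of the intended proof):

```python
import sympy as sp

x = sp.symbols('x')

def goursat(f, d):
    """Goursat transform G(f)(x) = (1+x)^d * f((1-x)/(1+x))."""
    return sp.expand(sp.cancel((1 + x)**d * f.subs(x, (1 - x)/(1 + x))))

# f = 1 - x^2 is nonnegative exactly on [-1, 1] and has degree d = 2
G = goursat(1 - x**2, 2)
print(G)  # 4*x, which is nonnegative exactly on R_+
assert sp.simplify(G - 4*x) == 0
```

Here $$(1+x)^2\left(1-\left(\frac{1-x}{1+x}\right)^2\right) = (1+x)^2-(1-x)^2 = 4x$$, matching the claimed equivalence of nonnegativity on $$[-1,1]$$ and on $$\mathbb{R}_+$$.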

I can only answer the (1) versions:

So, assuming $$f\geq 0$$ on $$[-1,1]$$ $$\Rightarrow^{Q2.1}$$ $$G(f)(x) = (1+x)^d f\big(\frac{1-x}{1+x}\big)\geq0$$ on $$\mathbb{R}_+$$. Then from Polya’s theorem, we know we can rewrite $$G(f)$$ in the form
$$G(f) = s_0 + xs_1$$ for some SOS polynomials $$s_0,s_1$$ with $$\deg(s_0)\leq d$$ and $$\deg(s_1)\leq d-1$$. Now we substitute $$y:= \frac{1-x}{1+x}$$ (so $$x=\frac{1-y}{1+y}$$); as
\begin{align*} \lim_{x\to 0} \frac{1-x}{1+x} &= \frac{1}{1} = 1\\ \lim_{x\to \infty} \frac{1-x}{1+x} &= \lim_{x\to \infty} \frac{-x}{x} = -1 \end{align*}
$$y$$ stays within the domain $$[-1,1]$$. Substituting this into the Goursat transform, we obtain:
$$\big(\frac{2}{1+y}\big)^d f(y) = s_0 + \frac{1-y}{1+y}s_1$$

Now we distinguish the two cases of $$d$$:

ODD In this case we can write $$d=2m+1$$ for some $$m\in\mathbb{N}$$. Then
\begin{align*} 2^df(y) &= (1+y)^{2m+1}s_0 + (1-y)(1+y)^{2m} s_1 \\ \Rightarrow\quad f(y) &= (1+y)\Big(\frac{\big((1+y)^m\big)^2}{2^{2m+1}} s_0\Big) +(1-y)\Big(\frac{\big((1+y)^m\big)^2}{2^{2m+1}} s_1\Big) \end{align*}
And because $$s_0$$ is an SOS, it can be rewritten as $$\sum_{i=1}^{m_0} q_{0,i}^2$$ with $$\deg(q_{0,i})\leq d$$. And so
\begin{align*} \frac{\big((1+y)^m\big)^2}{2^{2m+1}} s_0 &= \frac{\big((1+y)^m\big)^2}{2^{2m+1}}\sum_{i=1}^{m_0} q_{0,i}^2\\ &= \sum_{i=1}^{m_0} \Big(\frac{(1+y)^m}{\sqrt{2^{2m+1}}}\, q_{0,i}\Big)^2 =: z_0 \end{align*}
which is again an SOS. The same holds for the factor multiplying $$s_1$$.
So for $$d$$ odd, $$f$$ is of the form
$$f(y) = (1+y)z_0 + (1-y)z_1$$
with $$z_0$$ and $$z_1$$ SOS.

EVEN In this case we can write $$d=2m$$. Thus
\begin{align*} 2^df(y) &= (1+y)^{2m}s_0 + (1-y)(1+y)^{2m-1} s_1 \\ \Rightarrow\quad f(y) &= \Big(\frac{\big((1+y)^m\big)^2}{2^{2m}} s_0\Big) +(1-y)(1+y)\Big(\frac{\big((1+y)^{m-1}\big)^2}{2^{2m}} s_1\Big)\\ &= \Big(\frac{\big((1+y)^m\big)^2}{2^{2m}} s_0\Big) +(1-y^2)\Big(\frac{\big((1+y)^{m-1}\big)^2}{2^{2m}} s_1\Big) \end{align*}

As in the $$d$$ odd case, $$\frac{\big((1+y)^m\big)^2}{2^{2m}} s_0$$ and $$\frac{\big((1+y)^{m-1}\big)^2}{2^{2m}} s_1$$ can be rewritten as sums of squares. Therefore, for $$d$$ even, we have that $$f$$ is of the form

$$f=s_0 + (1-x^2)s_1$$ with $$s_0$$ and $$s_1$$ SOS

The only thing I have not been able to show is that these $$s_0,s_1,z_0$$ and $$z_1$$ have the proper degrees. Ideas, anyone?

## Math Genius: Lower bound on the location of the root in $(0,1)$ of the trinomial $z^m + z^l - 1$ involving the degrees

I will begin with my original problem: let $$r : [0,1) \to (0,\infty)$$ be a continuous, increasing function such that $$\lim_{h\to 1} r(h) = \infty$$. In my case $$r$$ is explicitly given. For fixed $$x\in (0,1)$$ the set of $$h$$ fulfilling
$$x^{r(h) - r(0)} + x^{8r(h)} - 1 \geq 0$$
is of the form $$[0, h_0(x)]$$. One can see that $$\lim_{x\to 0} h_0(x) = 0$$. Actually, I want to say something about the speed of this convergence. For this purpose I tried to consider the problem the other way round.

That is, for fixed $$h\in [0,1)$$ one can try to bound the unique root $$x_0(h) := x_0(a,b) \in (0,1)$$ of

$$p_{a,b}(x) := x^b + x^a - 1$$

in terms of $$b := r(h) - r(0)$$ and $$a := 8r(h)$$. Indeed, assume $$B(r(h))$$ is a lower bound, that is, $$B(r(h)) \leq x_0(h)$$. Then $$B(r(h_0(x))) \leq x_0(h_0(x)) = x$$. If $$B$$ is nice enough and increasing, we will get information about the speed of convergence.

My question is how to bound $$x_0 (a,b)$$ from below.

I tried the following:
for $$a \leq a', b \leq b'$$ we have
$$p_{a,b} \geq p_{a',b'}$$
which implies that $$x_0(a,b) \leq x_0(a',b')$$. Thus I had the idea to define $$\lfloor s \rfloor_n := \max\{ \frac{k}{n} : \frac{k}{n} \leq s,\ k\in \Bbb{N}_0 \}$$. Then $$x_0(\lfloor a\rfloor_n, \lfloor b\rfloor_n) \leq x_0(a,b)$$. The advantage is that
$$p_{\lfloor a\rfloor_n, \lfloor b\rfloor_n}(z^n) = z^{n \cdot \lfloor b\rfloor_n} + z^{n \cdot \lfloor a\rfloor_n} - 1$$
is an ordinary polynomial in $$z$$; in particular it is a trinomial with coefficients $$1,1,-1$$.
The naive way to bound the roots of $$z^m + z^l - 1$$ from below is to search for an upper bound on the roots of $$-z^{\max(m,l)} + z^{\vert m-l\vert} + 1$$ (cf. https://en.wikipedia.org/wiki/Geometrical_properties_of_polynomial_roots#Bounds_on_all_roots) and take the inverse.
By Cauchy’s bound and Lagrange’s bound this would result in a lower bound $$x_0(\lfloor a\rfloor_n, \lfloor b\rfloor_n) \geq \left(\frac{1}{2}\right)^n \to 0$$, which does not help much. This is because the lower bound does not involve the degrees. One also sees that this is a very bad lower bound, because the root of $$z^m + z^l - 1$$ in $$(0,1)$$ goes to $$1$$ as $$m,l$$ grow large.

Is there a possibility to bound the root of $$z^m + z^l -1$$ in $$(0,1)$$ from below in terms of the degrees?
Any reference on bounding positive roots is also appreciated.

Edit: For $$h$$ small, $$b$$ will eventually be small. Martin R’s comment suggested the bound $$x_0(a,b) \geq 2^{-\frac{1}{b}}$$. This can be achieved by the following: for $$a-b \geq 0$$ the root $$x$$ will fulfil $$x^b(1 + x^{a-b}) = 1$$. Since $$x^{a-b} \in (0,1)$$ we have $$2x^b \geq x^b(1 + x^{a-b}) = 1$$, thus $$x^b \geq \frac{1}{2}$$ and so $$x \geq 2^{-\frac{1}{b}}$$.

For the idea formulated above this leads to $$r(h_0(x)) - r(0) \leq -\frac{\log(2)}{\log(x)}$$.
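The bound $$x_0(a,b) \geq 2^{-1/b}$$ is easy to test numerically against a bisection root-finder; a small sketch (function name mine):

```python
def root_in_unit_interval(a, b, tol=1e-12):
    """Unique root in (0,1) of p(x) = x^b + x^a - 1 (a, b > 0), by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid**b + mid**a - 1 < 0:
            lo = mid      # p is increasing on (0,1), so the root is to the right
        else:
            hi = mid
    return (lo + hi) / 2

# check x0(a, b) >= 2^(-1/b) for a few pairs with a >= b
for a, b in [(8.0, 1.0), (16.0, 2.0), (80.0, 0.5)]:
    x0 = root_in_unit_interval(a, b)
    assert x0 >= 2**(-1/b)
    print(a, b, round(x0, 6), round(2**(-1/b), 6))
```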

I do not know whether this will give the good bounds you are looking for.

Assuming that $$(m,l)$$ are large, the solutions are close to $$1$$. So let $$z=1-x$$ and expand as a series around $$x=0$$. This gives
$$(1-x)^m+(1-x)^l-1=1- (l+m)x+\frac{1}{2}\big((l-1)l+(m-1)m\big)x^2-\frac{1}{6}\big((l-2)(l-1)l+(m-2)(m-1)m\big)x^3+O\left(x^4\right)$$ Now, using series reversion, this gives, as an approximation,
$$x=\frac{1}{l+m}+\frac{(l-1)l+(m-1)m}{2(l+m)^3}+O\left(\frac{1}{(l+m)^5}\right)$$
Trying $$m=12$$ and $$l=34$$, this gives $$x=\frac{2743}{97336}$$, that is to say $$z=\frac{94593}{97336}\approx 0.9718$$, while the “exact” solution is $$0.9676$$.

For the reversed series I did not write the next term, which is quite messy, but it seems to me that if $$l > m$$ it is positive. So the $$x$$ will be a lower bound and the corresponding $$z$$ an upper bound.

Using the next (not written) term, for the worked example we would have $$x=\frac{3152531}{102981488}$$, that is to say $$z=\frac{99828957}{102981488}\approx 0.9694$$, which is much better.
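The two-term reversion can be compared against a numerically computed root; a small sketch for the worked example $$m=12$$, $$l=34$$ (function name mine):

```python
def trinomial_root(m, l, tol=1e-12):
    """Unique root in (0,1) of z^m + z^l - 1, by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid**m + mid**l - 1 < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

m, l = 12, 34
exact = trinomial_root(m, l)

# two-term series reversion: x ~ 1/(l+m) + ((l-1)l + (m-1)m)/(2(l+m)^3), z = 1 - x
x = 1/(l + m) + ((l - 1)*l + (m - 1)*m) / (2*(l + m)**3)
print(round(exact, 4), round(1 - x, 4))  # bisection root vs series approximation
assert abs((1 - x) - exact) < 0.005
```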

## Math Genius: Finding the remainder of a 6-degree polynomial and the polynomial itself, using some algebraic or graphical techniques. [duplicate]

In a six-degree polynomial, say $$f(x)$$, we have $$f(1)=1,\ f(2)=\frac12,\ f(3)=\frac13,\ f(4)=\frac14,\ f(5)=\frac15,\ f(6)=\frac16,\ f(7)=\frac17.$$ Find $$f(8)$$ and the polynomial $$f(x)$$.

This problem comes from the book of an Indian author. I tried solving the problem using algebra (definitely not by substituting values into the polynomial and solving for the seven unknown coefficients). I believe this question has a more elegant approach. I tried algebraic techniques, manipulating $$f(x)$$ by writing it in terms of a new polynomial, say $$h(x)$$, but in vain. Could one of the bright minds out there help me with this question? It would be a great help.

Consider the polynomial $$g(x) = xf(x) - 1$$. You have that $$g$$ is a polynomial of degree 7.
You also know that $$g(n) = nf(n) - 1 = n\cdot\frac{1}{n} - 1 = 0,$$
for $$n = 1, \ldots, 7$$. Thus, $$g(x)$$ factorises as
$$g(x)= A(x - 1)\cdots(x - 7), \quad (*)$$
where $$A$$ is some real constant.

(We got the above since $$g(x)$$ is a degree 7 polynomial.)

Now, all we need to do is determine $$A$$. This is easy because the definition of $$g(x)$$ tells us that $$g(0) = 0\cdot f(0) - 1 = -1$$. Substituting this in $$(*)$$ gives us:

$$-1 = A(-1)\cdots(-7) = -5040A$$
$$\implies A = \dfrac{1}{5040}.$$

Now, we can retrieve $$f(x)$$ as $$f(x) = \dfrac{1}{x}(g(x) + 1)$$ or

$$f(x) = \dfrac{1}{x}\left(\dfrac{1}{5040}(x - 1)\cdots(x - 7) + 1\right).$$

(Note that the constant term of the polynomial within the parentheses is $$0$$ and thus $$f(x)$$ will indeed turn out to be a polynomial.)

Calculating $$f(8)$$ is considerably easier, as we get

\begin{align}f(8) &= \dfrac{1}{8}\left(\dfrac{1}{5040}(8 - 1)\cdots(8 - 7) + 1\right)\\ &= \dfrac{1}{8}\left(\dfrac{1}{5040}(7)\cdots(1) + 1\right)\\ &= \dfrac{1}{8}(1 + 1) = \boxed{\dfrac{1}{4}}.\end{align}
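The whole construction is easy to verify with exact rational arithmetic; a small sketch:

```python
from fractions import Fraction

def f(x):
    """f(x) = (1/x) * ((1/5040)(x-1)...(x-7) + 1), reconstructed from g(x) = x f(x) - 1."""
    prod = Fraction(1)
    for k in range(1, 8):
        prod *= (x - k)
    return (prod / 5040 + 1) / x

# the interpolation conditions f(n) = 1/n for n = 1..7
for n in range(1, 8):
    assert f(Fraction(n)) == Fraction(1, n)

print(f(Fraction(8)))  # 1/4
```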


## Math Genius: When can we have the same number of (general) solutions as the projection?

$$\newcommand{\C}{\mathbb{C}}$$Let $$a_1,\dots,a_n$$ be parameters in $$\C$$ and let
$$f_1,\dots,f_n$$ be non-constant algebraically independent polynomials in $$n$$ variables over $$\C$$.

For a general choice of $$a_i$$ (i.e. $$(a_1,\dots,a_n)$$ belonging to a Zariski open set in the affine space $$\C^n$$) we know that the system

$$f_i = a_i \qquad i=1,\dots,n$$

is a complete intersection, i.e. there are finitely many, say $$k$$, solutions in $$\C^n$$ (I’m not sure what I should use to justify this; I guess some argument in elimination theory would).

My question is the following:

Suppose $$\pi: \C^n \to \C^2$$
is the projection onto, say, the first two coordinates.

Can I also claim that for a general choice of $$a_i$$ the projection of the solutions of

$$f_i = a_i \qquad i=1,\dots,n$$

under $$\pi$$ yields $$k$$ points in $$\C^2$$? The projection should be cut out by the elimination ideal of the ideal generated by the polynomials $$\langle f_i-a_i : i=1,\dots,n \rangle$$, i.e. its intersection with the polynomials in the first two variables. But can I also have control on the elimination ideal (I want it to have the same number of vanishing points as the original ideal) for general $$a_i$$?

If so, why?


## Math Genius: Polynomials, Rouche’s theorem and index of vector fields

In the proof of Rouche’s theorem that I saw in a book, there are two points I failed to understand or to prove myself (if you aren’t familiar with the theorem, please try to look at the two statements below and explain them regardless of the theorem):

1. “Consider the complex plane. It’s not difficult to show that the index of a curve $$C$$ with respect to a vector field $$v$$ is equal to the sum of indices of singular points, i.e., those at which $$v(z) = 0$$.”

2. “If $$f(z)$$ is a polynomial, and let $$v(z) = f(z)$$, where $$v$$ is a vector field, then the index of the singular point $$z_0$$ is equal to the multiplicity of the root $$z_0$$ of $$f$$.”

1. If by the index of the curve $$C$$ they mean the integral

$$\frac{1}{2\pi i} \int_C \frac{v'(z)}{v(z)}\, dz,$$

which is what I am guessing they mean, then this follows from the residue theorem applied to the function $$\frac{v'(z)}{v(z)}$$.

2. The index of a vector field at $$z_0$$ is defined to be the winding number you get by taking a small circle about $$z_0$$ of winding number $$1$$ and computing the winding number of its image about $$0$$. This can be computed as:

$$\frac{1}{2\pi i} \int_\gamma \frac{v'(z)}{v(z)}\, dz,$$

where $$\gamma$$ is a parametrization of a sufficiently small circle which winds around $$z_0$$ once. For $$v(z) = f(z)$$ and $$f(z_0) = 0$$, we can write $$f(z) = (z-z_0)^n g(z)$$, where $$n$$ is maximal, so that $$g(z_0) \neq 0$$. Then,

$$f'(z) = n(z-z_0)^{n-1} g(z) + (z-z_0)^n g'(z)$$

and

$$\frac{f'(z)}{f(z)} = \frac{n}{z-z_0} + \frac{g'(z)}{g(z)}.$$

The second term above is analytic in this region if we make $$\gamma$$ a small enough circle so that $$g(z)$$ is non-zero in the interior. Thus, when we integrate we just get that the index is

$$\frac{1}{2\pi i} \int_\gamma \frac{n}{z - z_0}\, dz = n,$$

which is the multiplicity of $$z_0$$.
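The index can also be checked numerically by approximating the contour integral of $$f'/f$$ with a Riemann sum; a small sketch for $$f(z) = z^2(z-1)$$ (function name mine):

```python
import cmath

def index_at(f, df, z0, radius=0.1, steps=20000):
    """Approximate (1 / 2 pi i) * contour integral of f'/f on a circle around z0."""
    total = 0.0 + 0.0j
    for k in range(steps):
        z = z0 + radius*cmath.exp(2j*cmath.pi*k/steps)
        z_next = z0 + radius*cmath.exp(2j*cmath.pi*(k + 1)/steps)
        total += df(z)/f(z) * (z_next - z)   # left-endpoint Riemann sum
    return total / (2j*cmath.pi)

# f(z) = z^2 (z - 1): index 2 at z = 0 and index 1 at z = 1
f = lambda z: z**2 * (z - 1)
df = lambda z: 3*z**2 - 2*z
print(abs(index_at(f, df, 0)))  # close to 2
print(abs(index_at(f, df, 1)))  # close to 1
```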

## Math Genius: To show that a polynomial has no rational roots.

Given a monic polynomial $$P(x) = x^n + a_1x^{n-1}+\ldots+a_n$$ with integer coefficients, I need to show that it has no rational roots (in this case integer roots) using the following facts:

1) $$n>1$$,

2) $$a_n=17$$,

3) $$1+a_1+\ldots+a_n \neq 0$$ and $$1-a_1+a_2-\ldots+(-1)^na_n \neq 0$$,

4) $$|a_m| \leq 15$$ for all $$m < n$$.

Using the rational root theorem and the first 3 conditions, I was able to rule out $$\pm 1$$ as roots among the four possible roots $$\pm 1, \pm 17$$. It is clear that I have to use the last condition to rule out $$\pm 17$$, but I am unable to do that. I may be missing something simple here, but any hints are welcome.

Since $$P$$ is monic and $$a_n=17$$, we have
$$P(17) = 17^n + a_1 17^{n-1}+a_2 17^{n-2}+\ldots+a_{n-1}17+17$$
Now use the fact that $$|a_m|\leq 15$$ for $$1\leq m\leq n-1$$:
$$|a_1 17^{n-1}+a_2 17^{n-2}+\ldots+a_{n-1}17+17|$$
$$\leq |a_1|\, 17^{n-1}+|a_2|\, 17^{n-2}+\ldots+|a_{n-1}|\,17+17$$
$$\leq 15(17^{n-1}+17^{n-2}+\ldots+17)+17$$
$$= \frac{1}{16}\left(17+15\cdot 17^n\right) < 17^n$$
In other words, $$|P(17)|$$ is strictly greater than $$0$$, and a similar argument holds for $$P(-17)$$.
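The estimate can be sanity-checked in the worst case, taking every middle coefficient equal to $$-15$$; a small sketch:

```python
# Worst case for the bound: P(x) = x^n - 15x^(n-1) - ... - 15x + 17.
# Even then P(17) should stay strictly positive.
def P17(n):
    middle = sum(-15 * 17**k for k in range(1, n))
    return 17**n + middle + 17

for n in range(2, 12):
    assert P17(n) > 0
    print(n, P17(n))
```

For $$n=2$$ this gives $$289 - 255 + 17 = 51 > 0$$, in line with the displayed inequality.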

Tagged : /

## Math Genius: Hermite polynomial Orthogonality where the input is an inner product of normal random variables

Let $$He_n(x)$$ be the normalized probabilists’ Hermite polynomial of degree $$n$$.
We know that
$$\int_{\mathbb{R}} He_n(x) He_m(x) \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\, dx = \delta_{m,n}.$$
Let $$Z \sim N(0,I_d)$$ and let $$\textbf{v}_1$$ and $$\textbf{v}_2$$ be fixed unit vectors in $$\mathbb{R}^d$$.
I want to show the following: for $$n \le m$$,
$$\mathbb{E}_{Z}[He_n(\textbf{v}_1^TZ) He_m(\textbf{v}_2^TZ)] = (\textbf{v}_1^T\textbf{v}_2)^{n}\delta_{n,m}.$$
I was able to check this is indeed the case for $$n \le m \le 3$$.
However, I am not sure how to prove the statement in general.
I feel that I need to use some known properties of the Hermite polynomials, but they have so many that I am not sure which ones apply here.
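Before attempting a proof, the identity can be checked to high precision with Gauss-Hermite quadrature, using the fact that $$(\textbf{v}_1^TZ, \textbf{v}_2^TZ)$$ is jointly Gaussian with correlation $$\rho = \textbf{v}_1^T\textbf{v}_2$$; a numpy sketch (function names mine):

```python
import numpy as np
from numpy.polynomial import hermite_e as herm
from math import factorial, sqrt, pi

def normalized_He(k, t):
    """Normalized probabilists' Hermite polynomial He_k / sqrt(k!)."""
    return herm.hermeval(t, [0]*k + [1]) / sqrt(factorial(k))

def expectation(n, m, rho, deg=60):
    """E[He_n(a) He_m(b)] for jointly standard normal a, b with correlation rho,
    via Gauss-Hermite quadrature for the weight exp(-x^2/2)."""
    x, w = herm.hermegauss(deg)
    A, C = np.meshgrid(x, x, indexing="ij")
    W = np.outer(w, w)
    B = rho*A + sqrt(1 - rho**2)*C   # b as a mix of a and an independent normal
    return float((W * normalized_He(n, A) * normalized_He(m, B)).sum() / (2*pi))

rho = 0.3
assert abs(expectation(2, 2, rho) - rho**2) < 1e-10   # equals rho^n when n = m
assert abs(expectation(1, 2, rho)) < 1e-10            # vanishes when n != m
print("identity holds for the tested cases")
```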


## Math Genius: In $R[x]$, $f=g \iff f(x)=g(x), \forall x \in R$

Let $R$ be an integral domain and $R[x]$ the polynomial ring over $R$. Let $f,g \in R[x]$ such that $\max(\deg f, \deg g) < \#R$. Show that $f=g \iff f(x)= g(x), \forall x \in R$.

$\bf Attempt:$ Since $f,g \in R[x]$, $$f(x)=a_0+a_1x +a_2x^2+ \dots + a_nx^n \\ g(x)=b_0+b_1x+b_2x^2+\dots+b_mx^m,$$ for some $a_i,b_j \in R$ (with $0\leq i \leq n$, $0\leq j \leq m$). WLOG, $n\le m$.

Assume that $f(x) = g(x)$ for all $x \in R$. Then $$a_0+a_1x +a_2x^2+ \dots + a_nx^n = b_0+b_1x+b_2x^2+\dots+b_mx^m.$$
In particular, if $x=0$, this implies $a_0 = b_0$.

So $$a_1x +a_2x^2+ \dots + a_nx^n =b_1x+b_2x^2+\dots+b_mx^m \\ (a_1-b_1)x +(a_2-b_2)x^2+ \dots + (a_n-b_n)x^n-b_{n+1}x^{n+1} -\dots -b_mx^m = 0.$$

After this point, I tried to make an argument that $n=m$, but ran into the problem that I could possibly have $x^p=x^q$ for all $x \in R$. So I can't claim that each of the coefficients is zero. I'm certain that I need to use the condition regarding the degrees of $f$ and $g$, but I'm not sure how.

Equivalently, $$h := f-g = 0 \iff h(x)=0,\ \forall x\in R.$$ If $$h\ne 0$$ had more roots $$r_i$$ than its degree then, by the Factor Theorem, it would be divisible by the nonassociate primes $$x-r_i$$, hence also divisible by their lcm = product, a polynomial of greater degree than $$h$$, a contradiction.

Remark  Note that $$x-r$$ is prime because $$R[x]/(x-r) \cong R$$ is a domain.  Furthermore

Lemma $$\color{#c00}{\rm nonassociate}$$ primes $$p_i\mid a \ \Rightarrow\ p_1\cdots p_n\mid a$$ [lcm = product]

Proof  Induct on $$n.$$ Clear if $$n=1.$$ By induction $$p_1\mid a,\ldots,p_{n-1}\mid a \Rightarrow a = p_1\cdots p_{n-1} a_1.$$ Since $$\color{#c00}{p_n\nmid p_i}$$ for $$i< n$$, we get $$p_n\mid p_1\cdots p_{n-1}a_1\Rightarrow p_n\mid a_1,$$ so $$a_1 = p_n a_2,$$ hence $$a = p_1\cdots p_n a_2.$$

Generally, $$D$$ is a domain $$\iff$$ every polynomial $$f(x)\neq 0\in D[x]$$ has at most $$\deg f$$ roots in $$D.$$ For the simple proof see my answer linked here, where I illustrate it constructively in $$\Bbb Z/m$$ by showing that, given any $$f(x)$$ with more roots than its degree, we can quickly compute a nontrivial factor of $$m$$ via a simple gcd computation. The quadratic case of this result is at the heart of many integer factorization algorithms, which factor $$m$$ by searching for a square root of $$1$$ that is $$\not\equiv \pm1$$ in $$\mathbb Z/m$$.
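The closing remark can be made concrete in $$\Bbb Z/15$$: the polynomial $$x^2-1$$ has four roots there, more than its degree, and a gcd of a root difference with $$15$$ exposes a nontrivial factor. A small sketch (my own illustration, not the linked proof):

```python
from math import gcd

m = 15
# roots of x^2 - 1 in Z/15: more than deg = 2, so Z/15 is not a domain
roots = [x for x in range(m) if (x*x - 1) % m == 0]
print(roots)  # [1, 4, 11, 14]

# pick two roots with r != +-s (mod 15); m divides (r-s)(r+s) but neither factor
r, s = 1, 4
factor = gcd(r - s, m)
assert 1 < factor < m
print(factor)  # 3
```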