## Math Genius: Finding the maximum value of $\int_0^1 f^3(x)\,dx$

Find the maximum value of $$\int_0^1 f^3(x)\,dx$$ given that $$-1 \le f(x) \le 1$$ and $$\int_0^1 f(x)\,dx = 0$$.

I could not find a way to solve this problem. I tried to use the Cauchy–Schwarz inequality but could not proceed further:

$$\int_0^1 f(x) \cdot f^2(x)\,dx \le \sqrt{\left(\int_0^1 f^4(x)\,dx\right) \left( \int_0^1 f^2(x)\,dx\right)}$$

Any hints/solutions are appreciated.

Let $$f^+(x)=\max\{0,f(x)\}$$, $$f^-(x)=\max\{0,-f(x)\}$$, and set $$A^+=\{x\mid f^+(x)>0\}$$, $$A^-=\{x\mid f^-(x)>0\}$$; then $$A^+\cap A^- = \emptyset$$ and
$$\int_{A^+}f^+(x)=\int_{A^-}f^-(x)=a$$ for some $$a\ge 0$$, by $$\int_{0}^{1}f(x)=0.$$

We have $$a \le m(A^+)$$ and $$a\le m(A^-)$$, since $$0\le f^\pm\le 1$$.

We now want to find the maximum of
$$\int_{0}^{1}f^3(x)=\int_{A^+}f^+(x)^3-\int_{A^-}f^-(x)^3.$$

So we just need to find the maximum of $$\int_{A^+}f^+(x)^3$$ and the minimum of $$\int_{A^-}f^-(x)^3$$.

For the first term, we have $$f^+(x)\le 1$$, so $$f^+(x)^3\le f^+(x)$$;
hence $$\int_{A^+}f^+(x)^3\le \int_{A^+}f^+(x) = a.$$
For the second term, we have $$\frac{\int_{A^-}f^-(x)^3}{m(A^-)}\ge \left(\frac{\int_{A^-}f^-(x)}{m(A^-)}\right)^3=\left(\frac{a}{m(A^-)}\right)^3$$ (you can prove it by Hölder’s inequality).

So we have
$$\int_{0}^{1}f^3(x)=\int_{A^+}f^+(x)^3-\int_{A^-}f^-(x)^3\le a-\frac{a^3}{m(A^-)^2}\le a-\frac{a^3}{(1-m(A^+))^2}\le a-\frac{a^3}{(1-a)^2}.$$
Since $$2a=\int_{A^-}f^-+\int_{A^+}f^+\le 1$$, we have $$a\le 1/2$$. By a simple computation, $$a-\frac{a^3}{(1-a)^2}\le \frac{1}{4}$$ for $$a\in[0,1/2]$$, with equality when $$a=\frac{1}{3}$$.
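One way to make the “simple computation” explicit (a completing step, not in the original answer): for $$a\in[0,1/2]$$,
$$\frac{1}{4}-\left(a-\frac{a^3}{(1-a)^2}\right)=\frac{\frac14(1-a)^2-a(1-a)^2+a^3}{(1-a)^2}=\frac{(1-3a)^2}{4(1-a)^2}\ \ge\ 0,$$
with equality exactly at $$a=\frac{1}{3}$$.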

To show that $$\int_{0}^{1}f(x)^3$$ can attain $$\frac{1}{4}$$, consider the function $$f(x)=1$$ for $$0\le x\le \frac{1}{3}$$ and $$f(x)=-\frac{1}{2}$$ for $$\frac{1}{3}<x\le 1$$.
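A quick sanity check of this extremal step function (a Python sketch of my own, not part of the original answer): exact rational arithmetic confirms the constraint $$\int_0^1 f=0$$ and the value $$\int_0^1 f^3=\frac{1}{4}$$.

```python
from fractions import Fraction

# f = 1 on [0, 1/3] and f = -1/2 on (1/3, 1]; both integrals reduce to
# weighted sums over the two pieces of the partition.
third = Fraction(1, 3)
mean = third * 1 + (1 - third) * Fraction(-1, 2)          # integral of f
cube = third * 1**3 + (1 - third) * Fraction(-1, 2)**3    # integral of f^3
print(mean, cube)  # -> 0 1/4
```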

We solve the problem via approximation by simple functions on uniform partitions of $$[0,1]$$. Consider a partition of $$[0,1]$$ into $$n$$ parts with coefficients $$\alpha_i$$. Then the conditions on this simple function correspond to the conditions on the coefficients:
$$\sum_{i=1}^n\alpha_i=0, \quad -1\leq\alpha_i\leq 1,\ 1\leq i\leq n.$$
Similarly, the objective function becomes
$$F(\alpha) = \frac{1}{n}\sum_{i=1}^n\alpha_i^3.$$
Some numerical experiments inform the solution to this problem, but I doubt it is that difficult to solve using symmetry and Lagrange multipliers or something else. I believe that for a partition into $$n$$ intervals, the optimal coefficients are given by $$\alpha_1=1$$ and $$\alpha_i = -1/(n-1)$$ for $$i\ge 2$$, or any permutation of the indices. For $$n>2$$, the objective function then gives us
$$F(\alpha) = \frac{1}{n}\left(1-(n-1)\frac{1}{(n-1)^3}\right) = \frac{n-2}{(n-1)^2},$$
which attains a maximum value of $$1/4$$ at $$n=3$$.
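The conjectured optimum is easy to probe numerically; the sketch below (my own, assuming the conjectured coefficient pattern $$\alpha_1=1$$, $$\alpha_i=-1/(n-1)$$) evaluates $$F$$ over a range of $$n$$:

```python
# Evaluate F for the conjectured optimal coefficient pattern on an
# n-cell uniform partition and locate the best n.
def F(n):
    alphas = [1.0] + [-1.0 / (n - 1)] * (n - 1)
    assert abs(sum(alphas)) < 1e-12          # mean-zero constraint holds
    return sum(a**3 for a in alphas) / n

values = {n: F(n) for n in range(2, 20)}
best = max(values, key=values.get)
print(best, values[best])  # n = 3 gives 1/4
```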

Simple functions on uniform partitions are dense in the space of simple functions, and simple functions are dense in $$L^1(0,1)$$. The functional is continuous on the considered domain, so the supremum over a dense subset equals the supremum over the domain, which is a subset of $$L^1(0,1)$$.

Hint: you may apply the Euler–Lagrange equations to the Lagrangian $$F(x,z,p)=z^3$$, considering the functional $$\mathscr{F}(f)=\int_0^1 F(x,f(x),f'(x))\,dx$$.

## Math Genius: Inequality for joint probabilities of dependent random vectors

Let $$X$$ and $$Y$$ be two dependent random vectors in $$\mathbb{R}^d$$, such that $$X\neq Y$$ with probability 1, whose joint probability measure has density $$\mu(x,y)$$ with respect to the Lebesgue measure. For measurable sets $$A$$ and $$B$$, does the inequality
$$\mathbb{P}(X-Y \in A,\ Y\in B) \leq \sup_{t\in B}\mathbb{P}(X \in A+t)$$
hold true? Herein, operations are meant elementwise and $$A+t:=\{x+t : x \in A\}$$.

I was thinking to go through something like:

$$\mathbb{P}(X-Y \in A,\ Y\in B)= \int_{B}\int_{A+y}\mu(x,y)\,dx\,dy\\ \leq \sup_{t\in B}\int_{B}\int_{A+t}\mu(x,y)\,dx\,dy\\ =\sup_{t\in B}\int_{A+t}\int_B\mu(x,y)\,dy\,dx\\ \leq \sup_{t\in B}\int_{A+t}\mu(x)\,dx\\ =\sup_{t\in B}\mathbb{P}(X \in A+t)$$

where $$\mu(x)$$ denotes the marginal density of $$X$$, but I have doubts about the second and third lines; I’m not sure they’re correct. I pass from the third to the fourth line by using the fact that $$\mu(x,y)$$ is nonnegative and $$\int_B\mu(x,y)\,dy \leq \int_{\mathbb{R}^d}\mu(x,y)\,dy=\mu(x)$$.
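Not from the thread, but one way to probe the doubtful second line is a discrete analogue (a pmf in place of a density, so it is only suggestive for the stated continuous setting): concentrate the joint mass on the band $$X = Y + 1$$.

```python
# Discrete probe (my own construction): Y uniform on {0,...,9} and
# X = Y + 1 exactly, so X != Y always. A is a set of differences,
# B a set of Y-values.
vals = range(10)
B = set(vals)
A = {1}
pmf = {(y + 1, y): 1.0 / 10 for y in vals}   # joint pmf of (X, Y)

# Left side: P(X - Y in A, Y in B).
lhs = sum(p for (x, y), p in pmf.items() if (x - y) in A and y in B)

# Right side: sup over t in B of P(X in A + t), via the X-marginal.
marg_x = {}
for (x, y), p in pmf.items():
    marg_x[x] = marg_x.get(x, 0.0) + p
rhs = max(sum(marg_x.get(a + t, 0.0) for a in A) for t in B)
print(lhs, rhs)  # lhs = 1.0, rhs = 0.1
```

Here the left side is $$1$$ while the right side is $$1/10$$: the inner domain $$A+y$$ moves with $$y$$, and no single shift $$t$$ captures all of the mass, which is exactly what makes the step from the first to the second line suspicious.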

## Math Genius: References for variants of Jensen inequality

I am looking for a reference for the following claim:

Let $$X$$ be a probability space, and let $$g:X \to [0,\infty)$$ be in $$L^1(X)$$.
Let $$\phi:[0,\infty) \to [0,\infty)$$ be convex and strictly convex on $$[a,\infty)$$, for some $$a \in (0,\infty)$$, and suppose that $$\int_X \phi\circ g=\phi\left(\int_X g\right)$$.

Then $$\int_X g \in [a,\infty)$$ implies that $$g$$ is constant a.e., and $$\int_X g \in (0,a]$$ implies that $$g \le a$$ a.e.

I tried looking at “Real Analysis” by Royden and “Real and Complex Analysis” by Rudin, but couldn’t find such results (not even in the exercises).

I guess I need to look at some book on convex analysis, but I am not familiar with the literature.

Is there any book (or paper) which contains such variants of Jensen?

## Math Genius: A question from Stein’s book, Harmonic Analysis, oscillatory integral of the second kind.

My question is about a claim in Stein’s book “Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals”, page 379. It says that
$$|K_{\lambda}(\xi,\eta)|\leq A_N(1+\lambda|\xi-\eta|)^{-N}, \quad N>0,$$
where
$$K_{\lambda}(\xi,\eta)=\int_{\mathbb{R}^n} e^{i\lambda[\Phi(x,\xi)-\Phi(x,\eta)]}(\dot{D}_x)^N[\psi(x,\xi)\bar{\psi}(x,\eta)]\,\mathrm{d}x,$$
$$\dot{D}_x$$ is the transpose of the operator
$$D_x=[i\lambda \Delta(x,\xi,\eta)]^{-1}\nabla_{x}^{a(x)} =[i\lambda \Delta(x,\xi,\eta)]^{-1}\langle a(x),\nabla_x\rangle,\quad a(x)\in \mathbb{R}^n,$$
and where $$\Delta(x,\xi,\eta)=\nabla_x^{a(x)}[\Phi(x,\xi)-\Phi(x,\eta)]$$. It is also supposed that $$\psi$$ is a fixed smooth function of compact support (a “cut-off” function), that the phase $$\Phi$$ is real-valued and smooth, and that on the support of $$\psi$$ the Hessian of $$\Phi$$ is nonvanishing.

The symbols may be a little complex; I hope I explained them clearly. You can ask me for more details or see Stein’s book.

Now, from the book (by using a partition of unity) I can get that $$\Delta(x,\xi,\eta)\geq c|\xi-\eta|$$ for $$(\xi,\eta)\in\mathrm{supp}\,K_{\lambda}$$,
and I have computed that
$$\dot{D}_x g(x)=-\sum_{k=1}^{n}\frac{\partial}{\partial x_k}\left[\frac{a_k(x)\,g(x)}{i\lambda \Delta(x,\xi,\eta)}\right].$$
Then I tried to prove this claim by directly estimating
$$\int_{\mathbb{R}^n} \left|(\dot{D}_x)^N[\psi(x,\xi)\bar{\psi}(x,\eta)]\right|\mathrm{d}x.$$
It seemed a promising approach, but I failed. For convenience, proving the case $$N=2$$ is enough. Can anyone help me? Thanks a lot in advance.

## Math Genius: Wirtinger’s inequality variation

If $$f \in C^1[0,1]$$ with $$f'(0) = f(1) = 0$$, then $$\|f\|_2\leq\frac{2}{\pi}\|f'\|_2.$$

## Elaboration:

Consider the Sturm–Liouville operator $$A: D \longrightarrow L^2(0,1)$$ whose domain is
$$D = \{f \in C^1[0,1]: f'' \in L^2(0,1),\ f'(0) = f(1) =0\}$$
and
$$Af(x) = f''(x)-\lambda f(x), \quad \lambda \in \mathbb{R}, \ x \in [0,1].$$

The eigenfunctions of $$A$$, $$\phi_n(x) = \sqrt{2} \cos\left( \frac{(2n-1)\pi}{2}x\right), \ n =1,2,\dots,$$ form an orthonormal basis of $$L^2(0,1)$$.

Then for an $$f \in C^1[0,1]$$ with $$f'(0) = f(1) = 0$$ we have:

$$f(x) = \sum_{n=1}^\infty b_n \sqrt{2} \cos\left( \frac{(2n-1)\pi}{2}x\right)$$

Now it’d be very nice if
$$f'(x) = \sum_{n=1}^\infty a_n \sqrt{2} \sin\left( \frac{(2n-1)\pi}{2}x\right) \tag{*}$$
so that, by integrating both sides,
$$\int_1^x f'(s)\,\mathrm{d}s = \sum_{n=1}^\infty a_n \sqrt{2} \int_1^x\sin\left( \frac{(2n-1)\pi}{2}s\right)\mathrm{d}s\\ f(x) = \sum_{n=1}^\infty \frac{-2a_n}{\pi(2n-1)}\sqrt{2}\cos\left( \frac{(2n-1)\pi}{2}x\right)$$
and thus, by using Parseval’s theorem:
$$\|f\|_2^2 = \sum_{n=1}^\infty \frac{4a^2_n}{\pi^2(2n-1)^2} \leq \frac{4}{\pi^2}\sum_{n=1}^\infty a_n^2 = \frac{4}{\pi^2}\|f'\|^2_2$$
and therefore:
$$\|f\|_2 \leq \frac{2}{\pi} \|f'\|_2.$$
Is equation $$(*)$$ (or some variation of it) true and why?

In other words, can the Fourier series expansion of $$f$$ be term by term differentiated and why?

Note that after changing the domain $$D$$ to $$D_1 = \{f \in C^1([0, 1]) \mid f'' \in L^2([0, 1]),\ f'(1) = f(0) = 0\},$$
the Sturm–Liouville theorem implies that $$\{ψ_n(x) \mid n \in \mathbb{N}_+\}$$ is also an orthonormal basis of $$L^2(0, 1)$$, where $$ψ_n(x) = \sqrt{2} \sin\left( \dfrac{1}{2} (2n-1)π x \right)$$; thus there exists a sequence of constants $$\{a_n\}$$ such that $$f'(x) = \sum_{n = 1}^∞ a_n \sqrt{2} \sin\left( \frac{1}{2} (2n-1)π x \right)$$
if $$f \in C^2([0, 1])$$. But for any $$f \in C^1([0, 1])$$, there exists a sequence of functions $$\{f_n\} \subseteq C^2([0, 1])$$ such that $$f_n'$$ converges uniformly to $$f'$$ and $$\lim\limits_{n → ∞} f_n(0) = f(0)$$, so this suffices for the proof of the inequality.

Actually, there is an identity:

Proposition: If $$f \in C^1([0, 1])$$ satisfies $$f'(0) = f(1) = 0$$, then
$$\frac{4}{π^2} \int_0^1 (f'(x))^2 \,\mathrm{d}x - \int_0^1 (f(x))^2 \,\mathrm{d}x = \int_0^1 \left( \frac{2}{π} f'(x) + f(x) \tan\left( \frac{π}{2} x \right) \right)^2 \mathrm{d}x.$$

Proof: For $$0 < δ < 1$$,
$$\begin{gather*} \int_0^{1-δ} \left( \frac{2}{π} f'(x) + f(x) \tan\left( \frac{π}{2} x \right) \right)^2 \mathrm{d}x\\ = \frac{4}{π^2} \int_0^{1-δ} (f'(x))^2 \,\mathrm{d}x + \frac{4}{π} \int_0^{1-δ} f(x) f'(x) \tan\left( \frac{π}{2} x \right) \mathrm{d}x + \int_0^{1-δ} (f(x))^2 \tan^2\left( \frac{π}{2} x \right) \mathrm{d}x, \tag{1} \end{gather*}$$
and
$$\begin{align*} &\mathrel{\phantom{=}} \int_0^{1-δ} f(x) f'(x) \tan\left( \frac{π}{2} x \right) \mathrm{d}x = \frac{1}{2} \int_0^{1-δ} \tan\left( \frac{π}{2} x \right) \mathrm{d}\bigl((f(x))^2\bigr)\\ &= \frac{1}{2} \left. (f(x))^2 \tan\left( \frac{π}{2} x \right) \right|_0^{1-δ} - \frac{π}{4} \int_0^{1-δ} (f(x))^2 \sec^2\left( \frac{π}{2} x \right) \mathrm{d}x\\ &= \frac{1}{2} (f(1-δ))^2 \tan\left( \frac{π}{2} (1-δ) \right) - \frac{π}{4} \int_0^{1-δ} (f(x))^2 \sec^2\left( \frac{π}{2} x \right) \mathrm{d}x. \end{align*}$$
Since $$\tan^2 α - \sec^2 α = -1$$, this gives
$$\begin{gather*} (1) = \frac{4}{π^2} \int_0^{1-δ} (f'(x))^2 \,\mathrm{d}x - \int_0^{1-δ} (f(x))^2 \,\mathrm{d}x + \frac{2}{π} (f(1-δ))^2 \tan\left( \frac{π}{2} (1-δ) \right). \tag{2} \end{gather*}$$
Note that as $$δ → 0^+$$,
$$f(1-δ) = -\int_{1-δ}^1 f'(x) \,\mathrm{d}x \sim -f'(1)\,δ, \quad \tan\left( \frac{π}{2} (1-δ) \right) = \cot\left( \frac{π}{2} δ \right) \sim \frac{2}{πδ},$$
so the boundary term in (2) is $$O(δ)$$, and letting $$δ → 0^+$$ yields
$$\int_0^1 \left( \frac{2}{π} f'(x) + f(x) \tan\left( \frac{π}{2} x \right) \right)^2 \mathrm{d}x = \frac{4}{π^2} \int_0^1 (f'(x))^2 \,\mathrm{d}x - \int_0^1 (f(x))^2 \,\mathrm{d}x.$$
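The identity can also be checked numerically; the sketch below (my own) uses the test function $$f(x)=1-x^2$$, which satisfies $$f'(0)=f(1)=0$$, and a midpoint rule so that the grid never touches the endpoint $$x=1$$ where $$\tan$$ blows up (the integrand itself stays bounded there).

```python
import math

# Test function f(x) = 1 - x^2 with f'(0) = f(1) = 0 (my choice).
def f(x):  return 1 - x * x
def fp(x): return -2 * x

N = 20000
h = 1.0 / N
xs = [(k + 0.5) * h for k in range(N)]  # midpoints, so x < 1 always

# Right-hand side of the proposition as written above.
lhs = sum((2 / math.pi * fp(x) + f(x) * math.tan(math.pi * x / 2)) ** 2
          for x in xs) * h
# Left-hand side: (4/pi^2) * ||f'||^2 - ||f||^2.
rhs = sum(4 / math.pi**2 * fp(x) ** 2 - f(x) ** 2 for x in xs) * h
print(lhs, rhs)  # both close to 0.007
```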

Is equation (∗) (or some variation of it) true and why?

I answered this question for general functional series here. Maybe for a Fourier series the conditions sufficient for term-by-term differentiation are weaker.

## Math Genius: An inequality of Alternating Series

We have
$$\sum_{n=1}^{\infty}\frac{(-1)^n \sin(t \log n)}{n^{\sigma}}=0.$$

$$\sum_{n=1}^{\infty}\frac{(-1)^n \sin(t \log n)}{n^{\sigma}}\leq\sum_{n=1}^{\infty}\frac{(-1)^n}{n^{\sigma}},$$ since
$$\sin(t \log n) \leq 1$$. Is this inequality correct?

This inequality is not correct. Indeed, $$\sin(t \ln n) \leq 1$$; that is, $$1$$ is always at least as large as $$\sin(t \ln n)$$. But because of the $$(-1)^n$$, the sides flip which is bigger. Take a simpler example. Clearly, $$2 \leq 3$$. But let’s multiply both sides by $$(-1)^n$$. Then we have $$(-1)^n\, 2$$ and $$(-1)^n\, 3$$. If $$n=0$$, then $$2<3$$, good! But if $$n=1$$, then $$-3<-2$$ and the sides have ‘flipped’. If $$n=2$$, we have ‘flipped’ back to $$2<3$$, and so on and so forth forever. This is before you even try to add up these ‘flip-flop’ terms with the summation.

Inequalities like these with alternating series simply aren’t well behaved because of this very phenomenon. This is why we try to work with the absolute value of the summation/summand, because then inequalities will be better behaved.
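The flip-flop phenomenon is easy to see numerically. In the sketch below, $$t$$ and $$\sigma$$ are arbitrary sample values of my choosing; every index at which the termwise comparison reverses turns out to be odd, exactly as the sign-flip argument predicts.

```python
import math

# Compare the terms of the two series from the question, index by index.
# sin(t*log n) <= 1 holds termwise, yet multiplying by (-1)^n reverses
# the comparison whenever n is odd.
t, sigma = 14.13, 0.5   # arbitrary sample values, not from the question
flipped = []
for n in range(1, 20):
    lhs = (-1) ** n * math.sin(t * math.log(n)) / n ** sigma
    rhs = (-1) ** n / n ** sigma
    if lhs > rhs:
        flipped.append(n)
print(flipped)  # only odd indices appear
```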

## Math Genius: A math inequality I cannot prove

I am trying to prove an inequality. I used R to run simulations with different $$\alpha>0$$ and a sequence of $$n$$; the simulations all show that the LHS is smaller than the RHS, but I cannot prove it analytically. Can anyone help? Thanks!

$$\sum_{j=1}^n \frac{n}{(j^{2\alpha+1}+n)^2}\leq n^{\frac{-2\alpha}{2\alpha+1}},\quad \alpha>0$$

I attach the code:

```r
simulation <- function(alpha) {
  result <- NULL
  for (n in 1:100) {
    j <- 1:n
    denom <- (j^(2 * alpha + 1) + n)^2
    result <- c(result, sum(n / denom) - n^(-2 * alpha / (2 * alpha + 1)))
  }
  sum(result > 0)
}
sapply(seq(0.1, 10, by = 0.1), simulation)  # all zero
```

Not a full solution.

Here is an idea which you might want to exploit.

Consider the function $$f(x) = \frac{n}{(x + n)^2}$$. We see that $$f(x)$$ is monotonically decreasing, convex, approaching $$0$$ as $$x \to \infty$$, and considerably “smooth”.

Now interpret $$\frac{n}{(j^{2\alpha+1}+n)^2}$$ as probes of $$f(x)$$, albeit at non-equidistant points $$x \in \{1^{2\alpha+1}, 2^{2\alpha+1}, \cdots, n^{2\alpha+1}\}$$.

The crucial part now is to establish that you may bound the sum of these probes from above by a sum of equidistant probes, with proper scaling. I.e. take probes at the points $$x \in \{1,2,3, \cdots, n^{2\alpha+1}\}$$, where now you have taken $$n^{2\alpha+1}$$ probes (up to rounding) instead of $$n$$ many, so the scaling factor is $$\frac{n}{n^{2\alpha+1}}$$. So the idea is to establish first (if possible, or in modified form):

$$\sum_{j=1}^n \frac{n}{(j^{2\alpha+1}+n)^2}\leq \frac{n}{n^{2\alpha+1}} \sum_{j=1}^{n^{2\alpha+1}} \frac{n}{(j+n)^2}$$

Once this is done, you can continue with integral bounds:
$$\sum_{j=1}^n \frac{n}{(j^{2\alpha+1}+n)^2}\leq \frac{n}{n^{2\alpha+1}} \sum_{j=1}^{n^{2\alpha+1}} \frac{n}{(j+n)^2}\\ \leq \frac{n}{n^{2\alpha+1}} \int_{x=0}^{n^{2\alpha+1}} \frac{n}{(x+n)^2} \,\mathrm{d}x \\ =\frac{n^2}{n^{2\alpha+1}}\left[ \frac{1}{n} - \frac{1}{n+n^{2\alpha+1}}\right] \color{red}{\le} n^{\frac{-2\alpha}{2\alpha+1}},$$
where the second step uses $$\sum_{j=1}^M g(j)\le\int_0^M g(x)\,\mathrm{d}x$$ for a decreasing $$g\ge 0$$, and the last ($$\color{red}{\text{red}}$$) inequality seems to hold true for all $$n$$ and $$\alpha$$ as far as I can see (from simulations).
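A quick numerical probe of the final (red) inequality (my own sketch, not a proof; it uses the slightly larger bracket with $$1/n$$, which dominates the $$1/(n+1)$$ variant):

```python
# Check n^(2 - (2a+1)) * (1/n - 1/(n + n^(2a+1))) <= n^(-2a/(2a+1))
# over a grid of alpha and n values.
ok = True
for alpha in [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]:
    for n in range(1, 201):
        m = n ** (2 * alpha + 1)
        lhs = n ** 2 / m * (1 / n - 1 / (n + m))
        rhs = n ** (-2 * alpha / (2 * alpha + 1))
        ok = ok and lhs <= rhs + 1e-12   # tiny slack for float error
print(ok)  # True
```

Indeed $$\frac{n^2}{n^{2\alpha+1}}\cdot\frac1n = n^{-2\alpha} \le n^{-\frac{2\alpha}{2\alpha+1}}$$ for $$n\ge 1$$, which is consistent with what the probe reports.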