Before I begin: I’ve read many posts discussing what $dx$ is in an integral, and none of them answer the question I’m about to ask. I am writing this because I don’t want my post to be labelled as a duplicate right away. Anyway, let the reader decide whether this is a duplicate or not.
This is the definition of a differential of a function in my textbook:
$$
\lim_{\Delta x \to 0} \frac{\Delta f}{\Delta x} = \frac{df}{dx} = f'(x).
$$
The textbook says that multiplying both sides by $dx$, we get
$$
df = f'(x)dx.
$$
I already have conceptual issues with this definition. In the textbook, it is emphasized that $\Delta x = dx$. But since the limit where $\Delta x \to 0$ is the fraction $\frac{df}{dx}$ by the above equation, how else am I supposed to interpret $dx$ other than as a $\Delta x$ which has arrived at $0$, so that $dx = 0$? I did a bit of research and found out that Leibniz (who originally conceived of a derivative as the above fraction) named $dx$ and $df$ “infinitesimals”. I don’t know what to make of them; they seem nonsensical to me. To me it seems that the above equation is saying that we multiply $f'(x)$ with some infinitesimally small $\Delta x$, which is equal to $dx$, and get $df$. How is multiplication by infinitesimals defined, if it even is?
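One way to see what the textbook is gesturing at, without resolving the philosophical question: numerically, the difference quotient approaches $f'(x)$ as $\Delta x$ shrinks, and $f'(x)\,\Delta x$ approximates the actual change $\Delta f$ ever more closely, while $\Delta x$ never has to reach $0$. A small sketch of this (my own illustration, with $f(x) = x^2$ chosen arbitrarily):

```python
# For small dx, (f(x+dx) - f(x)) / dx approaches f'(x), and the "differential"
# f'(x) * dx approximates the true change delta_f -- without dx ever being 0.
def f(x):
    return x ** 2  # example function; its derivative is f'(x) = 2x

x = 1.0
for dx in [0.1, 0.01, 0.001]:
    delta_f = f(x + dx) - f(x)   # actual change in f
    approx = 2 * x * dx          # f'(x) * dx, the "differential" df
    # difference quotient -> 2.0; approximation error shrinks like dx^2
    print(dx, delta_f / dx, delta_f - approx)
```

The error $\Delta f - f'(x)\,\Delta x$ shrinks like $(\Delta x)^2$, i.e. faster than $\Delta x$ itself, which is the precise sense in which $df = f'(x)\,dx$ is a good approximation.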
Anyway, the reason I’m focusing on $df$ right now is because my textbook uses it to define the indefinite integral. It says that differentiation is the inverse function of integration. In other words
$$
\int dF(x) = \int F'(x) \; dx = F(x) + C.
$$
My issue here is that I don’t understand the role of $dx$ in the integral. That $dF$ is equal to $F'(x)$ times $dx$, where $dx$ is an infinitesimal seems wholly nonsensical to me, as I don’t understand how multiplication by infinitesimals is defined (if it even is), as I’ve already said above.
Even worse is the fact that my textbook admits literal multiplication of $dx$ with $F'(x)$ with the following notation (taken from one of the exercises).
$$
\int dx = \int 1 \cdot dx.
$$
At first, I thought that I could just disregard $dx$ as a trivial notational convention (it marks the end of the integrand nicely), but it seems that this is sometimes not possible, as $dx$ plays a vital role in the integral, i.e. we actually use it in the calculation. One example of this is when we introduce a new variable $t$ (note that here $F'(x) = f(x)$).
$$
\int f(g(x))g'(x) \; dx = \int f(t) \; dt,
$$
where
$$
t = g(x), \quad dt = g'(x)\,dx, \quad F'(t) = f(t).
$$
We manipulate $dx$ as well, and hence I conclude that it can’t be thought of only as a trivial notational convention. Taking this into account, I am especially surprised that one of the answers in this post claimed that $dx$ is just a trivial notational convention in all cases. By the above example, I don’t see how that can be.
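Whatever $dx$ "is", the substitution rule itself is a checkable fact about numbers: $\int_a^b f(g(x))\,g'(x)\,dx = \int_{g(a)}^{g(b)} f(t)\,dt$. A quick numerical check (my own, with $f(t) = \sin t$ and $g(x) = x^2$ chosen arbitrarily), using a plain midpoint Riemann sum:

```python
import math

# Numerical check that substitution only relabels the integral:
# with t = g(x), both sides evaluate to the same number.
def riemann(h, a, b, n=100_000):
    """Midpoint Riemann sum of h over [a, b]."""
    w = (b - a) / n
    return sum(h(a + (i + 0.5) * w) for i in range(n)) * w

f = math.sin                  # integrand in t
g = lambda x: x * x           # substitution t = g(x)
dg = lambda x: 2 * x          # g'(x)

lhs = riemann(lambda x: f(g(x)) * dg(x), 0.0, 1.0)  # integral of f(g(x)) g'(x) dx
rhs = riemann(f, g(0.0), g(1.0))                    # integral of f(t) dt on [g(0), g(1)]
print(lhs, rhs)  # both come out near 1 - cos(1)
```

The symbolic bookkeeping $dt = g'(x)\,dx$ is exactly what makes the two sums agree.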
To sum up:

What exactly is $dx$? How can it be that $dx = \Delta x$ where $\Delta x \to 0$, but $dx \neq 0$? Is multiplication by infinitesimals even defined in standard analysis?

How can I define the integral in such a way, that $dx$ is trivial and I don’t need to calculate with it?

I think it would be neater to define the indefinite integral as the inverse function of derivation, not differentiation. Can I do that?
Thank you for all your answers in advance.
As many books say, we define the differential as $df = f'(x)\,dx$. This equation, standing alone, has no meaning. It is defined just to enable some algebraic manipulations on integrals and differential equations. But let’s look more thoroughly at what a differential is.
In my view, a differential is not an equation but a symbol that shows how the behaviour of $df$ tends to match the behaviour of $f'(x)\,dx$. In other words, for me a differential is the following statement:
$$df \to f'(x)\,dx \text{ when } dx \to 0$$
Infinitesimals were a simpler and more intuitive way to attack those problems, but they lack the mathematical formalism that we need to keep our math consistent. As far as I know there is a mathematical approach to infinitesimals called nonstandard analysis, but I don’t have a clue how this approach defines the differential.
Let’s now look at how differentials are used in integration. When we say that by the substitution $t = g(x)$ we get $\int f(g(x))g'(x) \; dx = \int f(t) \; dt$, we do so by saying that $dt = g'(x)\,dx$. Again, in my view it is in some sense wrong to say that $dt = g'(x)\,dx$, and it would be correct if we had written $dt \to g'(x)\,dx$ when $dx \to 0$. But there is a theorem that proves that the substitution does not change the final integral, but just transforms it into another equivalent integral. So we can “accept” that $dt = g'(x)\,dx$ just as a symbol, without the meaning of a real equation.
So now let’s answer your questions:
 When we refer to $dx$, we treat it as a quantity which tends to zero. In other words, $dx \to 0$. As far as multiplication is concerned, the way I handle differentials is not as standard algebraic quantities but as symbols that satisfy some “informal” equations (like $df = f'(x)\,dx$) in order to be used in problems that need a notion of differential (like integration and differential equations).
 You cannot take away the $dx$ symbol from the integration symbol (although sometimes people omit it, keep in mind that it should be there). If you look at the definition of a definite integral, you can see that $dx$ plays a vital role. And, of course, from the definition of the definite integral we can generalize and define the indefinite integral, where the same principles work.
 As far as I know, differentiation and derivation are the same thing.
To answer your first question, the infinitesimal is not defined in standard analysis.
After a little practice with integrals, the $dx$ will feel like an arcane bit of notation that serves no real purpose. When you get into differential equations, though, you will have to think about this differential operator again, and about whether the algebra that you do with it is, in fact, “legal.”
You can define the integral as antidifferentiation, but then what are the implications of the antiderivative?
I think it is easier to think of $dx$ as “small” but not actually infinitesimal. The integral is then the sum of finitely many small changes, rather than infinitely many infinitesimal changes.
In standard analysis, we start with the definite integral. The integral is defined as the area under the curve for $x$ in some interval $[a,b]$:
$$\int_a^b f(x) \, dx$$
We can partition the interval: $a = x_0 < x_1 < x_2 < \cdots < x_n = b$
And make a whole bunch of rectangles, each with a base of $(x_{i+1} - x_i)$ and height $f(x_i^*)$, where $x_i^*$ is some point in $[x_i, x_{i+1}]$
And then we sum the areas of these rectangles: $\sum\limits_{i=0}^{n-1} f(x_i^*)(x_{i+1} - x_i)$
How you choose $x_i^*$ will change the value of this sum. The true area must lie between the lower bound and the upper bound of this sum.
But, if the partition is allowed to be sufficiently fine, the upper bound and the lower bound approach the same value.
And that defines the integral.
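The squeeze described above is easy to watch numerically. A small sketch (my own, using $f(x) = x^2$ on $[0,1]$, which is increasing, so on each piece the left endpoint gives the minimum and the right endpoint the maximum):

```python
# Lower and upper Riemann sums for an increasing f on a uniform partition:
# as the partition gets finer, both bounds approach the same value.
def bounds(f, a, b, n):
    w = (b - a) / n
    xs = [a + i * w for i in range(n)]
    lower = sum(f(x) * w for x in xs)        # left endpoints: min on each piece
    upper = sum(f(x + w) * w for x in xs)    # right endpoints: max on each piece
    return lower, upper

for n in [10, 100, 1000]:
    lo, hi = bounds(lambda x: x * x, 0.0, 1.0, n)
    print(n, lo, hi)  # both bounds squeeze toward 1/3
```

Here the gap between the bounds is $(f(1) - f(0))/n$, so it goes to $0$ as the partition gets finer, and the common limit defines the integral. Note that every sum uses a finite, nonzero width $w$; no infinitesimal ever appears.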
The next step is to prove the fundamental theorem of calculus. The area under the curve equals the antiderivative.
If $F(x) = \int_a^x f(t) \, dt$
Then $F(x+h) - F(x) = \int_a^{x+h} f(t) \, dt - \int_a^{x} f(t) \, dt = \int_{x}^{x+h} f(t) \, dt$
If $f$ is continuous, then there is a $c \in (x, x+h)$ such that $f(c)$ equals the average value of $f$ on $[x, x+h]$:
$F(x+h) - F(x) = h f(c)$
As $h$ approaches $0$, $c$ gets squeezed to equal $x$:
$\lim_{h \to 0} \frac{F(x+h) - F(x)}{h} = F'(x) = f(x)$
This is just a sketch of the theory/proof, but it might feel more natural to you than nonstandard analysis.
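The sketch above can also be checked numerically: build the area function $F$ from Riemann sums and watch its difference quotient approach $f(x)$. A small illustration (my own, with $f(t) = \cos t$ chosen arbitrarily):

```python
import math

# The difference quotient of the area function F(x) = integral of cos(t)
# from 0 to x approaches f(x) = cos(x) as h shrinks -- the fundamental
# theorem of calculus, observed numerically.
def F(x, a=0.0, n=20_000):
    """Midpoint Riemann approximation of the integral of cos(t) over [a, x]."""
    w = (x - a) / n
    return sum(math.cos(a + (i + 0.5) * w) for i in range(n)) * w

x = 1.0
for h in [0.1, 0.01, 0.001]:
    # approaches cos(1) ~ 0.5403 as h shrinks
    print(h, (F(x + h) - F(x)) / h)
```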