Considering a Continuous Instantaneous
Calculus 1, Lectures 16A through 18A
Instead, they approached calculus in an intuitive way. Today, this intuitive method is called infinitesimal calculus. It is based on the concept of infinitesimal quantities, or just "infinitesimals" for short. These are quantities so small that they are smaller than any positive real number. In a sense, you can think of them as quantities of the form $\frac{1}{\infty}$.
But, first things first: there are no such real numbers! You cannot divide by infinity!
There is also no smallest positive real number!
This last statement is easy to prove. Given any positive real number $x$, the number $\frac{x}{2} > 0$, but $\frac{x}{2} < x$.
Moreover, if $0 < x < 1$, then $x^2$, $x^3$, etc… can be "much smaller than" $x$ itself (by many "orders of magnitude").
Using Infinitesimals
So if there are no such real numbers, how can they possibly be used? There are two approaches to answering this question.
- They can be made rigorous through the arduous process of studying the subject of non-standard analysis.
- They can be kept at an intuitive, but non-rigorous, level.
Most people don't have the stomach for approach #1. So instead, we use approach #2. Approach #2 also has the benefit of being a lot of fun (once you get used to it, at least)!
Part of the fun that arises from this approach is that calculus formulas can be derived without resorting to the use of limits. You could call this approach Calculus Sans Limits. I discussed this at the end of my previous blog post, "Differentiable Functions and Local Linearity".
A more serious benefit of Approach #2 is that it can give you insight into many applications of calculus. Indeed, this approach results in some of the main benefits that scientists, engineers, economists, etc… get out of learning calculus.
These benefits do not come so naturally to people trained to be pure mathematicians.
I myself was trained this way. It always bothered me when my physics teachers took this approach.
I would ask them, "how do you know you can do that?"
And they would respond, "because it makes sense!" And then they would look at me and ask, "what are you, a mathematician or something?"
I have learned over the years to appreciate their point of view. Again, it helps you get a lot of insight into the applications of calculus; and not just in differential calculus, but even more so in integral calculus.
Lecture 16A: Continuous Growth Rates, Errors, and Newton's Method
In the lectures, before getting into infinitesimal calculus, I spend some time nailing down the meaning of two things.
The first thing that needs to be made more clear is the interpretation of continuous growth rates. And the second thing that needs more clarity is what it means for a linear approximation to be "good". These two topics take up the first half of Lecture 16A.
Continuous (Instantaneous) Relative Growth Rates
I start by considering a situation where an investment grows by 100% for every year that goes by. In other words, the value of the investment doubles every year. The only situation where this might be somewhat realistic is for a newly-formed company whose value skyrockets in its first few years.
In this situation, if $1000 is invested at time $t = 0$, then the investment's value at an arbitrary time $t$ (in years) is $A(t) = 1000 \cdot 2^{t}$. If $k = \ln(2) \approx 0.693$, then $2 = e^{k}$. Therefore, we can also write $A(t) = 1000e^{kt}$. The quantity $k = \ln(2)$ is the continuous growth rate in this situation.
This is an instantaneous relative (percent) rate of growth. If the growth continued along a straight line rather than a concave-up exponential growth curve, it would grow by about 69.3% in one year.
To see this, note that $A'(t) = \ln(2) \cdot 1000e^{\ln(2)t} = \ln(2)A(t)$. Then, at any moment in time $t_0$, the tangent line approximation to $A$ at the point $(t_0, A(t_0))$ gives $A(t) \approx A(t_0) + \ln(2)A(t_0)(t - t_0)$. Hence, the relative change along the tangent line when $t - t_0 = 1$ is $\frac{A(t_0) + \ln(2)A(t_0) - A(t_0)}{A(t_0)} = \ln(2) \approx 0.693$.
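These numbers are easy to verify numerically. Here is a short Python sketch (my own check, not from the lecture) of the $1000 investment that doubles every year; `relative_rate` estimates the instantaneous relative growth rate with a small finite difference:

```python
import math

k = math.log(2)  # continuous growth rate, ln(2) ≈ 0.6931

def A(t):
    """Value of a $1000 investment that doubles every year."""
    return 1000 * math.exp(k * t)

# Doubling check: the value one year later is twice the current value
ratio = A(4) / A(3)

# Relative change along the tangent line over one unit of time is
# A'(t)/A(t) = k, estimated here with a small finite difference
h = 1e-8
relative_rate = (A(2 + h) - A(2)) / (h * A(2))
```

Both quantities come out as expected: the yearly ratio is 2, and the relative rate is about 0.693.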
This is described visually in the lecture embedded above and shown in the figure below.
Errors in Linear Approximations
What does it mean for a linear approximation $L$ to be a "good" approximation for a nonlinear function $f$ near $x = a$?
It means the error in the approximation goes to zero "rapidly" as $x$ approaches $a$.
By "rapidly", we mean that the error goes to zero faster than $x - a$ does as $x \to a$. This, in turn, is defined by requiring that $\frac{E(x)}{x - a} \to 0$ as $x \to a$. In other words, the top of this fraction goes to zero significantly faster than the bottom does.
But what is the error? In applied mathematics, the error in an approximation is always defined to be $E(x) = f(x) - L(x)$, the true value minus the approximate value.
An example will help. The example from the lecture is $f(x) = x^2$ and $a = 3$. The tangent line (linear) approximation of $f$ near $a = 3$ is $L(x) = f(3) + f'(3)(x - 3)$. Since $f'(x) = 2x$, this gives $L(x) = 9 + 6(x - 3)$. The error is therefore $E(x) = f(x) - L(x) = x^2 - 9 - 6(x - 3)$. This formula simplifies to $E(x) = x^2 - 6x + 9 = (x - 3)^2$. Therefore, $\frac{E(x)}{x - 3} = x - 3$ when $x \neq 3$. This definitely has a limit of 0 as $x \to 3$.
In other words, $E(x)$ goes to zero significantly faster than $x - 3$ does. This is fast enough to call the tangent line approximation "good". The error function $E$ has a graph which is a parabola with a vertex at $(3, 0)$. The function outputs are very close to zero when $x$ is close to 3.
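A short Python check (my own, not from the lecture) makes the "fast" decay of this error concrete: the ratio $E(x)/(x - 3)$ equals $x - 3$, so it shrinks to zero as $x$ approaches 3:

```python
def f(x):
    return x ** 2

def L(x):
    # tangent line to f at a = 3: f(3) + f'(3)(x - 3)
    return 9 + 6 * (x - 3)

def E(x):
    # error = true value minus approximate value
    return f(x) - L(x)

# E(x)/(x - 3) should equal x - 3, tending to 0 as x -> 3
ratios = [E(3 + 10 ** -n) / (10 ** -n) for n in range(1, 5)]
```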
Errors in Terms of Infinitesimals
It turns out that errors can also be thought of in terms of infinitesimal calculus. In the previous example, let $dx = x - 3$ (imagine $x$ is "infinitesimally close" to 3). Then we can think of the error as a function of $dx$ and write $E = (dx)^2$. If $dx$ is infinitesimally small, then $(dx)^2$ is even "more" infinitesimally small.
Maybe we need a new adjective. Should we describe $(dx)^2$ as unbelievably small? Inconceivably small? Unspeakably small? I chose to use "unspeakably small" in later lectures. This is how the error goes to zero "very fast" when described in terms of infinitesimals.
Of course, none of this is rigorous mathematics. In fact, it is, in part, meant to be mildly humorous. In spite of this, however, it is still worth doing.
Newton's Method is the topic of the last half of Lecture 16A. It is also a topic in Lecture 17A, so I will get into its details in the next section.
Lectures 16B and 17A: Putting Infinitesimal Calculus to Use
It is in Lectures 16B and 17A where I put infinitesimals to their most significant use in my Calculus 1 lectures.
The thumbnail for the video embedded above is an infinitesimal calculus version of the derivative fact $\frac{d}{dx}\sin(x) = \cos(x)$. The purpose of using infinitesimals in this context is to derive this equation. The derivation is done without using the limit definition of the derivative: it is Calculus Sans Limits. It does rely on a foundational angle sum trigonometric identity, however. That identity is below.
For any two numbers $\alpha$ and $\beta$, we have:

$\sin(\alpha + \beta) = \sin(\alpha)\cos(\beta) + \cos(\alpha)\sin(\beta)$
Approximations Can Become "Exact" in Infinitesimal Calculus
The derivation also relies on the following "exact" equations involving infinitesimals. If $dx$ is an infinitesimal, then

$\cos(dx) = 1$ and $\sin(dx) = dx$.
To confirm this at an intuitive level, get your calculator out and make sure it is in radian mode.
Use your calculator to see that $\cos(0.1) \approx 0.995$, $\cos(0.01) \approx 0.99995$, $\cos(0.001) \approx 0.9999995$, and $\cos(0.0001) \approx 0.999999995$.
Now we make a couple of observations. First note that we keep dividing successive inputs by 10. Next, note that the successive "errors" in how close the outputs are to 1 are: 0.005, 0.00005, 0.0000005, and 0.000000005. They keep getting divided by 100 as the inputs keep getting divided by 10.
Since an infinitesimal quantity is smaller than $\frac{1}{10^n}$ no matter how big $n$ is, it therefore makes intuitive sense to say $\cos(dx)$ is "exactly" 1. We go ahead and write $\cos(dx) = 1$ whenever it might be handy.
And why do we write $\sin(dx) = dx$ whenever it might be handy?
Once again, use your calculator to confirm that $\sin(0.1) \approx 0.0998$, $\sin(0.01) \approx 0.0099998$, $\sin(0.001) \approx 0.0009999998$, and $\sin(0.0001) \approx 0.00009999999983$.
We are again dividing successive inputs by 10. In turn, the "error" in each output in successively approximating 0.1, 0.01, 0.001, and 0.0001 keeps getting divided by 1000. Because of this, it makes intuitive sense to write the "exact" equation $\sin(dx) = dx$ when $dx$ is infinitesimal.
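Both calculator experiments can be reproduced in a few lines of Python (the `math` module works in radians by default); this is my own check, not part of the lecture:

```python
import math

inputs = [0.1, 0.01, 0.001, 0.0001]

# cos(dx) -> 1, with the error shrinking by a factor of ~100 each step
cos_errors = [1 - math.cos(x) for x in inputs]

# sin(dx) -> dx, with the error shrinking by a factor of ~1000 each step
sin_errors = [x - math.sin(x) for x in inputs]
```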
Deriving the Derivative of the Sine Function
Now we can derive the derivative of the sine function. Let $y = \sin(x)$ (where $x$ is measured in radians if it is thought of as an angle). Then, for an infinitesimal increase in the input from $x$ to $x + dx$, we have

$dy = \sin(x + dx) - \sin(x)$.

Using the angle sum formula from above, this becomes

$dy = \sin(x)\cos(dx) + \cos(x)\sin(dx) - \sin(x)$.

Then we use our "exact" infinitesimal equations to get

$dy = \sin(x) \cdot 1 + \cos(x) \cdot dx - \sin(x) = \cos(x)\,dx$.

Now just divide both sides by the (nonzero!) infinitesimal $dx$ to get the derivative fact that we seek:

$\frac{dy}{dx} = \frac{d}{dx}\sin(x) = \cos(x)$.
Isn't that fun?!? No limits necessary! Calculus Sans Limits!
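If you want a sanity check on the result (my addition, not part of the lecture), a difference quotient $dy/dx$ with a small but finite $dx$ lands right on $\cos(x)$:

```python
import math

def derivative_of_sin(x, dx=1e-7):
    """Approximate d/dx sin(x) with the infinitesimal-style quotient dy/dx."""
    dy = math.sin(x + dx) - math.sin(x)
    return dy / dx

# Compare against cos(x) at a few sample points
checks = [abs(derivative_of_sin(x) - math.cos(x)) for x in (0.0, 0.5, 1.0, 2.0)]
```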
It may seem like magic, but this is how Newton and Leibniz, as well as many people after them, thought about these things.
To a person trained as a modern-day pure mathematician, however, it can leave an uneasy feeling in their stomach.
Personally, I have learned to just appreciate it for what it is: an intuitive way to (oftentimes) get correct answers. Sometimes, however, if you are not careful, it can lead you astray to wrong answers.
The Quotient Rule discussed below is one situation where it is easy to get the wrong answer.
The Product Rule
What is the instantaneous rate of change of the product $f(x)g(x)$? Since the derivative of a sum is the sum of the derivatives, i.e., $(f + g)'(x) = f'(x) + g'(x)$, we might be tempted to say that the derivative of a product is the product of the derivatives.
This, however, would be incorrect. A simple example suffices to demonstrate this. Let $f(x) = 6$ and $g(x) = x$. Then $f(x)g(x) = 6x$, whose derivative is 6. But the product $f'(x)g'(x) = 0 \cdot 1 = 0$ since $f$ is a constant function.
The essence of this issue is this: it is not only the size of the derivatives $f'(x)$ and $g'(x)$ that affects the size of the derivative $(fg)'(x)$, it is also the sizes of $f(x)$ and $g(x)$ themselves.
For the example above, $f(x) = 6$, $f'(x) = 0$, and $g'(x) = 1$, while $g(x) = x$ and $(fg)(x) = 6x$. Therefore, when the input changes by $\Delta x$, the change in the product is $\Delta(fg) = 6\,\Delta x$.
On the other hand, if we double the value of $f$ to $f(x) = 12$, then the change in the product is doubled as well:

$\Delta(fg) = 12\,\Delta x$.

That should make sense when you think about formulas. After all, $(fg)(x) = 6x$ in the first case and $(fg)(x) = 12x$ in the second case.
Deriving the Product Rule with Infinitesimal Calculus
Let's see if we can work out the derivative of a product $f(x)g(x)$ using infinitesimal calculus. Suppose the input for the product changes by an infinitesimal amount from $x$ to $x + dx$. Then:

$d(fg) = f(x + dx)g(x + dx) - f(x)g(x)$.

Assuming $f$ and $g$ are differentiable, we can write $df = f'(x)\,dx$ and $dg = g'(x)\,dx$. Therefore, $f(x + dx) = f(x) + f'(x)\,dx$ and $g(x + dx) = g(x) + g'(x)\,dx$, so

$d(fg) = (f(x) + f'(x)\,dx)(g(x) + g'(x)\,dx) - f(x)g(x)$.

Expanding this out gives

$d(fg) = f(x)g(x) + f(x)g'(x)\,dx + f'(x)g(x)\,dx + f'(x)g'(x)(dx)^2 - f(x)g(x)$.

Replacing $(dx)^2$ with 0 and cancelling the two $f(x)g(x)$ terms leads to $d(fg) = f(x)g'(x)\,dx + f'(x)g(x)\,dx$. Now just divide both sides by $dx$ to get the Product Rule:

$(fg)'(x) = f(x)g'(x) + f'(x)g(x)$.
This derivation is actually done in Lecture 17A. You will find Lecture 17A embedded further below.
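The derivation can also be spot-checked numerically. This sketch (my own, with $\sin(x)$ and $x^2$ as sample functions of my choosing) compares a difference quotient for $d(fg)/dx$ against $f(x)g'(x) + f'(x)g(x)$:

```python
import math

f, fp = math.sin, math.cos   # f and its derivative f'
g = lambda x: x ** 2         # g(x) = x^2
gp = lambda x: 2 * x         # g'(x) = 2x

def d_product(x, dx=1e-7):
    """Difference quotient for the derivative of the product f(x)g(x)."""
    return (f(x + dx) * g(x + dx) - f(x) * g(x)) / dx

x0 = 1.3
product_rule_value = f(x0) * gp(x0) + fp(x0) * g(x0)
gap = abs(d_product(x0) - product_rule_value)
```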
There is a 3Blue1Brown Essence of Calculus video where Grant Sanderson talks about how to visualize this. The function values $f(x)$ and $g(x)$ represent lengths while their product $f(x)g(x)$ represents an area.
Grant also recommends memorizing the Product Rule as "right dleft plus left dright". There is a left function and a right function being multiplied and we'd like to take the derivative of the product. The "d" in the mnemonic represents differentiation.
I like this way of memorizing it because it really flows off of your tongue.
A quick application in my lecture is to find the derivative of a product such as $h(x) = x^2\sin(x)$. The left function is $x^2$ while the right function is $\sin(x)$. Since $\frac{d}{dx}x^2 = 2x$ and $\frac{d}{dx}\sin(x) = \cos(x)$, the Product Rule allows us to conclude that $h'(x) = x^2\cos(x) + 2x\sin(x)$.
For applications where you need to determine where $h'(x) = 0$, it is good to factor the answer as $h'(x) = x(x\cos(x) + 2\sin(x))$. That helps you see that $h'(x) = 0$ if and only if $x = 0$ or $x\cos(x) + 2\sin(x) = 0$.
Application to Revenue
Finally, I wrap up Lecture 16B with a business application. If $p$ is the price of an item being sold, let $q = D(p)$ be the corresponding demand, which is the number of items you will sell (over a certain time period, of course). Then the product $R(p) = p \cdot D(p)$ will be the revenue from the sales, which is the amount of money taken in.
The Product Rule implies that $R'(p) = D(p) + pD'(p)$. It is interesting to note that $R'(p) = 0$ if and only if $D(p) = -pD'(p)$. This has an interesting graphical interpretation in the lectures.
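To make this concrete, here is a sketch with a hypothetical linear demand curve $D(p) = 100 - 2p$ (my example, not from the lecture); the condition $D(p) = -pD'(p)$ picks out the revenue-maximizing price $p = 25$:

```python
def D(p):
    """Hypothetical linear demand: items sold at price p."""
    return 100 - 2 * p

def Dp(p):
    return -2.0                # D'(p)

def R(p):
    return p * D(p)            # revenue

def Rp(p):
    return D(p) + p * Dp(p)    # Product Rule: R'(p) = D(p) + p D'(p)

# R'(p) = 100 - 4p = 0 at p = 25, where D(25) = 50 = -25 * D'(25)
best_price = 25.0
```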
Newton's Method and the Nature of the Square Root of Two
Lecture 17A ends with the derivation of the Product Rule and more discussion about the revenue example from above.
Before that point, the main things I emphasize are Newton's Method and the nature of the number . This content can also be found at my blog post: "Does the Square Root of Two Exist?".
One thing that I prove is that, if it exists, the square root of 2 cannot be a rational number. This means it cannot be written as a ratio of the form $\frac{m}{n}$, where $m$ and $n$ are integers (positive or negative whole numbers) and $n \neq 0$.
The proof of this is considered to be among the most beautiful in mathematics. Why? Because it is elegant (short and ingenious) and proves something very profound (deep and unexpected) about creation.
Another Video and More About the Square Root of Two
I won't reproduce the proof here. I strongly encourage you to watch either the video above or the video below, where I prove a more general fact about irrational numbers.
But here is perhaps a deeper question: does $\sqrt{2}$ exist? This question is also addressed in the same blog post, "Does the Square Root of Two Exist?".
The proof from first principles that $\sqrt{2}$ does exist is also harder. In fact, an entire research program to address this and related questions was started in the 19th century. It was called the Arithmetization of Analysis.
The proof can be done more easily with "higher-level" principles. In fact, I will discuss that further below under Lecture 18A. It is based on a theorem named the Intermediate Value Theorem (IVT).
Before moving on, you should take the time to realize that this is an issue that needs to be resolved. After all, no one but God knows all the infinitely many decimal places of $\sqrt{2}$! And think about this: if no person actually knows them all, how do we know that it is a well-defined number?
Newton's Method
If we assume that $\sqrt{2}$ exists, then our next goal is to approximate it. Newton's Method, alluded to above, is the quickest way to approximate it from scratch.
Newton's Method relies on the fact that a tangent line to the graph of a nonlinear function $f$ near a point $(x_0, f(x_0))$ is a good approximation to the graph of $f$ (see above again). Therefore, as long as the point is relatively close to the $x$-axis to begin with, the $x$-intercepts of the function and its tangent line should be close together.
The linear function whose graph is the tangent line to $f$ at the given point is defined by $L(x) = f(x_0) + f'(x_0)(x - x_0)$. Call the $x$-intercept of this function $x_1$. Then $0 = L(x_1) = f(x_0) + f'(x_0)(x_1 - x_0)$. Solving this equation for $x_1$ yields $x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$.
Rinse and Repeat
This process can be repeated (iterated) to produce an $x$-intercept of the tangent line to the graph of $f$ near $(x_1, f(x_1))$. This gives a new $x$-intercept $x_2 = x_1 - \frac{f(x_1)}{f'(x_1)}$.
In general, we have a recursive formula $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$ that generates, based on an initial guess $x_0$, a sequence $x_0, x_1, x_2, x_3, \ldots$.
The hope is that $x_n$ approaches the true root of $f$ as $n \to \infty$.
In fact, it often converges to the true root very rapidly.
Approximating the Square Root of Two
How should we use Newton's Method to approximate ?
Start by noting that, by definition, $\sqrt{2}$ is the unique positive root of the differentiable and continuous function $f(x) = x^2 - 2$.
The recursive formula above becomes $x_{n+1} = x_n - \frac{x_n^2 - 2}{2x_n}$.
If we start by guessing $x_0 = 2$, the next value is $x_1 = 2 - \frac{2}{4} = 1.5$. Using this number, the next value is $x_2 = 1.5 - \frac{0.25}{3} = 1.41\overline{6}$. This is already very close to $\sqrt{2} \approx 1.41421$. Newton's Method does indeed seem to produce estimates that converge very quickly to the true value.
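The iteration is only a few lines of Python (a sketch of the standard method, not code from the lecture); starting from $x_0 = 2$ it reproduces the values above and converges almost immediately:

```python
def newton_sqrt2(x0=2.0, steps=5):
    """Newton's Method for f(x) = x^2 - 2: x_{n+1} = x_n - (x_n^2 - 2)/(2 x_n)."""
    x = x0
    history = [x]
    for _ in range(steps):
        x = x - (x * x - 2) / (2 * x)
        history.append(x)
    return history

iterates = newton_sqrt2()
```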
Lectures 17B and 18A: More Calculus Rules
I start Lecture 17B off with more discussion of the revenue example from above before diving into the Quotient Rule and Chain Rule.
Derivation of the Quotient Rule with Infinitesimal Calculus
The Quotient Rule can be derived with infinitesimals, though this is a situation where it is easy to make a mistake and get the wrong answer.
To be more precise, this is a situation where one infinitesimal will be ignored (replaced by zero) and one will not. It is difficult to know which should be replaced by zero to get the correct final answer.
Here is the calculation. Let $y = \frac{f(x)}{g(x)}$ and suppose the input gets "nudged" by an infinitesimal amount from $x$ to $x + dx$. Then

$dy = \frac{f(x) + f'(x)\,dx}{g(x) + g'(x)\,dx} - \frac{f(x)}{g(x)} = \frac{(f'(x)g(x) - f(x)g'(x))\,dx}{g(x)(g(x) + g'(x)\,dx)}$.

Now, here comes the extra-tricky part. In the bottom of this fraction, make the replacement $g'(x)\,dx = 0$. But do not do this in the top! This results in $dy = \frac{f'(x)g(x) - f(x)g'(x)}{(g(x))^2}\,dx$. Dividing both sides by $dx$ and explicitly showing the input gives the Quotient Rule:

$\left(\frac{f}{g}\right)'(x) = \frac{f'(x)g(x) - f(x)g'(x)}{(g(x))^2}$.
The Trouble with Infinitesimals
But this is very confusing! Why should $dx$ be replaced by zero in one spot but not in another? Is it excusable because we already know the "right" final answer?
As unsatisfying as it may be, I think this is just something that we'll have to accept as part of the "risk vs. reward" of using infinitesimals. They can be fun and often get you to the right answers without using limits, but they can also easily lead you to making errors.
A fundamental fact that can be obtained using the Quotient Rule is the derivative of the tangent function:

$\frac{d}{dx}\tan(x) = \frac{d}{dx}\frac{\sin(x)}{\cos(x)} = \frac{\cos(x)\cos(x) - \sin(x)(-\sin(x))}{\cos^2(x)} = \frac{1}{\cos^2(x)} = \sec^2(x)$.
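A quick numerical spot check (my addition, again via a difference quotient) confirms that the derivative of $\tan(x)$ matches $1/\cos^2(x) = \sec^2(x)$:

```python
import math

def d_tan(x, dx=1e-7):
    """Difference quotient for d/dx tan(x)."""
    return (math.tan(x + dx) - math.tan(x)) / dx

sec_squared = lambda x: 1 / math.cos(x) ** 2

gap = max(abs(d_tan(x) - sec_squared(x)) for x in (0.0, 0.3, 0.7, 1.0))
```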
I usually remember the Quotient Rule by the mnemonic: low dhigh minus high dlow over the square of what's below.
Derivation of the Chain Rule with Infinitesimal Calculus
The Chain Rule tells us how to differentiate a function composition $y = f(g(x))$. In this case, it is best to define an intermediate variable, often called $u$, to be $u = g(x)$. Then we can also write $y = f(u)$.
If we now "nudge" the input by an infinitesimal amount from $x$ to $x + dx$, then $du = g'(x)\,dx$. This then leads to a "chain reaction" that produces an infinitesimal change in the final output: $dy = f'(u)\,du$.
But this last statement can then be written as $dy = f'(g(x))g'(x)\,dx$. Dividing both sides by $dx$ gives the Chain Rule:

$\frac{dy}{dx} = f'(g(x))g'(x)$.
We can apply the Chain Rule to, for example, find the derivative of $y = \sin(x^2)$. We can write $y = \sin(x^2)$ as $y = f(g(x))$ if we choose $g(x) = x^2$ and $f(u) = \sin(u)$. Since $g'(x) = 2x$ and $f'(u) = \cos(u)$, the Chain Rule implies that $\frac{dy}{dx} = \cos(x^2) \cdot 2x = 2x\cos(x^2)$.
Contrast this with the derivative of $y = \sin^2(x) = (\sin(x))^2$, which is $2\sin(x)\cos(x)$.
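Chain Rule results like this are easy to verify numerically. This sketch (mine, not from the lecture) checks the derivative of $h(x) = \sin(x^2)$ against $2x\cos(x^2)$:

```python
import math

def h(x):
    return math.sin(x ** 2)

def h_prime(x):
    # Chain Rule with f(u) = sin(u) and g(x) = x^2: h'(x) = cos(x^2) * 2x
    return math.cos(x ** 2) * 2 * x

def diff_quotient(func, x, dx=1e-7):
    return (func(x + dx) - func(x)) / dx

gap = max(abs(diff_quotient(h, x) - h_prime(x)) for x in (0.2, 0.9, 1.5))
```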
I discuss the derivations of these facts in Lecture 18A as well.
But I also discuss many other things in Lecture 18A.
Proving the Square Root of Two Exists with the Intermediate Value Theorem (IVT)
As mentioned above, a "higher-level" proof of the existence of can be accomplished with a theorem, called the Intermediate Value Theorem (IVT).
The IVT requires a thorough understanding of continuity, however. This ultimately rests on the precise definition of a limit as well as the Completeness Property of the real number system. It is not an easy thing to prove from scratch.
Intermediate Value Theorem (IVT): Suppose $f$ is a function which is defined and continuous on a closed interval $[a, b]$. If $N$ is any number between $f(a)$ and $f(b)$, then there exists a number $c \in [a, b]$ such that $f(c) = N$.
Applying the IVT to prove that $\sqrt{2}$ exists is pretty easy. Let $f(x) = x^2 - 2$ (the same function we applied Newton's Method to above). Then $f$ is continuous over the whole real number line $(-\infty, \infty)$. In particular, $f$ is continuous on the closed interval $[1, 2]$. Also note that $f(1) = -1$ and $f(2) = 2$. Let $N = 0$ and note that $N$ is between $f(1)$ and $f(2)$. The IVT now implies the existence of a number $c \in [1, 2]$ with the property that $f(c) = 0$. But this means $c^2 - 2 = 0$, i.e., $c^2 = 2$. In other words, $c = \sqrt{2}$. Q.E.D.
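The IVT argument also suggests a concrete procedure: repeatedly halve the interval $[1, 2]$, keeping the half where $f$ changes sign. This bisection sketch (my illustration, not from the lecture) homes in on the $c$ that the theorem promises:

```python
def bisect_sqrt2(a=1.0, b=2.0, iterations=50):
    """Bisection for f(x) = x^2 - 2 on [1, 2], where f(1) < 0 < f(2)."""
    f = lambda x: x * x - 2
    for _ in range(iterations):
        c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c   # the sign change (and hence the root) lies in [a, c]
        else:
            a = c   # the sign change lies in [c, b]
    return (a + b) / 2

root = bisect_sqrt2()
```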
Other Content
Other content of Lecture 18A includes the derivations of the facts that $\frac{d}{dx}x^{r} = rx^{r-1}$ for all rational numbers $r$ and $\frac{d}{dx}\arcsin(x) = \frac{1}{\sqrt{1 - x^2}}$ for all $x \in (-1, 1)$.
The first of these rules also includes the case where $r = \frac{1}{2}$ so that $x^{1/2} = \sqrt{x}$. Therefore, $\frac{d}{dx}\sqrt{x} = \frac{1}{2}x^{-1/2} = \frac{1}{2\sqrt{x}}$.
These facts are derived using some ingenuity along with the Chain Rule. For example, assuming that $\arcsin(x)$ is differentiable for all $x \in (-1, 1)$, the Chain Rule implies that the equation $\sin(\arcsin(x)) = x$ can be differentiated on both sides to get $\cos(\arcsin(x)) \cdot \frac{d}{dx}\arcsin(x) = 1$. Multiplying both sides of this equation by $\frac{1}{\cos(\arcsin(x))}$ implies that $\frac{d}{dx}\arcsin(x) = \frac{1}{\cos(\arcsin(x))}$. By drawing a right triangle and labeling one of the non-right angles with $\arcsin(x)$, you will see that $\cos(\arcsin(x)) = \sqrt{1 - x^2}$. (Trigonometry and the Pythagorean Theorem are needed here; see Lecture 18A above.)
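One last numerical check (my addition): a difference quotient for arcsine matches $\frac{1}{\sqrt{1 - x^2}}$ on the open interval $(-1, 1)$:

```python
import math

def d_asin(x, dx=1e-7):
    """Difference quotient for d/dx arcsin(x)."""
    return (math.asin(x + dx) - math.asin(x)) / dx

formula = lambda x: 1 / math.sqrt(1 - x ** 2)

gap = max(abs(d_asin(x) - formula(x)) for x in (-0.5, 0.0, 0.5))
```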
Source: https://infinityisreallybig.com/2019/11/30/infinitesimal-calculus-and-calculus-rules/