Lecture Outlines
We will cover a wide variety of material during lecture and discussion sections, so regular attendance is important. To help you organize your study materials, the list below gives an overview of the basic concepts covered during each lecture period.
-
In our first class we spent some time talking about the basic framework for the class, as well as getting to know the instructor. We split into small groups to talk about the major narrative arcs from Calculus I.
-
Today we discussed the major motivating problem for the first portion of the course: the area problem. We introduced the notation $\int_a^b f(x)~dx$ and saw how to interpret the notion of "net area." Using our geometric intuition we computed some examples of definite integrals, and we were able to deduce some theoretical properties that the integral satisfies. For instance, for any function $f$ and any point $a$ (in the domain of $f$), one has $\int_a^a f(x)~dx = 0$. Despite the fact that geometric intuition can get us reasonably far in understanding definite integrals, we saw quickly that the only kinds of functions for which we can compute a definite integral "easily" are those which are composed using fairly rigid materials: their graphs must be constructed using only rectangles, triangles, trapezoids, and certain portions of circles. Even functions as humble as the parabola $f(x) =x^2$ have areas that are inaccessible! To make progress towards understanding these areas, we came up with a way of approximating integrals by pretending that the underlying function takes on constant values on subintervals. This leads to Riemann sum approximations of the areas we are interested in. At the end of class, we talked about left-hand Riemann sums, as well as Riemann sums evaluated using "right-hand" or "midpoint" rules.
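To make these three rules concrete, here is a minimal Python sketch (my own illustration, not code from the course; the function and interval are just for demonstration) that approximates $\int_1^4 x^2~dx$ with left-hand, right-hand, and midpoint Riemann sums:

```python
def riemann_sum(f, a, b, n, rule="left"):
    """Approximate the integral of f on [a, b] using n equal subintervals."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        if rule == "left":
            x = a + i * dx            # left endpoint of subinterval i
        elif rule == "right":
            x = a + (i + 1) * dx      # right endpoint
        else:  # "midpoint"
            x = a + (i + 0.5) * dx    # midpoint
        total += f(x) * dx            # area of one approximating rectangle
    return total

f = lambda x: x**2
for rule in ("left", "right", "midpoint"):
    print(rule, riemann_sum(f, 1, 4, 100, rule))
```

Increasing `n` shrinks $\Delta x$, and all three approximations drift toward the same value, which is exactly the limiting behavior described in the next lecture.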
Bonus video and accompanying slides! In this video we compute 3 Riemann sums. The first two are purely algebraic, but the last example is graphical.
-
We have already seen that getting an exact answer to the area problem $\int_a^b f(x)~dx$ is challenging for many functions $f(x)$. Last class we introduced the Riemann sum as a method for approximating this area, and to start class today we made the observation that if $\sum_{i=1}^n f(x_i^*)\Delta x$ is a Riemann sum approximation for $\int_a^b f(x)~dx$, then we can get a better approximation by increasing the number of subintervals. [Note: this in turn shrinks the size of $\Delta x$, which manifests geometrically as a Riemann sum with rectangles that are less wide than before.] Indeed, one of the big theorems in calculus is that if you continue to increase the size of $n$, you get closer and closer to the actual desired area. In other words, we get $$\int_a^b f(x)~dx = \lim_{n \to \infty} \sum_{i=1}^n f(x_i^*)~\Delta x$$ (at least assuming $f$ is reasonably nice). While this is great for confirming our geometric intuition, unfortunately it's incredibly unwieldy to put into practice.
So: is there some practical way to compute $\int_a^b f(x)~dx$, given that the Riemann sum method is good for approximating but not so user-friendly when it comes to computing an exact answer to the area problem? The answer (thankfully!) is "yes," and it comes from antiderivatives. We defined an antiderivative of a function $f(x)$ to be a function $F(x)$ satisfying $F'(x)=f(x)$. We saw some examples of antiderivatives, and pointed out that there's no such thing as "the" antiderivative of a function. We introduced the notation $\int f(x)~dx$ (the so-called indefinite integral) to represent the most general antiderivative of $f(x)$. Then we saw that the Fundamental Theorem of Calculus Part II connected antiderivatives to solutions to the area problem. Specifically, if $F(x)$ is an antiderivative for $f(x)$ on $[a,b]$, and if $f(x)$ is continuous on that interval, then $$\int_a^b f(x)~dx = F(b)-F(a).$$ We then used this formula to compute one of the "hard" integrals we saw earlier (namely, $\int_1^4 x^2~dx$) in a very straightforward fashion.
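Written out, that computation goes like this: since $F(x) = \frac{x^3}{3}$ satisfies $F'(x) = x^2$, the theorem tells us
$$\int_1^4 x^2~dx = F(4) - F(1) = \frac{64}{3} - \frac{1}{3} = 21.$$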
Bonus video and accompanying slides! Here's a video explaining the intuition behind the Fundamental Theorem of Calculus Part II. You are not required to watch this video, and you won't be evaluated on it. It's posted purely for your edification.
-
Last class we saw (via the second Fundamental Theorem of Calculus) that if one is able to compute antiderivatives, then one can answer the area problem. This means that we want to develop some facility in computing antiderivatives. This will take us more than a handful of class periods, but to get started we wanted to see if we could uncover some "basic" antiderivative rules. Fortunately, since the definition of antiderivative is so intimately connected to the definition of derivative, this means we are able to convert every derivative rule we learned in differential calculus into an antiderivative rule. We spent today's class recalling some "basic" derivative rules from differential calculus, and seeing what they correspond to as antiderivative rules. For example, the power rule in calculus says that if $n$ is any number, then $\frac{d}{dx}[x^n] = nx^{n-1}$. We were able to use this to argue that if $n \neq -1$ is any number, then we get $$\int x^n~dx = \frac{1}{n+1}x^{n+1}+C.$$ This is the so-called "anti-power rule," and it's an incredibly useful tool to have on hand. We saw a number of other antiderivative rules, including antiderivative rules for certain trig and exponential functions.
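As a quick sanity check of the anti-power rule (a tiny example of my choosing), take $n = 3$:
$$\int x^3~dx = \frac{1}{4}x^4 + C, \qquad \text{and indeed} \qquad \frac{d}{dx}\left[\frac{1}{4}x^4 + C\right] = x^3.$$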
-
Today we continued in the theme of "every derivative rule gives rise to a corresponding antiderivative rule" by thinking about the chain rule. We saw that it motivates an antiderivative rule that we called $u$-substitution. Specifically, since we know that $\frac{d}{dx}[f(g(x))] = f'(g(x)) \cdot g'(x)$, this tells us that $$\int f'(g(x))\cdot g'(x)~dx = f(g(x))+C.$$ By using the substitution $u = g(x)$ (which forces $\frac{du}{dx} =g'(x)$), we were able to recast this equation as $\int f'(u)~du = f(u)+C$. We did an example of a specific antiderivative problem that involved this idea, and then folks worked in small groups to tackle additional $u$-substitution problems. We observed that $u$-substitution is often a good tool to use when your integrand contains a composed function; in this case, the function that plays the role of $u$ is often the "inner" function of that composition.
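Here is a small worked example of the technique (not necessarily the one from class): to compute $\int 2x\cos(x^2)~dx$, set $u = x^2$, so that $du = 2x~dx$ and
$$\int 2x\cos(x^2)~dx = \int \cos(u)~du = \sin(u) + C = \sin(x^2) + C.$$
Note that the "inner" function $x^2$ of the composition $\cos(x^2)$ is exactly what we chose for $u$.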
Bonus video and accompanying slides! Here's a video of some computed $u$-substitution problems.
-
Today we explored the antiderivative analog of the product rule. We called this integration by parts, and wrote it with the shorthand $$\int u~dv = uv - \int v~du.$$ We saw the LIATE mnemonic as a technique for remembering how to select the "$u$" function. We did some sample problems in small groups.
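A standard worked example (again, not necessarily the one from class): for $\int xe^x~dx$, LIATE suggests taking $u = x$ (algebraic) and $dv = e^x~dx$, so $du = dx$ and $v = e^x$, giving
$$\int xe^x~dx = xe^x - \int e^x~dx = xe^x - e^x + C.$$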
-
Though they don't solve all antiderivative problems, the techniques of $u$-substitution and integration by parts are the essential tools for computing antiderivatives. We spent today continuing to work through this handout, and we finished class with discussing a few of the problems. Solutions for this sheet are posted here. I also handed out another worksheet at the end of class that asks you to take the first step in solving a handful of integrals; here are the solutions for this worksheet.
-
Today we started a week-long tour of integration techniques that apply to certain "special" integrals, by which we mean integrals whose integrands are of a particular form. We spent most of our time in class today thinking about the integrals $\int \sin^2(\theta)~d\theta$ and $\int \sin^3(\theta)~d\theta$. We saw that the Pythagorean identity $\sin^2(\theta)+\cos^2(\theta)=1$ was very useful. We then used this as inspiration for how to tackle integrals of the form $$\int \sin^n(x)\cos^m(x)~dx.$$ We said that
- When the power of sine is odd in the integrand, a substitution of $u = \cos(x)$ --- together with an application of the Pythagorean identity to rewrite an even power of $\sin(x)$ in terms of $\cos(x)$ --- allows us to transform such a trigonometric integral into an integral that is polynomial (see the worked example after this list).
- When the power of cosine is odd in the integrand, a substitution of $u = \sin(x)$ --- together with an application of the Pythagorean identity to rewrite an even power of $\cos(x)$ in terms of $\sin(x)$ --- allows us to transform such a trigonometric integral into an integral that is polynomial.
- If both exponents are even, then one has to do more work; this usually turns into integrals that are approached with the same methodology we used for $\int \sin^2(\theta)~d\theta$.
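For instance, here is how the odd-sine case plays out on a small example (my choice of exponents): with $u = \cos(x)$ and $du = -\sin(x)~dx$,
$$\int \sin^3(x)\cos^2(x)~dx = \int \big(1-\cos^2(x)\big)\cos^2(x)\sin(x)~dx = -\int (u^2 - u^4)~du = -\frac{\cos^3(x)}{3} + \frac{\cos^5(x)}{5} + C.$$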
-
Continuing our discussion of trigonometric integrals today, we spent most of our time focusing on integrals of the form $\int \tan^n(\theta)\sec^m(\theta)~d\theta$. We had some handy substitution ideas, including
- When the power of tangent is odd in the integrand and a factor of secant appears, a substitution of $u = \sec(\theta)$ --- together with an application of the Pythagorean identity to rewrite an even power of $\tan(\theta)$ in terms of $\sec(\theta)$ --- allows us to transform such a trigonometric integral into an integral that is polynomial (see the worked example after this list).
- When the power of secant is even in the integrand, a substitution of $u = \tan(\theta)$ --- together with an application of the Pythagorean identity to rewrite an even power of $\sec(\theta)$ in terms of $\tan(\theta)$ --- allows us to transform such a trigonometric integral into an integral that is polynomial.
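As an illustration of the odd-tangent case (an example of my choosing): with $u = \sec(\theta)$ and $du = \sec(\theta)\tan(\theta)~d\theta$,
$$\int \tan^3(\theta)\sec(\theta)~d\theta = \int \big(\sec^2(\theta)-1\big)\sec(\theta)\tan(\theta)~d\theta = \int (u^2-1)~du = \frac{\sec^3(\theta)}{3} - \sec(\theta) + C.$$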
Towards the end of class we saw that these kinds of trigonometric integrals can appear in some surprising places...even when we have an integral that doesn't seem to deal with trig functions! For instance, we evaluated $$\int \frac{\sqrt{x^2-16}}{x}~dx$$ by using the substitution $x=4\sec(\theta)$. This allowed us to convert our integral into $4\int \tan^2(\theta)~d\theta$. Using an integral we already computed, this allowed us to give a rather surprising result for this integral!
The final answer for this last problem involved a strange expression: $\tan(\text{arcsec}(\frac{x}{4}))$. I mentioned in class that I'd share a video that shows you how to convert this terrible looking function into something that's not nearly as scary. Here's the promised video, and the accompanying slides.
-
We've spent the last several days discussing certain integrals that benefit from particular substitutions involving trig functions. For the most part, these integrals themselves have been conspicuously trigonometric (as products of sines, cosines, tangents, or secants), and so perhaps it's not surprising that a well-chosen trigonometric substitution is useful. However, our final example from last time involved an integrand that wasn't clearly related to trigonometric functions, and yet was solvable after using a clever trig substitution. Today we explored a few more situations in which a well-chosen trigonometric substitution can help knock down a nasty integral. In particular, we saw
- If the integrand includes a factor of $x^2-a^2$, and if the substitution $u=x^2-a^2$ has been tried and is unproductive, then the substitution $x=a\sec(\theta)$ can sometimes be helpful (particularly if the expression $x^2-a^2$ is in some kind of square root). The reason this works so well is that under this substitution we get rid of the square root using Pythagoras: $$\sqrt{x^2-a^2} = \sqrt{(a\sec(\theta))^2-a^2} = \sqrt{a^2(\sec^2(\theta)-1)} = \sqrt{a^2\tan^2(\theta)} = a\tan(\theta).$$
- If the integrand includes a factor of $x^2+a^2$, and if the substitution $u=x^2+a^2$ has been tried and is unproductive, then the substitution $x=a\tan(\theta)$ can sometimes be helpful (particularly if the expression $x^2+a^2$ is in some kind of square root). The reason this works so well is that under this substitution we get rid of the square root using Pythagoras: $$\sqrt{x^2+a^2} = \sqrt{(a\tan(\theta))^2+a^2} = \sqrt{a^2(\tan^2(\theta)+1)} = \sqrt{a^2\sec^2(\theta)} = a\sec(\theta).$$
- If the integrand includes a factor of $a^2-x^2$, and if the substitution $u=a^2-x^2$ has been tried and is unproductive, then the substitution $x=a\sin(\theta)$ (or $x=a\cos(\theta)$) can sometimes be helpful (particularly if the expression $a^2-x^2$ is in some kind of square root). The reason this works so well is that under this substitution we get rid of the square root using Pythagoras: $$\sqrt{a^2-x^2} = \sqrt{a^2-(a\sin(\theta))^2} = \sqrt{a^2(1-\sin^2(\theta))} = \sqrt{a^2\cos^2(\theta)} = a\cos(\theta).$$
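To see the second pattern in action on a concrete integral (an example of my choosing, not from class): with $x = 3\tan(\theta)$ and $dx = 3\sec^2(\theta)~d\theta$,
$$\int \frac{dx}{\sqrt{x^2+9}} = \int \frac{3\sec^2(\theta)~d\theta}{3\sec(\theta)} = \int \sec(\theta)~d\theta = \ln\left|\sec(\theta)+\tan(\theta)\right| + C,$$
and rewriting the answer in terms of $x$ uses the same right-triangle trick as in the bonus video above.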
-
With antiderivative techniques in the rear-view mirror, we started discussing a new concept today. The motivation was to try to use the tools we have developed so far to answer area problems which aren't covered by the second fundamental theorem of calculus. For instance, the fundamental theorem doesn't tell us how to approach areas over infinite intervals. We resolved this issue today by giving a definition for each of $\int_a^\infty f(x)~dx$, $\int_{-\infty}^a f(x)~dx$, and $\int_{-\infty}^\infty f(x)~dx$. Our definitions are phrased in terms of limits (perhaps not a surprise since we're in a calculus course!). We saw a variety of examples of these types of integrals, including some that converged and others that diverged.
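A standard convergent example of this type (not necessarily one from class):
$$\int_1^\infty \frac{1}{x^2}~dx = \lim_{t\to\infty}\int_1^t \frac{1}{x^2}~dx = \lim_{t\to\infty}\left(1 - \frac{1}{t}\right) = 1,$$
while the similar-looking $\int_1^\infty \frac{1}{x}~dx = \lim_{t\to\infty} \ln(t)$ diverges.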
-
Last class we explored how one computes areas in situations where the Fundamental Theorem of Calculus Part II doesn't apply because the interval of integration is infinitely wide. Today we explored an analogous issue: determining how to compute area problems in the situation where the integrand is discontinuous (including, for example, when the function has a vertical asymptote and "blows up" at a point, so that the graph becomes infinitely tall). This led to the introduction of improper integrals of type II. To start, we gave a definition for how one evaluates an area problem on an interval $[a,b]$ when the integrand is only continuous on $[a,b)$. In this case, we define $$\int_a^b f(x)~dx = \lim_{t \to b^-} \int_a^t f(x)~dx.$$ If instead the function is only continuous on $(a,b]$, then we define $$\int_a^b f(x)~dx = \lim_{t \to a^+} \int_t^b f(x)~dx.$$ We computed several examples, one where the integral converges (i.e., returns a finite value), and one where the integral diverges (in the particular case we considered, it diverged to $+\infty$). We also saw how one can take an integrand that has several "places of improperness" and appropriately split it into smaller integrals to determine its convergence or divergence. Although this potentially requires computing several improper integrals to come to a conclusion, remember that if any of the "subintegrals" under consideration diverge, then the original expression automatically diverges as well.
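A typical convergent example of this type (my choice, not necessarily the one from class): $\frac{1}{\sqrt{x}}$ blows up at $x = 0$, and yet
$$\int_0^1 \frac{1}{\sqrt{x}}~dx = \lim_{t\to 0^+}\int_t^1 x^{-1/2}~dx = \lim_{t\to 0^+}\left(2 - 2\sqrt{t}\right) = 2.$$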
-
When we evaluate improper integrals, we are often confronted with limits that we can evaluate conceptually in a straightforward way (e.g., if we have a quantity that is $1$ divided by a big number, then as that big number gets bigger, the fraction gets closer and closer to $0$). Other times, the limit is more ambiguous. For example, we thought about the expression $\lim_{R \to \infty} \frac{R}{e^R}$ and saw that it had some tension: the numerator is getting bigger, but so is the denominator! Which of these two competing forces will win, or will they ultimately balance each other out? Today, we learned a technique for evaluating these kinds of "$\frac{\infty}{\infty}$" limits called l'Hopital's rule. We saw we could also use l'Hopital to evaluate "$\frac{0}{0}$" type limits. Finally, we commented that there are other "indeterminate forms" (like $0\cdot \infty$, or $\infty-\infty$) that aren't set up to use l'Hopital's rule, but which can sometimes be rearranged to make l'Hopital a usable strategy.
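Applied to the example above: both $R$ and $e^R$ tend to $\infty$, so l'Hopital's rule lets us differentiate top and bottom:
$$\lim_{R\to\infty}\frac{R}{e^R} = \lim_{R\to\infty}\frac{1}{e^R} = 0.$$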
-
In today's class we used what we know about computing areas to answer a related question: how do we find the area between two curves? By using some geometric intuition, we were able to give an integral formula to represent these kinds of areas. We worked through a lot of different examples, including some where we had to break the given region into separate subregions so that we could set up integrals that would do the desired calculations.
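In symbols: if $f(x) \geq g(x)$ on $[a,b]$, the area trapped between the two graphs is $\int_a^b \big(f(x)-g(x)\big)~dx$. For example (my choice of region, not necessarily one from class), the area between $y = x$ and $y = x^2$ over $[0,1]$ is
$$\int_0^1 (x - x^2)~dx = \frac{1}{2} - \frac{1}{3} = \frac{1}{6}.$$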
-
We reviewed some of the content from last class, including setting up an integral that allowed us to calculate the area of a circle of radius $r$. We also saw that some regions can have their areas calculated by thinking of them as 'type 2' regions, that is, regions which have a uniform "right" boundary and a uniform "left" boundary. We set up some examples.
-
In today's class we explored a somewhat unexpected application of integration: the computation of volumes. The idea was that one could approximate the volume of a solid by taking small "cross-sectional slices" of the volume, and then approximating the volume of each "slice" as the product of the cross-sectional area of the slice times its thickness. Following this procedure through increasingly small slices, we were able to argue that the volume of the solid can be thought of as the definite integral of cross-sectional area. We used this method to compute the volume of a sphere. We also started to think about how we can use this method to compute the volume generated by revolving a $2$-dimensional region around an axis of rotation, particularly in the case where one has "sliced" the region in a direction perpendicular to the axis of rotation. This gives the so-called "washer method."
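Written out, the sphere computation goes like this: slicing a sphere of radius $r$ perpendicular to the $x$-axis gives circular cross sections of area $A(x) = \pi(r^2 - x^2)$, so
$$V = \int_{-r}^{r} \pi\left(r^2 - x^2\right)~dx = \pi\left[r^2 x - \frac{x^3}{3}\right]_{-r}^{r} = \frac{4}{3}\pi r^3.$$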
-
We worked several examples that computed the volume of a solid of revolution via the washer method. Towards the end of class, we asked what to do when our region is sliced in a direction parallel to the axis of rotation. This gave rise to a new method for computing volume called the shell method.
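For reference, here is what the two methods look like in the simplest configuration (revolving the region under $y = f(x) \geq 0$ for $a \leq x \leq b$): slicing perpendicular to the $x$-axis and revolving about it gives disks (washers with no hole), while revolving about the $y$-axis (assuming $a \geq 0$) with slices parallel to that axis gives shells:
$$V_{\text{disk}} = \int_a^b \pi f(x)^2~dx \qquad \text{and} \qquad V_{\text{shell}} = \int_a^b 2\pi x\, f(x)~dx.$$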
-
For our final application of integration, we asked an entirely new question: how far does a bug travel as it moves from point $A=(a,f(a))$ to point $B = (b,f(b))$ along the graph of a function $f(x)$? By splitting this curve up into small pieces and using tangent lines to approximate the curve, we were able to derive a surprising formula: the arc length is given by $$\int_a^b \sqrt{1+(f'(x))^2}~dx.$$ We used this formula to give an integral expression for the arc length of a handful of curves, but saw that in general these computations can be extremely challenging. We gave a surprising example in which the integral is computable because of an algebra miracle.
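One example where the algebra cooperates (my example; the in-class "miracle" may have been different): for $f(x) = \frac{2}{3}x^{3/2}$ on $[0,3]$ we get $f'(x) = \sqrt{x}$, so the arc length is
$$\int_0^3 \sqrt{1+x}~dx = \left[\frac{2}{3}(1+x)^{3/2}\right]_0^3 = \frac{2}{3}(8-1) = \frac{14}{3}.$$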
-
Today we started on the second half of content in calculus II. The big motivating problem in this second unit of the course is to ask for higher-degree analogs of the tangent lines that were so useful in differential calculus. In particular, we saw that one could try to construct a "tangent parabola" to a curve with a reasonable hope that it might be even better at approximating the function than the tangent line. We saw some examples to suggest that this hope really did seem to work. This led us to ask whether it would be better to instead create a tangent cubic, or a tangent quartic. Indeed, it seemed reasonable that we might even try to let this process continue indefinitely, in which case we'd create something like an infinite degree tangent polynomial that might agree with the function everywhere. This is precisely the construction that we want to work towards by the end of the semester, but getting there requires us to untangle a subtle question: what does it mean to add an infinite number of things together?
In order to answer this question, we have to begin by introducing some new concepts. In today's class, we defined the notion of a sequence. We saw many examples, and we said that in this course we'll be most interested in sequences that are defined according to some "closed form algebraic expression." The big questions we have when we see a sequence are (1) what pattern defines the sequence, and (2) where is the sequence going? To make the second notion more precise, we defined the notion of the limit of a sequence. We started working on an example to see whether our intuition about the limit of a sequence matched up with the definition.
-
In today's class we began by working with the technical definition of "limit" to prove that $\lim_{n \to \infty} \left\{\frac{1}{2^n}\right\} = 0$. Afterwards, we tried to think about ways that we could build intuition for what limits should be, without having to appeal to the formal definition. In answer to this question, we stated two key results: the "piggyback theorem" and the "kangaroo pouch" theorem.
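A sketch of how that kind of proof goes: given any tolerance $\epsilon > 0$, we need an index $N$ beyond which $\left|\frac{1}{2^n} - 0\right| < \epsilon$. Since $\frac{1}{2^n} < \epsilon$ exactly when $2^n > \frac{1}{\epsilon}$, any $N > \log_2(1/\epsilon)$ does the job.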
-
We started class today by working in small groups to evaluate limits of a few sequences. This gave us some extra practice using the techniques we introduced last time. After that we saw some additional properties about limits, including how limits behave under arithmetic and a formula for computing limits of geometric sequences.
-
Today marked a significant step in our overall goal to understand "infinite degree tangent polynomials" (which, remember, is the motivation for this second half of our class), because we were able to define what it means to add up an infinite number of numbers. Specifically, if $\{a_n\}$ is some sequence of numbers, then we define the series associated to $\{a_n\}$ in terms of partial sums: for a given number $n$, we define $s_n = a_1 + a_2 + \cdots + a_n$. This gives us a sequence of partial sums $$\{s_n\} = \{a_1,a_1+a_2,a_1+a_2+a_3,a_1+a_2+a_3+a_4,\cdots\},$$ and we define $\sum a_n$ to be $\lim_{n \to \infty} \{s_n\}$ --- assuming the limit exists at all. (If the limit of the partial sums fails to exist, we say that the series diverges.) We ran through a number of explicit examples where we took a sequence, generated a handful of partial sums, and then tried to determine a general formula for the $n$th partial sum so that we could evaluate $\lim_{n \to \infty} \{s_n\} = \sum a_n$. Towards the end, this led us to thinking about geometric series, and we wrote down the geometric series formula at the end of class.
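Here's a small Python sketch (mine, not from class) that generates partial sums numerically, using $a_n = \frac{1}{2^n}$ as an example; the partial sums visibly approach $1$:

```python
def partial_sums(a, n_terms):
    """Return the first n_terms partial sums s_n = a(1) + ... + a(n)."""
    sums, running = [], 0.0
    for n in range(1, n_terms + 1):
        running += a(n)
        sums.append(running)
    return sums

print(partial_sums(lambda n: 1 / 2**n, 10))
# [0.5, 0.75, 0.875, ..., 0.9990234375] -- marching toward 1
```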
-
We did a handful of computations that took advantage of the geometric series formula. We saw that this formula can be useful even when the series we're looking at doesn't look precisely like the kind of series that shows up in the geometric series theorem. We said that geometric series would be some of our favorite series, because they are one of the few kinds of series that can be computed exactly. The other big class of series we can evaluate exactly are the so-called telescoping series. We saw an example of a telescoping series.
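The classic telescoping example (the one from class may have differed): using partial fractions, $\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1}$, so the partial sums collapse:
$$s_n = \left(1-\frac{1}{2}\right) + \left(\frac{1}{2}-\frac{1}{3}\right) + \cdots + \left(\frac{1}{n}-\frac{1}{n+1}\right) = 1 - \frac{1}{n+1},$$
and therefore $\sum_{n=1}^\infty \frac{1}{n(n+1)} = \lim_{n\to\infty} s_n = 1$.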
-
At the start of class, students worked in small groups to tackle a handful of geometric series problems. We then moved on to start tackling a fairly large question: what do we do if we can't compute the value of a series exactly (e.g., if the series isn't geometric, or we can't find a pattern for the $n$th partial sum)? In this situation, can we still say whether or not the series converges? We will develop a lot of ways to answer this question, but today we explored one specific situation in which we could conclude that a series diverges. Namely, if $\{a_n\}$ is a sequence whose limit isn't $0$, then we know for sure that $\sum a_n$ must be divergent. This is called the divergence test. We saw a few examples. We also saw what this theorem CANNOT do. Namely, if $\lim \{a_n\}$ *is* zero, then the divergence test tells us nothing about whether or not $\sum a_n$ converges.
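For instance, the divergence test immediately handles
$$\sum_{n=1}^\infty \frac{n}{n+1},$$
which diverges because $\lim_{n\to\infty} \frac{n}{n+1} = 1 \neq 0$. By contrast, the harmonic series $\sum \frac{1}{n}$, whose terms do go to $0$, is exactly the kind of series the test cannot decide.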
-
We started class with a quick review of the divergence test, and then we asked the question: if we have a series whose convergence/divergence can't be determined using the tools we already know about (namely: the geometric series theorem or the divergence test), then how can we determine if it converges? For example, what about the series $\sum_{n=1}^\infty \frac{1}{n}$? or $\sum_{n=1}^\infty \frac{1}{n^2}$? We studied these two examples in detail, determining their convergence/divergence by comparing the value of each series to a suitably chosen improper integral. We were able to see that $\sum_{n=1}^\infty \frac{1}{n}$ captures an area that contains the infinite area given by $\int_1^\infty \frac{1}{x}~dx$, and therefore the series $\sum_{n=1}^\infty \frac{1}{n}$ diverges. Similarly, we were able to see that $\sum_{n=2}^\infty \frac{1}{n^2}$ is contained within the (finite!) area given by $\int_1^\infty \frac{1}{x^2}~dx$, and therefore this forced $\sum_{n=1}^\infty \frac{1}{n^2}$ to converge (tacking on the single $n=1$ term doesn't change convergence).
Taking what we learned from these examples, we wrote down the integral test. It tells us that if $\sum_{n=c}^\infty a_n$ is a series, and if we can find a function $f(x)$ so that $f(n) = a_n$ for all $n$ --- and assuming some other technical assumptions about the function $f$ are true --- then the series $\sum_{n=c}^\infty a_n$ and the integral $\int_c^\infty f(x)~dx$ have the same convergence or divergence property. We then said that this can be used to give a proof of the $p$-test for series: $$\sum_{n=1}^\infty \frac{1}{n^p} \text{ converges exactly when }p>1.$$
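Sketching why the $p$-test holds for $p \neq 1$: the integral test points us to
$$\int_1^\infty \frac{1}{x^p}~dx = \lim_{t\to\infty} \frac{t^{1-p}-1}{1-p},$$
which is finite exactly when $1-p < 0$, i.e., when $p > 1$; the borderline case $p = 1$ gives $\lim_{t\to\infty} \ln(t)$, which diverges.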
-
Today we carefully worked through an application of the integral test, using it to argue that the series $\sum_{n=1}^\infty \frac{n}{e^n}$ converges by studying the related improper integral $\int_1^\infty \frac{x}{e^x}~dx$. This computation came at a significant upfront "cost," in the sense that we had to check a lot of things about the function $\frac{x}{e^x}$ before we could use the integral test.
Once this was complete, we observed that the integral test is really only useful for studying series $\sum_{n=c}^\infty a_n$ under the assumption that the associated function $f(x)$ has a computable antiderivative. For example, we observed that the integral test would not be a good choice to try to analyze the series $\sum_{n=1}^\infty \frac{\sin^2(n)}{n^3}$, since the associated function $f(x) = \frac{\sin^2(x)}{x^3}$ isn't a function we can find an antiderivative for. So what do we do in this case? In the case of the example we were analyzing, we saw that we could find another (simpler) series which was "bigger" than the series we were considering; this "larger" series was convergent, and so we argued this meant our "smaller" series must be convergent too. This is our first example of a method for determining the convergence of one series by comparing it to another, better-understood series.
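Written as a comparison: since $0 \leq \sin^2(n) \leq 1$ for every $n$, we have
$$0 \leq \frac{\sin^2(n)}{n^3} \leq \frac{1}{n^3},$$
and $\sum \frac{1}{n^3}$ converges by the $p$-test ($p = 3 > 1$), so the smaller series converges as well.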
-
We built on the final example from last class, seeing an analogous example where we used the divergence of a particular "small" series to conclude that a related "larger" series should also diverge. These two examples motivated a test for convergence that we called the comparison test for series. It is a useful test to run when you have a series with positive terms and for which there is a "related" series whose convergence or divergence you already know. Though this can be quite helpful, one downside is that the test requires that your two series relate to each other in a particular way in order for conclusions to be drawn. For instance, if you have $0 \leq a_n \leq b_n$ for all $n$ and you know that $\sum a_n$ diverges, then you can also conclude that $\sum b_n$ diverges; unfortunately, however, if you know that $\sum a_n$ converges, you cannot make any conclusions about what $\sum b_n$ does.
One method around this latter restriction comes from a second comparison test, which we called the limit comparison test. In this case one again needs positive termed series $\sum a_n$ and $\sum b_n$, but this time all we need to know is that $\lim \frac{a_n}{b_n}$ is a positive number in order to make a conclusion. Specifically, if $\lim \frac{a_n}{b_n}$ is a positive number, then the limit comparison test tells us that $\sum a_n$ and $\sum b_n$ converge or diverge together.
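A typical use (my example, not necessarily one from class): to analyze $\sum \frac{2n+1}{n^2+5}$, compare with $\sum \frac{1}{n}$:
$$\lim_{n\to\infty} \frac{(2n+1)/(n^2+5)}{1/n} = \lim_{n\to\infty} \frac{2n^2+n}{n^2+5} = 2,$$
a positive number, so the two series converge or diverge together; since the harmonic series diverges, so does ours.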
-
Though we have a lot of tests to determine if a given series converges or diverges, the only tests we currently have that allow us to deal with series whose terms aren't "eventually" positive are the geometric series theorem and the divergence test. What do we do with a series whose terms are both positive and negative, but which isn't amenable to one of these tests? Today we introduced the notion of an alternating series, and we stated and used the alternating series theorem as a tool for determining when certain alternating series converge.
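The standard first example here is the alternating harmonic series $\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}$: the terms $\frac{1}{n}$ are positive, decreasing, and tend to $0$, so the alternating series theorem guarantees convergence, even though the harmonic series itself diverges.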
-
Today we thought more about the alternating series theorem, and did another example. We then talked about series whose terms include both positive and negative values, but which are not alternating. We defined the notion of conditional and absolute convergence, and stated a theorem that told us that a series which converges absolutely is convergent.
-
In today's class we covered what is perhaps the most powerful series test of them all: the ratio test! In essence, the ratio test is working to determine if a given series seems to "eventually look geometric." It requires us to compute $\lim \left|\frac{a_{n+1}}{a_n}\right|$. If this quantity is less than $1$, then the series converges absolutely; if this value is greater than $1$, then the series diverges. If it equals $1$ exactly, then the test is inconclusive. We ran this test a few times to see what it would say about the convergence/divergence of certain series. We commented that it was really good at series which involve a combination of various components (algebraic, exponential, factorial), but it wasn't any good at evaluating convergence of series that look like "algebraic/algebraic."
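A quick illustration of the "mixed components" case (my example): for $\sum_{n=1}^\infty \frac{n}{2^n}$,
$$\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = \lim_{n\to\infty} \frac{(n+1)/2^{n+1}}{n/2^n} = \lim_{n\to\infty} \frac{n+1}{2n} = \frac{1}{2} < 1,$$
so the series converges absolutely. By contrast, running the test on $\sum \frac{1}{n}$ or $\sum \frac{1}{n^2}$ gives the inconclusive value $1$, which is why it's no good for "algebraic/algebraic" series.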
At the end of class we defined the notion of power series. In a sense, a power series is the kind of "infinite degree polynomial" that we were motivated to study at the beginning of this unit. We defined the notion of the center of the series, as well as the coefficients of the series. We then said that the fundamental question for power series was determining for what values of $x$ they converge.
-
Last class we introduced the notion of power series, and said that the big question was determining when they would converge. Today we pursued this problem in earnest, examining a few power series and determining the values of $x$ where they converge. We saw that all power series converge at their center. To determine convergence for points away from the center, we first rely on the ratio test. This told us almost everything there was to know, though in some examples we needed some additional tests in order to determine convergence for a few special points.
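A representative example (my choice): for $\sum_{n=1}^\infty \frac{x^n}{n}$, the ratio test gives
$$\lim_{n\to\infty}\left|\frac{x^{n+1}/(n+1)}{x^n/n}\right| = |x|,$$
so the series converges for $|x| < 1$ and diverges for $|x| > 1$; the two endpoints need separate tests ($x=1$ gives the divergent harmonic series, while $x=-1$ gives the convergent alternating harmonic series).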
-
We started class today by defining some terms related to our work last class period, namely "interval of convergence" and "radius of convergence." We then stated a theorem that tells us how to differentiate and antidifferentiate a power series. In short, one performs the "usual" differentiation and antidifferentiation techniques by applying them "term by term" on the series. Not only does this give a valid power series representation for derivatives and antiderivatives, but the theorem also tells us that the underlying radius of convergence doesn't change when we do this!
To put these ideas to the test, we computed a power series representation for $\frac{1}{1+x}$ using the geometric series theorem. We then integrated to give a power series representation for $\ln(1+x)$. Given the radius of convergence for the initial series, we knew this new series would certainly converge on $(-1,1)$, and would certainly diverge on $(-\infty,-1) \cup (1,\infty)$. However, we saw that the series also converged at $x=1$ (since then we'd recover the alternating harmonic series, which we know converges from seeing it previously with the alternating series theorem), which meant that we recovered a really amazing formula: $$\ln(2) = \frac{1}{1}-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots.$$
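Here is a quick numeric check of that formula in Python (a sketch of mine, not from class); the partial sums crawl toward $\ln(2) \approx 0.6931$:

```python
import math

total = 0.0
for n in range(1, 100001):
    total += (-1) ** (n + 1) / n   # 1 - 1/2 + 1/3 - 1/4 + ...

print(total, math.log(2))  # the partial sum lands close to ln(2)
```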
-
Building on the ideas from last class period, we continued our discussion of "creating new power series by performing calculus on known power series." We reviewed the example from the end of class last time, and then we spent time building another geometric series expansion for a familiar function: $$\frac{1}{1+x^2} = \frac{1}{1-(-x^2)} = 1-x^2+x^4-x^6+x^8-\cdots = \sum_{n=0}^\infty (-1)^n x^{2n} \quad \quad \text{ for $x$ in $(-1,1)$},$$ which we then integrated term by term to get a power series representation for $\arctan(x)$.
-
At the start of class we continued the development of the power series representation for $\text{arctan}(x)$. In particular, we were able to resolve an integration constant, so that we were left with the equality $$\text{arctan}(x) = x-\frac{x^3}{3}+\frac{x^5}{5}-\frac{x^7}{7}+\frac{x^9}{9}-\cdots = \sum_{n=0}^\infty \frac{(-1)^n}{2n+1}x^{2n+1} \quad \quad \text{ for $-1 \leq x \leq 1$}.$$ We were able to use this to give a very accurate calculation of $\text{arctan}(0.1)$ on the board, as well as recovering an amazing series representation for $\frac{\pi}{4}$.
Though the last result was amazing, the ideas which motivate it are frustratingly specific. We could give a power series representation for $\text{arctan}(x)$ because it happened to be "related" to the value of a geometric series via integration. But not all functions can be recovered in this way, so how do we come up with a power series representation for a random function? To answer this question, we set up what would be true if a function could be expressed as a power series, and then used this to help us solve for the coefficients that would make the equation work. Doing so gave us the following expression, typically known as Taylor's theorem: the power series for a function $f$ centered at $a$ is given by $$f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n,$$ at least for those $x$ where the power series converges. We used this to give a power series representation for the exponential function: $$e^x = \sum_{n=0}^\infty \frac{x^n}{n!} = 1+x+\frac{x^2}{2}+\frac{x^3}{6}+\cdots.$$
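To see Taylor's theorem in action numerically, here's a short Python sketch (my own, not from class) comparing partial sums of the exponential series against `math.exp`:

```python
import math

def exp_taylor(x, n_terms):
    """Partial sum of e^x = sum of x^n / n! for n = 0 .. n_terms - 1."""
    total, term = 0.0, 1.0      # term starts at x^0 / 0! = 1
    for n in range(n_terms):
        total += term
        term *= x / (n + 1)     # next term: x^(n+1) / (n+1)!
    return total

for k in (2, 5, 10, 15):
    print(k, exp_taylor(1.0, k), math.exp(1.0))  # rapidly approaches e
```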
-
We started class by computing the Maclaurin series for $\sin(x)$, applying Taylor's formula directly. We then used this computation to give a Taylor series for $\cos(x)$: our trick was to use "calculus of power series," together with the fact that $\frac{d}{dx}\left[\sin(x)\right] = \cos(x)$, to compute this power series without having to go through the process of applying Taylor's theorem again. It turns out this makes the computation for $\cos(x)$ much easier!
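For the record, here are the two series (standard results, matching what Taylor's formula produces): Taylor's formula at $a = 0$ gives
$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots = \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}x^{2n+1},$$
and differentiating term by term gives
$$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots = \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}x^{2n}.$$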
-
Though applying Taylor's formula gives us a nice way to compute the series representation for a function, it's not the only --- or necessarily the easiest --- way to find a function's series representation. Today we saw how to compute a power series representation for functions like $e^{-x^2}$ and $x^3\arctan(-2x^3)$, and even how to multiply power series to find the power series for product functions.
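Spelled out for the first of these: substituting $-x^2$ for $x$ in the known series for $e^x$ gives
$$e^{-x^2} = \sum_{n=0}^\infty \frac{(-x^2)^n}{n!} = 1 - x^2 + \frac{x^4}{2} - \frac{x^6}{6} + \cdots,$$
valid for all $x$, with no derivatives computed at all.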
-
In today's class we considered what one can do with Taylor series expansions. We saw that one can use Taylor series to not only approximate function values very efficiently and easily, but to also find (series representations for) definite integrals, values for indeterminate limits, and even high-order derivatives. We also saw how to use series to evaluate familiar functions in weird places. For example, we gave arguments that told us that $e^{\pi \sqrt{-1}}+1=0$, and that $$\sqrt{-1}^{\sqrt{-1}} = e^{-\frac{\pi}{2}}.$$
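A sketch of the first of those arguments: plugging $x = i\pi$ (with $i = \sqrt{-1}$) into the exponential series and separating the even-indexed terms from the odd-indexed ones yields exactly the series for $\cos(\pi)$ and $i\sin(\pi)$, so $e^{i\pi} = \cos(\pi) + i\sin(\pi) = -1$, which rearranges to $e^{\pi\sqrt{-1}}+1 = 0$.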
-
We've spent a lot of time in class talking about polynomial approximations of functions, and in today's class we got a brief introduction to how one can follow a similar program to approximate functions with trigonometric functions. These are called Fourier series, and they have an enormous number of uses. The PDF of the presentation I gave is available here, and some of the sound applets we played with in class are here.