Lecture Outlines

We will cover a wide variety of material during lecture and discussion sections, so regular attendance is important. To help you organize your study materials, the list below gives an overview of the basic concepts covered during a given lecture period. Each "lecture" represents an hour-long block of class content; hence each day we meet typically consists of three "lectures."

Lecture 1: Riemann sums and definite integrals

Today we studied the question: how can I compute the area under the graph of a function $f(x)$ over the interval $[a,b]$? We saw that there are ways to approximate this answer by replacing the given "wavy" area with geometric objects whose area is easy to compute. The basic idea was to take the interval $[a,b]$, split it into $n$ subintervals of equal length, and then treat the function $f(x)$ as though it were constant (or, more generally, linear) over each subinterval. When one does so, the graph of $f(x)$ is then approximated by either a rectangle or a trapezoid. By adding up the areas of each of these objects, we generate our estimate for the area under the curve.

This process was captured by the various ``Riemann sums'' which estimate the desired area. We claimed that if one considers the values of these Riemann sums as the number of subintervals $n$ approaches infinity, then the approximations will approach the actual value for the area under the curve. We finished this lecture by working through an example of this kind of computation, and then discussing how the geometry which motivates the problem of the definite integral can be used to prove certain basic properties, like $$\int_a^b f(x)~dx = -\int_b^a f(x)~dx.$$
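The rectangle-based approximation described above can be sketched in a few lines of code. This is an illustration rather than anything from class; the function $f(x) = x^2$ and the interval $[0,1]$ are arbitrary choices.

```python
# Minimal sketch of a left-endpoint Riemann sum (illustrative choice of f and [a, b]).
def riemann_sum(f, a, b, n):
    """Approximate the area under f on [a, b] using n equal-width rectangles,
    with each rectangle's height taken from the left endpoint of its subinterval."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

# The exact area under f(x) = x^2 on [0, 1] is 1/3; the estimate improves as n grows.
estimate = riemann_sum(lambda x: x * x, 0.0, 1.0, 100_000)
```

Increasing $n$ shrinks the gap between the estimate and the true area, which is exactly the limiting process described above.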

Lecture 2: The Fundamental Theorems of Calculus

The theme of this lecture was the important connection between anti-differentiation and definite integrals. This gives rise to the two Fundamental Theorems of Calculus, which show us how the seemingly unrelated processes of differentiation and definite integration are related. In particular, we saw that an antiderivative can be used to give a fast way for calculating a definite integral (this is called the Evaluation Theorem, or the Fundamental Theorem of Calculus Part II). This is perhaps the central theorem in all of integral calculus. Its counterpart --- the so-called Fundamental Theorem of Calculus Part I --- tells us that one can use definite integrals to calculate antiderivatives.

We gave sketches of the proofs for both of the Fundamental Theorems, then did a number of problems which are related to these results. We also saw how one can use the Fundamental Theorem Part II to answer questions which look like they should be answered by the Fundamental Theorem Part I. For instance, we calculated $$\frac{d}{dx}\left[\int_{x^2}^{x^3} \sin(e^t)~dt\right]$$ by pretending we had an antiderivative $F(x)$ for $\sin(e^x)$, and then using it to calculate the derivative in question.
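One can sanity-check this kind of computation numerically. The sketch below (an illustration, not something done in class) approximates the integral with a midpoint rule and compares a finite-difference derivative against the answer predicted by the chain rule together with the Fundamental Theorem, namely $\sin(e^{x^3})\cdot 3x^2 - \sin(e^{x^2})\cdot 2x$.

```python
import math

def midpoint_integral(f, a, b, n=2000):
    """Approximate the definite integral of f over [a, b] with the midpoint rule."""
    dt = (b - a) / n
    return sum(f(a + (i + 0.5) * dt) for i in range(n)) * dt

def G(x):
    # G(x) = integral of sin(e^t) from x^2 to x^3
    return midpoint_integral(lambda t: math.sin(math.exp(t)), x**2, x**3)

x, h = 1.1, 1e-4
numeric = (G(x + h) - G(x - h)) / (2 * h)  # central-difference derivative of G
exact = math.sin(math.exp(x**3)) * 3 * x**2 - math.sin(math.exp(x**2)) * 2 * x
```

The two values agree to several decimal places, as the Fundamental Theorem predicts.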

Lecture 3: Finding antiderivatives using familiar derivative rules

The Fundamental Theorems of Calculus tell us that antiderivatives and definite integrals are intimately related, and so they motivate our hunt for methods of computing antiderivatives. Since any differentiation rule can be turned into a rule for indefinite integrals, we start this quest in earnest by returning to some of the important differentiation techniques and turning them into integration rules. For instance, the familiar chain rule from differential calculus has an integral counterpart that is called $u$-substitution. We saw that the key to using $u$-substitution effectively is being able to spot compositions of functions. We also stated an integration rule which comes from the product rule; this is called integration by parts. We did a number of problems that required $u$-substitution.

Lecture 4: Integration by parts; Trigonometric Substitutions, Part I

We started lecture by creating a method for finding antiderivatives that was motivated by the product rule. This technique is called integration by parts. We said that the techniques of $u$-substitution and integration by parts are the go-to tools for computing many antiderivatives.

We then focused on computing integrals that take certain specified forms. In particular, we focused on integrals of the form $\int \cos^m(x) \sin^n(x)~dx$ or $\int \tan^m(x) \sec^n(x)~dx$. For various possibilities for the parameters $n$ and $m$, we were able to make substitutions which made these integrals easier to compute. Along the way we also made heavy use of some familiar trigonometric identities, particularly $\sin^2(x) + \cos^2(x) = 1$ as well as $\sin^2(x) = \frac{1}{2}(1-\cos(2x))$ and $\cos^2(x) = \frac{1}{2}(1+\cos(2x))$.

Lecture 5: Trigonometric Substitutions, Part II

In this lecture we discussed integrals that are amenable to trigonometric substitutions, even though it might not be clear at first that trig subs would be helpful. In particular, we saw the following guidelines:

  • if the integrand involves $\sqrt{x^2+a^2}$, it might be useful to use the substitution $x = a\tan(\theta)$;
  • if the integrand involves $\sqrt{x^2-a^2}$, it might be useful to use the substitution $x = a\sec(\theta)$;
  • if the integrand involves $\sqrt{a^2-x^2}$, it might be useful to use the substitution $x = a\sin(\theta)$ or $x = a\cos(\theta)$.

We saw that it was important to remember that these substitutions don't always work; sometimes there are other substitutions that are preferable, and one needs to spend time developing a feel for when a given substitution will or won't be helpful.

We also spent time in class today discussing how to simplify expressions such as $\sin(\arctan(x))$ or $\tan(\arccos(x))$.

Lecture 6: Partial fractions

We developed a new integration technique that comes from "undoing" the familiar process of common denominators. This is formally known as "partial fractions," and it amounts to a process of factorizing the denominator of a rational function and splitting such a fraction up into constituent summands. For instance, we saw in class that

$\displaystyle \frac{2x+1}{2x^2+5x-3} = \frac{4/7}{2x-1} + \frac{5/7}{x+3}.$

We used this process to help us evaluate integrals whose integrand looked like the quotient of two polynomials, but which weren't amenable to substitution.
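As a quick numerical sanity check (an illustration, not the algebraic method from class), one can verify the decomposition above by comparing both sides at a few sample points away from the poles $x = 1/2$ and $x = -3$:

```python
# Both sides of the partial fraction decomposition should agree wherever defined.
def lhs(x):
    return (2 * x + 1) / (2 * x**2 + 5 * x - 3)

def rhs(x):
    return (4 / 7) / (2 * x - 1) + (5 / 7) / (x + 3)

samples = [0.0, 1.0, 2.5, -1.0, 10.0]   # arbitrary points avoiding the poles
max_gap = max(abs(lhs(x) - rhs(x)) for x in samples)
```

The maximum gap is at the level of floating-point round-off, confirming the decomposition.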

Lecture 7: Improper Integrals: Part I

We've spent the last several lectures discussing methods for evaluating indefinite integrals, but in today's class we asked a slightly different question: what does it mean to compute a definite integral over an interval that is infinitely wide? Integrals over these domains are called improper integrals of type 1; we saw that we can evaluate these integrals as the limits of certain finite definite integrals. In particular, we saw that

$\displaystyle \int_a^\infty f(x)~dx = \lim_{R \to \infty} \int_a^R f(x)~dx$.

We also discussed integrals taken over intervals of the form $(-\infty,a]$ and $(-\infty,\infty)$.
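The limit definition above can be watched in action numerically. The sketch below (an arbitrary illustration) uses $f(x) = 1/x^2$, for which $\int_1^R x^{-2}~dx = 1 - 1/R$, so the finite integrals should approach $1$ as $R$ grows:

```python
def midpoint_integral(f, a, b, n=100_000):
    """Approximate the definite integral of f over [a, b] with the midpoint rule."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# Finite integrals over [1, R] for increasing R; each equals 1 - 1/R (up to
# quadrature error), so the sequence of values approaches 1.
values = [midpoint_integral(lambda x: 1 / x**2, 1.0, R) for R in (10.0, 100.0, 1000.0)]
```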

Lecture 8: l'Hopital's rule; Improper Integrals: Part II

In the last lecture we considered integrals that involve infinity. Because we're evaluating limits as parameters approach infinity, we saw that we'll need some technology for making the evaluation of these limits simpler when we encounter expressions of indeterminate type. To address these issues we recalled how one can use l'Hopital's rule to compute such limits. This trick comes in handy when attempting to evaluate limits of certain expressions which arise when computing improper integrals.

After discussing l'Hopital's rule we discussed improper integrals of the second kind; these are integrals over intervals on which the integrand has a discontinuity. The failure of continuity prevents us from using the Evaluation Theorem to compute such definite integrals, and so we saw how to use limits to compute these integrals instead. By the end of class we also saw how one handles improper integrals which are a mix of type 1 and type 2; the general rule in cases such as this is to split the given interval up into subintervals for which there is only one "problem," and then to evaluate the integral over each of these subintervals separately.

Lecture 9: Comparison Tests for improper integrals

Often when one is faced with a definite integral that she'd like to compute, the harsh reality of life is that computing an antiderivative for the given integrand is simply impossible. In these situations, one has to make do with whatever limited information she can get about the given integral. In the case that the desired definite integral is improper, one of the things one can hope to find is whether the given integral converges or diverges (without asking what the actual value of the integral is when it converges). One can do precisely this by comparing the given integrand to another integrand whose convergence or divergence is already known. This is the heart of the comparison test for integrals, and in class today we saw how one can go about using comparison tests to determine the convergence or divergence of improper integrals that we can't compute antiderivatives for.

Lecture 10: Area between curves

After spending several lectures talking about techniques for evaluating anti-derivatives, we're now going to focus less on the evaluation of integrals and more on what kinds of quantities can be represented (and calculated) using integrals. We started this by thinking about how we can use integrals to calculate the area captured between two curves. This included looking at the areas bounded by graphs of functions, but also areas which are most easily represented by integrating with respect to $y$ (instead of the more typical integrals with respect to $x$).

Lecture 11: Volumes through integration

We considered how one could compute the volume of a 3-dimensional object by dissecting it into cross sections. The idea was to compute the volume of each cross section (by multiplying the cross-sectional area by the width of each cross section), and then add up the volumes of each of the cross sections. By cutting into more and more cross sections (or, equivalently, by making the width of the cross sections approach 0), we get better and better estimates for the total volume. Taking this process to its limit, we see that the volume of an object is the integral of the cross-sectional area.

We put the general theory of computing volumes via cross sections into play by investigating volumes attached to some familiar geometric figures.
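For instance (an illustrative example, not necessarily one of the figures from class), slicing the unit sphere perpendicular to the $x$-axis gives circular cross sections of area $\pi(1-x^2)$, and integrating that area from $-1$ to $1$ recovers the familiar volume $\frac{4}{3}\pi$:

```python
import math

def volume_by_slices(area, a, b, n=100_000):
    """Integrate a cross-sectional area function over [a, b] via the midpoint rule."""
    dx = (b - a) / n
    return sum(area(a + (i + 0.5) * dx) for i in range(n)) * dx

# Unit sphere: the cross section at position x is a disc of radius sqrt(1 - x^2),
# so its area is pi * (1 - x^2).
vol = volume_by_slices(lambda x: math.pi * (1 - x**2), -1.0, 1.0)
```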

Lecture 12: Volumes through integration, Part II

We used the machinery we developed for computing volumes via cross sections to come up with formulae for the volumes of objects which are generated by rotating a 2-dimensional area about an axis in 3-dimensional space. In the case where we imagine dissecting the resultant solid in a direction perpendicular to the axis of rotation, this led to the so-called "washer method."

In this lecture we discussed another way to calculate the volume of a solid generated by rotating an area about an axis of revolution. Whereas taking cuts that are perpendicular to the axis of rotation results in cross sections that are either washers or discs (hence ``the disc method''), cutting our area up into slices that are parallel to the axis of rotation instead leaves us with hollow cylinders (hence ``the shell method''). We saw that there are certain situations in which one might prefer one of these methods over the other, and that there are other times when one can employ either method effectively.

Lecture 13: Arc length

In class today we saw one final application of integrals; this time we used them to calculate the distance traveled from a point $(a,f(a))$ to a point $(b,f(b))$ along the graph of a function $f(x)$. We called this the arc length problem, and saw that the solution to this problem was to compute $$\int_a^b \sqrt{1+(f'(x))^2}~dx.$$ We computed (or tried to compute) the arc length for a few different functions.
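Here is one worked instance of that formula (an illustration; the functions used in class may have differed). For $f(x) = x^{3/2}$ on $[0,1]$ we have $f'(x) = \frac{3}{2}\sqrt{x}$, and the arc length integral has the closed form $\frac{8}{27}\left(\left(\frac{13}{4}\right)^{3/2} - 1\right)$, which a direct numerical estimate should match:

```python
import math

def arc_length(fprime, a, b, n=100_000):
    """Approximate the arc length integral of sqrt(1 + f'(x)^2) over [a, b]."""
    dx = (b - a) / n
    return sum(math.sqrt(1 + fprime(a + (i + 0.5) * dx) ** 2) for i in range(n)) * dx

# f(x) = x^(3/2), so f'(x) = 1.5 * sqrt(x); compare against the closed form.
estimate = arc_length(lambda x: 1.5 * math.sqrt(x), 0.0, 1.0)
exact = (8 / 27) * ((13 / 4) ** 1.5 - 1)
```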

Lecture 14: Introducing parametric curves

We introduced the notion of a parametric curve. This is a curve that includes all points of the form $(x(t),y(t))$, where $x(t)$ and $y(t)$ are functions that each depend on a parameter $t$ which takes on values from some interval $[a,b]$. We saw, for instance, that one can parameterize the unit circle in a number of different ways, and that small changes to this parameterization can stretch the circle or shift its position in space. One can use the operations of "stretching" and "shifting" to generate some fairly exotic parametric curves; we saw one example inspired by a popular carnival ride. We also gave a method for parameterizing a line that passes through two points. We saw that all graphs of the form $y=f(x)$ can be viewed as parametric curves, but that not all parametric curves can be expressed as the graph of some function $y=f(x)$.

The following PDF might be useful if you're interested in seeing a bit more on parametric curves. It also includes some information on polar coordinates, which will be the subject of Lecture 16.

Lecture 15: The calculus of parametric curves

Having introduced parametric curves in the previous class, in this lecture we discussed how one answers calculus questions for parametric curves. For instance: at a given point $(x(t_0),y(t_0))$ on a parametric curve, how does one find the slope of the tangent line? the concavity? If we view $(x(t),y(t))$ as measuring the position of a particle as it moves through space, how do we measure the speed of that particle? How can we measure how far that particle moves from some initial point $(x(t_0),y(t_0))$ to some terminal point $(x(t_1),y(t_1))$? How can we calculate the area captured beneath a parametric curve? Captured inside some "loop" of a parametric curve?

Lecture 16: Polar curves

After our first midterm we discussed the notion of polar coordinates and polar functions. Instead of describing a point in space by its horizontal and vertical displacement from the origin (its "rectangular coordinates"), the polar representation instead records the distance from the origin and an angle value (the one made by the positive $x$-axis and the line segment connecting the origin and the point). We discussed methods for transforming between polar and rectangular coordinates, and we discussed how one can define a function on polar coordinates (typically by setting $r = f(\theta)$ for some function $f$). With these notions in mind it is natural to ask for the calculus of these curves: what can we say about slopes of tangent lines? concavity? speed and arc length? area bounded by a polar curve? The answer to all but the last question came by recognizing that a polar curve is really just a special kind of parametric curve, namely $$(f(\theta)\cos(\theta),f(\theta)\sin(\theta)).$$ Hence to answer questions related to differential calculus of polar curves, we can just fall back on what we learned about the differential calculus of parametric curves. Polar integrals require a new setup, since in these settings the notion of "area under a curve" means something slightly different. We developed the theory which led to the formula $$\int_{\theta=a}^b \frac{1}{2}\left(f(\theta)\right)^2~d\theta$$ for calculating the area bounded by a polar curve $r = f(\theta)$ on the interval $[a,b]$. We then used this to compute the area between two polar curves.
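As a numerical illustration of the polar area formula (the curve here is an example chosen for this note, not necessarily one from class): the cardioid $r = 1 + \cos(\theta)$ on $[0, 2\pi]$ bounds area $\frac{3\pi}{2}$, a standard closed form that the formula reproduces:

```python
import math

def polar_area(f, a, b, n=100_000):
    """Approximate the polar area integral of (1/2) f(theta)^2 over [a, b]."""
    dtheta = (b - a) / n
    return sum(0.5 * f(a + (i + 0.5) * dtheta) ** 2 for i in range(n)) * dtheta

# Area enclosed by the cardioid r = 1 + cos(theta); the exact value is 3*pi/2.
area = polar_area(lambda t: 1 + math.cos(t), 0.0, 2 * math.pi)
```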

Lecture 17: Sequences

Today we discussed the notion of a sequence, which is just an ordered list of numbers. We saw how one can use fairly simple rules to develop various sequences, and we said that in this class we'll mostly be interested in the limit of a given sequence. Intuitively, the limit of a sequence $\{a_n\}_{n=1}^\infty$, if it exists, is a number $L$ so that the terms in the sequence "eventually" get "close to $L$." We made this intuitive description more precise in class by writing down the actual definition (in terms of $\epsilon$ and $N$) for what the statement $\lim_{n \to \infty} a_n = L$ means.

Lecture 18: Computing Limits of Sequences

We reviewed some of the basic properties of sequences, focusing particularly on the "$\epsilon$-$N$" definition of the limit. We then moved on to discuss rules that could be used to "quickly" compute limits of sequences. The two big rules we discussed were

  • If there exists a function $f(x)$ so that $a_n = f(n)$ for all positive integers $n$, and if $\displaystyle \lim_{x \to \infty} f(x) = L$, then $\displaystyle \lim_{n \to \infty} a_n = L$ as well.

  • If $\displaystyle \lim_{n \to \infty} a_n = L$ and $f$ is a function that is continuous at $L$, then $\displaystyle \lim_{n \to \infty} f(a_n) = f(L)$.

We saw several examples where these rules were used to calculate limits of sequences, and in particular we saw that the second rule is good for evaluating sequences which are "inside" some function. In other words, sequences that are defined in terms of compositions can often have their limits calculated using this second fact.

We finished this class by returning to the example of $\left\{\frac{3^n}{n!}\right\}$, whose limit we weren't able to calculate with the previous rules. We discussed the squeeze theorem and saw how it could be used to prove that $$\lim_{n \to \infty} \frac{3^n}{n!} = 0.$$
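One can watch this limit happen numerically (a quick illustration, not the squeeze-theorem proof itself): the factorial in the denominator eventually overwhelms the exponential in the numerator.

```python
import math

# Terms of the sequence 3^n / n! for n = 1, ..., 30: they rise briefly
# (the maximum is at n = 2 and n = 3, where both terms equal 4.5)
# and then decrease rapidly toward 0.
terms = [3**n / math.factorial(n) for n in range(1, 31)]
```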

Lecture 19: Intro to series; Geometric series

For a sequence $\{a_n\}_{n=1}^\infty$, we defined the $n$th partial sum to be $$s_n = a_1 + a_2 + \cdots + a_n,$$ and we said that the series associated to a sequence $\{a_n\}$ is then $$\sum_{n=1}^\infty a_n = \lim_{n \to \infty} s_n.$$ We used the definition to evaluate $$\sum_{n=1}^\infty \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots.$$ We then saw that this was a special case of a far more general phenomenon when considering "geometric series." We proved that $$\sum_{n=0}^\infty ar^{n} \quad \left\{\begin{array}{ll}\mbox{diverges}&\mbox{ if }|r|\geq 1,\\\mbox{converges to }\frac{a}{1-r}&\mbox{ if }|r|<1.\end{array}\right.$$
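As a numeric illustration of the geometric series theorem (with the arbitrary choices $a = 1$, $r = 1/3$), the partial sums rapidly approach $\frac{a}{1-r} = \frac{3}{2}$:

```python
# Partial sums of the geometric series sum_{n >= 0} a * r^n with a = 1, r = 1/3.
a, r = 1.0, 1.0 / 3.0
partial, partial_sums = 0.0, []
for n in range(50):
    partial += a * r**n
    partial_sums.append(partial)

limit = a / (1 - r)  # the value 3/2 predicted by the geometric series theorem
```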

Lecture 20: Telescoping series; the Divergence Test

We started class today by introducing the notion of a telescoping series, which often arises when the terms of a series can be expressed as differences of fractions via partial fractions; for example, we computed the value of the series $$\sum_{n=1}^\infty \frac{1}{n(n+1)} = \sum_{n=1}^\infty \left(\frac{1}{n}-\frac{1}{n+1}\right).$$
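The telescoping is easy to see numerically (an illustration): the $N$th partial sum collapses to $1 - \frac{1}{N+1}$, so the series converges to $1$.

```python
def partial_sum(N):
    """Nth partial sum of sum_{n >= 1} 1/(n(n+1))."""
    return sum(1 / (n * (n + 1)) for n in range(1, N + 1))

# The telescoped closed form predicts partial_sum(N) == 1 - 1/(N+1).
s = partial_sum(10_000)
```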

We then commented that --- aside from geometric and telescoping series --- it's typically very hard to calculate the precise value to which a series converges. Instead, one often has to be content with knowing whether a series simply converges or diverges. Toward that end, over the next several class periods we'll develop some results which will let us determine if a given series converges or diverges. To get started in that direction, we started by discussing the Divergence Test. This test relates to the terms of the series, and it says that if the terms of a series $\sum a_n$ do not approach $0$ (i.e., if $\lim_{n \to \infty} a_n \neq 0$), then the series itself is not convergent. This is the front line of attack when it comes to determining whether a given series diverges. After all, it's often quite easy to determine whether $\lim_{n \to \infty} a_n = 0$, and if the terms of the series don't have this property then one can immediately make a conclusion about the divergence of the series. This allowed us to say that some otherwise complicated series were actually divergent, including (for example) $$\sum_{n=1}^\infty \sqrt{\frac{n^2+1}{9n^2+\sqrt{n}}}.$$ Be warned: if $\lim_{n \to \infty} a_n = 0$, then the divergence test tells you nothing about the convergence or divergence of the given series!

Lecture 21: The integral test

In this lecture we discussed a way to use integrals to make convergence/divergence statements about series. We said that if one has a series $\sum_{n = b}^\infty a_n$, and if there exists a function $f(x)$ so that $f(n) = a_n$ and which is continuous on $[b,\infty)$, is "eventually decreasing" and is positive, then the convergence of $\int_b^\infty f(x)~dx$ forces the convergence of $\sum_{n=b}^\infty a_n$; and likewise the divergence of $\int_b^\infty f(x)~dx$ forces the divergence of $\sum_{n=b}^\infty a_n$. We used this fact to explain why the harmonic series diverges, since one can compare it to the integral $$\int_1^\infty \frac{1}{x}~dx$$ which is known to diverge (because of our old friend the $p$-test). A similar idea allowed us to derive a $p$-test for series, and also told us how to determine the convergence of the (seemingly quite complicated) series $\sum_{n=0}^\infty ne^{-n}$.

Lecture 22: Comparison tests for series

In this lecture we discussed two comparison tests for series whose terms are positive. The first of these was the series analog of a comparison test for integrals that we saw earlier in the semester. This allowed us to determine convergence/divergence properties for series as exotic as $$\sum_{n=2}^\infty \frac{\arctan(n)}{n^2+1}, \quad \sum_{n=1}^\infty \frac{1}{1+n^4+\cos^2(n)}, \quad \mbox{ and } \quad \sum_{k=2}^\infty \frac{2k^2}{k^{5/2}-1}.$$

Though this comparison test is useful, it isn't necessarily foolproof, and it also requires a certain amount of careful inequality checking in order to be used effectively. One can dispense with some of this care when using the limit comparison test, which says that two series $\sum a_n$ and $\sum b_n$ with positive terms have the same convergence/divergence behavior if $$\lim_{n \to \infty} \frac{a_n}{b_n} = L$$ for some finite, non-zero number $L$. This can be used to determine the convergence of the series $$\sum_{n=1}^\infty \frac{n^2+n+5}{5n^5+4n^4+3n^2-n+6},$$ a series that would be extremely difficult to analyze with the ordinary comparison test.

Lecture 23: Alternating series

We presented a test that one can use to determine when certain alternating series converge. Since nearly all the series tests we've discussed thus far apply only to series with positive terms, this is the first theorem we've discussed in which negative terms are allowed. The main result of this lecture (the alternating series theorem) tells us that an alternating series will converge under some relatively mild hypotheses; we said this should make some intuitive sense, since an alternating series has the opportunity for some "cancellation" that might make it "easier" for the series to converge. We finished this lecture by introducing the notions of absolute and conditional convergence. We said any absolutely convergent series must be convergent (this gives one way to show convergence of a series which doesn't have only positive terms, but which isn't quite alternating).

Lecture 24: The Ratio and Root Tests

Having covered a number of tests for determining convergence or divergence of positive-termed or alternating series, in today's class we discussed our final two tests for series convergence: the ratio test and root test. Both of these tests have the virtue of having almost no hypotheses to verify, and (when applicable) give a tremendous amount of information. We saw that the ratio test is particularly applicable to series that involve either factorials or a mix of algebraic and exponential terms, whereas the root test is effective for determining the convergence of those series whose $n$th terms are raised to the $n$th power. For instance, the first series below is amenable to the ratio test, whereas the latter is amenable to the root test: $$\sum_{k=2}^\infty (-1)^{k^2} \frac{k 2^k}{3^{k+1}} \quad \mbox{ and } \quad \sum_{n=1}^\infty \frac{e^{3n}}{n^{2n}}.$$
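A quick numerical look at the first series above (an illustration, working with absolute values and ignoring the alternating sign): with $a_k = \frac{k\,2^k}{3^{k+1}}$, the consecutive ratios approach $\frac{2}{3} < 1$, which is exactly what the ratio test wants to see.

```python
# Ratios a_{k+1}/a_k for a_k = k * 2**k / 3**(k+1); they tend to 2/3 < 1,
# so the ratio test gives (absolute) convergence.
def a(k):
    return k * 2**k / 3 ** (k + 1)

ratios = [a(k + 1) / a(k) for k in range(1, 200)]
```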

Lecture 25: Power Series

Suppose that someone asked you to evaluate all of the following series: $\sum_{n=1}^\infty \left(\frac{1}{3}\right)^n$, $\sum_{n=1}^\infty \left(\frac{2}{3}\right)^n$, $\sum_{n=1}^\infty \left(\frac{1.5}{3}\right)^n$, $\sum_{n=1}^\infty \left(\frac{-e}{3}\right)^n$. Each of these series would be relatively simple to evaluate on its own (they're simply geometric series, after all), but it would become tiresome to compute all of them one by one. This is especially true since all the series seem so similar to each other. Wouldn't it be better to find some series we could evaluate which would tell us the value of each of these individual series?

This is the motivation behind power series, which are essentially functions that are defined in terms of series. In class we defined power series formally, as well as the notion of interval of convergence.

Lecture 26: A series representation for $\arctan(x)$

We began this class by reviewing how to compute the radius of convergence and interval of convergence for a power series. Afterward we computed some geometric power series using the geometric series theorem. By performing some simple operations on these series, we were able to discover a series representation for an old friend: $\arctan$. (At least for values of $x$ in the interval $[-1,1]$.) This allowed us to give the following (surprising!) series identity: $$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots.$$
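The identity above can be checked numerically (an illustration): the partial sums of $1 - \frac{1}{3} + \frac{1}{5} - \cdots$ creep toward $\frac{\pi}{4}$, though famously slowly.

```python
import math

def leibniz_partial_sum(N):
    """Sum of the first N terms of 1 - 1/3 + 1/5 - 1/7 + ..."""
    return sum((-1) ** k / (2 * k + 1) for k in range(N))

# Multiplying by 4 gives a (slowly improving) approximation of pi.
approx_pi = 4 * leibniz_partial_sum(200_000)
```

The alternating series error bound from Lecture 23 explains the slow convergence: the error after $N$ terms is at most the size of the next term, roughly $\frac{1}{2N}$.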

Lecture 27: Generating power series from the geometric series $\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots$

In this class we built on the idea we began investigating in our last session: that one can use substitution and integration/differentiation to take the known power series $$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots$$ and generate new power series from it. For instance, in class we used this idea to show that $$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots $$ for values of $x$ in the interval $(-1,1)$.
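As a final numerical illustration (the value $x = 1/2$ is an arbitrary choice), truncating the series above gives an excellent approximation to $\ln(3/2)$, since $|x| < 1$ makes the terms shrink geometrically:

```python
import math

# Partial sum of x - x^2/2 + x^3/3 - ... at x = 1/2; compare with ln(1 + 1/2).
x, N = 0.5, 60
approx = sum((-1) ** (n + 1) * x**n / n for n in range(1, N + 1))
```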