Lecture Outlines

We will cover a wide variety of materials during lecture and discussion sections, so your constant attendance is important. To help you in organizing your study materials, the list below gives an overview of the basic concepts covered during a given lecture period.

Lecture 1: Introduction to Parametric Curves

We started class today by covering some of the logistics from the syllabus. I'll emphasize here that if office hours don't work for you, I'm VERY happy to set up an appointment to talk outside of office hours, or you can just email me questions or drop by my office to chat.

With logistics out of the way, we started working on some mathematics. We answered questions like: what is a parametric function? what is a parametric curve? what are some stock examples of parametric functions? what does it mean to talk about the orientation associated to a parametric curve? how can we reverse the orientation of a parametric curve, or "speed up" the curve? how can we recover ellipses from parametric curves?
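To make the stock examples concrete, here is a quick symbolic check (a sketch using Python's sympy library, which is not part of the course materials; the particular parametrizations are illustrative choices):

```python
import sympy as sp

t = sp.symbols('t')

# Standard parametrization of the unit circle, traversed counterclockwise.
x, y = sp.cos(t), sp.sin(t)
assert sp.simplify(x**2 + y**2) == 1

# Replacing t with -t reverses the orientation; the point still lies on the circle.
x_rev, y_rev = sp.cos(-t), sp.sin(-t)
assert sp.simplify(x_rev**2 + y_rev**2) == 1

# Scaling the components differently yields an ellipse: (3 cos t, 2 sin t)
# satisfies x^2/9 + y^2/4 = 1.
xe, ye = 3*sp.cos(t), 2*sp.sin(t)
assert sp.simplify(xe**2/9 + ye**2/4) == 1
```

Substituting $2t$ for $t$ in the first example traces the same circle twice as fast, which is the "speeding up" phenomenon mentioned above.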

For those of you who don't currently have a copy of the text, here's a scan of the first chapter of the book. I mentioned the password for accessing this document in class. Feel free to ask for the password if you don't remember it or weren't in class that day.

Lecture 2: Differential calculus of parametric curves

We started class by using some of the simple circle-building tools we developed last class to construct a more exotic parametric curve. We also discussed how to parameterize a line that passes through two points. We then turned to asking calculus-style questions for parametric curves. How do we find the slope of the line tangent to a parametric curve at a given point? How do we calculate its concavity? How do we calculate the ``speed'' of a point moving along a parametric curve?
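These three questions can all be answered with one-line formulas: $dy/dx = (dy/dt)/(dx/dt)$, concavity by differentiating the slope again with respect to $t$ and dividing by $dx/dt$, and speed as $\sqrt{(x'(t))^2 + (y'(t))^2}$. A symbolic sanity check on the unit circle (a sketch using sympy, not part of the course materials):

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.cos(t), sp.sin(t)   # the unit circle

# Slope of the tangent line: dy/dx = (dy/dt)/(dx/dt).
slope = sp.diff(y, t) / sp.diff(x, t)
assert sp.simplify(slope.subs(t, sp.pi/4)) == -1  # tangent at (sqrt(2)/2, sqrt(2)/2)

# Speed: sqrt((dx/dt)^2 + (dy/dt)^2) -- constant 1 for this parametrization.
speed = sp.sqrt(sp.diff(x, t)**2 + sp.diff(y, t)**2)
assert sp.simplify(speed) == 1

# Concavity: d^2y/dx^2 = d/dt(dy/dx) divided by dx/dt.
concavity = sp.diff(slope, t) / sp.diff(x, t)
assert sp.simplify(concavity.subs(t, sp.pi/4)) == -2*sp.sqrt(2)
```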

Lecture 3: Integral calculus of parametric curves; polar coordinates

We began class today by computing the arc length of a parametric curve, essentially by recalling that one computes distance traveled by integrating speed. We then talked about how one can compute the area under a parametric curve, later applying this technique to find area between two curves and even the area bounded by a simple closed curve. We finished class by introducing polar coordinates. We discussed how one converts between polar and rectangular coordinates, and we commented that a given point in the plane has an infinite number of polar coordinate representations. We introduced the notion of a polar function, and we graphed the function $r(\theta) = \sin(\theta)$.

Lecture 4: Differential calculus of polar functions

In class today we focused our attention on polar functions. After reviewing how to sketch polar curves, we saw that certain polar equations can be transformed into equations that involve rectangular coordinates. We also saw that any polar function can be thought of as a parametric function (with parameter $\theta$). Namely, since $x = r\cos(\theta)$ and $y = r \sin(\theta)$ are formulas for converting polar coordinates into rectangular ones, and since a polar function tells us that the radial value $r$ is equal to some function of the angle value $\theta$ --- say $r = f(\theta)$ --- we have that the polar function is given by the parametric function $$(f(\theta)\cos(\theta),f(\theta)\sin(\theta)).$$ This is enormously helpful because it means all the techniques we've developed for answering questions about the differential calculus of parametric curves (slopes of tangents, concavity, speed, arc length) apply to polar functions as well, without our having to invent some new theoretical apparatus.

Lecture 5: Area bounded by polar functions

Having discussed the differential calculus of polar functions last class, this time we settled the question of computing the area bounded by a polar function. We saw that the underlying geometry of this question is different from the question of finding the area bounded by some rectangular function. On the other hand, the method for computing areas bounded by rectangular functions could be adapted to this situation (once we made the appropriate conversion into the language of polar coordinates; for example, instead of approximating area using rectangles, we approximate area using segments of circles). We saw that the area bounded by a polar curve $r = f(\theta)$ over the interval $[\alpha,\beta]$ is given by $$\int_{\theta = \alpha}^\beta \frac{1}{2}\left(f(\theta)\right)^2~d\theta.$$ We used this formula to compute area for a few examples.
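As a sanity check of the area formula (a sympy sketch, not part of the course materials): the curve $r = \sin(\theta)$ from Lecture 3 is a circle of radius $1/2$, so the formula should return $\pi(1/2)^2 = \pi/4$.

```python
import sympy as sp

theta = sp.symbols('theta')
f = sp.sin(theta)   # the polar curve r = sin(theta), a circle of radius 1/2

# Area enclosed: integrate (1/2) f(theta)^2 as theta runs over [0, pi].
area = sp.integrate(sp.Rational(1, 2) * f**2, (theta, 0, sp.pi))
assert area == sp.pi/4   # matches pi * (1/2)^2
```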

Lecture 6: Vectors

Today we introduced the notion of vectors in $\mathbb{R}^2$ and $\mathbb{R}^3$. We said that vectors have a direction and a magnitude, and we gave a formula for computing the latter. We also said that one can create new vectors from old vectors via the operations of vector addition and real scaling. From this we could also define vector subtraction, and we could also create a unit vector that points in the direction of a given vector $\mathbf{v}$. Most of the class was spent listing off various algebraic or geometric properties that vectors enjoy. We finished class by seeing some applications of vectors to the physics of force.

Lecture 7: Dot products

Today we defined an operation that allows you to take the "product" of two vectors in a given $\mathbb{R}^n$. The result of this "product" is a real number. The operation is the dot product, defined for $\mathbf{v} = \langle v_1,v_2,\cdots,v_n\rangle$ and $\mathbf{w} = \langle w_1,w_2,\cdots,w_n\rangle$ by $$\mathbf{v}\cdot\mathbf{w} = v_1w_1+\cdots+v_nw_n.$$ We saw in class that this number carries significant geometric meaning. Specifically, we showed that $\mathbf{v} \cdot \mathbf{v} = \|\mathbf{v}\|^2$ and that $\mathbf{v} \cdot \mathbf{w} = \|\mathbf{v}\|\|\mathbf{w}\|\cos(\theta)$, where $\theta$ represents the angle between the two vectors $\mathbf{v}$ and $\mathbf{w}$. This allowed us to use the dot product as a kind of ``perpendicularity detector.'' We discussed some of the algebraic identities which hold for dot products. We then considered how the sign of the dot product can be used to determine whether the angle between two vectors is acute, right, or obtuse.

We then discussed how one can use dot products to calculate the projection of a vector $\mathbf{w}$ onto a vector $\mathbf{v}$. By this we mean that we want to find an equation $$\mathbf{w} = \mathbf{w}^\| + \mathbf{w}^\perp,$$ where $\mathbf{w}^\|$ is a vector in the direction (or the exact opposite direction) of $\mathbf{v}$, and $\mathbf{w}^\perp$ is a vector that is orthogonal to $\mathbf{v}$. We used the notation $\text{proj}_\mathbf{v} \mathbf{w}$ for the ``parallel part,'' and expressed it as $$\text{proj}_\mathbf{v}\mathbf{w} = \frac{\mathbf{w}\cdot \mathbf{v}}{\mathbf{v}\cdot\mathbf{v}}\mathbf{v}.$$
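A quick check that the projection formula really produces an orthogonal decomposition (a sympy sketch with arbitrarily chosen vectors, not part of the course materials):

```python
import sympy as sp

v = sp.Matrix([3, 4])
w = sp.Matrix([5, 1])

# proj_v(w) = (w . v / v . v) v, the "parallel part" of w.
w_par = (w.dot(v) / v.dot(v)) * v
w_perp = w - w_par

assert w_par + w_perp == w    # the decomposition recovers w
assert w_perp.dot(v) == 0     # the "perpendicular part" is orthogonal to v
```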

Lecture 8: Applications of dot product; Cross products

We started class by briefly discussing scalar projection, which records the signed magnitude of the projection vector; we used the notation $\text{comp}_\mathbf{v}\mathbf{w}$ for this quantity. We finished our discussion of dot products by stating that the work done by a constant force vector $\mathbf{F}$ on an object displaced along the vector $\mathbf{D}$ is given by $\mathbf{F} \cdot \mathbf{D}$.

Class finished with a very quick definition of the so-called cross product of two vectors in $\mathbb{R}^3$. We introduced the notation $\mathbf{i},\mathbf{j}$ and $\mathbf{k}$ for the vectors $\langle 1,0,0 \rangle$, $\langle 0,1,0 \rangle$ and $\langle 0,0,1\rangle$. We mentioned that you can compute cross products either by memorizing the formula, or by using a trick that revolves around the determinant of a $3\times 3$ matrix. We computed several examples, and we wrote down some geometric and algebraic identities related to the cross product. We showed, for instance, that for any vector $\mathbf{a} \in \mathbb{R}^3$ we have $\mathbf{a} \times \mathbf{a} = \mathbf{0}.$
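The identities mentioned above are easy to confirm symbolically (a sympy sketch, not part of the course materials; the vectors $\mathbf{a}$ and $\mathbf{b}$ are arbitrary choices):

```python
import sympy as sp

a = sp.Matrix([1, 2, 3])
b = sp.Matrix([4, 5, 6])
i, j, k = sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0]), sp.Matrix([0, 0, 1])

assert i.cross(j) == k                  # the standard basis identity i x j = k
assert a.cross(a) == sp.zeros(3, 1)     # a x a = 0 for any vector a
assert a.cross(b) == -b.cross(a)        # the cross product anticommutes
assert a.dot(a.cross(b)) == 0           # a x b is orthogonal to a
```

The last assertion previews the key geometric fact about $\mathbf{a}\times\mathbf{b}$: it is orthogonal to both of its factors.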

Lecture 9: Lines and planes; vector valued functions

We started class today by finishing our discussion of cross products. We wrote down a few more algebraic rules that the cross product satisfies. We also saw that the cross product fails to enjoy some algebraic properties we are used to; for instance, it is neither commutative nor associative.

In the second part of class we discussed how to write equations for lines and planes. These typically revolved around understanding a particular geometric description of the given line or plane. We gave vector forms of both lines and planes, as well as a parametric description of a line and a scalar form of a plane.

We finished class by combining the two topics we've focused on so far this semester: parametric functions and vectors. The result is what we called a vector-valued function, which is simply a function $\mathbf{r}$ whose domain is $\mathbb{R}$ and whose codomain is some $\mathbb{R}^n$. We discussed some of the basic terminology that surrounds these functions.

Lecture 10: Differential calculus of vector-valued functions

We started class by defining the derivative of a vector-valued function $\mathbf{r}(t)$. We saw that, geometrically, the quantity $$\frac{\mathbf{r}(t_1)-\mathbf{r}(t_0)}{t_1-t_0}$$ gives us something akin to the slope of a secant line from standard differential calculus (in this context you might reasonably call it a secant vector). Hence we defined the derivative $\mathbf{r}'(t_0)$ to be the limit of this process: $$\mathbf{r}'(t_0) = \lim_{t_1 \to t_0} \frac{\mathbf{r}(t_1)-\mathbf{r}(t_0)}{t_1-t_0}.$$ We saw that this derivative can be computed by simply taking derivatives of the component functions (which only requires us to compute the kinds of derivatives we're used to from standard single-variable differential calculus). We saw that one can use this to create a unit tangent function $$\mathbf{T}(t) = \frac{\mathbf{r}'(t)}{\|\mathbf{r}'(t)\|}$$ (which captures only the direction of the tangent vector), and that $\|\mathbf{r}'(t)\|$ is best interpreted as the speed of a particle moving along the curve according to the parameterization $\mathbf{r}(t)$. From this we said that the distance a particle moves along the curve from $t = \alpha$ to $t = \beta$ is simply $$\int_{t = \alpha}^\beta \|\mathbf{r}'(t)\|~dt.$$ Critically, we observed that this quantity is independent of the parameterization of the arc whose length is being computed. We also observed that some of the standard rules of differentiation apply in this context; for instance, we saw that there are three kinds of "product rule" for vector-valued functions, and they all have the same basic structure as the old-fashioned product rule.
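Here is the whole pipeline (componentwise derivative, unit tangent, speed, arc length) run on a helix, a standard example; this is a sympy sketch, not part of the course materials:

```python
import sympy as sp

t = sp.symbols('t', real=True)
r = sp.Matrix([sp.cos(t), sp.sin(t), t])   # a helix

# r'(t) is computed componentwise; its norm is the speed.
rp = r.diff(t)
speed = sp.simplify(rp.norm())
assert speed == sp.sqrt(2)

# The unit tangent T(t) = r'(t)/|r'(t)| always has length 1.
T = rp / speed
assert sp.simplify(T.norm()) == 1

# Distance traveled from t = 0 to t = 2*pi: integrate the speed.
assert sp.integrate(speed, (t, 0, 2*sp.pi)) == 2*sp.sqrt(2)*sp.pi
```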

At the end of class we defined a notion of the "curviness" of a parametric curve by defining the curvature $$\kappa(t) = \left\|\frac{d\mathbf{T}}{ds}\right\|.$$ Geometrically this is meant to represent the magnitude of the vector which records the change in $\mathbf{T}$ after making a small step along the curve. We've set it up as the magnitude of the change in the unit tangent given a small change in arc length, because we'd like the curvature to be independent of the speed at which we're traversing the curve. The benefit of this definition is that it makes geometric sense; the downside is that it's impractical for computations. We'll remedy this in class tomorrow.

Lecture 11: The geometry of vector-valued functions

Today we mostly analyzed geometric qualities of a vector-valued function $\mathbf{r}(t)$ in terms of its derivative information. For instance, we gave a more computationally-friendly method for computing curvature: $$\kappa(t) = \frac{\|\mathbf{T}'(t)\|}{\|\mathbf{r}'(t)\|}.$$ In $\mathbb{R}^3$, we said this was equivalent to $$\kappa(t) = \frac{\|\mathbf{r}'(t)\times \mathbf{r}''(t)\|}{\|\mathbf{r}'(t)\|^3},$$ a formula that can sometimes be easier to implement than the previous one.
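A good test case for the cross-product formula: a circle of radius $R$ should have constant curvature $1/R$ (small circles are "curvier" than big ones). Checking this symbolically (a sympy sketch, not part of the course materials):

```python
import sympy as sp

t = sp.symbols('t', real=True)
R = sp.symbols('R', positive=True)
r = sp.Matrix([R*sp.cos(t), R*sp.sin(t), 0])   # circle of radius R in the xy-plane

rp, rpp = r.diff(t), r.diff(t, 2)

# kappa = |r' x r''| / |r'|^3 -- for this circle it should be exactly 1/R.
kappa = sp.simplify(rp.cross(rpp).norm() / rp.norm()**3)
assert sp.simplify(kappa - 1/R) == 0
```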

Next we discussed the notion of vectors orthogonal to the curve $\mathbf{r}(t)$ at a point $\mathbf{r}(t_0)$. In fact, we said there is a whole plane's worth of vectors orthogonal to the curve at a given point, and we wanted a way to find a "special" vector orthogonal to the curve. We then stated that $$\mathbf{N}(t) = \frac{\mathbf{T}'(t)}{\|\mathbf{T}'(t)\|}$$ is a unit vector orthogonal to the curve (i.e., a unit vector orthogonal to $\mathbf{T}(t)$); we called this the principal unit normal vector. We checked that it was orthogonal to $\mathbf{T}(t)$ by taking a derivative of the identity $\mathbf{T}(t)\cdot\mathbf{T}(t) = 1$. We then finished class by saying that one can then use the unit tangent and principal normal to create the binormal vector $$\mathbf{B}(t) = \mathbf{T}(t) \times \mathbf{N}(t).$$ These three vectors ($\mathbf{T}(t_0),\mathbf{N}(t_0),\mathbf{B}(t_0)$) then create the "frame" for the curve at a point $\mathbf{r}(t_0)$ on the curve. (We stated, but didn't show, that this information together with a little physics can be used to derive Kepler's laws of planetary motion. Cool!)

Lecture 12: Functions of two variables

We started class by briefly mentioning definite integrals for vector-valued functions. Most of the class, however, was spent introducing the notion of a function of two variables. We gave a few examples and their associated graphs. Since graphing a function of two variables can be difficult, we developed the machinery of level curves to give a 2-dimensional visualization for the graph of a function of two variables (which is a surface in $\mathbb{R}^3$). We finished class by discussing the notion of limits for these kinds of functions. We said that limits are far more complicated for functions of 2 variables since there are many paths along which one can "approach" a limiting point $(a,b)$.

Lecture 13: Limits for functions of two variables

Today we delved into the question of limits for functions of more than one variable. We saw examples of functions which do not have a limit at a certain point, as well as a few examples of functions that do have limits. [Limits form the theoretical underpinning for the discussion of derivatives for functions of many variables, something we'll cover next class period.]

Lecture 14: Derivatives

For a function $z = f(x,y)$, we defined the partial derivatives with respect to $x$ and $y$. We saw that computing these partial derivatives amounts to using the familiar rules from single variable calculus, treating the other variables as constants. We saw that for a particular function, the two second derivatives $f_{xy}$ and $f_{yx}$ were identical; Clairaut's theorem says that this holds far more generally. We thought about what these derivatives mean graphically, and in particular how we could interpret them in the language of vectors tangent to the graph of the function. From this we naturally built the tangent plane to the graph of the function: $$f_x(a,b)(x-a)+f_y(a,b)(y-b)-(z-f(a,b)) = 0.$$ We defined a function to be differentiable if the tangent plane is a good approximation to the function. We used this to give a notion of linearization for the function, which we stated as $$\Delta z \approx f_x(a,b) \Delta x + f_y(a,b) \Delta y.$$
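Both Clairaut's theorem and the tangent plane formula are easy to check on specific functions. Below is a sympy sketch (not part of the course materials; the functions $f$ and $g$ and the point $(1,2)$ are illustrative choices):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Clairaut's theorem: the mixed partials f_xy and f_yx agree.
f = x**3 * sp.sin(y) + sp.exp(x*y)
assert sp.simplify(sp.diff(f, x, y) - sp.diff(f, y, x)) == 0

# Tangent plane to z = g(x,y) at (a,b):
#   z = g(a,b) + g_x(a,b)(x - a) + g_y(a,b)(y - b)
g = x**2 + y**2
a, b = 1, 2
plane = g.subs({x: a, y: b}) \
        + sp.diff(g, x).subs({x: a, y: b})*(x - a) \
        + sp.diff(g, y).subs({x: a, y: b})*(y - b)
assert sp.expand(plane) == 2*x + 4*y - 5
```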

We finished class by introducing the chain rule for functions of more than one variable. In particular, we saw the version of the chain rule that states that if $z = f(x,y)$, where both $x$ and $y$ are functions of a single variable $t$, then $z$ is also a function of $t$, and we have $$\frac{dz}{dt} = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt}.$$

Lecture 15: Chain rule; directional derivatives

Today we continued our exploration of the chain rule. In addition to doing some sample problems which used the version of the chain rule from last class, we also discussed how one can modify this idea to encode derivatives (or partial derivatives) when there are many independent variables, or even multiple layers of intermediate variables. Ultimately the form for a (partial) derivative depends on the number of independent and intermediate variables, but one can draw a dependency tree amongst variables to resolve this issue. One situation that we deal with frequently is when $z = f(x,y)$ and $x$ and $y$ both depend on a single variable $t$. Then we get $$\frac{dz}{dt} = \frac{\partial z}{\partial x}\frac{dx}{dt} + \frac{\partial z}{\partial y}\frac{dy}{dt}.$$ When the variables $x$ and $y$ depend on more than one variable $s$ and $t$, we have $$\frac{\partial z}{\partial t} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial t}$$ $$\frac{\partial z}{\partial s} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial s} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial s}.$$
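For a concrete function one can verify that the chain rule agrees with substituting first and then differentiating. A sympy sketch (not part of the course materials; the choices of $f$, $x(t)$, and $y(t)$ are mine):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = x**2 * y                     # z = f(x, y)
xt, yt = sp.cos(t), sp.sin(t)    # x and y each depend on the single variable t

# Chain rule: dz/dt = f_x dx/dt + f_y dy/dt, evaluated along (x(t), y(t)).
chain = (sp.diff(f, x)*sp.diff(xt, t)
         + sp.diff(f, y)*sp.diff(yt, t)).subs({x: xt, y: yt})

# Direct route: substitute first, then differentiate in t.
direct = sp.diff(f.subs({x: xt, y: yt}), t)
assert sp.simplify(chain - direct) == 0
```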

Up to this point we've talked about how a function $z=f(x,y)$ depends on changes in $x$ or $y$. However, for a function whose inputs live in two-dimensional space, there are many other rates of change a person might be interested in. For instance, how does a function change if we move in a direction parallel to some unit vector $\mathbf{u}$? To answer these questions, we defined the notion of a directional derivative. Though the definition is perfectly analogous to the definition of derivative we've seen many times before, for differentiable functions we argued that if $\mathbf{u}$ is a unit vector, then one can compute the directional derivative as $$D_\mathbf{u}f = \left\langle\frac{\partial f}{\partial x},\frac{\partial f}{\partial y}\right\rangle \cdot \mathbf{u}.$$ This representation for the directional derivative makes it clear that the vector $\langle\frac{\partial f}{\partial x},\frac{\partial f}{\partial y}\rangle$ is important. We called this the gradient of $f$ (written $\nabla f$).
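Computing a directional derivative via $\nabla f \cdot \mathbf{u}$ takes one line. A sympy sketch (not part of the course materials; the function, unit vector, and point are illustrative choices):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + x*y

grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])       # the gradient of f
u = sp.Matrix([sp.Rational(3, 5), sp.Rational(4, 5)])  # a unit vector
assert u.norm() == 1

# D_u f = grad(f) . u, evaluated at the point (1, 2).
Duf = grad.dot(u).subs({x: 1, y: 2})
assert Duf == sp.Rational(16, 5)
```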

Lecture 16: Geometric interpretations for $\nabla f$

In today's class we took advantage of the formula $D_{\mathbf{u}}f = \nabla f \cdot \mathbf{u}$ to give an interpretation of the gradient vector as the "direction of steepest ascent." This unlocks a lot of important qualities. For instance: the maximum value for $D_{\mathbf{u}}f$ (when considered over all possible directions $\mathbf{u}$) is $\|\nabla f\|$. Furthermore, $D_\mathbf{u}f$ is precisely $0$ when $\mathbf{u}$ is tangent to a level curve or level surface; hence when one moves perpendicular to the gradient, one remains (at least instantaneously) on a fixed level curve. We were able to use this result to give a quick way for computing the tangent plane to a surface in which $z$ is implicitly --- though not explicitly! --- dependent on the parameters $x$ and $y$.

Lecture 17: Extrema of functions

Today we asked the question: for a function $f(x,y)$, how can we go about finding places where $f(x,y)$ is maximized or minimized? We argued that when $f$ has partial derivatives, local maxima and minima can only occur at points where $\nabla f = \langle 0,0 \rangle$. Based on this we defined the notion of a critical point, and wrote down a theorem that says that local maxima and minima occur at critical points. To determine whether a critical point is a bona fide extreme value, we gave the higher-dimensional analog of the second derivative test.
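The whole procedure (solve $\nabla f = \langle 0,0\rangle$, then apply the second derivative test with $D = f_{xx}f_{yy} - (f_{xy})^2$) can be run symbolically. A sympy sketch (not part of the course materials; the function is an illustrative choice):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - 3*x + y**2

# Critical points: solve grad f = <0, 0>.
crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])
assert set(crit) == {(1, 0), (-1, 0)}

# Second derivative test: D = f_xx f_yy - (f_xy)^2.
D = sp.diff(f, x, 2)*sp.diff(f, y, 2) - sp.diff(f, x, y)**2
assert D.subs({x: 1, y: 0}) == 12      # D > 0 and f_xx = 6 > 0: local minimum
assert D.subs({x: -1, y: 0}) == -12    # D < 0: saddle point
```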

Lecture 18: Global extremes on closed domains

We started class by observing that not all functions have global extreme values. We stated the extreme value theorem, which says that continuous functions on closed, bounded sets must take on extreme values. We considered how one can find the global maxima and minima of a function on a closed, bounded set. The idea was to split the hunt into two pieces: first looking for critical points on the interior of the set, and then finding maxima and minima along the boundary. Finding extreme values on the boundary is a somewhat easier problem to solve, since we are finding extreme values of a function on a set of points that is $1$ dimension smaller than the original domain. We carefully worked through an example in class where we optimized a function whose domain was a square.

Lecture 19: Constrained optimization

We finished our discussion of optimization today by determining how to find maxima and minima when our domain is restricted. Whereas the optimization problem we worked on yesterday had us finding the maxima and minima of a function $f(x,y)$ on a $2$-dimensional domain, these constrained optimization problems typically require us to maximize a function $f(x,y)$ along some $1$-dimensional domain. We argued that if we want to optimize a function $f(x,y)$ subject to some constraint $g(x,y) = k$, then we should attempt to find points $(x_0,y_0)$ so that $\nabla f(x_0,y_0) = \lambda \nabla g(x_0,y_0)$ (assuming that $f$ is reasonably well-behaved and $\nabla g$ is nonzero along the constrained domain). We worked through an example of this form. We gave an interpretation for what the quantity $\lambda$ represents. We also discussed how we could use this method when attempting to solve a constrained optimization problem that was subject to more than one constraint.
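The Lagrange condition $\nabla f = \lambda \nabla g$ together with the constraint is just a system of equations, which a computer algebra system can solve directly. A sympy sketch (not part of the course materials; the objective and constraint are illustrative choices):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x*y            # objective function
g = x + y - 4      # constraint g(x, y) = 0, i.e. x + y = 4

# Lagrange condition grad f = lambda * grad g, together with the constraint.
eqs = [sp.diff(f, x) - lam*sp.diff(g, x),
       sp.diff(f, y) - lam*sp.diff(g, y),
       g]
sol = sp.solve(eqs, [x, y, lam], dict=True)
assert sol == [{x: 2, y: 2, lam: 2}]   # the constrained maximum is at (2, 2)
```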

Lecture 20: Integrals for functions of 2 variables; Fubini's theorem

Today in class we considered the question of how one calculates the volume captured underneath a surface $z = f(x,y)$ and above a rectangular region $R = [a,b]\times [c,d]$ in the domain. Our idea was to use Riemann sums to calculate this volume. Fortunately, the geometric intuition which makes Riemann sums so convenient for producing approximations to areas under curves of the form $y = f(x)$ carries over into this higher dimensional setting. As one might expect, to evaluate a volume without any approximation error, one must take the limit of these Riemann sums. Unfortunately, carrying out the algebra associated to this process is even more cumbersome than it is in the single-variable case (where it was already moderately painful).

There is another way to view the volume captured by such a surface, and that is to calculate it by "slicing" the volume into cross sections. A result from single variable integration tells us that if we can produce a formula for the cross-sectional area, then integrating this expression will give us an expression for volume. This allows us to express integrals over 2-dimensional domains in terms of (so-called) iterated integrals, a result known as Fubini's theorem. We finished class by determining how one can calculate 2-dimensional integrals when there exist functions $g_1(x),g_2(x)$ so that the domain of integration $D$ is of the form $\{(x,y): a \leq x \leq b \quad \text{ and } \quad g_1(x) \leq y \leq g_2(x)\}$. Our idea was to extend a function $f(x,y)$ on a non-rectangular domain to a function $F(x,y)$ on an encapsulating rectangle $R$. We did this in such a way that the new function $F$ agrees with $f$ on $D$, but is zero elsewhere. We saw that in this way we got $$\iint_D f(x,y)~dA = \iint_R F(x,y)~dA = \int_{x=a}^b \int_{y=g_1(x)}^{g_2(x)} f(x,y)~dy~dx.$$
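Evaluating such an iterated integral means integrating in $y$ first (with bounds depending on $x$), then in $x$. A sympy sketch (not part of the course materials; the integrand and region are illustrative choices):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Integrate f(x, y) = x over the region 0 <= x <= 1, 0 <= y <= x:
# the inner integral in y has x-dependent bounds, then integrate in x.
inner = sp.integrate(x, (y, 0, x))          # equals x^2
result = sp.integrate(inner, (x, 0, 1))
assert result == sp.Rational(1, 3)
```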

Lecture 21: Type I and Type II domains

Today's class was spent entirely in thinking about how one sets up and evaluates integrals over 2-dimensional domains. We saw examples which couldn't be evaluated without changing the order of integration.

Lecture 22: More 2-d integrals; Introduction to integration in polar coordinates

We computed a number of problems today related to 2-d integrals. These problems involved somewhat extensive setup: they asked us to compute the volume bounded by certain surfaces, and we were responsible for determining how these volumes were related to the kinds of integrals we discussed in class. At the end of class we gave an example which was most naturally evaluated in the language of polar coordinates. We considered how one makes such a substitution, and we said (though didn't justify) that the differential term $dA$ in polar coordinates is $r\ dr d\theta$.

Lecture 23: Integrating in polar coordinates

We started class by giving an explanation for why the usual differential $dx\,dy$ is replaced by $r\,dr\,d\theta$ when integrating in polar coordinates. We then did a number of examples to showcase the process one follows to integrate in polar coordinates.

Lecture 24: Applications of 2-d integrals; Introduction to 3-d integrals

We started class today by discussing a few physical applications of 2d integrals, focusing mostly on how one can use integrals to calculate the mass of a 2-dimensional lamina. We saw how similar integrals can be used to calculate various moments, center of mass, and moments of inertia for such an object. Using the question of computing mass as our motivation, we then discussed how one would compute the mass of a 3-dimensional solid. Following the intuition from the 2-d setup, we were able to estimate the mass of a solid volume using a Riemann sum, and hence saw that one could use a 3-d integral to compute mass precisely. We saw how one can use (the 3-d analog of) Fubini's theorem to evaluate such an integral as an iterated integral.

Lecture 25: More on 3-d integrals

We started class by observing that many of the physical applications we saw for 2-d integration have analogues for 3-d integrals: mass (which we saw last time), moments, center of mass, and even moments of inertia (though we didn't write this latter application down). We then motivated the question of how one changes variables for functions of two variables by looking at $$\iint_R \frac{e^{x-y}}{x+y}~dA.$$ We saw this integral has a domain that requires some dissection in order to be integrated either as type I or II, but --- more seriously --- has an integrand whose antiderivative can't be computed when the integral is viewed as either type I or type II.

Lecture 26: Change of variables and Jacobians

We spent the class discussing how one would go about doing a "change of variables" when converting the usual $x,y$-coordinates into some other $u,v$-coordinate system. Our motivation for this question was the realization that some integrals are just very hard to calculate: either because the domain of integration is difficult to describe as type I or II (or type 1, 2, or 3 in the 3-d context), or because the integrand itself doesn't have a nice anti-derivative. We analyzed the theory behind 2d change of variables, and saw that the differential term changes by a factor of a so-called Jacobian. We saw that the change of variables given by polar coordinates has a Jacobian equal to $r$; this agrees with earlier computations we performed where we had to substitute $rdrd\theta$ in place of $dxdy$.
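The claim that the polar change of variables has Jacobian $r$ can be verified in a couple of lines (a sympy sketch, not part of the course materials):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# x and y written in terms of the polar coordinates (r, theta).
X = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta)])

# The determinant of the Jacobian matrix of the change of variables is r.
J = X.jacobian([r, theta]).det()
assert sp.simplify(J) == r
```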

Lecture 27: 3d change of variables; spherical coordinates

We translated the notion of change of variables into the 3d setting today. As before, the tricky part is determining how to change the differential term. In this case, we saw that $$dV = \left| \frac{\partial(x,y,z)}{\partial(u,v,w)}\right|~du~dv~dw = \left|\det\left( \begin{array}{ccc} \frac{\partial x}{\partial u}&\frac{\partial x}{\partial v}&\frac{\partial x}{\partial w}\\ \frac{\partial y}{\partial u}&\frac{\partial y}{\partial v}&\frac{\partial y}{\partial w}\\ \frac{\partial z}{\partial u}&\frac{\partial z}{\partial v}&\frac{\partial z}{\partial w}\\ \end{array}\right)\right|~du~dv~dw.$$

We finished class by talking about the 3d coordinate system known as spherical coordinates. In this case, $x,y$ and $z$ are replaced with $\rho,\phi$ and $\theta$, subject to \begin{align*} x&=\rho \sin\phi \cos\theta\\ y&=\rho \sin\phi \sin\theta\\ z&=\rho \cos\phi. \end{align*}

Lecture 28: More on spherical coordinates

Today we computed the Jacobian for the change of coordinates into spherical coordinates, and we saw an example calculation that uses spherical coordinates to answer a problem concerning the mass of a solid inside the sphere $x^2+y^2+z^2 = 2z$ and outside the sphere $x^2 + y^2 + z^2 = 1$.
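The spherical Jacobian computation from class can be reproduced symbolically; the determinant should come out to $\rho^2\sin\phi$ (a sympy sketch, not part of the course materials):

```python
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta', positive=True)

# x, y, z in terms of the spherical coordinates (rho, phi, theta).
X = sp.Matrix([rho*sp.sin(phi)*sp.cos(theta),
               rho*sp.sin(phi)*sp.sin(theta),
               rho*sp.cos(phi)])

# The Jacobian determinant for spherical coordinates is rho^2 sin(phi).
J = X.jacobian([rho, phi, theta]).det()
assert sp.simplify(J - rho**2 * sp.sin(phi)) == 0
```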

Lecture 29: Vector fields

Today we introduced the notion of a vector field, which is a function which takes in many variables and outputs vectors. We talked about how one can plot a vector field, and that there are many physical phenomena which can be thought of as vector fields (e.g., gravity or fluid flow). We remarked that we have actually seen vector fields before: the gradient $\nabla f$ of a function of many variables $f$ is a vector field. We said that a vector field $\mathbf{F}$ for which there exists some function $f$ satisfying $\mathbf{F} = \nabla f$ is called conservative, and we gave a process --- involving partial anti-differentiation and partial derivatives --- which lets us determine whether a function is conservative and, if it is, what the function $f$ is.

Lecture 30: Line integrals

Our discussion today was motivated by the following question: if you have a wire in space whose density varies according to a function $\delta(x,y,z)$, what is the mass of the wire? This led us to the definition of a line integral; if $C$ is a curve parameterized by $\mathbf{r}(t) = (x(t),y(t),z(t))$ for $a \leq t \leq b$, then we said that $$\int_C f~ds = \int_{t=a}^b f(x(t),y(t),z(t)) \sqrt{(x'(t))^2+(y'(t))^2+(z'(t))^2}~dt = \int_{t=a}^b f(\mathbf{r}(t))\|\mathbf{r}'(t)\|~dt.$$ We did several examples of line integrals and also defined related integrals: \begin{align*} \int_C f~dx &= \int_{t=a}^b f(\mathbf{r}(t))x'(t)~dt\\ \int_C f~dy &= \int_{t=a}^b f(\mathbf{r}(t))y'(t)~dt. \end{align*}We finished class by showing that if a vector field $\mathbf{F}$ acts on a particle as it traverses $C$, then the work performed by the vector field is $$\int_C \mathbf{F}\cdot \mathbf{T}~ds = \int_{t=a}^b \mathbf{F}(\mathbf{r}(t))\cdot \mathbf{r}'(t)~dt.$$
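A small worked example of the work formula (a sympy sketch, not part of the course materials; the field and curve are illustrative choices):

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Matrix([t, t**2])            # the curve C: a parabola, 0 <= t <= 1
F = lambda x, y: sp.Matrix([y, x])  # the vector field F = <y, x>

# Work = integral of F(r(t)) . r'(t) dt over [0, 1].
integrand = F(*r).dot(r.diff(t))    # equals t^2 + 2t^2 = 3t^2
work = sp.integrate(integrand, (t, 0, 1))
assert work == 1
```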

Lecture 31: Fundamental theorem of line integrals

Today we showed how one can use a potential function for a conservative vector field to evaluate an associated work integral. Specifically, if $\mathbf{F} = \nabla f$ and $C$ is parameterized by $\mathbf{r}(t)$ for $a \leq t \leq b$, then $$\int_C \mathbf{F}\cdot d\mathbf{r} = f(\mathbf{r}(b))-f(\mathbf{r}(a)).$$ This result is called the fundamental theorem of line integrals. We used this to do a few work integral calculations (e.g., the work done by gravity if a GPS satellite falls from geostationary orbit into the Pacific ocean). We briefly discussed the notion of independence of path, and we also said that a vector field is conservative if and only if the integral along any closed curve is $0$.

Lecture 32: Green's Theorem; curl and divergence

We started the class by discussing some basic vocabulary for curves in the plane. With this in hand we were able to state Green's theorem. This is an amazing result that lets one exchange a (1-dimensional) line integral for a 2-dimensional integral. Green's theorem says that for a positively oriented, piecewise smooth, simple closed curve $C$ bounding a region $D$, and for functions $P$ and $Q$ which have continuous partial derivatives on and near $D$, one has $$\int_C \langle P,Q\rangle\cdot d\mathbf{r} = \int_C P~dx + Q~dy = \iint_D \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)~dA.$$ We put this to work to evaluate a number of line integrals, and we also saw that one can use this result to find a line integral which calculates area. We finished class by mentioning that Green's Theorem even works for regions which have "holes."
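One can verify both sides of Green's theorem on a concrete example; with $P = -y$, $Q = x$, and $C$ the unit circle, each side should equal $2\pi$ (a sympy sketch, not part of the course materials):

```python
import sympy as sp

t, x, y, r, theta = sp.symbols('t x y r theta')

# C: the unit circle, positively oriented; P = -y, Q = x.
P, Q = -y, x
xt, yt = sp.cos(t), sp.sin(t)

# Left side: the line integral of P dx + Q dy around C.
line = sp.integrate(P.subs({x: xt, y: yt})*sp.diff(xt, t)
                    + Q.subs({x: xt, y: yt})*sp.diff(yt, t), (t, 0, 2*sp.pi))

# Right side: the double integral of Q_x - P_y over the unit disk (in polar).
integrand = sp.diff(Q, x) - sp.diff(P, y)   # equals 2 here
double = sp.integrate(integrand * r, (r, 0, 1), (theta, 0, 2*sp.pi))

assert line == double == 2*sp.pi
```

Incidentally, this is exactly the "line integral that calculates area" trick: the line integral computes twice the area of the disk.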

We then defined the operators curl and divergence. Curl is an operator that takes in a 3d vector field and returns another 3d vector field. We saw that the curl of a conservative vector field is identically $\mathbf{0}$. We saw that $\text{curl}(\mathbf{F})$ can be computed as $\nabla \times \mathbf{F}$. We then defined divergence, and we observed that one can compute $\text{div}(\mathbf{F})$ as $\nabla \cdot \mathbf{F}$. We observed that the divergence of the curl of a vector field is always $0$.

Lecture 33: Vector forms of Green's theorem; parametric surfaces

We started class today by showing how Green's theorem has two "vector forms" which are related to curl and divergence: \begin{align*} \int_C \mathbf{F}\cdot d\mathbf{r} &= \int_C \mathbf{F}\cdot \mathbf{T}~ds = \iint_D \text{curl}(\mathbf{F})\cdot \mathbf{k}~dA\\ \int_C \mathbf{F}\cdot\mathbf{n}~ds &= \iint_D \text{div}(\mathbf{F})~dA. \end{align*} We said that the goal for the remainder of the course is to develop the necessary machinery to give generalizations of these two results to "wobbly" 2-dimensional integrals (as opposed to 2-dimensional integrals of "flat" domains in $\mathbb{R}^2$). We laid the foundation for these generalizations by introducing the notion of a parameterized surface. We saw several examples of parameterized surfaces, including many familiar geometric objects. We also saw that we can view the graph of any function of two variables as a parameterized surface. Once we have a parametric surface, we then discussed two "calculus-style" questions about the surface. First: how do we compute a tangent plane to the surface? Second: how do we compute its surface area? We gave an answer to the first question, and started to develop the theory which would give an answer to the second question.

Lecture 34: Surface area; surface integrals

We continued our discussion of surface area today, arguing that if $S$ is a surface parameterized by $\mathbf{r}(u,v)$ over a domain $R$ in the $uv$-plane, then $$\text{area}(S) = \iint_R \|\mathbf{r}_u \times \mathbf{r}_v\|~dA.$$ We computed the surface area of a few parametric surfaces. We then used this idea to define a surface integral. These are the $2$-dimensional analogs of line integrals. We did a few calculations with surface integrals. We also defined the $2$-dimensional analog of the integral of a vector field over a curve: these are the so-called flux integrals.
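As a representative surface area computation (not necessarily the example done in class): for the sphere of radius $a$ parameterized by $\mathbf{r}(\phi,\theta) = \langle a\sin\phi\cos\theta, a\sin\phi\sin\theta, a\cos\phi\rangle$ with $0\le\phi\le\pi$ and $0\le\theta\le 2\pi$, one finds $\|\mathbf{r}_\phi \times \mathbf{r}_\theta\| = a^2\sin\phi$, so $$\text{area}(S) = \int_0^{2\pi}\int_0^{\pi} a^2\sin\phi~d\phi~d\theta = 2\pi a^2\big[-\cos\phi\big]_0^{\pi} = 4\pi a^2.$$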

Lecture 35: Flux integrals; Divergence theorem

We spent more time thinking about flux integrals today, defining them more precisely and stating the conditions one needs on a surface in order to define a flux integral. Most notably, one needs to have a unit normal $\mathbf{n}$ to the surface. The flux is defined in terms of this unit normal; note that since a(n orientable) surface has two sides, there are exactly two choices for a unit normal to the surface. We saw that if $\mathbf{n}$ is the chosen unit normal, and $S$ is parametrized by $\mathbf{r}(u,v)$ on a domain $D$ in the $uv$-plane, then $$\iint_S \mathbf{F}\cdot d\mathbf{S} = \iint_D \mathbf{F}(\mathbf{r}(u,v))\cdot \mathbf{n}\,\|\mathbf{r}_u \times \mathbf{r}_v\|~dA.$$ In particular, note that we have either $$\mathbf{n} = \frac{\mathbf{r}_u \times \mathbf{r}_v}{\|\mathbf{r}_u \times \mathbf{r}_v\|} \quad \text{ or } \quad \mathbf{n} = \frac{\mathbf{r}_v \times \mathbf{r}_u}{\|\mathbf{r}_v \times \mathbf{r}_u\|}.$$ If it is the former, then the flux integral becomes $$\iint_S \mathbf{F} \cdot d\mathbf{S} = \iint_D \mathbf{F}(\mathbf{r}(u,v)) \cdot \left(\frac{\mathbf{r}_u \times \mathbf{r}_v}{\|\mathbf{r}_u \times \mathbf{r}_v\|}\right) \|\mathbf{r}_u\times \mathbf{r}_v\|~du~dv = \iint_D \mathbf{F}(\mathbf{r}(u,v)) \cdot \left(\mathbf{r}_u \times \mathbf{r}_v\right)~du~dv.$$ We did several calculations involving flux integrals.
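Here is a simple illustrative flux computation (example mine, not necessarily from class): let $\mathbf{F} = \langle x, y, z\rangle$ and let $S$ be the unit sphere with outward unit normal. On the unit sphere the outward unit normal at a point is the position vector itself, so $\mathbf{F}\cdot\mathbf{n} = x^2+y^2+z^2 = 1$ on $S$, and therefore $$\iint_S \mathbf{F}\cdot d\mathbf{S} = \iint_S 1~dS = \text{area}(S) = 4\pi.$$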

Lecture 36: Divergence theorem; Stokes' theorem

Today we made good on producing generalizations of the vector forms of Green's theorem. The first generalization we stated was the divergence theorem. This says (under appropriate hypotheses on $\mathbf{F}$ and $S$, including that $S$ is a closed surface bounding a region $Q$ with outward pointing unit normal $\mathbf{n}$) that $$\iint_S \mathbf{F}\cdot\mathbf{n}~dS = \iiint_Q \text{div}(\mathbf{F})~dV.$$ This generalizes the second vector form of Green's theorem.
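For instance (an illustrative check, not necessarily the example from class): with $\mathbf{F} = \langle x, y, z\rangle$ and $S$ the sphere of radius $a$ bounding the ball $Q$, we have $\text{div}(\mathbf{F}) = 3$, so the divergence theorem predicts $$\iint_S \mathbf{F}\cdot\mathbf{n}~dS = \iiint_Q 3~dV = 3\cdot\frac{4}{3}\pi a^3 = 4\pi a^3.$$ Computing the flux directly, $\mathbf{F}\cdot\mathbf{n} = a$ at every point of the sphere, so the flux is $a\cdot 4\pi a^2 = 4\pi a^3$, as expected.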

We then generalized the first vector form of Green's theorem. Recall that the first vector form of Green's theorem says that for a (nice) closed curve $C$ in the plane, we have $$\int_C \mathbf{F}\cdot\mathbf{T}~ds = \iint_D \text{curl}(\tilde{\mathbf{F}})\cdot\mathbf{k}~dA,$$ where $D$ is the region bounded by $C$. The generalization of this result to $\mathbb{R}^3$ is Stokes' theorem; in this result, $S$ is a (nice) surface with unit normal $\mathbf{n}$, and $C$ is the boundary of $S$ (oriented relative to $\mathbf{n}$); one then has $$\int_C \mathbf{F}\cdot\mathbf{T}~ds = \iint_S \text{curl}(\mathbf{F})\cdot d\mathbf{S}.$$
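As an illustrative check of Stokes' theorem (example mine): take $\mathbf{F} = \langle -y, x, 0\rangle$, so that $\text{curl}(\mathbf{F}) = \langle 0, 0, 2\rangle$. Let $S$ be the unit disk in the $xy$-plane with $\mathbf{n} = \mathbf{k}$, so that $C$ is the counterclockwise unit circle $\mathbf{r}(t) = \langle \cos t, \sin t, 0\rangle$, $0\le t\le 2\pi$. Then $$\iint_S \text{curl}(\mathbf{F})\cdot d\mathbf{S} = \iint_S 2~dS = 2\pi, \quad\text{while}\quad \int_C \mathbf{F}\cdot\mathbf{T}~ds = \int_0^{2\pi} \left(\sin^2 t + \cos^2 t\right)~dt = 2\pi,$$ so the two sides agree.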

We finished the class by showing how much of the theory we discussed during the course is part of the same story. Namely, many of the theorems we discussed took on the following form: a $d$-dimensional integral whose integrand is the "post-derivative" form of some function is equal to the $(d-1)$-dimensional integral whose integrand is the same function "pre-derivative." More specifically, we said:

Theorem name | Theorem statement | "derivative" operator
Fundamental theorem of calculus | $\displaystyle \int_a^b \frac{d}{dx}\left[F(x)\right]~dx = F(b)-F(a)$ | $\frac{d}{dx}$
Fundamental theorem of line integrals | $\displaystyle \int_C \nabla(f)\cdot d\mathbf{r} = f(\mathbf{r}(b))-f(\mathbf{r}(a))$ | $\nabla$
Divergence theorem | $\displaystyle \iiint_Q \text{div}(\mathbf{F})~dV = \iint_S \mathbf{F}\cdot d\mathbf{S}$ | divergence
Stokes' theorem | $\displaystyle \iint_S \text{curl}(\mathbf{F})\cdot d\mathbf{S} = \int_C \mathbf{F}\cdot d\mathbf{r}$ | curl