Presentation Plans
You will cover a wide variety of materials during lecture and discussion sections; your constant attendance is important not only for your own understanding, but also for the understanding of others. If you want a sense of how your presentations are evaluated, please reference the presentation guidelines.
To help frame our progression, below I've listed rough outlines for the lessons which constitute our course. On average, lessons consist of two sections from the text; I expect that most lessons can be presented in 35 minutes or so. As a guide for what could possibly be covered in a given class, our policy will be that we will not complete more than 2 "new" presentations on a given day. This means on a given day a presenter from last class can finish her lecture, and at most two new students will be giving presentations.
Unit 1: Fundamentals of vector spaces
The content of this section is simply a review of the basic definitions concerning vector spaces. Students will have encountered these ideas in 206 already, although perhaps only in the case when the scalar field is $\mathbb{R}$ or $\mathbb{C}$.
-
Presenter: everyone Text sections: 1--4 Content: The basic terminology and axioms associated to vector spaces are discussed in 5-minute mini-presentations. The outline for the day is given here. Comments: The author adopts font choices that are hard to recreate on the board. We'll adopt the following substitutions: his choice of $\mathscr{F}$ for a general field will be replaced with $\mathbb{F}$, and in a similar way we'll use $\mathbb{R}, \mathbb{C}$ and $\mathbb{Z}$ for the real numbers, complex numbers, and integers (respectively). His notation for complex-coefficient polynomials in a variable $t$ (something approximately like $\mathscr{P}$) will instead be replaced with the standard notation $\mathbb{C}[t]$; similar notation applies if we want polynomials to have coefficients from a field $\mathbb{F}$. Unfortunately there isn't standard notation for the set of polynomials of degree at most $n-1$ (the author's $\mathscr{P}_n$), so we'll invent our own: $\mathbb{C}[t]_{< n}$ seems to be appropriately suggestive when coefficients are drawn from the complex numbers. (A small worked example of this notation appears after this unit's outline.) -
Presenter: Jennifer Text sections: 5--6 Content: The fundamentals of linear combinations and independence are discussed. Comments: Note that the text is a bit subtle in its indication of the finiteness of particular sums; one such appearance, for instance, is in the definition of linear dependence. The presenter should make sure to address this issue. -
Presenter: Sappha Text sections: 7--8 Content: The notion of basis is introduced, as is an important invariance property of bases (namely, the number of vectors in a basis). Note that this is the section in which vector spaces are separated into two camps: finite- and infinite-dimensional. Comments: -
Presenter: Audrey Text sections: 9--10 Content: In this presentation we learn what it means for two vector spaces to be "the same," and argue that finite-dimensional vector spaces over a field $\mathbb{F}$ can be fairly easily catalogued. In the latter half the important notion of subspace is introduced. Comments:
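As promised above, here is a quick illustration of the polynomial notation introduced in the first presentation (this just spells out our invented convention, nothing more): $$\mathbb{C}[t]_{<n} = \{a_0 + a_1 t + \cdots + a_{n-1}t^{n-1} : a_0,\cdots,a_{n-1} \in \mathbb{C}\},$$ so, for example, $\mathbb{C}[t]_{<3}$ consists of the polynomials of degree at most $2$ and is a $3$-dimensional vector space with basis $1, t, t^2$.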
Unit 2: Canonical constructions on vector spaces
In this section the recurring theme is to provide a host of ways to construct new vector spaces out of old vector spaces. For each of these constructions we will ask some fundamental questions about their underlying structure (e.g., what is the dimension), and we will also explore the interrelationships between these various constructions.
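For orientation (these are facts the presentations below will establish; I record them here only as a preview), each of the constructions in this unit has a predictable dimension when the spaces involved are finite-dimensional: $$\dim(U \oplus W) = \dim U + \dim W, \qquad \dim(V/M) = \dim V - \dim M, \qquad \dim V' = \dim V.$$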
-
Presenter: Marissa Text sections: 11--12 Content: For the first part of this presentation, the theme is to ask for "extreme" subspaces of a vector space $V$ relative to another collection of subspaces. For example, if we have a collection of subspaces $W_1,\cdots,W_k$ within $V$, then what is the largest subspace contained in all of them? What is the smallest subspace that contains them all? In the latter half of this presentation, we show that dimension satisfies an inequality that is compatible with set containment. Comments: -
Presenter: Madie Text sections: 18--19 Content: One of the natural ways to put two vector spaces together is via direct sum. One benefit of this construction is that the vector spaces don't need to be contained in an ambient vector space (though they do need to be vector spaces over the same field), contrary to our construction of sums of subspaces. The relationship between "sums" and "direct sums" is explored, as is the dimension of a direct sum of vector spaces. Comments: -
Presenter: Maggie Text sections: 21--22 Content: In much the same way that one defines quotient groups via cosets of a subgroup within some ambient group, we define a construction of a quotient space via cosets of some subspace within a vector space. The dimension of a quotient space is determined. Comments: -
Presenter: Han Text sections: 13--14 Content: If $V$ is a vector space over $\mathbb{F}$, then a certain subset of functions with domain $V$ and codomain $\mathbb{F}$ plays an important role. This is the dual space. In this presentation we are also introduced to the bracket notation, which is simply a way to encode the fact that one can "pair" an element from a vector space and an element from its dual to recover an element from the scalar field $\mathbb{F}$. Dual spaces and brackets will be revisited many times over in this course. Comments: -
Presenter: Amy Text sections: 15--16 Content: The dimension of a vector space's dual is discussed, as well as the notion of the dual space of the dual space of a vector space. We find that a finite-dimensional vector space $V$ is isomorphic to its dual $V'$, but not canonically. On the other hand, there is a natural isomorphism between $V$ and $(V')'$. Comments: -
Presenter: Olivia Text sections: 17, 20 Content: A process is described that allows us to define a subspace of $V'$ for a given collection of elements in $V$. We analyze how the construction of "annihilator" interrelates with the constructions of "dual" and "direct sum". Comments: -
Presenter: Laura Text sections: 23, 29 (first 3 paragraphs) Content: Duals of direct sums have already been considered. The construction of direct sum, however, offers another way in which one might hope for a function on a direct sum to be "linear," namely to be linear in each coordinate. This gives rise to the notion of bilinear forms, which are (as in the case of duals) a new vector space construction. Comments: -
Presenter: Carla Text sections: 29 (fourth paragraph and on), 30 (through Theorem 3) Content: We introduce a method by which the symmetric group $S_k$ can act on $k$-linear forms on a vector space; the action simply uses the permutation to shuffle the positions of inputs. That is to say, if $\pi \in S_k$ is given and $w$ is a $k$-linear form, then we get a new $k$-linear form $\pi w$ by letting $\pi w$ act on $(v_1,\cdots,v_k)$ by $$(\pi w)(v_1,\cdots,v_k) = w(v_{\pi(1)},\cdots,v_{\pi(k)}).$$ Under this action we can identify forms which satisfy particular conditions. For example, we say that a $k$-linear form $w$ is symmetric if $\pi w = w$ for all $\pi \in S_k$, and we say $w$ is alternating if $\pi w = (\text{sgn}(\pi)) w$. Comments: -
Presenter: Audrey Text sections: 30 (Theorem 4), 31 Content: Today we work towards proving the following result: if $V$ is an $n$-dimensional vector space, then the space of alternating $n$-linear forms is $1$-dimensional. Though it's not immediately clear why this is useful, we will see when we introduce determinants that this is a theoretical tool that makes the construction of determinants quite simple. Comments:
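To make the group action from Carla's presentation concrete, here is the smallest interesting case (a quick sketch using only the definition above): take $k = 2$ and let $\pi = (1\,2)$ be the nonidentity element of $S_2$, so that $$(\pi w)(v_1,v_2) = w(v_2,v_1).$$ A bilinear form $w$ is then symmetric exactly when $w(v_1,v_2) = w(v_2,v_1)$ for all inputs, and (since $\text{sgn}(\pi) = -1$) alternating exactly when $w(v_1,v_2) = -w(v_2,v_1)$.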
Unit 3: Fundamentals of transformations
With vector spaces analyzed in depth, we're now prepared to take a more nuanced look at relationships between vector spaces. This is done by studying functions between vector spaces; since this is a linear algebra class, though, the class of functions we look at needs to be suitably "linear" in order to have a hope of being meaningful. The next few sections introduce the appropriate functions (deemed linear transformations) and start to uncover some of their basic properties.
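Before diving into the outlines below, it may help to record the defining property these presentations will work from: the "suitably linear" functions in question are the maps $A$ on $V$ satisfying $$A(\alpha x + \beta y) = \alpha Ax + \beta Ay$$ for all vectors $x, y \in V$ and all scalars $\alpha, \beta$; the presentations in this unit make this precise and develop its consequences.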
-
Presenter: Jennifer Text sections: 32--34 Content: Linear transformations are defined, and a handful of standard examples are given. It is shown that the set of linear transformations on a vector space $V$ is itself a vector space. Since linear transformations are functions from a vector space $V$ to itself, it's possible to compose two linear transformations; the resultant function is again linear. Algebraic properties of composition are discussed. Comments: -
Presenter: Sappha Text sections: 36, 44 Content: In this presentation two constructions are discussed that produce a new linear transformation from a given transformation $A$. First, if $A$ is an isomorphism, it admits a related linear transformation known as its inverse $A^{-1}$. Algebraic properties of inverses are covered. Second, a construction is given that produces a linear transformation on $V'$ which is "dual" to a given linear transformation $A$ on $V$; this is the adjoint transformation. Comments: -
Presenter: Marissa Text sections: 49, 50 (through the proof of Theorem 1) Content: For a given linear transformation $A$ on a vector space $V$, we identify two critical subspaces of $V$ related to $A$: the range and the null-space. The relationship between these subspaces for a transformation $A$ and its adjoint $A'$ (via annihilators) is determined, and this is used to prove the celebrated rank-nullity theorem. Comments: -
Presenter: Madie Text sections: 37, 38 Content: A linear transformation $A$ is defined in terms of its action on a vector space $V$. If, however, we endow $V$ with a prescribed basis $\mathcal{X}$, then it becomes possible to represent the action of $A$ in terms of its action on the basis $\mathcal{X}$. The instrument which does this accounting is the matrix associated to $A$, which our book denotes $[A]$ (though perhaps it should be expressed as $[A]_{\mathcal{X}}$). The basic properties of this assignment, including how it behaves under the arithmetic operations for linear transformations, are discussed. Comments: -
Presenter: Maggie Text sections: 46--47 Content: With matrix representations in hand, there are natural questions which arise from the process of changing the underlying basis of a space. For example, for a given vector $v \in V$ and bases $\mathscr{X} = \{x_1,\cdots,x_n\}$ and $\mathscr{Y} = \{y_1,\cdots,y_n\}$, what is the relationship between the $\mathscr{X}$-coordinates and $\mathscr{Y}$-coordinates of $v$? If instead $A$ is a linear transformation on $V$, then how are the matrix representations $[A]_{\mathscr{X}}$ and $[A]_{\mathscr{Y}}$ related? Comments: -
Presenter: Han Text sections: 39--40 Content: Now that we know how to represent a linear transformation as a matrix, one question to ask is whether there are bases under which the transformation's matrix representation is particularly "nice." As a first answer to this question we study invariant subspaces, as well as the reducibility of a linear transformation. Intuitively, asking whether a transformation is reducible is the same as asking whether it can be thought of as the "direct sum" of transformations on complementary subspaces of the domain. Comments: -
Presenter: Amy Text sections: 41--42 Content: We know what it means for a linear transformation to be reducible, but what are the simplest possible reducible transformations? This question is a bit vague, but here's one reasonable answer to the question: we know that a transformation on $V$ is reducible if there are invariant subspaces $M$ and $N$ of $V$ so that $V = M\oplus N$; the simplest reducible transformation along $(M,N)$ would be one that acts as the identity on one component and the zero transformation on the other. Transformations of this sort are called projections, and they are a useful class of objects to understand deeply. Projections and some of their basic arithmetic properties are explored in this presentation. Comments: -
Presenter: Olivia Text sections: 43, 45 Content: In this presentation we explore how the notion of projection interacts with concepts we've already discussed. For example, we explore how projections along a subspace $M$ can be used to determine whether $M$ is invariant under a transformation on $V$; there is a similar result that allows one to use projections to identify reducibility along a vector space decomposition $M \oplus N$. We also examine how projections behave under taking adjoints. Comments:
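A minimal concrete instance of the projections discussed in the last two presentations (recorded here just for intuition): if $V = M \oplus N$, then every $v \in V$ can be written uniquely as $v = m + n$ with $m \in M$ and $n \in N$, and the projection $E$ on $M$ along $N$ sends $v$ to $m$. With respect to a basis adapted to this decomposition its matrix has the block form $$\begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix},$$ which makes the characteristic identity $E^2 = E$ visible at a glance.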
Unit 4: Jordan canonical form
Now that we understand how one can encode a given linear transformation in terms of a matrix, one question to ask is whether there is a basis for the underlying vector space $V$ in which the given transformation is particularly "nice". There is a lot to say in this regard, and we have already studied certain cases of this question. For instance, we know that a transformation $A$ on $V$ can be written as a direct sum of transformations $B$ and $C$ on subspaces $M$ and $N$ (respectively) precisely when the pair $(M,N)$ reduces $A$ (this is just the definition of reducibility!). Our main result in this section is to argue that when the underlying field is sufficiently "nice," any transformation can be expressed under a suitable basis as a matrix that is "almost diagonal."
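To preview what "almost diagonal" will end up meaning (this is only to set expectations; the presentations below build the actual machinery), the building blocks are matrices of the shape $$\begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix},$$ that is, a single proper value repeated down the diagonal with $1$'s immediately adjacent to it (above or below, depending on one's convention); the Jordan canonical form is a direct sum of blocks of this shape.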
-
Presenter: Laura Text sections: 53 Content: To begin our quest towards Jordan canonical form, we will need to have a technique for finding values $\lambda$ for which there exist nonzero vectors $v \in V$ with $Av = \lambda v$. [Note: it's not at all clear why this should be an important question to answer at this point.] The standard way to do this is via the determinant of a transformation. Using our knowledge of top-dimensional alternating linear forms on $V$ (!!), we'll show that there is a determinant function from the set of linear transformations to the set of scalars that satisfies a number of desirable properties. Among them is a technique for finding all proper values of a given matrix. Comments: You may omit the third- and second-to-last paragraphs in section 53, but should cover the terminology introduced in the final paragraph of the section. -
Presenter: Carla Text sections: 54--55 Content: The notion of proper values (i.e., eigenvalues) and their connection to the characteristic polynomial is explored. The relationship between the geometric and algebraic manifestations of proper values is discussed. Comments: -
Presenter: Audrey Text sections: 56, 57 (through the proof of Theorem 1) Content: As a first result to arguing that an arbitrary transformation has a "nice" matrix representation, we prove that any transformation admits a basis under which the matrix representation is triangular. We then study a particular subclass of transformations (nilpotent transformations) and show that they contain invariant subspaces on which the matrix representation is a particularly simple triangular matrix. Comments: -
Presenter: Jennifer Text sections: 57 (after the proof of Theorem 1), 58 Content: In this section we use the results we've developed so far to arrive at the nicest of the so-called canonical forms: the Jordan canonical form. Comments:
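As a small illustration of the determinant technique from Laura's presentation (a worked $2 \times 2$ example, nothing more): the proper values of $$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$ are the roots of $\det(A - \lambda I) = (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3)$, namely $\lambda = 1$ and $\lambda = 3$.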
Unit 5: Inner Product Spaces
At this point we've hit the highlights of both general (finite-dimensional) vector spaces and their associated linear transformations. Now we shift to a more specialized collection of vector spaces: those for which one can give a meaningful treatment of the geometric concepts of length and angle. In particular we'll be focusing only on real and complex vector spaces, and we'll essentially be using the classical notion of a dot product as inspiration for defining the kinds of concepts we're after.
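The model example to keep in mind (stated here under one common convention for which argument gets conjugated; Halmos fixes his own convention in the sections below) is $$(x,y) = \sum_{i=1}^{n} \xi_i \overline{\eta_i} \quad \text{for } x = (\xi_1,\cdots,\xi_n), \; y = (\eta_1,\cdots,\eta_n) \in \mathbb{C}^n,$$ which reduces to the usual dot product when the entries are real; length ($\|x\| = \sqrt{(x,x)}$) and orthogonality ($(x,y) = 0$) are then defined in terms of it.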
-
Presenter: Amy Text sections: 61--62 Content: Sections 59 and 60 give motivation for the formalism introduced in these sections. The key idea is to write down axioms that capture the properties we'd want from a function that generalizes the notion of dot product on $\mathbb{R}^n$. Once we've defined inner products, we can again rely on our geometric intuition to define a notion of orthogonality that comes from an inner product. Comments: Parts of sections 59 and 60 might be useful in motivating the definitions we see in these sections, particularly the "skew symmetry" that is the first axiom of an inner product space. Given that these sections introduce new notions, it's also worth pointing out that having meaningful examples can go a long way toward driving the intuition behind these concepts home. Finally, note that section 62 introduces some concepts (like orthogonal dimension) which will later be redundant with terminology we already have on hand. When introducing these concepts, it might be useful to point out that they are a placeholder as we try to work towards their connections to things we already know (e.g., how does orthogonal dimension relate to dimension?). -
Presenter: Marissa Text sections: 63--64 Content: Building on the basic terminology established in the previous sections, we continue to study notions related to inner product spaces. In particular we give a characterization of what it means for a collection of vectors to be complete, and we also give an important bound on the inner product between two vectors (in terms of the magnitudes of these vectors). Comments: Section 63 begins to untangle some of the pedantic definitions in section 62, but it doesn't ultimately resolve them. For instance, Halmos points out in section 62 that the number of elements in a complete orthonormal set doesn't have an immediate connection to the so-called orthogonal dimension (since it's at least conceivable that we could have a complete orthonormal set that is not "as large as possible," which is how orthogonal dimension is defined). Section 63 does not resolve this question, but it does pave the way for a resolution. The presenter should make sure that everyone knows we're establishing some basic properties of complete orthonormal sets, but that we haven't yet developed all we need to resolve this particular puzzle. -
Presenter: Madie Text sections: 65--66 Content: The puzzle of how complete orthonormal sets behave --- particularly how large they can be, and whether they can be large enough to be a basis for the space --- is finally resolved. An algorithm which converts a given basis to an orthonormal basis is introduced (the famed Gram-Schmidt algorithm that everyone loves). The notion of orthogonal complement is then introduced. Comments: At the end of section 65 Halmos mentions that orthonormal bases are so awesome that we'll work with them by default when we are in an inner product space; indeed, we'll use them without explicitly calling them orthonormal bases, and instead we'll demote a non-orthonormal basis by giving it the name "linear basis." It's worth making this very clear in your presentation, since this convention will remain in effect for the duration of the course. -
Presenter: Han Text sections: 67--69 Content: We revisit the notion of linear functional, now in the context of an inner product space. Whereas the dual space for a general vector space isn't naturally connected to the vector space, it turns out that there is a natural way to associate vectors to linear functionals in an inner product space (though the connection isn't quite pretty enough to be an isomorphism). These ideas are explored in full, including a reflection on what this means for the reflexivity results we already know about, and on how it tells us we can make $V'$ into an inner product space using the inner product structure on $V$. Comments: Sections 68 and 69 include a lot of exposition, but not a lot of things that are formally deemed definitions or theorems. Part of the challenge in this presentation is to give an appropriate narrative structure, but also to codify the discussion into a more standard definition/theorem format. Bonus: Here is Han's handout that summarizes how the new news relates to the old news. -
Presenter: Maggie Text sections: 70--71 Content: In our last presentation we saw how one can view elements of $V$ naturally as elements of $V'$ in an inner product space, and because of this we're presented with an opportunity that doesn't appear in general vector spaces: a linear transformation might be its own adjoint. In this presentation we define the notion of self-adjointness and prove a number of properties that self-adjoint transformations have. We then give a method that allows us to characterize when a linear transformation is the zero transformation via inner products, ultimately using this to characterize self-adjoint transformations for complex inner product spaces (i.e., unitary spaces). Comments: Starting in this section, Halmos works to motivate a number of results in inner product spaces by finding analogies with the complex numbers. In these sections his driving motivation is that self-adjoint transformations are somehow akin to real numbers. The presenter doesn't need to make this the motivation in their presentation of the given material, and there's good reason to think that holding these observations until later (when the theory is more fully developed) is the more natural way to approach this content. In any event, the motivation doesn't carry any mathematical power, so it's not something that one can use in proving results. -
Presenter: Carla Text sections: 72--73 Content: Two new classes of transformations are introduced. The first is the collection of positive transformations; roughly speaking, these are the self-adjoint transformations for which an input vector $x$ and its output vector $Ax$ have a non-negative inner product. On the other hand, an isometry is a transformation which preserves the inner product (i.e., the value of $(x,y)$ is the same as the value of $(Ax,Ay)$). This latter relation means that $A$ not only preserves the linear structure of $V$, but also its inner product structure; as a result, the isometries can be thought of as the isomorphisms of an inner product space with itself. Comments: Halmos continues to motivate concepts in inner product spaces by using properties of complex numbers as his muse. Again: the presenter of this material does not need to make this her motivation when presenting the content of this section, and there's good reason to think that holding these observations until later (when the theory is more fully developed) is the more natural way to approach this content. In any event, the motivation doesn't carry any mathematical power, so it's not something that one can use in proving results. -
Presenter: Laura Text sections: 75--76 Content: We explore the connection between orthogonal complements and the projections they entail; since any subspace $M$ of $V$ has a unique orthogonal complement $M^\perp$, this means each subspace gets a unique "perpendicular projection" (i.e., the projection $P_{M,M^\perp}$). We determine how we can characterize perpendicular projections, and in particular look to see what "extra" properties they have that distinguish them from general projections. The key result in this regard is that whereas a general projection is characterized by idempotency, a perpendicular projection is idempotent and self-adjoint. We then define a notion of orthogonality for projections, and then consider both a criterion for when a sum of perpendicular projections is again a perpendicular projection, and another theorem that tells us how to detect when one subspace $N$ is contained in another subspace $M$ in terms of properties of the corresponding perpendicular projections. Comments: As ever, storytelling is an important part of this section, since there's a central theme running throughout. You'll want to make connections to past results on projections, but be sure to show how these new results act to further specialize the results we already know about general projections. -
Presenter: Olivia Text sections: 78--79 Content: These sections show how the notions we've developed so far for transformations on inner product spaces (self-adjointness, positivity, etc.) are connected to proper values. There are lots of results in this vein!! We then go on to prove the spectral theorem, which Paul is quite excited about; the generalization of this result to general inner product spaces (not necessarily finite-dimensional) is described by some as the most important result on linear transformations. Comments: There's A LOT of content in these two sections, and 50 minutes simply won't suffice. You'll certainly have to make choices about what content to present and what content to leave out. This likely means not only missing out on proofs of many theorems, but perhaps not even stating some theorems as well; here are some suggested omissions to get you started: Theorem 2 from section 79 could be stated (even informally) but not proved; Theorem 3 isn't something we necessarily need for our purposes in this course; and the proof of Theorem 2 from section 78 can be omitted (it uses some technology that we've skipped over). What Paul doesn't deliver on very powerfully is why we care about the spectral theorem. The punchline in this regard is at the bottom of page 157, and it essentially says that self-adjoint transformations are diagonable (and, indeed, the basis for this diagonalization can be chosen to be orthonormal) --- and are in fact simply linear combinations of perpendicular projections. These results will be used to great effect later (e.g., we'll be able to use them to do things like take the square root of a matrix). Note that the proof of Theorem 2 in 78 is inaccessible since we skipped the section on complexification. Despite not being able to offer a proof, the theorem itself can still be stated and interpreted for the class.
-
Presenter: Audrey Text sections: 80, 82 Content: The main content of the spectral theorem is that self-adjoint transformations have a "spectral form": they can be written (uniquely) as a linear combination of perpendicular projections which add up to the identity. This, in turn, can be used to show us that self-adjoint transformations are not only diagonable, but are in fact isometrically diagonable. Given that this "spectral form" is so convenient, it's natural to ask precisely which transformations (other than self-adjoint transformations) enjoy this kind of expression. In the first part of this lecture, we learn that the transformations which have a "spectral form" are precisely those transformations which commute with their adjoint. In the second part of this lecture, we show how one can use a spectral form to define the value of a transformation under a function. This generalizes a fact we've often used throughout the semester: that one can scale, add, or multiply linear transformations, and therefore define what it means to plug a transformation into a polynomial. With this new technology, though, one can evaluate linear transformations at more exotic functions (like the square root function, or the exponential function). Comments: -
Presenter: Sappha Text sections: 83--84 Content: We conclude the class with two additional applications of spectral decompositions. The first builds on a theme that has resonated since we've started to discuss self-adjoint transformations, namely that there is a fairly strong analogy to be made between linear transformations and complex numbers. For example, we've seen that self-adjoint transformations act as a kind of linear transformation analog of real numbers (e.g., if one views "starring" a transformation as a kind of transformation analog of conjugation in $\mathbb{C}$, then the self-adjoint transformations --- those with $A = A^*$ --- satisfy the same kind of invariance properties that real numbers enjoy under conjugation; another reason one might reasonably view self-adjoint transformations as analogs of real numbers is that self-adjoint transformations are precisely the (normal) transformations with real proper values). In this way, the fact that every $A \in L(V,V)$ has a unique expression as a sum $A = B+iC$ where $B,C$ are self-adjoint transformations is a generalization of the fact that every complex number can be written as a real linear combination of $1$ and $i$. But there is another natural way to express complex numbers, and that is via their polar form: every complex number $\alpha$ can be written in the form $\alpha = \rho e^{i \theta}$, where $\rho$ is a non-negative number (namely $|\alpha|$) and $e^{i\theta}$ is a point on the unit circle. Surprisingly, we'll find that linear transformations also admit this kind of polar analog. The second (and last) application of the power of spectral decompositions is to give a criterion for when normal linear transformations commute. We've seen several times throughout this course that commuting transformations are particularly special (e.g., they can be simultaneously triangularized), and this final theorem shows us that the relationship between commuting matrices is --- in fact --- quite interwoven.
Comments:
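For reference, the "real and imaginary parts" decomposition mentioned in the last presentation can be written down explicitly (a standard formula, recorded here only to anchor the analogy): given any $A$, the transformations $$B = \tfrac{1}{2}(A + A^*) \quad \text{and} \quad C = \tfrac{1}{2i}(A - A^*)$$ are self-adjoint and satisfy $A = B + iC$, mirroring the decomposition $\alpha = \text{Re}(\alpha) + i\,\text{Im}(\alpha)$ of a complex number.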