Homework Assignments
Homework will be due once per week. This page will be updated every time a new problem set is posted. Solutions for each problem set will be posted on this page after the assignment is due.
Instructions: Your answers should always be written in narrative form; you should of course include relevant computations and equations, but these should be situated in the framework of some overall narrative that explains your reasoning, and they should always be naturally integrated into the structure of your writing. You should follow standard rules of composition: write in complete sentences, include appropriate punctuation, group sentences with a common theme into a paragraph, provide appropriate narrative transition when your reasoning takes a significant turn, etc. When writing formal proofs, you should not use abbreviations like $\forall$, $\exists$, $\Rightarrow$. It is acceptable to use the notation $\in$ when describing elements in a formal proof (e.g., "Since $2 \in \mathbb{Z}$, we see that ..."). As a general rule, a mathematical symbol should never start a sentence or follow punctuation; the exception is that you can write a mathematical symbol after a colon which is announcing the beginning of a list of mathematical objects. Of course your work should be legible and neat. The overall guiding principle in your writing is to remember that your work is meant to be read by a skeptical peer; your job is to write a (logically) convincing argument to this audience.
- Suppose that $\mathbb{F}$ is a subfield of $\mathbb{K}$, and that $V$ is a $\mathbb{K}$-vector space. Show that $V$ is an $\mathbb{F}$-vector space. (Note: you'll need to provide the relevant operations and check axioms.)
- Suppose that $p$ is prime, and let $f:\mathbb{Z} \to \{0,1,\cdots,p-1\}$ be the function which sends any given integer $x$ to its remainder upon division by $p$. Define new addition and multiplication operations $\oplus$ and $\odot$ on $\mathbb{Z}$ as follows: $$a\oplus b = f(a+b) \quad \quad \quad \text{ and } \quad \quad \quad a\odot b = f(a\cdot b).$$ (Note that since the outputs of $f$ come from $\{0,1,\cdots,p-1\}\subseteq \mathbb{Z}$, these new addition and multiplication operations really do return integer values.) Show that $\mathbb{Z}$ is NOT a field under these operations by exhibiting an explicit failure of some field axiom.
- Show that there is a way to define addition and multiplication on $\mathbb{Z}$ so that it becomes a field. That is, show there are functions $\boxplus:\mathbb{Z} \times \mathbb{Z} \to \mathbb{Z}$ and $\boxdot: \mathbb{Z} \times \mathbb{Z} \to \mathbb{Z}$ so that $\mathbb{Z}$ satisfies the relevant axioms of a field under $\boxplus$ and $\boxdot$. [Hint: find a field that $\mathbb{Z}$ is in bijection with and use this as leverage.]
- Suppose that $V$ is an $\mathbb{F}$-vector space and that $\alpha \in \mathbb{F}$ and $v \in V$ satisfy $\alpha \cdot v = 0_V$. Prove --- using only axioms of fields or vector spaces --- that either $\alpha = 0_\mathbb{F}$ or $v = 0_V$.
- Suppose that $V$ is an $\mathbb{F}$-vector space and that $\alpha,\beta \in \mathbb{F}$ are distinct elements. Prove that for any nonzero $v \in V$ we have $\alpha \cdot v \neq \beta \cdot v$.
- Complete problem 3 from Section 4 of the text.
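If you want to experiment with the operations $\oplus$ and $\odot$ from the second problem before writing anything up, here is a minimal Python sketch (with the hypothetical choice $p = 5$; the names `f`, `oplus`, and `odot` are ours, not the text's):

```python
p = 5  # a hypothetical prime, just for experimentation

def f(x):
    # Remainder of x upon division by p; Python's % returns a value
    # in {0, 1, ..., p-1} even when x is negative.
    return x % p

def oplus(a, b):
    return f(a + b)

def odot(a, b):
    return f(a * b)

# Outputs land in {0, ..., p-1} even when the inputs do not:
print(oplus(7, 9))   # prints 1, since f(16) = 1
print(odot(-3, 4))   # prints 3, since f(-12) = 3
```

Note that Python's `%` operator already returns a representative in $\{0,\cdots,p-1\}$ even for negative inputs, which is exactly the function $f$ above.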
Here are the Solutions.
- Suppose that $V$ is an $n$-dimensional $\mathbb{K}$-vector space, and that $\mathbb{F}$ is a subfield of $\mathbb{K}$. We already know that $\mathbb{K}$ is a $\mathbb{K}$-vector space, and by the first problem on pset 1 it therefore follows that both $V$ and $\mathbb{K}$ are $\mathbb{F}$-vector spaces. Prove that if $\mathbb{K}$ is $m$-dimensional over $\mathbb{F}$, then $V$ is $nm$-dimensional over $\mathbb{F}$. That is, prove $$(\dim_\mathbb{F}(\mathbb{K}))(\dim_\mathbb{K}(V)) = \dim_\mathbb{F}(V)$$ when all the given quantities are finite.
- Suppose that $W$ is a subspace of $V$ (both over the field $\mathbb{F}$), and that $\dim(W) = \dim(V) < \infty$. Prove that $W = V$.
- Suppose that $W_1$ and $W_2$ are subspaces of $V$, and that $V = W_1 \cup W_2$. Prove that either $W_1 = V$ or $W_2 = V$.
- Complete problem 8 from section 12. (Note that (a) is asking you to prove the statement it is asserting.)
- If $W_1$ and $W_2$ are vector spaces within $V$, prove that $$\dim(W_1)+\dim(W_2) = \dim(W_1 + W_2) + \dim(W_1 \cap W_2).$$ [Hint: $W_1 \cap W_2$ is a subspace of both $W_1$ and $W_2$. Theorem 2 from section 12 is now useful.]
- Prove that "isomorphic to" is an equivalence relation on vector spaces over $\mathbb{F}$, in that it satisfies the following three properties:
- (Reflexivity) If $V$ is a vector space over $\mathbb{F}$, then $V \simeq V$.
- (Symmetry) If $V$ and $U$ are vector spaces over $\mathbb{F}$ with $V \simeq U$, then $U \simeq V$.
- (Transitivity) If $V$, $U$ and $W$ are vector spaces over $\mathbb{F}$ with $V \simeq U$ and $U \simeq W$, then $V \simeq W$.
- Suppose that $V$ is an $n$-dimensional vector space, and that $\mathcal{X} = \{x_1,\cdots,x_n\}$ is a given basis. In section 9 we learned that $\mathcal{X}$ gives rise to an isomorphism from $V$ to $\mathbb{F}^n$; this function is often called the coordinate function for $V$ relative to $\mathcal{X}$. In our proof in $\S 9$ we called this function $T$, but to emphasize the fact that the function depends on the basis $\mathcal{X}$ we will instead denote it by $T_\mathcal{X}$ here. To define $T_\mathcal{X}$, recall that for any $v \in V$ there exist unique scalars $c_1,\cdots,c_n$ so that $v = c_1x_1+\cdots+c_nx_n$. We then define $T_\mathcal{X}:V \to \mathbb{F}^n$ by the rule $T_\mathcal{X}(v)=(c_1,\cdots,c_n)$.
- Suppose that $V = \mathbb{C}[t]_{<3}$ and that $\mathcal{X} = \{t^2-1,t^2+t,t^2+t+1\}$. (You can assume $\mathcal{X}$ is a basis of $V$ without proof.) Compute $T_\mathcal{X}(at^2+bt+c)$.
- Suppose that $V = \mathbb{C}[t]_{<3}$ and that $\mathcal{Y} = \{1,t+1,t^2+t+1\}$. (You can assume $\mathcal{Y}$ is a basis of $V$ without proof.) Compute $T_\mathcal{Y}(at^2+bt+c)$.
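As a sanity check on computations like the two above, here is a Python sketch that computes the coordinates of one specific polynomial relative to the basis $\mathcal{X} = \{t^2-1,\, t^2+t,\, t^2+t+1\}$ by Cramer's rule, after identifying $\mathbb{C}[t]_{<3}$ with coordinate triples (constant, $t$, $t^2$). The helper names `det3` and `coords` are ours; this is a numerical spot check at one point, not the symbolic answer the problems ask for.

```python
from fractions import Fraction

def det3(m):
    # Determinant of a 3x3 matrix given as a list of rows.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def coords(basis, v):
    # Solve c1*b1 + c2*b2 + c3*b3 = v by Cramer's rule; the basis
    # vectors are the COLUMNS of the coefficient matrix.
    M = [[basis[j][i] for j in range(3)] for i in range(3)]
    D = det3(M)
    out = []
    for k in range(3):
        Mk = [row[:] for row in M]
        for i in range(3):
            Mk[i][k] = v[i]
        out.append(Fraction(det3(Mk), D))
    return out

X = [(-1, 0, 1), (0, 1, 1), (1, 1, 1)]  # t^2 - 1, t^2 + t, t^2 + t + 1
p = (5, 3, 2)                           # the polynomial 2t^2 + 3t + 5
print(coords(X, p))                     # the coordinates come out to -1, -1, 4
```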
Here are the Solutions.
- Consider the following basis of $\mathbb{R}^3$: $$\mathcal{X} = \{(0,1,0), (1,1,-1),(1,2,3)\}.$$ Give the dual basis $\mathcal{X}'$.
[Hint: you want to produce functionals that have specified outputs on the basis elements. To do this, you might need to remember how to solve systems of equations. Once you give the supposed dual basis, be sure that you argue (1) each element is a linear functional on $\mathbb{R}^3$, and (2) that it is appropriately "dual" to the basis above.]
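To illustrate the hint without giving away the assigned basis, here is a Python sketch for a different, made-up basis $\{(1,0,0),(1,1,0),(1,1,1)\}$ of $\mathbb{R}^3$. If the basis vectors form the columns of a matrix $B$, then the rows of $B^{-1}$ (computed by hand below for this triangular example) give the coefficient vectors of the dual functionals, since $B^{-1}B = I$ is exactly the duality condition $f_i(b_j)=\delta_{ij}$. The name `apply_functional` is ours.

```python
basis = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]  # a made-up basis of R^3

# Rows of B^{-1}, computed by hand for this triangular example; row r
# encodes the functional f(x) = r[0]*x1 + r[1]*x2 + r[2]*x3.
dual_rows = [(1, -1, 0), (0, 1, -1), (0, 0, 1)]

def apply_functional(row, x):
    return sum(r * xi for r, xi in zip(row, x))

for row in dual_rows:
    # Each functional returns 1 on "its" basis vector and 0 on the others.
    print([apply_functional(row, b) for b in basis])
```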
- Consider $f_1,f_2,f_3 \in (\mathbb{R}[t]_{<3})'$ defined by $$f_1(p) = \int_0^1 p(t)~dt \quad \quad f_2(p) = \int_0^2 p(t)~dt \quad \quad f_3(p) = \left.\frac{dp}{dt}\right|_{t=-1}.$$ Find $\mathcal{X} = \{p_1,p_2,p_3\} \subseteq \mathbb{R}[t]_{<3}$ so that $\mathcal{X}' = \{f_1,f_2,f_3\}$.
NB: Properly speaking we ought to be showing that $\mathcal{X}$ is a basis before constructing the set dual to it, but if you correctly argue that $\{p_1,p_2,p_3\}$ is dual to $\{f_1,f_2,f_3\}$, then I'll spare you the drudgery of checking that $\{p_1,p_2,p_3\}$ is a basis.
- Suppose that $V$ is a vector space over $\mathbb{F}$ with $n=\dim(V)$.
- Suppose that $n>1$, and that $\mathcal{X}=\{x_1,\cdots,x_n\}$ is a basis for $V$. Prove that $\mathcal{Y}=\{x_1+x_2,x_2,\cdots,x_n\}$ is also a basis for $V$, and carefully check that $\mathcal{Y}\neq \mathcal{X}$.
- Suppose that $|\mathbb{F}|>2$, and that $\mathcal{X}=\{x_1,\cdots,x_n\}$ is a basis for $V$. Let $\alpha \in \mathbb{F}$ be given with $\alpha \not\in \{0_\mathbb{F},1_\mathbb{F}\}$. Prove that $\mathcal{Y}=\{\alpha x_1,x_2,\cdots,x_n\}$ is also a basis for $V$, and carefully check that $\mathcal{Y} \neq \mathcal{X}$.
NB: The purpose of this problem is to show that in almost all cases (namely, in all cases except when $\mathbb{F}=\mathbb{F}_2$ and $\dim(V)=1$), a given vector space $V$ has more than one basis. For this reason, we cannot speak about "the basis" for a vector space $V$, but must instead talk about "a basis" for a vector space $V$.
- Suppose $U$ is a subspace of $V$, with $1 \leq \dim(U) < \dim(V)$. We know from Halmos that there exists some complement $W$ to $U$ within $V$, which means $W$ is a subspace of $V$ with $U+W=V$ and $U \cap W = \{0_V\}$. Suppose that $\{x_1,\cdots,x_k\}$ is a basis for $U$, and that $\{y_1,\cdots,y_\ell\}$ is a basis for $W$.
- Prove that $\mathcal{Y}=\{y_1+x_1,y_2,\cdots,y_\ell\}$ is a linearly independent set within $V$.
- Let $\hat W$ be the space spanned by $\mathcal{Y}$. Prove that $\hat W$ is a complement of $U$ in $V$.
- Prove that $W \neq \hat W$.
NB: The purpose of this problem is to show that in almost all cases (namely, when $U$ is a "nontrivial" subspace of $V$), we cannot speak about "the complement" of $U$ in $V$, but must instead speak about "a complement" of $U$ in $V$.
Here are the Solutions.
- Suppose that $V$ and $W$ are $\mathbb{F}$-vector spaces, and that $T:V \to W$ is a function with the property that for all $\alpha,\beta \in \mathbb{F}$ and all $v_1,v_2 \in V$ we have $T(\alpha v_1+\beta v_2)=\alpha T(v_1) + \beta T(v_2)$. (We will later call functions of this form "linear transformations," but most people you meet on the street will just say "$T$ is linear.")
- Prove that if $T$ is a linear transformation, then $T(0_V)=0_W$.
- Prove that if $T$ is injective and $\{x_1,\cdots,x_n\}$ is a linearly independent collection of vectors from $V$, then $\{T(x_1),\cdots,T(x_n)\}$ is a linearly independent collection of vectors from $W$.
- Prove that if $T$ is surjective and $\{x_1,\cdots,x_n\}$ spans $V$, then $\{T(x_1),\cdots,T(x_n)\}$ spans $W$.
- Prove that if $T$ is an isomorphism, then $\{x_1,\cdots,x_n\}$ is a basis for $V$ if and only if $\{T(x_1),\cdots,T(x_n)\}$ is a basis for $W$.
- Suppose that $U$ and $W$ are subspaces of a vector space $V$. Prove that $(U+W)^0=U^0 \cap W^0$.
- We learned in class that if $\dim(V) < \infty$, then $V$ is "naturally isomorphic" to $V''$, where that natural isomorphism $T:V \to V''$ associates to a given $v \in V$ the function $T(v)=\text{ev}_v:V' \to \mathbb{F}$ which sends any $y \in V'$ to $\text{ev}_v(y) = y(v) = [v,y]$. In this problem we will give an example that shows that the map $T:V \to V''$ can fail to be surjective when $\dim(V) = \infty$. [In fact, if $V$ is any infinite dimensional vector space, one can use precisely these same ideas to prove that $V$ is not isomorphic to $V''$, but we will settle for our more humble result.]
Let $\mathbb{F}$ be a field, and let $V = \mathbb{F}[t]$; this just means that $V$ is the set of all polynomials with coefficients in $\mathbb{F}$. We write $\mathbb{F}[[t]]$ to denote the set of formal power series with coefficients in $\mathbb{F}$; specifically, $$\mathbb{F}[[t]] = \{f_0+f_1t+f_2t^2+f_3t^3+\cdots: f_i \in \mathbb{F} \text{ for all }i\} = \left\{\sum_{i=0}^\infty f_i t^i: f_i \in \mathbb{F} \text{ for all }i \geq 0\right\}.$$ The space $\mathbb{F}[[t]]$ is an $\mathbb{F}$-vector space under the operations $$\begin{align*} \left(\sum_{i=0}^\infty f_i t^i\right)+\left(\sum_{i=0}^\infty g_i t^i\right) &= \sum_{i=0}^\infty (f_i+g_i)t^i\\ \alpha \cdot \sum_{i=0}^\infty f_i t^i &= \sum_{i=0}^\infty (\alpha f_i) t^i. \end{align*}$$ Note that $\mathbb{F}[t]$ is the subset of $\mathbb{F}[[t]]$ given by $$\left\{\sum_{i=0}^\infty f_i t^i: \exists N \in \mathbb{N} \text{ so that }f_i=0 \text{ for all }i \geq N\right\}.$$ (In other words $\mathbb{F}[t]$ is the subset of formal power series whose coefficients are eventually zero, whereas a random element of $\mathbb{F}[[t]]$ can continue to have nonzero coefficients indefinitely.)
- For any $h = \sum_{i=0}^\infty h_i t^i \in \mathbb{F}[[t]]$, define a function $y_h:V \to \mathbb{F}$ that associates to any input $p = \sum_{i=0}^\infty a_i t^i \in \mathbb{F}[t]$ the output $y_h(p) = \sum_{i=0}^\infty a_i h_i$. Verify that the outputs of $y_h$ truly are in $\mathbb{F}$, and show that $y_h \in \mathbb{F}[t]'$.
[Hint: given that the value of $y_h(p)$ is expressed as an infinite sum of elements from $\mathbb{F}$, one should reasonably be worried that this output isn't a bona fide element of $\mathbb{F}$ (since we only know how to add a finite number of elements in $\mathbb{F}$!). You are being asked to show that outputs of $y_h$ are elements in the stated codomain so that you ensure that $y_h$ is well-defined.]
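Here is a small Python sketch of the point of the hint: since a polynomial has only finitely many nonzero coefficients, the "infinite" sum defining $y_h(p)$ is computed as a finite one. (The encoding of $h$ as a coefficient function and of $p$ as a finite coefficient list is our own, purely for illustration.)

```python
def y_h(h, p_coeffs):
    # h: a function i -> h_i giving the (possibly "infinite") coefficient
    # stream of a formal power series; p_coeffs: the finite coefficient
    # list of a polynomial.  The sum has only len(p_coeffs) terms.
    return sum(a * h(i) for i, a in enumerate(p_coeffs))

# h = 1 + t + t^2 + ... (all coefficients 1), p = 2 + 3t + 5t^2:
print(y_h(lambda i: 1, [2, 3, 5]))  # prints 10
```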
- Verify that the set $\mathcal{X} = \{y_1,y_t,y_{t^2},y_{t^3},\cdots\} \cup \{y_{1+t+t^2+t^3+\cdots}\}$ is linearly independent. (In doing this, recall that a linear combination of elements of $\mathcal{X}$ means --- by definition of linear combination --- that only finitely many coefficients in the combination are non-zero. You will also use the fact that two formal power series $\sum f_i t^i$ and $\sum g_i t^i$ are equal if and only if $f_i=g_i$ for all $i \geq 0$.)
- Using an analog of Theorem 1 of section 15, one can verify that there is an element $\Delta \in \mathbb{F}[t]''$ so that $\Delta(y_{t^n})=0$ for all $n \in \{0,1,2,3,\cdots\}$ and so that $\Delta(y_{1+t+t^2+t^3+\cdots})=1$. Prove that $\Delta \neq \text{ev}_p$ for any $p \in \mathbb{F}[t]$.
Here are the Solutions.
Because of fall break, I'm giving you the option to work on a subset of these problems instead of all of them. You must complete at least two problems. If you complete two of them, then this pset will be worth half as much as other psets when I weight your homework average at the end of the term. If you complete three of the four, this pset will count for three-quarters of a pset when I weight your homework average. If you complete all four, this pset will count with the same weight as all other psets.
- Suppose that $U$ is a subspace of a vector space $V$. Consider the function $T:(V/U)' \to U^0$ that assigns to each functional $f \in (V/U)'$ the function $T_f \in V'$ defined by $T_f(x) = f(x+U)$. Verify that the outputs of $T$ are elements of the stated codomain (this includes verifying that $T_f \in V'$ for all $f \in (V/U)'$), and then show that $T$ is an isomorphism.
- If $V$ is a vector space, we have heard that the collection of alternating $k$-linear forms is a vector space. Many books use $\bigwedge^k(V')$ to denote this set, so we'll adopt that notation in this problem and for the duration of the problem set. For the duration of this problem, we'll also assume that the field of scalars has the property that $1 \neq -1$. (I.e., we assume $\mathbb{F}$ does not have characteristic $2$.)
- Suppose that $\mathcal{X} = \{x_1,\cdots,x_n\}$ is a basis for $V$, where $n \geq 2$. Prove that for any $1 \leq i < j \leq n$ there exists a unique $w_{i,j} \in \bigwedge^2(V')$ so that $$w_{i,j}(x_\ell,x_m) = \left\{\begin{array}{ll}0&\text{ if }\{\ell,m\} \neq \{i,j\}\\1&\text{ if }(\ell,m) = (i,j)\\-1&\text{ if }(\ell,m) = (j,i).\end{array}\right.$$
- Prove that $\{w_{i,j}: 1 \leq i < j \leq n\}$ is a basis for $\bigwedge^2(V')$.
[Note: If one counts the number of forms in the basis from part (b) above, then one finds that $$\dim\left(\bigwedge^2(V')\right)=\binom{\dim(V)}{2},$$ where this last expression is a binomial coefficient. One can adapt the proof above to show that for any $k$ we have $\dim\left(\bigwedge^k(V')\right) = \binom{\dim(V)}{k}$. (This is another reason why the space of top-dimensional linear forms is $1$-dimensional, since the formula above tells us that $\dim\left(\bigwedge^{\dim(V)}(V')\right) = \binom{\dim(V)}{\dim(V)}=1$.)]
- If $w$ is a $k$-linear form on $V$, then define $\text{Stab}(w) = \{\sigma \in S_k: \sigma w = w\}$.
- Find trilinear forms $w_1,w_2,w_3,w_6$ on $\mathbb{R}^4$ so that $\left|\text{Stab}(w_i)\right| = i$.
- Prove that if $w$ is nonzero and skew-symmetric, then $\text{Stab}(w) = A_k$ (i.e., the alternating group). Can you find $k \in \mathbb{N}$, a vector space $V$, and a $k$-linear form $w$ on $V$ so that $\text{Stab}(w) = A_k$, but with $w$ not skew-symmetric? If so, do it; if not, prove why. [Note: in this problem again assume that $\text{char}(\mathbb{F}) \neq 2$.]
[Hint: It is a fact --- that you don't have to prove --- that if $\alpha_1,\alpha_2,\alpha_3,\alpha_4,\beta_1,\beta_2,\beta_3,\beta_4,\gamma_1,\gamma_2,\gamma_3,\gamma_4$ are any given elements of $\mathbb{R}$, then the function $$w\left(((x_1,x_2,x_3,x_4),(y_1,y_2,y_3,y_4),(z_1,z_2,z_3,z_4))\right) = \left(\sum_{i=1}^4 \alpha_i x_i\right)\left(\sum_{j=1}^4 \beta_jy_j\right)\left(\sum_{k=1}^4 \gamma_k z_k\right)$$ is a trilinear functional on $\mathbb{R}^4$.]
- In several places in the analysis of alternating forms, Halmos has asked us to re-express an alternating linear form by writing each argument as a linear combination of a given basis and expanding it using linearity. His arguments have been descriptive, but not terribly explicit. For this problem, we will carry this process out completely in the case of "top-dimensional" alternating forms (i.e., when the degree of the form equals the dimension of the space $V$ that the form acts on).
Let $\{x_1,\cdots,x_n\}$ be a basis for $V$, let $w$ be an alternating $n$-linear form, and let $\{y_1,\cdots,y_n\}$ be a subset of vectors from $V$. For each $1 \leq i \leq n$, we know there are coefficients $a_{i1},\cdots,a_{in} \in \mathbb{F}$ so that $$y_i = a_{i1}x_1 + \cdots + a_{in}x_n.$$ Prove that $$w(y_1,\cdots,y_n) = \left(\sum_{\sigma \in S_n} (\text{sgn}(\sigma)) a_{1\sigma(1)}\cdots a_{n\sigma(n)}\right)w(x_1,\cdots,x_n).$$
[Hint: You might first show that $$w(y_1,\cdots,y_n) = \sum_{i_n=1}^n \sum_{i_{n-1}=1}^n \cdots \sum_{i_2=1}^n \sum_{i_1=1}^n a_{1i_1} a_{2i_2} \cdots a_{ni_n} w(x_{i_1},x_{i_2},\cdots,x_{i_n}).$$ You can do this by using induction and linearity to successively "expand" each argument.]
[Note 1: Observe that this means that the action that $w$ performs on any collection of $n$ vectors $\{y_1,\cdots,y_n\}$ is determined by (a) the value of $ \left(\sum_{\sigma \in S_n} (\text{sgn}(\sigma)) a_{1\sigma(1)}\cdots a_{n\sigma(n)}\right)$ (which only depends on how we write each $y_i$ as a combination of $\{x_1,\cdots,x_n\}$, and NOT on the form $w$) and (b) the value of $w(x_1,\cdots,x_n)$. For this reason, a top-dimensional linear form is determined entirely by its action on a single basis.]
[Note 2: The fact that a top dimensional linear form is determined entirely by its action on a basis is why any two top-dimensional linear forms $w_1$ and $w_2$ are linearly dependent: for if $\{x_1,\cdots,x_n\}$ is a basis for $V$ and we define $\alpha = w_2(x_1,\cdots,x_n)$ and $\beta=w_1(x_1,\cdots,x_n)$, then our formula above tells us that for any $y_1,\cdots,y_n \in V$ we get $$0 = \alpha w_1(y_1,\cdots,y_n) - \beta w_2(y_1,\cdots,y_n),$$ and hence $\alpha w_1-\beta w_2=0$ is a nontrivial linear dependence between $w_1$ and $w_2$.]
[Note 3: You can modify this argument slightly to come up with an analogous expression for the expansion of an alternating $k$-linear form when $1 \leq k < n$. Using the same notation, for any set of $k$ vectors $\{y_1,\cdots,y_k\}$ this formula tells us $$w(y_1,\cdots,y_k)=\sum_{\mathcal{I} \in K} ~\sum_{\sigma \in S_k} \left((\text{sgn}(\sigma)) a_{i_1i_{\sigma(1)}}\cdots a_{i_ki_{\sigma(k)}} \right) w\left(x_{i_1},\cdots,x_{i_k}\right),$$ where $K$ stands for the set of all subsets of $\{1,2,\cdots,n\}$ of size $k$, and for a given $\mathcal{I} \in K$ we write its elements in increasing order as $i_1$ through $i_k$.]
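For intuition (this is not part of the requested proof), the coefficient $\sum_{\sigma \in S_n} (\text{sgn}(\sigma)) a_{1\sigma(1)}\cdots a_{n\sigma(n)}$ can be evaluated directly by looping over permutations; in the special case where $w(x_1,\cdots,x_n)=1$ it is the familiar determinant of the matrix $(a_{ij})$. Below is a Python sketch; the helper names `sign` and `leibniz` are ours.

```python
from itertools import permutations

def sign(perm):
    # Sign of a permutation given as a tuple of 0-based images: each
    # cycle of even length contributes a factor of -1.
    s, seen = 1, [False] * len(perm)
    for i in range(len(perm)):
        if seen[i]:
            continue
        j, length = i, 0
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        if length % 2 == 0:
            s = -s
    return s

def leibniz(a):
    # Sum over sigma in S_n of sgn(sigma) * a[0][sigma(0)] * ... * a[n-1][sigma(n-1)].
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i, j in enumerate(p):
            term *= a[i][j]
        total += term
    return total

a = [[2, 1, 0], [0, 3, 1], [4, 0, 1]]
print(leibniz(a))  # prints 10, the determinant of a
```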
Here are the Solutions.
- Complete problem 4 from section 23 of the text. (You can ignore the portion of the question that asks about non-degeneracy.)
- Suppose that $W$ is the space of $k$-linear forms on a vector space $V$, and let $\pi \in S_k$ be given. Prove that $A_\pi:W \to W$ given by $A_\pi(w) = \pi w$ is a linear transformation.
- In class we've seen that the set of all linear transformations $T:V \to V$ is a vector space. Some books use the notation $\text{Hom}(V,V)$ to denote this space, a notation we'll adopt for this problem set.
Prove the following: if $\mathcal{X} = \{x_1,\cdots,x_n\}$ is any basis for $V$, and if $\{y_1,\cdots,y_n\} \subseteq V$ is given, then there exists a unique $A \in \text{Hom}(V,V)$ so that $A(x_i) = y_i$ for all $i$.
- Let $\mathcal{X} = \{x_1,\cdots,x_n\}$ be a basis for $V$, and for all $1 \leq i,j \leq n$ we define $A^\mathcal{X}_{i,j}$ to be the unique element of $\text{Hom}(V,V)$ satisfying $$A^\mathcal{X}_{i,j}(x_\ell) = \left\{\begin{array}{ll}x_j&\text{ if }\ell = i\\0_V&\text{ if }\ell \neq i.\end{array}\right.$$ Prove that $\{A^{\mathcal{X}}_{i,j}: 1 \leq i,j \leq n\}$ is a basis for $\text{Hom}(V,V)$.
[Note: as a consequence of this result, we have $\dim(\text{Hom}(V,V)) = (\dim(V))^2$.]
Here are the Solutions.
- Complete Problem 7 from section 43.
- Complete Problem 3 from section 40.
- Complete Problem 5 from section 47.
- Let $V$ be the vector space of all infinitely differentiable functions with domain $\mathbb{R}$ and codomain $\mathbb{R}$. (This just means those functions $f:\mathbb{R} \to \mathbb{R}$ so that for every $n \in \mathbb{N}$ we have $\frac{d^n}{dt^n}[f(t)]$ exists and is differentiable.) Let $\mathcal{E}$ be the subset of all even functions, and let $\mathcal{O}$ be the subset of all odd functions. (Recall that $f \in V$ is even if for all $x \in \mathbb{R}$ we have $f(-x)=f(x)$, and $g \in V$ is called odd if for all $x \in \mathbb{R}$ we have $g(-x)=-g(x)$.) We denote the subset of polynomials of degree at most $n$ by $\mathcal{P}_n$. It is a fact (that you don't have to prove) that $V$ is a vector space, and that $\mathcal{E}, \mathcal{O}$, and $\mathcal{P}_n$ are all subspaces of $V$.
Let $T:V \to V$ be the function defined by $T(f)=\frac{d}{dt}\left[\frac{d}{dt}\left[f\right]\right].$ (I.e., $T$ is the function that maps any function to its second derivative.)
- Prove that $\mathcal{E}$ and $\mathcal{O}$ are invariant under $T$. (You may use familiar rules from calculus in your computations, as long as you cite them.)
- Prove that $\mathcal{P}_n$ is invariant under $T$. (You may use familiar rules from calculus in your computations, as long as you cite them.)
- Let $S:\mathcal{P}_n \to \mathcal{P}_n$ be the function $T$ restricted to $\mathcal{P}_n$. (Note that the codomain for $S$ really is $\mathcal{P}_n$ since $\mathcal{P}_n$ is invariant under $T$.) Find subspaces of $\mathcal{P}_n$ that reduce $S$. Then choose a basis for each of your reducing subspaces and use these to compute a matrix representation for $S$.
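To get a feel for the matrix computation in the last part, here is a Python sketch that builds the matrix of the second-derivative map on $\mathcal{P}_n$ relative to the monomial basis $\{1,t,\cdots,t^n\}$. The problem asks you to use a basis adapted to your reducing subspaces, so the monomial basis here is only for illustration, and `second_derivative_matrix` is our own name.

```python
def second_derivative_matrix(n):
    # Matrix of d^2/dt^2 on P_n in the monomial basis {1, t, ..., t^n}:
    # column j holds the coordinates of d^2/dt^2 (t^j) = j(j-1) t^(j-2).
    size = n + 1
    M = [[0] * size for _ in range(size)]
    for j in range(2, size):
        M[j - 2][j] = j * (j - 1)
    return M

for row in second_derivative_matrix(3):
    print(row)
```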
- In this problem, we aim to study how certain natural operations on vectors and linear transformations (namely function evaluation and function composition) manifest when those vectors and linear transformations are "coordinatized" (i.e., by transforming a vector to its coordinates in a given basis, or representing a transformation as a matrix using bases).
First, we need some notation. Throughout this problem we will write $V,U$ and $W$ for vector spaces with bases $X=\{x_1,\cdots,x_c\}, Y=\{y_1,\cdots,y_r\}$, and $Z=\{z_1,\cdots,z_\ell\}$ (respectively). For a vector space $S$ with basis $B=\{b_1,\cdots,b_d\}$ we write $\text{Crd}_B$ for the function $\text{Crd}_B:S \to \mathbb{F}^d$ which sends each $s \in S$ to its coordinate vector under $B$; i.e., if $s=c_1b_1+\cdots+c_d b_d$, then $\text{Crd}_B(s)=(c_1,\cdots,c_d)$.
For $n,m \in \mathbb{N}$ and $\mathbb{F}$ a field, we write $M_{n\times m}(\mathbb{F})$ for the set of all matrices with $n$ rows and $m$ columns whose entries come from $\mathbb{F}$. It is a fact (that you don't have to prove) that $M_{n \times m}(\mathbb{F})$ is an $\mathbb{F}$-vector space under the usual matrix addition and scaling operations. For $A = (\alpha_{ij}) \in M_{r \times c}(\mathbb{F})$ and for a given $1 \leq j \leq c$, we write $\text{Col}_j(A)$ to denote the element of $\mathbb{F}^r$ whose $\ell$th entry is $\alpha_{\ell j}$. (More colloquially, $\text{Col}_j(A)$ is just the element of $\mathbb{F}^r$ whose entries are given by the values in the $j$th column of $A$.)
Halmos gave us a way to multiply two elements of $M_n(\mathbb{F})$; here we give a way to multiply appropriately-sized matrices and vectors. If $A \in M_{n \times m}(\mathbb{F})$ and $d = (d_1,\cdots,d_m) \in \mathbb{F}^m$, then we define $Ad$ to be the vector in $\mathbb{F}^n$ given by $$\sum_{j=1}^m d_j \text{Col}_j(A).$$ Using this, one can now define a matrix multiplication $M_{n \times m}(\mathbb{F}) \times M_{m \times \ell}(\mathbb{F}) \to M_{n \times \ell}(\mathbb{F})$: for $A \in M_{n \times m}(\mathbb{F})$ and $B \in M_{m \times \ell}(\mathbb{F})$, the product matrix $AB$ is defined by $$\text{Col}_t(AB)=A\text{Col}_t(B)$$ for all $1 \leq t \leq \ell$.
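The two definitions above can be written out mechanically in code. The following is an illustrative sketch (not part of the assignment) in plain Python, with matrices stored as lists of rows; the function names `col`, `mat_vec`, and `mat_mat` are our own choices, and indices are 0-based rather than 1-based.

```python
# Column-wise definitions of Ad and AB, exactly as in the text.
# A matrix is a list of rows; col(A, j) is the text's Col_j(A), 0-indexed.

def col(A, j):
    """Column j of A as a list."""
    return [row[j] for row in A]

def mat_vec(A, d):
    """Ad = sum_j d_j * Col_j(A)."""
    result = [0] * len(A)
    for j, d_j in enumerate(d):
        for i, entry in enumerate(col(A, j)):
            result[i] += d_j * entry
    return result

def mat_mat(A, B):
    """AB defined column by column: Col_t(AB) = A Col_t(B)."""
    cols = [mat_vec(A, col(B, t)) for t in range(len(B[0]))]
    # Transpose the list of columns back into a list of rows.
    return [[c[i] for c in cols] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_vec(A, [1, 1]))  # [3, 7] -- the sum of the columns of A
print(mat_mat(A, B))       # [[19, 22], [43, 50]]
```

Note that the column-by-column definition reproduces the familiar row-by-column product, which is the point of defining it this way.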
- Let $T:V \to U$ be a linear transformation, and recall that $X$ and $Y$ are bases for $V$ and $U$ (respectively). (Here we're using the more general definition of "linear transformation" that allows the domain and codomain to be different subspaces.) Associate to $T$ the matrix $[T]_{Y,X} \in M_{r \times c}(\mathbb{F})$ defined by $$\text{Col}_j([T]_{Y,X})=\text{Crd}_Y(T(x_j))$$ for all $1 \leq j \leq c$.
Prove that for all $v \in V$ we have $$[T]_{Y,X}\text{Crd}_X(v) = \text{Crd}_{Y}(T(v)).$$
- Let $T:V \to U$ and $R:U \to W$ be linear transformations. Prove that $$[RT]_{Z,X}=[R]_{Z,Y}[T]_{Y,X},$$ where the expression $RT$ on the left side indicates the composition $R \circ T: V \to W$ and the expression on the right side indicates the matrix product defined above.
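Before attempting a proof, it can help to see the first identity hold in one concrete case. The sketch below (illustrative only, not a proof) uses a made-up linear map $T$ on $\mathbb{R}^2$, a made-up nonstandard basis $X$, and the standard basis for $Y$; all of these choices are our own.

```python
# Sanity check of [T]_{Y,X} Crd_X(v) = Crd_Y(T(v)) for one example.

def T(v):
    """A sample linear map R^2 -> R^2: T(x, y) = (x + y, 2y)."""
    x, y = v
    return (x + y, 2 * y)

def crd_X(v):
    """Coordinates w.r.t. X = {(1,0), (1,1)}: (a,b) = (a-b)(1,0) + b(1,1)."""
    a, b = v
    return (a - b, b)

def crd_Y(v):
    """Y is the standard basis, so the coordinates are the entries."""
    return v

# Build [T]_{Y,X} column by column: Col_j = Crd_Y(T(x_j)).
X = [(1, 0), (1, 1)]
cols = [crd_Y(T(x)) for x in X]

def apply_matrix(cols, d):
    """Matrix-vector product via columns: sum_j d_j * Col_j."""
    return tuple(sum(d_j * c[i] for d_j, c in zip(d, cols)) for i in range(2))

v = (3, 1)
lhs = apply_matrix(cols, crd_X(v))   # [T]_{Y,X} Crd_X(v)
rhs = crd_Y(T(v))                    # Crd_Y(T(v))
print(lhs == rhs)  # True for this example
```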
Here are the Solutions.
Because of Thanksgiving break, I'm giving you the option to either not complete this assignment at all, or to work on a subset of these problems instead of all of them. If you choose to work on a nonempty subset of problems, you must complete at least two problems. If you complete two of them, then this pset will be worth half as much as other psets when I weight your homework average at the end of the term. If you complete three of the four, this pset will count for three-quarters of a pset when I weight your homework average. If you complete all four, this pset will count with the same weight as all other psets. If you choose to not submit this pset, it will not count against your homework average.
- Suppose that $\lambda=0$ is a proper value for $T \in L(V,V)$. Prove that $\det(T)=0$ by using the definition of the determinant function.
- Complete Problem 4(a--c) from section 62.
- Find an orthonormal basis for $\mathbb{C}[t]_{< 3}$ (under the "usual" inner product $(x,y)=\int_0^1 x(t)\overline{y(t)}~dt$).
- Suppose that $\mathcal{H}$ and $\mathcal{K}$ are subspaces of an inner product space $V$. Prove that $(\mathcal{H}+\mathcal{K})^\perp=\mathcal{H}^\perp \cap \mathcal{K}^\perp$.
Here are the Solutions.
- Complete problem 2 from section 70 of the text.
- Suppose that $V$ is an inner product space, and that $A,B \in L(V,V)$. We say that $A$ is congruent to $B$, written $A \equiv B$, if there exists an invertible $P \in L(V,V)$ so that $B = P^*AP$. It is a fact (that you don't have to prove) that congruence is an equivalence relation.
- Prove that if $A \equiv B$, then $A^* \equiv B^*$.
- Is there a linear transformation $A$ and a scalar $\alpha \in \mathbb{F}$ so that $A \equiv \alpha\cdot \text{id}$, but with $A \neq \alpha\cdot \text{id}$? If so, find an explicit example and justify your assertions; otherwise prove that $A \equiv \alpha\cdot \text{id}$ implies $A = \alpha \cdot \text{id}$.
- Are there linear transformations $A$ and $B$ so that $A \equiv B$ but $A^2 \not\equiv B^2$? If so, find an explicit example and justify your assertions; otherwise prove that $A \equiv B$ implies $A^2 \equiv B^2$.
[Hint: We don't have a lot of information about congruent transformations outside of the definition, so it can be hard to "see" when two transformations are congruent without explicitly referring to the definition. In answering (b) and (c), you might consider exploring some small examples --- say by using transformations on a $2$-dimensional vector space --- to run some experiments. You might ask questions like: what would it mean to be congruent to the identity transformation? the zero transformation?]
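If you want to run the experiments the hint suggests, a minimal harness like the following can help. It works with $2\times 2$ real matrices, where the adjoint $P^*$ is just the transpose; the particular matrices $A$ and $P$ below are arbitrary illustrative choices, and this is only a scratchpad, not part of any proof.

```python
# A minimal experiment harness for the congruence B = P*AP on 2x2 real
# matrices.  With real entries and the standard inner product, P* is
# the transpose of P.

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def mult(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def congruent_via(A, P):
    """Return P*AP, with P* implemented as the transpose (real case)."""
    return mult(mult(transpose(P), A), P)

# Sample experiment: transform a diagonal A by an invertible P and
# inspect the result.
A = [[2, 0], [0, 3]]
P = [[1, 1], [0, 1]]   # invertible since det(P) = 1
B = congruent_via(A, P)
print(B)  # [[2, 2], [2, 5]]
```

Trying different choices of $A$ (the identity, the zero transformation, a skew matrix) and different invertible $P$ is a quick way to build intuition for parts (b) and (c).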
- Suppose that $V$ is an inner product space and that $A,B \in L(V,V)$.
- Suppose that $A$ and $B$ are both self-adjoint transformations. Prove that $AB+BA$ is self-adjoint.
- Suppose that $A$ and $B$ are both skew. Prove that $AB+BA$ is self-adjoint.
- Suppose that $A$ is self-adjoint and $B$ is skew. Prove that $AB+BA$ is skew.
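A quick numeric sanity check (which is of course not a proof) can make claims like these concrete. In the real $2\times 2$ case the adjoint is the transpose, so "self-adjoint" means symmetric and "skew" means antisymmetric; the matrices below are arbitrary illustrative choices demonstrating part (c).

```python
# Check one instance of part (c): A self-adjoint, B skew ==> AB + BA skew.

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def mult(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(M, N):
    return [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]

A = [[1, 2], [2, 5]]    # symmetric: self-adjoint in the real case
B = [[0, 3], [-3, 0]]   # antisymmetric: skew in the real case

C = add(mult(A, B), mult(B, A))   # AB + BA
print(C)  # [[0, 18], [-18, 0]]
print(transpose(C) == [[-C[i][j] for j in range(2)] for i in range(2)])  # True: C is skew
```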
- Suppose that $V$ is a finite-dimensional real inner product space and that $A$ is skew.
- Prove that if $\dim(V)$ is odd, then $\det(A) = 0$.
- Prove that the rank of $A$ is even.
[Hint: for part (a) you can use some helpful properties of the determinant: for $\alpha \in \mathbb{F}$ and $A \in L(V,V)$ we have $\det(\alpha A) = \alpha^{\dim(V)}\det(A)$ and $\det(A^*)=\overline{\det(A)}$. For part (b), you might try to show that $R(A)$ and $N(A)$ reduce $A$; to do this, the most significant step (aside from showing each of these subspaces is invariant) is to argue that $R(A) \cap N(A) = \{0\}$. Once you have done this, it is not too bad to show that $R(A) \oplus N(A) = V$ using rank-nullity. Then argue that $A$ restricted to $R(A)$ is still a skew transformation, so that you can appeal to part (a).]
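As a numeric illustration of part (a) (again, not a proof): every $3\times 3$ real skew matrix has determinant zero, and you can watch this happen for several choices of entries. The helper names `det3` and `skew3` below are our own.

```python
# Illustration of part (a): a 3x3 real skew matrix has determinant 0.

def det3(M):
    """Cofactor expansion along the first row of a 3x3 matrix."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def skew3(a, b, c):
    """The general 3x3 real skew matrix built from entries a, b, c."""
    return [[0, a, b], [-a, 0, c], [-b, -c, 0]]

for (a, b, c) in [(1, 2, 3), (5, -1, 4), (0, 7, 2)]:
    print(det3(skew3(a, b, c)))  # 0 each time
```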
Here are the Solutions.