Lecture Outlines
We will cover a wide variety of material during class time, so your constant attendance is important. To help you organize your study materials, the list below gives an overview of the basic concepts covered during a given lecture period.
-
In today's class we mostly focused on playing with the symmetries of a square. We argued that there should be 8 such symmetries, and we described them geometrically and gave them the names $R_0, R_{90}, R_{180}, R_{270}, H, V, D, D'$. We wrote $D_4$ to denote the set (of 8) symmetries of the square. We also observed that it's possible to compose these symmetries, and that each such composition gives us one of the symmetries on the list we've already written down. This means that function composition is a binary operation on $D_4$.
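The closure observation can be checked mechanically. Here is a small sketch in Python (not from lecture; the corner labeling below is one possible convention, so the names attached to the particular reflections depend on that choice):

```python
# Encode each symmetry of the square as a permutation of the corners
# 0,1,2,3 (labeled counterclockwise from the bottom-left) in one-line
# notation: perm[i] is where corner i is sent.
D4 = {
    "R0":   (0, 1, 2, 3), "R90":  (1, 2, 3, 0),
    "R180": (2, 3, 0, 1), "R270": (3, 0, 1, 2),
    "H": (3, 2, 1, 0), "V": (1, 0, 3, 2),
    "D": (0, 3, 2, 1), "D'": (2, 1, 0, 3),
}

def compose(f, g):
    """(f after g): corner i goes to f(g(i))."""
    return tuple(f[g[i]] for i in range(4))

# Every composition of two symmetries lands back in D_4 (closure),
# so composition really is a binary operation on D_4:
assert all(compose(f, g) in set(D4.values())
           for f in D4.values() for g in D4.values())
```

Note that the two orders of composition can disagree (e.g. `compose(D4["R90"], D4["H"])` and `compose(D4["H"], D4["R90"])` are different reflections), a foreshadowing of non-commutativity.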
-
In today's class we finished off the Cayley table for $D_4$, and then we made a number of observations. For instance, we saw that the Cayley table is a "Latin square." We also saw that there was some "big picture" structure in terms of rotations and reflections: composing two rotations always results in a rotation, composing two reflections also results in a rotation, but composing a rotation with a reflection results in a reflection. We also made some observations about the identity element $R_0$, and the existence of inverses.
With this in mind, we then formally defined the notion of group (for which $D_4$ is going to be our recurring poster child). This required us to first define binary operations, and to label some as commutative and others as non-commutative. We saw lots of examples of binary operations, and some examples of non-binary operations. We saw a number of examples of groups, plus some examples of things that weren't groups because they failed one (or more) axioms.
Towards the end of class we started to work towards developing "modular arithmetic." In service of this goal, we defined divisibility and stated the division algorithm.
-
Picking up with our discussion of the division algorithm from last class, during this class we proved (most of) this result and then thought about its implications. We mostly focused on the remainder that we get when we try to divide an integer $a$ by a positive integer $n$. We referred to this quantity as the remainder of $a$ modulo $n$, which we wrote $a \pmod{n}$. We defined a notion of congruence modulo $n$ and saw that it was an equivalence relation. We discussed the notion of the equivalence class of $a$ modulo $n$ (which is just the set of all integers $b$ so that $a \equiv b \pmod{n}$). We saw that congruence is actually deeply connected to remainders; indeed, two numbers are congruent modulo $n$ if and only if they have the same remainder upon division by $n$. The big theoretical tool that we developed came at the end of class, when we saw that congruence satisfies a kind of substitution principle for expressions that involve arithmetic. Namely, we showed that if $a \equiv \hat a \pmod{n}$ and $b \equiv \hat b \pmod{n}$, then $$a+b \equiv \hat a + \hat b \pmod{n} \qquad \qquad \text{ and } \qquad \qquad ab \equiv \hat a \hat b \pmod{n}.$$
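The substitution principle is easy to spot-check numerically; a tiny sketch with arbitrarily chosen values (not the lecture's):

```python
# Replacing a and b by congruent values should not change the result of
# + or * modulo n.
n = 5
a, a_hat = 17, 2    # 17 ≡ 2 (mod 5)
b, b_hat = 9, 4     # 9  ≡ 4 (mod 5)
assert (a - a_hat) % n == 0 and (b - b_hat) % n == 0
assert (a + b) % n == (a_hat + b_hat) % n   # sums stay congruent
assert (a * b) % n == (a_hat * b_hat) % n   # products stay congruent
```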
-
Today we discussed why the big theorem from last time -- "Arithmetic is well-defined on congruence classes" -- has its particular name. We defined the set $\mathbb{Z}_n$ by $$\mathbb{Z}_n = \{[0]_n,[1]_n,[2]_n,\cdots,[n-1]_n\}$$ and saw that the binary operations of addition (defined by $[i]_n+[j]_n=[i+j]_n$) and multiplication (defined by $[i]_n\cdot [j]_n=[i\cdot j]_n$) are well-defined. With this in mind, it's natural to ask if either $(\mathbb{Z}_n,+)$ or $(\mathbb{Z}_n,\cdot)$ is a group. We showed that $(\mathbb{Z}_n,+)$ is, but that $(\mathbb{Z}_n,\cdot)$ is not (with the single exception of $n=1$). In fact, the easy obstacle we saw that prevents $(\mathbb{Z}_n,\cdot)$ from being a group is that $[0]_n$ won't have a multiplicative inverse (again, unless $n=1$). If we remove this conspicuous obstacle, do we now have a group? That is, do we know whether $(\mathbb{Z}_n-\{[0]_n\},\cdot)$ is a group? We computed the Cayley table for $(\mathbb{Z}_5-\{[0]_5\},\cdot)$ and saw that each element has an inverse; given that we already knew the other axioms of a group held for $\mathbb{Z}_n$ under multiplication, this means that $(\mathbb{Z}_5-\{[0]_5\},\cdot)$ is a group. But you were given a challenge to show that $(\mathbb{Z}_6-\{[0]_6\},\cdot)$ is not a group. What distinguishes these two cases, and what can we say in general?
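The challenge about $\mathbb{Z}_6$ can be explored by brute force; a sketch that checks only the inverse axiom (the one in question, since associativity and the identity $[1]_n$ carry over):

```python
def nonzero_classes_form_group(n):
    """Does every nonzero class in Z_n have a multiplicative inverse mod n?"""
    return all(any(a * b % n == 1 for b in range(1, n))
               for a in range(1, n))

assert nonzero_classes_form_group(5)        # (Z_5 - {[0]_5}, *) is a group
assert not nonzero_classes_form_group(6)    # e.g. [2]_6 has no inverse
```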
-
Today we started by discussing why the nonzero elements of $\mathbb{Z}_6$ are not a group under multiplication modulo $6$. We observed that the numbers which shared a common factor with $6$ had no multiplicative inverse. With this as motivation, we defined the notion of greatest common divisor, as well as relative primality. We saw that the greatest common divisor is a linear combination (via Bézout), and we gave an explicit algorithm for computing gcds via the division algorithm (the so-called Euclidean algorithm). We even saw how one could "reverse engineer" the Euclidean algorithm to produce an explicit expression of the gcd as a linear combination. We then saw that this was the key to determining whether $[a]_n$ has a multiplicative inverse modulo $n$.
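The "reverse engineered" Euclidean algorithm can be sketched recursively (a standard formulation, not necessarily the one from lecture):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    # If b*x + (a % b)*y = g, then a*y + b*(x - (a//b)*y) = g:
    return (g, y, x - (a // b) * y)

g, x, y = extended_gcd(34, 55)
assert g == 1 and 34 * x + 55 * y == 1    # gcd expressed as a linear combination

# [a]_n is invertible exactly when gcd(a, n) = 1, and the Bezout
# coefficient x is the inverse:
n, a = 26, 7
g, x, _ = extended_gcd(a, n)
assert g == 1 and (a * x) % n == 1
```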
-
Today we concluded our discussion of $U(n)$, not only seeing that it's the largest subset of $\mathbb{Z}_n$ which is a group under multiplication modulo $n$, but also characterizing those $n$ for which $\mathbb{Z}_n-\{[0]_n\}$ is a group.
In the latter half of class we started discussing generic properties enjoyed by all groups. For instance, we showed that each group has a unique identity, and that each element has a unique inverse. We also saw that groups have a cancellation law that is extremely useful. One of the applications of this result was the "walks like a duck for identity" result that gives us a fast way to determine if an element is the identity in a group.
-
We spent some time today discussing some additional properties that groups enjoy. For instance, we saw "walks like a duck" for inverses (which gives us a somewhat faster way to verify the inverse of a given element). We also saw the "socks and shoes" theorem, and we discussed generalized associativity. All these rules are extremely useful because they apply for all groups.
We spent the second part of class discussing the notion of order, both for groups and for elements. The order of a group is just the number of things in that group. For instance, $|\mathbb{Z}_n|=n$, whereas $|\mathbb{Z}|=\infty$. The order of an element $g \in G$ is defined to be $$\min\left\{n \in \mathbb{N}: \underbrace{g \star g \star \cdots \star g}_{\tiny n \text{ times}} = e_G\right\}$$ (with the convention that $\min \emptyset = \infty$). We saw some computations of the orders of some particular elements in some particular groups.
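The definition of the order of an element translates directly into a brute-force computation; a sketch, using $U(10)=\{1,3,7,9\}$ under multiplication mod $10$ as an illustrative example (not necessarily one from class):

```python
def order(g, op, identity):
    """Least n >= 1 with g*g*...*g (n times) equal to the identity.
    Brute force, so only safe when the order is finite."""
    power, n = g, 1
    while power != identity:
        power, n = op(power, g), n + 1
    return n

mult = lambda a, b: (a * b) % 10
assert [order(g, mult, 1) for g in (1, 3, 7, 9)] == [1, 4, 4, 2]
```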
-
We introduced the notion of subgroup, which captures the idea of one group "living within" another group. We gave a formal definition, and saw some examples that illustrated the two key parts of the definition (that $H$ is a subset of $G$, and that $H$ inherits its operation from $G$). We saw that if $H \leq G$, then the notions of "identity" and "inverses" are compatible between these two groups; specifically, that $e_H = e_G$, and that for all $h \in H$ the notion of "inverse of $h$ in the group $H$" and "inverse of $h$ in the group $G$" coincide. We then stated and proved the one-step subgroup test, which gives us an efficient way to check whether a nonempty subset of $G$ is actually a subgroup. We did an example, and then stated two related results: the two-step subgroup test (which is logically equivalent to the one-step subgroup test, but just parcels out some of the relevant calculations) and the finite subgroup test (which gives an even faster way to tell if $H \leq G$ in the case where $|H| < \infty$).
-
We started today by discussing a handful of canonical subgroups (i.e., subgroups that we can find in any group we're interested in). These included the center of the group, defined by $$Z(G) = \{g \in G: \forall h \in G, hg = gh\}.$$ For a given element $a \in G$, we also defined the centralizer $C(a)$ and the cyclic subgroup $\langle a \rangle$ by $$C(a) = \{g \in G: ga=ag\}, \qquad \langle a \rangle = \{a^k: k \in \mathbb{Z}\}.$$ We thought about how one might show that these subsets are really subgroups (via the $1$ or $2$-step subgroup tests), and we saw a handful of examples. With cyclic subgroups on our mind, we also asked how one might be able to tease out properties of the cyclic subgroup $\langle a \rangle$ in terms of information about its generator $a$. We quickly homed in on investigating the important role that $|a|$ plays in the structure of $\langle a \rangle$. For instance, we proved the equal power theorem (which tells us under what conditions on $i,j \in \mathbb{Z}$ we have $a^i = a^j$), and saw that it was the key result for proving the order divides lemma, as well as the "order is order" theorem.
-
In today's class we continued our discussion of powers of elements. Last time we asked when two powers are equal as elements (i.e., when is $a^i=a^j$), but this time we started by asking when two elements generate the same cyclic subgroup (i.e., when $\langle a^i \rangle = \langle a^j \rangle$). We saw that when $\gcd(i,|a|)=d$ we have $\langle a^i \rangle = \langle a^d \rangle$. Using the "order is order" theorem from last class, we were then able to use this result to calculate the order of a power of an element; again writing $d = \gcd(i,|a|)$, we found that $|a^i| = \frac{|a|}{d}$. This allowed us to prove a theorem about the generators of the group $\mathbb{Z}_n$. We finished class by discussing cyclic groups, and proved in particular that cyclic groups are necessarily abelian.
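The order formula can be spot-checked in $(\mathbb{Z}_{12},+)$ (an arbitrary choice), where the generator $a=[1]_{12}$ has order $12$ and the "power" $a^i$ is the multiple $[i]_{12}$:

```python
from math import gcd

# Verify |a^i| = |a| / gcd(i, |a|) for every i, by computing the additive
# order of [i]_12 directly and comparing with the formula.
n = 12
for i in range(1, n):
    k, total = 1, i % n
    while total != 0:
        total, k = (total + i) % n, k + 1
    assert k == n // gcd(i, n)
```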
-
Today we stated and proved the fundamental theorem of cyclic groups. We used this to classify subgroups of a cyclic group of size $100$. It also gave us a technique for showing something is not cyclic if the number of elements of a particular order isn't "right."
-
The next big topic we want to discuss is symmetric groups, but to do that we first need to review some basic function theory. We discussed the notion of functions, as well as their domain and codomain. We defined the image of a function, and we defined injectivity (including a discussion of a proof template for verifying injectivity), surjectivity (again, with an accompanying template), and bijectivity. We gave examples and non-examples of these. We saw that each of these properties is preserved by composition.
-
We dived into studying bijections today, and in particular saw that they are precisely the functions which have inverses. We saw that inverse functions are unique. We then used what we have learned to prove that the set of bijections from a set $A$ back to itself forms a group, which we called the symmetric group on $A$ (denoted $S_A$). We thought in particular about the special case where $A = \{1,2,\cdots,n\}$, in which case we abbreviated $S_{\{1,2,\cdots,n\}}$ as $S_n$. We thought about different ways to represent elements, and settled in particular on cycle representations for elements. We saw that $D_4$ can be "seen" inside $S_4$.
-
Today we thought about what we can "see" about an element based on its cycle structure. For instance, we proved that $m$-cycles have order $m$. We also saw that every element can be written as a product of disjoint cycles, and that Ruffini's theorem gives us a way to compute the order of a permutation based on its cycle structure. We saw (but didn't prove) that disjoint cycles commute.
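Ruffini's theorem is straightforward to implement; a sketch using $0$-indexed one-line notation (an implementation convention, not the lecture's cycle notation):

```python
from math import gcd

def cycle_lengths(perm):
    """Cycle lengths of a permutation given in one-line notation
    (perm[i] is the image of i)."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j, length = perm[j], length + 1
            lengths.append(length)
    return lengths

def perm_order(perm):
    """Ruffini: the order is the lcm of the disjoint-cycle lengths."""
    result = 1
    for length in cycle_lengths(perm):
        result = result * length // gcd(result, length)
    return result

# The product of a 3-cycle and a disjoint 2-cycle has order lcm(3, 2) = 6:
assert sorted(cycle_lengths((1, 2, 0, 4, 3))) == [2, 3]
assert perm_order((1, 2, 0, 4, 3)) == 6
```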
-
Last class we observed that every element of $S_n$ can be written as a product of transpositions. However, the way we write an element in this way is far from unique. For instance, we have $$(1,2,3,4,5) = (1,5)(1,4)(1,3)(1,2) = (2,3)(1,5)(1,4)(2,3)(4,5)(1,3)(1,2)(4,5).$$ What, if anything, is invariant in the way that we express an element as a product of transpositions? We saw that no matter how you write an element as a product of transpositions, the parity of the number of transpositions is always the same. That is to say, any two expressions of $\sigma$ as a product of transpositions must either both have an even number of factors or both have an odd number of factors. This theorem was the basis for our definition of what it means for a permutation to be "even" or "odd." We saw that $\varepsilon$ is always even. We wrote down a theorem that described how evenness and oddness behave relative to the group operation. We defined $A_n$ to be the set of all even permutations, and we saw that it was a group.
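The parity invariant can be computed from cycle structure: a permutation of an $n$-element set with $c$ cycles (fixed points included) factors into $n-c$ transpositions. A sketch under that standard fact (this is one of several equivalent approaches, not necessarily the lecture's):

```python
def is_even(perm):
    """Even iff n - (number of cycles) is even, for perm in one-line notation."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return (len(perm) - cycles) % 2 == 0

assert is_even((1, 2, 3, 4, 0))      # a 5-cycle = 4 transpositions: even
assert not is_even((1, 0, 2, 3, 4))  # a single transposition: odd
assert is_even((0, 1, 2, 3, 4))      # the identity is even
```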
-
In today's class we studied a special class of functions between groups. We know that any function is a way to connect two sets (the domain and codomain) by sending elements from one set to elements of another. But what does it mean to have a function between groups, given that a group isn't just a set of elements but also an operation? We would like a function between groups to respect this additional structure, so we defined a homomorphism from a group $(G,\star)$ to a group $(\hat G,\bullet)$ to be a function $f:G \to \hat G$ such that for all $g_1,g_2 \in G$ we have $f(g_1\star g_2) = f(g_1)\bullet f(g_2)$. We saw a number of examples and non-examples. We defined the notion of the kernel of a homomorphism, which is the set $$\ker(f) = \{g \in G: f(g) = e_{\hat G}\}.$$ We started a proof that for any homomorphism $f:G \to \hat G$ we have $\ker(f) \leq G$.
-
We kept discussing homomorphisms today, and focused in particular on their action on elements. We started by finishing our proof that any homomorphism $f:G \to \hat G$ has $\ker(f) \leq G$. This gave rise to the proofs that homomorphisms preserve identity, inverses, and powers. We also saw that for any $g \in G$ we have $|f(g)| \mid |g|$. We finished class by introducing the notion of inverse image; we saw, for example, that $\ker(f) = f^{-1}(\{e_{\hat G}\})$.
-
Today we started by thinking about the preimage of an element under a homomorphism. Assuming $f:G \to \hat G$ is a homomorphism and that $\hat g \in \hat G$ has some $g_0 \in G$ with $f(g_0) = \hat g$, we proved that $f^{-1}(\{\hat g\}) = g_0\ker(f) = \{g_0 k: k \in \ker(f)\}$. This tells us in particular that $f$ is $|\ker(f)|$-to-$1$ onto the image of $f$. We also saw that preimages of subgroups from $\hat G$ are themselves subgroups of $G$, and likewise that images of subgroups of $G$ are subgroups of $\hat G$. Additionally, we saw that homomorphisms preserve cyclicity and abelian-ness; specifically, if $H \leq G$ is abelian (or cyclic), then $f(H)$ is abelian (or cyclic). We finished class by defining isomorphism.
-
Today we studied isomorphisms in earnest. The intuition is that if $f:G \to \hat G$ is an isomorphism, then $G$ and $\hat G$ are basically "the same" group. By this we don't mean the groups are actually equal; instead, we mean that the groups have the same group-theoretic behavior. We saw a number of examples and non-examples. We stated a theorem that listed a number of qualities that should be preserved between two groups $G$ and $\hat G$ for which there exists an isomorphism $f:G \to \hat G$; for example, $G$ is abelian iff $\hat G$ is abelian, and $G$ is cyclic iff $\hat G$ is cyclic. We also saw that for any given $n \in \mathbb{N}$, the number of elements of order $n$ in $G$ and $\hat G$ should be the same. This is a very useful tool that lets us determine when some groups are not isomorphic. For instance, we defined a new nonabelian group of size $8$ which we denoted $Q_8$. We argued that there does not exist an isomorphism $f:D_4 \to Q_8$ because they have a different number of elements of order $2$.
-
We started class by proving that "isomorphic to" is an equivalence relation on the collection of groups. We proved that every cyclic group is isomorphic to $\mathbb{Z}$ or to some $\mathbb{Z}_n$ (determined by the size of $G$). We stated and proved Cayley's theorem, which tells us that every group is isomorphic to a subgroup of a symmetric group. We defined the notion of an automorphism group, as well as an inner automorphism group. We said that $\text{Aut}(\mathbb{Z}_n) \simeq U(n)$.
-
At the start of class we discussed the statement we finished with last class period: for $n \in \mathbb{N}$, we have $\text{Aut}(\mathbb{Z}_n) \simeq U(n)$. Though we didn't give a complete proof, we did discuss how one could imagine creating a function $f:\text{Aut}(\mathbb{Z}_n) \to U(n)$; the key observation is that any homomorphism with domain $\mathbb{Z}_n$ is entirely determined by where it sends $[1]_n$, and furthermore that to be an isomorphism we would need the image of $[1]_n$ to also be a generator of $\mathbb{Z}_n$. Since we learned a while ago that the generators for $\mathbb{Z}_n$ are precisely the elements of $U(n)$, this means each automorphism of $\mathbb{Z}_n$ sends $[1]_n$ to an element of $U(n)$. Hence to define the desired function $f:\text{Aut}(\mathbb{Z}_n) \to U(n)$, we decided that for a given $\varphi \in \text{Aut}(\mathbb{Z}_n)$ we would set $f(\varphi) = \varphi([1]_n)$. We showed this map was injective, but punted on proving that it was surjective and operation preserving.
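The correspondence can be sanity-checked for a small modulus (here $n=10$, chosen arbitrarily): the maps $x \mapsto ux \bmod n$ for $u \in U(n)$ should each be an automorphism of $\mathbb{Z}_n$, matching the claim that an automorphism is determined by where it sends $[1]_n$.

```python
from math import gcd

n = 10
U = [u for u in range(1, n) if gcd(u, n) == 1]   # U(10) = {1, 3, 7, 9}

for u in U:
    f = lambda x: (u * x) % n
    # f is a bijection of Z_10 ...
    assert sorted(f(x) for x in range(n)) == list(range(n))
    # ... and preserves addition, so it is an automorphism:
    assert all(f((x + y) % n) == (f(x) + f(y)) % n
               for x in range(n) for y in range(n))

assert len(U) == 4   # consistent with |Aut(Z_10)| = |U(10)|
```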
However, the fact that $\text{Aut}(\mathbb{Z}_n) \simeq U(n)$ prompts a more general question: when can we replace complicated-looking groups with other groups that are simpler (or are "built out of" simpler groups)? To help give this idea some context, we revisited the notion of the direct sum of two groups; this is the group-theoretic way to build new groups out of old groups (or, intuitively, to "glue" groups together to make something new). We reviewed some basic properties of direct sums, such as $e_{G \oplus \hat G} = (e_G,e_{\hat G})$, that $|G \oplus \hat G| = |G| \cdot |\hat G|$, and that $G \oplus \hat G$ is abelian if and only if $G$ and $\hat G$ are abelian. We noted that $|(g_1,g_2)| = \text{lcm}(|g_1|,|g_2|)$. This was the basis for a somewhat more complicated theorem that tells us when the direct sum of cyclic groups is again cyclic, namely: if $G_1,\cdots,G_n$ are finite groups, then $G_1 \oplus \cdots \oplus G_n$ is cyclic if and only if each $G_i$ is cyclic, and for all $1 \leq i < j \leq n$ we have $\gcd(|G_i|,|G_j|)=1$. For instance, we noted that $\mathbb{Z}_{15} \simeq \mathbb{Z}_3 \oplus \mathbb{Z}_5$.
-
Today we were mostly interested in ways to "break down" some familiar groups into smaller groups via direct sums. Our two big results were that if $n=m_1m_2\cdots m_k$ such that $\gcd(m_i,m_j)=1$ for all $i \neq j$, then we have $$\mathbb{Z}_n \simeq \mathbb{Z}_{m_1}\oplus \mathbb{Z}_{m_2} \oplus \cdots\oplus \mathbb{Z}_{m_k}$$ $$U(n) \simeq U(m_1) \oplus U(m_2) \oplus \cdots\oplus U(m_k).$$ We saw plenty of examples of these theorems in action (including some non-examples where an integer factorization of $n$ didn't result in a group-theoretic "factorization" of $\mathbb{Z}_n$!).
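The example $\mathbb{Z}_{15} \simeq \mathbb{Z}_3 \oplus \mathbb{Z}_5$ can be verified directly; a sketch of the natural map $[a]_{15} \mapsto ([a]_3, [a]_5)$:

```python
# The map should be a bijection onto Z_3 x Z_5 that preserves addition.
phi = lambda a: (a % 3, a % 5)

images = [phi(a) for a in range(15)]
assert len(set(images)) == 15   # injective on 15 elements, hence bijective

assert all(phi((a + b) % 15) == ((phi(a)[0] + phi(b)[0]) % 3,
                                 (phi(a)[1] + phi(b)[1]) % 5)
           for a in range(15) for b in range(15))   # operation preserving
```

(Note that $\gcd(3,5)=1$ is essential; the analogous map $\mathbb{Z}_4 \to \mathbb{Z}_2 \oplus \mathbb{Z}_2$ fails to be injective.)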
-
Today we wrapped up our discussion of "decomposing" $U(n)$. We already saw last time that if $n=m_1m_2\cdots m_k$ with $\gcd(m_i,m_j)=1$ for all $i \neq j$, then $$U(n) \simeq U(m_1) \oplus U(m_2) \oplus \cdots\oplus U(m_k).$$ Given that the most "natural" factorization of $n$ into pairwise relatively prime factors comes from its prime factorization, we asked: what can we say about the structure of $U(p^e)$ when $p$ is prime? Gauss' theorem told us that for odd primes we have $U(p^e) \simeq \mathbb{Z}_{p^{e-1}(p-1)}$, whereas when $p=2$ we have $$U(2^e) \simeq \left\{\begin{array}{ll}\mathbb{Z}_1,&\text{ if }e=1\\\mathbb{Z}_2,&\text{ if }e=2\\\mathbb{Z}_{2^{e-2}}\oplus \mathbb{Z}_2,&\text{ if }e>2.\end{array}\right.$$ This means that we can write any $U$ group as a direct sum of $\mathbb{Z}$ groups! This gives us deep new insight into the structure of $U$ groups. For instance, we were able to use this to enumerate the elements of order $2$ in $U(7000)$.
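The $U(7000)$ enumeration can be reproduced by brute force; a sketch, with the structural bookkeeping (the same computation done by hand) recorded in the comment:

```python
from math import gcd

# 7000 = 2^3 * 5^3 * 7, so U(7000) ≅ U(8) ⊕ U(125) ⊕ U(7)
#                                  ≅ (Z_2 ⊕ Z_2) ⊕ Z_100 ⊕ Z_6.
# Solutions of x^2 = identity: 4 * 2 * 2 = 16, so 16 - 1 = 15 elements
# have order exactly 2.
count = sum(1 for a in range(2, 7000)
            if gcd(a, 7000) == 1 and (a * a) % 7000 == 1)
assert count == 15
```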
These decomposition results from the last two class periods are extremely powerful, and suggest that it would be nice to have some new tool that lets us probe group structures for groups other than those of the form $U(n)$ or $\mathbb{Z}_n$. With this in mind, we introduced a (sort of) new topic: cosets. We defined the notion of left and right cosets, and we gave lots of examples. We gave a theorem that listed a lot of results about cosets, and we used those results to prove Lagrange's theorem. Lagrange's theorem is an extraordinarily powerful tool, and one that many folks consider the most powerful theorem in introductory group theory. We will see how it earned that reputation in the next few class periods.
-
Today we discussed some of the immediate applications of Lagrange's theorem. For instance, if $g \in G$ is given and $|G|< \infty$, then we know that $|g| \mid |G|$. This allowed us to recover two big results from number theory: Euler's theorem (that says if $\gcd(a,n)=1$, then $a^{|U(n)|}\equiv 1 \pmod{n}$) and Fermat's little theorem (which says that if $p$ is prime and $a \in \mathbb{Z}$, then $a^p \equiv a \pmod{p}$). These are very interesting results that reveal a lot of depth to the structure of the integers. For instance, we used Fermat's little theorem to give a proof that $4$ is composite without actually factoring $4$! This cute example is actually the tip of a very big iceberg, and indeed results like this are used to verify whether certain large numbers are "probable primes" (i.e., numbers which are very likely prime, even if we don't know for sure). We also saw a few applications of Lagrange's theorem that tell us that the size of the group can sometimes be used to classify the structure of the group. For instance, if $|G|$ is a prime number $p$, then we saw that $G \simeq \mathbb{Z}_p$. In a similar vein, we gave a result that shows that if $|G|=2p$ for some prime $p$, then either $G \simeq \mathbb{Z}_{2p}$ or $G \simeq D_p$. These results are (almost!) enough to classify groups of order up to $10$; we saw a graphical depiction of these groups as well.
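The compositeness test implicit in this discussion fits in a few lines; a sketch:

```python
def fermat_says_composite(a, n):
    """If a^n is not congruent to a mod n, Fermat's little theorem forces n
    to be composite -- no factoring required."""
    return pow(a, n, n) != a % n

assert fermat_says_composite(2, 4)        # 2^4 = 16 ≡ 0 (mod 4), not 2
assert not fermat_says_composite(2, 7)    # no witness here (7 is prime)
assert not fermat_says_composite(2, 341)  # but 341 = 11*31 fools base 2!
```

The last line shows why the converse fails: some composites slip past a given base, which is why such tests only certify "probable primes" rather than primes.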
-
We saw a few class periods ago that if $H \leq G$, then most left cosets of $H$ are not subgroups of $G$ (indeed, the only coset which is a subgroup is the coset $e_GH=H$ itself). But if the cosets aren't individually a group, could we perhaps collectively turn them into a group? To do this, we'd like some "natural" group structure on cosets. One reasonable definition for an operation on left cosets is as follows: $$aH\star bH = (a\star b)H.$$
With this in mind, we asked whether this operation really makes sense. The crux of the issue is that our function definition depends on what element we use to represent the coset, whereas any given coset can be represented in lots of different ways. If I choose to represent the cosets as $aH$ and $bH$, but you choose to represent those same cosets as $\hat aH$ and $\hat bH$, how do we know that my definition for their product (which would be the coset $abH$) will match your definition for their product (which would be the coset $\hat a \hat bH$)? We computed a few examples and saw that this dilemma is a real sticking point. Indeed, it doesn't always result in a well-defined function (even though it sometimes does). The good news is that when the operation is well-defined, then we proved that the set of left cosets of $H$ in $G$ (which we denoted $G/H$, pronounced "$G$ modulo $H$" or "$G$ mod $H$" or "the quotient of $G$ by $H$") forms a group under this operation. We saw some specific examples to make this abstract idea a bit more concrete.
-
Last class period we produced an operation on the set of left cosets, and we saw that this operation is not always well-defined. How, then, can we determine when the operation is well-defined, so that the quotient group $G/H$ is a sensible object to investigate? Towards that goal, today we defined the notion of a normal subgroup. We saw that a subgroup $H$ is normal in $G$ (denoted $H \lhd G$) if for all $g \in G$ we have $gH=Hg$. We saw some examples and non-examples of this concept. We proved that in certain situations, we can recognize that a subgroup is normal "easily" (e.g., if $G$ is abelian, or more generally if $H \leq Z(G)$; also we saw this holds when $|G|/|H|=2$), but we also gave a normal subgroup test to check for normality when these "easy" checks fail. We used this to prove that $\text{SL}_n(\mathbb{R}) \lhd \text{GL}_n(\mathbb{R})$ and that $\text{Inn}(G) \lhd \text{Aut}(G)$.
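The well-definedness failure can be exhibited concretely; a sketch in $S_3$ with the non-normal subgroup $H=\{\varepsilon,(0\;1)\}$ (this particular example is mine; the ones from class may have differed):

```python
# Permutations of {0,1,2} in one-line notation; cosets of H = {e, (0 1)}.
def compose(f, g):
    return tuple(f[g[i]] for i in range(3))

e, t = (0, 1, 2), (1, 0, 2)
H = {e, t}
coset = lambda g: frozenset(compose(g, h) for h in H)

# Two different representatives of the same left coset:
a, a_hat = (1, 2, 0), (2, 1, 0)
assert coset(a) == coset(a_hat)

# "Multiplying" by the coset of b gives different answers depending on
# which representative we picked -- the operation is not well-defined:
b = (2, 0, 1)
assert coset(compose(a, b)) != coset(compose(a_hat, b))
```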
-
We have spent a few days defining what factor groups are, but we haven't spent as much time exploring how they can be useful (aside, of course, from providing a way to make "new" groups out of "old" ones). We said there are two big ways that factor groups can help us in our understanding of groups. First, they can allow us to understand structural properties of a group. For instance, the $G/Z$ theorem tells us that $G$ is abelian whenever $G/Z(G)$ is cyclic. This is a deep theorem with a lot of consequences. For instance, we used it to prove that any nonabelian group of order $pq$ has a trivial center. (It is also the key ingredient for showing that any group of order $p^2$ is abelian.) The other way that factor groups are useful is that they give us a "ladder" for proving results by induction. For instance, we proved Cauchy's theorem on finite abelian groups by using induction via quotient groups.
-
We have spent a considerable amount of time thinking about normality, but haven't yet explored how normality interacts with group homomorphisms. Today we proved that normality plays nice with "pushing forward" and "pulling back" under homomorphisms. Specifically, if $f:G \to \hat G$ is a homomorphism, then: if $H \lhd G$, then $f(H) \lhd f(G)$; and if $K \lhd \hat G$, then $f^{-1}(K) \lhd G$. In particular, this tells us that the kernel of any homomorphism is a normal subgroup of the domain. We used this fact to find lots of normal subgroups. We also considered the structure of the group $G/\ker(f)$ itself, and stated (and mostly proved) the first isomorphism theorem: that $G/\ker(f) \simeq f(G)$.
-
We are at the end of our discussion of groups, and so as one last "hurrah" in this area we spent the day discussing one of the big classification theorems for groups: the fundamental theorem of finite abelian groups. (We did the "primary decomposition" form of this result, but there are other variations on this theorem that give slightly different looking results.) The theorem tells us that if $G$ is abelian and $|G|=n = p_1^{e_1}\cdots p_k^{e_k}$, then there must be subgroups $A_1,\cdots,A_k \leq G$ so that $$G \simeq A_1 \oplus \cdots \oplus A_k,$$ and such that each $A_i$ has $$A_i \simeq \mathbb{Z}_{p_i^{a_{i,1}}}\oplus \cdots \oplus \mathbb{Z}_{p_i^{a_{i,n_i}}}$$ where $a_{i,1}+\cdots+a_{i,n_i}=e_i$.
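The "list of candidates" process boils down to enumerating partitions of each exponent $e_i$; a sketch of that counting step:

```python
def partitions(e):
    """All partitions of e as weakly decreasing lists of positive parts."""
    def helper(remaining, max_part):
        if remaining == 0:
            yield []
        for part in range(min(remaining, max_part), 0, -1):
            for rest in helper(remaining - part, part):
                yield [part] + rest
    return list(helper(e, e))

# For |G| = 8 = 2^3, the candidates correspond to partitions of 3:
# Z_8, Z_4 + Z_2, Z_2 + Z_2 + Z_2.
assert sorted(partitions(3)) == [[1, 1, 1], [2, 1], [3]]

# For |G| = 72 = 2^3 * 3^2, there are p(3) * p(2) = 3 * 2 = 6 candidates.
assert len(partitions(3)) * len(partitions(2)) == 6
```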
Though this result is very technical in appearance, it actually is a tremendous boon to studying abelian groups. It provides a way to create a "list of candidates" for the isomorphism type of a given abelian group based on its size, and this in turn allows us to use a kind of process of elimination to determine the isomorphism type of the group. We spent lots of time doing examples to showcase how this theorem works for groups of particular sizes.
-
Now that we understand groups very well, it's time to turn to another class of algebraic objects: rings. A ring is a set $R$ together with two binary operations (which we call addition and multiplication) that satisfies a few axioms. For one, it must be the case that $(R,+)$ is an abelian group. We also insist that multiplication is associative, and that multiplication distributes across addition. Note that in a ring, we do not insist that multiplication commutes, nor that $R$ has a multiplicative identity (much less multiplicative inverses). We saw lots of examples of rings, many of which we have been using for a very long time. We saw that rings enjoy some properties that we're used to (e.g., uniqueness of identities), but some properties that we are familiar with in the context of groups don't translate over to rings. For instance, we saw that "walks like a duck" for multiplicative identity fails in general. The source of this failure is the potential presence of so-called zero divisors. We saw some examples of zero divisors, and we saw that we have a kind of cancellation property when we "avoid" zero divisors appropriately. We wrapped class by writing out some familiar arithmetic properties enjoyed by rings.
-
We have already seen some familiar examples of rings, and we've explored properties that rings can (and might not!) have. We gave proofs for parts of the "basic arithmetic of rings" theorem. We then introduced some new terminology for rings. We defined the notion of the unit group of a ring, we defined what it means for a ring to be an integral domain, and what it means for a ring to be a field. We gave a handful of examples of each of these properties.
-
A lot of what we've seen so far about rings harkens back to properties that we've seen for groups. How far can we push that connection? What kinds of properties/objects exist for groups, and to what degree can we see them manifested in rings? For example, we know that some groups have a commutative operation (the abelian ones), and some don't (the non-abelian ones). This same dichotomy plays out for rings, where we have some rings with a commutative multiplication operation (the commutative rings) while others have a non-commutative multiplication operation.
With this basic premise as motivation, we studied the notion of subrings, and we saw that there was such a thing as a subring test (these are the analogs of subgroups and the subgroup tests). We also defined the notion of an ideal, which we said was supposed to be the analog of a normal subgroup. We then defined principal ideals, which are sort of like the ring-theoretic analogs of cyclic normal subgroups. Since ideals are supposed to be analogs to normal subgroups, this made us wonder about whether we can define quotient rings. We got started with this process, but said we'd finish the discussion next class period.
-
If $I$ is an ideal in a ring $R$, then we defined $R/I$ to be the set of additive cosets of $I$ (that is, elements of $R/I$ are cosets $r+I$ with $r \in R$). To make this set of cosets into a ring, we would need an addition operation and a multiplication operation, so we set $$(r_1+I)+(r_2+I)=(r_1+r_2)+I \qquad \qquad \text{ and } \qquad \qquad (r_1+I)(r_2+I)=r_1r_2+I.$$ Do these operations make $R/I$ into a ring? It turns out the addition operation is always well defined (since $I$ becomes a normal subgroup when considered in the abelian group $(R,+)$, and all subgroups of abelian groups are normal). But is the multiplication operation well-defined? It turns out that multiplication is well-defined precisely because $I$ is an ideal; had $I$ been a mere subring, well-definedness could fail (very much in parallel to the issues we saw around well-definedness of coset operations for groups). We thought about several examples of ideals, as well as what their corresponding quotient "looks like." Along the way, we defined what a ring homomorphism is, as well as a ring isomorphism. We finished class by studying the quotient ring $\mathbb{R}[x]/(x^2+1)$.
-
We spent a good part of today's class continuing our investigation into the quotient ring $\mathbb{R}[x]/(x^2+1)$, ultimately giving a proof that it is isomorphic to $\mathbb{C}$. Towards the end of class, we asked a broader question: what was it about $(x^2+1)$ which allowed the quotient we created to be something so nice (namely, a field)? To start answering that question, we defined the notion of a prime ideal at the end of class.
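The arithmetic behind this isomorphism can be sketched by representing each coset of $(x^2+1)$ by its remainder $a+bx$ and reducing products using $x^2 \equiv -1$, then comparing with $\mathbb{C}$:

```python
def mult(p, q):
    """Multiply a + b*x and c + d*x, then reduce modulo x^2 + 1."""
    a, b = p
    c, d = q
    # (a + b*x)(c + d*x) = ac + (ad + bc)x + bd*x^2, and x^2 ≡ -1:
    return (a * c - b * d, a * d + b * c)

# The rule matches complex multiplication under a + b*x <-> a + b*i:
for (a, b) in [(1, 2), (0, 1), (-3, 5)]:
    for (c, d) in [(2, -1), (0, 1), (4, 4)]:
        z = complex(a, b) * complex(c, d)
        assert mult((a, b), (c, d)) == (z.real, z.imag)
```

In particular the coset of $x$ behaves exactly like $i$: its square is the coset of $-1$.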
-
Today we thought about what properties of an ideal $I$ allow us to recover certain desirable properties in its quotient $R/I$. We saw that if $R$ is a commutative ring with unity, then $R/I$ is an integral domain if and only if $I$ is prime, whereas $R/I$ is a field if and only if $I$ is maximal. In addition to defining primality and maximality, we also saw examples of both.
-
Today we thought about where we could go next in our exploration of algebra (if it weren't the last day of class!). Charting potential courses forward largely asked us to look backward at some of the motivating questions in classical algebra (e.g., why is the general quintic insolvable...and what does that even mean? what is Fermat's Last Theorem all about? who would be interested in non-commutative rings?).