Mathematica is awesome in a lot of ways; one of them is that it lets you import pictures (and sounds too, as you've seen before). I'm going to import a picture of my little nephew, who turned 1 yesterday. He's pictured at age 6 months.
Mathematica will even tell us how the image is stored, provided you know how to ask. Below I get Mathematica to tell me that the image is stored as four separate pieces: the first contains the actual data for the image, and the other three are rules which tell a computer how to read that data.
The data of the image is really just a big matrix of integers between 0 and 255, where each entry gives the darkness of the pixel it corresponds to. It happens that for this particular picture I need to mess around with things a little bit in order to get at the matrix. For those who are interested, in this particular instance the data is stored as a matrix whose entries are row vectors with three identical entries. My function below just removes this redundancy and leaves me with a plain ol' matrix. You can see that it's 480x640 (since the image is 640 pixels wide and 480 pixels tall).
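The notebook does this step in Mathematica; here is a sketch of the same idea in Python/NumPy, using random stand-in data in place of the actual photo. The assumption (as described above) is that the image arrives as a 480x640 array of pixels, each stored as three identical channel values.

```python
import numpy as np

# Stand-in for the imported image data: a 480x640x3 array whose three
# channel values agree at every pixel (a grayscale image stored
# redundantly in RGB form).
rgb = np.random.randint(0, 256, size=(480, 640, 1)).repeat(3, axis=2)

# Since the three channel entries are identical, keeping any one of
# them removes the redundancy and leaves a plain 480x640 matrix.
gray = rgb[:, :, 0]
print(gray.shape)  # (480, 640)
```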
This matrix requires a lot of data: specifically, it takes 480·640 = 307,200 values (one for each entry in the matrix). What I'd like to do is cut down on the amount of information I have to keep track of. My brilliant idea is this: if there are singular values of this matrix which are small, then omitting them from the matrix shouldn't affect the matrix too much. That is to say, if s is the point after which singular values get small, then we should expect

A ≈ σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ⋯ + σ_s u_s v_s^T.
Notice that the quantity on the right contains not too much data: for each singular value we need to know
(1) the singular value
(2) the entries of u_i (there are n of them)
(3) the entries of v_i (there are m of them)
All in all, this means that we need s(1 + n + m) values of information, compared to the nm values of the full matrix.
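As a quick sanity check on that count, here is the arithmetic for our image (n = 480 rows, m = 640 columns), written as a small Python snippet rather than in the notebook's Mathematica:

```python
n, m = 480, 640          # image height and width
full = n * m             # values needed for the full matrix
print(full)              # 307200

# Keeping s singular values costs s*(1 + n + m) values:
# one for the singular value itself, n for each u_i, m for each v_i.
def rank_s_cost(s, n=480, m=640):
    return s * (1 + n + m)

print(rank_s_cost(60))          # 67260
print(rank_s_cost(60) / full)   # about 0.219 -- roughly a fifth
```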
So the question is: when do singular values get small? The following plot shows us that the singular values get (relatively) small quite quickly: in fact, they decay exponentially. So the significant contribution comes from the first few singular values, and much less is contributed by the rest.
The function below will spit out an image which is the approximation given by taking only the first k singular values of the matrix of our image. It will also tell us how many singular values it's used and the compression rate we get.
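The function described above lives in the Mathematica notebook; the following is a hedged NumPy sketch of the same construction. It builds the rank-k approximation from the first k singular values and reports the fraction of the original storage this requires, run here on random stand-in data rather than the actual image.

```python
import numpy as np

def svd_approximation(A, k):
    """Rank-k approximation of A built from its first k singular
    values, plus the fraction of the original storage it needs."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    n, m = A.shape
    compression = k * (1 + n + m) / (n * m)
    return approx, compression

# Usage on random stand-in data of the same size as the image:
A = np.random.rand(480, 640)
approx, rate = svd_approximation(A, 60)
```

Keeping every singular value reproduces the matrix exactly; the interesting regime is small k, where the approximation is close but far cheaper to store.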
Let's see what happens as we vary the number of singular values we keep. (Roll your mouse over the image below to see the movie.)
In this case it looks like if we use the first 60 singular values we'll get a great image with only a fifth of the data required!
So far in this class we have talked about how one can deal with `noisy data' from scientific experiments. By `noisy data' I mean observed data that has built-in error (since we will never observe anything perfectly), and by `deal with' I mean that least squares approximation was a technique we used to fit our data to a desirable model even though the data doesn't perfectly match the model. The singular value decomposition allows us to do a similar kind of noise reduction.
Below I define a function and plot how that function evolves over time. Mathematica does this by sampling the function at regular points in a square grid and then interpolating between the points. Initially the information is stored in a giant 20x23x23 array (there is a 23x23 matrix for each of the 20 sampled times), but I'll change that data structure later.
Here is a picture of how this function moves through time. (Roll your mouse over the image below to see the movie.)
Now I'm going to restructure the data so that the ith row contains all the observed data at time i. I could, in fact, organize my data differently if I wanted and still run this same procedure, but in general it's a good idea to have all observed data for time i either in column i or row i (these are just a transpose away from each other and won't affect the outcome).
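The restructuring step is a one-liner in NumPy terms (the notebook does the equivalent in Mathematica). Using random stand-in values for the sampled function:

```python
import numpy as np

# Stand-in for the 20x23x23 array of sampled function values
# (one 23x23 grid for each of the 20 sampled times).
samples = np.random.rand(20, 23, 23)

# Flatten each time's 23x23 grid into a single row, so that row i
# holds all observed data at time i: a 20x529 data matrix.
data = samples.reshape(20, 23 * 23)
print(data.shape)  # (20, 529)
```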
Now let's look at the singular value decomposition of the matrix which contains all my data. You will notice that there is an enormous drop-off in the singular values after the first two.
In real life, of course, observations are never perfect. Below is a collection of observations of the undulating wave from before. I've achieved this by having Mathematica add in an error factor at random in each observation. We can watch a movie of the observed data to see the motion it describes. (Roll your mouse over the image below to see the movie.)
After restructuring my data matrix, I'll plot the singular values of the noisy data in blue and the singular values of the perfect matrix in red.
So we can see that the first two singular values are quite close, whereas the latter singular values in the noisy data are substantially higher than in the perfect data matrix. So what we'll do is simply use the first two singular values from our noisy data as an approximation of the observed data (instead of using all the singular values of the noisy data).
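The whole denoising procedure can be sketched in a few lines of NumPy (again standing in for the notebook's Mathematica, with the rank-2 traveling wave sin(x + y − t) as an assumed stand-in for the actual function): add uniform noise in [-0.5, 0.5] to the data, then keep only the first two singular values of the noisy matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 "perfect" data: a traveling wave sampled at 20 times
# on a 23x23 grid, one flattened grid per row.
t = np.linspace(0, 2 * np.pi, 20)
x = np.linspace(0, 2 * np.pi, 23)
xx, yy = np.meshgrid(x, x)
perfect = np.array([np.sin(xx + yy - ti).ravel() for ti in t])

# Noisy observations: a uniform error in [-0.5, 0.5) at every entry.
noisy = perfect + rng.uniform(-0.5, 0.5, size=perfect.shape)

# Denoise by keeping only the first two singular values of the
# noisy data and discarding the rest.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
denoised = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]

# The truncated reconstruction sits much closer to the perfect data
# than the raw noisy observations do.
print(np.linalg.norm(noisy - perfect))
print(np.linalg.norm(denoised - perfect))
```

The discarded singular directions are exactly where (most of) the noise lives, which is why the rank-2 truncation smooths the data so effectively.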
Let's see how our data looks after we filtered out the offending singular values. (Roll your mouse over the image below to see the movie.)
Here you can really see how we have smoothed out the data. The movie in the middle is the original (perfect) data, the one on the right the noisy observed data, and the movie on the left the noise-reduced version of the observed data. (Roll your mouse over the image below to see the movie.)
I'm going to run the same calculations as we did last time, but this time my data is very noisy. In fact, I've added a number chosen randomly between -0.5 and 0.5 to each observation. You might not think this is very much, but the function I started with only takes values between -1 and 1. So I'm doing significant damage to the function by adding in these values.
As in the previous parts, an image with a border is a movie which you can activate by rolling over the image.