[MUSIC] Potpourri, that's a strange name for a lesson. Well, the first course was getting too long. We had to leave out a few concepts that were not crucial in an introductory course. This course is about mastering MATLAB, is it not? So we need to cover everything. Well, maybe not everything, but you get my point. So this lesson covers topics that are important to know but did not fall neatly into a single category. First, we'll cover mixed mode arithmetic, which is a fancy way of saying what happens when we mix different data types, like doubles and integers, in a single expression. Then we'll look at a unique MATLAB feature: its built-in support for solving systems of linear equations. Finally, we'll take a peek at live scripts, which make it easy to create a nice-looking document that contains both MATLAB code and its output.

So here we go, mixed mode arithmetic. We covered numerical data types in the first course, but as a reminder, you can see the table behind me showing the two floating point types, double and single, and the eight integer types that MATLAB provides. However, we did not cover what happens when you use arithmetic expressions in your program that have mixtures of these types. That's called mixed mode arithmetic, and it has quite a few rules. We'll look at those rules now, and we'll show examples of each of them in action.

The simplest arithmetic expressions have one operator, such as plus or minus or times, and one or two operands. So, for instance, four times pi here has one operator, multiplication, also known as times, and two operands, four and pi. Because there are two operands, we call this a binary operation. The type of each of these operands is double, which is the default in MATLAB. But you can specify different types by using MATLAB's casting functions, like int8, for example. Those of you out there with sharply honed mathematical skills will see that this is the wrong answer. That's because 500 is too large to fit into a little old int8, and so the result we get is the closest number to 500 that lies inside the range of numbers that do fit into an int8, which goes from -128 to +127. We studied this phenomenon, which is called clipping, in Lesson 7 of our first course. There's no red here, so it's not an error, at least not according to MATLAB.

So far, all our operations have used operands of the same type, so none of these expressions is an example of mixed mode arithmetic. We'll get there, but in the meantime, there's another way not to have operands with mixed types, and that's just to have one operand, like this. We still have one operator, the minus, but only one operand, the three. This is called a unary operation. Of course, you can combine operations into much more complicated expressions, like four plus one times two, and we spent some time in Lesson 2 of our introductory course on how these combinations work. But what we left out was what happens when the two operands in a binary operation like one plus two are of different types. Sometimes the operation isn't even allowed. This shouldn't be too surprising, because we've already seen that binary operations aren't allowed in some other situations. I'm talking about situations in which the two operands have incompatible shapes. Here is a table showing the shape compatibility rules for binary operators. Here we have five rules for a binary operation, which we show here as an operand x, an operator op, and a second operand y.
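If you'd like to see that clipping for yourself, here's a quick sketch you can type into the command window. The numbers here are mine, not necessarily the ones on my screen, but the behavior is the same:

    >> 4 * pi                  % two doubles, the default type
    >> int8(4) * int8(125)     % the exact answer is 500, but it clips to 127, the int8 maximum
    >> int8(4) * int8(-125)    % the exact answer is -500, and it clips to -128, the int8 minimum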
Where x and y could be any scalars or two-dimensional arrays whose shapes are compatible for the operator op. For example, plus, minus, and all the dot operators, which are known collectively as array operators, require that the two operands x and y have the same size and shape. Rule B is for multiplication, also known as matrix multiplication, and it says that the size of the second dimension of x must equal the size of the first dimension of y. Or to put it another way, the number of columns of x must equal the number of rows of y. Here's an example that follows that rule. Yeah, it follows it, because the first operand has three columns and the second operand has three rows. But if you swap these operands, you get an error. Yeah, you get the error, because now the number of columns of the first operand doesn't equal the number of rows of the second: one is not equal to three, so you get scolded. By the way, if you need a refresher on why matrix multiplication has this rule, you can check out Lesson 2 of our introductory course.

Rules C and D, for the backslash and forward slash operators, are very similar to the rule for matrix multiplication, but they require equal numbers of rows or equal numbers of columns, respectively. We'll see why in our next lesson; for now, they're just rules. The last rule, E, for the exponentiation operator, also known as the power operator, says that both operands must be square, so, for example, 2 by 2 is fine but 2 by 3 is not, and that one or both of them must be a scalar. Let's try it for a 3 by 3 matrix raised to a scalar power. Those were some big numbers, but it works, whereas this is not allowed, because the first operand is not square. You know, if you take a minute to think about it, even if you didn't know rule E, you could see from rule B that you can't raise a non-square matrix to a power. That's because raising a matrix to a power means we're multiplying the matrix by itself, and rule B says that its first dimension must be the same as its second dimension. If you need to pause me to think about that, or come to think of it, if you need to pause me for any other reason, I won't be offended.

So the last rule required that one of the operands be a scalar. And on the subject of scalars, I need to point out that the other four rules include special cases for scalars that we haven't mentioned yet. With those special cases added, the rules look like this. The special cases are that if one operand is a scalar, the other operand can be any shape at all. The operator is applied to the scalar and each element of the array, with the only stipulation being that for the two slash operators, rules C and D, the scalar must be in the denominator.

These five rules have been part of MATLAB for decades, but in September of 2016, when version R2016b appeared, MATLAB added a sixth one that was welcomed by lots of people, including me. It's rule F here, which says that you can now add vectors to arrays or subtract vectors from arrays, or perform any array operation, for that matter. Not just any vector and any array, though. You can only do it if the vector is either a row vector with a length equal to the row length of the array, or a column vector with a length equal to the column length of the array. Put another way, the vector has to be either the same width or the same height as the array. Here's an example with a column vector. I've got that example and a bunch of other examples prepared for you in a live script.
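Here's a little sketch of rules B and E that you can try in the command window. I'm making up the matrices here; the ones on my screen may be different:

    >> [1 2 3] * [4; 5; 6]     % rule B: 1 by 3 times 3 by 1, columns of x match rows of y
    ans =
        32
    >> [4; 5; 6] * [7; 8; 9]   % columns of the first (1) don't match rows of the second (3): error
    >> magic(3) ^ 2            % rule E: a square matrix raised to a scalar power is fine
    >> ones(2,3) ^ 2           % a non-square matrix raised to a power is not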
We'll explain how to create live scripts and how to use them in the last lecture of this lesson. Here we've got a 2 by 3 array A and a column vector c of length 2. The column length is 2 for both of them, so the new rule applies. Can you see what the operation actually did? Well, it added the column vector to each column of the array. So you can see that 7 and 9 were added to 100 and 400 to get 107 and 409, 7 and 9 were added to 200 and 500 to get 207 and 509, and 7 and 9 were added to 300 and 600 to get 307 and 609. Or to put it another way, 7 was added to every element of the first row, and 9 was added to every element of the second row. I guess I'm overkilling this explanation, right? Okay, I'll stop.

Why is rule F such a welcome thing? Well, it's because it just so happens that there are lots of problems in engineering and physics in which we need to add a column vector to every column of an array, or a row vector to every row, which you should try on your own, by the way. No sense in me having all the fun. And if you're trying it using MATLAB Online like I am, then since it's always up to date, you're definitely going to have rule F. On the other hand, if you're driving an old installed version older than R2016b, you'll get an error message that says "Matrix dimensions must agree." But don't despair, you can accomplish the same thing by using the built-in function repmat to make an array of copies of the vector. The 1 and 3 here tell repmat to make a 1 by 3 array of copies of its first argument, which is c. Since c itself is 2 by 1, the result, which is assigned to uppercase C here, is a 2 by 3 array. Even the very oldest version of MATLAB will happily add A to C because they have the same size and shape. We're just asking MATLAB to do a plain old array operation, as in rule A. I've used repmat hundreds of times in the past to do this, but no more. Rule F has put an end to that and made things so much simpler. Repmat is still a necessary function, but not for this.

Our statement of rule F is actually more restrictive than necessary, because it says that one of the operands must be a vector. The general version of rule F applies to any two arrays that are "compatible", meaning that for every dimension, either the sizes of the two operands are the same or one of the two sizes is equal to one. You're probably not going to have a lot of use for this general version, so we're not going to pursue it here. But if you want to pursue it yourself, search MATLAB's help system for the word "compatible". It gives a fairly clear explanation. Well, not as clear as the explanations in our super fine MOOCs, but that's a pretty high bar. One last restriction, which is not explicitly stated in this table, is that rules B through E work only with two-dimensional arrays, while rules A and F work for any number of dimensions.

Okay, we can now assume that you're up to speed and up to here on the shape rules for binary operators, so we're finally ready to get to the subject in the title of this lecture. We're going to look at rules concerning the types of the operands, including, in particular, the cases in which we have mixed mode arithmetic. It's a bit complicated, but I think we've simplified it as much as possible with this table, which breaks down the requirements into six rules. Once again, the rules concern only binary operators, and so once again we have two operands x and y and one binary operator op.
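Here's that rule F example and the repmat workaround as you'd type them yourself, using the same values we just walked through:

    >> A = [100 200 300; 400 500 600];
    >> c = [7; 9];
    >> A + c                % R2016b or later: rule F adds c to every column of A
    ans =
       107   207   307
       409   509   609
    >> C = repmat(c, 1, 3)  % the pre-R2016b workaround: tile c into a 2 by 3 array
    C =
         7     7     7
         9     9     9
    >> A + C                % a plain rule A addition that works in any version
    ans =
       107   207   307
       409   509   609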
Now, however, we're assuming that all the rules of the first table are obeyed, and this table shows the additional limitations that arise from the types of the operands. Those limitations are affected by whether one or both of the operands are scalars, so there's a little bit of shape specification in this table too. Rule 1, though, has no shape restrictions. It simply says that if x and y are floating point numbers, which includes doubles and singles or a mixture of the two, then all operators work. That's a pretty easy rule. And so is rule 6, which also has no shape restrictions, but says that if the operands are integers of different types, or comprise an integer and a single, then none of the operators works. For example, MATLAB refuses to process these two expressions. However, using the same integer type is okay. Here we've used rule 2 and rule 5. Those rules, in fact all the rules from 2 to 5, include shape specifications, and they all include at least one integer.

One thing to notice is that every rule in the table except for rule 5 includes mixed mode arithmetic. Rule 5 deals with integers of the same type that are not scalars, and it tells us that only the array operators work for them. No matrix operations allowed. So, for example, you can't use matrix multiplication with these same two matrices, even though they obey the shape rule for matrix multiplication, which requires that the number of columns of the first operand equal the number of rows of the second operand. All we did here was replace the plus with an asterisk to signify matrix multiplication, and that caused the error.

So what's with this "mtimes" here? Well, mtimes is actually the name of a function that does the same thing that the asterisk does, and MATLAB puts an asterisk in parentheses here to indicate that. If you call mtimes with two arguments, it performs matrix multiplication on them. Every one of MATLAB's arithmetic operators has an equivalent function: plus, minus, power, mpower, and so on. And you can get a list of them by searching in the help documentation for "arithmetic". The m in the name mtimes means matrix, so you might expect that the function that is equivalent to array multiplication would be named atimes. Nope, it's just times. Well, of course, as long as the names are different, it works, and those MathWorks folks have never been ones to waste a keystroke. In this message, we also see a somewhat vague explanation of the problem that caused the error: this operator is not fully supported for integer classes. It's also kind enough to tell us that if at least one argument is scalar, mtimes will work, and we're left to figure out that if we use the asterisk, then it'll work if at least one operand is a scalar. Well, we can also see that from rules 3 and 4 up here. But let's try a couple of examples anyway. First, with the scalar on the right. And now with the scalar on the left. And rules 3 and 4 also tell us that things are still just peachy if the scalars are doubles instead of unsigned 16-bit integers. Let's try that.

Okay, we've now seen examples of all the rules except number 1, so here we go. And so here we've added two different floats, a single and a double. It's kind of anticlimactic, I guess, but now we've seen all the rules in action. By the way, if you're wondering about the order of these rules, they go from the least restrictive to the most restrictive. So those are the rules that determine the combinations of shapes and types that are legal in mixed mode arithmetic.
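If you want to poke at these type rules yourself, here's a small sketch. The values are mine, not necessarily the ones on my screen, but they use the same uint16 idea:

    >> int8(5) + int16(5)       % rule 6: two different integer types, not allowed
    >> int8(5) + single(5)      % rule 6: an integer and a single, also not allowed
    >> x = uint16([1 2; 3 4]);
    >> y = uint16([5 6; 7 8]);
    >> x + y                    % rule 5: same integer type, array operators are fine
    >> x * y                    % rule 5: but matrix multiplication is not
    >> x * uint16(3)            % rules 3 and 4: fine when one operand is a scalar
    >> x * 3                    % still fine when that scalar is a double
    >> single(1.5) + 2.5        % rule 1: floats of any kind mix freely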
And for that matter, non-mixed mode arithmetic, which maybe we should call same mode arithmetic, or same type arithmetic. Those same type operations show up in rules 1, 2, and 5. Note that none of these rules includes any limitations whatever on the values of the operands, just their shapes and their types. So if the operands obey the rules, you can put any values in them you want to: big, little, positive, negative, zero, infinity, and everything in between. Well, there is one exception, and it's really kind of a big one. You can't raise an integer to a fractional power. So, for example, although it's perfectly legal to calculate the square of 3 when the type of 3 is an integer, which obeys rule 2 for scalars with an integer and a double, you can't go back the other way and calculate the square root of 9 when the type of 9 is an integer. Oh, I see what you're thinking. Nope, you can't use sqrt on an integer type, either. See, there's no getting around it. The type that MATLAB chooses for the result of these power operations is int64, as you can see up here when we didn't use a fractional power. So MATLAB has to give a whole number as an answer, which fractional powers very often do not produce.

The fact that MATLAB chose to give the answer as an int64 raises a very interesting question. Well, let's hope it's interesting, or at least that it's more interesting than these boring rules have been, despite my cheery, upbeat, chirpy presentation. And that interesting question is this: what types do arithmetic operations produce as output? My gosh, more rules. But you'll be relieved to see that the rules are very simple. This time, we don't even need a table. In fact, there is just one rule: the type of the output of any legal arithmetic operation is the same as the type of the operand that occupies the least space in memory. So, for example, if we add an 8-bit integer to a double, the result is an 8-bit integer, because an 8-bit integer occupies less space than a double, which you may remember uses 64 bits. Or say we multiply an array of singles by an array of doubles. The type of the output is single, because a single occupies only 32 bits. Extending the rule to non-mixed mode arithmetic, when both operands have the same type, then, of course, the output has that same type too.

Let's do some examples. We'll define a couple of variables to work with here first. Since we didn't declare the type of x, it defaults to a double. And now let's operate on them. So here we've added a double to a 16-bit integer. Since the 16-bit integer occupies only 16 bits, while the double occupies 64 bits, the result is an int16. Here are more operations. These are all mixed mode arithmetic operations, because the operands have different types, and in every case we get an int16. Is this what you would have guessed before we learned the rule? I don't think so. I think most people would guess that MATLAB would give every one of these answers as a double, because a double can hold so many more values. Having a double as the result would also keep you from being hit with surprises like this. Why do we get a 0 when we divide 12 by 9876? If we do the division with two doubles, we get this fractional result, which makes a lot more sense. But an integer can't hold a fractional result, and since the result is an integer, we get the closest whole number, which is 0. If the fractional result had been, say, 0.6, then the output would have been a 1, because 1 is the closest whole number to 0.6.
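Here's a quick sketch of the fractional power restriction and the output type rule. The variable names and values are mine; the ones in the video may differ:

    >> int64(3) ^ 2             % fine: an integer raised to a whole-number power, giving int64 9
    >> int64(9) ^ 0.5           % error: you can't raise an integer to a fractional power
    >> sqrt(int64(9))           % error: sqrt doesn't accept integer types either
    >> x = 100;                 % no cast, so x is a double
    >> y = int16(25);
    >> x + y                    % double plus int16: the result takes the smaller type, int16
    >> class(x + y)             % returns 'int16'
    >> int16(12) / 9876         % the exact answer is a small fraction, so the int16 result rounds to 0
    >> 12 / 9876                % the same division with doubles keeps the fraction, about 0.0012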
This rounding is a consequence of MathWorks' decision to force the result of a mixed mode operation to have the smaller type. And if there's a tie for the smallest type, which happens only if we have a double and an int64, the winner is the int64. So even in that case, any fractional part is rounded off. This rounding, like the clipping that we saw at the beginning of this lecture, is a hazard that we face when we do mixed mode arithmetic involving integers.

So why does MATLAB choose the smaller type for its result? Well, that's a very good question. As a matter of fact, most other computer programming languages do give you a double when you mix a double and an integer. But MATLAB has a very good answer to this very good question: it produces an integer to save memory. The logic here is that the reason the programmer chose to use an integer type in the first place, as opposed to the default double type, is to take advantage of the smaller size of the integer type in order to save memory. Producing a double, as most other languages do, cancels that advantage. Let's see an example of how MATLAB's approach saves memory.

I have an image of the MATLAB logo saved as a PNG file here in this folder. Let's read that into a variable and display it. As you can see over here in the workspace, the variable M, which contains the image, is a 400 by 400 by 3 array, meaning that it has 400 rows, 400 columns, and 3 pages. Each page of the array contains a 400 by 400 array of color intensities, one page each for the red, green, and blue colors. This is also known as an RGB image, in which each set of red, green, and blue values makes up one pixel. What's most important to us here, though, is that the array is of type uint8, so each element occupies eight bits, which is one byte. We can calculate how much memory space this image requires, but let's take the easy way and let the function whos tell us: 480,000 bytes. This one variable takes up almost half a megabyte of memory, even though this is a relatively small image.

Now, let's do a little image processing on it. I'll create a darker version of the image by dividing all the intensity values by three. Note that it kind of made the white background dark too, which is actually not surprising, since we darkened every single pixel in the square image. Let's run whos again. As you can see, D is also a uint8, so the size of the image has not changed: 480,000 bytes for each of them. That's because of MATLAB's rule that says that the type of the output of the arithmetic operation that produced D is the same as the type of its smallest operand. The two operands were M, which is a uint8, and three, which is a double, so the result is a uint8. If we were using a language like, say, C or C++ or Java, which would produce a double instead of a uint8 as output, the result would be eight times as big. Here's the equivalent operation in those languages. What we've done is converted the eight-bit M into a 64-bit double, so now the type of the operand that was taking less space has been changed to match the type of the operand that takes more space. This step, which is done before the arithmetic operation is applied, is called widening, and widening is exactly what is done by C, C++, and Java for mixed mode arithmetic operations. So let's see how big D is now. Almost four megabytes.
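If you want to reproduce this on your own machine with any RGB PNG you have handy, the sketch looks like this. The file name here is just a placeholder, and the byte counts refer to the 400 by 400 by 3 logo image:

    >> M = imread('matlab_logo.png');   % an 8-bit RGB PNG comes back as a uint8 array
    >> image(M), axis image             % display the image
    >> D = M / 3;                       % uint8 divided by a double scalar stays uint8
    >> whos M D                         % both uint8: 400*400*3 elements at 1 byte each = 480,000 bytes
    >> Dwide = double(M) / 3;           % widening first, the way C, C++, or Java would do it
    >> whos Dwide                       % now a double: 480,000 elements at 8 bytes each = 3,840,000 bytes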
You may think that four megabytes is nothing these days, and you would be right. But once you start dealing with medical images such as MRI or CT, which can consist of over 100 individual images, or with videos that can have 30 or even 60 frames per second, all of a sudden we're talking about gigabytes, and a factor of eight becomes crucial. MATLAB's approach, by comparison, is to use narrowing, like this. So now the type of the operand that takes more space is changed to match the type of the one that takes less space, thereby saving memory. And it's not just a matter of saving memory: operations on wider operands take more time too. So MATLAB's approach of narrowing in mixed mode arithmetic saves both space and time. The downside is that during narrowing, values are rounded off, and you'll find people online railing away about that, but in fact a difference of 0.5 is rarely even detectable in an image. So for image processing, it makes a lot of sense to use the integer type as the result of a mixed mode arithmetic expression involving doubles and integers.

And you can always override this behavior by explicitly widening the smaller operand. If you do widen, make sure you widen the operand, as we have done, and not the result of the operation. If you widen the result of the operation, you can lose accuracy. Here's a simpler example to show what happens. So, b and c have the same type, double, but they have different values. Do you see why? Well, it's because for b, we widened the operand, and for c, we widened the result of the operation. When we calculate b here, we first convert a to a double and then divide by 2, and since 2 is, by default, also a double, the result of the operation is a double. But when we calculate c, we first divide a by 2, and since a is an int8, the result is an int8, which means that the exact answer, 8.5, is rounded to the integer 9. We then convert that 9 from an int8 to a double. And now let's see what happens if we use an int8 type for the 2 as well as for the a. This time b and c have different types but the same value. And why is that? Well, here again, for b we widened the operand and for c we widened the result of the operation. But this time, since the 2 is an int8, the result for b is an int8, so the result is rounded to 9.

So the important lesson here is that when we explicitly convert a variable from one numerical data type to another in conjunction with an arithmetic operation, we have to be very careful about whether we do the conversion before or after the operation. We can lose precision due to rounding or truncation, or we may end up with an unexpected type, or both. Well, with this ominous warning of the dangers lurking within type conversions and arithmetic operations, we conclude our lecture on mixed mode arithmetic, which is simply arithmetic on two numbers of different types. We've taken a close look at the rules for legal operations when the operands have different types and when they have the same type. There's a lot of detail here, but we've done our best to cut through the complexity via two tables that present the rules in as simple a form as possible. These tables will let you know what works and what doesn't. And in addition to the tables, we provided one rule to tell you what type you get when an operation does work. You can refer back to these tables, which are provided as accompanying references for this course, to help your memory. Or you could just type an operation into the command window to see whether it works and what type you get. MATLAB will let you know. [MUSIC]
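For reference, here's that last widen-the-operand versus widen-the-result example in command-window form. The starting value has to be 17 for the exact answer to be the 8.5 we just discussed:

    >> a = int8(17);
    >> b = double(a) / 2     % widen the operand first: b is the double 8.5
    >> c = double(a / 2)     % divide first: the int8 division rounds 8.5 to 9, then 9 becomes a double
    >> two = int8(2);
    >> b2 = double(a) / two  % a double divided by an int8 gives an int8, so 8.5 rounds to 9
    >> c2 = double(a / two)  % also 9, but as a double: same value, different type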