MATRIX METHODS: Applied Linear Algebra
Third Edition

Richard Bronson
Fairleigh Dickinson University, Teaneck, New Jersey

Gabriel B. Costa
United States Military Academy, West Point, New York

The Student Solutions Manual is now available online through separate purchase at www.elsevierdirect.com/companions/9780123744272

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO

Academic Press is an imprint of Elsevier
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
525 B Street, Suite 1900, San Diego, California 92101-4495, USA
84 Theobald’s Road, London WC1X 8RR, UK

Copyright © 2009, Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Permissions may be sought directly from Elsevier’s Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, E-mail: permissions@elsevier.com. You may also complete your request online via the Elsevier homepage (http://elsevier.com), by selecting “Support & Contact,” then “Copyright and Permission,” and then “Obtaining Permissions.”

Library of Congress Cataloging-in-Publication Data: Application submitted.

British Library Cataloguing-in-Publication Data: A catalogue record for this book is available from the British Library.

ISBN: 978-0-12-374427-2

For information on all Academic Press publications visit our Web site at www.elsevierdirect.com

Printed in the United States of America
08 09 10  9 8 7 6 5 4 3 2 1

To Evy...again. R.B.

To my brother priests...especially Father Frank Maione, the parish priest of my youth...and Archbishop Peter Leo Gerety, who ordained me a priest. G.B.C.
Contents

Preface  xi
About the Authors  xiii
Acknowledgments  xv

1 Matrices  1
1.1 Basic Concepts 1; Problems 1.1 3
1.2 Operations 6; Problems 1.2 8
1.3 Matrix Multiplication 9; Problems 1.3 16
1.4 Special Matrices 19; Problems 1.4 23
1.5 Submatrices and Partitioning 29; Problems 1.5 32
1.6 Vectors 33; Problems 1.6 34
1.7 The Geometry of Vectors 37; Problems 1.7 41

2 Simultaneous Linear Equations  43
2.1 Linear Systems 43; Problems 2.1 45
2.2 Solutions by Substitution 50; Problems 2.2 54
2.3 Gaussian Elimination 54; Problems 2.3 62
2.4 Pivoting Strategies 65; Problems 2.4 70
2.5 Linear Independence 71; Problems 2.5 76
2.6 Rank 78; Problems 2.6 83
2.7 Theory of Solutions 84; Problems 2.7 87
2.8 Final Comments on Chapter 2 88

3 The Inverse  93
3.1 Introduction 93; Problems 3.1 98
3.2 Calculating Inverses 101; Problems 3.2 106
3.3 Simultaneous Equations 109; Problems 3.3 111
3.4 Properties of the Inverse 112; Problems 3.4 114
3.5 LU Decomposition 115; Problems 3.5 121
3.6 Final Comments on Chapter 3 124

4 An Introduction to Optimization  127
4.1 Graphing Inequalities 127; Problems 4.1 130
4.2 Modeling with Inequalities 131; Problems 4.2 133
4.3 Solving Problems Using Linear Programming 135; Problems 4.3 140
4.4 An Introduction to the Simplex Method 140; Problems 4.4 147
4.5 Final Comments on Chapter 4 147

5 Determinants  149
5.1 Introduction 149; Problems 5.1 150
5.2 Expansion by Cofactors 152; Problems 5.2 155
5.3 Properties of Determinants 157; Problems 5.3 161
5.4 Pivotal Condensation 163; Problems 5.4 166
5.5 Inversion 167; Problems 5.5 169
5.6 Cramer’s Rule 170; Problems 5.6 173
5.7 Final Comments on Chapter 5 173

6 Eigenvalues and Eigenvectors  177
6.1 Definitions 177; Problems 6.1 179
6.2 Eigenvalues 180; Problems 6.2 183
6.3 Eigenvectors 184; Problems 6.3 188
6.4 Properties of Eigenvalues and Eigenvectors 190; Problems 6.4 193
6.5 Linearly Independent Eigenvectors 194; Problems 6.5 200
6.6 Power Methods 201; Problems 6.6 211

7 Matrix Calculus  213
7.1 Well-Defined Functions 213; Problems 7.1 216
7.2 Cayley–Hamilton Theorem 219; Problems 7.2 221
7.3 Polynomials of Matrices—Distinct Eigenvalues 222; Problems 7.3 226
7.4 Polynomials of Matrices—General Case 228; Problems 7.4 232
7.5 Functions of a Matrix 233; Problems 7.5 236
7.6 The Function e^At 238; Problems 7.6 240
7.7 Complex Eigenvalues 241; Problems 7.7 244
7.8 Properties of e^A 245; Problems 7.8 247
7.9 Derivatives of a Matrix 248; Problems 7.9 253
7.10 Final Comments on Chapter 7 254

8 Linear Differential Equations  257
8.1 Fundamental Form 257; Problems 8.1 261
8.2 Reduction of an nth Order Equation 263; Problems 8.2 269
8.3 Reduction of a System 269; Problems 8.3 274
8.4 Solutions of Systems with Constant Coefficients 275; Problems 8.4 285
8.5 Solutions of Systems—General Case 286; Problems 8.5 294
8.6 Final Comments on Chapter 8 295

9 Probability and Markov Chains  297
9.1 Probability: An Informal Approach 297; Problems 9.1 300
9.2 Some Laws of Probability 301; Problems 9.2 304
9.3 Bernoulli Trials and Combinatorics 305; Problems 9.3 309
9.4 Modeling with Markov Chains: An Introduction 310; Problems 9.4 313
9.5 Final Comments on Chapter 9 314

10 Real Inner Products and Least-Squares  315
10.1 Introduction 315; Problems 10.1 317
10.2 Orthonormal Vectors 320; Problems 10.2 325
10.3 Projections and QR-Decompositions 327; Problems 10.3 337
10.4 The QR-Algorithm 339; Problems 10.4 343
10.5 Least-Squares 344; Problems 10.5 352

Appendix: A Word on Technology  355
Answers and Hints to Selected Problems  357
Index  411

Preface

It is no
secret that matrices are used in many fields. They are naturally present in all branches of mathematics, as well as in many engineering and science fields. Additionally, this simple but powerful concept is readily applied to many other disciplines, such as economics, sociology, political science, nursing and psychology.

The matrix is a dynamic construct. New applications of matrices are still evolving, and our third edition of Matrix Methods: Applied Linear Algebra (previously An Introduction) reflects important changes that have transpired since the publication of the previous edition.

In this third edition, we added material on optimization and probability theory. Chapter 4 is new and covers an introduction to the simplex method, one of the major applied advances in the last half of the twentieth century. Chapter 9 is also new and introduces Markov chains, a primary application of matrices to probability. To ensure that the book remains appropriate in length for a one-semester course, we deleted some of the more advanced subject matter; specifically, the chapters on the Jordan Canonical Form and on Special Matrices (e.g., Hermitian and Unitary Matrices). We also included an Appendix dealing with technological support, such as computer algebra systems. The reader will also find that the text contains a considerable “modeling flavor”.

This edition remains a textbook for the student, not the instructor. It remains a book on methodology rather than theory. And, as in all past editions, proofs are given in the main body of the text only if they are easy to follow and revealing.

For most of this book, a firm understanding of basic algebra and a smattering of trigonometry are the only prerequisites; references to calculus are few and far between. Calculus is required for Chapter 7 and Chapter 8; however, these chapters may be omitted, with no loss of continuity, should the instructor wish to do so. The instructor will also find that he/she can “mix and match” chapters depending on the particular course requirements and the needs of the students.

In closing, we would like to acknowledge the many people who helped to make this book a reality. These include the professors, most notably Nicholas J. Rose, who introduced us to the subject matter and instilled in us their love of matrices. They also include the hundreds of students who interacted with us when we passed along our knowledge to them. Their questions and insights enabled us to better understand the underlying beauty of the field and to express it more succinctly. Special thanks go to the Most Reverend John J. Myers, Archbishop of Newark, as well as to the Reverend Monsignor James M. Cafone and the Priest Community at Seton Hall University. Gratitude is also given to the administrative leaders of Seton Hall University, and to Dr. Joan Guetti and to the members of the Department of Mathematics and Computer Science. Finally, thanks are given to Colonel Michael Phillips and to the members of the Department of Mathematical Sciences of the United States Military Academy.

Richard Bronson
Teaneck, NJ

Gabriel B. Costa
West Point, NY and South Orange, NJ

About the Authors

Richard Bronson is a Professor of Mathematics in the School of Computer Science and Engineering at Fairleigh Dickinson University, where he is currently the Senior Executive Assistant to the President. Dr. Bronson has been chairman of his academic department, Acting Dean of his college, and Interim Provost.
He has authored or co-authored eleven books in mathematics and over thirty articles, primarily in mathematical modeling.

Gabriel B. Costa is a Catholic priest. He is a Professor of Mathematical Sciences and associate chaplain at the United States Military Academy at West Point. He is on an extended academic leave from Seton Hall University. His interests include differential equations, sabermetrics, and mathematics education. This is the third book Father Costa has co-authored with Dr. Bronson.

Acknowledgments

Many readers throughout the country have suggested changes and additions to the first edition, and their contributions are gratefully acknowledged. They include John Brillhart, of the University of Arizona; Richard Thornhill, of the University of Texas; Ioannis M. Roussos, of the University of Southern Alabama; Richard Scheld and James Jamison, of Memphis State University; Hari Shankar, of Ohio University; D.J. Hoshi, of ITT-West; W.C. Pye and Jeffrey Stuart, of the University of Southern Mississippi; Kevin Andrews, of Oakland University; Harold Klee, of the University of Central Florida; Edwin Oxford, Patrick O’Dell and Herbert Kasube, of Baylor University; and Christopher McCord, Philip Korman, Charles Groetsch and John King, of the University of Cincinnati.

Special thanks must also go to William Anderson and Gilbert Steiner, of Fairleigh Dickinson University, who were always available to me for consultation and advice in writing this edition, and to E. Harriet, whose assistance was instrumental in completing both editions. Finally, I have the opportunity to correct a twenty-year oversight: Mable Dukeshire, previously Head of the Department of Mathematics at FDU, now retired, gave me support and encouragement to write the first edition. I acknowledge her contribution now, with thanks and friendship.

1 Matrices

1.1 Basic Concepts

Definition 1 A matrix is a rectangular array of elements arranged in horizontal rows and vertical columns. Thus,

$$\begin{bmatrix} 1 & 2 & 3 \\ 0 & 5 & -1 \end{bmatrix}, \tag{1}$$

$$\begin{bmatrix} 4 & 1 & 1 \\ 3 & 2 & 1 \\ 0 & 4 & 2 \end{bmatrix}, \tag{2}$$

and

$$\begin{bmatrix} \sqrt{2} \\ \pi \\ 19.5 \end{bmatrix} \tag{3}$$

are all examples of a matrix.

The matrix given in (1) has two rows and three columns; it is said to have order (or size) 2 × 3 (read “two by three”). By convention, the row index is always given first. The matrix in (2) has order 3 × 3, while that in (3) has order 3 × 1. The entries of a matrix are called elements.

In general, a matrix A (matrices will always be designated by uppercase boldface letters) of order p × n is given by

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{p1} & a_{p2} & a_{p3} & \cdots & a_{pn} \end{bmatrix}, \tag{4}$$

which is often abbreviated to [aij]p×n or just [aij]. In this notation, aij represents the general element of the matrix and appears in the ith row and the jth column. The subscript i, which represents the row, can have any value 1 through p, while the subscript j, which represents the column, runs 1 through n. Thus, if i = 2 and j = 3, aij becomes a23 and designates the element in the second row and third column. If i = 1 and j = 5, aij becomes a15 and signifies the element in the first row, fifth column. Note again that the row index is always given before the column index.

Any element having its row index equal to its column index is a diagonal element. Thus, the diagonal elements of a matrix are the elements in the 1–1 position, 2–2 position, 3–3 position, and so on, for as many elements of this type as exist. Matrix (1) has 1 and 5 as its diagonal elements, while matrix (2) has 4, 2, and 2 as its diagonal elements.
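These conventions are easy to experiment with on a computer (the Appendix says more about technological support). The following minimal sketch assumes Python with the NumPy library, one possible tool among many; note that NumPy numbers rows and columns from 0 rather than 1, so the element written a23 in the text appears in code as A[1, 2].

```python
import numpy as np

# The 2 x 3 matrix given in (1).
A = np.array([[1, 2, 3],
              [0, 5, -1]])

print(A.shape)     # (2, 3): the order of A -- rows first, then columns
print(A[1, 2])     # -1, the element the text writes as a23
print(np.diag(A))  # [1 5]: the diagonal elements a11 and a22
```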
If the matrix has as many rows as columns, p = n, it is called a square matrix; in general it is written as

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix}. \tag{5}$$

In this case, the elements a11, a22, a33, . . . , ann lie on and form the main (or principal) diagonal.

It should be noted that the elements of a matrix need not be numbers; they can be, and quite often arise physically as, functions, operators or, as we shall see later, matrices themselves. Hence,

$$\begin{bmatrix} \int_0^1 \sqrt{t^2+1}\,dt & t^2 \\ 3t & 2 \end{bmatrix}, \qquad \begin{bmatrix} \sin\theta & -\cos\theta \\ \cos\theta & \sin\theta \end{bmatrix}, \qquad \text{and} \qquad \begin{bmatrix} x^2 & x \\ e^x & \dfrac{d}{dx}\ln x \\ 5 & x+2 \end{bmatrix}$$

are good examples of matrices. Finally, it must be noted that a matrix is an entity unto itself; it is not a number. If the reader is familiar with determinants, he will undoubtedly recognize the similarity in form between the two. Warning: the similarity ends there. Whereas a determinant (see Chapter 5) can be evaluated to yield a number, a matrix cannot. A matrix is a rectangular array, period.

Problems 1.1

1. Determine the orders of the following matrices:

$$\mathbf{A} = \begin{bmatrix} 3 & 1 & -2 & 4 & 7 \\ 2 & 5 & -6 & 5 & 0 \\ 0 & 3 & 1 & 2 & 7 \\ -3 & -5 & 2 & 2 & 2 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 4 & 3 & 2 \end{bmatrix}, \quad \mathbf{C} = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & -7 & 8 \\ 10 & 11 & 12 & 12 \end{bmatrix},$$

$$\mathbf{D} = \begin{bmatrix} 3 & t-2 & 3t & 2t^5 \\ t & t+2 & -5t^2 & 5 \\ t^2 & 2t-3 & 6t & 2 \\ 0 & t^4 & 1 & 3t^2 \end{bmatrix}, \quad \mathbf{E} = \begin{bmatrix} 1/2 & 3/5 \\ 2/3 & -1/4 \\ 1/3 & 5/6 \end{bmatrix}, \quad \mathbf{F} = \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \\ 5 \\ -4 \end{bmatrix},$$

G = ⎡√ 313 ⎢⎢⎣ 2π 4√6.3 25 ⎤ −505 1.10√843⎥⎥⎦, −5

$$\mathbf{H} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \qquad \mathbf{J} = \begin{bmatrix} 1 & 5 & -30 \end{bmatrix}.$$

2. Find, if they exist, the elements in the 1−3 and the 2−1 positions for each of the matrices defined in Problem 1.

3. Find, if they exist, a23, a32, b31, b32, c11, d22, e13, g22, g23, and h32 for the matrices defined in Problem 1.

4. Construct the 2 × 2 matrix A having aij = (−1)^(i+j).

5. Construct the 3 × 3 matrix A having aij = i/j.

6. Construct the n × n matrix B having bij = n − i − j. What will this matrix be when specialized to the 3 × 3 case?

7. Construct the 2 × 4 matrix C having cij = i j when i = 1, when i = 2.

8. Construct the 3 × 4 matrix D having

$$d_{ij} = \begin{cases} i+j & \text{when } i > j, \\ 0 & \text{when } i = j, \\ i-j & \text{when } i < j. \end{cases}$$

9. Express the following times as matrices: (a) A quarter after nine in the morning. (b) Noon. (c) One thirty in the afternoon. (d) A quarter after nine in the evening.

10. Express the following dates as matrices: (a) July 4, 1776 (b) December 7, 1941 (c) April 23, 1809 (d) October 31, 1688

11. A gasoline station currently has in inventory 950 gallons of regular unleaded gasoline, 1253 gallons of premium, and 98 gallons of super. Express this inventory as a matrix.

12. Store 1 of a three-store chain has 3 refrigerators, 5 stoves, 3 washing machines, and 4 dryers in stock. Store 2 has in stock no refrigerators, 2 stoves, 9 washing machines, and 5 dryers, while store 3 has in stock 4 refrigerators, 2 stoves, and no washing machines or dryers. Present the inventory of the entire chain as a matrix.

13. The number of damaged items delivered by the SleepTight Mattress Company from its various plants during the past year is given by the matrix

$$\begin{bmatrix} 80 & 12 & 16 \\ 50 & 40 & 16 \\ 90 & 10 & 50 \end{bmatrix}.$$

The rows pertain to its three plants in Michigan, Texas, and Utah. The columns pertain to its regular model, its firm model, and its extra-firm model, respectively.
The company’s goal for next year is to reduce by 10% the number of damaged regular mattresses shipped by each plant, to reduce by 20% the number of damaged firm mattresses shipped by its Texas plant, to reduce by 30% the number of damaged extra-firm mattresses shipped by its Utah plant, and to keep all other entries the same as last year. What will next year’s matrix be if all goals are realized?

14. A person purchased 100 shares of AT&T at $27 per share, 150 shares of Exxon at $45 per share, 50 shares of IBM at $116 per share, and 500 shares of PanAm at $2 per share. The current price of each stock is $29, $41, $116, and $3, respectively. Represent in a matrix all the relevant information regarding this person’s portfolio.

15. On January 1, a person buys three certificates of deposit from different institutions, all maturing in one year. The first is for $1000 at 7%, the second is for $2000 at 7.5%, and the third is for $3000 at 7.25%. All interest rates are effective on an annual basis. (a) Represent in a matrix all the relevant information regarding this person’s holdings. (b) What will the matrix be one year later if each certificate of deposit is renewed for the current face amount and accrued interest at rates one half a percent higher than the present?

16. (Markov Chains, see Chapter 9) A finite Markov chain is a set of objects, a set of consecutive time periods, and a finite set of different states such that

(i) during any given time period, each object is in only one state (although different objects can be in different states), and

(ii) the probability that an object will move from one state to another state (or remain in the same state) over a time period depends only on the beginning and ending states.

A Markov chain can be represented by a matrix P = [pij] where pij represents the probability of an object moving from state i to state j in one time period. Such a matrix is called a transition matrix. Construct a transition matrix for the following Markov chain: Census figures show a population shift away from a large mid-western metropolitan city to its suburbs. Each year, 5% of all families living in the city move to the suburbs, while during the same time period only 1% of those living in the suburbs move into the city. Hint: Take state 1 to represent families living in the city, state 2 to represent families living in the suburbs, and one time period to equal a year.

17. Construct a transition matrix for the following Markov chain: Every four years, voters in a New England town elect a new mayor because a town ordinance prohibits mayors from succeeding themselves. Past data indicate that a Democratic mayor is succeeded by another Democrat 30% of the time and by a Republican 70% of the time. A Republican mayor, however, is succeeded by another Republican 60% of the time and by a Democrat 40% of the time. Hint: Take state 1 to represent a Republican mayor in office, state 2 to represent a Democratic mayor in office, and one time period to be four years.

18. Construct a transition matrix for the following Markov chain: The apple harvest in New York orchards is classified as poor, average, or good. Historical data indicate that if the harvest is poor one year, then there is a 40% chance of having a good harvest the next year, a 50% chance of having an average harvest, and a 10% chance of having another poor harvest. If a harvest is average one year, the chance of a poor, average, or good harvest the next year is 20%, 60%, and 20%, respectively.
If a harvest is good, then the chance of a poor, average, or good harvest the next year is 25%, 65%, and 10%, respectively. Hint: Take state 1 to be a poor harvest, state 2 to be an average harvest, state 3 to be a good harvest, and one time period to equal one year.

19. Construct a transition matrix for the following Markov chain. Brand X and brand Y control the majority of the soap powder market in a particular region, and each has promoted its own product extensively. As a result of past advertising campaigns, it is known that over a two-year period of time 10% of brand Y customers change to brand X and 25% of all other customers change to brand X. Furthermore, 15% of brand X customers change to brand Y and 30% of all other customers change to brand Y. The major brands also lose customers to smaller competitors, with 5% of brand X customers switching to a minor brand during a two-year time period and 2% of brand Y customers doing likewise. All other customers remain loyal to their past brand of soap powder. Hint: Take state 1 to be a brand X customer, state 2 a brand Y customer, state 3 another brand customer, and one time period to be two years.

1.2 Operations

The simplest relationship between two matrices is equality. Intuitively, one feels that two matrices should be equal if their corresponding elements are equal. This is the case, providing the matrices are of the same order.

Definition 1 Two matrices A = [aij]p×n and B = [bij]p×n are equal if they have the same order and if aij = bij (i = 1, 2, 3, . . . , p; j = 1, 2, 3, . . . , n). Thus, the equality

$$\begin{bmatrix} 5x + 2y \\ x - 3y \end{bmatrix} = \begin{bmatrix} 7 \\ 1 \end{bmatrix}$$

implies that 5x + 2y = 7 and x − 3y = 1.

The intuitive definition for matrix addition is also the correct one.

Definition 2 If A = [aij] and B = [bij] are both of order p × n, then A + B is a p × n matrix C = [cij] where cij = aij + bij (i = 1, 2, 3, . . . , p; j = 1, 2, 3, . . . , n). Thus,

$$\begin{bmatrix} 5 & 1 \\ 7 & 3 \\ -2 & -1 \end{bmatrix} + \begin{bmatrix} -6 & 3 \\ 2 & -1 \\ 4 & 1 \end{bmatrix} = \begin{bmatrix} 5 + (-6) & 1 + 3 \\ 7 + 2 & 3 + (-1) \\ (-2) + 4 & (-1) + 1 \end{bmatrix} = \begin{bmatrix} -1 & 4 \\ 9 & 2 \\ 2 & 0 \end{bmatrix}$$

and

$$\begin{bmatrix} t^2 & 5 \\ 3t & 0 \end{bmatrix} + \begin{bmatrix} 1 & -6 \\ t & -t \end{bmatrix} = \begin{bmatrix} t^2 + 1 & -1 \\ 4t & -t \end{bmatrix};$$

but the matrices

$$\begin{bmatrix} 5 & 0 \\ -1 & 0 \\ 2 & 1 \end{bmatrix} \qquad \text{and} \qquad \begin{bmatrix} -6 & 1 \\ 2 & 1 \end{bmatrix}$$

cannot be added since they are not of the same order.

It is not difficult to show that the addition of matrices is both commutative and associative: that is, if A, B, C represent matrices of the same order, then

(A1) A + B = B + A,
(A2) A + (B + C) = (A + B) + C.

We define a zero matrix 0 to be a matrix consisting of only zero elements. Zero matrices of every order exist, and when one has the same order as another matrix A, we then have the additional property

(A3) A + 0 = A.

Subtraction of matrices is defined in a manner analogous to addition: the orders of the matrices involved must be identical and the operation is performed elementwise. Thus,

$$\begin{bmatrix} 5 & 1 \\ -3 & 2 \end{bmatrix} - \begin{bmatrix} 6 & -1 \\ 4 & -1 \end{bmatrix} = \begin{bmatrix} -1 & 2 \\ -7 & 3 \end{bmatrix}.$$

Another simple operation is that of multiplying a scalar times a matrix. Intuition guides one to perform the operation elementwise, and once again intuition is correct. Thus, for example,

$$7\begin{bmatrix} 1 & 2 \\ -3 & 4 \end{bmatrix} = \begin{bmatrix} 7 & 14 \\ -21 & 28 \end{bmatrix} \qquad \text{and} \qquad t\begin{bmatrix} 1 & 0 \\ 3 & 2 \end{bmatrix} = \begin{bmatrix} t & 0 \\ 3t & 2t \end{bmatrix}.$$

Definition 3 If A = [aij] is a p × n matrix and if λ is a scalar, then λA is a p × n matrix B = [bij] where bij = λaij (i = 1, 2, 3, . . . , p; j = 1, 2, 3, . . . , n).

Example 1 Find 5A − (1/2)B if

$$\mathbf{A} = \begin{bmatrix} 4 & 1 \\ 0 & 3 \end{bmatrix} \qquad \text{and} \qquad \mathbf{B} = \begin{bmatrix} 6 & -20 \\ 18 & 8 \end{bmatrix}.$$

Solution

$$5\mathbf{A} - \tfrac{1}{2}\mathbf{B} = 5\begin{bmatrix} 4 & 1 \\ 0 & 3 \end{bmatrix} - \tfrac{1}{2}\begin{bmatrix} 6 & -20 \\ 18 & 8 \end{bmatrix} = \begin{bmatrix} 20 & 5 \\ 0 & 15 \end{bmatrix} - \begin{bmatrix} 3 & -10 \\ 9 & 4 \end{bmatrix} = \begin{bmatrix} 17 & 15 \\ -9 & 11 \end{bmatrix}.$$
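These elementwise operations are straightforward to reproduce numerically. The sketch below again assumes Python with NumPy (one tool among several; see the Appendix); it redoes Example 1 and confirms that matrices of different orders cannot be added.

```python
import numpy as np

A = np.array([[4, 1],
              [0, 3]])
B = np.array([[6, -20],
              [18, 8]])

# Addition, subtraction, and scalar multiplication all act elementwise,
# exactly as in Definitions 2 and 3.
print(5 * A - 0.5 * B)    # reproduces Example 1: [[17, 15], [-9, 11]]

# Matrices of different orders cannot be added.
C = np.array([[5, 0], [-1, 0], [2, 1]])   # a 3 x 2 matrix
try:
    A + C
except ValueError as err:
    print("cannot add:", err)
```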
It is not difficult to show that if λ1 and λ2 are scalars, and if A and B are matrices of identical order, then (S1) λ1A = Aλ1, (S2) λ1(A + B) = λ1A + λ1B, (S3) (λ1 + λ2)A = λ1A + λ2A, (S4) λ1(λ2A) = (λ1λ2)A. The reader is cautioned that there is no such operation as matrix division. We will, however, define a somewhat analogous operation, namely matrix inversion, in Chapter 3. 8 Chapter 1 Matrices Problems 1.2 In Problems 1 through 26, let A= 1 3 2 4 , B= 5 7 6 8 , C= −1 3 0 −3 , ⎡ 3 D = ⎢⎢⎣−13 ⎤ 1 −22⎥⎥⎦, ⎡ −2 E = ⎢⎢⎣ 0 5 ⎤ 2 −−23⎥⎥⎦, ⎡ ⎤ 01 F = ⎢⎢⎣−01 00⎥⎥⎦. 26 51 22 1. Find 2A. 2. Find −5A. 3. Find 3D. 4. Find 10E. 5. Find −F. 6. Find A + B. 7. Find C + A. 8. Find D + E. 9. Find D + F. 10. Find A + D. 11. Find A − B. 12. Find C − A. 13. Find D − E. 14. Find D − F. 15. Find 2A + 3B. 16. Find 3A − 2C. 17. Find 0.1A + 0.2C. 18. Find −2E + F. 19. Find X if A + X = B. 20. Find Y if 2B + Y = C. 21. Find X if 3D − X = E. 22. Find Y if E − 2Y = F. 23. Find R if 4A + 5R = 10C. 24. Find S if 3F − 2S = D. 25. Verify directly that (A + B) + C = A + (B + C). 26. Verify directly that λ(A + B) = λA + λB. 27. Find 6A − θB if A= θ2 4 2θ − 1 1/θ and B= θ2 − 1 3/θ 6 θ3 + 2θ + 1 . 28. Prove Property (A1). 29. Prove Property (A3). 30. Prove Property (S2). 31. Prove Property (S3). 32. (a) Mr. Jones owns 200 shares of IBM and 150 shares of AT&T. Determine a portfolio matrix that reflects Mr. Jones’ holdings. (b) Over the next year, Mr. Jones triples his holdings in each company. What is his new portfolio matrix? (c) The following year Mr. Jones lists changes in his portfolio as −50 100 . What is his new portfolio matrix? 1.3 Matrix Multiplication 9 33. The inventory of an appliance store can be given by a 1 × 4 matrix in which the first entry represents the number of television sets, the second entry the number of air conditioners, the third entry the number of refrigerators, and the fourth entry the number of dishwashers. (a) Determine the inventory given on January 1 by 15 2 8 6 . (b) January sales are given by 4 0 2 3 . What is the inventory matrix on February 1? (c) February sales are given by 5 0 3 3 , and new stock added in February is given by 3 2 7 8 . What is the inventory matrix on March 1? 34. The daily gasoline supply of a local service station is given by a 1 × 3 matrix in which the first entry represents gallons of regular, the second entry gallons of premium, and the third entry gallons of super. (a) Determine the supply of gasoline at the close of business on Monday given by 14,000 8,000 6,000 . (b) Tuesday’s sales are given by 3,500 2,000 1,500 . What is the inventory matrix at day’s end? (c) Wednesday’s sales are given by 5,000 1,500 1,200 . In addition, the station received a delivery of 30,000 gallons of regular, 10,000 gallons of premium, but no super. What is the inventory at day’s end? 35. On a recent shopping trip Mary purchased 6 oranges, a dozen grapefruits, 8 apples, and 3 lemons. John purchased 9 oranges, 2 grapefruits, and 6 apples. Express each of their purchases as 1 × 4 matrices. What is the physical significance of the sum of these matrices? 1.3 Matrix Multiplication Matrix multiplication is the first operation we encounter where our intuition fails. First, two matrices are not multiplied together elementwise. Secondly, it is not always possible to multiply matrices of the same order while it is possible to multiply certain matrices of different orders. 
Thirdly, if A and B are two matrices for which multiplication is defined, it is generally not the case that AB = BA; that is, matrix multiplication is not a commutative operation. There are other properties of matrix multiplication, besides the three mentioned, that defy our intuition, and we shall illustrate them shortly. We begin by determining which matrices can be multiplied.

Rule 1 The product of two matrices AB is defined if the number of columns of A equals the number of rows of B.

Thus, if A and B are given by

$$\mathbf{A} = \begin{bmatrix} 6 & 1 & 0 \\ -1 & 2 & 1 \end{bmatrix} \qquad \text{and} \qquad \mathbf{B} = \begin{bmatrix} -1 & 0 & 1 & 0 \\ 3 & 2 & -2 & 1 \\ 4 & 1 & 1 & 0 \end{bmatrix}, \tag{6}$$

then the product AB is defined since A has three columns and B has three rows. The product BA, however, is not defined since B has four columns while A has only two rows.

When the product is written AB, A is said to premultiply B while B is said to postmultiply A.

Rule 2 If the product AB is defined, then the resultant matrix will have the same number of rows as A and the same number of columns as B.

Thus, the product AB, where A and B are given in (6), will have two rows and four columns since A has two rows and B has four columns.

An easy method of remembering these two rules is the following: write the orders of the matrices on paper in the sequence in which the multiplication is to be carried out; that is, if AB is to be found where A has order 2 × 3 and B has order 3 × 4, write

(2 × 3)(3 × 4) (7)

If the two adjacent inner numbers in (7) are equal (in this case, they are both three), the multiplication is defined. The order of the product matrix is obtained by canceling the adjacent numbers and using the two remaining numbers. Thus in (7), we cancel the adjacent 3’s and are left with 2 × 4, which in this case is the order of AB.

As a further example, consider the case where A is a 4 × 3 matrix while B is a 3 × 5 matrix. The product AB is defined since, in the notation (4 × 3)(3 × 5), the adjacent inner numbers are equal. The product will be a 4 × 5 matrix. The product BA, however, is not defined since in the notation (3 × 5)(4 × 3) the adjacent numbers are not equal. In general, one may schematically state the method as

(k × n)(n × p) = (k × p).

Rule 3 If the product AB = C is defined, where C is denoted by [cij], then the element cij is obtained by multiplying the elements in the ith row of A by the corresponding elements in the jth column of B and adding. Thus, if A has order k × n, and B has order n × p, and

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kn} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1p} \\ b_{21} & b_{22} & \cdots & b_{2p} \\ \vdots & \vdots & & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{np} \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1p} \\ c_{21} & c_{22} & \cdots & c_{2p} \\ \vdots & \vdots & & \vdots \\ c_{k1} & c_{k2} & \cdots & c_{kp} \end{bmatrix},$$

then c11 is obtained by multiplying the elements in the first row of A by the corresponding elements in the first column of B and adding; hence,

c11 = a11b11 + a12b21 + · · · + a1nbn1.

The element c12 is found by multiplying the elements in the first row of A by the corresponding elements in the second column of B and adding; hence,

c12 = a11b12 + a12b22 + · · · + a1nbn2.

The element ckp is obtained by multiplying the elements in the kth row of A by the corresponding elements in the pth column of B and adding; hence,

ckp = ak1b1p + ak2b2p + · · · + aknbnp.

Example 1 Find AB and BA if

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \qquad \text{and} \qquad \mathbf{B} = \begin{bmatrix} -7 & -8 \\ 9 & 10 \\ 0 & -11 \end{bmatrix}.$$
Solution

$$\mathbf{AB} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} -7 & -8 \\ 9 & 10 \\ 0 & -11 \end{bmatrix} = \begin{bmatrix} 1(-7) + 2(9) + 3(0) & 1(-8) + 2(10) + 3(-11) \\ 4(-7) + 5(9) + 6(0) & 4(-8) + 5(10) + 6(-11) \end{bmatrix}$$
$$= \begin{bmatrix} -7 + 18 + 0 & -8 + 20 - 33 \\ -28 + 45 + 0 & -32 + 50 - 66 \end{bmatrix} = \begin{bmatrix} 11 & -21 \\ 17 & -48 \end{bmatrix},$$

$$\mathbf{BA} = \begin{bmatrix} -7 & -8 \\ 9 & 10 \\ 0 & -11 \end{bmatrix} \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} = \begin{bmatrix} (-7)1 + (-8)4 & (-7)2 + (-8)5 & (-7)3 + (-8)6 \\ 9(1) + 10(4) & 9(2) + 10(5) & 9(3) + 10(6) \\ 0(1) + (-11)4 & 0(2) + (-11)5 & 0(3) + (-11)6 \end{bmatrix}$$
$$= \begin{bmatrix} -7 - 32 & -14 - 40 & -21 - 48 \\ 9 + 40 & 18 + 50 & 27 + 60 \\ 0 - 44 & 0 - 55 & 0 - 66 \end{bmatrix} = \begin{bmatrix} -39 & -54 & -69 \\ 49 & 68 & 87 \\ -44 & -55 & -66 \end{bmatrix}.$$

The preceding three rules can be incorporated into the following formal definition:

Definition 1 If A = [aij] is a k × n matrix and B = [bij] is an n × p matrix, then the product AB is defined to be a k × p matrix C = [cij] where

$$c_{ij} = \sum_{l=1}^{n} a_{il}b_{lj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj} \quad (i = 1, 2, \ldots, k; \; j = 1, 2, \ldots, p).$$

Example 2 Find AB if

$$\mathbf{A} = \begin{bmatrix} 2 & 1 \\ -1 & 0 \\ 3 & 1 \end{bmatrix} \qquad \text{and} \qquad \mathbf{B} = \begin{bmatrix} 3 & 1 & 5 & -1 \\ 4 & -2 & 1 & 0 \end{bmatrix}.$$

Solution

$$\mathbf{AB} = \begin{bmatrix} 2 & 1 \\ -1 & 0 \\ 3 & 1 \end{bmatrix} \begin{bmatrix} 3 & 1 & 5 & -1 \\ 4 & -2 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 2(3)+1(4) & 2(1)+1(-2) & 2(5)+1(1) & 2(-1)+1(0) \\ -1(3)+0(4) & -1(1)+0(-2) & -1(5)+0(1) & -1(-1)+0(0) \\ 3(3)+1(4) & 3(1)+1(-2) & 3(5)+1(1) & 3(-1)+1(0) \end{bmatrix}$$
$$= \begin{bmatrix} 10 & 0 & 11 & -2 \\ -3 & -1 & -5 & 1 \\ 13 & 1 & 16 & -3 \end{bmatrix}.$$

Note that in this example the product BA is not defined.

Example 3 Find AB and BA if

$$\mathbf{A} = \begin{bmatrix} 2 & 1 \\ -1 & 3 \end{bmatrix} \qquad \text{and} \qquad \mathbf{B} = \begin{bmatrix} 4 & 0 \\ 1 & 2 \end{bmatrix}.$$

Solution

$$\mathbf{AB} = \begin{bmatrix} 2 & 1 \\ -1 & 3 \end{bmatrix}\begin{bmatrix} 4 & 0 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} 2(4)+1(1) & 2(0)+1(2) \\ -1(4)+3(1) & -1(0)+3(2) \end{bmatrix} = \begin{bmatrix} 9 & 2 \\ -1 & 6 \end{bmatrix};$$

$$\mathbf{BA} = \begin{bmatrix} 4 & 0 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} 2 & 1 \\ -1 & 3 \end{bmatrix} = \begin{bmatrix} 4(2)+0(-1) & 4(1)+0(3) \\ 1(2)+2(-1) & 1(1)+2(3) \end{bmatrix} = \begin{bmatrix} 8 & 4 \\ 0 & 7 \end{bmatrix}.$$

This, therefore, is an example where both products AB and BA are defined but unequal.

Example 4 Find AB and BA if

$$\mathbf{A} = \begin{bmatrix} 3 & 1 \\ 0 & 4 \end{bmatrix} \qquad \text{and} \qquad \mathbf{B} = \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}.$$

Solution

$$\mathbf{AB} = \begin{bmatrix} 3 & 1 \\ 0 & 4 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 3 & 5 \\ 0 & 8 \end{bmatrix}, \qquad \mathbf{BA} = \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 3 & 1 \\ 0 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 5 \\ 0 & 8 \end{bmatrix}.$$

This, therefore, is an example where both products AB and BA are defined and equal.

In general, it can be shown that matrix multiplication has the following properties:

(M1) A(BC) = (AB)C (Associative Law)
(M2) A(B + C) = AB + AC (Left Distributive Law)
(M3) (B + C)A = BA + CA (Right Distributive Law)

providing that the matrices A, B, C have the correct orders so that the above multiplications and additions are defined. The one basic property that matrix multiplication does not possess is commutativity; that is, in general, AB does not equal BA (see Example 3). We hasten to add, however, that while matrices in general do not commute, it may very well be the case that, given two particular matrices, they do commute, as can be seen from Example 4.

Commutativity is not the only property that matrix multiplication lacks. We know from our experience with real numbers that if the product xy = 0, then either x = 0 or y = 0 or both are zero. Matrices do not possess this property, as the following example shows:

Example 5 Find AB if

$$\mathbf{A} = \begin{bmatrix} 4 & 2 \\ 2 & 1 \end{bmatrix} \qquad \text{and} \qquad \mathbf{B} = \begin{bmatrix} 3 & -4 \\ -6 & 8 \end{bmatrix}.$$

Solution

$$\mathbf{AB} = \begin{bmatrix} 4 & 2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} 3 & -4 \\ -6 & 8 \end{bmatrix} = \begin{bmatrix} 4(3)+2(-6) & 4(-4)+2(8) \\ 2(3)+1(-6) & 2(-4)+1(8) \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.$$

Thus, even though neither A nor B is zero, their product is zero.

One final “unfortunate” property of matrix multiplication is that the equation AB = AC does not imply B = C.

Example 6 Find AB and AC if

$$\mathbf{A} = \begin{bmatrix} 4 & 2 \\ 2 & 1 \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} 1 & 1 \\ 2 & 1 \end{bmatrix}, \qquad \mathbf{C} = \begin{bmatrix} 2 & 2 \\ 0 & -1 \end{bmatrix}.$$

Solution

$$\mathbf{AB} = \begin{bmatrix} 4 & 2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} 4(1)+2(2) & 4(1)+2(1) \\ 2(1)+1(2) & 2(1)+1(1) \end{bmatrix} = \begin{bmatrix} 8 & 6 \\ 4 & 3 \end{bmatrix};$$

$$\mathbf{AC} = \begin{bmatrix} 4 & 2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} 2 & 2 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 4(2)+2(0) & 4(2)+2(-1) \\ 2(2)+1(0) & 2(2)+1(-1) \end{bmatrix} = \begin{bmatrix} 8 & 6 \\ 4 & 3 \end{bmatrix}.$$

Thus, cancellation is not a valid operation in matrix algebra.
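The three failures just illustrated are easy to observe numerically. A brief sketch, again assuming Python with NumPy, where the @ operator denotes matrix multiplication:

```python
import numpy as np

# Example 3: multiplication is not commutative.
A = np.array([[2, 1], [-1, 3]])
B = np.array([[4, 0], [1, 2]])
print(A @ B)    # [[ 9  2], [-1  6]]
print(B @ A)    # [[ 8  4], [ 0  7]] -- a different matrix

# Example 5: a zero product from two nonzero factors.
A = np.array([[4, 2], [2, 1]])
B = np.array([[3, -4], [-6, 8]])
print(A @ B)    # the 2 x 2 zero matrix

# Example 6: AB = AC even though B and C differ, so cancellation fails.
B = np.array([[1, 1], [2, 1]])
C = np.array([[2, 2], [0, -1]])
print(np.array_equal(A @ B, A @ C))    # True
```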
The reader has no doubt wondered why this seemingly complicated procedure for matrix multiplication has been introduced when the more obvious method of multiplying matrices termwise could be used. The answer lies in systems of simultaneous linear equations. Consider the set of simultaneous linear equations given by

5x − 3y + 2z = 14,
x + y − 4z = −7,     (8)
7x − 3z = 1.

This system can easily be solved by the method of substitution. Matrix algebra, however, will give us an entirely new method for obtaining the solution. Consider the matrix equation

Ax = b     (9)

where

$$\mathbf{A} = \begin{bmatrix} 5 & -3 & 2 \\ 1 & 1 & -4 \\ 7 & 0 & -3 \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \qquad \text{and} \qquad \mathbf{b} = \begin{bmatrix} 14 \\ -7 \\ 1 \end{bmatrix}.$$

Here A, called the coefficient matrix, is simply the matrix whose elements are the coefficients of the unknowns x, y, z in (8). (Note that we have been very careful to put all the x coefficients in the first column, all the y coefficients in the second column, and all the z coefficients in the third column. The zero in the (3, 2) entry appears because the y coefficient in the third equation of system (8) is zero.) x and b are obtained in the obvious manner. One note of warning: there is a basic difference between the unknown matrix x in (9) and the unknown variable x. The reader should be especially careful not to confuse their respective identities.

Now, using our definition of matrix multiplication, we have that

$$\mathbf{Ax} = \begin{bmatrix} 5 & -3 & 2 \\ 1 & 1 & -4 \\ 7 & 0 & -3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} (5)(x) + (-3)(y) + (2)(z) \\ (1)(x) + (1)(y) + (-4)(z) \\ (7)(x) + (0)(y) + (-3)(z) \end{bmatrix} = \begin{bmatrix} 5x - 3y + 2z \\ x + y - 4z \\ 7x - 3z \end{bmatrix} = \begin{bmatrix} 14 \\ -7 \\ 1 \end{bmatrix}. \tag{10}$$

Using the definition of matrix equality, we see that (10) is precisely system (8). Thus (9) is an alternate way of representing the original system. It should come as no surprise, therefore, that by defining the matrices A, x, and b appropriately, we can represent any system of simultaneous linear equations by the matrix equation Ax = b.

Example 7 Put the following system into matrix form:

x − y + z + w = 5,
2x + y − z = 4,
3x + 2y + 2w = 0,
x − 2y + 3z + 4w = −1.

Solution Define

$$\mathbf{A} = \begin{bmatrix} 1 & -1 & 1 & 1 \\ 2 & 1 & -1 & 0 \\ 3 & 2 & 0 & 2 \\ 1 & -2 & 3 & 4 \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} 5 \\ 4 \\ 0 \\ -1 \end{bmatrix}.$$

The original system is then equivalent to the matrix system Ax = b.

Unfortunately, we are not yet in a position to solve systems that are in matrix form Ax = b. One method of solution depends upon the operation of inversion, and we must postpone a discussion of it until the inverse has been defined. For the present, however, we hope that the reader will be content with the knowledge that matrix multiplication, as we have defined it, does serve some useful purpose.

Problems 1.3

1. The order of A is 2 × 4, the order of B is 4 × 2, the order of C is 4 × 1, the order of D is 1 × 2, and the order of E is 4 × 4. Find the orders of

(a) AB, (b) BA, (c) AC, (d) CA, (e) CD, (f) AE, (g) EB, (h) EA, (i) ABC, (j) DAE, (k) EBA, (l) EECD.

In Problems 2 through 19, let

$$\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}, \qquad \mathbf{C} = \begin{bmatrix} -1 & 0 & 1 \\ 3 & -2 & 1 \end{bmatrix},$$

$$\mathbf{D} = \begin{bmatrix} 1 & 1 \\ -1 & 2 \\ 2 & -2 \end{bmatrix}, \qquad \mathbf{E} = \begin{bmatrix} -2 & 2 & 1 \\ 0 & -2 & -1 \\ 1 & 0 & 1 \end{bmatrix}, \qquad \mathbf{F} = \begin{bmatrix} 0 & 1 & 2 \\ -1 & -1 & 0 \\ 1 & 2 & 3 \end{bmatrix},$$

X = [1 −2], Y = [1 2 1].

2. Find AB. 3. Find BA. 4. Find AC. 5. Find BC. 6. Find CB. 7. Find XA. 8. Find XB. 9. Find XC. 10. Find AX. 11. Find CD. 12. Find DC. 13. Find YD. 14. Find YC. 15. Find DX. 16. Find XD. 17. Find EF. 18. Find FE. 19. Find YF.

20. Find AB if

$$\mathbf{A} = \begin{bmatrix} 2 & 6 \\ 3 & 9 \end{bmatrix} \qquad \text{and} \qquad \mathbf{B} = \begin{bmatrix} 3 & -6 \\ -1 & 2 \end{bmatrix}.$$

Note that AB = 0 but neither A nor B equals the zero matrix.

21. Find AB and CB if

$$\mathbf{A} = \begin{bmatrix} 3 & 2 \\ 1 & 0 \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} 2 & 4 \\ 1 & 2 \end{bmatrix}, \qquad \mathbf{C} = \begin{bmatrix} 1 & 6 \\ 3 & -4 \end{bmatrix}.$$

Thus show that AB = CB but A ≠ C.

22.
Compute the product 12 34 x y . 1.3 Matrix Multiplication 17 23. Compute the product ⎡ ⎤⎡ ⎤ 1 0 −1 x ⎣3 1 1⎦ ⎣y⎦. 13 0 z 24. Compute the product a11 a12 a21 a22 x y. 25. Compute the product ⎡⎤ b11 b21 b12 b22 b13 b23 2 ⎣−1⎦. 3 26. Evaluate the expression A2 − 4A − 5I for the matrix∗ A= 1 4 2 3 . 27. Evaluate the expression (A − I)(A + 2I) for the matrix∗ A= 3 −2 5 4 . 28. Evaluate the expression (I − A)(A2 − I) for the matrix∗ ⎡ ⎤ 2 −1 1 A = ⎣3 −2 1⎦. 0 01 29. Verify property (M1) for A= 2 1 1 3 , B= 0 −1 1 4 , C= 5 2 1 1 . 30. Prove Property (M2). 31. Prove Property (M3). 32. Put the following system of equations into matrix form: 2x + 3y = 10, 4x − 5y = 11. ∗I is defined in Section 1.4 18 Chapter 1 Matrices 33. Put the following system of equations into matrix form: x + z + y = 2, 3z + 2x + y = 4, y + x = 0. 34. Put the following system of equations into matrix form: 5x + 3y + 2z + 4w = 5, x + y + w = 0, 3x + 2y + 2z = −3, x + y + 2z + 3w = 4. 35. The price schedule for a Chicago to Los Angeles flight is given by P = [200 350 500], where the matrix elements pertain, respectively, to coach tickets, business-class tickets, and first-class tickets. The number of tickets purchased in each category for a particular flight is given by ⎡⎤ 130 N = ⎣ 20⎦. 10 Compute the products (a) PN, and (b) NP, and determine their significance. 36. The closing prices of a person’s portfolio during the past week are given by the matrix ⎡ 40 P = ⎢⎢⎣ 3 1 4 40 1 2 3 5 8 40 7 8 3 1 2 41 4 ⎤ 41 3 7 8 ⎥⎥⎦, 10 9 3 4 10 1 8 10 9 5 8 where the columns pertain to the days of the week, Monday through Friday, and the rows pertain to the prices of Orchard Fruits, Lion Airways, and Arrow Oil. The person’s holdings in each of these companies are given by the matrix H = [100 500 400]. Compute the products (a) HP, and (b) PH, and determine their significance. 37. The time requirements for a company to produce three products is given by the matrix ⎡ ⎤ 0.2 0.5 0.4 T = ⎣1.2 2.3 0.7⎦, 0.8 3.1 1.2 where the rows pertain to lamp bases, cabinets, and tables, respectively. The columns pertain to the hours of labor required for cutting the wood, assembling, and painting, respectively. The hourly wages of a carpenter to cut wood, 1.4 Special Matrices 19 of a craftsperson to assemble a product, and of a decorator to paint is given, respectively, by the elements of the matrix ⎡⎤ 10.50 W = ⎣14.00⎦. 12.25 Compute the product TW and determine its significance. 38. Continuing with the data given in the previous problem, assume further that the number of items on order for lamp bases, cabinets, and tables, respectively, is given by the matrix O = [1000 100 200]. Compute the product OTW, and determine its significance. 39. The results of a flu epidemic at a college campus are collected in the matrix ⎡ ⎤ 0.20 0.20 0.15 0.15 F = ⎣0.10 0.30 0.30 0.40⎦. 0.70 0.50 0.55 0.45 The elements denote percents converted to decimals. The columns pertain to freshmen, sophomores, juniors, and seniors, respectively, while the rows represent bedridden students, infected but ambulatory students, and well stu- dents, respectively. The male–female composition of each class is given by the matrix ⎡ 1050 C = ⎢⎢⎣1130600 ⎤ 950 1055000⎥⎥⎦. 860 1000 Compute the product FC, and determine its significance. 1.4 Special Matrices There are certain types of matrices that occur so frequently that it becomes advis- able to discuss them separately. One such type is the transpose. 
Given a matrix A, the transpose of A, denoted by AT and read A-transpose, is obtained by changing all the rows of A into columns of AT while preserving the order; hence, the first row of A becomes the first column of AT, while the second row of A becomes the second column of AT, and the last row of A becomes the last column of AT. Thus if

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}, \quad \text{then} \quad \mathbf{A}^T = \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix}$$

and if

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \end{bmatrix}, \quad \text{then} \quad \mathbf{A}^T = \begin{bmatrix} 1 & 5 \\ 2 & 6 \\ 3 & 7 \\ 4 & 8 \end{bmatrix}.$$

Definition 1 If A, denoted by [aij], is an n × p matrix, then the transpose of A, denoted by $\mathbf{A}^T = [a^T_{ij}]$, is a p × n matrix where $a^T_{ij} = a_{ji}$.

It can be shown that the transpose possesses the following properties:

(1) (AT)T = A,
(2) (λA)T = λAT where λ represents a scalar,
(3) (A + B)T = AT + BT,
(4) (A + B + C)T = AT + BT + CT,
(5) (AB)T = BTAT,
(6) (ABC)T = CTBTAT.

Transposes of sums and products of more than three matrices are defined in the obvious manner. We caution the reader to be alert to the ordering of properties (5) and (6). In particular, one should be aware that the transpose of a product is not the product of the transposes but rather the commuted product of the transposes.

Example 1 Find (AB)T and BTAT if

$$\mathbf{A} = \begin{bmatrix} 3 & 0 \\ 4 & 1 \end{bmatrix} \qquad \text{and} \qquad \mathbf{B} = \begin{bmatrix} -1 & 2 & 1 \\ 3 & -1 & 0 \end{bmatrix}.$$

Solution

$$\mathbf{AB} = \begin{bmatrix} -3 & 6 & 3 \\ -1 & 7 & 4 \end{bmatrix}, \qquad (\mathbf{AB})^T = \begin{bmatrix} -3 & -1 \\ 6 & 7 \\ 3 & 4 \end{bmatrix};$$

$$\mathbf{B}^T\mathbf{A}^T = \begin{bmatrix} -1 & 3 \\ 2 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 3 & 4 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -3 & -1 \\ 6 & 7 \\ 3 & 4 \end{bmatrix}.$$

Note that (AB)T = BTAT but ATBT is not defined.

A zero row in a matrix is a row containing only zeros, while a nonzero row is one that contains at least one nonzero element. A matrix is in row-reduced form if it satisfies four conditions:

(R1) All zero rows appear below nonzero rows when both types are present in the matrix.
(R2) The first nonzero element in any nonzero row is unity.
(R3) All elements directly below (that is, in the same column but in succeeding rows from) the first nonzero element of a nonzero row are zero.
(R4) The first nonzero element of any nonzero row appears in a later column (further to the right) than the first nonzero element in any preceding row.

Such matrices are invaluable for solving sets of simultaneous linear equations and developing efficient algorithms for performing important matrix operations. We shall have much more to say on these matters in later chapters. Here we are simply interested in recognizing when a given matrix is or is not in row-reduced form.

Example 2 Determine which of the following matrices are in row-reduced form:

$$\mathbf{A} = \begin{bmatrix} 1 & 0 & -2 & 4 & 7 \\ 0 & 0 & -6 & 5 & 7 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix},$$

$$\mathbf{C} = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 1 & 0 & 5 \end{bmatrix}, \qquad \mathbf{D} = \begin{bmatrix} -1 & -2 & 3 & 3 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 1 & 0 \end{bmatrix}.$$

Solution Matrix A is not in row-reduced form because the first nonzero element of the second row is not unity. This violates (R2). If a23 had been unity instead of −6, then the matrix would be in row-reduced form. Matrix B is not in row-reduced form because the second row is a zero row and it appears before the third row, which is a nonzero row. This violates (R1). If the second and third rows had been interchanged, then the matrix would be in row-reduced form. Matrix C is not in row-reduced form because the first nonzero element in row two appears in a later column, column 3, than the first nonzero element of row three. This violates (R4). If the second and third rows had been interchanged, then the matrix would be in row-reduced form. Matrix D is not in row-reduced form because the first nonzero element in row two appears in the third column, and everything below d23 is not zero. This violates (R3).
Had the 3–3 element been zero instead of unity, then the matrix would be in row-reduced form. For the remainder of this section, we concern ourselves with square matrices; that is, matrices having the same number of rows as columns. A diagonal matrix is a square matrix all of whose elements are zero except possibly those on the main diagonal. (Recall that the main diagonal consists of all the diagonal elements a11, a22, a33, and so on.) Thus, 50 0 −1 ⎡ ⎤ 300 and ⎣0 3 0⎦ 003 22 Chapter 1 Matrices are both diagonal matrices of order 2 × 2 and 3 × 3 respectively. The zero matrix is the special diagonal matrix having all the elements on the main diagonal equal to zero. An identity matrix is a diagonal matrix worthy of special consideration. Des- ignated by I, an identity is defined to be a diagonal matrix having all diagonal elements equal to one. Thus, 10 01 ⎡ ⎤ 1000 and ⎢⎢⎣00 1 0 0 1 00⎥⎥⎦ 0001 are the 2 × 2 and 4 × 4 identities respectively. The identity is perhaps the most important matrix of all. If the identity is of the appropriate order so that the following multiplication can be carried out, then for any arbitrary matrix A, AI = A and IA = A. A symmetric matrix is a matrix that is equal to its transpose while a skew symmetric matrix is a matrix that is equal to the negative of its transpose. Thus, a matrix A is symmetric if A = AT while it is skew symmetric if A = −AT. Examples of each are respectively ⎡ ⎤ 123 ⎡ ⎤ 0 2 −3 ⎣2 4 5⎦ and ⎣−2 0 1⎦. 356 3 −1 0 A matrix A = [aij] is called lower triangular if aij = 0 for j > i (that is, if all the elements above the main diagonal are zero) and upper triangular if aij = 0 for i > j (that is, if all the elements below the main diagonal are zero). Examples of lower and upper triangular matrices are, respectively, ⎡ ⎤ ⎡ ⎤ 5000 −1 2 4 1 ⎢⎢⎣−01 2 1 0 3 00⎥⎥⎦ and ⎢⎢⎣ 0 0 1 0 3 2 −15⎥⎥⎦. 2141 000 5 Theorem 1 The product of two lower (upper) triangular matrices is also lower (upper) triangular. Proof. Let A and B both be n × n lower triangular matrices. Set C = AB. We need to show that C is lower triangular, or equivalently, that cij = 0 when i < j. Now, n j−1 n cij = aikbkj = aikbkj + aikbkj. k=1 k=1 k=j 1.4 Special Matrices 23 We are given that aik = 0 when i < k, and bkj = 0 when k < j, because both A and B are lower triangular. Thus, j−1 j−1 aikbkj = aik(0) = 0 k=1 k=1 because k is always less than j. Furthermore, if we restrict i < j, then n n aikbkj = (0)bkj = 0 k=j k=j because k ≥ j > i. Therefore, cij = 0 when i < j. Finally, we define positive integral powers of a matrix in the obvious manner: A2 = AA, A3 = AAA and, in general, if n is a positive integer, An = AA . . . A. n times Thus, if A= 1 1 −2 3 , then A2 = 1 1 −2 3 1 1 −2 3 = −1 4 −8 7 . It follows directly from Property 5 that (A2)T = (AA)T = ATAT = (AT)2. We can generalize this result to the following property for any integral positive power n: (7) (An)T = (AT)n. Problems 1.4 1. Verify that (A + B)T = AT + BT where ⎡ ⎤ 1 5 −1 ⎡ ⎤ 613 A = ⎣2 1 3⎦ and B = ⎣ 2 0 −1⎦. 0 7 −8 −1 −7 2 2. Verify that (AB)T = BTAT, where ⎡ t A = ⎣1 1 t2 ⎤ 2t⎦ 0 and B= 3 t t 2t t+1 t2 0 t3 . 24 Chapter 1 Matrices 3. Simplify the following expressions: (a) (ABT)T, (c) (AT(B + CT))T, (e) ((A + AT)(A − AT))T. (b) AT + (A + BT)T, (d) ((AB)T + C)T, 4. Find XTX and XXT when ⎡⎤ 2 X = ⎣3⎦. 4 5. Find XTX and XXT when X = [1 −2 3 −4]. 6. Find XTAX when A= 2 3 3 4 and X = x y. 7. 
Determine which, if any, of the following matrices are in row-reduced form: ⎡ ⎤ ⎡ ⎤ 0 1 0 4 −7 1 1 0 4 −7 A = ⎢⎢⎣00 0 0 0 0 1 0 21⎥⎥⎦, B = ⎢⎢⎣00 1 0 0 1 1 0 21⎥⎥⎦, 0000 0 0001 5 ⎡ ⎤ ⎡ ⎤ 1 1 0 4 −7 0 1 0 4 −7 C = ⎢⎢⎣00 1 0 0 0 1 0 21⎥⎥⎦, D = ⎢⎢⎣00 0 0 0 0 0 0 01⎥⎥⎦, 0 0 0 1 −5 0000 0 ⎡ ⎤ 222 E = ⎣0 2 2⎦, ⎡ ⎤ 000 F = ⎣0 0 0⎦, ⎡ ⎤ 123 G = ⎣0 0 1⎦, 002 000 100 ⎡ ⎤ 000 H = ⎣0 1 0⎦, ⎡ ⎤ 011 J = ⎣1 0 2⎦, ⎡ ⎤ 1 02 K = ⎣0 −1 1⎦, 000 000 0 00 ⎡ ⎤ 200 L = ⎣0 2 0⎦, 000 ⎡ 1 M = ⎢⎣0 1 2 1 1⎤ 3 1 4 ⎥⎦, 001 ⎡ ⎤ 100 N = ⎣0 0 1⎦, 000 Q= 0 1 1 0 , R= 1 0 1 0 , S= 1 1 0 0 , T= 1 0 12 1 . 8. Determine which, if any, of the matrices in Problem 7 are upper triangular. 9. Must a square matrix in row-reduced form necessarily be upper triangular? 1.4 Special Matrices 25 10. Must an upper triangular matrix necessarily be in row-reduced form? 11. Can a matrix be both upper and lower triangular simultaneously? 12. Show that AB = BA, where ⎡ ⎤ −1 0 0 ⎡ ⎤ 500 A = ⎣ 0 3 0⎦ and B = ⎣0 3 0⎦. 001 002 13. Prove that if A and B are diagonal matrices of the same order, then AB = BA. 14. Does a 2 × 2 diagonal matrix commute with every other 2 × 2 matrix? 15. Compute the products AD and BD for the matrices ⎡ ⎤ 111 A = ⎣1 1 1⎦, 111 ⎡ ⎤ 012 B = ⎣3 4 5⎦, 678 ⎡ ⎤ 20 0 D = ⎣0 3 0⎦. 0 0 −5 What conclusions can you make about postmultiplying a square matrix by a diagonal matrix? 16. Compute the products DA and DB for the matrices defined in Problem 15. What conclusions can you make about premultiplying a square matrix by a diagonal matrix? 17. Prove that if a 2 × 2 matrix A commutes with every 2 × 2 diagonal matrix, then A must also be diagonal. Hint: Consider, in particular, the diagonal matrix D= 1 0 0 0 . 18. Prove that if an n × n matrix A commutes with every n × n diagonal matrix, then A must also be diagonal. 19. Compute D2 and D3 for the matrix D defined in Problem 15. 20. Find A3 if ⎡ ⎤ 100 A = ⎣0 2 0⎦. 003 21. Using the results of Problems 19 and 20 as a guide, what can be said about Dn if D is a diagonal matrix and n is a positive integer? 22. Prove that if D = [dij] is a diagonal matrix, then D2 = [di2j]. 26 Chapter 1 Matrices 23. Calculate D50 − 5D35 + 4I, where ⎡ ⎤ 00 0 D = ⎣0 1 0⎦. 0 0 −1 24. A square matrix A is nilpotent if An = 0 for some positive integer n. If n is the smallest positive integer for which An = 0 then A is nilpotent of index n. Show that ⎡ ⎤ −1 −1 −3 A = ⎣−5 −2 −6⎦ 213 is nilpotent of index 3. 25. Show that ⎡ ⎤ 0100 A = ⎢⎢⎣00 0 0 1 0 01⎥⎥⎦ 0000 is nilpotent. What is its index? 26. Prove that if A is a square matrix, then B = (A + AT)/2 is a symmetric matrix. 27. Prove that if A is a square matrix, then C = (A − AT)/2 is a skew symmetric matrix. 28. Using the results of the preceding two problems, prove that any square matrix can be written as the sum of a symmetric matrix and a skew-symmetric matrix. 29. Write the matrix A in Problem 1 as the sum of a symmetric matrix and skewsymmetric matrix. 30. Write the matrix B in Problem 1 as the sum of a symmetric matrix and a skew-symmetric matrix. 31. Prove that if A is any matrix, then AAT is symmetric. 32. Prove that the diagonal elements of a skew-symmetric matrix must be zero. 33. Prove that the transpose of an upper triangular matrix is lower triangular, and vice versa. 34. If P = [pij] is a transition matrix for a Markov chain (see Problem 16 of Section 1.1), then it can be shown with elementary probability theory that the i − j element of P2 denotes the probability of an object moving from state i to stage j over two time periods. 
More generally, the i − j element of Pn for any positive integer n denotes the probability of an object moving from state i to state j over n time periods. 1.4 Special Matrices 27 (a) Calculate P2 and P3 for the two-state transition matrix P= 0.1 0.4 0.9 0.6 . (b) Determine the probability of an object beginning in state 1 and ending in state 1 after two time periods. (c) Determine the probability of an object beginning in state 1 and ending in state 2 after two time periods. (d) Determine the probability of an object beginning in state 1 and ending in state 2 after three time periods. (e) Determine the probability of an object beginning in state 2 and ending in state 2 after three time periods. 35. Consider a two-state Markov chain. List the number of ways an object in state 1 can end in state 1 after three time periods. 36. Consider the Markov chain described in Problem 16 of Section 1.1. Determine (a) the probability that a family living in the city will find themselves in the suburbs after two years, and (b) the probability that a family living in the suburbs will find themselves living in the city after two years. 37. Consider the Markov chain described in Problem 17 of Section 1.1. Determine (a) the probability that there will be a Republican mayor eight years after a Republican mayor serves, and (b) the probability that there will be a Republican mayor 12 years after a Republican mayor serves. 38. Consider the Markov chain described in Problem 18 of Section 1.1. It is known that this year the apple harvest was poor. Determine (a) the probability that next year’s harvest will be poor, and (b) the probability that the harvest in two years will be poor. 39. Consider the Markov chain described in Problem 19 of Section 1.1. Determine (a) the probability that a brand X customer will be a brand X customer after 4 years, (b) after 6 years, and (c) the probability that a brand X customer will be a brand Y customer after 4 years. 40. A graph consists of a set of nodes, which we shall designate by positive integers, and a set of arcs that connect various pairs of nodes. An adjacency matrix M associated with a particular graph is defined by mij = number of distinct arcs connecting node i to node j (a) Construct an adjacency matrix for the graph shown in Figure 1.1. (b) Calculate M2, and note that the i − j element of M2 is the number of paths consisting of two arcs that connect node i to node j. 28 Chapter 1 Matrices Figure 1.1 Figure 1.2 41. (a) Construct an adjacency matrix M for the graph shown in Figure 1.2. (b) Calculate M2, and use that matrix to determine the number of paths consisting of two arcs that connect node 1 to node 5. (c) Calculate M3, and use that matrix to determine the number of paths consisting of three arcs that connect node 2 to node 4. Figure 1.3 42. Figure 1.3 depicts a road network linking various cities. A traveler in city 1 needs to drive to city 7 and would like to do so by passing through the least 1.5 Submatrices and Partitioning 29 number of intermediate cities. Construct an adjacency matrix for this road network. Consider powers of this matrix to solve the traveler’s problem. 1.5 Submatrices and Partitioning Given any matrix A, a submatrix of A is a matrix obtained from A by the removal of any number of rows or columns. Thus, if ⎡ ⎤ 1234 A = ⎢⎢⎣ 5 9 6 10 7 11 128⎥⎥⎦, B= 10 14 12 16 , and C = [2 3 4], (11) 13 14 15 16 then B and C are both submatrices of A. 
Here B was obtained by removing from A the first and second rows together with the first and third columns, while C was obtained by removing from A the second, third, and fourth rows together with the first column. By removing no rows and no columns from A, it follows that A is a submatrix of itself. A matrix is said to be partitioned if it is divided into submatrices by horizontal and vertical lines between the rows and columns. By varying the choices of where to put the horizontal and vertical lines, one can partition a matrix in many different ways. Thus, ⎡ 1 2 3 4⎤ ⎡ 1 2 3 4⎤ ⎢⎢⎣ 5 9 6 10 7 11 128⎥⎥⎦ and ⎢⎢⎣ 5 9 6 10 7 11 128⎥⎥⎦ 13 14 15 16 13 14 15 16 are examples of two different partitions of the matrix A given in (11). If partitioning is carried out in a particularly judicious manner, it can be a great help in matrix multiplication. Consider the case where the two matrices A and B are to be multiplied together. If we partition both A and B into four submatrices, respectively, so that CD GH A= and B = EF JK where C through K represent submatrices, then the product AB may be obtained by simply carrying out the multiplication as if the submatrices were themselves elements. Thus, AB = CG + DJ EG + FJ CH + DK EH + FK , (12) providing the partitioning was such that the indicated multiplications are defined. It is not unusual to need products of matrices having thousands of rows and thousands of columns. Problem 42 of Section 1.4 dealt with a road network connecting seven cities. A similar network for a state with connections between all 30 Chapter 1 Matrices cities in the state would have a very large adjacency matrix associated with it, and its square is then the product of two such matrices. If we expand the network to include the entire United States, the associated matrix is huge, with one row and one column for each city and town in the country. Thus, it is not difficult to visualize large matrices that are too big to be stored in the internal memory of any modern day computer. And yet the product of such matrices must be computed. The solution procedure is partitioning. Large matrices are stored in external memory on peripheral devices, such as disks, and then partitioned. Appropriate submatrices are fetched from the peripheral devices as needed, computed, and the results again stored on the peripheral devices. An example is the product given in (12). If A and B are too large for the internal memory of a particular computer, but C through K are not, then the partitioned product can be computed. First, C and G are fetched from external memory and multiplied; the product is then stored in external memory. Next, D and J are fetched and multiplied. Then, the product CG is fetched and added to the product DJ. The result, which is the first partition of AB, is then stored in external memory, and the process continues. Example 1 Find AB if ⎡ ⎤ ⎡ ⎤ 31 2 132 A = ⎣1 4 −1⎦ and B = ⎣−1 0 1⎦. 31 2 011 Solution We first partition A and B in the following manner ⎡ ⎤ 31 2 ⎡ ⎤ 132 A = ⎣1 4 −1⎦ and B = ⎣−1 0 1⎦; 31 2 011 then, ⎡ AB = ⎢⎢⎢⎣ 3 1 3 1 4 1 1 −1 3 0 + 2 −1 1 −1 3 0 + 2 0 0 1 1 ⎤ 3 1 3 1 4 1 2 1 + 2 −1 2 1 +2 1 1 ⎥⎥⎥⎦ ⎡ = ⎢⎣ 2 −3 9 3 + 0 0 2 −1 2 9+0 2 ⎤ 7 6 + 2 −1 ⎥⎦ 7+2 ⎡ ⎤⎡ ⎤ 2 11 9 2 11 9 = ⎣−3 2 5⎦ = ⎣−3 2 5⎦. 2 11 9 2 11 9 1.5 Submatrices and Partitioning 31 Example 2 Find AB if ⎡ ⎤ 3 A = ⎢⎢⎢⎢⎣020 1 0 0 0 0 031⎥⎥⎥⎥⎦ ⎡ 21 and B = ⎣−1 1 01 ⎤ 000 0 0 0⎦. 
Example 2 Find AB if

A = \left[\begin{array}{cc|c} 3 & 1 & 0 \\ 2 & 0 & 0 \\ \hline 0 & 0 & 3 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array}\right] \quad and \quad B = \left[\begin{array}{cc|ccc} 2 & 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & 0 & 1 \end{array}\right].

Solution From the indicated partitions, we find that

AB = \begin{bmatrix} \begin{bmatrix} 3 & 1 \\ 2 & 0 \end{bmatrix}\begin{bmatrix} 2 & 1 \\ -1 & 1 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix}\begin{bmatrix} 0 & 0 \end{bmatrix} & \begin{bmatrix} 3 & 1 \\ 2 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix}\begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \\ \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 2 & 1 \\ -1 & 1 \end{bmatrix} + \begin{bmatrix} 3 \\ 1 \\ 0 \end{bmatrix}\begin{bmatrix} 0 & 0 \end{bmatrix} & \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 3 \\ 1 \\ 0 \end{bmatrix}\begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \end{bmatrix}

= \begin{bmatrix} 5 & 4 & 0 & 0 & 0 \\ 4 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.

Note that we partitioned in order to make maximum use of the zero submatrices of both A and B.

A matrix A that can be partitioned into the form

A = \begin{bmatrix} A_1 & & & & 0 \\ & A_2 & & & \\ & & A_3 & & \\ & & & \ddots & \\ 0 & & & & A_n \end{bmatrix}

is called block diagonal. Such matrices are particularly easy to multiply because in partitioned form they act as diagonal matrices.

Problems 1.5

1. Which of the following are submatrices of the given A and why?

A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}

(a) \begin{bmatrix} 1 & 3 \\ 7 & 9 \end{bmatrix}, (b) [1], (c) \begin{bmatrix} 1 & 2 \\ 8 & 9 \end{bmatrix}, (d) \begin{bmatrix} 4 & 6 \\ 7 & 9 \end{bmatrix}.

2. Determine all possible submatrices of

A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.

3. Given the matrices A and B (as shown), find AB using the partitionings indicated:

A = \begin{bmatrix} 1 & -1 & 2 \\ 3 & 0 & 4 \\ 0 & 1 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 5 & 2 & 0 & 2 \\ 1 & -1 & 3 & 1 \\ 0 & 1 & 1 & 4 \end{bmatrix}.

4. Partition the given matrices A and B and, using the results, find AB.

A = \begin{bmatrix} 4 & 1 & 0 & 0 \\ 2 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 3 & 2 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & 0 & 2 & 1 \\ 0 & 0 & 1 & -1 \end{bmatrix}.

5. Compute A^2 for the matrix A given in Problem 4 by partitioning A into block diagonal form.

6. Compute B^2 for the matrix B given in Problem 4 by partitioning B into block diagonal form.

7. Use partitioning to compute A^2 and A^3 for

A = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.

What is A^n for any positive integral power n > 3?

8. Use partitioning to compute A^2 and A^3 for

A = \begin{bmatrix} 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & -2 & -4 & 0 & 0 \\ 0 & 0 & -1 & 3 & 4 & 0 & 0 \\ 0 & 0 & 1 & -2 & -3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1 \end{bmatrix}.

What is A^n for any positive integral power n?

1.6 Vectors

Definition 1 A vector is a 1 × n or n × 1 matrix.

A 1 × n matrix is called a row vector, while an n × 1 matrix is a column vector. The elements are called the components of the vector, while the number of components in the vector, in this case n, is its dimension. Thus,

\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}

is an example of a 3-dimensional column vector, while

[t  2t  -t  0]

is an example of a 4-dimensional row vector.

The reader who is already familiar with vectors will notice that we have not defined vectors as directed line segments. We have done this intentionally, first because in more than three dimensions this geometric interpretation loses its significance, and second, because in the general mathematical framework, vectors are not directed line segments. However, the idea of representing a finite dimensional vector by its components, and hence as a matrix, is one that is acceptable to the scientist, engineer, and mathematician. Also, as a bonus, since a vector is nothing more than a special matrix, we have already defined scalar multiplication, vector addition, and vector equality.

A vector y (vectors will be designated by boldface lowercase letters) has associated with it a nonnegative number called its magnitude or length, designated by ||y||.

Definition 2 If y = [y1 y2 ... yn], then ||y|| = \sqrt{(y_1)^2 + (y_2)^2 + \cdots + (y_n)^2}.

Example 1 Find ||y|| if y = [1 2 3 4].

Solution ||y|| = \sqrt{(1)^2 + (2)^2 + (3)^2 + (4)^2} = \sqrt{30}.

If z is a column vector, ||z|| is defined in a completely analogous manner.

Example 2 Find ||z|| if

z = \begin{bmatrix} -1 \\ 2 \\ -3 \end{bmatrix}.

Solution ||z|| = \sqrt{(-1)^2 + (2)^2 + (-3)^2} = \sqrt{14}.
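Definition 2 is a one-line computation on a computer. A minimal numpy sketch, using the vector of Example 1:

    import numpy as np

    y = np.array([1, 2, 3, 4])
    magnitude = np.sqrt(np.sum(y**2))   # sqrt(1 + 4 + 9 + 16) = sqrt(30)
    # numpy supplies the same quantity directly as the vector norm:
    assert np.isclose(magnitude, np.linalg.norm(y))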
A vector is called a unit vector if its magnitude is equal to one. A nonzero vector is said to be normalized if it is divided by its magnitude. Thus, a normalized vector is also a unit vector.

Example 3 Normalize the vector [1 0 -3 2 -1].

Solution The magnitude of this vector is

\sqrt{(1)^2 + (0)^2 + (-3)^2 + (2)^2 + (-1)^2} = \sqrt{15}.

Hence, the normalized vector is

[1/\sqrt{15}  0  -3/\sqrt{15}  2/\sqrt{15}  -1/\sqrt{15}].

In passing, we note that when a general vector is written y = [y1 y2 ... yn], one of the subscripts of each element of the matrix is deleted. This is done solely for the sake of convenience. Since a row vector has only one row (a column vector has only one column), it is redundant and unnecessary to exhibit the row subscript (the column subscript).

Problems 1.6

1. Find p if 5x - 2y = b, where x = [1 3 0]^T, y = [2 p 1]^T, and b = [1 13 -2]^T.

2. Find x if 3x + 2y = b, where y = [3 1 6 0]^T and b = [2 -4 1 1]^T.

3. Find y if 2x - 5y = -b, where x = [2 -1 3] and b = [1 0 -1].

4. Using the vectors defined in Problem 2, calculate, if possible, (a) yb, (b) yb^T, (c) y^Tb, (d) b^Ty.

5. Using the vectors defined in Problem 3, calculate, if possible, (a) x + 2b, (b) xb^T, (c) x^Tb, (d) b^Tb.

6. Determine which of the following are unit vectors:
(a) [1 1], (b) [1/2 1/2], (c) [1/\sqrt{2} -1/\sqrt{2}], (d) [0 1 0]^T, (e) [1/2 1/3 1/6]^T, (f) [1/\sqrt{3} 1/\sqrt{3} 1/\sqrt{3}]^T, (g) (1/2)[1 1 1 1]^T, (h) (1/6)[1 5 3 1]^T, (i) (1/\sqrt{3})[-1 0 1 -1].

7. Find ||y|| if (a) y = [1 -1], (b) y = [3 4], (c) y = [-1 -1 1], (d) y = [1/2 1/2 1/2], (e) y = [2 1 -1 3], (f) y = [0 -1 5 3 2].

8. Find ||x|| if (a) x = [1 -1]^T, (b) x = [1 2]^T, (c) x = [1 1 1]^T, (d) x = [1 -1 1 -1]^T, (e) x = [1 2 3 4]^T, (f) x = [1 0 1 0]^T.

9. Find ||y|| if (a) y = [2 1 -1 3], (b) y = [0 -1 5 3 2].

10. Prove that a normalized vector must be a unit vector.

11. Show that the matrix equation

\begin{bmatrix} 1 & 1 & -2 \\ 2 & 5 & 3 \\ -1 & 3 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -3 \\ 11 \\ 5 \end{bmatrix}

is equivalent to the vector equation

x\begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix} + y\begin{bmatrix} 1 \\ 5 \\ 3 \end{bmatrix} + z\begin{bmatrix} -2 \\ 3 \\ 1 \end{bmatrix} = \begin{bmatrix} -3 \\ 11 \\ 5 \end{bmatrix}.

12. Convert the following system of equations into a vector equation:
2x + 3y = 10,
4x + 5y = 11.

13. Convert the following system of equations into a vector equation:
3x + 4y + 5z + 6w = 1,
y - 2z + 8w = 0,
-x + y + 2z - w = 0.

14. Using the definition of matrix multiplication, show that the jth column of (AB) = A × (jth column of B).

15. Verify the result of Problem 14 by showing that the first column of the product AB with

A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \quad and \quad B = \begin{bmatrix} 1 & 1 \\ -1 & 0 \\ 2 & -3 \end{bmatrix}

is A[1 -1 2]^T, while the second column of the product is A[1 0 -3]^T.

16. A distribution row vector d for an N-state Markov chain (see Problem 16 of Section 1.1 and Problem 34 of Section 1.4) is an N-dimensional row vector having as its components, one for each state, the probabilities that an object in the system is in each of the respective states. Determine a distribution vector for a three-state Markov chain if 50% of the objects are in state 1, 30% are in state 2, and 20% are in state 3.

17. Let d^(k) denote the distribution vector for a Markov chain after k time periods. Thus, d^(0) represents the initial distribution. It follows that

d^(k) = d^(0)P^k = d^(k-1)P,

where P is the transition matrix and P^k is its kth power. Consider the Markov chain described in Problem 16 of Section 1.1. (a) Explain the physical significance of saying d^(0) = [0.6 0.4]. (b) Find the distribution vectors d^(1) and d^(2).

18. Consider the Markov chain described in Problem 19 of Section 1.1. (a) Explain the physical significance of saying d^(0) = [0.4 0.5 0.1]. (b) Find the distribution vectors d^(1) and d^(2).

19. Consider the Markov chain described in Problem 17 of Section 1.1. (a) Determine an initial distribution vector if the town currently has a Democratic mayor, and (b) show that the components of d^(1) are the probabilities that the next mayor will be a Republican and a Democrat, respectively.

20. Consider the Markov chain described in Problem 18 of Section 1.1. (a) Determine an initial distribution vector if this year's crop is known to be poor. (b) Calculate d^(2) and use it to determine the probability that the harvest will be good in two years.
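The iteration d^(k) = d^(k-1)P in Problems 16 through 20 is a natural loop on a computer. A minimal numpy sketch follows; the transition matrix of Problem 16 of Section 1.1 is not reproduced in this excerpt, so the sketch borrows the two-state matrix of Problem 34 of Section 1.4 and the initial distribution of Problem 17(a).

    import numpy as np

    # Two-state transition matrix (rows sum to 1) and initial distribution.
    P = np.array([[0.1, 0.9],
                  [0.4, 0.6]])
    d = np.array([0.6, 0.4])    # d^(0)

    for k in range(1, 4):       # d^(k) = d^(k-1) P
        d = d @ P
        print(k, d)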
1.7 The Geometry of Vectors

Vector arithmetic can be described geometrically for two- and three-dimensional vectors. For simplicity, we consider two dimensions here; the extension to three-dimensional vectors is straightforward. For convenience, we restrict our examples to row vectors, but note that all constructions are equally valid for column vectors.

A two-dimensional vector v = [a b] is identified with the point (a, b) on the plane, measured from the origin a units along the horizontal axis and then b units parallel to the vertical axis. We can then draw an arrow beginning at the origin and ending at the point (a, b). This arrow, or directed line segment, as shown in Figure 1.4, represents the vector geometrically. It follows immediately from Pythagoras's theorem and Definition 2 of Section 1.6 that the length of the directed line segment is the magnitude of the vector. The angle associated with a vector, denoted by θ in Figure 1.4, is the angle from the positive horizontal axis to the directed line segment measured in the counterclockwise direction.

Figure 1.4

Example 1 Graph the vectors v = [2 4] and u = [-1 1] and determine the magnitude and angle of each.

Solution The vectors are drawn in Figure 1.5. Using Pythagoras's theorem and elementary trigonometry, we have, for v,

||v|| = \sqrt{(2)^2 + (4)^2} ≈ 4.47, tan θ = 4/2 = 2, and θ = 63.4°.

For u, similar computations yield

||u|| = \sqrt{(-1)^2 + (1)^2} ≈ 1.41, tan θ = 1/(-1) = -1, and θ = 135°.

Figure 1.5

To construct the sum of two vectors u + v geometrically, graph u normally, translate v so that its initial point coincides with the terminal point of u, being careful to preserve both the magnitude and direction of v, and then draw an arrow from the origin to the terminal point of v after translation. This arrow geometrically represents the sum u + v. The process is depicted in Figure 1.6 for the two vectors defined in Example 1.

Figure 1.6

To construct the difference of two vectors u - v geometrically, graph both u and v normally and construct an arrow from the terminal point of v to the terminal point of u. This arrow geometrically represents the difference u - v. The process is depicted in Figure 1.7 for the two vectors defined in Example 1. To measure the magnitude and direction of u - v, translate it so that its initial point is at the origin, being careful to preserve both its magnitude and direction, and then measure the translated vector.

Figure 1.7
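The magnitudes and angles of Example 1 can be reproduced numerically. A minimal numpy sketch, using arctan2 so that the angle comes out measured counterclockwise from the positive horizontal axis:

    import numpy as np

    u = np.array([-1.0, 1.0])
    v = np.array([2.0, 4.0])

    def magnitude_and_angle(w):
        # Angle in degrees, counterclockwise from the positive horizontal axis.
        theta = np.degrees(np.arctan2(w[1], w[0])) % 360
        return np.linalg.norm(w), theta

    print(magnitude_and_angle(v))   # (4.47..., 63.4...)
    print(magnitude_and_angle(u))   # (1.41..., 135.0)
    print(u + v, u - v)             # the vectors constructed in Figures 1.6 and 1.7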
Both geometrical sums and differences involve translations of vectors. This suggests that a vector is not altered by translating it to another position in the plane, providing both its magnitude and direction are preserved. Many physical phenomena, such as velocity and force, are completely described by their magnitudes and directions. For example, a velocity of 60 miles per hour in the northwest direction is a complete description of that velocity, and it is independent of where that velocity occurs. This independence is the rationale behind translating vectors geometrically. Geometrically, vectors having the same magnitude and direction are called equivalent, and they are regarded as being equal even though they may be located at different positions in the plane.

A scalar multiplication ku is defined geometrically to be a vector having length |k| times the length of u, with direction equal to that of u when k is positive, and opposite to that of u when k is negative. Effectively, ku is an elongation of u by a factor of |k| when |k| is greater than unity, or a contraction of u by a factor of |k| when |k| is less than unity, followed by no rotation when k is positive, or a rotation of 180 degrees when k is negative.

Example 2 Find -2u and (1/2)v geometrically for the vectors defined in Example 1.

Solution To construct -2u, we double the length of u and then rotate the resulting vector by 180°. To construct (1/2)v, we halve the length of v and effect no rotation. These constructions are illustrated in Figure 1.8.

Figure 1.8

Problems 1.7

In Problems 1 through 16, geometrically construct the indicated vector operations for

u = [3 -1], v = [-2 5], w = [-4 -4], x = [3 5]^T, and y = [0 -2]^T.

1. u + v. 2. u + w. 3. v + w. 4. x + y. 5. x - y. 6. y - x. 7. u - v. 8. w - u. 9. u - w. 10. 2x. 11. 3x. 12. -2x. 13. (1/2)u. 14. -(1/2)u. 15. (1/3)v. 16. -(1/4)w.

17. Determine the angle of u. 18. Determine the angle of v. 19. Determine the angle of w. 20. Determine the angle of x. 21. Determine the angle of y.

22. For arbitrary two-dimensional row vectors u and v, construct on the same graph u + v and v + u. (a) Show that u + v = v + u. (b) Show that the sum is a diagonal of a parallelogram having u and v as two of its sides.

2 Simultaneous Linear Equations

2.1 Linear Systems

Systems of simultaneous equations appear frequently in engineering and scientific problems. Because of their importance and because they lend themselves to matrix analysis, we devote this entire chapter to their solutions. We are interested in systems of the form

a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1,
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2,
\vdots
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m.  (1)

We assume that the coefficients aij (i = 1, 2, ..., m; j = 1, 2, ..., n) and the quantities bi (i = 1, 2, ..., m) are all known scalars. The quantities x1, x2, ..., xn represent unknowns.

Definition 1 A solution to (1) is a set of n scalars x1, x2, ..., xn that when substituted into (1) satisfies the given equations (that is, the equalities are valid).

System (1) is a generalization of systems considered earlier in that m can differ from n. If m > n, the system has more equations than unknowns. If m < n, the system has more unknowns than equations. If m = n, the system has as many unknowns as equations. In any case, the methods of Section 1.3 may be used to convert (1) into the matrix form

Ax = b,  (2)

where

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}, \quad x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \quad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}.

Thus, if m ≠ n, A will be rectangular and the dimensions of x and b will be different.

Example 1 Convert the following system to matrix form:

x + 2y - z + w = 4,
x + 3y + 2z + 4w = 9.

Solution

A = \begin{bmatrix} 1 & 2 & -1 & 1 \\ 1 & 3 & 2 & 4 \end{bmatrix}, \quad x = \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}, \quad b = \begin{bmatrix} 4 \\ 9 \end{bmatrix}.
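Once a system is in the form Ax = b, Definition 1 becomes a mechanical check: substitute a candidate vector and compare. A minimal numpy sketch, using Example 1's matrices and a candidate vector of my own choosing that happens to satisfy both equations:

    import numpy as np

    A = np.array([[1, 2, -1, 1],
                  [1, 3,  2, 4]])
    b = np.array([4, 9])

    candidate = np.array([-1, 2, 0, 1])      # a hypothetical proposed solution
    print(np.array_equal(A @ candidate, b))  # True: -1+4-0+1 = 4 and -1+6+0+4 = 9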
Example 2 Convert the following system to matrix form:

x - 2y = -9,
4x + y = 9,
2x + y = 7,
x - y = -1.

Solution

A = \begin{bmatrix} 1 & -2 \\ 4 & 1 \\ 2 & 1 \\ 1 & -1 \end{bmatrix}, \quad x = \begin{bmatrix} x \\ y \end{bmatrix}, \quad b = \begin{bmatrix} -9 \\ 9 \\ 7 \\ -1 \end{bmatrix}.

A system of equations given by (1) or (2) can possess no solutions, exactly one solution, or more than one solution (note that by a solution to (2) we mean a vector x which satisfies the matrix equality (2)). Examples of such systems are

x + y = 1,  (3)
x + y = 2,

x + y = 1,  (4)
x - y = 0,

x + y = 0,  (5)
2x + 2y = 0.

Equation (3) has no solutions, (4) admits only the solution x = y = 1/2, while (5) has solutions x = -y for any value of y.

Definition 2 A system of simultaneous linear equations is consistent if it possesses at least one solution. If no solution exists, the system is inconsistent.

Equation (3) is an example of an inconsistent system, while (4) and (5) represent examples of consistent systems.

Definition 3 A system given by (2) is homogeneous if b = 0 (the zero vector). If b ≠ 0 (at least one component of b differs from zero), the system is nonhomogeneous.

Equation (5) is an example of a homogeneous system.

Problems 2.1

In Problems 1 and 2, determine whether or not the proposed values of x, y, and z are solutions of the given systems.

1. x + y + 2z = 2, x - y - 2z = 0, x + 2y + 2z = 1.
(a) x = 1, y = -3, z = 2. (b) x = 1, y = -1, z = 1.

2. x + 2y + 3z = 6, x - 3y + 2z = 0, 3x - 4y + 7z = 6.
(a) x = 1, y = 1, z = 1. (b) x = 2, y = 2, z = 0. (c) x = 14, y = 2, z = -4.

3. Find a value for k such that x = 1, y = 2, and z = k is a solution of the system
2x + 2y + 4z = 1, 5x + y + 2z = 5, x - 3y - 2z = -3.

4. Find a value for k such that x = 2 and y = k is a solution of the system
3x + 5y = 11, 2x - 7y = -3.

5. Find a value for k such that x = 2k, y = -k, and z = 0 is a solution of the system
x + 2y + z = 0, -2x - 4y + 2z = 0, 3x - 6y - 4z = 1.

6. Find a value for k such that x = 2k, y = -k, and z = 0 is a solution of the system
x + 2y + 2z = 0, 2x - 4y + 2z = 0, -3x - 6y - 4z = 0.

7. Find a value for k such that x = 2k, y = -k, and z = 0 is a solution of the system
x + 2y + 2z = 0, 2x + 4y + 2z = 0, -3x - 6y - 4z = 1.

8. Put the system of equations given in Problem 4 into the matrix form Ax = b.

9. Put the system of equations given in Problem 1 into the matrix form Ax = b.

10. Put the system of equations given in Problem 2 into the matrix form Ax = b.

11. Put the system of equations given in Problem 6 into the matrix form Ax = b.

12. A manufacturer receives daily shipments of 70,000 springs and 45,000 pounds of stuffing for producing regular and support mattresses. Regular mattresses r require 50 springs and 30 pounds of stuffing; support mattresses s require 60 springs and 40 pounds of stuffing. The manufacturer wants to know how many mattresses of each type should be produced daily to utilize all available inventory. Show that this problem is equivalent to solving two equations in the two unknowns r and s.

13. A manufacturer produces desks and bookcases. Desks d require 5 hours of cutting time and 10 hours of assembling time. Bookcases b require 15 minutes of cutting time and one hour of assembling time. Each day, the manufacturer has available 200 hours for cutting and 500 hours for assembling. The manufacturer wants to know how many desks and bookcases should be scheduled for completion each day to utilize all available workpower. Show that this problem is equivalent to solving two equations in the two unknowns d and b.
14. A mining company has a contract to supply 70,000 tons of low-grade ore, 181,000 tons of medium-grade ore, and 41,000 tons of high-grade ore to a supplier. The company has three mines which it can work. Mine A produces 8000 tons of low-grade ore, 5000 tons of medium-grade ore, and 1000 tons of high-grade ore during each day of operation. Mine B produces 3000 tons of low-grade ore, 12,000 tons of medium-grade ore, and 3000 tons of high-grade ore for each day it is in operation. The figures for mine C are 1000, 10,000, and 2000, respectively. Show that the problem of determining how many days each mine must be operated to meet contractual demands without surplus is equivalent to solving a set of three equations in A, B, and C, where the unknowns denote the number of days each mine will be in operation.

15. A pet store has determined that each rabbit in its care should receive 80 units of protein, 200 units of carbohydrates, and 50 units of fat daily. The store carries four different types of feed that are appropriate for rabbits with the following compositions:

    Feed                        A     B     C     D
    Protein (units/oz)          5     4     8     12
    Carbohydrates (units/oz)    20    30    15    5
    Fat (units/oz)              3     3     10    7

The store wants to determine a blend of these four feeds that will meet the daily requirements of the rabbits. Show that this problem is equivalent to solving three equations in the four unknowns A, B, C, and D, where each unknown denotes the number of ounces of that feed in the blend.

16. A small company computes its end-of-the-year bonus b as 5% of the net profit after city and state taxes have been paid. The city tax c is 2% of taxable income, while the state tax s is 3% of taxable income with credit allowed for the city tax as a pretax deduction. This year, taxable income was $400,000. Show that b, c, and s are related by three simultaneous equations.

17. A gasoline producer has $800,000 in fixed annual costs and incurs an additional variable cost of $30 per barrel B of gasoline. The total cost C is the sum of the fixed and variable costs. The net sales S is computed on a wholesale price of $40 per barrel. (a) Show that C, B, and S are related by two simultaneous equations. (b) Show that the problem of determining how many barrels must be produced to break even, that is, for net sales to equal cost, is equivalent to solving a system of three equations.

18. (Leontief Closed Models) A closed economic model involves a society in which all the goods and services produced by members of the society are consumed by those members. No goods and services are imported from without and none are exported. Such a system involves N members, each of whom produces goods or services and charges for their use. The problem is to determine the prices each member should charge for his or her labor so that everyone breaks even after one year. For simplicity, it is assumed that each member produces one unit per year.

Consider a simple closed system consisting of a farmer, a carpenter, and a weaver. The farmer produces one unit of food each year, the carpenter produces one unit of finished wood products each year, and the weaver produces one unit of clothing each year. Let p1 denote the farmer's annual income (that is, the price she charges for her unit of food), let p2 denote the carpenter's annual income (that is, the price he charges for his unit of finished wood products), and let p3 denote the weaver's annual income.
Assume on an annual basis that the farmer and the carpenter consume 40% each of the available food, while the weaver eats the remaining 20%. Assume that the carpenter uses 25% of the wood products he makes, while the farmer uses 30% and the weaver uses 45%. Assume further that the farmer uses 50% of the weaver's clothing, while the carpenter uses 35% and the weaver consumes the remaining 15%. Show that a break-even equation for the farmer is

0.40p_1 + 0.30p_2 + 0.50p_3 = p_1,

while the break-even equation for the carpenter is

0.40p_1 + 0.25p_2 + 0.35p_3 = p_2.

What is the break-even equation for the weaver? Rewrite all three equations as a homogeneous system.

19. Paul, Jim, and Mary decide to help each other build houses. Paul will spend half his time on his own house and a quarter of his time on each of the houses of Jim and Mary. Jim will spend one third of his time on each of the three houses under construction. Mary will spend one sixth of her time on Paul's house, one third on Jim's house, and one half of her time on her own house. For tax purposes each must place a price on his or her labor, but they want to do so in a way that each will break even. Show that the process of determining break-even wages is a Leontief closed model comprised of three homogeneous equations.

20. Four third world countries each grow a different fruit for export and each uses the income from that fruit to pay for imports of the fruits from the other countries. Country A exports 20% of its fruit to country B, 30% to country C, 35% to country D, and uses the rest of its fruit for internal consumption. Country B exports 10% of its fruit to country A, 15% to country C, 35% to country D, and retains the rest for its own citizens. Country C does not export to country A; it divides its crop equally between countries B and D and its own people. Country D does not consume its own fruit. All of its fruit is for export with 15% going to country A, 40% to country B, and 45% to country C. Show that the problem of determining prices on the annual harvests of fruit so that each country breaks even is equivalent to solving four homogeneous equations in four unknowns.

21. (Leontief Input–Output Models) Consider an economy consisting of N sectors, with each producing goods or services unique to that sector. Let xi denote the amount produced by the ith sector, measured in dollars. Thus xi represents the dollar value of the supply of product i available in the economy. Assume that every sector in the economy has a demand for a proportion (which may be zero) of the output of every other sector. Thus, each sector j has a demand, measured in dollars, for the item produced in sector i. Let aij denote the proportion of item j's revenues that must be committed to the purchase of items from sector i in order for sector j to produce its goods or services. Assume also that there is an external demand, denoted by di and measured in dollars, for each item produced in the economy. The problem is to determine how much of each item should be produced to meet external demand without creating a surplus of any item. Show that for a two sector economy, the solution to this problem is given by the supply/demand equations

x_1 = a_{11}x_1 + a_{12}x_2 + d_1,
x_2 = a_{21}x_1 + a_{22}x_2 + d_2.

Show that this system is equivalent to the matrix equations x = Ax + d and (I - A)x = d. In this formulation, A is called the consumption matrix and d the demand vector. (A computational sketch of solving (I - A)x = d appears after this problem set.)
22. Determine A and d in the previous problem if sector 1 must expend half of its revenues purchasing goods from its own sector and one third of its revenues purchasing goods from the other sector, while sector 2 must expend one quarter of its revenues purchasing items from sector 1 and requires nothing from itself. In addition, the demands for items from these two sectors are $20,000 and $30,000, respectively.

23. A small town has three primary industries, coal mining (sector 1), transportation (sector 2), and electricity (sector 3). Production of one dollar of coal requires the purchase of 10 cents of electricity and 20 cents of transportation. Production of one dollar of transportation requires the purchase of 2 cents of coal and 35 cents of electricity. Production of one dollar of electricity requires the purchase of 10 cents of electricity, 50 cents of coal, and 30 cents of transportation. The town has external contracts for $50,000 of coal, $80,000 of transportation, and $30,000 of electricity. Show that the problem of determining how much coal, electricity, and transportation is required to supply the external demand without a surplus is equivalent to solving a Leontief input–output model. What are A and d?

24. An economy consists of four sectors: energy, tourism, transportation, and construction. Each dollar of income from energy requires the expenditure of 20 cents on energy costs, 10 cents on transportation, and 30 cents on construction. Each dollar of income gotten by the tourism sector requires the expenditure of 20 cents on tourism (primarily in the form of complimentary facilities for favored customers), 15 cents on energy, 5 cents on transportation, and 30 cents on construction. Each dollar of income from transportation requires the expenditure of 40 cents on energy and 10 cents on construction, while each dollar of income from construction requires the expenditure of 5 cents on construction, 25 cents on energy, and 10 cents on transportation. The only external demand is for tourism, and this amounts to $5 million a year. Show that the problem of determining how much energy, tourism, transportation, and construction is required to supply the external demand without a surplus is equivalent to solving a Leontief input–output model. What are A and d?

25. A constraint is often imposed on each column of the consumption matrix of a Leontief input–output model, that the sum of the elements in each column be less than unity. Show that this guarantees that each sector in the economy is profitable.
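As promised in Problem 21, here is a minimal numpy sketch of solving (I - A)x = d. The consumption matrix and demand vector are the ones read off, under my reading, from the statement of Problem 22: a11 = 1/2, a21 = 1/3, a12 = 1/4, a22 = 0.

    import numpy as np

    A = np.array([[1/2, 1/4],
                  [1/3, 0.0]])        # consumption matrix (assumed reading of Problem 22)
    d = np.array([20_000, 30_000])    # demand vector

    x = np.linalg.solve(np.eye(2) - A, d)   # solve (I - A)x = d
    print(x)   # production levels that meet external demand without surplus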
2.2 Solutions by Substitution

Most readers have probably encountered simultaneous equations in high school algebra. At that time, matrices were not available; hence other methods were developed to solve these systems, in particular, the method of substitution. We review this method in this section. In the next section, we develop its matrix equivalent, which is slightly more efficient and, more importantly, better suited for computer implementations.

Consider the system given by (1):

a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1,
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2,
\vdots
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m.

The method of substitution is the following: take the first equation and solve for x1 in terms of x2, x3, ..., xn and then substitute this value of x1 into all the other equations, thus eliminating it from those equations. (If x1 does not appear in the first equation, rearrange the equations so that it does. For example, one might have to interchange the order of the first and second equations.) This new set of equations is called the first derived set. Working with the first derived set, solve the second equation for x2 in terms of x3, x4, ..., xn and then substitute this value of x2 into the third, fourth, etc., equations, thus eliminating it. This new set is the second derived set. This process is kept up until the following set of equations is obtained:

x_1 = c_{12}x_2 + c_{13}x_3 + c_{14}x_4 + \cdots + c_{1n}x_n + d_1,
x_2 = c_{23}x_3 + c_{24}x_4 + \cdots + c_{2n}x_n + d_2,
x_3 = c_{34}x_4 + \cdots + c_{3n}x_n + d_3,
\vdots
x_m = c_{m,m+1}x_{m+1} + \cdots + c_{mn}x_n + d_m,  (6)

where the cij's and the di's are some combination of the original aij's and bi's. System (6) can be quickly solved by back substitution.

Example 1 Use the method of substitution to solve the system

r + 2s + t = 3,
2r + 3s - t = -6,
3r - 2s - 4t = -2.

Solution By solving the first equation for r and then substituting it into the second and third equations, we obtain the first derived set

r = 3 - 2s - t,
-s - 3t = -12,
-8s - 7t = -11.

By solving the second equation for s and then substituting it into the third equation, we obtain the second derived set

r = 3 - 2s - t,
s = 12 - 3t,
17t = 85.

By solving for t in the third equation and then substituting it into the remaining equations (of which there are none), we obtain the third derived set

r = 3 - 2s - t,
s = 12 - 3t,
t = 5.

Thus, the solution is t = 5, s = -3, r = 4.

Example 2 Use the method of substitution to solve the system

x + y + 3z = -1,
2x - 2y - z = 1,
5x + y + 8z = -2.

Solution The first derived set is

x = -1 - y - 3z,
-4y - 7z = 3,
-4y - 7z = 3.

The second derived set is

x = -1 - y - 3z,
y = -3/4 - (7/4)z,
0 = 0.

Since the third equation can not be solved for z, this is as far as we can go. Thus, since we can not obtain a unique value for z, the first and second equations will not yield a unique value for x and y. Caution: The third equation does not imply that z = 0. On the contrary, this equation says nothing at all about z; consequently, z is completely arbitrary. The second equation gives y in terms of z. Substituting this value into the first equation, we obtain x in terms of z. The solution therefore is x = -1/4 - (5/4)z and y = -3/4 - (7/4)z, with z arbitrary. Thus there are infinitely many solutions to the above system. However, once z is chosen, x and y are determined. If z is chosen to be -1, then x = y = 1, while if z is chosen to be 3, then x = -4, y = -6. The solutions can be expressed in the vector form

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -1/4 - (5/4)z \\ -3/4 - (7/4)z \\ z \end{bmatrix} = \begin{bmatrix} -1/4 \\ -3/4 \\ 0 \end{bmatrix} + z\begin{bmatrix} -5/4 \\ -7/4 \\ 1 \end{bmatrix}.
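The back-substitution step that finished Example 1 can be written directly in code. A minimal sketch in plain Python:

    # Back substitution for the final derived set of Example 1:
    #   r = 3 - 2s - t,  s = 12 - 3t,  t = 5.
    t = 5
    s = 12 - 3 * t        # s = -3
    r = 3 - 2 * s - t     # r = 4
    print(r, s, t)        # 4 -3 5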
Example 3 Use the method of substitution to solve

a + 2b - 3c + d = 1,
2a + 6b + 4c + 2d = 8.

Solution The first derived set is

a = 1 - 2b + 3c - d,
2b + 10c = 6.

The second derived set is

a = 1 - 2b + 3c - d,
b = 3 - 5c.

Again, since there are no more equations, this is as far as we can go, and since there are no defining equations for c and d, these two unknowns must be arbitrary. Solving for a and b in terms of c and d, we obtain the solution a = -5 + 13c - d, b = 3 - 5c; c and d are arbitrary. The solutions can be expressed in the vector form

\begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} = \begin{bmatrix} -5 + 13c - d \\ 3 - 5c \\ c \\ d \end{bmatrix} = \begin{bmatrix} -5 \\ 3 \\ 0 \\ 0 \end{bmatrix} + c\begin{bmatrix} 13 \\ -5 \\ 1 \\ 0 \end{bmatrix} + d\begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \end{bmatrix}.

Note that while c and d are arbitrary, once they are given a particular value, a and b are automatically determined. For example, if c is chosen as -1 and d as 4, a solution is a = -22, b = 8, c = -1, d = 4, while if c is chosen as 0 and d as -3, a solution is a = -2, b = 3, c = 0, d = -3.

Example 4 Use the method of substitution to solve the following system:

x + 3y = 4,
2x - y = 1,
3x + 2y = 5,
5x + 15y = 20.

Solution The first derived set is

x = 4 - 3y,
-7y = -7,
-7y = -7,
0 = 0.

The second derived set is

x = 4 - 3y,
y = 1,
0 = 0,
0 = 0.

Thus, the solution is y = 1, x = 1, or in vector form

\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.

Problems 2.2

Use the method of substitution to solve the following systems:

1. x + 2y - 2z = -1, 2x + y + z = 5, -x + y - z = -2.
2. x + y - z = 0, 3x + 2y + 4z = 0.
3. x + 3y = 4, 2x - y = 1, -2x - 6y = -8, 4x - 9y = -5, -6x + 3y = -3.
4. 4r - 3s + 2t = 1, r + s - 3t = 4, 5r - 2s - t = 5.
5. 2l - m + n - p = 1, l + 2m - n + 2p = -1, l - 3m + 2n - 3p = 2.
6. 2x + y - z = 0, x + 2y + z = 0, 3x - y + 2z = 0.
7. x + 2y - z = 5, 2x - y + 2z = 1, 2x + 2y - z = 7, x + 2y + z = 3.
8. x + 2y + z - 2w = 1, 2x + 2y - z - w = 3, 2x - 2y + 2z + 3w = 3, 3x + y - 2z - 3w = 1.

2.3 Gaussian Elimination

Although the method of substitution is straightforward, it is not the most efficient way to solve simultaneous equations, and it does not lend itself well to electronic computing. Computers have difficulty symbolically manipulating the unknowns in algebraic equations. A striking feature of the method of substitution, however, is that the unknowns remain unaltered throughout the process: x remains x, y remains y, z remains z. Only the coefficients of the unknowns and the numbers on the right side of the equations change from one derived set to the next. Thus, we can save a good deal of writing, and develop a useful representation for computer processing, if we direct our attention to just the numbers themselves.

Definition 1 Given the system Ax = b, the augmented matrix, designated by A^b, is a matrix obtained from A by adding to it one extra column, namely b.

Thus, if

A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \quad and \quad b = \begin{bmatrix} 7 \\ 8 \end{bmatrix}, \quad then \quad A^b = \begin{bmatrix} 1 & 2 & 3 & 7 \\ 4 & 5 & 6 & 8 \end{bmatrix},

while if

A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \quad and \quad b = \begin{bmatrix} -1 \\ -2 \\ -3 \end{bmatrix}, \quad then \quad A^b = \begin{bmatrix} 1 & 2 & 3 & -1 \\ 4 & 5 & 6 & -2 \\ 7 & 8 & 9 & -3 \end{bmatrix}.

In particular, the system

x + y - 2z = -3,
2x + 5y + 3z = 11,
-x + 3y + z = 5

has the matrix representation

\begin{bmatrix} 1 & 1 & -2 \\ 2 & 5 & 3 \\ -1 & 3 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -3 \\ 11 \\ 5 \end{bmatrix},

with an augmented matrix of

A^b = \begin{bmatrix} 1 & 1 & -2 & -3 \\ 2 & 5 & 3 & 11 \\ -1 & 3 & 1 & 5 \end{bmatrix}.

Example 1 Write the set of equations in x, y, and z associated with the augmented matrix

A^b = \begin{bmatrix} -2 & 1 & 3 & 8 \\ 0 & 4 & 5 & -3 \end{bmatrix}.

Solution

-2x + y + 3z = 8,
4y + 5z = -3.
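Building an augmented matrix is a single call in numpy. A minimal sketch, using the system displayed above Definition 1's example:

    import numpy as np

    A = np.array([[ 1, 1, -2],
                  [ 2, 5,  3],
                  [-1, 3,  1]])
    b = np.array([-3, 11, 5])

    Ab = np.column_stack([A, b])   # append b to A as one extra column
    print(Ab)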
A second striking feature of the method of substitution is that every derived set is different from the system that preceded it. The method continues creating new derived sets until it has one that is particularly easy to solve by back-substitution. Of course, there is no purpose in solving any derived set, regardless of how easy it is, unless we are assured beforehand that it has the same solution as the original system. Three elementary operations that alter equations but do not change their solutions are:

(i) Interchange the positions of any two equations.
(ii) Multiply an equation by a nonzero scalar.
(iii) Add to one equation a scalar times another equation.

If we restate these operations in words appropriate to an augmented matrix, we obtain the elementary row operations:

(E1) Interchange any two rows in a matrix.
(E2) Multiply any row of a matrix by a nonzero scalar.
(E3) Add to one row of a matrix a scalar times another row of that same matrix.

Gaussian elimination is a matrix method for solving simultaneous linear equations. The augmented matrix for the system is created, and then it is transformed into a row-reduced matrix (see Section 1.4) using elementary row operations. This is most often accomplished by using operation (E3) with each diagonal element in a matrix to create zeros in all columns directly below it, beginning with the first column and moving successively through the matrix, column by column. The system of equations associated with a row-reduced matrix can be solved easily by back-substitution, if we solve each equation for the first unknown that appears in it. This is the unknown associated with the first nonzero element in each nonzero row of the final augmented matrix.

Example 2 Use Gaussian elimination to solve

x + 3y = 4,
2x - y = 1,
3x + 2y = 5,
5x + 15y = 20.

Solution The augmented matrix for this system is

\begin{bmatrix} 1 & 3 & 4 \\ 2 & -1 & 1 \\ 3 & 2 & 5 \\ 5 & 15 & 20 \end{bmatrix}.

Then,

\begin{bmatrix} 1 & 3 & 4 \\ 2 & -1 & 1 \\ 3 & 2 & 5 \\ 5 & 15 & 20 \end{bmatrix} → \begin{bmatrix} 1 & 3 & 4 \\ 0 & -7 & -7 \\ 3 & 2 & 5 \\ 5 & 15 & 20 \end{bmatrix}   (by adding to the second row (-2) times the first row)

→ \begin{bmatrix} 1 & 3 & 4 \\ 0 & -7 & -7 \\ 0 & -7 & -7 \\ 5 & 15 & 20 \end{bmatrix}   (by adding to the third row (-3) times the first row)

→ \begin{bmatrix} 1 & 3 & 4 \\ 0 & -7 & -7 \\ 0 & -7 & -7 \\ 0 & 0 & 0 \end{bmatrix}   (by adding to the fourth row (-5) times the first row)

→ \begin{bmatrix} 1 & 3 & 4 \\ 0 & 1 & 1 \\ 0 & -7 & -7 \\ 0 & 0 & 0 \end{bmatrix}   (by multiplying the second row by -1/7)

→ \begin{bmatrix} 1 & 3 & 4 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.   (by adding to the third row (7) times the second row)

The system of equations associated with this last augmented matrix in row-reduced form is

x + 3y = 4,
y = 1,
0 = 0,
0 = 0.

Solving the second equation for y and then the first equation for x, we obtain x = 1 and y = 1, which is also the solution to the original set of equations. Compare this solution with Example 4 of the previous section.

The notation (→) should be read "is transformed into"; an equality sign is not correct because the transformed matrix is not equal to the original one.

Example 3 Use Gaussian elimination to solve

r + 2s + t = 3,
2r + 3s - t = -6,
3r - 2s - 4t = -2.

Solution The augmented matrix for this system is

\begin{bmatrix} 1 & 2 & 1 & 3 \\ 2 & 3 & -1 & -6 \\ 3 & -2 & -4 & -2 \end{bmatrix}.

Then,

\begin{bmatrix} 1 & 2 & 1 & 3 \\ 2 & 3 & -1 & -6 \\ 3 & -2 & -4 & -2 \end{bmatrix} → \begin{bmatrix} 1 & 2 & 1 & 3 \\ 0 & -1 & -3 & -12 \\ 3 & -2 & -4 & -2 \end{bmatrix}   (by adding to the second row (-2) times the first row)

→ \begin{bmatrix} 1 & 2 & 1 & 3 \\ 0 & -1 & -3 & -12 \\ 0 & -8 & -7 & -11 \end{bmatrix}   (by adding to the third row (-3) times the first row)

→ \begin{bmatrix} 1 & 2 & 1 & 3 \\ 0 & 1 & 3 & 12 \\ 0 & -8 & -7 & -11 \end{bmatrix}   (by multiplying the second row by (-1))

→ \begin{bmatrix} 1 & 2 & 1 & 3 \\ 0 & 1 & 3 & 12 \\ 0 & 0 & 17 & 85 \end{bmatrix}   (by adding to the third row (8) times the second row)

→ \begin{bmatrix} 1 & 2 & 1 & 3 \\ 0 & 1 & 3 & 12 \\ 0 & 0 & 1 & 5 \end{bmatrix}.   (by multiplying the third row by 1/17)

The system of equations associated with this last augmented matrix in row-reduced form is

r + 2s + t = 3,
s + 3t = 12,
t = 5.

Solving the third equation for t, then the second equation for s, and, lastly, the first equation for r, we obtain r = 4, s = -3, and t = 5, which is also the solution to the original set of equations. Compare this solution with Example 1 of the previous section.
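The whole procedure is compact in code. The following is a minimal numpy sketch of forward elimination followed by back substitution; it assumes a square system whose pivots are nonzero in order, and it omits the pivoting strategies discussed in Section 2.4.

    import numpy as np

    def gaussian_elimination(Ab):
        """Row-reduce an augmented matrix Ab and back-substitute.

        A minimal sketch for square systems with a unique solution; no
        row interchanges and no handling of inconsistent systems.
        """
        Ab = Ab.astype(float)
        n = Ab.shape[0]
        for i in range(n):                      # forward elimination
            Ab[i] = Ab[i] / Ab[i, i]            # scale the pivot row (E2)
            for j in range(i + 1, n):
                Ab[j] = Ab[j] - Ab[j, i] * Ab[i]   # clear below the pivot (E3)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):          # back substitution
            x[i] = Ab[i, -1] - Ab[i, i + 1:n] @ x[i + 1:]
        return x

    Ab = np.array([[1, 2, 1, 3], [2, 3, -1, -6], [3, -2, -4, -2]])
    print(gaussian_elimination(Ab))   # [ 4. -3.  5.], as in Example 3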
Whenever one element in a matrix is used to cancel another element to zero by elementary row operation (E3), the first element is called the pivot. In Example 3, we first used the element in the 1-1 position to cancel the element in the 2-1 position, and then to cancel the element in the 3-1 position. In both of these operations, the unity element in the 1-1 position was the pivot. Later, we used the unity element in the 2-2 position to cancel the element -8 in the 3-2 position; here, the 2-2 element was the pivot.

While transforming a matrix into row-reduced form, it is advisable to adhere to three basic principles:

● Completely transform one column to the required form before considering another column.
● Work on columns in order, from left to right.
● Never use an operation if it will change a zero in a previously transformed column.

As a consequence of this last principle, one never involves the ith row of a matrix in an elementary row operation after the ith column has been transformed into its required form. That is, once the first column has the proper form, no pivot element should ever again come from the first row; once the second column has the proper form, no pivot element should ever again come from the second row; and so on.

When an element we want to use as a pivot is itself zero, we interchange rows using operation (E1).

Example 4 Use Gaussian elimination to solve

2c + 3d = 4,
a + 3c + d = 2,
a + b + 2c = 0.

Solution The augmented matrix is

\begin{bmatrix} 0 & 0 & 2 & 3 & 4 \\ 1 & 0 & 3 & 1 & 2 \\ 1 & 1 & 2 & 0 & 0 \end{bmatrix}.

Normally, we would use the element in the 1-1 position to cancel to zero the two elements directly below it, but we cannot because it is zero. To proceed with the reduction process, we must interchange the first row with either of the other two rows. The choice is arbitrary.

\begin{bmatrix} 0 & 0 & 2 & 3 & 4 \\ 1 & 0 & 3 & 1 & 2 \\ 1 & 1 & 2 & 0 & 0 \end{bmatrix} → \begin{bmatrix} 1 & 0 & 3 & 1 & 2 \\ 0 & 0 & 2 & 3 & 4 \\ 1 & 1 & 2 & 0 & 0 \end{bmatrix}   (by interchanging the first row with the second row)

→ \begin{bmatrix} 1 & 0 & 3 & 1 & 2 \\ 0 & 0 & 2 & 3 & 4 \\ 0 & 1 & -1 & -1 & -2 \end{bmatrix}.   (by adding to the third row (-1) times the first row)

Next, we would like to use the element in the 2-2 position to cancel to zero the element in the 3-2 position, but we cannot because that prospective pivot is zero. We use elementary row operation (E1) once again. The transformation yields

→ \begin{bmatrix} 1 & 0 & 3 & 1 & 2 \\ 0 & 1 & -1 & -1 & -2 \\ 0 & 0 & 2 & 3 & 4 \end{bmatrix}   (by interchanging the second row with the third row)

→ \begin{bmatrix} 1 & 0 & 3 & 1 & 2 \\ 0 & 1 & -1 & -1 & -2 \\ 0 & 0 & 1 & 1.5 & 2 \end{bmatrix}.   (by multiplying the third row by 0.5)

The system of equations associated with this last augmented matrix in row-reduced form is

a + 3c + d = 2,
b - c - d = -2,
c + 1.5d = 2.

We use the third equation to solve for c, the second equation to solve for b, and the first equation to solve for a, because these are the unknowns associated with the first nonzero element of each nonzero row in the final augmented matrix. We have no defining equation for d, so this unknown remains arbitrary. The solution is a = -4 + 3.5d, b = -0.5d, c = 2 - 1.5d, and d arbitrary, or in vector form

\begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} = \begin{bmatrix} -4 + 3.5d \\ -0.5d \\ 2 - 1.5d \\ d \end{bmatrix} = \begin{bmatrix} -4 \\ 0 \\ 2 \\ 0 \end{bmatrix} + \frac{d}{2}\begin{bmatrix} 7 \\ -1 \\ -3 \\ 2 \end{bmatrix}.

This is also the solution to the original set of equations.

The derived set of equations associated with a row-reduced, augmented matrix may contain an absurd equation, such as 0 = 1. In such cases, we conclude that the derived set is inconsistent, because no values of the unknowns can simultaneously satisfy all the equations. In particular, it is impossible to choose values of the unknowns that will make the absurd equation true. Since the derived set has the same solutions as the original set, it follows that the original set of equations is also inconsistent.
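Detecting such an absurd row is mechanical: it is a row whose coefficient entries are all zero but whose augmented entry is not. A minimal numpy sketch of the test; the sample matrix anticipates the row-reduced form derived in Example 5 below.

    import numpy as np

    def is_inconsistent(Ab, tol=1e-12):
        """Return True if a row-reduced augmented matrix contains a row
        of the form [0 ... 0 | c] with c != 0, i.e., the equation 0 = c."""
        coeffs, rhs = Ab[:, :-1], Ab[:, -1]
        return any(np.all(np.abs(row) < tol) and abs(c) > tol
                   for row, c in zip(coeffs, rhs))

    Ab = np.array([[1, 2, 1.5, 4], [0, 1, 0.85, 0.9], [0, 0, 0, 1.0]])
    print(is_inconsistent(Ab))   # True: the last row says 0 = 1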
Example 5 Use Gaussian elimination to solve

2x + 4y + 3z = 8,
3x - 4y - 4z = 3,
5x - z = 12.

Solution The augmented matrix for this system is

\begin{bmatrix} 2 & 4 & 3 & 8 \\ 3 & -4 & -4 & 3 \\ 5 & 0 & -1 & 12 \end{bmatrix}.

Then,

\begin{bmatrix} 2 & 4 & 3 & 8 \\ 3 & -4 & -4 & 3 \\ 5 & 0 & -1 & 12 \end{bmatrix} → \begin{bmatrix} 1 & 2 & 1.5 & 4 \\ 3 & -4 & -4 & 3 \\ 5 & 0 & -1 & 12 \end{bmatrix}   (by multiplying the first row by 1/2)

→ \begin{bmatrix} 1 & 2 & 1.5 & 4 \\ 0 & -10 & -8.5 & -9 \\ 5 & 0 & -1 & 12 \end{bmatrix}   (by adding to the second row (-3) times the first row)

→ \begin{bmatrix} 1 & 2 & 1.5 & 4 \\ 0 & -10 & -8.5 & -9 \\ 0 & -10 & -8.5 & -8 \end{bmatrix}   (by adding to the third row (-5) times the first row)

→ \begin{bmatrix} 1 & 2 & 1.5 & 4 \\ 0 & 1 & 0.85 & 0.9 \\ 0 & -10 & -8.5 & -8 \end{bmatrix}   (by multiplying the second row by -1/10)

→ \begin{bmatrix} 1 & 2 & 1.5 & 4 \\ 0 & 1 & 0.85 & 0.9 \\ 0 & 0 & 0 & 1 \end{bmatrix}.   (by adding to the third row (10) times the second row)

The system of equations associated with this last augmented matrix in row-reduced form is

x + 2y + 1.5z = 4,
y + 0.85z = 0.9,
0 = 1.

Since no values of x, y, and z can make this last equation true, this system, as well as the original one, has no solution.

Finally, we note that most matrices can be transformed into a variety of row-reduced forms. If a row-reduced matrix has two nonzero rows, then a different row-reduced matrix is easily constructed by adding to the first row any nonzero constant times the second row. The equations associated with both augmented matrices, however, will have identical solutions.

Problems 2.3

In Problems 1 through 5, construct augmented matrices for the given systems of equations:

1. x + 2y = -3, 3x + y = 1.
2. x + 2y - z = -1, 2x - 3y + 2z = 4.
3. a + 2b = 5, -3a + b = 13, 4a + 3b = 0.
4. 2r + 4s = 2, 3r + 2s + t = 8, 5r - 3s + 7t = 15.
5. 2r + 3s - 4t = 12, 3r - 2s = -1, 8r - s - 4t = 10.

In Problems 6 through 11, write the set of equations associated with the given augmented matrix and the specified variables.

6. A^b = \begin{bmatrix} 1 & 2 & 5 \\ 0 & 1 & 8 \end{bmatrix}; variables: x and y.

7. A^b = \begin{bmatrix} 1 & -2 & 3 & 10 \\ 0 & 1 & -5 & -3 \\ 0 & 0 & 1 & 4 \end{bmatrix}; variables: x, y, and z.

8. A^b = \begin{bmatrix} 1 & -3 & 12 & 40 \\ 0 & 1 & -6 & -200 \\ 0 & 0 & 1 & 25 \end{bmatrix}; variables: r, s, and t.

9. A^b = \begin{bmatrix} 1 & 3 & 0 & -8 \\ 0 & 1 & 4 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}; variables: x, y, and z.

10. A^b = \begin{bmatrix} 1 & -7 & 2 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}; variables: a, b, and c.

11. A^b = \begin{bmatrix} 1 & -1 & 0 & 1 \\ 0 & 1 & 0 & -2 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{bmatrix}; variables: u, v, and w.

12. Solve the system of equations defined in Problem 6.
13. Solve the system of equations defined in Problem 7.
14. Solve the system of equations defined in Problem 8.
15. Solve the system of equations defined in Problem 9.
16. Solve the system of equations defined in Problem 10.
17. Solve the system of equations defined in Problem 11.

In Problems 18 through 24, use elementary row operations to transform the given matrices into row-reduced form:

18. \begin{bmatrix} 1 & -2 & 5 \\ -3 & 7 & 8 \end{bmatrix}.

19. \begin{bmatrix} 4 & 24 & 20 \\ 2 & 11 & -8 \end{bmatrix}.

20. \begin{bmatrix} 0 & -1 & 6 \\ 2 & 7 & -5 \end{bmatrix}.

21. \begin{bmatrix} 1 & 2 & 3 & 4 \\ -1 & -1 & 2 & 3 \\ -2 & 3 & 0 & 0 \end{bmatrix}.

22. \begin{bmatrix} 0 & 1 & -2 & 4 \\ 1 & 3 & 2 & 1 \\ -2 & 3 & 1 & 2 \end{bmatrix}.

23. \begin{bmatrix} 1 & 3 & 2 & 0 \\ -2 & -4 & 3 & -1 \\ 1 & 0 & -1 & 3 \\ 2 & -1 & 4 & 2 \end{bmatrix}.

24. \begin{bmatrix} 2 & 3 & 4 & 6 & 0 & 10 \\ -5 & -8 & 15 & 1 & 3 & 40 \\ 3 & 3 & 5 & 4 & 4 & 20 \end{bmatrix}.

25. Solve Problem 1.
26. Solve Problem 2.
27. Solve Problem 3.
28. Solve Problem 4.
29. Solve Problem 5.
30. Use Gaussian elimination to solve Problem 1 of Section 2.2.
31. Use Gaussian elimination to solve Problem 2 of Section 2.2.
32. Use Gaussian elimination to solve Problem 3 of Section 2.2.
33. Use Gaussian elimination to solve Problem 4 of Section 2.2.
34. Use Gaussian elimination to solve Problem 5 of Section 2.2.
35. Determine a production schedule that satisfies the requirements of the manufacturer described in Problem 12 of Section 2.1.
36. Determine a production schedule that satisfies the requirements of the manufacturer described in Problem 13 of Section 2.1.
37. Determine a production schedule that satisfies the requirements of the manufacturer described in Problem 14 of Section 2.1.
38. Determine feed blends that satisfy the nutritional requirements of the pet store described in Problem 15 of Section 2.1.
39. Determine the bonus for the company described in Problem 16 of Section 2.1.
40. Determine the number of barrels of gasoline that the producer described in Problem 17 of Section 2.1 must manufacture to break even.
41. Determine the annual incomes of each sector of the Leontief closed model described in Problem 18 of Section 2.1.
42. Determine the wages of each person in the Leontief closed model described in Problem 19 of Section 2.1.
43. Determine the total sales revenue for each country of the Leontief closed model described in Problem 20 of Section 2.1.
44. Determine the production quotas for each sector of the economy described in Problem 22 of Section 2.1.

45. An elementary matrix is a square matrix E having the property that the product EA is the result of applying a single elementary row operation on the matrix A. Form a matrix H from the 4 × 4 identity matrix I by interchanging any two rows of I, and then compute the product HA for any 4 × 4 matrix A of your choosing. Is H an elementary matrix? How would one construct elementary matrices corresponding to operation (E1)?

46. Form a matrix G from the 4 × 4 identity matrix I by multiplying any one row of I by the number 5, and then compute the product GA for any 4 × 4 matrix A of your choosing. Is G an elementary matrix? How would one construct elementary matrices corresponding to operation (E2)?

47. Form a matrix F from the 4 × 4 identity matrix I by adding to one row of I five times another row of I. Use any two rows of your choosing. Compute the product FA for any 4 × 4 matrix A of your choosing. Is F an elementary matrix? How would one construct elementary matrices corresponding to operation (E3)?

48. A solution procedure uniquely suited to matrix equations of the form x = Ax + d is iteration. A trial solution x^(0) is proposed, and then progressively better estimates x^(1), x^(2), x^(3), ... for the solution are obtained iteratively from the formula

x^(i+1) = Ax^(i) + d.

The iterations terminate when two successive estimates differ by less than a prespecified acceptable tolerance. If the system comes from a Leontief input–output model, then a reasonable initialization is x^(0) = 2d. Apply this method to the system defined by Problem 22 of Section 2.1. Stop after two iterations.

49. Use the iteration method described in the previous problem to solve the system defined in Problem 23 of Section 2.1. In particular, find the first two iterations by hand calculations, and then use a computer to complete the iteration process.

50. Use the iteration method described in Problem 48 to solve the system defined in Problem 24 of Section 2.1. In particular, find the first two iterations by hand calculations, and then use a computer to complete the iteration process.

2.4 Pivoting Strategies

Gaussian elimination is often programmed for computer implementation. Since all computers round or truncate numbers to a finite number of digits (e.g., the fraction 1/3 might be stored as 0.33333, but never as the infinite decimal 0.333333...), roundoff error can be significant. A number of strategies have been developed to minimize the effects of such errors.

The most popular strategy is partial pivoting, which requires that a pivot element always be larger in absolute value than any element below it in the same column. This is accomplished by interchanging rows whenever necessary.
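Example 1 below carries out partial pivoting by hand. For readers following along computationally, here is a minimal numpy sketch of just the row-selection step; the function name and data layout are my own.

    import numpy as np

    def partial_pivot_row(Ab, col):
        """Return the index of the row, at or below position `col`, whose
        entry in column `col` is largest in absolute value."""
        return col + np.argmax(np.abs(Ab[col:, col]))

    Ab = np.array([[1, 2, 4, 18], [2, 12, -2, 9], [5, 26, 5, 14]], dtype=float)
    r = partial_pivot_row(Ab, 0)    # r = 2: the element 5 is the largest candidate
    Ab[[0, r]] = Ab[[r, 0]]         # interchange rows to move it into pivot position
    print(Ab)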
Example 1 Use partial pivoting with Gaussian elimination to solve the system

x + 2y + 4z = 18,
2x + 12y - 2z = 9,
5x + 26y + 5z = 14.

Solution The augmented matrix for this system is

\begin{bmatrix} 1 & 2 & 4 & 18 \\ 2 & 12 & -2 & 9 \\ 5 & 26 & 5 & 14 \end{bmatrix}.

Normally, the unity element in the 1-1 position would be the pivot. With partial pivoting, we compare this prospective pivot to all elements directly below it in the same column, and if any is larger in absolute value, as is the case here with the element 5 in the 3-1 position, we interchange rows to bring the largest element into the pivot position.

\begin{bmatrix} 1 & 2 & 4 & 18 \\ 2 & 12 & -2 & 9 \\ 5 & 26 & 5 & 14 \end{bmatrix} → \begin{bmatrix} 5 & 26 & 5 & 14 \\ 2 & 12 & -2 & 9 \\ 1 & 2 & 4 & 18 \end{bmatrix}.   (by interchanging the first and third rows)

Then,

→ \begin{bmatrix} 1 & 5.2 & 1 & 2.8 \\ 2 & 12 & -2 & 9 \\ 1 & 2 & 4 & 18 \end{bmatrix}   (by multiplying the first row by 1/5)

→ \begin{bmatrix} 1 & 5.2 & 1 & 2.8 \\ 0 & 1.6 & -4 & 3.4 \\ 1 & 2 & 4 & 18 \end{bmatrix}   (by adding to the second row (-2) times the first row)

→ \begin{bmatrix} 1 & 5.2 & 1 & 2.8 \\ 0 & 1.6 & -4 & 3.4 \\ 0 & -3.2 & 3 & 15.2 \end{bmatrix}.   (by adding to the third row (-1) times the first row)

The next pivot would normally be the element 1.6 in the 2-2 position. Before accepting it, however, we compare it to all elements directly below it in the same column. The largest element in absolute value is the element -3.2 in the 3-2 position. Therefore, we interchange rows to bring this larger element into the pivot position.

Note. We do not consider the element 5.2 in the 1-2 position, even though it is the largest element in its column. Comparisons are only made between a prospective pivot and all elements directly below it. Recall one of the three basic principles of row reduction: never involve the first row of a matrix in a row operation after the first column has been transformed into its required form.

→ \begin{bmatrix} 1 & 5.2 & 1 & 2.8 \\ 0 & -3.2 & 3 & 15.2 \\ 0 & 1.6 & -4 & 3.4 \end{bmatrix}   (by interchanging the second and third rows)

→ \begin{bmatrix} 1 & 5.2 & 1 & 2.8 \\ 0 & 1 & -0.9375 & -4.75 \\ 0 & 1.6 & -4 & 3.4 \end{bmatrix}   (by multiplying the second row by -1/3.2)

→ \begin{bmatrix} 1 & 5.2 & 1 & 2.8 \\ 0 & 1 & -0.9375 & -4.75 \\ 0 & 0 & -2.5 & 11 \end{bmatrix}   (by adding to the third row (-1.6) times the second row)

→ \begin{bmatrix} 1 & 5.2 & 1 & 2.8 \\ 0 & 1 & -0.9375 & -4.75 \\ 0 & 0 & 1 & -4.4 \end{bmatrix}.   (by multiplying the third row by -1/2.5)

The new derived set of equations is

x + 5.2y + z = 2.8,
y - 0.9375z = -4.75,
z = -4.4,

which has as its solution x = 53.35, y = -8.875, and z = -4.4.

Scaled pivoting involves ratios. A prospective pivot is divided by the largest element in absolute value in its row, ignoring the last column. The result is compared to the ratios formed by dividing every element directly below the pivot by the largest element in absolute value in its respective row, again ignoring the last column. Of these, the element that yields the largest ratio in absolute value is designated as the pivot, and if that element is not already in the pivot position, then row interchanges are performed to move it there.
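Example 2 below applies scaled pivoting by hand to the system of Example 1. The ratio computation is a one-liner in numpy; a minimal sketch, with names of my own choosing:

    import numpy as np

    def scaled_pivot_row(Ab, col):
        """Scaled pivoting: divide each candidate pivot in column `col` by the
        largest absolute value in its row, ignoring the last (augmented) column,
        and return the row index of the largest ratio."""
        candidates = np.abs(Ab[col:, col])
        row_scales = np.abs(Ab[col:, :-1]).max(axis=1)
        return col + np.argmax(candidates / row_scales)

    Ab = np.array([[1, 2, 4, 18], [2, 12, -2, 9], [5, 26, 5, 14]], dtype=float)
    print(scaled_pivot_row(Ab, 0))   # 0: the ratios 0.25, 0.1667, 0.1923 favor row 1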
Transforming the first column into reduced form, we obtain ⎡ 12 ⎤ 4 18 ⎣0 8 −10 −27⎦. 0 16 −15 −76 Normally, the next pivot would be the element in the 2–2 position. Instead, we consider the ratios 8 = 0.8 and 16 = 1, 10 16 which are obtained by dividing the pivot element and every element directly below it by the largest element in absolute value appearing in their respective rows, ignoring elements in the last column. The largest ratio in absolute value corresponds to the element 16 appearing in the 3–2 position. We move it into the pivot position by interchanging the second and third rows. The new matrix is ⎡ 12 ⎤ 4 18 ⎣0 16 −15 −76⎦. 0 8 −10 −27 Completing the row-reduction transformation, we get ⎡ ⎤ 12 4 18 ⎢⎣0 1 −0.9375 −4.75⎥⎦. 00 1 −4.4 The system of equations associated with this matrix is x + 2y + 4z = 18, y − 0.9375z = −4.75, z = −4.4. The solution is, as before, x = 53.35, y = −8.875, and z = −4.4. 2.4 Pivoting Strategies 69 Complete pivoting compares prospective pivots with all elements in the largest submatrix for which the prospective pivot is in the upper left position, ignoring the last column. If any element in this submatrix is larger in absolute value than the prospective pivot, both row and column interchanges are made to move this larger element into the pivot position. Because column interchanges rearrange the order of the unknowns, a book keeping method must be implemented to record all rearrangements. This is done by adding a new row, designated as row 0, to the matrix. The entries in the new row are initially the positive integers in ascending order, to denote that column 1 is associated with variable 1, column 2 with variable 2, and so on. This new top row is only affected by column interchanges; none of the elementary row operations is applied to it. Example 3 Use complete pivoting with Gaussian elimination to solve the system given in Example 1. Solution The augmented matrix for this system is ⎡ ⎤ 12 3 ⎢⎢⎢⎢⎣-21- - ---2 12 - ---4 −2 - -1-89-⎥⎥⎥⎥⎦. 5 26 5 14 Normally, we would use the element in the 1–1 position of the coefficient matrix A as the pivot. With complete pivoting, however, we first compare this prospective pivot to all elements in the submatrix shaded below. In this case, the element 26 is the largest, so we interchange rows and columns to bring it into the pivot position. ⎡⎢⎢⎢⎣-211- - - -22 12 - - - -3- 4 −2 - ⎤ -1-89-⎥⎥⎥⎦ → ⎡⎢⎢⎢⎣-215- - - -2- 26 12 - - -3- 5 −2 - ⎤ -1-49-⎥⎥⎥⎦ by interchanging the first and third rows 5 26 5 14 1 2 4 18 → ⎡⎢⎢⎢⎣-12-226- - -1- 5 2 - - -3- 5 −2 - ⎤ -1-49-⎥⎥⎥⎦. by interchanging the first and second columns 2 1 4 18 Applying Gaussian elimination to the first column, we obtain ⎡⎢⎢⎢⎣-201- - - - -1- - - - - 0.1923 −0.3077 - - - -3- - - - - - 0.1923 −4.3077 - - ⎤ -02-..-55-33-88-55-⎥⎥⎥⎦. 0 0.6154 3.6154 16.9231 70 Chapter 2 Simultaneous Linear Equations Normally, the next pivot would be −0.3077. Instead, we compare this number in absolute value to all the numbers in the submatrix shaded above. The largest such element in absolute value is −4.3077, which we move into the pivot position by interchanging the second and third column. The result is ⎡⎢⎢⎢⎣-201- - - - - - -3- - - 0.1923 −4.3077 - - - - - -1- - - - 0.1923 −0.3077 - - ⎤ -20-..-55-33-88-55-⎥⎥⎥⎦. 0 3.6154 0.6154 16.9231 Continuing with Gaussian elimination, we obtain the row-reduced matrix ⎡⎢⎢⎢⎣-201- - - - - -3- - 0.1923 1 - - - - -1- - - 0.1923 0.0714 - ⎤ -−- -00-..-55-38-89-53-⎥⎥⎥⎦. 
Example 3 Use complete pivoting with Gaussian elimination to solve the system given in Example 1.

Solution The augmented matrix for this system, with the bookkeeping row 0 written above it, is

\begin{array}{ccc|c} 1 & 2 & 3 & \\ \hline 1 & 2 & 4 & 18 \\ 2 & 12 & -2 & 9 \\ 5 & 26 & 5 & 14 \end{array}

Normally, we would use the element in the 1-1 position of the coefficient matrix as the pivot. With complete pivoting, however, we first compare this prospective pivot to all elements in the submatrix having it in the upper left position. In this case, the element 26 is the largest, so we interchange rows and columns to bring it into the pivot position.

→ \begin{array}{ccc|c} 1 & 2 & 3 & \\ \hline 5 & 26 & 5 & 14 \\ 2 & 12 & -2 & 9 \\ 1 & 2 & 4 & 18 \end{array}   (by interchanging the first and third rows)

→ \begin{array}{ccc|c} 2 & 1 & 3 & \\ \hline 26 & 5 & 5 & 14 \\ 12 & 2 & -2 & 9 \\ 2 & 1 & 4 & 18 \end{array}   (by interchanging the first and second columns)

Applying Gaussian elimination to the first column, we obtain

\begin{array}{ccc|c} 2 & 1 & 3 & \\ \hline 1 & 0.1923 & 0.1923 & 0.5385 \\ 0 & -0.3077 & -4.3077 & 2.5385 \\ 0 & 0.6154 & 3.6154 & 16.9231 \end{array}

Normally, the next pivot would be -0.3077. Instead, we compare this number in absolute value to all the numbers in the submatrix having it in the upper left position. The largest such element in absolute value is -4.3077, which we move into the pivot position by interchanging the second and third columns. The result is

\begin{array}{ccc|c} 2 & 3 & 1 & \\ \hline 1 & 0.1923 & 0.1923 & 0.5385 \\ 0 & -4.3077 & -0.3077 & 2.5385 \\ 0 & 3.6154 & 0.6154 & 16.9231 \end{array}

Continuing with Gaussian elimination, we obtain the row-reduced matrix

\begin{array}{ccc|c} 2 & 3 & 1 & \\ \hline 1 & 0.1923 & 0.1923 & 0.5385 \\ 0 & 1 & 0.0714 & -0.5893 \\ 0 & 0 & 1 & 53.35 \end{array}

The system associated with this matrix is

y + 0.1923z + 0.1923x = 0.5385,
z + 0.0714x = -0.5893,
x = 53.35.

Its solution is x = 53.35, y = -8.8749, and z = -4.3985, which is within roundoff error of the answers obtained previously.

Complete pivoting generally identifies a better pivot than scaled pivoting, which, in turn, identifies a better pivot than partial pivoting. Nonetheless, partial pivoting is most often the strategy of choice. Pivoting strategies are used to avoid roundoff error. We do not need the best pivot; we only need to avoid bad pivots.

Problems 2.4

In Problems 1 through 6, determine the first pivot under (a) partial pivoting, (b) scaled pivoting, and (c) complete pivoting for the given augmented matrices.

1. \begin{bmatrix} 1 & 3 & 35 \\ 4 & 8 & 15 \end{bmatrix}.

2. \begin{bmatrix} 1 & -2 & -5 \\ 5 & 3 & 85 \end{bmatrix}.

3. \begin{bmatrix} 1 & 8 & 15 \\ 3 & -4 & 11 \end{bmatrix}.

4. \begin{bmatrix} -2 & 8 & -3 & 100 \\ 4 & 5 & 4 & 75 \\ -3 & -1 & 2 & 250 \end{bmatrix}.

5. \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \end{bmatrix}.

6. \begin{bmatrix} 0 & 2 & 3 & 4 & 0 \\ 1 & 0.4 & 0.8 & 0.1 & 90 \\ 4 & 10 & 1 & 8 & 40 \end{bmatrix}.

7. Solve Problem 3 of Section 2.3 using Gaussian elimination with each of the three pivoting strategies.
8. Solve Problem 4 of Section 2.3 using Gaussian elimination with each of the three pivoting strategies.
9. Solve Problem 5 of Section 2.3 using Gaussian elimination with each of the three pivoting strategies.

10. Computers internally store numbers in formats similar to the scientific notation 0.----E--, representing the number 0.---- multiplied by the power of 10 signified by the digits following E. Therefore, 0.1234E06 is 123,400, while 0.9935E02 is 99.35. The number of digits between the decimal point and E is finite and fixed; it is the number of significant figures. Arithmetic operations in computers are performed in registers, which have twice the number of significant figures as storage locations. Consider the system

0.00001x + y = 1.00001,
x + y = 2.

Show that when Gaussian elimination is implemented on this system by a computer limited to four significant figures, the result is x = 0 and y = 1, which is incorrect. Show further that the difficulty is resolved when partial pivoting is employed.

2.5 Linear Independence

We momentarily digress from our discussion of simultaneous equations to develop the concepts of linearly independent vectors and rank of a matrix, both of which will prove indispensable to us in the ensuing sections.

Definition 1 A vector V1 is a linear combination of the vectors V2, V3, ..., Vn if there exist scalars d2, d3, ..., dn such that

V1 = d2V2 + d3V3 + ··· + dnVn.

Example 1 Show that [1 2 3] is a linear combination of [2 4 0] and [0 0 1].

Solution [1 2 3] = (1/2)[2 4 0] + 3[0 0 1].

Referring to Example 1, we could say that the row vector [1 2 3] depends linearly on the other two vectors or, more generally, that the set of vectors {[1 2 3], [2 4 0], [0 0 1]} is linearly dependent. Another way of expressing this dependence would be to say that there exist constants c1, c2, c3, not all zero, such that c1[1 2 3] + c2[2 4 0] + c3[0 0 1] = [0 0 0]. Such a set would be c1 = -1, c2 = 1/2, c3 = 3. Note that the set c1 = c2 = c3 = 0 is also a suitable set. The important fact about dependent sets, however, is that there exists a set of constants, not all equal to zero, that satisfies the equality.
Now consider the set given by

V1 = [1 0 0],  V2 = [0 1 0],  V3 = [0 0 1].

It is easy to verify that no vector in this set is a linear combination of the other two. Thus, each vector is linearly independent of the other two or, more generally, the set of vectors is linearly independent. Another way of expressing this independence would be to say that the only scalars that satisfy the equation

c1[1 0 0] + c2[0 1 0] + c3[0 0 1] = [0 0 0]

are c1 = c2 = c3 = 0.

Definition 2 A set of vectors {V1, V2, . . . , Vn}, of the same dimension, is linearly dependent if there exist scalars c1, c2, . . . , cn, not all zero, such that

c1V1 + c2V2 + c3V3 + . . . + cnVn = 0.   (7)

The vectors are linearly independent if the only set of scalars that satisfies (7) is the set c1 = c2 = . . . = cn = 0.

Therefore, to test whether or not a given set of vectors is linearly independent, first form the vector equation (7) and ask "What values for the c's satisfy this equation?" Clearly c1 = c2 = . . . = cn = 0 is a suitable set. If this is the only set of values that satisfies (7), then the vectors are linearly independent. If there exists a set of values that is not all zero, then the vectors are linearly dependent.

Note that it is not necessary for all the c's to be different from zero for a set of vectors to be linearly dependent. Consider the vectors V1 = [1 2], V2 = [1 4], V3 = [2 4]. Here c1 = 2, c2 = 0, c3 = -1 is a set of scalars, not all zero, such that c1V1 + c2V2 + c3V3 = 0. Thus, this set is linearly dependent.

Example 2 Is the set {[1 2], [3 4]} linearly independent?

Solution The vector equation is

c1[1 2] + c2[3 4] = [0 0].

This equation can be rewritten as

[c1  2c1] + [3c2  4c2] = [0 0]

or as

[c1 + 3c2   2c1 + 4c2] = [0 0].

Equating components, we see that this vector equation is equivalent to the system

c1 + 3c2 = 0,
2c1 + 4c2 = 0.

Using Gaussian elimination, we find that the only solution to this system is c1 = c2 = 0; hence the original set of vectors is linearly independent.

Although we have worked exclusively with row vectors, the above definitions are equally applicable to column vectors.

Example 3 Is the set

{[2 6 -2]^T, [3 1 2]^T, [8 16 -3]^T}

linearly independent?

Solution Consider the vector equation

c1[2 6 -2]^T + c2[3 1 2]^T + c3[8 16 -3]^T = [0 0 0]^T.   (8)

This equation can be rewritten as

[2c1  6c1  -2c1]^T + [3c2  c2  2c2]^T + [8c3  16c3  -3c3]^T = [0 0 0]^T

or as

[2c1 + 3c2 + 8c3   6c1 + c2 + 16c3   -2c1 + 2c2 - 3c3]^T = [0 0 0]^T.

By equating components, we see that this vector equation is equivalent to the system

2c1 + 3c2 + 8c3 = 0,
6c1 + c2 + 16c3 = 0,
-2c1 + 2c2 - 3c3 = 0.

By using Gaussian elimination, we find that the solution to this system is c1 = -(5/2)c3, c2 = -c3, with c3 arbitrary. Thus, choosing c3 = 2, we obtain c1 = -5, c2 = -2, c3 = 2 as a particular nonzero set of constants that satisfies (8); hence, the original vectors are linearly dependent.

Example 4 Is the set {[1 2]^T, [5 7]^T, [-3 1]^T} linearly independent?

Solution Consider the vector equation

c1[1 2]^T + c2[5 7]^T + c3[-3 1]^T = [0 0]^T.

This is equivalent to the system

c1 + 5c2 - 3c3 = 0,
2c1 + 7c2 + c3 = 0.

By using Gaussian elimination, we find that the solution to this system is c1 = (-26/3)c3, c2 = (7/3)c3, with c3 arbitrary. Hence a particular nonzero solution is found by choosing c3 = 3; then c1 = -26 and c2 = 7, and, therefore, the vectors are linearly dependent.
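Homogeneous systems such as those in Examples 3 and 4 can also be attacked numerically. In the sketch below (our illustration, not part of the text, assuming Python with NumPy), the vectors of Example 3 form the columns of a matrix V, so the c's of Eq. (8) satisfy Vc = 0; the singular value decomposition exposes a nonzero solution whenever one exists.

import numpy as np

# Columns of V are the vectors of Example 3, so Eq. (8) reads V @ c = 0.
V = np.array([[ 2.0,  3.0,  8.0],
              [ 6.0,  1.0, 16.0],
              [-2.0,  2.0, -3.0]])

_, s, Vt = np.linalg.svd(V)
print(s[-1])                   # smallest singular value: zero, up to roundoff
c = Vt[-1]                     # corresponding right singular vector: a null vector
c = c / c[-1] * 2              # rescale so that c3 = 2, as in the text
print(np.round(c, 10))         # [-5. -2.  2.]
print(np.allclose(V @ c, 0))   # True: a nonzero set of c's exists, so dependent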
We conclude this section with a few important theorems on linear independence and dependence.

Theorem 1 A set of vectors is linearly dependent if and only if one of the vectors is a linear combination of the others.

Proof. Let {V1, V2, . . . , Vn} be a linearly dependent set. Then there exist scalars c1, c2, . . . , cn, not all zero, such that (7) is satisfied. Assume c1 ≠ 0. (Since at least one of the c's must differ from zero, we lose no generality in assuming it is c1.) Equation (7) can be rewritten as

c1V1 = -c2V2 - c3V3 - . . . - cnVn,

or as

V1 = -(c2/c1)V2 - (c3/c1)V3 - . . . - (cn/c1)Vn.

Thus, V1 is a linear combination of V2, V3, . . . , Vn. To complete the proof, we must show that if one vector is a linear combination of the others, then the set is linearly dependent. We leave this as an exercise for the student (see Problem 36).

OBSERVATION 1 In order for a set of vectors to be linearly dependent, it is not necessary for every vector to be a linear combination of the others, only that there exist one vector that is a linear combination of the others. For example, consider the vectors [1 0], [2 0], [0 1]. Here, [0 1] cannot be written as a linear combination of the other two vectors; however, [2 0] can be written as a linear combination of [1 0] and [0 1], namely, [2 0] = 2[1 0] + 0[0 1]; hence, the vectors are linearly dependent.

Theorem 2 The set consisting of the single vector V1 is a linearly independent set if and only if V1 ≠ 0.

Proof. Consider the equation c1V1 = 0. If V1 ≠ 0, then the only way this equation can be valid is if c1 = 0; hence, the set is linearly independent. If V1 = 0, then any c1 ≠ 0 will satisfy the equation; hence, the set is linearly dependent.

Theorem 3 Any set of vectors that contains the zero vector is linearly dependent.

Proof. Consider the set {V1, V2, . . . , Vn, 0}. Pick c1 = c2 = . . . = cn = 0, cn+1 = 5 (any other nonzero number will do). Then this is a set of scalars, not all zero, such that

c1V1 + c2V2 + . . . + cnVn + cn+1 0 = 0;

hence, the set of vectors is linearly dependent.

Theorem 4 If a set of vectors is linearly independent, any subset of these vectors is also linearly independent.

Proof. See Problem 37.

Theorem 5 If a set of vectors is linearly dependent, then any larger set containing this set is also linearly dependent.

Proof. See Problem 38.
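Observation 1 can be checked mechanically. The sketch below (our illustration, not part of the text, assuming Python with NumPy) tries to write each vector of the set {[1 0], [2 0], [0 1]} as a combination of the other two, using a least-squares solve and testing whether the fit is exact.

import numpy as np

vectors = [np.array([1.0, 0.0]),   # the set of Observation 1
           np.array([2.0, 0.0]),
           np.array([0.0, 1.0])]

for i, v in enumerate(vectors):
    # Matrix whose columns are the other two vectors.
    others = np.column_stack([u for j, u in enumerate(vectors) if j != i])
    coeffs, *_ = np.linalg.lstsq(others, v, rcond=None)
    print(v, "is a combination of the rest:", np.allclose(others @ coeffs, v))

# Output: [1 0] and [2 0] are combinations of the rest; [0 1] is not.
# One such vector is enough: the set is linearly dependent.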
Problems 2.5

In Problems 1 through 19, determine whether or not the given set is linearly independent.

1. {[1 0], [0 1]}.
2. {[1 1], [1 -1]}.
3. {[2 -4], [-3 6]}.
4. {[1 3], [2 -1], [1 1]}.
5. {[1 2]^T, [3 4]^T}.
6. {[1 -1]^T, [1 1]^T, [1 2]^T}.
7. {[1 0 1]^T, [1 1 0]^T, [0 1 1]^T}.
8. {[1 0 1]^T, [1 0 2]^T, [2 0 1]^T}.
9. {[1 0 1]^T, [1 1 1]^T, [1 -1 1]^T}.
10. {[0 0 0]^T, [3 2 1]^T, [2 1 3]^T}.
11. {[1 2 3]^T, [3 2 1]^T, [2 1 3]^T}.
12. {[1 2 3]^T, [3 2 1]^T, [2 1 3]^T, [-1 2 3]^T}.
13. {[4 5 1]^T, [3 0 2]^T, [1 1 1]^T}.
14. {[1 1 0], [1 -1 0]}.
15. {[1 2 3], [-3 -6 -9]}.
16. {[10 20 20], [10 -10 10], [10 20 10]}.
17. {[10 20 20], [10 -10 10], [10 20 10], [20 10 20]}.
18. {[2 1 1], [3 -1 4], [1 3 -2]}.
19. {[2 1 1 3]^T, [4 -2 1 -1]^T, [4 8 1 5]^T}.

20. Express the vector [2 1 2]^T as a linear combination of

{[1 0 1]^T, [1 0 -1]^T, [1 1 1]^T}.

21. Can the vector [2 3] be expressed as a linear combination of the vectors given in (a) Problem 1, (b) Problem 2, or (c) Problem 3?

22. Can the vector [1 1 1]^T be expressed as a linear combination of the vectors given in (a) Problem 7, (b) Problem 8, or (c) Problem 9?

23. Can the vector [2 0 3]^T be expressed as a linear combination of the vectors given in Problem 8?

24. A set of vectors S is a spanning set for another set of vectors R if every vector in R can be expressed as a linear combination of the vectors in S. Show that the vectors given in Problem 1 are a spanning set for all two-dimensional row vectors. Hint: Show that for any arbitrary real numbers a and b, the vector [a b] can be expressed as a linear combination of the vectors in Problem 1.

25. Show that the vectors given in Problem 2 are a spanning set for all two-dimensional row vectors.

26. Show that the vectors given in Problem 3 are not a spanning set for all two-dimensional row vectors.

27. Show that the vectors given in Problem 3 are a spanning set for all vectors of the form [a -2a], where a designates any real number.

28. Show that the vectors given in Problem 4 are a spanning set for all two-dimensional row vectors.

29. Determine whether the vectors given in Problem 7 are a spanning set for all three-dimensional column vectors.

30. Determine whether the vectors given in Problem 8 are a spanning set for all three-dimensional column vectors.

31. Determine whether the vectors given in Problem 8 are a spanning set for vectors of the form [a 0 a]^T, where a denotes an arbitrary real number.

32. A set of vectors S is a basis for another set of vectors R if S is a spanning set for R and S is linearly independent. Determine which, if any, of the sets given in Problems 1 through 4 are a basis for the set of all two-dimensional row vectors.

33. Determine which, if any, of the sets given in Problems 7 through 12 are a basis for the set of all three-dimensional column vectors.

34. Prove that the columns of the 3 × 3 identity matrix form a basis for the set of all three-dimensional column vectors.

35. Prove that the rows of the 4 × 4 identity matrix form a basis for the set of all four-dimensional row vectors.

36. Finish the proof of Theorem 1. (Hint: Assume that V1 can be written as a linear combination of the other vectors.)

37. Prove Theorem 4.

38. Prove Theorem 5.

39. Prove that the set of vectors {x, kx} is linearly dependent for any choice of the scalar k.

40. Prove that if x and y are linearly independent, then so too are x + y and x - y.

41. Prove that if the set {x1, x2, . . . , xn} is linearly independent, then so too is the set {k1x1, k2x2, . . . , knxn} for any choice of the nonzero scalars k1, k2, . . . , kn.

42. Let A be an n × n matrix, and let {x1, x2, . . . , xk} and {y1, y2, . . . , yk} be two sets of n-dimensional column vectors having the property that Axi = yi for i = 1, 2, . . . , k. Show that the set {x1, x2, . . . , xk} is linearly independent if the set {y1, y2, . . . , yk} is.

2.6 Rank

If we interpret each row of a matrix as a row vector, the elementary row operations are precisely the operations used to form linear combinations; namely, multiplying vectors (rows) by scalars and adding vectors (rows) to other vectors (rows). This observation allows us to develop a straightforward matrix procedure for determining when a set of vectors is linearly independent. It rests on the concept of rank.

Definition 1 The row rank of a matrix is the maximum number of linearly independent vectors that can be formed from the rows of that matrix, considering each row as a separate vector. Analogously, the column rank of a matrix is the maximum number of linearly independent columns, considering each column as a separate vector.
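In practice, rank is computed from the singular values of the matrix rather than by hand row reduction; NumPy packages this as matrix_rank. The sketch below (our illustration, not part of the text; the matrix is our own) applies it to a small matrix and to its transpose, so it reports the row rank and the column rank just defined; their agreement anticipates Theorem 2 below.

import numpy as np

A = np.array([[1, 2, 3],    # row 2 is twice row 1, so only two of the
              [2, 4, 6],    # three rows are linearly independent
              [1, 0, 1]])

print(np.linalg.matrix_rank(A))     # 2, the row rank of A
print(np.linalg.matrix_rank(A.T))   # 2, the column rank of A: the same number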
Row rank is particularly easy to determine for matrices in row-reduced form.

Theorem 1 The row rank of a row-reduced matrix is the number of nonzero rows in that matrix.

Proof. We must prove two facts: first, that the nonzero rows, considered as vectors, form a linearly independent set, and second, that every larger set is linearly dependent. Consider the equation

c1v1 + c2v2 + . . . + crvr = 0,   (9)

where v1 is the first nonzero row, v2 is the second nonzero row, . . . , and vr is the last nonzero row of a row-reduced matrix. The first nonzero element in the first nonzero row of a row-reduced matrix must be unity. Assume it appears in column j. Then no other row has a nonzero element in that column. Consequently, when the left side of Eq. (9) is computed, it will have c1 as its jth component. Since the right side of Eq. (9) is the zero vector, it follows that c1 = 0. A similar argument then shows iteratively that c2, . . . , cr are all zero. Thus, the nonzero rows are linearly independent. If all the rows of the matrix are nonzero, then they must comprise a maximum number of linearly independent vectors, because the row rank cannot be greater than the number of rows in the matrix. If there are zero rows in the row-reduced matrix, then it follows from Theorem 3 of Section 2.5 that including them could not increase the number of linearly independent rows. Thus, the largest number of linearly independent rows comes from including just the nonzero rows.

Example 1 Determine the row rank of the matrix

    [1  0  -2   5  3]
A = [0  0   1  -4  0]
    [0  0   0   1  1]
    [0  0   0   0  0].

Solution A is in row-reduced form. Since it contains three nonzero rows, its row rank is three.

The following two theorems, which are proved in the Final Comments to this chapter, are fundamental.

Theorem 2 The row rank and column rank of a matrix are equal. For any matrix A, we call this common number the rank of A and denote it by r(A).

Theorem 3 If B is obtained from A by an elementary row (or column) operation, then r(B) = r(A).

Theorems 1 through 3 suggest a useful procedure for determining the rank of any matrix: simply use elementary row operations to transform the given matrix to row-reduced form, and then count the number of nonzero rows.

Example 2 Determine the rank of

    [1   3   4]
A = [2  -1   1]
    [3   2   5]
    [5  15  20].

Solution In Example 2 of Section 2.3, we transformed this matrix into the row-reduced form

[1  3  4]
[0  1  1]
[0  0  0]
[0  0  0].

This matrix has two nonzero rows, so its rank, as well as that of A, is two.

Example 3 Determine the rank of

    [1   2   1   3]
B = [2   3  -1  -6]
    [3  -2  -4  -2].

Solution In Example 3 of Section 2.3, we transformed this matrix into the row-reduced form

[1  2  1   3]
[0  1  3  12]
[0  0  1   5].

This matrix has three nonzero rows, so its rank, as well as that of B, is three.

A similar procedure can be used for determining whether a set of vectors is linearly independent: form a matrix in which each row is one of the vectors in the given set, and then determine the rank of that matrix. If the rank equals the number of vectors, the set is linearly independent; if not, the set is linearly dependent. In either case, the rank is the maximal number of linearly independent vectors that can be formed from the given set.

Example 4 Determine whether the set

{[2 6 -2]^T, [3 1 2]^T, [8 16 -3]^T}

is linearly independent.

Solution We consider the matrix

[2   6  -2]
[3   1   2]
[8  16  -3].

Reducing this matrix to row-reduced form, we obtain

[1  3  -1]
[0  1  -5/8]
[0  0   0].

This matrix has two nonzero rows, so its rank is two. Since this is less than the number of vectors in the given set, that set is linearly dependent. We can say even more: the original set of vectors contains a subset of two linearly independent vectors, the same number as the rank. Also, since no row interchanges were involved in the transformation to row-reduced form, we can conclude that the third vector is a linear combination of the first two.
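The procedure just described translates directly into code. In this sketch (our illustration, not part of the text; the function name is ours, assuming Python with NumPy), the vectors are stacked as the rows of a matrix and the rank is compared with the number of vectors; the two sets tested are the dependent set of Example 4 and the independent set of Example 5 below.

import numpy as np

def independent(vectors):
    # Stack the vectors as rows; the set is linearly independent
    # exactly when the rank equals the number of vectors.
    M = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(M) == len(vectors)

print(independent([[2, 6, -2], [3, 1, 2], [8, 16, -3]]))    # False (Example 4)
print(independent([[0, 1, 2, 3, 0], [1, 3, -1, 2, 1],
                   [2, 6, -1, -3, 1], [4, 0, 1, 0, 2]]))    # True (Example 5)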
Example 5 Determine whether the set

{[0 1 2 3 0], [1 3 -1 2 1], [2 6 -1 -3 1], [4 0 1 0 2]}

is linearly independent.

Solution We consider the matrix

[0  1   2   3  0]
[1  3  -1   2  1]
[2  6  -1  -3  1]
[4  0   1   0  2],

which can be reduced (after the first two rows are interchanged) to the row-reduced form

[1  3  -1   2     1]
[0  1   2   3     0]
[0  0   1  -7    -1]
[0  0   0   1  9/77].

This matrix has four nonzero rows, hence its rank is four, which is equal to the number of vectors in the given set. Therefore, the set is linearly independent.

Example 6 Can the vector [1 1]^T be written as a linear combination of the vectors [3 6]^T and [2 4]^T?

Solution The matrix

A = [3  6]
    [2  4]

can be transformed into the row-reduced form

[1  2]
[0  0],

which has rank one; hence A has just one linearly independent row vector. In contrast, the matrix

B = [1  1]
    [3  6]
    [2  4]

can be transformed into the row-reduced form

[1  1]
[0  1]
[0  0],

which has rank two; hence B has two linearly independent row vectors. Since B is precisely A with one additional row, it follows that the additional row [1 1]^T is independent of the other two and, therefore, cannot be written as a linear combination of the other two vectors.

We did not have to transform B in Example 6 into row-reduced form to determine whether the three-vector set was linearly independent. There is a more direct approach: since B has only two columns, its column rank must be less than or equal to two (why?). Thus, the column rank is less than three. It follows from Theorem 2 that the row rank of B is less than three, so the three vectors must be linearly dependent. Generalizing this reasoning, we deduce one of the more important results in linear algebra.

Theorem 4 In an n-dimensional vector space, every set of n + 1 vectors is linearly dependent.

Problems 2.6

In Problems 1 through 5, find the rank of the given matrix.

1. [1  3   2]
   [1  0  -5].

2. [4  1]
   [2  3]
   [2  2].

3. [1  4  -2]
   [2  8  -4].

4. [ 1   2  4  2]
   [ 1   1  3  2]
   [-1  -4  2  1].

5. [1  7  0]
   [0  1  1]
   [1  1  0].

In Problems 6 through 22, use rank to determine whether the given set of vectors is linearly independent.

6. {[1 0], [0 1]}.
7. {[1 1], [1 -1]}.
8. {[2 -4], [-3 6]}.
9. {[1 3], [2 -1], [1 1]}.
10. {[1 2]^T, [3 4]^T}.
11. {[1 -1]^T, [1 1]^T, [1 2]^T}.
12. {[1 0 1]^T, [1 1 0]^T, [0 1 1]^T}.
13. {[1 0 1]^T, [1 0 2]^T, [2 0 1]^T}.
14. {[1 0 1]^T, [1 1 1]^T, [1 -1 1]^T}.
15. {[0 0 0]^T, [3 2 1]^T, [2 1 3]^T}.
16. {[1 2 3]^T, [3 2 1]^T, [2 1 3]^T}.
17. {[1 2 3]^T, [3 2 1]^T, [2 1 3]^T, [-1 -2 3]^T}.
18. {[1 1 0], [1 -1 0]}.
19. {[1 2 3], [-3 -6 -9]}.
20. {[10 20 20], [10 -10 10], [10 20 10]}.
21. {[10 20 20], [10 -10 10], [10 20 10], [20 10 20]}.
22. {[2 1 1], [3 -1 4], [1 3 -2]}.

23. Can the vector [2 3] be expressed as a linear combination of the vectors given in (a) Problem 6, (b) Problem 7, or (c) Problem 8?