Engineering Mathematics for Semesters I and II
C B Gupta, S R Singh, Mukesh Kumar
The textbook on Engineering Mathematics has been created to provide an exposition of the essential tools of engineering mathematics, which form the core of all branches of engineering, from aerospace engineering to electronics and from mechanical engineering to computer science, because as engineering evolves and develops, mathematics forms the common foundation of all new disciplines.

Salient Features:
- Problems derived from actual industrial situations, presented with solutions
- Introduction to infinite series, Fourier series, the Laplace transform, and differential and integral calculus, with reference to applications in the field of engineering
- Pedagogy: Solved examples: 700; Drill and Practice problems: 1100; Illustrations: 350
Publisher: McGraw-Hill Education
About the Authors

C B Gupta is presently working as Professor in the Department of Mathematics, Birla Institute of Technology and Science, Pilani (Rajasthan). With over 25 years of teaching and research experience, he is a recipient of numerous awards, such as the Shiksha Rattan Puraskar 2011, Best Citizens of India Award 2011, Glory of India Award 2013, and Mother Teresa Award 2013. He was listed in Marquis' Who's Who in Science and Technology in the World in 2010 and 2013, and among the top 100 scientists of the world in 2012. He obtained his master's degree in Mathematical Statistics and his PhD in Operations Research from Kurukshetra University, Kurukshetra. His fields of specialization are Applied Statistics, Optimization, and Operations Research. A number of students have submitted their theses/dissertations on these topics under his supervision, and he has published a large number of research papers on these topics in peer-reviewed national and international journals of repute. He has authored/co-authored 12 books on Probability and Statistics, Quantitative Methods, Optimization in Operations Research, Advance Discrete Mathematics, Engineering Mathematics I–III, Advanced Mathematics, and the like. He is also on the editorial board, and a reviewer, of many national and international journals. Dr Gupta is a member of various academic and management committees of many institutes/universities. He has participated in more than 30 national and international conferences, in which he has delivered invited talks and chaired technical sessions. He has been a member of the Rajasthan Board of School Education, Ajmer, and also a member of various committees of RPSC Ajmer, UPSC New Delhi, and AICTE New Delhi.

S R Singh is presently working as Associate Professor in the Department of Mathematics at Chaudhary Charan Singh University, Meerut (Uttar Pradesh), and has 20 years of experience in academics and research.
His areas of specialization are Inventory Control, Supply-Chain Management, and Fuzzy Set Theory. He has attended various seminars and conferences, and fifteen students have been awarded PhDs under his supervision. He has published more than a hundred research papers in reputed national and international journals, including International Journal of System Sciences, Asia Pacific Journal of Operational Research, Control and Cybernetics, Opsearch, International Journal of Operational Research, Fuzzy Sets and Systems, and International Journal of Operations and Quantitative Management. He has authored/co-authored nine books.

Mukesh Kumar is presently working as Associate Professor in the Department of Mathematics at Graphic Era University, Dehradun (Uttarakhand). He received an MPhil in Mathematics from the Indian Institute of Technology, Roorkee, and a PhD in Mathematics (Operations Research) from HNB Garhwal Central University, Srinagar. He has a teaching experience of more than 10 years, and his fields of specialization are Inventory Control, Supply-Chain Management, and Operations Research. He has published many research papers in reputed national and international journals, and has authored a book, Mathematical Foundations in Computer Science. He is on the editorial board, and is a reviewer, of many national and international journals.
Engineering Mathematics for Semesters I and II

C B Gupta, Professor, Department of Mathematics, Birla Institute of Technology and Science (BITS), Pilani, Rajasthan
S R Singh, Associate Professor, Department of Mathematics, Chaudhary Charan Singh University, Meerut, Uttar Pradesh
Mukesh Kumar, Associate Professor, Department of Mathematics, Graphic Era University, Dehradun, Uttarakhand

McGraw Hill Education (India) Private Limited, New Delhi
McGraw Hill Education Offices: New Delhi, New York, St Louis, San Francisco, Auckland, Bogotá, Caracas, Kuala Lumpur, Lisbon, London, Madrid, Mexico City, Milan, Montreal, San Juan, Santiago, Singapore, Sydney, Tokyo, Toronto

Published by McGraw Hill Education (India) Private Limited, P-24, Green Park Extension, New Delhi 110 016

Copyright © 2015, by McGraw Hill Education (India) Private Limited. No part of this publication may be reproduced or distributed in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, or stored in a database or retrieval system, without the prior written permission of the publishers. The program listings (if any) may be entered, stored, and executed in a computer system, but they may not be reproduced for publication. This edition can be exported from India only by the publishers, McGraw Hill Education (India) Private Limited.

Print Edition: ISBN-13: 978-93-392-1964-2; ISBN-10: 93-392-1964-3
EBook Edition: ISBN-13: 978-93-392-1965-9; ISBN-10: 93-392-1965-1

Managing Director: Kaushik Bellani
Head—Products (Higher Education and Professional): Vibha Mahajan
Assistant Sponsoring Editor: Koyel Ghosh
Senior Editorial Researcher: Sachin Kumar
Manager—Production Systems: Satinder S Baveja
Assistant Manager—Editorial Services: Sohini Mukherjee
Senior Production Executive: Suhaib Ali
Senior Graphic Designer—Cover: Meenu Raghav
Senior Publishing Manager (SEM & Tech. Ed.): Shalini Jha
Assistant Product Manager (SEM & Tech. Ed.): Tina Jajoriya
General Manager—Production: Rajender P Ghansela
Manager—Production: Reji Kumar

Information contained in this work has been obtained by McGraw Hill Education (India), from sources believed to be reliable. However, neither McGraw Hill Education (India) nor its authors guarantee the accuracy or completeness of any information published herein, and neither McGraw Hill Education (India) nor its authors shall be responsible for any errors, omissions, or damages arising out of use of this information. This work is published with the understanding that McGraw Hill Education (India) and its authors are supplying information but are not attempting to render engineering or other professional services. If such services are required, the assistance of an appropriate professional should be sought.

Typeset at Text-o-Graphics, B-1/56, Aravali Apartment, Sector-34, Noida 201 301, and printed at
Cover Printer:
Visit us at: www.mheducation.co.in

Contents

Preface

1. Matrix Algebra
   1.1 Introduction
   1.2 Notation and Terminology
   1.3 Special Types of Matrices
   1.4 Equality of Two Matrices
   1.5 Properties of Matrices
   1.6 Properties of Matrix Multiplication
   1.7 Transpose of a Matrix
   1.8 Symmetric and Skew-Symmetric Matrices
   1.9 Transposed Conjugate, Hermitian, and Skew-Hermitian Matrices
   1.10 Elementary Transformation or Elementary Operations
   1.11 The Inverse of a Matrix
   Exercise 1.1; Answers
   1.12 Echelon Form of a Matrix
   1.13 Rank of a Matrix
   1.14 Canonical Form (or Normal Form) of a Matrix
   Exercise 1.2; Answers
   1.15 Linear Systems of Equations
   1.16 Homogeneous Systems of Linear Equations
   1.17 Systems of Linear Non-Homogeneous Equations
   1.18 Condition for Consistency Theorem
   1.19 Condition for Inconsistent Solution
   1.20 Characteristic Roots and Vectors (or Eigenvalues and Eigenvectors)
   1.21 Some Important Theorems on Characteristic Roots and Characteristic Vectors
   1.22 Nature of the Characteristic Roots
   1.23 The Cayley–Hamilton Theorem
   1.24 Similarity of Matrices
   1.25 Diagonalization of a Matrix
   Exercise 1.3; Answers
   1.26 Quadratic Forms
   1.27 Complex Quadratic Form
   1.28 Canonical Form
   1.29 Positive Definite Quadratic and Hermitian Forms
   1.30 Some Important Remarks
   Exercise 1.4; Answers
   1.31 Applications of Matrices
   Summary; Objective-Type Questions; Answers

2. Differential Calculus
   2.1 Introduction
   2.2 Differentiation
   2.3 Geometrical Meaning of Derivative at a Point
   2.4 Successive Differentiation
   2.5 Calculation of nth-Order Differential Coefficients
   Exercise 2.1; Answers
   2.6 Leibnitz's Theorem
   Exercise 2.2; Answers
   Summary; Objective-Type Questions; Answers

3. Partial Differentiation
   3.1 Introduction
   3.2 Partial Derivatives of First Order
   3.3 Geometric Interpretation of Partial Derivatives
   3.4 Partial Derivatives of Higher Orders
   Exercise 3.1
   3.5 Homogeneous Function
   3.6 Euler's Theorem on Homogeneous Functions
   3.7 Relations Between Second-Order Derivatives of Homogeneous Functions
   3.8 Deduction from Euler's Theorem
   Exercise 3.2
   3.9 Composite Function
   Exercise 3.3; Answers
   3.10 Jacobian
   3.11 Important Properties of Jacobians
   3.12 Theorems on Jacobian (Without Proof)
   3.13 Jacobians of Implicit Functions
   3.14 Functional Dependence
   Exercise 3.4; Answers
   3.15 Expansion of Functions of Several Variables
   3.16 Expansion of Functions of Two Variables
   3.17 Taylor's and Maclaurin's Theorems for Three Variables
   Exercise 3.5; Answers
   Summary; Objective-Type Questions; Answers
4. Maxima and Minima
   4.1 Introduction
   4.2 Maxima and Minima of Functions of Two Independent Variables
   4.3 Necessary Conditions for the Existence of Maxima or Minima of f(x, y) at the Point (a, b)
   4.4 Sufficient Conditions for Maxima and Minima (Lagrange's Condition for Two Independent Variables)
   4.5 Maximum and Minimum Values for a Function f(x, y, z)
   4.6 Lagrange's Method of Multipliers
   Exercise 4.1; Answers
   4.7 Convexity, Concavity, and Point of Inflection
   4.8 Asymptotes to a Curve
   4.9 Curve Tracing
   Exercise 4.2; Answers
   Summary; Objective-Type Questions; Answers

5. Integral Calculus
   5.1 Introduction
   5.2 Indefinite Integral
   5.3 Some Standard Results on Integration
   5.4 Definite Integral
   5.5 Geometrical Interpretation of Definite Integral
   5.6 Leibnitz's Rule of Differentiation under the Sign of Integration
   5.7 Reduction Formula for the Integrals
   Exercise 5.1; Answers
   5.8 Areas of Curves
   Exercise 5.2; Answers
   5.9 Area of Closed Curves
   Exercise 5.3; Answers
   5.10 Rectification
   Exercise 5.4; Answers
   5.11 Intrinsic Equations
   Exercise 5.5; Answers
   5.12 Volumes and Surfaces of Solids of Revolution
   Exercise 5.6; Answers
   5.13 Surfaces of Solids of Revolution
   Exercise 5.7; Answers
   5.14 Applications of Integral Calculus
   Exercise 5.8; Answers
   Exercise 5.9; Answers
   5.15 Improper Integrals
   Exercise 5.10; Answers
   5.16 Multiple Integrals
   Exercise 5.11; Answers
   Exercise 5.12; Answers
   Exercise 5.13; Answers
   Exercise 5.14; Answers
   Summary; Objective-Type Questions; Answers
6. Special Functions
   6.1 Introduction
   6.2 Bessel's Equation
   6.3 Solution of Bessel's Differential Equation
   6.4 Recurrence Formulae/Relations of Bessel's Equation
   6.5 Generating Function for Jn(x)
   6.6 Integral Form of Bessel's Function
   Exercise 6.1
   6.7 Legendre Polynomials
   6.8 Solution of Legendre's Equations
   6.9 Generating Function of Legendre's Polynomials
   6.10 Rodrigues' Formula
   6.11 Laplace Definite Integral for Pn(x)
   6.12 Orthogonal Properties of Legendre's Polynomial
   6.13 Recurrence Formulae for Pn(x)
   Exercise 6.2; Answers
   6.14 Beta Function
   6.15 Gamma Function
   6.16 Relation between Beta and Gamma Functions
   6.17 Duplication Formula
   Exercise 6.3; Answers
   Summary; Objective-Type Questions; Answers

7. Vector Differential and Integral Calculus
   7.1 Introduction
   7.2 Parametric Representation of Vector Functions
   7.3 Limit, Continuity, and Differentiability of a Vector Function
   7.4 Gradient, Divergence, and Curl
   7.5 Physical Interpretation of Curl
   7.6 Important Vector Identities
   Exercise 7.1; Answers
   7.7 Vector Integration
   7.8 Line Integrals
   7.9 Conservative Field and Scalar Potential
   7.10 Surface Integrals: Surface Area and Flux
   7.11 Volume Integrals
   Exercise 7.2; Answers
   7.12 Green's Theorem in the Plane: Transformation between Line and Double Integrals
   7.13 Gauss's Divergence Theorem (Relation between Volume and Surface Integrals)
   7.14 Stokes' Theorem (Relation between Line and Surface Integrals)
   Exercise 7.3; Answers
   Summary; Objective-Type Questions; Answers
8. Infinite Series
   8.1 Sequence
   8.2 The Range
   8.3 Bounds of a Sequence
   8.4 Convergence of a Sequence
   8.5 Monotonic Sequence
   8.6 Infinite Series
   Exercise 8.1; Answers
   8.7 Geometric Series
   Exercise 8.2; Answers
   8.8 Alternating Series
   8.9 Leibnitz Test
   Exercise 8.3; Answers
   8.10 Positive-Term Series
   8.11 p-series Test
   8.12 Comparison Test
   Exercise 8.4; Answers
   8.13 D'Alembert's Ratio Test
   Exercise 8.5; Answers
   8.14 Cauchy's Root (or Radical) Test
   Exercise 8.6; Answers
   8.15 Raabe's Test
   Exercise 8.7; Answers
   8.16 Absolute Convergence and Conditional Convergence
   8.17 Test for Absolute Convergence
   Exercise 8.8; Answers
   8.18 Power Series
   8.19 Uniform Convergence
   8.20 Binomial, Exponential, and Logarithmic Series
   Exercise 8.9; Answers
   Summary; Objective-Type Questions; Answers

9. Fourier Series
   9.1 Introduction
   9.2 Periodic Function
   9.3 Fourier Series
   9.4 Euler's Formulae
   9.5 Dirichlet's Conditions for a Fourier Series
   9.6 Fourier Series for Discontinuous Functions
   9.7 Fourier Series for Even and Odd Functions
   Exercise 9.1; Answers
   9.8 Change of Interval
   9.9 Fourier Half-Range Series
   Exercise 9.2; Answers
   9.10 More on Fourier Series
   9.11 Special Waveforms
   9.12 Harmonic Analysis and its Applications
   Summary; Objective-Type Questions; Answers
10. Ordinary Differential Equations: First Order and First Degree
   10.1 Introduction
   10.2 Basic Definitions
   10.3 Formation of an Ordinary Differential Equation
   10.4 First-Order and First-Degree Differential Equations
   Exercise 10.1; Answers
   Exercise 10.2; Answers
   Exercise 10.3; Answers
   Exercise 10.4; Answers
   Exercise 10.5; Answers
   Exercise 10.6; Answers
   Exercise 10.7; Answers
   10.5 Physical Applications
   10.6 Rate of Growth or Decay
   10.7 Newton's Law of Cooling
   10.8 Chemical Reactions and Solutions
   10.9 Simple Electric Circuits
   10.10 Orthogonal Trajectories and Geometrical Applications
   10.11 Velocity of Escape from the Earth
   Exercise 10.8; Answers
   Summary; Objective-Type Questions; Answers

11. Linear Differential Equations of Higher Order with Constant Coefficients
   11.1 Introduction
   11.2 The Differential Operator D
   11.3 Solution of Higher-Order Homogeneous Linear Differential Equations with Constant Coefficients
   Exercise 11.1; Answers
   11.4 Solution of Higher-Order Non-Homogeneous Linear Differential Equations with Constant Coefficients
   11.5 General Methods of Finding Particular Integrals (PI)
   11.6 Short Methods of Finding the Particular Integral When R is of Certain Special Forms
   Exercise 11.2; Answers
   11.7 Solutions of Simultaneous Linear Differential Equations
   Exercise 11.3; Answers
   Summary; Objective-Type Questions; Answers
12. Solutions of Second-Order Linear Differential Equations with Variable Coefficients
   12.1 Introduction
   12.2 Complete Solutions of y″ + Py′ + Qy = R in Terms of One Known Solution Belonging to the CF
   12.3 Rules for Finding an Integral (Solution) Belonging to the Complementary Function (CF), i.e., a Solution of y″ + P(x)y′ + Q(x)y = 0
   Exercise 12.1; Answers
   12.4 Removal of the First Derivative: Reduction to Normal Form
   12.5 Working Rule for Solving Problems by Using the Normal Form
   Exercise 12.2; Answers
   12.6 Transformation of the Equation by Changing the Independent Variable
   12.7 Working Rule for Solving Equations by Changing the Independent Variable
   Exercise 12.3; Answers
   12.8 Method of Variation of Parameters
   12.9 Working Rule for Solving Second-Order LDE by the Method of Variation of Parameters
   12.10 Solution of Third-Order LDE by the Method of Variation of Parameters
   Exercise 12.4; Answers
   Summary; Objective-Type Questions; Answers

13. Series Solutions
   13.1 Introduction
   13.2 Classification of Singularities
   13.3 Ordinary and Singular Points
   Exercise 13.1
   13.4 Power Series
   13.5 Power-Series Solution about the Ordinary Point x = x0
   Exercise 13.2; Answers
   13.6 Frobenius Method
   Exercise 13.3; Answers
   Exercise 13.4; Answers
   Exercise 13.5; Answers
   Exercise 13.6; Answers
   Summary
14. Partial Differential Equations (PDEs)
   14.1 Introduction
   14.2 Linear Partial Differential Equation
   14.3 Classification of Partial Differential Equations of Order One
   14.4 Formation of Partial Differential Equations
   Exercise 14.1; Answers
   14.5 Lagrange's Method of Solving the Linear Partial Differential Equations of First Order
   Exercise 14.2; Answers
   14.6 Nonlinear Partial Differential Equations of First Order
   Exercise 14.3; Answers
   14.7 Clairaut's Equation
   Exercise 14.4; Answers
   14.8 Linear Partial Differential Equation with Constant Coefficients
   Exercise 14.5; Answers
   14.9 Method of Finding the Particular Integral (PI or ZP) of the Linear Homogeneous PDE with Constant Coefficients
   14.10 Nonhomogeneous Linear Partial Differential Equations with Constant Coefficients
   14.11 Equations Reducible to Linear Equations with Constant Coefficients
   Exercise 14.6; Answers
   14.12 Classification of Partial Differential Equations of Second Order
   Exercise 14.7; Answers
   14.13 Charpit's Method
   Exercise 14.8; Answers
   14.14 Nonlinear Partial Differential Equations of Second Order (Monge's Method)
   Exercise 14.9; Answers
   14.15 Monge's Method of Integrating
   Exercise 14.10; Answers
   Summary; Objective-Type Questions; Answers
15. Applications of Partial Differential Equations
   15.1 Introduction
   15.2 Method of Separation of Variables
   Exercise 15.1; Answers
   15.3 Solution of One-Dimensional Wave Equation
   Exercise 15.2; Answers
   15.4 One-Dimensional Heat Equation
   15.5 Solution of One-Dimensional Heat Equation
   Exercise 15.3; Answers
   15.6 Vibrating Membrane: Two-Dimensional Wave Equation
   15.7 Solution of Two-Dimensional Wave Equation
   Exercise 15.4; Answers
   15.8 Two-Dimensional Heat Flow
   15.9 Solution of Two-Dimensional Heat Equation by the Method of Separation of Variables
   15.10 Solution of Two-Dimensional Laplace's Equation by the Method of Separation of Variables
   Exercise 15.5; Answers
   15.11 Transmission-Line Equations
   Exercise 15.6; Answers
   Summary; Objective-Type Questions; Answers

16. Laplace Transform
   16.1 Introduction
   16.2 Definition of Laplace Transform
   16.3 Laplace Transforms of Elementary Functions
   16.4 Linearity Property of Laplace Transforms
   16.5 A Function of Class A
   16.6 Sectionally Continuous Functions
   16.7 Laplace Transforms of Derivatives
   16.8 Differentiation of Transforms
   16.9 Laplace Transform of the Integral of a Function
   16.10 Integration of Transform (Division by t)
   16.11 Heaviside's Unit Function (Unit-Step Function)
   16.12 Dirac's Delta Function (or Unit-Impulse Function)
   16.13 Laplace Transforms of Periodic Functions
   16.14 The Error Function
   16.15 Laplace Transform of Bessel's Functions J0(t) and J1(t)
   16.16 Initial- and Final-Value Theorems
   16.17 Laplace Transform of the Laplace Transform
   Exercise 16.1; Answers
   16.18 The Inverse Laplace Transform (ILT)
   16.19 Null Function
   16.20 Uniqueness of the Inverse Laplace Transform
   16.21 Use of Partial Fractions to Find the ILT
   16.22 Convolution
   16.23 The Heaviside Expansion Formula
   16.24 Method of Finding Residues
   16.25 Inversion Formula for the Laplace Transform
   Exercise 16.2; Answers
   16.26 Applications of Laplace Transform
   Exercise 16.3; Answers
   16.27 Solution of Ordinary Differential Equations with Variable Coefficients
   Exercise 16.4; Answers
   16.28 Simultaneous Ordinary Differential Equations
   Exercise 16.5; Answers
   16.29 Solution of Partial Differential Equations (PDE)
   Exercise 16.6; Answers
   Summary; Objective-Type Questions; Answers

Appendix: Basic Formulae and Concepts
Index

Preface

Engineering mathematics (also called mathematical engineering) is a branch of applied mathematics concerning mathematical models (mathematical methods and techniques) that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology—both of which may belong to the wider category of engineering science—engineering mathematics is an interdisciplinary subject motivated by engineers' needs. These needs may be practical, theoretical, or other considerations, together with their specializations, and deal with constraints effective in engineering work.
Historically, engineering mathematics consisted mostly of mathematical analysis (applied analysis), most notably differential equations, real analysis, and complex analysis (including vector analysis), together with numerical analysis and Fourier analysis, as well as linear algebra and applied probability outside of analysis.

Salient Features
- Complete coverage of the foundational topics in Engineering Mathematics
- 360° coverage of subject matter: Introduction, History, Pedagogy, Applications
- Engrossing problem sets based on real-life situations
- 626 solved problems with detailed procedures and solutions
- 397 MCQs with answers, derived from important competitive examinations
- Appendix with a chapter-wise list of formulae
- Other pedagogical aids: Drill and Practice Problems: 993; Illustrations: 130

Chapter Organization
The book is divided into sixteen chapters. In Chapter 1, we have discussed matrix algebra, which includes the basic terminology of a matrix, the matrix inverse, the rank of a matrix, solutions of homogeneous and non-homogeneous simultaneous equations, characteristic roots and vectors, quadratic forms, and applications of matrices. Chapter 2 deals with successive differentiation and Leibnitz's theorem, while Chapter 3 discusses partial derivatives of higher orders, homogeneous functions including Euler's theorem, the Jacobian and its properties, and Taylor's series. Chapter 4 covers Lagrange's method of multipliers for finding extreme points of functions of two and more variables, along with convexity, concavity, and points of inflection; asymptotes of a curve and curve tracing in Cartesian, polar, and parametric coordinates are also discussed. In Chapter 5, we present areas, volumes, and surfaces of solids of revolution of curves in Cartesian, polar, and parametric coordinates; moment of inertia; improper and multiple integrals; and Dirichlet's integral.
In Chapter 6, special functions, which include Bessel's equation, Legendre's polynomials, and the Beta and Gamma functions, are discussed along with their properties, including orthogonality. Chapter 7 covers vector differential and integral calculus, which includes the parametric representation of vector functions, the gradient of a scalar field, the divergence of a vector field, the curl of a vector function, Green's theorem, Gauss's theorem, and Stokes' theorem. In Chapter 8, infinite series and sequences are discussed. Chapter 9 deals with Fourier series, covering periodic functions, even and odd functions, Euler's formulae, Fourier series for discontinuous functions, and Fourier sine and cosine series.

Chapters 10, 11, and 12 cover the basics of ordinary differential equations, integrating factors, exact differential equations, linear differential equations of higher order with constant coefficients, second-order differential equations with variable coefficients, and the methods to solve these equations. Chapter 13 covers series solutions, which deal with analytic functions, ordinary and singular points, and power series and their solutions, including the Frobenius method.

Chapters 14 and 15 deal with partial differential equations. In Chapter 14, methods to solve homogeneous and nonhomogeneous linear partial differential equations, Clairaut's equation, Charpit's method, and Monge's method, along with classifications of partial differential equations, are discussed. Chapter 15 deals with applications of partial differential equations, including the wave and heat equations. Finally, Chapter 16 presents the Laplace and inverse Laplace transforms, with their properties and theorems, and applications of the Laplace transform. A Summary and Objective-Type Questions are also given at the end of every chapter.
Online Learning Center
The Online Learning Center can be accessed at https://www.mhhe.com/gupta/em1/2 and contains the Instructor Elements: Solutions Manual.

Acknowledgements
The authors are thankful to all who have directly or indirectly helped them during the preparation of this book. We are also thankful to all our family members for their encouragement, patience, and all possible help provided by them while we were engaged in writing the manuscript. We would also like to thank the following reviewers for their feedback and comments:

- Rama Bhargava, Indian Institute of Technology (IIT) Roorkee, Uttarakhand
- Kuldeep Sharma, Sachin Kumar, and V P Gupta, Krishna Institute of Engineering and Technology (KIET), Ghaziabad, Uttar Pradesh
- Vatsala Mathur, Malaviya National Institute of Technology, Jaipur, Rajasthan
- Chirag Barasara, Atmiya Institute of Technology and Science, Rajkot, Gujarat
- Bikash Bhattacharjya, Indian Institute of Technology (IIT) Guwahati, Assam
- Bimal Kumar Mishra, Birla Institute of Technology (BIT), Mesra, Jharkhand
- Debdas Mishra, C V Raman College of Engineering, Bhubaneswar, Odisha
- S K Abdur Rauf, Sikkim Manipal Institute of Technology, Majitar, Sikkim
- Mahesh A Yeolekar, Amiraj College of Engineering and Technology, Sanand, Gujarat
- S Sriram, Patrician College of Arts and Science, Chennai, Tamil Nadu
- R Sekar, Pondicherry Engineering College, Pondicherry

We wish to express our appreciation for the support provided by the staff at McGraw Hill Education (India) during the publication of this book.

Feedback Request
We shall be grateful to acknowledge any constructive comments/suggestions from the readers for further improvement of the book.

C B GUPTA
S R SINGH
MUKESH KUMAR

Publisher's Note
McGraw Hill Education (India) invites suggestions and comments from you, all of which can be sent to info.india@mheducation.com (kindly mention the title and author name in the subject line). Piracy-related issues may also be reported.
1. Matrix Algebra

1.1 INTRODUCTION

In many applications in physics, pure and applied mathematics, and engineering, it is useful to represent and manipulate data in tabular or array form. A rectangular array which obeys certain algebraic rules of operation is called a matrix. The aim of this chapter is to study the algebra of matrices and algebraic structures, along with its application to the study of systems of linear equations.

1.2 NOTATION AND TERMINOLOGY

A matrix is a rectangular array of numbers (real or complex); its order is the number of rows and columns that define the array. Thus, the matrices

$$\begin{bmatrix} 1 & 3 \\ 4 & 5 \end{bmatrix}, \quad \begin{bmatrix} 1 & 3 & 4 \\ 5 & 0 & 2 \end{bmatrix}, \quad \begin{bmatrix} 1 & 3 & -2 \\ 3 & 2 & 5 \\ 7 & -1 & 0 \end{bmatrix}, \quad \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \quad [1,\ 1-i,\ 1], \quad [0]$$

have orders 2 × 2, 2 × 3, 3 × 3, 3 × 1, 1 × 3, and 1 × 1, respectively (the order 2 × 2 is read "two by two"). In general, the matrix A defined by

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1q} \\ a_{21} & a_{22} & \cdots & a_{2q} \\ \vdots & \vdots & & \vdots \\ a_{p1} & a_{p2} & \cdots & a_{pq} \end{bmatrix}$$

is of order p × q. The numbers $a_{ij}$ (i = 1, 2, …, p; j = 1, 2, …, q) are called the entries or elements of A; the first subscript gives the row position and the second the column position. In general, we will use bold letters to represent matrices, but sometimes it is convenient to mention the order of A explicitly, or to display a typical element, by use of the notations $A_{p \times q}$ and $(a_{ij})$:

$$A_{3 \times 3} = (i/j) = \begin{bmatrix} 1 & 1/2 & 1/3 \\ 2 & 1 & 2/3 \\ 3 & 3/2 & 1 \end{bmatrix} \quad (i = 1, 2, 3;\ j = 1, 2, 3)$$

$$A_{2 \times 4} = (i - j) = \begin{bmatrix} 0 & -1 & -2 & -3 \\ 1 & 0 & -1 & -2 \end{bmatrix} \quad (i = 1, 2;\ j = 1, 2, 3, 4)$$

Consider the system of simultaneous equations:

$$\begin{aligned} 3x_1 - 2x_2 + 3x_3 - x_4 &= 1 \\ x_1 - 2x_3 &= 2 \\ x_1 + x_2 + x_4 &= -1 \end{aligned}$$

The matrix

$$A = \begin{bmatrix} 3 & -2 & 3 & -1 \\ 1 & 0 & -2 & 0 \\ 1 & 1 & 0 & 1 \end{bmatrix}$$

is the coefficient matrix, and

$$C = \begin{bmatrix} 3 & -2 & 3 & -1 & 1 \\ 1 & 0 & -2 & 0 & 2 \\ 1 & 1 & 0 & 1 & -1 \end{bmatrix}$$

is the augmented matrix. The augmented matrix is the coefficient matrix with an extra column containing the right-hand-side constants.
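The order bookkeeping and the coefficient/augmented-matrix construction above are easy to check numerically. The following is an illustrative sketch (not part of the textbook), assuming NumPy is available:

```python
import numpy as np

# The 3 x 3 example matrix from the text
A = np.array([[1, 3, -2],
              [3, 2,  5],
              [7, -1, 0]])
print(A.shape)  # (3, 3), i.e., order 3 x 3

# Coefficient matrix of the system
#   3x1 - 2x2 + 3x3 - x4 =  1
#    x1       - 2x3      =  2
#    x1 +  x2       + x4 = -1
coeff = np.array([[3, -2,  3, -1],
                  [1,  0, -2,  0],
                  [1,  1,  0,  1]])
b = np.array([[1], [2], [-1]])  # right-hand-side constants

# The augmented matrix appends the constant column to the coefficient matrix
augmented = np.hstack([coeff, b])
print(augmented.shape)  # (3, 5)
```

Here `np.hstack` does exactly what the definition says: it glues the extra column of constants onto the coefficient matrix.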
1.3 SPECIAL TYPES OF MATRICES

(i) Square Matrix
A matrix in which the number of rows is equal to the number of columns is called a square matrix. For a square matrix $A = [a_{ij}]_{n \times n}$, the elements $a_{11}, a_{22}, \ldots, a_{nn}$ (those with i = j) are called the diagonal elements, and the line along which they lie is called the principal diagonal of the matrix.

Example: $A = \begin{bmatrix} 0 & 5 & 4 \\ 3 & -1 & 7 \\ 4 & 3 & 1 \end{bmatrix}$ is a square matrix of order 3. The elements 0, −1, 1 constitute the principal diagonal of A.

(ii) Row Matrix and Column Matrix
Any 1 × n matrix, which has only one row and n columns, is called a row matrix or row vector. Similarly, any m × 1 matrix, which has m rows and only one column, is called a column matrix or column vector.

Example: A = [3, 4, 5, 6] is a row matrix of type 1 × 4, while $B = \begin{bmatrix} 7 \\ 8 \\ 9 \\ 5 \end{bmatrix}$ is a column matrix of type 4 × 1.

(iii) Unit Matrix or Identity Matrix
A square matrix in which each diagonal element is one and each non-diagonal element is zero is called a unit matrix or identity matrix, denoted by I; $I_n$ denotes the unit matrix of order n. Thus, a square matrix $A = [a_{ij}]$ is a unit matrix if $a_{ij} = 1$ for i = j and $a_{ij} = 0$ for i ≠ j.

Example: $I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, $I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$, $I_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$

(iv) Null Matrix or Zero Matrix
The m × n matrix whose elements are all zero is called the null matrix or zero matrix of type m × n, denoted by $O_{m \times n}$.

Example: $O_{3 \times 4} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$ and $O_{3 \times 3} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ are null matrices of types 3 × 4 and 3 × 3.

(v) Diagonal Matrix
A square matrix $A = [a_{ij}]_{n \times n}$ whose elements above and below the principal diagonal are all zero, i.e., $a_{ij} = 0$ for all i ≠ j, is called a diagonal matrix. Thus, a diagonal matrix is both upper and lower triangular.
Example:
       [ 3 0 0 ]              [ 5 0 0 ]
A3×3 = [ 0 0 0 ]   and B3×3 = [ 0 2 0 ]   are diagonal matrices.
       [ 0 0 6 ]              [ 0 0 3 ]

(vi) Scalar Matrix
A diagonal matrix whose diagonal elements are all equal to a scalar is called a scalar matrix.

                [ K 0 0 0 ]
Example: A4×4 = [ 0 K 0 0 ]   is a scalar matrix each of whose diagonal elements is equal to K.
                [ 0 0 K 0 ]
                [ 0 0 0 K ]

(vii) Upper and Lower Triangular Matrices
A square matrix A = [aij] is called an upper triangular matrix if aij = 0 whenever i > j. Thus, in an upper triangular matrix, all the elements below the principal diagonal are zero. Similarly, a square matrix A = [aij] is called a lower triangular matrix if aij = 0 whenever i < j; in a lower triangular matrix, all the elements above the principal diagonal are zero.

Example:
    [ 3 5 7 ]          [ 1  3  4  5 ]
A = [ 0 2 3 ]   and B = [ 0  2 -1  0 ]   are upper triangular matrices;
    [ 0 0 4 ]          [ 0  0  3  1 ]
                       [ 0  0  0  7 ]

    [ 2 0 0 ]          [ 2  0  0  0 ]
P = [ 1 2 0 ]   and Q = [ 5  3  0  0 ]   are lower triangular matrices.
    [ 3 5 7 ]          [ 4  3  6  0 ]
                       [ 6 -1  5  8 ]

(viii) Orthogonal Matrix
A square matrix A is said to be orthogonal if A^T A = I.

                   [ -2  1  2 ]
Example: A = (1/3) [  2  2  1 ]
                   [  1 -2  2 ]

(ix) Idempotent Matrix
A matrix A is said to be idempotent if A² = A.

             [  2 -2 -4 ]
Example: A = [ -1  3  4 ]
             [  1 -2 -3 ]

(x) Involutory Matrix
A matrix A is said to be involutory if A² = I, where I is the identity matrix.

             [ -5 -8  0 ]
Example: A = [  3  5  0 ]
             [  1  2 -1 ]

(xi) Nilpotent Matrix
A matrix A is said to be nilpotent if A^K = O (the null matrix) for some positive integer K. The least positive integer K for which A^K = O is called the index of the nilpotent matrix.

             [  ab   b² ]
Example: A = [ -a²  -ab ]   satisfies A² = O.

(xii) Trace of a Matrix
Let A be a square matrix of order n. The sum of the elements of A lying along the principal diagonal is called the trace of the matrix A.
The trace of a matrix A is denoted by tr A. Thus, if A = [aij]n×n then

tr A = Σ (i = 1 to n) aii = a11 + a22 + a33 + ... + ann

Note: Let A and B be two square matrices of order n and λ a scalar. Then
(i) tr(λA) = λ tr A
(ii) tr(A + B) = tr A + tr B
(iii) tr(AB) = tr(BA)

1.4 EQUALITY OF TWO MATRICES

Two matrices A = [aij] and B = [bij] are said to be equal if they are of the same size and the corresponding elements of the two matrices are the same, i.e., aij = bij ∀ i, j. If two matrices A and B are equal, we write A = B; if they are not equal, we write A ≠ B. If two matrices are not of the same size, they cannot be equal.

Example: If

    [ a b ]          [ e f ]
A = [ c d ]   and B = [ g h ]   (both 2 × 2)

then A = B iff a = e, b = f, c = g, and d = h.

Example 1  Find the values of a, b, c, and d so that the matrices A and B may be equal, where

    [ a b ]        [ 1  3 ]
A = [ c d ],   B = [ 0 -5 ]

Solution  The matrices A and B are of the same size, 2 × 2. If A = B then the corresponding elements of A and B must be equal. Therefore, if a = 1, b = 3, c = 0, and d = -5 then we will have A = B.

1.5 PROPERTIES OF MATRICES

1.5.1 Addition and Subtraction of Two Matrices

Two matrices A and B are said to be comparable for addition and subtraction if they are of the same order. Let A = [aij]m×n and B = [bij]m×n be the two matrices. Then the addition of the matrices A and B is defined by

C = [cij] = A + B = [aij] + [bij] = [aij + bij],   i.e., cij = aij + bij for i = 1, 2, ..., m; j = 1, 2, ..., n

The order of the new matrix C is the same as that of A and B. Similarly,

C = A - B = [aij] - [bij] = [aij - bij],   i.e., cij = aij - bij for i = 1, 2, ..., m; j = 1, 2, ..., n

Let A = [aij], B = [bij], and C = [cij] be m × n matrices with entries from the complex numbers. Then the following properties hold:
(i) Commutative law for addition, i.e., A + B = B + A.
(ii) Associative law for addition, i.e., (A + B) + C = A + (B + C).
(iii) Existence of additive identity, i.e., A + O = O + A = A; O is the additive identity.
(iv) Existence of additive inverse, i.e., A + (-A) = O = (-A) + A; -A is the additive inverse of A.

1.5.2 Multiplication of Matrices

Two matrices A = [aij]m×n and B = [bij]n×p are said to be conformable for the product AB if the number of columns in the matrix A is equal to the number of rows in the matrix B; then the matrix product exists. Let A = [aij]m×n and B = [bij]n×p be two matrices. The product AB is the matrix C = [cij]m×p such that

cij = ai1 b1j + ai2 b2j + ... + ain bnj = Σ (r = 1 to n) air brj   for i = 1, 2, ..., m; j = 1, 2, ..., p

Example 2  If

    [ 3  5 ]        [ 2 3 ]
A = [ 6 -1 ],   B = [ 1 0 ]

find A + B and A - B.

Solution

A + B = [ 3+2  5+3 ] = [ 5  8 ]
        [ 6+1 -1+0 ]   [ 7 -1 ]

A - B = [ 3-2  5-3 ] = [ 1  2 ]
        [ 6-1 -1-0 ]   [ 5 -1 ]

Example 3  Find the product of the matrices A and B, where

    [  1 3 0 ]        [  2 5 1 ]
A = [ -1 2 1 ]   and B = [ -1 0 2 ]
    [  0 0 2 ]        [  2 1 3 ]

Solution

     [  1 3 0 ] [  2 5 1 ]   [ -1  5 7 ]
AB = [ -1 2 1 ] [ -1 0 2 ] = [ -2 -4 6 ]
     [  0 0 2 ] [  2 1 3 ]   [  4  2 6 ]

Example 4  Give an example to show that the product of two non-zero matrices may be a zero matrix.

Solution  Let

    [ 1 0 ]        [ 0 0 ]
A = [ 0 0 ],   B = [ 0 1 ]

Then A and B are both 2 × 2 matrices and are, hence, conformable for the product. Now,

A·B = [ 0+0  0+0 ] = [ 0 0 ]
      [ 0+0  0+0 ]   [ 0 0 ]

1.5.3 Multiplication of a Matrix by a Scalar

Let λ be a scalar (real or complex) and A = [aij] a given matrix. The multiplication of A by the scalar λ is defined by λA = λ[aij] = [λ aij]. Thus, each element of the matrix A is multiplied by the scalar λ. The size of the matrix so obtained is the same as that of the given matrix A.
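The summation formula for cij can be written out verbatim as three nested loops. The following NumPy sketch is an illustration added here (not from the book): it applies the definition to the matrices of Example 3, compares the result with NumPy's built-in product, and reproduces Example 4's zero product of non-zero matrices.

```python
import numpy as np

A = np.array([[1, 3, 0], [-1, 2, 1], [0, 0, 2]])
B = np.array([[2, 5, 1], [-1, 0, 2], [2, 1, 3]])

# c_ij = sum over r of a_ir * b_rj  -- the definition, written as loops
C = np.zeros((3, 3), dtype=int)
for i in range(3):
    for j in range(3):
        for r in range(3):
            C[i, j] += A[i, r] * B[r, j]

assert (C == A @ B).all()     # matches NumPy's matrix product
print(C)                      # the product found in Example 3

# Example 4: two non-zero matrices whose product is the zero matrix
P = np.array([[1, 0], [0, 0]])
Q = np.array([[0, 0], [0, 1]])
assert (P @ Q == 0).all()
```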
Example:

  [ 1 3 5 ]   [ 2×1 2×3 2×5 ]   [  2 6 10 ]
2 [ 6 1 0 ] = [ 2×6 2×1 2×0 ] = [ 12 2  0 ]

1.6 PROPERTIES OF MATRIX MULTIPLICATION

If A = [aij]m×n, B = [bjk]n×p, and C = [ckl]p×q are three matrices with entries from the set of complex numbers then
(i) Associative law: (AB)C = A(BC)
(ii) Distributive law: A(B + C) = AB + AC
(iii) AB ≠ BA, in general. Thus, the commutative law is not true for matrix multiplication.

1.7 TRANSPOSE OF A MATRIX

If A = [aij] is an m × n matrix then the transpose of A, denoted by A′ or A^T, is defined as A′ or A^T = [aji]n×m. Thus, the matrix obtained by interchanging the rows and columns of a matrix A is called the transpose of A.

             [ 2 0 7 ]                                        [ 2 2 2 ]
Example: If A = [ 2 5 8 ]   then the transpose of the matrix A is [ 0 5 1 ]
             [ 2 1 7 ]                                        [ 7 8 7 ]

Further,
(a) The transpose of a column matrix is a row matrix. Example: If A = [3; 4; 5; 8] (4 × 1) then A^T = [3  4  5  8].
(b) The transpose of a row matrix is a column matrix. Example: If A = [3  4  5  8] then A^T = [3; 4; 5; 8] (4 × 1).
(c) If A is a p × q matrix then A^T is a q × p matrix. Therefore, the products AA^T and A^T A are both defined and are of orders p × p and q × q, respectively.
(d) If A^T and B^T denote the transposes of A and B respectively then
  (i) (A^T)^T = A
  (ii) (A + B)^T = A^T + B^T
  (iii) (AB)^T = B^T A^T

1.8 SYMMETRIC AND SKEW-SYMMETRIC MATRICES

(i) Symmetric Matrix
A square matrix A = [aij] is said to be symmetric if A^T = A. Thus, for a symmetric matrix, aij = aji ∀ i, j.

             [ a h g ]
Example: A = [ h b f ]   is a symmetric matrix of order 3 × 3.
             [ g f c ]

(ii) Skew-Symmetric Matrix
A square matrix A = [aij] is said to be skew-symmetric if A^T = -A. Thus, for a skew-symmetric matrix, aij = -aji ∀ i, j. For the diagonal elements i = j, so aii = -aii, or 2aii = 0, or aii = 0. Thus, the diagonal elements are all zero.

             [  0  a  b ]
Example: A = [ -a  0  c ]   is a skew-symmetric matrix.
             [ -b -c  0 ]
1.9 TRANSPOSED CONJUGATE, HERMITIAN, AND SKEW-HERMITIAN MATRICES

(i) Transposed Conjugate of a Matrix
The transpose of the conjugate of a matrix A is called the transposed conjugate of A; it is denoted by A^θ or by A*. The conjugate of the transpose of A is the same as the transpose of the conjugate of A, i.e.,

(Ā)^T = (A^T)‾ = A^θ

If A = [aij]m×n then A^θ = [bji]n×m, where bji = āij.

Example: If

    [ 1+2i  2-4i  2+5i ]
A = [ 4-5i  7+2i  7+3i ]
    [   8   5+6i    7  ]

then

      [ 1+2i  4-5i    8  ]                        [ 1-2i  4+5i    8  ]
A^T = [ 2-4i  7+2i  5+6i ]   and A^θ = (A^T)‾ =   [ 2+4i  7-2i  5-6i ]
      [ 2+5i  7+3i    7  ]                        [ 2-5i  7-3i    7  ]

Theorem 1  If A^θ and B^θ are the transposed conjugates of A and B respectively then
(i) (A^θ)^θ = A
(ii) (A + B)^θ = A^θ + B^θ, A and B being of the same order
(iii) (λA)^θ = λ̄ A^θ, λ being any complex number
(iv) (AB)^θ = B^θ A^θ, A and B being conformable for multiplication

(ii) Hermitian Matrix
A square matrix A = [aij] is said to be Hermitian if A^θ = A, i.e., if aij = āji ∀ i, j. If A is a Hermitian matrix then aii = āii ∀ i, by definition; therefore, aii is real for all i. Thus, every diagonal element of a Hermitian matrix must be real.

Example:
    [  a    b+ic ]          [  1    2+i   3-4i ]
A = [ b-ic   d   ]   and B = [  2-i   0    5-4i ]   are Hermitian matrices.
                            [ 3+4i  5+4i    3  ]

(iii) Skew-Hermitian Matrix
A square matrix A = [aij] is said to be skew-Hermitian if A^θ = -A, i.e., if aij = -āji ∀ i, j. If A is a skew-Hermitian matrix then aii = -āii, so aii + āii = 0 ∀ i. Thus, the diagonal elements of a skew-Hermitian matrix must be pure imaginary numbers or zero.

Example:
    [   0    -3-2i ]          [  -2i   3+5i ]
A = [ 3-2i     0   ]   and B = [ -3+5i    0  ]   are skew-Hermitian matrices.

We observe the following notes:
1. If A is a symmetric (skew-symmetric) matrix then kA is also a symmetric (skew-symmetric) matrix, where k is any constant.
2.
If A is a Hermitian matrix then iA is a skew-Hermitian matrix.
3. If A is a skew-Hermitian matrix then iA is a Hermitian matrix.
4. If A and B are symmetric (skew-symmetric) then (A + B) is also a symmetric (skew-symmetric) matrix.
5. If A is any square matrix then A + A^T is symmetric and A - A^T is skew-symmetric.
6. If A is any square matrix then A + A^θ, AA^θ, and A^θA are all Hermitian, and A - A^θ is skew-Hermitian.
7. Every real symmetric matrix is Hermitian.

Example 5  Give an example of a matrix which is skew-symmetric but not skew-Hermitian.

Solution  Let

    [   0    2+3i ]
A = [ -2-3i    0  ]

be a square matrix of order 2 × 2. Then

      [   0    -2-3i ]
A^T = [ 2+3i     0   ] = -A

Thus, the matrix A is skew-symmetric. Again,

      [   0    -2+3i ]
A^θ = [ 2-3i     0   ] ≠ -A

so the matrix A is not skew-Hermitian.

Example 6  Show that every square matrix is uniquely expressible as the sum of a symmetric matrix and a skew-symmetric matrix.

Solution  Let A be any square matrix. We can write

A = (1/2)(A + A^T) + (1/2)(A - A^T) = P + Q (say)

where P = (1/2)(A + A^T) and Q = (1/2)(A - A^T). Now,

P^T = [(1/2)(A + A^T)]^T
    = (1/2)(A + A^T)^T            [∵ (λA)^T = λA^T]
    = (1/2)[A^T + (A^T)^T]        [∵ (A + B)^T = A^T + B^T]
    = (1/2)(A^T + A)
    = (1/2)(A + A^T) = P

Thus, P is a symmetric matrix. Again,

Q^T = [(1/2)(A - A^T)]^T = (1/2)(A - A^T)^T = (1/2)[A^T - (A^T)^T] = (1/2)(A^T - A) = -(1/2)(A - A^T) = -Q

Therefore, Q is a skew-symmetric matrix. Thus, we have expressed the square matrix A as the sum of a symmetric and a skew-symmetric matrix. Now, to prove that the representation is unique, let A = R + S be another such representation of A, where R is symmetric and S is skew-symmetric.
Then, to prove that R = P and S = Q, we have

A^T = (R + S)^T = R^T + S^T = R - S   (∵ R^T = R and S^T = -S)

Therefore, A + A^T = 2R and A - A^T = 2S. This implies that R = (1/2)(A + A^T) and S = (1/2)(A - A^T). Thus, R = P and S = Q. Therefore, the representation is unique.

Example 7  Show that every square matrix is uniquely expressible as the sum of a Hermitian matrix and a skew-Hermitian matrix.

Solution  If A is any square matrix then A + A^θ is a Hermitian matrix and A - A^θ is a skew-Hermitian matrix. Therefore, (1/2)(A + A^θ) is Hermitian and (1/2)(A - A^θ) is skew-Hermitian. Now,

A = (1/2)(A + A^θ) + (1/2)(A - A^θ) = P + Q (say)

where P = (1/2)(A + A^θ) is Hermitian and Q = (1/2)(A - A^θ) is skew-Hermitian. Thus, every square matrix can be expressed as the sum of a Hermitian matrix and a skew-Hermitian matrix. To prove that the representation is unique, let A = R + S be another such representation, where R is Hermitian and S is skew-Hermitian. Then

A^θ = (R + S)^θ = R^θ + S^θ = R - S   (∵ R^θ = R and S^θ = -S)

∴ R = (1/2)(A + A^θ) = P and S = (1/2)(A - A^θ) = Q. Thus, the representation is unique.

1.10 ELEMENTARY TRANSFORMATIONS OR ELEMENTARY OPERATIONS

The following transformations are called elementary transformations of a matrix:
(i) Interchange of two rows (columns)
(ii) Multiplication of a row (column) by a non-zero scalar
(iii) Addition or subtraction of K multiples of a row (column) to another row (column)

Notation  The row (column) transformations will be denoted by the following symbols:
(i) Ri ↔ Rj (Ci ↔ Cj) for the interchange of the ith and jth rows (columns)
(ii) Ri → KRi (Ci → KCi) for multiplication of the ith row (column) by K
(iii) Ri → Ri + aRj (Ci → Ci + aCj) for addition of a times the jth row (column) to the ith row (column)

1.11 THE INVERSE OF A MATRIX

The inverse of an n × n matrix A = [aij], denoted by A⁻¹, is an n × n matrix such that

AA⁻¹ = A⁻¹A = In

where In is the n × n unit matrix.
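As an illustrative sketch added here (not from the book), the decompositions of Examples 6 and 7 and the defining property AA⁻¹ = A⁻¹A = In can be checked numerically; the random test matrix and the use of `np.linalg.inv` are this sketch's own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-4, 5, (3, 3)).astype(float)

# Example 6: A = P + Q with P symmetric and Q skew-symmetric
P = (A + A.T) / 2
Q = (A - A.T) / 2
assert np.allclose(P, P.T) and np.allclose(Q, -Q.T)
assert np.allclose(P + Q, A)

# Example 7: the same split with the conjugate transpose, for a complex matrix
Z = A + 1j * rng.integers(-4, 5, (3, 3))
H = (Z + Z.conj().T) / 2          # Hermitian part
S = (Z - Z.conj().T) / 2          # skew-Hermitian part
assert np.allclose(H, H.conj().T) and np.allclose(S, -S.conj().T)

# Defining property of the inverse (A must be non-singular)
M = np.array([[3., -3., 4.], [2., -3., 4.], [0., -1., 1.]])
Minv = np.linalg.inv(M)
assert np.allclose(M @ Minv, np.eye(3)) and np.allclose(Minv @ M, np.eye(3))
```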
If A has an inverse, then A is called a non-singular matrix. If A has no inverse, then A is called a singular matrix.

1.11.1 The Inverse of a Square Matrix is Unique

Proof  Let B and C be inverses of A. Then

AB = BA = In      (1)
AC = CA = In      (2)

From (1), we have AB = In. Premultiplication by C gives

C(AB) = CIn = C      (3)

From (2), we have CA = In. Post-multiplication by B gives

(CA)B = InB = B      (4)

Since C(AB) = (CA)B, from (3) and (4) we have B = C. Hence, the inverse of a square matrix is unique.

1.11.2 Some Special Points on the Inverse of a Matrix

(i) If A is any n-rowed square matrix then (Adj A)A = A(Adj A) = |A| In, where In is the n-rowed unit matrix.
(ii) The necessary and sufficient condition for a square matrix A to possess an inverse is that |A| ≠ 0.
(iii) If A is an n × n non-singular matrix then (A′)⁻¹ = (A⁻¹)′, where ( ′ ) (dash) denotes the transpose.
(iv) If A is an n × n non-singular matrix then (A⁻¹)^θ = (A^θ)⁻¹.
(v) If A, B are two n-rowed non-singular matrices then AB is also non-singular and (AB)⁻¹ = B⁻¹A⁻¹.
(vi) If A is a non-singular matrix then det(A⁻¹) = (det A)⁻¹.
(vii) If the matrices A and B commute then A⁻¹ and B⁻¹ also commute.
(viii) If A, B, C are three matrices conformable for multiplication then (ABC)⁻¹ = C⁻¹ B⁻¹ A⁻¹.
(ix) If the product of two non-zero square matrices is a zero matrix then both must be singular matrices.
(x) If A is an n × n matrix then |adj A| = |A|^(n-1).
(xi) If A is a non-singular matrix then adj adj A = |A|^(n-2) A.
(xii) If A and B are square matrices of the same order then adj(AB) = (adj B)(adj A).

1.11.3 Method of Finding the Inverse by Elementary Operations

The elementary row transformations which reduce a given square matrix A to the unit matrix I, when applied in the same order to the unit matrix I, give the inverse of A, i.e., A⁻¹.
To find A⁻¹, write the matrix A and I side by side and apply the same row operations to both. When A has been reduced to I, the other matrix represents A⁻¹.

Example 8  Using elementary row operations, find the inverse of the matrix A, where

    [ 3 -3 4 ]
A = [ 2 -3 4 ]
    [ 0 -1 1 ]

Solution  Writing the given matrix A side by side with the unit matrix of the same order,

           [ 3 -3 4 | 1 0 0 ]
[A | I3] = [ 2 -3 4 | 0 1 0 ]
           [ 0 -1 1 | 0 0 1 ]

R1 → R1 - R2:
[ 1  0 0 | 1 -1 0 ]
[ 2 -3 4 | 0  1 0 ]
[ 0 -1 1 | 0  0 1 ]

R2 → R2 - 2R1:
[ 1  0 0 |  1 -1 0 ]
[ 0 -3 4 | -2  3 0 ]
[ 0 -1 1 |  0  0 1 ]

R2 → R2 - 4R3:
[ 1  0 0 |  1 -1  0 ]
[ 0  1 0 | -2  3 -4 ]
[ 0 -1 1 |  0  0  1 ]

R3 → R3 + R2:
[ 1 0 0 |  1 -1  0 ]
[ 0 1 0 | -2  3 -4 ]   = [I3 | A⁻¹]
[ 0 0 1 | -2  3 -3 ]

Hence,

      [  1 -1  0 ]
A⁻¹ = [ -2  3 -4 ]
      [ -2  3 -3 ]

Example 9  Using elementary row operations, find the inverse of the matrix A, where

    [  0 2  1  3 ]
A = [  1 1 -1 -2 ]
    [  1 2  0  1 ]
    [ -1 1  2  6 ]

Solution

           [  0 2  1  3 | 1 0 0 0 ]
[A | I4] = [  1 1 -1 -2 | 0 1 0 0 ]
           [  1 2  0  1 | 0 0 1 0 ]
           [ -1 1  2  6 | 0 0 0 1 ]

R1 ↔ R2:
[  1 1 -1 -2 | 0 1 0 0 ]
[  0 2  1  3 | 1 0 0 0 ]
[  1 2  0  1 | 0 0 1 0 ]
[ -1 1  2  6 | 0 0 0 1 ]

R3 → R3 - R1, R4 → R4 + R1:
[ 1 1 -1 -2 | 0  1 0 0 ]
[ 0 2  1  3 | 1  0 0 0 ]
[ 0 1  1  3 | 0 -1 1 0 ]
[ 0 2  1  4 | 0  1 0 1 ]

R2 → R2 - R3:
[ 1 1 -1 -2 | 0  1  0 0 ]
[ 0 1  0  0 | 1  1 -1 0 ]
[ 0 1  1  3 | 0 -1  1 0 ]
[ 0 2  1  4 | 0  1  0 1 ]

R3 → R3 - R2, R4 → R4 - 2R2:
[ 1 1 -1 -2 |  0  1  0 0 ]
[ 0 1  0  0 |  1  1 -1 0 ]
[ 0 0  1  3 | -1 -2  2 0 ]
[ 0 0  1  4 | -2 -1  2 1 ]

R4 → R4 - R3:
[ 1 1 -1 -2 |  0  1  0 0 ]
[ 0 1  0  0 |  1  1 -1 0 ]
[ 0 0  1  3 | -1 -2  2 0 ]
[ 0 0  0  1 | -1  1  0 1 ]

R3 → R3 - 3R4, R1 → R1 + 2R4:
[ 1 1 -1 0 | -2  3  0  2 ]
[ 0 1  0 0 |  1  1 -1  0 ]
[ 0 0  1 0 |  2 -5  2 -3 ]
[ 0 0  0 1 | -1  1  0  1 ]

R1 → R1 + R3:
[ 1 1 0 0 | 0 -2 2 -1 ]

R1 → R1 - R2:
[ 1 0 0 0 | -1 -3 3 -1 ]   giving [I4 | A⁻¹]

Hence,
      [ -1 -3  3 -1 ]
A⁻¹ = [  1  1 -1  0 ]
      [  2 -5  2 -3 ]
      [ -1  1  0  1 ]

EXERCISE 1.1

1. Find the inverses of the following matrices by elementary transformations:

(i)   [ 1 3 3 ]      (ii)  [ 2 1 2 ]      (iii) [ 1 -1 1 ]
      [ 1 4 3 ]            [ 2 2 1 ]            [ 4  1 0 ]
      [ 1 3 4 ]            [ 1 2 2 ]            [ 8  1 1 ]

(iv)  [ 1 2 3 ]      (v)   [  1  1  3 ]   (vi)  [ 3 13 17 ]
      [ 2 4 5 ]            [  1  3 -3 ]         [ 5  7  1 ]
      [ 3 5 6 ]            [ -2 -4 -4 ]         [ 8  3 11 ]

Answers

(i)   [  7 -3 -3 ]   (ii)  (1/5) [  2  2 -3 ]   (iii) [  1  2 -1 ]
      [ -1  1  0 ]            [ -3  2  2 ]         [ -4 -7  4 ]
      [ -1  0  1 ]            [  2 -3  2 ]         [ -4 -9  5 ]

(iv)  [  1 -3  2 ]   (v)   (1/4) [ 12  4  6 ]   (vi)  (1/1086) [ -74  92  106 ]
      [ -3  3 -1 ]            [ -5 -1 -3 ]              [  47 103  -82 ]
      [  2 -1  0 ]            [ -1 -1 -1 ]              [  41 -95   44 ]

1.12 ECHELON FORM OF A MATRIX

A matrix A is said to be in echelon form if the following hold:
(i) Every row of the matrix A which has all its entries zero occurs below every row which has a non-zero entry.
(ii) The first non-zero entry in each non-zero row is equal to one.
(iii) The number of zeros preceding the first non-zero element in a row is less than the number of such zeros in the succeeding row.

Example: The matrix

    [ 1 3 2 6 ]
A = [ 0 1 4 2 ]
    [ 0 0 0 5 ]
    [ 0 0 0 0 ]

is in echelon form.

1.13 RANK OF A MATRIX

The rank of a matrix A is said to be r if it possesses the following two properties:
(i) There is at least one minor of order r whose determinant is not equal to zero.
(ii) If the matrix A contains any minor of order (r + 1) then the determinant of every minor of A of order (r + 1) is zero.

Thus, the rank of a matrix is the largest order of a non-zero minor of the matrix. The rank of a matrix A is denoted by r(A). The rank of a matrix in echelon form is equal to the number of non-zero rows of the matrix, i.e., r(A) = number of non-zero rows in the echelon form of the matrix.

Some Important Results
(i) The ranks of A and A^T are the same.
(ii) The rank of a null matrix is zero.
(iii) The rank of a non-singular matrix A of order n is n.
(iv) The rank of an identity matrix of order n is n.
(v)
For a rectangular matrix A of order m × n, rank of A ≤ min(m, n), i.e., the rank cannot exceed the smaller of m and n.
(vi) For an n-square matrix A, if r(A) = n then |A| ≠ 0, i.e., the matrix A is non-singular.
(vii) For any square matrix of order n, if r(A) < n then |A| = 0, i.e., the matrix A is singular.
(viii) The rank of a product of two matrices cannot exceed the rank of either matrix.

1.14 CANONICAL FORM (OR NORMAL FORM) OF A MATRIX

The normal form of a matrix A of order m × n and rank r is one of the forms

[Ir],   [ Ir 0 ],   [Ir  0],   [ Ir ]
        [ 0  0 ]               [ 0  ]

where Ir is an identity matrix of order r. By the application of a number of elementary operations, a matrix of rank r can be reduced to normal form; the rank of the matrix A is then r.

Example 10  Reduce the matrix A to echelon form and, hence, find its rank, where

    [ 1 2 1 2 ]
A = [ 1 3 2 2 ]
    [ 2 4 3 4 ]
    [ 3 7 4 6 ]

Solution

R2 → R2 - R1, R3 → R3 - 2R1, R4 → R4 - 3R1:
[ 1 2 1 2 ]
[ 0 1 1 0 ]
[ 0 0 1 0 ]
[ 0 1 1 0 ]

R4 → R4 - R2:
[ 1 2 1 2 ]
[ 0 1 1 0 ]
[ 0 0 1 0 ]
[ 0 0 0 0 ]

The last equivalent matrix is in echelon form. The number of non-zero rows in this matrix is 3. Therefore, r(A) = number of non-zero rows in the echelon form of the matrix, i.e., r(A) = 3.

Example 11  Determine the rank of the matrix

    [ 1 3  4 3 ]
A = [ 3 9 12 9 ]
    [ 1 3  4 1 ]

Solution

R2 → R2 - 3R1, R3 → R3 - R1:
[ 1 3 4  3 ]
[ 0 0 0  0 ]
[ 0 0 0 -2 ]

R2 ↔ R3:
[ 1 3 4  3 ]
[ 0 0 0 -2 ]
[ 0 0 0  0 ]

The last equivalent matrix is in echelon form with 2 non-zero rows. Therefore, r(A) = 2.

Example 12  Determine the values of K such that the rank of the matrix A is 3, where
    [ 1 1 -1 0 ]
A = [ 4 4 -3 1 ]
    [ K 2  2 2 ]
    [ 9 9  K 3 ]

Solution  We have

R2 → R2 - 4R1, R3 → R3 - 2R1, R4 → R4 - 9R1:
[  1  1  -1  0 ]
[  0  0   1  1 ]
[ K-2 0   4  2 ]
[  0  0  K+9 3 ]

R3 → R3 - 4R2, R4 → R4 - 3R2:
[  1  1  -1   0 ]
[  0  0   1   1 ]
[ K-2 0   0  -2 ]
[  0  0  K+6  0 ]

R4 ↔ R3:
[  1  1  -1   0 ]
[  0  0   1   1 ]
[  0  0  K+6  0 ]
[ K-2 0   0  -2 ]

(i) If K = 2, the last row becomes [0 0 0 -2]; only three of the four rows are linearly independent, so the rank of the matrix A is 3.
(ii) If K = -6, the third row becomes a zero row and three non-zero rows remain, so the rank of the matrix A is 3.

For any other value of K all four rows are independent, so the rank of A is 3 exactly when K = 2 or K = -6.

Example 13  Reduce the matrix

    [  1 2 1  0 ]
A = [ -2 4 3  0 ]
    [  1 0 2 -8 ]

to canonical (normal) form. Hence, find the rank of A.

Solution

C2 → C2 - 2C1, C3 → C3 - C1:
[  1  0 0  0 ]
[ -2  8 5  0 ]
[  1 -2 1 -8 ]

R2 → R2 + 2R1, R3 → R3 - R1:
[ 1  0 0  0 ]
[ 0  8 5  0 ]
[ 0 -2 1 -8 ]

C2 → (1/8)C2:
[ 1   0   5  0 ]
[ 0   1   5  0 ]   becomes
[ 1    0   0  0 ]
[ 0    1   5  0 ]
[ 0  -1/4  1 -8 ]

C3 → C3 - 5C2:
[ 1    0    0   0 ]
[ 0    1    0   0 ]
[ 0  -1/4  9/4 -8 ]

R3 → 4R3:
[ 1  0 0   0 ]
[ 0  1 0   0 ]
[ 0 -1 9 -32 ]

R3 → R3 + R2:
[ 1 0 0   0 ]
[ 0 1 0   0 ]
[ 0 0 9 -32 ]

C3 → (1/9)C3:
[ 1 0 0   0 ]
[ 0 1 0   0 ]
[ 0 0 1 -32 ]

C4 → C4 + 32C3:
[ 1 0 0 0 ]
[ 0 1 0 0 ]   = [I3  0]
[ 0 0 1 0 ]

Hence, the rank of the matrix A is 3 (the order of I3).

Example 14
Reduce the matrix

    [ 1 -1 2 -3 ]
A = [ 4  1 0  2 ]
    [ 0  3 0  4 ]
    [ 0  1 0  2 ]

to the normal form [Ir 0; 0 0] and, hence, find its rank.

Solution

C2 → C2 + C1, C3 → C3 - 2C1, C4 → C4 + 3C1:
[ 1 0  0  0 ]
[ 4 5 -8 14 ]
[ 0 3  0  4 ]
[ 0 1  0  2 ]

R2 → R2 - 4R1:
[ 1 0  0  0 ]
[ 0 5 -8 14 ]
[ 0 3  0  4 ]
[ 0 1  0  2 ]

R2 ↔ R4:
[ 1 0  0  0 ]
[ 0 1  0  2 ]
[ 0 3  0  4 ]
[ 0 5 -8 14 ]

C4 → C4 - 2C2:
[ 1 0  0  0 ]
[ 0 1  0  0 ]
[ 0 3  0 -2 ]
[ 0 5 -8  4 ]

R3 → R3 - 3R2, R4 → R4 - 5R2:
[ 1 0  0  0 ]
[ 0 1  0  0 ]
[ 0 0  0 -2 ]
[ 0 0 -8  4 ]

C3 ↔ C4:
[ 1 0  0  0 ]
[ 0 1  0  0 ]
[ 0 0 -2  0 ]
[ 0 0  4 -8 ]

C3 → -(1/2)C3, C4 → -(1/8)C4:
[ 1 0  0 0 ]
[ 0 1  0 0 ]
[ 0 0  1 0 ]
[ 0 0 -2 1 ]

R4 → R4 + 2R3:
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]   ~ [I4]
[ 0 0 0 1 ]

which is in normal form. Hence, the rank of the matrix A is 4.

Example 15  Find two non-singular matrices P and Q such that PAQ is in the normal form, where

    [ 1  1  1 ]
A = [ 1 -1 -1 ]
    [ 3  1  1 ]

Solution  We write A = I3 A I3, i.e.,

[ 1  1  1 ]   [ 1 0 0 ]     [ 1 0 0 ]
[ 1 -1 -1 ] = [ 0 1 0 ]  A  [ 0 1 0 ]
[ 3  1  1 ]   [ 0 0 1 ]     [ 0 0 1 ]
              (pre-factor)  (post-factor)

Now, we apply elementary operations on the matrix on the left-hand side until it is reduced to normal form; every elementary row operation is also applied to the pre-factor, and every elementary column operation to the post-factor, of the above equation.
Performing R2 → R2 - R1, R3 → R3 - 3R1, we get

[ 1  1  1 ]   [  1 0 0 ]     [ 1 0 0 ]
[ 0 -2 -2 ] = [ -1 1 0 ]  A  [ 0 1 0 ]
[ 0 -2 -2 ]   [ -3 0 1 ]     [ 0 0 1 ]

C2 → C2 - C1, C3 → C3 - C1:

[ 1  0  0 ]   [  1 0 0 ]     [ 1 -1 -1 ]
[ 0 -2 -2 ] = [ -1 1 0 ]  A  [ 0  1  0 ]
[ 0 -2 -2 ]   [ -3 0 1 ]     [ 0  0  1 ]

R2 → -(1/2)R2:

[ 1  0  0 ]   [  1    0   0 ]     [ 1 -1 -1 ]
[ 0  1  1 ] = [ 1/2 -1/2  0 ]  A  [ 0  1  0 ]
[ 0 -2 -2 ]   [ -3    0   1 ]     [ 0  0  1 ]

C3 → C3 - C2:

[ 1  0  0 ]   [  1    0   0 ]     [ 1 -1  0 ]
[ 0  1  0 ] = [ 1/2 -1/2  0 ]  A  [ 0  1 -1 ]
[ 0 -2  0 ]   [ -3    0   1 ]     [ 0  0  1 ]

R3 → R3 + 2R2:

[ 1 0 0 ]   [  1    0   0 ]     [ 1 -1  0 ]
[ 0 1 0 ] = [ 1/2 -1/2  0 ]  A  [ 0  1 -1 ]
[ 0 0 0 ]   [ -2   -1   1 ]     [ 0  0  1 ]

∴ PAQ = [ I2 0; 0 0 ], where

    [  1    0   0 ]       [ 1 -1  0 ]
P = [ 1/2 -1/2  0 ],  Q = [ 0  1 -1 ]
    [ -2   -1   1 ]       [ 0  0  1 ]

and the rank of the matrix A is 2.

Example 16  Find two non-singular matrices P and Q such that the normal form of A is PAQ, where

    [ 1 3 6 -1 ]
A = [ 1 4 5  1 ]   (3 × 4)
    [ 1 5 4  3 ]

Hence, find its rank.

Solution  Consider A = I3 A I4.

R2 → R2 - R1, R3 → R3 - R1:

[ 1 3  6 -1 ]   [  1 0 0 ]
[ 0 1 -1  2 ] = [ -1 1 0 ]  A  I4
[ 0 2 -2  4 ]   [ -1 0 1 ]

C2 → C2 - 3C1, C3 → C3 - 6C1, C4 → C4 + C1:

[ 1 0  0 0 ]   [  1 0 0 ]     [ 1 -3 -6 1 ]
[ 0 1 -1 2 ] = [ -1 1 0 ]  A  [ 0  1  0 0 ]
[ 0 2 -2 4 ]   [ -1 0 1 ]     [ 0  0  1 0 ]
                              [ 0  0  0 1 ]

R3 → R3 - 2R2:

[ 1 0  0 0 ]   [  1  0 0 ]    [ 1 -3 -6 1 ]
[ 0 1 -1 2 ] = [ -1  1 0 ]  A [ 0  1  0 0 ]
[ 0 0  0 0 ]   [  1 -2 1 ]    [ 0  0  1 0 ]
                              [ 0  0  0 1 ]

C3 → C3 + C2, C4 → C4 - 2C2:

[ 1 0 0 0 ]   [  1  0 0 ]    [ 1 -3 -9  7 ]
[ 0 1 0 0 ] = [ -1  1 0 ]  A [ 0  1  1 -2 ]
[ 0 0 0 0 ]   [  1 -2 1 ]    [ 0  0  1  0 ]
                             [ 0  0  0  1 ]

∴ PAQ = [ I2 0; 0 0 ], where
    [  1  0 0 ]       [ 1 -3 -9  7 ]
P = [ -1  1 0 ],  Q = [ 0  1  1 -2 ]
    [  1 -2 1 ]       [ 0  0  1  0 ]
                      [ 0  0  0  1 ]

and the rank of the matrix A is 2.

EXERCISE 1.2

1. Find the ranks of the following matrices:

(i)   [ 1 -1  3  6 ]      (ii)  [ 2  3  7 ]      (iii) [ 2  1  3 ]
      [ 1  3 -3 -4 ]            [ 3 -2  4 ]            [ 4  7 13 ]
      [ 5  3  3 11 ]            [ 1 -3 -1 ]            [ 4 -3 -1 ]

(iv)  [ 0 1 -3 -1 ]       (v)   [ 1 2 3 1 ]      (vi)  [ 3 -2  0 -1 -7 ]
      [ 1 0  1  1 ]             [ 2 4 6 2 ]            [ 0  2  2  1 -5 ]
      [ 3 1  0  2 ]             [ 1 2 3 2 ]            [ 1 -2 -3 -2  1 ]
      [ 1 1 -2  0 ]   (4 × 4)             (3 × 4)      [ 0  1  2  1  6 ]   (4 × 5)

2. Find non-singular matrices P and Q such that PAQ is in the normal form for A. Hence, find the rank of A.

(i)   A = [ 3  2 -1   5 ]      (ii)  A = [ 1 -1  2 -1 ]
          [ 5  1  4  -2 ]                [ 4  2 -1  2 ]
          [ 1 -4 11 -19 ]                [ 2  2 -2  0 ]

(iii) A = [ 1  1  2 ]          (iv)  A = [ 1  2 3 -2 ]
          [ 1  2  3 ]                    [ 2 -2 1  3 ]
          [ 0 -1 -1 ]                    [ 3  0 4  1 ]

3. Reduce the matrix A = [ 9 7 3 6; 5 -1 4 1; 6 8 2 4 ] to normal form and find its rank.

4. Reduce the matrix A = [ 0 1 -3 -1; 1 0 1 1; 3 1 0 2; 1 1 -2 0 ] to normal form and find its rank.

Answers

1. (i) 3  (ii) 2  (iii) 2  (iv) 2  (v) 2  (vi) 4

2. (i) PAQ = [ I2 0; 0 0 ], where

       [  0    0    1  ]        [ 1  4/7  -9/7   9/7 ]
   P = [  0   1/3 -5/3 ],   Q = [ 0  1/7  17/7 -31/7 ],   r(A) = 2
       [ 1/2 -1/3  1/6 ]        [ 0   0     1     0  ]
                                [ 0   0     0     1  ]

   (ii)
       [   1    0    0  ]        [ 1 1  0 -1/2 ]
   P = [ -2/3  1/6   0  ],   Q = [ 0 1 -1  3/2 ],   r(A) = 3
       [ -1/3  1/3 -1/2 ]        [ 0 0  0   1  ]
                                 [ 0 0  1   0  ]

   (iii)
       [  1 0 0 ]        [ 1 -1 -1 ]
   P = [ -1 1 0 ],   Q = [ 0  1 -1 ],   r(A) = 2
       [ -1 1 1 ]        [ 0  0  1 ]

   (iv)
       [  1  0 0 ]        [ 1  1/3  -4/3 -1/3 ]
   P = [ -2  1 0 ],   Q = [ 0 -1/6  -5/6  7/6 ],   r(A) = 2
       [ -1 -1 1 ]        [ 0   0     1    0  ]
                          [ 0   0     0    1  ]

3. r(A) = 3
4. r(A) = 2
1.15 LINEAR SYSTEMS OF EQUATIONS

Matrices play a very important role in the solution of linear systems of equations, which appear frequently as models of various problems, for instance, in electrical networks, traffic flow, production and consumption, assignment of jobs to workers, population growth, statistics, and many others. In this section we shall study the nature of the solutions of linear systems of equations. We first consider systems of homogeneous linear equations and then proceed to discuss systems of non-homogeneous linear equations.

1.16 HOMOGENEOUS SYSTEMS OF LINEAR EQUATIONS

Consider

a11 x1 + a12 x2 + ... + a1n xn = 0
a21 x1 + a22 x2 + ... + a2n xn = 0
  .         .                .
am1 x1 + am2 x2 + ... + amn xn = 0      (5)

a system of m homogeneous equations in n unknowns x1, x2, x3, ..., xn. Let

    [ a11 a12 ... a1n ]            [ x1 ]            [ 0 ]
A = [ a21 a22 ... a2n ]   ,    X = [ x2 ]   ,    O = [ 0 ]
    [  .   .       .  ]            [  . ]            [ . ]
    [ am1 am2 ... amn ] (m×n)      [ xn ] (n×1)      [ 0 ] (m×1)

Then, the system (5) can be written in the matrix form

AX = O      (6)

The matrix A is called the coefficient matrix of the system (5). The system (5) has only the trivial (zero) solution if the rank of the coefficient matrix A, read off from its echelon form, is equal to the number of unknown variables n, i.e., r(A) = n. The system (5) has infinitely many solutions if the rank of the coefficient matrix is less than the number of unknown variables, i.e., r(A) < n.

Remark I: The number of linearly independent solutions of m homogeneous linear equations in n variables, AX = O, is (n - r), where r is the rank of the matrix A.

Remark II: A homogeneous linear system with fewer equations than unknowns always has non-trivial solutions.
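As an illustrative sketch added here (not from the book), the rank criterion and Remark I can be checked with NumPy: `matrix_rank` gives r, and the rows of Vh from the singular value decomposition that correspond to zero singular values span the solution space of AX = O; the particular singular matrix below is this sketch's choice.

```python
import numpy as np

# A singular 3x3 coefficient matrix, so r(A) < n and non-trivial solutions exist.
A = np.array([[1,   3, -2],
              [2,  -1,  4],
              [1, -11, 14]], dtype=float)

n = A.shape[1]
r = np.linalg.matrix_rank(A)
assert r == 2 and r < n            # infinitely many solutions

_, _, Vh = np.linalg.svd(A)
null_basis = Vh[r:]                # n - r = 1 independent solution, as in Remark I
for x in null_basis:
    assert np.allclose(A @ x, 0)   # each basis vector satisfies AX = O
```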
1.17 SYSTEMS OF LINEAR NON-HOMOGENEOUS EQUATIONS

Suppose a system of m non-homogeneous linear equations in n unknown variables is of the form

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
  .         .                .
am1 x1 + am2 x2 + ... + amn xn = bm      (7)

If we write

    [ a11 a12 ... a1n ]            [ x1 ]            [ b1 ]
A = [ a21 a22 ... a2n ]   ,    X = [ x2 ]   ,    B = [ b2 ]
    [  .   .       .  ]            [  . ]            [  . ]
    [ am1 am2 ... amn ] (m×n)      [ xn ] (n×1)      [ bm ] (m×1)

then the above system (7) can be written in the form of the single matrix equation AX = B. The matrix

        [ a11 a12 ... a1n  b1 ]
[A B] = [ a21 a22 ... a2n  b2 ]
        [  .   .       .    . ]
        [ am1 am2 ... amn  bm ]

is called the augmented matrix of the given system of equations. Any set of values which simultaneously satisfies all these equations is called a solution of the system (7). When the system of equations has one or more solutions, it is said to be consistent; otherwise it is inconsistent.

1.18 CONDITION FOR CONSISTENCY

Theorem: The system of equations AX = B is consistent, i.e., possesses a solution, if and only if the coefficient matrix A and the augmented matrix [A B] are of the same rank. Two cases arise.

Case 1  If the rank of the coefficient matrix and the rank of the augmented matrix are equal to the number of unknown variables, i.e., r(A) = r(A : B) = n, then the system has a unique solution.

Case 2  If the rank of the coefficient matrix and the rank of the augmented matrix are equal but less than the number of unknown variables, i.e., r(A) = r(A : B) < n, then the given system has infinitely many solutions.

1.19 CONDITION FOR INCONSISTENCY

The system of equations AX = B is inconsistent, i.e., possesses no solution, if the rank of the coefficient matrix A is not equal to the rank of the augmented matrix [A B], i.e., r(A) ≠ r[A B].

Example 17  Show that the system of equations x + y + z = 6, x + 2y + 3z = 14, x + 4y + 7z = 30 is consistent, and solve it.
Solution  The given system of equations can be written in the matrix form AX = B, i.e.,

[ 1 1 1 ] [ x ]   [  6 ]
[ 1 2 3 ] [ y ] = [ 14 ]
[ 1 4 7 ] [ z ]   [ 30 ]

Now, the augmented matrix is

        [ 1 1 1  6 ]
[A B] = [ 1 2 3 14 ]
        [ 1 4 7 30 ]

R2 → R2 - R1, R3 → R3 - R1:
[ 1 1 1  6 ]
[ 0 1 2  8 ]
[ 0 3 6 24 ]

R3 → R3 - 3R2:
[ 1 1 1 6 ]
[ 0 1 2 8 ]
[ 0 0 0 0 ]

This is the echelon form of the matrix [A | B]. Here, r[A | B] = number of non-zero rows = 2, and r(A) = 2. Thus r[A | B] = 2 = r(A); hence, the system is consistent. Now, the number of unknown variables is 3. Since r(A) < 3, the given system has an infinite number of solutions. The given system of equations is equivalent to the matrix equation

[ 1 1 1 ] [ x ]   [ 6 ]
[ 0 1 2 ] [ y ] = [ 8 ]
[ 0 0 0 ] [ z ]   [ 0 ]

or x + y + z = 6 and y + 2z = 8. Let z = k, so that y = 8 - 2k and x = 6 - (8 - 2k) - k = k - 2, where k is an arbitrary constant.

Example 18  Investigate for what values of λ and μ the simultaneous equations x + y + z = 6, x + 2y + 3z = 10, and x + 2y + λz = μ have (i) no solution, (ii) a unique solution, and (iii) an infinite number of solutions.

Solution  The given system can be written in the matrix form AX = B, i.e.,

[ 1 1 1 ] [ x ]   [  6 ]
[ 1 2 3 ] [ y ] = [ 10 ]
[ 1 2 λ ] [ z ]   [  μ ]

The augmented matrix is

        [ 1 1 1  6 ]
[A B] = [ 1 2 3 10 ]
        [ 1 2 λ  μ ]

R2 → R2 - R1, R3 → R3 - R1:
[ 1 1  1    6  ]
[ 0 1  2    4  ]
[ 0 1 λ-1  μ-6 ]

R3 → R3 - R2:
[ 1 1  1     6  ]
[ 0 1  2     4  ]
[ 0 0 λ-3  μ-10 ]

(i) If λ = 3 and μ ≠ 10 then r(A | B) = 3 and r(A) = 2. Thus r(A | B) ≠ r(A); the given system is inconsistent. Hence, the system has no solution.
(ii) If λ ≠ 3 (μ may have any value) then r(A | B) = r(A) = 3 = number of unknown variables. Hence, the system has a unique solution.
(iii) If λ = 3 and μ = 10 then r(A | B) = 2 = r(A). In this case, the system is consistent. Here, the number of unknown variables is 3.
Since r(A | B) = r(A) < 3, the system of equations possesses an infinite number of solutions.

Example 19  Solve the following system of linear equations: x + 2y − z = 3, 3x − y + 2z = 1, 2x − 2y + 3z = 2, and x − y + z = −1.

Solution  The given system of equations can be written in the matrix form AX = B, i.e.,

  [1  2 −1]       [ 3]
  [3 −1  2] [x]   [ 1]
  [2 −2  3] [y] = [ 2]
  [1 −1  1] [z]   [−1]

The augmented matrix is

  [A | B] = [1  2 −1 |  3]
            [3 −1  2 |  1]
            [2 −2  3 |  2]
            [1 −1  1 | −1]

R2 → R2 − 3R1, R3 → R3 − 2R1, R4 → R4 − R1:

  [1  2 −1 |  3]
  [0 −7  5 | −8]
  [0 −6  5 | −4]
  [0 −3  2 | −4]

R2 → R2 − R3:

  [1  2 −1 |  3]
  [0 −1  0 | −4]
  [0 −6  5 | −4]
  [0 −3  2 | −4]

R3 → R3 − 6R2, R4 → R4 − 3R2:

  [1  2 −1 |  3]
  [0 −1  0 | −4]
  [0  0  5 | 20]
  [0  0  2 |  8]

R3 → (1/5)R3, R4 → (1/2)R4:

  [1  2 −1 |  3]
  [0 −1  0 | −4]
  [0  0  1 |  4]
  [0  0  1 |  4]

R4 → R4 − R3:

  [1  2 −1 |  3]
  [0 −1  0 | −4]
  [0  0  1 |  4]
  [0  0  0 |  0]

The augmented matrix [A | B] has been reduced to echelon form, and r[A | B] = number of non-zero rows in the echelon form = 3. Also r(A) = 3, so r[A | B] = 3 = r(A) = number of unknowns. Hence, the given system of equations has a unique solution. The reduced system is

  x + 2y − z = 3,  −y = −4,  z = 4

so that y = 4, z = 4, x = −1.

Example 20  Discuss, for all values of K, the existence and nature of solutions of the system of equations x + y + 4z = 6, x + 2y − 2z = 6, Kx + y + z = 6.

Solution  The given system of equations can be written in the matrix form AX = B:

  [1 1  4] [x]   [6]
  [1 2 −2] [y] = [6]
  [K 1  1] [z]   [6]

The given set of equations will have a unique solution if and only if the coefficient matrix A is non-singular.
R2 → R2 − R1, R3 → R3 − KR1:

  [1   1       4   ] [x]   [   6  ]
  [0   1      −6   ] [y] = [   0  ]
  [0 1 − K  1 − 4K ] [z]   [6 − 6K]

R3 → R3 − (1 − K)R2:

  [1 1     4    ] [x]   [   6  ]
  [0 1    −6    ] [y] = [   0  ]        (1)
  [0 0  7 − 10K ] [z]   [6 − 6K]

The coefficient matrix A in (1) is non-singular iff 7 − 10K ≠ 0, i.e., K ≠ 7/10. Hence, the given system of equations has a unique solution if K ≠ 7/10. In case K = 7/10, (1) becomes

  [1 1  4] [x]   [  6  ]
  [0 1 −6] [y] = [  0  ]
  [0 0  0] [z]   [18/10]

The above system is not consistent if K = 7/10.

Example 21  Solve x + 3y − 2z = 0, 2x − y + 4z = 0, and x − 11y + 14z = 0.

Solution  The given system of equations can be written in the matrix form AX = 0:

  [1   3 −2] [x]   [0]
  [2  −1  4] [y] = [0]        (1)
  [1 −11 14] [z]   [0]

Here,

  |A| = | 1   3 −2 |
        | 2  −1  4 | = 30 − 72 + 42 = 0
        | 1 −11 14 |

so the matrix A is singular, i.e., r(A) < n. Thus, the given system has a nontrivial solution and will have an infinite number of solutions.

R2 → R2 − 2R1, R3 → R3 − R1:

  [1   3 −2] [x]   [0]
  [0  −7  8] [y] = [0]
  [0 −14 16] [z]   [0]

R3 → R3 − 2R2:

  [1  3 −2] [x]   [0]
  [0 −7  8] [y] = [0]
  [0  0  0] [z]   [0]

and so we have

  x + 3y − 2z = 0,  −7y + 8z = 0

Let z = K; then y = (8/7)K and x = −(10/7)K, where K is any arbitrary constant. Thus, x = −(10/7)K, y = (8/7)K, z = K, and the system has an infinite number of solutions.

1.20 CHARACTERISTIC ROOTS AND VECTORS (OR EIGENVALUES AND EIGENVECTORS)

Let A be a square matrix of order n, let λ be a scalar, and let X = [x1, x2, …, xn]T be a column vector. Consider the equation

  AX = λX        (8)

Clearly, X = 0 is a solution of (8) for any value of λ. Now, let us see whether there exist scalars λ and non-zero vectors X which satisfy (8).
This problem is known as the characteristic value problem. If In is the unit matrix of order n, then (8) may be written in the form

  (A − λIn)X = 0        (9)

Equation (9) is the matrix form of a system of n homogeneous linear equations in n unknowns. This system will have a nontrivial solution if and only if the determinant of the coefficient matrix A − λIn vanishes, i.e., if

  |A − λIn| = | a11 − λ   a12    …    a1n   |
              |  a21    a22 − λ  …    a2n   |
              |   …                         |
              |  an1      an2    …  ann − λ | = 0

The expansion of this determinant yields a polynomial of degree n in λ, called the characteristic polynomial of the matrix A. The equation |A − λIn| = 0 is called the characteristic equation or secular equation of the matrix A.

The n roots of the characteristic equation of a matrix A of order n are called the characteristic roots, characteristic values, proper values, eigenvalues, or latent roots of the matrix A. The set of the eigenvalues of a matrix A is called the spectrum of A.

If λ is a characteristic root of an n × n matrix A, then a non-zero vector X such that AX = λX is called a characteristic vector, eigenvector, proper vector, or latent vector of the matrix A corresponding to the characteristic root λ.

1.21 SOME IMPORTANT THEOREMS ON CHARACTERISTIC ROOTS AND CHARACTERISTIC VECTORS

Theorem 2  λ is a characteristic root of a matrix A if and only if there exists a non-zero vector X such that AX = λX.

Theorem 3  If X is a characteristic vector of A, then X cannot correspond to more than one characteristic value of A.

Theorem 4  The characteristic vectors corresponding to distinct characteristic roots of a matrix are linearly independent.

Theorem 5  The characteristic roots of a Hermitian matrix are real.

1.22 NATURE OF THE CHARACTERISTIC ROOTS

(i) The characteristic roots of a real symmetric matrix are all real.
(ii) The characteristic roots of a skew-Hermitian matrix are either purely imaginary or zero.
(iii) The characteristic roots of a real skew-symmetric matrix are either purely imaginary or zero, for every such matrix is skew-Hermitian.
(iv) The characteristic roots of a unitary matrix are of unit modulus.
(v) The characteristic roots of an orthogonal matrix are of unit modulus.
(vi) The sum of the eigenvalues of a matrix A is equal to the trace of the matrix, i.e., to the sum of the elements of the principal diagonal.
(vii) The product of the eigenvalues of A is equal to the determinant of A.
(viii) If λ1, λ2, …, λn are the eigenvalues of A, then the eigenvalues of
  (a) A^k are λ1^k, λ2^k, …, λn^k, and
  (b) A^(−1) are 1/λ1, 1/λ2, …, 1/λn.

Example 22  Find the characteristic roots and the corresponding characteristic vectors of the matrix

  A = [ 8 −6  2]
      [−6  7 −4]
      [ 2 −4  3]

Solution  The characteristic equation of the matrix A is |A − λI| = 0, i.e.,

  | 8 − λ   −6      2  |
  |  −6    7 − λ   −4  | = 0
  |   2     −4    3 − λ |

or (8 − λ){(7 − λ)(3 − λ) − 16} + 6{−6(3 − λ) + 8} + 2{24 − 2(7 − λ)} = 0
or λ³ − 18λ² + 45λ = 0, i.e., λ(λ − 3)(λ − 15) = 0, so λ = 0, 3, 15.

The characteristic roots of A are 0, 3, 15.

The eigenvector of A corresponding to the eigenvalue 0 is given by (A − 0I)X = O, i.e.,

  [ 8 −6  2] [x1]   [0]
  [−6  7 −4] [x2] = [0]
  [ 2 −4  3] [x3]   [0]

By R1 ↔ R3:

  [ 2 −4  3] [x1]   [0]
  [−6  7 −4] [x2] = [0]
  [ 8 −6  2] [x3]   [0]

By R2 → R2 + 3R1, R3 → R3 − 4R1:

  [2 −4   3] [x1]   [0]
  [0 −5   5] [x2] = [0]
  [0 10 −10] [x3]   [0]

By R3 → R3 + 2R2:

  [2 −4 3] [x1]   [0]
  [0 −5 5] [x2] = [0]
  [0  0 0] [x3]   [0]

The coefficient matrix is of rank 2. Therefore, these equations have n − r = 3 − 2 = 1 linearly independent solution. The above equations are

  2x1 − 4x2 + 3x3 = 0,  −5x2 + 5x3 = 0

From the last equation, we get x2 = x3. Choose x2 = k, x3 = k; then the first equation gives x1 = k/2, where k is any scalar.
Therefore X1 = k[1/2, 1, 1]′ = k[1, 2, 2]′ is an eigenvector of A corresponding to the eigenvalue 0.

The eigenvectors of A corresponding to the eigenvalue 3 are given by (A − 3I)X = 0, i.e.,

  [ 5 −6  2] [x1]   [0]
  [−6  4 −4] [x2] = [0]
  [ 2 −4  0] [x3]   [0]

By R1 → R1 + R2:

  [−1 −2 −2] [x1]   [0]
  [−6  4 −4] [x2] = [0]
  [ 2 −4  0] [x3]   [0]

By R2 → R2 − 6R1, R3 → R3 + 2R1:

  [−1 −2 −2] [x1]   [0]
  [ 0 16  8] [x2] = [0]
  [ 0 −8 −4] [x3]   [0]

By R3 → R3 + (1/2)R2:

  [−1 −2 −2] [x1]   [0]
  [ 0 16  8] [x2] = [0]
  [ 0  0  0] [x3]   [0]

The coefficient matrix of these equations is of rank 2. Therefore, these equations have n − r = 3 − 2 = 1 linearly independent solution. The above equations are

  −x1 − 2x2 − 2x3 = 0,  16x2 + 8x3 = 0

From the last equation, we get x2 = −(1/2)x3. Choose x3 = 4k, x2 = −2k; then from the first equation, x1 = −4k, where k is any scalar. Hence X2 = k[−4, −2, 4]′ is an eigenvector of A corresponding to the eigenvalue 3.

Now, the eigenvector of A corresponding to the eigenvalue 15 is given by (A − 15I)X = 0, i.e.,

  [−7 −6   2] [x1]   [0]
  [−6 −8  −4] [x2] = [0]
  [ 2 −4 −12] [x3]   [0]

By R1 → R1 − R2:

  [−1  2   6] [x1]   [0]
  [−6 −8  −4] [x2] = [0]
  [ 2 −4 −12] [x3]   [0]

By R2 → R2 − 6R1, R3 → R3 + 2R1:

  [−1   2   6] [x1]   [0]
  [ 0 −20 −40] [x2] = [0]
  [ 0   0   0] [x3]   [0]

The coefficient matrix of these equations is of rank 2, so these equations have n − r = 3 − 2 = 1 linearly independent solution. The above equations are

  −x1 + 2x2 + 6x3 = 0,  −20x2 − 40x3 = 0

From the last equation, we get x2 = −2x3. Choose x3 = k, x2 = −2k; then from the first equation, x1 = 2x2 + 6x3 = 2k, where k is any scalar. Hence X3 = k[2, −2, 1]′ is an eigenvector of A corresponding to the eigenvalue 15.
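The three eigenpairs of Example 22 can be spot-checked numerically. A sketch, assuming NumPy is available (`eigh` applies because A is symmetric):

```python
import numpy as np

A = np.array([[8.0, -6, 2],
              [-6, 7, -4],
              [2, -4, 3]])

# eigh returns the eigenvalues of a symmetric matrix in ascending order.
w, V = np.linalg.eigh(A)
assert np.allclose(w, [0, 3, 15])

# Check AX = lambda * X for the hand-computed eigenvectors.
for lam, x in [(0, [1, 2, 2]), (3, [-4, -2, 4]), (15, [2, -2, 1])]:
    x = np.array(x, dtype=float)
    assert np.allclose(A @ x, lam * x)
```

The eigenvectors returned by `eigh` are normalised, so they agree with the hand-computed ones only up to a scalar multiple, exactly as the arbitrary constant k in the text indicates.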
Method 1

Example 23  Find the eigenvalues and the corresponding eigenvectors of the matrix

  A = [ 6 −2  2]
      [−2  3 −1]
      [ 2 −1  3]

Solution  The characteristic equation of A is |A − λI| = 0, i.e.,

  | 6 − λ   −2      2  |
  |  −2    3 − λ   −1  | = 0
  |   2     −1    3 − λ |

By C3 → C3 + C2:

  | 6 − λ   −2      0   |
  |  −2    3 − λ  2 − λ | = 0
  |   2     −1    2 − λ |

or

  (2 − λ) | 6 − λ   −2    0 |
          |  −2    3 − λ  1 | = 0
          |   2     −1    1 |

By R2 → R2 − R3:

  (2 − λ) | 6 − λ   −2    0 |
          |  −4    4 − λ  0 | = 0
          |   2     −1    1 |

or (2 − λ)[(6 − λ)(4 − λ) − 8] = 0, i.e., (2 − λ)(λ − 2)(λ − 8) = 0, so λ = 2, 2, 8.

The characteristic roots of A are 2, 2, 8.

Now, the eigenvectors of the matrix A corresponding to the eigenvalue 2 are given by the non-zero solutions of the equation (A − 2I)X = 0, i.e.,

  [ 4 −2  2] [x1]   [0]
  [−2  1 −1] [x2] = [0]
  [ 2 −1  1] [x3]   [0]

By R1 ↔ R2:

  [−2  1 −1] [x1]   [0]
  [ 4 −2  2] [x2] = [0]
  [ 2 −1  1] [x3]   [0]

By R2 → R2 + 2R1, R3 → R3 + R1:

  [−2 1 −1] [x1]   [0]
  [ 0 0  0] [x2] = [0]
  [ 0 0  0] [x3]   [0]

The coefficient matrix of these equations is of rank 1, so there are n − r = 3 − 1 = 2 linearly independent solutions. The single equation is

  −2x1 + x2 − x3 = 0

Clearly, X1 = [−1, 0, 2]′ and X2 = [1, 2, 0]′ are two linearly independent solutions, so X1 and X2 are two linearly independent eigenvectors of A corresponding to the eigenvalue 2.

The eigenvectors of A corresponding to the eigenvalue 8 are given by the non-zero solutions of the equation (A − 8I)X = 0, i.e.,

  [−2 −2  2] [x1]   [0]
  [−2 −5 −1] [x2] = [0]
  [ 2 −1 −5] [x3]   [0]

By R2 → R2 − R1, R3 → R3 + R1:

  [−2 −2  2] [x1]   [0]
  [ 0 −3 −3] [x2] = [0]
  [ 0 −3 −3] [x3]   [0]

By R3 → R3 − R2:

  [−2 −2  2] [x1]   [0]
  [ 0 −3 −3] [x2] = [0]
  [ 0  0  0] [x3]   [0]

The coefficient matrix of these equations is of rank 2, so there are n − r = 3 − 2 = 1 linearly independent solutions.
The above equations are

  −2x1 − 2x2 + 2x3 = 0,  −3x2 − 3x3 = 0

The last equation gives x2 = −x3. Choose x3 = 1, x2 = −1; then the first equation gives x1 = 2. Hence, X3 = [2, −1, 1]′ is an eigenvector of A corresponding to the eigenvalue 8.

Method 2

Example 24  Find the eigenvalues and eigenvectors of the given matrix A, where

  A = [ 2 −1  1]
      [−1  2 −1]
      [ 1 −1  2]

Solution  The characteristic equation for A is |A − λI| = 0, i.e.,

  | 2 − λ   −1      1  |
  |  −1    2 − λ   −1  | = 0
  |   1     −1    2 − λ |

or (2 − λ)[(2 − λ)² − 1] + 1[−2 + λ + 1] + 1[1 − 2 + λ] = 0
or λ³ − 6λ² + 9λ − 4 = 0, giving λ = 1, 1, 4.

The eigenvalues are 1, 1, 4.

Let X1 = (x1, x2, x3)T be the eigenvector corresponding to the eigenvalue λ = 4. Then [A − 4I]X1 = 0, i.e.,

  [−2 −1  1] [x1]   [0]
  [−1 −2 −1] [x2] = [0]
  [ 1 −1 −2] [x3]   [0]

By R2 → R2 − (1/2)R1, R3 → R3 + (1/2)R1:

  [−2   −1     1  ] [x1]   [0]
  [ 0 −3/2  −3/2 ] [x2] = [0]
  [ 0 −3/2  −3/2 ] [x3]   [0]

By R3 → R3 − R2:

  [−2   −1     1  ] [x1]   [0]
  [ 0 −3/2  −3/2 ] [x2] = [0]
  [ 0    0     0  ] [x3]   [0]

so that

  −2x1 − x2 + x3 = 0, i.e., 2x1 + x2 − x3 = 0
  −(3/2)x2 − (3/2)x3 = 0, i.e., x2 + x3 = 0

Let x3 = k1, x2 = −k1; then x1 = k1. Hence X1 = k1[1, −1, 1]T, or X1 = [1, −1, 1]T.

Let X2 = [x1, x2, x3]T be an eigenvector corresponding to the eigenvalue λ = 1. Then [A − I]X2 = 0, i.e.,

  [ 1 −1  1] [x1]   [0]
  [−1  1 −1] [x2] = [0]
  [ 1 −1  1] [x3]   [0]

By R2 → R2 + R1, R3 → R3 − R1:

  [1 −1 1] [x1]   [0]
  [0  0 0] [x2] = [0]
  [0  0 0] [x3]   [0]

or x1 − x2 + x3 = 0. Let x1 = k1 and x2 = k2; then x3 = k2 − k1, so X2 = [k1, k2, k2 − k1]T. Putting k1 = 1 and k2 = 1 gives X2 = [1, 1, 0]T.

The given matrix A is symmetric with repeated eigenvalues, so we determine the third eigenvector X3 = [l, m, n]T by orthogonality.
Since the given matrix is symmetric, the vector X3 is orthogonal to X1, i.e., X1TX3 = 0, or

  [1, −1, 1] [l]
             [m] = 0,  i.e.,  l − m + n = 0        (1)
             [n]

Also, X3 is orthogonal to X2, i.e., X2TX3 = 0, or

  [1, 1, 0] [l]
            [m] = 0,  i.e.,  l + m = 0        (2)
            [n]

Solving (1) and (2), we get l = −1, m = 1, and n = 2, so X3 = [−1, 1, 2]T.

Thus, the eigenvectors X1, X2, X3 corresponding to the eigenvalues λ = 4, 1, 1 are given by

  X1 = [1, −1, 1]T,  X2 = [1, 1, 0]T,  X3 = [−1, 1, 2]T

1.23 THE CAYLEY–HAMILTON THEOREM

Statement  Every square matrix satisfies its own characteristic equation; i.e., if for a square matrix A of order n

  |A − λIn| = (−1)ⁿ[λⁿ + a1λⁿ⁻¹ + a2λⁿ⁻² + … + an]

then the matrix equation

  Xⁿ + a1Xⁿ⁻¹ + a2Xⁿ⁻² + … + anIn = 0

is satisfied by X = A, i.e.,

  Aⁿ + a1Aⁿ⁻¹ + a2Aⁿ⁻² + … + anIn = 0

Proof  The characteristic matrix of A is A − λIn. Since the elements of A − λIn are at most of the first degree in λ, the elements of Adj(A − λIn) are ordinary polynomials in λ of degree (n − 1) or less. Therefore, Adj(A − λIn) can be written as a matrix polynomial in λ, given by

  Adj(A − λIn) = B0λⁿ⁻¹ + B1λⁿ⁻² + … + Bn−2λ + Bn−1

where B0, B1, B2, …, Bn−2, Bn−1 are matrices of order n × n whose elements are functions of the aij's.

Now, (A − λIn) Adj(A − λIn) = |A − λIn|·In  [since A Adj A = |A|·In], so

  (A − λIn)(B0λⁿ⁻¹ + B1λⁿ⁻² + … + Bn−2λ + Bn−1) = (−1)ⁿ[λⁿ + a1λⁿ⁻¹ + … + an]In

Equating the coefficients of like powers of λ on both sides, we get

  −InB0 = (−1)ⁿIn
  AB0 − InB1 = (−1)ⁿa1In
  …
  ABn−1 = (−1)ⁿanIn

Multiplying these successively by Aⁿ, Aⁿ⁻¹, …, In and adding, we get

  0 = (−1)ⁿ[Aⁿ + a1Aⁿ⁻¹ + a2Aⁿ⁻² + … + anIn]

Thus, Aⁿ + a1Aⁿ⁻¹ + a2Aⁿ⁻² + … + anIn = 0.

Corollary  If the matrix A is non-singular, i.e., |A| ≠ 0, then, since |A| = (−1)ⁿan, we have an ≠ 0, and

  A⁻¹ = (−1/an)[Aⁿ⁻¹ + a1Aⁿ⁻² + … + an−1In]

Example 25  Verify the Cayley–Hamilton theorem for the matrix

  A = [ 2 −1  1]
      [−1  2 −1]
      [ 1 −1  2]

and, hence, find A⁻¹.
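Before Example 25 is worked by hand, the theorem and the inverse formula it yields can be checked numerically for the same matrix. A sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[2.0, -1, 1],
              [-1, 2, -1],
              [1, -1, 2]])
I = np.eye(3)

# The characteristic equation of A is l^3 - 6 l^2 + 9 l - 4 = 0;
# by Cayley-Hamilton, A must satisfy it.
lhs = A @ A @ A - 6 * (A @ A) + 9 * A - 4 * I
assert np.allclose(lhs, 0)

# Premultiplying by A^-1 gives A^-1 = (A^2 - 6A + 9I) / 4.
A_inv = (A @ A - 6 * A + 9 * I) / 4
assert np.allclose(A @ A_inv, I)
```

The constant-term trick used in `A_inv` is exactly the corollary above: when a_n ≠ 0, the characteristic equation can be solved for the identity term, yielding the inverse as a polynomial in A.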
Solution  The characteristic equation of the matrix A is |A − λI| = 0, i.e.,

  | 2 − λ   −1      1  |
  |  −1    2 − λ   −1  | = 0
  |   1     −1    2 − λ |

or −λ³ + 6λ² − 9λ + 4 = 0, i.e., λ³ − 6λ² + 9λ − 4 = 0.

To verify the Cayley–Hamilton theorem, we have to show that

  A³ − 6A² + 9A − 4I = 0        (1)

We have

  I = [1 0 0]    A = [ 2 −1  1]
      [0 1 0]        [−1  2 −1]
      [0 0 1]        [ 1 −1  2]

  A² = A·A = [ 6 −5  5]    A³ = A²·A = [ 22 −21  21]
             [−5  6 −5]                [−21  22 −21]
             [ 5 −5  6]                [ 21 −21  22]

Now,

  A³ − 6A² + 9A − 4I = [ 22 −21  21]     [ 6 −5  5]     [ 2 −1  1]     [1 0 0]   [0 0 0]
                       [−21  22 −21] − 6 [−5  6 −5] + 9 [−1  2 −1] − 4 [0 1 0] = [0 0 0] = O
                       [ 21 −21  22]     [ 5 −5  6]     [ 1 −1  2]     [0 0 1]   [0 0 0]

Hence, the theorem is verified.

Further, premultiplying (1) by A⁻¹, we get A² − 6A + 9I − 4A⁻¹ = 0, or

  A⁻¹ = (1/4)[A² − 6A + 9I] = (1/4) [ 3 1 −1]
                                    [ 1 3  1]
                                    [−1 1  3]

Example 26  Find the eigenvalues of the matrix A = [1 4; 2 3] and verify the Cayley–Hamilton theorem for the matrix A. Find the inverse of the matrix A and also express A⁵ − 4A⁴ − 7A³ + 11A² − A − 10I as a linear polynomial in A.

Solution  The characteristic equation of the matrix A is |A − λI| = 0, i.e.,

  | 1 − λ    4   |
  |   2    3 − λ | = 0,  or λ² − 4λ − 5 = 0        (1)

or (λ − 5)(λ + 1) = 0, so λ = 5, −1. Thus, the eigenvalues of A are 5 and −1.

By the Cayley–Hamilton theorem, the matrix A must satisfy its characteristic equation (1):

  A² − 4A − 5I = 0        (2)

We have

  I = [1 0],  A = [1 4],  A² = A·A = [9 16]
      [0 1]       [2 3]              [8 17]

Now,

  A² − 4A − 5I = [9 16] − 4 [1 4] − 5 [1 0] = [0 0] = O
                 [8 17]     [2 3]     [0 1]   [0 0]

Hence, the theorem is verified.

Premultiplying (2) by A⁻¹, we get A − 4I − 5A⁻¹ = 0, or

  A⁻¹ = (1/5)[A − 4I] = (1/5) [−3  4]
                              [ 2 −1]

The characteristic equation of A is λ² − 4λ − 5 = 0.
Dividing the polynomial λ⁵ − 4λ⁴ − 7λ³ + 11λ² − λ − 10 by the polynomial λ² − 4λ − 5, we get

  λ⁵ − 4λ⁴ − 7λ³ + 11λ² − λ − 10 = (λ² − 4λ − 5)(λ³ − 2λ + 3) + λ + 5

∴ A⁵ − 4A⁴ − 7A³ + 11A² − A − 10I = (A² − 4A − 5I)(A³ − 2A + 3I) + A + 5I = A + 5I  [since A² − 4A − 5I = 0]

which is a linear polynomial in A.

1.24 SIMILARITY OF MATRICES

Let A and B be two square matrices of order n. Then B is said to be similar to A if there exists a non-singular matrix P such that

  B = P⁻¹AP

If B is similar to A, then

  |B| = |P⁻¹AP| = |P⁻¹||A||P| = |P⁻¹||P||A| = |P⁻¹P||A| = |I||A| = |A|

Thus, similar matrices have the same determinant.

1.25 DIAGONALIZABLE MATRIX

A matrix A is said to be diagonalizable if it is similar to a diagonal matrix. Thus, the matrix A is diagonalizable if there exists an invertible matrix P such that D = P⁻¹AP, where D is a diagonal matrix.

Theorem 6  A matrix of order n is diagonalizable if and only if it possesses n linearly independent eigenvectors.

Proof  Suppose first that A is diagonalizable. Then A is similar to a diagonal matrix D = diag[λ1, λ2, …, λn], so there exists an invertible matrix P = [X1, X2, …, Xn] such that P⁻¹AP = D, i.e., AP = PD, or

  A[X1, X2, …, Xn] = [X1, X2, …, Xn] diag[λ1, λ2, …, λn]
  [AX1, AX2, …, AXn] = [λ1X1, λ2X2, …, λnXn]

Hence AX1 = λ1X1, AX2 = λ2X2, …, AXn = λnXn. Thus, X1, X2, …, Xn are eigenvectors of A corresponding to the eigenvalues λ1, λ2, …, λn respectively. Since the matrix P is non-singular, its column vectors X1, X2, …, Xn are linearly independent. Hence, A has n linearly independent eigenvectors.

Conversely, suppose that A possesses n linearly independent eigenvectors X1, X2, …, Xn, and let λ1, λ2, …, λn be the corresponding eigenvalues, so that

  AX1 = λ1X1, AX2 = λ2X2, …, AXn = λnXn

Let P = [X1, X2, …, Xn] and D = diag[λ1, λ2, …, λn].
Then

  AP = A[X1, X2, …, Xn] = [AX1, AX2, …, AXn] = [λ1X1, λ2X2, …, λnXn]
     = [X1, X2, …, Xn] diag[λ1, λ2, …, λn] = PD

Since the column vectors X1, X2, …, Xn of the matrix P are linearly independent, P is invertible and P⁻¹ exists.

∴ AP = PD ⟹ P⁻¹AP = P⁻¹PD ⟹ P⁻¹AP = D  [since P⁻¹P = I]
⟹ A is similar to D ⟹ A is diagonalizable.

Theorem 7  If the eigenvalues of an n × n matrix are all distinct, then it is always similar to a diagonal matrix.

Proof  Let A be a square matrix of order n, and let A have n distinct eigenvalues λ1, λ2, …, λn. We know that eigenvectors of a matrix corresponding to distinct eigenvalues are linearly independent. Therefore A has n linearly independent eigenvectors, and so it is similar to a diagonal matrix D = diag[λ1, λ2, …, λn].

Corollary  Two matrices of order n with the same set of n distinct eigenvalues are similar.

Example 27  Show that the matrix

  A = [ −9 4 4]
      [ −8 3 4]
      [−16 8 7]

is diagonalizable. Also, find the diagonal form and a diagonalizing matrix P.

Solution  The characteristic equation of the matrix A is |A − λI| = 0, i.e.,

  | −9 − λ    4      4   |
  |  −8     3 − λ    4   | = 0
  | −16       8    7 − λ |

By C1 → C1 + C2 + C3:

  | −1 − λ    4      4   |
  | −1 − λ  3 − λ    4   | = 0
  | −1 − λ    8    7 − λ |

or

  −(1 + λ) | 1    4      4   |
           | 1  3 − λ    4   | = 0
           | 1    8    7 − λ |

By R2 → R2 − R1, R3 → R3 − R1:

  −(1 + λ) | 1     4       4   |
           | 0  −1 − λ     0   | = 0
           | 0     4    3 − λ  |

or (1 + λ)(1 + λ)(3 − λ) = 0, so λ = −1, −1, 3.

The eigenvectors of the matrix A corresponding to the eigenvalue −1 are given by the equation [A − (−1)I]X = 0, i.e., (A + I)X = 0:

  [ −8 4 4] [x1]   [0]
  [ −8 4 4] [x2] = [0]
  [−16 8 8] [x3]   [0]

By R2 → R2 − R1, R3 → R3 − 2R1:

  [−8 4 4] [x1]   [0]
  [ 0 0 0] [x2] = [0]
  [ 0 0 0] [x3]   [0]

The coefficient matrix of these equations has rank 1, so there are n − r = 3 − 1 = 2 linearly independent solutions. The single equation is

  −8x1 + 4x2 + 4x3 = 0
The above equation –8 x1 + 4 x2 + 4 x3 = 0 Engineering Mathematics for Semesters I and II 1.46 or –2 x1 + x2 + x3 = 0 Clearly, X1 = [1, 1, 1]¢ and X2 = [0, 1, –1]¢ are two linearly independent solutions. \ X1 and X2 are two linearly independent eigenvectors of A corresponding to the eigenvalue –1. Now, the eigenvectors of A corresponding to the eigenvalue 3 are given by (A – 3I) X = 0 or È-12 4 4 ˘ È x1 ˘ È0 ˘ Í ˙Í ˙Í ˙ Í -8 0 4 ˙ Í x2 ˙ Í0 ˙ ÍÎ-16 8 4 ˙˚ ÍÎ x3 ˙˚ ÍÎ0 ˙˚ or R2 Æ R2 - R1 È-12 4 4 ˘ È x1 ˘ È0 ˘ Í ˙Í ˙Í ˙ Í 4 -4 0 ˙ Í x2 ˙ Í0 ˙ , by R3 Æ R3 - R1 ÍÎ -4 4 0 ˙˚ ÍÎ x3 ˙˚ ÍÎ0 ˙˚ or È-12 4 4 ˘ È x1 ˘ È0 ˘ Í ˙Í ˙Í ˙ Í 4 -4 0 ˙ Í x2 ˙ Í0 ˙ , by R3 Æ R3 + R2 ÍÎ 0 0 0 ˙˚ ÍÎ x3 ˙˚ ÍÎ0 ˙˚ The coefficient matrix of these equations has the rank 2. \ there are n – r = 3 – 2 = 1 linearly independent solutions. The above equations are –12 x1 + 4 x2 + 4 x3 = 0 4 x1 – 4 x2 = 0 The last equation gives x2 = x1 Choose x1 = 1, so x2 = 1, then from the first equation, we have x3 = 2 \ X3 = [1, 1, 2]¢ is an eigenvector of A corresponding to the eigenvalue 3. Now, the modal matrix P = [ X1 X 2 È1 0 1˘ Í ˙ X3 ] = Í1 1 1˙ ÍÎ1 -1 2 ˙˚ The columns of P are linearly independent eigenvectors of A corresponding to the eigenvalues –1, –1, 3 respectively. The matrix P will transform A to the diagonal form D which is given by the relation. È -1 0 0 ˘ –1 P AP = ÍÍ 0 -1 0 ˙˙ = D ÍÎ 0 0 3˙˚ EXERCISE 1.3 1. Test for consistency and solve the following systems of equations. (i) 2x + 6y + 11 = 0, 6x + 20y + 6z = –3, 6y – 18z = –1 (ii) 2x – y + 3z = 8, –x + 2y + z = 4, 3x + y – 4z = 0 Matrix Algebra 2. 1.47 7. Find the values of a and b for which the equations x + 2y + 3z = 4, x + 3y + 4z = 5, x + 3y + az = b have (i) no solution, (ii) a unique solution, and (iii) an infinite number of solutions. 
3. Solve: x + y + z = 0, 2x + 5y + 7z = 0, 2x − 5y + 3z = 0.
4. Show that the only real value of λ for which the following equations have a non-zero solution is 6: x + 2y + 3z = λx, 3x + y + 2z = λy, 2x + 3y + z = λz.
5. For what values of λ do the equations x + y + z = 1, x + 2y + 4z = λ, x + 4y + 10z = λ² have a solution? Solve them completely in each case.
6. Show that the three equations −2x + y + z = a, x − 2y + z = b, x + y − 2z = c have no solution unless a + b + c = 0, in which case they have infinitely many solutions. Find these solutions when a = 1, b = 1, and c = −2.
7. Find the eigenvalues and eigenvectors of the matrix

   A = [5 4]
       [1 2]

8. Find the eigenvalues and eigenvectors of the matrix

   A = [−2  2 −3]
       [ 2  1 −6]
       [−1 −2  0]

9. Verify the Cayley–Hamilton theorem for the matrix

   A = [ 0 0 1]
       [ 3 1 0]
       [−2 1 4]

   Hence, or otherwise, evaluate A⁻¹.
10. Verify that the matrix A = [1 2 0; 2 −1 0; 0 0 −1] satisfies its own characteristic equation. Is this true of every square matrix? State the theorem that applies here.
11. If A = [1 2; −1 3], express A⁶ − 4A⁵ + 8A⁴ − 12A³ + 14A² as a linear polynomial in A.
12. Show that the following matrices are not similar to diagonal matrices:
    (i) [2 1 0; 0 2 1; 0 0 2]   (ii) [2 −1 1; 2 2 −1; 1 2 −1]
13. Show that the matrix [−9 4 4; −8 3 4; −16 8 7] is diagonalizable. Find the diagonalizing matrix P.
14. Diagonalize the matrix A = [1 0 −1; 1 2 1; 2 2 3].

Answers

1. (i) Not consistent. (ii) Consistent: x = 2, y = 2, z = 2.
2. (i) a = 4, b ≠ 5; (ii) a ≠ 4; (iii) a = 4, b = 5.
3. x = 0 = y = z.
5. For λ = 1: x = 1 + 2c, y = −3c, z = c, where c is any arbitrary constant. For λ = 2: x = 2K, y = 1 − 3K, z = K, where K is any arbitrary constant.
6. x = c − 1, y = c − 1, z = c, where c is any arbitrary constant.
7. λ = 6, 1; X1 = [4, 1]′, X2 = [1, −1]′.
8. λ = 5, −3, −3; X1 = [1, 2, −1]′, X2 = [−2, 1, 0]′, X3 = [3, 0, 1]′.
9. A⁻¹ = (1/5) [  4 1 −1]
              [−12 2  3]
              [  5 0  0]
11. −4A + 5I.
13. P = [1 0 1; 1 1 1; 1 −1 2]; diagonal form diag[−1, −1, 3].
14. D = P⁻¹AP = [1 0 0; 0 2 0; 0 0 3], P = [1 2 1; −1 −1 −1; 0 −2 −2].

1.26 QUADRATIC FORMS

Let X = [x1, x2, x3, …, xn]T be an n-vector in the vector space Vn over a field F, and let A = [aij] be an n-square matrix over F. A real quadratic form is a homogeneous expression of the form

  Q(x1, x2, …, xn) = Σ_{i,j=1}^{n} aij xi xj        (10)

in which every term is of degree 2. Now, Eq. (10) can be written as

  Q = a11x1² + (a12 + a21)x1x2 + … + (a1n + an1)x1xn + a22x2² + (a23 + a32)x2x3 + … + (a2n + an2)x2xn + … + annxn²

or, in matrix notation,

  Q = XTAX        (11)

Let bij = (aij + aji)/2 and B = [bij]. The matrix B is symmetric, since bij = bji; further, bij + bji = aij + aji. Then Eq. (11) becomes

  Q = XTBX

where B is a symmetric matrix and bij = (aij + aji)/2.

Example 28  Obtain the matrix of the quadratic form Q = x1² + 2x2² − 7x3² − 4x1x2 + 8x1x3 + 5x2x3.

Solution

  a11 = 1, a22 = 2, a33 = −7
  a12 = (1/2)(coefficient of x1x2) = (1/2)(−4) = −2
  a13 = (1/2)(coefficient of x1x3) = (1/2)(8) = 4
  a23 = (1/2)(coefficient of x2x3) = (1/2)(5) = 5/2

Then the matrix  [since a12 = a21, a13 = a31, a23 = a32]

  A = [a11 a12 a13]   [ 1   −2    4 ]
      [a21 a22 a23] = [−2    2  5/2 ]
      [a31 a32 a33]   [ 4  5/2   −7 ]

which is a symmetric matrix.

Example 29  Find the matrix of the quadratic form Q = 2x1² + 3x2² + x3² − 3x1x2 + 2x1x3 + 4x2x3.

Solution

  a11 = 2, a22 = 3, a33 = 1
  a12 = (1/2)(coefficient of x1x2) = (1/2)(−3) = −3/2 = a21
  a13 = (1/2)(coefficient of x1x3) = (1/2)(2) = 1 = a31
  a23 = (1/2)(coefficient of x2x3) = (1/2)(4) = 2 = a32

∴ the matrix

  A = [  2   −3/2  1]
      [−3/2    3   2]
      [  1     2   1]

which is a symmetric matrix.
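The symmetrisation rule bij = (aij + aji)/2 used in these examples is mechanical. A sketch, assuming NumPy is available, applied to the form of Example 28:

```python
import numpy as np

# Coefficients of Q = x1^2 + 2 x2^2 - 7 x3^2 - 4 x1 x2 + 8 x1 x3 + 5 x2 x3,
# entered as an (arbitrary) non-symmetric matrix with every cross term
# placed above the diagonal.
A = np.array([[1.0, -4, 8],
              [0, 2, 5],
              [0, 0, -7]])

B = (A + A.T) / 2            # symmetric matrix of the quadratic form
assert np.allclose(B, B.T)

# Q(x) = x^T B x reproduces the original form; at x = (1, 1, 1) every
# coefficient contributes once: 1 + 2 - 7 - 4 + 8 + 5 = 5.
x = np.array([1.0, 1, 1])
print(x @ B @ x)             # -> 5.0
```

The computed B is exactly the matrix obtained by hand in Example 28, with the off-diagonal entries −2, 4, and 5/2.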
Example 30  Obtain the symmetric matrix B for the quadratic form Q = x1² + 2x1x2 − 4x1x3 + 6x2x3 − 5x2² + 4x3².

Solution

  b11 = a11 = 1, b22 = a22 = −5, b33 = a33 = 4
  b12 = b21 = (1/2)(a12 + a21) = (1/2)(2) = 1
  b13 = b31 = (1/2)(a13 + a31) = (1/2)(−4) = −2
  b23 = b32 = (1/2)(a23 + a32) = (1/2)(6) = 3

Hence, the symmetric matrix

  B = [b11 b12 b13]   [ 1  1 −2]
      [b21 b22 b23] = [ 1 −5  3]
      [b31 b32 b33]   [−2  3  4]

1.27 COMPLEX QUADRATIC FORM

Let A be a complex matrix. Then the quadratic form is defined as

  Q = Σ_{i,j=1}^{n} aij x̄i xj = X̄TAX        (12)

where X = [x1, x2, …, xn]T is a vector in Cⁿ. The complex quadratic form is defined for a Hermitian matrix; it is then called a Hermitian form, and it is always real.

Example 31  Let the Hermitian matrix be

  A = [  1     2 + i]
      [2 − i     3  ]

Then the quadratic form is

  Q = X̄TAX = [x̄1, x̄2] [  1     2 + i] [x1]
                        [2 − i     3  ] [x2]
    = |x1|² + (2 + i)x̄1x2 + (2 − i)x1x̄2 + 3|x2|²
    = |x1|² + 2(x̄1x2 + x1x̄2) + i(x̄1x2 − x1x̄2) + 3|x2|²

Since x̄1x2 + x1x̄2 is real and x̄1x2 − x1x̄2 is purely imaginary,

  Q = |x1|² + (real) + 3|x2|² = real

Hence, the Hermitian form is always real.

1.28 CANONICAL FORM

The sum-of-squares form of a real quadratic form Q = XTAX is

  YTDY = λ1y1² + λ2y2² + λ3y3² + … + λnyn²        (13)

Equation (13) is obtained with the help of the orthogonal transformation X = PY, where P is the modal matrix and D is a diagonal matrix (spectral matrix) whose diagonal elements are the eigenvalues of the matrix A. Let r denote the rank of the matrix A, and let n be the number of variables in the quadratic form.

Index  The number of positive terms in the canonical form of a quadratic form is called the index, and is denoted by p.

Signature  The signature of a quadratic form is the difference between the number of positive terms and the number of negative terms in its canonical form.
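The index and signature just defined can be read off from the eigenvalues of the symmetric matrix of the form. A sketch, assuming NumPy is available; `index_and_signature` is a helper name chosen here, and the test matrix is that of the form 2x1x2 + 2x1x3 + 2x2x3 (treated in Example 32 below), whose eigenvalues are 2, −1, −1:

```python
import numpy as np

def index_and_signature(A, tol=1e-10):
    """Index (number of positive eigenvalues) and signature
    (positives minus negatives) of the real symmetric matrix A."""
    w = np.linalg.eigvalsh(A)
    pos = int(np.sum(w > tol))
    neg = int(np.sum(w < -tol))
    return pos, pos - neg

A = np.array([[0.0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
print(index_and_signature(A))   # -> (1, -1)
```

The tolerance guards against counting eigenvalues that are zero up to rounding, which would otherwise distort the rank-dependent classification.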
1.29 POSITIVE DEFINITE QUADRATIC AND HERMITIAN FORMS

Let Q(X) = XTAX be a real quadratic form over the real field R, or Q(X) = X̄THX a Hermitian form over the complex field C. Then Q(X) is said to be
(i) positive definite if Q(X) > 0 when X ≠ 0
(ii) negative definite if Q(X) < 0 when X ≠ 0
(iii) positive semidefinite if Q(X) ≥ 0 when X ≠ 0
(iv) negative semidefinite if Q(X) ≤ 0 when X ≠ 0

A real symmetric matrix A (or a Hermitian matrix H) is said to be a positive definite matrix (or a positive definite Hermitian matrix) iff Q = XTAX (or X̄THX) is positive definite; we then write A > 0 (or H > 0). Similarly,
• for a negative definite matrix, A < 0 (or H < 0)
• for a positive semidefinite matrix, A ≥ 0 (or H ≥ 0)
• for a negative semidefinite matrix, A ≤ 0 (or H ≤ 0)

1.30 SOME IMPORTANT REMARKS

(R–1) Let Q(X) = XTAX be a real quadratic form of order n, rank r, and index p. Then Q(X) is
  (i) positive definite (PD) iff r = p = n, or iff all the eigenvalues of A are positive;
  (ii) positive semidefinite (PSD) iff r = p < n, or iff all the eigenvalues of A are ≥ 0;
  (iii) negative definite (ND) iff r = n and p = 0, or iff all the eigenvalues of A are negative;
  (iv) negative semidefinite (NSD) iff r < n and p = 0, or iff all the eigenvalues of A are ≤ 0.
(R–2) A real quadratic form Q(X) = XTAX is PD if
  (i) det(A) > 0,
  (ii) every principal minor of A is positive, and
  (iii) aii > 0 for i = 1, 2, …, n, where A = [aij].
(R–6) A real symmetric matrix A is indefinite iff at least one of the following conditions is satisfied. (i) A has a –ve principal minor of even order (ii) A has a +ve principal minor and odd order and a –ve principal minor of odd order Note: All the above remarks are same for Hermitian form. Theorem 8: Sylvester Criterion for Positive Definiteness n A quadratic form Q(X) = XTAX =  aij xi x j (14) i , j =1 is positive definite iff all the leading principal minors of A are positive. a and (14) is negative definite iff a11 < 0, 11 a21 Example 32 a11 a12 > 0 , a21 a22 a31 a13 a23 < 0 and so on …. a33 Determine the nature, index, and signature of the quadratic form Q(X) = 2x1x2 + 2x1x3 + 2x2x3 = XT AX È0 1 1 ˘ Í ˙ Solution Here, A = Í1 0 1 ˙ ÍÎ1 1 0 ˙˚ The characteristic matrix for the matrix A is or or a12 a22 a32 -l |A – lI| = 0 fi 1 1 3 l – 3l – 2 = 0 l = 2 –1, –1 1 -l 1 1 1 =0 -l Matrix Algebra 1.53 Therefore, some eigenvalues are +ve and some are –ve. Hence, the Q(X) is indefinite. The index of Q(X) is 1 and signature = 1 – 2 = –1 Example 33 = 3x12 Examine whether the quadratic form Q(X) is positive definite, where Q(X) = XT AX + 3x1x2 + 4x22. Solution È3 1 ˘ Here, A = Í ˙ ; then the eigenvalues of A are 2 and 5 both are positive and the leading Î2 4 ˚ 3 1 = 12 – 2 = 10 > 0 2 4 Hence, Q(X) is positive definite. minor, |3| = 3 > 0 and Example 34 Determine the nature, index, and signature of the quadratic form 2 2 Q(X) = x1 + 4 x3 + 4 x1 x2 + 10 x1 x3 + 6 x2 x3 Solution È1 2 5 ˘ Í ˙ Here, A = Í2 0 3 ˙ ÍÎ5 3 4 ˙˚ The characteristic equation for A is |A – lI| = 0 or (1 - l ) 2 5 2 -l 3 =0 5 3 (4 - l ) or (1 – l) [l(l – 4) – 9] – 2[2(4 – l) – 15] + 5[6 + 5 l] = 0 or l 3 – l2 – 38l – 36 = 0 or l = –1, 1 ± 37 \ some of the eigenvalues are positive and some are negative. Hence, Q(X) is indefinite. Now, index = 1, signature = 1 – 2 = –1. 
Example 35: Determine the nature of the Hermitian form

    Q(X) = X̄^T [ 3   −2i ] X
               [ 2i    4 ]

Solution:

    Q(X) = [x̄1  x̄2] [ 3   −2i ] [x1]
                     [ 2i    4 ] [x2]

         = 3 x̄1 x1 − 2i x̄1 x2 + 2i x̄2 x1 + 4 x̄2 x2
         = 3|x1|^2 + 4|x2|^2 − 2i (x̄1 x2 − x1 x̄2)

Since x̄1 x2 − x1 x̄2 = 2i Im(x̄1 x2) is purely imaginary, the last term is real, and so Q(X) is real. Further, the leading principal minors of the matrix are 3 > 0 and 3·4 − (−2i)(2i) = 12 − 4 = 8 > 0. Hence, by Sylvester's criterion, Q(X) is positive definite.

EXERCISE 1.4

Determine the nature, index, and signature of the following quadratic forms:
1. Q(X) = x1^2 + 4x2^2 + x3^2 − 4x1x2 + 2x1x3 − 4x2x3
2. Q(X) = 3x1^2 + 3x2^2 + 3x3^2 + 2x1x2 + 2x1x3 − 2x2x3
3. Q(X) = 6x1^2 − 4x1x2 + 3x2^2 − 2x2x3 + 3x3^2 + 4x1x3
4. Q(X) = 5x1^2 + 26x2^2 + 10x3^2 + 4x2x3 + 14x1x3 + 6x1x2
5. Q(X) = −3x1^2 − 3x2^2 − 3x3^2 − 2x1x2 − 2x1x3 − 2x2x3
6. Q(X) = x1^2 + 2x2^2 + 3x3^2 + 2x1x2 + 2x2x3 − 2x1x3

Answers
1. Positive semidefinite (Q = (x1 − 2x2 + x3)^2), index = 1, signature = 1
2. Positive definite, index = 3, signature = 3
3. Positive definite, index = 3, signature = 3
4. Positive semidefinite, index = 2, signature = 2
5. Negative definite, index = 0, signature = −3
6. Indefinite, index = 2, signature = 1

1.31 APPLICATIONS OF MATRICES

(i) Differentiation and Integration of a Matrix

The elements of a matrix A may be functions of a variable, say, t. This functional dependence of A on t is shown by writing A and its elements as A(t) and a_ij(t): A = A(t) = [a_ij(t)].

The derivative of A with respect to t is defined element by element:

    dA/dt = [ d a_ij(t)/dt ]

          = [ da11/dt   da12/dt   …   da1n/dt ]
            [ da21/dt   da22/dt   …   da2n/dt ]
            [    ⋮          ⋮               ⋮  ]
            [ dan1/dt   dan2/dt   …   dann/dt ]

The integral of the matrix A is defined as ∫A(t) dt = [∫a_ij(t) dt], assuming the elements of A(t) to be integrable. Thus, the integral of A is obtained by integrating each element of A.

Example 36: Prove that d/dt (e^{at}) = a e^{at}.

Solution: We know that

    e^{at} = 1 + at/1! + (at)^2/2! + (at)^3/3! + ⋯
Therefore,

    d/dt (e^{at}) = d/dt [1 + at/1! + (at)^2/2! + (at)^3/3! + ⋯]
                  = d/dt (1) + (a/1!) d/dt (t) + (a^2/2!) d/dt (t^2) + (a^3/3!) d/dt (t^3) + ⋯
                  = 0 + a + a^2 t + (a^3/2!) t^2 + ⋯
                  = a [1 + at/1! + (at)^2/2! + ⋯]
                  = a e^{at}                                          Proved.

Example 37: Solve

    d^2y/dt^2 + 4 dy/dt − 12y = 0,   y(0) = 0,  y′(0) = 8            (1)

by the matrix method.

Solution: Put y = y1 and

    dy1/dt = y2                                                       (2)

Then Eq. (1) becomes

    dy2/dt = −4 dy1/dt + 12 y1 = 12 y1 − 4 y2                         (3)

Equations (2) and (3) written in matrix form are

    d/dt [y1] = [ 0    1] [y1]                                        (4)
         [y2]   [12   −4] [y2]

The solution of Eq. (4) is Y(t) = P e^{Λt} P^{−1} Y(0), where P is the matrix of eigenvectors of the coefficient matrix and Λ is the diagonal matrix of its eigenvalues. The characteristic equation is

    | −λ      1  |
    | 12   −4−λ  |  = 0

or −λ(−4 − λ) − 12 = 0
or λ^2 + 4λ − 12 = 0
or (λ − 2)(λ + 6) = 0
or λ = 2, −6

For λ = 2 and λ = −6, the eigenvectors are [1, 2]^T and [1, −6]^T. The matrix of eigenvectors is

    P = [1   1],    P^{−1} = (1/8) [6   1]
        [2  −6]                    [2  −1]

Now,

    P e^{Λt} P^{−1} = [1   1] [e^{2t}      0    ] (1/8) [6   1]
                      [2  −6] [  0      e^{−6t} ]       [2  −1]

                    = (1/8) [ e^{2t}     e^{−6t}  ] [6   1]
                            [2e^{2t}  −6e^{−6t}  ]  [2  −1]

                    = (1/8) [ 6e^{2t} + 2e^{−6t}      e^{2t} − e^{−6t}  ]
                            [12e^{2t} − 12e^{−6t}   2e^{2t} + 6e^{−6t}  ]

Using the initial conditions y(0) = 0 and y′(0) = 8,

    [y1] = (1/8) [ 6e^{2t} + 2e^{−6t}      e^{2t} − e^{−6t}  ] [0] = [  e^{2t} − e^{−6t}  ]
    [y2]         [12e^{2t} − 12e^{−6t}   2e^{2t} + 6e^{−6t}  ] [8]   [ 2e^{2t} + 6e^{−6t} ]

∴ y = y1 = e^{2t} − e^{−6t} and y2 = dy/dt = 2e^{2t} + 6e^{−6t}.

(ii) Use of Matrices in Graph Theory

Matrices play an important role in graph theory, which finds applications in communications, transportation, the sciences, and many other fields. There are two important
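The element-by-element definition of dA/dt in Section 1.31(i) can be sanity-checked numerically by differentiating each entry analytically and comparing with a central difference applied entry by entry. A small sketch (the sample matrix A(t) is our own choice, not from the text):

```python
import numpy as np

def A_of_t(t):
    """A sample matrix-valued function A(t) = [a_ij(t)]."""
    return np.array([[np.sin(t), t**2],
                     [np.exp(2*t), 1.0]])

def A_dot(t):
    """dA/dt taken entry by entry, as defined in Section 1.31(i)."""
    return np.array([[np.cos(t), 2*t],
                     [2*np.exp(2*t), 0.0]])

def A_dot_numeric(t, h=1e-6):
    """Central difference (A(t+h) - A(t-h)) / 2h, applied elementwise."""
    return (A_of_t(t + h) - A_of_t(t - h)) / (2*h)

t = 0.7
print(np.max(np.abs(A_dot(t) - A_dot_numeric(t))))  # prints a number near zero
```

The same elementwise idea applies to integration: ∫A(t) dt is obtained by integrating each entry separately.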
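Example 37 can likewise be verified numerically: diagonalize the coefficient matrix and compare P e^{Λt} P^{−1} Y(0) with the closed form y = e^{2t} − e^{−6t}. A sketch using NumPy:

```python
import numpy as np

# Coefficient matrix of Eq. (4) in Example 37
A = np.array([[0., 1.], [12., -4.]])
lam, P = np.linalg.eig(A)              # eigenvalues 2 and -6 (in some order)
Y0 = np.array([0., 8.])                # y(0) = 0, y'(0) = 8

def Y(t):
    """Y(t) = P e^{Lambda t} P^{-1} Y(0)."""
    return P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P) @ Y0

for t in (0.0, 0.5, 1.0):
    y_closed = np.exp(2*t) - np.exp(-6*t)          # the book's solution
    assert abs(Y(t)[0] - y_closed) < 1e-9
print("matrix-method solution matches e^{2t} - e^{-6t}")
```

Note that the result does not depend on the order in which `eig` returns the eigenvalues, since P and Λ are permuted consistently.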