KZHU.ai 🚀


Eigenvalues and Eigenvectors

Posted on October 14, 2021 (updated November 10, 2022) by keslerzhu

Table of Contents

  • Determinants
    • Laplace Expansion
    • Leibniz Formula (Big Formula)
    • Properties of a Determinant
  • Eigenvalue Problem
  • Matrix Diagonalization
  • Powers of Matrix
  • My Certificate
  • Related Quick Recap

Determinants tell us whether a matrix has an inverse, and more importantly they help us solve eigenvalue problems. The eigenvalue problem is another way to look at a square matrix, in terms of scalars called eigenvalues and vectors called eigenvectors.



Determinants

If the matrix A has an inverse, you can solve the equation A x = b by x = A^-1 b, and this x is the unique solution. The determinants of 2-by-2 and 3-by-3 matrices are easy to calculate. To generalize to an n-by-n matrix, one method is the Laplace expansion, which reduces the determinant to determinants of matrices one dimension smaller.

Laplace Expansion

Instead of working only on the first row, the Laplace expansion works going across any row or down any column; you have complete freedom to select which row or column. The signs of the terms follow a checkerboard pattern, like below:

+ - + - ...
- + - + ...
+ - + - ...
...

When calculating the determinant of a matrix this way, the easiest approach is to expand along the row or column with the most zeros.
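As a sketch, the Laplace expansion along the first row can be written as a short recursive function in plain Python (the function name det_laplace is my own):

```python
def det_laplace(m):
    """Determinant by Laplace expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j; the sign follows the checkerboard.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det_laplace(minor)
    return total

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(det_laplace(A))   # -> 25
```

In practice you would expand along whichever row or column has the most zeros; expanding along the first row keeps the sketch short.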

Leibniz Formula (Big Formula)

There are n! (n factorial) terms when computing the determinant from the Leibniz formula. The complication is determining the sign of each term: take the chosen column indices and count the permutations (flips) needed to put them in the order 1, 2, …, n; if the number of flips is even, the term gets a plus sign, otherwise a minus sign.
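A minimal plain-Python sketch of the Leibniz formula, using itertools.permutations and counting inversions to determine the sign (helper names are my own):

```python
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation: +1 if it needs an even number of flips, -1 if odd."""
    inversions = sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det_leibniz(m):
    """Determinant as a signed sum over all n! column permutations."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        term = perm_sign(p)
        for i in range(n):
            term *= m[i][p[i]]   # pick column p[i] from row i
        total += term
    return total

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(det_leibniz(A))   # -> 25, matching the Laplace expansion
```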

Properties of a Determinant

The determinant is actually a function that takes an n-by-n matrix and returns a scalar. The 3 properties below are all you need to define the determinant function; with these definitions you can prove the Laplace expansion and the Leibniz formula.

Property 1: det I = 1
Property 2: the determinant changes sign under row interchange
Property 3: the determinant is a linear function of the first row

You can use these 3 properties to prove all of the following properties.

  1. The determinant is a linear function of each row.
  2. det = 0 if two rows are identical.
  3. det = 0 if there is a row of zeros.
  4. det = 0 implies the matrix is not invertible.
  5. The determinants of a diagonal matrix, a lower triangular matrix, and an upper triangular matrix are just the products of the diagonal elements.
  6. det(AB) = det A * det B
  7. det(A^-1) = 1 / det(A)
  8. det(A^T) = det(A). Whatever you do on the rows of a matrix also applies to the columns.
  9. The determinant of a matrix does not change when you multiply a row by a number and add it to another row. (Gaussian elimination does not affect the matrix's determinant.)
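Several of these properties are easy to check numerically. A small plain-Python verification of properties 6 and 9 for 2-by-2 matrices (helper names det2 and matmul2 are my own):

```python
def det2(m):
    """Determinant of a 2-by-2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    """Product of two 2-by-2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [5, 3]]
B = [[1, 4], [2, 7]]

# Property 6: det(AB) = det A * det B
assert det2(matmul2(A, B)) == det2(A) * det2(B)

# Property 9: adding a multiple of one row to another leaves det unchanged
c = 3
A_rowop = [A[0], [A[1][0] + c * A[0][0], A[1][1] + c * A[0][1]]]
assert det2(A_rowop) == det2(A)
print("properties verified")
```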


Eigenvalue Problem

The eigenvalue problem was first introduced in physics, in the study of the rotation of solid objects; in quantum physics the eigenvalues are the energy levels of atoms. Suppose we have a matrix A and want to find its eigenvalues and eigenvectors such that the following equation holds:

A x = λ x
x is the eigenvector, λ is the eigenvalue

For each eigenvalue there is an associated eigenvector. In order to get a non-trivial vector x, the equation below must hold; it is called the characteristic equation of matrix A, which is an n-th order polynomial equation in λ.

det(A - λ I) = 0  ->  C1 λ^n + C2 λ^(n-1) + ... = 0

The polynomial equation has n roots (counted with multiplicity), which can be real or complex. All symmetric matrices have real eigenvalues.
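For a 2-by-2 matrix the characteristic equation is λ^2 - tr(A) λ + det(A) = 0, so the eigenvalues follow from the quadratic formula. A minimal sketch in plain Python (the function name eig2 is my own):

```python
import math

def eig2(m):
    """Real eigenvalues of a 2-by-2 matrix from its characteristic equation
    lambda^2 - tr(A) lambda + det(A) = 0, via the quadratic formula."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = tr * tr - 4 * det
    if disc < 0:
        raise ValueError("eigenvalues are complex")
    root = math.sqrt(disc)
    return (tr + root) / 2, (tr - root) / 2

# A symmetric matrix always has real eigenvalues.
A = [[2, 1], [1, 2]]
print(eig2(A))   # -> (3.0, 1.0)
```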

Matrix Diagonalization

If an n-by-n matrix A has n linearly independent eigenvectors, then we can use matrix multiplication to transform A into a diagonal matrix, which is very easy to compute with.

Suppose an n-by-n matrix A has n linearly independent eigenvectors x1, x2, …, xn, all of which are used as columns to construct an invertible matrix S (the i-th column is the i-th eigenvector).

Also, A has n eigenvalues λ1, λ2, …, λn, all of which are used as diagonal elements to construct a matrix Λ (a diagonal matrix with the eigenvalues along the diagonal). So we have these equations:

A S = S Λ
A = S Λ S^-1  <- factorization of A
Λ = S^-1 A S  <- we have diagonalized A into Λ
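A quick numerical check of the factorization for a small symmetric matrix, assuming the eigenpairs of A = [[2,1],[1,2]] are λ = 3 with x = (1,1) and λ = 1 with x = (1,-1) (helper names are my own):

```python
def matmul2(a, b):
    """Product of two 2-by-2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    """Inverse of a 2-by-2 matrix."""
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d, m[0][0] / d]]

A = [[2, 1], [1, 2]]    # eigenvalues 3 and 1
S = [[1, 1], [1, -1]]   # eigenvectors (1,1) and (1,-1) as columns
L = [[3, 0], [0, 1]]    # Lambda: eigenvalues along the diagonal

# A = S Lambda S^-1
reconstructed = matmul2(matmul2(S, L), inv2(S))
print(reconstructed)    # -> [[2.0, 1.0], [1.0, 2.0]]
```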

Powers of Matrix

When you want to calculate the powers of a matrix, you may first diagonalize it to shortcut the calculation.

A = S Λ S^-1
A^2 = S Λ S^-1 S Λ S^-1 = S Λ I Λ S^-1 = S Λ^2 S^-1
...
A^p = S Λ S^-1 ... S Λ S^-1 = S Λ I ... I Λ S^-1 = S Λ^p S^-1

A diagonal matrix is very easy to raise to a power: you just raise each diagonal element to that power.
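For example, with A = [[2,1],[1,2]] (eigenvalues 3 and 1, eigenvectors (1,1) and (1,-1)), A^5 can be computed by raising only the diagonal entries of Λ to the 5th power (helper names are my own):

```python
def matmul2(a, b):
    """Product of two 2-by-2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    """Inverse of a 2-by-2 matrix."""
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d, m[0][0] / d]]

A = [[2, 1], [1, 2]]
S = [[1, 1], [1, -1]]
p = 5
# Lambda^p: raise each diagonal eigenvalue to the power p.
Lp = [[3 ** p, 0], [0, 1 ** p]]
Ap = matmul2(matmul2(S, Lp), inv2(S))   # A^p = S Lambda^p S^-1
print(Ap)   # -> [[122.0, 121.0], [121.0, 122.0]]
```

Multiplying A by itself five times gives the same result, but for large p the diagonalization shortcut replaces p matrix products with one scalar power per eigenvalue.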



My Certificate

For more on Eigenvalues and Eigenvectors, please refer to the wonderful course here https://www.coursera.org/learn/matrix-algebra-engineers

My #72 course certificate from Coursera

Related Quick Recap

Vector Space & Fundamental Subspaces

I am Kesler Zhu, thank you for visiting my website. Check out more course reviews at https://KZHU.ai
