
The inverse matrix: detailed examples of solutions. Solving matrix equations. Finding the inverse matrix by the adjoint matrix method

For any nonsingular matrix A there exists a unique matrix A⁻¹ such that

A·A⁻¹ = A⁻¹·A = E,

where E is the identity matrix of the same order as A. The matrix A⁻¹ is called the inverse of the matrix A.

If anyone has forgotten: in the identity matrix the main diagonal is filled with ones, and all other positions are filled with zeros. An example of an identity matrix:

Finding the inverse matrix by the adjoint matrix method

The inverse matrix is defined by the formula:

A⁻¹ = (1/det A) · (Aij)ᵀ,

where Aij is the algebraic complement of the element aij.

That is, to compute the inverse of a matrix you need to calculate the determinant of the matrix, then find the algebraic complements of all its elements and form a new matrix from them. Next you need to transpose this matrix and divide each element of the new matrix by the determinant of the original matrix.
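The steps above can be sketched in pure Python for the 3×3 case. The matrix A below is an assumed example, not one taken from the text:

```python
# Sketch of the adjoint (cofactor) matrix method for inverting a 3x3 matrix.
# Matrices are lists of lists; assumes det(A) != 0. Example matrix is made up.

def minor(a, i, j):
    """Determinant of the 2x2 submatrix left after deleting row i and column j."""
    sub = [[v for c, v in enumerate(row) if c != j]
           for r, row in enumerate(a) if r != i]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def det3(a):
    """Cofactor expansion along the first row."""
    return sum((-1) ** j * a[0][j] * minor(a, 0, j) for j in range(3))

def inverse3(a):
    d = det3(a)
    if d == 0:
        raise ValueError("matrix is singular, no inverse exists")
    # Entry (i, j) of the inverse is the cofactor of element (j, i) divided by
    # det(A): the transposed cofactor matrix (the adjugate) over the determinant.
    return [[(-1) ** (i + j) * minor(a, j, i) / d for j in range(3)]
            for i in range(3)]

A = [[2, 1, 0], [0, 1, 0], [1, 0, 1]]
A_inv = inverse3(A)   # [[0.5, -0.5, 0.0], [0.0, 1.0, 0.0], [-0.5, 0.5, 1.0]]
```

Multiplying A by the result reproduces the identity matrix, which is the check described later in the text.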

Let's look at a few examples.

Find A -1 for matrix

Solution. We find A⁻¹ by the adjoint matrix method. We have det A = 2. Let us find the algebraic complements of the elements of the matrix A. In this case the algebraic complements of the matrix elements are the corresponding elements of the matrix itself, taken with a sign according to the formula

We have A11 = 3, A12 = -4, A21 = -1, A22 = 2. We form the adjoint matrix

We transpose the matrix A*:

We find the inverse matrix by the formula:

We get:

Use the adjoint matrix method to find A -1 if

Solution. First of all, we calculate the determinant of the given matrix to make sure that the inverse matrix exists. We have

Here we added to the elements of the second row the elements of the third row multiplied by (−1), and then expanded the determinant along the second row. Since the determinant of this matrix is nonzero, the matrix inverse to it exists. To construct the adjoint matrix, we find the algebraic complements of the elements of this matrix. We have

According to the formula

we transpose the matrix A*:

Then according to the formula

Finding the inverse matrix by the method of elementary transformations

In addition to the method of finding the inverse matrix that follows from the formula (the adjoint matrix method), there is a method of finding the inverse matrix called the method of elementary transformations.

Elementary matrix transformations

The following transformations are called elementary matrix transformations:

1) permutation of rows (columns);

2) multiplying a row (column) by a non-zero number;

3) adding to the elements of a row (column) the corresponding elements of another row (column), previously multiplied by a certain number.

To find the matrix A⁻¹, we construct a rectangular matrix B = (A | E) of size (n; 2n), appending to the matrix A on the right the identity matrix E, separated by a dividing line:

Consider an example.

Using the method of elementary transformations, find A -1 if

Solution. We form the matrix B:

Denote the rows of the matrix B by α1, α2, α3. Let's perform the following transformations on the rows of the matrix B.

Let a square matrix be given. It is required to find the inverse matrix.

First way. Theorem 4.1 on the existence and uniqueness of the inverse matrix indicates one of the ways to find it.

1. Calculate the determinant of the given matrix. If det A = 0, then the inverse matrix does not exist (the matrix is degenerate).

2. Compose the matrix of algebraic complements of the elements of the matrix.

3. Transpose that matrix to obtain the adjoint (associated) matrix.

4. Find the inverse matrix by formula (4.1), dividing all elements of the adjoint matrix by the determinant.

The second way. To find the inverse matrix, elementary transformations can be used.

1. Compose the block matrix (A | E) by appending to the given matrix A the identity matrix E of the same order.

2. Using elementary transformations performed on the rows of the matrix (A | E), bring its left block to its simplest form. The block matrix is thereby reduced to the form (Λ | S), where S is the square matrix obtained from the identity matrix as a result of the transformations.

3. If Λ = E, then the block S is equal to the inverse matrix, i.e. S = A⁻¹. If Λ ≠ E, then the matrix A has no inverse.

Indeed, with the help of elementary transformations of the rows of the matrix (A | E), its left block A can be reduced to a simplified form Λ (see Fig. 1.5). The block matrix is thereby transformed to the form (Λ | S), where S is an elementary matrix satisfying S·A = Λ. If the matrix A is nonsingular, then according to item 2 of Remarks 3.3 its simplified form coincides with the identity matrix, so Λ = E. Then from S·A = E it follows that S = A⁻¹. If the matrix A is degenerate, its simplified form differs from the identity matrix, and A has no inverse.
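The block-matrix procedure described above can be sketched in pure Python. The function name and the 2×2 example matrix are assumptions for illustration:

```python
# Sketch of the elementary-transformations (Gauss-Jordan) method: augment A
# with the identity matrix, reduce the left block to E with row operations,
# and read A^{-1} off the right block. Partial pivoting for stability.

def inverse_gauss_jordan(a):
    n = len(a)
    # Build the block matrix B = (A | E).
    b = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        # Pick the pivot row with the largest absolute value in this column.
        pivot = max(range(col, n), key=lambda r: abs(b[r][col]))
        if abs(b[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular, no inverse exists")
        b[col], b[pivot] = b[pivot], b[col]      # transformation 1: swap rows
        p = b[col][col]
        b[col] = [x / p for x in b[col]]         # transformation 2: scale a row
        for r in range(n):
            if r != col and b[r][col] != 0:
                f = b[r][col]                    # transformation 3: add a multiple
                b[r] = [x - f * y for x, y in zip(b[r], b[col])]
    return [row[n:] for row in b]                # right block is A^{-1}

A = [[2.0, 1.0], [7.0, 4.0]]       # det A = 1, so the exact inverse is [[4, -1], [-7, 2]]
A_inv = inverse_gauss_jordan(A)
```

The left block ends up as E exactly when the matrix is nonsingular, matching item 3 above.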

11. Matrix equations and their solution. Matrix notation of SLAE. Matrix method (inverse matrix method) for solving SLAE and conditions for its applicability.

Matrix equations are equations of the form A·X = C, X·A = C, or A·X·B = C, where the matrices A, B, C are known and the matrix X is unknown. If the matrices A and B are nondegenerate, the solutions of these equations are written, respectively, as X = A⁻¹·C, X = C·A⁻¹, and X = A⁻¹·C·B⁻¹.

Matrix form of writing systems of linear algebraic equations. Several matrices can be associated with each SLAE; moreover, the SLAE itself can be written as a matrix equation. For SLAE (1), consider the following matrices:

The matrix A is called the system matrix. Its elements are the coefficients of the given SLAE.

The matrix Ã is called the expanded (augmented) matrix of the system. It is obtained by appending to the system matrix a column containing the free terms b1, b2, ..., bm. Usually this column is separated by a vertical line for clarity.

The column matrix B is called the matrix of free terms, and the column matrix X the matrix of unknowns.

Using the notation introduced above, SLAE (1) can be written in the form of a matrix equation: A⋅X=B.

Note

The matrices associated with the system can be written in various ways: everything depends on the order of the variables and equations of the considered SLAE. But in any case, the order of the unknowns in each equation of a given SLAE must be the same.

The matrix method is suitable for solving SLAEs in which the number of equations coincides with the number of unknown variables and the determinant of the main matrix of the system is nonzero. If the system contains more than three equations, finding the inverse matrix requires significant computational effort; in that case it is advisable to use the Gauss method instead.
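As a sketch of the matrix method, assuming NumPy and a made-up 2×2 system (2x + y = 5, x + 3y = 10):

```python
# Sketch of the matrix method for a SLAE: write the system as A·X = B and
# compute X = A^{-1}·B. Applicable only when A is square and det(A) != 0.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # system matrix
B = np.array([5.0, 10.0])    # matrix of free terms

assert np.linalg.det(A) != 0        # condition of applicability
X = np.linalg.inv(A) @ B            # X = A^{-1} * B; solution x = 1, y = 3
```

In practice `np.linalg.solve(A, B)` is preferred numerically, which mirrors the text's advice to switch to the Gauss method for larger systems.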

12. Homogeneous SLAEs and conditions for the existence of their nonzero solutions. Properties of particular solutions of homogeneous SLAEs.

A linear equation is called homogeneous if its free term is equal to zero, and inhomogeneous otherwise. A system consisting of homogeneous equations is itself called homogeneous and has the general form:

13. The concept of linear independence and dependence of particular solutions of a homogeneous SLAE. The fundamental system of solutions (FSR) and how to find it. Representation of the general solution of a homogeneous SLAE in terms of the FSR.

A system of functions y1(x), y2(x), …, yn(x) is called linearly dependent on the interval (a, b) if there exists a set of constant coefficients, not all zero simultaneously, such that the linear combination of these functions is identically equal to zero on (a, b). If this equality holds only when all the coefficients are zero, the system of functions y1(x), y2(x), …, yn(x) is called linearly independent on the interval (a, b). In other words, the functions y1(x), y2(x), …, yn(x) are linearly dependent on (a, b) if some nontrivial linear combination of them is identically zero on (a, b), and linearly independent on (a, b) if only their trivial linear combination is identically zero there.

A fundamental system of solutions (FSR) of a homogeneous SLAE is a basis of the system of its solution columns.

The number of elements in the FSR is equal to the number of unknowns in the system minus the rank of the system matrix. Any solution to the original system is a linear combination of solutions to the FSR.
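This count can be illustrated with NumPy: a basis of the solution space (an FSR) can be read off the SVD, since the rows of Vᵀ corresponding to (near-)zero singular values solve A·x = 0. The matrix here is an assumed example:

```python
# Sketch: the FSR of A·x = 0 has n - rank(A) vectors. We extract a null-space
# basis from the SVD of an assumed rank-1 example matrix.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # second row = 2 * first row, so rank 1

n = A.shape[1]                      # number of unknowns: 3
u, s, vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = vt[rank:]              # each row is a solution of A·x = 0

fsr_size = n - rank                 # size of the FSR: 3 - 1 = 2
```

Every solution of the system is then a linear combination of the rows of `null_basis`, as the text states.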

Theorem

The general solution of an inhomogeneous SLAE is equal to the sum of a particular solution of the inhomogeneous SLAE and the general solution of the corresponding homogeneous SLAE.

1. If certain columns are solutions of a homogeneous system of equations, then any linear combination of them is also a solution of that homogeneous system.

Indeed, it follows from the equalities that

i.e. a linear combination of solutions is a solution of the homogeneous system.

2. If the rank of the matrix of a homogeneous system in n unknowns is r, then the system has n − r linearly independent solutions.

Indeed, using formulas (5.13) for the general solution of the homogeneous system, we can find particular solutions by assigning to the free variables the standard sets of values (each time assuming that one of the free variables equals one and the rest equal zero):

These solutions are linearly independent. Indeed, if a matrix is formed from these columns, then its last rows form the identity matrix. Therefore, the minor located in the last rows is nonzero (it equals one), i.e. it is a basis minor. Hence all the columns of this matrix are linearly independent (see Theorem 3.4).

Any collection of n − r linearly independent solutions of a homogeneous system is called a fundamental system (set) of solutions.

14. Minor of the k-th order, basis minor, rank of a matrix. Calculating the rank of a matrix.

The order k minor of a matrix A is the determinant of some of its square submatrices of order k.

In an m×n matrix A, a minor of order r is called a basis minor if it is nonzero and all minors of larger order, if they exist, are equal to zero.

The columns and rows of the matrix A, at the intersection of which there is a basis minor, are called the basis columns and rows of A.

Theorem 1 (on the rank of a matrix). For any matrix, the minor rank is equal to the row rank and equal to the column rank.

Theorem 2 (on the basis minor). Each column of the matrix can be expressed as a linear combination of its basis columns.

The rank of a matrix (or minor rank) is the order of its basis minor or, in other words, the largest order for which nonzero minors exist. The rank of a zero matrix is, by definition, 0.

We note two obvious properties of minor rank.

1) The rank of a matrix does not change when transposing, since when a matrix is ​​transposed, all its submatrices are transposed and the minors do not change.

2) If A' is a submatrix of matrix A, then the rank of A' does not exceed the rank of A, since the non-zero minor included in A' is also included in A.
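Both properties can be checked numerically with NumPy; the rank-2 matrix below is an assumed example:

```python
# Sketch: computing matrix rank and checking the two properties noted above
# (rank is invariant under transposition; a submatrix's rank does not exceed
# the rank of the whole matrix).
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])    # third row = 2*row2 - row1, so rank 2

r = np.linalg.matrix_rank(A)                          # -> 2
same_after_transpose = (r == np.linalg.matrix_rank(A.T))   # property 1
sub = A[:2, :2]                                       # a submatrix A'
submatrix_ok = (np.linalg.matrix_rank(sub) <= r)      # property 2
```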

15. The concept of an n-dimensional arithmetic vector. Equality of vectors. Operations on vectors (addition, subtraction, multiplication by a number, multiplication by a matrix). Linear combination of vectors.

An ordered collection of n real or complex numbers is called an n-dimensional vector. The numbers are called the coordinates of the vector.

Two (nonzero) vectors a and b are equal if they have the same direction and the same modulus. All zero vectors are considered equal. In all other cases the vectors are not equal.

Addition of vectors. There are two ways to add vectors. 1. The parallelogram rule. To add two vectors, we place the origins of both at the same point, complete the parallelogram, and draw its diagonal from that same point. This diagonal is the sum of the vectors.

2. The second way to add vectors is the triangle rule. Take the same two vectors and attach the beginning of the second to the end of the first. Now connect the beginning of the first and the end of the second: this is the sum of the vectors. By the same rule you can add several vectors: attach them one by one, and then connect the beginning of the first to the end of the last.

Subtraction of vectors. The vector −b is directed opposite to the vector b, and their lengths are equal. Now it is clear what subtraction of vectors is: the difference of the vectors a and b is the sum of the vector a and the vector −b.

Multiply a vector by a number

Multiplying a vector by a number k results in a vector whose length differs from the original length by a factor of |k|. It is codirectional with the original vector if k is greater than zero, and oppositely directed if k is less than zero.

The scalar product of vectors is the product of the lengths of the vectors and the cosine of the angle between them. If the vectors are perpendicular, their dot product is zero. In coordinates, the scalar product of vectors a and b is expressed as a·b = a1b1 + a2b2 + … + anbn.

Linear combination of vectors

A linear combination of vectors a1, …, an is a vector of the form λ1a1 + … + λnan,

where λ1, …, λn are the coefficients of the linear combination. If all the coefficients are zero, the combination is called trivial; otherwise it is nontrivial.

16 .Scalar product of arithmetic vectors. The length of the vector and the angle between the vectors. The concept of orthogonality of vectors.

The scalar product of vectors a and b is the number

The scalar product is used to calculate: 1) the angle between vectors; 2) the projection of one vector onto another; 3) the length of a vector; 4) the condition of perpendicularity of vectors.

The length of the segment AB is the distance between the points A and B. The angle between vectors a and b is the angle α = (a, b), 0 ≤ α ≤ π, through which one vector must be rotated so that its direction coincides with the other vector, provided that their origins coincide.

The orth of a is the vector of unit length having the same direction as a.
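The formulas of this section can be sketched in pure Python; the vectors are assumed examples:

```python
# Sketch of the scalar-product formulas: in coordinates a·b = a1*b1 + ... + an*bn,
# |a| = sqrt(a·a), cos(angle) = a·b / (|a|·|b|), and a ⊥ b exactly when a·b = 0.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def length(a):
    return math.sqrt(dot(a, a))

def angle(a, b):
    return math.acos(dot(a, b) / (length(a) * length(b)))

a, b = [3, 0], [0, 4]
d = dot(a, b)                       # 0 -> the vectors are orthogonal
deg = math.degrees(angle(a, b))     # 90.0
orth_a = [x / length(a) for x in a] # unit vector in the direction of a: [1.0, 0.0]
```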

17. A system of vectors and its linear combinations. The concepts of linear dependence and independence of a system of vectors. Theorem on the necessary and sufficient condition for the linear dependence of a system of vectors.

A system of vectors a1,a2,...,an is called linearly dependent if there are numbers λ1,λ2,...,λn such that at least one of them is nonzero and λ1a1+λ2a2+...+λnan=0. Otherwise, the system is called linearly independent.

Two vectors a1 and a2 are called collinear if their directions are the same or opposite.

Three vectors a1,a2 and a3 are called coplanar if they are parallel to some plane.

Geometric criteria for linear dependence:

a) the system (a1,a2) is linearly dependent if and only if the vectors a1 and a2 are collinear.

b) the system (a1,a2,a3) is linearly dependent if and only if the vectors a1,a2 and a3 are coplanar.

Theorem (a necessary and sufficient condition for the linear dependence of a system of vectors). A system of vectors in a vector space is linearly dependent if and only if one of the vectors of the system is linearly expressed in terms of the other vectors of this system.

Corollary. 1. A system of vectors in a vector space is linearly independent if and only if none of the vectors of the system is linearly expressed in terms of the other vectors of this system. 2. A system of vectors containing a zero vector or two equal vectors is linearly dependent.
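The dependence criterion can be checked mechanically: vectors a1, …, ak are linearly dependent exactly when the rank of the matrix having them as rows is less than k. A small sketch with NumPy and assumed example vectors:

```python
# Sketch: checking linear dependence of a system of vectors by the rank
# criterion (rank of the row matrix < number of vectors <=> dependent).
import numpy as np

def is_dependent(*vectors):
    m = np.array(vectors)
    return np.linalg.matrix_rank(m) < len(vectors)

a1 = [1.0, 2.0, 3.0]
a2 = [2.0, 4.0, 6.0]    # a2 = 2*a1: collinear, hence dependent
a3 = [0.0, 1.0, 0.0]

r1 = is_dependent(a1, a2)        # True  (collinear pair)
r2 = is_dependent(a1, a3)        # False (independent)
r3 = is_dependent(a1, a2, a3)    # True  (contains a dependent pair)
```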

We continue talking about operations with matrices. Namely, in the course of this lecture you will learn how to find the inverse matrix. You will learn it even if math is not your strong suit.

What is an inverse matrix? Here we can draw an analogy with reciprocals: consider, for example, the optimistic number 5 and its reciprocal 1/5. The product of these numbers equals one: 5 · (1/5) = 1. It's the same with matrices! The product of a matrix and its inverse is E, the identity matrix, which is the matrix analogue of the numerical unit. However, first things first: let us solve the important practical question of how to find this very inverse matrix.

What do you need to know to be able to find the inverse matrix? You must be able to compute determinants. You must understand what a matrix is and be able to perform some operations with matrices.

There are two main methods for finding the inverse matrix:
by using algebraic complements and by using elementary transformations.

Today we will study the first, easier way.

Let's start with the most terrible and incomprehensible part. Consider a square matrix A. The inverse matrix can be found using the following formula:

A⁻¹ = (1/det A) · (A*)ᵀ,

where det A is the determinant of the matrix A, and (A*)ᵀ is the transposed matrix of algebraic complements of the corresponding elements of the matrix A.

The concept of an inverse matrix exists only for square matrices: "two by two", "three by three", etc.

Notation: as you probably already noticed, the inverse of a matrix is denoted by the superscript −1.

Let's start with the simplest case: a two-by-two matrix. Most often, of course, the "three by three" case is required, but nevertheless I strongly recommend studying the simpler task first in order to learn the general principle of the solution.

Example:

Find the inverse of a matrix

Let's solve it. The sequence of actions is conveniently broken down into steps.

1) First we find the determinant of the matrix.

If this action is not well understood, read the material How to calculate the determinant?

Important! If the determinant of the matrix is ZERO, the inverse matrix DOES NOT EXIST.

In the example under consideration, as it turned out, the determinant is nonzero, which means that everything is in order.

2) Find the matrix of minors.

To solve our problem, it is not necessary to know what a minor is, however, it is advisable to read the article How to calculate the determinant.

The matrix of minors has the same dimensions as the matrix itself, that is, in this case 2×2.
The task is small: it remains to find four numbers and put them in place of the asterisks.

Back to our matrix.
Let's look at the top-left element first:

How do we find its minor?

It is done like this: MENTALLY cross out the row and column in which this element is located:

The remaining number is the minor of the given element, which we write into our matrix of minors:

Consider the following matrix element:

Mentally cross out the row and column in which this element is located:

What remains is the minor of this element, which we write into our matrix:

Similarly, we consider the elements of the second row and find their minors:


Ready.

3) Find the matrix of algebraic complements.

It's simple: in the matrix of minors you need to CHANGE THE SIGNS of two numbers:

It is these numbers that I have circled!

is the matrix of algebraic complements of the corresponding elements of the matrix .

And that's all there is to it…

4) Find the transposed matrix of algebraic complements.

is the transposed matrix of algebraic complements of the corresponding elements of the matrix .

5) Answer.

Remember our formula
All found!

So the inverse matrix is:

It's best to leave the answer as is. There is NO NEED to divide each element of the matrix by 2, as this produces fractional numbers. This nuance is discussed in more detail in the same article, Actions with matrices.

How to check the solution?

Perform the matrix multiplication A·A⁻¹ (or A⁻¹·A) and make sure the result is the identity matrix.

Examination:

The already mentioned identity matrix is a matrix with ones on the main diagonal and zeros elsewhere.

Thus, the inverse matrix is found correctly.

If you perform the multiplication in the other order, the result will also be the identity matrix. This is one of the few cases where matrix multiplication is commutative; more detailed information can be found in the article Properties of operations on matrices. Matrix expressions. Also note that during the check, the constant (the fraction) is brought out in front and processed at the very end, after the matrix multiplication. This is a standard technique.
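The check described above can be sketched with NumPy. The matrix here is an assumed stand-in, since the article's own example survives only as a lost picture:

```python
# Sketch of the verification step: a candidate inverse is correct iff both
# A·A^{-1} and A^{-1}·A give the identity matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])      # det A = 1
A_inv = np.linalg.inv(A)

E = np.eye(2)
check_right = np.allclose(A @ A_inv, E)   # check one way...
check_left = np.allclose(A_inv @ A, E)    # ...and the other: one of the rare
                                          # cases where multiplication commutes
```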

Let's move on to a more common case in practice - the three-by-three matrix:

Example:

Find the inverse of a matrix

The algorithm is exactly the same as for the two-by-two case.

We find the inverse matrix by the formula A⁻¹ = (1/det A) · (A*)ᵀ, where (A*)ᵀ is the transposed matrix of algebraic complements of the corresponding elements of the matrix A.

1) Find the matrix determinant.


Here the determinant is expanded along the first row.

Also, do not forget that the determinant is nonzero, which means that everything is fine: the inverse matrix exists.

2) Find the matrix of minors.

The matrix of minors has dimensions "three by three", and we need to find nine numbers.

I'll examine a couple of minors in detail:

Consider the following matrix element:

MENTALLY cross out the row and column in which this element is located:

The remaining four numbers form a "two by two" determinant.

This two-by-two determinant is the minor of the given element. It needs to be calculated:


That's it: the minor is found, and we write it into our matrix of minors:

As you may have guessed, there are nine two-by-two determinants to calculate. The process is, of course, tedious, but the case is not the most difficult; it can be worse.

Well, to consolidate - finding another minor in the pictures:

Try to calculate the rest of the minors yourself.

Final Result:
is the matrix of minors of the corresponding elements of the matrix .

The fact that all the minors turned out to be negative is pure coincidence.

3) Find the matrix of algebraic complements.

In the matrix of minors it is necessary to CHANGE THE SIGNS of strictly the following elements:

In this case:

Finding the inverse of a "four by four" matrix is not considered, since only a sadistic teacher would give such a task (the student would have to calculate one "four by four" determinant and 16 "three by three" determinants). In my practice there was only one such case, and the customer of that assignment paid dearly for my torment =).

In a number of textbooks, manuals, you can find a slightly different approach to finding the inverse matrix, but I recommend using the above solution algorithm. Why? Because the probability of getting confused in calculations and signs is much less.

ALGEBRAIC COMPLEMENTS AND MINORS

Suppose we have a third-order determinant: .

The minor corresponding to the element aij of a third-order determinant is the second-order determinant obtained from the given one by deleting the row and column at whose intersection the given element stands, i.e. the i-th row and the j-th column. The minor corresponding to the element aij is denoted Mij.

For example, the minor M12 corresponding to the element a12 is the determinant obtained by deleting the 1st row and 2nd column from the given determinant.

Thus, the formula defining the third-order determinant shows that this determinant equals the sum of the products of the elements of the 1st row by their corresponding minors; the minor corresponding to the element a12 is taken with the "–" sign, i.e. we can write that

. (1)

Similarly, one can introduce definitions of minors for determinants of the second order and higher orders.

Let's introduce one more concept.

The algebraic complement of an element aij of a determinant is its minor Mij multiplied by (–1)^(i+j).

The algebraic complement of an element aij is denoted Aij.

From the definition we get that the connection between the algebraic complement of an element and its minor is expressed by the equality Aij = (–1)^(i+j) · Mij.

For example,

Example. Given a determinant, find A13, A21, A32.

It is easy to see that using the algebraic complements of elements, formula (1) can be written as:

Similarly to this formula, one can obtain the decomposition of the determinant over the elements of any row or column.

For example, the decomposition of the determinant over the elements of the 2nd row can be obtained as follows. According to property 2 of the determinant, we have:

Let's expand the obtained determinant by the elements of the 1st row.

. (2)

Hence, because the second-order determinants in formula (2) are the minors of the elements a21, a22, a23, we have obtained the expansion of the determinant along the elements of the 2nd row.

Similarly, one can obtain the decomposition of the determinant over the elements of the third row. Using property 1 of determinants (on transposition), one can show that similar expansions are also valid for expansions in column elements.

Thus, the following theorem is true.

Theorem (on expansion of the determinant in a given row or column). The determinant is equal to the sum of the products of the elements of any of its rows (or columns) and their algebraic complements.

All of the above is true for determinants of any higher order.
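The expansion theorem can be sketched in pure Python for the 3×3 case: expanding along any row gives the same determinant. The matrix is an assumed example:

```python
# Sketch of the expansion theorem: the determinant equals the sum of the
# elements of any row times their cofactors A_ij = (-1)^(i+j) * M_ij.

def minor(a, i, j):
    """Second-order determinant left after deleting row i and column j."""
    sub = [[v for c, v in enumerate(row) if c != j]
           for r, row in enumerate(a) if r != i]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def cofactor(a, i, j):
    return (-1) ** (i + j) * minor(a, i, j)

def det_by_row(a, i):
    """Expand the 3x3 determinant along row i."""
    return sum(a[i][j] * cofactor(a, i, j) for j in range(3))

A = [[1, 2, 3],
     [0, 4, 5],
     [1, 0, 6]]

results = [det_by_row(A, i) for i in range(3)]   # -> [22, 22, 22]
```

The same value from every row is exactly what the theorem asserts; by the transposition property, column expansions agree as well.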

Examples.

INVERSE MATRIX

The concept of an inverse matrix is introduced only for square matrices.

If A is a square matrix, then its inverse is the matrix denoted A⁻¹ that satisfies the condition A·A⁻¹ = A⁻¹·A = E. (This definition is introduced by analogy with the multiplication of numbers.)

Definition 1: A matrix is ​​called degenerate if its determinant is zero.

Definition 2: A matrix is ​​called non-singular if its determinant is not equal to zero.

Matrix "A" is called inverse matrix, if the condition A*A-1 = A-1 *A = E (identity matrix) is satisfied.

A square matrix is ​​invertible only if it is nonsingular.

Scheme for calculating the inverse matrix:

1) Calculate the determinant of the matrix A; if det A = 0, then the inverse matrix does not exist.

2) Find all algebraic complements of the matrix "A".

3) Compose the matrix of algebraic complements (Aij).

4) Transpose the matrix of algebraic complements: (Aij)ᵀ.

5) Multiply the transposed matrix by the reciprocal of the determinant of the matrix.

6) Run a check:

At first glance it may seem difficult, but in fact everything is very simple. All solutions are based on simple arithmetic operations; the main thing when solving is not to get confused with the "−" and "+" signs and not to lose them.

Now let's solve a practical task together by calculating an inverse matrix.

Task: find the inverse matrix "A", shown in the picture below:

We solve everything exactly as indicated in the plan for calculating the inverse matrix.

1. The first thing to do is to find the determinant of the matrix "A":

Explanation:

We have simplified our determinant by using its main properties. First, we added to the 2nd and 3rd rows the elements of the first row, multiplied by a suitable number.

Secondly, we swapped the 2nd and 3rd columns of the determinant and, according to its properties, changed the sign in front of it.

Thirdly, we took the common factor (−1) out of the second row, thereby changing the sign again, so it became positive. We also simplified row 3 in the same way as at the very beginning of the example.

We obtained a triangular determinant, in which the elements below the diagonal are equal to zero; by property 7 such a determinant equals the product of the diagonal elements. As a result we got det A = 26, hence the inverse matrix exists.

2. Find all the algebraic complements of the matrix A:

A11 = 1*(3+1) = 4

A12 = -1*(9+2) = -11

A13 = 1*1 = 1

A21 = -1*(-6) = 6

A22 = 1*(3-0) = 3

A23 = -1*(1+4) = -5

A31 = 1*2 = 2

A32 = -1*(-1) = -1

A33 = 1*(1+6) = 7

3. The next step is to compose a matrix from the resulting complements:

5. We multiply this matrix by the reciprocal of the determinant, that is, by 1/26:

6. Well, now we just need to check:

During the verification we obtained the identity matrix; therefore, the solution was carried out correctly.

The second way to calculate the inverse matrix.

1. Elementary transformations of matrices.

2. The inverse matrix via elementary transformations.

Elementary matrix transformation includes:

1. Multiplying a row by a nonzero number.

2. Adding to any row another row multiplied by a number.

3. Swapping the rows of the matrix.

Applying a chain of elementary transformations, we obtain another matrix.

A⁻¹ = ?

1. (A | E) ~ (E | A⁻¹)

2. A⁻¹·A = E

Let's look at this in a practical example with real numbers.

Exercise: Find the inverse matrix.

Solution:

Let's check:

A little clarification on the solution:

We first swapped rows 1 and 2 of the matrix, then multiplied the first row by (−1).

After that, the first row was multiplied by (−2) and added to the second row of the matrix. Then we multiplied the 2nd row by 1/4.

The final stage of the transformations was multiplying the second row by 2 and adding it to the first. As a result, we have the identity matrix on the left; therefore, the inverse matrix is the matrix on the right.
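Each of these row operations is itself a left-multiplication by an elementary matrix, so the chain that turns A into E multiplies out to A⁻¹. A sketch with NumPy; the 2×2 matrix and the specific operations are assumed for illustration, not the ones from the (lost) picture:

```python
# Sketch: elementary row operations as elementary matrices. The chain of
# transformations reducing A to E, read as a matrix product, equals A^{-1}.
import numpy as np

A = np.array([[0.0, 1.0],
              [2.0, 3.0]])

S = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # swap the two rows
M = np.array([[0.5, 0.0],
              [0.0, 1.0]])      # multiply row 1 by 1/2
T = np.array([[1.0, -1.5],
              [0.0, 1.0]])      # add (-1.5) * row 2 to row 1

reduced = T @ M @ S @ A          # the chain reduces A to E
A_inv = T @ M @ S                # ...so the product of the chain is A^{-1}
```

This is exactly the equality S·A = E from the theory section earlier: the accumulated transformation matrix is the inverse.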

After checking, we are convinced that the solution is correct.

As you can see, calculating the inverse matrix is ​​very simple.

In concluding this lecture, I would also like to devote some time to the properties of such matrices.