
Linear_Equations

Systems of Linear Equations


In this notebook we learn about various functions for solving and manipulating systems of linear equations.

Initialization

import numpy as np
import scipy.linalg as la
import sys

The main linear algebra routines are contained in scipy.linalg. Behind the scenes the calculations are done using the BLAS and LAPACK libraries. Depending on how your version was compiled, these can be highly optimized and even parallelized.

Some functions also have implementations in NumPy. In general we are going to prefer the versions from scipy.linalg. In practice, both versions should give the same results except in extreme cases, such as dealing with (numerically) singular matrices.
We will represent both vectors and matrices using NumPy arrays. NumPy arrays are not vectors or matrices: a vector or matrix is a special object with special properties. For example, we know what multiplying a matrix and a vector means, whereas multiplying a two dimensional array by a one dimensional array is not uniquely defined. There is a matrix object in NumPy, but we will not use it in this notebook. Because we use arrays instead of matrices, we will need the function np.dot when we multiply a matrix and a vector. See below for details.
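As a quick illustration (using a small matrix and vector chosen just for this example), np.dot performs the matrix-vector product, and in Python 3.5+ the @ operator is equivalent:

```python
import numpy as np

M = np.array([[1., 2], [3, 4]])   # 2x2 array standing in for a matrix
v = np.array([1., 1])             # length-2 array standing in for a vector

print(M * v)         # component-wise: each column of M scaled by an entry of v
print(np.dot(M, v))  # matrix-vector product: [3. 7.]
print(M @ v)         # the same product, via the @ operator (Python 3.5+)
```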

As always, begin by looking at the documentation. From scipy.linalg we see there are many routines. Here we will focus on some of those from “Basics” and those related to the LU decomposition.

Also as always we should look at the documentation for any and all functions that interest us.

la.lu_factor?

la.lu_solve?

Notice that there are a few functions related to the LU decomposition. We will learn why this is the case below.

Sample linear system
Consider a sample linear system of equations
$$ \begin{align}
10 x_1 - 7 x_2 + x_3 &= 8, \\
-3 x_1 + 2 x_2 + 6 x_3 &= 4, \\
5 x_1 - x_2 + 5 x_3 &= 6 .
\end{align} $$
We can write this in the form
$$ A x = b, $$
$$ A = \left( \begin{array}{rrr}
10 & -7 & 1 \\
-3 & 2 & 6 \\
5 & -1 & 5
\end{array} \right)
\quad \mathrm{and} \quad
b = \left( \begin{array}{c} 8 \\ 4 \\ 6 \end{array} \right). $$

We can create these as arrays using np.array. Notice that we force the result to be a floating point array, not an integer array, by making any one of the entries a floating point number. NumPy tries to use the "simplest" data type when it creates an array. There are a few ways to force the type it chooses; this is one of them.

A = np.array([ [10, -7, 1.], [-3, 2, 6], [5, -1, 5] ])
b = np.array([8., 4, 6])
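An equivalent, more explicit way to force floating point storage is the dtype keyword (the values here are the same as above; this is just an alternative spelling):

```python
import numpy as np

# Passing dtype=float makes the intent explicit, with no need for a
# trailing decimal point on any entry.
A = np.array([[10, -7, 1], [-3, 2, 6], [5, -1, 5]], dtype=float)
b = np.array([8, 4, 6], dtype=float)
print(A.dtype, b.dtype)  # float64 float64
```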

Solving the System
To solve this system we can, not surprisingly, use solve.

x = la.solve(A, b)
print(x)

[ 0. -1. 1.]

Verifying the Solution
To verify the solution we can directly evaluate $Ax$ and compare it to $b$. Of course this only verifies that the solve worked correctly; it does not verify that we entered $A$ and $b$ correctly. There are two steps: first perform the multiplication, then compare $Ax$ to $b$.

Matrix Multiplication
To multiply matrices we do not use '*'. For arrays, '*' performs component-wise multiplication. This is not the same as matrix multiplication.

To see this, consider using the usual multiplication:

if sys.version_info > (3, 0):
    # Python 3 code in this block
    print('A =', A)
    print('x =', x)
    print('A*x =', A*x)
else:
    # Python 2 code in this block
    print("A = %s" % np.array_str(A))
    print("x = %s" % np.array_str(x))
    print("A*x = %s" % np.array_str(A*x))

A = [[10. -7. 1.]
[-3. 2. 6.]
[ 5. -1. 5.]]
x = [ 0. -1. 1.]
A*x = [[ 0. 7. 1.]
[-0. -2. 6.]
[ 0. 1. 5.]]

Notice what happened: the first component of $x$ multiplied the first column of $A$, the second component of $x$ multiplied the second column of $A$, and similarly for the third. This is what we mean by component-wise multiplication.

If we want to perform matrix multiplication we need to use a function, np.dot. Consider

if sys.version_info > (3, 0):
    # Python 3 code in this block
    print('np.dot(A, x) =', np.dot(A, x))
else:
    # Python 2 code in this block
    print("np.dot(A, x) = %s" % np.array_str(np.dot(A, x)))

np.dot(A, x) = [8. 4. 6.]

Notice that this returns a vector, and that vector should be $b$.

Comparing calculations
This is a small system, so we can easily see that all entries of the residual $Ax - b$ are nearly zero. We do not want to check that $Ax$ is exactly equal to $b$ (in general this does not work numerically); instead we check that the values are "close enough". There are a few ways to do this; here we use np.allclose. (At this point you should be checking the documentation! Here we use the default settings, but we could adjust the tolerances for a more stringent test.)

print(np.dot(A,x)-b)
print(np.allclose(np.dot(A, x), b))

[0. 0. 0.]
True
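As a sketch of how the tolerances could be adjusted, np.allclose accepts rtol and atol keywords; the values below are illustrative choices, not recommendations:

```python
import numpy as np
import scipy.linalg as la

A = np.array([[10., -7, 1], [-3, 2, 6], [5, -1, 5]])
b = np.array([8., 4, 6])
x = la.solve(A, b)

# Default tolerances are rtol=1e-5, atol=1e-8.
print(np.allclose(np.dot(A, x), b))
# A much more stringent test; for a well-conditioned system like this
# the residual should still be well below these tolerances.
print(np.allclose(np.dot(A, x), b, rtol=1e-12, atol=1e-12))
```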

Factoring the system
The work required to solve the system of equations mostly involves manipulating the matrix, $A$, and then performing the same manipulations on the right hand side of the equations, $b$. We could instead have many right hand sides (a two dimensional array with multiple columns, one for each set of values for which we want to find a solution). This is handled by solve. Alternatively, we can decompose $A$ into pieces that encode the required manipulations using the LU decomposition. The decomposition only needs to be performed once; it can then be applied whenever needed. Finally, for numerical stability we should also pivot the rows of the matrix, encoded by a permutation matrix $P$. LU decomposition with pivoting is provided by scipy.linalg.lu.

(P, L, U) = la.lu(A)

if sys.version_info > (3, 0):
    # Python 3 code in this block
    print("Permutation matrix:\n", P)
    print("Lower triangular matrix:\n", L)
    print("Upper triangular matrix:\n", U)
    print("PLU == A?", np.allclose(np.dot(P, np.dot(L, U)), A))
else:
    # Python 2 code in this block
    print("Permutation matrix:\n%s" % np.array_str(P))
    print("Lower triangular matrix:\n%s" % np.array_str(L))
    print("Upper triangular matrix:\n%s" % np.array_str(U))
    print("PLU == A? %r" % np.allclose(np.dot(P, np.dot(L, U)), A))

Permutation matrix:
[[1. 0. 0.]
[0. 0. 1.]
[0. 1. 0.]]
Lower triangular matrix:
[[ 1. 0. 0. ]
[ 0.5 1. 0. ]
[-0.3 -0.04 1. ]]
Upper triangular matrix:
[[10. -7. 1. ]
[ 0. 2.5 4.5 ]
[ 0. 0. 6.48]]
PLU == A? True

By default we are given the permutation matrix $P$ and the lower and upper triangular matrices that, when recombined, produce $A$. This function is useful when we want to see the decomposition in a form that is easy for us to read. If we want to use the decomposition for solving systems of linear equations, the information can be stored in a form more efficient for the computer's use.

Note that our choice of using generic arrays for storing matrices comes at a cost when we want to multiply several of them together: here we had to use nested calls to np.dot. This can be slightly streamlined, but remains tedious. It is the price we pay for our choice; in practice it is usually a rather small one.
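One way to streamline the nested np.dot calls (in Python 3.5 and later) is the @ matrix-multiplication operator, which chains naturally:

```python
import numpy as np
import scipy.linalg as la

A = np.array([[10., -7, 1], [-3, 2, 6], [5, -1, 5]])
P, L, U = la.lu(A)

# Nested np.dot calls versus the @ operator; the results are identical.
recombined_dot = np.dot(P, np.dot(L, U))
recombined_at = P @ L @ U
print(np.allclose(recombined_dot, A))
print(np.allclose(recombined_at, recombined_dot))
```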

Getting back to solving a system of equations, we can use scipy.linalg.lu_factor to decompose in a form more useful for the computer.

la.lu_factor(A)

(array([[10. , -7. , 1. ],
[ 0.5 , 2.5 , 4.5 ],
[-0.3 , -0.04, 6.48]]),
array([0, 2, 2], dtype=int32))

Here $L$ and $U$ have been merged into a single matrix (notice in the form above that $L$ has ones along the diagonal, so they do not need to be stored) and the permutations are represented by a one dimensional array instead of a matrix. This is much more memory efficient, but also much harder for us to read. Even so, it can be used in scipy.linalg.lu_solve. In fact, the tuple returned here is exactly what needs to be provided as the first argument to that function.
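For illustration only (lu_solve handles all of this internally), the merged matrix can be split back into its triangular pieces with np.tril and np.triu. Note that the pivot array records sequential row swaps in LAPACK's convention, not a permutation vector directly:

```python
import numpy as np
import scipy.linalg as la

A = np.array([[10., -7, 1], [-3, 2, 6], [5, -1, 5]])
lu, piv = la.lu_factor(A)

# The strict lower triangle plus an implicit unit diagonal gives L;
# the upper triangle (including the diagonal) gives U.
L = np.tril(lu, k=-1) + np.eye(3)
U = np.triu(lu)
print(L)
print(U)
```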

lufactors = la.lu_factor(A)
xlu = la.lu_solve(lufactors, b)
print("LU solution: %s" % np.array_str(xlu))
print("Consistent with previous solution? %r" % np.allclose(xlu, x))

LU solution: [ 0. -1. 1.]
Consistent with previous solution? True

If $A=LU$, then $Ax=LUx=b$. Thus, first solve $Ly=b$ for $y$ by forward substitution, then solve $Ux=y$ for $x$ by back substitution.

With a permutation matrix $P$ such that $PA=LU$, the decomposition is sometimes called the LUP decomposition, and $LUx=Pb$. In this case the solution again proceeds in two logical steps: (1) solve the equation $Ly=Pb$ for $y$ and (2) solve the equation $Ux=y$ for $x$.
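The two-step procedure can be sketched directly with scipy.linalg.solve_triangular (a hedged illustration; lu_solve does this for you). Note that scipy.linalg.lu returns $P$ such that $A = PLU$, so the permuted right hand side is obtained with the transpose of $P$:

```python
import numpy as np
import scipy.linalg as la

A = np.array([[10., -7, 1], [-3, 2, 6], [5, -1, 5]])
b = np.array([8., 4, 6])

P, L, U = la.lu(A)

# Since A = P L U and P is orthogonal, L U x = P^T b.
# Step 1: forward substitution solves L y = P^T b.
y = la.solve_triangular(L, P.T @ b, lower=True)
# Step 2: back substitution solves U x = y.
x = la.solve_triangular(U, y)
print(x)  # should match la.solve(A, b)
```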

Slightly Larger System
As an example of a slightly larger system, and one where we want to find solutions with different right hand sides, consider
$$ A = \left( \begin{array}{rrrr}
2 & -3 & 1 & 3 \\
1 & 4 & -3 & -3 \\
5 & 3 & -1 & -1 \\
3 & -6 & -3 & 1
\end{array} \right),
\quad
b_1 = \left( \begin{array}{r}
-4 \\ 1 \\ 8 \\ -5
\end{array} \right),
\quad
b_2 = \left( \begin{array}{r}
-10 \\ 9 \\ -3 \\ -24
\end{array} \right). $$

A = np.array([ [2., -3, 1, 3], [1, 4, -3, -3], [5, 3, -1, -1], [3, -6, -3, 1]])
b1 = np.array([-4., 1, 8, -5])
b2 = np.array([-10., 9, -3, -24])

We can solve this directly using solve.

print("Solution for b1: %s" % np.array_str(la.solve(A, b1)))
print("Solution for b2: %s" % np.array_str(la.solve(A, b2)))

Solution for b1: [ 2. -1. 4. -5.]
Solution for b2: [-2.6875 3.925 -1.56875 2.90625]

Alternatively we can use an LU decomposition. Notice the decomposition is only performed once.

lufactors = la.lu_factor(A)
print("LU solution for b1: %s" % np.array_str(la.lu_solve(lufactors, b1)))
print("LU solution for b2: %s" % np.array_str(la.lu_solve(lufactors, b2)))

LU solution for b1: [ 2. -1. 4. -5.]
LU solution for b2: [-2.6875 3.925 -1.56875 2.90625]

Finally, we could have solved for both right hand sides at once. To do this we need to combine b1 and b2. We do this below.

b = np.vstack((b1, b2))
b

array([[ -4., 1., 8., -5.],
[-10., 9., -3., -24.]])

If we try to use this with solve it fails!

la.solve(A, b)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 la.solve(A, b)

~\anaconda3\lib\site-packages\scipy\linalg\basic.py in solve(a, b, sym_pos, lower, overwrite_a, overwrite_b, debug, check_finite, assume_a, transposed)
    148     # Last chance to catch 1x1 scalar a and 1-D b arrays
    149     if not (n == 1 and b1.size != 0):
--> 150         raise ValueError('Input b has to have same number of rows as '
    151                          'input a')

ValueError: Input b has to have same number of rows as input a

We get an incompatible dimensions error because b should have the same number of rows as A, not the same number of columns. This is easy to fix: just take the transpose of b. There is a shorthand for doing this, as shown below.

print("Original b:\n%s" % np.array_str(b))
print("Transpose of b:\n%s" % np.array_str(b.T))

Original b:
[[ -4. 1. 8. -5.]
[-10. 9. -3. -24.]]
Transpose of b:
[[ -4. -10.]
[ 1. 9.]
[ 8. -3.]
[ -5. -24.]]

With this we find:

print("scipy.linalg.solve:\n%s" % np.array_str(la.solve(A, b.T)))
print("scipy.linalg.lu_solve:\n%s" % np.array_str(la.lu_solve(lufactors, b.T)))

scipy.linalg.solve:
[[ 2. -2.6875 ]
[-1. 3.925 ]
[ 4. -1.56875]
[-5. 2.90625]]
scipy.linalg.lu_solve:
[[ 2. -2.6875 ]
[-1. 3.925 ]
[ 4. -1.56875]
[-5. 2.90625]]

Each column of the result is the solution for the corresponding column of the transposed array b.
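To check that interpretation, each solution column can be verified against its right hand side (a sketch restating the arrays defined above so it runs on its own):

```python
import numpy as np
import scipy.linalg as la

A = np.array([[2., -3, 1, 3], [1, 4, -3, -3], [5, 3, -1, -1], [3, -6, -3, 1]])
b1 = np.array([-4., 1, 8, -5])
b2 = np.array([-10., 9, -3, -24])

b = np.vstack((b1, b2))
X = la.solve(A, b.T)

# A times each solution column should reproduce the corresponding
# right-hand-side column.
print(np.allclose(np.dot(A, X), b.T))
```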

Note: This notebook is adapted from http://www.phys.cwru.edu/courses/p250/
