m55089 | Modify the Cholesky Algorithm as suggested in the text so that it can be used to solve linear systems, and use the modified algorithm to solve the linear systems in Exercise 7.
The linear systems in Exercise 7 are:
a. 2x1 − x2 = 3,
−x1 + 2x2 − x3 = −3,
− x2 + 2x3 = 1.
b. 4x1 + x2 + x3 + x4 = 0.65,
x1 + 3x2 − x3 + x4 = 0.05,
x1 − x2 + 2x3 = 0,
x1 + x2 + 2x4 = 0.5.
c. 4x1 + x2 − x3 = 7,
x1 + 3x2 − x3 = 8,
−x1 − x2 + 5x3 + 2x4 = −4,
2x3 + 4x4 = 6.
d. 6x1 + 2x2 + x3 − x4 = 0,
2x1 + 4x2 + x3 = 7,
x1 + x2 + 4x3 − x4 = −1,
−x1 − x3 + 3x4 = −2.
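The modification asked for in m55089 can be sketched as follows: factor A = L Lᵗ, then solve L y = b by forward substitution and Lᵗ x = y by backward substitution. This is a minimal numpy sketch, not the textbook's pseudocode; it assumes A is symmetric positive definite, and is checked here on system (a), whose solution is x = (1, −1, 0).

```python
# Cholesky factorization modified to solve Ax = b (sketch, SPD matrices only).
import numpy as np

def cholesky_solve(A, b):
    L = np.linalg.cholesky(A)          # A = L @ L.T
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                 # forward substitution: L y = b
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):     # backward substitution: L^t x = y
        x[i] = (y[i] - L[i+1:, i] @ x[i+1:]) / L[i, i]
    return x

# System (a): 2x1 - x2 = 3, -x1 + 2x2 - x3 = -3, -x2 + 2x3 = 1
A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
b = np.array([3., -3., 1.])
print(cholesky_solve(A, b))            # expect (1, -1, 0)
```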
m55090 | Modify the LDLt Factorization Algorithm as suggested in the text so that it can be used to solve linear systems. Use the modified algorithm to solve the following linear systems.
a. 2x1 − x2 = 3,
−x1 + 2x2 − x3 = −3,
− x2 + 2x3 = 1.
b. 4x1 + x2 + x3 + x4 = 0.65,
x1 + 3x2 − x3 + x4 = 0.05,
x1 − x2 + 2x3 = 0,
x1 + x2 + 2x4 = 0.5.
c. 4x1 + x2 − x3 = 7,
x1 + 3x2 − x3 = 8,
−x1 − x2 + 5x3 + 2x4 = −4,
2x3 + 4x4 = 6.
d. 6x1 + 2x2 + x3 − x4 = 0,
2x1 + 4x2 + x3 = 7,
x1 + x2 + 4x3 − x4 = −1,
−x1 − x3 + 3x4 = −2.
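A sketch of the LDLᵗ modification from m55090 (assumptions: A symmetric with nonzero pivots, so no pivoting is needed): build unit lower-triangular L and diagonal D with A = L D Lᵗ, then solve Lz = b, Dw = z, Lᵗx = w. It is checked on system (a), whose solution is again x = (1, −1, 0).

```python
# LDL^t factorization extended to solve Ax = b (sketch; symmetric A assumed).
import numpy as np

def ldlt_solve(A, b):
    n = len(b)
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):                         # build L and D column by column
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    z = np.zeros(n)
    for i in range(n):                         # forward substitution: L z = b
        z[i] = b[i] - L[i, :i] @ z[:i]
    w = z / d                                  # diagonal solve: D w = z
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):             # back substitution: L^t x = w
        x[i] = w[i] - L[i+1:, i] @ x[i+1:]
    return x

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
print(ldlt_solve(A, np.array([3., -3., 1.])))  # expect (1, -1, 0)
```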
m55091 | Modify the LDLt factorization to factor a symmetric matrix A. Apply the new algorithm to the following matrices:
a.
b.
c.
d.
m55092 | Modify the LU Factorization Algorithm so that it can be used to solve a linear system, and then solve the following linear systems.
a. 2x1− x2+ x3 = −1,
3x1+3x2+9x3 = 0,
3x1+3x2+5x3 = 4.
b. 1.012x1 − 2.132x2 + 3.104x3 = 1.984,
−2.132x1 + 4.096x2 − 7.013x3 = −5.049,
3.104x1 − 7.013x2 + 0.014x3 = −3.895.
c. 2x1 = 3,
x1 + 1.5x2 = 4.5,
− 3x2 + 0.5x3 = −6.6,
2x1 − 2x2 + x3 + x4 = 0.8.
d. 2.1756x1 + 4.0231x2 − 2.1732x3 + 5.1967x4 = 17.102,
−4.0231x1 + 6.0000x2 + 1.1973x4 = −6.1593,
−1.0000x1 − 5.2107x2 + 1.1111x3 = 3.0004,
6.0235x1 + 7.0000x2 − 4.1561x4 = 0.0000.
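The LU modification asked for in m55092 can be sketched as a Doolittle factorization without pivoting (so nonzero pivots are assumed), followed by forward and back substitution. It is checked on system (a), whose solution is x = (1, 2, −1); this is an illustrative sketch, not the textbook's algorithm listing.

```python
# Doolittle LU (no pivoting) extended to solve Ax = b (sketch).
import numpy as np

def lu_solve(A, b):
    n = len(b)
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):                     # elimination, multipliers stored in L
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    y = np.zeros(n)
    for i in range(n):                         # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):             # back substitution: U x = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# System (a): 2x1 - x2 + x3 = -1, 3x1 + 3x2 + 9x3 = 0, 3x1 + 3x2 + 5x3 = 4
A = np.array([[2., -1., 1.], [3., 3., 9.], [3., 3., 5.]])
print(lu_solve(A, np.array([-1., 0., 4.])))    # expect (1, 2, -1)
```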
m55093 | Modify the LU Factorization Algorithm so that it can be used to solve a linear system, and then solve the following linear systems.
a. x1 − x2 = 2,
2x1 + 2x2 + 3x3 = −1,
−x1 + 3x2 + 2x3 = 4.
b. 1/3 x1 + 1/2 x2 - 1/4 x3 = 1,
1/5 x1 + 2/3 x2 + 3/8 x3 = 2,
2/5 x1 - 2/3 x2 + 5/8 x3 = −3.
c. 2x1 + x2 = 0,
−x1 + 3x2 + 3x3 = 5,
2x1 − 2x2 + x3 + 4x4 = −2,
−2x1 + 2x2 + 2x3 + 5x4 = 6.
d. 2.121x1 − 3.460x2 + 5.217x4 = 1.909,
5.193x2 − 2.197x3 + 4.206x4 = 0,
5.132x1 + 1.414x2 + 3.141x3 = −2.101,
−3.111x1 − 1.732x2 + 2.718x3 + 5.212x4 = 6.824.
m55103 | Neville's Algorithm is used to approximate f(0) using f(−2), f(−1), f(1), and f(2). Suppose f(−1) was overstated by 2 and f(1) was understated by 3. Determine the error in the original calculation of the value of the interpolating polynomial approximating f(0).
m55104 | Neville's Algorithm is used to approximate f(0) using f(−2), f(−1), f(1), and f(2). Suppose f(−1) was understated by 2 and f(1) was overstated by 3. Determine the error in the original calculation of the value of the interpolating polynomial approximating f(0).
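The perturbation question in m55103 can be explored numerically with a sketch of Neville's tableau: since interpolation is linear in the data, the error in P(0) from the bad entries equals 2·ℓ₁(0) − 3·ℓ₂(0), where ℓ₁, ℓ₂ are the Lagrange basis functions for the nodes −1 and 1. The sample values `ys` below are arbitrary (any f gives the same difference).

```python
# Neville's tableau (sketch) plus a check of the data-perturbation error.
def neville(xs, ys, t):
    q = list(ys)
    n = len(xs)
    for j in range(1, n):                  # build tableau columns in place
        for i in range(n - 1, j - 1, -1):
            q[i] = ((t - xs[i - j]) * q[i] - (t - xs[i]) * q[i - 1]) / (xs[i] - xs[i - j])
    return q[-1]

xs = [-2.0, -1.0, 1.0, 2.0]
ys = [5.0, 2.0, 4.0, 7.0]                  # arbitrary sample values of f
good = neville(xs, ys, 0.0)
bad = neville(xs, [ys[0], ys[1] + 2, ys[2] - 3, ys[3]], 0.0)
print(bad - good)                          # error introduced by the bad data
```

Since ℓ₁(0) = ℓ₂(0) = 2/3 for these nodes, the printed difference is 2·(2/3) − 3·(2/3) = −2/3.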
m55110 | Obtain factorizations of the form A = PtLU for the following matrices.
a.
b.
c.
d.
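The exercise's matrices are not reproduced in this listing, but the A = PᵗLU factorization of m55110 can be sketched with partial pivoting: permute rows so each pivot is the largest candidate, giving PA = LU, i.e. A = PᵗLU. The 3×3 test matrix below is an arbitrary example, not one of the exercise's.

```python
# LU with partial pivoting (sketch): returns P, L, U with P A = L U.
import numpy as np

def plu(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    P, L = np.eye(n), np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))    # row of largest pivot candidate
        if p != k:                             # swap rows of A, P, and built part of L
            A[[k, p], k:] = A[[p, k], k:]
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]
            A[i, k:] -= L[i, k] * A[k, k:]
    return P, L, np.triu(A)

A = np.array([[0., 1., 2.], [1., 1., 1.], [2., 1., 0.]])  # needs pivoting: A[0,0] = 0
P, L, U = plu(A)
print(np.allclose(P.T @ L @ U, A))
```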
m55111 | Obtain the least squares approximation polynomial of degree 3 for the functions in Exercise 1 using the results of Exercise 7.
The functions in Exercise 1 are:
a. f (x) = x2 + 3x + 2, [0, 1];
b. f (x) = x3, [0, 2];
c. f (x) = 1/x, [1, 3];
d. f (x) = ex , [0, 2];
e. f (x) = 1/2 cos x + 1/3 sin 2x, [0, 1];
f. f (x) = x ln x, [1, 3].
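The continuous least-squares setup behind m55111 can be sketched via the normal equations with the monomial basis {1, x, x², x³}: the Gram matrix holds the exact integrals ∫ x^(i+j) dx and the right-hand side holds ∫ f(x) x^i dx. As a sanity check (not one of the exercise's graded answers), the best degree-3 fit to f(x) = x³ on [0, 2] (part b) must be x³ itself.

```python
# Normal equations for a continuous degree-3 least-squares fit (sketch).
import numpy as np

a, b = 0.0, 2.0
# Gram matrix entries: integral of x^(i+j) over [a, b]
G = np.array([[(b**(i + j + 1) - a**(i + j + 1)) / (i + j + 1) for j in range(4)]
              for i in range(4)])
# right-hand side for f(x) = x^3: integral of x^(3+i) over [a, b]
rhs = np.array([(b**(i + 4) - a**(i + 4)) / (i + 4) for i in range(4)])
coeffs = np.linalg.solve(G, rhs)
print(coeffs)    # coefficients of 1, x, x^2, x^3; expect ~ (0, 0, 0, 1)
```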
m55148 | Part (ii) of Theorem 9.26 states that Nullity(A) = Nullity(AtA). Is it also true that Nullity(A) = Nullity(AAt)?
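A quick numerical check (not a proof, and not the exercise's intended argument) for m55148: rank(A) = rank(AAᵗ), but for a non-square m × n matrix A, AAᵗ is m × m, so the nullities n − rank(A) and m − rank(A) can differ.

```python
# Counterexample check for Nullity(A) = Nullity(A A^t) with non-square A.
import numpy as np

A = np.array([[1., 0., 0.], [0., 1., 0.]])      # 2 x 3, rank 2
nullity_A = A.shape[1] - np.linalg.matrix_rank(A)
nullity_AAt = A.shape[0] - np.linalg.matrix_rank(A @ A.T)
print(nullity_A, nullity_AAt)                    # 1 vs 0: the statement fails here
```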
m55153 | Perform only two steps of the conjugate gradient method with C = C−1 = I on each of the following linear systems. Compare the results in parts (b) and (c) to the results obtained in parts (b) and (c) of Exercise 1 of Section 7.3 and Exercise 1 of Section 7.4.
a. 3x1 − x2 + x3 = 1,
−x1 + 6x2 + 2x3 = 0,
x1 + 2x2 + 7x3 = 4.
b. 10x1 − x2 = 9,
−x1 + 10x2 − 2x3 = 7,
− 2x2 + 10x3 = 6.
c. 10x1 + 5x2 = 6,
5x1 + 10x2 − 4x3 = 25,
− 4x2 + 8x3 − x4 = −11,
− x3 + 5x4 = −11.
d. 4x1 + x2 − x3 + x4 = −2,
x1 + 4x2 − x3 − x4 = −1,
−x1 − x2 + 5x3 + x4 = 0,
x1 − x2 + x3 + 3x4 = 1.
e. 4x1 + x2 + x3 + x5 = 6,
x1 + 3x2 + x3 + x4 = 6,
x1 + x2 + 5x3 − x4 − x5 = 6,
x2 − x3 + 4x4 = 6,
x1 − x3 + 4x5 = 6.
f. 4x1 − x2 − x4 = 0,
−x1 + 4x2 − x3 − x5 = 5,
− x2 + 4x3 − x6 = 0,
−x1 + 4x4 − x5 = 6,
− x2 − x4 + 4x5 − x6 = −2,
− x3 − x5 + 4x6 = 6.
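Two unpreconditioned steps (C = C⁻¹ = I) of the conjugate gradient method, as m55153 asks, can be sketched as below on system (a). The iterates printed are illustrative, not the textbook's tabulated values; only the residual reduction is checked.

```python
# Plain conjugate gradient (sketch, C = I), run for a fixed number of steps.
import numpy as np

def cg(A, b, x0, steps):
    x = x0.astype(float)
    r = b - A @ x
    p = r.copy()
    for _ in range(steps):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)      # step length along search direction
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p            # next A-conjugate search direction
        r = r_new
    return x, r

# System (a): 3x1 - x2 + x3 = 1, -x1 + 6x2 + 2x3 = 0, x1 + 2x2 + 7x3 = 4
A = np.array([[3., -1., 1.], [-1., 6., 2.], [1., 2., 7.]])
b = np.array([1., 0., 4.])
x2, r2 = cg(A, b, np.zeros(3), 2)
print(x2, np.linalg.norm(r2))
```

In exact arithmetic CG on this 3 × 3 SPD system terminates after three steps, so a third step drives the residual to roundoff level.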
m55155 | Perform the following computations (i) exactly, (ii) using three-digit chopping arithmetic, and (iii) using three-digit rounding arithmetic. (iv) Compute the relative errors in parts (ii) and (iii).
a. 4/5 + 1/3
b. 4/5 · 1/3
c. (1/3 − 3/11) + 3/20
d. (1/3 + 3/11) − 3/20
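Three-digit chopping and rounding arithmetic, as used in m55155, can be sketched as follows (assuming base-10 normalized floats and, for simplicity, positive values): each operand and each intermediate result is truncated or rounded to k significant digits. Part (a) is worked as the example.

```python
# k-significant-digit chopping/rounding arithmetic (sketch; positive x only).
import math

def fl(x, k=3, mode="chop"):
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x))) + 1     # x = f * 10^e with 0.1 <= |f| < 1
    f = x / 10**e
    scaled = f * 10**k
    scaled = math.floor(scaled) if mode == "chop" else math.floor(scaled + 0.5)
    return scaled / 10**k * 10**e

exact = 4 / 5 + 1 / 3
chopped = fl(fl(4 / 5) + fl(1 / 3))            # digit-limited at every step
rounded = fl(fl(4 / 5, mode="round") + fl(1 / 3, mode="round"), mode="round")
print(chopped, rounded)                        # both give 1.13 here
print(abs(exact - chopped) / exact, abs(exact - rounded) / exact)
```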
m55157 | Perform the following matrix-matrix multiplications:
a.
b.
c.
d.
m55158 | Perform the following matrix-matrix multiplications:
a.
b.
c.
d.
m55159 | Perform the following matrix-vector multiplications:
a.
b.
c.
d.
m55160 | Perform the following matrix-vector multiplications:
a.
b.
c.
d.
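The matrices and vectors for these four exercises are not reproduced in this listing. As a generic illustration only (the operands below are made up), numpy's `@` operator performs both matrix-vector and matrix-matrix products with the usual row-times-column rule.

```python
# Matrix-vector and matrix-matrix products with arbitrary sample operands.
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
x = np.array([1., -1.])
print(A @ x)      # matrix-vector product
print(A @ B)      # matrix-matrix product
```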
m55164 | Picard's method for solving the initial-value problem
y′ = f (t, y), a ≤ t ≤ b, y(a) = α,
is described as follows: Let y0(t) = α for each t in [a, b], and define a sequence {yk(t)} of functions by
yk(t) = α + ∫_a^t f (s, yk−1(s)) ds, k = 1, 2, . . . .
a. Integrate y′ = f (t, y(t)) and use the initial condition to derive Picard's method.
b. Generate y0(t), y1(t), y2(t), and y3(t) for the initial-value problem
y′ = −y + t + 1, 0 ≤ t ≤ 1, y(0) = 1.
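Part (b) of m55164 can be sketched by representing each Picard iterate as a list of polynomial coefficients [c0, c1, ...] in t, so the integral in y_{k+1}(t) = 1 + ∫₀ᵗ (−y_k(s) + s + 1) ds is computed exactly term by term. This is an illustrative sketch of the iteration, not the textbook's worked solution.

```python
# Picard iteration for y' = -y + t + 1, y(0) = 1, with exact polynomial iterates.
def picard_step(c):
    # integrand coefficients of -y_k(s) + s + 1
    g = [-a for a in c]
    g[0] += 1.0
    if len(g) < 2:
        g += [0.0]
    g[1] += 1.0
    # termwise integration from 0 to t, plus the initial condition y(0) = 1
    return [1.0] + [a / (i + 1) for i, a in enumerate(g)]

y = [1.0]                       # y0(t) = 1
iterates = [y]
for _ in range(3):
    y = picard_step(y)
    iterates.append(y)
print(iterates[3])              # y3(t) = 1 + t^2/2 - t^3/6 + t^4/24
```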
m55185 | Prove Kahan's Theorem 7.24. [Hint: If λ1, . . . , λn are the eigenvalues of Tω, then det Tω = ∏ λi. Since det D−1 = det(D − ωL)−1 and the determinant of a product of matrices is the product of the determinants of the factors, the result follows from Eq. (7.18).]
m55188 | Prove Taylor's Theorem 1.14 by following the procedure in the proof of Theorem 3.3. [Hint: Let the auxiliary function be as in that proof, where P is the nth Taylor polynomial, and use the Generalized Rolle's Theorem 1.10.]
m55198 | Prove that if || · || is a vector norm on Rn, then ||A|| = max{||Ax|| : ||x|| = 1} is a matrix norm.
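A numerical illustration (not a proof, which m55198 requires) of the induced-norm definition for || · ||∞: sampled unit vectors x never give ||Ax||∞ larger than the known closed form, the maximum absolute row sum.

```python
# Sampling illustration of the induced infinity norm on a small matrix.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1., -2.], [3., 0.5]])
row_sum_norm = np.abs(A).sum(axis=1).max()      # known value of ||A||_inf = 3.5
sampled = 0.0
for _ in range(2000):
    x = rng.uniform(-1, 1, 2)
    x /= np.abs(x).max()                        # normalize so ||x||_inf = 1
    sampled = max(sampled, np.abs(A @ x).max())
print(row_sum_norm, sampled)                    # sampled max never exceeds the norm
```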