Starting diary file "diary-2016-01-20.txt" on Wed Jan 20 2016 12:39 PM.

% ITERATIVE METHODS

% Jacobi iteration:

A = [3 1 ; -2 4]
A =
     3     1
    -2     4
b = [9; 8]
b =
     9
     8
x = A \ b
x =
     2
     3
x = zeros(2,1)
x =
     0
     0
x
x =
     0
     0
x = [ (9-x(2))/3 ; (8+2*x(1))/4 ]
x =
     3
     2
residual = b - A*x
residual =
    -2
     6
relres = norm(b - A*x) / norm(b)
relres =
    0.5252
x = [ (9-x(2))/3 ; (8+2*x(1))/4 ]
x =
    2.3333
    3.5000
relres = norm(b - A*x) / norm(b)
relres =
    0.1667
x = [ (9-x(2))/3 ; (8+2*x(1))/4 ]
x =
    1.8333
    3.1667
relres = norm(b - A*x) / norm(b)
relres =
    0.0875
x = [ (9-x(2))/3 ; (8+2*x(1))/4 ]
x =
    1.9444
    2.9167
x = [ (9-x(2))/3 ; (8+2*x(1))/4 ]
x =
    2.0278
    2.9722
x = [ (9-x(2))/3 ; (8+2*x(1))/4 ]
x =
    2.0093
    3.0139
x = [ (9-x(2))/3 ; (8+2*x(1))/4 ]
x =
    1.9954
    3.0046
x = [ (9-x(2))/3 ; (8+2*x(1))/4 ]
x =
    1.9985
    2.9977
x = [ (9-x(2))/3 ; (8+2*x(1))/4 ]
x =
    2.0008
    2.9992
x = [ (9-x(2))/3 ; (8+2*x(1))/4 ]
x =
    2.0003
    3.0004
x = [ (9-x(2))/3 ; (8+2*x(1))/4 ]
x =
    1.9999
    3.0001
relres = norm(b - A*x) / norm(b)
relres =
   6.7544e-05

% Here is the matrix-vector view of Jacobi iteration:

A
A =
     3     1
    -2     4
x = [0;0]
x =
     0
     0
d = diag(A)
d =
     3
     4
D = diag(d)
D =
     3     0
     0     4
C = A - D
C =
     0     1
    -2     0
x = (b - C*x) ./ d
x =
     3
     2
x = (b - C*x) ./ d
x =
    2.3333
    3.5000
x = (b - C*x) ./ d
x =
    1.8333
    3.1667
x = (b - C*x) ./ d
x =
    1.9444
    2.9167
x = (b - C*x) ./ d
x =
    2.0278
    2.9722
x = (b - C*x) ./ d
x =
    2.0093
    3.0139
x = (b - C*x) ./ d
x =
    1.9954
    3.0046
x = (b - C*x) ./ d
x =
    1.9985
    2.9977
x = (b - C*x) ./ d
x =
    2.0008
    2.9992
x = (b - C*x) ./ d
x =
    2.0003
    3.0004
x = (b - C*x) ./ d
x =
    1.9999
    3.0001
x = (b - C*x) ./ d
x =
    2.0000
    2.9999

% But Jacobi doesn't always converge!

A = [ 1 2 ; 3 4]
A =
     1     2
     3     4
b = A * [1;1]
b =
     3
     7
A\b
ans =
    1.0000
    1.0000
x=[0;0]
x =
     0
     0
d = diag(A)
d =
     1
     4
D = diag(d)
D =
     1     0
     0     4
C = A - D
C =
     0     2
     3     0
x = (b - C*x) ./ d
x =
    3.0000
    1.7500
x = (b - C*x) ./ d
x =
   -0.5000
   -0.5000
x = (b - C*x) ./ d
x =
    4.0000
    2.1250
x = (b - C*x) ./ d
x =
   -1.2500
   -1.2500
x = (b - C*x) ./ d
x =
    5.5000
    2.6875
x = (b - C*x) ./ d
x =
   -2.3750
   -2.3750
x = (b - C*x) ./ d
x =
    7.7500
    3.5312
x = (b - C*x) ./ d
x =
   -4.0625
   -4.0625
x = (b - C*x) ./ d
x =
   11.1250
    4.7969
x = (b - C*x) ./ d
x =
   -6.5938
   -6.5938
x = (b - C*x) ./ d
x =
   16.1875
    6.6953
x = (b - C*x) ./ d
x =
  -10.3906
  -10.3906
x = (b - C*x) ./ d
x =
   23.7812
    9.5430
x = (b - C*x) ./ d
x =
  -16.0859
  -16.0859
x = (b - C*x) ./ d
x =
   35.1719
   13.8145
x = (b - C*x) ./ d
x =
  -24.6289
  -24.6289
x = (b - C*x) ./ d
x =
   52.2578
   20.2217
x = (b - C*x) ./ d
x =
  -37.4434
  -37.4434
x = (b - C*x) ./ d
x =
   77.8867
   29.8325
x = (b - C*x) ./ d
x =
  -56.6650
  -56.6650
x = (b - C*x) ./ d
x =
  116.3301
   44.2488

% Things are just getting worse ... no convergence!

A \ b
ans =
    1.0000
    1.0000

edit jacobisolve
help jacobisolve
 jacobisolve : Solve Ax=b by Jacobi iteration

 [x, resvec] = jacobisolve(A, b, niters);

 Given matrix A and vector b, this runs "niters" iterations
 of Jacobi to solve for x in A*x=b.

 x = jacobisolve(A, b, niters, viziters);
 also visualizes x as a solution to the model problem
 after every "viziters" iterations.

 Optionally, [x, resvec] = ... also returns a vector of
 the norm of the residual b - A*x at each iteration.

 The code below follows the demo in CS 111.
 John R. Gilbert   13 Jan 2016

load temperature
whos
  ...
A = A100;
b = b100;
size(A)
ans =
       10000       10000
x = jacobisolve(A,b,10);
surfc(reshape(x,100,100)),shg
% 10 iterations isn't enough to get a very accurate answer,
% compared to the right answer...
x = A\b;
surfc(reshape(x,100,100)),shg
x = jacobisolve(A,b,100,1);
??? Operation terminated by user ...
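% For reference, here is a minimal sketch of the Jacobi loop that the
% jacobisolve help text above describes. The name "jacobisketch" is
% hypothetical and the actual course code may differ; this version skips
% the visualization and just records the residual norm at each iteration.

%   function [x, resvec] = jacobisketch(A, b, niters)
%   % JACOBISKETCH  Run niters Jacobi iterations on A*x = b, starting from x = 0.
%   d = diag(A);                    % diagonal entries of A
%   C = A - diag(d);                % off-diagonal part of A
%   x = zeros(size(b));             % initial guess
%   resvec = zeros(1, niters);      % residual norm history
%   for k = 1:niters
%       x = (b - C*x) ./ d;         % one Jacobi update, as in the demo above
%       resvec(k) = norm(b - A*x);  % record the residual norm
%   end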
x = jacobisolve(A,b,1000,50);
relres = norm(b - A*x) / norm(b)
relres =
    0.0033
[x,resvec] = jacobisolve(A,b,1000,50);
size(resvec)
ans =
           1        1000
resvec
resvec =
  ...
semilogy(resvec/norm(b)),shg
xlabel('iteration');
ylabel('relative residual');
hold on

% Let's compare to the results of Conjugate Gradients...

help pcg
 PCG   Preconditioned Conjugate Gradients Method.
 ...
[x,flag,rr,niters,resvec] = pcg(A,b,1e-6,1000);
niters
niters =
   246
relres = norm(b - A*x) / norm(b)
relres =
   9.9957e-07
semilogy(resvec/norm(b),'r'),shg
legend('Jacobi','conjugate gradient')
shg

% Conjugate Gradient only works if the matrix is symmetric and positive definite:

nnz(A - A')
ans =
     0
x = rand(10000,1);
x' * A * x
ans =
   3.3947e+03

% The convergence is related to the "condition number" of A, which for a
% symmetric positive definite matrix is the ratio of its largest to
% smallest eigenvalue. Smaller condition number is better; the smallest
% possible is 1, which is for example the condition number of any
% identity matrix.

condest(A)   % "condest" is a Matlab code to estimate the condition number of A
ans =
   6.0107e+03
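% A quick check of that definition on a small symmetric positive definite
% example (the matrix S below is not from the demo, just an illustration):
% for such a matrix, the 2-norm condition number from cond equals the
% eigenvalue ratio.

%   S = [4 1; 1 3];     % a small symmetric positive definite matrix
%   e = eig(S);         % eigenvalues, in ascending order
%   max(e) / min(e)     % ratio of largest to smallest eigenvalue
%   cond(S)             % 2-norm condition number; same value, about 1.9388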