File:Arnoldi.f
$ \renewcommand{\vec}[1]{ {\bf #1} } \newcommand{\bnabla}{ \vec{\nabla} } \newcommand{\Rey}{Re} \def\vechat#1{ \hat{ \vec{#1} } } \def\mat#1{#1} $
The Arnoldi Method
The Arnoldi method computes the eigenvalues and eigenvectors of a matrix, i.e. the scalars $\sigma$ and $n$-vectors $\vec{x}$ that satisfy \[
A\,\vec{x} = \sigma\,\vec{x}
\] for a given $n\times n$ matrix $A$.
The main advantage of the method is that it requires only multiplications by $A$ for a given $\vec{x}$ -- it does not need to know $A$ itself. This means that $A$ need not even be stored, and could correspond to a very complex linear 'action' on $\vec{x}$, e.g. a time integration with initial condition $\vec{x}$. The method seeks eigenvectors in the Krylov subspace $\mathrm{span}\{\vec{x},\,A\vec{x},\,A^2\vec{x},\,...\}$, but applies Gram-Schmidt orthogonalisation to improve the suitability of the basis. Here $m$ is the maximum number of orthogonalised vectors stored. It is possible to restart the iteration without losing information, but restarts are not implemented here.
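The iteration just described -- repeated multiplication by $A$ followed by Gram-Schmidt orthogonalisation against the stored basis -- can be illustrated with a minimal NumPy sketch of the standard Arnoldi iteration. This is for illustration only and is not the interface of arnoldi.f:

```python
import numpy as np

def arnoldi(matvec, x0, m):
    """Build an orthonormal Krylov basis Q and small Hessenberg matrix H
    such that A Q ~= Q H, using only products A*x via matvec.
    Illustrative sketch only -- not the interface of arnoldi.f."""
    n = x0.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = x0 / np.linalg.norm(x0)
    for k in range(m):
        w = matvec(Q[:, k])              # the only access to A
        for j in range(k + 1):           # Gram-Schmidt against stored vectors
            H[j, k] = Q[:, j] @ w
            w -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] < 1e-12:          # invariant subspace found: stop early
            return Q[:, :k + 1], H[:k + 1, :k + 1]
        Q[:, k + 1] = w / H[k + 1, k]
    return Q[:, :m], H[:m, :m]

# Eigenvalue estimates (Ritz values) come from the small matrix H.
A = np.diag([10.0, 5.0, 1.0, 0.5])
Q, H = arnoldi(lambda v: A @ v, np.ones(4), 4)
print(np.sort(np.linalg.eigvals(H).real)[::-1])
```

Because only `matvec` touches $A$, the same sketch works when the 'matrix' is any black-box linear action, such as a time-integration code.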
Time Integration and Exponentiation
The method finds the eigenvalues most separated in the complex plane first. If $A$ is expected to have many negative eigenvalues of little interest, it may be better to work with $\tilde{A}=\mathrm{e}^A=1+A+\frac{1}{2!}A^2+...$, which shares the same eigenvectors but has more suitable eigenvalues, $\tilde{\sigma}=\mathrm{e}^\sigma$. The negative eigenvalues $\sigma$ then correspond to eigenvalues $\tilde{\sigma}$ bunched close to the origin. The Arnoldi method favours the $\tilde{\sigma}$ most separated in the complex plane, being the $\sigma$ with largest real parts.
Note that for the system $\partial_t \vec{x} = A\,\vec{x}$, time integration corresponds to exponentiation: Taking eigenvector $\vec{x}$ with eigenvalue $\sigma$ as an initial condition, time integration from $0$ to $T$ provides a result that is the same as multiplication of the initial condition by the value $\mathrm{e}^{\sigma T}$, i.e., this is the corresponding eigenvalue of the time integration operator for eigenvector $\vec{x}$.
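This correspondence between time integration and exponentiation is easy to check numerically. The following sketch, assuming NumPy and SciPy are available, uses scipy.linalg.expm as the exact time-$T$ solution operator of $\partial_t \vec{x} = A\,\vec{x}$ and applies it to an eigenvector of a small example matrix:

```python
import numpy as np
from scipy.linalg import expm

# System dx/dt = A x; take an eigenvector of A as the initial condition.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
sigma = -1.0                    # eigenvalue of A
x = np.array([1.0, 0.0])        # the corresponding eigenvector
T = 0.7

x_T = expm(A * T) @ x           # exact time integration from 0 to T
print(np.allclose(x_T, np.exp(sigma * T) * x))   # True
```

Time integration of the eigenvector simply scales it by $\mathrm{e}^{\sigma T}$, the corresponding eigenvalue of the time-integration operator.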
Consider the case of a perturbation $\vec{\delta x}$ linearised about a solution $\vec{x}_0$, i.e. $\partial_t \vec{\delta x} = A(\vec{x}_0)\,\vec{\delta x}$. Let $\vec{X}(\vec{x})$ be the result of time integration of $\vec{x}$. The result of $A\,\vec{\delta x}$ may be approximated by $\frac{1}{\epsilon}(\vec{X}(\vec{x}_0+\epsilon\,\vec{\delta x})-\vec{X}(\vec{x}_0))$ for some small value $\epsilon$. Note that to find the eigenvalues of $A$ with the Arnoldi method, in this case only a routine for time integration of a given initial condition is required.
How to use the code
To download, click the link above. The LAPACK package is also required.
The subroutine arnold(...) needs to be passed a function that calculates the dot product of two vectors. It should look like, for example,
double precision function dotprod(n,a,b)
   implicit none
   integer :: n
   double precision :: a(n), b(n)
   dotprod = sum(a*b)
end function dotprod
arnold(...) needs to be called repeatedly. It communicates the status of the computation via the flag ifail, which reports how many eigenvalues have converged to the given tolerance, signals that the user should multiply a vector by $A$ again, or indicates that the method has failed, e.g. has reached the maximum number of vectors that can be stored.
An example of use of the code:
! declare workspace vectors, h, q, b... - see header of arnoldi.f
sv = ...   ! random initial vector x
k  = 0     ! initialise iteration counter
do while(.true.)
   call arnold(n,k,kmax,ncgd,dotprod,tol,sv,h,q,b,wr,wi,ifail)
   if(ifail==-1) then
      print*, ' arnoldi converged!'
      exit
   else if(ifail==0) then
      call multA(sv, sv)   ! possibly complicated routine that multiplies sv by A
   else if(ifail==1) then
      print*, 'WARNING: arnoldi reached max its'
      exit
   else if(ifail>=2) then
      print*, 'WARNING: arnoldi error:', ifail
      exit
   end if
end do
On exit, the eigenvectors are stored in the columns of b, ordered to correspond to the eigenvalues in wr and wi. If the first eigenvalue is real (wi(1)==0.), its eigenvector occupies the first column of b only. If the next eigenvalue is complex, the real and imaginary parts of its eigenvector occupy the next two columns of b.
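This column layout (the same convention LAPACK uses for real matrices with complex conjugate eigenvalue pairs) can be unpacked as sketched below; unpack_eigvecs is a hypothetical helper written for illustration, assuming exactly the storage described above:

```python
import numpy as np

def unpack_eigvecs(wr, wi, b):
    """Recover complex eigenvectors from real storage: a real eigenvalue
    (wi[j] == 0) owns one column of b; a complex conjugate pair owns two
    columns holding the real and imaginary parts of its eigenvector.
    Hypothetical helper -- not part of arnoldi.f."""
    vecs, j = [], 0
    while j < len(wr):
        if wi[j] == 0.0:
            vecs.append(b[:, j].astype(complex))
            j += 1
        else:   # conjugate pair: b[:, j] + i b[:, j+1] and its conjugate
            v = b[:, j] + 1j * b[:, j + 1]
            vecs.append(v)
            vecs.append(v.conj())
            j += 2
    return vecs

# Example: one real eigenvalue, then a conjugate pair 1 +/- 3i.
wr = np.array([2.0, 1.0, 1.0])
wi = np.array([0.0, 3.0, -3.0])
vecs = unpack_eigvecs(wr, wi, np.eye(3))
print(vecs)
```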
Parallel use
No special adaptations are required for parallel (MPI) use -- let each process pass its local section of the vector sv, and let the dotprod function allreduce the result of the dot product.
File history
Date/Time | Dimensions | User | Comment
---|---|---|---
current: 02:06, 13 December 2016 | (8 KB) | Apwillis | For calculating the eigenvalues of a matrix.