$ \renewcommand{\vec}[1]{ {\bf #1} } \newcommand{\bnabla}{ \vec{\nabla} } \newcommand{\Rey}{Re} \def\vechat#1{ \hat{ \vec{#1} } } \def\mat#1{#1} $
The Arnoldi Method
The Arnoldi method calculates the eigenvalues and eigenvectors of a matrix, i.e. the scalars $\sigma$ and $n$-vectors $\vec{x}$ that satisfy \[ \sigma\,\vec{x} = A\,\vec{x} \] for a given $n\times n$ matrix $A$.
The main advantage of the method is that it only requires multiplications by $A$ for a given $\vec{x}$ -- it does not need to know $A$ itself. This means that $A$ need not even be stored, and could correspond to a very complex linear 'action' on $\vec{x}$, e.g. a time integral with initial condition $\vec{x}$. Given a starting vector $\vec{x}_0$, the method seeks eigenvectors $\vec{x}$ in $\mathrm{span}\{\vec{x}_0,\,A\vec{x}_0,\,A^2\vec{x}_0,...\}$, but uses Gram-Schmidt orthogonalisation to improve the numerical suitability of this basis set. The space spanned by the orthogonalised vectors is called the Krylov subspace. In practice the size of this space must be limited to $m$ vectors, but if this number is reached, it is possible to restart without losing information. Restarts are not implemented here; see the following section on exponentiation, which can dramatically reduce the number of multiplies by $A$ required. A minimal sketch of the iteration is given below.
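For orientation, here is a minimal sketch of the iteration for the simple case of a stored matrix. The routine name and argument list here are illustrative only, not the interface of arnoldi.f (which is described below); it builds an orthonormal basis Q of the Krylov subspace and a small upper-Hessenberg matrix H whose eigenvalues approximate the most separated eigenvalues of $A$.

   ! Illustrative sketch of the Arnoldi iteration for a stored n x n matrix A.
   ! Builds an orthonormal Krylov basis Q(:,1:m) and the m x m upper-Hessenberg
   ! matrix H whose eigenvalues approximate those of A most separated in the
   ! complex plane.
   subroutine arnoldi_sketch(n, m, A, x0, Q, H)
      implicit none
      integer,          intent(in)  :: n, m
      double precision, intent(in)  :: A(n,n), x0(n)
      double precision, intent(out) :: Q(n,m), H(m,m)
      double precision :: w(n)
      integer :: i, j
      H = 0d0
      Q(:,1) = x0 / sqrt(sum(x0*x0))        ! normalised starting vector
      do j = 1, m-1
         w = matmul(A, Q(:,j))              ! the only use of A: one multiply
         do i = 1, j                        ! modified Gram-Schmidt
            H(i,j) = sum(w*Q(:,i))
            w = w - H(i,j)*Q(:,i)
         end do
         H(j+1,j) = sqrt(sum(w*w))
         if (H(j+1,j)==0d0) return          ! exact invariant subspace found
         Q(:,j+1) = w / H(j+1,j)
      end do
   end subroutine arnoldi_sketch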
Time Integration and Exponentiation
The method finds the eigenvalues most separated in the complex plane first. If $A$ is expected to have many negative eigenvalues of little interest, it may be better to work with $\tilde{A}=\mathrm{e}^A=1+A+\frac{1}{2!}A^2+...$, i.e. the eigenproblem $\mathrm{e}^\sigma\vec{x}=\mathrm{e}^A\vec{x}$. This problem shares the same eigenvectors as the original problem, but its eigenvalues, $\tilde{\sigma}=\mathrm{e}^\sigma$, are often better suited to the Arnoldi method. The negative eigenvalues $\sigma$, which typically correspond to modes of little interest, now correspond to eigenvalues $\tilde{\sigma}$ bunched close to the origin. The Arnoldi method favours the $\tilde{\sigma}$ most separated in the complex plane, i.e. those arising from the $\sigma$ with the largest real parts.
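As an illustrative pair of values (not taken from the code): eigenvalues $\sigma_1=-50$ and $\sigma_2=0.1$ are both well separated from zero, but they map to $\tilde{\sigma}_1=\mathrm{e}^{-50}\approx 2\times 10^{-22}$ and $\tilde{\sigma}_2=\mathrm{e}^{0.1}\approx 1.11$, so the heavily damped mode is crushed into the origin while the nearly neutral mode becomes the most separated eigenvalue of $\tilde{A}$.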
Note that for the system $\partial_t \vec{x} = A\,\vec{x}$, time integration corresponds to exponentiation: First, for eigenvector $\vec{x}_0$ with eigenvalue $\sigma$ we may write $\sigma\, \vec{x}_0 = A\,\vec{x}_0$. Also, taking $\vec{x}_0$ as an initial condition, after a time $T$ it will have grown or decayed to $\vec{x}_T=\mathrm{e}^{\sigma T}\vec{x}_0 =\vec{x}_0 + \int_0^T A\,\vec{x}\,dt$. We may write this as $\mathrm{e}^{\sigma T}\vec{x}_0 \equiv B\,\vec{x}_0$, where $B$ is the time integration operator. Then $\vec{x}_0$ is an eigenvector of both $A$ and $B$, where its eigenvalue for $B$ is $\tilde{\sigma}=\mathrm{e}^{\sigma T}$. We apply the Arnoldi method to $B$, then recover $\sigma$ from $\tilde{\sigma}$.
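In practice $\tilde{\sigma}$ is returned as a real/imaginary pair (the wr, wi arrays described below). A sketch of the recovery, assuming T holds the integration time and noting that the imaginary part of $\sigma$ is only determined modulo $2\pi/T$ by the complex logarithm (variable names here are illustrative):

   ! Recover sigma from tilde_sigma = wr(i) + i*wi(i), an eigenvalue of B.
   growth = log(sqrt(wr(i)**2 + wi(i)**2)) / T   ! Re(sigma): growth rate
   freq   = atan2(wi(i), wr(i)) / T              ! Im(sigma): frequency, mod 2*pi/T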
Now let's consider the case of a perturbation $\vec{\delta x}$ linearised about a solution $\vec{x}_0$. Evolution of the perturbation is given by $\partial_t \vec{\delta x} = A(\vec{x}_0)\,\vec{\delta x}$. Let $\vec{X}(\vec{x})$ be the result of time integration of $\vec{x}$. For this system, the result of $B\,\vec{\delta x}$ for a given $\vec{\delta x}$ may be approximated by $\frac{1}{\epsilon}(\vec{X}(\vec{x}_0+\epsilon\,\vec{\delta x})-\vec{X}(\vec{x}_0))$ for some small value $\epsilon$. (Although we expect that $\vec{X}(\vec{x}_0)=\vec{x}_0$ for a solution $\vec{x}_0$, numerical accuracy is likely to be better with the given form.) Note that to find the eigenvalues $\tilde{\sigma}=\mathrm{e}^{\sigma T}$ of $B$ with the Arnoldi method, only a routine for time integration of a given initial condition is required.
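A sketch of the corresponding multiply routine, assuming a hypothetical subroutine timestep(x, XT) that integrates initial condition x over time $T$:

   ! Approximate B*dx = ( X(x0 + eps*dx) - X(x0) ) / eps for the linearised
   ! system.  timestep(x, XT) is a hypothetical routine returning in XT the
   ! time integration of x over time T.
   subroutine multB(n, x0, dx, Bdx)
      implicit none
      integer,          intent(in)  :: n
      double precision, intent(in)  :: x0(n), dx(n)
      double precision, intent(out) :: Bdx(n)
      double precision :: X1(n), X2(n), eps
      eps = 1d-6 * sqrt(sum(x0*x0)/sum(dx*dx))   ! perturbation small relative to |x0|
      call timestep(x0,          X1)
      call timestep(x0 + eps*dx, X2)
      Bdx = (X2 - X1) / eps
   end subroutine multB

Note that $\vec{X}(\vec{x}_0)$ does not change between iterations, so in practice it need only be computed once and stored.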
How to use the code
To download, click the link above. The LAPACK library is also required.
The subroutine arnold(...) must be passed a function that calculates the dot product of two vectors. It should look like, for example,
   double precision function dotprod(n,a,b)
      implicit none
      integer :: n
      double precision :: a(n), b(n)
      dotprod = sum(a*b)
   end function dotprod
arnold(...) needs to be called repeatedly. It communicates the status of the computation via the flag ifail, which reports how many eigenvalues have converged to the given tolerance, requests that the user multiply the vector by $A$ again, or signals that the method has failed, e.g. reached the maximum number of vectors that can be stored.
An example of use of the code:
   ! declare workspace vectors, h, q, b... - see header of arnoldi.f
   sv = ...   ! random initial vector x
   k  = 0     ! initialise iteration counter
   do while(.true.)
      call arnold(n,k,kmax,ncgd,dotprod,tol,sv,h,q,b,wr,wi,ifail)
      if(ifail==-1) then
         print*, ' arnoldi converged!'
         exit
      else if(ifail==0) then
         call multA(sv, sv)   ! possibly complicated routine that multiplies sv by A
      else if(ifail==1) then
         print*, 'WARNING: arnoldi reached max its'
         exit
      else if(ifail>=2) then
         print*, 'WARNING: arnoldi error:', ifail
         exit
      end if
   end do
On exit, the eigenvectors are stored in the columns of b, in order corresponding to the eigenvalues in wr and wi. If the first eigenvalue is real (wi(1)==0.), its eigenvector occupies the first column of b only. If the next eigenvalue is complex, the real and imaginary parts of its eigenvector occupy the next two columns of b.
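A sketch of unpacking the leading mode under this convention (the names x, x_re, x_im are illustrative):

   if (wi(1)==0d0) then
      x = b(:,1)        ! real eigenvalue: eigenvector in one column
   else
      x_re = b(:,1)     ! complex conjugate pair: real part ...
      x_im = b(:,2)     ! ... and imaginary part of the eigenvector
      ! the pair is x_re +/- i*x_im, with eigenvalues wr(1) +/- i*wi(1)
   end if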
Parallel use
No special adaptations are required for parallel (MPI) use -- let each process pass its subsection of the vector sv, and let the dotprod function allreduce the result of the dot product, as sketched below.
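For example, a sketch of a parallel dot product, assuming each rank holds a subsection of length n of the global vectors:

   ! Parallel dot product: sum local contributions over all MPI ranks.
   double precision function dotprod(n, a, b)
      use mpi
      implicit none
      integer :: n, ierr
      double precision :: a(n), b(n), local
      local = sum(a*b)                          ! this rank's contribution
      call MPI_Allreduce(local, dotprod, 1, MPI_DOUBLE_PRECISION, &
                         MPI_SUM, MPI_COMM_WORLD, ierr)
   end function dotprod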