Getting started
Overview of files
- Makefile will require modification for your compiler and libraries (see #Compiling_libraries). Sample commands for other compilers can be found near the top of the file.
- parallel.h Ensure _Np is set to 1 if you do not have MPI. This file contains macros for parallelisation that are only invoked if the number of processes Np is greater than 1.
- program/parameters.f90, where you’ll find parameters. See #Parameters
- utils/ contains a number of utilities. Almost anything can be done in a util, both post-processing and analysing data at runtime. There should be no need to alter the core code. See #Making_utils
- Matlab/ contains a few scripts. See Matlab/Readme.txt.
Parameters
./parallel.h:
_Np
Number of processors.
Set to 1 for serial use; MPI is not required in that case.
Keep _Np$\le$i_N. Load balancing is best when _Np is a divisor of i_N, and still fine when it is slightly larger than a divisor; slightly less than a divisor would leave one or two cores with an extra radial point each, holding up the rest. For example, with i_N=64, _Np=32 is optimal and _Np=33 is fine, but _Np=31 would typically leave a couple of cores with three points while the rest have two.
./program/parameters.f90:
i_N           | Number of radial points $n\in[1,N]$
i_K           | Maximum k (axial), $k\in(-K,\,K)$
i_M           | Maximum m (azimuthal), $m\in[0,\,M)$
i_Mp          | Azimuthal periodicity, i.e. $m=0,M_p,2M_p,\dots,(M-1)M_p$ (set =1 for no symmetry assumption)
d_Re          | Reynolds number $Re$ or $Re_m$
d_alpha       | Axial wavenumber $\alpha=2\pi/L_z$
b_const_flux  | Enforce constant flux $U_b=\frac{1}{2}$
i_save_rate1  | Save frequency for snapshot data files
i_save_rate2  | Save frequency for time-series data
i_maxtstep    | Maximum number of timesteps (no limit if set =-1)
d_cpuhours    | Maximum number of CPU hours
d_time        | Start time (taken from state.cdf.in if set =-1d0)
d_timestep    | Fixed timestep (typically =0.01d0, or dynamically controlled if set =-1d0)
d_dterr       | Maximum corrector norm $\|f_{corr}\|$ (typically =1d-5, or set =1d1 to avoid extra corrector iterations)
d_courant     | Courant number $\mathrm{C}$ (unlikely to need changing)
d_implicit    | Implicitness $c$ (unlikely to need changing)
Note the default behaviour in each case, usually selected by setting the parameter to -1.
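A quick way to double-check which values are currently set, before compiling, is to grep the parameter file (and, after compiling, the record in main.info); the names below are a subset of those listed above:
> grep -E 'i_N|i_K|i_M|d_Re|d_alpha' program/parameters.f90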
Input files
State files are stored in the NetCDF data format; they are binary but can be transferred safely between different architectures. The program main.out runs with the compiled parameters (see main.info) but will load states of other truncations. For example, an output state file state0018.cdf.dat can be copied to the input state.cdf.in, and when loaded it will be interpolated if necessary.
state.cdf.in:
$t$ – Start time. Overridden by d_time if d_time$\ge$0
$\Delta t$ – Timestep. Ignored, see parameter d_timestep.
$N, M_p, r_n$ – Number of radial points, azimuthal periodicity, radial values of the input state.
– If the input radial points differ from the runtime points, then the fields are interpolated onto the new points automatically.
$u_r,\,u_\theta,\, u_z$ – Field vectors.
– If $K\ne\,$i_K or $M\ne\,$i_M, then Fourier modes are truncated or zeros appended.
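If the netCDF command-line tools are available, the header of an input state can also be inspected directly; this is a quick way to check $N$, $M_p$ and the stored time before starting a run (the exact variable names shown depend on how the file was written):
> ncdump -h state.cdf.in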
Output
Snapshot data
Data saved every i_save_rate1 timesteps:
state????.cdf.dat
vel_spec????.dat
All output is sent to the current directory, and ???? indicates numbers 0000, 0001, 0002,…. Each state file can be copied to a state.cdf.in should a restart be necessary. To list times $t$ for each saved state file,
> grep state OUT
The spectrum files are overwritten at each save, as they can be regenerated from the state data. To verify that the truncation is sufficient, a quick profile of the energy spectrum can be plotted with
gnuplot> set log
gnuplot> plot 'vel_spec0002.dat' w lp
Time-series data
Data saved every i_save_rate2 timesteps:
tim_step.dat     | $t$, $\Delta t$, $\Delta t_{\|f\|}$, $\Delta t_{CFL}$ | current and limiting step sizes
vel_energy.dat   | $t$, $E$, $E_{k=0}$, $E_{m=0}$ | energies; $E-E_{k=0}$ is the streamwise-dependent component
vel_friction.dat | $t$, $U_b$ or $\beta$, $\langle u_z(r=0)\rangle_z$, $u_\tau$ | bulk speed or pressure measure $1+\beta=Re/Re_m$, mean centreline speed, friction velocity
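For example, using the column layout above, the friction velocity (column 4 of vel_friction.dat) can be plotted with
gnuplot> plot 'vel_friction.dat' using 1:4 with lines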
Compiling libraries
Take a look at Makefile. The compiler and flags will probably need editing. There are suggested flags for many compilers at the top of this file.
Building the main code is successful if running make produces no errors, but this requires the necessary libraries to be present: LAPACK, netCDF and FFTW. It is often necessary to build LAPACK and netCDF with the same compiler and flags that will be used for the main simulation code.
The default procedure for building a package (FFTW3, netCDF) is
tar -xvvzf package.tar.gz
cd package/
[set environment variables if necessary]
./configure --prefix=<path>
make
make install
– FFTW3. This usually requires no special treatment. Install with your package manager or build with the default settings.
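For example, a typical from-source build of FFTW3 into a local prefix might look like the following (the version number and install path are illustrative):
tar -xvzf fftw-3.3.4.tar.gz
cd fftw-3.3.4/
./configure --prefix=$HOME/local
make
make install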
– LAPACK. If using gfortran you might be able to use the binary version supplied for your linux distribution. Otherwise, edit the file make.inc that comes with LAPACK, setting the fortran compiler and flags to those you plan to use. Type ‘make’. Once finished, copy the resulting libraries into your library path (see Makefile LIBS -L<path>/lib/)
cp lapack.a <path>/lib/liblapack.a
cp blas.a <path>/lib/libblas.a
– netCDF. If using gfortran you might be able to use the supplied binary version for your linux distribution. Otherwise, typical environment variables required to build netCDF are
CXX=""
FC=/opt/intel/fc/10.1.018/bin/ifort
FFLAGS="-O3 -mcmodel=medium"
export CXX FC FFLAGS
After the ‘make install’, ensure that the files netcdf.mod and typesizes.mod appear in your include path (see Makefile COMPFLAGS -I<path>/include/). Several versions can be found at
http://www.unidata.ucar.edu/downloads/netcdf/current/index.jsp. Version 4.1.3 is relatively straightforward to install; the above flags should be sufficient.
Installation is trickier for more recent versions [2013-12-11 currently netcdf-4.3.0.tar.gz]. First
./configure --disable-netcdf-4 --prefix=<path>
which disables HDF5 support (not currently required in code). Fortran is no longer bundled, so get netcdf-fortran-4.2.tar.gz or a more recent version from here
http://www.unidata.ucar.edu/downloads/netcdf/index.jsp
Build with the above flags, and in addition
CPPFLAGS=-I<path>/include
export CPPFLAGS
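Putting this together, a netcdf-fortran build might look like the following sketch (version and paths are illustrative; LDFLAGS may also be needed so that the linker can find the already-installed libnetcdf):
tar -xvzf netcdf-fortran-4.2.tar.gz
cd netcdf-fortran-4.2/
CPPFLAGS=-I<path>/include LDFLAGS=-L<path>/lib ./configure --prefix=<path>
make
make install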
Add the link flag -lnetcdff before -lnetcdf in the Makefile.
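The corresponding Makefile entries would then look something like this sketch (only the netCDF-related parts are shown; the existing COMPFLAGS and LIBS lines contain additional flags):
COMPFLAGS = ... -I<path>/include
LIBS      = ... -L<path>/lib -lnetcdff -lnetcdf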
Typical usage
See expanded examples in the Tutorial.
Setting parameters
- The files of importance are parallel.h and program/parameters.f90. Edit with your favourite text editor, e.g.
> nano program/parameters.f90 [OR] > gedit program/parameters.f90
- Almost all parameters are found in program/parameters.f90.
- For serial use set _Np to 1 in parallel.h. MPI is required if, and only if, _Np is greater than 1. The compiler in Makefile might need altering if switching between parallel and serial use, or it is possible to call mpirun -np 1 ... (see the sketch below).
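A minimal sketch of the two launch styles (the process count is illustrative and should match _Np):
> ./main.out                [serial build, _Np = 1]
> mpirun -np 4 ./main.out   [MPI build, _Np = 4]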
Starting a job
- To compile
> make
> make install
The second command creates the directory install/ and a text file main.info, which is a record of settings at compile time.
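To review those compile-time settings (assuming main.info is written inside install/; adjust the path if it appears elsewhere):
> less install/main.info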
- Next an initial condition state.cdf.in is needed. NOTE: Any output state, e.g. state0012.cdf.dat can be copied to state.cdf.in to be used as an initial condition. If resolutions do not match, they are automatically interpolated or truncated.
> mv install ~/runs/job0001
> cd ~/runs/job0001/
> cp .../state0012.cdf.dat state.cdf.in
- To start the run (it is good to first double-check in main.info that the executable was compiled with the correct parameters)
> nohup ./main.out > OUT 2> OUT.err &
After a few moments, press enter again to check whether main.out has stopped prematurely. If it has stopped, there will be a message, e.g. '[1]+ Done nohup...'; check OUT.err or OUT for clues as to why.
- To end the run
> rm RUNNING
This will terminate the job cleanly.
- NOTE: I generate almost all initial conditions by taking a state from a run with similar parameters. If there is a mismatch in i_Mp, use the utility changeMp.f90.
Monitoring a run
Immediately after starting a job, it’s a good idea to check for any warnings
> less OUT
To find out the number of timesteps completed, or for possible diagnosis of an early exit,
> tail OUT
The code outputs time-series data and snapshot data; the latter has a 4-digit number, e.g. state0012.cdf.dat.
To see when in the run each state was saved,
> grep state OUT | less [OR] > head -n 1 vel_spec* | less
I often monitor progress with tail vel_energy.dat or
> gnuplot
gnuplot> plot 'vel_energy.dat' w l
Use rm RUNNING to end the job.
Making utils
The core code in program/ rarely needs to be changed. Almost anything can be done by creating a utility instead.
There are many examples in utils/. Further information can be found on the Utilities page.