I have moved! From 1 January 2016 I am at the Dublin Institute for Advanced Studies. Please go to my new homepage for up-to-date information: https://homepages.dias.ie/jmackey/
pion is a grid-based fluid dynamics code for hydrodynamics (HD) and magnetohydrodynamics (MHD), including a ray-tracing module for calculating the attenuation of radiation from point sources of ionizing photons, and a module for coupling fluid dynamics to microphysical processes. The algorithms are described in Mackey & Lim (2010), Mackey & Lim (2011), and with more recent updates in Mackey (2012).
I wrote some documentation for the code in 2010 (now partly outdated), including results from a suite of standard test problems used to validate the code. Follow the link to read more. In the meantime, the version of the code used for any of my papers is available on request, although with very little documentation to help make it useful. If you are interested in using the code for an astrophysics project, let me know and I will try to help. The source code is hosted in a Mercurial repository at BitBucket (currently private; contact me to request access), and I am going to add content to the associated wiki.
pion is described in the paragraphs below. Results and movies from simulations can be found in my research pages, and in the links to my publications.
Above is a snapshot of a simulation using pion, taken from Mackey, Mohamed, Neilson, Langer, & Meyer, 2012, ApJ Letters, 751, L10. We modelled the bow shock produced by the runaway star Betelgeuse assuming it has recently evolved from a blue supergiant to its current state as a red supergiant. This can partly explain some of the more puzzling aspects of Betelgeuse's circumstellar medium. Click on the image to open a movie of the simulation's full evolution. Read more here.
Introduction to the pion code
pion is written in C++ and is designed to be as object-oriented (i.e. modular) as possible. This means that, at least in principle, parts of the code can be taken out and plugged into other codes, and parts of other codes can be merged into pion. In practice this is only really possible for things like chemistry networks and heating/cooling processes, because most other modules depend on the underlying data structures in the computational grid. The main advantage of object-oriented code is that if you edit one part of the code you are very unlikely to break another part; this helps with maintenance, debugging, and extending the code's capabilities.
The main code modules are:
- Systems of equations: the equations of inviscid HD, ideal MHD, and ideal MHD with the mixed-GLM divergence-cleaning method (Dedner et al., 2002).
- Coordinate systems: Cartesian coordinates in 1D, 2D, and 3D; cylindrical coordinates in 2D (R,z); and spherical coordinates in 1D (r).
- Hydro/MHD solvers: Roe-type solvers are implemented for HD and MHD, plus a flux-vector-splitting solver and a (slow) exact Riemann solver for HD.
- Parallel code communication: a COMMS class using either text-file communicators or the Message Passing Interface (MPI).
- Computational grid: a multiply-linked list of finite-volume cells (or zones). Most commonly-used boundary conditions are implemented. When run in parallel, each process has a subdomain of the full grid, and inter-process communication is used to share boundary data.
- Microphysics: chemistry and heating/cooling processes. A number of different classes have been written for different situations.
- Raytracing, on serial and parallel grids, from point sources or sources at infinity. This uses the short-characteristics raytracer, which is a bit diffusive but very fast and scales well.
- Data input and output (I/O), including ASCII, FITS, and Silo formats.
Problems are set up with a parameter file and an initial-condition generator. This writes an output file, which the main code executable reads. The output files are also restart files, and contain all of the simulation parameters in the header. Many simulation parameters can also be overridden with command-line arguments. Almost all features of the code can be used without recompilation: HD, MHD, photoionization, different coordinate systems, the dimensionality of the problem, etc. are all run-time parameters. This is achieved with inherited classes and interfaces defined by virtual base classes. Some features can be excluded with compile-time flags to make the executable smaller, but I have not found any noticeable speedup from using these flags.
Features included in pion
- Solving Euler or ideal MHD equations on serial and parallel grids, with any of the coordinate systems described above. First- or second-order accurate (in time and space) integration schemes are supported.
- Calculation of column densities of neutral/ionised/all gas from radiation sources (on or off the grid domain, in serial and in parallel).
- Passive tracers advected with the flow for e.g. chemical species.
- Dedner et al. (2002) divergence cleaning for multi-dimensional MHD simulations (with some improvements).
- A stellar wind inflow boundary condition, including time evolution (read from a text file), can be included in all coordinate systems. For cylindrical and Cartesian coordinates this takes the form of a circle/sphere within which a freely expanding wind is imposed.
- There are a few microphysics integrators available, based on the descriptions in the references above. A standard base class for microphysics defines the interface to the rest of the code, so it is relatively simple to add complex chemical networks and heating/cooling functions. Dealing with radiation fields and extinction is more complicated and only partially supported, so adding in extinction-dependent photo-rates is not trivial. An adaptive integrator using the CVODE solver (part of the SUNDIALS library) is available.
- The code should run on Unix/Linux systems with standard GNU or Intel compilers, and also on OS X (10.6 and later) if Xcode is installed. The makefile may need modification for specialised systems.
- The code has been used in parallel with at least 1024 MPI processes on different supercomputers (Stokes at ICHEC, JUROPA at Jülich Supercomputing Centre, SuperMUC at LRZ). Results of scaling tests are presented in Mackey (2012,A&A,539,A147).
Features NOT included in pion
- Gravity: neither self-gravity nor an external potential field is included. I plan to add these eventually, at least for a periodic simulation domain.
- Multi-threaded execution (with e.g. OpenMP).
- Photoionization from more than one radiation source is not supported. The raytracer can have as many sources as required, but the microphysics integrators assume there is only one ionizing source. This can be relaxed quite easily once there is a need for it.
- The grid resolution is fixed, and all cells must be cubic/square and the same size. It would be nice to have logarithmic grids in spherical coordinates, and nested grids in other coordinate systems, and adaptive grids, but it's not clear when/if this will happen.
- I have never tried to compile the code on a Windows-based system, and have no expertise to try it.
- The code has not been compiled or run on Blue Gene systems or GPU-based systems. In principle the Blue Gene compilation should work, but external libraries are required (CVODE, FITS, Silo) and this complicates the process; I just have not spent enough time figuring out how to make it work.
Parallelisation and scaling
- The parallelisation is with MPI, where each process is allocated a subset of the computational grid and communicates with its neighbours to fit everything together. There is no OpenMP multithreading as yet.
- The key design consideration was that the parallel version of the code should produce results identical to the serial version (to machine precision); this is now the case for all problems tested. An exception is problems with chemistry integrated using CVODE: it integrates the equations to a given relative-error tolerance, so machine-precision differences can change the number of iterations taken and hence the solution.
- Scaling is very good up to 1024 cores for large problems. A good rule-of-thumb is that each MPI process should have a subdomain with at least 32x32 cells in 2D, or 32x32x32 in 3D. For smaller subdomains the number of boundary cells becomes comparable to the number of grid cells, and the ratio of computation to communication becomes unfavourable.
- The number of MPI processes must be a power of 2.
Plans for future development
- New grid implementation to make boundary communication simpler.
- Sensible treatment of thermal conduction.
- Separating time integration from spatial flux-calculations in code structure.
- Allowing MPI processes to control more than one sub-domain, and adding multithreading.
- Adding treatment of recombination/scattered radiation to the raytracer.
- 2D spherical coordinates.
- Nested grids and adaptive mesh-refinement.
Publications using pion
- "Wind bubbles within H II regions around slowly moving stars,"
Mackey, Gvaramadze, Mohamed, & Langer, 2015, A&A, 573, A10.
- "Interacting supernovae from photoionization-confined shells around red supergiant stars,"
Mackey, Mohamed, Gvaramadze, Kotak, Langer, Meyer, Moriya, & Neilson, 2014, Nature, 512, 282-285.
- "Pressure-driven fragmentation of clouds at high redshift,"
Dhanoa, Mackey, & Yates, 2014, MNRAS, 444, 2085.
- "Dynamics of H II regions around exiled O stars,"
Mackey, Langer, & Gvaramadze, 2013, MNRAS, 436, 859-880.
- "Double bow shocks around young, runaway red supergiants: application to Betelgeuse,"
Mackey, Mohamed, Neilson, Langer, & Meyer, 2012, ApJ Letters, 751, L10.
- "Accuracy and efficiency of raytracing photoionisation algorithms,"
Mackey, 2012, A&A, 539, A147.
- "Effects of magnetic fields on photoionized pillars and globules,"
Mackey & Lim, 2011, MNRAS, 412, 2079-2094.
- "Dynamical Models for the Formation of Elephant Trunks in H II Regions,"
Mackey & Lim, 2010, MNRAS, 403, 714-730.