A walk through pynbody’s low-level facilities

The following example shows how to load a file, determine various attributes, access some data and make use of unit information.

If you’re more interested in making pretty pictures and plots straight away, you may wish to read the basic facilities tutorial first.


This tutorial discusses an interactive session rather than a script. This is only cosmetically different; the commands discussed here can of course all be used in a script too. We will use the ipython interpreter, which offers a much richer interactive environment than the vanilla python interpreter. However, you can type exactly the same commands into vanilla python; only the formatting will look slightly different. For instance, the ipython prompt looks like In [1]: while the python prompt looks like >>>.

First steps


Before you start make sure pynbody is properly installed. See Pynbody Installation for more information. You will also need the standard pynbody test files if you want to follow the tutorial. These files are available separately here: <https://github.com/pynbody/pynbody/releases> (testdata.tar.gz).

After you have extracted the testdata folder (e.g. with tar -xzf testdata.tar.gz), launch ipython. At the prompt, type import pynbody. If all is installed correctly, this should silently succeed, and you are ready to use pynbody commands. Here’s an example. We’ll also load the numpy module as it provides some functions we’ll make use of later.

In [1]: import pynbody

In [2]: import numpy as np

In [3]: f = pynbody.load("testdata/test_g2_snap")

Here we’ve loaded a sample gadget file. Not much seems to have happened when you called pynbody.load(), but the variable f is now your calling point for accessing data.

In fact f is an object known as a SimSnap (see pynbody.snapshot.SimSnap).

Behind the scenes, the function inspects the provided path and decides what file format it is. At the time of writing, supported file formats include tipsy, nchilada, gadget-1, gadget-2, gadget-HDF and ramses. For most purposes you should never need to know what type of file you are dealing with – that’s the whole point of the pynbody framework.


If you look inside the testdata folder, you’ll notice that our test snapshot is actually an example of a spanned gadget snapshot. There is not actually a file called test_g2_snap, but rather two files from two CPUs, test_g2_snap.0 and test_g2_snap.1. pynbody knows to load both of these if you ask for test_g2_snap; if you ask for test_g2_snap.1 (for instance), it’ll load only that particular file.

The SimSnap object that’s returned is currently a fairly empty holder for data. The data will be loaded from disk as and when you request it.

Finding out something about the file

Let’s start to inspect the file we’ve opened. The standard python operator len can be used to query the number of particles in the file:

In [4]: len(f)
Out[4]: 8192

We can also find out about particles of a particular type, or family as it is known within pynbody. To find out which families are present in the file, use families():

In [5]: f.families()
Out[5]: [<Family gas>, <Family dm>, <Family star>]

You can pick out just the particles belonging to a family by using the syntax f.family. So, for example, we can see how many particles of each type are present:

In [6]: len(f.dm)
Out[6]: 4096

In [7]: len(f.gas)
Out[7]: 4039

In [8]: len(f.star)
Out[8]: 57

Useful information about the file is stored in a python dictionary called properties:

In [9]: f.properties
{'a': 0.2777777798158637,
 'boxsize': Unit("3.00e+03 Mpc a h**-1"),
 'h': 0.71,
 'omegaL0': 0.7331,
 'omegaM0': 0.2669,
 'time': Unit("2.78e-01 s Mpc a**1/2 h**-1 km**-1")}

Like any python dictionary, specific properties can be accessed by name:

In [10]: f.properties['a']
Out[10]: 0.2777777798158637

These names are standardized across different file formats. Here, for example, z means redshift, a the cosmological scale factor, and h the Hubble constant in units of 100 km/s/Mpc.
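For instance, the redshift of this snapshot follows directly from the scale factor shown above, via the standard relation a = 1/(1+z):

```python
a = 0.2777777798158637  # f.properties['a'] for this snapshot
z = 1.0 / a - 1.0       # redshift from the scale factor: a = 1/(1+z)
# z is approximately 2.6 for this snapshot
```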


Actually f.properties has some behaviour which is very slightly different from a normal python dictionary. For further information see SimDict.

Retrieving data

Like f.properties, f itself also behaves like a python dictionary. The standard python method f.keys() returns a list of arrays that are currently in memory.

In [11]: f.keys()
Out[11]: []

Right now it’s empty! That’s actually correct because data is only retrieved when you first access it. To find out what could be loaded, use the pynbody-specific method f.loadable_keys().

In [12]: f.loadable_keys()
Out[12]: ['pos', 'vel', 'iord', 'mass']

This looks a bit more promising. To access data, simply use the normal dictionary syntax. For example f['pos'] returns an array containing the 3D-coordinates of all the particles.

In [13]: f['pos']
SimArray([[   53.31897354,   177.84364319,   128.22311401],
       [  306.75045776,   140.44454956,   215.37149048],
       [  310.99908447,    64.1344986 ,   210.53594971],
       ...,
       [ 2870.90161133,  2940.17114258,  1978.79492188],
       [ 2872.41137695,  2939.21972656,  1983.91601562],
       [ 2863.65112305,  2938.05444336,  1980.06152344]], dtype=float32, 'Mpc a h**-1')


Array names are standardized across all file formats. For instance, even if you load a Gadget-HDF file – which internally refers to the position array as coordinates – you still access that array from pynbody by the name pos. The intention is that code never needs to be adapted simply because you have switched file format. However the name mapping is fully configurable should you wish to adopt different conventions.

Some arrays are stored only for certain families. For example, densities are stored only for gas particles and are accessed as f.gas['rho']. To find out what arrays are available for the gas family, use f.gas.loadable_keys():

In [14]: f.gas.loadable_keys()

So, we can get the density of the gas particles like this:

In [15]: f.gas['rho']
SimArray([  1.38886092e-09,   3.36176842e-09,   4.52736737e-09, ...,
         8.53409521e-09,   7.41017736e-09,   1.40517520e-09], dtype=float32, '1.00e+10 h**2 Msol Mpc**-3 a**-3')


The SimArray objects are actually numpy arrays with some added functionality (such as unit tracking, discussed below). Numerical operations are very nearly as fast as their numpy equivalents. However, if you want to squeeze the last bit of performance out of your code, you can always get a vanilla numpy array by using the numpy view mechanism, e.g. f.gas['rho'].view(type=np.ndarray).
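The view mechanism itself is plain numpy, so it can be illustrated without pynbody. In the sketch below, TaggedArray is a hypothetical minimal ndarray subclass standing in for SimArray:

```python
import numpy as np

class TaggedArray(np.ndarray):
    """Minimal stand-in for a numpy subclass such as pynbody's SimArray."""
    pass

a = np.arange(6, dtype=np.float32).view(TaggedArray)

plain = a.view(type=np.ndarray)   # a vanilla ndarray view; no data is copied
plain[0] = 99.0                   # writes through to the original array
```

Because the view shares memory with the original array, taking it costs nothing, and modifications through the view are visible from the subclass array.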

Creating your own arrays

You can create arrays using the obvious assignment syntax:

In [16]: f['twicethemass'] = f['mass']*2

You can also define new arrays for one family of particles:

In [17]: f.gas['myarray'] = f.gas['rho']**2

An array created in this way exists only for the gas particles; trying to access it for other particles raises an exception.

Alternatively, you can define derived arrays which are calculated (and re-calculated) on demand. For example,

In [18]: @pynbody.derived_array
   ....: def thricethemass(sim) :
   ....:     return sim['mass']*3

At this point, nothing has been calculated. However, when you ask for the array, the values are calculated and stored:

In [19]: f['thricethemass']
SimArray([ 0.02464408,  0.02464408,  0.02464408, ...,  0.02464408,
        0.02464408,  0.02464408], dtype=float32, '1.00e+10 Msol h**-1')

This has the advantage that your new thricethemass array is automatically updated when you change the mass array:

In [20]: f['mass'][0] = 1

In [21]: f['thricethemass']
SimArray([ 3.        ,  0.02464408,  0.02464408, ...,  0.02464408,
        0.02464408,  0.02464408], dtype=float32, '1.00e+10 Msol h**-1')

Note, however, that the array is not re-calculated every time you access it, only if the mass array has changed. Therefore you don’t waste any time by using derived arrays. For more information see the reference documentation for derived arrays.
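The recalculate-only-when-changed behaviour can be sketched in plain Python. The Snap class below is an illustrative dirty-flag cache, not pynbody's actual machinery:

```python
class Snap:
    """Toy illustration of a derived array that is cached until its source changes."""

    def __init__(self, mass):
        self.mass = list(mass)
        self._cache = None
        self._dirty = True        # set whenever 'mass' changes

    def set_mass(self, i, value):
        self.mass[i] = value
        self._dirty = True        # invalidate the cached derived array

    def thricethemass(self):
        if self._dirty:           # recompute only if the source array changed
            self._cache = [m * 3 for m in self.mass]
            self._dirty = False
        return self._cache
```

Repeated accesses return the cached result at no cost; only an intervening change to mass triggers a recalculation.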

Keeping on top of units

You might have noticed in the output from the above experiments that pynbody keeps track of unit information whenever it can.


It’s worth understanding exactly where pynbody gets this information from, in case anything goes wrong. In the case of Ramses and Gadget-HDF files, the unit information is stored within your snapshot, and pynbody takes advantage of this. For old-style Gadget snapshots, the default cosmological gadget setup is assumed. For nchilada and tipsy, an nchilada or gasoline .param file is sought in the directory from which you are loading the snapshot and in its immediate parent.

You can print out the units of any given array by accessing the units property:

In [22]: f['mass'].units
Out[22]: Unit("1.00e+10 Msol h**-1")

However, it’s usually more helpful to simply convert your arrays into something more manageable than the internal units. Pynbody arrays can be converted using the in_units() function; just pass in a string representing the units you want.

In [23]: f['pos'].in_units('Mpc')
SimArray([[   20.86031914,    69.57888794,    50.16553879],
       [  120.01191711,    54.94700623,    84.26114655],
       [  121.6741333 ,    25.09174347,    82.36930847],
       ...,
       [ 1123.20092773,  1150.30163574,   774.1763916 ],
       [ 1123.79162598,  1149.92944336,   776.17999268],
       [ 1120.36425781,  1149.47351074,   774.67193604]], dtype=float32, 'Mpc')


The function in_units() returns a copy of your array in new units. Next time you access f['pos'] it will be back in its original units. If you want to permanently convert the array in-place use convert_units() or see below.
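You can check the conversion factor by hand: internal positions carry units of Mpc a h**-1, so multiplying by a/h (using the values from f.properties shown earlier) yields Mpc. A quick sketch using the first coordinate printed above:

```python
a, h = 0.2777777798158637, 0.71   # f.properties['a'] and f.properties['h']

x_internal = 53.31897354          # first coordinate of f['pos'], in Mpc a h**-1
x_physical = x_internal * a / h   # strip the a and h**-1 factors -> Mpc
# agrees with the 20.86 Mpc reported by in_units('Mpc') above
```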

Another option is to request that pynbody converts all your arrays into something sensible, using physical_units(),

In [24]: f.physical_units()

Take a look at what’s happened to the density:

In [25]: f.gas['rho']
SimArray([  3.26650195e-07,   7.90663989e-07,   1.06480456e-06, ...,
         2.00715840e-06,   1.74282104e-06,   3.30487211e-07], dtype=float32, 'Msol kpc**-3')

Note that the conversion will also be made when loading any arrays in future; for example:

In [26]: f['vel']
SimArray([[ 27.93829346,   4.98370504, -10.00886631],
       [ 15.36156368,   5.7859726 ,   4.36315632],
       [ -8.35731888,  -2.88852572,  22.8099041 ],
       ...,
       [ 27.74917603,  85.60175323,  15.53243732],
       [ 40.75585556,  59.44286728,  44.24484634],
       [ 38.38396454,  68.63973236,  46.01428986]], dtype=float32, 'km s**-1')

A new array generated from a unary or binary operation will inherit the correct units. For example

In [27]: 5*f['vel']
SimArray([[ 139.69146729,   24.9185257 ,  -50.0443306 ],
       [  76.80781555,   28.92986298,   21.81578064],
       [ -41.78659439,  -14.44262886,  114.0495224 ],
       ...,
       [ 138.74588013,  428.00875854,   77.66218567],
       [ 203.77928162,  297.21432495,  221.22422791],
       [ 191.91983032,  343.19866943,  230.07144165]], dtype=float32, 'km s**-1')

In [28]: (f['vel']**2).units
Out[28]: Unit("km**2 s**-2")

In [29]: np.sqrt(((f['vel']**2).sum(axis=1)*f['mass'])).units
Out[29]: Unit("km Msol**1/2 s**-1")

You can even associate arrays with the loaded SimSnap unit system even when you create them outside the SimSnap. This is useful for keeping things tidy with your unit conversions if you are calculating quantities that don’t apply to all of the particles. For instance:

In [30]: array = pynbody.array.SimArray(np.random.rand(10)) # make the newly-formed numpy array a pynbody array

In [31]: array.sim = f # this links the array to the simulation

In [32]: array.units = 'Mpc a' # we set units that require cosmology information

In [33]: array
SimArray([ 0.42381316,  0.91293819,  0.98373048,  0.89116205,  0.88210544,
        0.67884459,  0.50947435,  0.39155365,  0.43911523,  0.70932789], 'Mpc a')

In [34]: array.in_units('kpc')
SimArray([ 117.72587772,  253.59394396,  273.25846759,  247.5450145 ,
        245.02929085,  188.56794437,  141.52065413,  108.76490238,
        121.97645429,  197.03552661], 'kpc')

Note that the units were correctly converted into physical units in the last step.

For more information see the reference documentation for pynbody.units.


Subsnaps

An important concept within pynbody is that of a subsnap. These are objects that look just like a SimSnap but actually only point at a subset of the particles within a parent. Subsnaps are always instances of the SubSnap class.

You’ve already seen some examples of subsnaps, actually. When you access f.gas or f.dm, you’re given back a subsnap pointing at only those particles. However, subsnaps can be used in a much more general way. For example, you can use python’s normal array slicing operations. Here we take every tenth particle:

In [35]: every_tenth = f[::10]

In [36]: len(every_tenth)
Out[36]: 820

In common with python’s normal mode of working, this does not copy any data, it merely creates another pointer into the existing data. As an example, let’s modify the position of one of our particles in the new view:

In [37]: every_tenth['pos'][1]
Out[37]: SimArray([ 140288.796875 ,  122217.9765625,   75805.2890625], dtype=float32, 'kpc')

In [38]: every_tenth['pos'][1] = [1,2,3]

In [39]: every_tenth['pos'][1]
Out[39]: SimArray([ 1.,  2.,  3.], dtype=float32, 'kpc')

This change is reflected in the main snapshot.

In [40]: f['pos'][10]
Out[40]: SimArray([ 1.,  2.,  3.], dtype=float32, 'kpc')


If you’re used to numpy’s flexible indexing abilities, you might like to note that, typically, f[array_name][index] == f[index][array_name]. The difference is that applying the index to the whole snapshot is more flexible and can lead to simpler code. In particular, numpy_array[index] may involve copying data whereas f[index] never does; it always returns a new object pointing back at the old one.
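The distinction drawn here is easy to see in plain numpy, where basic slicing returns a view into the same memory but list ("fancy") indexing returns a copy:

```python
import numpy as np

data = np.arange(10)

sl = data[::2]            # basic slice: a view into the same memory
sl[0] = -1                # writes through: data[0] is now -1

fancy = data[[1, 3, 5]]   # fancy (list) indexing: numpy copies the data
fancy[0] = -99            # data[1] is unchanged
```

A pynbody subsnap, by contrast, never copies the data regardless of how it was constructed; it always points back at the parent snapshot.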

You can pass in an array of boolean values representing whether each successive particle should be included (True) or not (False). This allows the use of numpy's comparison operators. For example:

In [41]: f_slab = f[(f['x']>1000)&(f['x']<2000)]

In [42]: f_slab['x'].min()
Out[42]: SimArray(1000.12841796875, dtype=float32, 'kpc')

In [43]: f_slab['x'].max()
Out[43]: SimArray(1867.8240966796875, dtype=float32, 'kpc')

In [44]: f['x'].min()
Out[44]: SimArray(1.0, dtype=float32, 'kpc')

In [45]: f['x'].max()
Out[45]: SimArray(1173692.5, dtype=float32, 'kpc')

Here f_slab is pointing at only those particles which have x-coordinates between 1000 and 2000.
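Note the parentheses around each comparison in the expression above: in Python, & binds more tightly than > and <, so they are required. A minimal plain-numpy illustration:

```python
import numpy as np

x = np.array([500.0, 1500.0, 1800.0, 2500.0])

# Parentheses are required: without them, "x > 1000 & x < 2000" would try
# to evaluate the bitwise operation "1000 & x" first.
mask = (x > 1000) & (x < 2000)

selected = x[mask]   # keeps only the values between 1000 and 2000
```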

Note that subsnaps really do behave exactly like snapshots. So, for instance, you can pick out sub-subsnaps or sub-sub-subsnaps.

In [46]: len(f_slab.dm)
Out[46]: 2

In [47]: len(f_slab.dm[::10])
Out[47]: 1

Indexing with an explicit list of particle numbers creates an IndexedSubSnap. The indices must refer to particles that actually exist in the subsnap you are indexing; here they are out of range for f_slab, so an IndexError results:

In [48]: f_slab[[100,105,252]].gas['pos']
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-48-f9cbab94f5ed> in <module>()
----> 1 f_slab[[100,105,252]].gas['pos']

IndexError: index 100 is out of bounds for axis 1 with size 5


Under most circumstances there is very little performance penalty to using a SubSnap. However in performance-critical code it is worth understanding a little more about what’s going on under the hood. See Performance optimisation in pynbody.


Filters

Another way you can select a subset of particles is to use a filter. This can lead to more readable code than the equivalent explicitly written condition. For example, to pick out a sphere centered on the origin, you can use:

In [49]: from pynbody.filt import *

In [50]: f_sphere = f[Sphere('10 kpc')]
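Conceptually, the Sphere filter builds a boolean mask from each particle's distance to the centre (the origin, unless you specify otherwise). A plain-numpy sketch of the equivalent condition (illustrative only, not pynbody's implementation):

```python
import numpy as np

# Three particle positions (in some length unit, e.g. kpc).
pos = np.array([[1.0, 2.0, 2.0],    # r = 3
                [6.0, 8.0, 0.0],    # r = 10
                [0.5, 0.0, 0.0]])   # r = 0.5

r = np.sqrt((pos ** 2).sum(axis=1))  # distance of each particle from the origin
in_sphere = pos[r < 5.0]             # analogous to selecting with Sphere('5 kpc')
```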

For a list of filters, see pynbody.filt.

Where next?

This concludes the tutorial for basic use of pynbody. Further tutorials for specific tasks are available. We are happy to provide further assistance via our user group email list.