General HDF5 file reader

Fleur uses the HDF5 library for output files containing large datasets. The masci-tools library provides the HDF5Reader class to extract and transform information from these files. Internally, the h5py library is used to read the .hdf files.

Basic Usage

The specifications of what to extract and how to transform the data are given in the form of a Python dictionary. Let us look at a usage example: extracting data for a bandstructure calculation from the banddos.hdf file produced by Fleur.

from masci_tools.io.parsers.hdf5 import HDF5Reader
from masci_tools.io.parsers.hdf5.recipes import FleurBands

#The HDF5Reader is used as a contextmanager to safely handle
#opening/closing of the underlying h5py.File object used to extract information
with HDF5Reader('/path/to/banddos.hdf') as h5reader:
    datasets, attributes = h5reader.read(recipe=FleurBands)

The read() method produces two Python dictionaries. In the case of the FleurBands recipe these contain the following information (a short access example follows the list).

  • datasets
    • Eigenvalues converted to eV, shifted to E_F = 0 (if available in the banddos.hdf), split up into spin-up/down and flattened to one dimension

    • The kpath projected to 1D and repeated to match the length of the flattened weights/eigenvalues

    • The weights (flattened) for the interstitial region, each atom and each orbital on each atom, for all eigenvalues

  • attributes
    • The coordinates of the k-points used

    • Positions, atomic symbols and indices of symmetry-equivalent atoms

    • Dimensions of eigenvalues (nkpts and nbands)

    • Bravais matrix/Reciprocal cell of the system

    • Indices and labels of special k-points

    • Fermi energy

    • Number of spins in the calculation
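
For instance, after the read() call above the individual entries are available under the keys defined in the recipe. The eigenvalue key names below follow from the 'up'/'down' suffixes of the split_array transformation in the FleurBands recipe (shown further down):

ev_up = datasets['eigenvalues_up']      #flattened spin-up eigenvalues in eV
ev_down = datasets['eigenvalues_down']  #flattened spin-down eigenvalues in eV
kpath = datasets['kpath']               #1D k-path, matching the eigenvalues in length
print(attributes['fermi_energy'], attributes['nbands'], attributes['nkpts'])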

The following pre-defined recipes are stored in the recipes module (masci_tools.io.parsers.hdf5.recipes); an import example follows the list:

  • Recipe for banddos.hdf for bandstructure calculations

  • Recipe for banddos.hdf for standard density of states calculations

  • Different DOS modes are also supported (jDOS, orbcomp, mcd)
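
They can be imported directly, for example:

from masci_tools.io.parsers.hdf5.recipes import FleurBands, FleurDOS, FleurJDOS, FleurORBCOMP, FleurMCD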

If no recipe is provided to the HDF5Reader, it will create the datasets and attributes as two nested dictionaries, exactly mirroring the structure of the .hdf file and converting datasets into numpy arrays.
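
For example, a minimal sketch of reading a file without a recipe:

#No recipe given: the complete file is read into nested dictionaries of numpy arrays
with HDF5Reader('/path/to/banddos.hdf') as h5reader:
    datasets, attributes = h5reader.read()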

For big datasets it might be useful to keep the dataset as a reference to the file and not load it into memory. To achieve this you can pass move_to_memory=False when initializing the reader. Notice that most of the transformations will still implicitly create numpy arrays, and after the .hdf file is closed the datasets will no longer be available.
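
A minimal sketch of this usage; the data has to be processed inside the with block, while the file is still open:

#Keep datasets as references into the file instead of loading them into memory
with HDF5Reader('/path/to/banddos.hdf', move_to_memory=False) as h5reader:
    datasets, attributes = h5reader.read(recipe=FleurBands)
    #Work with the data here; once the file is closed, datasets that were
    #not moved to memory are no longer accessible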

Structure of recipes for the HDF5Reader

The recipe for extracting bandstructure information from the banddos.hdf looks like this:

#Imports used in this excerpt; dos_recipe_format is defined earlier in the recipes module
from masci_tools.io.parsers.hdf5.reader import Transformation, AttribTransformation
from masci_tools.util.constants import HTR_TO_EV, BOHR_A

#DOS Recipes
FleurDOS = dos_recipe_format('Local')
FleurJDOS = dos_recipe_format('jDOS')
FleurORBCOMP = dos_recipe_format('Orbcomp')
FleurMCD = dos_recipe_format('MCD')


def bands_recipe_format(group, simple=False):
    """
    Format for bandstructure calculations retrieving weights from the given group

    :param group: str of the group the weights should be taken from
    :param simple: bool, if True no additional weights are retrieved with the produced recipe

    :returns: dict of the recipe to retrieve a bandstructure calculation
    """

    if group == 'Local':
        atom_prefix = 'MT:'
    elif group == 'jDOS':
        atom_prefix = 'jDOS:'
    elif group == 'Orbcomp':
        atom_prefix = 'ORB:'
    elif group == 'MCD':
        atom_prefix = 'At'
    else:
        raise ValueError(f'Unknown group: {group}')

    recipe = {
        'datasets': {
            'eigenvalues': {
                'h5path':
                f'/{group}/BS/eigenvalues',
                'transforms': [
                    AttribTransformation(name='shift_by_attribute',
                                         attrib_name='fermi_energy',
                                         args=(),
                                         kwargs={
                                             'negative': True,
                                         }),
                    Transformation(name='multiply_scalar', args=(HTR_TO_EV,), kwargs={}),
                    Transformation(name='split_array',
                                   args=(),
                                   kwargs={
                                       'suffixes': ['up', 'down'],
                                       'name': 'eigenvalues'
                                   }),
                    Transformation(name='flatten_array', args=(), kwargs={})
                ],
                'unpack_dict':
                True
            },
            'kpath': {
                'h5path':
                '/kpts/coordinates',
                'transforms': [
                    AttribTransformation(name='multiply_by_attribute',
                                         attrib_name='reciprocal_cell',
                                         args=(),
                                         kwargs={'transpose': True}),
                    Transformation(name='calculate_norm', args=(), kwargs={'between_neighbours': True}),
                    Transformation(name='cumulative_sum', args=(), kwargs={}),
                    AttribTransformation(name='repeat_array_by_attribute', attrib_name='nbands', args=(), kwargs={}),
                ]
            },
        },
        'attributes': {
            'group_name': {
                'h5path': f'/{group}',
                'transforms': [
                    Transformation(name='get_name', args=(), kwargs={}),
                ],
            },
            'kpoints': {
                'h5path': '/kpts/coordinates',
            },
            'nkpts': {
                'h5path':
                '/Local/BS/eigenvalues',
                'transforms': [
                    Transformation(name='get_shape', args=(), kwargs={}),
                    Transformation(name='index_dataset', args=(1,), kwargs={})
                ]
            },
            'nbands': {
                'h5path':
                '/Local/BS/eigenvalues',
                'transforms': [
                    Transformation(name='get_shape', args=(), kwargs={}),
                    Transformation(name='index_dataset', args=(2,), kwargs={})
                ]
            },
            'atoms_elements': {
                'h5path': '/atoms/atomicNumbers',
                'description': 'Atomic numbers',
                'transforms': [Transformation(name='periodic_elements', args=(), kwargs={})]
            },
            'n_types': {
                'h5path':
                '/atoms',
                'description':
                'Number of atom types',
                'transforms': [
                    Transformation(name='get_attribute', args=('nTypes',), kwargs={}),
                    Transformation(name='get_first_element', args=(), kwargs={})
                ]
            },
            'atoms_position': {
                'h5path': '/atoms/positions',
                'description': 'Atom coordinates per atom',
            },
            'atoms_groups': {
                'h5path': '/atoms/equivAtomsGroup'
            },
            'reciprocal_cell': {
                'h5path': '/cell/reciprocalCell'
            },
            'bravais_matrix': {
                'h5path': '/cell/bravaisMatrix',
                'description': 'Coordinate transformation internal to physical for atoms',
                'transforms': [Transformation(name='multiply_scalar', args=(BOHR_A,), kwargs={})]
            },
            'special_kpoint_indices': {
                'h5path': '/kpts/specialPointIndices',
                'transforms': [Transformation(name='shift_dataset', args=(-1,), kwargs={})]
            },
            'special_kpoint_labels': {
                'h5path': '/kpts/specialPointLabels',
                'transforms': [Transformation(name='convert_to_str', args=(), kwargs={})]
            },
            'fermi_energy': {
                'h5path':
                '/general',
                'description':
                'fermi_energy of the system',
                'transforms': [
                    Transformation(name='get_attribute', args=('lastFermiEnergy',), kwargs={}),
                    Transformation(name='get_first_element', args=(), kwargs={})
                ]
            },
            #... remaining attribute entries and the additional weight entries
            #(skipped when simple=True) are omitted in this excerpt
        }
    }

    return recipe
Each recipe can define the datasets and attributes entries (if one is not defined, an empty dict is returned in its place). Each entry in these sections has the same structure.

#Example entry from the FleurBands recipe
'fermi_energy': {
    'h5path': '/general',
    'description': 'fermi_energy of the system',
    'transforms': [
        Transformation(name='get_attribute', args=('lastFermiEnergy',), kwargs={}),
        Transformation(name='get_first_element', args=(), kwargs={})
    ]
}

All entries must define the key h5path. This specifies the initial dataset for this entry, which will be extracted from the given .hdf file. The key of the entry corresponds to the key under which the result will be stored in the output dictionary.

If the dataset should be transformed in some way after reading it, a number of transformation functions are pre-defined in the transforms module. These are added to an entry as a list of namedtuples under the key transforms (Transformation for general transformations; AttribTransformation for transformations using an attribute). General transformations can be used in all entries, while transformations using an attribute value can only be used in the datasets entries. Each namedtuple takes the name of the transformation function, the positional arguments (args) and the keyword arguments (kwargs) for the transformation. Attribute transformations also take the name of the attribute whose value should be passed to the transformation (attrib_name).
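
As an illustration, a minimal custom recipe built from the transformations shown above could look like this (the output keys eigenvalues_ev and fermi_energy are chosen freely; the h5paths are the ones from the banddos.hdf examples):

from masci_tools.io.parsers.hdf5.reader import Transformation
from masci_tools.util.constants import HTR_TO_EV

#Minimal custom recipe: one dataset entry and one attribute entry
MyBandsRecipe = {
    'datasets': {
        'eigenvalues_ev': {
            'h5path': '/Local/BS/eigenvalues',
            'transforms': [
                Transformation(name='multiply_scalar', args=(HTR_TO_EV,), kwargs={}),
                Transformation(name='flatten_array', args=(), kwargs={})
            ]
        }
    },
    'attributes': {
        'fermi_energy': {
            'h5path': '/general',
            'transforms': [
                Transformation(name='get_attribute', args=('lastFermiEnergy',), kwargs={}),
                Transformation(name='get_first_element', args=(), kwargs={})
            ]
        }
    }
}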

At the moment a number of transformation functions are pre-defined in the transforms module, among them:

General Transformations: get_attribute, get_name, get_shape, get_first_element, index_dataset, multiply_scalar, shift_dataset, split_array, flatten_array, calculate_norm, cumulative_sum, convert_to_str, periodic_elements and get_all_child_datasets

Transformations using an attribute: shift_by_attribute, multiply_by_attribute and repeat_array_by_attribute

Custom transformation functions can also be defined using the hdf5_transformation() decorator. For some transformations, e.g. get_all_child_datasets(), the result will be a subdictionary in the datasets or attributes dictionary. If this is not desired, the entry can include 'unpack_dict': True. With this, all keys from the resulting dict will be extracted after all transformations and put into the root dictionary.
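
A minimal sketch of a custom transformation, assuming the attribute_needed keyword and the import location used by the pre-defined transformations in masci-tools (both may differ between versions):

import numpy as np
from masci_tools.io.parsers.hdf5.transforms import hdf5_transformation  #import location assumed

@hdf5_transformation(attribute_needed=False)
def square_dataset(dataset):
    """Illustrative custom transformation: square the dataset elementwise"""
    return np.asarray(dataset)**2

It can then be referenced by name in a recipe entry, e.g. Transformation(name='square_dataset', args=(), kwargs={}).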