# openPMD-api

openPMD-api is a C++ and Python library for scientific I/O that implements the openPMD (Open Standard for Particle-Mesh Data) metadata schema. It provides a unified, portable interface for reading and writing scientific simulation data, particularly particle and mesh data from physics simulations. The library supports multiple I/O backends including HDF5, ADIOS2, and JSON/TOML, enabling both serial and MPI-parallel workflows across different platforms.

The core functionality centers around the `Series` class, which represents an openPMD data file or collection of files. Each Series contains `Iteration` objects representing time steps, with each Iteration containing `meshes` (field data on grids) and `particles` (particle species data). The library provides both random-access and streaming/linear access patterns, making it suitable for both post-processing analysis and in-situ data processing in running simulations.

## Core API

### Series - Opening Data Files

The Series class is the entry point for all openPMD I/O operations. It manages file handles, iterations, and backend configuration.

```cpp
#include <openPMD/openPMD.hpp>

#include <iostream>

// Open a series for reading
openPMD::Series read_series("data%T.h5", openPMD::Access::READ_ONLY);
std::cout << "openPMD version: " << read_series.openPMD() << "\n";
std::cout << "Number of iterations: " << read_series.snapshots().size() << "\n";

// Open a series for writing with linear access (streaming-compatible)
openPMD::Series write_series("output.bp", openPMD::Access::CREATE_LINEAR);
write_series.setAuthor("My Simulation");
write_series.setSoftware("MyCode", "1.0.0");

// Open with JSON/TOML configuration
openPMD::Series configured_series(
    "data.h5",
    openPMD::Access::READ_ONLY,
    R"({"defer_iteration_parsing": true})");

// MPI-parallel opening
// MPI_Comm comm = MPI_COMM_WORLD;
// openPMD::Series parallel_series("data.bp", openPMD::Access::CREATE_LINEAR, comm);
```

### Series (Python)

Python bindings provide an equivalent interface with Pythonic conventions.

```python
import json

import openpmd_api as io

# Open series for reading
series = io.Series("data%T.h5", io.Access.read_only)
print(f"openPMD version: {series.openPMD}")
print(f"Iterations: {list(series.snapshots().keys())}")

# Open for writing
series = io.Series("output.h5", io.Access.create_linear)
series.author = "My Simulation"
series.set_software("MyCode", "1.0.0")

# With deferred parsing for large datasets (options are passed as a JSON string)
series = io.Series(
    "data%T.h5",
    io.Access.read_only,
    json.dumps({"defer_iteration_parsing": True}))

# Always close when done
series.close()
```

### Snapshots/Iterations - Accessing Time Steps

Snapshots (iterations) represent discrete time steps in a simulation. Use `series.snapshots()` to access them.

```cpp
#include <openPMD/openPMD.hpp>

#include <iostream>

openPMD::Series series("data%T.h5", openPMD::Access::READ_ONLY);

// Iterate over all iterations
for (auto &[step, iteration] : series.snapshots())
{
    std::cout << "Iteration " << step << "\n";
    std::cout << "  Time: " << iteration.time<double>() << "\n";
    std::cout << "  Meshes: " << iteration.meshes.size() << "\n";
    std::cout << "  Particle species: " << iteration.particles.size() << "\n";

    // Close iteration to free resources (important for streaming)
    iteration.close();
}

// Access specific iteration
auto &iteration_100 = series.snapshots()[100];
iteration_100.open(); // Required with defer_iteration_parsing
// ... work with data ...
iteration_100.close();
```
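An iteration's `time` and `dt` attributes are stored in the iteration's internal unit system; multiplying by `time_unit_SI` (C++: `timeUnitSI()`) converts them to seconds. A minimal Python sketch of that conversion, assuming a series like the one above:

```python
import openpmd_api as io

series = io.Series("data%T.h5", io.Access.read_only)

for idx, iteration in series.snapshots().items():
    # Internal time multiplied by the SI conversion factor gives seconds
    t_si = iteration.time * iteration.time_unit_SI
    dt_si = iteration.dt * iteration.time_unit_SI
    print(f"Iteration {idx}: t = {t_si:.3e} s, dt = {dt_si:.3e} s")
    iteration.close()

series.close()
```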
### Snapshots/Iterations (Python)

```python
import json

import openpmd_api as io

series = io.Series(
    "data%T.h5",
    io.Access.read_only,
    json.dumps({"defer_iteration_parsing": True}))

# Iterate over all snapshots
for idx, iteration in series.snapshots().items():
    iteration.open()  # Required with defer_iteration_parsing
    print(f"Iteration {idx}: time={iteration.time}")
    print(f"  Meshes: {list(iteration.meshes.keys())}")
    print(f"  Particles: {list(iteration.particles.keys())}")
    iteration.close()

# Access specific iteration
i = series.snapshots()[100].open()
print(f"Meshes in iteration 100: {list(i.meshes.keys())}")
i.close()

series.close()
```

### Writing Mesh Data

Mesh records store field data on structured grids (e.g., electromagnetic fields, density).

```cpp
#include <openPMD/openPMD.hpp>

#include <numeric>
#include <vector>

openPMD::Series series("mesh_data.h5", openPMD::Access::CREATE_LINEAR);

// Create iteration and mesh
auto &iteration = series.snapshots()[1];
openPMD::Mesh rho = iteration.meshes["rho"];

// Set mesh attributes
rho.setGeometry(openPMD::Mesh::Geometry::cartesian);
rho.setAxisLabels({"x", "y", "z"});
rho.setGridSpacing(std::vector<double>{1.0, 1.0, 1.0});
rho.setGridGlobalOffset({0.0, 0.0, 0.0});
rho.setGridUnitSI(1.0e-6); // grid in micrometers

// Prepare data (100x100x50 grid)
std::vector<double> data(100 * 100 * 50);
std::iota(data.begin(), data.end(), 0.0);

// Define dataset and store
openPMD::Dataset dataset(openPMD::determineDatatype<double>(), {100, 100, 50});
rho.resetDataset(dataset);
rho.storeChunk(data, {0, 0, 0}, {100, 100, 50});

iteration.close();
series.close();
```

### Writing Mesh Data (Python)

```python
import numpy as np
import openpmd_api as io

series = io.Series("mesh_data.h5", io.Access.create_linear)

# Create iteration and mesh
iteration = series.snapshots()[1]
rho = iteration.meshes["rho"]

# Prepare data
data = np.arange(100 * 100 * 50, dtype=np.float64).reshape(100, 100, 50)

# Define dataset
dataset = io.Dataset(data.dtype, data.shape)
rho.reset_dataset(dataset)

# Store using slice notation
rho[()] = data  # Store entire array
# Or: rho[0:50, 0:50, :] = data[0:50, 0:50, :]  # Partial store

iteration.close()
series.close()
```
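Vector fields are stored as one record component per axis. The following sketch (the file name `vector_mesh.h5` and the zero-filled field values are illustrative) writes an `E` mesh with `x`/`y`/`z` components and sets per-component unit and staggering metadata:

```python
import numpy as np
import openpmd_api as io

series = io.Series("vector_mesh.h5", io.Access.create_linear)
iteration = series.snapshots()[0]

E = iteration.meshes["E"]
E.axis_labels = ["x", "y", "z"]
E.grid_spacing = [1.0, 1.0, 1.0]
E.grid_global_offset = [0.0, 0.0, 0.0]

data = np.zeros((32, 32, 32), dtype=np.float64)  # illustrative field values
dataset = io.Dataset(data.dtype, data.shape)

for dim in ["x", "y", "z"]:
    component = E[dim]
    component.reset_dataset(dataset)
    component.unit_SI = 1.0  # data already in SI units
    component.position = [0.0, 0.0, 0.0]  # node-centered within a cell
    component[()] = data

iteration.close()
series.close()
```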
### Reading Mesh Data

Load field data from openPMD files with support for partial reads and lazy loading.

```cpp
#include <openPMD/openPMD.hpp>

#include <iostream>

openPMD::Series series("data%T.h5", openPMD::Access::READ_ONLY);
auto &iteration = series.snapshots()[100].open();
auto E_x = iteration.meshes["E"]["x"];

// Get metadata
openPMD::Extent extent = E_x.getExtent();
std::cout << "Shape: (" << extent[0] << ", " << extent[1] << ", "
          << extent[2] << ")\n";
std::cout << "Datatype: " << E_x.getDatatype() << "\n";

// Load full dataset
auto all_data = E_x.loadChunk<double>();
series.flush(); // Execute deferred load
std::cout << "First value: " << all_data.get()[0] << "\n";

// Load partial chunk
openPMD::Offset offset = {10, 10, 0};
openPMD::Extent chunk_extent = {20, 20, 5};
auto chunk = E_x.loadChunk<double>(offset, chunk_extent);
series.flush();
// Access chunk.get()[index]

iteration.close();
series.close();
```

### Reading Mesh Data (Python)

```python
import openpmd_api as io

series = io.Series("data%T.h5", io.Access.read_only)
iteration = series.snapshots()[100]
E_x = iteration.meshes["E"]["x"]

print(f"Shape: {E_x.shape}")
print(f"Dtype: {E_x.dtype}")

# Load full dataset
all_data = E_x.load_chunk()
series.flush()
print(f"Data shape: {all_data.shape}")
print(f"First 5 values: {all_data.flat[:5]}")

# Load partial chunk using slice notation
chunk = E_x[10:30, 10:30, 0:5]
series.flush()
print(f"Chunk shape: {chunk.shape}")

iteration.close()
series.close()
```

### Writing Particle Data

Store particle species data including positions, momenta, and custom attributes.

```python
import numpy as np
from openpmd_api import Access, Dataset, Series, Unit_Dimension

series = Series("particles.h5", Access.create)
iteration = series.snapshots()[0]

# Create particle species
electrons = iteration.particles["electrons"]
n_particles = 1000

# Position data
position_x = np.random.rand(n_particles).astype(np.float32)
position_y = np.random.rand(n_particles).astype(np.float32)
position_z = np.random.rand(n_particles).astype(np.float32)

dataset = Dataset(position_x.dtype, position_x.shape)

# Set up position components
for dim, data in [("x", position_x), ("y", position_y), ("z", position_z)]:
    electrons["position"][dim].reset_dataset(dataset)
    electrons["position"][dim].store_chunk(data)

# Position offset (required by the openPMD standard)
offset_data = np.arange(n_particles, dtype=np.uint64)
offset_dataset = Dataset(offset_data.dtype, offset_data.shape)
for dim in ["x", "y", "z"]:
    electrons["positionOffset"][dim].reset_dataset(offset_dataset)
    electrons["positionOffset"][dim].store_chunk(offset_data)

# Weighting (constant value for all particles)
electrons["weighting"] \
    .reset_dataset(Dataset(np.dtype("float32"), [n_particles])) \
    .make_constant(1.0e-15)

# Custom record with unit dimension (a displacement is a length: L^1)
electrons["displacement"].unit_dimension = {Unit_Dimension.L: 1}
electrons["displacement"].unit_SI = 1.e-6
electrons["displacement"].reset_dataset(Dataset(np.dtype("float64"), [n_particles]))
electrons["displacement"].make_constant(42.43)

iteration.close()
series.close()
```
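`store_chunk` only enqueues a write; the actual I/O happens at the next `flush()` (or when the iteration is closed), so source arrays must stay alive until then. Chunks may also target disjoint regions of one dataset, which is useful when particles are produced in batches. A minimal sketch, with an illustrative file name and sizes:

```python
import numpy as np
import openpmd_api as io

series = io.Series("batched_particles.h5", io.Access.create)
electrons = series.snapshots()[0].particles["electrons"]

n_total = 2000
ids = electrons["id"]
ids.reset_dataset(io.Dataset(np.dtype("uint64"), [n_total]))

# Queue two disjoint chunks; keep both arrays alive until flush()
first_half = np.arange(0, 1000, dtype=np.uint64)
second_half = np.arange(1000, 2000, dtype=np.uint64)
ids.store_chunk(first_half, [0], [1000])
ids.store_chunk(second_half, [1000], [1000])
series.flush()  # Both queued writes execute here

series.close()
```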
### Reading Particle Data

Load particle data with support for pandas/dask DataFrames.

```python
import numpy as np
import openpmd_api as io

series = io.Series("data%T.h5", io.Access.read_only)
iteration = series.snapshots()[100]
electrons = iteration.particles["electrons"]

# Load specific component
charge = electrons["charge"].load_chunk()
series.flush()
print(f"First 5 charges: {charge[:5]}")

# Load position data
pos_x = electrons["position"]["x"].load_chunk()
pos_y = electrons["position"]["y"].load_chunk()
momentum_z = electrons["momentum"]["z"].load_chunk()
series.flush()

# Convert to pandas DataFrame (requires pandas)
df = electrons.to_df()
print(df.head())
print(df.columns)

# With custom attributes
df = electrons.to_df(attributes=["particleShape"])

# Load only first 100 particles
df = electrons.to_df(slice=np.s_[:100])

# Convert all iterations to DataFrame
df_all = series.to_df("electrons", attributes=["particleShape"])

iteration.close()
series.close()
```

### Parallel I/O with MPI

Write and read data in parallel using MPI domain decomposition.

```cpp
#include <openPMD/openPMD.hpp>

#include <mpi.h>

#include <vector>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int mpi_size, mpi_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);

    // Each rank writes a 10x300 slice
    std::vector<float> local_data(10 * 300, float(mpi_rank));

    // Open with MPI communicator
    openPMD::Series series(
        "parallel_data.bp", openPMD::Access::CREATE_LINEAR, MPI_COMM_WORLD);

    series.snapshots()[1].open();
    auto mymesh = series.snapshots()[1].meshes["mymesh"];

    // Global dataset: [mpi_size * 10, 300]
    openPMD::Extent global_extent = {10ul * mpi_size, 300};
    openPMD::Dataset dataset(openPMD::determineDatatype<float>(), global_extent);
    mymesh.resetDataset(dataset);

    // Each rank writes its slice
    openPMD::Offset offset = {10ul * mpi_rank, 0};
    openPMD::Extent local_extent = {10, 300};
    mymesh.storeChunk(local_data, offset, local_extent);

    series.snapshots()[1].close();
    series.close();

    MPI_Finalize();
    return 0;
}
```

### Parallel I/O with MPI (Python)

```python
from mpi4py import MPI
import numpy as np
import openpmd_api as io

comm = MPI.COMM_WORLD
mpi_size = comm.Get_size()
mpi_rank = comm.Get_rank()

# Local data for this rank
local_data = np.full((10, 300), float(mpi_rank), dtype=np.float32)

# Open with MPI communicator
series = io.Series("parallel_data.bp", io.Access.create_linear, comm)

iteration = series.snapshots()[1]
mymesh = iteration.meshes["mymesh"]

# Global dataset definition
global_extent = [10 * mpi_size, 300]
dataset = io.Dataset(local_data.dtype, global_extent)
mymesh.reset_dataset(dataset)

# Each rank writes its portion
offset = [10 * mpi_rank, 0]
mymesh.store_chunk(local_data, offset, local_data.shape)

iteration.close()
series.close()
```
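Reading back follows the same decomposition: open the series with the communicator and let each rank request only its slice via an offset/extent `load_chunk`. A minimal sketch, assuming the scalar mesh written above:

```python
from mpi4py import MPI
import openpmd_api as io

comm = MPI.COMM_WORLD
mpi_rank = comm.Get_rank()

series = io.Series("parallel_data.bp", io.Access.read_only, comm)
mymesh = series.snapshots()[1].meshes["mymesh"]

# Each rank reads only the rows it wrote
offset = [10 * mpi_rank, 0]
extent = [10, 300]
local_data = mymesh.load_chunk(offset, extent)
series.flush()

print(f"Rank {mpi_rank}: first value = {local_data[0, 0]}")
series.close()
```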
### Streaming I/O

Use the ADIOS2 SST engine for producer-consumer streaming workflows.

```cpp
// Writer (producer)
#include <openPMD/openPMD.hpp>

#include <numeric>
#include <vector>

openPMD::Series series(
    "stream.sst",
    openPMD::Access::CREATE_LINEAR,
    R"({
        "adios2": {
            "engine": {
                "parameters": {
                    "DataTransport": "WAN"
                }
            }
        }
    })");

for (size_t i = 0; i < 100; ++i)
{
    auto iteration = series.snapshots()[i];
    auto positions = iteration.particles["e"]["position"];

    std::vector<double> data(1000);
    std::iota(data.begin(), data.end(), i * 1000.0);

    openPMD::Dataset dataset(openPMD::Datatype::DOUBLE, {1000});
    for (auto dim : {"x", "y", "z"})
    {
        positions[dim].resetDataset(dataset);
        positions[dim].storeChunk(data, {0}, {1000});
    }

    iteration.close(); // Publishes this step
}
series.close();
```

### Streaming I/O (Python Reader)

```python
import json

import openpmd_api as io

config = {"adios2": {"engine": {"parameters": {"DataTransport": "WAN"}}}}

series = io.Series("stream.sst", io.Access.read_linear, json.dumps(config))

# Iterate through streamed iterations
for idx, iteration in series.snapshots().items():
    print(f"Received iteration {idx}")
    positions = iteration.particles["e"]["position"]
    for dim in ["x", "y", "z"]:
        data = positions[dim].load_chunk()
    series.flush()  # Fill the queued loads before releasing the step
    iteration.close()  # Releases step, advances to next

series.close()
```

### Compression Configuration

Configure compression for HDF5 and ADIOS2 backends using JSON/TOML.

```cpp
#include <openPMD/openPMD.hpp>

#include <string>

// ADIOS2 with Blosc compression
std::string adios2_config = R"(
backend = "adios2"

[[adios2.dataset.operators]]
type = "blosc"
parameters.doshuffle = "BLOSC_BITSHUFFLE"
parameters.clevel = 5
)";
openPMD::Series series_bp(
    "compressed.bp", openPMD::Access::CREATE_LINEAR, adios2_config);

// HDF5 with zlib compression
std::string hdf5_config = R"(
backend = "hdf5"

[hdf5.dataset]
chunks = "auto"

[hdf5.dataset.permanent_filters]
type = "zlib"
aggression = 5
)";
openPMD::Series series_h5(
    "compressed.h5", openPMD::Access::CREATE_LINEAR, hdf5_config);

// Dataset-specific compression (meshes vs. particles)
std::string mixed_config = R"(
backend = "adios2"

[[adios2.dataset]]
select = "meshes/.*"

[[adios2.dataset.cfg.operators]]
type = "blosc"
parameters.clevel = 9

[[adios2.dataset]]
select = "particles/.*"

[[adios2.dataset.cfg.operators]]
type = "bzip2"
parameters.clevel = 5
)";
```

### Dask Integration

Use Dask for parallel, out-of-core analysis of large datasets.

```python
import dask.array as da
import openpmd_api as io

series = io.Series("data%T.h5", io.Access.read_only)
iteration = series.snapshots()[400]

# Convert mesh to dask array
E = iteration.meshes["E"]
E_x = E["x"].to_dask_array()
E_y = E["y"].to_dask_array()
E_z = E["z"].to_dask_array()

# Lazy computation of field intensity
intensity = E_x**2 + E_y**2 + E_z**2
max_intensity = intensity.max().compute()  # Triggers computation
print(f"Maximum intensity: {max_intensity}")

# Find location of maximum
idx_max = da.argwhere(intensity == max_intensity).compute()[0]
print(f"Location of maximum: {idx_max}")

# Convert particles to dask DataFrame
electrons = iteration.particles["electrons"]
df = electrons.to_dask(attributes=["particleShape"])

# Compute mean momentum
mean_momentum_z = df["momentum_z"].mean().compute()
print(f"Mean momentum_z: {mean_momentum_z}")

# Histogram computation
h, bins = da.histogram(
    df["momentum_z"].to_dask_array(),
    bins=50,
    range=[-8.0e-23, 8.0e-23],
    weights=df["weighting"].to_dask_array())
histogram = h.compute()

series.close()
```
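A common follow-up to such an analysis is persisting derived quantities as a new openPMD series. The sketch below (the output file name `derived.h5` and mesh name `intensity` are illustrative) materializes the intensity computed above and writes it back with the plain writing API shown earlier:

```python
import openpmd_api as io

in_series = io.Series("data%T.h5", io.Access.read_only)
E = in_series.snapshots()[400].meshes["E"]

# Lazy dask expression, materialized as a numpy array
intensity = sum(E[dim].to_dask_array() ** 2 for dim in ["x", "y", "z"])
result = intensity.compute()

out_series = io.Series("derived.h5", io.Access.create)
out_iteration = out_series.snapshots()[400]
mesh = out_iteration.meshes["intensity"]
mesh.reset_dataset(io.Dataset(result.dtype, result.shape))
mesh[()] = result
out_iteration.close()

out_series.close()
in_series.close()
```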
### Span API for Zero-Copy Writing

Use the Span API to write directly into backend buffers, avoiding memory copies.

```cpp
#include <openPMD/openPMD.hpp>

#include <numeric>

openPMD::Series series("span_write.bp", openPMD::Access::CREATE_LINEAR);

for (size_t i = 0; i < 10; ++i)
{
    auto iteration = series.snapshots()[i];
    auto &E = iteration.meshes["E"];

    for (auto dim : {"x", "y"})
    {
        auto &component = E[dim];
        component.resetDataset({openPMD::Datatype::FLOAT, {100, 100}});

        // Get a view into the backend buffer
        auto buffer_view =
            component.storeChunk<float>({0, 0}, {100, 100}).currentBuffer();

        // Fill the buffer directly
        std::iota(
            buffer_view.data(),
            buffer_view.data() + buffer_view.size(),
            float(i * 10000));
    }
    iteration.close();
}
series.close();
```

## Summary

openPMD-api provides a comprehensive solution for scientific data I/O in high-performance computing environments. The primary use cases include storing simulation output from particle-in-cell codes, plasma physics simulations, and other mesh/particle-based scientific applications. The library excels at handling large-scale datasets through MPI-parallel I/O, supports both HDF5 for archival storage and ADIOS2 for streaming workflows, and integrates with Python data science tools such as pandas and dask for analysis.

The typical integration pattern involves creating a `Series` object at simulation startup, writing `Iteration` objects at each time step containing `meshes` (field data) and `particles` (particle species), and closing the series at simulation end. For analysis, the pattern is reversed: open an existing series, iterate through snapshots, load chunks of data as needed, and perform computations. The deferred I/O model (where data operations are queued and executed on `flush()`) enables optimizations like batching operations and provides a consistent interface across different backends. For streaming scenarios, the linear access pattern with `CREATE_LINEAR`/`READ_LINEAR` modes ensures proper synchronization between data producers and consumers.
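Because of this deferred model, several `load_chunk` calls can be queued and then served by a single `flush()`, which lets a backend batch the reads. A minimal sketch of that pattern:

```python
import openpmd_api as io

series = io.Series("data%T.h5", io.Access.read_only)
meshes = series.snapshots()[100].meshes

# Queue several loads; no data is read yet
pending = {dim: meshes["E"][dim].load_chunk() for dim in ["x", "y", "z"]}

# One flush executes all queued reads, letting the backend batch them
series.flush()

for dim, data in pending.items():
    print(f"E_{dim}: shape {data.shape}")

series.close()
```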