# pyAMReX

https://github.com/amrex-codes/pyamrex
pyAMReX is a Python binding for the AMReX block-structured adaptive mesh refinement (AMR) software framework. It bridges high-performance computing in AMReX-based codes with data science capabilities, providing zero-copy GPU data access for AI/ML workflows, in situ analysis, application coupling, and rapid massively parallel prototyping. The library exposes AMReX's C++ APIs to Python, enabling scientists to work with structured grids, particle data, and parallel decomposition directly from Python.

The core functionality includes managing multi-dimensional arrays (MultiFab), particle containers with various memory layouts (AoS+SoA and pure SoA), geometry and coordinate system definitions, box arrays for domain decomposition, and distribution mappings for parallel execution. pyAMReX supports CPU computations via NumPy as well as GPU acceleration through CuPy, Numba, and PyTorch integrations, making it ideal for coupling HPC simulations with machine learning pipelines.

## Initialization and Configuration

Initialize and finalize the AMReX library. Must be called before using any AMReX objects.

```python
import amrex.space3d as amr

# Initialize AMReX with configuration options
amr.initialize([
    "amrex.verbose=1",
    "amrex.throw_exception=1",
    "amrex.signal_handling=0",
    "amrex.the_arena_is_managed=0",
])

# Check configuration
print(f"AMReX version: {amr.Config.amrex_version}")
print(f"Has MPI: {amr.Config.have_mpi}")
print(f"Has GPU: {amr.Config.have_gpu}")
print(f"Precision: {amr.Config.precision}")
print(f"Space dimensions: {amr.Config.spacedim}")

# Check if AMReX is initialized
print(f"Is initialized: {amr.initialized()}")

# Always finalize when done
amr.finalize()
```

## Box and IntVect

Box defines a rectangular region in index space. IntVect represents integer coordinates.
```python
import amrex.space3d as amr

amr.initialize([])

# Create IntVect for coordinates
small_end = amr.IntVect(0, 0, 0)
big_end = amr.IntVect(63, 63, 63)

# Create a Box from IntVect bounds
box = amr.Box(small_end, big_end)

# Box properties
print(f"Small end: {box.small_end}")
print(f"Big end: {box.big_end}")
print(f"Number of points: {box.num_pts}")
print(f"Size: {box.size}")
print(f"Is cell centered: {box.cell_centered}")

# Box operations
grown_box = box.grow(2)  # Grow by 2 in all directions
print(f"Grown box: {grown_box}")

# Check if a point is contained
point = amr.IntVect(32, 32, 32)
print(f"Contains point: {box.contains(point)}")

# Iterate over box indices
count = 0
for i, j, k in box:
    count += 1
    if count > 5:
        break
print("First few indices iterated...")

amr.finalize()
```

## RealBox and Geometry

RealBox defines the physical domain bounds. Geometry combines index space with physical space.

```python
import amrex.space3d as amr

amr.initialize([])

# Define physical domain bounds
real_box = amr.RealBox(0.0, 0.0, 0.0, 1.0, 2.0, 5.0)
print(f"Physical domain: lo={real_box.lo()}, hi={real_box.hi()}")
print(f"Volume: {real_box.volume()}")

# Create index space box
box = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(127, 127, 127))

# Define coordinate system (0=Cartesian, 1=RZ, 2=Spherical)
coord = 0
is_periodic = [1, 1, 0]  # Periodic in x, y; not in z

# Create Geometry
geom = amr.Geometry(box, real_box, coord, is_periodic)

# Geometry properties
print(f"Problem size: {geom.ProbSize()}")
print(f"Problem length x: {geom.ProbLength(0)}")
print(f"Is periodic: {geom.isPeriodic()}")
print(f"Is any periodic: {geom.isAnyPeriodic()}")
print(f"Domain box: {geom.domain}")

# Get geometry data for computational kernels
gd = geom.data()
print(f"Cell sizes: {gd.CellSize()}")
print(f"Coordinate type: {gd.coord}")

amr.finalize()
```

## BoxArray and DistributionMapping

BoxArray manages domain decomposition. DistributionMapping assigns boxes to MPI ranks.
```python
import amrex.space3d as amr

amr.initialize([])

# Create a box for the domain
domain_box = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(63, 63, 63))

# Create BoxArray and decompose
ba = amr.BoxArray(domain_box)
ba.max_size(32)  # Maximum box size of 32^3

print(f"Number of boxes: {ba.size}")
print(f"Total points: {ba.numPts}")

# Access individual boxes
for i in range(ba.size):
    print(f"Box {i}: {ba.get(i)}")

# Create distribution mapping for parallel execution
dm = amr.DistributionMapping(ba)
print(f"Processor map: {dm.ProcessorMap()}")

# Can also create from explicit processor assignments
processor_list = amr.Vector_int([0, 0, 1, 1, 2, 2, 3, 3])
# dm_explicit = amr.DistributionMapping(processor_list)

amr.finalize()
```

## MultiFab Creation and Basic Operations

MultiFab is the primary data container for field data on block-structured grids.

```python
import amrex.space3d as amr
import numpy as np

amr.initialize([])

# Setup domain decomposition
box = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(63, 63, 63))
ba = amr.BoxArray(box)
ba.max_size(32)
dm = amr.DistributionMapping(ba)

# Create MultiFab: (BoxArray, DistributionMapping, num_components, num_ghost_cells)
num_components = 3
num_ghost = 1
mfab = amr.MultiFab(ba, dm, num_components, num_ghost)

# Set values
mfab.set_val(0.0, 0, num_components)  # Set all components to 0

# Set individual components
mfab.set_val(10.0, 0, 1)  # Component 0 = 10.0
mfab.set_val(20.0, 1, 1)  # Component 1 = 20.0
mfab.set_val(30.0, 2, 1)  # Component 2 = 30.0

# MultiFab properties
print(f"Number of components: {mfab.n_comp}")
print(f"Number of local boxes: {len(mfab)}")
print(f"Ghost vector: {mfab.n_grow_vect}")
print(f"Shape: {mfab.shape}")

# Reduction operations
print(f"Min of component 0: {mfab.min(0)}")
print(f"Max of component 0: {mfab.max(0)}")
print(f"Sum of component 0: {mfab.sum(0)}")

# Math operations
mfab.plus(5.0, 0, 1)  # Add 5 to component 0
mfab.mult(2.0, 0, 1)  # Multiply component 0 by 2
mfab.abs(0, 1)        # Absolute value of component 0

# Copy MultiFab
mfab_copy = mfab.copy()

# Clean up
mfab.clear()

amr.finalize()
```

## MultiFab Iteration and NumPy/CuPy Access

Iterate over MultiFab blocks and access data as NumPy or CuPy arrays for computation.

```python
import amrex.space3d as amr
import numpy as np

amr.initialize([])

# Setup
box = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(63, 63, 63))
ba = amr.BoxArray(box)
ba.max_size(32)
dm = amr.DistributionMapping(ba)
mfab = amr.MultiFab(ba, dm, 1, 1)
mfab.set_val(0.0, 0, 1)

# Method 1: Simple iteration with to_xp() (CPU/GPU agnostic)
for field in mfab.to_xp():
    field[()] = 42.0  # Set all values

# Method 2: Detailed iteration with MFIter
ngv = mfab.n_grow_vect
for mfi in mfab:
    # Get box with ghost cells
    bx = mfi.tilebox().grow(ngv)

    # Get Array4 and convert to numpy/cupy
    arr4 = mfab.array(mfi)
    field = arr4.to_xp()  # Returns numpy on CPU, cupy on GPU

    # Compute on the data (zero-copy access)
    field[()] = np.pi

    # Or use explicit numpy
    field_np = arr4.to_numpy()
    field_np[:, :, :, 0] = 3.14

# Method 3: Global indexing
mfab[()] = np.pi          # Set all cells including ghosts
mfab[...] = 42.0          # Set all valid cells
mfab[:, :, :, 0] = 100.0  # Set component 0

# Verify
print(f"Sum after operations: {mfab.sum(0)}")

mfab.clear()
amr.finalize()
```

## MultiFab GPU Operations with CuPy

Use CuPy for GPU-accelerated computations on MultiFab data.
```python
import amrex.space3d as amr

amr.initialize([])

# Check GPU availability
if amr.Config.have_gpu:
    import cupy as cp

    # Setup with device arena
    box = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(31, 31, 31))
    ba = amr.BoxArray(box)
    ba.max_size(32)
    dm = amr.DistributionMapping(ba)

    # Create MultiFab on GPU
    mfab = amr.MultiFab(
        ba, dm, 1, 0,
        amr.MFInfo().set_arena(amr.The_Device_Arena())
    )
    mfab.set_val(0.0, 0, 1)

    # Iterate and compute with CuPy
    for mfi in mfab:
        # Get CuPy array (zero-copy GPU access)
        marr_cupy = mfab.array(mfi).to_cupy(order="C")

        # GPU computation with CuPy
        marr_cupy[()] = 3.0

        # Use CuPy operations
        marr_cupy += cp.ones_like(marr_cupy) * 2.0

    # Verify on GPU
    result = mfab.sum_unique(comp=0, local=False)
    print(f"Sum on GPU: {result}")

    mfab.clear()
else:
    print("GPU not available, using CPU")

amr.finalize()
```

## ParticleContainer with Pure SoA Layout

Modern particle container using a Structure-of-Arrays layout for optimal performance.

```python
import amrex.space3d as amr
import numpy as np

amr.initialize([])

# Setup geometry
box = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(63, 63, 63))
real_box = amr.RealBox(0, 0, 0, 1, 1, 1)
geom = amr.Geometry(box, real_box, 0, [0, 0, 0])
ba = amr.BoxArray(box)
ba.max_size(32)
dm = amr.DistributionMapping(ba)

# Create pure SoA particle container
# ParticleContainer_pureSoA_<num_real_comps>_<num_int_comps>_default
pc = amr.ParticleContainer_pureSoA_8_2_default(geom, dm, ba)

# Set component names
pc.set_soa_compile_time_names(
    ["x", "y", "z", "ux", "uy", "uz", "w", "id"],  # Real components
    ["status", "type"]                             # Int components
)

# Initialize particles
myt = amr.ParticleInitType_pureSoA_8_2()
myt.real_array_data = [0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 1.0, 0.0]
myt.int_array_data = [1, 0]

num_particles = 1000
seed = 42
pc.init_random(num_particles, seed, myt, False, real_box)

print(f"Total particles: {pc.number_of_particles()}")
print(f"Finest level: {pc.finest_level}")

# Iterate over particles using simple syntax
for pti in pc.iterator(level=0):
    # Direct attribute access
    x = pti["x"]
    y = pti["y"]
    z = pti["z"]

    # Modify particle data
    pti["ux"][:] = x[:] * 0.1
    pti["uy"][:] = y[:] * 0.1
    pti["uz"][:] = z[:] * 0.1

# Redistribute particles after position changes
pc.redistribute()
print(f"Particles after redistribute: {pc.number_of_particles()}")

amr.finalize()
```

## ParticleContainer with Legacy AoS+SoA Layout

Legacy particle container using Array of Structures plus Structure of Arrays.

```python
import amrex.space3d as amr
import numpy as np

amr.initialize([])

# Setup
box = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(63, 63, 63))
real_box = amr.RealBox(0, 0, 0, 1, 1, 1)
geom = amr.Geometry(box, real_box, 0, [0, 0, 0])
ba = amr.BoxArray(box)
ba.max_size(32)
dm = amr.DistributionMapping(ba)

# Create AoS+SoA particle container
# ParticleContainer_<struct_real>_<struct_int>_<array_real>_<array_int>_default
pc = amr.ParticleContainer_2_1_3_1_default(geom, dm, ba)

# Initialize with a particle template
myt = amr.ParticleInitType_2_1_3_1()
myt.real_struct_data = [0.5, 0.6]      # AoS real data
myt.int_struct_data = [1]              # AoS int data
myt.real_array_data = [0.1, 0.2, 0.3]  # SoA real data
myt.int_array_data = [0]               # SoA int data
pc.init_random(100, 42, myt, False, real_box)

# Add runtime components
pc.add_real_comp("weight", True)
pc.add_int_comp("cell_id", True)

# Iterate over particles
for lvl in range(pc.finest_level + 1):
    for pti in pc.iterator(level=lvl):
        # Access Array of Structs (positions, idcpu, struct data)
        aos = pti.aos().to_numpy()

        # Access Structure of Arrays (additional components)
        soa = pti.soa().to_xp()

        # Print particle positions from the AoS
        print(f"First particle x: {aos[0]['x']}")

        # Modify SoA real data
        for name, arr in soa.real.items():
            arr[:] = 42.0

        # Modify SoA int data
        for name, arr in soa.int.items():
            arr[:] = 1

print(f"Total particles: {pc.number_of_particles()}")

amr.finalize()
```

## Particles to Pandas DataFrame

Convert particle data to a pandas DataFrame for analysis.
```python
import amrex.space3d as amr

amr.initialize([])

# Setup particle container
box = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(63, 63, 63))
real_box = amr.RealBox(0, 0, 0, 1, 1, 1)
geom = amr.Geometry(box, real_box, 0, [0, 0, 0])
ba = amr.BoxArray(box)
ba.max_size(32)
dm = amr.DistributionMapping(ba)

pc = amr.ParticleContainer_pureSoA_8_0_default(geom, dm, ba)
pc.set_soa_compile_time_names(
    ["x", "y", "z", "px", "py", "pz", "w", "id"],
    []
)

myt = amr.ParticleInitType_pureSoA_8_0()
myt.real_array_data = [0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 1.0, 0.0]
myt.int_array_data = []
pc.init_random(50, 42, myt, False, real_box)

# Convert to pandas DataFrame (creates a copy)
try:
    import pandas as pd

    df = pc.to_df(local=True)  # Local particles only
    if df is not None:
        print(f"DataFrame columns: {list(df.columns)}")
        print(f"Number of particles: {len(df)}")
        print(df.head())

        # Analyze with pandas
        print(f"Mean x position: {df['x'].mean()}")
        print(f"Position std: {df[['x', 'y', 'z']].std()}")
except ImportError:
    print("pandas not available")

amr.finalize()
```

## ParmParse for Runtime Parameters

Parse and manage runtime configuration parameters.
```python
import amrex.space3d as amr

amr.initialize([
    "myapp.nx=64",
    "myapp.ny=64",
    "myapp.nz=128",
    "myapp.dt=0.001",
    "myapp.verbose=1",
    "physics.gravity=-9.81",
])

# Create ParmParse with a prefix
pp = amr.ParmParse("myapp")

# Query parameters
nx = pp.get_int("nx")
dt = pp.get_real("dt")
verbose = pp.get_bool("verbose")

print(f"Grid: {nx}x{pp.get_int('ny')}x{pp.get_int('nz')}")
print(f"Time step: {dt}")
print(f"Verbose: {verbose}")

# Query with default (returns tuple: (found, value))
found, max_steps = pp.query_int("max_steps")
if not found:
    max_steps = 1000
print(f"Max steps: {max_steps}")

# Access a different prefix
pp_physics = amr.ParmParse("physics")
gravity = pp_physics.get_real("gravity")
print(f"Gravity: {gravity}")

# Add new parameters programmatically
pp.add("output_interval", 100)
pp.addarr("boundaries", amr.Vector_int([0, 0, 1, 1, 0, 0]))

# Convert all parameters to a dictionary
all_params = amr.ParmParse("").to_dict()
print(f"All parameters: {all_params}")

# Print a formatted table
amr.ParmParse("").pretty_print_table()

amr.finalize()
```

## Writing Plotfiles

Write simulation data to the AMReX plotfile format for visualization.
```python
import amrex.space3d as amr

amr.initialize([])

# Setup
box = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(31, 31, 31))
real_box = amr.RealBox(0, 0, 0, 1, 1, 1)
geom = amr.Geometry(box, real_box, 0, [0, 0, 0])
ba = amr.BoxArray(box)
ba.max_size(16)
dm = amr.DistributionMapping(ba)

# Create MultiFab with data
mfab = amr.MultiFab(ba, dm, 3, 0)
mfab.set_val(1.0, 0, 1)  # density
mfab.set_val(0.5, 1, 1)  # velocity_x
mfab.set_val(0.0, 2, 1)  # velocity_y

# Variable names
varnames = amr.Vector_string(["density", "velocity_x", "velocity_y"])

# Write plotfile
time = 0.0
level_step = 0
plotfile_name = amr.concatenate("plt", level_step, 5)  # "plt00000"
amr.write_single_level_plotfile(
    plotfile_name, mfab, varnames, geom, time, level_step
)
print(f"Wrote plotfile: {plotfile_name}")

# Particle containers can also write plotfiles:
# pc.write_plotfile("particles", "particle_data")

mfab.clear()
amr.finalize()
```

## Embedded Boundaries (EB)

Work with embedded boundaries for complex geometries.

```python
import amrex.space3d as amr

amr.initialize([])

# Check if EB is available
if amr.Config.have_eb:
    # Setup geometry
    box = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(63, 63, 63))
    real_box = amr.RealBox(-1, -1, -1, 1, 1, 1)
    geom = amr.Geometry(box, real_box, 0, [0, 0, 0])

    # Build EB (implicit function must be defined elsewhere)
    # amr.EB2_Build(geom, 0, 1, 4, True, True, 0)

    ba = amr.BoxArray(box)
    ba.max_size(32)
    dm = amr.DistributionMapping(ba)

    # Create EB-aware factory
    # eb_factory = amr.makeEBFabFactory(
    #     geom, ba, dm,
    #     amr.Vector_int([2, 2, 2]),
    #     amr.EBSupport.full
    # )

    # Access volume fractions
    # vfrac = eb_factory.getVolFrac()

    print("EB support available")
else:
    print("EB support not compiled (add -DAMReX_EB=ON)")

amr.finalize()
```

## MPI Parallel Operations

Use MPI for distributed parallel computing.
```python
import amrex.space3d as amr

amr.initialize([])

# Check MPI status
if amr.Config.have_mpi:
    from mpi4py import MPI

    # Get parallel info
    my_rank = amr.ParallelDescriptor.MyProc()
    num_procs = amr.ParallelDescriptor.NProcs()
    is_io_proc = amr.ParallelDescriptor.IOProcessor()

    if is_io_proc:
        print(f"Running on {num_procs} MPI ranks")

    # Only the IO processor prints
    amr.Print(f"Hello from rank {my_rank}")

    # Setup distributed MultiFab
    box = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(127, 127, 127))
    ba = amr.BoxArray(box)
    ba.max_size(32)
    dm = amr.DistributionMapping(ba)
    mfab = amr.MultiFab(ba, dm, 1, 1)
    mfab.set_val(float(my_rank), 0, 1)

    # Global reduction
    global_sum = mfab.sum(0)
    if is_io_proc:
        print(f"Global sum: {global_sum}")

    # Fill ghost cells across MPI boundaries
    # mfab.FillBoundary(geom.periodicity())

    mfab.clear()
else:
    print("MPI not available")

amr.finalize()
```

## Summary

pyAMReX serves as the bridge between high-performance AMReX C++ simulations and the Python data science ecosystem. The primary use cases include enhancing existing AMReX applications with Python-based AI/ML capabilities, rapid prototyping of new simulation codes, in situ analysis and visualization during simulations, and coupling multiple physics codes through Python. The zero-copy data access enables efficient integration with NumPy, CuPy, PyTorch, and other scientific Python libraries without memory transfer overhead.

For production applications, pyAMReX is used in codes like WarpX (electromagnetic particle-in-cell) and ImpactX (beam dynamics). The library supports both CPU and GPU execution, with the same Python code working transparently on either platform through the `to_xp()` interface. Key integration patterns include iterating over local MultiFab blocks with MFIter, accessing particle data through ParticleContainer iterators, and using ParmParse for runtime configuration.
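The zero-copy access that `to_xp()` and `to_numpy()` provide rests on array-view semantics: the returned array aliases the block's storage, so in-place writes land directly in the field. A minimal NumPy-only sketch of that distinction (no AMReX required; `block` and `field` are hypothetical stand-ins for one MultiFab block's storage and its view):

```python
import numpy as np

# "block" stands in for one MultiFab block's storage; "field" plays the
# role of the array returned by to_numpy()/to_xp(): a view, not a copy.
block = np.zeros((4, 4, 4, 1))
field = block.view()

# In-place writes through the view modify the underlying storage,
# which is what makes the MFIter loops above zero-copy.
field[()] = np.pi
assert block.flat[0] == np.pi

# A copy, by contrast, leaves the original storage untouched.
copied = block.copy()
copied[()] = 0.0
assert block.flat[0] == np.pi
```

On GPU builds, `to_cupy()` follows the same aliasing rule with device memory, which is why no transfer occurs between the simulation and the Python side.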
The combination of AMReX's block-structured AMR capabilities with Python's ease of use makes pyAMReX ideal for modern computational science workflows that require both performance and flexibility.
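The performance difference between the two particle layouts above (legacy AoS+SoA versus pure SoA) can likewise be illustrated with plain NumPy; the component names (`x`, `ux`, `w`) mirror the pure-SoA example, and the data here is made up:

```python
import numpy as np

# AoS: one record per particle (a NumPy structured array).
aos = np.zeros(4, dtype=[("x", "f8"), ("ux", "f8"), ("w", "f8")])

# Pure SoA: one contiguous array per component.
soa = {"x": np.zeros(4), "ux": np.zeros(4), "w": np.zeros(4)}

# A per-component update touches one contiguous buffer in SoA ...
soa["ux"][:] = soa["x"] * 0.1
# ... but strided, interleaved storage in AoS, which is why the
# pure-SoA layout vectorizes better on both CPU and GPU.
aos["ux"][:] = aos["x"] * 0.1

assert soa["ux"].flags["C_CONTIGUOUS"]
assert not aos["ux"].flags["C_CONTIGUOUS"]
```

This is only an illustration of the memory layouts, not of the pyAMReX API; in pyAMReX the layout is selected by choosing `ParticleContainer_pureSoA_*` versus the legacy `ParticleContainer_*` types shown earlier.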