PICSAR
https://github.com/ecp-warpx/picsar
# PICSAR

PICSAR (Particle-In-Cell Scalable Application Resource) is a high-performance library designed to help scientists port their Particle-In-Cell (PIC) codes to exascale computers. It provides highly optimized implementations of key PIC loop functionalities, including Maxwell solvers, particle pushers, current deposition, and field gathering routines. The library exploits three levels of parallelism: distributed memory (MPI), shared memory (OpenMP), and vectorization (SIMD).

PICSAR can be used in two modes: as a standalone Fortran application or as a Python module that accelerates existing codes like WARP. The library includes arbitrary-order finite-difference Maxwell solvers (Yee FDTD, Karkkainen, PSATD spectral), Boris and Vay particle pushers, energy-conserving field gathering, and high-order current/charge deposition (up to 3rd-order shape factors). It also includes a QED multi-physics library for advanced plasma physics simulations.

## Running Standalone Fortran Simulations

Execute PICSAR simulations using MPI with an input configuration file that defines domain decomposition, grid parameters, species properties, and solver settings.

```bash
# Compile PICSAR with the default GNU compiler
make MODE=prod

# Run with 4 MPI processes
mpirun -np 4 ./fortran_bin/picsar

# Run with a specific tile configuration
mpirun -np 8 ./fortran_bin/picsar -ntilex 5 -ntiley 5 -ntilez 5 -distr 1

# Set OpenMP threads per MPI task
export OMP_NUM_THREADS=4
export OMP_STACKSIZE=32M
export OMP_SCHEDULE=dynamic
mpirun -np 4 ./fortran_bin/picsar
```

## Input File Configuration (input_file.pixr)

Define complete PIC simulation parameters through section-based input files specifying domain decomposition, solver algorithms, plasma properties, and particle species.
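The `nox`/`noy`/`noz` settings in the solver section select the order of the B-spline shape factors used for current deposition and field gathering (1 = linear, 2 = quadratic, 3 = cubic). As a minimal illustrative sketch of what these orders mean, one-axis weights can be written in plain NumPy; the function below is a hypothetical helper for explanation only, not PICSAR's vectorized routines:

```python
import numpy as np

def shape_factors(xi, order):
    """1D B-spline shape factors for a particle at fractional cell
    position xi in [0, 1); returns weights on order+1 grid nodes.
    Illustrative only -- not PICSAR's internal implementation."""
    if order == 1:  # linear (CIC): two nearest nodes
        return np.array([1.0 - xi, xi])
    if order == 2:  # quadratic (TSC): three nodes around the nearest node
        d = xi - 0.5  # signed offset from the nearest node, in [-0.5, 0.5)
        return np.array([0.5 * (0.5 - d)**2,
                         0.75 - d**2,
                         0.5 * (0.5 + d)**2])
    if order == 3:  # cubic: four nodes
        return np.array([(1.0 - xi)**3 / 6.0,
                         (3*xi**3 - 6*xi**2 + 4) / 6.0,
                         (-3*xi**3 + 3*xi**2 + 3*xi + 1) / 6.0,
                         xi**3 / 6.0])
    raise ValueError("order must be 1, 2, or 3")

# Whatever the order, the weights sum to 1, so the deposited
# charge equals the particle's charge
for order in (1, 2, 3):
    assert abs(shape_factors(0.3, order).sum() - 1.0) < 1e-12
```

Higher orders spread each particle over more nodes, reducing noise at the cost of wider stencils.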
```text
######################### INPUT FILE FOR THE CODE PICSAR

section::cpusplit
nprocx = 2
nprocy = 2
nprocz = 2
topology = 0
mpicom_curr = 0
lvec_curr_depo = 8
lvec_charge_depo = 64
lvec_fieldgathe = 256
mpi_buf_size = 2000
end::cpusplit

section::main
# Grid resolution
nx = 100
ny = 100
nz = 100
# Physical domain bounds
xmin = -20e-6
xmax = 20e-6
ymin = -20e-6
ymax = 20e-6
zmin = -20e-6
zmax = 20e-6
# Simulation steps
nsteps = 100
# Tile decomposition
ntilex = 10
ntiley = 5
ntilez = 5
# Guard cells
nguardsx = 3
nguardsy = 3
nguardsz = 3
end::main

section::solver
# Maxwell solver order
norderx = 2
nordery = 2
norderz = 2
# Shape factor order (1=linear, 2=quadratic, 3=cubic)
nox = 1
noy = 1
noz = 1
# Current deposition (0=Esirkepov AVX512, 3=Direct vectorized)
currdepo = 0
# Field gathering (0=vectorized, 1=scalar, 2=arbitrary order)
fieldgathe = 0
# Particle pusher (0=Boris, 1=Vay)
particle_pusher = 0
end::solver

section::plasma
nlab = 1.1e25   # Plasma density (m^-3)
gamma0 = 1.0    # Lorentz factor
pdistr = 1      # Distribution (1=ordered, 2=random)
end::plasma

section::species
name = electron
mass = 1.0
charge = -1
nppcell = 10
x_min = -20e-6
x_max = 20e-6
y_min = -20e-6
y_max = 20e-6
z_min = -10e-6
z_max = 10e-6
vdrift_z = 0.01  # Drift velocity (c)
vth_x = 0.1      # Thermal velocity (c)
vth_y = 0.1
vth_z = 0.1
sorting_period = 10
end::species

section::output
output_frequency = 10
ex = 1
ey = 1
ez = 1
rho = 1
end::output

section::temporal
frequency = 1
format = 0   # 0=binary, 1=ascii
kinE = 1
exE = 1
divE-rho = 1
end::temporal
```

## Python Module Initialization

Initialize PICSAR from Python for integration with WARP or standalone use, setting up grid parameters, MPI decomposition, and species properties.
```python
from mpi4py import MPI
from picsar_python import picsarpy as pxrpy
import numpy as np

# Get PICSAR instance
pxr = pxrpy.picsar

# Initialize with defaults
pxr.default_init()

# Configure MPI topology
pxr.topology = 0
pxr.nprocx = 2
pxr.nprocy = 2
pxr.nprocz = 2

# Set simulation domain
pxr.nx_global_grid = 100
pxr.ny_global_grid = 100
pxr.nz_global_grid = 100
pxr.xmin = -5e-6
pxr.xmax = 5e-6
pxr.ymin = -5e-6
pxr.ymax = 5e-6
pxr.zmin = -5e-6
pxr.zmax = 5e-6
pxr.dtcoef = 0.7

# Maxwell solver order
pxr.norderx = 2
pxr.nordery = 2
pxr.norderz = 2

# Current deposition order
pxr.nox = 1
pxr.noy = 1
pxr.noz = 1

# Guard cells
pxr.nxguards = 2
pxr.nyguards = 2
pxr.nzguards = 2

# Tiling
pxr.ntilex = 5
pxr.ntiley = 5
pxr.ntilez = 5

# Sorting configuration
pxr.sorting_activated = 1
pxr.sorting_dx = 1.0
pxr.sorting_dy = 1.0
pxr.sorting_dz = 1.0

# Initialize MPI
pxr.mpi_minimal_init(-1)
pxr.mpi_initialise()
mpirank = pxr.rank
```

## Configuring Particle Species

Define particle species properties programmatically with charge, mass, initial positions, velocities, and sorting parameters.
```python
# Define species properties
name = ["electron", "proton"]
charge = [-1.0, 1.0]
mass = [1.0, 1836.0]
nppcell = [10, 10]
xmin = [-1.3, -1.3]
xmax = [1.3, 1.3]
ymin = [-1.3, -1.3]
ymax = [1.3, 1.3]
zmin = [-1.3, -1.3]
zmax = [1.3, 1.3]
vdriftx = [0.0, 0.0]
vdrifty = [0.0, 0.0]
vdriftz = [0.0, 0.0]
vthx = [0.1, 0.001]
vthy = [0.1, 0.001]
vthz = [0.1, 0.001]
sorting_period = [10, 10]
sorting_start = [0, 0]

# Set number of species
pxr.nspecies = len(name)

# Register each species
for i in range(pxr.nspecies):
    pxr.set_particle_species_properties(
        i + 1,  # Species index (1-based)
        name[i], mass[i], charge[i], nppcell[i],
        xmin[i], ymin[i], zmin[i],
        xmax[i], ymax[i], zmax[i],
        vdriftx[i], vdrifty[i], vdriftz[i],
        vthx[i], vthy[i], vthz[i],
        sorting_period[i], sorting_start[i],
    )
```

## Python PIC Loop Implementation

Execute the complete PIC algorithm from Python with field gathering, particle pushing, current deposition, and Maxwell solver steps.

```python
def initialize_simulation():
    """Initialize all simulation components."""
    # Compute time step from CFL condition
    pxr.dt = pxr.dtcoef / (pxr.clight * np.sqrt(
        1.0 / pxr.dx**2 + 1.0 / pxr.dy**2 + 1.0 / pxr.dz**2))

    # Scale sorting parameters
    pxr.sorting_dx *= pxr.dx
    pxr.sorting_dy *= pxr.dy
    pxr.sorting_dz *= pxr.dz

    # Initialize tiles
    pxr.set_tile_split()
    pxr.init_tile_arrays()

    # Initialize stencil coefficients for FDTD
    pxr.init_stencil_coefficients()


def pic_step(nsteps):
    """Execute PIC time steps."""
    for i in range(nsteps):
        # Push particles (field gather + velocity update + position update)
        pxr.push_particles()

        # Apply particle boundary conditions
        pxr.particle_bcs()

        # Sort particles for cache efficiency
        pxr.particle_sorting_sub()

        # Deposit currents on grid
        pxr.pxrdepose_currents_on_grid_jxjyjz()
        pxr.current_bcs()

        # Push electromagnetic fields (leap-frog)
        pxr.push_bfield()   # B^n -> B^{n+1/2}
        pxr.bfield_bcs()
        pxr.push_efield()   # E^n -> E^{n+1}
        pxr.efield_bcs()
        pxr.push_bfield()   # B^{n+1/2} -> B^{n+1}
        pxr.bfield_bcs()

        # Compute diagnostics
        pxr.calc_diags()

        if mpirank == 0:
            print(f"Iteration {i}, time = {pxr.dt * i:.6e} s")


# Run simulation
initialize_simulation()
start = MPI.Wtime()
pic_step(100)
end = MPI.Wtime()
print(f"Total time: {end - start:.3f} s")

# Cleanup
pxr.mpi_close()
```

## Adding Particles with Custom Density Profile

Load particles into PICSAR using custom density profiles for complex plasma configurations.

```python
def density_profile(x, y, z):
    """Define custom density profile."""
    r = np.sqrt(x**2 + y**2 + z**2)
    Rd = pxr.xmax - pxr.xmin
    return np.where(r > Rd, 0.0, 1.0)


def load_particles_with_profile():
    """Load particles with density profile."""
    for ispecies in range(pxr.nspecies):
        # Create grid of particle positions
        x, y, z = np.mgrid[0:pxr.nx, 0:pxr.ny, 0:pxr.nz]
        x = x.flatten()
        y = y.flatten()
        z = z.flatten()

        # Convert to physical coordinates
        x = pxr.x_grid_min_local + x * pxr.dx
        y = pxr.y_grid_min_local + y * pxr.dy
        z = pxr.z_grid_min_local + z * pxr.dz

        # Apply density profile
        dens = density_profile(x, y, z)
        x = np.extract(dens > 0.0, x)
        y = np.extract(dens > 0.0, y)
        z = np.extract(dens > 0.0, z)
        dens = np.extract(dens > 0.0, dens)

        # Compute particle weights
        nppc = nppcell[ispecies]
        partw = dens * pxr.nc * pxr.dx * pxr.dy * pxr.dz / nppc

        # Add particles for each cell
        for i in range(nppc):
            pxr.py_add_particles_to_species(
                ispecies + 1, len(x),
                x + pxr.dx * i / nppc,
                y + pxr.dy * i / nppc,
                z + pxr.dz * i / nppc,
                np.zeros(len(x)),  # vx
                np.zeros(len(x)),  # vy
                np.zeros(len(x)),  # vz
                np.ones(len(x)),   # gamma inverse
                partw,
            )
```

## Building PICSAR Library

Compile PICSAR as a static or dynamic library for integration with external PIC codes.
```bash
# Build with GNU compiler in production mode
make COMP=gnu MODE=prod

# Build with Intel compiler optimized for KNL
make COMP=intel MODE=prod ARCH=knl

# Build with spectral solver (requires FFTW)
make COMP=gnu MODE=prod_spectral \
    FFTW3_LIB=/usr/local/lib \
    FFTW3_INCLUDE=/usr/local/include

# Build as library
make COMP=gnu MODE=library lib
# Libraries generated in lib/ directory:
# - libpicsar.a (static)
# - libpicsar.so (dynamic)

# Debug build with bounds checking
make COMP=gnu MODE=debug
```

## Integration with WARP PIC Code

Use PICSAR's optimized kernels within the WARP plasma simulation framework for accelerated simulations.

```python
from warp import *
from warp.field_solvers.em3dsolverPXR import EM3DPXR
from mpi4py import MPI

EnableAll()

# Basic WARP setup
dim = "3d"
top.depos_order = 1  # Linear interpolation
top.efetch = 4       # Energy-conserving field gather

# Grid configuration
w3d.nx = 64
w3d.ny = 64
w3d.nz = 64
w3d.xmmin = -5e-6
w3d.xmmax = 5e-6
w3d.ymmin = -5e-6
w3d.ymmax = 5e-6
w3d.zmmin = -5e-6
w3d.zmmax = 5e-6

# Boundary conditions
w3d.bound0 = w3d.boundnz = openbc
w3d.boundxy = periodic
top.pbound0 = top.pboundnz = absorb
top.pboundxy = periodic

# Create particle species
elec = Species(type=Electron, name='electron')
ions = Species(type=Proton, name='proton')

# Initialize WARP
top.fstype = -1
package('w3d')
generate()

# Add plasma
elec.add_uniform_box(
    np=w3d.nx * w3d.ny * w3d.nz * 10,
    xmin=w3d.xmmin, xmax=w3d.xmmax,
    ymin=w3d.ymmin, ymax=w3d.ymmax,
    zmin=w3d.zmmin, zmax=w3d.zmmax,
    vthx=0.1 * clight,
    spacing='uniform',
)

# Initialize PICSAR-accelerated EM solver
em = EM3DPXR(
    stencil=0,     # 0=Yee, 1=Karkkainen
    dtcoef=0.7,
    l_getrho=1,
    spectral=0,    # 0=FDTD, 1=PSATD
    ntilex=4, ntiley=4, ntilez=4,
    currdepo=0,    # Esirkepov deposition
    fieldgathe=0,  # Vectorized field gather
    sorting=1,
    l_verbose=1,
)
registersolver(em)
em.finalize()

# Run simulation
step(1000)
```

## QED Multi-Physics Library

Use the header-only C++ QED library for Breit-Wheeler pair production, quantum synchrotron emission, and Schwinger pair production in extreme-field simulations.

```cpp
#include "picsar_qed/qed_commons.h"
#include "picsar_qed/physics/chi_functions.hpp"
#include "picsar_qed/physics/breit_wheeler/breit_wheeler_engine_core.hpp"
#include "picsar_qed/physics/quantum_sync/quantum_sync_engine_core.hpp"

// For GPU kernels, define before including headers:
// #define PXRMP_GPU __host__ __device__

using namespace picsar::multi_physics;

// Calculate quantum chi parameter for an electron
template<typename Real>
Real compute_electron_chi(
    Real px, Real py, Real pz,  // Momentum
    Real ex, Real ey, Real ez,  // Electric field
    Real bx, Real by, Real bz,  // Magnetic field
    Real mass, Real charge)
{
    return chi_ele_pos<Real>(px, py, pz, ex, ey, ez, bx, by, bz);
}

// Calculate chi parameter for a photon
template<typename Real>
Real compute_photon_chi(
    Real px, Real py, Real pz,  // Photon momentum
    Real ex, Real ey, Real ez,  // Electric field
    Real bx, Real by, Real bz)  // Magnetic field
{
    return chi_photon<Real>(px, py, pz, ex, ey, ez, bx, by, bz);
}

// Example: Check for Breit-Wheeler pair production
void process_photons(/* photon arrays */)
{
    // Load lookup tables (generated separately)
    auto bw_tables = breit_wheeler::lookup_tables<double>::load("bw_tables.bin");

    // For each photon, compute pair production probability
    for (size_t i = 0; i < n_photons; ++i) {
        double chi = compute_photon_chi(
            px[i], py[i], pz[i],
            ex[i], ey[i], ez[i],
            bx[i], by[i], bz[i]);

        // Get pair production rate
        double rate = breit_wheeler::get_optical_depth(chi, bw_tables);

        // Probabilistic pair creation...
    }
}
```

## Running Unit Tests

Execute acceptance tests to validate PICSAR installation and correctness.
```bash
# Run Fortran acceptance tests
cd Acceptance_testing/Fortran_tests/test_homogeneous_plasma
mpirun -np 4 ../../../fortran_bin/picsar
python test_homogeneous_plasma.py

# Run Langmuir wave test
cd Acceptance_testing/Fortran_tests/test_Langmuir_wave
mpirun -np 4 ../../../fortran_bin/picsar
python test_langmuir_wave.py

# Run Python tests
cd Acceptance_testing/Python_tests/test_Langmuir_wave
mpirun -np 4 python test_Langmuir_wave_3d.py

# Run QED library tests
cd multi_physics/QED
mkdir build && cd build
cmake ..
make
make test
```

## Key Fortran Subroutines Reference

Core computational kernels available for direct use or as library functions.

```fortran
! Boris particle pusher - push momentum
CALL pxr_boris_push_u_3d(np, uxp, uyp, uzp, gaminv, &
     ex, ey, ez, bx, by, bz, q, m, dt)
! np: number of particles
! uxp, uyp, uzp: normalized momentum arrays (in/out)
! gaminv: inverse Lorentz factor array (in/out)
! ex-bz: field values at particle positions
! q, m: charge and mass
! dt: time step

! Push particle positions
CALL pxr_pushxyz(np, xp, yp, zp, uxp, uyp, uzp, gaminv, dt)

! Electric field push - Yee 3D arbitrary order
CALL pxrpush_em3d_evec_norder(ex, ey, ez, bx, by, bz, &
     jx, jy, jz, mudt, dtsdx, dtsdy, dtsdz, &
     nx, ny, nz, norderx, nordery, norderz, &
     nxguard, nyguard, nzguard, nxs, nys, nzs, &
     l_nodalgrid)

! Magnetic field push - Yee 3D arbitrary order
CALL pxrpush_em3d_bvec_norder(ex, ey, ez, bx, by, bz, &
     dtsdx, dtsdy, dtsdz, &
     nx, ny, nz, norderx, nordery, norderz, &
     nxguard, nyguard, nzguard, nxs, nys, nzs, &
     l_nodalgrid)

! Main simulation step
CALL step(nsteps)  ! Execute nsteps iterations of the PIC loop

! Initialize all arrays and distributions
CALL initall()

! Initialize stencil coefficients for the FDTD solver
CALL init_stencil_coefficients()
```

## Summary

PICSAR serves as a comprehensive toolkit for high-performance Particle-In-Cell simulations targeting exascale architectures.
The library provides optimized implementations of all core PIC algorithms, with support for arbitrary-order finite-difference schemes, multiple particle pushers (Boris, Vay), and both direct and Esirkepov current deposition methods. Scientists can either run PICSAR as a standalone Fortran code driven by configuration files or integrate its optimized kernels into existing Python-based simulation frameworks such as WARP.

PICSAR's design emphasizes performance portability through three-level parallelism (MPI + OpenMP + SIMD), particle tiling for improved cache efficiency, and specialized vectorized routines for modern CPU architectures including Intel Xeon Phi (KNL). For advanced plasma physics research, the multi-physics QED library provides GPU-compatible implementations of quantum electrodynamic processes, including Breit-Wheeler pair production and quantum synchrotron radiation.

Typical integration patterns involve running standalone simulations with `.pixr` input files, calling PICSAR from Python for interactive development, or linking the PICSAR library into custom simulation codes to leverage its optimized computational kernels.
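As background for the Boris pusher selected by `particle_pusher = 0` and exposed as `pxr_boris_push_u_3d`, the scheme splits each momentum update into a half electric kick, a magnetic rotation, and a second half electric kick. The single-particle NumPy sketch below is purely illustrative (PICSAR's actual implementation is vectorized Fortran operating on particle arrays):

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def boris_push(u, E, B, q, m, dt):
    """One relativistic Boris step for the normalized momentum u = gamma*v.
    Illustrative single-particle sketch, not PICSAR's vectorized kernel."""
    qmdt2 = q * dt / (2.0 * m)
    # Half electric-field acceleration
    u_minus = u + qmdt2 * E
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / C**2)
    # Magnetic rotation (exactly norm-preserving)
    t = qmdt2 * B / gamma
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)
    # Second half electric-field acceleration
    return u_plus + qmdt2 * E

# With E = 0 the rotation conserves |u|, i.e. kinetic energy is
# unchanged by a pure magnetic field
u0 = np.array([1e7, 0.0, 0.0])
u1 = boris_push(u0, np.zeros(3), np.array([0.0, 0.0, 1.0]),
                -1.602176634e-19, 9.1093837015e-31, 1e-12)
assert abs(np.linalg.norm(u1) - np.linalg.norm(u0)) < 1e-3
```

This exact energy conservation in the rotation step is the reason the Boris scheme remains the default pusher in most PIC codes, PICSAR included.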