======================== CODE SNIPPETS ========================

TITLE: Run Burn Guide Example
DESCRIPTION: This command executes the guide example from Burn's base directory, demonstrating the end-to-end process of building and training a custom model. It requires the Burn project to be checked out and Rust/Cargo to be installed.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/basic-workflow/README.md#_snippet_0

LANGUAGE: Bash
CODE:
```
cargo run --example guide
```

----------------------------------------

TITLE: Navigate into the newly created Rust project directory
DESCRIPTION: After creating a new project, use this command to change your current working directory to the project's root, allowing you to manage its files and dependencies.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/getting-started.md#_snippet_1

LANGUAGE: console
CODE:
```
cd my_burn_app
```

----------------------------------------

TITLE: Run Text Generation Example (CUDA and Mac)
DESCRIPTION: Instructions to clone the Burn repository and run the text generation example, with specific environment setup for CUDA users (first block) and for Mac users (second block). The `--release` flag is recommended for faster training.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/text-generation/README.md#_snippet_0

LANGUAGE: bash
CODE:
```
git clone https://github.com/tracel-ai/burn.git
cd burn

# Use the --release flag to really speed up training.
export TORCH_CUDA_VERSION=cu124
cargo run --example text-generation --release
```

LANGUAGE: bash
CODE:
```
git clone https://github.com/tracel-ai/burn.git
cd burn

# Use the --release flag to really speed up training.
cargo run --example text-generation --release
```

----------------------------------------

TITLE: Create a new Rust application using Cargo
DESCRIPTION: This command initializes a new Rust project directory with a `Cargo.toml` file and a `src` directory, ready for development. It sets up the basic project structure.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/getting-started.md#_snippet_0

LANGUAGE: console
CODE:
```
cargo new my_burn_app
```

----------------------------------------

TITLE: Compile the Rust project and its dependencies
DESCRIPTION: This command compiles your local Rust package and all its required dependencies, preparing the application for execution. By default it produces a debug build.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/getting-started.md#_snippet_3

LANGUAGE: console
CODE:
```
cargo build
```

----------------------------------------

TITLE: Add Burn deep learning framework as a dependency
DESCRIPTION: This command adds the Burn library to your project's `Cargo.toml` file, making it available for use. The `--features wgpu` flag enables the WGPU backend for GPU-accelerated operations.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/getting-started.md#_snippet_2

LANGUAGE: console
CODE:
```
cargo add burn --features wgpu
```

----------------------------------------

TITLE: Standard README.md Structure for Burn Examples
DESCRIPTION: This Markdown template outlines the required structure for the `README.md` file within each Burn example. It includes sections for a brief description, instructions on running the example using `cargo run`, and a placeholder for any necessary prerequisites.
SOURCE: https://github.com/tracel-ai/burn/blob/main/contributor-book/src/guides/submitting-examples.md#_snippet_3

LANGUAGE: markdown
CODE:
````
# Example Name

Brief description of what this example demonstrates.

## Running the Example

```bash
cargo run --example <example-name>
```

## Prerequisites

List any prerequisites here.
````

----------------------------------------

TITLE: Simplify Burn Imports with Rust `prelude` Module
DESCRIPTION: This Rust snippet demonstrates the use of Burn's `prelude` module to simplify imports of commonly used structs and macros. It shows the concise `use burn::prelude::*` syntax and expands it to illustrate the components it imports, reducing verbosity in deep learning applications.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/getting-started.md#_snippet_9

LANGUAGE: Rust
CODE:
```
use burn::prelude::*;
```

LANGUAGE: Rust
CODE:
```
use burn::{
    config::Config,
    module::Module,
    nn,
    tensor::{
        backend::Backend, Bool, Device, ElementConversion, Float, Int, Shape, Tensor,
        TensorData,
    },
};
```

----------------------------------------

TITLE: Install mdbook for documentation
DESCRIPTION: Installs the `mdbook` tool, which is used to build and serve the Burn Book and Contributor Book. This command uses `cargo install` to add `mdbook` to your system.
SOURCE: https://github.com/tracel-ai/burn/blob/main/contributor-book/src/getting-started/setting-up-the-environment.md#_snippet_4

LANGUAGE: Bash
CODE:
```
cargo install mdbook
```

----------------------------------------

TITLE: Run MNIST Example with Burn Backends
DESCRIPTION: This snippet provides the bash commands to clone the Burn repository and run the MNIST example with different computational backends. It covers CPU-based NdArray backends (with and without BLAS), Tch backends for both CPU and GPU (the GPU path requires setting the CUDA version), and the WGPU backend. The --release flag is recommended for optimized performance.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/mnist/README.md#_snippet_0

LANGUAGE: bash
CODE:
```
git clone https://github.com/tracel-ai/burn.git
cd burn

# Use the --release flag to really speed up training.

echo "Using ndarray backend"
cargo run --example mnist --release --features ndarray                # CPU NdArray Backend - f32 - single thread
cargo run --example mnist --release --features ndarray-blas-openblas  # CPU NdArray Backend - f32 - blas with openblas
cargo run --example mnist --release --features ndarray-blas-netlib    # CPU NdArray Backend - f32 - blas with netlib

echo "Using tch backend"
export TORCH_CUDA_VERSION=cu124  # Set the cuda version
cargo run --example mnist --release --features tch-gpu  # GPU Tch Backend - f32
cargo run --example mnist --release --features tch-cpu  # CPU Tch Backend - f32

echo "Using wgpu backend"
cargo run --example mnist --release --features wgpu
```

----------------------------------------

TITLE: Install Evcxr Jupyter Kernel Build Command
DESCRIPTION: Installs the `evcxr_jupyter` package using cargo, which is a prerequisite for building and using the Evcxr kernel for Rust in Jupyter Notebooks.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/notebook/README.md#_snippet_0

LANGUAGE: shell
CODE:
```
cargo install evcxr_jupyter
```

----------------------------------------

TITLE: Perform Tensor Addition with Burn and WGPU Backend in Rust
DESCRIPTION: This Rust code snippet demonstrates how to initialize two 2D tensors using the Burn library with the WGPU backend.
It shows the creation of a tensor from explicit data and another filled with ones, then performs element-wise addition and prints the result.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/getting-started.md#_snippet_5

LANGUAGE: Rust
CODE:
```
use burn::tensor::Tensor;
use burn::backend::Wgpu;

// Type alias for the backend to use.
type Backend = Wgpu;

fn main() {
    let device = Default::default();
    // Creation of two tensors, the first with explicit values and the second one with ones, with the same shape as the first
    let tensor_1 = Tensor::<Backend, 2>::from_data([[2., 3.], [4., 5.]], &device);
    let tensor_2 = Tensor::ones_like(&tensor_1);

    // Print the element-wise addition (done with the WGPU backend) of the two tensors.
    println!("{}", tensor_1 + tensor_2);
}
```

----------------------------------------

TITLE: Create New Rust Library Crate for Burn Example
DESCRIPTION: This command uses Cargo, Rust's package manager, to initialize a new library crate named `<example-name>`. The `--lib` flag ensures it's a library, which is suitable for an example within the Burn workspace.
SOURCE: https://github.com/tracel-ai/burn/blob/main/contributor-book/src/guides/submitting-examples.md#_snippet_1

LANGUAGE: bash
CODE:
```
cargo new --lib <example-name>
```

----------------------------------------

TITLE: Navigate to Burn Examples Directory
DESCRIPTION: This command changes the current directory to the `examples/` subdirectory within the Burn repository. This is the first step before creating a new example crate.
SOURCE: https://github.com/tracel-ai/burn/blob/main/contributor-book/src/guides/submitting-examples.md#_snippet_0

LANGUAGE: bash
CODE:
```
cd examples
```

----------------------------------------

TITLE: Configure Cargo.toml for Burn Example Crate
DESCRIPTION: This TOML configuration defines the package metadata and dependencies for a new Burn example. It sets the package name, version, and edition, and specifies how to link to the main `burn` crate and other workspace dependencies like `serde`.
SOURCE: https://github.com/tracel-ai/burn/blob/main/contributor-book/src/guides/submitting-examples.md#_snippet_2

LANGUAGE: toml
CODE:
```
[package]
name = "<example-name>"
version = "0.1.0"
edition = "2021"
readme = "README.md"

# Remove this line if it exists
# readme.workspace = true

[dependencies]
# Reuse workspace dependencies when available
serde = { workspace = true }

# Add example-specific dependencies
burn = { path = "../../" }
```

----------------------------------------

TITLE: Import Burn Modules with Rust `use` Declarations
DESCRIPTION: This snippet illustrates different ways to import modules and items from the Burn library into scope using Rust's `use` declaration. It shows both individual imports and a shorthand for multiple items with a common prefix, explaining their equivalence and usage context.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/getting-started.md#_snippet_6

LANGUAGE: Rust
CODE:
```
use burn::tensor::Tensor;
use burn::backend::Wgpu;
```

LANGUAGE: Rust
CODE:
```
use burn::{tensor::Tensor, backend::Wgpu};
```

----------------------------------------

TITLE: Serve project documentation locally
DESCRIPTION: Commands to open and serve the Burn Book or Contributor Book locally using `mdbook` or `cargo xtask`. The `cargo xtask` command automatically handles `mdbook` installation.
SOURCE: https://github.com/tracel-ai/burn/blob/main/contributor-book/src/getting-started/setting-up-the-environment.md#_snippet_5

LANGUAGE: Bash
CODE:
```
mdbook serve
```

LANGUAGE: Bash
CODE:
```
cargo xtask books {burn|contributor} open
```

----------------------------------------

TITLE: Execute Burn Example via Cargo
DESCRIPTION: Command line instruction to run a specific Burn example from the root of the Burn repository using Cargo.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/examples.md#_snippet_0

LANGUAGE: bash
CODE:
```
cargo run --example <example-name>
```

----------------------------------------

TITLE: Running Example with WGPU Backend
DESCRIPTION: This shell command demonstrates how to run the custom image dataset example using the WGPU backend. It executes the Rust project with the `wgpu` feature enabled, providing an alternative backend for computation.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/custom-image-dataset/README.md#_snippet_3

LANGUAGE: sh
CODE:
```
cargo run --example custom-image-dataset --release --features wgpu
```

----------------------------------------

TITLE: Typical Burn Example Package File Tree Structure
DESCRIPTION: Shows the typical directory and file structure for a Burn example package, including `Cargo.toml`, an `examples` directory for executable binaries, and a `src` directory for the library crate's source code.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/examples.md#_snippet_2

LANGUAGE: plaintext
CODE:
```
examples/burn-example
├── Cargo.toml
├── examples
│   ├── example1.rs ---> compiled to example1 binary
│   ├── example2.rs ---> compiled to example2 binary
│   └── ...
└── src
    ├── lib.rs ---> this is the root file for a library
    ├── module1.rs
    ├── module2.rs
    └── ...
```

----------------------------------------

TITLE: Set Rust recursion limit for WGPU backend compilation
DESCRIPTION: When using WGPU backends, complex type nesting can exceed the default recursion limit and cause compilation errors. Adding this line at the top of `main.rs` or `lib.rs` increases the limit, resolving such issues.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/getting-started.md#_snippet_4

LANGUAGE: rust
CODE:
```
#![recursion_limit = "256"]
```

----------------------------------------

TITLE: Console Output of Burn Tensor Addition
DESCRIPTION: This snippet displays the expected console output after running the Rust program that performs tensor addition using the Burn library. It shows the resulting tensor's data, shape, device, backend, kind, and data type.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/getting-started.md#_snippet_8

LANGUAGE: Console
CODE:
```
Tensor {
  data: [[3.0, 4.0], [5.0, 6.0]],
  shape: [2, 2],
  device: DefaultDevice,
  backend: "wgpu",
  kind: "Float",
  dtype: "f32",
}
```

----------------------------------------

TITLE: Run Burn Regression Example with Various Backends
DESCRIPTION: This snippet provides commands to clone the Burn repository and execute the regression example using different backend configurations. It includes options for CPU (NdArray, NdArray with BLAS) and GPU (Tch, WGPU) backends, along with instructions for setting the CUDA version for the Tch backend to optimize training speed.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/simple-regression/README.md#_snippet_0

LANGUAGE: bash
CODE:
```
git clone https://github.com/tracel-ai/burn.git
cd burn

# Use the --release flag to really speed up training.
echo "Using ndarray backend" cargo run --example regression --release --features ndarray # CPU NdArray Backend - f32 - single thread cargo run --example regression --release --features ndarray-blas-openblas # CPU NdArray Backend - f32 - blas with openblas cargo run --example regression --release --features ndarray-blas-netlib # CPU NdArray Backend - f32 - blas with netlib echo "Using tch backend" export TORCH_CUDA_VERSION=cu124 # Set the cuda version cargo run --example regression --release --features tch-gpu # GPU Tch Backend - f32 cargo run --example regression --release --features tch-cpu # CPU Tch Backend - f32 echo "Using wgpu backend" cargo run --example regression --release --features wgpu ``` ---------------------------------------- TITLE: Run Custom CSV Dataset Example DESCRIPTION: Executes the `custom-csv-dataset` example using Cargo, demonstrating how to load and process data from a CSV file. SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/custom-csv-dataset/README.md#_snippet_0 LANGUAGE: sh CODE: ``` cargo run --example custom-csv-dataset ``` ---------------------------------------- TITLE: Rust: Setup Burn Remote Compute Server and Client DESCRIPTION: This Rust code demonstrates how to set up a Burn remote compute backend. It includes a `main_server` function to start a server on port 3000 using the Cuda backend, and a `main_client` function to create a client that connects to this server, enabling distributed tensor operations on a `RemoteDevice`. SOURCE: https://github.com/tracel-ai/burn/blob/main/README.md#_snippet_4 LANGUAGE: Rust CODE: ``` fn main_server() { // Start a server on port 3000. burn::server::start::(Default::default(), 3000); } fn main_client() { // Create a client that communicate with the server on port 3000. use burn::backend::{Autodiff, RemoteBackend}; type Backend = Autodiff; let device = RemoteDevice::new("ws://localhost:3000"); let tensor_gpu = Tensor::::random([3, 3], Distribution::Default, &device); } ``` ---------------------------------------- TITLE: Create New Rust Project with Cargo DESCRIPTION: Initializes a new Rust project directory named 'guide' using Cargo, creating a `Cargo.toml` and `src/main.rs` file. SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/basic-workflow/model.md#_snippet_0 LANGUAGE: console CODE: ``` cargo new guide ``` ---------------------------------------- TITLE: Run ONNX Inference Example DESCRIPTION: Demonstrates how to execute the compiled ONNX inference example from the command line, specifying an image index for prediction. The output shows the successful prediction for the given image. SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/onnx-inference/README.md#_snippet_0 LANGUAGE: bash CODE: ``` cargo run -- 15 ``` ---------------------------------------- TITLE: Install Recommended VSCode Extensions for Rust DESCRIPTION: Install these recommended VSCode extensions to enhance development experience for Rust and TOML files, including syntax analysis, dependency management, and debugging capabilities. SOURCE: https://github.com/tracel-ai/burn/blob/main/contributor-book/src/getting-started/configuring-your-editor.md#_snippet_0 LANGUAGE: VSCode Extension IDs CODE: ``` rust-lang.rust-analyzer tamasfe.even-better-toml fill-labs.dependi vadimcn.vscode-lldb ``` ---------------------------------------- TITLE: Run Speech Commands Audio Example DESCRIPTION: Enables the audio dataset (SpeechCommandsDataset) and runs the `speech_commands` example using `cargo run` with the `audio` feature flag. 
SOURCE: https://github.com/tracel-ai/burn/blob/main/crates/burn-dataset/README.md#_snippet_0

LANGUAGE: shell
CODE:
```
cargo run --example speech_commands --features audio
```

----------------------------------------

TITLE: Run Local Web Server for MNIST Demo
DESCRIPTION: Starts a local HTTP server to host the MNIST inference demo, making it accessible in a web browser at `http://localhost:8000/`.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/mnist-inference-web/README.md#_snippet_1

LANGUAGE: shell
CODE:
```
./run-server.sh
```

----------------------------------------

TITLE: Running Example with Torch GPU Backend
DESCRIPTION: This shell command shows how to execute the custom image dataset example using the Torch GPU backend. It sets the `TORCH_CUDA_VERSION` environment variable and runs the Rust project with the `tch-gpu` feature enabled for GPU acceleration.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/custom-image-dataset/README.md#_snippet_2

LANGUAGE: sh
CODE:
```
export TORCH_CUDA_VERSION=cu124
cargo run --example custom-image-dataset --release --features tch-gpu
```

----------------------------------------

TITLE: Setting up Configuration and Dataloaders for Custom Training Loop in Burn
DESCRIPTION: This Rust snippet demonstrates the initial setup for a custom training loop in Burn. It defines the training configuration, initializes the model and optimizer, and prepares the training and testing dataloaders using the MNIST dataset, mirroring the setup of the basic workflow but without the `Learner` struct. A sketch of the loop body elided by `...` follows the snippet.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/custom-training-loop.md#_snippet_0

LANGUAGE: rust
CODE:
```
#[derive(Config)]
pub struct MnistTrainingConfig {
    #[config(default = 10)]
    pub num_epochs: usize,

    #[config(default = 64)]
    pub batch_size: usize,

    #[config(default = 4)]
    pub num_workers: usize,

    #[config(default = 42)]
    pub seed: u64,

    #[config(default = 1e-4)]
    pub lr: f64,

    pub model: ModelConfig,
    pub optimizer: AdamConfig,
}

pub fn run<B: AutodiffBackend>(device: &B::Device) {
    // Create the configuration.
    let config_model = ModelConfig::new(10, 1024);
    let config_optimizer = AdamConfig::new();
    let config = MnistTrainingConfig::new(config_model, config_optimizer);

    B::seed(config.seed);

    // Create the model and optimizer.
    let mut model = config.model.init::<B>(&device);
    let mut optim = config.optimizer.init();

    // Create the batcher.
    let batcher = MnistBatcher::default();

    // Create the dataloaders.
    let dataloader_train = DataLoaderBuilder::new(batcher.clone())
        .batch_size(config.batch_size)
        .shuffle(config.seed)
        .num_workers(config.num_workers)
        .build(MnistDataset::train());

    let dataloader_test = DataLoaderBuilder::new(batcher)
        .batch_size(config.batch_size)
        .shuffle(config.seed)
        .num_workers(config.num_workers)
        .build(MnistDataset::test());

    ...
}
```

----------------------------------------
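TITLE: Sketch: Custom Training Loop Body in Burn
DESCRIPTION: The `...` in the previous snippet elides the optimization loop itself. This is a minimal sketch of that body, not verbatim from the book: it assumes the guide's model exposes a `forward` method returning logits, and that `CrossEntropyLossConfig` (from `burn::nn::loss`) and `GradientsParams` (from `burn::optim`) are in scope. The logging line is illustrative.

LANGUAGE: rust
CODE:
```
// Iterate over the training loop for the configured number of epochs.
for epoch in 1..config.num_epochs + 1 {
    for (iteration, batch) in dataloader_train.iter().enumerate() {
        // Forward pass: compute logits and the cross-entropy loss.
        let output = model.forward(batch.images);
        let loss = CrossEntropyLossConfig::new()
            .init(&output.device())
            .forward(output.clone(), batch.targets.clone());

        println!(
            "[Train - Epoch {} - Iteration {}] Loss {:.3}",
            epoch,
            iteration,
            loss.clone().into_scalar()
        );

        // Gradients for the current backward pass.
        let grads = loss.backward();
        // Link the gradients to each parameter of the model.
        let grads = GradientsParams::from_grads(grads, &model);
        // Update the model parameters with the optimizer.
        model = optim.step(config.lr, model, grads);
    }
}
```

----------------------------------------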
TITLE: Fix typos in documentation
DESCRIPTION: Automatically fixes any misspellings found by the `typos` tool within a specified path. This command requires `typos` to be installed.
SOURCE: https://github.com/tracel-ai/burn/blob/main/contributor-book/src/getting-started/setting-up-the-environment.md#_snippet_7

LANGUAGE: Bash
CODE:
```
typos -w /path/to/book
```

----------------------------------------

TITLE: Set up Python Environment with uv
DESCRIPTION: Recommended method to set up the Python environment using `uv` for necessary dependencies like ONNX and Torch. This command creates a virtual environment and installs packages.
SOURCE: https://github.com/tracel-ai/burn/blob/main/crates/burn-import/onnx-tests/README.md#_snippet_0

LANGUAGE: sh
CODE:
```
cd crates/burn-import/onnx-tests
uv sync # or uv sync -f
```

----------------------------------------

TITLE: Manually Install Python Dependencies
DESCRIPTION: Alternative method for manual Python environment setup, installing specific versions of `onnx` and `torch` using pip. Additional dependencies are listed in `requirements.lock`.
SOURCE: https://github.com/tracel-ai/burn/blob/main/crates/burn-import/onnx-tests/README.md#_snippet_1

LANGUAGE: sh
CODE:
```
pip install onnx==1.15.0 torch==2.1.1
```

----------------------------------------

TITLE: Configure LibTorch Environment Variables in Cargo.toml
DESCRIPTION: Example of setting the `LD_LIBRARY_PATH` and `LIBTORCH` environment variables directly in `.cargo/config.toml` to avoid shell-based environment setup for Rust projects.
SOURCE: https://github.com/tracel-ai/burn/blob/main/crates/burn-tch/README.md#_snippet_13

LANGUAGE: toml
CODE:
```
[env]
LD_LIBRARY_PATH = "/absolute/path/to/libtorch/lib"
LIBTORCH = "/absolute/path/to/libtorch/libtorch"
```

----------------------------------------

TITLE: Register Evcxr Jupyter Kernel to Jupyter
DESCRIPTION: Executes the `evcxr_jupyter` command with the `--install` flag to register the Evcxr kernel with your Jupyter environment, making it available for use in new or existing notebooks.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/notebook/README.md#_snippet_1

LANGUAGE: shell
CODE:
```
evcxr_jupyter --install
```

----------------------------------------

TITLE: Run Burn Backend Validation Samples
DESCRIPTION: Commands to execute the validation samples for the `burn-tch` backend on different compute platforms (CPU, CUDA, MPS) to verify successful installation and functionality. Includes a specific example for first-time CUDA validation.
SOURCE: https://github.com/tracel-ai/burn/blob/main/crates/burn-tch/README.md#_snippet_1

LANGUAGE: shell
CODE:
```
export TORCH_CUDA_VERSION=cu124
cargo run --bin cuda --release
```

LANGUAGE: shell
CODE:
```
cargo run --bin cpu --release
cargo run --bin cuda --release
cargo run --bin mps --release
```

----------------------------------------

TITLE: Create Dataset Splits with PartialDataset in Rust
DESCRIPTION: The PartialDataset transform provides a view of a dataset within specified start and end indices, commonly used for creating train, validation, and test splits. This example demonstrates chaining ShuffledDataset and PartialDataset to create such splits.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/building-blocks/dataset.md#_snippet_3

LANGUAGE: rust
CODE:
```
// Define the chained dataset type here for brevity: a shuffled view of the
// underlying dataset `D` over items `I`, sliced by index range.
type PartialData<D, I> = PartialDataset<ShuffledDataset<D, I>, I>;

let len = dataset.len();
let split = "train"; // or "test"

let data_split = match split {
    "train" => PartialData::new(dataset, 0, len * 8 / 10),  // Get first 80% dataset
    "test" => PartialData::new(dataset, len * 8 / 10, len), // Take remaining 20%
    _ => panic!("Invalid split type"),                      // Handle unexpected split types
};
```

----------------------------------------
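TITLE: Sketch: Shuffling Before Splitting a Dataset
DESCRIPTION: For context on the chaining above, a small self-contained sketch of shuffling a dataset before slicing it. It assumes burn-dataset's `InMemDataset` and a `ShuffledDataset::with_seed` constructor; the toy data, item type, and seed are illustrative.

LANGUAGE: rust
CODE:
```
use burn::data::dataset::transform::{PartialDataset, ShuffledDataset};
use burn::data::dataset::{Dataset, InMemDataset};

fn main() {
    // A toy in-memory dataset of 100 integers.
    let dataset = InMemDataset::new((0..100).collect::<Vec<u32>>());

    // Shuffle deterministically so the split is random but reproducible.
    let shuffled: ShuffledDataset<_, u32> = ShuffledDataset::with_seed(dataset, 42);

    // Take the first 80% as the training split.
    let len = shuffled.len();
    let train: PartialDataset<_, u32> = PartialDataset::new(shuffled, 0, len * 8 / 10);
    println!("train split size: {}", train.len());
}
```

----------------------------------------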
TITLE: Launch Local Web Server for Burn Demo
DESCRIPTION: This command initiates a local web server, allowing you to access the image classification web demo from your browser. It serves the compiled WebAssembly and other assets.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/image-classification-web/README.md#_snippet_1

LANGUAGE: bash
CODE:
```
./run-server.sh
```

----------------------------------------

TITLE: Specify Generic Parameters for Burn Tensors in Rust
DESCRIPTION: This Rust snippet demonstrates how to explicitly specify generic parameters for the `Tensor` struct in Burn using type annotations. It provides an alternative to the turbofish syntax, showing how the compiler can infer types for subsequent tensors based on the first one.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/getting-started.md#_snippet_7

LANGUAGE: Rust
CODE:
```
let tensor_1: Tensor<Backend, 2> = Tensor::from_data([[2., 3.], [4., 5.]], &device);
let tensor_2 = Tensor::ones_like(&tensor_1);
```

----------------------------------------

TITLE: Run Burn Training Process
DESCRIPTION: Executes the training binary for the Burn project in release mode. This command compiles and runs the training logic defined in the 'train' binary.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/guide/README.md#_snippet_0

LANGUAGE: sh
CODE:
```
cargo run --bin train --release
```

----------------------------------------

TITLE: Format Rust code with cargo fmt
DESCRIPTION: Applies `rustfmt` to all Rust files in the project to ensure consistent code style. This command should be run before committing for non-draft pull requests.
SOURCE: https://github.com/tracel-ai/burn/blob/main/contributor-book/src/getting-started/setting-up-the-environment.md#_snippet_0

LANGUAGE: Bash
CODE:
```
cargo fmt --all
```

----------------------------------------

TITLE: Run project tests with cargo run-checks
DESCRIPTION: Executes the project's test suite. Successful completion of these checks is mandatory before merging any pull request. Be aware that these tests can be time-consuming.
SOURCE: https://github.com/tracel-ai/burn/blob/main/contributor-book/src/getting-started/setting-up-the-environment.md#_snippet_2

LANGUAGE: Bash
CODE:
```
cargo run-checks
```

----------------------------------------

TITLE: Build and Test Burn no_std Package for Various Targets
DESCRIPTION: These shell commands facilitate the setup and execution of `burn-no-std-tests`. They install the necessary Rust targets (`thumbv6m-none-eabi`, `thumbv7m-none-eabi`, `wasm32-unknown-unknown`), build the package for the host and for the specific `no_std` targets, and run the integration tests. The `RUSTFLAGS` environment variable is required for the `thumbv6m-none-eabi` build.
SOURCE: https://github.com/tracel-ai/burn/blob/main/crates/burn-no-std-tests/README.md#_snippet_0

LANGUAGE: sh
CODE:
```
# install the new targets if not installed previously
rustup target add thumbv6m-none-eabi
rustup target add thumbv7m-none-eabi
rustup target add wasm32-unknown-unknown

# build for various targets
cargo build # regular build
cargo build --target thumbv7m-none-eabi
cargo build --target wasm32-unknown-unknown
RUSTFLAGS="--cfg portable_atomic_unsafe_assume_single_core" cargo build --target thumbv6m-none-eabi

# test
cargo test
```

----------------------------------------

TITLE: Analyze and fix Rust code with cargo clippy
DESCRIPTION: Runs Clippy, a Rust linter, to identify and automatically fix common coding issues. It requires a clean Git state, but can be run on a dirty state using the `--allow-dirty` flag.
SOURCE: https://github.com/tracel-ai/burn/blob/main/contributor-book/src/getting-started/setting-up-the-environment.md#_snippet_1

LANGUAGE: Bash
CODE:
```
cargo clippy --fix
```

----------------------------------------

TITLE: Build WebAssembly Binary and Assets for Web Demo
DESCRIPTION: This script compiles the Rust code into WebAssembly and prepares the other files required for the web demo. It is the first step in setting up the application.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/image-classification-web/README.md#_snippet_0

LANGUAGE: bash
CODE:
```
./build-for-web.sh
```

----------------------------------------

TITLE: Running Burn Project for Model Training
DESCRIPTION: This console command executes the compiled Burn project in release mode. It initiates the model training process, displaying progress through a basic CLI dashboard.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/basic-workflow/backend.md#_snippet_1

LANGUAGE: Console
CODE:
```
cargo run --release
```

----------------------------------------

TITLE: Initialize a module using a Rust Config instance
DESCRIPTION: This code demonstrates how to use the `init` method, implemented on the `MyModuleConfig` struct, to create a new module instance. It shows the necessary imports and the process of obtaining a default device, then passing it to `config.init()` to instantiate `my_module` with the defined configuration.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/building-blocks/config.md#_snippet_3

LANGUAGE: rust
CODE:
```
use burn::backend::Wgpu;

let device = Default::default();
let my_module = config.init::<Wgpu>(&device);
```

----------------------------------------
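TITLE: Sketch: Defining a Config Struct with the Config Derive
DESCRIPTION: For context on the snippet above, a self-contained sketch of how a config such as `MyModuleConfig` is typically declared with Burn's `Config` derive and given an `init` method. The struct fields, default value, and module contents here are illustrative assumptions, not taken from the book.

LANGUAGE: rust
CODE:
```
use burn::config::Config;
use burn::module::Module;
use burn::nn::{Dropout, DropoutConfig, Linear, LinearConfig};
use burn::tensor::backend::Backend;

// The module that the config builds.
#[derive(Module, Debug)]
pub struct MyModule<B: Backend> {
    linear: Linear<B>,
    dropout: Dropout,
}

// The derive generates `new(d_model, d_ff)`, `with_dropout(..)`,
// and (de)serialization support for this configuration.
#[derive(Config)]
pub struct MyModuleConfig {
    d_model: usize,
    d_ff: usize,
    #[config(default = 0.1)]
    dropout: f64,
}

impl MyModuleConfig {
    /// Build a `MyModule` on the given device from this configuration.
    pub fn init<B: Backend>(&self, device: &B::Device) -> MyModule<B> {
        MyModule {
            linear: LinearConfig::new(self.d_model, self.d_ff).init(device),
            dropout: DropoutConfig::new(self.dropout).init(),
        }
    }
}
```

----------------------------------------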
TITLE: Configuring Wgpu Backend for Model Training in Rust
DESCRIPTION: This Rust code snippet demonstrates how to define and use the `Wgpu` backend with `Autodiff` in the `main` function of a Burn project. It initializes the backend, sets up the device, and calls the `train` function with model and optimizer configurations, enabling GPU-accelerated training.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/basic-workflow/backend.md#_snippet_0

LANGUAGE: Rust
CODE:
```
# #![recursion_limit = "256"]
# mod data;
# mod model;
# mod training;
#
# use crate::{model::ModelConfig, training::TrainingConfig};
use burn::{
    backend::{Autodiff, Wgpu},
#     data::dataset::Dataset,
    optim::AdamConfig,
};

fn main() {
    type MyBackend = Wgpu<f32, i32>;
    type MyAutodiffBackend = Autodiff<MyBackend>;

    let device = burn::backend::wgpu::WgpuDevice::default();
    let artifact_dir = "/tmp/guide";
    crate::training::train::<MyAutodiffBackend>(
        artifact_dir,
        TrainingConfig::new(ModelConfig::new(10, 512), AdamConfig::new()),
        device.clone(),
    );
}
```

----------------------------------------

TITLE: Install PyTorch ONNX Export Dependencies
DESCRIPTION: Installs the necessary Python packages, `torch`, `torchvision`, and `onnx`, which are required to export PyTorch models to the ONNX format.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/onnx-inference/README.md#_snippet_5

LANGUAGE: bash
CODE:
```
pip install torch torchvision onnx
```

----------------------------------------

TITLE: Run Text Classification with WGPU Backend
DESCRIPTION: Execute training and inference for the AG News and DbPedia datasets using the WGPU backend. The `--release` flag optimizes performance.
SOURCE: https://github.com/tracel-ai/burn/blob/main/examples/text-classification/README.md#_snippet_4

LANGUAGE: bash
CODE:
```
# AG News
cargo run --example ag-news-train --release --features wgpu   # Train on the ag news dataset
cargo run --example ag-news-infer --release --features wgpu   # Run inference on the ag news dataset

# DbPedia
cargo run --example db-pedia-train --release --features wgpu  # Train on the db pedia dataset
cargo run --example db-pedia-infer --release --features wgpu  # Run inference on the db pedia dataset
```

----------------------------------------

TITLE: Various Burn Tensor Initialization Methods
DESCRIPTION: Provides multiple examples of initializing Burn tensors using `from_data` and `from_floats`. It covers initialization from a specific backend (Wgpu), a generic backend, integer array slices, and a custom Rust struct, demonstrating the flexibility of `TensorData`.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/building-blocks/tensor.md#_snippet_2

LANGUAGE: rust
CODE:
```
// Initialization from a given Backend (Wgpu)
let tensor_1 = Tensor::<Wgpu, 1>::from_data([1.0, 2.0, 3.0], &device);

// Initialization from a generic Backend
let tensor_2 = Tensor::<B, 1>::from_data(TensorData::from([1.0, 2.0, 3.0]), &device);

// Initialization using from_floats (Recommended for f32 ElementType)
// Will be converted to TensorData internally.
let tensor_3 = Tensor::<B, 1>::from_floats([1.0, 2.0, 3.0], &device);

// Initialization of Int Tensor from array slices
let arr: [i32; 6] = [1, 2, 3, 4, 5, 6];
let tensor_4 = Tensor::<B, 1, Int>::from_data(TensorData::from(&arr[0..3]), &device);

// Initialization from a custom type
struct BodyMetrics {
    age: i8,
    height: i16,
    weight: f32,
}

let bmi = BodyMetrics {
    age: 25,
    height: 180,
    weight: 80.0,
};
let data = TensorData::from([bmi.age as f32, bmi.height as f32, bmi.weight]);
let tensor_5 = Tensor::<B, 1>::from_data(data, &device);
```

----------------------------------------

TITLE: Set LibTorch CUDA Environment Variables on Linux
DESCRIPTION: Commands to set the `LIBTORCH` and `LD_LIBRARY_PATH` environment variables for a LibTorch CUDA installation on Linux. Also ensure your CUDA installation is present in `PATH` and `LD_LIBRARY_PATH`.
SOURCE: https://github.com/tracel-ai/burn/blob/main/crates/burn-tch/README.md#_snippet_9

LANGUAGE: shell
CODE:
```
export LIBTORCH=/absolute/path/to/libtorch/
export LD_LIBRARY_PATH=/absolute/path/to/libtorch/lib:$LD_LIBRARY_PATH
```

----------------------------------------

TITLE: Example Output of Burn Key Debugging
DESCRIPTION: This is an example of the console output generated when the `with_debug_print()` option is enabled while loading a PyTorch model. It lists the original key, the remapped key, the tensor shape, and its data type for each parameter.
SOURCE: https://github.com/tracel-ai/burn/blob/main/burn-book/src/import/pytorch-model.md#_snippet_7

LANGUAGE: text
CODE:
```
Debug information of keys and tensor shapes:
---
Original Key: conv.conv1.bias
Remapped Key: conv1.bias
Shape: [2]
Dtype: F32
---
Original Key: conv.conv1.weight
Remapped Key: conv1.weight
Shape: [2, 2, 2, 2]
Dtype: F32
---
Original Key: conv.conv2.weight
Remapped Key: conv2.weight
Shape: [2, 2, 2, 2]
Dtype: F32
---
```

----------------------------------------
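TITLE: Sketch: Loading a PyTorch Checkpoint with Debug Printing
DESCRIPTION: A sketch of the kind of load call that produces the debug output above, assuming burn-import's `LoadArgs` and `PyTorchFileRecorder` APIs. The file path, key-remap pattern, the `Net` module, and the `Backend` alias are illustrative assumptions, not taken from the book.

LANGUAGE: rust
CODE:
```
use burn::record::{FullPrecisionSettings, Recorder};
use burn_import::pytorch::{LoadArgs, PyTorchFileRecorder};

let device = Default::default();

// Remap keys like `conv.conv1.bias` to `conv1.bias`, and print each
// original/remapped key with its shape and dtype while loading.
let load_args = LoadArgs::new("model.pt".into())
    .with_key_remap("conv\\.(.*)", "$1")
    .with_debug_print();

let record = PyTorchFileRecorder::<FullPrecisionSettings>::default()
    .load(load_args, &device)
    .expect("Should decode state successfully");

// `Net` stands in for the user's Burn module matching the checkpoint.
let model = Net::<Backend>::init(&device).load_record(record);
```

----------------------------------------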
TITLE: Configure CUDA Version for tch-rs Installation
DESCRIPTION: Instructions to set the `TORCH_CUDA_VERSION` environment variable before `cargo` retrieves the `tch-rs` dependency, ensuring the correct CUDA distribution of LibTorch is installed. This applies to both Unix-like systems and Windows.
SOURCE: https://github.com/tracel-ai/burn/blob/main/crates/burn-tch/README.md#_snippet_0

LANGUAGE: shell
CODE:
```
export TORCH_CUDA_VERSION=cu124
```

LANGUAGE: powershell
CODE:
```
$Env:TORCH_CUDA_VERSION = "cu124"
```