### Example: Opening with Rectangle Footprint in CuCIM

Source: https://docs.rapids.ai/api/cucim/stable/api

An example demonstrating the morphological opening operation with a rectangle-shaped footprint to open up a gap between two bright regions in an image. Note that opening also shrinks the regions themselves, so it can degrade connectivity rather than improve it.

```python
import cupy as cp
from cucim.skimage.morphology import footprint_rectangle, opening

bad_connection = cp.asarray([[1, 0, 0, 0, 1],
                             [1, 1, 0, 1, 1],
                             [1, 1, 1, 1, 1],
                             [1, 1, 0, 1, 1],
                             [1, 0, 0, 0, 1]], dtype=cp.uint8)
opening(bad_connection, footprint_rectangle((3, 3)))
```

--------------------------------

### Label Connected Regions Example

Source: https://docs.rapids.ai/api/cucim/stable/api

Example demonstrating the usage of the `label` function from `cucim.skimage.measure` with different connectivity settings and background values on a sample image array using cupy.

```python
>>> import cupy as cp
>>> from cucim.skimage.measure import label
>>> x = cp.eye(3).astype(int)
>>> print(x)
[[1 0 0]
 [0 1 0]
 [0 0 1]]
>>> print(label(x, connectivity=1))
[[1 0 0]
 [0 2 0]
 [0 0 3]]
>>> print(label(x, connectivity=2))
[[1 0 0]
 [0 1 0]
 [0 0 1]]
>>> print(label(x, background=-1))
[[1 2 2]
 [2 1 2]
 [2 2 1]]
>>> x = cp.asarray([[1, 0, 0],
...                 [1, 1, 5],
...                 [0, 0, 0]])
>>> print(label(x))
[[1 0 0]
 [1 1 2]
 [0 0 0]]
```

--------------------------------

### Sequential Relabeling with Offset (Python)

Source: https://docs.rapids.ai/api/cucim/stable/api

Illustrates how to use the `relabel_sequential` function with an `offset` parameter to specify the starting value for the relabeled sequence. This allows for custom offset ranges in the output labels.
```python
>>> import cupy as cp
>>> from cucim.skimage.segmentation import relabel_sequential
>>> label_field = cp.array([1, 1, 5, 5, 8, 99, 42])
>>> relab, fw, inv = relabel_sequential(label_field, offset=5)
>>> relab
array([5, 5, 6, 6, 7, 9, 8])
```

--------------------------------

### Calculating Basic Region Properties (Python)

Source: https://docs.rapids.ai/api/cucim/stable/api

This example shows how to use the `regionprops` function to label connected regions in a binary image and then extract properties such as the centroid for the first detected region. It uses `cupy` for array operations, as is common in the RAPIDS ecosystem.

```python
from skimage import data, util
from cucim.skimage.measure import label, regionprops
import cupy as cp

img = cp.asarray(util.img_as_ubyte(data.coins()) > 110)
label_img = label(img, connectivity=img.ndim)
props = regionprops(label_img)
# centroid of first labeled object
print(props[0].centroid)
# centroid of first labeled object (dictionary-like access)
print(props[0]['centroid'])
```

--------------------------------

### Create 2D Sliding Windows on a 2D Array

Source: https://docs.rapids.ai/api/cucim/stable/api

This example illustrates the application of view_as_windows to a 2D CuPy array, creating overlapping sub-arrays (windows) of a defined shape. It shows the resulting structure and dimensions of the windowed array. Requires the cupy library.
```python
>>> import cupy as cp
>>> from cucim.skimage.util import view_as_windows
>>> A = cp.arange(5*4).reshape(5, 4)
>>> A
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15],
       [16, 17, 18, 19]])
>>> window_shape = (4, 3)
>>> B = view_as_windows(A, window_shape)
>>> B.shape
(2, 2, 4, 3)
>>> B
array([[[[ 0,  1,  2],
         [ 4,  5,  6],
         [ 8,  9, 10],
         [12, 13, 14]],

        [[ 1,  2,  3],
         [ 5,  6,  7],
         [ 9, 10, 11],
         [13, 14, 15]]],


       [[[ 4,  5,  6],
         [ 8,  9, 10],
         [12, 13, 14],
         [16, 17, 18]],

        [[ 5,  6,  7],
         [ 9, 10, 11],
         [13, 14, 15],
         [17, 18, 19]]]])
```

--------------------------------

### Example: RGB, LAB, and LCH Color Space Conversions in skimage

Source: https://docs.rapids.ai/api/cucim/stable/api

Demonstrates a typical workflow for color space conversions using functions from cucim.skimage. It converts an RGB image to LAB, then to LCH, and finally back to LAB to show the round-trip conversion. Requires CuPy arrays and the associated color conversion functions.

```python
>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2lab, lab2lch, lch2lab
>>> img = cp.array(data.astronaut())
>>> img_lab = rgb2lab(img)
>>> img_lch = lab2lch(img_lab)
>>> img_lab2 = lch2lab(img_lch)
```

--------------------------------

### Wiener Deconvolution with CuPy and Scikit-image

Source: https://docs.rapids.ai/api/cucim/stable/api

Demonstrates image deconvolution using the Wiener filter. This example utilizes CuPy for GPU acceleration and scikit-image for image processing tasks. It shows how to apply a uniform filter and add noise to an image before deconvolution.
```python import cupy as cp import cupyx.scipy.ndimage as ndi from cucim.skimage import color, restoration from skimage import data img = color.rgb2gray(cp.array(data.astronaut())) psf = cp.ones((5, 5)) / 25 img = ndi.uniform_filter(img, size=psf.shape) img += 0.1 * img.std() * cp.random.standard_normal(img.shape) deconvolved_img = restoration.wiener(img, psf, 0.1) ``` -------------------------------- ### Warp Image with Callable Source: https://docs.rapids.ai/api/cucim/stable/api This example shows how to warp an image using a callable function that modifies coordinates. The `shift_down` function takes coordinate pairs and subtracts 10 from the y-coordinate, effectively shifting the image downwards. The `warp` function applies this callable to the image. ```python from cucim.skimage.transform import warp from skimage import data import cupy as cp image = cp.array(data.camera()) def shift_down(xy): xy[:, 1] -= 10 return xy warped = warp(image, shift_down) ``` -------------------------------- ### Sequential Relabeling with Cucim Stable (Python) Source: https://docs.rapids.ai/api/cucim/stable/api Demonstrates the basic usage of the `relabel_sequential` function to convert arbitrary labels in a CuPy array to a sequential range starting from 1. It shows the relabeled array, the forward mapping (old label to new label), and the inverse mapping (new label to old label). 
```python
>>> import cupy as cp
>>> from cucim.skimage.segmentation import relabel_sequential
>>> label_field = cp.array([1, 1, 5, 5, 8, 99, 42])
>>> relab, fw, inv = relabel_sequential(label_field)
>>> relab
array([1, 1, 2, 2, 3, 5, 4])
>>> print(fw)
ArrayMap:
  1 → 1
  5 → 2
  8 → 3
  42 → 4
  99 → 5
>>> cp.array(fw)
array([0, 1, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5])
>>> cp.array(inv)
array([ 0,  1,  5,  8, 42, 99])
>>> (fw[label_field] == relab).all()
array(True)
>>> (inv[relab] == label_field).all()
array(True)
```

--------------------------------

### Get Image Data Type String

Source: https://docs.rapids.ai/api/cucim/stable/api

Returns the data type of the image in string format. This string can be directly converted to a NumPy dtype object using `numpy.dtype()`, for example, `numpy.dtype(img.typestr)`.

```python
img.typestr
```

--------------------------------

### Relabel Sequential Labels

Source: https://docs.rapids.ai/api/cucim/stable/api

Relabels arbitrary integer labels in an array to a contiguous range starting from a specified offset. It also provides forward and inverse maps for label transformations.

```APIDOC
## POST /websites/rapids_ai_api_cucim_stable/relabel_sequential

### Description
Relabels arbitrary integer labels in an array to a contiguous sequence starting from a specified offset. This function is useful for consolidating labels, especially after segmentation, and provides mappings to track original and new label values.

### Method
POST

### Endpoint
/websites/rapids_ai_api_cucim_stable/relabel_sequential

### Parameters
#### Request Body
- **label_field** (numpy array of int) - Required - An array containing non-negative integer labels. The shape can be arbitrary.
- **offset** (int) - Optional - The starting value for the relabeled sequence. Must be strictly positive. Defaults to 1. ### Request Example ```json { "label_field": "...", "offset": 1 } ``` ### Response #### Success Response (200) - **relabeled** (numpy array of int) - The input label array with labels mapped to the new contiguous range `{offset, ..., number_of_labels + offset - 1}`. The data type matches `label_field`, unless overflow occurs. - **forward_map** (ArrayMap) - A mapping from original labels to the new relabeled values. - **inverse_map** (ArrayMap) - A mapping from the new relabeled values back to the original labels. #### Response Example ```json { "relabeled": "...", "forward_map": "...", "inverse_map": "..." } ``` ### Notes - The label 0 is treated as background and is not remapped. - The `forward_map` can be large if the maximum label value is significantly greater than the number of unique labels. ### Examples ```python >>> import cupy as cp >>> from cucim.skimage.segmentation import relabel_sequential ``` ``` -------------------------------- ### Create 2D Sliding Windows on a 1D Array Source: https://docs.rapids.ai/api/cucim/stable/api This snippet demonstrates how to use view_as_windows to create sliding windows of a specified shape over a 1D CuPy array. The output shows the resulting array of windows and their shapes. Dependencies include cupy. ```python >>> A = cp.arange(10) >>> A array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> window_shape = (3,) >>> B = view_as_windows(A, window_shape) >>> B.shape (8, 3) >>> B array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7], [6, 7, 8], [7, 8, 9]]) ``` -------------------------------- ### Get Distance Transform Indices with Cucim Source: https://docs.rapids.ai/api/cucim/stable/api Demonstrates how to obtain the indices corresponding to the Euclidean Distance Transform (EDT) using cucim. 
The function `morphology.distance_transform_edt` can return both the distance transform values and their corresponding indices. Here, `a` is assumed to be a previously defined binary input array.

```python
>>> edt, inds = morphology.distance_transform_edt(a, return_indices=True)
>>> inds
array([[[0, 0, 1, 1, 3],
        [1, 1, 1, 1, 3],
        [2, 2, 1, 3, 3],
        [3, 3, 4, 4, 3],
        [4, 4, 4, 4, 3]],
       [[0, 0, 1, 1, 4],
        [0, 1, 1, 1, 4],
        [0, 0, 1, 4, 4],
        [0, 0, 3, 3, 4],
        [0, 0, 3, 3, 4]]])
```

--------------------------------

### Compare Central and Standard Image Moments (CuPy)

Source: https://docs.rapids.ai/api/cucim/stable/api

Compares the results of `moments_coords` and `moments_coords_central` when the center is (0, 0). This demonstrates that for a (0, 0) center, central and standard image moments are equivalent. It requires the `cupy` library.

```python
>>> import cupy as cp
>>> from cucim.skimage.measure import moments_coords, moments_coords_central
>>> # Assuming 'coords' is a pre-defined CuPy array of image coordinates:
>>> # cp.allclose(moments_coords(coords), moments_coords_central(coords, (0, 0)))
>>> # Expected output: array(True)
```

--------------------------------

### Get Image Spacing Units by Dimension Order

Source: https://docs.rapids.ai/api/cucim/stable/api

Returns the units for each spacing element in a tuple. The length of the returned tuple is the same as the number of dimensions (`ndim`). This complements the `spacing` method by providing unit information.

```python
img.spacing_units(dim_order='XYZ')
```

--------------------------------

### Get Image Shape

Source: https://docs.rapids.ai/api/cucim/stable/api

Returns the shape of the image as a tuple of dimension sizes, in the order of the dimensions. This property provides a quick way to understand the image's dimensions without needing method calls.
```python
img.shape
```

--------------------------------

### Get Image Dtype Intensity Limits

Source: https://docs.rapids.ai/api/cucim/stable/api

Retrieves the minimum and maximum intensity values for a given image's data type. An option to clip negative values to zero is provided, which is useful for image normalization.

```python
cucim.skimage.util.dtype_limits(image, clip_negative=False)
```

--------------------------------

### Computing Region Properties as a Table (Python)

Source: https://docs.rapids.ai/api/cucim/stable/api

Demonstrates the use of `regionprops_table` to compute specified properties for all labeled regions in an image and return them in a pandas-compatible table (dictionary). This is efficient for batch processing and analysis.

```python
from skimage import data, util
from cucim.skimage.measure import label, regionprops_table
import cupy as cp

img = cp.asarray(util.img_as_ubyte(data.coins()) > 110)
label_img = label(img, connectivity=img.ndim)

properties = ('label', 'centroid', 'area')
table = regionprops_table(label_img, properties=properties)
print(table)
```

--------------------------------

### Get Image Size by Dimension Order

Source: https://docs.rapids.ai/api/cucim/stable/api

Returns the size of the image as a list of integers for the specified dimension order. If no dimension order is provided, it defaults to an empty string. This is useful for accessing sizes in a specific axis.

```python
img.size(dim_order='XYZ')
```

--------------------------------

### Get Image Resolution Information

Source: https://docs.rapids.ai/api/cucim/stable/api

Retrieves a dictionary containing resolution details of the image.
This includes the number of levels, dimensions for each level, downsampling factors, and tile sizes. No specific inputs are required as it operates on the CuImage object itself.

```python
img.resolutions
```

--------------------------------

### SimilarityTransform Class Initialization (Python)

Source: https://docs.rapids.ai/api/cucim/stable/api

Initializes the SimilarityTransform class, which represents a similarity transformation in 2D and 3D. This transformation includes scaling, rotation, and translation. It can be initialized using a transformation matrix, or individual scale, rotation, and translation parameters. The class also supports specifying the dimensionality and the cupy module to use.

```python
class cucim.skimage.transform.SimilarityTransform(matrix=None, scale=None, rotation=None, translation=None, *, dimensionality=2, xp=cupy)
```

Similarity transformation. Has the following form in 2D:

```
X = a0 * x - b0 * y + a1
  = s * x * cos(rotation) - s * y * sin(rotation) + a1

Y = b0 * x + a0 * y + b1
  = s * x * sin(rotation) + s * y * cos(rotation) + b1
```

where `s` is a scale factor and the homogeneous transformation matrix is:

```
[[a0  -b0  a1]
 [b0   a0  b1]
 [ 0    0   1]]
```

The similarity transformation extends the Euclidean transformation with a single scaling factor in addition to the rotation and translation parameters.

Parameters:

- **matrix** ((dim+1, dim+1) ndarray, optional) - Homogeneous transformation matrix.
- **scale** (float, optional) - Scale factor. Implemented only for 2D and 3D.
- **rotation** (float, optional) - Rotation angle, clockwise, as radians. Implemented only for 2D and 3D. For 3D, this is given in XZX Euler angles.
- **translation** ((dim,) ndarray-like, optional) - x, y[, z] translation parameters. Implemented only for 2D and 3D.

Attributes:

- **params** ((dim+1, dim+1) ndarray) - Homogeneous transformation matrix.

Methods:

- `estimate(src, dst)` - Estimate the transformation from a set of corresponding points.

--------------------------------

### Get Image Spacing by Dimension Order

Source: https://docs.rapids.ai/api/cucim/stable/api

Returns the physical spacing of the image in a tuple. If a dimension order is specified, it returns spacing for those dimensions. If a dimension in `dim_order` does not exist, it defaults to 1.0. The units for spacing can be retrieved using `spacing_units`.

```python
img.spacing(dim_order='XYZ')
```

--------------------------------

### Compare Images for Differences

Source: https://docs.rapids.ai/api/cucim/stable/api

Compares two images and returns an image highlighting their differences. Supports 'diff', 'blend', and 'checkerboard' comparison methods. The 'checkerboard' method requires 2D images and allows specifying tile dimensions.

```python
cucim.skimage.util.compare_images(image0, image1, method='diff', n_tiles=(8, 8))
```

--------------------------------

### Iterating Through Region Properties (Python)

Source: https://docs.rapids.ai/api/cucim/stable/api

Demonstrates how to iterate through the properties of a detected region and access their values using both attribute access and dictionary-like access. This is useful for inspecting all available measurements for a region.

```python
for prop in region:
    print(prop, region[prop])
```

--------------------------------

### Adding Custom Region Measurements (Python)

Source: https://docs.rapids.ai/api/cucim/stable/api

Illustrates how to extend the functionality of `regionprops` by defining and passing custom measurement functions via the `extra_properties` argument. This allows for calculating domain-specific metrics beyond the built-in properties.
```python from skimage import data, util from cucim.skimage.measure import label, regionprops import numpy as np import cupy as cp def pixelcount(regionmask): return np.sum(regionmask) img = cp.asarray(util.img_as_ubyte(data.coins()) > 110) label_img = label(img, connectivity=img.ndim) props = regionprops(label_img, extra_properties=(pixelcount,)) # Accessing the custom property print(props[0].pixelcount) print(props[1]['pixelcount']) ``` -------------------------------- ### cucim.skimage.util.view_as_windows Source: https://docs.rapids.ai/api/cucim/stable/api Creates a rolling window view of an n-dimensional array. Windows are overlapping views with a specified step size. ```APIDOC ## cucim.skimage.util.view_as_windows ### Description Creates a rolling window view of the input n-dimensional array. Windows are overlapping views of the input array, with adjacent windows shifted by a specified step size. ### Method N/A (Function) ### Endpoint N/A (Function) ### Parameters #### Path Parameters N/A #### Query Parameters N/A #### Request Body N/A ### Request Example ```python import cupy as cp from cucim.skimage.util.shape import view_as_windows A = cp.arange(4*4).reshape(4,4) # Example with default step=1 # B = view_as_windows(A, window_shape=(2, 2)) # Example with custom step # B = view_as_windows(A, window_shape=(2, 2), step=(2, 1)) ``` ### Response #### Success Response (200) - **arr_out** (ndarray) - (rolling) window view of the input array. #### Response Example ```python # Output depends on window_shape and step # For window_shape=(2,2) and step=(1,1) on the example A: # The output would be an array of shape (3, 3, 2, 2) # showing all possible 2x2 windows. ``` ``` -------------------------------- ### Calculate Triangle Threshold for Image Source: https://docs.rapids.ai/api/cucim/stable/api Computes a threshold value for an image using the triangle algorithm. 
This method draws a line between the histogram peak and the far end of the histogram, then takes the threshold at the point of maximum distance between that line and the histogram. It is suitable for grayscale images.

```python
from cucim.skimage.filters import threshold_triangle
from skimage.data import camera
import cupy as cp

image = cp.array(camera())
thresh = threshold_triangle(image)
binary = image > thresh
```

--------------------------------

### Get CIE XYZ Tristimulus Values

Source: https://docs.rapids.ai/api/cucim/stable/api

Retrieves the CIE XYZ tristimulus values for a specified illuminant and observer. These values are fundamental for colorimetric calculations and defining color spaces. The function returns a tuple of three floats representing X, Y, and Z.

```python
from cucim.skimage.color import xyz_tristimulus_values

# Get the CIE XYZ tristimulus values for a "D65" illuminant and a 10 degree observer
tristimulus_values = xyz_tristimulus_values(illuminant="D65", observer="10")
print(tristimulus_values)
```

--------------------------------

### CIE-LAB to XYZ Color Space Conversion (CuPy)

Source: https://docs.rapids.ai/api/cucim/stable/api

Converts an image from CIE-LAB color space to CIE XYZ color space using CuPy. It allows specifying the illuminant (e.g., 'D65') and observer angle (e.g., '2'). The input 'lab' image must be at least 2-D with the final dimension having 3 channels. It raises a ValueError for unsupported illuminants or observers, and a UserWarning for invalid pixels.
```python
import cupy as cp
from skimage import data
from cucim.skimage.color import rgb2lab, lab2xyz

# Convert an RGB image to LAB first, then from LAB to XYZ
img = cp.array(data.astronaut())
img_lab = rgb2lab(img)
img_xyz = lab2xyz(img_lab, illuminant='D65', observer='2')
```

--------------------------------

### Create Rolling Window Views with view_as_windows

Source: https://docs.rapids.ai/api/cucim/stable/api

Creates overlapping rolling window views of an input n-dimensional CuPy array. Windows are shifted by a specified step size. Be cautious of memory usage, as rolling views can significantly increase memory footprint.

```python
import cupy as cp
from cucim.skimage.util.shape import view_as_windows

A = cp.arange(4*4).reshape(4, 4)
B = view_as_windows(A, window_shape=(2, 2), step=1)
# B.shape == (3, 3, 2, 2): every 2x2 window, shifted by one element
```

--------------------------------

### Estimate Essential Matrix Transform using 8-Point Algorithm

Source: https://docs.rapids.ai/api/cucim/stable/api

Estimates an Essential Matrix Transform from source and destination point correspondences using the 8-point algorithm. Requires at least 8 point pairs for a well-conditioned solution. Returns a boolean indicating success.
```python import cupy as cp from cucim.skimage import transform tform_matrix = transform.EssentialMatrixTransform( rotation=cp.eye(3), translation=cp.array([0, 0, 1]) ) src = cp.array([[ 1.839035, 1.924743], [ 0.543582, 0.375221], [ 0.47324 , 0.142522], [ 0.96491 , 0.598376], [ 0.102388, 0.140092], [15.994343, 9.622164], [ 0.285901, 0.430055], [ 0.09115 , 0.254594]]) dst = cp.array([[1.002114, 1.129644], [1.521742, 1.846002], [1.084332, 0.275134], [0.293328, 0.588992], [0.839509, 0.08729 ], [1.779735, 1.116857], [0.878616, 0.602447], [0.642616, 1.028681]]) tform_matrix.estimate(src, dst) ``` -------------------------------- ### Warp Image with Inverse Transform Source: https://docs.rapids.ai/api/cucim/stable/api This example demonstrates warping an image using the inverse of a previously defined geometric transformation. By using `tform.inverse`, the `warp` function applies the reverse transformation, which can be useful for certain image processing tasks where the inverse mapping is more directly represented. ```python from cucim.skimage.transform import warp, SimilarityTransform from skimage import data import cupy as cp image = cp.array(data.camera()) tform = SimilarityTransform(translation=(0, -10)) # Warp using the inverse of the transform warped_inverse = warp(image, tform.inverse) ``` -------------------------------- ### Detect Blobs using Difference of Gaussians (DoG) in Python Source: https://docs.rapids.ai/api/cucim/stable/api Finds blobs in a grayscale image using the Difference of Gaussians (DoG) method. It returns the coordinates and standard deviation of the Gaussian kernel for each detected blob. Requires input as a NumPy or CuPy array. Parameters control blob size, intensity threshold, and overlap. 
```python
import cupy as cp
from skimage import data
from cucim.skimage import feature

coins = cp.array(data.coins())
feature.blob_dog(coins, threshold=.05, min_sigma=10, max_sigma=40)
```

--------------------------------

### Comparing Thresholding Methods (CuPy)

Source: https://docs.rapids.ai/api/cucim/stable/api

Generates a figure that compares the results of various automatic image thresholding algorithms applied to an input image using CuPy. The function visualizes the output of the isodata, li, mean, minimum, otsu, triangle, and yen methods.

```python
import cupy as cp
from skimage.data import text
from cucim.skimage.filters import try_all_threshold

text_img = cp.array(text())
fig, ax = try_all_threshold(text_img, figsize=(10, 6), verbose=False)
```

--------------------------------

### Relabel Sequential Labels with CuPy

Source: https://docs.rapids.ai/api/cucim/stable/api

Remaps arbitrary integer labels in an array to a contiguous sequence starting from a specified offset. This function returns the relabeled array, a forward map to convert original labels to new ones, and an inverse map for the reverse conversion. The background label 0 is preserved.

```python
>>> import cupy as cp
>>> from cucim.skimage.segmentation import relabel_sequential
```

--------------------------------

### Swirl Transformation with Cucim Skimage (Python)

Source: https://docs.rapids.ai/api/cucim/stable/api

Applies a swirl transformation to an image, distorting it around a specified center point. Parameters control the strength, radius, and additional rotation of the swirl effect. Supports custom output shapes and interpolation modes. Requires cupy and scikit-image.
```python from skimage import data from cucim.skimage.transform import swirl import cupy as cp image = cp.array(data.camera()) # Example: Apply a swirl with default parameters swirled_image = swirl(image) # Example: Apply a stronger swirl with a different center and radius swirled_stronger = swirl(image, center=(256, 256), strength=2.0, radius=50) # Example: Swirl with additional rotation and custom output shape swirled_custom = swirl(image, rotation=45, output_shape=(600, 600)) print("Swirl transformation applied.") ``` -------------------------------- ### Create Image Montage with CuCIM Montage Source: https://docs.rapids.ai/api/cucim/stable/api The `montage` function arranges an ensemble of images into a single grid. It supports various parameters for customization, including fill values, intensity rescaling, grid shape, padding, and channel axis. The output is a single ndarray with images tiled together. ```python import cupy as cp from cucim.skimage.util import montage arr_in = cp.arange(3 * 2 * 2).reshape(3, 2, 2) arr_out = montage(arr_in) print(arr_out.shape) print(arr_out) arr_out_nonsquare = montage(arr_in, grid_shape=(1, 3)) print(arr_out_nonsquare) print(arr_out_nonsquare.shape) ``` -------------------------------- ### Morphological Closing with CuPy Source: https://docs.rapids.ai/api/cucim/stable/api Performs a morphological closing operation on an image using a specified footprint. The 'mode' parameter controls how array borders are handled, with options like 'reflect', 'constant', 'nearest', etc. The 'cval' parameter is used when 'mode' is 'constant' to specify the fill value. This function is available starting from version 24.06. 
```python import cupy as cp from cucim.skimage.morphology import closing, footprint_rectangle # Example: Close a gap between two bright lines broken_line = cp.asarray([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [1, 1, 0, 1, 1], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]], dtype=cp.uint8) closing(broken_line, footprint_rectangle((3, 3))) ``` -------------------------------- ### Apply Unsharp Mask Filter (CuPy) Source: https://docs.rapids.ai/api/cucim/stable/api Demonstrates the application of the unsharp_mask filter using CuPy. It shows how to use the filter with different data types and the 'preserve_range' option. The function sharpens an image by subtracting a blurred version of it. The 'radius' parameter controls the blur radius, and 'amount' controls the strength of the sharpening. 'preserve_range' ensures the output stays within the original data's range. ```python >>> import cupy as cp >>> array = cp.ones(shape=(5,5), dtype=cp.uint8)*100 >>> array[2,2] = 120 >>> array array([[100, 100, 100, 100, 100], [100, 100, 100, 100, 100], [100, 100, 120, 100, 100], [100, 100, 100, 100, 100], [100, 100, 100, 100, 100]], dtype=uint8) >>> cp.around(unsharp_mask(array, radius=0.5, amount=2),2) array([[0.39, 0.39, 0.39, 0.39, 0.39], [0.39, 0.39, 0.38, 0.39, 0.39], [0.39, 0.38, 0.53, 0.38, 0.39], [0.39, 0.39, 0.38, 0.39, 0.39], [0.39, 0.39, 0.39, 0.39, 0.39]]) >>> array = cp.ones(shape=(5,5), dtype=cp.int8)*100 >>> array[2,2] = 127 >>> cp.around(unsharp_mask(array, radius=0.5, amount=2),2) array([[0.79, 0.79, 0.79, 0.79, 0.79], [0.79, 0.78, 0.75, 0.78, 0.79], [0.79, 0.75, 1. , 0.75, 0.79], [0.79, 0.78, 0.75, 0.78, 0.79], [0.79, 0.79, 0.79, 0.79, 0.79]]) >>> cp.around(unsharp_mask(array, radius=0.5, amount=2, ... preserve_range=True), ... 2) array([[100. , 100. , 99.99, 100. , 100. ], [100. , 99.39, 95.48, 99.39, 100. ], [ 99.99, 95.48, 147.59, 95.48, 99.99], [100. , 99.39, 95.48, 99.39, 100. ], [100. , 100. , 99.99, 100. , 100. 
]])
```

--------------------------------

### Thresholding an Image using Yen's Method (CuPy)

Source: https://docs.rapids.ai/api/cucim/stable/api

Applies Yen's thresholding method to a grayscale image using CuPy. This method determines an upper threshold value for foreground pixel classification. It can optionally use a pre-computed histogram.

```python
import cupy as cp
from skimage.data import camera
from cucim.skimage.filters import threshold_yen

image = cp.array(camera())
thresh = threshold_yen(image)
binary = image <= thresh
```

--------------------------------

### Apply Sauvola Local Thresholding to an Image

Source: https://docs.rapids.ai/api/cucim/stable/api

Implements the Sauvola local thresholding algorithm, a modification of the Niblack method. It calculates a threshold for each pixel using the local mean, standard deviation, and a parameter 'k', adjusted by the maximum standard deviation 'r'. This is also suited for document image binarization.

```python
from cucim.skimage.filters import threshold_sauvola
from skimage import data
import cupy as cp

image = cp.array(data.page())
t_sauvola = threshold_sauvola(image, window_size=15, k=0.2)
binary_image = image > t_sauvola
```

--------------------------------

### Create Image Montage

Source: https://docs.rapids.ai/api/cucim/stable/api

Creates a rectangular montage from an array of equally shaped single- or multichannel images. It supports options for filling, intensity rescaling, grid shape, padding, and channel axis.

```python
cucim.skimage.util.montage(arr_in, fill='mean', rescale_intensity=False, grid_shape=None, padding_width=0, *, channel_axis=None, square_grid_default=True)
```

--------------------------------

### Binary Opening

Source: https://docs.rapids.ai/api/cucim/stable/api

Performs a fast binary morphological opening on an image. Opening is defined as an erosion followed by a dilation, used to remove small bright spots and connect dark gaps.
```APIDOC
## POST /websites/rapids_ai_api_cucim_stable/binary_opening

### Description
Performs a fast binary morphological opening on an image. Opening is defined as an erosion followed by a dilation, used to remove small bright spots and connect dark gaps.

### Method
POST

### Endpoint
/websites/rapids_ai_api_cucim_stable/binary_opening

### Parameters
#### Request Body
- **image** (ndarray) - Binary input image.
- **footprint** (ndarray or tuple, optional) - The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped footprint (connectivity=1). The footprint can also be provided as a sequence of smaller footprints.
- **out** (ndarray of bool, optional) - The array to store the result of the morphology. If None is passed, a new array will be allocated.
- **mode** (str, optional) - Determines how the array borders are handled. Valid modes are: ‘max’, ‘min’, ‘ignore’. Default is ‘ignore’.
```

--------------------------------

### N-D Image Rescaling with Coordinate Mapping

Source: https://docs.rapids.ai/api/cucim/stable/api

This snippet demonstrates how to rescale a 3-D image cube by defining custom coordinates for each element in the output image. It involves setting up a scale factor, calculating the output shape, generating coordinate grids, and applying an offset for spatial data. The `warp` function from `cucim.skimage.transform` performs the transformation.

```python
>>> import cupy as cp
>>> from cucim.skimage.transform import warp
>>> cube_shape = (30, 30, 30)
>>> cube = cp.random.rand(*cube_shape)
>>> scale = 0.1
>>> output_shape = tuple(int(scale * s) for s in cube_shape)
>>> coords0, coords1, coords2 = cp.mgrid[:output_shape[0],
...                                      :output_shape[1],
...                                      :output_shape[2]]
>>> coords = cp.asarray([coords0, coords1, coords2])
>>> coords = (coords + 0.5) / scale - 0.5
>>> warped = warp(cube, coords)
```

--------------------------------

### AffineTransform for Image Transformation (Python)

Source: https://docs.rapids.ai/api/cucim/stable/api

Defines an AffineTransform class for 2D and higher-dimensional geometric transformations such as scaling, rotation, shearing, and translation. It can be initialized with a matrix or with individual transformation parameters. The estimate method determines the transformation matrix from source and destination points, and the warp function applies the transformation to an image.

```python
import cupy as cp
from cucim.skimage import transform
from skimage import data

img = cp.array(data.astronaut())

# Define source and destination points:
src = cp.array([[150, 150], [250, 100], [150, 200]])
dst = cp.array([[200, 200], [300, 150], [150, 400]])

# Estimate the transformation matrix:
tform = transform.AffineTransform()
tform.estimate(src, dst)

# Apply the transformation:
warped = transform.warp(img, inverse_map=tform.inverse)
```

--------------------------------

### Blob Detection using Laplacian of Gaussian (LoG)

Source: https://docs.rapids.ai/api/cucim/stable/api

Finds blobs in a grayscale image using the Laplacian of Gaussian (LoG) method. Returns coordinates and the standard deviation of the Gaussian kernel for each detected blob.

```APIDOC
## POST /websites/rapids_ai_api_cucim_stable

### Description
Finds blobs in the given grayscale image using the Laplacian of Gaussian (LoG) method. For each blob found, the method returns its coordinates and the standard deviation of the Gaussian kernel that detected the blob.

### Method
POST

### Endpoint
/websites/rapids_ai_api_cucim_stable

### Parameters
#### Request Body
- **image** (ndarray) - Required - Input grayscale image; blobs are assumed to be light on dark background (white on black).
- **min_sigma** (float) - Optional - The minimum standard deviation for the Gaussian kernel. Keep this low to detect smaller blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.
- **max_sigma** (float) - Optional - The maximum standard deviation for the Gaussian kernel. Keep this high to detect larger blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.
- **num_sigma** (int) - Optional - The number of intermediate values of standard deviations to consider between min_sigma and max_sigma.
- **threshold** (float or None) - Optional - The absolute lower bound for scale space maxima. Local maxima smaller than threshold are ignored. Reduce this to detect blobs with lower intensities. If threshold_rel is also specified, whichever threshold is larger will be used. If None, threshold_rel is used instead.
- **overlap** (float) - Optional - A value between 0 and 1. If the area of two blobs overlaps by a fraction greater than this value, the smaller blob is eliminated.
- **log_scale** (bool) - Optional - If set, intermediate values of standard deviations are interpolated using a logarithmic scale to the base 10. If not, linear interpolation is used.
- **threshold_rel** (float or None) - Optional - Minimum intensity of peaks, calculated as `max(log_space) * threshold_rel`, where `log_space` refers to the stack of Laplacian of Gaussian (LoG) images computed internally. This should have a value between 0 and 1. If None, threshold is used instead.
- **exclude_border** (bool) - Optional - If set to True, the function will exclude blobs that are detected on the border of the image.
### Request Example
```json
{
  "image": "...",
  "min_sigma": 1.0,
  "max_sigma": 50.0,
  "num_sigma": 10,
  "threshold": 0.2,
  "overlap": 0.5,
  "log_scale": false,
  "threshold_rel": null,
  "exclude_border": false
}
```

### Response
#### Success Response (200)
- **A** (ndarray) - A 2D array with shape (n, 3) where each row represents `(y, x, sigma)`. `(y, x)` are the coordinates of the blob and `sigma` is the standard deviation of the Gaussian kernel that detected the blob.

#### Response Example
```json
{
  "A": [
    [197.0, 153.0, 20.333334],
    [124.0, 336.0, 20.333334]
  ]
}
```
```

--------------------------------

### Save Image to File

Source: https://docs.rapids.ai/api/cucim/stable/api

Saves the image data to a specified file path. Currently, only the .ppm file format is supported; the output can be viewed with the 'eog' command on Ubuntu. The method takes the file path as an argument.

```python
img.save(file_path)
```

--------------------------------

### Thresholding an Image using Triangle Method (CuPy)

Source: https://docs.rapids.ai/api/cucim/stable/api

Applies the triangle thresholding method to a grayscale image using CuPy for GPU acceleration. It calculates an upper threshold value, classifying pixels above this value as foreground.

```python
import cupy as cp
from cucim.skimage.filters import threshold_triangle
from skimage.data import camera

image = cp.array(camera())
thresh = threshold_triangle(image)
binary = image > thresh
```