### Object Storage API

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/gridscale/gsclient-go/v3/README.md

Endpoints for managing Gridscale Object Storage, including access keys and buckets.

```APIDOC
## Object Storage API

### Description
Provides endpoints for managing Object Storage resources, including access keys and buckets.

### Endpoints

#### GET /objectstorages/accesskeys
- **Description**: Get a list of Object Storage access keys.
- **Method**: GET

#### GET /objectstorages/accesskeys/{accesskey_id}
- **Description**: Get details of a specific Object Storage access key.
- **Method**: GET
- **Path Parameters**:
  - **accesskey_id** (string) - Required - The ID of the access key to retrieve.

#### POST /objectstorages/accesskeys
- **Description**: Create a new Object Storage access key.
- **Method**: POST

#### DELETE /objectstorages/accesskeys/{accesskey_id}
- **Description**: Delete an Object Storage access key.
- **Method**: DELETE
- **Path Parameters**:
  - **accesskey_id** (string) - Required - The ID of the access key to delete.

#### GET /objectstorages/buckets
- **Description**: Get a list of Object Storage buckets.
- **Method**: GET
```

--------------------------------

### Storages API

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/gridscale/gsclient-go/v3/README.md

Endpoints for managing Gridscale storages, including retrieval, creation, updates, deletion, and event fetching.

```APIDOC
## Storages API

### Description
Provides endpoints for managing storage resources, including listing all storages, retrieving a specific storage, creating a new storage, creating a storage from a backup, cloning a storage, updating an existing storage, deleting a storage, and fetching storage events.

### Endpoints

#### GET /storages
- **Description**: Get a list of all storages.
- **Method**: GET

#### GET /storages/{storage_id}
- **Description**: Get details of a specific storage.
- **Method**: GET
- **Path Parameters**:
  - **storage_id** (string) - Required - The ID of the storage to retrieve.

#### POST /storages
- **Description**: Create a new storage.
- **Method**: POST

#### POST /storages/from_backup
- **Description**: Create a new storage from a backup.
- **Method**: POST

#### POST /storages/{storage_id}/clone
- **Description**: Clone an existing storage.
- **Method**: POST
- **Path Parameters**:
  - **storage_id** (string) - Required - The ID of the storage to clone.

#### PATCH /storages/{storage_id}
- **Description**: Update an existing storage.
- **Method**: PATCH
- **Path Parameters**:
  - **storage_id** (string) - Required - The ID of the storage to update.

#### DELETE /storages/{storage_id}
- **Description**: Delete a storage.
- **Method**: DELETE
- **Path Parameters**:
  - **storage_id** (string) - Required - The ID of the storage to delete.

#### GET /storages/{storage_id}/events
- **Description**: Get a list of events for a specific storage.
- **Method**: GET
- **Path Parameters**:
  - **storage_id** (string) - Required - The ID of the storage.
```
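As a usage illustration, here is a minimal, hedged sketch of reaching these storage endpoints through the vendored gsclient-go v3 client. `DefaultConfiguration` and `NewClient` come from the library's README; the `GetStorageList` method and the `Properties` field names are assumptions based on the library's naming conventions, not verbatim from this document:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/gridscale/gsclient-go/v3"
)

func main() {
	// Build a client against the public gridscale API.
	config := gsclient.DefaultConfiguration("USER-UUID", "API-TOKEN")
	client := gsclient.NewClient(config)
	ctx := context.Background()

	// GET /storages: list all storages (method name assumed from gsclient-go conventions).
	storages, err := client.GetStorageList(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, storage := range storages {
		fmt.Println(storage.Properties.ObjectUUID, storage.Properties.Name)
	}
}
```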
--------------------------------

### Initialize Cloud Storage Client in Go

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/cloud.google.com/go/storage/README.md

Demonstrates how to create a new `storage.Client` instance. This client is essential for interacting with Google Cloud Storage services and should be reused throughout an application. It requires a `context.Context` and handles potential initialization errors.

```go
client, err := storage.NewClient(ctx)
if err != nil {
	log.Fatal(err)
}
```

--------------------------------

### Configure Gridscale Builder with Secondary Storage

Source: https://context7.com/gridscale/packer-plugin-gridscale/llms.txt

Configures the Gridscale Packer builder to create a template using secondary storage. This is useful for data-intensive templates where initial provisioning should occur on separate storage.

```hcl
source "gridscale" "data_template" {
  api_key   = "${env("GRIDSCALE_UUID")}"
  api_token = "${env("GRIDSCALE_TOKEN")}"

  base_template_uuid = "fd65f8ce-e2c6-40af-8fc3-92efa0d4eecb"

  server_cores     = 4
  server_memory    = 8
  storage_capacity = 20

  # Enable secondary storage - template will be created from this storage
  secondary_storage = true

  template_name = "data-processing-template"
  hostname      = "data-server"

  ssh_username = "root"
  ssh_password = "securePass456"
}

build {
  sources = ["source.gridscale.data_template"]

  # All provisioning happens on the secondary storage
  provisioner "shell" {
    inline = [
      "mkfs.ext4 /dev/sdb",
      "mkdir -p /data",
      "mount /dev/sdb /data",
      "echo '/dev/sdb /data ext4 defaults 0 2' >> /etc/fstab"
    ]
  }
}
```

--------------------------------

### Storage Snapshot Scheduler API

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/gridscale/gsclient-go/v3/README.md

Endpoints for managing Gridscale Storage Snapshot Schedules, including retrieval and creation.

```APIDOC
## Storage Snapshot Scheduler API

### Description
Provides endpoints for managing Storage Snapshot Schedules, including listing all schedules, retrieving a specific schedule, and creating a new schedule.

### Endpoints

#### GET /storagesnapshotschedules
- **Description**: Get a list of all storage snapshot schedules.
- **Method**: GET

#### GET /storagesnapshotschedules/{schedule_id}
- **Description**: Get details of a specific storage snapshot schedule.
- **Method**: GET
- **Path Parameters**:
  - **schedule_id** (string) - Required - The ID of the schedule to retrieve.

#### POST /storagesnapshotschedules
- **Description**: Create a new storage snapshot schedule.
- **Method**: POST
```
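A matching hedged sketch of the create call, reusing the `client` and `ctx` from the storages example above. The `CreateStorageSnapshotSchedule` method, its request type, and the field names are assumptions modeled on gsclient-go's naming conventions; `storageUUID` is a placeholder:

```go
// POST /storagesnapshotschedules (sketch; names assumed from gsclient-go conventions).
req := gsclient.StorageSnapshotScheduleCreateRequest{
	Name:          "nightly-snapshots",
	RunInterval:   1440, // minutes between snapshot runs
	KeepSnapshots: 7,    // number of snapshots to retain
}
created, err := client.CreateStorageSnapshotSchedule(ctx, storageUUID, req)
if err != nil {
	log.Fatal(err)
}
fmt.Println(created.ObjectUUID)
```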
--------------------------------

### Read Object from Cloud Storage Bucket in Go

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/cloud.google.com/go/storage/README.md

Provides a Go code example for reading an object from a specified bucket. It utilizes the previously created `storage.Client` to open a reader for the object, reads its content into a byte slice, and ensures the reader is closed afterward. Error handling for reading operations is included.

```go
// Read the object1 from bucket.
rc, err := client.Bucket("bucket").Object("object1").NewReader(ctx)
if err != nil {
	log.Fatal(err)
}
defer rc.Close()

body, err := io.ReadAll(rc)
if err != nil {
	log.Fatal(err)
}
```

--------------------------------

### HCL: Gridscale Packer Server Configuration Parameters

Source: https://context7.com/gridscale/packer-plugin-gridscale/llms.txt

This HCL configuration defines essential server parameters for a Gridscale Packer build, including CPU cores, RAM, and storage capacity. Optional parameters like server name, hostname, user data for cloud-init, and secondary storage creation are also shown.

```hcl
source "gridscale" "example" {
  # Required
  server_cores     = 2  # CPU cores
  server_memory    = 4  # RAM in GB
  storage_capacity = 10 # Boot storage in GB

  # Optional
  server_name       = "packer-builder-${timestamp()}" # auto-generated if not set
  hostname          = "vm-hostname"                   # defaults to "packer-hostname"
  user_data         = "cloud-init configuration"      # for cloud-init templates
  secondary_storage = false                           # create secondary storage, template from it
}
```

--------------------------------

### Basic HCL2 Example for Gridscale Packer Builder

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/docs/builders/gridscale.mdx

This HCL2 code snippet demonstrates a basic configuration for the gridscale Packer builder. It specifies essential parameters like the base template, hostname, SSH credentials, server resources, storage capacity, and the desired template name for the output. Ensure you replace placeholder values with your actual API credentials or set environment variables.

```hcl2
source "gridscale" "example" {
  base_template_uuid = "fd65f8ce-e2c6-40af-8fc3-92efa0d4eecb"
  hostname           = "test-hostname"
  ssh_password       = "testPassword"
  ssh_username       = "root"
  server_cores       = 2
  server_memory      = 4
  storage_capacity   = 10
  template_name      = "my-ubuntu20.04-template"
}

build {
  sources = ["source.gridscale.example"]
}
```

--------------------------------

### Basic JSON Example for Gridscale Packer Builder

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/docs/builders/gridscale.mdx

This JSON code snippet illustrates a basic configuration for the gridscale Packer builder. It defines the builder type and essential parameters such as the output template name, SSH password, hostname, username, server memory, server cores, storage capacity, and the base template UUID. It's crucial to provide your actual API credentials or configure environment variables.

```json
{
  "builders": [
    {
      "type": "gridscale",
      "template_name": "my-ubuntu20.04-template",
      "password": "testPassword",
      "hostname": "test-hostname",
      "ssh_username": "root",
      "server_memory": 4,
      "server_cores": 2,
      "storage_capacity": 10,
      "base_template_uuid": "fd65f8ce-e2c6-40af-8fc3-92efa0d4eecb"
    }
  ]
}
```

--------------------------------

### Configure Gridscale Builder from Base Template

Source: https://context7.com/gridscale/packer-plugin-gridscale/llms.txt

Configures the Gridscale Packer builder to create a custom template from an existing base template. It includes authentication, server and storage configuration, and defines provisioning steps.

```hcl
source "gridscale" "ubuntu_custom" {
  # Authentication - can also use environment variables
  api_key   = "${env("GRIDSCALE_UUID")}"  # or set via GRIDSCALE_UUID
  api_token = "${env("GRIDSCALE_TOKEN")}" # or set via GRIDSCALE_TOKEN
  api_url   = "https://api.gridscale.io"  # optional, defaults to gridscale API

  # Template source - use existing gridscale template
  base_template_uuid = "fd65f8ce-e2c6-40af-8fc3-92efa0d4eecb" # Ubuntu 20.04

  # Server configuration
  server_name   = "packer-ubuntu-builder"
  server_cores  = 2
  server_memory = 4 # GB
  hostname      = "ubuntu-vm"

  # Storage configuration
  storage_capacity = 10 # GB

  # Template output
  template_name = "ubuntu-20.04-custom-${timestamp()}"

  # SSH configuration
  ssh_username = "root"
  ssh_password = "initialPassword123!" # used for template initialization
}

build {
  sources = ["source.gridscale.ubuntu_custom"]

  provisioner "shell" {
    inline = [
      "apt-get update",
      "apt-get install -y nginx docker.io",
      "systemctl enable nginx"
    ]
  }
}
```

--------------------------------
### Read FAT and Create File in Go

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/mitchellh/go-fs/README.md

Demonstrates reading an existing FAT filesystem from a file-backed disk image and creating a new file with content in the root directory. It utilizes the fat library to interact with the filesystem. Assumes the input file 'FLOPPY.dmg' already contains a FAT filesystem.

```go
import (
	"io"
	"os"

	fs "github.com/mitchellh/go-fs"
	"github.com/mitchellh/go-fs/fat"
)

f, err := os.OpenFile("FLOPPY.dmg", os.O_RDWR|os.O_CREATE, 0666)
if err != nil {
	panic(err)
}
defer f.Close()

// BlockDevice backed by a file
device, err := fs.NewFileDisk(f)
if err != nil {
	panic(err)
}

filesys, err := fat.New(device)
if err != nil {
	panic(err)
}

rootDir, err := filesys.RootDir()
if err != nil {
	panic(err)
}

subEntry, err := rootDir.AddFile("HELLO_WORLD")
if err != nil {
	panic(err)
}

file, err := subEntry.File()
if err != nil {
	panic(err)
}

_, err = io.WriteString(file, "I am the contents of this file.")
if err != nil {
	panic(err)
}
```

--------------------------------

### xxhash: Benchmarking Pure Go vs Assembly Implementations (Shell)

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/klauspost/compress/zstd/internal/xxhash/README.md

Commands to run benchmarks comparing the pure Go and assembly implementations of the xxhash Sum64 function for byte inputs. These benchmarks measure performance based on input size.

```Shell
# Benchmark pure Go implementation
go test -tags purego -benchtime 10s -bench '/xxhash,direct,bytes'

# Benchmark assembly implementation
go test -benchtime 10s -bench '/xxhash,direct,bytes'
```

--------------------------------

### Basic file locking with flock

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/gofrs/flock/README.md

Demonstrates basic file locking using the flock package. It shows how to create a new file lock, attempt a non-blocking lock, handle errors, perform operations if locked, and finally unlock the file.

```go
import "github.com/gofrs/flock"

fileLock := flock.New("/var/lock/go-lock.lock")

locked, err := fileLock.TryLock()
if err != nil {
	// handle locking error
}

if locked {
	// do work
	fileLock.Unlock()
}
```
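When the lock may be held briefly by another process, the package can also retry instead of failing fast. A short sketch using `TryLockContext` from the same package, which retries at the given interval until the lock is acquired or the context expires:

```go
import (
	"context"
	"time"

	"github.com/gofrs/flock"
)

fileLock := flock.New("/var/lock/go-lock.lock")

// Retry every 678ms until the lock is acquired or the 30s timeout elapses.
lockCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

locked, err := fileLock.TryLockContext(lockCtx, 678*time.Millisecond)
if err != nil {
	// handle locking error (includes context cancellation/deadline)
}

if locked {
	// do work
	fileLock.Unlock()
}
```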
--------------------------------

### Go: Setup In-memory Metrics Sink and Signal Handler

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/armon/go-metrics/README.md

This example illustrates setting up an in-memory metrics sink with a specified retention and aggregation interval, along with an in-memory signal handler for capturing and dumping metrics. It also shows how to initialize a global metrics sink and emit various types of metrics (gauge, key/value, counter, sample).

```go
import (
	"time"

	"github.com/armon/go-metrics"
)

// Setup the inmem sink and signal handler
inm := metrics.NewInmemSink(10*time.Second, 1*time.Minute)
sig := metrics.DefaultInmemSignal(inm)
metrics.NewGlobal(metrics.DefaultConfig("service-name"), inm)

// Run some code
inm.SetGauge([]string{"foo"}, 42)
inm.EmitKey([]string{"bar"}, 30)

inm.IncrCounter([]string{"baz"}, 42)
inm.IncrCounter([]string{"baz"}, 1)
inm.IncrCounter([]string{"baz"}, 80)

inm.AddSample([]string{"method", "wow"}, 42)
inm.AddSample([]string{"method", "wow"}, 100)
inm.AddSample([]string{"method", "wow"}, 22)

// When a signal comes in, output like the following will be dumped to stderr:
// [2014-01-28 14:57:33.04 -0800 PST][G] 'foo': 42.000
// [2014-01-28 14:57:33.04 -0800 PST][P] 'bar': 30.000
// [2014-01-28 14:57:33.04 -0800 PST][C] 'baz': Count: 3 Min: 1.000 Mean: 41.000 Max: 80.000 Stddev: 39.509
// [2014-01-28 14:57:33.04 -0800 PST][S] 'method.wow': Count: 3 Min: 22.000 Mean: 54.667 Max: 100.000 Stddev: 40.513
```

--------------------------------

### Compress Byte Blocks using FSE

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/klauspost/compress/fse/README.md

Compresses a single independent block of byte data using Finite State Entropy encoding. It requires input data and can optionally utilize a scratch object for reduced allocations. The function returns the compressed output, or an error such as ErrIncompressible or ErrUseRLE when the input is incompressible or best handled by RLE.

```Go
package main

import (
	"fmt"

	"github.com/klauspost/compress/fse"
)

func main() {
	input := []byte("this is a sample string to compress")

	// Compress the data. A nil Scratch lets the package allocate one;
	// pass your own to reduce allocations across calls. Note that very
	// short or high-entropy inputs will return fse.ErrIncompressible.
	compressed, err := fse.Compress(input, nil)
	if err != nil {
		fmt.Printf("Compression error: %v\n", err)
		return
	}

	fmt.Printf("Original size: %d bytes\n", len(input))
	fmt.Printf("Compressed size: %d bytes\n", len(compressed))

	// To decompress, you would use fse.Decompress
}
```

--------------------------------
### Decompress Byte Blocks using FSE

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/klauspost/compress/fse/README.md

Decompresses a block of data previously compressed with Finite State Entropy. The input must be exactly the compressed block produced by Compress. A scratch object can be provided for efficiency. Note that successful decompression does not guarantee data integrity; checksums should be handled by the caller.

```Go
package main

import (
	"bytes"
	"fmt"

	"github.com/klauspost/compress/fse"
)

func main() {
	// Round-trip example: compress repetitive (and therefore compressible)
	// sample data, then decompress it again.
	input := bytes.Repeat([]byte("gridscale "), 100)

	compressed, err := fse.Compress(input, nil)
	if err != nil {
		fmt.Printf("Compression error: %v\n", err)
		return
	}

	// Decompress takes the exact compressed block; a Scratch object can be
	// reused across calls to reduce allocations.
	scratch := &fse.Scratch{}
	decompressed, err := fse.Decompress(compressed, scratch)
	if err != nil {
		fmt.Printf("Decompression error: %v\n", err)
		return
	}

	fmt.Printf("Round-trip ok: %v\n", bytes.Equal(input, decompressed))
}
```

--------------------------------
### Using Zstd Dictionaries for Compression in Go

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/klauspost/compress/zstd/README.md

This Go code snippet illustrates how to enable dictionary compression when using the zstd library. The `WithEncoderDict` option is used to specify a dictionary, which can potentially improve compression ratios for data similar to the dictionary's content. Note that using an unsuitable dictionary might increase the output size.

```Go
import "github.com/klauspost/compress/zstd"

// To enable a dictionary use `WithEncoderDict(dict []byte)`.
// Here only one dictionary will be used and it will likely be used even if it doesn't improve compression.
// The used dictionary must be used to decompress the content.
// For any real gains, the dictionary should be built with similar data.
// If an unsuitable dictionary is used the output may be slightly larger than using no dictionary.
// Use the zstd commandline tool (https://github.com/facebook/zstd/releases) to build a dictionary from sample data.

// Example of creating a decoder with a dictionary (assuming 'dict' is a []byte containing dictionary data):
// var decoder, _ = zstd.NewReader(nil, zstd.WithDecoderDicts(dict))

// Example of creating an encoder with a dictionary:
// var encoder, _ = zstd.NewWriter(out, zstd.WithEncoderDict(dict))
```

--------------------------------

### Install flock package

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/gofrs/flock/README.md

Installs the flock package using the go get command. This is the standard way to manage Go dependencies.

```shell
go get -u github.com/gofrs/flock
```

--------------------------------

### xxhash: Hash Byte Slices and Strings (Go)

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/klauspost/compress/zstd/internal/xxhash/README.md

Provides functions to calculate the 64-bit xxHash (XXH64) for byte slices and strings directly. It also includes a Digest type that implements the hash.Hash64 interface for incremental hashing.

```Go
package main

import "github.com/cespare/xxhash"

func main() {
	data := []byte("hello world")
	hashValue := xxhash.Sum64(data)
	println(hashValue)

	stringData := "hello world"
	hashStringValue := xxhash.Sum64String(stringData)
	println(hashStringValue)

	digest := xxhash.New()
	digest.Write([]byte("hello"))
	digest.Write([]byte(" "))
	digest.Write([]byte("world"))
	hashDigestValue := digest.Sum64()
	println(hashDigestValue)
}
```

--------------------------------

### Stream Decompression with Zstd in Go

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/klauspost/compress/zstd/README.md

This Go code snippet demonstrates how to decompress a data stream using the zstd library. It creates a new zstd reader from an input stream and copies the decompressed content to an output writer. It's crucial to call the Close function on the reader to release resources and stop goroutines.

```Go
import (
	"io"

	"github.com/klauspost/compress/zstd"
)

func Decompress(in io.Reader, out io.Writer) error {
	d, err := zstd.NewReader(in)
	if err != nil {
		return err
	}
	defer d.Close()

	// Copy content...
	_, err = io.Copy(out, d)
	return err
}
```

--------------------------------

### Buffer Decompression with Zstd in Go

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/klauspost/compress/zstd/README.md

This Go code snippet shows how to decompress a byte slice (buffer) using the zstd library. It utilizes a pre-configured decoder to decompress the source byte slice into a destination buffer. If no destination buffer is provided, the decoder will allocate one.

```Go
import "github.com/klauspost/compress/zstd"

// Create a reader that caches decompressors.
// For this operation type we supply a nil Reader.
var decoder, _ = zstd.NewReader(nil)

// Decompress a buffer. We don't supply a destination buffer,
// so it will be allocated by the decoder.
func Decompress(src []byte) ([]byte, error) {
	return decoder.DecodeAll(src, nil)
}
```

--------------------------------
### Compressing Blocks with Go's zstd Encoder

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/klauspost/compress/zstd/README.md

This Go code snippet demonstrates how to compress a byte slice using the `zstd.NewWriter` and its `EncodeAll` method. It shows the creation of a reusable encoder and a `Compress` function that utilizes it. For optimal performance, a pre-allocated destination buffer can be provided to `EncodeAll`.

```Go
import "github.com/klauspost/compress/zstd"

// Create a writer that caches compressors.
// For this operation type we supply a nil Writer.
var encoder, _ = zstd.NewWriter(nil)

// Compress a buffer.
// If you have a destination buffer, the allocation in the call can also be eliminated.
func Compress(src []byte) []byte {
	return encoder.EncodeAll(src, make([]byte, 0, len(src)))
}
```

--------------------------------

### Go Immutable Radix Tree: Basic Operations and Longest Prefix Match

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/hashicorp/go-immutable-radix/README.md

Demonstrates the creation of an immutable radix tree and performing basic insert operations. It also shows how to find the longest prefix match for a given key using the `LongestPrefix` method. This is useful for prefix-based lookups and routing.

```go
package main

import (
	"fmt"

	iradix "github.com/hashicorp/go-immutable-radix"
)

func main() {
	// Create a tree
	r := iradix.New()
	r, _, _ = r.Insert([]byte("foo"), 1)
	r, _, _ = r.Insert([]byte("bar"), 2)
	r, _, _ = r.Insert([]byte("foobar"), 2)

	// Find the longest prefix match
	m, _, _ := r.Root().LongestPrefix([]byte("foozip"))
	if string(m) != "foo" {
		panic("should be foo")
	}
	fmt.Printf("Longest prefix for 'foozip': %s\n", m)
}
```

--------------------------------

### Unarchiving Files with go-getter

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/hashicorp/go-getter/v2/README.md

go-getter automatically unarchives files based on their extension or an explicit 'archive' query parameter. This functionality supports various archive formats like tar.gz, zip, and gz. Archiving can also be explicitly disabled. The 'archive' parameter is removed before the download.

```shell
./foo.zip
./some/other/path?archive=zip
./some/path?archive=false
```
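The same behavior is available programmatically. A hedged sketch using the go-getter v2 client; the `Request`/`Client.Get` shape and the default getter behavior are assumptions based on the v2 package API, and the source string is the `archive=zip` example from above:

```go
package main

import (
	"context"
	"log"

	getter "github.com/hashicorp/go-getter/v2"
)

func main() {
	// Download ./some/other/path and unarchive it as a zip into ./extracted
	// (Request/Client.Get shape assumed from the go-getter v2 API).
	req := &getter.Request{
		Src: "./some/other/path?archive=zip",
		Dst: "./extracted",
	}

	client := &getter.Client{}
	if _, err := client.Get(context.Background(), req); err != nil {
		log.Fatal(err)
	}
}
```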
--------------------------------

### Enable Key Conversion for Datastore Migration (Basic/Manual Scaling)

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/google.golang.org/appengine/README.md

Enables automatic conversion between `cloud.google.com/go/datastore` and `google.golang.org/appengine/datastore` key types for applications using basic or manual scaling on App Engine. This is typically done in the `/_ah/start` handler.

```go
http.HandleFunc("/_ah/start", func(w http.ResponseWriter, r *http.Request) {
	datastore.EnableKeyConversion(appengine.NewContext(r))
})
```

--------------------------------

### Go Immutable Radix Tree: Range Scan with Iterator

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/hashicorp/go-immutable-radix/README.md

Illustrates how to perform a range scan on an immutable radix tree using its iterator. The example shows inserting keys, seeking to a lower bound, and iterating through keys within a specified lexicographical range. This is effective for querying data within ordered sets.

```go
package main

import (
	"fmt"

	iradix "github.com/hashicorp/go-immutable-radix"
)

func main() {
	// Create a tree
	r := iradix.New()
	r, _, _ = r.Insert([]byte("001"), 1)
	r, _, _ = r.Insert([]byte("002"), 2)
	r, _, _ = r.Insert([]byte("005"), 5)
	r, _, _ = r.Insert([]byte("010"), 10)
	r, _, _ = r.Insert([]byte("100"), 100)

	// Range scan over the keys that sort lexicographically between [003, 050)
	it := r.Root().Iterator()
	it.SeekLowerBound([]byte("003"))

	fmt.Println("Keys in range [003, 050):")
	for key, _, ok := it.Next(); ok; key, _, ok = it.Next() {
		if string(key) >= "050" {
			break
		}
		fmt.Println(string(key))
	}
}
```

--------------------------------

### Compress Data to Output Stream using zstd Encoder (Go)

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/klauspost/compress/zstd/README.md

This snippet demonstrates how to compress data from an io.Reader to an io.Writer using the zstd package. It initializes a new zstd writer with default options, copies data, and ensures resources are released by closing the writer. It's suitable for compressing large streams.

```Go
import (
	"io"

	"github.com/klauspost/compress/zstd"
)

// Compress input to output.
func Compress(input io.Reader, output io.Writer) error {
	w, err := zstd.NewWriter(output)
	if err != nil {
		return err
	}
	_, err = io.Copy(w, input)
	if err != nil {
		w.Close()
		return err
	}
	return w.Close()
}
```

--------------------------------

### S3 Getter Configuration Options

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/hashicorp/go-getter/v2/README.md

The S3 getter allows configuration through URL query parameters for access keys and tokens. It also supports using IAM Instance Profiles for authentication by omitting these parameters. S3-compliant servers like Minio are also compatible.

```
aws_access_key_id: "YOUR_ACCESS_KEY"
aws_access_key_secret: "YOUR_SECRET_KEY"
aws_access_token: "YOUR_ACCESS_TOKEN"
aws_profile: "my-profile"
```

--------------------------------

### Downloading Subdirectory using Glob Pattern

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/hashicorp/go-getter/v2/README.md

Demonstrates using filesystem glob patterns to specify a subdirectory for download. The pattern must match exactly one entry in the repository.

```shell
https://github.com/hashicorp/go-getter.git//test-*
```

--------------------------------

### Run Go Benchmarks

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/ugorji/go/codec/README.md

This command navigates to the 'bench' directory and runs performance benchmarks. The '-bench .' flag runs all benchmarks, '-benchmem' includes memory allocation statistics, and '-benchtime 1s' sets the benchmark duration to 1 second.

```bash
cd bench
go test -bench . -benchmem -benchtime 1s
```

--------------------------------
### Compress and Decompress XZ Streams with Go API

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/ulikunitz/xz/README.md

Demonstrates how to use the `xz.NewWriter` and `xz.NewReader` functions from the Go xz package to compress a string into a buffer and then decompress it, writing the output to standard output. Requires the 'github.com/ulikunitz/xz' package.

```go
package main

import (
	"bytes"
	"io"
	"log"
	"os"

	"github.com/ulikunitz/xz"
)

func main() {
	const text = "The quick brown fox jumps over the lazy dog.\n"
	var buf bytes.Buffer

	// compress text
	w, err := xz.NewWriter(&buf)
	if err != nil {
		log.Fatalf("xz.NewWriter error %s", err)
	}
	if _, err := io.WriteString(w, text); err != nil {
		log.Fatalf("WriteString error %s", err)
	}
	if err := w.Close(); err != nil {
		log.Fatalf("w.Close error %s", err)
	}

	// decompress buffer and write output to stdout
	r, err := xz.NewReader(&buf)
	if err != nil {
		log.Fatalf("NewReader error %s", err)
	}
	if _, err = io.Copy(os.Stdout, r); err != nil {
		log.Fatalf("io.Copy error %s", err)
	}
}
```

--------------------------------

### Compress Independent Blocks using Huff0

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/klauspost/compress/huff0/README.md

Compresses single independent blocks using the Huff0 algorithm. Users must provide input and handle potential errors such as ErrIncompressible, ErrUseRLE, ErrTooBig, or internal errors. The Scratch object can be used to reduce allocations and reuse compression tables.

```Go
package main

import (
	"bytes"
	"fmt"

	"github.com/klauspost/compress/huff0"
)

func main() {
	// Use repetitive input; tiny or high-entropy blocks return ErrIncompressible.
	input := bytes.Repeat([]byte("some data to compress "), 64)

	// Compress1X: a single stream. The second return value reports whether a
	// table from the supplied Scratch was reused.
	out1x, _, err1x := huff0.Compress1X(input, nil)
	if err1x != nil {
		fmt.Printf("Compress1X error: %v\n", err1x)
	} else {
		fmt.Printf("Compress1X output size: %d bytes\n", len(out1x))
	}

	// Compress4X: four interleaved streams, usually faster on larger blocks.
	out4x, _, err4x := huff0.Compress4X(input, nil)
	if err4x != nil {
		fmt.Printf("Compress4X error: %v\n", err4x)
	} else {
		fmt.Printf("Compress4X output size: %d bytes\n", len(out4x))
	}

	// A Scratch object can be reused to reduce allocations and reuse tables.
	scratch := &huff0.Scratch{}
	compressed, _, err := huff0.Compress1X(input, scratch)
	if err != nil {
		fmt.Printf("Compress1X with Scratch error: %v\n", err)
		return
	}
	fmt.Printf("Compressed size with Scratch: %d bytes\n", len(compressed))

	// Decompression: rebuild the table from the compressed stream, then
	// decode the remaining payload.
	s2, remain, err := huff0.ReadTable(compressed, nil)
	if err != nil {
		fmt.Printf("ReadTable error: %v\n", err)
		return
	}
	decompressed, err := s2.Decompress1X(remain)
	if err != nil {
		fmt.Printf("Decompression error: %v\n", err)
		return
	}
	fmt.Printf("Round-trip ok: %v\n", bytes.Equal(input, decompressed))
}
```

--------------------------------

### Basic String Globbing with go-glob in Go

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/ryanuber/go-glob/README.md

This Go code snippet demonstrates the basic usage of the go-glob library for string matching. It shows how to use the Glob function with wildcard characters like '*' to check if a string matches a given pattern. The library is useful for simple pattern matching without needing regular expressions.

```go
package main

import (
	"fmt"

	glob "github.com/ryanuber/go-glob"
)

func main() {
	fmt.Println(glob.Glob("*World!", "Hello, World!"))          // true
	fmt.Println(glob.Glob("Hello,*", "Hello, World!"))          // true
	fmt.Println(glob.Glob("*ello,*", "Hello, World!"))          // true
	fmt.Println(glob.Glob("World!", "Hello, World!"))           // false
	fmt.Println(glob.Glob("/home/*", "/home/ryanuber/.bashrc")) // true
}
```

--------------------------------
### Stats: Record Measurements (Go)

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/go.opencensus.io/README.md

This Go code snippet shows how to record measurements associated with a measure using the `stats` package. It implicitly tags the measurements with tags from the provided context.

```go
stats.Record(ctx, videoSize.M(102478))
```

--------------------------------

### Verifying File Checksum with MD5

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/hashicorp/go-getter/v2/README.md

Shows how to automatically verify the checksum of a downloaded file using the MD5 algorithm. The checksum is provided as a query parameter to the URL.

```shell
./foo.txt?checksum=md5:b7d96c89d09d9e204f5fedc4d5d55b21
```

--------------------------------

### Go API Encoder Constructors

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/ugorji/go/codec/README.md

Offers constructor functions for creating Encoder instances. These functions enable the creation of encoders that write to an io.Writer or directly to a byte slice, using a provided Handle for encoding. They are the entry point for serializing data.

```go
func NewEncoder(w io.Writer, h Handle) *Encoder
func NewEncoderBytes(out *[]byte, h Handle) *Encoder
```

--------------------------------

### Go API Data Structures

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/ugorji/go/codec/README.md

Lists the data structures exported by the vendored ugorji/go codec package. These include handles for different encoding formats (Basic, Binc, Cbor, Json, Msgpack, Simple), extension types, and options for decoding and encoding. These structures form the basis for data serialization and deserialization.

```go
type BasicHandle struct{ ... }
type BincHandle struct{ ... }
type BytesExt interface{ ... }
type CborHandle struct{ ... }
type DecodeOptions struct{ ... }
type Decoder struct{ ... }
type EncodeOptions struct{ ... }
type Encoder struct{ ... }
type Ext interface{ ... }
type Handle interface{ ... }
type InterfaceExt interface{ ... }
type JsonHandle struct{ ... }
type MapBySlice interface{ ... }
type MissingFielder interface{ ... }
type MsgpackHandle struct{ ... }
type MsgpackSpecRpcMultiArgs []interface{}
type RPCOptions struct{ ... }
type Raw []byte
type RawExt struct{ ... }
type Rpc interface{ ... }
type Selfer interface{ ... }
type SimpleHandle struct{ ... }
type TypeInfos struct{ ... }
```
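Tying the constructor signatures and handle types above together, a small round-trip sketch using the `JsonHandle`; the `Point` struct is purely illustrative:

```go
package main

import (
	"fmt"

	"github.com/ugorji/go/codec"
)

type Point struct {
	X, Y int
}

func main() {
	var (
		jh  codec.JsonHandle
		buf []byte
	)

	// Encode a value into buf using the JSON handle.
	enc := codec.NewEncoderBytes(&buf, &jh)
	if err := enc.Encode(Point{X: 3, Y: 4}); err != nil {
		fmt.Println("encode error:", err)
		return
	}

	// Decode it back.
	var p Point
	dec := codec.NewDecoderBytes(buf, &jh)
	if err := dec.Decode(&p); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Printf("%s -> %+v\n", buf, p)
}
```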
--------------------------------

### Verifying File Checksum (Type Guessed)

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/hashicorp/go-getter/v2/README.md

Illustrates verifying a file's checksum where the type (e.g., SHA1, SHA256) is automatically determined by go-getter based on the checksum string's length.

```shell
./foo.txt?checksum=b7d96c89d09d9e204f5fedc4d5d55b21
```

--------------------------------

### Install go-getter Library in Go

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/hashicorp/go-getter/v2/README.md

Installs the go-getter library version 2 for use in Go projects. This is a standard Go package installation command.

```bash
go get github.com/hashicorp/go-getter/v2
```

--------------------------------

### HCL Array and Object Syntax Example

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/hashicorp/hcl/README.md

Illustrates how to define lists of objects and nested objects in HCL. Repeating a block with the same name produces a list of objects, while a named block defines a nested object structure.

```hcl
service {
  key = "value"
}

service {
  key = "value"
}
```

```hcl
variable "ami" {
  description = "the AMI to use"
}
```

--------------------------------

### Git Getter Configuration Options

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/hashicorp/go-getter/v2/README.md

The Git getter supports various configuration options via query parameters. These include specifying a Git reference ('ref'), providing an SSH private key for authentication ('sshkey'), and setting the clone depth ('depth'). It supports both URL-style and SCP-style SSH addresses.

```shell
git::ssh://git@example.com/foo/bar
git::git@example.com/foo/bar
```

```
git_ref: "main"
git_ssh_key: ""
git_depth: 10
```

--------------------------------

### Go: Create Aggregations for Views

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/go.opencensus.io/README.md

Demonstrates how to create different aggregation types (Distribution, Count, Sum) for use in defining views. These aggregations specify how measures will be processed.

```go
distAgg := view.Distribution(1<<32, 2<<32, 3<<32)
countAgg := view.Count()
sumAgg := view.Sum()
```
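To show where these aggregations go, a hedged sketch that registers a view applying the distribution aggregation to the `videoSize` measure from the earlier stats snippet; the view name is illustrative:

```go
// Register a view so recorded videoSize measurements are aggregated
// into a distribution (view name and measure are illustrative).
v := &view.View{
	Name:        "example.com/video_size_distribution",
	Description: "distribution of processed video sizes",
	Measure:     videoSize,
	Aggregation: distAgg,
}
if err := view.Register(v); err != nil {
	log.Fatalf("failed to register view: %v", err)
}
```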
--------------------------------

### ISO Images API

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/gridscale/gsclient-go/v3/README.md

Endpoints for managing Gridscale ISO images, including retrieval, creation, updates, deletion, and event fetching.

```APIDOC
## ISO Images API

### Description
Provides endpoints for managing ISO images, including listing all ISO images, retrieving a specific ISO image, creating a new ISO image, updating an existing ISO image, deleting an ISO image, and fetching ISO image events.

### Endpoints

#### GET /isoimages
- **Description**: Get a list of all ISO images.
- **Method**: GET

#### GET /isoimages/{isoimage_id}
- **Description**: Get details of a specific ISO image.
- **Method**: GET
- **Path Parameters**:
  - **isoimage_id** (string) - Required - The ID of the ISO image to retrieve.

#### POST /isoimages
- **Description**: Create a new ISO image.
- **Method**: POST

#### PUT /isoimages/{isoimage_id}
- **Description**: Update an existing ISO image.
- **Method**: PUT
- **Path Parameters**:
  - **isoimage_id** (string) - Required - The ID of the ISO image to update.

#### DELETE /isoimages/{isoimage_id}
- **Description**: Delete an ISO image.
- **Method**: DELETE
- **Path Parameters**:
  - **isoimage_id** (string) - Required - The ID of the ISO image to delete.

#### GET /isoimages/{isoimage_id}/events
- **Description**: Get a list of events for a specific ISO image.
- **Method**: GET
- **Path Parameters**:
  - **isoimage_id** (string) - Required - The ID of the ISO image.
```

--------------------------------

### Enable Key Conversion for Datastore Migration (Automatic Scaling)

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/google.golang.org/appengine/README.md

Provides a method to enable key conversion for datastore migrations in App Engine automatic scaling environments, where the `/_ah/start` handler is unavailable. It suggests calling `datastore.EnableKeyConversion` within handlers or middleware.

```go
datastore.EnableKeyConversion(appengine.NewContext(r))
```

--------------------------------

### HCL Multi-line String Example

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/github.com/hashicorp/hcl/README.md

Demonstrates the syntax for multi-line strings in HCL using a 'here document' style. This allows for embedding multi-line text directly within the configuration.

```hcl
long_text = <<EOF
hello
world
EOF
```

--------------------------------

### Build sys/unix Go Files (New System)

Source: https://github.com/gridscale/packer-plugin-gridscale/blob/main/vendor/golang.org/x/sys/unix/README.md

Generates Go files for the sys/unix package using the new build system, which utilizes a Docker container for reproducible builds. This system checks out kernel and system library sources directly. Requires an amd64/Linux system with Docker installed. Run `mkall.sh` to generate all files.

```bash
#!/bin/bash
# Ensure you are on an amd64/Linux system and have Docker installed

# Set GOOS and GOARCH environment variables before running
# export GOOS="your_target_os"
# export GOARCH="your_target_arch"

./mkall.sh

# To see the commands that will be run without executing them:
# ./mkall.sh -n
```