diff --git a/deployment/25.10.3/404.html b/deployment/25.10.3/404.html new file mode 100644 index 00000000..a501e304 --- /dev/null +++ b/deployment/25.10.3/404.html @@ -0,0 +1,4572 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ +

404 - Not found

+ +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/concepts/automatic-rebalancing/index.html b/deployment/25.10.3/architecture/concepts/automatic-rebalancing/index.html new file mode 100644 index 00000000..4b4c3bcf --- /dev/null +++ b/deployment/25.10.3/architecture/concepts/automatic-rebalancing/index.html @@ -0,0 +1,4692 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Automatic Rebalancing - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Automatic Rebalancing

+ +

Automatic rebalancing is a fundamental feature of distributed data storage systems designed to maintain an even +distribution of data across storage nodes. This process ensures optimal performance, prevents resource overutilization, +and enhances system resilience by dynamically redistributing data in response to changes in cluster topology or workload +patterns.

+

In a distributed storage system, data is typically spread across multiple storage nodes for redundancy, scalability, and +performance. Over time, various factors can lead to an imbalance in data distribution, such as:

+
    +
  • The addition of new storage nodes, which initially lack any data.
  • +
  • The removal or failure of existing nodes, requiring data redistribution to maintain availability.
  • +
  • Uneven data growth or shifting workload patterns across existing storage nodes.
  • +
+

Automatic rebalancing addresses these issues by dynamically redistributing data across the cluster. This process is +driven by an algorithm that continuously monitors data distribution and redistributes data when imbalances are detected. +The goal is to achieve uniform data placement while minimizing performance overhead during the rebalancing process.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/concepts/disaggregated/index.html b/deployment/25.10.3/architecture/concepts/disaggregated/index.html new file mode 100644 index 00000000..2e077871 --- /dev/null +++ b/deployment/25.10.3/architecture/concepts/disaggregated/index.html @@ -0,0 +1,4697 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Disaggregated - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Disaggregated

+ +

Disaggregated storage represents a modern approach to distributed storage architectures, where compute and storage +resources are decoupled. This separation allows for greater flexibility, scalability, and efficiency in managing data +across large-scale distributed environments.

+

Traditional storage architectures typically integrate compute and storage within the same nodes, leading to resource +contention and inefficiencies. Disaggregated storage solutions address these limitations by separating storage resources +from compute resources, enabling independent scaling of each component based on workload demands.

+

Key characteristics of disaggregated storage solutions include:

+
    +
  • Independent Scalability: Compute and storage can be scaled separately, optimizing resource utilization and + reducing unnecessary hardware expansion.
  • +
  • Resource Efficiency: Storage is pooled and accessible across multiple compute nodes, reducing data duplication and + improving overall efficiency.
  • +
  • Improved Performance: By reducing bottlenecks associated with tightly coupled storage, applications can achieve + better latency and throughput.
  • +
  • Flexibility and Adaptability: Different storage technologies (e.g., NVMe-over-Fabrics, object storage) can be + integrated seamlessly, allowing organizations to adopt the best-fit storage solutions for specific workloads.
  • +
  • Simplified Management: Centralized storage management reduces complexity, enabling easier provisioning, + monitoring, and maintenance of storage resources.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/concepts/erasure-coding/index.html b/deployment/25.10.3/architecture/concepts/erasure-coding/index.html new file mode 100644 index 00000000..b3960b37 --- /dev/null +++ b/deployment/25.10.3/architecture/concepts/erasure-coding/index.html @@ -0,0 +1,4694 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Erasure Coding - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Erasure Coding

+ +

Erasure coding is a data protection mechanism used in distributed storage systems to enhance fault tolerance and +optimize storage efficiency. It provides redundancy by dividing data into multiple fragments and encoding it with +additional parity fragments, enabling data recovery in the event of node failures.

+

Traditional data redundancy methods, such as replication, require multiple full copies of data, leading to significant +storage overhead. Erasure coding improves upon this by using mathematical algorithms to generate parity fragments, +allowing data reconstruction with far less storage overhead.

+

The core principle of erasure coding involves breaking data into k data fragments and computing m parity +fragments. These k+m fragments are distributed across multiple storage nodes. The system can recover lost data using +any k available fragments, even if up to m fragments are missing or corrupted.

+

Erasure coding has a number of key characteristics:

+
    +
  • High Fault Tolerance: Erasure coding can tolerate multiple node failures while allowing full data recovery.
  • +
  • Storage Efficiency: Compared to replication, erasure coding requires less additional storage to achieve similar levels of redundancy.
  • +
  • Computational Overhead: Encoding and decoding operations involve computational complexity, which may impact performance in latency-sensitive applications.
  • +
  • Flexibility: The parameters k and m can be adjusted to balance redundancy, performance, and storage overhead.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/concepts/hyper-converged/index.html b/deployment/25.10.3/architecture/concepts/hyper-converged/index.html new file mode 100644 index 00000000..409ca163 --- /dev/null +++ b/deployment/25.10.3/architecture/concepts/hyper-converged/index.html @@ -0,0 +1,4697 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Hyper-Converged - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Hyper-Converged

+ +

Hyper-converged storage is a key component of hyper-converged infrastructure (HCI), where compute, storage, and +networking resources are tightly integrated into a unified system. This approach simplifies management, enhances +scalability, and optimizes resource utilization in distributed data storage environments.

+

Traditional storage architectures often separate compute and storage into distinct hardware layers, requiring complex +management and specialized hardware. Hyper-converged storage consolidates these resources within the same nodes, forming +a software-defined storage (SDS) layer that dynamically distributes and manages data across the cluster.

+

Key characteristics of hyper-converged storage include:

+
    +
  • Integrated Storage and Compute: Storage resources are virtualized and distributed across the compute nodes, + eliminating the need for dedicated storage arrays.
  • +
  • Scalability: New nodes can be added seamlessly, increasing both compute and storage capacity without complex + reconfiguration.
  • +
  • Software-Defined Storage (SDS): A software layer abstracts and manages storage resources, enabling automation, + fault tolerance, and efficiency.
  • +
  • High Availability and Resilience: Data is replicated across nodes to ensure redundancy and fault tolerance, + minimizing downtime.
  • +
  • Simplified Management: A unified management interface enables streamlined provisioning, monitoring, and + maintenance of storage and compute resources.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/concepts/index.html b/deployment/25.10.3/architecture/concepts/index.html new file mode 100644 index 00000000..aac3be42 --- /dev/null +++ b/deployment/25.10.3/architecture/concepts/index.html @@ -0,0 +1,4685 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Concepts - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Concepts

+ +

Understanding the fundamental concepts behind simplyblock is essential for effectively utilizing its distributed storage +architecture. Simplyblock provides a cloud-native, software-defined storage solution that enables highly scalable, +high-performance storage for containerized and virtualized environments. By leveraging NVMe over TCP (NVMe/TCP) and +advanced data management features, simplyblock ensures low-latency access, high availability, and seamless scalability. +This documentation section provides detailed explanations of key storage concepts within simplyblock, helping users +understand how its storage components function and interact within a distributed system.

+

The concepts covered in this section include Logical Volumes (LVs), Snapshots, Clones, Hyper-Convergence, +Disaggregation, and more. Each concept plays a crucial role in optimizing storage performance, ensuring data durability, +and enabling efficient resource allocation. Whether you are deploying simplyblock in a Kubernetes environment, a +virtualized infrastructure, or a bare-metal setup, understanding these core principles will help you design, configure, +and manage your storage clusters effectively.

+

By familiarizing yourself with these concepts, you will gain insight into how simplyblock abstracts storage resources, +provides scalable and resilient data services, and integrates with modern cloud-native environments. This knowledge is +essential for leveraging simplyblock to meet your organization's storage performance, reliability, and scalability +requirements.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/concepts/logical-volumes/index.html b/deployment/25.10.3/architecture/concepts/logical-volumes/index.html new file mode 100644 index 00000000..3a2e5e25 --- /dev/null +++ b/deployment/25.10.3/architecture/concepts/logical-volumes/index.html @@ -0,0 +1,4707 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Logical Volumes - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Logical Volumes

+ +

Logical Volumes (LVs) in Simplyblock are virtual NVMe devices that provide scalable, high-performance storage within a +distributed storage cluster. They enable flexible storage allocation, efficient resource utilization, and seamless data +management for cloud-native applications.

+

A Logical Volume (LV) in simplyblock is an abstracted storage entity dynamically allocated from a storage pool managed +by the simplyblock system. Unlike traditional block storage, simplyblock’s LVs offer advanced features such as thin +provisioning, snapshotting, and replication to enhance resilience and scalability.

+

Key characteristics of Logical Volumes include:

+
    +
  • Dynamic Allocation: LVs can be created, resized, and deleted on demand without manual intervention in the + underlying hardware.
  • +
  • Thin Provisioning: Storage space is allocated only when needed, optimizing resource utilization.
  • +
  • High Performance: Simplyblock’s architecture ensures low-latency access to LVs, making them suitable for demanding + workloads.
  • +
  • Fault Tolerance: Data is distributed across multiple nodes to prevent data loss and improve reliability.
  • +
+
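The thin-provisioning characteristic above can be illustrated with a toy volume that reports a large logical size but only allocates backing blocks when a block is first written. This sketch is purely illustrative and unrelated to simplyblock's internal allocator.

```python
class ThinVolume:
    """A logically large volume that allocates backing blocks lazily."""

    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks
        self.backing = {}  # block index -> data, allocated on first write

    def write(self, block, data):
        if not 0 <= block < self.logical_blocks:
            raise IndexError("block out of range")
        self.backing[block] = data  # physical allocation happens here

    def read(self, block):
        # unwritten blocks read back as zeroes
        return self.backing.get(block, b"\x00")

    @property
    def allocated_blocks(self):
        return len(self.backing)

vol = ThinVolume(logical_blocks=1_000_000)  # large logical size
vol.write(0, b"a")
vol.write(42, b"b")
# only two blocks are physically allocated despite the large logical size
```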

Two basic types of logical volumes are supported by simplyblock:

+
    +
  • NVMe-oF Subsystems: Each logical volume is backed by a separate set of queue pairs. By default, each subsystem + provides three queue pairs and one network connection.
  • +
+

Volumes show up in Linux using lsblk as /dev/nvme0n2, /dev/nvme1n1, /dev/nvmeXn1, ...

+
    +
  • NVMe-oF Namespaces: Each logical volume is backed by an NVMe namespace. A namespace is a feature similar to a + logical partition of a drive, although it is defined on the NVMe level (device or target). Up to 32 namespaces share + a single NVMe subsystem and its queue pairs and connections.
  • +
+

This is a more resource-efficient, but performance-limited, type of individual volume. It is useful if many small + volumes are required. Both methods can be combined in a single cluster.

+

Volumes show up in Linux using lsblk as /dev/nvme0n1, /dev/nvme0n2, /dev/nvme0nX, ...

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/concepts/persistent-volumes/index.html b/deployment/25.10.3/architecture/concepts/persistent-volumes/index.html new file mode 100644 index 00000000..d4101c19 --- /dev/null +++ b/deployment/25.10.3/architecture/concepts/persistent-volumes/index.html @@ -0,0 +1,4692 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Persistent Volumes - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Persistent Volumes

+ +

Persistent Volumes (PVs) in Kubernetes provide a mechanism for managing storage resources independently of individual +Pods. Unlike ephemeral storage, which is tied to the lifecycle of a Pod, PVs ensure data persistence across Pod restarts +and rescheduling, enabling stateful applications to function reliably in a Kubernetes cluster.

+

In Kubernetes, storage resources are abstracted through the Persistent Volume framework, which decouples storage +provisioning from application deployment. A Persistent Volume (PV) represents a piece of storage that has been +provisioned in the cluster, while a Persistent Volume Claim (PVC) is a request for storage made by an application.

+

Key characteristics of Persistent Volumes include:

+
    +
  • Decoupled Storage Management: PVs exist independently of Pods, allowing storage to persist even when Pods are deleted or rescheduled.
  • +
  • Dynamic and Static Provisioning: Storage can be provisioned manually by administrators (static provisioning) or automatically by storage classes (dynamic provisioning).
  • +
  • Access Modes: PVs support multiple access modes, such as ReadWriteOnce (RWO), ReadOnlyMany (ROX), and ReadWriteMany (RWX), defining how storage can be accessed by Pods.
  • +
  • Reclaim Policies: When a PV is no longer needed, it can be retained, recycled, or deleted based on its configured reclaim policy.
  • +
  • Storage Classes: Kubernetes allows administrators to define different types of storage using StorageClasses, enabling automated provisioning of PVs based on workload requirements.
  • +
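Dynamic provisioning is typically driven by a PersistentVolumeClaim that references a StorageClass. The fragment below is a generic, hedged example; the StorageClass name `simplyblock-csi` is an assumption for illustration, not necessarily the name used by an actual simplyblock deployment.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce          # RWO: mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi
  storageClassName: simplyblock-csi   # assumed StorageClass name
```

When a Pod references this claim, the provisioner backing the StorageClass creates a matching PV automatically and binds it to the claim.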
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/concepts/simplyblock-cluster/index.html b/deployment/25.10.3/architecture/concepts/simplyblock-cluster/index.html new file mode 100644 index 00000000..24991c10 --- /dev/null +++ b/deployment/25.10.3/architecture/concepts/simplyblock-cluster/index.html @@ -0,0 +1,4839 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Simplyblock Cluster - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Simplyblock Cluster

+ +

The simplyblock storage platform consists of three different types of cluster nodes, each belonging to either the +control plane or the storage plane.

+

Control Plane

+

The control plane orchestrates, monitors, and controls the overall storage infrastructure. It provides centralized +administration, policy enforcement, and automation for managing storage nodes, logical volumes (LVs), and cluster-wide +configurations. The control plane operates independently of the storage plane, ensuring that control and metadata +operations do not interfere with data processing. It facilitates provisioning, fault management, and system scaling +while offering APIs and CLI tools for seamless integration with external management systems. A single control plane +can manage multiple clusters.

+

Storage Plane

+

The storage plane is the layer responsible for managing and distributing data across storage nodes within a cluster. It +handles data placement, replication, fault tolerance, and access control, ensuring that logical volumes (LVs) provide +high-performance, low-latency storage to applications. The storage plane operates independently of the control plane, +allowing seamless scalability and dynamic resource allocation without disrupting system operations. By leveraging +NVMe-over-TCP and software-defined storage principles, simplyblock’s storage plane ensures efficient data distribution, +high availability, and resilience, making it ideal for cloud-native and high-performance computing environments.

+

Management Node

+

A management node is a node of the control plane cluster. The management node runs the necessary management services +including the Cluster API, services such as Grafana, Prometheus, and Graylog, as well as the FoundationDB database +cluster.

+

Storage Node

+

A storage node is a node of the storage plane cluster. The storage node provides storage capacity to the distributed +storage pool of a specific storage cluster. The storage node runs the necessary data management services including +the Storage Node Management API, the SPDK service, and handles logical volume primary connections of NVMe-oF +multipathing.

+

Secondary Node

+

A secondary node is a node of the storage plane cluster. The secondary node provides automatic failover and high +availability for logical volumes using NVMe-oF multipathing. In a highly available cluster, simplyblock automatically +provisions secondary nodes alongside primary nodes and assigns one secondary node per primary.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/concepts/snapshots-clones/index.html b/deployment/25.10.3/architecture/concepts/snapshots-clones/index.html new file mode 100644 index 00000000..b7e89ac4 --- /dev/null +++ b/deployment/25.10.3/architecture/concepts/snapshots-clones/index.html @@ -0,0 +1,4787 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Snapshots and Clones - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Snapshots and Clones

+ +

Volume snapshots and volume clones are essential data management features in distributed storage systems that enable +data protection, recovery, and replication. While both techniques involve capturing the state of a volume at a specific +point in time, they serve distinct purposes and operate using different mechanisms.

+

Volume Snapshots

+

A volume snapshot is a read-only, point-in-time copy of a storage volume. It preserves the state of the volume at the +moment the snapshot is taken, allowing users to restore data or create new volumes based on the captured state. +Snapshots are typically implemented using copy-on-write (COW) or redirect-on-write (ROW) techniques, minimizing storage +overhead and improving efficiency.

+

Key characteristics of volume snapshots include:

+
    +
  • Space Efficiency: Snapshots share common data blocks with the original volume, requiring minimal additional + storage.
  • +
  • Fast Creation: As snapshots do not duplicate data immediately, they can be created almost instantaneously.
  • +
  • Versioning and Recovery: Users can revert a volume to a previous state using snapshots, aiding in disaster + recovery and data protection.
  • +
  • Performance Considerations: While snapshots are efficient, excessive snapshot accumulation can impact performance + due to metadata overhead and fragmentation.
  • +
+

Volume Clones

+

A volume clone is a writable, independent copy of a storage volume, created from either an existing volume or a +snapshot. Unlike snapshots, clones are fully functional duplicates that can operate as separate storage entities.

+

Key characteristics of volume clones include:

+
    +
  • Writable and Independent: Clones can be modified without affecting the original volume.
  • +
  • Use Case for Testing and Development: Clones are commonly used for staging environments, testing, and application + sandboxing.
  • +
  • Storage Overhead: Unlike snapshots, clones may require additional storage capacity to accommodate changes made + after cloning.
  • +
  • Immediate Availability: A clone provides an instant copy of the original volume, avoiding long data copying + processes.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/concepts/storage-pooling/index.html b/deployment/25.10.3/architecture/concepts/storage-pooling/index.html new file mode 100644 index 00000000..7e212db3 --- /dev/null +++ b/deployment/25.10.3/architecture/concepts/storage-pooling/index.html @@ -0,0 +1,4693 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Storage Pooling - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Storage Pooling

+ +

Storage pooling is a technique used in distributed data storage systems to aggregate multiple storage devices into a +single, unified storage resource. This approach enhances resource utilization, improves scalability, and simplifies +management by abstracting physical storage infrastructure into a logical storage pool.

+

Traditional storage architectures often rely on dedicated storage devices assigned to specific applications or +workloads, leading to inefficiencies in resource allocation and potential underutilization. Storage pooling addresses +these challenges by combining storage resources from multiple nodes into a shared pool, allowing dynamic allocation +based on demand.

+

Key characteristics of storage pooling include:

+
    +
  • Resource Aggregation: Multiple physical storage devices, such as HDDs, SSDs, or NVMe drives, are combined into a single logical storage entity.
  • +
  • Dynamic Allocation: Storage capacity can be allocated dynamically to workloads based on usage patterns and demand.
  • +
  • Improved Efficiency: By eliminating the constraints of static storage assignments, storage pooling optimizes resource utilization and reduces wasted capacity.
  • +
  • Scalability: Additional storage devices or nodes can seamlessly integrate into the storage pool without disrupting operations.
  • +
  • Simplified Management: Centralized control and monitoring enable streamlined administration of storage resources.
  • +
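Resource aggregation and dynamic allocation can be sketched with a toy pool that sums device capacities into one allocatable total and hands out capacity on demand. Purely illustrative; device names and sizes are made up.

```python
class StoragePool:
    """Aggregates device capacities (in GiB) into one allocatable pool."""

    def __init__(self, devices):
        self.capacity = sum(devices.values())  # total pooled capacity
        self.allocated = 0

    @property
    def free(self):
        return self.capacity - self.allocated

    def allocate(self, size):
        if size > self.free:
            raise ValueError("pool exhausted")
        self.allocated += size
        return size

# Three devices of different sizes form a single 5600 GiB pool:
pool = StoragePool({"nvme0": 1600, "nvme1": 1600, "nvme2": 2400})
pool.allocate(1000)   # a workload draws from the shared pool on demand
```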
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/high-availability-fault-tolerance/index.html b/deployment/25.10.3/architecture/high-availability-fault-tolerance/index.html new file mode 100644 index 00000000..87f537be --- /dev/null +++ b/deployment/25.10.3/architecture/high-availability-fault-tolerance/index.html @@ -0,0 +1,4878 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + High Availability and Fault Tolerance - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

High Availability and Fault Tolerance

+ +

Simplyblock is designed to provide enterprise-grade high availability (HA) and fault tolerance for enterprise and +cloud-native storage environments. Through a combination of distributed architecture and advanced data protection +mechanisms, simplyblock ensures continuous data access, resilience against failures, and minimal service disruption. +Fault tolerance is embedded at multiple levels of the system, from data redundancy to control plane and storage path +resilience.

+

Fault Tolerance and High Availability Mechanisms

+

Simplyblock’s architecture provides robust fault tolerance and high availability by combining distributed erasure +coding, multipath access with failover, and redundant management and storage planes. These capabilities ensure that +Simplyblock storage clusters deliver the reliability and resiliency required for critical, high-demand workloads in +modern distributed environments.

+

1. Distributed Erasure Coding

+

Simplyblock protects data using distributed erasure coding, which ensures that data is striped across multiple +storage nodes along with parity fragments. This provides:

+
    +
  • Redundancy: Data can be reconstructed even if one or more nodes fail, depending on the configured erasure coding + scheme (such as 1+1, 1+2, 2+1, or 2+2).
  • +
  • Efficiency: Storage overhead is minimized compared to full replication while maintaining strong fault tolerance.
  • +
  • Automatic Rebuilds: In the event of node or disk failures, missing data is rebuilt automatically using parity + information stored across the cluster.
  • +
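For the schemes listed above, the raw-capacity overhead of a k+m stripe is (k+m)/k: 1+1 doubles capacity use much like mirroring, while 2+1 needs only 1.5x. A quick arithmetic check:

```python
def ec_overhead(k, m):
    """Raw capacity consumed per unit of user data for a k+m scheme."""
    return (k + m) / k

# Overheads for the schemes named in the text:
schemes = {(1, 1): 2.0, (1, 2): 3.0, (2, 1): 1.5, (2, 2): 2.0}
for (k, m), expected in schemes.items():
    assert ec_overhead(k, m) == expected
# Note: 2+2 tolerates two fragment losses at 2.0x raw capacity,
# whereas 3-way replication needs 3.0x for the same failure tolerance.
```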
+

2. Multipathing with Primary and Secondary Nodes

+

Simplyblock supports NVMe-over-Fabrics (NVMe-oF) multipathing to provide path redundancy between clients and +storage:

+
    +
  • Primary and Secondary Paths: Each Logical Volume (LV) is accessible through both a primary node and one or + more secondary nodes.
  • +
  • Automatic Failover: If the primary node becomes unavailable, traffic is automatically redirected to a secondary + node with minimal disruption.
  • +
  • Load Balancing: Multipathing also distributes I/O across available paths to optimize performance and reliability.
  • +
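The failover behavior can be modeled as a client that prefers the primary path and transparently falls back to a secondary when the primary becomes unhealthy. This is a toy sketch of the idea, not the NVMe-oF ANA state machine.

```python
class MultipathClient:
    """Routes I/O to the primary path, failing over to secondaries."""

    def __init__(self, primary, secondaries):
        self.paths = [primary] + list(secondaries)   # priority order
        self.healthy = {p: True for p in self.paths}

    def mark_down(self, path):
        self.healthy[path] = False

    def route(self):
        # First healthy path in priority order serves the I/O.
        for path in self.paths:
            if self.healthy[path]:
                return path
        raise RuntimeError("no healthy path to the volume")

client = MultipathClient("node-1", ["node-2"])
first = client.route()       # primary serves I/O
client.mark_down("node-1")   # primary fails
failover = client.route()    # secondary takes over transparently
```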
+

3. Redundant Control Plane and Storage Plane

+

To ensure cluster-wide availability, Simplyblock operates with full redundancy in both its control plane and +storage plane:

+
    +
  • +

    Control Plane (Management Nodes):

    +
      +
    • Deployed as a highly available set of management nodes, typically in a quorum-based configuration.
    • +
    • Responsible for cluster health, topology management, and coordination.
    • +
    • Remains operational even if one or more management nodes fail.
    • +
    +
  • +
  • +

    Storage Plane (Storage Nodes):

    +
      +
    • Storage services are distributed across multiple storage nodes.
    • +
    • Data and workloads are automatically rebalanced and protected in case of node or device failures.
    • +
    • Failures are handled transparently with automatic recovery processes.
    • +
    +
  • +
+

Benefits of Simplyblock’s High Availability Design

+
    +
  • No single point of failure across the control plane, storage plane, and data paths.
  • +
  • Seamless failover and recovery from node, network, or disk failures.
  • +
  • Efficient use of storage capacity while ensuring redundancy through erasure coding.
  • +
  • Continuous operation during maintenance and upgrade procedures.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/index.html b/deployment/25.10.3/architecture/index.html new file mode 100644 index 00000000..0abbfc1d --- /dev/null +++ b/deployment/25.10.3/architecture/index.html @@ -0,0 +1,4685 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Architecture - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Architecture

+ +

Simplyblock is a cloud-native, software-defined storage platform designed for high performance, scalability, and +resilience. It provides NVMe over TCP (NVMe/TCP) and NVMe over RDMA (ROCEv2) block storage, enabling efficient data +access across distributed environments. Understanding the architecture, key concepts, and common terminology is +essential for effectively deploying and managing simplyblock in various infrastructure setups, including Kubernetes +clusters, virtualized environments, and bare-metal deployments. This documentation provides a comprehensive overview +of simplyblock’s internal architecture, the components that power it, and the best practices for integrating it into +your storage infrastructure.

+

This section covers several critical topics, including the architecture of simplyblock, core concepts such as Logical +Volumes (LVs), Storage Nodes, and Management Nodes, as well as Quality of Service (QoS) mechanisms and redundancy +strategies. Additionally, we define common terminology used throughout the documentation to ensure clarity and +consistency. Readers will also find guidelines on document conventions, such as formatting, naming standards, and +command syntax, which help maintain uniformity across all technical content.

+

Simplyblock is an evolving platform, and community contributions play a vital role in improving its documentation. +Whether you are a developer, storage administrator, or end user, your insights and feedback are valuable. This section +provides details on how to contribute to the documentation, report issues, suggest improvements, and submit pull +requests. By working together, we can ensure that simplyblock’s documentation remains accurate, up-to-date, and +beneficial for all users.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/simplyblock-architecture/index.html b/deployment/25.10.3/architecture/simplyblock-architecture/index.html new file mode 100644 index 00000000..9e1a2825 --- /dev/null +++ b/deployment/25.10.3/architecture/simplyblock-architecture/index.html @@ -0,0 +1,4990 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Simplyblock Architecture - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Simplyblock Architecture

+ +

Simplyblock is a cloud-native, distributed block storage platform designed to deliver scalable, high-performance, and +resilient storage through a software-defined architecture. Centered around NVMe-over-Fabrics (NVMe-oF), simplyblock +separates compute and storage to enable scale-out elasticity, high availability, and low-latency operations in modern, +containerized environments. The architecture is purpose-built to support Kubernetes-native deployments with seamless +integration but supports virtual and even physical machines as clients as well.

+

Control Plane

+

The control plane hosts the Simplyblock Management API and CLI endpoints with identical features. The CLI is equally +available on all management nodes. The API and CLI are secured using HTTPS / TLS.

+

The control plane operates through redundant management nodes that handle cluster health, metadata, and orchestration. A +quorum-based model ensures no single point of failure.

+

Control Plane Responsibilities

+

The control plane provides the following functionality:

+
    +
  • Lifecycle management of clusters:
      +
    • Deploy storage clusters
    • +
    • Manages nodes and devices
    • +
    • Resize and reconfigure clusters
    • +
    +
  • +
  • Lifecycle management of logical volumes and pools
      +
    • For Kubernetes, the Simplyblock CSI driver integrates with the persistent volume lifecycle management
    • +
    +
  • +
  • Cluster operations
      +
    • I/O Statistics
    • +
    • Capacity Statistics
    • +
    • Alerts
    • +
    • Logging
    • +
    • others
    • +
    +
  • +
+

The control plane also provides real-time collection and aggregation of I/O stats (performance, capacity, +utilization), proactive cluster monitoring and health checks, monitoring dashboards, alerting, a log file repository +with a management interface, data migration, and automated node and device restart services.

+

For monitoring dashboards and alerting, the simplyblock control plane provides Grafana and Prometheus. Both systems are +configured to provide a set of standard alerts that can be delivered via Slack or email. Additionally, customers +are free to define their own custom alerts.

+

For log management, simplyblock uses Graylog. For comprehensive insight, Graylog is configured to collect container logs from the control plane and storage plane services, the RPC communication between the control plane and the storage cluster, and the logs of the data services (SPDK ⧉, the Storage Performance Development Kit).

+

Control Plane State Storage

+

The control plane is implemented as a stack of containers running on one or more management nodes. For production +environments, simplyblock requires at least three management nodes for high availability. The management nodes run as +a set of replicated, stateful services.

+

For internal state storage, the control plane uses FoundationDB ⧉ as its key-value store. FoundationDB itself operates as a replicated, highly available cluster across all management nodes.

+

Within Kubernetes deployments, the control plane can now also be deployed alongside the storage nodes on the same k8s +workers. It will, however, run in separate pods.

+

Storage Plane

+

The storage plane consists of distributed storage nodes that run on Linux-based systems and provide logical volumes (LVs) as virtual NVMe devices. Using SPDK and DPDK (Data Plane Development Kit), simplyblock achieves high-speed, user-space storage operations with minimal latency.

+

To achieve that, simplyblock detaches NVMe devices from the Linux kernel, bypassing the typical kernel-based handling. +It then takes full control of the device directly, handling all communication with the hardware in user-space. That +removes transitions from user-space to kernel and back, improving latency and reducing processing time and context +switches.

+

Scaling and Performance

+

Simplyblock supports linear scale-out by adding storage nodes without service disruption. Performance increases with +additional cores, network interfaces, and NVMe devices, with SPDK minimizing CPU overhead for maximum throughput.

+

Data written to a simplyblock logical volume is split into chunks and distributed across the storage plane cluster +nodes. This improves throughput by parallelizing the access to data through multiple storage nodes.
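The chunk-to-node mapping can be sketched with a simple hash-based placement function. This is an illustrative model only: the node names and the hashing scheme below are assumptions, not simplyblock's actual placement algorithm, which also weighs device capacity and performance.

```python
import hashlib

def place_chunk(volume_id: str, chunk_index: int, nodes: list[str]) -> str:
    # Hash the chunk's identity and map it deterministically onto a node.
    key = f"{volume_id}:{chunk_index}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return nodes[digest % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]
placement = [place_chunk("vol-1", i, nodes) for i in range(12)]
# Consecutive chunks of one volume land on different nodes, so IO parallelizes.
```

Because the mapping is deterministic, any node can compute where a chunk lives without consulting a central directory.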

+

Data Protection & Fault Tolerance

+

Simplyblock's storage engine implements erasure coding, a RAID-like system, which uses parity information to protect data and restore it in case of a failure. Due to the fully distributed nature of simplyblock's erasure coding implementation, parity information is not only stored on disks other than those holding the original data chunks, but also on other nodes. This improves data protection and enables higher fault tolerance than typical implementations. While most erasure coding implementations express the maximum tolerable failures (MTF) in terms of how many disks can fail, simplyblock defines it as the number of nodes that can fail.
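The recovery principle behind parity-based protection can be shown with a minimal single-parity (RAID-5-like) toy example. Simplyblock's distributed erasure coding is more general than plain XOR parity, but the idea is the same: the XOR of all surviving chunks and the parity reconstructs a lost chunk.

```python
def xor_parity(chunks: list[bytes]) -> bytes:
    # XOR all chunks byte-by-byte; assumes equally sized chunks.
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return parity

data = [b"AAAA", b"BBBB", b"CCCC"]  # data chunks, each on a different node
parity = xor_parity(data)           # parity chunk, stored on a fourth node

# Simulate losing the node holding data[1] and rebuild it from the survivors:
recovered = xor_parity([data[0], data[2], parity])
assert recovered == data[1]
```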

+

As a second layer, simplyblock leverages NVMe-oF multipathing to ensure continuous access to logical volumes by +automatically handling failover between primary and secondary nodes. Each volume is presented with multiple active +paths, allowing I/O operations to seamlessly reroute through secondary nodes if the primary node becomes unavailable due +to failure, maintenance, or network disruption. This multipath configuration is managed transparently by the NVMe-oF +subsystem, providing path redundancy, eliminating single points of failure, and maintaining high availability without +requiring manual intervention. The system continuously monitors path health, and when the primary path is restored, it +can be automatically reintegrated, ensuring optimal performance and reliability.

+

Last, simplyblock provides robust encryption for data-at-rest, ensuring that all data stored on logical volumes is protected using industry-standard AES-XTS encryption with minimal performance overhead. This encryption is applied at the volume level and is managed transparently within the simplyblock cluster, allowing compliance with strict regulatory requirements such as GDPR, HIPAA, and PCI-DSS. Furthermore, simplyblock’s architecture is designed for strong multitenant isolation, ensuring that encryption keys, metadata, and data are securely segregated between tenants. This guarantees that unauthorized access between workloads and users is prevented, making simplyblock an ideal solution for shared environments where security, compliance, and tenant separation are critical.

+

Technologies in Simplyblock

+

Strong and reliable distributed storage technology has to be built on a strong foundation. That's why simplyblock uses a variety of key open-source technologies as its basis.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ComponentTechnologies
NetworkingNVMe-oF ⧉, NVMe/TCP, NVMe/RoCE, DPDK ⧉
StorageSPDK ⧉, FoundationDB ⧉, MongoDB ⧉
ObservabilityPrometheus ⧉, Thanos ⧉, Grafana ⧉
LoggingGraylog ⧉, OpenSearch ⧉
KubernetesSPDK CSI ⧉, Kubernetes CSI ⧉
Operating SystemLinux ⧉
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/storage-performance-and-qos/index.html b/deployment/25.10.3/architecture/storage-performance-and-qos/index.html new file mode 100644 index 00000000..0cb782db --- /dev/null +++ b/deployment/25.10.3/architecture/storage-performance-and-qos/index.html @@ -0,0 +1,5029 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Performance and QoS - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Performance and QoS

+ +

Storage Performance Indicators

+

Storage performance can be categorized by latency (the aggregate response time of an IO request from the host to the +storage system) and throughput. Throughput can be broken down into random IOPS throughput and sequential throughput.

+

IOPS and sequential throughput must be measured relative to capacity (i.e., IOPS per TB).
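As a worked example of capacity-relative throughput (the numbers are illustrative, not measured values for any particular system):

```python
def iops_per_tb(total_iops: float, capacity_tb: float) -> float:
    # Normalize random IOPS by usable capacity for apples-to-apples comparison.
    return total_iops / capacity_tb

# Example: 400,000 random-read IOPS delivered from 20 TB of usable capacity.
density = iops_per_tb(400_000, 20)  # 20,000 IOPS per TB
```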

+

Latency and IOPS throughput depend heavily on the IO operation (read, write, unmap) and the IO size (4K, 8K, 16K, +32K, ...). For comparability, it is typically tested with a 4K IO size, but tests with 8K to 128K are standard too.

+

Latency is strongly influenced by the overall load on the storage system. If there is intense IO pressure, queues build up and response times go up. This is no different from a traffic jam on the highway or a queue at the airline counter. Therefore, to compare latency results, it must be measured under a fixed system load (amount of parallel IO, its size, and IO type mix).
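The load-latency relationship can be illustrated with a textbook M/M/1 queueing model. This is an idealized assumption for intuition only, not a model of any specific storage system: mean response time stays flat at low utilization and balloons as load approaches capacity.

```python
def mm1_response_time(service_rate: float, arrival_rate: float) -> float:
    # Mean response time W = 1 / (mu - lambda) in an M/M/1 queue;
    # it grows without bound as utilization approaches 100%.
    assert arrival_rate < service_rate, "queue is unstable at or above capacity"
    return 1.0 / (service_rate - arrival_rate)

# A device capable of 100,000 IOPS (rates per second, result in seconds):
low_load = mm1_response_time(100_000, 50_000)   # 50% load -> 20 microseconds
high_load = mm1_response_time(100_000, 95_000)  # 95% load -> 200 microseconds
```

Doubling the offered load from 50% to 95% of capacity multiplies mean latency tenfold in this model, which is why latency comparisons are only meaningful under a fixed load.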

+
+

Important

+

For latency, consistency matters. High latency variability, especially in the tail, can severely impact workloads. +Therefore, 99th percentile latency may be more important than the average or median.

+
+
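The tail-latency point is easy to see numerically. In the synthetic sample below (the numbers are made up for illustration), a small fraction of slow IOs barely moves the mean but dominates the 99th percentile:

```python
def percentile(samples: list[float], p: float) -> float:
    # Nearest-rank percentile: the value below which about p% of samples fall.
    s = sorted(samples)
    rank = max(1, round(p / 100 * len(s)))
    return s[rank - 1]

# 98 fast IOs at 100 us and 2 slow outliers at 5,000 us:
latencies_us = [100.0] * 98 + [5000.0] * 2
mean_us = sum(latencies_us) / len(latencies_us)  # 198.0 us - looks harmless
p99_us = percentile(latencies_us, 99)            # 5000.0 us - reveals the tail
```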

Challenges with Hyper-Converged and Software-Defined Storage

+

Unequal load distribution across cluster nodes, and the dynamics of specific nodes under Linux or Windows (dynamic +multithreading, network bandwidth fluctuations, etc.), create significant challenges for consistent, high storage +performance in such an environment.

+

Mixed IO patterns from different workloads increase these challenges further.

+

This can cause substantial variability in latency, IOPS throughput, and high-tail latency, with a negative impact on +workloads.

+

Simplyblock: How We Ensure Ultra-Low Latency In The 99th Percentile

+

Simplyblock exhibits a range of architectural characteristics and features to guarantee consistently low latency and high IOPS throughput in both disaggregated and hyper-converged environments.

+

Pseudo-Randomized, Distributed Data Placement With Fast Re-Balancing

+

Simplyblock is a fully distributed solution. Back-storage is balanced across all nodes in the cluster on a very granular +level. Relative to their capacity and performance, each device and node in the cluster receives a similar amount and +size of IO. This feature ensures an entirely equal distribution of load across the network, compute, and NVMe drives.

+

In case of drive or node failures, distributed rebalancing occurs to reach the fully balanced state as quickly as +possible. When adding drives and nodes, performance increases in a linear manner. This mechanism avoids local +overload and keeps latency and IOPS throughput consistent across the cluster, independent of which node is accessed.
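One classic way to keep data movement minimal when the topology changes is consistent hashing. The sketch below is a generic illustration of that idea, not simplyblock's actual placement algorithm: when a fourth node joins, only roughly a quarter of the chunks relocate, while the rest stay put.

```python
import bisect
import hashlib

def _h(s: str) -> int:
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

def build_ring(nodes: list[str], vnodes: int = 64) -> list[tuple[int, str]]:
    # Each node owns many virtual points on a hash ring to smooth distribution.
    return sorted((_h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))

def owner(ring: list[tuple[int, str]], key: str) -> str:
    # A key belongs to the first ring point at or after its hash (wrapping).
    i = bisect.bisect(ring, (_h(key),)) % len(ring)
    return ring[i][1]

old_ring = build_ring(["n1", "n2", "n3"])
new_ring = build_ring(["n1", "n2", "n3", "n4"])
keys = [f"chunk-{i}" for i in range(1000)]
moved = sum(owner(old_ring, k) != owner(new_ring, k) for k in keys)
# Only the chunks claimed by the new node move; everything else keeps its owner.
```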

+

Built End-To-End With And For NVMe

+

Storage access is entirely based on NVMe (local back-storage) and NVMe over Fabric (hosts to storage nodes and storage +nodes to storage nodes). This protocol is inherently asynchronous and supports highly parallel processing, eliminating +bottlenecks specific to mixed IO patterns on other protocols (such as iSCSI) and ensuring consistently low latency.

+

Support for RoCEv2

+

Simplyblock also supports NVMe over RDMA (RoCEv2). RDMA, as a transport layer, offers significant latency and tail latency advantages over TCP. Today, RDMA can be used in most data center environments because it requires only specific hardware features from NICs, which are available across a broad range of models. It runs over UDP/IP and, as such, does not require any changes to the networking.

+

Full Core-Isolation And NUMA Awareness

+

Simplyblock implements full CPU core isolation and NUMA socket affinity. Simplyblock’s storage nodes are auto-deployed per NUMA socket and utilize only socket-specific resources, meaning compute, memory, network interfaces, and NVMe devices.

+

All CPU cores assigned to simplyblock are isolated from the operating system (user-space compute and IRQ handling), and +internal threads are pinned to cores. This avoids any scheduling-induced delays or variability in storage processing.

+

User-Space, Zero-Copy Framework (Lockless and Asynchronous)

+

Simplyblock uses a user-space framework (SPDK ⧉). SPDK implements a zero-copy model across the entire storage processing chain. This includes the data plane, the Linux vfio driver, and the entirely lock-free, asynchronous DPDK threading model. It avoids Linux pthreads and any inter-thread synchronization, providing much higher latency predictability and a lower baseline latency.

+

Advanced QoS (Quality of Service)

+

Simplyblock implements two independent, critical QoS mechanisms.

+

Volume and Pool-Level Caps

+

A cap, such as an IOPS limit, a throughput limit, or a combination of both, can be set on an individual volume or an entire pool within the cluster. Through such a limit, general-purpose volumes can be pooled and limited in their total IOPS or throughput to avoid noisy-neighbor effects and protect more critical workloads.
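A per-volume or per-pool IOPS cap behaves like a token bucket. The toy model below is an assumption for illustration, not simplyblock's implementation: it admits at most the configured number of IOs per refill window and throttles the rest.

```python
class IopsCap:
    """Toy token-bucket limiter: at most `limit_iops` IOs per refill window."""

    def __init__(self, limit_iops: int):
        self.limit = limit_iops
        self.tokens = limit_iops

    def refill(self) -> None:
        # In practice, a timer would call this once per window (e.g., 1 s).
        self.tokens = self.limit

    def admit(self) -> bool:
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False  # IO is queued/throttled instead of dispatched

cap = IopsCap(limit_iops=1000)
admitted = sum(cap.admit() for _ in range(1500))
# Only 1000 of 1500 IOs pass within the window; the rest wait for a refill.
```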

+

QoS Service Classes

+

On each cluster, up to 7 service classes can be defined (class 0 is the default). For each class, cluster performance (a +combination of IOPS and throughput) can be allocated in relative terms (e.g., 20%) for performance guarantees.

+

General-purpose volumes can be allocated in the default class, while more critical workloads can be split across other +service classes. If other classes do not use up their quotas, the default class can still allocate all available +resources.
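A simplified, work-conserving model of relative service-class allocation can be sketched as follows. The class names, shares, and spill-over policy are assumptions for illustration, not simplyblock's scheduler: each class gets up to its guaranteed share, and whatever guaranteed classes leave unused flows back to the default class 0.

```python
def class_budgets(total_iops: int, guarantees: dict[str, float],
                  demand: dict[str, int]) -> dict[str, int]:
    # Grant each guaranteed class min(demand, share of total); unused
    # guarantee spills into the default class (work-conserving).
    budgets: dict[str, int] = {}
    leftover = total_iops
    for cls, share in guarantees.items():
        grant = min(demand.get(cls, 0), int(total_iops * share))
        budgets[cls] = grant
        leftover -= grant
    budgets["class0"] = min(demand.get("class0", 0), leftover)
    return budgets

b = class_budgets(
    total_iops=100_000,
    guarantees={"class1": 0.2, "class2": 0.3},
    demand={"class1": 5_000, "class2": 30_000, "class0": 90_000},
)
# class1 uses only 5k of its 20k guarantee; class0 absorbs the headroom.
```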

+

Why QoS Service Classes are Critical

+

Why is a limit not sufficient? Imagine a heavily mixed workload in the cluster. Some workloads are read-intensive, while +others are write-intensive. Some workloads require a lot of small random IO, while others read and write large +sequential IO. There is no absolute number of IOPS or throughput a cluster can provide, considering the dynamics of +workloads.

+

Therefore, using absolute limits on one pool of volumes is effective for protecting others from spillover effects and +undesired behavior. Still, it does not guarantee performance for a particular class of volumes.

+

Service classes provide a much better degree of isolation in the presence of dynamic workloads. As long as you do not overload a particular service class, the general IO pressure on the cluster will not matter for the performance of volumes in that class.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/architecture/what-is-simplyblock/index.html b/deployment/25.10.3/architecture/what-is-simplyblock/index.html new file mode 100644 index 00000000..0676543d --- /dev/null +++ b/deployment/25.10.3/architecture/what-is-simplyblock/index.html @@ -0,0 +1,4827 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + What is Simplyblock? - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

What is Simplyblock?

+ +

Simplyblock is a high-performance, distributed storage orchestration layer designed for cloud-native environments. It provides NVMe over TCP (NVMe/TCP) block storage to hosts and offers block storage to containers through its Container Storage Interface (CSI) and Proxmox drivers.

+

What makes Simplyblock Special?

+
    +
  • +

    Environment Agnostic: Simplyblock operates seamlessly across major cloud providers, regional, and specialized + providers, bare-metal and virtual provisioners, and private clouds, including both virtualized and bare-metal + Kubernetes environments.

    +
  • +
  • +

    NVMe-Optimized: Simplyblock is built from scratch around NVMe. All internal and external storage access is + entirely based on NVMe and NVMe over Fabric (TCP, RDMA). This includes local back-storage on storage nodes, + host-to-cluster, and node-to-node traffic. Together with the user-space data plane, distributed data placement, and + advanced quality of service (QoS) and other characteristics, this makes simplyblock the storage platform with the most + advanced performance guarantees in hyperconverged solutions available today.

    +
  • +
  • +

    User-Space Data Plane: Simplyblock's data plane is built entirely in user-space with an interrupt-free, lockless, zero-copy architecture and thread-to-core pinning. The hot data path entirely avoids Linux kernel involvement, data copies, dynamic thread scheduling, and inter-thread synchronization. Its deployment is fully NUMA-node-aware.

    +
  • +
  • +

    Advanced QoS: Simplyblock provides not only IOPS or throughput-based caps, but also true QoS service classes, + effectively isolating IO traffic.

    +
  • +
  • +

    Distributed Data Placement: Simplyblock's advanced data placement, which is based on small, fixed-size data + chunks, ensures a perfectly balanced utilization of storage, compute, and network bandwidth, avoiding any performance + bottlenecks local to specific nodes. This provides almost linear performance scalability for the cluster.

    +
  • +
  • +

    Containerized Architecture: The solution comprises:

    +
      +
    • Storage Nodes: Container stacks delivering distributed data services via NVMe over Fabrics (NVMe over TCP), + forming storage clusters.
    • +
    • Management Nodes: Container stacks offering control and management services, collectively known as the control + plane.
    • +
    +
  • +
  • +

    Platform Support: Simplyblock supports deployment on virtual machines, bare-metal instances, and Kubernetes + containers, compatible with x86 and ARM architectures.

    +
  • +
  • +

    Deployment Flexibility: Simplyblock offers the greatest deployment flexibility in the industry. It can be deployed + hyper-converged, disaggregated, and in a hybrid fashion, combining the best of both worlds.

    +
  • +
+

Customer Benefits Across Industries

+

Simplyblock offers tailored advantages to various sectors:

+
    +
  • +

    Financial Services: Enhances data management by boosting performance, strengthening security, and optimizing cloud + storage costs.

    +
  • +
  • +

    Media and Gaming: Improves storage performance, reduces costs, and streamlines data management, facilitating + efficient handling of large media files and gaming data.

    +
  • +
  • +

    Technology and SaaS Companies: Provides cost savings and performance enhancements, simplifying storage management + and improving application performance without significant infrastructure changes.

    +
  • +
  • +

    Telecommunications: Offers ultra-low-latency access to data, enhances security, and simplifies complex storage + infrastructures, aiding in the efficient management of customer records and network telemetry.

    +
  • +
  • +

    Blockchain and Cryptocurrency: Delivers cost efficiency, enhanced performance, scalability, and robust data + security, addressing the unique storage demands of blockchain networks.

    +
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/assets/images/favicon.png b/deployment/25.10.3/assets/images/favicon.png new file mode 100644 index 00000000..1cf13b9f Binary files /dev/null and b/deployment/25.10.3/assets/images/favicon.png differ diff --git a/deployment/25.10.3/assets/images/gcp-wizard-local-ssd.png b/deployment/25.10.3/assets/images/gcp-wizard-local-ssd.png new file mode 100644 index 00000000..7e7b854b Binary files /dev/null and b/deployment/25.10.3/assets/images/gcp-wizard-local-ssd.png differ diff --git a/deployment/25.10.3/assets/images/simplyblock-proxmox-storage.png b/deployment/25.10.3/assets/images/simplyblock-proxmox-storage.png new file mode 100644 index 00000000..acef14ce Binary files /dev/null and b/deployment/25.10.3/assets/images/simplyblock-proxmox-storage.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/concepts/automatic-rebalancing.png b/deployment/25.10.3/assets/images/social/architecture/concepts/automatic-rebalancing.png new file mode 100644 index 00000000..30345e8c Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/concepts/automatic-rebalancing.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/concepts/disaggregated.png b/deployment/25.10.3/assets/images/social/architecture/concepts/disaggregated.png new file mode 100644 index 00000000..48204aab Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/concepts/disaggregated.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/concepts/erasure-coding.png b/deployment/25.10.3/assets/images/social/architecture/concepts/erasure-coding.png new file mode 100644 index 00000000..5670782c Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/concepts/erasure-coding.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/concepts/hyper-converged.png 
b/deployment/25.10.3/assets/images/social/architecture/concepts/hyper-converged.png new file mode 100644 index 00000000..13c4ca91 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/concepts/hyper-converged.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/concepts/index.png b/deployment/25.10.3/assets/images/social/architecture/concepts/index.png new file mode 100644 index 00000000..10a6f697 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/concepts/index.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/concepts/logical-volumes.png b/deployment/25.10.3/assets/images/social/architecture/concepts/logical-volumes.png new file mode 100644 index 00000000..21c0797f Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/concepts/logical-volumes.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/concepts/persistent-volumes.png b/deployment/25.10.3/assets/images/social/architecture/concepts/persistent-volumes.png new file mode 100644 index 00000000..2864644b Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/concepts/persistent-volumes.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/concepts/simplyblock-cluster.png b/deployment/25.10.3/assets/images/social/architecture/concepts/simplyblock-cluster.png new file mode 100644 index 00000000..fed6ab96 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/concepts/simplyblock-cluster.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/concepts/snapshots-clones.png b/deployment/25.10.3/assets/images/social/architecture/concepts/snapshots-clones.png new file mode 100644 index 00000000..c4b9e308 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/concepts/snapshots-clones.png differ diff --git 
a/deployment/25.10.3/assets/images/social/architecture/concepts/storage-pooling.png b/deployment/25.10.3/assets/images/social/architecture/concepts/storage-pooling.png new file mode 100644 index 00000000..cd6c1af7 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/concepts/storage-pooling.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/high-availability-fault-tolerance.png b/deployment/25.10.3/assets/images/social/architecture/high-availability-fault-tolerance.png new file mode 100644 index 00000000..f3691180 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/high-availability-fault-tolerance.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/index.png b/deployment/25.10.3/assets/images/social/architecture/index.png new file mode 100644 index 00000000..a3bb0578 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/index.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/simplyblock-architecture.png b/deployment/25.10.3/assets/images/social/architecture/simplyblock-architecture.png new file mode 100644 index 00000000..cd1ad066 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/simplyblock-architecture.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/storage-performance-and-qos.png b/deployment/25.10.3/assets/images/social/architecture/storage-performance-and-qos.png new file mode 100644 index 00000000..c5ddb671 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/architecture/storage-performance-and-qos.png differ diff --git a/deployment/25.10.3/assets/images/social/architecture/what-is-simplyblock.png b/deployment/25.10.3/assets/images/social/architecture/what-is-simplyblock.png new file mode 100644 index 00000000..6dda815b Binary files /dev/null and 
b/deployment/25.10.3/assets/images/social/architecture/what-is-simplyblock.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/air-gap/index.png b/deployment/25.10.3/assets/images/social/deployments/air-gap/index.png new file mode 100644 index 00000000..6a69df8b Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/air-gap/index.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/baremetal/index.png b/deployment/25.10.3/assets/images/social/deployments/baremetal/index.png new file mode 100644 index 00000000..8e99694c Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/baremetal/index.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/cluster-deployment-options.png b/deployment/25.10.3/assets/images/social/deployments/cluster-deployment-options.png new file mode 100644 index 00000000..7097c1a5 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/cluster-deployment-options.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/data-migration/index.png b/deployment/25.10.3/assets/images/social/deployments/data-migration/index.png new file mode 100644 index 00000000..a9ac3755 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/data-migration/index.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/cloud-instance-recommendations.png b/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/cloud-instance-recommendations.png new file mode 100644 index 00000000..1791bc50 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/cloud-instance-recommendations.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/erasure-coding-scheme.png 
b/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/erasure-coding-scheme.png new file mode 100644 index 00000000..1f099826 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/erasure-coding-scheme.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/index.png b/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/index.png new file mode 100644 index 00000000..7fe217b0 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/index.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/numa-considerations.png b/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/numa-considerations.png new file mode 100644 index 00000000..4ab21944 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/numa-considerations.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/system-requirements.png b/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/system-requirements.png new file mode 100644 index 00000000..33c52343 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/deployment-preparation/system-requirements.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/index.png b/deployment/25.10.3/assets/images/social/deployments/index.png new file mode 100644 index 00000000..510ebfc2 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/index.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/install-on-linux/index.png b/deployment/25.10.3/assets/images/social/deployments/install-on-linux/index.png new file mode 100644 index 00000000..6ca4f394 Binary files /dev/null and 
b/deployment/25.10.3/assets/images/social/deployments/install-on-linux/index.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/install-on-linux/install-cp.png b/deployment/25.10.3/assets/images/social/deployments/install-on-linux/install-cp.png new file mode 100644 index 00000000..e774ece7 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/install-on-linux/install-cp.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/install-on-linux/install-sp.png b/deployment/25.10.3/assets/images/social/deployments/install-on-linux/install-sp.png new file mode 100644 index 00000000..253f75d3 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/install-on-linux/install-sp.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/kubernetes/index.png b/deployment/25.10.3/assets/images/social/deployments/kubernetes/index.png new file mode 100644 index 00000000..81c7e3d5 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/kubernetes/index.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/kubernetes/install-csi.png b/deployment/25.10.3/assets/images/social/deployments/kubernetes/install-csi.png new file mode 100644 index 00000000..478229bb Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/kubernetes/install-csi.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/kubernetes/k8s-control-plane.png b/deployment/25.10.3/assets/images/social/deployments/kubernetes/k8s-control-plane.png new file mode 100644 index 00000000..5384b4eb Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/kubernetes/k8s-control-plane.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/kubernetes/k8s-storage-plane.png b/deployment/25.10.3/assets/images/social/deployments/kubernetes/k8s-storage-plane.png new file mode 100644 index 
00000000..2f175162 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/kubernetes/k8s-storage-plane.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/kubernetes/openshift.png b/deployment/25.10.3/assets/images/social/deployments/kubernetes/openshift.png new file mode 100644 index 00000000..9133347e Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/kubernetes/openshift.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/kubernetes/talos.png b/deployment/25.10.3/assets/images/social/deployments/kubernetes/talos.png new file mode 100644 index 00000000..9af98088 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/kubernetes/talos.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/kubernetes/volume-encryption.png b/deployment/25.10.3/assets/images/social/deployments/kubernetes/volume-encryption.png new file mode 100644 index 00000000..27f66aa1 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/kubernetes/volume-encryption.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/nvme-namespaces-and-subsystems.png b/deployment/25.10.3/assets/images/social/deployments/nvme-namespaces-and-subsystems.png new file mode 100644 index 00000000..82c485ec Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/nvme-namespaces-and-subsystems.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/openstack/index.png b/deployment/25.10.3/assets/images/social/deployments/openstack/index.png new file mode 100644 index 00000000..c1706f58 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/openstack/index.png differ diff --git a/deployment/25.10.3/assets/images/social/deployments/proxmox/index.png b/deployment/25.10.3/assets/images/social/deployments/proxmox/index.png new file mode 100644 index 
00000000..b8ebfe2d Binary files /dev/null and b/deployment/25.10.3/assets/images/social/deployments/proxmox/index.png differ diff --git a/deployment/25.10.3/assets/images/social/important-notes/acronyms.png b/deployment/25.10.3/assets/images/social/important-notes/acronyms.png new file mode 100644 index 00000000..eeff371b Binary files /dev/null and b/deployment/25.10.3/assets/images/social/important-notes/acronyms.png differ diff --git a/deployment/25.10.3/assets/images/social/important-notes/contributing.png b/deployment/25.10.3/assets/images/social/important-notes/contributing.png new file mode 100644 index 00000000..3178c063 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/important-notes/contributing.png differ diff --git a/deployment/25.10.3/assets/images/social/important-notes/documentation-conventions.png b/deployment/25.10.3/assets/images/social/important-notes/documentation-conventions.png new file mode 100644 index 00000000..378f0be4 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/important-notes/documentation-conventions.png differ diff --git a/deployment/25.10.3/assets/images/social/important-notes/index.png b/deployment/25.10.3/assets/images/social/important-notes/index.png new file mode 100644 index 00000000..0c1b7526 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/important-notes/index.png differ diff --git a/deployment/25.10.3/assets/images/social/important-notes/known-issues.png b/deployment/25.10.3/assets/images/social/important-notes/known-issues.png new file mode 100644 index 00000000..429daf35 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/important-notes/known-issues.png differ diff --git a/deployment/25.10.3/assets/images/social/important-notes/terminology.png b/deployment/25.10.3/assets/images/social/important-notes/terminology.png new file mode 100644 index 00000000..e17def8c Binary files /dev/null and 
b/deployment/25.10.3/assets/images/social/important-notes/terminology.png differ diff --git a/deployment/25.10.3/assets/images/social/index.png b/deployment/25.10.3/assets/images/social/index.png new file mode 100644 index 00000000..d165d27f Binary files /dev/null and b/deployment/25.10.3/assets/images/social/index.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/cluster-upgrade.png b/deployment/25.10.3/assets/images/social/maintenance-operations/cluster-upgrade.png new file mode 100644 index 00000000..0418ca2d Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/cluster-upgrade.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/find-secondary-node.png b/deployment/25.10.3/assets/images/social/maintenance-operations/find-secondary-node.png new file mode 100644 index 00000000..55aa706b Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/find-secondary-node.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/index.png b/deployment/25.10.3/assets/images/social/maintenance-operations/index.png new file mode 100644 index 00000000..e24af999 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/index.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/manual-restarting-nodes.png b/deployment/25.10.3/assets/images/social/maintenance-operations/manual-restarting-nodes.png new file mode 100644 index 00000000..f877eeaf Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/manual-restarting-nodes.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/migrating-storage-node.png b/deployment/25.10.3/assets/images/social/maintenance-operations/migrating-storage-node.png new file mode 100644 index 00000000..2395bd8e Binary files /dev/null and 
b/deployment/25.10.3/assets/images/social/maintenance-operations/migrating-storage-node.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/accessing-grafana.png b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/accessing-grafana.png new file mode 100644 index 00000000..9fd0b851 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/accessing-grafana.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/accessing-graylog.png b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/accessing-graylog.png new file mode 100644 index 00000000..c38acb41 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/accessing-graylog.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/alerts.png b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/alerts.png new file mode 100644 index 00000000..d7ca1934 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/alerts.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/cluster-health.png b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/cluster-health.png new file mode 100644 index 00000000..eb40f15a Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/cluster-health.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/index.png b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/index.png new file mode 100644 index 00000000..30d3c517 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/index.png differ diff --git 
a/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/io-stats.png b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/io-stats.png new file mode 100644 index 00000000..8ffa6bc4 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/io-stats.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/lvol-conditions.png b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/lvol-conditions.png new file mode 100644 index 00000000..98470a63 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/monitoring/lvol-conditions.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/node-affinity.png b/deployment/25.10.3/assets/images/social/maintenance-operations/node-affinity.png new file mode 100644 index 00000000..f597243b Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/node-affinity.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/reconnect-nvme-device.png b/deployment/25.10.3/assets/images/social/maintenance-operations/reconnect-nvme-device.png new file mode 100644 index 00000000..034148e9 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/reconnect-nvme-device.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/replacing-storage-node.png b/deployment/25.10.3/assets/images/social/maintenance-operations/replacing-storage-node.png new file mode 100644 index 00000000..c6568a80 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/replacing-storage-node.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/scaling/expanding-storage-cluster.png 
b/deployment/25.10.3/assets/images/social/maintenance-operations/scaling/expanding-storage-cluster.png new file mode 100644 index 00000000..a7dd4517 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/scaling/expanding-storage-cluster.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/scaling/expanding-storage-pool.png b/deployment/25.10.3/assets/images/social/maintenance-operations/scaling/expanding-storage-pool.png new file mode 100644 index 00000000..e65db038 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/scaling/expanding-storage-pool.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/scaling/index.png b/deployment/25.10.3/assets/images/social/maintenance-operations/scaling/index.png new file mode 100644 index 00000000..a5de8f68 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/scaling/index.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/security/encryption-kubernetes-secrets.png b/deployment/25.10.3/assets/images/social/maintenance-operations/security/encryption-kubernetes-secrets.png new file mode 100644 index 00000000..208154a8 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/security/encryption-kubernetes-secrets.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/security/index.png b/deployment/25.10.3/assets/images/social/maintenance-operations/security/index.png new file mode 100644 index 00000000..b4af5259 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/security/index.png differ diff --git a/deployment/25.10.3/assets/images/social/maintenance-operations/security/multi-tenancy.png b/deployment/25.10.3/assets/images/social/maintenance-operations/security/multi-tenancy.png new file mode 100644 index 
00000000..d4ebecdc Binary files /dev/null and b/deployment/25.10.3/assets/images/social/maintenance-operations/security/multi-tenancy.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/api/index.png b/deployment/25.10.3/assets/images/social/reference/api/index.png new file mode 100644 index 00000000..bfd44f70 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/api/index.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/api/reference.png b/deployment/25.10.3/assets/images/social/reference/api/reference.png new file mode 100644 index 00000000..1cfb76c7 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/api/reference.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/cli/cluster.png b/deployment/25.10.3/assets/images/social/reference/cli/cluster.png new file mode 100644 index 00000000..fad543cc Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/cli/cluster.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/cli/control-plane.png b/deployment/25.10.3/assets/images/social/reference/cli/control-plane.png new file mode 100644 index 00000000..d40d7903 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/cli/control-plane.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/cli/index.png b/deployment/25.10.3/assets/images/social/reference/cli/index.png new file mode 100644 index 00000000..9ca12639 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/cli/index.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/cli/qos.png b/deployment/25.10.3/assets/images/social/reference/cli/qos.png new file mode 100644 index 00000000..8d1b2868 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/cli/qos.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/cli/snapshot.png 
b/deployment/25.10.3/assets/images/social/reference/cli/snapshot.png new file mode 100644 index 00000000..9cd71cca Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/cli/snapshot.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/cli/storage-node.png b/deployment/25.10.3/assets/images/social/reference/cli/storage-node.png new file mode 100644 index 00000000..e278d329 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/cli/storage-node.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/cli/storage-pool.png b/deployment/25.10.3/assets/images/social/reference/cli/storage-pool.png new file mode 100644 index 00000000..dc4620fd Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/cli/storage-pool.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/cli/volume.png b/deployment/25.10.3/assets/images/social/reference/cli/volume.png new file mode 100644 index 00000000..bb623aa0 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/cli/volume.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/index.png b/deployment/25.10.3/assets/images/social/reference/index.png new file mode 100644 index 00000000..eaf9b688 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/index.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/kubernetes/index.png b/deployment/25.10.3/assets/images/social/reference/kubernetes/index.png new file mode 100644 index 00000000..65bce8a2 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/kubernetes/index.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/nvme-low-level-format.png b/deployment/25.10.3/assets/images/social/reference/nvme-low-level-format.png new file mode 100644 index 00000000..73cb9947 Binary files /dev/null and 
b/deployment/25.10.3/assets/images/social/reference/nvme-low-level-format.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/supported-linux-distributions.png b/deployment/25.10.3/assets/images/social/reference/supported-linux-distributions.png new file mode 100644 index 00000000..af4bbcc8 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/supported-linux-distributions.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/supported-linux-kernels.png b/deployment/25.10.3/assets/images/social/reference/supported-linux-kernels.png new file mode 100644 index 00000000..a9ffa107 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/supported-linux-kernels.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/troubleshooting/control-plane.png b/deployment/25.10.3/assets/images/social/reference/troubleshooting/control-plane.png new file mode 100644 index 00000000..6f8984e1 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/troubleshooting/control-plane.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/troubleshooting/index.png b/deployment/25.10.3/assets/images/social/reference/troubleshooting/index.png new file mode 100644 index 00000000..3bccfb92 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/troubleshooting/index.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/troubleshooting/simplyblock-csi.png b/deployment/25.10.3/assets/images/social/reference/troubleshooting/simplyblock-csi.png new file mode 100644 index 00000000..a9b58c8e Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/troubleshooting/simplyblock-csi.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/troubleshooting/storage-plane.png b/deployment/25.10.3/assets/images/social/reference/troubleshooting/storage-plane.png new file mode 100644 
index 00000000..54e68fdb Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/troubleshooting/storage-plane.png differ diff --git a/deployment/25.10.3/assets/images/social/reference/upgrade-matrix.png b/deployment/25.10.3/assets/images/social/reference/upgrade-matrix.png new file mode 100644 index 00000000..20a4740b Binary files /dev/null and b/deployment/25.10.3/assets/images/social/reference/upgrade-matrix.png differ diff --git a/deployment/25.10.3/assets/images/social/release-notes/25-10-2.png b/deployment/25.10.3/assets/images/social/release-notes/25-10-2.png new file mode 100644 index 00000000..25a4d50f Binary files /dev/null and b/deployment/25.10.3/assets/images/social/release-notes/25-10-2.png differ diff --git a/deployment/25.10.3/assets/images/social/release-notes/25-10-3.png b/deployment/25.10.3/assets/images/social/release-notes/25-10-3.png new file mode 100644 index 00000000..a8b1fd16 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/release-notes/25-10-3.png differ diff --git a/deployment/25.10.3/assets/images/social/release-notes/25-3-pre.png b/deployment/25.10.3/assets/images/social/release-notes/25-3-pre.png new file mode 100644 index 00000000..483a557a Binary files /dev/null and b/deployment/25.10.3/assets/images/social/release-notes/25-3-pre.png differ diff --git a/deployment/25.10.3/assets/images/social/release-notes/25-6-ga.png b/deployment/25.10.3/assets/images/social/release-notes/25-6-ga.png new file mode 100644 index 00000000..aeaf35e4 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/release-notes/25-6-ga.png differ diff --git a/deployment/25.10.3/assets/images/social/release-notes/25-7-1.png b/deployment/25.10.3/assets/images/social/release-notes/25-7-1.png new file mode 100644 index 00000000..8de67814 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/release-notes/25-7-1.png differ diff --git 
a/deployment/25.10.3/assets/images/social/release-notes/25-7-2.png b/deployment/25.10.3/assets/images/social/release-notes/25-7-2.png new file mode 100644 index 00000000..388bcb7a Binary files /dev/null and b/deployment/25.10.3/assets/images/social/release-notes/25-7-2.png differ diff --git a/deployment/25.10.3/assets/images/social/release-notes/25-7-3.png b/deployment/25.10.3/assets/images/social/release-notes/25-7-3.png new file mode 100644 index 00000000..aaab5b59 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/release-notes/25-7-3.png differ diff --git a/deployment/25.10.3/assets/images/social/release-notes/25-7-4.png b/deployment/25.10.3/assets/images/social/release-notes/25-7-4.png new file mode 100644 index 00000000..022a0732 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/release-notes/25-7-4.png differ diff --git a/deployment/25.10.3/assets/images/social/release-notes/25-7-5.png b/deployment/25.10.3/assets/images/social/release-notes/25-7-5.png new file mode 100644 index 00000000..29e501ad Binary files /dev/null and b/deployment/25.10.3/assets/images/social/release-notes/25-7-5.png differ diff --git a/deployment/25.10.3/assets/images/social/release-notes/25-7-6.png b/deployment/25.10.3/assets/images/social/release-notes/25-7-6.png new file mode 100644 index 00000000..8008c60a Binary files /dev/null and b/deployment/25.10.3/assets/images/social/release-notes/25-7-6.png differ diff --git a/deployment/25.10.3/assets/images/social/release-notes/index.png b/deployment/25.10.3/assets/images/social/release-notes/index.png new file mode 100644 index 00000000..3d22c49d Binary files /dev/null and b/deployment/25.10.3/assets/images/social/release-notes/index.png differ diff --git a/deployment/25.10.3/assets/images/social/tutorials/index.png b/deployment/25.10.3/assets/images/social/tutorials/index.png new file mode 100644 index 00000000..da200ba0 Binary files /dev/null and 
b/deployment/25.10.3/assets/images/social/tutorials/index.png differ diff --git a/deployment/25.10.3/assets/images/social/tutorials/kubernetes-deployment.png b/deployment/25.10.3/assets/images/social/tutorials/kubernetes-deployment.png new file mode 100644 index 00000000..4409235e Binary files /dev/null and b/deployment/25.10.3/assets/images/social/tutorials/kubernetes-deployment.png differ diff --git a/deployment/25.10.3/assets/images/social/tutorials/simplyblock-intro.png b/deployment/25.10.3/assets/images/social/tutorials/simplyblock-intro.png new file mode 100644 index 00000000..1acf4f95 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/tutorials/simplyblock-intro.png differ diff --git a/deployment/25.10.3/assets/images/social/tutorials/sla-intro.png b/deployment/25.10.3/assets/images/social/tutorials/sla-intro.png new file mode 100644 index 00000000..b1791f52 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/tutorials/sla-intro.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/baremetal/cloning.png b/deployment/25.10.3/assets/images/social/usage/baremetal/cloning.png new file mode 100644 index 00000000..f05549b0 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/baremetal/cloning.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/baremetal/encrypting.png b/deployment/25.10.3/assets/images/social/usage/baremetal/encrypting.png new file mode 100644 index 00000000..3c97e5bc Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/baremetal/encrypting.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/baremetal/expanding.png b/deployment/25.10.3/assets/images/social/usage/baremetal/expanding.png new file mode 100644 index 00000000..42abf25a Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/baremetal/expanding.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/baremetal/index.png 
b/deployment/25.10.3/assets/images/social/usage/baremetal/index.png new file mode 100644 index 00000000..b98cb339 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/baremetal/index.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/baremetal/provisioning.png b/deployment/25.10.3/assets/images/social/usage/baremetal/provisioning.png new file mode 100644 index 00000000..d6987fed Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/baremetal/provisioning.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/baremetal/quality-of-service.png b/deployment/25.10.3/assets/images/social/usage/baremetal/quality-of-service.png new file mode 100644 index 00000000..fa6e885a Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/baremetal/quality-of-service.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/baremetal/removing.png b/deployment/25.10.3/assets/images/social/usage/baremetal/removing.png new file mode 100644 index 00000000..0c926b3d Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/baremetal/removing.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/baremetal/snapshotting.png b/deployment/25.10.3/assets/images/social/usage/baremetal/snapshotting.png new file mode 100644 index 00000000..8cb2f316 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/baremetal/snapshotting.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/index.png b/deployment/25.10.3/assets/images/social/usage/index.png new file mode 100644 index 00000000..6cb67144 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/index.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/qos/index.png b/deployment/25.10.3/assets/images/social/usage/qos/index.png new file mode 100644 index 00000000..f648df13 Binary files /dev/null and 
b/deployment/25.10.3/assets/images/social/usage/qos/index.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/qos/limiting-iops-and-throughput.png b/deployment/25.10.3/assets/images/social/usage/qos/limiting-iops-and-throughput.png new file mode 100644 index 00000000..b435a039 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/qos/limiting-iops-and-throughput.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/qos/qos-service-classes.png b/deployment/25.10.3/assets/images/social/usage/qos/qos-service-classes.png new file mode 100644 index 00000000..a7c19c4b Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/qos/qos-service-classes.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/cloning.png b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/cloning.png new file mode 100644 index 00000000..38772ea8 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/cloning.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/encrypting.png b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/encrypting.png new file mode 100644 index 00000000..9fcc0377 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/encrypting.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/expanding.png b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/expanding.png new file mode 100644 index 00000000..367270bb Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/expanding.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/index.png b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/index.png new file mode 100644 index 00000000..a9b58c8e Binary files /dev/null and 
b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/index.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/provisioning.png b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/provisioning.png new file mode 100644 index 00000000..d1b966f4 Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/provisioning.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/quality-of-service.png b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/quality-of-service.png new file mode 100644 index 00000000..fa6e885a Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/quality-of-service.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/removing.png b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/removing.png new file mode 100644 index 00000000..4eb4267b Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/removing.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/snapshotting.png b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/snapshotting.png new file mode 100644 index 00000000..6925c05a Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/snapshotting.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/storage-class.png b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/storage-class.png new file mode 100644 index 00000000..0e2cccac Binary files /dev/null and b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/storage-class.png differ diff --git a/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/trimming.png b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/trimming.png new file mode 100644 index 00000000..2564dbfc Binary files /dev/null and 
b/deployment/25.10.3/assets/images/social/usage/simplyblock-csi/trimming.png differ diff --git a/deployment/25.10.3/assets/javascripts/bundle.f1b6f286.min.js b/deployment/25.10.3/assets/javascripts/bundle.f1b6f286.min.js new file mode 100644 index 00000000..50826750 --- /dev/null +++ b/deployment/25.10.3/assets/javascripts/bundle.f1b6f286.min.js @@ -0,0 +1,16 @@ +"use strict";(()=>{var Wi=Object.create;var gr=Object.defineProperty;var Vi=Object.getOwnPropertyDescriptor;var Di=Object.getOwnPropertyNames,Dt=Object.getOwnPropertySymbols,Ni=Object.getPrototypeOf,yr=Object.prototype.hasOwnProperty,ao=Object.prototype.propertyIsEnumerable;var io=(e,t,r)=>t in e?gr(e,t,{enumerable:!0,configurable:!0,writable:!0,value:r}):e[t]=r,$=(e,t)=>{for(var r in t||(t={}))yr.call(t,r)&&io(e,r,t[r]);if(Dt)for(var r of Dt(t))ao.call(t,r)&&io(e,r,t[r]);return e};var so=(e,t)=>{var r={};for(var o in e)yr.call(e,o)&&t.indexOf(o)<0&&(r[o]=e[o]);if(e!=null&&Dt)for(var o of Dt(e))t.indexOf(o)<0&&ao.call(e,o)&&(r[o]=e[o]);return r};var xr=(e,t)=>()=>(t||e((t={exports:{}}).exports,t),t.exports);var zi=(e,t,r,o)=>{if(t&&typeof t=="object"||typeof t=="function")for(let n of Di(t))!yr.call(e,n)&&n!==r&&gr(e,n,{get:()=>t[n],enumerable:!(o=Vi(t,n))||o.enumerable});return e};var Mt=(e,t,r)=>(r=e!=null?Wi(Ni(e)):{},zi(t||!e||!e.__esModule?gr(r,"default",{value:e,enumerable:!0}):r,e));var co=(e,t,r)=>new Promise((o,n)=>{var i=p=>{try{s(r.next(p))}catch(c){n(c)}},a=p=>{try{s(r.throw(p))}catch(c){n(c)}},s=p=>p.done?o(p.value):Promise.resolve(p.value).then(i,a);s((r=r.apply(e,t)).next())});var lo=xr((Er,po)=>{(function(e,t){typeof Er=="object"&&typeof po!="undefined"?t():typeof define=="function"&&define.amd?define(t):t()})(Er,function(){"use strict";function e(r){var o=!0,n=!1,i=null,a={text:!0,search:!0,url:!0,tel:!0,email:!0,password:!0,number:!0,date:!0,month:!0,week:!0,time:!0,datetime:!0,"datetime-local":!0};function 
s(k){return!!(k&&k!==document&&k.nodeName!=="HTML"&&k.nodeName!=="BODY"&&"classList"in k&&"contains"in k.classList)}function p(k){var ft=k.type,qe=k.tagName;return!!(qe==="INPUT"&&a[ft]&&!k.readOnly||qe==="TEXTAREA"&&!k.readOnly||k.isContentEditable)}function c(k){k.classList.contains("focus-visible")||(k.classList.add("focus-visible"),k.setAttribute("data-focus-visible-added",""))}function l(k){k.hasAttribute("data-focus-visible-added")&&(k.classList.remove("focus-visible"),k.removeAttribute("data-focus-visible-added"))}function f(k){k.metaKey||k.altKey||k.ctrlKey||(s(r.activeElement)&&c(r.activeElement),o=!0)}function u(k){o=!1}function d(k){s(k.target)&&(o||p(k.target))&&c(k.target)}function y(k){s(k.target)&&(k.target.classList.contains("focus-visible")||k.target.hasAttribute("data-focus-visible-added"))&&(n=!0,window.clearTimeout(i),i=window.setTimeout(function(){n=!1},100),l(k.target))}function L(k){document.visibilityState==="hidden"&&(n&&(o=!0),X())}function X(){document.addEventListener("mousemove",J),document.addEventListener("mousedown",J),document.addEventListener("mouseup",J),document.addEventListener("pointermove",J),document.addEventListener("pointerdown",J),document.addEventListener("pointerup",J),document.addEventListener("touchmove",J),document.addEventListener("touchstart",J),document.addEventListener("touchend",J)}function ee(){document.removeEventListener("mousemove",J),document.removeEventListener("mousedown",J),document.removeEventListener("mouseup",J),document.removeEventListener("pointermove",J),document.removeEventListener("pointerdown",J),document.removeEventListener("pointerup",J),document.removeEventListener("touchmove",J),document.removeEventListener("touchstart",J),document.removeEventListener("touchend",J)}function 
J(k){k.target.nodeName&&k.target.nodeName.toLowerCase()==="html"||(o=!1,ee())}document.addEventListener("keydown",f,!0),document.addEventListener("mousedown",u,!0),document.addEventListener("pointerdown",u,!0),document.addEventListener("touchstart",u,!0),document.addEventListener("visibilitychange",L,!0),X(),r.addEventListener("focus",d,!0),r.addEventListener("blur",y,!0),r.nodeType===Node.DOCUMENT_FRAGMENT_NODE&&r.host?r.host.setAttribute("data-js-focus-visible",""):r.nodeType===Node.DOCUMENT_NODE&&(document.documentElement.classList.add("js-focus-visible"),document.documentElement.setAttribute("data-js-focus-visible",""))}if(typeof window!="undefined"&&typeof document!="undefined"){window.applyFocusVisiblePolyfill=e;var t;try{t=new CustomEvent("focus-visible-polyfill-ready")}catch(r){t=document.createEvent("CustomEvent"),t.initCustomEvent("focus-visible-polyfill-ready",!1,!1,{})}window.dispatchEvent(t)}typeof document!="undefined"&&e(document)})});var qr=xr((hy,On)=>{"use strict";/*! + * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var $a=/["'&<>]/;On.exports=Pa;function Pa(e){var t=""+e,r=$a.exec(t);if(!r)return t;var o,n="",i=0,a=0;for(i=r.index;i{/*! 
+ * clipboard.js v2.0.11 + * https://clipboardjs.com/ + * + * Licensed MIT © Zeno Rocha + */(function(t,r){typeof It=="object"&&typeof Yr=="object"?Yr.exports=r():typeof define=="function"&&define.amd?define([],r):typeof It=="object"?It.ClipboardJS=r():t.ClipboardJS=r()})(It,function(){return function(){var e={686:function(o,n,i){"use strict";i.d(n,{default:function(){return Ui}});var a=i(279),s=i.n(a),p=i(370),c=i.n(p),l=i(817),f=i.n(l);function u(D){try{return document.execCommand(D)}catch(A){return!1}}var d=function(A){var M=f()(A);return u("cut"),M},y=d;function L(D){var A=document.documentElement.getAttribute("dir")==="rtl",M=document.createElement("textarea");M.style.fontSize="12pt",M.style.border="0",M.style.padding="0",M.style.margin="0",M.style.position="absolute",M.style[A?"right":"left"]="-9999px";var F=window.pageYOffset||document.documentElement.scrollTop;return M.style.top="".concat(F,"px"),M.setAttribute("readonly",""),M.value=D,M}var X=function(A,M){var F=L(A);M.container.appendChild(F);var V=f()(F);return u("copy"),F.remove(),V},ee=function(A){var M=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body},F="";return typeof A=="string"?F=X(A,M):A instanceof HTMLInputElement&&!["text","search","url","tel","password"].includes(A==null?void 0:A.type)?F=X(A.value,M):(F=f()(A),u("copy")),F},J=ee;function k(D){"@babel/helpers - typeof";return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?k=function(M){return typeof M}:k=function(M){return M&&typeof Symbol=="function"&&M.constructor===Symbol&&M!==Symbol.prototype?"symbol":typeof M},k(D)}var ft=function(){var A=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{},M=A.action,F=M===void 0?"copy":M,V=A.container,Y=A.target,$e=A.text;if(F!=="copy"&&F!=="cut")throw new Error('Invalid "action" value, use either "copy" or "cut"');if(Y!==void 0)if(Y&&k(Y)==="object"&&Y.nodeType===1){if(F==="copy"&&Y.hasAttribute("disabled"))throw new Error('Invalid "target" attribute. 
Please use "readonly" instead of "disabled" attribute');if(F==="cut"&&(Y.hasAttribute("readonly")||Y.hasAttribute("disabled")))throw new Error(`Invalid "target" attribute. You can't cut text from elements with "readonly" or "disabled" attributes`)}else throw new Error('Invalid "target" value, use a valid Element');if($e)return J($e,{container:V});if(Y)return F==="cut"?y(Y):J(Y,{container:V})},qe=ft;function Fe(D){"@babel/helpers - typeof";return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?Fe=function(M){return typeof M}:Fe=function(M){return M&&typeof Symbol=="function"&&M.constructor===Symbol&&M!==Symbol.prototype?"symbol":typeof M},Fe(D)}function ki(D,A){if(!(D instanceof A))throw new TypeError("Cannot call a class as a function")}function no(D,A){for(var M=0;M0&&arguments[0]!==void 0?arguments[0]:{};this.action=typeof V.action=="function"?V.action:this.defaultAction,this.target=typeof V.target=="function"?V.target:this.defaultTarget,this.text=typeof V.text=="function"?V.text:this.defaultText,this.container=Fe(V.container)==="object"?V.container:document.body}},{key:"listenClick",value:function(V){var Y=this;this.listener=c()(V,"click",function($e){return Y.onClick($e)})}},{key:"onClick",value:function(V){var Y=V.delegateTarget||V.currentTarget,$e=this.action(Y)||"copy",Vt=qe({action:$e,container:this.container,target:this.target(Y),text:this.text(Y)});this.emit(Vt?"success":"error",{action:$e,text:Vt,trigger:Y,clearSelection:function(){Y&&Y.focus(),window.getSelection().removeAllRanges()}})}},{key:"defaultAction",value:function(V){return vr("action",V)}},{key:"defaultTarget",value:function(V){var Y=vr("target",V);if(Y)return document.querySelector(Y)}},{key:"defaultText",value:function(V){return vr("text",V)}},{key:"destroy",value:function(){this.listener.destroy()}}],[{key:"copy",value:function(V){var Y=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body};return J(V,Y)}},{key:"cut",value:function(V){return 
y(V)}},{key:"isSupported",value:function(){var V=arguments.length>0&&arguments[0]!==void 0?arguments[0]:["copy","cut"],Y=typeof V=="string"?[V]:V,$e=!!document.queryCommandSupported;return Y.forEach(function(Vt){$e=$e&&!!document.queryCommandSupported(Vt)}),$e}}]),M}(s()),Ui=Fi},828:function(o){var n=9;if(typeof Element!="undefined"&&!Element.prototype.matches){var i=Element.prototype;i.matches=i.matchesSelector||i.mozMatchesSelector||i.msMatchesSelector||i.oMatchesSelector||i.webkitMatchesSelector}function a(s,p){for(;s&&s.nodeType!==n;){if(typeof s.matches=="function"&&s.matches(p))return s;s=s.parentNode}}o.exports=a},438:function(o,n,i){var a=i(828);function s(l,f,u,d,y){var L=c.apply(this,arguments);return l.addEventListener(u,L,y),{destroy:function(){l.removeEventListener(u,L,y)}}}function p(l,f,u,d,y){return typeof l.addEventListener=="function"?s.apply(null,arguments):typeof u=="function"?s.bind(null,document).apply(null,arguments):(typeof l=="string"&&(l=document.querySelectorAll(l)),Array.prototype.map.call(l,function(L){return s(L,f,u,d,y)}))}function c(l,f,u,d){return function(y){y.delegateTarget=a(y.target,f),y.delegateTarget&&d.call(l,y)}}o.exports=p},879:function(o,n){n.node=function(i){return i!==void 0&&i instanceof HTMLElement&&i.nodeType===1},n.nodeList=function(i){var a=Object.prototype.toString.call(i);return i!==void 0&&(a==="[object NodeList]"||a==="[object HTMLCollection]")&&"length"in i&&(i.length===0||n.node(i[0]))},n.string=function(i){return typeof i=="string"||i instanceof String},n.fn=function(i){var a=Object.prototype.toString.call(i);return a==="[object Function]"}},370:function(o,n,i){var a=i(879),s=i(438);function p(u,d,y){if(!u&&!d&&!y)throw new Error("Missing required arguments");if(!a.string(d))throw new TypeError("Second argument must be a String");if(!a.fn(y))throw new TypeError("Third argument must be a Function");if(a.node(u))return c(u,d,y);if(a.nodeList(u))return l(u,d,y);if(a.string(u))return f(u,d,y);throw new 
TypeError("First argument must be a String, HTMLElement, HTMLCollection, or NodeList")}function c(u,d,y){return u.addEventListener(d,y),{destroy:function(){u.removeEventListener(d,y)}}}function l(u,d,y){return Array.prototype.forEach.call(u,function(L){L.addEventListener(d,y)}),{destroy:function(){Array.prototype.forEach.call(u,function(L){L.removeEventListener(d,y)})}}}function f(u,d,y){return s(document.body,u,d,y)}o.exports=p},817:function(o){function n(i){var a;if(i.nodeName==="SELECT")i.focus(),a=i.value;else if(i.nodeName==="INPUT"||i.nodeName==="TEXTAREA"){var s=i.hasAttribute("readonly");s||i.setAttribute("readonly",""),i.select(),i.setSelectionRange(0,i.value.length),s||i.removeAttribute("readonly"),a=i.value}else{i.hasAttribute("contenteditable")&&i.focus();var p=window.getSelection(),c=document.createRange();c.selectNodeContents(i),p.removeAllRanges(),p.addRange(c),a=p.toString()}return a}o.exports=n},279:function(o){function n(){}n.prototype={on:function(i,a,s){var p=this.e||(this.e={});return(p[i]||(p[i]=[])).push({fn:a,ctx:s}),this},once:function(i,a,s){var p=this;function c(){p.off(i,c),a.apply(s,arguments)}return c._=a,this.on(i,c,s)},emit:function(i){var a=[].slice.call(arguments,1),s=((this.e||(this.e={}))[i]||[]).slice(),p=0,c=s.length;for(p;p0&&i[i.length-1])&&(c[0]===6||c[0]===2)){r=0;continue}if(c[0]===3&&(!i||c[1]>i[0]&&c[1]=e.length&&(e=void 0),{value:e&&e[o++],done:!e}}};throw new TypeError(t?"Object is not iterable.":"Symbol.iterator is not defined.")}function N(e,t){var r=typeof Symbol=="function"&&e[Symbol.iterator];if(!r)return e;var o=r.call(e),n,i=[],a;try{for(;(t===void 0||t-- >0)&&!(n=o.next()).done;)i.push(n.value)}catch(s){a={error:s}}finally{try{n&&!n.done&&(r=o.return)&&r.call(o)}finally{if(a)throw a.error}}return i}function q(e,t,r){if(r||arguments.length===2)for(var o=0,n=t.length,i;o1||p(d,L)})},y&&(n[d]=y(n[d])))}function p(d,y){try{c(o[d](y))}catch(L){u(i[0][3],L)}}function c(d){d.value instanceof 
nt?Promise.resolve(d.value.v).then(l,f):u(i[0][2],d)}function l(d){p("next",d)}function f(d){p("throw",d)}function u(d,y){d(y),i.shift(),i.length&&p(i[0][0],i[0][1])}}function uo(e){if(!Symbol.asyncIterator)throw new TypeError("Symbol.asyncIterator is not defined.");var t=e[Symbol.asyncIterator],r;return t?t.call(e):(e=typeof he=="function"?he(e):e[Symbol.iterator](),r={},o("next"),o("throw"),o("return"),r[Symbol.asyncIterator]=function(){return this},r);function o(i){r[i]=e[i]&&function(a){return new Promise(function(s,p){a=e[i](a),n(s,p,a.done,a.value)})}}function n(i,a,s,p){Promise.resolve(p).then(function(c){i({value:c,done:s})},a)}}function H(e){return typeof e=="function"}function ut(e){var t=function(o){Error.call(o),o.stack=new Error().stack},r=e(t);return r.prototype=Object.create(Error.prototype),r.prototype.constructor=r,r}var zt=ut(function(e){return function(r){e(this),this.message=r?r.length+` errors occurred during unsubscription: +`+r.map(function(o,n){return n+1+") "+o.toString()}).join(` + `):"",this.name="UnsubscriptionError",this.errors=r}});function Qe(e,t){if(e){var r=e.indexOf(t);0<=r&&e.splice(r,1)}}var Ue=function(){function e(t){this.initialTeardown=t,this.closed=!1,this._parentage=null,this._finalizers=null}return e.prototype.unsubscribe=function(){var t,r,o,n,i;if(!this.closed){this.closed=!0;var a=this._parentage;if(a)if(this._parentage=null,Array.isArray(a))try{for(var s=he(a),p=s.next();!p.done;p=s.next()){var c=p.value;c.remove(this)}}catch(L){t={error:L}}finally{try{p&&!p.done&&(r=s.return)&&r.call(s)}finally{if(t)throw t.error}}else a.remove(this);var l=this.initialTeardown;if(H(l))try{l()}catch(L){i=L instanceof zt?L.errors:[L]}var f=this._finalizers;if(f){this._finalizers=null;try{for(var u=he(f),d=u.next();!d.done;d=u.next()){var y=d.value;try{ho(y)}catch(L){i=i!=null?i:[],L instanceof zt?i=q(q([],N(i)),N(L.errors)):i.push(L)}}}catch(L){o={error:L}}finally{try{d&&!d.done&&(n=u.return)&&n.call(u)}finally{if(o)throw 
o.error}}}if(i)throw new zt(i)}},e.prototype.add=function(t){var r;if(t&&t!==this)if(this.closed)ho(t);else{if(t instanceof e){if(t.closed||t._hasParent(this))return;t._addParent(this)}(this._finalizers=(r=this._finalizers)!==null&&r!==void 0?r:[]).push(t)}},e.prototype._hasParent=function(t){var r=this._parentage;return r===t||Array.isArray(r)&&r.includes(t)},e.prototype._addParent=function(t){var r=this._parentage;this._parentage=Array.isArray(r)?(r.push(t),r):r?[r,t]:t},e.prototype._removeParent=function(t){var r=this._parentage;r===t?this._parentage=null:Array.isArray(r)&&Qe(r,t)},e.prototype.remove=function(t){var r=this._finalizers;r&&Qe(r,t),t instanceof e&&t._removeParent(this)},e.EMPTY=function(){var t=new e;return t.closed=!0,t}(),e}();var Tr=Ue.EMPTY;function qt(e){return e instanceof Ue||e&&"closed"in e&&H(e.remove)&&H(e.add)&&H(e.unsubscribe)}function ho(e){H(e)?e():e.unsubscribe()}var Pe={onUnhandledError:null,onStoppedNotification:null,Promise:void 0,useDeprecatedSynchronousErrorHandling:!1,useDeprecatedNextContext:!1};var dt={setTimeout:function(e,t){for(var r=[],o=2;o0},enumerable:!1,configurable:!0}),t.prototype._trySubscribe=function(r){return this._throwIfClosed(),e.prototype._trySubscribe.call(this,r)},t.prototype._subscribe=function(r){return this._throwIfClosed(),this._checkFinalizedStatuses(r),this._innerSubscribe(r)},t.prototype._innerSubscribe=function(r){var o=this,n=this,i=n.hasError,a=n.isStopped,s=n.observers;return i||a?Tr:(this.currentObservers=null,s.push(r),new Ue(function(){o.currentObservers=null,Qe(s,r)}))},t.prototype._checkFinalizedStatuses=function(r){var o=this,n=o.hasError,i=o.thrownError,a=o.isStopped;n?r.error(i):a&&r.complete()},t.prototype.asObservable=function(){var r=new j;return r.source=this,r},t.create=function(r,o){return new To(r,o)},t}(j);var To=function(e){oe(t,e);function t(r,o){var n=e.call(this)||this;return n.destination=r,n.source=o,n}return t.prototype.next=function(r){var 
o,n;(n=(o=this.destination)===null||o===void 0?void 0:o.next)===null||n===void 0||n.call(o,r)},t.prototype.error=function(r){var o,n;(n=(o=this.destination)===null||o===void 0?void 0:o.error)===null||n===void 0||n.call(o,r)},t.prototype.complete=function(){var r,o;(o=(r=this.destination)===null||r===void 0?void 0:r.complete)===null||o===void 0||o.call(r)},t.prototype._subscribe=function(r){var o,n;return(n=(o=this.source)===null||o===void 0?void 0:o.subscribe(r))!==null&&n!==void 0?n:Tr},t}(g);var _r=function(e){oe(t,e);function t(r){var o=e.call(this)||this;return o._value=r,o}return Object.defineProperty(t.prototype,"value",{get:function(){return this.getValue()},enumerable:!1,configurable:!0}),t.prototype._subscribe=function(r){var o=e.prototype._subscribe.call(this,r);return!o.closed&&r.next(this._value),o},t.prototype.getValue=function(){var r=this,o=r.hasError,n=r.thrownError,i=r._value;if(o)throw n;return this._throwIfClosed(),i},t.prototype.next=function(r){e.prototype.next.call(this,this._value=r)},t}(g);var At={now:function(){return(At.delegate||Date).now()},delegate:void 0};var Ct=function(e){oe(t,e);function t(r,o,n){r===void 0&&(r=1/0),o===void 0&&(o=1/0),n===void 0&&(n=At);var i=e.call(this)||this;return i._bufferSize=r,i._windowTime=o,i._timestampProvider=n,i._buffer=[],i._infiniteTimeWindow=!0,i._infiniteTimeWindow=o===1/0,i._bufferSize=Math.max(1,r),i._windowTime=Math.max(1,o),i}return t.prototype.next=function(r){var o=this,n=o.isStopped,i=o._buffer,a=o._infiniteTimeWindow,s=o._timestampProvider,p=o._windowTime;n||(i.push(r),!a&&i.push(s.now()+p)),this._trimBuffer(),e.prototype.next.call(this,r)},t.prototype._subscribe=function(r){this._throwIfClosed(),this._trimBuffer();for(var o=this._innerSubscribe(r),n=this,i=n._infiniteTimeWindow,a=n._buffer,s=a.slice(),p=0;p0?e.prototype.schedule.call(this,r,o):(this.delay=o,this.state=r,this.scheduler.flush(this),this)},t.prototype.execute=function(r,o){return 
o>0||this.closed?e.prototype.execute.call(this,r,o):this._execute(r,o)},t.prototype.requestAsyncId=function(r,o,n){return n===void 0&&(n=0),n!=null&&n>0||n==null&&this.delay>0?e.prototype.requestAsyncId.call(this,r,o,n):(r.flush(this),0)},t}(gt);var Lo=function(e){oe(t,e);function t(){return e!==null&&e.apply(this,arguments)||this}return t}(yt);var kr=new Lo(Oo);var Mo=function(e){oe(t,e);function t(r,o){var n=e.call(this,r,o)||this;return n.scheduler=r,n.work=o,n}return t.prototype.requestAsyncId=function(r,o,n){return n===void 0&&(n=0),n!==null&&n>0?e.prototype.requestAsyncId.call(this,r,o,n):(r.actions.push(this),r._scheduled||(r._scheduled=vt.requestAnimationFrame(function(){return r.flush(void 0)})))},t.prototype.recycleAsyncId=function(r,o,n){var i;if(n===void 0&&(n=0),n!=null?n>0:this.delay>0)return e.prototype.recycleAsyncId.call(this,r,o,n);var a=r.actions;o!=null&&((i=a[a.length-1])===null||i===void 0?void 0:i.id)!==o&&(vt.cancelAnimationFrame(o),r._scheduled=void 0)},t}(gt);var _o=function(e){oe(t,e);function t(){return e!==null&&e.apply(this,arguments)||this}return t.prototype.flush=function(r){this._active=!0;var o=this._scheduled;this._scheduled=void 0;var n=this.actions,i;r=r||n.shift();do if(i=r.execute(r.state,r.delay))break;while((r=n[0])&&r.id===o&&n.shift());if(this._active=!1,i){for(;(r=n[0])&&r.id===o&&n.shift();)r.unsubscribe();throw i}},t}(yt);var me=new _o(Mo);var S=new j(function(e){return e.complete()});function Yt(e){return e&&H(e.schedule)}function Hr(e){return e[e.length-1]}function Xe(e){return H(Hr(e))?e.pop():void 0}function ke(e){return Yt(Hr(e))?e.pop():void 0}function Bt(e,t){return typeof Hr(e)=="number"?e.pop():t}var xt=function(e){return e&&typeof e.length=="number"&&typeof e!="function"};function Gt(e){return H(e==null?void 0:e.then)}function Jt(e){return H(e[bt])}function Xt(e){return Symbol.asyncIterator&&H(e==null?void 0:e[Symbol.asyncIterator])}function Zt(e){return new TypeError("You provided "+(e!==null&&typeof 
e=="object"?"an invalid object":"'"+e+"'")+" where a stream was expected. You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.")}function Zi(){return typeof Symbol!="function"||!Symbol.iterator?"@@iterator":Symbol.iterator}var er=Zi();function tr(e){return H(e==null?void 0:e[er])}function rr(e){return fo(this,arguments,function(){var r,o,n,i;return Nt(this,function(a){switch(a.label){case 0:r=e.getReader(),a.label=1;case 1:a.trys.push([1,,9,10]),a.label=2;case 2:return[4,nt(r.read())];case 3:return o=a.sent(),n=o.value,i=o.done,i?[4,nt(void 0)]:[3,5];case 4:return[2,a.sent()];case 5:return[4,nt(n)];case 6:return[4,a.sent()];case 7:return a.sent(),[3,2];case 8:return[3,10];case 9:return r.releaseLock(),[7];case 10:return[2]}})})}function or(e){return H(e==null?void 0:e.getReader)}function U(e){if(e instanceof j)return e;if(e!=null){if(Jt(e))return ea(e);if(xt(e))return ta(e);if(Gt(e))return ra(e);if(Xt(e))return Ao(e);if(tr(e))return oa(e);if(or(e))return na(e)}throw Zt(e)}function ea(e){return new j(function(t){var r=e[bt]();if(H(r.subscribe))return r.subscribe(t);throw new TypeError("Provided object does not correctly implement Symbol.observable")})}function ta(e){return new j(function(t){for(var r=0;r=2;return function(o){return o.pipe(e?b(function(n,i){return e(n,i,o)}):le,Te(1),r?Ve(t):Qo(function(){return new ir}))}}function jr(e){return e<=0?function(){return S}:E(function(t,r){var o=[];t.subscribe(T(r,function(n){o.push(n),e=2,!0))}function pe(e){e===void 0&&(e={});var t=e.connector,r=t===void 0?function(){return new g}:t,o=e.resetOnError,n=o===void 0?!0:o,i=e.resetOnComplete,a=i===void 0?!0:i,s=e.resetOnRefCountZero,p=s===void 0?!0:s;return function(c){var l,f,u,d=0,y=!1,L=!1,X=function(){f==null||f.unsubscribe(),f=void 0},ee=function(){X(),l=u=void 0,y=L=!1},J=function(){var k=l;ee(),k==null||k.unsubscribe()};return E(function(k,ft){d++,!L&&!y&&X();var 
qe=u=u!=null?u:r();ft.add(function(){d--,d===0&&!L&&!y&&(f=Ur(J,p))}),qe.subscribe(ft),!l&&d>0&&(l=new at({next:function(Fe){return qe.next(Fe)},error:function(Fe){L=!0,X(),f=Ur(ee,n,Fe),qe.error(Fe)},complete:function(){y=!0,X(),f=Ur(ee,a),qe.complete()}}),U(k).subscribe(l))})(c)}}function Ur(e,t){for(var r=[],o=2;oe.next(document)),e}function P(e,t=document){return Array.from(t.querySelectorAll(e))}function R(e,t=document){let r=fe(e,t);if(typeof r=="undefined")throw new ReferenceError(`Missing element: expected "${e}" to be present`);return r}function fe(e,t=document){return t.querySelector(e)||void 0}function Ie(){var e,t,r,o;return(o=(r=(t=(e=document.activeElement)==null?void 0:e.shadowRoot)==null?void 0:t.activeElement)!=null?r:document.activeElement)!=null?o:void 0}var wa=O(h(document.body,"focusin"),h(document.body,"focusout")).pipe(_e(1),Q(void 0),m(()=>Ie()||document.body),G(1));function et(e){return wa.pipe(m(t=>e.contains(t)),K())}function $t(e,t){return C(()=>O(h(e,"mouseenter").pipe(m(()=>!0)),h(e,"mouseleave").pipe(m(()=>!1))).pipe(t?Ht(r=>Le(+!r*t)):le,Q(e.matches(":hover"))))}function Jo(e,t){if(typeof t=="string"||typeof t=="number")e.innerHTML+=t.toString();else if(t instanceof Node)e.appendChild(t);else if(Array.isArray(t))for(let r of t)Jo(e,r)}function x(e,t,...r){let o=document.createElement(e);if(t)for(let n of Object.keys(t))typeof t[n]!="undefined"&&(typeof t[n]!="boolean"?o.setAttribute(n,t[n]):o.setAttribute(n,""));for(let n of r)Jo(o,n);return o}function sr(e){if(e>999){let t=+((e-950)%1e3>99);return`${((e+1e-6)/1e3).toFixed(t)}k`}else return e.toString()}function Tt(e){let t=x("script",{src:e});return C(()=>(document.head.appendChild(t),O(h(t,"load"),h(t,"error").pipe(v(()=>$r(()=>new ReferenceError(`Invalid script: ${e}`))))).pipe(m(()=>{}),_(()=>document.head.removeChild(t)),Te(1))))}var Xo=new g,Ta=C(()=>typeof ResizeObserver=="undefined"?Tt("https://unpkg.com/resize-observer-polyfill"):I(void 0)).pipe(m(()=>new 
ResizeObserver(e=>e.forEach(t=>Xo.next(t)))),v(e=>O(Ye,I(e)).pipe(_(()=>e.disconnect()))),G(1));function ce(e){return{width:e.offsetWidth,height:e.offsetHeight}}function ge(e){let t=e;for(;t.clientWidth===0&&t.parentElement;)t=t.parentElement;return Ta.pipe(w(r=>r.observe(t)),v(r=>Xo.pipe(b(o=>o.target===t),_(()=>r.unobserve(t)))),m(()=>ce(e)),Q(ce(e)))}function St(e){return{width:e.scrollWidth,height:e.scrollHeight}}function cr(e){let t=e.parentElement;for(;t&&(e.scrollWidth<=t.scrollWidth&&e.scrollHeight<=t.scrollHeight);)t=(e=t).parentElement;return t?e:void 0}function Zo(e){let t=[],r=e.parentElement;for(;r;)(e.clientWidth>r.clientWidth||e.clientHeight>r.clientHeight)&&t.push(r),r=(e=r).parentElement;return t.length===0&&t.push(document.documentElement),t}function De(e){return{x:e.offsetLeft,y:e.offsetTop}}function en(e){let t=e.getBoundingClientRect();return{x:t.x+window.scrollX,y:t.y+window.scrollY}}function tn(e){return O(h(window,"load"),h(window,"resize")).pipe(Me(0,me),m(()=>De(e)),Q(De(e)))}function pr(e){return{x:e.scrollLeft,y:e.scrollTop}}function Ne(e){return O(h(e,"scroll"),h(window,"scroll"),h(window,"resize")).pipe(Me(0,me),m(()=>pr(e)),Q(pr(e)))}var rn=new g,Sa=C(()=>I(new IntersectionObserver(e=>{for(let t of e)rn.next(t)},{threshold:0}))).pipe(v(e=>O(Ye,I(e)).pipe(_(()=>e.disconnect()))),G(1));function tt(e){return Sa.pipe(w(t=>t.observe(e)),v(t=>rn.pipe(b(({target:r})=>r===e),_(()=>t.unobserve(e)),m(({isIntersecting:r})=>r))))}function on(e,t=16){return Ne(e).pipe(m(({y:r})=>{let o=ce(e),n=St(e);return r>=n.height-o.height-t}),K())}var lr={drawer:R("[data-md-toggle=drawer]"),search:R("[data-md-toggle=search]")};function nn(e){return lr[e].checked}function Je(e,t){lr[e].checked!==t&&lr[e].click()}function ze(e){let t=lr[e];return h(t,"change").pipe(m(()=>t.checked),Q(t.checked))}function Oa(e,t){switch(e.constructor){case HTMLInputElement:return e.type==="radio"?/^Arrow/.test(t):!0;case HTMLSelectElement:case 
HTMLTextAreaElement:return!0;default:return e.isContentEditable}}function La(){return O(h(window,"compositionstart").pipe(m(()=>!0)),h(window,"compositionend").pipe(m(()=>!1))).pipe(Q(!1))}function an(){let e=h(window,"keydown").pipe(b(t=>!(t.metaKey||t.ctrlKey)),m(t=>({mode:nn("search")?"search":"global",type:t.key,claim(){t.preventDefault(),t.stopPropagation()}})),b(({mode:t,type:r})=>{if(t==="global"){let o=Ie();if(typeof o!="undefined")return!Oa(o,r)}return!0}),pe());return La().pipe(v(t=>t?S:e))}function ye(){return new URL(location.href)}function lt(e,t=!1){if(B("navigation.instant")&&!t){let r=x("a",{href:e.href});document.body.appendChild(r),r.click(),r.remove()}else location.href=e.href}function sn(){return new g}function cn(){return location.hash.slice(1)}function pn(e){let t=x("a",{href:e});t.addEventListener("click",r=>r.stopPropagation()),t.click()}function Ma(e){return O(h(window,"hashchange"),e).pipe(m(cn),Q(cn()),b(t=>t.length>0),G(1))}function ln(e){return Ma(e).pipe(m(t=>fe(`[id="${t}"]`)),b(t=>typeof t!="undefined"))}function Pt(e){let t=matchMedia(e);return ar(r=>t.addListener(()=>r(t.matches))).pipe(Q(t.matches))}function mn(){let e=matchMedia("print");return O(h(window,"beforeprint").pipe(m(()=>!0)),h(window,"afterprint").pipe(m(()=>!1))).pipe(Q(e.matches))}function Nr(e,t){return e.pipe(v(r=>r?t():S))}function zr(e,t){return new j(r=>{let o=new XMLHttpRequest;return o.open("GET",`${e}`),o.responseType="blob",o.addEventListener("load",()=>{o.status>=200&&o.status<300?(r.next(o.response),r.complete()):r.error(new Error(o.statusText))}),o.addEventListener("error",()=>{r.error(new Error("Network error"))}),o.addEventListener("abort",()=>{r.complete()}),typeof(t==null?void 0:t.progress$)!="undefined"&&(o.addEventListener("progress",n=>{var i;if(n.lengthComputable)t.progress$.next(n.loaded/n.total*100);else{let 
a=(i=o.getResponseHeader("Content-Length"))!=null?i:0;t.progress$.next(n.loaded/+a*100)}}),t.progress$.next(5)),o.send(),()=>o.abort()})}function je(e,t){return zr(e,t).pipe(v(r=>r.text()),m(r=>JSON.parse(r)),G(1))}function fn(e,t){let r=new DOMParser;return zr(e,t).pipe(v(o=>o.text()),m(o=>r.parseFromString(o,"text/html")),G(1))}function un(e,t){let r=new DOMParser;return zr(e,t).pipe(v(o=>o.text()),m(o=>r.parseFromString(o,"text/xml")),G(1))}function dn(){return{x:Math.max(0,scrollX),y:Math.max(0,scrollY)}}function hn(){return O(h(window,"scroll",{passive:!0}),h(window,"resize",{passive:!0})).pipe(m(dn),Q(dn()))}function bn(){return{width:innerWidth,height:innerHeight}}function vn(){return h(window,"resize",{passive:!0}).pipe(m(bn),Q(bn()))}function gn(){return z([hn(),vn()]).pipe(m(([e,t])=>({offset:e,size:t})),G(1))}function mr(e,{viewport$:t,header$:r}){let o=t.pipe(te("size")),n=z([o,r]).pipe(m(()=>De(e)));return z([r,t,n]).pipe(m(([{height:i},{offset:a,size:s},{x:p,y:c}])=>({offset:{x:a.x-p,y:a.y-c+i},size:s})))}function _a(e){return h(e,"message",t=>t.data)}function Aa(e){let t=new g;return t.subscribe(r=>e.postMessage(r)),t}function yn(e,t=new Worker(e)){let r=_a(t),o=Aa(t),n=new g;n.subscribe(o);let i=o.pipe(Z(),ie(!0));return n.pipe(Z(),Re(r.pipe(W(i))),pe())}var Ca=R("#__config"),Ot=JSON.parse(Ca.textContent);Ot.base=`${new URL(Ot.base,ye())}`;function xe(){return Ot}function B(e){return Ot.features.includes(e)}function Ee(e,t){return typeof t!="undefined"?Ot.translations[e].replace("#",t.toString()):Ot.translations[e]}function Se(e,t=document){return R(`[data-md-component=${e}]`,t)}function ae(e,t=document){return P(`[data-md-component=${e}]`,t)}function ka(e){let t=R(".md-typeset > :first-child",e);return h(t,"click",{once:!0}).pipe(m(()=>R(".md-typeset",e)),m(r=>({hash:__md_hash(r.innerHTML)})))}function xn(e){if(!B("announce.dismiss")||!e.childElementCount)return S;if(!e.hidden){let 
t=R(".md-typeset",e);__md_hash(t.innerHTML)===__md_get("__announce")&&(e.hidden=!0)}return C(()=>{let t=new g;return t.subscribe(({hash:r})=>{e.hidden=!0,__md_set("__announce",r)}),ka(e).pipe(w(r=>t.next(r)),_(()=>t.complete()),m(r=>$({ref:e},r)))})}function Ha(e,{target$:t}){return t.pipe(m(r=>({hidden:r!==e})))}function En(e,t){let r=new g;return r.subscribe(({hidden:o})=>{e.hidden=o}),Ha(e,t).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))}function Rt(e,t){return t==="inline"?x("div",{class:"md-tooltip md-tooltip--inline",id:e,role:"tooltip"},x("div",{class:"md-tooltip__inner md-typeset"})):x("div",{class:"md-tooltip",id:e,role:"tooltip"},x("div",{class:"md-tooltip__inner md-typeset"}))}function wn(...e){return x("div",{class:"md-tooltip2",role:"tooltip"},x("div",{class:"md-tooltip2__inner md-typeset"},e))}function Tn(e,t){if(t=t?`${t}_annotation_${e}`:void 0,t){let r=t?`#${t}`:void 0;return x("aside",{class:"md-annotation",tabIndex:0},Rt(t),x("a",{href:r,class:"md-annotation__index",tabIndex:-1},x("span",{"data-md-annotation-id":e})))}else return x("aside",{class:"md-annotation",tabIndex:0},Rt(t),x("span",{class:"md-annotation__index",tabIndex:-1},x("span",{"data-md-annotation-id":e})))}function Sn(e){return x("button",{class:"md-clipboard md-icon",title:Ee("clipboard.copy"),"data-clipboard-target":`#${e} > code`})}var Ln=Mt(qr());function Qr(e,t){let r=t&2,o=t&1,n=Object.keys(e.terms).filter(p=>!e.terms[p]).reduce((p,c)=>[...p,x("del",null,(0,Ln.default)(c))," "],[]).slice(0,-1),i=xe(),a=new URL(e.location,i.base);B("search.highlight")&&a.searchParams.set("h",Object.entries(e.terms).filter(([,p])=>p).reduce((p,[c])=>`${p} ${c}`.trim(),""));let{tags:s}=xe();return x("a",{href:`${a}`,class:"md-search-result__link",tabIndex:-1},x("article",{class:"md-search-result__article md-typeset","data-md-score":e.score.toFixed(2)},r>0&&x("div",{class:"md-search-result__icon 
md-icon"}),r>0&&x("h1",null,e.title),r<=0&&x("h2",null,e.title),o>0&&e.text.length>0&&e.text,e.tags&&x("nav",{class:"md-tags"},e.tags.map(p=>{let c=s?p in s?`md-tag-icon md-tag--${s[p]}`:"md-tag-icon":"";return x("span",{class:`md-tag ${c}`},p)})),o>0&&n.length>0&&x("p",{class:"md-search-result__terms"},Ee("search.result.term.missing"),": ",...n)))}function Mn(e){let t=e[0].score,r=[...e],o=xe(),n=r.findIndex(l=>!`${new URL(l.location,o.base)}`.includes("#")),[i]=r.splice(n,1),a=r.findIndex(l=>l.scoreQr(l,1)),...p.length?[x("details",{class:"md-search-result__more"},x("summary",{tabIndex:-1},x("div",null,p.length>0&&p.length===1?Ee("search.result.more.one"):Ee("search.result.more.other",p.length))),...p.map(l=>Qr(l,1)))]:[]];return x("li",{class:"md-search-result__item"},c)}function _n(e){return x("ul",{class:"md-source__facts"},Object.entries(e).map(([t,r])=>x("li",{class:`md-source__fact md-source__fact--${t}`},typeof r=="number"?sr(r):r)))}function Kr(e){let t=`tabbed-control tabbed-control--${e}`;return x("div",{class:t,hidden:!0},x("button",{class:"tabbed-button",tabIndex:-1,"aria-hidden":"true"}))}function An(e){return x("div",{class:"md-typeset__scrollwrap"},x("div",{class:"md-typeset__table"},e))}function Ra(e){var o;let t=xe(),r=new URL(`../${e.version}/`,t.base);return x("li",{class:"md-version__item"},x("a",{href:`${r}`,class:"md-version__link"},e.title,((o=t.version)==null?void 0:o.alias)&&e.aliases.length>0&&x("span",{class:"md-version__alias"},e.aliases[0])))}function Cn(e,t){var o;let r=xe();return e=e.filter(n=>{var i;return!((i=n.properties)!=null&&i.hidden)}),x("div",{class:"md-version"},x("button",{class:"md-version__current","aria-label":Ee("select.version")},t.title,((o=r.version)==null?void 0:o.alias)&&t.aliases.length>0&&x("span",{class:"md-version__alias"},t.aliases[0])),x("ul",{class:"md-version__list"},e.map(Ra)))}var Ia=0;function ja(e){let 
t=z([et(e),$t(e)]).pipe(m(([o,n])=>o||n),K()),r=C(()=>Zo(e)).pipe(ne(Ne),pt(1),He(t),m(()=>en(e)));return t.pipe(Ae(o=>o),v(()=>z([t,r])),m(([o,n])=>({active:o,offset:n})),pe())}function Fa(e,t){let{content$:r,viewport$:o}=t,n=`__tooltip2_${Ia++}`;return C(()=>{let i=new g,a=new _r(!1);i.pipe(Z(),ie(!1)).subscribe(a);let s=a.pipe(Ht(c=>Le(+!c*250,kr)),K(),v(c=>c?r:S),w(c=>c.id=n),pe());z([i.pipe(m(({active:c})=>c)),s.pipe(v(c=>$t(c,250)),Q(!1))]).pipe(m(c=>c.some(l=>l))).subscribe(a);let p=a.pipe(b(c=>c),re(s,o),m(([c,l,{size:f}])=>{let u=e.getBoundingClientRect(),d=u.width/2;if(l.role==="tooltip")return{x:d,y:8+u.height};if(u.y>=f.height/2){let{height:y}=ce(l);return{x:d,y:-16-y}}else return{x:d,y:16+u.height}}));return z([s,i,p]).subscribe(([c,{offset:l},f])=>{c.style.setProperty("--md-tooltip-host-x",`${l.x}px`),c.style.setProperty("--md-tooltip-host-y",`${l.y}px`),c.style.setProperty("--md-tooltip-x",`${f.x}px`),c.style.setProperty("--md-tooltip-y",`${f.y}px`),c.classList.toggle("md-tooltip2--top",f.y<0),c.classList.toggle("md-tooltip2--bottom",f.y>=0)}),a.pipe(b(c=>c),re(s,(c,l)=>l),b(c=>c.role==="tooltip")).subscribe(c=>{let l=ce(R(":scope > *",c));c.style.setProperty("--md-tooltip-width",`${l.width}px`),c.style.setProperty("--md-tooltip-tail","0px")}),a.pipe(K(),ve(me),re(s)).subscribe(([c,l])=>{l.classList.toggle("md-tooltip2--active",c)}),z([a.pipe(b(c=>c)),s]).subscribe(([c,l])=>{l.role==="dialog"?(e.setAttribute("aria-controls",n),e.setAttribute("aria-haspopup","dialog")):e.setAttribute("aria-describedby",n)}),a.pipe(b(c=>!c)).subscribe(()=>{e.removeAttribute("aria-controls"),e.removeAttribute("aria-describedby"),e.removeAttribute("aria-haspopup")}),ja(e).pipe(w(c=>i.next(c)),_(()=>i.complete()),m(c=>$({ref:e},c)))})}function mt(e,{viewport$:t},r=document.body){return Fa(e,{content$:new j(o=>{let n=e.title,i=wn(n);return o.next(i),e.removeAttribute("title"),r.append(i),()=>{i.remove(),e.setAttribute("title",n)}}),viewport$:t})}function Ua(e,t){let 
r=C(()=>z([tn(e),Ne(t)])).pipe(m(([{x:o,y:n},i])=>{let{width:a,height:s}=ce(e);return{x:o-i.x+a/2,y:n-i.y+s/2}}));return et(e).pipe(v(o=>r.pipe(m(n=>({active:o,offset:n})),Te(+!o||1/0))))}function kn(e,t,{target$:r}){let[o,n]=Array.from(e.children);return C(()=>{let i=new g,a=i.pipe(Z(),ie(!0));return i.subscribe({next({offset:s}){e.style.setProperty("--md-tooltip-x",`${s.x}px`),e.style.setProperty("--md-tooltip-y",`${s.y}px`)},complete(){e.style.removeProperty("--md-tooltip-x"),e.style.removeProperty("--md-tooltip-y")}}),tt(e).pipe(W(a)).subscribe(s=>{e.toggleAttribute("data-md-visible",s)}),O(i.pipe(b(({active:s})=>s)),i.pipe(_e(250),b(({active:s})=>!s))).subscribe({next({active:s}){s?e.prepend(o):o.remove()},complete(){e.prepend(o)}}),i.pipe(Me(16,me)).subscribe(({active:s})=>{o.classList.toggle("md-tooltip--active",s)}),i.pipe(pt(125,me),b(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:s})=>s)).subscribe({next(s){s?e.style.setProperty("--md-tooltip-0",`${-s}px`):e.style.removeProperty("--md-tooltip-0")},complete(){e.style.removeProperty("--md-tooltip-0")}}),h(n,"click").pipe(W(a),b(s=>!(s.metaKey||s.ctrlKey))).subscribe(s=>{s.stopPropagation(),s.preventDefault()}),h(n,"mousedown").pipe(W(a),re(i)).subscribe(([s,{active:p}])=>{var c;if(s.button!==0||s.metaKey||s.ctrlKey)s.preventDefault();else if(p){s.preventDefault();let l=e.parentElement.closest(".md-annotation");l instanceof HTMLElement?l.focus():(c=Ie())==null||c.blur()}}),r.pipe(W(a),b(s=>s===o),Ge(125)).subscribe(()=>e.focus()),Ua(e,t).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))})}function Wa(e){return e.tagName==="CODE"?P(".c, .c1, .cm",e):[e]}function Va(e){let t=[];for(let r of Wa(e)){let o=[],n=document.createNodeIterator(r,NodeFilter.SHOW_TEXT);for(let i=n.nextNode();i;i=n.nextNode())o.push(i);for(let i of o){let a;for(;a=/(\(\d+\))(!)?/.exec(i.textContent);){let[,s,p]=a;if(typeof p=="undefined"){let 
c=i.splitText(a.index);i=c.splitText(s.length),t.push(c)}else{i.textContent=s,t.push(i);break}}}}return t}function Hn(e,t){t.append(...Array.from(e.childNodes))}function fr(e,t,{target$:r,print$:o}){let n=t.closest("[id]"),i=n==null?void 0:n.id,a=new Map;for(let s of Va(t)){let[,p]=s.textContent.match(/\((\d+)\)/);fe(`:scope > li:nth-child(${p})`,e)&&(a.set(p,Tn(p,i)),s.replaceWith(a.get(p)))}return a.size===0?S:C(()=>{let s=new g,p=s.pipe(Z(),ie(!0)),c=[];for(let[l,f]of a)c.push([R(".md-typeset",f),R(`:scope > li:nth-child(${l})`,e)]);return o.pipe(W(p)).subscribe(l=>{e.hidden=!l,e.classList.toggle("md-annotation-list",l);for(let[f,u]of c)l?Hn(f,u):Hn(u,f)}),O(...[...a].map(([,l])=>kn(l,t,{target$:r}))).pipe(_(()=>s.complete()),pe())})}function $n(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return $n(t)}}function Pn(e,t){return C(()=>{let r=$n(e);return typeof r!="undefined"?fr(r,e,t):S})}var Rn=Mt(Br());var Da=0;function In(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return In(t)}}function Na(e){return ge(e).pipe(m(({width:t})=>({scrollable:St(e).width>t})),te("scrollable"))}function jn(e,t){let{matches:r}=matchMedia("(hover)"),o=C(()=>{let n=new g,i=n.pipe(jr(1));n.subscribe(({scrollable:c})=>{c&&r?e.setAttribute("tabindex","0"):e.removeAttribute("tabindex")});let a=[];if(Rn.default.isSupported()&&(e.closest(".copy")||B("content.code.copy")&&!e.closest(".no-copy"))){let c=e.closest("pre");c.id=`__code_${Da++}`;let l=Sn(c.id);c.insertBefore(l,e),B("content.tooltips")&&a.push(mt(l,{viewport$}))}let s=e.closest(".highlight");if(s instanceof HTMLElement){let c=In(s);if(typeof c!="undefined"&&(s.classList.contains("annotate")||B("content.code.annotate"))){let l=fr(c,e,t);a.push(ge(s).pipe(W(i),m(({width:f,height:u})=>f&&u),K(),v(f=>f?l:S)))}}return P(":scope > 
span[id]",e).length&&e.classList.add("md-code__content"),Na(e).pipe(w(c=>n.next(c)),_(()=>n.complete()),m(c=>$({ref:e},c)),Re(...a))});return B("content.lazy")?tt(e).pipe(b(n=>n),Te(1),v(()=>o)):o}function za(e,{target$:t,print$:r}){let o=!0;return O(t.pipe(m(n=>n.closest("details:not([open])")),b(n=>e===n),m(()=>({action:"open",reveal:!0}))),r.pipe(b(n=>n||!o),w(()=>o=e.open),m(n=>({action:n?"open":"close"}))))}function Fn(e,t){return C(()=>{let r=new g;return r.subscribe(({action:o,reveal:n})=>{e.toggleAttribute("open",o==="open"),n&&e.scrollIntoView()}),za(e,t).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}var Un=".node circle,.node ellipse,.node path,.node polygon,.node rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}marker{fill:var(--md-mermaid-edge-color)!important}.edgeLabel .label rect{fill:#0000}.flowchartTitleText{fill:var(--md-mermaid-label-fg-color)}.label{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.label foreignObject{line-height:normal;overflow:visible}.label div .edgeLabel{color:var(--md-mermaid-label-fg-color)}.edgeLabel,.edgeLabel p,.label div .edgeLabel{background-color:var(--md-mermaid-label-bg-color)}.edgeLabel,.edgeLabel p{fill:var(--md-mermaid-label-bg-color);color:var(--md-mermaid-edge-color)}.edgePath .path,.flowchart-link{stroke:var(--md-mermaid-edge-color);stroke-width:.05rem}.edgePath .arrowheadPath{fill:var(--md-mermaid-edge-color);stroke:none}.cluster rect{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}.cluster span{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}g #flowchart-circleEnd,g #flowchart-circleStart,g #flowchart-crossEnd,g #flowchart-crossStart,g #flowchart-pointEnd,g #flowchart-pointStart{stroke:none}.classDiagramTitleText{fill:var(--md-mermaid-label-fg-color)}g.classGroup line,g.classGroup 
rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.classGroup text{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.classLabel .box{fill:var(--md-mermaid-label-bg-color);background-color:var(--md-mermaid-label-bg-color);opacity:1}.classLabel .label{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node .divider{stroke:var(--md-mermaid-node-fg-color)}.relation{stroke:var(--md-mermaid-edge-color)}.cardinality{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.cardinality text{fill:inherit!important}defs marker.marker.composition.class path,defs marker.marker.dependency.class path,defs marker.marker.extension.class path{fill:var(--md-mermaid-edge-color)!important;stroke:var(--md-mermaid-edge-color)!important}defs marker.marker.aggregation.class path{fill:var(--md-mermaid-label-bg-color)!important;stroke:var(--md-mermaid-edge-color)!important}.statediagramTitleText{fill:var(--md-mermaid-label-fg-color)}g.stateGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.stateGroup .state-title{fill:var(--md-mermaid-label-fg-color)!important;font-family:var(--md-mermaid-font-family)}g.stateGroup .composit{fill:var(--md-mermaid-label-bg-color)}.nodeLabel,.nodeLabel p{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}a .nodeLabel{text-decoration:underline}.node circle.state-end,.node circle.state-start,.start-state{fill:var(--md-mermaid-edge-color);stroke:none}.end-state-inner,.end-state-outer{fill:var(--md-mermaid-edge-color)}.end-state-inner,.node circle.state-end{stroke:var(--md-mermaid-label-bg-color)}.transition{stroke:var(--md-mermaid-edge-color)}[id^=state-fork] rect,[id^=state-join] rect{fill:var(--md-mermaid-edge-color)!important;stroke:none!important}.statediagram-cluster.statediagram-cluster .inner{fill:var(--md-default-bg-color)}.statediagram-cluster 
rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.statediagram-state rect.divider{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}defs #statediagram-barbEnd{stroke:var(--md-mermaid-edge-color)}.entityTitleText{fill:var(--md-mermaid-label-fg-color)}.attributeBoxEven,.attributeBoxOdd{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityBox{fill:var(--md-mermaid-label-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityLabel{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.relationshipLabelBox{fill:var(--md-mermaid-label-bg-color);fill-opacity:1;background-color:var(--md-mermaid-label-bg-color);opacity:1}.relationshipLabel{fill:var(--md-mermaid-label-fg-color)}.relationshipLine{stroke:var(--md-mermaid-edge-color)}defs #ONE_OR_MORE_END *,defs #ONE_OR_MORE_START *,defs #ONLY_ONE_END *,defs #ONLY_ONE_START *,defs #ZERO_OR_MORE_END *,defs #ZERO_OR_MORE_START *,defs #ZERO_OR_ONE_END *,defs #ZERO_OR_ONE_START *{stroke:var(--md-mermaid-edge-color)!important}defs #ZERO_OR_MORE_END circle,defs #ZERO_OR_MORE_START circle{fill:var(--md-mermaid-label-bg-color)}text:not([class]):last-child{fill:var(--md-mermaid-label-fg-color)}.actor{fill:var(--md-mermaid-sequence-actor-bg-color);stroke:var(--md-mermaid-sequence-actor-border-color)}text.actor>tspan{fill:var(--md-mermaid-sequence-actor-fg-color);font-family:var(--md-mermaid-font-family)}line{stroke:var(--md-mermaid-sequence-actor-line-color)}.actor-man circle,.actor-man 
line{fill:var(--md-mermaid-sequence-actorman-bg-color);stroke:var(--md-mermaid-sequence-actorman-line-color)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-sequence-message-line-color)}.note{fill:var(--md-mermaid-sequence-note-bg-color);stroke:var(--md-mermaid-sequence-note-border-color)}.loopText,.loopText>tspan,.messageText,.noteText>tspan{stroke:none;font-family:var(--md-mermaid-font-family)!important}.messageText{fill:var(--md-mermaid-sequence-message-fg-color)}.loopText,.loopText>tspan{fill:var(--md-mermaid-sequence-loop-fg-color)}.noteText>tspan{fill:var(--md-mermaid-sequence-note-fg-color)}#arrowhead path{fill:var(--md-mermaid-sequence-message-line-color);stroke:none}.loopLine{fill:var(--md-mermaid-sequence-loop-bg-color);stroke:var(--md-mermaid-sequence-loop-border-color)}.labelBox{fill:var(--md-mermaid-sequence-label-bg-color);stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-sequence-label-fg-color);font-family:var(--md-mermaid-font-family)}.sequenceNumber{fill:var(--md-mermaid-sequence-number-fg-color)}rect.rect{fill:var(--md-mermaid-sequence-box-bg-color);stroke:none}rect.rect+text.text{fill:var(--md-mermaid-sequence-box-fg-color)}defs #sequencenumber{fill:var(--md-mermaid-sequence-number-bg-color)!important}";var Gr,Qa=0;function Ka(){return typeof mermaid=="undefined"||mermaid instanceof Element?Tt("https://unpkg.com/mermaid@11/dist/mermaid.min.js"):I(void 0)}function Wn(e){return e.classList.remove("mermaid"),Gr||(Gr=Ka().pipe(w(()=>mermaid.initialize({startOnLoad:!1,themeCSS:Un,sequence:{actorFontSize:"16px",messageFontSize:"16px",noteFontSize:"16px"}})),m(()=>{}),G(1))),Gr.subscribe(()=>co(this,null,function*(){e.classList.add("mermaid");let t=`__mermaid_${Qa++}`,r=x("div",{class:"mermaid"}),o=e.textContent,{svg:n,fn:i}=yield mermaid.render(t,o),a=r.attachShadow({mode:"closed"});a.innerHTML=n,e.replaceWith(r),i==null||i(a)})),Gr.pipe(m(()=>({ref:e})))}var Vn=x("table");function Dn(e){return 
e.replaceWith(Vn),Vn.replaceWith(An(e)),I({ref:e})}function Ya(e){let t=e.find(r=>r.checked)||e[0];return O(...e.map(r=>h(r,"change").pipe(m(()=>R(`label[for="${r.id}"]`))))).pipe(Q(R(`label[for="${t.id}"]`)),m(r=>({active:r})))}function Nn(e,{viewport$:t,target$:r}){let o=R(".tabbed-labels",e),n=P(":scope > input",e),i=Kr("prev");e.append(i);let a=Kr("next");return e.append(a),C(()=>{let s=new g,p=s.pipe(Z(),ie(!0));z([s,ge(e),tt(e)]).pipe(W(p),Me(1,me)).subscribe({next([{active:c},l]){let f=De(c),{width:u}=ce(c);e.style.setProperty("--md-indicator-x",`${f.x}px`),e.style.setProperty("--md-indicator-width",`${u}px`);let d=pr(o);(f.xd.x+l.width)&&o.scrollTo({left:Math.max(0,f.x-16),behavior:"smooth"})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),z([Ne(o),ge(o)]).pipe(W(p)).subscribe(([c,l])=>{let f=St(o);i.hidden=c.x<16,a.hidden=c.x>f.width-l.width-16}),O(h(i,"click").pipe(m(()=>-1)),h(a,"click").pipe(m(()=>1))).pipe(W(p)).subscribe(c=>{let{width:l}=ce(o);o.scrollBy({left:l*c,behavior:"smooth"})}),r.pipe(W(p),b(c=>n.includes(c))).subscribe(c=>c.click()),o.classList.add("tabbed-labels--linked");for(let c of n){let l=R(`label[for="${c.id}"]`);l.replaceChildren(x("a",{href:`#${l.htmlFor}`,tabIndex:-1},...Array.from(l.childNodes))),h(l.firstElementChild,"click").pipe(W(p),b(f=>!(f.metaKey||f.ctrlKey)),w(f=>{f.preventDefault(),f.stopPropagation()})).subscribe(()=>{history.replaceState({},"",`#${l.htmlFor}`),l.click()})}return B("content.tabs.link")&&s.pipe(Ce(1),re(t)).subscribe(([{active:c},{offset:l}])=>{let f=c.innerText.trim();if(c.hasAttribute("data-md-switching"))c.removeAttribute("data-md-switching");else{let u=e.offsetTop-l.y;for(let y of P("[data-tabs]"))for(let L of P(":scope > input",y)){let X=R(`label[for="${L.id}"]`);if(X!==c&&X.innerText.trim()===f){X.setAttribute("data-md-switching",""),L.click();break}}window.scrollTo({top:e.offsetTop-u});let d=__md_get("__tabs")||[];__md_set("__tabs",[...new 
Set([f,...d])])}}),s.pipe(W(p)).subscribe(()=>{for(let c of P("audio, video",e))c.pause()}),Ya(n).pipe(w(c=>s.next(c)),_(()=>s.complete()),m(c=>$({ref:e},c)))}).pipe(Ke(se))}function zn(e,{viewport$:t,target$:r,print$:o}){return O(...P(".annotate:not(.highlight)",e).map(n=>Pn(n,{target$:r,print$:o})),...P("pre:not(.mermaid) > code",e).map(n=>jn(n,{target$:r,print$:o})),...P("pre.mermaid",e).map(n=>Wn(n)),...P("table:not([class])",e).map(n=>Dn(n)),...P("details",e).map(n=>Fn(n,{target$:r,print$:o})),...P("[data-tabs]",e).map(n=>Nn(n,{viewport$:t,target$:r})),...P("[title]",e).filter(()=>B("content.tooltips")).map(n=>mt(n,{viewport$:t})))}function Ba(e,{alert$:t}){return t.pipe(v(r=>O(I(!0),I(!1).pipe(Ge(2e3))).pipe(m(o=>({message:r,active:o})))))}function qn(e,t){let r=R(".md-typeset",e);return C(()=>{let o=new g;return o.subscribe(({message:n,active:i})=>{e.classList.toggle("md-dialog--active",i),r.textContent=n}),Ba(e,t).pipe(w(n=>o.next(n)),_(()=>o.complete()),m(n=>$({ref:e},n)))})}var Ga=0;function Ja(e,t){document.body.append(e);let{width:r}=ce(e);e.style.setProperty("--md-tooltip-width",`${r}px`),e.remove();let o=cr(t),n=typeof o!="undefined"?Ne(o):I({x:0,y:0}),i=O(et(t),$t(t)).pipe(K());return z([i,n]).pipe(m(([a,s])=>{let{x:p,y:c}=De(t),l=ce(t),f=t.closest("table");return f&&t.parentElement&&(p+=f.offsetLeft+t.parentElement.offsetLeft,c+=f.offsetTop+t.parentElement.offsetTop),{active:a,offset:{x:p-s.x+l.width/2-r/2,y:c-s.y+l.height+8}}}))}function Qn(e){let t=e.title;if(!t.length)return S;let r=`__tooltip_${Ga++}`,o=Rt(r,"inline"),n=R(".md-typeset",o);return n.innerHTML=t,C(()=>{let i=new g;return 
i.subscribe({next({offset:a}){o.style.setProperty("--md-tooltip-x",`${a.x}px`),o.style.setProperty("--md-tooltip-y",`${a.y}px`)},complete(){o.style.removeProperty("--md-tooltip-x"),o.style.removeProperty("--md-tooltip-y")}}),O(i.pipe(b(({active:a})=>a)),i.pipe(_e(250),b(({active:a})=>!a))).subscribe({next({active:a}){a?(e.insertAdjacentElement("afterend",o),e.setAttribute("aria-describedby",r),e.removeAttribute("title")):(o.remove(),e.removeAttribute("aria-describedby"),e.setAttribute("title",t))},complete(){o.remove(),e.removeAttribute("aria-describedby"),e.setAttribute("title",t)}}),i.pipe(Me(16,me)).subscribe(({active:a})=>{o.classList.toggle("md-tooltip--active",a)}),i.pipe(pt(125,me),b(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:a})=>a)).subscribe({next(a){a?o.style.setProperty("--md-tooltip-0",`${-a}px`):o.style.removeProperty("--md-tooltip-0")},complete(){o.style.removeProperty("--md-tooltip-0")}}),Ja(o,e).pipe(w(a=>i.next(a)),_(()=>i.complete()),m(a=>$({ref:e},a)))}).pipe(Ke(se))}function Xa({viewport$:e}){if(!B("header.autohide"))return I(!1);let t=e.pipe(m(({offset:{y:n}})=>n),Be(2,1),m(([n,i])=>[nMath.abs(i-n.y)>100),m(([,[n]])=>n),K()),o=ze("search");return z([e,o]).pipe(m(([{offset:n},i])=>n.y>400&&!i),K(),v(n=>n?r:I(!1)),Q(!1))}function Kn(e,t){return C(()=>z([ge(e),Xa(t)])).pipe(m(([{height:r},o])=>({height:r,hidden:o})),K((r,o)=>r.height===o.height&&r.hidden===o.hidden),G(1))}function Yn(e,{header$:t,main$:r}){return C(()=>{let o=new g,n=o.pipe(Z(),ie(!0));o.pipe(te("active"),He(t)).subscribe(([{active:a},{hidden:s}])=>{e.classList.toggle("md-header--shadow",a&&!s),e.hidden=s});let i=ue(P("[title]",e)).pipe(b(()=>B("content.tooltips")),ne(a=>Qn(a)));return r.subscribe(o),t.pipe(W(n),m(a=>$({ref:e},a)),Re(i.pipe(W(n))))})}function Za(e,{viewport$:t,header$:r}){return mr(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:o}})=>{let{height:n}=ce(e);return{active:o>=n}}),te("active"))}function Bn(e,t){return C(()=>{let r=new 
g;r.subscribe({next({active:n}){e.classList.toggle("md-header__title--active",n)},complete(){e.classList.remove("md-header__title--active")}});let o=fe(".md-content h1");return typeof o=="undefined"?S:Za(o,t).pipe(w(n=>r.next(n)),_(()=>r.complete()),m(n=>$({ref:e},n)))})}function Gn(e,{viewport$:t,header$:r}){let o=r.pipe(m(({height:i})=>i),K()),n=o.pipe(v(()=>ge(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),te("bottom"))));return z([o,n,t]).pipe(m(([i,{top:a,bottom:s},{offset:{y:p},size:{height:c}}])=>(c=Math.max(0,c-Math.max(0,a-p,i)-Math.max(0,c+p-s)),{offset:a-i,height:c,active:a-i<=p})),K((i,a)=>i.offset===a.offset&&i.height===a.height&&i.active===a.active))}function es(e){let t=__md_get("__palette")||{index:e.findIndex(o=>matchMedia(o.getAttribute("data-md-color-media")).matches)},r=Math.max(0,Math.min(t.index,e.length-1));return I(...e).pipe(ne(o=>h(o,"change").pipe(m(()=>o))),Q(e[r]),m(o=>({index:e.indexOf(o),color:{media:o.getAttribute("data-md-color-media"),scheme:o.getAttribute("data-md-color-scheme"),primary:o.getAttribute("data-md-color-primary"),accent:o.getAttribute("data-md-color-accent")}})),G(1))}function Jn(e){let t=P("input",e),r=x("meta",{name:"theme-color"});document.head.appendChild(r);let o=x("meta",{name:"color-scheme"});document.head.appendChild(o);let n=Pt("(prefers-color-scheme: light)");return C(()=>{let i=new g;return i.subscribe(a=>{if(document.body.setAttribute("data-md-color-switching",""),a.color.media==="(prefers-color-scheme)"){let s=matchMedia("(prefers-color-scheme: light)"),p=document.querySelector(s.matches?"[data-md-color-media='(prefers-color-scheme: light)']":"[data-md-color-media='(prefers-color-scheme: dark)']");a.color.scheme=p.getAttribute("data-md-color-scheme"),a.color.primary=p.getAttribute("data-md-color-primary"),a.color.accent=p.getAttribute("data-md-color-accent")}for(let[s,p]of Object.entries(a.color))document.body.setAttribute(`data-md-color-${s}`,p);for(let 
s=0;sa.key==="Enter"),re(i,(a,s)=>s)).subscribe(({index:a})=>{a=(a+1)%t.length,t[a].click(),t[a].focus()}),i.pipe(m(()=>{let a=Se("header"),s=window.getComputedStyle(a);return o.content=s.colorScheme,s.backgroundColor.match(/\d+/g).map(p=>(+p).toString(16).padStart(2,"0")).join("")})).subscribe(a=>r.content=`#${a}`),i.pipe(ve(se)).subscribe(()=>{document.body.removeAttribute("data-md-color-switching")}),es(t).pipe(W(n.pipe(Ce(1))),ct(),w(a=>i.next(a)),_(()=>i.complete()),m(a=>$({ref:e},a)))})}function Xn(e,{progress$:t}){return C(()=>{let r=new g;return r.subscribe(({value:o})=>{e.style.setProperty("--md-progress-value",`${o}`)}),t.pipe(w(o=>r.next({value:o})),_(()=>r.complete()),m(o=>({ref:e,value:o})))})}var Jr=Mt(Br());function ts(e){e.setAttribute("data-md-copying","");let t=e.closest("[data-copy]"),r=t?t.getAttribute("data-copy"):e.innerText;return e.removeAttribute("data-md-copying"),r.trimEnd()}function Zn({alert$:e}){Jr.default.isSupported()&&new j(t=>{new Jr.default("[data-clipboard-target], [data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||ts(R(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(w(t=>{t.trigger.focus()}),m(()=>Ee("clipboard.copied"))).subscribe(e)}function ei(e,t){return e.protocol=t.protocol,e.hostname=t.hostname,e}function rs(e,t){let r=new Map;for(let o of P("url",e)){let n=R("loc",o),i=[ei(new URL(n.textContent),t)];r.set(`${i[0]}`,i);for(let a of P("[rel=alternate]",o)){let s=a.getAttribute("href");s!=null&&i.push(ei(new URL(s),t))}}return r}function ur(e){return un(new URL("sitemap.xml",e)).pipe(m(t=>rs(t,new URL(e))),de(()=>I(new Map)))}function os(e,t){if(!(e.target instanceof Element))return S;let r=e.target.closest("a");if(r===null)return S;if(r.target||e.metaKey||e.ctrlKey)return S;let o=new URL(r.href);return o.search=o.hash="",t.has(`${o}`)?(e.preventDefault(),I(new URL(r.href))):S}function ti(e){let t=new Map;for(let r of P(":scope > *",e.head))t.set(r.outerHTML,r);return 
t}function ri(e){for(let t of P("[href], [src]",e))for(let r of["href","src"]){let o=t.getAttribute(r);if(o&&!/^(?:[a-z]+:)?\/\//i.test(o)){t[r]=t[r];break}}return I(e)}function ns(e){for(let o of["[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...B("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let n=fe(o),i=fe(o,e);typeof n!="undefined"&&typeof i!="undefined"&&n.replaceWith(i)}let t=ti(document);for(let[o,n]of ti(e))t.has(o)?t.delete(o):document.head.appendChild(n);for(let o of t.values()){let n=o.getAttribute("name");n!=="theme-color"&&n!=="color-scheme"&&o.remove()}let r=Se("container");return We(P("script",r)).pipe(v(o=>{let n=e.createElement("script");if(o.src){for(let i of o.getAttributeNames())n.setAttribute(i,o.getAttribute(i));return o.replaceWith(n),new j(i=>{n.onload=()=>i.complete()})}else return n.textContent=o.textContent,o.replaceWith(n),S}),Z(),ie(document))}function oi({location$:e,viewport$:t,progress$:r}){let o=xe();if(location.protocol==="file:")return S;let n=ur(o.base);I(document).subscribe(ri);let i=h(document.body,"click").pipe(He(n),v(([p,c])=>os(p,c)),pe()),a=h(window,"popstate").pipe(m(ye),pe());i.pipe(re(t)).subscribe(([p,{offset:c}])=>{history.replaceState(c,""),history.pushState(null,"",p)}),O(i,a).subscribe(e);let s=e.pipe(te("pathname"),v(p=>fn(p,{progress$:r}).pipe(de(()=>(lt(p,!0),S)))),v(ri),v(ns),pe());return O(s.pipe(re(e,(p,c)=>c)),s.pipe(v(()=>e),te("hash")),e.pipe(K((p,c)=>p.pathname===c.pathname&&p.hash===c.hash),v(()=>i),w(()=>history.back()))).subscribe(p=>{var c,l;history.state!==null||!p.hash?window.scrollTo(0,(l=(c=history.state)==null?void 
0:c.y)!=null?l:0):(history.scrollRestoration="auto",pn(p.hash),history.scrollRestoration="manual")}),e.subscribe(()=>{history.scrollRestoration="manual"}),h(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}),t.pipe(te("offset"),_e(100)).subscribe(({offset:p})=>{history.replaceState(p,"")}),s}var ni=Mt(qr());function ii(e){let t=e.separator.split("|").map(n=>n.replace(/(\(\?[!=<][^)]+\))/g,"").length===0?"\uFFFD":n).join("|"),r=new RegExp(t,"img"),o=(n,i,a)=>`${i}${a}`;return n=>{n=n.replace(/[\s*+\-:~^]+/g," ").trim();let i=new RegExp(`(^|${e.separator}|)(${n.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return a=>(0,ni.default)(a).replace(i,o).replace(/<\/mark>(\s+)]*>/img,"$1")}}function jt(e){return e.type===1}function dr(e){return e.type===3}function ai(e,t){let r=yn(e);return O(I(location.protocol!=="file:"),ze("search")).pipe(Ae(o=>o),v(()=>t)).subscribe(({config:o,docs:n})=>r.next({type:0,data:{config:o,docs:n,options:{suggest:B("search.suggest")}}})),r}function si(e){var l;let{selectedVersionSitemap:t,selectedVersionBaseURL:r,currentLocation:o,currentBaseURL:n}=e,i=(l=Xr(n))==null?void 0:l.pathname;if(i===void 0)return;let a=ss(o.pathname,i);if(a===void 0)return;let s=ps(t.keys());if(!t.has(s))return;let p=Xr(a,s);if(!p||!t.has(p.href))return;let c=Xr(a,r);if(c)return c.hash=o.hash,c.search=o.search,c}function Xr(e,t){try{return new URL(e,t)}catch(r){return}}function ss(e,t){if(e.startsWith(t))return e.slice(t.length)}function cs(e,t){let r=Math.min(e.length,t.length),o;for(o=0;oS)),o=r.pipe(m(n=>{let[,i]=t.base.match(/([^/]+)\/?$/);return n.find(({version:a,aliases:s})=>a===i||s.includes(i))||n[0]}));r.pipe(m(n=>new Map(n.map(i=>[`${new URL(`../${i.version}/`,t.base)}`,i]))),v(n=>h(document.body,"click").pipe(b(i=>!i.metaKey&&!i.ctrlKey),re(o),v(([i,a])=>{if(i.target instanceof Element){let s=i.target.closest("a");if(s&&!s.target&&n.has(s.href)){let 
p=s.href;return!i.target.closest(".md-version")&&n.get(p)===a?S:(i.preventDefault(),I(new URL(p)))}}return S}),v(i=>ur(i).pipe(m(a=>{var s;return(s=si({selectedVersionSitemap:a,selectedVersionBaseURL:i,currentLocation:ye(),currentBaseURL:t.base}))!=null?s:i})))))).subscribe(n=>lt(n,!0)),z([r,o]).subscribe(([n,i])=>{R(".md-header__topic").appendChild(Cn(n,i))}),e.pipe(v(()=>o)).subscribe(n=>{var s;let i=new URL(t.base),a=__md_get("__outdated",sessionStorage,i);if(a===null){a=!0;let p=((s=t.version)==null?void 0:s.default)||"latest";Array.isArray(p)||(p=[p]);e:for(let c of p)for(let l of n.aliases.concat(n.version))if(new RegExp(c,"i").test(l)){a=!1;break e}__md_set("__outdated",a,sessionStorage,i)}if(a)for(let p of ae("outdated"))p.hidden=!1})}function ls(e,{worker$:t}){let{searchParams:r}=ye();r.has("q")&&(Je("search",!0),e.value=r.get("q"),e.focus(),ze("search").pipe(Ae(i=>!i)).subscribe(()=>{let i=ye();i.searchParams.delete("q"),history.replaceState({},"",`${i}`)}));let o=et(e),n=O(t.pipe(Ae(jt)),h(e,"keyup"),o).pipe(m(()=>e.value),K());return z([n,o]).pipe(m(([i,a])=>({value:i,focus:a})),G(1))}function pi(e,{worker$:t}){let r=new g,o=r.pipe(Z(),ie(!0));z([t.pipe(Ae(jt)),r],(i,a)=>a).pipe(te("value")).subscribe(({value:i})=>t.next({type:2,data:i})),r.pipe(te("focus")).subscribe(({focus:i})=>{i&&Je("search",i)}),h(e.form,"reset").pipe(W(o)).subscribe(()=>e.focus());let n=R("header [for=__search]");return h(n,"click").subscribe(()=>e.focus()),ls(e,{worker$:t}).pipe(w(i=>r.next(i)),_(()=>r.complete()),m(i=>$({ref:e},i)),G(1))}function li(e,{worker$:t,query$:r}){let o=new g,n=on(e.parentElement).pipe(b(Boolean)),i=e.parentElement,a=R(":scope > :first-child",e),s=R(":scope > :last-child",e);ze("search").subscribe(l=>s.setAttribute("role",l?"list":"presentation")),o.pipe(re(r),Wr(t.pipe(Ae(jt)))).subscribe(([{items:l},{value:f}])=>{switch(l.length){case 0:a.textContent=f.length?Ee("search.result.none"):Ee("search.result.placeholder");break;case 
1:a.textContent=Ee("search.result.one");break;default:let u=sr(l.length);a.textContent=Ee("search.result.other",u)}});let p=o.pipe(w(()=>s.innerHTML=""),v(({items:l})=>O(I(...l.slice(0,10)),I(...l.slice(10)).pipe(Be(4),Dr(n),v(([f])=>f)))),m(Mn),pe());return p.subscribe(l=>s.appendChild(l)),p.pipe(ne(l=>{let f=fe("details",l);return typeof f=="undefined"?S:h(f,"toggle").pipe(W(o),m(()=>f))})).subscribe(l=>{l.open===!1&&l.offsetTop<=i.scrollTop&&i.scrollTo({top:l.offsetTop})}),t.pipe(b(dr),m(({data:l})=>l)).pipe(w(l=>o.next(l)),_(()=>o.complete()),m(l=>$({ref:e},l)))}function ms(e,{query$:t}){return t.pipe(m(({value:r})=>{let o=ye();return o.hash="",r=r.replace(/\s+/g,"+").replace(/&/g,"%26").replace(/=/g,"%3D"),o.search=`q=${r}`,{url:o}}))}function mi(e,t){let r=new g,o=r.pipe(Z(),ie(!0));return r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),h(e,"click").pipe(W(o)).subscribe(n=>n.preventDefault()),ms(e,t).pipe(w(n=>r.next(n)),_(()=>r.complete()),m(n=>$({ref:e},n)))}function fi(e,{worker$:t,keyboard$:r}){let o=new g,n=Se("search-query"),i=O(h(n,"keydown"),h(n,"focus")).pipe(ve(se),m(()=>n.value),K());return o.pipe(He(i),m(([{suggest:s},p])=>{let c=p.split(/([\s-]+)/);if(s!=null&&s.length&&c[c.length-1]){let l=s[s.length-1];l.startsWith(c[c.length-1])&&(c[c.length-1]=l)}else c.length=0;return c})).subscribe(s=>e.innerHTML=s.join("").replace(/\s/g," ")),r.pipe(b(({mode:s})=>s==="search")).subscribe(s=>{switch(s.type){case"ArrowRight":e.innerText.length&&n.selectionStart===n.value.length&&(n.value=e.innerText);break}}),t.pipe(b(dr),m(({data:s})=>s)).pipe(w(s=>o.next(s)),_(()=>o.complete()),m(()=>({ref:e})))}function ui(e,{index$:t,keyboard$:r}){let o=xe();try{let n=ai(o.search,t),i=Se("search-query",e),a=Se("search-result",e);h(e,"click").pipe(b(({target:p})=>p instanceof Element&&!!p.closest("a"))).subscribe(()=>Je("search",!1)),r.pipe(b(({mode:p})=>p==="search")).subscribe(p=>{let c=Ie();switch(p.type){case"Enter":if(c===i){let 
l=new Map;for(let f of P(":first-child [href]",a)){let u=f.firstElementChild;l.set(f,parseFloat(u.getAttribute("data-md-score")))}if(l.size){let[[f]]=[...l].sort(([,u],[,d])=>d-u);f.click()}p.claim()}break;case"Escape":case"Tab":Je("search",!1),i.blur();break;case"ArrowUp":case"ArrowDown":if(typeof c=="undefined")i.focus();else{let l=[i,...P(":not(details) > [href], summary, details[open] [href]",a)],f=Math.max(0,(Math.max(0,l.indexOf(c))+l.length+(p.type==="ArrowUp"?-1:1))%l.length);l[f].focus()}p.claim();break;default:i!==Ie()&&i.focus()}}),r.pipe(b(({mode:p})=>p==="global")).subscribe(p=>{switch(p.type){case"f":case"s":case"/":i.focus(),i.select(),p.claim();break}});let s=pi(i,{worker$:n});return O(s,li(a,{worker$:n,query$:s})).pipe(Re(...ae("search-share",e).map(p=>mi(p,{query$:s})),...ae("search-suggest",e).map(p=>fi(p,{worker$:n,keyboard$:r}))))}catch(n){return e.hidden=!0,Ye}}function di(e,{index$:t,location$:r}){return z([t,r.pipe(Q(ye()),b(o=>!!o.searchParams.get("h")))]).pipe(m(([o,n])=>ii(o.config)(n.searchParams.get("h"))),m(o=>{var a;let n=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let s=i.nextNode();s;s=i.nextNode())if((a=s.parentElement)!=null&&a.offsetHeight){let p=s.textContent,c=o(p);c.length>p.length&&n.set(s,c)}for(let[s,p]of n){let{childNodes:c}=x("span",null,p);s.replaceWith(...Array.from(c))}return{ref:e,nodes:n}}))}function fs(e,{viewport$:t,main$:r}){let o=e.closest(".md-grid"),n=o.offsetTop-o.parentElement.offsetTop;return z([r,t]).pipe(m(([{offset:i,height:a},{offset:{y:s}}])=>(a=a+Math.min(n,Math.max(0,s-i))-n,{height:a,locked:s>=i+n})),K((i,a)=>i.height===a.height&&i.locked===a.locked))}function Zr(e,o){var n=o,{header$:t}=n,r=so(n,["header$"]);let i=R(".md-sidebar__scrollwrap",e),{y:a}=De(i);return C(()=>{let s=new g,p=s.pipe(Z(),ie(!0)),c=s.pipe(Me(0,me));return 
c.pipe(re(t)).subscribe({next([{height:l},{height:f}]){i.style.height=`${l-2*a}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),c.pipe(Ae()).subscribe(()=>{for(let l of P(".md-nav__link--active[href]",e)){if(!l.clientHeight)continue;let f=l.closest(".md-sidebar__scrollwrap");if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=ce(f);f.scrollTo({top:u-d/2})}}}),ue(P("label[tabindex]",e)).pipe(ne(l=>h(l,"click").pipe(ve(se),m(()=>l),W(p)))).subscribe(l=>{let f=R(`[id="${l.htmlFor}"]`);R(`[aria-labelledby="${l.id}"]`).setAttribute("aria-expanded",`${f.checked}`)}),fs(e,r).pipe(w(l=>s.next(l)),_(()=>s.complete()),m(l=>$({ref:e},l)))})}function hi(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return st(je(`${r}/releases/latest`).pipe(de(()=>S),m(o=>({version:o.tag_name})),Ve({})),je(r).pipe(de(()=>S),m(o=>({stars:o.stargazers_count,forks:o.forks_count})),Ve({}))).pipe(m(([o,n])=>$($({},o),n)))}else{let r=`https://api.github.com/users/${e}`;return je(r).pipe(m(o=>({repositories:o.public_repos})),Ve({}))}}function bi(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return st(je(`${r}/releases/permalink/latest`).pipe(de(()=>S),m(({tag_name:o})=>({version:o})),Ve({})),je(r).pipe(de(()=>S),m(({star_count:o,forks_count:n})=>({stars:o,forks:n})),Ve({}))).pipe(m(([o,n])=>$($({},o),n)))}function vi(e){let t=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);if(t){let[,r,o]=t;return hi(r,o)}if(t=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i),t){let[,r,o]=t;return bi(r,o)}return S}var us;function ds(e){return us||(us=C(()=>{let t=__md_get("__source",sessionStorage);if(t)return I(t);if(ae("consent").length){let o=__md_get("__consent");if(!(o&&o.github))return S}return vi(e.href).pipe(w(o=>__md_set("__source",o,sessionStorage)))}).pipe(de(()=>S),b(t=>Object.keys(t).length>0),m(t=>({facts:t})),G(1)))}function gi(e){let t=R(":scope > :last-child",e);return C(()=>{let r=new g;return 
r.subscribe(({facts:o})=>{t.appendChild(_n(o)),t.classList.add("md-source__repository--active")}),ds(e).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}function hs(e,{viewport$:t,header$:r}){return ge(document.body).pipe(v(()=>mr(e,{header$:r,viewport$:t})),m(({offset:{y:o}})=>({hidden:o>=10})),te("hidden"))}function yi(e,t){return C(()=>{let r=new g;return r.subscribe({next({hidden:o}){e.hidden=o},complete(){e.hidden=!1}}),(B("navigation.tabs.sticky")?I({hidden:!1}):hs(e,t)).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}function bs(e,{viewport$:t,header$:r}){let o=new Map,n=P(".md-nav__link",e);for(let s of n){let p=decodeURIComponent(s.hash.substring(1)),c=fe(`[id="${p}"]`);typeof c!="undefined"&&o.set(s,c)}let i=r.pipe(te("height"),m(({height:s})=>{let p=Se("main"),c=R(":scope > :first-child",p);return s+.8*(c.offsetTop-p.offsetTop)}),pe());return ge(document.body).pipe(te("height"),v(s=>C(()=>{let p=[];return I([...o].reduce((c,[l,f])=>{for(;p.length&&o.get(p[p.length-1]).tagName>=f.tagName;)p.pop();let u=f.offsetTop;for(;!u&&f.parentElement;)f=f.parentElement,u=f.offsetTop;let d=f.offsetParent;for(;d;d=d.offsetParent)u+=d.offsetTop;return c.set([...p=[...p,l]].reverse(),u)},new Map))}).pipe(m(p=>new Map([...p].sort(([,c],[,l])=>c-l))),He(i),v(([p,c])=>t.pipe(Fr(([l,f],{offset:{y:u},size:d})=>{let y=u+d.height>=Math.floor(s.height);for(;f.length;){let[,L]=f[0];if(L-c=u&&!y)f=[l.pop(),...f];else break}return[l,f]},[[],[...p]]),K((l,f)=>l[0]===f[0]&&l[1]===f[1])))))).pipe(m(([s,p])=>({prev:s.map(([c])=>c),next:p.map(([c])=>c)})),Q({prev:[],next:[]}),Be(2,1),m(([s,p])=>s.prev.length{let i=new g,a=i.pipe(Z(),ie(!0));if(i.subscribe(({prev:s,next:p})=>{for(let[c]of p)c.classList.remove("md-nav__link--passed"),c.classList.remove("md-nav__link--active");for(let[c,[l]]of s.entries())l.classList.add("md-nav__link--passed"),l.classList.toggle("md-nav__link--active",c===s.length-1)}),B("toc.follow")){let 
s=O(t.pipe(_e(1),m(()=>{})),t.pipe(_e(250),m(()=>"smooth")));i.pipe(b(({prev:p})=>p.length>0),He(o.pipe(ve(se))),re(s)).subscribe(([[{prev:p}],c])=>{let[l]=p[p.length-1];if(l.offsetHeight){let f=cr(l);if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=ce(f);f.scrollTo({top:u-d/2,behavior:c})}}})}return B("navigation.tracking")&&t.pipe(W(a),te("offset"),_e(250),Ce(1),W(n.pipe(Ce(1))),ct({delay:250}),re(i)).subscribe(([,{prev:s}])=>{let p=ye(),c=s[s.length-1];if(c&&c.length){let[l]=c,{hash:f}=new URL(l.href);p.hash!==f&&(p.hash=f,history.replaceState({},"",`${p}`))}else p.hash="",history.replaceState({},"",`${p}`)}),bs(e,{viewport$:t,header$:r}).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))})}function vs(e,{viewport$:t,main$:r,target$:o}){let n=t.pipe(m(({offset:{y:a}})=>a),Be(2,1),m(([a,s])=>a>s&&s>0),K()),i=r.pipe(m(({active:a})=>a));return z([i,n]).pipe(m(([a,s])=>!(a&&s)),K(),W(o.pipe(Ce(1))),ie(!0),ct({delay:250}),m(a=>({hidden:a})))}function Ei(e,{viewport$:t,header$:r,main$:o,target$:n}){let i=new g,a=i.pipe(Z(),ie(!0));return i.subscribe({next({hidden:s}){e.hidden=s,s?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(W(a),te("height")).subscribe(({height:s})=>{e.style.top=`${s+16}px`}),h(e,"click").subscribe(s=>{s.preventDefault(),window.scrollTo({top:0})}),vs(e,{viewport$:t,main$:o,target$:n}).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))}function wi({document$:e,viewport$:t}){e.pipe(v(()=>P(".md-ellipsis")),ne(r=>tt(r).pipe(W(e.pipe(Ce(1))),b(o=>o),m(()=>r),Te(1))),b(r=>r.offsetWidth{let o=r.innerText,n=r.closest("a")||r;return n.title=o,B("content.tooltips")?mt(n,{viewport$:t}).pipe(W(e.pipe(Ce(1))),_(()=>n.removeAttribute("title"))):S})).subscribe(),B("content.tooltips")&&e.pipe(v(()=>P(".md-status")),ne(r=>mt(r,{viewport$:t}))).subscribe()}function 
Ti({document$:e,tablet$:t}){e.pipe(v(()=>P(".md-toggle--indeterminate")),w(r=>{r.indeterminate=!0,r.checked=!1}),ne(r=>h(r,"change").pipe(Vr(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),re(t)).subscribe(([r,o])=>{r.classList.remove("md-toggle--indeterminate"),o&&(r.checked=!1)})}function gs(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function Si({document$:e}){e.pipe(v(()=>P("[data-md-scrollfix]")),w(t=>t.removeAttribute("data-md-scrollfix")),b(gs),ne(t=>h(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function Oi({viewport$:e,tablet$:t}){z([ze("search"),t]).pipe(m(([r,o])=>r&&!o),v(r=>I(r).pipe(Ge(r?400:100))),re(e)).subscribe(([r,{offset:{y:o}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${o}px`;else{let n=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",n&&window.scrollTo(0,n)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let o=e[r];typeof o=="string"?o=document.createTextNode(o):o.parentNode&&o.parentNode.removeChild(o),r?t.insertBefore(this.previousSibling,o):t.replaceChild(o,this)}}}));function ys(){return location.protocol==="file:"?Tt(`${new URL("search/search_index.js",eo.base)}`).pipe(m(()=>__index),G(1)):je(new 
URL("search/search_index.json",eo.base))}document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var ot=Go(),Ut=sn(),Lt=ln(Ut),to=an(),Oe=gn(),hr=Pt("(min-width: 960px)"),Mi=Pt("(min-width: 1220px)"),_i=mn(),eo=xe(),Ai=document.forms.namedItem("search")?ys():Ye,ro=new g;Zn({alert$:ro});var oo=new g;B("navigation.instant")&&oi({location$:Ut,viewport$:Oe,progress$:oo}).subscribe(ot);var Li;((Li=eo.version)==null?void 0:Li.provider)==="mike"&&ci({document$:ot});O(Ut,Lt).pipe(Ge(125)).subscribe(()=>{Je("drawer",!1),Je("search",!1)});to.pipe(b(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=fe("link[rel=prev]");typeof t!="undefined"&<(t);break;case"n":case".":let r=fe("link[rel=next]");typeof r!="undefined"&<(r);break;case"Enter":let o=Ie();o instanceof HTMLLabelElement&&o.click()}});wi({viewport$:Oe,document$:ot});Ti({document$:ot,tablet$:hr});Si({document$:ot});Oi({viewport$:Oe,tablet$:hr});var rt=Kn(Se("header"),{viewport$:Oe}),Ft=ot.pipe(m(()=>Se("main")),v(e=>Gn(e,{viewport$:Oe,header$:rt})),G(1)),xs=O(...ae("consent").map(e=>En(e,{target$:Lt})),...ae("dialog").map(e=>qn(e,{alert$:ro})),...ae("palette").map(e=>Jn(e)),...ae("progress").map(e=>Xn(e,{progress$:oo})),...ae("search").map(e=>ui(e,{index$:Ai,keyboard$:to})),...ae("source").map(e=>gi(e))),Es=C(()=>O(...ae("announce").map(e=>xn(e)),...ae("content").map(e=>zn(e,{viewport$:Oe,target$:Lt,print$:_i})),...ae("content").map(e=>B("search.highlight")?di(e,{index$:Ai,location$:Ut}):S),...ae("header").map(e=>Yn(e,{viewport$:Oe,header$:rt,main$:Ft})),...ae("header-title").map(e=>Bn(e,{viewport$:Oe,header$:rt})),...ae("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?Nr(Mi,()=>Zr(e,{viewport$:Oe,header$:rt,main$:Ft})):Nr(hr,()=>Zr(e,{viewport$:Oe,header$:rt,main$:Ft}))),...ae("tabs").map(e=>yi(e,{viewport$:Oe,header$:rt})),...ae("toc").map(e=>xi(e,{viewport$:Oe,header$:rt,main$:Ft,target$:Lt})),...ae("top").map(e=>Ei(e,{viewport$:Oe,head
er$:rt,main$:Ft,target$:Lt})))),Ci=ot.pipe(v(()=>Es),Re(xs),G(1));Ci.subscribe();window.document$=ot;window.location$=Ut;window.target$=Lt;window.keyboard$=to;window.viewport$=Oe;window.tablet$=hr;window.screen$=Mi;window.print$=_i;window.alert$=ro;window.progress$=oo;window.component$=Ci;})(); +//# sourceMappingURL=bundle.f1b6f286.min.js.map + diff --git a/deployment/25.10.3/assets/javascripts/bundle.f1b6f286.min.js.map b/deployment/25.10.3/assets/javascripts/bundle.f1b6f286.min.js.map new file mode 100644 index 00000000..2644bf10 --- /dev/null +++ b/deployment/25.10.3/assets/javascripts/bundle.f1b6f286.min.js.map @@ -0,0 +1,7 @@ +{ + "version": 3, + "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/escape-html/index.js", "node_modules/clipboard/dist/clipboard.js", "src/templates/assets/javascripts/bundle.ts", "node_modules/tslib/tslib.es6.mjs", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", 
"node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/BehaviorSubject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/scheduler/QueueAction.ts", "node_modules/rxjs/src/internal/scheduler/QueueScheduler.ts", "node_modules/rxjs/src/internal/scheduler/queue.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", "node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", 
"node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", "node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/EmptyError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", 
"node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/debounce.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", "node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/throwIfEmpty.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/first.ts", "node_modules/rxjs/src/internal/operators/takeLast.ts", "node_modules/rxjs/src/internal/operators/merge.ts", "node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", "node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", "node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", "node_modules/rxjs/src/internal/operators/zipWith.ts", 
"src/templates/assets/javascripts/browser/document/index.ts", "src/templates/assets/javascripts/browser/element/_/index.ts", "src/templates/assets/javascripts/browser/element/focus/index.ts", "src/templates/assets/javascripts/browser/element/hover/index.ts", "src/templates/assets/javascripts/utilities/h/index.ts", "src/templates/assets/javascripts/utilities/round/index.ts", "src/templates/assets/javascripts/browser/script/index.ts", "src/templates/assets/javascripts/browser/element/size/_/index.ts", "src/templates/assets/javascripts/browser/element/size/content/index.ts", "src/templates/assets/javascripts/browser/element/offset/_/index.ts", "src/templates/assets/javascripts/browser/element/offset/content/index.ts", "src/templates/assets/javascripts/browser/element/visibility/index.ts", "src/templates/assets/javascripts/browser/toggle/index.ts", "src/templates/assets/javascripts/browser/keyboard/index.ts", "src/templates/assets/javascripts/browser/location/_/index.ts", "src/templates/assets/javascripts/browser/location/hash/index.ts", "src/templates/assets/javascripts/browser/media/index.ts", "src/templates/assets/javascripts/browser/request/index.ts", "src/templates/assets/javascripts/browser/viewport/offset/index.ts", "src/templates/assets/javascripts/browser/viewport/size/index.ts", "src/templates/assets/javascripts/browser/viewport/_/index.ts", "src/templates/assets/javascripts/browser/viewport/at/index.ts", "src/templates/assets/javascripts/browser/worker/index.ts", "src/templates/assets/javascripts/_/index.ts", "src/templates/assets/javascripts/components/_/index.ts", "src/templates/assets/javascripts/components/announce/index.ts", "src/templates/assets/javascripts/components/consent/index.ts", "src/templates/assets/javascripts/templates/tooltip/index.tsx", "src/templates/assets/javascripts/templates/annotation/index.tsx", "src/templates/assets/javascripts/templates/clipboard/index.tsx", "src/templates/assets/javascripts/templates/search/index.tsx", 
"src/templates/assets/javascripts/templates/source/index.tsx", "src/templates/assets/javascripts/templates/tabbed/index.tsx", "src/templates/assets/javascripts/templates/table/index.tsx", "src/templates/assets/javascripts/templates/version/index.tsx", "src/templates/assets/javascripts/components/tooltip2/index.ts", "src/templates/assets/javascripts/components/content/annotation/_/index.ts", "src/templates/assets/javascripts/components/content/annotation/list/index.ts", "src/templates/assets/javascripts/components/content/annotation/block/index.ts", "src/templates/assets/javascripts/components/content/code/_/index.ts", "src/templates/assets/javascripts/components/content/details/index.ts", "src/templates/assets/javascripts/components/content/mermaid/index.css", "src/templates/assets/javascripts/components/content/mermaid/index.ts", "src/templates/assets/javascripts/components/content/table/index.ts", "src/templates/assets/javascripts/components/content/tabs/index.ts", "src/templates/assets/javascripts/components/content/_/index.ts", "src/templates/assets/javascripts/components/dialog/index.ts", "src/templates/assets/javascripts/components/tooltip/index.ts", "src/templates/assets/javascripts/components/header/_/index.ts", "src/templates/assets/javascripts/components/header/title/index.ts", "src/templates/assets/javascripts/components/main/index.ts", "src/templates/assets/javascripts/components/palette/index.ts", "src/templates/assets/javascripts/components/progress/index.ts", "src/templates/assets/javascripts/integrations/clipboard/index.ts", "src/templates/assets/javascripts/integrations/sitemap/index.ts", "src/templates/assets/javascripts/integrations/instant/index.ts", "src/templates/assets/javascripts/integrations/search/highlighter/index.ts", "src/templates/assets/javascripts/integrations/search/worker/message/index.ts", "src/templates/assets/javascripts/integrations/search/worker/_/index.ts", 
"src/templates/assets/javascripts/integrations/version/findurl/index.ts", "src/templates/assets/javascripts/integrations/version/index.ts", "src/templates/assets/javascripts/components/search/query/index.ts", "src/templates/assets/javascripts/components/search/result/index.ts", "src/templates/assets/javascripts/components/search/share/index.ts", "src/templates/assets/javascripts/components/search/suggest/index.ts", "src/templates/assets/javascripts/components/search/_/index.ts", "src/templates/assets/javascripts/components/search/highlight/index.ts", "src/templates/assets/javascripts/components/sidebar/index.ts", "src/templates/assets/javascripts/components/source/facts/github/index.ts", "src/templates/assets/javascripts/components/source/facts/gitlab/index.ts", "src/templates/assets/javascripts/components/source/facts/_/index.ts", "src/templates/assets/javascripts/components/source/_/index.ts", "src/templates/assets/javascripts/components/tabs/index.ts", "src/templates/assets/javascripts/components/toc/index.ts", "src/templates/assets/javascripts/components/top/index.ts", "src/templates/assets/javascripts/patches/ellipsis/index.ts", "src/templates/assets/javascripts/patches/indeterminate/index.ts", "src/templates/assets/javascripts/patches/scrollfix/index.ts", "src/templates/assets/javascripts/patches/scrolllock/index.ts", "src/templates/assets/javascripts/polyfills/index.ts"], + "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? factory() :\n typeof define === 'function' && define.amd ? 
define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. 
whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. 
Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. 
a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n 
document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. 
For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. 
So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to 
escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = 
/*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n fakeElement.style.fontSize = '12pt'; // Reset box model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 
'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. 
https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. 
Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? 
\"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if 
(self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? 
arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} 
useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || 
exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener 
to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName === 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from 
https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) 
{\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*\n * Copyright (c) 2016-2025 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"focus-visible\"\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getActiveElement,\n getOptionalElement,\n requestJSON,\n setLocation,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchScript,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountProgress,\n mountSearch,\n mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantNavigation,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchEllipsis,\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Functions - @todo refactor\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch search index\n *\n * @returns Search index observable\n */\nfunction fetchSearchIndex(): Observable {\n if (location.protocol === \"file:\") {\n return watchScript(\n `${new URL(\"search/search_index.js\", config.base)}`\n )\n .pipe(\n // @ts-ignore - @todo fix typings\n map(() => __index),\n shareReplay(1)\n )\n } else {\n return requestJSON(\n new URL(\"search/search_index.json\", config.base)\n 
)\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Application\n * ------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget(location$)\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? fetchSearchIndex()\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up progress indicator */\nconst progress$ = new Subject()\n\n/* Set up instant navigation, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantNavigation({ location$, viewport$, progress$ })\n .subscribe(document$)\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"link[rel=prev]\")\n if (typeof prev !== \"undefined\")\n setLocation(prev)\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"link[rel=next]\")\n 
if (typeof next !== \"undefined\")\n setLocation(next)\n break\n\n /* Expand navigation, see https://bit.ly/3ZjG5io */\n case \"Enter\":\n const active = getActiveElement()\n if (active instanceof HTMLLabelElement)\n active.click()\n }\n })\n\n/* Set up patches */\npatchEllipsis({ viewport$, document$ })\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog */\n ...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Progress bar */\n ...getComponentElements(\"progress\")\n .map(el => mountProgress(el, { progress$ })),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n .map(el => mountAnnounce(el)),\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? 
mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, {\n viewport$, header$, main$, target$\n })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.progress$ = progress$ /* Progress indicator subject */\nwindow.component$ = component$ /* Component observable */\n", 
"/******************************************************************************\nCopyright (c) Microsoft Corporation.\n\nPermission to use, copy, modify, and/or distribute this software for any\npurpose with or without fee is hereby granted.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\nAND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\nPERFORMANCE OF THIS SOFTWARE.\n***************************************************************************** */\n/* global Reflect, Promise, SuppressedError, Symbol, Iterator */\n\nvar extendStatics = function(d, b) {\n extendStatics = Object.setPrototypeOf ||\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\n return extendStatics(d, b);\n};\n\nexport function __extends(d, b) {\n if (typeof b !== \"function\" && b !== null)\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\n extendStatics(d, b);\n function __() { this.constructor = d; }\n d.prototype = b === null ? 
Object.create(b) : (__.prototype = b.prototype, new __());\n}\n\nexport var __assign = function() {\n __assign = Object.assign || function __assign(t) {\n for (var s, i = 1, n = arguments.length; i < n; i++) {\n s = arguments[i];\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\n }\n return t;\n }\n return __assign.apply(this, arguments);\n}\n\nexport function __rest(s, e) {\n var t = {};\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\n t[p] = s[p];\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\n t[p[i]] = s[p[i]];\n }\n return t;\n}\n\nexport function __decorate(decorators, target, key, desc) {\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\n return c > 3 && r && Object.defineProperty(target, key, r), r;\n}\n\nexport function __param(paramIndex, decorator) {\n return function (target, key) { decorator(target, key, paramIndex); }\n}\n\nexport function __esDecorate(ctor, descriptorIn, decorators, contextIn, initializers, extraInitializers) {\n function accept(f) { if (f !== void 0 && typeof f !== \"function\") throw new TypeError(\"Function expected\"); return f; }\n var kind = contextIn.kind, key = kind === \"getter\" ? \"get\" : kind === \"setter\" ? \"set\" : \"value\";\n var target = !descriptorIn && ctor ? contextIn[\"static\"] ? ctor : ctor.prototype : null;\n var descriptor = descriptorIn || (target ? 
Object.getOwnPropertyDescriptor(target, contextIn.name) : {});\n var _, done = false;\n for (var i = decorators.length - 1; i >= 0; i--) {\n var context = {};\n for (var p in contextIn) context[p] = p === \"access\" ? {} : contextIn[p];\n for (var p in contextIn.access) context.access[p] = contextIn.access[p];\n context.addInitializer = function (f) { if (done) throw new TypeError(\"Cannot add initializers after decoration has completed\"); extraInitializers.push(accept(f || null)); };\n var result = (0, decorators[i])(kind === \"accessor\" ? { get: descriptor.get, set: descriptor.set } : descriptor[key], context);\n if (kind === \"accessor\") {\n if (result === void 0) continue;\n if (result === null || typeof result !== \"object\") throw new TypeError(\"Object expected\");\n if (_ = accept(result.get)) descriptor.get = _;\n if (_ = accept(result.set)) descriptor.set = _;\n if (_ = accept(result.init)) initializers.unshift(_);\n }\n else if (_ = accept(result)) {\n if (kind === \"field\") initializers.unshift(_);\n else descriptor[key] = _;\n }\n }\n if (target) Object.defineProperty(target, contextIn.name, descriptor);\n done = true;\n};\n\nexport function __runInitializers(thisArg, initializers, value) {\n var useValue = arguments.length > 2;\n for (var i = 0; i < initializers.length; i++) {\n value = useValue ? initializers[i].call(thisArg, value) : initializers[i].call(thisArg);\n }\n return useValue ? value : void 0;\n};\n\nexport function __propKey(x) {\n return typeof x === \"symbol\" ? x : \"\".concat(x);\n};\n\nexport function __setFunctionName(f, name, prefix) {\n if (typeof name === \"symbol\") name = name.description ? \"[\".concat(name.description, \"]\") : \"\";\n return Object.defineProperty(f, \"name\", { configurable: true, value: prefix ? 
\"\".concat(prefix, \" \", name) : name });\n};\n\nexport function __metadata(metadataKey, metadataValue) {\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\n}\n\nexport function __awaiter(thisArg, _arguments, P, generator) {\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\n return new (P || (P = Promise))(function (resolve, reject) {\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\n function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\n step((generator = generator.apply(thisArg, _arguments || [])).next());\n });\n}\n\nexport function __generator(thisArg, body) {\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g = Object.create((typeof Iterator === \"function\" ? Iterator : Object).prototype);\n return g.next = verb(0), g[\"throw\"] = verb(1), g[\"return\"] = verb(2), typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\n function verb(n) { return function (v) { return step([n, v]); }; }\n function step(op) {\n if (f) throw new TypeError(\"Generator is already executing.\");\n while (g && (g = 0, op[0] && (_ = 0)), _) try {\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? 
y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\n if (y = 0, t) op = [op[0] & 2, t.value];\n switch (op[0]) {\n case 0: case 1: t = op; break;\n case 4: _.label++; return { value: op[1], done: false };\n case 5: _.label++; y = op[1]; op = [0]; continue;\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\n default:\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\n if (t[2]) _.ops.pop();\n _.trys.pop(); continue;\n }\n op = body.call(thisArg, _);\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\n }\n}\n\nexport var __createBinding = Object.create ? (function(o, m, k, k2) {\n if (k2 === undefined) k2 = k;\n var desc = Object.getOwnPropertyDescriptor(m, k);\n if (!desc || (\"get\" in desc ? !m.__esModule : desc.writable || desc.configurable)) {\n desc = { enumerable: true, get: function() { return m[k]; } };\n }\n Object.defineProperty(o, k2, desc);\n}) : (function(o, m, k, k2) {\n if (k2 === undefined) k2 = k;\n o[k2] = m[k];\n});\n\nexport function __exportStar(m, o) {\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\n}\n\nexport function __values(o) {\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\n if (m) return m.call(o);\n if (o && typeof o.length === \"number\") return {\n next: function () {\n if (o && i >= o.length) o = void 0;\n return { value: o && o[i++], done: !o };\n }\n };\n throw new TypeError(s ? 
\"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\n}\n\nexport function __read(o, n) {\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\n if (!m) return o;\n var i = m.call(o), r, ar = [], e;\n try {\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\n }\n catch (error) { e = { error: error }; }\n finally {\n try {\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\n }\n finally { if (e) throw e.error; }\n }\n return ar;\n}\n\n/** @deprecated */\nexport function __spread() {\n for (var ar = [], i = 0; i < arguments.length; i++)\n ar = ar.concat(__read(arguments[i]));\n return ar;\n}\n\n/** @deprecated */\nexport function __spreadArrays() {\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\n r[k] = a[j];\n return r;\n}\n\nexport function __spreadArray(to, from, pack) {\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\n if (ar || !(i in from)) {\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\n ar[i] = from[i];\n }\n }\n return to.concat(ar || Array.prototype.slice.call(from));\n}\n\nexport function __await(v) {\n return this instanceof __await ? (this.v = v, this) : new __await(v);\n}\n\nexport function __asyncGenerator(thisArg, _arguments, generator) {\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\n return i = Object.create((typeof AsyncIterator === \"function\" ? 
AsyncIterator : Object).prototype), verb(\"next\"), verb(\"throw\"), verb(\"return\", awaitReturn), i[Symbol.asyncIterator] = function () { return this; }, i;\n function awaitReturn(f) { return function (v) { return Promise.resolve(v).then(f, reject); }; }\n function verb(n, f) { if (g[n]) { i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; if (f) i[n] = f(i[n]); } }\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\n function fulfill(value) { resume(\"next\", value); }\n function reject(value) { resume(\"throw\", value); }\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\n}\n\nexport function __asyncDelegator(o) {\n var i, p;\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: false } : f ? f(v) : v; } : f; }\n}\n\nexport function __asyncValues(o) {\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\n var m = o[Symbol.asyncIterator], i;\n return m ? m.call(o) : (o = typeof __values === \"function\" ? 
__values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\n}\n\nexport function __makeTemplateObject(cooked, raw) {\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\n return cooked;\n};\n\nvar __setModuleDefault = Object.create ? (function(o, v) {\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\n}) : function(o, v) {\n o[\"default\"] = v;\n};\n\nexport function __importStar(mod) {\n if (mod && mod.__esModule) return mod;\n var result = {};\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\n __setModuleDefault(result, mod);\n return result;\n}\n\nexport function __importDefault(mod) {\n return (mod && mod.__esModule) ? mod : { default: mod };\n}\n\nexport function __classPrivateFieldGet(receiver, state, kind, f) {\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? f.value : state.get(receiver);\n}\n\nexport function __classPrivateFieldSet(receiver, state, value, kind, f) {\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\n if (typeof state === \"function\" ? 
receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\n return (kind === \"a\" ? f.call(receiver, value) : f ? f.value = value : state.set(receiver, value)), value;\n}\n\nexport function __classPrivateFieldIn(state, receiver) {\n if (receiver === null || (typeof receiver !== \"object\" && typeof receiver !== \"function\")) throw new TypeError(\"Cannot use 'in' operator on non-object\");\n return typeof state === \"function\" ? receiver === state : state.has(receiver);\n}\n\nexport function __addDisposableResource(env, value, async) {\n if (value !== null && value !== void 0) {\n if (typeof value !== \"object\" && typeof value !== \"function\") throw new TypeError(\"Object expected.\");\n var dispose, inner;\n if (async) {\n if (!Symbol.asyncDispose) throw new TypeError(\"Symbol.asyncDispose is not defined.\");\n dispose = value[Symbol.asyncDispose];\n }\n if (dispose === void 0) {\n if (!Symbol.dispose) throw new TypeError(\"Symbol.dispose is not defined.\");\n dispose = value[Symbol.dispose];\n if (async) inner = dispose;\n }\n if (typeof dispose !== \"function\") throw new TypeError(\"Object not disposable.\");\n if (inner) dispose = function() { try { inner.call(this); } catch (e) { return Promise.reject(e); } };\n env.stack.push({ value: value, dispose: dispose, async: async });\n }\n else if (async) {\n env.stack.push({ async: true });\n }\n return value;\n}\n\nvar _SuppressedError = typeof SuppressedError === \"function\" ? SuppressedError : function (error, suppressed, message) {\n var e = new Error(message);\n return e.name = \"SuppressedError\", e.error = error, e.suppressed = suppressed, e;\n};\n\nexport function __disposeResources(env) {\n function fail(e) {\n env.error = env.hasError ? 
new _SuppressedError(e, env.error, \"An error was suppressed during disposal.\") : e;\n env.hasError = true;\n }\n var r, s = 0;\n function next() {\n while (r = env.stack.pop()) {\n try {\n if (!r.async && s === 1) return s = 0, env.stack.push(r), Promise.resolve().then(next);\n if (r.dispose) {\n var result = r.dispose.call(r.value);\n if (r.async) return s |= 2, Promise.resolve(result).then(next, function(e) { fail(e); return next(); });\n }\n else s |= 1;\n }\n catch (e) {\n fail(e);\n }\n }\n if (s === 1) return env.hasError ? Promise.reject(env.error) : Promise.resolve();\n if (env.hasError) throw env.error;\n }\n return next();\n}\n\nexport default {\n __extends,\n __assign,\n __rest,\n __decorate,\n __param,\n __metadata,\n __awaiter,\n __generator,\n __createBinding,\n __exportStar,\n __values,\n __read,\n __spread,\n __spreadArrays,\n __spreadArray,\n __await,\n __asyncGenerator,\n __asyncDelegator,\n __asyncValues,\n __makeTemplateObject,\n __importStar,\n __importDefault,\n __classPrivateFieldGet,\n __classPrivateFieldSet,\n __classPrivateFieldIn,\n __addDisposableResource,\n __disposeResources,\n};\n", "/**\n * Returns true if the object is a function.\n * @param value The value to check\n */\nexport function isFunction(value: any): value is (...args: any[]) => any {\n return typeof value === 'function';\n}\n", "/**\n * Used to create Error subclasses until the community moves away from ES5.\n *\n * This is because compiling from TypeScript down to ES5 has issues with subclassing Errors\n * as well as other built-in types: https://github.com/Microsoft/TypeScript/issues/12123\n *\n * @param createImpl A factory function to create the actual constructor implementation. 
The returned\n * function should be a named function that calls `_super` internally.\n */\nexport function createErrorClass(createImpl: (_super: any) => any): T {\n const _super = (instance: any) => {\n Error.call(instance);\n instance.stack = new Error().stack;\n };\n\n const ctorFunc = createImpl(_super);\n ctorFunc.prototype = Object.create(Error.prototype);\n ctorFunc.prototype.constructor = ctorFunc;\n return ctorFunc;\n}\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface UnsubscriptionError extends Error {\n readonly errors: any[];\n}\n\nexport interface UnsubscriptionErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (errors: any[]): UnsubscriptionError;\n}\n\n/**\n * An error thrown when one or more errors have occurred during the\n * `unsubscribe` of a {@link Subscription}.\n */\nexport const UnsubscriptionError: UnsubscriptionErrorCtor = createErrorClass(\n (_super) =>\n function UnsubscriptionErrorImpl(this: any, errors: (Error | string)[]) {\n _super(this);\n this.message = errors\n ? 
`${errors.length} errors occurred during unsubscription:\n${errors.map((err, i) => `${i + 1}) ${err.toString()}`).join('\\n ')}`\n : '';\n this.name = 'UnsubscriptionError';\n this.errors = errors;\n }\n);\n", "/**\n * Removes an item from an array, mutating it.\n * @param arr The array to remove the item from\n * @param item The item to remove\n */\nexport function arrRemove(arr: T[] | undefined | null, item: T) {\n if (arr) {\n const index = arr.indexOf(item);\n 0 <= index && arr.splice(index, 1);\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { UnsubscriptionError } from './util/UnsubscriptionError';\nimport { SubscriptionLike, TeardownLogic, Unsubscribable } from './types';\nimport { arrRemove } from './util/arrRemove';\n\n/**\n * Represents a disposable resource, such as the execution of an Observable. A\n * Subscription has one important method, `unsubscribe`, that takes no argument\n * and just disposes the resource held by the subscription.\n *\n * Additionally, subscriptions may be grouped together through the `add()`\n * method, which will attach a child Subscription to the current Subscription.\n * When a Subscription is unsubscribed, all its children (and its grandchildren)\n * will be unsubscribed as well.\n *\n * @class Subscription\n */\nexport class Subscription implements SubscriptionLike {\n /** @nocollapse */\n public static EMPTY = (() => {\n const empty = new Subscription();\n empty.closed = true;\n return empty;\n })();\n\n /**\n * A flag to indicate whether this Subscription has already been unsubscribed.\n */\n public closed = false;\n\n private _parentage: Subscription[] | Subscription | null = null;\n\n /**\n * The list of registered finalizers to execute upon unsubscription. 
Adding and removing from this\n * list occurs in the {@link #add} and {@link #remove} methods.\n */\n private _finalizers: Exclude[] | null = null;\n\n /**\n * @param initialTeardown A function executed first as part of the finalization\n * process that is kicked off when {@link #unsubscribe} is called.\n */\n constructor(private initialTeardown?: () => void) {}\n\n /**\n * Disposes the resources held by the subscription. May, for instance, cancel\n * an ongoing Observable execution or cancel any other type of work that\n * started when the Subscription was created.\n * @return {void}\n */\n unsubscribe(): void {\n let errors: any[] | undefined;\n\n if (!this.closed) {\n this.closed = true;\n\n // Remove this from it's parents.\n const { _parentage } = this;\n if (_parentage) {\n this._parentage = null;\n if (Array.isArray(_parentage)) {\n for (const parent of _parentage) {\n parent.remove(this);\n }\n } else {\n _parentage.remove(this);\n }\n }\n\n const { initialTeardown: initialFinalizer } = this;\n if (isFunction(initialFinalizer)) {\n try {\n initialFinalizer();\n } catch (e) {\n errors = e instanceof UnsubscriptionError ? e.errors : [e];\n }\n }\n\n const { _finalizers } = this;\n if (_finalizers) {\n this._finalizers = null;\n for (const finalizer of _finalizers) {\n try {\n execFinalizer(finalizer);\n } catch (err) {\n errors = errors ?? [];\n if (err instanceof UnsubscriptionError) {\n errors = [...errors, ...err.errors];\n } else {\n errors.push(err);\n }\n }\n }\n }\n\n if (errors) {\n throw new UnsubscriptionError(errors);\n }\n }\n }\n\n /**\n * Adds a finalizer to this subscription, so that finalization will be unsubscribed/called\n * when this subscription is unsubscribed. 
If this subscription is already {@link #closed},\n * because it has already been unsubscribed, then whatever finalizer is passed to it\n * will automatically be executed (unless the finalizer itself is also a closed subscription).\n *\n * Closed Subscriptions cannot be added as finalizers to any subscription. Adding a closed\n * subscription to a any subscription will result in no operation. (A noop).\n *\n * Adding a subscription to itself, or adding `null` or `undefined` will not perform any\n * operation at all. (A noop).\n *\n * `Subscription` instances that are added to this instance will automatically remove themselves\n * if they are unsubscribed. Functions and {@link Unsubscribable} objects that you wish to remove\n * will need to be removed manually with {@link #remove}\n *\n * @param teardown The finalization logic to add to this subscription.\n */\n add(teardown: TeardownLogic): void {\n // Only add the finalizer if it's not undefined\n // and don't add a subscription to itself.\n if (teardown && teardown !== this) {\n if (this.closed) {\n // If this subscription is already closed,\n // execute whatever finalizer is handed to it automatically.\n execFinalizer(teardown);\n } else {\n if (teardown instanceof Subscription) {\n // We don't add closed subscriptions, and we don't add the same subscription\n // twice. Subscription unsubscribe is idempotent.\n if (teardown.closed || teardown._hasParent(this)) {\n return;\n }\n teardown._addParent(this);\n }\n (this._finalizers = this._finalizers ?? 
[]).push(teardown);\n }\n }\n }\n\n /**\n * Checks to see if a this subscription already has a particular parent.\n * This will signal that this subscription has already been added to the parent in question.\n * @param parent the parent to check for\n */\n private _hasParent(parent: Subscription) {\n const { _parentage } = this;\n return _parentage === parent || (Array.isArray(_parentage) && _parentage.includes(parent));\n }\n\n /**\n * Adds a parent to this subscription so it can be removed from the parent if it\n * unsubscribes on it's own.\n *\n * NOTE: THIS ASSUMES THAT {@link _hasParent} HAS ALREADY BEEN CHECKED.\n * @param parent The parent subscription to add\n */\n private _addParent(parent: Subscription) {\n const { _parentage } = this;\n this._parentage = Array.isArray(_parentage) ? (_parentage.push(parent), _parentage) : _parentage ? [_parentage, parent] : parent;\n }\n\n /**\n * Called on a child when it is removed via {@link #remove}.\n * @param parent The parent to remove\n */\n private _removeParent(parent: Subscription) {\n const { _parentage } = this;\n if (_parentage === parent) {\n this._parentage = null;\n } else if (Array.isArray(_parentage)) {\n arrRemove(_parentage, parent);\n }\n }\n\n /**\n * Removes a finalizer from this subscription that was previously added with the {@link #add} method.\n *\n * Note that `Subscription` instances, when unsubscribed, will automatically remove themselves\n * from every other `Subscription` they have been added to. 
This means that using the `remove` method\n * is not a common thing and should be used thoughtfully.\n *\n * If you add the same finalizer instance of a function or an unsubscribable object to a `Subscription` instance\n * more than once, you will need to call `remove` the same number of times to remove all instances.\n *\n * All finalizer instances are removed to free up memory upon unsubscription.\n *\n * @param teardown The finalizer to remove from this subscription\n */\n remove(teardown: Exclude): void {\n const { _finalizers } = this;\n _finalizers && arrRemove(_finalizers, teardown);\n\n if (teardown instanceof Subscription) {\n teardown._removeParent(this);\n }\n }\n}\n\nexport const EMPTY_SUBSCRIPTION = Subscription.EMPTY;\n\nexport function isSubscription(value: any): value is Subscription {\n return (\n value instanceof Subscription ||\n (value && 'closed' in value && isFunction(value.remove) && isFunction(value.add) && isFunction(value.unsubscribe))\n );\n}\n\nfunction execFinalizer(finalizer: Unsubscribable | (() => void)) {\n if (isFunction(finalizer)) {\n finalizer();\n } else {\n finalizer.unsubscribe();\n }\n}\n", "import { Subscriber } from './Subscriber';\nimport { ObservableNotification } from './types';\n\n/**\n * The {@link GlobalConfig} object for RxJS. It is used to configure things\n * like how to react on unhandled errors.\n */\nexport const config: GlobalConfig = {\n onUnhandledError: null,\n onStoppedNotification: null,\n Promise: undefined,\n useDeprecatedSynchronousErrorHandling: false,\n useDeprecatedNextContext: false,\n};\n\n/**\n * The global configuration object for RxJS, used to configure things\n * like how to react on unhandled errors. Accessible via {@link config}\n * object.\n */\nexport interface GlobalConfig {\n /**\n * A registration point for unhandled errors from RxJS. These are errors that\n * cannot were not handled by consuming code in the usual subscription path. 
For\n * example, if you have this configured, and you subscribe to an observable without\n * providing an error handler, errors from that subscription will end up here. This\n * will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onUnhandledError: ((err: any) => void) | null;\n\n /**\n * A registration point for notifications that cannot be sent to subscribers because they\n * have completed, errored or have been explicitly unsubscribed. By default, next, complete\n * and error notifications sent to stopped subscribers are noops. However, sometimes callers\n * might want a different behavior. For example, with sources that attempt to report errors\n * to stopped subscribers, a caller can configure RxJS to throw an unhandled error instead.\n * This will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onStoppedNotification: ((notification: ObservableNotification, subscriber: Subscriber) => void) | null;\n\n /**\n * The promise constructor used by default for {@link Observable#toPromise toPromise} and {@link Observable#forEach forEach}\n * methods.\n *\n * @deprecated As of version 8, RxJS will no longer support this sort of injection of a\n * Promise constructor. If you need a Promise implementation other than native promises,\n * please polyfill/patch Promise as you see appropriate. Will be removed in v8.\n */\n Promise?: PromiseConstructorLike;\n\n /**\n * If true, turns on synchronous error rethrowing, which is a deprecated behavior\n * in v6 and higher. This behavior enables bad patterns like wrapping a subscribe\n * call in a try/catch block. 
It also enables producer interference, a nasty bug\n * where a multicast can be broken for all observers by a downstream consumer with\n * an unhandled error. DO NOT USE THIS FLAG UNLESS IT'S NEEDED TO BUY TIME\n * FOR MIGRATION REASONS.\n *\n * @deprecated As of version 8, RxJS will no longer support synchronous throwing\n * of unhandled errors. All errors will be thrown on a separate call stack to prevent bad\n * behaviors described above. Will be removed in v8.\n */\n useDeprecatedSynchronousErrorHandling: boolean;\n\n /**\n * If true, enables an as-of-yet undocumented feature from v5: The ability to access\n * `unsubscribe()` via `this` context in `next` functions created in observers passed\n * to `subscribe`.\n *\n * This is being removed because the performance was severely problematic, and it could also cause\n * issues when types other than POJOs are passed to subscribe as subscribers, as they will likely have\n * their `this` context overwritten.\n *\n * @deprecated As of version 8, RxJS will no longer support altering the\n * context of next functions provided as part of an observer to Subscribe. Instead,\n * you will have access to a subscription or a signal or token that will allow you to do things like\n * unsubscribe and test closed status. 
Will be removed in v8.\n */\n useDeprecatedNextContext: boolean;\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetTimeoutFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearTimeoutFunction = (handle: TimerHandle) => void;\n\ninterface TimeoutProvider {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n delegate:\n | {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n }\n | undefined;\n}\n\nexport const timeoutProvider: TimeoutProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setTimeout(handler: () => void, timeout?: number, ...args) {\n const { delegate } = timeoutProvider;\n if (delegate?.setTimeout) {\n return delegate.setTimeout(handler, timeout, ...args);\n }\n return setTimeout(handler, timeout, ...args);\n },\n clearTimeout(handle) {\n const { delegate } = timeoutProvider;\n return (delegate?.clearTimeout || clearTimeout)(handle as any);\n },\n delegate: undefined,\n};\n", "import { config } from '../config';\nimport { timeoutProvider } from '../scheduler/timeoutProvider';\n\n/**\n * Handles an error on another job either with the user-configured {@link onUnhandledError},\n * or by throwing it on that new job so it can be picked up by `window.onerror`, `process.on('error')`, etc.\n *\n * This should be called whenever there is an error that is out-of-band with the subscription\n * or when an error hits a terminal boundary of the subscription and no error handler was provided.\n *\n * @param err the error to report\n */\nexport function reportUnhandledError(err: any) {\n timeoutProvider.setTimeout(() => {\n const { onUnhandledError } = config;\n if (onUnhandledError) {\n // Execute the user-configured error handler.\n onUnhandledError(err);\n } else {\n // Throw so it is picked up by the runtime's uncaught error mechanism.\n throw err;\n }\n 
});\n}\n", "/* tslint:disable:no-empty */\nexport function noop() { }\n", "import { CompleteNotification, NextNotification, ErrorNotification } from './types';\n\n/**\n * A completion object optimized for memory use and created to be the\n * same \"shape\" as other notifications in v8.\n * @internal\n */\nexport const COMPLETE_NOTIFICATION = (() => createNotification('C', undefined, undefined) as CompleteNotification)();\n\n/**\n * Internal use only. Creates an optimized error notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function errorNotification(error: any): ErrorNotification {\n return createNotification('E', undefined, error) as any;\n}\n\n/**\n * Internal use only. Creates an optimized next notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function nextNotification<T>(value: T) {\n return createNotification('N', value, undefined) as NextNotification<T>;\n}\n\n/**\n * Ensures that all notifications created internally have the same \"shape\" in v8.\n *\n * TODO: This is only exported to support a crazy legacy test in `groupBy`.\n * @internal\n */\nexport function createNotification(kind: 'N' | 'E' | 'C', value: any, error: any) {\n return {\n kind,\n value,\n error,\n };\n}\n", "import { config } from '../config';\n\nlet context: { errorThrown: boolean; error: any } | null = null;\n\n/**\n * Handles dealing with errors for super-gross mode. Creates a context, in which\n * any synchronously thrown errors will be passed to {@link captureError}.
Which\n * will record the error such that it will be rethrown after the call back is complete.\n * TODO: Remove in v8\n * @param cb An immediately executed function.\n */\nexport function errorContext(cb: () => void) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n const isRoot = !context;\n if (isRoot) {\n context = { errorThrown: false, error: null };\n }\n cb();\n if (isRoot) {\n const { errorThrown, error } = context!;\n context = null;\n if (errorThrown) {\n throw error;\n }\n }\n } else {\n // This is the general non-deprecated path for everyone that\n // isn't crazy enough to use super-gross mode (useDeprecatedSynchronousErrorHandling)\n cb();\n }\n}\n\n/**\n * Captures errors only in super-gross mode.\n * @param err the error to capture\n */\nexport function captureError(err: any) {\n if (config.useDeprecatedSynchronousErrorHandling && context) {\n context.errorThrown = true;\n context.error = err;\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { Observer, ObservableNotification } from './types';\nimport { isSubscription, Subscription } from './Subscription';\nimport { config } from './config';\nimport { reportUnhandledError } from './util/reportUnhandledError';\nimport { noop } from './util/noop';\nimport { nextNotification, errorNotification, COMPLETE_NOTIFICATION } from './NotificationFactories';\nimport { timeoutProvider } from './scheduler/timeoutProvider';\nimport { captureError } from './util/errorContext';\n\n/**\n * Implements the {@link Observer} interface and extends the\n * {@link Subscription} class. While the {@link Observer} is the public API for\n * consuming the values of an {@link Observable}, all Observers get converted to\n * a Subscriber, in order to provide Subscription-like capabilities such as\n * `unsubscribe`. 
Subscriber is a common type in RxJS, and crucial for\n * implementing operators, but it is rarely used as a public API.\n *\n * @class Subscriber\n */\nexport class Subscriber<T> extends Subscription implements Observer<T> {\n /**\n * A static factory for a Subscriber, given a (potentially partial) definition\n * of an Observer.\n * @param next The `next` callback of an Observer.\n * @param error The `error` callback of an\n * Observer.\n * @param complete The `complete` callback of an\n * Observer.\n * @return A Subscriber wrapping the (partially defined)\n * Observer represented by the given arguments.\n * @nocollapse\n * @deprecated Do not use. Will be removed in v8. There is no replacement for this\n * method, and there is no reason to be creating instances of `Subscriber` directly.\n * If you have a specific use case, please file an issue.\n */\n static create<T>(next?: (x?: T) => void, error?: (e?: any) => void, complete?: () => void): Subscriber<T> {\n return new SafeSubscriber(next, error, complete);\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected isStopped: boolean = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected destination: Subscriber<any> | Observer<any>; // this `any` is the escape hatch to erase extra type param (e.g. R)\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * There is no reason to directly create an instance of Subscriber.
This type is exported for typings reasons.\n */\n constructor(destination?: Subscriber<any> | Observer<any>) {\n super();\n if (destination) {\n this.destination = destination;\n // Automatically chain subscriptions together here.\n // if destination is a Subscription, then it is a Subscriber.\n if (isSubscription(destination)) {\n destination.add(this);\n }\n } else {\n this.destination = EMPTY_OBSERVER;\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `next` from\n * the Observable, with a value. The Observable may call this method 0 or more\n * times.\n * @param {T} [value] The `next` value.\n * @return {void}\n */\n next(value?: T): void {\n if (this.isStopped) {\n handleStoppedNotification(nextNotification(value), this);\n } else {\n this._next(value!);\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `error` from\n * the Observable, with an attached `Error`. Notifies the Observer that\n * the Observable has experienced an error condition.\n * @param {any} [err] The `error` exception.\n * @return {void}\n */\n error(err?: any): void {\n if (this.isStopped) {\n handleStoppedNotification(errorNotification(err), this);\n } else {\n this.isStopped = true;\n this._error(err);\n }\n }\n\n /**\n * The {@link Observer} callback to receive a valueless notification of type\n * `complete` from the Observable.
Notifies the Observer that the Observable\n * has finished sending push-based notifications.\n * @return {void}\n */\n complete(): void {\n if (this.isStopped) {\n handleStoppedNotification(COMPLETE_NOTIFICATION, this);\n } else {\n this.isStopped = true;\n this._complete();\n }\n }\n\n unsubscribe(): void {\n if (!this.closed) {\n this.isStopped = true;\n super.unsubscribe();\n this.destination = null!;\n }\n }\n\n protected _next(value: T): void {\n this.destination.next(value);\n }\n\n protected _error(err: any): void {\n try {\n this.destination.error(err);\n } finally {\n this.unsubscribe();\n }\n }\n\n protected _complete(): void {\n try {\n this.destination.complete();\n } finally {\n this.unsubscribe();\n }\n }\n}\n\n/**\n * This bind is captured here because we want to be able to have\n * compatibility with monoid libraries that tend to use a method named\n * `bind`. In particular, a library called Monio requires this.\n */\nconst _bind = Function.prototype.bind;\n\nfunction bind<Fn extends (...args: any[]) => any>(fn: Fn, thisArg: any): Fn {\n return _bind.call(fn, thisArg);\n}\n\n/**\n * Internal optimization only, DO NOT EXPOSE.\n * @internal\n */\nclass ConsumerObserver<T> implements Observer<T> {\n constructor(private partialObserver: Partial<Observer<T>>) {}\n\n next(value: T): void {\n const { partialObserver } = this;\n if (partialObserver.next) {\n try {\n partialObserver.next(value);\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n\n error(err: any): void {\n const { partialObserver } = this;\n if (partialObserver.error) {\n try {\n partialObserver.error(err);\n } catch (error) {\n handleUnhandledError(error);\n }\n } else {\n handleUnhandledError(err);\n }\n }\n\n complete(): void {\n const { partialObserver } = this;\n if (partialObserver.complete) {\n try {\n partialObserver.complete();\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n}\n\nexport class SafeSubscriber<T> extends Subscriber<T> {\n constructor(\n observerOrNext?: Partial<Observer<T>> | ((value: T) => void) |
null,\n error?: ((e?: any) => void) | null,\n complete?: (() => void) | null\n ) {\n super();\n\n let partialObserver: Partial<Observer<T>>;\n if (isFunction(observerOrNext) || !observerOrNext) {\n // The first argument is a function, not an observer. The next\n // two arguments *could* be observers, or they could be empty.\n partialObserver = {\n next: (observerOrNext ?? undefined) as (((value: T) => void) | undefined),\n error: error ?? undefined,\n complete: complete ?? undefined,\n };\n } else {\n // The first argument is a partial observer.\n let context: any;\n if (this && config.useDeprecatedNextContext) {\n // This is a deprecated path that made `this.unsubscribe()` available in\n // next handler functions passed to subscribe. This only exists behind a flag\n // now, as it is *very* slow.\n context = Object.create(observerOrNext);\n context.unsubscribe = () => this.unsubscribe();\n partialObserver = {\n next: observerOrNext.next && bind(observerOrNext.next, context),\n error: observerOrNext.error && bind(observerOrNext.error, context),\n complete: observerOrNext.complete && bind(observerOrNext.complete, context),\n };\n } else {\n // The \"normal\" path.
Just use the partial observer directly.\n partialObserver = observerOrNext;\n }\n }\n\n // Wrap the partial observer to ensure it's a full observer, and\n // make sure proper error handling is accounted for.\n this.destination = new ConsumerObserver(partialObserver);\n }\n}\n\nfunction handleUnhandledError(error: any) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n captureError(error);\n } else {\n // Ideal path, we report this as an unhandled error,\n // which is thrown on a new call stack.\n reportUnhandledError(error);\n }\n}\n\n/**\n * An error handler used when no error handler was supplied\n * to the SafeSubscriber -- meaning no error handler was supplied\n * to the `subscribe` call on our observable.\n * @param err The error to handle\n */\nfunction defaultErrorHandler(err: any) {\n throw err;\n}\n\n/**\n * A handler for notifications that cannot be sent to a stopped subscriber.\n * @param notification The notification being sent\n * @param subscriber The stopped subscriber\n */\nfunction handleStoppedNotification(notification: ObservableNotification<any>, subscriber: Subscriber<any>) {\n const { onStoppedNotification } = config;\n onStoppedNotification && timeoutProvider.setTimeout(() => onStoppedNotification(notification, subscriber));\n}\n\n/**\n * The observer used as a stub for subscriptions where the user did not\n * pass any arguments to `subscribe`. Comes with the default error handling\n * behavior.\n */\nexport const EMPTY_OBSERVER: Readonly<Observer<any>> & { closed: true } = {\n closed: true,\n next: noop,\n error: defaultErrorHandler,\n complete: noop,\n};\n", "/**\n * Symbol.observable or a string \"@@observable\".
Used for interop\n *\n * @deprecated We will no longer be exporting this symbol in upcoming versions of RxJS.\n * Instead polyfill and use Symbol.observable directly *or* use https://www.npmjs.com/package/symbol-observable\n */\nexport const observable: string | symbol = (() => (typeof Symbol === 'function' && Symbol.observable) || '@@observable')();\n", "/**\n * This function takes one parameter and just returns it. Simply put,\n * this is like `(x: T): T => x`.\n *\n * ## Examples\n *\n * This is useful in some cases when using things like `mergeMap`\n *\n * ```ts\n * import { interval, take, map, range, mergeMap, identity } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(5));\n *\n * const result$ = source$.pipe(\n * map(i => range(i)),\n * mergeMap(identity) // same as mergeMap(x => x)\n * );\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * Or when you want to selectively apply an operator\n *\n * ```ts\n * import { interval, take, identity } from 'rxjs';\n *\n * const shouldLimit = () => Math.random() < 0.5;\n *\n * const source$ = interval(1000);\n *\n * const result$ = source$.pipe(shouldLimit() ? 
take(5) : identity);\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * @param x Any value that is returned by this function\n * @returns The value passed as the first parameter to this function\n */\nexport function identity<T>(x: T): T {\n return x;\n}\n", "import { identity } from './identity';\nimport { UnaryFunction } from '../types';\n\nexport function pipe(): typeof identity;\nexport function pipe<T, A>(fn1: UnaryFunction<T, A>): UnaryFunction<T, A>;\nexport function pipe<T, A, B>(fn1: UnaryFunction<T, A>, fn2: UnaryFunction<A, B>): UnaryFunction<T, B>;\nexport function pipe<T, A, B, C>(fn1: UnaryFunction<T, A>, fn2: UnaryFunction<A, B>, fn3: UnaryFunction<B, C>): UnaryFunction<T, C>;\nexport function pipe<T, A, B, C, D>(\n fn1: UnaryFunction<T, A>,\n fn2: UnaryFunction<A, B>,\n fn3: UnaryFunction<B, C>,\n fn4: UnaryFunction<C, D>\n): UnaryFunction<T, D>;\nexport function pipe<T, A, B, C, D, E>(\n fn1: UnaryFunction<T, A>,\n fn2: UnaryFunction<A, B>,\n fn3: UnaryFunction<B, C>,\n fn4: UnaryFunction<C, D>,\n fn5: UnaryFunction<D, E>\n): UnaryFunction<T, E>;\nexport function pipe<T, A, B, C, D, E, F>(\n fn1: UnaryFunction<T, A>,\n fn2: UnaryFunction<A, B>,\n fn3: UnaryFunction<B, C>,\n fn4: UnaryFunction<C, D>,\n fn5: UnaryFunction<D, E>,\n fn6: UnaryFunction<E, F>\n): UnaryFunction<T, F>;\nexport function pipe<T, A, B, C, D, E, F, G>(\n fn1: UnaryFunction<T, A>,\n fn2: UnaryFunction<A, B>,\n fn3: UnaryFunction<B, C>,\n fn4: UnaryFunction<C, D>,\n fn5: UnaryFunction<D, E>,\n fn6: UnaryFunction<E, F>,\n fn7: UnaryFunction<F, G>\n): UnaryFunction<T, G>;\nexport function pipe<T, A, B, C, D, E, F, G, H>(\n fn1: UnaryFunction<T, A>,\n fn2: UnaryFunction<A, B>,\n fn3: UnaryFunction<B, C>,\n fn4: UnaryFunction<C, D>,\n fn5: UnaryFunction<D, E>,\n fn6: UnaryFunction<E, F>,\n fn7: UnaryFunction<F, G>,\n fn8: UnaryFunction<G, H>\n): UnaryFunction<T, H>;\nexport function pipe<T, A, B, C, D, E, F, G, H, I>(\n fn1: UnaryFunction<T, A>,\n fn2: UnaryFunction<A, B>,\n fn3: UnaryFunction<B, C>,\n fn4: UnaryFunction<C, D>,\n fn5: UnaryFunction<D, E>,\n fn6: UnaryFunction<E, F>,\n fn7: UnaryFunction<F, G>,\n fn8: UnaryFunction<G, H>,\n fn9: UnaryFunction<H, I>\n): UnaryFunction<T, I>;\nexport function pipe<T, A, B, C, D, E, F, G, H, I>(\n fn1: UnaryFunction<T, A>,\n fn2: UnaryFunction<A, B>,\n fn3: UnaryFunction<B, C>,\n fn4: UnaryFunction<C, D>,\n fn5: UnaryFunction<D, E>,\n fn6: UnaryFunction<E, F>,\n fn7: UnaryFunction<F, G>,\n fn8: UnaryFunction<G, H>,\n fn9: UnaryFunction<H, I>,\n ...fns: UnaryFunction<any, any>[]\n): UnaryFunction<T, unknown>;\n\n/**\n * pipe() can be called on one or more functions, each of which can take one argument (\"UnaryFunction\")\n * and uses it to return a value.\n * It returns a function that takes one argument, passes it to the first UnaryFunction, and then\n * passes the result to the next one, passes that result to the next one, and so on.\n */\nexport function pipe(...fns: Array<UnaryFunction<any, any>>): UnaryFunction<any, any> {\n return pipeFromArray(fns);\n}\n\n/** @internal */\nexport function pipeFromArray<T, R>(fns: Array<UnaryFunction<T, R>>): UnaryFunction<T, R> {\n if (fns.length === 0) {\n return identity as UnaryFunction<any, any>;\n }\n\n if (fns.length === 1) {\n return fns[0];\n }\n\n return function piped(input: T): R {\n return fns.reduce((prev: any, fn: UnaryFunction<any, any>) => fn(prev), input as any);\n };\n}\n", "import { Operator } from './Operator';\nimport { SafeSubscriber, Subscriber } from './Subscriber';\nimport { isSubscription, Subscription } from './Subscription';\nimport { TeardownLogic, OperatorFunction, Subscribable, Observer } from './types';\nimport { observable as Symbol_observable } from './symbol/observable';\nimport { pipeFromArray } from './util/pipe';\nimport { config } from './config';\nimport { isFunction } from './util/isFunction';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A representation of any set of values over any amount of time. This is the most basic building block\n * of RxJS.\n *\n * @class Observable\n */\nexport class Observable<T> implements Subscribable<T> {\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n */\n source: Observable<any> | undefined;\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n */\n operator: Operator<any, T> | undefined;\n\n /**\n * @constructor\n * @param {Function} subscribe the function that is called when the Observable is\n * initially subscribed to.
This function is given a Subscriber, to which new values\n * can be `next`ed, or an `error` method can be called to raise an error, or\n * `complete` can be called to notify of a successful completion.\n */\n constructor(subscribe?: (this: Observable, subscriber: Subscriber) => TeardownLogic) {\n if (subscribe) {\n this._subscribe = subscribe;\n }\n }\n\n // HACK: Since TypeScript inherits static properties too, we have to\n // fight against TypeScript here so Subject can have a different static create signature\n /**\n * Creates a new Observable by calling the Observable constructor\n * @owner Observable\n * @method create\n * @param {Function} subscribe? the subscriber function to be passed to the Observable constructor\n * @return {Observable} a new observable\n * @nocollapse\n * @deprecated Use `new Observable()` instead. Will be removed in v8.\n */\n static create: (...args: any[]) => any = (subscribe?: (subscriber: Subscriber) => TeardownLogic) => {\n return new Observable(subscribe);\n };\n\n /**\n * Creates a new Observable, with this Observable instance as the source, and the passed\n * operator defined as the new observable's operator.\n * @method lift\n * @param operator the operator defining the operation to take on the observable\n * @return a new observable with the Operator applied\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * If you have implemented an operator using `lift`, it is recommended that you create an\n * operator by simply returning `new Observable()` directly. 
See \"Creating new operators from\n * scratch\" section here: https://rxjs.dev/guide/operators\n */\n lift(operator?: Operator): Observable {\n const observable = new Observable();\n observable.source = this;\n observable.operator = operator;\n return observable;\n }\n\n subscribe(observerOrNext?: Partial> | ((value: T) => void)): Subscription;\n /** @deprecated Instead of passing separate callback arguments, use an observer argument. Signatures taking separate callback arguments will be removed in v8. Details: https://rxjs.dev/deprecations/subscribe-arguments */\n subscribe(next?: ((value: T) => void) | null, error?: ((error: any) => void) | null, complete?: (() => void) | null): Subscription;\n /**\n * Invokes an execution of an Observable and registers Observer handlers for notifications it will emit.\n *\n * Use it when you have all these Observables, but still nothing is happening.\n *\n * `subscribe` is not a regular operator, but a method that calls Observable's internal `subscribe` function. It\n * might be for example a function that you passed to Observable's constructor, but most of the time it is\n * a library implementation, which defines what will be emitted by an Observable, and when it be will emitted. This means\n * that calling `subscribe` is actually the moment when Observable starts its work, not when it is created, as it is often\n * the thought.\n *\n * Apart from starting the execution of an Observable, this method allows you to listen for values\n * that an Observable emits, as well as for when it completes or errors. You can achieve this in two\n * of the following ways.\n *\n * The first way is creating an object that implements {@link Observer} interface. It should have methods\n * defined by that interface, but note that it should be just a regular JavaScript object, which you can create\n * yourself in any way you want (ES6 class, classic function constructor, object literal etc.). 
In particular, do\n * not attempt to use any RxJS implementation details to create Observers - you don't need them. Remember also\n * that your object does not have to implement all methods. If you find yourself creating a method that doesn't\n * do anything, you can simply omit it. Note however, if the `error` method is not provided and an error happens,\n * it will be thrown asynchronously. Errors thrown asynchronously cannot be caught using `try`/`catch`. Instead,\n * use the {@link onUnhandledError} configuration option or use a runtime handler (like `window.onerror` or\n * `process.on('error)`) to be notified of unhandled errors. Because of this, it's recommended that you provide\n * an `error` method to avoid missing thrown errors.\n *\n * The second way is to give up on Observer object altogether and simply provide callback functions in place of its methods.\n * This means you can provide three functions as arguments to `subscribe`, where the first function is equivalent\n * of a `next` method, the second of an `error` method and the third of a `complete` method. Just as in case of an Observer,\n * if you do not need to listen for something, you can omit a function by passing `undefined` or `null`,\n * since `subscribe` recognizes these functions by where they were placed in function call. When it comes\n * to the `error` function, as with an Observer, if not provided, errors emitted by an Observable will be thrown asynchronously.\n *\n * You can, however, subscribe with no parameters at all. This may be the case where you're not interested in terminal events\n * and you also handled emissions internally by using operators (e.g. using `tap`).\n *\n * Whichever style of calling `subscribe` you use, in both cases it returns a Subscription object.\n * This object allows you to call `unsubscribe` on it, which in turn will stop the work that an Observable does and will clean\n * up all resources that an Observable used. 
Note that cancelling a subscription will not call `complete` callback\n * provided to `subscribe` function, which is reserved for a regular completion signal that comes from an Observable.\n *\n * Remember that callbacks provided to `subscribe` are not guaranteed to be called asynchronously.\n * It is an Observable itself that decides when these functions will be called. For example {@link of}\n * by default emits all its values synchronously. Always check documentation for how given Observable\n * will behave when subscribed and if its default behavior can be modified with a `scheduler`.\n *\n * #### Examples\n *\n * Subscribe with an {@link guide/observer Observer}\n *\n * ```ts\n * import { of } from 'rxjs';\n *\n * const sumObserver = {\n * sum: 0,\n * next(value) {\n * console.log('Adding: ' + value);\n * this.sum = this.sum + value;\n * },\n * error() {\n * // We actually could just remove this method,\n * // since we do not really care about errors right now.\n * },\n * complete() {\n * console.log('Sum equals: ' + this.sum);\n * }\n * };\n *\n * of(1, 2, 3) // Synchronously emits 1, 2, 3 and then completes.\n * .subscribe(sumObserver);\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Subscribe with functions ({@link deprecations/subscribe-arguments deprecated})\n *\n * ```ts\n * import { of } from 'rxjs'\n *\n * let sum = 0;\n *\n * of(1, 2, 3).subscribe(\n * value => {\n * console.log('Adding: ' + value);\n * sum = sum + value;\n * },\n * undefined,\n * () => console.log('Sum equals: ' + sum)\n * );\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Cancel a subscription\n *\n * ```ts\n * import { interval } from 'rxjs';\n *\n * const subscription = interval(1000).subscribe({\n * next(num) {\n * console.log(num)\n * },\n * complete() {\n * // Will not be called, even when cancelling subscription.\n * console.log('completed!');\n * 
}\n * });\n *\n * setTimeout(() => {\n * subscription.unsubscribe();\n * console.log('unsubscribed!');\n * }, 2500);\n *\n * // Logs:\n * // 0 after 1s\n * // 1 after 2s\n * // 'unsubscribed!' after 2.5s\n * ```\n *\n * @param {Observer|Function} observerOrNext (optional) Either an observer with methods to be called,\n * or the first of three possible handlers, which is the handler for each value emitted from the subscribed\n * Observable.\n * @param {Function} error (optional) A handler for a terminal event resulting from an error. If no error handler is provided,\n * the error will be thrown asynchronously as unhandled.\n * @param {Function} complete (optional) A handler for a terminal event resulting from successful completion.\n * @return {Subscription} a subscription reference to the registered handlers\n * @method subscribe\n */\n subscribe(\n observerOrNext?: Partial> | ((value: T) => void) | null,\n error?: ((error: any) => void) | null,\n complete?: (() => void) | null\n ): Subscription {\n const subscriber = isSubscriber(observerOrNext) ? observerOrNext : new SafeSubscriber(observerOrNext, error, complete);\n\n errorContext(() => {\n const { operator, source } = this;\n subscriber.add(\n operator\n ? // We're dealing with a subscription in the\n // operator chain to one of our lifted operators.\n operator.call(subscriber, source)\n : source\n ? // If `source` has a value, but `operator` does not, something that\n // had intimate knowledge of our API, like our `Subject`, must have\n // set it. 
We're going to just call `_subscribe` directly.\n this._subscribe(subscriber)\n : // In all other cases, we're likely wrapping a user-provided initializer\n // function, so we need to catch errors and handle them appropriately.\n this._trySubscribe(subscriber)\n );\n });\n\n return subscriber;\n }\n\n /** @internal */\n protected _trySubscribe(sink: Subscriber): TeardownLogic {\n try {\n return this._subscribe(sink);\n } catch (err) {\n // We don't need to return anything in this case,\n // because it's just going to try to `add()` to a subscription\n // above.\n sink.error(err);\n }\n }\n\n /**\n * Used as a NON-CANCELLABLE means of subscribing to an observable, for use with\n * APIs that expect promises, like `async/await`. You cannot unsubscribe from this.\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. 
To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * #### Example\n *\n * ```ts\n * import { interval, take } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(4));\n *\n * async function getTotal() {\n * let total = 0;\n *\n * await source$.forEach(value => {\n * total += value;\n * console.log('observable -> ' + value);\n * });\n *\n * return total;\n * }\n *\n * getTotal().then(\n * total => console.log('Total: ' + total)\n * );\n *\n * // Expected:\n * // 'observable -> 0'\n * // 'observable -> 1'\n * // 'observable -> 2'\n * // 'observable -> 3'\n * // 'Total: 6'\n * ```\n *\n * @param next a handler for each value emitted by the observable\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n */\n forEach(next: (value: T) => void): Promise;\n\n /**\n * @param next a handler for each value emitted by the observable\n * @param promiseCtor a constructor function used to instantiate the Promise\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n * @deprecated Passing a Promise constructor will no longer be available\n * in upcoming versions of RxJS. This is because it adds weight to the library, for very\n * little benefit. If you need this functionality, it is recommended that you either\n * polyfill Promise, or you create an adapter to convert the returned native promise\n * to whatever promise implementation you wanted. 
Will be removed in v8.\n */\n forEach(next: (value: T) => void, promiseCtor: PromiseConstructorLike): Promise;\n\n forEach(next: (value: T) => void, promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n const subscriber = new SafeSubscriber({\n next: (value) => {\n try {\n next(value);\n } catch (err) {\n reject(err);\n subscriber.unsubscribe();\n }\n },\n error: reject,\n complete: resolve,\n });\n this.subscribe(subscriber);\n }) as Promise;\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): TeardownLogic {\n return this.source?.subscribe(subscriber);\n }\n\n /**\n * An interop point defined by the es7-observable spec https://github.com/zenparsing/es-observable\n * @method Symbol.observable\n * @return {Observable} this instance of the observable\n */\n [Symbol_observable]() {\n return this;\n }\n\n /* tslint:disable:max-line-length */\n pipe(): Observable;\n pipe(op1: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction, op3: OperatorFunction): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: 
OperatorFunction,\n op8: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction,\n ...operations: OperatorFunction[]\n ): Observable;\n /* tslint:enable:max-line-length */\n\n /**\n * Used to stitch together functional operators into a chain.\n * @method pipe\n * @return {Observable} the Observable result of all of the operators having\n * been called in the order they were passed in.\n *\n * ## Example\n *\n * ```ts\n * import { interval, filter, map, scan } from 'rxjs';\n *\n * interval(1000)\n * .pipe(\n * filter(x => x % 2 === 0),\n * map(x => x + x),\n * scan((acc, x) => acc + x)\n * )\n * .subscribe(x => console.log(x));\n * ```\n */\n pipe(...operations: OperatorFunction[]): Observable {\n return pipeFromArray(operations)(this);\n }\n\n /* tslint:disable:max-line-length */\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(): Promise;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: typeof Promise): Promise;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. 
Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: PromiseConstructorLike): Promise;\n /* tslint:enable:max-line-length */\n\n /**\n * Subscribe to this Observable and get a Promise resolving on\n * `complete` with the last emission (if any).\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * @method toPromise\n * @param [promiseCtor] a constructor function used to instantiate\n * the Promise\n * @return A Promise that resolves with the last value emit, or\n * rejects on an error. If there were no emissions, Promise\n * resolves with undefined.\n * @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise\n */\n toPromise(promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n let value: T | undefined;\n this.subscribe(\n (x: T) => (value = x),\n (err: any) => reject(err),\n () => resolve(value)\n );\n }) as Promise;\n }\n}\n\n/**\n * Decides between a passed promise constructor from consuming code,\n * A default configured promise constructor, and the native promise\n * constructor and returns it. If nothing can be found, it will throw\n * an error.\n * @param promiseCtor The optional promise constructor to passed by consuming code\n */\nfunction getPromiseCtor(promiseCtor: PromiseConstructorLike | undefined) {\n return promiseCtor ?? config.Promise ?? 
Promise;\n}\n\nfunction isObserver(value: any): value is Observer {\n return value && isFunction(value.next) && isFunction(value.error) && isFunction(value.complete);\n}\n\nfunction isSubscriber(value: any): value is Subscriber {\n return (value && value instanceof Subscriber) || (isObserver(value) && isSubscription(value));\n}\n", "import { Observable } from '../Observable';\nimport { Subscriber } from '../Subscriber';\nimport { OperatorFunction } from '../types';\nimport { isFunction } from './isFunction';\n\n/**\n * Used to determine if an object is an Observable with a lift function.\n */\nexport function hasLift(source: any): source is { lift: InstanceType['lift'] } {\n return isFunction(source?.lift);\n}\n\n/**\n * Creates an `OperatorFunction`. Used to define operators throughout the library in a concise way.\n * @param init The logic to connect the liftedSource to the subscriber at the moment of subscription.\n */\nexport function operate(\n init: (liftedSource: Observable, subscriber: Subscriber) => (() => void) | void\n): OperatorFunction {\n return (source: Observable) => {\n if (hasLift(source)) {\n return source.lift(function (this: Subscriber, liftedSource: Observable) {\n try {\n return init(liftedSource, this);\n } catch (err) {\n this.error(err);\n }\n });\n }\n throw new TypeError('Unable to lift unknown Observable type');\n };\n}\n", "import { Subscriber } from '../Subscriber';\n\n/**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription, any errors that occur in this handler are caught\n * and send to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. 
Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional teardown logic here. This will only be called on teardown if the\n * subscriber itself is not already closed. This is called after all other teardown logic is executed.\n */\nexport function createOperatorSubscriber(\n destination: Subscriber,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n onFinalize?: () => void\n): Subscriber {\n return new OperatorSubscriber(destination, onNext, onComplete, onError, onFinalize);\n}\n\n/**\n * A generic helper for allowing operators to be created with a Subscriber and\n * use closures to capture necessary state from the operator function itself.\n */\nexport class OperatorSubscriber extends Subscriber {\n /**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription, any errors that occur in this handler are caught\n * and send to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional finalization logic here. This will only be called on finalization if the\n * subscriber itself is not already closed. 
This is called after all other finalization logic is executed.\n * @param shouldUnsubscribe An optional check to see if an unsubscribe call should truly unsubscribe.\n * NOTE: This currently **ONLY** exists to support the strange behavior of {@link groupBy}, where unsubscription\n * to the resulting observable does not actually disconnect from the source if there are active subscriptions\n * to any grouped observable. (DO NOT EXPOSE OR USE EXTERNALLY!!!)\n */\n constructor(\n destination: Subscriber,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n private onFinalize?: () => void,\n private shouldUnsubscribe?: () => boolean\n ) {\n // It's important - for performance reasons - that all of this class's\n // members are initialized and that they are always initialized in the same\n // order. This will ensure that all OperatorSubscriber instances have the\n // same hidden class in V8. This, in turn, will help keep the number of\n // hidden classes involved in property accesses within the base class as\n // low as possible. If the number of hidden classes involved exceeds four,\n // the property accesses will become megamorphic and performance penalties\n // will be incurred - i.e. inline caches won't be used.\n //\n // The reasons for ensuring all instances have the same hidden class are\n // further discussed in this blog post from Benedikt Meurer:\n // https://benediktmeurer.de/2018/03/23/impact-of-polymorphism-on-component-based-frameworks-like-react/\n super(destination);\n this._next = onNext\n ? function (this: OperatorSubscriber, value: T) {\n try {\n onNext(value);\n } catch (err) {\n destination.error(err);\n }\n }\n : super._next;\n this._error = onError\n ? 
function (this: OperatorSubscriber, err: any) {\n try {\n onError(err);\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._error;\n this._complete = onComplete\n ? function (this: OperatorSubscriber) {\n try {\n onComplete();\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._complete;\n }\n\n unsubscribe() {\n if (!this.shouldUnsubscribe || this.shouldUnsubscribe()) {\n const { closed } = this;\n super.unsubscribe();\n // Execute additional teardown if we have any and we didn't already do so.\n !closed && this.onFinalize?.();\n }\n }\n}\n", "import { Subscription } from '../Subscription';\n\ninterface AnimationFrameProvider {\n schedule(callback: FrameRequestCallback): Subscription;\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n delegate:\n | {\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n }\n | undefined;\n}\n\nexport const animationFrameProvider: AnimationFrameProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n schedule(callback) {\n let request = requestAnimationFrame;\n let cancel: typeof cancelAnimationFrame | undefined = cancelAnimationFrame;\n const { delegate } = animationFrameProvider;\n if (delegate) {\n request = delegate.requestAnimationFrame;\n cancel = delegate.cancelAnimationFrame;\n }\n const handle = request((timestamp) => {\n // Clear the cancel function. 
The request has been fulfilled, so\n // attempting to cancel the request upon unsubscription would be\n // pointless.\n cancel = undefined;\n callback(timestamp);\n });\n return new Subscription(() => cancel?.(handle));\n },\n requestAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.requestAnimationFrame || requestAnimationFrame)(...args);\n },\n cancelAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.cancelAnimationFrame || cancelAnimationFrame)(...args);\n },\n delegate: undefined,\n};\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface ObjectUnsubscribedError extends Error {}\n\nexport interface ObjectUnsubscribedErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (): ObjectUnsubscribedError;\n}\n\n/**\n * An error thrown when an action is invalid because the object has been\n * unsubscribed.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n *\n * @class ObjectUnsubscribedError\n */\nexport const ObjectUnsubscribedError: ObjectUnsubscribedErrorCtor = createErrorClass(\n (_super) =>\n function ObjectUnsubscribedErrorImpl(this: any) {\n _super(this);\n this.name = 'ObjectUnsubscribedError';\n this.message = 'object unsubscribed';\n }\n);\n", "import { Operator } from './Operator';\nimport { Observable } from './Observable';\nimport { Subscriber } from './Subscriber';\nimport { Subscription, EMPTY_SUBSCRIPTION } from './Subscription';\nimport { Observer, SubscriptionLike, TeardownLogic } from './types';\nimport { ObjectUnsubscribedError } from './util/ObjectUnsubscribedError';\nimport { arrRemove } from './util/arrRemove';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A Subject is a special type of Observable that allows values to be\n * multicasted to many Observers. 
Subjects are like EventEmitters.\n *\n * Every Subject is an Observable and an Observer. You can subscribe to a\n * Subject, and you can call next to feed values as well as error and complete.\n */\nexport class Subject extends Observable implements SubscriptionLike {\n closed = false;\n\n private currentObservers: Observer[] | null = null;\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n observers: Observer[] = [];\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n isStopped = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n hasError = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n thrownError: any = null;\n\n /**\n * Creates a \"subject\" by basically gluing an observer to an observable.\n *\n * @nocollapse\n * @deprecated Recommended you do not use. Will be removed at some point in the future. Plans for replacement still under discussion.\n */\n static create: (...args: any[]) => any = (destination: Observer, source: Observable): AnonymousSubject => {\n return new AnonymousSubject(destination, source);\n };\n\n constructor() {\n // NOTE: This must be here to obscure Observable's constructor.\n super();\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. 
*/\n lift(operator: Operator): Observable {\n const subject = new AnonymousSubject(this, this);\n subject.operator = operator as any;\n return subject as any;\n }\n\n /** @internal */\n protected _throwIfClosed() {\n if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n }\n\n next(value: T) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n if (!this.currentObservers) {\n this.currentObservers = Array.from(this.observers);\n }\n for (const observer of this.currentObservers) {\n observer.next(value);\n }\n }\n });\n }\n\n error(err: any) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.hasError = this.isStopped = true;\n this.thrownError = err;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.error(err);\n }\n }\n });\n }\n\n complete() {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.isStopped = true;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.complete();\n }\n }\n });\n }\n\n unsubscribe() {\n this.isStopped = this.closed = true;\n this.observers = this.currentObservers = null!;\n }\n\n get observed() {\n return this.observers?.length > 0;\n }\n\n /** @internal */\n protected _trySubscribe(subscriber: Subscriber): TeardownLogic {\n this._throwIfClosed();\n return super._trySubscribe(subscriber);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._checkFinalizedStatuses(subscriber);\n return this._innerSubscribe(subscriber);\n }\n\n /** @internal */\n protected _innerSubscribe(subscriber: Subscriber) {\n const { hasError, isStopped, observers } = this;\n if (hasError || isStopped) {\n return EMPTY_SUBSCRIPTION;\n }\n this.currentObservers = null;\n observers.push(subscriber);\n return new Subscription(() => {\n this.currentObservers = null;\n arrRemove(observers, subscriber);\n });\n }\n\n /** @internal */\n protected 
_checkFinalizedStatuses(subscriber: Subscriber) {\n const { hasError, thrownError, isStopped } = this;\n if (hasError) {\n subscriber.error(thrownError);\n } else if (isStopped) {\n subscriber.complete();\n }\n }\n\n /**\n * Creates a new Observable with this Subject as the source. You can do this\n * to create custom Observer-side logic of the Subject and conceal it from\n * code that uses the Observable.\n * @return {Observable} Observable that the Subject casts to\n */\n asObservable(): Observable {\n const observable: any = new Observable();\n observable.source = this;\n return observable;\n }\n}\n\n/**\n * @class AnonymousSubject\n */\nexport class AnonymousSubject extends Subject {\n constructor(\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n public destination?: Observer,\n source?: Observable\n ) {\n super();\n this.source = source;\n }\n\n next(value: T) {\n this.destination?.next?.(value);\n }\n\n error(err: any) {\n this.destination?.error?.(err);\n }\n\n complete() {\n this.destination?.complete?.();\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n return this.source?.subscribe(subscriber) ?? 
EMPTY_SUBSCRIPTION;\n }\n}\n", "import { Subject } from './Subject';\nimport { Subscriber } from './Subscriber';\nimport { Subscription } from './Subscription';\n\n/**\n * A variant of Subject that requires an initial value and emits its current\n * value whenever it is subscribed to.\n *\n * @class BehaviorSubject\n */\nexport class BehaviorSubject extends Subject {\n constructor(private _value: T) {\n super();\n }\n\n get value(): T {\n return this.getValue();\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n const subscription = super._subscribe(subscriber);\n !subscription.closed && subscriber.next(this._value);\n return subscription;\n }\n\n getValue(): T {\n const { hasError, thrownError, _value } = this;\n if (hasError) {\n throw thrownError;\n }\n this._throwIfClosed();\n return _value;\n }\n\n next(value: T): void {\n super.next((this._value = value));\n }\n}\n", "import { TimestampProvider } from '../types';\n\ninterface DateTimestampProvider extends TimestampProvider {\n delegate: TimestampProvider | undefined;\n}\n\nexport const dateTimestampProvider: DateTimestampProvider = {\n now() {\n // Use the variable rather than `this` so that the function can be called\n // without being bound to the provider.\n return (dateTimestampProvider.delegate || Date).now();\n },\n delegate: undefined,\n};\n", "import { Subject } from './Subject';\nimport { TimestampProvider } from './types';\nimport { Subscriber } from './Subscriber';\nimport { Subscription } from './Subscription';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * A variant of {@link Subject} that \"replays\" old values to new subscribers by emitting them when they first subscribe.\n *\n * `ReplaySubject` has an internal buffer that will store a specified number of values that it has observed. Like `Subject`,\n * `ReplaySubject` \"observes\" values by having them passed to its `next` method. 
When it observes a value, it will store that\n * value for a time determined by the configuration of the `ReplaySubject`, as passed to its constructor.\n *\n * When a new subscriber subscribes to the `ReplaySubject` instance, it will synchronously emit all values in its buffer in\n * a First-In-First-Out (FIFO) manner. The `ReplaySubject` will also complete, if it has observed completion; and it will\n * error if it has observed an error.\n *\n * There are two main configuration items to be concerned with:\n *\n * 1. `bufferSize` - This will determine how many items are stored in the buffer, defaults to infinite.\n * 2. `windowTime` - The amount of time to hold a value in the buffer before removing it from the buffer.\n *\n * Both configurations may exist simultaneously. So if you would like to buffer a maximum of 3 values, as long as the values\n * are less than 2 seconds old, you could do so with a `new ReplaySubject(3, 2000)`.\n *\n * ### Differences with BehaviorSubject\n *\n * `BehaviorSubject` is similar to `new ReplaySubject(1)`, with a couple of exceptions:\n *\n * 1. `BehaviorSubject` comes \"primed\" with a single value upon construction.\n * 2. `ReplaySubject` will replay values, even after observing an error, where `BehaviorSubject` will not.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n * @see {@link shareReplay}\n */\nexport class ReplaySubject extends Subject {\n private _buffer: (T | number)[] = [];\n private _infiniteTimeWindow = true;\n\n /**\n * @param bufferSize The size of the buffer to replay on subscription\n * @param windowTime The amount of time the buffered items will stay buffered\n * @param timestampProvider An object with a `now()` method that provides the current timestamp. 
This is used to\n * calculate the amount of time something has been buffered.\n */\n constructor(\n private _bufferSize = Infinity,\n private _windowTime = Infinity,\n private _timestampProvider: TimestampProvider = dateTimestampProvider\n ) {\n super();\n this._infiniteTimeWindow = _windowTime === Infinity;\n this._bufferSize = Math.max(1, _bufferSize);\n this._windowTime = Math.max(1, _windowTime);\n }\n\n next(value: T): void {\n const { isStopped, _buffer, _infiniteTimeWindow, _timestampProvider, _windowTime } = this;\n if (!isStopped) {\n _buffer.push(value);\n !_infiniteTimeWindow && _buffer.push(_timestampProvider.now() + _windowTime);\n }\n this._trimBuffer();\n super.next(value);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._trimBuffer();\n\n const subscription = this._innerSubscribe(subscriber);\n\n const { _infiniteTimeWindow, _buffer } = this;\n // We use a copy here, so reentrant code does not mutate our array while we're\n // emitting it to a new subscriber.\n const copy = _buffer.slice();\n for (let i = 0; i < copy.length && !subscriber.closed; i += _infiniteTimeWindow ? 1 : 2) {\n subscriber.next(copy[i] as T);\n }\n\n this._checkFinalizedStatuses(subscriber);\n\n return subscription;\n }\n\n private _trimBuffer() {\n const { _bufferSize, _timestampProvider, _buffer, _infiniteTimeWindow } = this;\n // If we don't have an infinite buffer size, and we're over the length,\n // use splice to truncate the old buffer values off. Note that we have to\n // double the size for instances where we're not using an infinite time window\n // because we're storing the values and the timestamps in the same array.\n const adjustedBufferSize = (_infiniteTimeWindow ? 
1 : 2) * _bufferSize;\n _bufferSize < Infinity && adjustedBufferSize < _buffer.length && _buffer.splice(0, _buffer.length - adjustedBufferSize);\n\n // Now, if we're not in an infinite time window, remove all values where the time is\n // older than what is allowed.\n if (!_infiniteTimeWindow) {\n const now = _timestampProvider.now();\n let last = 0;\n // Search the array for the first timestamp that isn't expired and\n // truncate the buffer up to that point.\n for (let i = 1; i < _buffer.length && (_buffer[i] as number) <= now; i += 2) {\n last = i;\n }\n last && _buffer.splice(0, last + 1);\n }\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Subscription } from '../Subscription';\nimport { SchedulerAction } from '../types';\n\n/**\n * A unit of work to be executed in a `scheduler`. An action is typically\n * created from within a {@link SchedulerLike} and an RxJS user does not need to concern\n * themselves about creating and manipulating an Action.\n *\n * ```ts\n * class Action extends Subscription {\n * new (scheduler: Scheduler, work: (state?: T) => void);\n * schedule(state?: T, delay: number = 0): Subscription;\n * }\n * ```\n *\n * @class Action\n */\nexport class Action extends Subscription {\n constructor(scheduler: Scheduler, work: (this: SchedulerAction, state?: T) => void) {\n super();\n }\n /**\n * Schedules this action on its parent {@link SchedulerLike} for execution. May be passed\n * some context object, `state`. 
May happen at some point in the future,\n * according to the `delay` parameter, if specified.\n * @param {T} [state] Some contextual data that the `work` function uses when\n * called by the Scheduler.\n * @param {number} [delay] Time to wait before executing the work, where the\n * time unit is implicit and defined by the Scheduler.\n * @return {void}\n */\n public schedule(state?: T, delay: number = 0): Subscription {\n return this;\n }\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetIntervalFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearIntervalFunction = (handle: TimerHandle) => void;\n\ninterface IntervalProvider {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n delegate:\n | {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n }\n | undefined;\n}\n\nexport const intervalProvider: IntervalProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setInterval(handler: () => void, timeout?: number, ...args) {\n const { delegate } = intervalProvider;\n if (delegate?.setInterval) {\n return delegate.setInterval(handler, timeout, ...args);\n }\n return setInterval(handler, timeout, ...args);\n },\n clearInterval(handle) {\n const { delegate } = intervalProvider;\n return (delegate?.clearInterval || clearInterval)(handle as any);\n },\n delegate: undefined,\n};\n", "import { Action } from './Action';\nimport { SchedulerAction } from '../types';\nimport { Subscription } from '../Subscription';\nimport { AsyncScheduler } from './AsyncScheduler';\nimport { intervalProvider } from './intervalProvider';\nimport { arrRemove } from '../util/arrRemove';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncAction extends Action {\n public id: TimerHandle | undefined;\n public state?: T;\n // @ts-ignore: Property has no initializer and is 
not definitely assigned\n public delay: number;\n protected pending: boolean = false;\n\n constructor(protected scheduler: AsyncScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n public schedule(state?: T, delay: number = 0): Subscription {\n if (this.closed) {\n return this;\n }\n\n // Always replace the current state with the new state.\n this.state = state;\n\n const id = this.id;\n const scheduler = this.scheduler;\n\n //\n // Important implementation note:\n //\n // Actions only execute once by default, unless rescheduled from within the\n // scheduled callback. This allows us to implement single and repeat\n // actions via the same code path, without adding API surface area, as well\n // as mimic traditional recursion but across asynchronous boundaries.\n //\n // However, JS runtimes and timers distinguish between intervals achieved by\n // serial `setTimeout` calls vs. a single `setInterval` call. An interval of\n // serial `setTimeout` calls can be individually delayed, which delays\n // scheduling the next `setTimeout`, and so on. `setInterval` attempts to\n // guarantee the interval callback will be invoked more precisely to the\n // interval period, regardless of load.\n //\n // Therefore, we use `setInterval` to schedule single and repeat actions.\n // If the action reschedules itself with the same delay, the interval is not\n // canceled. If the action doesn't reschedule, or reschedules with a\n // different delay, the interval will be canceled after scheduled callback\n // execution.\n //\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, delay);\n }\n\n // Set the pending flag indicating that this action has been scheduled, or\n // has recursively rescheduled itself.\n this.pending = true;\n\n this.delay = delay;\n // If this action has already an async Id, don't request a new one.\n this.id = this.id ?? 
this.requestAsyncId(scheduler, this.id, delay);\n\n return this;\n }\n\n protected requestAsyncId(scheduler: AsyncScheduler, _id?: TimerHandle, delay: number = 0): TimerHandle {\n return intervalProvider.setInterval(scheduler.flush.bind(scheduler, this), delay);\n }\n\n protected recycleAsyncId(_scheduler: AsyncScheduler, id?: TimerHandle, delay: number | null = 0): TimerHandle | undefined {\n // If this action is rescheduled with the same delay time, don't clear the interval id.\n if (delay != null && this.delay === delay && this.pending === false) {\n return id;\n }\n // Otherwise, if the action's delay time is different from the current delay,\n // or the action has been rescheduled before it's executed, clear the interval id\n if (id != null) {\n intervalProvider.clearInterval(id);\n }\n\n return undefined;\n }\n\n /**\n * Immediately executes this action and the `work` it contains.\n * @return {any}\n */\n public execute(state: T, delay: number): any {\n if (this.closed) {\n return new Error('executing a cancelled action');\n }\n\n this.pending = false;\n const error = this._execute(state, delay);\n if (error) {\n return error;\n } else if (this.pending === false && this.id != null) {\n // Dequeue if the action didn't reschedule itself. Don't call\n // unsubscribe(), because the action could reschedule later.\n // For example:\n // ```\n // scheduler.schedule(function doWork(counter) {\n // /* ... I'm a busy worker bee ... 
*/\n // var originalAction = this;\n // /* wait 100ms before rescheduling the action */\n // setTimeout(function () {\n // originalAction.schedule(counter + 1);\n // }, 100);\n // }, 1000);\n // ```\n this.id = this.recycleAsyncId(this.scheduler, this.id, null);\n }\n }\n\n protected _execute(state: T, _delay: number): any {\n let errored: boolean = false;\n let errorValue: any;\n try {\n this.work(state);\n } catch (e) {\n errored = true;\n // HACK: Since code elsewhere is relying on the \"truthiness\" of the\n // return here, we can't have it return \"\" or 0 or false.\n // TODO: Clean this up when we refactor schedulers mid-version-8 or so.\n errorValue = e ? e : new Error('Scheduled action threw falsy error');\n }\n if (errored) {\n this.unsubscribe();\n return errorValue;\n }\n }\n\n unsubscribe() {\n if (!this.closed) {\n const { id, scheduler } = this;\n const { actions } = scheduler;\n\n this.work = this.state = this.scheduler = null!;\n this.pending = false;\n\n arrRemove(actions, this);\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, null);\n }\n\n this.delay = null!;\n super.unsubscribe();\n }\n }\n}\n", "import { Action } from './scheduler/Action';\nimport { Subscription } from './Subscription';\nimport { SchedulerLike, SchedulerAction } from './types';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * An execution context and a data structure to order tasks and schedule their\n * execution. Provides a notion of (potentially virtual) time, through the\n * `now()` getter method.\n *\n * Each unit of work in a Scheduler is called an `Action`.\n *\n * ```ts\n * class Scheduler {\n * now(): number;\n * schedule(work, delay?, state?): Subscription;\n * }\n * ```\n *\n * @class Scheduler\n * @deprecated Scheduler is an internal implementation detail of RxJS, and\n * should not be used directly. Rather, create your own class and implement\n * {@link SchedulerLike}. 
Will be made internal in v8.\n */\nexport class Scheduler implements SchedulerLike {\n public static now: () => number = dateTimestampProvider.now;\n\n constructor(private schedulerActionCtor: typeof Action, now: () => number = Scheduler.now) {\n this.now = now;\n }\n\n /**\n * A getter method that returns a number representing the current time\n * (at the time this function was called) according to the scheduler's own\n * internal clock.\n * @return {number} A number that represents the current time. May or may not\n * have a relation to wall-clock time. May or may not refer to a time unit\n * (e.g. milliseconds).\n */\n public now: () => number;\n\n /**\n * Schedules a function, `work`, for execution. May happen at some point in\n * the future, according to the `delay` parameter, if specified. May be passed\n * some context object, `state`, which will be passed to the `work` function.\n *\n * The given arguments will be processed an stored as an Action object in a\n * queue of actions.\n *\n * @param {function(state: ?T): ?Subscription} work A function representing a\n * task, or some unit of work to be executed by the Scheduler.\n * @param {number} [delay] Time to wait before executing the work, where the\n * time unit is implicit and defined by the Scheduler itself.\n * @param {T} [state] Some contextual data that the `work` function uses when\n * called by the Scheduler.\n * @return {Subscription} A subscription in order to be able to unsubscribe\n * the scheduled work.\n */\n public schedule(work: (this: SchedulerAction, state?: T) => void, delay: number = 0, state?: T): Subscription {\n return new this.schedulerActionCtor(this, work).schedule(state, delay);\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Action } from './Action';\nimport { AsyncAction } from './AsyncAction';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncScheduler extends Scheduler {\n public actions: Array> = [];\n /**\n * A flag to indicate whether the 
Scheduler is currently executing a batch of\n * queued actions.\n * @type {boolean}\n * @internal\n */\n public _active: boolean = false;\n /**\n * An internal ID used to track the latest asynchronous task such as those\n * coming from `setTimeout`, `setInterval`, `requestAnimationFrame`, and\n * others.\n * @type {any}\n * @internal\n */\n public _scheduled: TimerHandle | undefined;\n\n constructor(SchedulerAction: typeof Action, now: () => number = Scheduler.now) {\n super(SchedulerAction, now);\n }\n\n public flush(action: AsyncAction): void {\n const { actions } = this;\n\n if (this._active) {\n actions.push(action);\n return;\n }\n\n let error: any;\n this._active = true;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions.shift()!)); // exhaust the scheduler queue\n\n this._active = false;\n\n if (error) {\n while ((action = actions.shift()!)) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\n\n/**\n *\n * Async Scheduler\n *\n * Schedule task as if you used setTimeout(task, duration)\n *\n * `async` scheduler schedules tasks asynchronously, by putting them on the JavaScript\n * event loop queue. 
It is best used to delay tasks in time or to schedule tasks repeating\n * in intervals.\n *\n * If you just want to \"defer\" task, that is to perform it right after currently\n * executing synchronous code ends (commonly achieved by `setTimeout(deferredTask, 0)`),\n * better choice will be the {@link asapScheduler} scheduler.\n *\n * ## Examples\n * Use async scheduler to delay task\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * const task = () => console.log('it works!');\n *\n * asyncScheduler.schedule(task, 2000);\n *\n * // After 2 seconds logs:\n * // \"it works!\"\n * ```\n *\n * Use async scheduler to repeat task in intervals\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * function task(state) {\n * console.log(state);\n * this.schedule(state + 1, 1000); // `this` references currently executing Action,\n * // which we reschedule with new state and delay\n * }\n *\n * asyncScheduler.schedule(task, 3000, 0);\n *\n * // Logs:\n * // 0 after 3s\n * // 1 after 4s\n * // 2 after 5s\n * // 3 after 6s\n * ```\n */\n\nexport const asyncScheduler = new AsyncScheduler(AsyncAction);\n\n/**\n * @deprecated Renamed to {@link asyncScheduler}. Will be removed in v8.\n */\nexport const async = asyncScheduler;\n", "import { AsyncAction } from './AsyncAction';\nimport { Subscription } from '../Subscription';\nimport { QueueScheduler } from './QueueScheduler';\nimport { SchedulerAction } from '../types';\nimport { TimerHandle } from './timerHandle';\n\nexport class QueueAction extends AsyncAction {\n constructor(protected scheduler: QueueScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n public schedule(state?: T, delay: number = 0): Subscription {\n if (delay > 0) {\n return super.schedule(state, delay);\n }\n this.delay = delay;\n this.state = state;\n this.scheduler.flush(this);\n return this;\n }\n\n public execute(state: T, delay: number): any {\n return delay > 0 || this.closed ? 
super.execute(state, delay) : this._execute(state, delay);\n }\n\n protected requestAsyncId(scheduler: QueueScheduler, id?: TimerHandle, delay: number = 0): TimerHandle {\n // If delay exists and is greater than 0, or if the delay is null (the\n // action wasn't rescheduled) but was originally scheduled as an async\n // action, then recycle as an async action.\n\n if ((delay != null && delay > 0) || (delay == null && this.delay > 0)) {\n return super.requestAsyncId(scheduler, id, delay);\n }\n\n // Otherwise flush the scheduler starting with this action.\n scheduler.flush(this);\n\n // HACK: In the past, this was returning `void`. However, `void` isn't a valid\n // `TimerHandle`, and generally the return value here isn't really used. So the\n // compromise is to return `0` which is both \"falsy\" and a valid `TimerHandle`,\n // as opposed to refactoring every other instanceo of `requestAsyncId`.\n return 0;\n }\n}\n", "import { AsyncScheduler } from './AsyncScheduler';\n\nexport class QueueScheduler extends AsyncScheduler {\n}\n", "import { QueueAction } from './QueueAction';\nimport { QueueScheduler } from './QueueScheduler';\n\n/**\n *\n * Queue Scheduler\n *\n * Put every next task on a queue, instead of executing it immediately\n *\n * `queue` scheduler, when used with delay, behaves the same as {@link asyncScheduler} scheduler.\n *\n * When used without delay, it schedules given task synchronously - executes it right when\n * it is scheduled. 
However when called recursively, that is when inside the scheduled task,\n * another task is scheduled with queue scheduler, instead of executing immediately as well,\n * that task will be put on a queue and wait for current one to finish.\n *\n * This means that when you execute task with `queue` scheduler, you are sure it will end\n * before any other task scheduled with that scheduler will start.\n *\n * ## Examples\n * Schedule recursively first, then do something\n * ```ts\n * import { queueScheduler } from 'rxjs';\n *\n * queueScheduler.schedule(() => {\n * queueScheduler.schedule(() => console.log('second')); // will not happen now, but will be put on a queue\n *\n * console.log('first');\n * });\n *\n * // Logs:\n * // \"first\"\n * // \"second\"\n * ```\n *\n * Reschedule itself recursively\n * ```ts\n * import { queueScheduler } from 'rxjs';\n *\n * queueScheduler.schedule(function(state) {\n * if (state !== 0) {\n * console.log('before', state);\n * this.schedule(state - 1); // `this` references currently executing Action,\n * // which we reschedule with new state\n * console.log('after', state);\n * }\n * }, 0, 3);\n *\n * // In scheduler that runs recursively, you would expect:\n * // \"before\", 3\n * // \"before\", 2\n * // \"before\", 1\n * // \"after\", 1\n * // \"after\", 2\n * // \"after\", 3\n *\n * // But with queue it logs:\n * // \"before\", 3\n * // \"after\", 3\n * // \"before\", 2\n * // \"after\", 2\n * // \"before\", 1\n * // \"after\", 1\n * ```\n */\n\nexport const queueScheduler = new QueueScheduler(QueueAction);\n\n/**\n * @deprecated Renamed to {@link queueScheduler}. 
Will be removed in v8.\n */\nexport const queue = queueScheduler;\n", "import { AsyncAction } from './AsyncAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\nimport { SchedulerAction } from '../types';\nimport { animationFrameProvider } from './animationFrameProvider';\nimport { TimerHandle } from './timerHandle';\n\nexport class AnimationFrameAction extends AsyncAction {\n constructor(protected scheduler: AnimationFrameScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n protected requestAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle {\n // If delay is greater than 0, request as an async action.\n if (delay !== null && delay > 0) {\n return super.requestAsyncId(scheduler, id, delay);\n }\n // Push the action to the end of the scheduler queue.\n scheduler.actions.push(this);\n // If an animation frame has already been requested, don't request another\n // one. If an animation frame hasn't been requested yet, request one. Return\n // the current animation frame request id.\n return scheduler._scheduled || (scheduler._scheduled = animationFrameProvider.requestAnimationFrame(() => scheduler.flush(undefined)));\n }\n\n protected recycleAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle | undefined {\n // If delay exists and is greater than 0, or if the delay is null (the\n // action wasn't rescheduled) but was originally scheduled as an async\n // action, then recycle as an async action.\n if (delay != null ? 
delay > 0 : this.delay > 0) {\n return super.recycleAsyncId(scheduler, id, delay);\n }\n // If the scheduler queue has no remaining actions with the same async id,\n // cancel the requested animation frame and set the scheduled flag to\n // undefined so the next AnimationFrameAction will request its own.\n const { actions } = scheduler;\n if (id != null && actions[actions.length - 1]?.id !== id) {\n animationFrameProvider.cancelAnimationFrame(id as number);\n scheduler._scheduled = undefined;\n }\n // Return undefined so the action knows to request a new async id if it's rescheduled.\n return undefined;\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\n\nexport class AnimationFrameScheduler extends AsyncScheduler {\n public flush(action?: AsyncAction): void {\n this._active = true;\n // The async id that effects a call to flush is stored in _scheduled.\n // Before executing an action, it's necessary to check the action's async\n // id to determine whether it's supposed to be executed in the current\n // flush.\n // Previous implementations of this method used a count to determine this,\n // but that was unsound, as actions that are unsubscribed - i.e. 
cancelled -\n // are removed from the actions array and that can shift actions that are\n // scheduled to be executed in a subsequent flush into positions at which\n // they are executed within the current flush.\n const flushId = this._scheduled;\n this._scheduled = undefined;\n\n const { actions } = this;\n let error: any;\n action = action || actions.shift()!;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions[0]) && action.id === flushId && actions.shift());\n\n this._active = false;\n\n if (error) {\n while ((action = actions[0]) && action.id === flushId && actions.shift()) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AnimationFrameAction } from './AnimationFrameAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\n\n/**\n *\n * Animation Frame Scheduler\n *\n * Perform task when `window.requestAnimationFrame` would fire\n *\n * When `animationFrame` scheduler is used with delay, it will fall back to {@link asyncScheduler} scheduler\n * behaviour.\n *\n * Without delay, `animationFrame` scheduler can be used to create smooth browser animations.\n * It makes sure scheduled task will happen just before next browser content repaint,\n * thus performing animations as efficiently as possible.\n *\n * ## Example\n * Schedule div height animation\n * ```ts\n * // html:
\n * import { animationFrameScheduler } from 'rxjs';\n *\n * const div = document.querySelector('div');\n *\n * animationFrameScheduler.schedule(function(height) {\n * div.style.height = height + \"px\";\n *\n * this.schedule(height + 1); // `this` references currently executing Action,\n * // which we reschedule with new state\n * }, 0, 0);\n *\n * // You will see a div element growing in height\n * ```\n */\n\nexport const animationFrameScheduler = new AnimationFrameScheduler(AnimationFrameAction);\n\n/**\n * @deprecated Renamed to {@link animationFrameScheduler}. Will be removed in v8.\n */\nexport const animationFrame = animationFrameScheduler;\n", "import { Observable } from '../Observable';\nimport { SchedulerLike } from '../types';\n\n/**\n * A simple Observable that emits no items to the Observer and immediately\n * emits a complete notification.\n *\n * Just emits 'complete', and nothing else.\n *\n * ![](empty.png)\n *\n * A simple Observable that only emits the complete notification. It can be used\n * for composing with other Observables, such as in a {@link mergeMap}.\n *\n * ## Examples\n *\n * Log complete notification\n *\n * ```ts\n * import { EMPTY } from 'rxjs';\n *\n * EMPTY.subscribe({\n * next: () => console.log('Next'),\n * complete: () => console.log('Complete!')\n * });\n *\n * // Outputs\n * // Complete!\n * ```\n *\n * Emit the number 7, then complete\n *\n * ```ts\n * import { EMPTY, startWith } from 'rxjs';\n *\n * const result = EMPTY.pipe(startWith(7));\n * result.subscribe(x => console.log(x));\n *\n * // Outputs\n * // 7\n * ```\n *\n * Map and flatten only odd numbers to the sequence `'a'`, `'b'`, `'c'`\n *\n * ```ts\n * import { interval, mergeMap, of, EMPTY } from 'rxjs';\n *\n * const interval$ = interval(1000);\n * const result = interval$.pipe(\n * mergeMap(x => x % 2 === 1 ? 
of('a', 'b', 'c') : EMPTY),\n * );\n * result.subscribe(x => console.log(x));\n *\n * // Results in the following to the console:\n * // x is equal to the count on the interval, e.g. (0, 1, 2, 3, ...)\n * // x will occur every 1000ms\n * // if x % 2 is equal to 1, print a, b, c (each on its own)\n * // if x % 2 is not equal to 1, nothing will be output\n * ```\n *\n * @see {@link Observable}\n * @see {@link NEVER}\n * @see {@link of}\n * @see {@link throwError}\n */\nexport const EMPTY = new Observable((subscriber) => subscriber.complete());\n\n/**\n * @param scheduler A {@link SchedulerLike} to use for scheduling\n * the emission of the complete notification.\n * @deprecated Replaced with the {@link EMPTY} constant or {@link scheduled} (e.g. `scheduled([], scheduler)`). Will be removed in v8.\n */\nexport function empty(scheduler?: SchedulerLike) {\n return scheduler ? emptyScheduled(scheduler) : EMPTY;\n}\n\nfunction emptyScheduled(scheduler: SchedulerLike) {\n return new Observable((subscriber) => scheduler.schedule(() => subscriber.complete()));\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport function isScheduler(value: any): value is SchedulerLike {\n return value && isFunction(value.schedule);\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\nimport { isScheduler } from './isScheduler';\n\nfunction last(arr: T[]): T | undefined {\n return arr[arr.length - 1];\n}\n\nexport function popResultSelector(args: any[]): ((...args: unknown[]) => unknown) | undefined {\n return isFunction(last(args)) ? args.pop() : undefined;\n}\n\nexport function popScheduler(args: any[]): SchedulerLike | undefined {\n return isScheduler(last(args)) ? args.pop() : undefined;\n}\n\nexport function popNumber(args: any[], defaultValue: number): number {\n return typeof last(args) === 'number' ? args.pop()! 
: defaultValue;\n}\n", "export const isArrayLike = ((x: any): x is ArrayLike => x && typeof x.length === 'number' && typeof x !== 'function');", "import { isFunction } from \"./isFunction\";\n\n/**\n * Tests to see if the object is \"thennable\".\n * @param value the object to test\n */\nexport function isPromise(value: any): value is PromiseLike {\n return isFunction(value?.then);\n}\n", "import { InteropObservable } from '../types';\nimport { observable as Symbol_observable } from '../symbol/observable';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being Observable (but not necessary an Rx Observable) */\nexport function isInteropObservable(input: any): input is InteropObservable {\n return isFunction(input[Symbol_observable]);\n}\n", "import { isFunction } from './isFunction';\n\nexport function isAsyncIterable(obj: any): obj is AsyncIterable {\n return Symbol.asyncIterator && isFunction(obj?.[Symbol.asyncIterator]);\n}\n", "/**\n * Creates the TypeError to throw if an invalid object is passed to `from` or `scheduled`.\n * @param input The object that was passed.\n */\nexport function createInvalidObservableTypeError(input: any) {\n // TODO: We should create error codes that can be looked up, so this can be less verbose.\n return new TypeError(\n `You provided ${\n input !== null && typeof input === 'object' ? 'an invalid object' : `'${input}'`\n } where a stream was expected. 
You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.`\n );\n}\n", "export function getSymbolIterator(): symbol {\n if (typeof Symbol !== 'function' || !Symbol.iterator) {\n return '@@iterator' as any;\n }\n\n return Symbol.iterator;\n}\n\nexport const iterator = getSymbolIterator();\n", "import { iterator as Symbol_iterator } from '../symbol/iterator';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being an Iterable */\nexport function isIterable(input: any): input is Iterable {\n return isFunction(input?.[Symbol_iterator]);\n}\n", "import { ReadableStreamLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport async function* readableStreamLikeToAsyncGenerator(readableStream: ReadableStreamLike): AsyncGenerator {\n const reader = readableStream.getReader();\n try {\n while (true) {\n const { value, done } = await reader.read();\n if (done) {\n return;\n }\n yield value!;\n }\n } finally {\n reader.releaseLock();\n }\n}\n\nexport function isReadableStreamLike(obj: any): obj is ReadableStreamLike {\n // We don't want to use instanceof checks because they would return\n // false for instances from another Realm, like an \n * \n *\n * */\n\n.aspect-ratio {\n height: 0;\n position: relative;\n}\n\n.aspect-ratio--16x9 { padding-bottom: 56.25%; }\n.aspect-ratio--9x16 { padding-bottom: 177.77%; }\n\n.aspect-ratio--4x3 { padding-bottom: 75%; }\n.aspect-ratio--3x4 { padding-bottom: 133.33%; }\n\n.aspect-ratio--6x4 { padding-bottom: 66.6%; }\n.aspect-ratio--4x6 { padding-bottom: 150%; }\n\n.aspect-ratio--8x5 { padding-bottom: 62.5%; }\n.aspect-ratio--5x8 { padding-bottom: 160%; }\n\n.aspect-ratio--7x5 { padding-bottom: 71.42%; }\n.aspect-ratio--5x7 { padding-bottom: 140%; }\n\n.aspect-ratio--1x1 { padding-bottom: 100%; }\n\n.aspect-ratio--object {\n position: absolute;\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n width: 100%;\n height: 100%;\n z-index: 100;\n}\n\n@media 
#{$breakpoint-not-small}{\n .aspect-ratio-ns {\n height: 0;\n position: relative;\n }\n .aspect-ratio--16x9-ns { padding-bottom: 56.25%; }\n .aspect-ratio--9x16-ns { padding-bottom: 177.77%; }\n .aspect-ratio--4x3-ns { padding-bottom: 75%; }\n .aspect-ratio--3x4-ns { padding-bottom: 133.33%; }\n .aspect-ratio--6x4-ns { padding-bottom: 66.6%; }\n .aspect-ratio--4x6-ns { padding-bottom: 150%; }\n .aspect-ratio--8x5-ns { padding-bottom: 62.5%; }\n .aspect-ratio--5x8-ns { padding-bottom: 160%; }\n .aspect-ratio--7x5-ns { padding-bottom: 71.42%; }\n .aspect-ratio--5x7-ns { padding-bottom: 140%; }\n .aspect-ratio--1x1-ns { padding-bottom: 100%; }\n .aspect-ratio--object-ns {\n position: absolute;\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n width: 100%;\n height: 100%;\n z-index: 100;\n }\n}\n\n@media #{$breakpoint-medium}{\n .aspect-ratio-m {\n height: 0;\n position: relative;\n }\n .aspect-ratio--16x9-m { padding-bottom: 56.25%; }\n .aspect-ratio--9x16-m { padding-bottom: 177.77%; }\n .aspect-ratio--4x3-m { padding-bottom: 75%; }\n .aspect-ratio--3x4-m { padding-bottom: 133.33%; }\n .aspect-ratio--6x4-m { padding-bottom: 66.6%; }\n .aspect-ratio--4x6-m { padding-bottom: 150%; }\n .aspect-ratio--8x5-m { padding-bottom: 62.5%; }\n .aspect-ratio--5x8-m { padding-bottom: 160%; }\n .aspect-ratio--7x5-m { padding-bottom: 71.42%; }\n .aspect-ratio--5x7-m { padding-bottom: 140%; }\n .aspect-ratio--1x1-m { padding-bottom: 100%; }\n .aspect-ratio--object-m {\n position: absolute;\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n width: 100%;\n height: 100%;\n z-index: 100;\n }\n}\n\n@media #{$breakpoint-large}{\n .aspect-ratio-l {\n height: 0;\n position: relative;\n }\n .aspect-ratio--16x9-l { padding-bottom: 56.25%; }\n .aspect-ratio--9x16-l { padding-bottom: 177.77%; }\n .aspect-ratio--4x3-l { padding-bottom: 75%; }\n .aspect-ratio--3x4-l { padding-bottom: 133.33%; }\n .aspect-ratio--6x4-l { padding-bottom: 66.6%; }\n .aspect-ratio--4x6-l { padding-bottom: 150%; }\n 
.aspect-ratio--8x5-l { padding-bottom: 62.5%; }\n .aspect-ratio--5x8-l { padding-bottom: 160%; }\n .aspect-ratio--7x5-l { padding-bottom: 71.42%; }\n .aspect-ratio--5x7-l { padding-bottom: 140%; }\n .aspect-ratio--1x1-l { padding-bottom: 100%; }\n .aspect-ratio--object-l {\n position: absolute;\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n width: 100%;\n height: 100%;\n z-index: 100;\n }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n IMAGES\n Docs: http://tachyons.io/docs/elements/images/\n\n*/\n\n/* Responsive images! */\n\nimg { max-width: 100%; }\n\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n BACKGROUND SIZE\n Docs: http://tachyons.io/docs/themes/background-size/\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n/*\n Often used in combination with background image set as an inline style\n on an html element.\n*/\n\n .cover { background-size: cover!important; }\n .contain { background-size: contain!important; }\n\n@media #{$breakpoint-not-small} {\n .cover-ns { background-size: cover!important; }\n .contain-ns { background-size: contain!important; }\n}\n\n@media #{$breakpoint-medium} {\n .cover-m { background-size: cover!important; }\n .contain-m { background-size: contain!important; }\n}\n\n@media #{$breakpoint-large} {\n .cover-l { background-size: cover!important; }\n .contain-l { background-size: contain!important; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n BACKGROUND POSITION\n\n Base:\n bg = background\n\n Modifiers:\n -center = center center\n -top = top center\n -right = center right\n -bottom = bottom center\n -left = center left\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n */\n\n.bg-center { \n background-repeat: no-repeat;\n background-position: center center; \n}\n\n.bg-top { \n background-repeat: no-repeat; \n background-position: top center; \n}\n\n.bg-right { \n 
background-repeat: no-repeat; \n background-position: center right; \n}\n\n.bg-bottom { \n background-repeat: no-repeat; \n background-position: bottom center; \n}\n\n.bg-left { \n background-repeat: no-repeat; \n background-position: center left; \n}\n\n@media #{$breakpoint-not-small} {\n .bg-center-ns { \n background-repeat: no-repeat;\n background-position: center center; \n }\n\n .bg-top-ns { \n background-repeat: no-repeat; \n background-position: top center; \n }\n\n .bg-right-ns { \n background-repeat: no-repeat; \n background-position: center right; \n }\n\n .bg-bottom-ns { \n background-repeat: no-repeat; \n background-position: bottom center; \n }\n\n .bg-left-ns { \n background-repeat: no-repeat; \n background-position: center left; \n }\n}\n\n@media #{$breakpoint-medium} {\n .bg-center-m { \n background-repeat: no-repeat;\n background-position: center center; \n }\n\n .bg-top-m { \n background-repeat: no-repeat; \n background-position: top center; \n }\n\n .bg-right-m { \n background-repeat: no-repeat; \n background-position: center right; \n }\n\n .bg-bottom-m { \n background-repeat: no-repeat; \n background-position: bottom center; \n }\n\n .bg-left-m { \n background-repeat: no-repeat; \n background-position: center left; \n }\n}\n\n@media #{$breakpoint-large} {\n .bg-center-l { \n background-repeat: no-repeat;\n background-position: center center; \n }\n\n .bg-top-l { \n background-repeat: no-repeat; \n background-position: top center; \n }\n\n .bg-right-l { \n background-repeat: no-repeat; \n background-position: center right; \n }\n\n .bg-bottom-l { \n background-repeat: no-repeat; \n background-position: bottom center; \n }\n\n .bg-left-l { \n background-repeat: no-repeat; \n background-position: center left; \n }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n OUTLINES\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.outline { outline: 1px solid; }\n.outline-transparent { outline: 
1px solid transparent; }\n.outline-0 { outline: 0; }\n\n@media #{$breakpoint-not-small} {\n .outline-ns { outline: 1px solid; }\n .outline-transparent-ns { outline: 1px solid transparent; }\n .outline-0-ns { outline: 0; }\n}\n\n@media #{$breakpoint-medium} {\n .outline-m { outline: 1px solid; }\n .outline-transparent-m { outline: 1px solid transparent; }\n .outline-0-m { outline: 0; }\n}\n\n@media #{$breakpoint-large} {\n .outline-l { outline: 1px solid; }\n .outline-transparent-l { outline: 1px solid transparent; }\n .outline-0-l { outline: 0; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n BORDERS\n Docs: http://tachyons.io/docs/themes/borders/\n\n Base:\n b = border\n\n Modifiers:\n a = all\n t = top\n r = right\n b = bottom\n l = left\n n = none\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n .ba { border-style: solid; border-width: 1px; }\n .bt { border-top-style: solid; border-top-width: 1px; }\n .br { border-right-style: solid; border-right-width: 1px; }\n .bb { border-bottom-style: solid; border-bottom-width: 1px; }\n .bl { border-left-style: solid; border-left-width: 1px; }\n .bn { border-style: none; border-width: 0; }\n\n\n@media #{$breakpoint-not-small} {\n .ba-ns { border-style: solid; border-width: 1px; }\n .bt-ns { border-top-style: solid; border-top-width: 1px; }\n .br-ns { border-right-style: solid; border-right-width: 1px; }\n .bb-ns { border-bottom-style: solid; border-bottom-width: 1px; }\n .bl-ns { border-left-style: solid; border-left-width: 1px; }\n .bn-ns { border-style: none; border-width: 0; }\n}\n\n@media #{$breakpoint-medium} {\n .ba-m { border-style: solid; border-width: 1px; }\n .bt-m { border-top-style: solid; border-top-width: 1px; }\n .br-m { border-right-style: solid; border-right-width: 1px; }\n .bb-m { border-bottom-style: solid; border-bottom-width: 1px; }\n .bl-m { border-left-style: solid; border-left-width: 1px; }\n .bn-m { border-style: none; border-width: 
0; }\n}\n\n@media #{$breakpoint-large} {\n .ba-l { border-style: solid; border-width: 1px; }\n .bt-l { border-top-style: solid; border-top-width: 1px; }\n .br-l { border-right-style: solid; border-right-width: 1px; }\n .bb-l { border-bottom-style: solid; border-bottom-width: 1px; }\n .bl-l { border-left-style: solid; border-left-width: 1px; }\n .bn-l { border-style: none; border-width: 0; }\n}\n\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n BORDER COLORS\n Docs: http://tachyons.io/docs/themes/borders/\n\n Border colors can be used to extend the base\n border classes ba,bt,bb,br,bl found in the _borders.css file.\n\n The base border class by default will set the color of the border\n to that of the current text color. These classes are for the cases\n where you desire for the text and border colors to be different.\n\n Base:\n b = border\n\n Modifiers:\n --color-name = each color variable name is also a border color name\n\n*/\n\n.b--black { border-color: $black; }\n.b--near-black { border-color: $near-black; }\n.b--dark-gray { border-color: $dark-gray; }\n.b--mid-gray { border-color: $mid-gray; }\n.b--gray { border-color: $gray; }\n.b--silver { border-color: $silver; }\n.b--light-silver { border-color: $light-silver; }\n.b--moon-gray { border-color: $moon-gray; }\n.b--light-gray { border-color: $light-gray; }\n.b--near-white { border-color: $near-white; }\n.b--white { border-color: $white; }\n\n.b--white-90 { border-color: $white-90; }\n.b--white-80 { border-color: $white-80; }\n.b--white-70 { border-color: $white-70; }\n.b--white-60 { border-color: $white-60; }\n.b--white-50 { border-color: $white-50; }\n.b--white-40 { border-color: $white-40; }\n.b--white-30 { border-color: $white-30; }\n.b--white-20 { border-color: $white-20; }\n.b--white-10 { border-color: $white-10; }\n.b--white-05 { border-color: $white-05; }\n.b--white-025 { border-color: $white-025; }\n.b--white-0125 { border-color: $white-0125; }\n\n.b--black-90 { 
border-color: $black-90; }\n.b--black-80 { border-color: $black-80; }\n.b--black-70 { border-color: $black-70; }\n.b--black-60 { border-color: $black-60; }\n.b--black-50 { border-color: $black-50; }\n.b--black-40 { border-color: $black-40; }\n.b--black-30 { border-color: $black-30; }\n.b--black-20 { border-color: $black-20; }\n.b--black-10 { border-color: $black-10; }\n.b--black-05 { border-color: $black-05; }\n.b--black-025 { border-color: $black-025; }\n.b--black-0125 { border-color: $black-0125; }\n\n.b--dark-red { border-color: $dark-red; }\n.b--red { border-color: $red; }\n.b--light-red { border-color: $light-red; }\n.b--orange { border-color: $orange; }\n.b--gold { border-color: $gold; }\n.b--yellow { border-color: $yellow; }\n.b--light-yellow { border-color: $light-yellow; }\n.b--purple { border-color: $purple; }\n.b--light-purple { border-color: $light-purple; }\n.b--dark-pink { border-color: $dark-pink; }\n.b--hot-pink { border-color: $hot-pink; }\n.b--pink { border-color: $pink; }\n.b--light-pink { border-color: $light-pink; }\n.b--dark-green { border-color: $dark-green; }\n.b--green { border-color: $green; }\n.b--light-green { border-color: $light-green; }\n.b--navy { border-color: $navy; }\n.b--dark-blue { border-color: $dark-blue; }\n.b--blue { border-color: $blue; }\n.b--light-blue { border-color: $light-blue; }\n.b--lightest-blue { border-color: $lightest-blue; }\n.b--washed-blue { border-color: $washed-blue; }\n.b--washed-green { border-color: $washed-green; }\n.b--washed-yellow { border-color: $washed-yellow; }\n.b--washed-red { border-color: $washed-red; }\n\n.b--transparent { border-color: $transparent; }\n.b--inherit { border-color: inherit; }\n","\n// Converted Variables\n\n$sans-serif: -apple-system, BlinkMacSystemFont, 'avenir next', avenir, helvetica, 'helvetica neue', ubuntu, roboto, noto, 'segoe ui', arial, sans-serif !default;\n$serif: georgia, serif !default;\n$code: consolas, monaco, monospace !default;\n$font-size-headline: 6rem 
!default;\n$font-size-subheadline: 5rem !default;\n$font-size-1: 3rem !default;\n$font-size-2: 2.25rem !default;\n$font-size-3: 1.5rem !default;\n$font-size-4: 1.25rem !default;\n$font-size-5: 1rem !default;\n$font-size-6: .875rem !default;\n$font-size-7: .75rem !default;\n$letter-spacing-tight: -.05em !default;\n$letter-spacing-1: .1em !default;\n$letter-spacing-2: .25em !default;\n$line-height-solid: 1 !default;\n$line-height-title: 1.25 !default;\n$line-height-copy: 1.5 !default;\n$measure: 30em !default;\n$measure-narrow: 20em !default;\n$measure-wide: 34em !default;\n$spacing-none: 0 !default;\n$spacing-extra-small: .25rem !default;\n$spacing-small: .5rem !default;\n$spacing-medium: 1rem !default;\n$spacing-large: 2rem !default;\n$spacing-extra-large: 4rem !default;\n$spacing-extra-extra-large: 8rem !default;\n$spacing-extra-extra-extra-large: 16rem !default;\n$spacing-copy-separator: 1.5em !default;\n$height-1: 1rem !default;\n$height-2: 2rem !default;\n$height-3: 4rem !default;\n$height-4: 8rem !default;\n$height-5: 16rem !default;\n$width-1: 1rem !default;\n$width-2: 2rem !default;\n$width-3: 4rem !default;\n$width-4: 8rem !default;\n$width-5: 16rem !default;\n$max-width-1: 1rem !default;\n$max-width-2: 2rem !default;\n$max-width-3: 4rem !default;\n$max-width-4: 8rem !default;\n$max-width-5: 16rem !default;\n$max-width-6: 32rem !default;\n$max-width-7: 48rem !default;\n$max-width-8: 64rem !default;\n$max-width-9: 96rem !default;\n$border-radius-none: 0 !default;\n$border-radius-1: .125rem !default;\n$border-radius-2: .25rem !default;\n$border-radius-3: .5rem !default;\n$border-radius-4: 1rem !default;\n$border-radius-circle: 100% !default;\n$border-radius-pill: 9999px !default;\n$border-width-none: 0 !default;\n$border-width-1: .125rem !default;\n$border-width-2: .25rem !default;\n$border-width-3: .5rem !default;\n$border-width-4: 1rem !default;\n$border-width-5: 2rem !default;\n$box-shadow-1: 0px 0px 4px 2px rgba( 0, 0, 0, 0.2 ) !default;\n$box-shadow-2: 
0px 0px 8px 2px rgba( 0, 0, 0, 0.2 ) !default;\n$box-shadow-3: 2px 2px 4px 2px rgba( 0, 0, 0, 0.2 ) !default;\n$box-shadow-4: 2px 2px 8px 0px rgba( 0, 0, 0, 0.2 ) !default;\n$box-shadow-5: 4px 4px 8px 0px rgba( 0, 0, 0, 0.2 ) !default;\n$black: #000 !default;\n$near-black: #111 !default;\n$dark-gray: #333 !default;\n$mid-gray: #555 !default;\n$gray: #777 !default;\n$silver: #999 !default;\n$light-silver: #aaa !default;\n$moon-gray: #ccc !default;\n$light-gray: #eee !default;\n$near-white: #f4f4f4 !default;\n$white: #fff !default;\n$transparent: transparent !default;\n$black-90: rgba(0,0,0,.9) !default;\n$black-80: rgba(0,0,0,.8) !default;\n$black-70: rgba(0,0,0,.7) !default;\n$black-60: rgba(0,0,0,.6) !default;\n$black-50: rgba(0,0,0,.5) !default;\n$black-40: rgba(0,0,0,.4) !default;\n$black-30: rgba(0,0,0,.3) !default;\n$black-20: rgba(0,0,0,.2) !default;\n$black-10: rgba(0,0,0,.1) !default;\n$black-05: rgba(0,0,0,.05) !default;\n$black-025: rgba(0,0,0,.025) !default;\n$black-0125: rgba(0,0,0,.0125) !default;\n$white-90: rgba(255,255,255,.9) !default;\n$white-80: rgba(255,255,255,.8) !default;\n$white-70: rgba(255,255,255,.7) !default;\n$white-60: rgba(255,255,255,.6) !default;\n$white-50: rgba(255,255,255,.5) !default;\n$white-40: rgba(255,255,255,.4) !default;\n$white-30: rgba(255,255,255,.3) !default;\n$white-20: rgba(255,255,255,.2) !default;\n$white-10: rgba(255,255,255,.1) !default;\n$white-05: rgba(255,255,255,.05) !default;\n$white-025: rgba(255,255,255,.025) !default;\n$white-0125: rgba(255,255,255,.0125) !default;\n$dark-red: #e7040f !default;\n$red: #ff4136 !default;\n$light-red: #ff725c !default;\n$orange: #ff6300 !default;\n$gold: #ffb700 !default;\n$yellow: #ffd700 !default;\n$light-yellow: #fbf1a9 !default;\n$purple: #5e2ca5 !default;\n$light-purple: #a463f2 !default;\n$dark-pink: #d5008f !default;\n$hot-pink: #ff41b4 !default;\n$pink: #ff80cc !default;\n$light-pink: #ffa3d7 !default;\n$dark-green: #137752 !default;\n$green: #19a974 
!default;\n$light-green: #9eebcf !default;\n$navy: #001b44 !default;\n$dark-blue: #00449e !default;\n$blue: #357edd !default;\n$light-blue: #96ccff !default;\n$lightest-blue: #cdecff !default;\n$washed-blue: #f6fffe !default;\n$washed-green: #e8fdf5 !default;\n$washed-yellow: #fffceb !default;\n$washed-red: #ffdfdf !default;\n\n// Custom Media Query Variables\n\n$breakpoint-not-small: 'screen and (min-width: 30em)' !default;\n$breakpoint-medium: 'screen and (min-width: 30em) and (max-width: 60em)' !default;\n$breakpoint-large: 'screen and (min-width: 60em)' !default;\n\n/*\n\n VARIABLES\n\n*/\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n BORDER RADIUS\n Docs: http://tachyons.io/docs/themes/border-radius/\n\n Base:\n br = border-radius\n\n Modifiers:\n 0 = 0/none\n 1 = 1st step in scale\n 2 = 2nd step in scale\n 3 = 3rd step in scale\n 4 = 4th step in scale\n\n Literal values:\n -100 = 100%\n -pill = 9999px\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n .br0 { border-radius: $border-radius-none }\n .br1 { border-radius: $border-radius-1; }\n .br2 { border-radius: $border-radius-2; }\n .br3 { border-radius: $border-radius-3; }\n .br4 { border-radius: $border-radius-4; }\n .br-100 { border-radius: $border-radius-circle; }\n .br-pill { border-radius: $border-radius-pill; }\n .br--bottom {\n border-top-left-radius: 0;\n border-top-right-radius: 0;\n }\n .br--top {\n border-bottom-left-radius: 0;\n border-bottom-right-radius: 0;\n }\n .br--right {\n border-top-left-radius: 0;\n border-bottom-left-radius: 0;\n }\n .br--left {\n border-top-right-radius: 0;\n border-bottom-right-radius: 0;\n }\n\n@media #{$breakpoint-not-small} {\n .br0-ns { border-radius: $border-radius-none }\n .br1-ns { border-radius: $border-radius-1; }\n .br2-ns { border-radius: $border-radius-2; }\n .br3-ns { border-radius: $border-radius-3; }\n .br4-ns { border-radius: $border-radius-4; }\n .br-100-ns { border-radius: 
$border-radius-circle; }\n .br-pill-ns { border-radius: $border-radius-pill; }\n .br--bottom-ns {\n border-top-left-radius: 0;\n border-top-right-radius: 0;\n }\n .br--top-ns {\n border-bottom-left-radius: 0;\n border-bottom-right-radius: 0;\n }\n .br--right-ns {\n border-top-left-radius: 0;\n border-bottom-left-radius: 0;\n }\n .br--left-ns {\n border-top-right-radius: 0;\n border-bottom-right-radius: 0;\n }\n}\n\n@media #{$breakpoint-medium} {\n .br0-m { border-radius: $border-radius-none }\n .br1-m { border-radius: $border-radius-1; }\n .br2-m { border-radius: $border-radius-2; }\n .br3-m { border-radius: $border-radius-3; }\n .br4-m { border-radius: $border-radius-4; }\n .br-100-m { border-radius: $border-radius-circle; }\n .br-pill-m { border-radius: $border-radius-pill; }\n .br--bottom-m {\n border-top-left-radius: 0;\n border-top-right-radius: 0;\n }\n .br--top-m {\n border-bottom-left-radius: 0;\n border-bottom-right-radius: 0;\n }\n .br--right-m {\n border-top-left-radius: 0;\n border-bottom-left-radius: 0;\n }\n .br--left-m {\n border-top-right-radius: 0;\n border-bottom-right-radius: 0;\n }\n}\n\n@media #{$breakpoint-large} {\n .br0-l { border-radius: $border-radius-none }\n .br1-l { border-radius: $border-radius-1; }\n .br2-l { border-radius: $border-radius-2; }\n .br3-l { border-radius: $border-radius-3; }\n .br4-l { border-radius: $border-radius-4; }\n .br-100-l { border-radius: $border-radius-circle; }\n .br-pill-l { border-radius: $border-radius-pill; }\n .br--bottom-l {\n border-top-left-radius: 0;\n border-top-right-radius: 0;\n }\n .br--top-l {\n border-bottom-left-radius: 0;\n border-bottom-right-radius: 0;\n }\n .br--right-l {\n border-top-left-radius: 0;\n border-bottom-left-radius: 0;\n }\n .br--left-l {\n border-top-right-radius: 0;\n border-bottom-right-radius: 0;\n }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n BORDER STYLES\n Docs: http://tachyons.io/docs/themes/borders/\n\n Depends on base border 
module in _borders.css\n\n Base:\n b = border-style\n\n Modifiers:\n --none = none\n --dotted = dotted\n --dashed = dashed\n --solid = solid\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n */\n\n.b--dotted { border-style: dotted; }\n.b--dashed { border-style: dashed; }\n.b--solid { border-style: solid; }\n.b--none { border-style: none; }\n\n@media #{$breakpoint-not-small} {\n .b--dotted-ns { border-style: dotted; }\n .b--dashed-ns { border-style: dashed; }\n .b--solid-ns { border-style: solid; }\n .b--none-ns { border-style: none; }\n}\n\n@media #{$breakpoint-medium} {\n .b--dotted-m { border-style: dotted; }\n .b--dashed-m { border-style: dashed; }\n .b--solid-m { border-style: solid; }\n .b--none-m { border-style: none; }\n}\n\n@media #{$breakpoint-large} {\n .b--dotted-l { border-style: dotted; }\n .b--dashed-l { border-style: dashed; }\n .b--solid-l { border-style: solid; }\n .b--none-l { border-style: none; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n BORDER WIDTHS\n Docs: http://tachyons.io/docs/themes/borders/\n\n Base:\n bw = border-width\n\n Modifiers:\n 0 = 0 width border\n 1 = 1st step in border-width scale\n 2 = 2nd step in border-width scale\n 3 = 3rd step in border-width scale\n 4 = 4th step in border-width scale\n 5 = 5th step in border-width scale\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.bw0 { border-width: $border-width-none; }\n.bw1 { border-width: $border-width-1; }\n.bw2 { border-width: $border-width-2; }\n.bw3 { border-width: $border-width-3; }\n.bw4 { border-width: $border-width-4; }\n.bw5 { border-width: $border-width-5; }\n\n/* Resets */\n.bt-0 { border-top-width: $border-width-none }\n.br-0 { border-right-width: $border-width-none }\n.bb-0 { border-bottom-width: $border-width-none }\n.bl-0 { border-left-width: $border-width-none }\n\n@media #{$breakpoint-not-small} {\n .bw0-ns { border-width: $border-width-none; }\n .bw1-ns { 
border-width: $border-width-1; }\n .bw2-ns { border-width: $border-width-2; }\n .bw3-ns { border-width: $border-width-3; }\n .bw4-ns { border-width: $border-width-4; }\n .bw5-ns { border-width: $border-width-5; }\n .bt-0-ns { border-top-width: $border-width-none }\n .br-0-ns { border-right-width: $border-width-none }\n .bb-0-ns { border-bottom-width: $border-width-none }\n .bl-0-ns { border-left-width: $border-width-none }\n}\n\n@media #{$breakpoint-medium} {\n .bw0-m { border-width: $border-width-none; }\n .bw1-m { border-width: $border-width-1; }\n .bw2-m { border-width: $border-width-2; }\n .bw3-m { border-width: $border-width-3; }\n .bw4-m { border-width: $border-width-4; }\n .bw5-m { border-width: $border-width-5; }\n .bt-0-m { border-top-width: $border-width-none }\n .br-0-m { border-right-width: $border-width-none }\n .bb-0-m { border-bottom-width: $border-width-none }\n .bl-0-m { border-left-width: $border-width-none }\n}\n\n@media #{$breakpoint-large} {\n .bw0-l { border-width: $border-width-none; }\n .bw1-l { border-width: $border-width-1; }\n .bw2-l { border-width: $border-width-2; }\n .bw3-l { border-width: $border-width-3; }\n .bw4-l { border-width: $border-width-4; }\n .bw5-l { border-width: $border-width-5; }\n .bt-0-l { border-top-width: $border-width-none }\n .br-0-l { border-right-width: $border-width-none }\n .bb-0-l { border-bottom-width: $border-width-none }\n .bl-0-l { border-left-width: $border-width-none }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n BOX-SHADOW\n Docs: http://tachyons.io/docs/themes/box-shadow/\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n */\n\n.shadow-1 { box-shadow: $box-shadow-1; }\n.shadow-2 { box-shadow: $box-shadow-2; }\n.shadow-3 { box-shadow: $box-shadow-3; }\n.shadow-4 { box-shadow: $box-shadow-4; }\n.shadow-5 { box-shadow: $box-shadow-5; }\n\n@media #{$breakpoint-not-small} {\n .shadow-1-ns { box-shadow: $box-shadow-1; }\n .shadow-2-ns { 
box-shadow: $box-shadow-2; }\n .shadow-3-ns { box-shadow: $box-shadow-3; }\n .shadow-4-ns { box-shadow: $box-shadow-4; }\n .shadow-5-ns { box-shadow: $box-shadow-5; }\n}\n\n@media #{$breakpoint-medium} {\n .shadow-1-m { box-shadow: $box-shadow-1; }\n .shadow-2-m { box-shadow: $box-shadow-2; }\n .shadow-3-m { box-shadow: $box-shadow-3; }\n .shadow-4-m { box-shadow: $box-shadow-4; }\n .shadow-5-m { box-shadow: $box-shadow-5; }\n}\n\n@media #{$breakpoint-large} {\n .shadow-1-l { box-shadow: $box-shadow-1; }\n .shadow-2-l { box-shadow: $box-shadow-2; }\n .shadow-3-l { box-shadow: $box-shadow-3; }\n .shadow-4-l { box-shadow: $box-shadow-4; }\n .shadow-5-l { box-shadow: $box-shadow-5; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n CODE\n\n*/\n\n.pre {\n overflow-x: auto;\n overflow-y: hidden;\n overflow: scroll;\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n COORDINATES\n Docs: http://tachyons.io/docs/layout/position/\n\n Use in combination with the position module.\n\n Base:\n top\n bottom\n right\n left\n\n Modifiers:\n -0 = literal value 0\n -1 = literal value 1\n -2 = literal value 2\n --1 = literal value -1\n --2 = literal value -2\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.top-0 { top: 0; }\n.right-0 { right: 0; }\n.bottom-0 { bottom: 0; }\n.left-0 { left: 0; }\n\n.top-1 { top: 1rem; }\n.right-1 { right: 1rem; }\n.bottom-1 { bottom: 1rem; }\n.left-1 { left: 1rem; }\n\n.top-2 { top: 2rem; }\n.right-2 { right: 2rem; }\n.bottom-2 { bottom: 2rem; }\n.left-2 { left: 2rem; }\n\n.top--1 { top: -1rem; }\n.right--1 { right: -1rem; }\n.bottom--1 { bottom: -1rem; }\n.left--1 { left: -1rem; }\n\n.top--2 { top: -2rem; }\n.right--2 { right: -2rem; }\n.bottom--2 { bottom: -2rem; }\n.left--2 { left: -2rem; }\n\n\n.absolute--fill {\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n}\n\n@media #{$breakpoint-not-small} {\n .top-0-ns { top: 0; }\n .left-0-ns { left: 0; }\n 
.right-0-ns { right: 0; }\n .bottom-0-ns { bottom: 0; }\n .top-1-ns { top: 1rem; }\n .left-1-ns { left: 1rem; }\n .right-1-ns { right: 1rem; }\n .bottom-1-ns { bottom: 1rem; }\n .top-2-ns { top: 2rem; }\n .left-2-ns { left: 2rem; }\n .right-2-ns { right: 2rem; }\n .bottom-2-ns { bottom: 2rem; }\n .top--1-ns { top: -1rem; }\n .right--1-ns { right: -1rem; }\n .bottom--1-ns { bottom: -1rem; }\n .left--1-ns { left: -1rem; }\n .top--2-ns { top: -2rem; }\n .right--2-ns { right: -2rem; }\n .bottom--2-ns { bottom: -2rem; }\n .left--2-ns { left: -2rem; }\n .absolute--fill-ns {\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n }\n}\n\n@media #{$breakpoint-medium} {\n .top-0-m { top: 0; }\n .left-0-m { left: 0; }\n .right-0-m { right: 0; }\n .bottom-0-m { bottom: 0; }\n .top-1-m { top: 1rem; }\n .left-1-m { left: 1rem; }\n .right-1-m { right: 1rem; }\n .bottom-1-m { bottom: 1rem; }\n .top-2-m { top: 2rem; }\n .left-2-m { left: 2rem; }\n .right-2-m { right: 2rem; }\n .bottom-2-m { bottom: 2rem; }\n .top--1-m { top: -1rem; }\n .right--1-m { right: -1rem; }\n .bottom--1-m { bottom: -1rem; }\n .left--1-m { left: -1rem; }\n .top--2-m { top: -2rem; }\n .right--2-m { right: -2rem; }\n .bottom--2-m { bottom: -2rem; }\n .left--2-m { left: -2rem; }\n .absolute--fill-m {\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n }\n}\n\n@media #{$breakpoint-large} {\n .top-0-l { top: 0; }\n .left-0-l { left: 0; }\n .right-0-l { right: 0; }\n .bottom-0-l { bottom: 0; }\n .top-1-l { top: 1rem; }\n .left-1-l { left: 1rem; }\n .right-1-l { right: 1rem; }\n .bottom-1-l { bottom: 1rem; }\n .top-2-l { top: 2rem; }\n .left-2-l { left: 2rem; }\n .right-2-l { right: 2rem; }\n .bottom-2-l { bottom: 2rem; }\n .top--1-l { top: -1rem; }\n .right--1-l { right: -1rem; }\n .bottom--1-l { bottom: -1rem; }\n .left--1-l { left: -1rem; }\n .top--2-l { top: -2rem; }\n .right--2-l { right: -2rem; }\n .bottom--2-l { bottom: -2rem; }\n .left--2-l { left: -2rem; }\n .absolute--fill-l {\n top: 0;\n right: 0;\n bottom: 0;\n 
left: 0;\n }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n CLEARFIX\n http://tachyons.io/docs/layout/clearfix/\n\n*/\n\n/* Nicolas Gallaghers Clearfix solution\n Ref: http://nicolasgallagher.com/micro-clearfix-hack/ */\n\n.cf:before,\n.cf:after { content: \" \"; display: table; }\n.cf:after { clear: both; }\n.cf { *zoom: 1; }\n\n.cl { clear: left; }\n.cr { clear: right; }\n.cb { clear: both; }\n.cn { clear: none; }\n\n@media #{$breakpoint-not-small} {\n .cl-ns { clear: left; }\n .cr-ns { clear: right; }\n .cb-ns { clear: both; }\n .cn-ns { clear: none; }\n}\n\n@media #{$breakpoint-medium} {\n .cl-m { clear: left; }\n .cr-m { clear: right; }\n .cb-m { clear: both; }\n .cn-m { clear: none; }\n}\n\n@media #{$breakpoint-large} {\n .cl-l { clear: left; }\n .cr-l { clear: right; }\n .cb-l { clear: both; }\n .cn-l { clear: none; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n FLEXBOX\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.flex { display: flex; }\n.inline-flex { display: inline-flex; }\n\n/* 1. 
Fix for Chrome 44 bug.\n * https://code.google.com/p/chromium/issues/detail?id=506893 */\n.flex-auto {\n flex: 1 1 auto;\n min-width: 0; /* 1 */\n min-height: 0; /* 1 */\n}\n\n.flex-none { flex: none; }\n\n.flex-column { flex-direction: column; }\n.flex-row { flex-direction: row; }\n.flex-wrap { flex-wrap: wrap; }\n.flex-nowrap { flex-wrap: nowrap; }\n.flex-wrap-reverse { flex-wrap: wrap-reverse; }\n.flex-column-reverse { flex-direction: column-reverse; }\n.flex-row-reverse { flex-direction: row-reverse; }\n\n.items-start { align-items: flex-start; }\n.items-end { align-items: flex-end; }\n.items-center { align-items: center; }\n.items-baseline { align-items: baseline; }\n.items-stretch { align-items: stretch; }\n\n.self-start { align-self: flex-start; }\n.self-end { align-self: flex-end; }\n.self-center { align-self: center; }\n.self-baseline { align-self: baseline; }\n.self-stretch { align-self: stretch; }\n\n.justify-start { justify-content: flex-start; }\n.justify-end { justify-content: flex-end; }\n.justify-center { justify-content: center; }\n.justify-between { justify-content: space-between; }\n.justify-around { justify-content: space-around; }\n\n.content-start { align-content: flex-start; }\n.content-end { align-content: flex-end; }\n.content-center { align-content: center; }\n.content-between { align-content: space-between; }\n.content-around { align-content: space-around; }\n.content-stretch { align-content: stretch; }\n\n.order-0 { order: 0; }\n.order-1 { order: 1; }\n.order-2 { order: 2; }\n.order-3 { order: 3; }\n.order-4 { order: 4; }\n.order-5 { order: 5; }\n.order-6 { order: 6; }\n.order-7 { order: 7; }\n.order-8 { order: 8; }\n.order-last { order: 99999; }\n\n.flex-grow-0 { flex-grow: 0; }\n.flex-grow-1 { flex-grow: 1; }\n\n.flex-shrink-0 { flex-shrink: 0; }\n.flex-shrink-1 { flex-shrink: 1; }\n\n@media #{$breakpoint-not-small} {\n .flex-ns { display: flex; }\n .inline-flex-ns { display: inline-flex; }\n .flex-auto-ns {\n flex: 1 1 auto;\n 
min-width: 0; /* 1 */\n min-height: 0; /* 1 */\n }\n .flex-none-ns { flex: none; }\n .flex-column-ns { flex-direction: column; }\n .flex-row-ns { flex-direction: row; }\n .flex-wrap-ns { flex-wrap: wrap; }\n .flex-nowrap-ns { flex-wrap: nowrap; }\n .flex-wrap-reverse-ns { flex-wrap: wrap-reverse; }\n .flex-column-reverse-ns { flex-direction: column-reverse; }\n .flex-row-reverse-ns { flex-direction: row-reverse; }\n .items-start-ns { align-items: flex-start; }\n .items-end-ns { align-items: flex-end; }\n .items-center-ns { align-items: center; }\n .items-baseline-ns { align-items: baseline; }\n .items-stretch-ns { align-items: stretch; }\n\n .self-start-ns { align-self: flex-start; }\n .self-end-ns { align-self: flex-end; }\n .self-center-ns { align-self: center; }\n .self-baseline-ns { align-self: baseline; }\n .self-stretch-ns { align-self: stretch; }\n\n .justify-start-ns { justify-content: flex-start; }\n .justify-end-ns { justify-content: flex-end; }\n .justify-center-ns { justify-content: center; }\n .justify-between-ns { justify-content: space-between; }\n .justify-around-ns { justify-content: space-around; }\n\n .content-start-ns { align-content: flex-start; }\n .content-end-ns { align-content: flex-end; }\n .content-center-ns { align-content: center; }\n .content-between-ns { align-content: space-between; }\n .content-around-ns { align-content: space-around; }\n .content-stretch-ns { align-content: stretch; }\n\n .order-0-ns { order: 0; }\n .order-1-ns { order: 1; }\n .order-2-ns { order: 2; }\n .order-3-ns { order: 3; }\n .order-4-ns { order: 4; }\n .order-5-ns { order: 5; }\n .order-6-ns { order: 6; }\n .order-7-ns { order: 7; }\n .order-8-ns { order: 8; }\n .order-last-ns { order: 99999; }\n\n .flex-grow-0-ns { flex-grow: 0; }\n .flex-grow-1-ns { flex-grow: 1; }\n\n .flex-shrink-0-ns { flex-shrink: 0; }\n .flex-shrink-1-ns { flex-shrink: 1; }\n}\n@media #{$breakpoint-medium} {\n .flex-m { display: flex; }\n .inline-flex-m { display: inline-flex; }\n 
.flex-auto-m {\n flex: 1 1 auto;\n min-width: 0; /* 1 */\n min-height: 0; /* 1 */\n }\n .flex-none-m { flex: none; }\n .flex-column-m { flex-direction: column; }\n .flex-row-m { flex-direction: row; }\n .flex-wrap-m { flex-wrap: wrap; }\n .flex-nowrap-m { flex-wrap: nowrap; }\n .flex-wrap-reverse-m { flex-wrap: wrap-reverse; }\n .flex-column-reverse-m { flex-direction: column-reverse; }\n .flex-row-reverse-m { flex-direction: row-reverse; }\n .items-start-m { align-items: flex-start; }\n .items-end-m { align-items: flex-end; }\n .items-center-m { align-items: center; }\n .items-baseline-m { align-items: baseline; }\n .items-stretch-m { align-items: stretch; }\n\n .self-start-m { align-self: flex-start; }\n .self-end-m { align-self: flex-end; }\n .self-center-m { align-self: center; }\n .self-baseline-m { align-self: baseline; }\n .self-stretch-m { align-self: stretch; }\n\n .justify-start-m { justify-content: flex-start; }\n .justify-end-m { justify-content: flex-end; }\n .justify-center-m { justify-content: center; }\n .justify-between-m { justify-content: space-between; }\n .justify-around-m { justify-content: space-around; }\n\n .content-start-m { align-content: flex-start; }\n .content-end-m { align-content: flex-end; }\n .content-center-m { align-content: center; }\n .content-between-m { align-content: space-between; }\n .content-around-m { align-content: space-around; }\n .content-stretch-m { align-content: stretch; }\n\n .order-0-m { order: 0; }\n .order-1-m { order: 1; }\n .order-2-m { order: 2; }\n .order-3-m { order: 3; }\n .order-4-m { order: 4; }\n .order-5-m { order: 5; }\n .order-6-m { order: 6; }\n .order-7-m { order: 7; }\n .order-8-m { order: 8; }\n .order-last-m { order: 99999; }\n\n .flex-grow-0-m { flex-grow: 0; }\n .flex-grow-1-m { flex-grow: 1; }\n\n .flex-shrink-0-m { flex-shrink: 0; }\n .flex-shrink-1-m { flex-shrink: 1; }\n}\n\n@media #{$breakpoint-large} {\n .flex-l { display: flex; }\n .inline-flex-l { display: inline-flex; }\n 
.flex-auto-l {\n flex: 1 1 auto;\n min-width: 0; /* 1 */\n min-height: 0; /* 1 */\n }\n .flex-none-l { flex: none; }\n .flex-column-l { flex-direction: column; }\n .flex-row-l { flex-direction: row; }\n .flex-wrap-l { flex-wrap: wrap; }\n .flex-nowrap-l { flex-wrap: nowrap; }\n .flex-wrap-reverse-l { flex-wrap: wrap-reverse; }\n .flex-column-reverse-l { flex-direction: column-reverse; }\n .flex-row-reverse-l { flex-direction: row-reverse; }\n\n .items-start-l { align-items: flex-start; }\n .items-end-l { align-items: flex-end; }\n .items-center-l { align-items: center; }\n .items-baseline-l { align-items: baseline; }\n .items-stretch-l { align-items: stretch; }\n\n .self-start-l { align-self: flex-start; }\n .self-end-l { align-self: flex-end; }\n .self-center-l { align-self: center; }\n .self-baseline-l { align-self: baseline; }\n .self-stretch-l { align-self: stretch; }\n\n .justify-start-l { justify-content: flex-start; }\n .justify-end-l { justify-content: flex-end; }\n .justify-center-l { justify-content: center; }\n .justify-between-l { justify-content: space-between; }\n .justify-around-l { justify-content: space-around; }\n\n .content-start-l { align-content: flex-start; }\n .content-end-l { align-content: flex-end; }\n .content-center-l { align-content: center; }\n .content-between-l { align-content: space-between; }\n .content-around-l { align-content: space-around; }\n .content-stretch-l { align-content: stretch; }\n\n .order-0-l { order: 0; }\n .order-1-l { order: 1; }\n .order-2-l { order: 2; }\n .order-3-l { order: 3; }\n .order-4-l { order: 4; }\n .order-5-l { order: 5; }\n .order-6-l { order: 6; }\n .order-7-l { order: 7; }\n .order-8-l { order: 8; }\n .order-last-l { order: 99999; }\n\n .flex-grow-0-l { flex-grow: 0; }\n .flex-grow-1-l { flex-grow: 1; }\n\n .flex-shrink-0-l { flex-shrink: 0; }\n .flex-shrink-1-l { flex-shrink: 1; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n DISPLAY\n Docs: 
http://tachyons.io/docs/layout/display\n\n Base:\n d = display\n\n Modifiers:\n n = none\n b = block\n ib = inline-block\n it = inline-table\n t = table\n tc = table-cell\n tr = table-row\n tcol = table-column\n tcolg = table-column-group\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.dn { display: none; }\n.di { display: inline; }\n.db { display: block; }\n.dib { display: inline-block; }\n.dit { display: inline-table; }\n.dt { display: table; }\n.dtc { display: table-cell; }\n.dt-row { display: table-row; }\n.dt-row-group { display: table-row-group; }\n.dt-column { display: table-column; }\n.dt-column-group { display: table-column-group; }\n\n/*\n This will set table to full width and then\n all cells will be equal width\n*/\n.dt--fixed {\n table-layout: fixed;\n width: 100%;\n}\n\n@media #{$breakpoint-not-small} {\n .dn-ns { display: none; }\n .di-ns { display: inline; }\n .db-ns { display: block; }\n .dib-ns { display: inline-block; }\n .dit-ns { display: inline-table; }\n .dt-ns { display: table; }\n .dtc-ns { display: table-cell; }\n .dt-row-ns { display: table-row; }\n .dt-row-group-ns { display: table-row-group; }\n .dt-column-ns { display: table-column; }\n .dt-column-group-ns { display: table-column-group; }\n\n .dt--fixed-ns {\n table-layout: fixed;\n width: 100%;\n }\n}\n\n@media #{$breakpoint-medium} {\n .dn-m { display: none; }\n .di-m { display: inline; }\n .db-m { display: block; }\n .dib-m { display: inline-block; }\n .dit-m { display: inline-table; }\n .dt-m { display: table; }\n .dtc-m { display: table-cell; }\n .dt-row-m { display: table-row; }\n .dt-row-group-m { display: table-row-group; }\n .dt-column-m { display: table-column; }\n .dt-column-group-m { display: table-column-group; }\n\n .dt--fixed-m {\n table-layout: fixed;\n width: 100%;\n }\n}\n\n@media #{$breakpoint-large} {\n .dn-l { display: none; }\n .di-l { display: inline; }\n .db-l { display: block; }\n .dib-l { display: inline-block; }\n .dit-l { 
display: inline-table; }\n .dt-l { display: table; }\n .dtc-l { display: table-cell; }\n .dt-row-l { display: table-row; }\n .dt-row-group-l { display: table-row-group; }\n .dt-column-l { display: table-column; }\n .dt-column-group-l { display: table-column-group; }\n\n .dt--fixed-l {\n table-layout: fixed;\n width: 100%;\n }\n}\n\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n FLOATS\n http://tachyons.io/docs/layout/floats/\n\n 1. Floated elements are automatically rendered as block level elements.\n Setting floats to display inline will fix the double margin bug in\n ie6. You know... just in case.\n\n 2. Don't forget to clearfix your floats with .cf\n\n Base:\n f = float\n\n Modifiers:\n l = left\n r = right\n n = none\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n\n\n.fl { float: left; _display: inline; }\n.fr { float: right; _display: inline; }\n.fn { float: none; }\n\n@media #{$breakpoint-not-small} {\n .fl-ns { float: left; _display: inline; }\n .fr-ns { float: right; _display: inline; }\n .fn-ns { float: none; }\n}\n\n@media #{$breakpoint-medium} {\n .fl-m { float: left; _display: inline; }\n .fr-m { float: right; _display: inline; }\n .fn-m { float: none; }\n}\n\n@media #{$breakpoint-large} {\n .fl-l { float: left; _display: inline; }\n .fr-l { float: right; _display: inline; }\n .fn-l { float: none; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n FONT FAMILY GROUPS\n Docs: http://tachyons.io/docs/typography/font-family/\n\n*/\n\n\n.sans-serif {\n font-family: $sans-serif;\n}\n\n.serif {\n font-family: $serif;\n}\n\n.system-sans-serif {\n font-family: sans-serif;\n}\n\n.system-serif {\n font-family: serif;\n}\n\n\n/* Monospaced Typefaces (for code) */\n\n/* From http://cssfontstack.com */\ncode, .code {\n font-family: Consolas,\n monaco,\n monospace;\n}\n\n.courier {\n font-family: 'Courier Next',\n courier,\n monospace;\n}\n\n\n/* Sans-Serif Typefaces 
*/\n\n.helvetica {\n font-family: 'helvetica neue', helvetica,\n sans-serif;\n}\n\n.avenir {\n font-family: 'avenir next', avenir,\n sans-serif;\n}\n\n\n/* Serif Typefaces */\n\n.athelas {\n font-family: athelas,\n georgia,\n serif;\n}\n\n.georgia {\n font-family: georgia,\n serif;\n}\n\n.times {\n font-family: times,\n serif;\n}\n\n.bodoni {\n font-family: \"Bodoni MT\",\n serif;\n}\n\n.calisto {\n font-family: \"Calisto MT\",\n serif;\n}\n\n.garamond {\n font-family: garamond,\n serif;\n}\n\n.baskerville {\n font-family: baskerville,\n serif;\n}\n\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n FONT STYLE\n Docs: http://tachyons.io/docs/typography/font-style/\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.i { font-style: italic; }\n.fs-normal { font-style: normal; }\n\n@media #{$breakpoint-not-small} {\n .i-ns { font-style: italic; }\n .fs-normal-ns { font-style: normal; }\n}\n\n@media #{$breakpoint-medium} {\n .i-m { font-style: italic; }\n .fs-normal-m { font-style: normal; }\n}\n\n@media #{$breakpoint-large} {\n .i-l { font-style: italic; }\n .fs-normal-l { font-style: normal; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n FONT WEIGHT\n Docs: http://tachyons.io/docs/typography/font-weight/\n\n Base\n fw = font-weight\n\n Modifiers:\n 1 = literal value 100\n 2 = literal value 200\n 3 = literal value 300\n 4 = literal value 400\n 5 = literal value 500\n 6 = literal value 600\n 7 = literal value 700\n 8 = literal value 800\n 9 = literal value 900\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.normal { font-weight: normal; }\n.b { font-weight: bold; }\n.fw1 { font-weight: 100; }\n.fw2 { font-weight: 200; }\n.fw3 { font-weight: 300; }\n.fw4 { font-weight: 400; }\n.fw5 { font-weight: 500; }\n.fw6 { font-weight: 600; }\n.fw7 { font-weight: 700; }\n.fw8 { font-weight: 800; }\n.fw9 { font-weight: 900; }\n\n\n@media 
#{$breakpoint-not-small} {\n .normal-ns { font-weight: normal; }\n .b-ns { font-weight: bold; }\n .fw1-ns { font-weight: 100; }\n .fw2-ns { font-weight: 200; }\n .fw3-ns { font-weight: 300; }\n .fw4-ns { font-weight: 400; }\n .fw5-ns { font-weight: 500; }\n .fw6-ns { font-weight: 600; }\n .fw7-ns { font-weight: 700; }\n .fw8-ns { font-weight: 800; }\n .fw9-ns { font-weight: 900; }\n}\n\n@media #{$breakpoint-medium} {\n .normal-m { font-weight: normal; }\n .b-m { font-weight: bold; }\n .fw1-m { font-weight: 100; }\n .fw2-m { font-weight: 200; }\n .fw3-m { font-weight: 300; }\n .fw4-m { font-weight: 400; }\n .fw5-m { font-weight: 500; }\n .fw6-m { font-weight: 600; }\n .fw7-m { font-weight: 700; }\n .fw8-m { font-weight: 800; }\n .fw9-m { font-weight: 900; }\n}\n\n@media #{$breakpoint-large} {\n .normal-l { font-weight: normal; }\n .b-l { font-weight: bold; }\n .fw1-l { font-weight: 100; }\n .fw2-l { font-weight: 200; }\n .fw3-l { font-weight: 300; }\n .fw4-l { font-weight: 400; }\n .fw5-l { font-weight: 500; }\n .fw6-l { font-weight: 600; }\n .fw7-l { font-weight: 700; }\n .fw8-l { font-weight: 800; }\n .fw9-l { font-weight: 900; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n FORMS\n \n*/\n\n.input-reset {\n -webkit-appearance: none;\n -moz-appearance: none;\n}\n\n.button-reset::-moz-focus-inner,\n.input-reset::-moz-focus-inner {\n border: 0;\n padding: 0;\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n HEIGHTS\n Docs: http://tachyons.io/docs/layout/heights/\n\n Base:\n h = height\n min-h = min-height\n min-vh = min-height vertical screen height\n vh = vertical screen height\n\n Modifiers\n 1 = 1st step in height scale\n 2 = 2nd step in height scale\n 3 = 3rd step in height scale\n 4 = 4th step in height scale\n 5 = 5th step in height scale\n\n -25 = literal value 25%\n -50 = literal value 50%\n -75 = literal value 75%\n -100 = literal value 100%\n\n -auto = string value of auto\n -inherit = string 
value of inherit\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n/* Height Scale */\n\n.h1 { height: $height-1; }\n.h2 { height: $height-2; }\n.h3 { height: $height-3; }\n.h4 { height: $height-4; }\n.h5 { height: $height-5; }\n\n/* Height Percentages - Based off of height of parent */\n\n.h-25 { height: 25%; }\n.h-50 { height: 50%; }\n.h-75 { height: 75%; }\n.h-100 { height: 100%; }\n\n.min-h-100 { min-height: 100%; }\n\n/* Screen Height Percentage */\n\n.vh-25 { height: 25vh; }\n.vh-50 { height: 50vh; }\n.vh-75 { height: 75vh; }\n.vh-100 { height: 100vh; }\n\n.min-vh-100 { min-height: 100vh; }\n\n\n/* String Properties */\n\n.h-auto { height: auto; }\n.h-inherit { height: inherit; }\n\n@media #{$breakpoint-not-small} {\n .h1-ns { height: $height-1; }\n .h2-ns { height: $height-2; }\n .h3-ns { height: $height-3; }\n .h4-ns { height: $height-4; }\n .h5-ns { height: $height-5; }\n .h-25-ns { height: 25%; }\n .h-50-ns { height: 50%; }\n .h-75-ns { height: 75%; }\n .h-100-ns { height: 100%; }\n .min-h-100-ns { min-height: 100%; }\n .vh-25-ns { height: 25vh; }\n .vh-50-ns { height: 50vh; }\n .vh-75-ns { height: 75vh; }\n .vh-100-ns { height: 100vh; }\n .min-vh-100-ns { min-height: 100vh; }\n .h-auto-ns { height: auto; }\n .h-inherit-ns { height: inherit; }\n}\n\n@media #{$breakpoint-medium} {\n .h1-m { height: $height-1; }\n .h2-m { height: $height-2; }\n .h3-m { height: $height-3; }\n .h4-m { height: $height-4; }\n .h5-m { height: $height-5; }\n .h-25-m { height: 25%; }\n .h-50-m { height: 50%; }\n .h-75-m { height: 75%; }\n .h-100-m { height: 100%; }\n .min-h-100-m { min-height: 100%; }\n .vh-25-m { height: 25vh; }\n .vh-50-m { height: 50vh; }\n .vh-75-m { height: 75vh; }\n .vh-100-m { height: 100vh; }\n .min-vh-100-m { min-height: 100vh; }\n .h-auto-m { height: auto; }\n .h-inherit-m { height: inherit; }\n}\n\n@media #{$breakpoint-large} {\n .h1-l { height: $height-1; }\n .h2-l { height: $height-2; }\n .h3-l { height: $height-3; }\n 
.h4-l { height: $height-4; }\n .h5-l { height: $height-5; }\n .h-25-l { height: 25%; }\n .h-50-l { height: 50%; }\n .h-75-l { height: 75%; }\n .h-100-l { height: 100%; }\n .min-h-100-l { min-height: 100%; }\n .vh-25-l { height: 25vh; }\n .vh-50-l { height: 50vh; }\n .vh-75-l { height: 75vh; }\n .vh-100-l { height: 100vh; }\n .min-vh-100-l { min-height: 100vh; }\n .h-auto-l { height: auto; }\n .h-inherit-l { height: inherit; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n LETTER SPACING\n Docs: http://tachyons.io/docs/typography/tracking/\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.tracked { letter-spacing: $letter-spacing-1; }\n.tracked-tight { letter-spacing: $letter-spacing-tight; }\n.tracked-mega { letter-spacing: $letter-spacing-2; }\n\n@media #{$breakpoint-not-small} {\n .tracked-ns { letter-spacing: $letter-spacing-1; }\n .tracked-tight-ns { letter-spacing: $letter-spacing-tight; }\n .tracked-mega-ns { letter-spacing: $letter-spacing-2; }\n}\n\n@media #{$breakpoint-medium} {\n .tracked-m { letter-spacing: $letter-spacing-1; }\n .tracked-tight-m { letter-spacing: $letter-spacing-tight; }\n .tracked-mega-m { letter-spacing: $letter-spacing-2; }\n}\n\n@media #{$breakpoint-large} {\n .tracked-l { letter-spacing: $letter-spacing-1; }\n .tracked-tight-l { letter-spacing: $letter-spacing-tight; }\n .tracked-mega-l { letter-spacing: $letter-spacing-2; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n LINE HEIGHT / LEADING\n Docs: http://tachyons.io/docs/typography/line-height\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n .lh-solid { line-height: $line-height-solid; }\n .lh-title { line-height: $line-height-title; }\n .lh-copy { line-height: $line-height-copy; }\n\n@media #{$breakpoint-not-small} {\n .lh-solid-ns { line-height: $line-height-solid; }\n .lh-title-ns { line-height: $line-height-title; }\n .lh-copy-ns { 
line-height: $line-height-copy; }\n}\n\n@media #{$breakpoint-medium} {\n .lh-solid-m { line-height: $line-height-solid; }\n .lh-title-m { line-height: $line-height-title; }\n .lh-copy-m { line-height: $line-height-copy; }\n}\n\n@media #{$breakpoint-large} {\n .lh-solid-l { line-height: $line-height-solid; }\n .lh-title-l { line-height: $line-height-title; }\n .lh-copy-l { line-height: $line-height-copy; }\n}\n\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n LINKS\n Docs: http://tachyons.io/docs/elements/links/\n\n*/\n\n.link {\n text-decoration: none;\n transition: color .15s ease-in;\n}\n\n.link:link,\n.link:visited {\n transition: color .15s ease-in;\n}\n.link:hover {\n transition: color .15s ease-in;\n}\n.link:active {\n transition: color .15s ease-in;\n}\n.link:focus {\n transition: color .15s ease-in;\n outline: 1px dotted currentColor;\n}\n\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n LISTS\n http://tachyons.io/docs/elements/lists/\n\n*/\n\n.list { list-style-type: none; }\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n MAX WIDTHS\n Docs: http://tachyons.io/docs/layout/max-widths/\n\n Base:\n mw = max-width\n\n Modifiers\n 1 = 1st step in width scale\n 2 = 2nd step in width scale\n 3 = 3rd step in width scale\n 4 = 4th step in width scale\n 5 = 5th step in width scale\n 6 = 6th step in width scale\n 7 = 7th step in width scale\n 8 = 8th step in width scale\n 9 = 9th step in width scale\n\n -100 = literal value 100%\n\n -none = string value none\n\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n/* Max Width Percentages */\n\n.mw-100 { max-width: 100%; }\n\n/* Max Width Scale */\n\n.mw1 { max-width: $max-width-1; }\n.mw2 { max-width: $max-width-2; }\n.mw3 { max-width: $max-width-3; }\n.mw4 { max-width: $max-width-4; }\n.mw5 { max-width: $max-width-5; }\n.mw6 { max-width: $max-width-6; }\n.mw7 { max-width: $max-width-7; }\n.mw8 { 
max-width: $max-width-8; }\n.mw9 { max-width: $max-width-9; }\n\n/* Max Width String Properties */\n\n.mw-none { max-width: none; }\n\n@media #{$breakpoint-not-small} {\n .mw-100-ns { max-width: 100%; }\n\n .mw1-ns { max-width: $max-width-1; }\n .mw2-ns { max-width: $max-width-2; }\n .mw3-ns { max-width: $max-width-3; }\n .mw4-ns { max-width: $max-width-4; }\n .mw5-ns { max-width: $max-width-5; }\n .mw6-ns { max-width: $max-width-6; }\n .mw7-ns { max-width: $max-width-7; }\n .mw8-ns { max-width: $max-width-8; }\n .mw9-ns { max-width: $max-width-9; }\n\n .mw-none-ns { max-width: none; }\n}\n\n@media #{$breakpoint-medium} {\n .mw-100-m { max-width: 100%; }\n\n .mw1-m { max-width: $max-width-1; }\n .mw2-m { max-width: $max-width-2; }\n .mw3-m { max-width: $max-width-3; }\n .mw4-m { max-width: $max-width-4; }\n .mw5-m { max-width: $max-width-5; }\n .mw6-m { max-width: $max-width-6; }\n .mw7-m { max-width: $max-width-7; }\n .mw8-m { max-width: $max-width-8; }\n .mw9-m { max-width: $max-width-9; }\n\n .mw-none-m { max-width: none; }\n}\n\n@media #{$breakpoint-large} {\n .mw-100-l { max-width: 100%; }\n\n .mw1-l { max-width: $max-width-1; }\n .mw2-l { max-width: $max-width-2; }\n .mw3-l { max-width: $max-width-3; }\n .mw4-l { max-width: $max-width-4; }\n .mw5-l { max-width: $max-width-5; }\n .mw6-l { max-width: $max-width-6; }\n .mw7-l { max-width: $max-width-7; }\n .mw8-l { max-width: $max-width-8; }\n .mw9-l { max-width: $max-width-9; }\n\n .mw-none-l { max-width: none; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n WIDTHS\n Docs: http://tachyons.io/docs/layout/widths/\n\n Base:\n w = width\n\n Modifiers\n 1 = 1st step in width scale\n 2 = 2nd step in width scale\n 3 = 3rd step in width scale\n 4 = 4th step in width scale\n 5 = 5th step in width scale\n\n -10 = literal value 10%\n -20 = literal value 20%\n -25 = literal value 25%\n -30 = literal value 30%\n -33 = literal value 33%\n -34 = literal value 34%\n -40 = literal value 40%\n 
-50 = literal value 50%\n -60 = literal value 60%\n -70 = literal value 70%\n -75 = literal value 75%\n -80 = literal value 80%\n -90 = literal value 90%\n -100 = literal value 100%\n\n -third = 100% / 3 (Not supported in opera mini or IE8)\n -two-thirds = 100% / 1.5 (Not supported in opera mini or IE8)\n -auto = string value auto\n\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n */\n\n/* Width Scale */\n\n.w1 { width: $width-1; }\n.w2 { width: $width-2; }\n.w3 { width: $width-3; }\n.w4 { width: $width-4; }\n.w5 { width: $width-5; }\n\n.w-10 { width: 10%; }\n.w-20 { width: 20%; }\n.w-25 { width: 25%; }\n.w-30 { width: 30%; }\n.w-33 { width: 33%; }\n.w-34 { width: 34%; }\n.w-40 { width: 40%; }\n.w-50 { width: 50%; }\n.w-60 { width: 60%; }\n.w-70 { width: 70%; }\n.w-75 { width: 75%; }\n.w-80 { width: 80%; }\n.w-90 { width: 90%; }\n.w-100 { width: 100%; }\n\n.w-third { width: (100% / 3); }\n.w-two-thirds { width: (100% / 1.5); }\n.w-auto { width: auto; }\n\n@media #{$breakpoint-not-small} {\n .w1-ns { width: $width-1; }\n .w2-ns { width: $width-2; }\n .w3-ns { width: $width-3; }\n .w4-ns { width: $width-4; }\n .w5-ns { width: $width-5; }\n .w-10-ns { width: 10%; }\n .w-20-ns { width: 20%; }\n .w-25-ns { width: 25%; }\n .w-30-ns { width: 30%; }\n .w-33-ns { width: 33%; }\n .w-34-ns { width: 34%; }\n .w-40-ns { width: 40%; }\n .w-50-ns { width: 50%; }\n .w-60-ns { width: 60%; }\n .w-70-ns { width: 70%; }\n .w-75-ns { width: 75%; }\n .w-80-ns { width: 80%; }\n .w-90-ns { width: 90%; }\n .w-100-ns { width: 100%; }\n .w-third-ns { width: (100% / 3); }\n .w-two-thirds-ns { width: (100% / 1.5); }\n .w-auto-ns { width: auto; }\n}\n\n@media #{$breakpoint-medium} {\n .w1-m { width: $width-1; }\n .w2-m { width: $width-2; }\n .w3-m { width: $width-3; }\n .w4-m { width: $width-4; }\n .w5-m { width: $width-5; }\n .w-10-m { width: 10%; }\n .w-20-m { width: 20%; }\n .w-25-m { width: 25%; }\n .w-30-m { width: 30%; }\n .w-33-m { width: 33%; }\n .w-34-m { 
width: 34%; }\n .w-40-m { width: 40%; }\n .w-50-m { width: 50%; }\n .w-60-m { width: 60%; }\n .w-70-m { width: 70%; }\n .w-75-m { width: 75%; }\n .w-80-m { width: 80%; }\n .w-90-m { width: 90%; }\n .w-100-m { width: 100%; }\n .w-third-m { width: (100% / 3); }\n .w-two-thirds-m { width: (100% / 1.5); }\n .w-auto-m { width: auto; }\n}\n\n@media #{$breakpoint-large} {\n .w1-l { width: $width-1; }\n .w2-l { width: $width-2; }\n .w3-l { width: $width-3; }\n .w4-l { width: $width-4; }\n .w5-l { width: $width-5; }\n .w-10-l { width: 10%; }\n .w-20-l { width: 20%; }\n .w-25-l { width: 25%; }\n .w-30-l { width: 30%; }\n .w-33-l { width: 33%; }\n .w-34-l { width: 34%; }\n .w-40-l { width: 40%; }\n .w-50-l { width: 50%; }\n .w-60-l { width: 60%; }\n .w-70-l { width: 70%; }\n .w-75-l { width: 75%; }\n .w-80-l { width: 80%; }\n .w-90-l { width: 90%; }\n .w-100-l { width: 100%; }\n .w-third-l { width: (100% / 3); }\n .w-two-thirds-l { width: (100% / 1.5); }\n .w-auto-l { width: auto; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n OVERFLOW\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n */\n\n.overflow-visible { overflow: visible; }\n.overflow-hidden { overflow: hidden; }\n.overflow-scroll { overflow: scroll; }\n.overflow-auto { overflow: auto; }\n\n.overflow-x-visible { overflow-x: visible; }\n.overflow-x-hidden { overflow-x: hidden; }\n.overflow-x-scroll { overflow-x: scroll; }\n.overflow-x-auto { overflow-x: auto; }\n\n.overflow-y-visible { overflow-y: visible; }\n.overflow-y-hidden { overflow-y: hidden; }\n.overflow-y-scroll { overflow-y: scroll; }\n.overflow-y-auto { overflow-y: auto; }\n\n@media #{$breakpoint-not-small} {\n .overflow-visible-ns { overflow: visible; }\n .overflow-hidden-ns { overflow: hidden; }\n .overflow-scroll-ns { overflow: scroll; }\n .overflow-auto-ns { overflow: auto; }\n .overflow-x-visible-ns { overflow-x: visible; }\n .overflow-x-hidden-ns { overflow-x: hidden; }\n 
.overflow-x-scroll-ns { overflow-x: scroll; }\n .overflow-x-auto-ns { overflow-x: auto; }\n\n .overflow-y-visible-ns { overflow-y: visible; }\n .overflow-y-hidden-ns { overflow-y: hidden; }\n .overflow-y-scroll-ns { overflow-y: scroll; }\n .overflow-y-auto-ns { overflow-y: auto; }\n}\n\n@media #{$breakpoint-medium} {\n .overflow-visible-m { overflow: visible; }\n .overflow-hidden-m { overflow: hidden; }\n .overflow-scroll-m { overflow: scroll; }\n .overflow-auto-m { overflow: auto; }\n\n .overflow-x-visible-m { overflow-x: visible; }\n .overflow-x-hidden-m { overflow-x: hidden; }\n .overflow-x-scroll-m { overflow-x: scroll; }\n .overflow-x-auto-m { overflow-x: auto; }\n\n .overflow-y-visible-m { overflow-y: visible; }\n .overflow-y-hidden-m { overflow-y: hidden; }\n .overflow-y-scroll-m { overflow-y: scroll; }\n .overflow-y-auto-m { overflow-y: auto; }\n}\n\n@media #{$breakpoint-large} {\n .overflow-visible-l { overflow: visible; }\n .overflow-hidden-l { overflow: hidden; }\n .overflow-scroll-l { overflow: scroll; }\n .overflow-auto-l { overflow: auto; }\n\n .overflow-x-visible-l { overflow-x: visible; }\n .overflow-x-hidden-l { overflow-x: hidden; }\n .overflow-x-scroll-l { overflow-x: scroll; }\n .overflow-x-auto-l { overflow-x: auto; }\n\n .overflow-y-visible-l { overflow-y: visible; }\n .overflow-y-hidden-l { overflow-y: hidden; }\n .overflow-y-scroll-l { overflow-y: scroll; }\n .overflow-y-auto-l { overflow-y: auto; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n POSITIONING\n Docs: http://tachyons.io/docs/layout/position/\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.static { position: static; }\n.relative { position: relative; }\n.absolute { position: absolute; }\n.fixed { position: fixed; }\n\n@media #{$breakpoint-not-small} {\n .static-ns { position: static; }\n .relative-ns { position: relative; }\n .absolute-ns { position: absolute; }\n .fixed-ns { position: fixed; }\n}\n\n@media 
#{$breakpoint-medium} {\n .static-m { position: static; }\n .relative-m { position: relative; }\n .absolute-m { position: absolute; }\n .fixed-m { position: fixed; }\n}\n\n@media #{$breakpoint-large} {\n .static-l { position: static; }\n .relative-l { position: relative; }\n .absolute-l { position: absolute; }\n .fixed-l { position: fixed; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n OPACITY\n Docs: http://tachyons.io/docs/themes/opacity/\n\n*/\n\n.o-100 { opacity: 1; }\n.o-90 { opacity: .9; }\n.o-80 { opacity: .8; }\n.o-70 { opacity: .7; }\n.o-60 { opacity: .6; }\n.o-50 { opacity: .5; }\n.o-40 { opacity: .4; }\n.o-30 { opacity: .3; }\n.o-20 { opacity: .2; }\n.o-10 { opacity: .1; }\n.o-05 { opacity: .05; }\n.o-025 { opacity: .025; }\n.o-0 { opacity: 0; }\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n ROTATIONS\n\n*/\n\n.rotate-45 { transform: rotate(45deg); }\n.rotate-90 { transform: rotate(90deg); }\n.rotate-135 { transform: rotate(135deg); }\n.rotate-180 { transform: rotate(180deg); }\n.rotate-225 { transform: rotate(225deg); }\n.rotate-270 { transform: rotate(270deg); }\n.rotate-315 { transform: rotate(315deg); }\n\n@media #{$breakpoint-not-small}{\n .rotate-45-ns { transform: rotate(45deg); }\n .rotate-90-ns { transform: rotate(90deg); }\n .rotate-135-ns { transform: rotate(135deg); }\n .rotate-180-ns { transform: rotate(180deg); }\n .rotate-225-ns { transform: rotate(225deg); }\n .rotate-270-ns { transform: rotate(270deg); }\n .rotate-315-ns { transform: rotate(315deg); }\n}\n\n@media #{$breakpoint-medium}{\n .rotate-45-m { transform: rotate(45deg); }\n .rotate-90-m { transform: rotate(90deg); }\n .rotate-135-m { transform: rotate(135deg); }\n .rotate-180-m { transform: rotate(180deg); }\n .rotate-225-m { transform: rotate(225deg); }\n .rotate-270-m { transform: rotate(270deg); }\n .rotate-315-m { transform: rotate(315deg); }\n}\n\n@media #{$breakpoint-large}{\n .rotate-45-l { transform: 
rotate(45deg); }\n .rotate-90-l { transform: rotate(90deg); }\n .rotate-135-l { transform: rotate(135deg); }\n .rotate-180-l { transform: rotate(180deg); }\n .rotate-225-l { transform: rotate(225deg); }\n .rotate-270-l { transform: rotate(270deg); }\n .rotate-315-l { transform: rotate(315deg); }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n SKINS\n Docs: http://tachyons.io/docs/themes/skins/\n\n Classes for setting foreground and background colors on elements.\n If you haven't declared a border color, but set border on an element, it will\n be set to the current text color.\n\n*/\n\n/* Text colors */\n\n.black-90 { color: $black-90; }\n.black-80 { color: $black-80; }\n.black-70 { color: $black-70; }\n.black-60 { color: $black-60; }\n.black-50 { color: $black-50; }\n.black-40 { color: $black-40; }\n.black-30 { color: $black-30; }\n.black-20 { color: $black-20; }\n.black-10 { color: $black-10; }\n.black-05 { color: $black-05; }\n\n.white-90 { color: $white-90; }\n.white-80 { color: $white-80; }\n.white-70 { color: $white-70; }\n.white-60 { color: $white-60; }\n.white-50 { color: $white-50; }\n.white-40 { color: $white-40; }\n.white-30 { color: $white-30; }\n.white-20 { color: $white-20; }\n.white-10 { color: $white-10; }\n\n.black { color: $black; }\n.near-black { color: $near-black; }\n.dark-gray { color: $dark-gray; }\n.mid-gray { color: $mid-gray; }\n.gray { color: $gray; }\n.silver { color: $silver; }\n.light-silver { color: $light-silver; }\n.moon-gray { color: $moon-gray; }\n.light-gray { color: $light-gray; }\n.near-white { color: $near-white; }\n.white { color: $white; }\n\n.dark-red { color: $dark-red; }\n.red { color: $red; }\n.light-red { color: $light-red; }\n.orange { color: $orange; }\n.gold { color: $gold; }\n.yellow { color: $yellow; }\n.light-yellow { color: $light-yellow; }\n.purple { color: $purple; }\n.light-purple { color: $light-purple; }\n.dark-pink { color: $dark-pink; }\n.hot-pink { color: $hot-pink; 
}\n.pink { color: $pink; }\n.light-pink { color: $light-pink; }\n.dark-green { color: $dark-green; }\n.green { color: $green; }\n.light-green { color: $light-green; }\n.navy { color: $navy; }\n.dark-blue { color: $dark-blue; }\n.blue { color: $blue; }\n.light-blue { color: $light-blue; }\n.lightest-blue { color: $lightest-blue; }\n.washed-blue { color: $washed-blue; }\n.washed-green { color: $washed-green; }\n.washed-yellow { color: $washed-yellow; }\n.washed-red { color: $washed-red; }\n.color-inherit { color: inherit; }\n\n.bg-black-90 { background-color: $black-90; }\n.bg-black-80 { background-color: $black-80; }\n.bg-black-70 { background-color: $black-70; }\n.bg-black-60 { background-color: $black-60; }\n.bg-black-50 { background-color: $black-50; }\n.bg-black-40 { background-color: $black-40; }\n.bg-black-30 { background-color: $black-30; }\n.bg-black-20 { background-color: $black-20; }\n.bg-black-10 { background-color: $black-10; }\n.bg-black-05 { background-color: $black-05; }\n.bg-white-90 { background-color: $white-90; }\n.bg-white-80 { background-color: $white-80; }\n.bg-white-70 { background-color: $white-70; }\n.bg-white-60 { background-color: $white-60; }\n.bg-white-50 { background-color: $white-50; }\n.bg-white-40 { background-color: $white-40; }\n.bg-white-30 { background-color: $white-30; }\n.bg-white-20 { background-color: $white-20; }\n.bg-white-10 { background-color: $white-10; }\n\n\n\n/* Background colors */\n\n.bg-black { background-color: $black; }\n.bg-near-black { background-color: $near-black; }\n.bg-dark-gray { background-color: $dark-gray; }\n.bg-mid-gray { background-color: $mid-gray; }\n.bg-gray { background-color: $gray; }\n.bg-silver { background-color: $silver; }\n.bg-light-silver { background-color: $light-silver; }\n.bg-moon-gray { background-color: $moon-gray; }\n.bg-light-gray { background-color: $light-gray; }\n.bg-near-white { background-color: $near-white; }\n.bg-white { background-color: $white; }\n.bg-transparent { 
background-color: $transparent; }\n\n.bg-dark-red { background-color: $dark-red; }\n.bg-red { background-color: $red; }\n.bg-light-red { background-color: $light-red; }\n.bg-orange { background-color: $orange; }\n.bg-gold { background-color: $gold; }\n.bg-yellow { background-color: $yellow; }\n.bg-light-yellow { background-color: $light-yellow; }\n.bg-purple { background-color: $purple; }\n.bg-light-purple { background-color: $light-purple; }\n.bg-dark-pink { background-color: $dark-pink; }\n.bg-hot-pink { background-color: $hot-pink; }\n.bg-pink { background-color: $pink; }\n.bg-light-pink { background-color: $light-pink; }\n.bg-dark-green { background-color: $dark-green; }\n.bg-green { background-color: $green; }\n.bg-light-green { background-color: $light-green; }\n.bg-navy { background-color: $navy; }\n.bg-dark-blue { background-color: $dark-blue; }\n.bg-blue { background-color: $blue; }\n.bg-light-blue { background-color: $light-blue; }\n.bg-lightest-blue { background-color: $lightest-blue; }\n.bg-washed-blue { background-color: $washed-blue; }\n.bg-washed-green { background-color: $washed-green; }\n.bg-washed-yellow { background-color: $washed-yellow; }\n.bg-washed-red { background-color: $washed-red; }\n.bg-inherit { background-color: inherit; }\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n SKINS:PSEUDO\n\n Customize the color of an element when\n it is focused or hovered over.\n\n */\n\n.hover-black:hover,\n.hover-black:focus { color: $black; }\n.hover-near-black:hover,\n.hover-near-black:focus { color: $near-black; }\n.hover-dark-gray:hover,\n.hover-dark-gray:focus { color: $dark-gray; }\n.hover-mid-gray:hover,\n.hover-mid-gray:focus { color: $mid-gray; }\n.hover-gray:hover,\n.hover-gray:focus { color: $gray; }\n.hover-silver:hover,\n.hover-silver:focus { color: $silver; }\n.hover-light-silver:hover,\n.hover-light-silver:focus { color: $light-silver; }\n.hover-moon-gray:hover,\n.hover-moon-gray:focus { color: $moon-gray; 
}\n.hover-light-gray:hover,\n.hover-light-gray:focus { color: $light-gray; }\n.hover-near-white:hover,\n.hover-near-white:focus { color: $near-white; }\n.hover-white:hover,\n.hover-white:focus { color: $white; }\n\n.hover-black-90:hover,\n.hover-black-90:focus { color: $black-90; }\n.hover-black-80:hover,\n.hover-black-80:focus { color: $black-80; }\n.hover-black-70:hover,\n.hover-black-70:focus { color: $black-70; }\n.hover-black-60:hover,\n.hover-black-60:focus { color: $black-60; }\n.hover-black-50:hover,\n.hover-black-50:focus { color: $black-50; }\n.hover-black-40:hover,\n.hover-black-40:focus { color: $black-40; }\n.hover-black-30:hover,\n.hover-black-30:focus { color: $black-30; }\n.hover-black-20:hover,\n.hover-black-20:focus { color: $black-20; }\n.hover-black-10:hover,\n.hover-black-10:focus { color: $black-10; }\n.hover-white-90:hover,\n.hover-white-90:focus { color: $white-90; }\n.hover-white-80:hover,\n.hover-white-80:focus { color: $white-80; }\n.hover-white-70:hover,\n.hover-white-70:focus { color: $white-70; }\n.hover-white-60:hover,\n.hover-white-60:focus { color: $white-60; }\n.hover-white-50:hover,\n.hover-white-50:focus { color: $white-50; }\n.hover-white-40:hover,\n.hover-white-40:focus { color: $white-40; }\n.hover-white-30:hover,\n.hover-white-30:focus { color: $white-30; }\n.hover-white-20:hover,\n.hover-white-20:focus { color: $white-20; }\n.hover-white-10:hover,\n.hover-white-10:focus { color: $white-10; }\n.hover-inherit:hover,\n.hover-inherit:focus { color: inherit; }\n\n.hover-bg-black:hover,\n.hover-bg-black:focus { background-color: $black; }\n.hover-bg-near-black:hover,\n.hover-bg-near-black:focus { background-color: $near-black; }\n.hover-bg-dark-gray:hover,\n.hover-bg-dark-gray:focus { background-color: $dark-gray; }\n.hover-bg-mid-gray:hover,\n.hover-bg-mid-gray:focus { background-color: $mid-gray; }\n.hover-bg-gray:hover,\n.hover-bg-gray:focus { background-color: $gray; }\n.hover-bg-silver:hover,\n.hover-bg-silver:focus { 
background-color: $silver; }\n.hover-bg-light-silver:hover,\n.hover-bg-light-silver:focus { background-color: $light-silver; }\n.hover-bg-moon-gray:hover,\n.hover-bg-moon-gray:focus { background-color: $moon-gray; }\n.hover-bg-light-gray:hover,\n.hover-bg-light-gray:focus { background-color: $light-gray; }\n.hover-bg-near-white:hover,\n.hover-bg-near-white:focus { background-color: $near-white; }\n.hover-bg-white:hover,\n.hover-bg-white:focus { background-color: $white; }\n.hover-bg-transparent:hover,\n.hover-bg-transparent:focus { background-color: $transparent; }\n\n.hover-bg-black-90:hover,\n.hover-bg-black-90:focus { background-color: $black-90; }\n.hover-bg-black-80:hover,\n.hover-bg-black-80:focus { background-color: $black-80; }\n.hover-bg-black-70:hover,\n.hover-bg-black-70:focus { background-color: $black-70; }\n.hover-bg-black-60:hover,\n.hover-bg-black-60:focus { background-color: $black-60; }\n.hover-bg-black-50:hover,\n.hover-bg-black-50:focus { background-color: $black-50; }\n.hover-bg-black-40:hover,\n.hover-bg-black-40:focus { background-color: $black-40; }\n.hover-bg-black-30:hover,\n.hover-bg-black-30:focus { background-color: $black-30; }\n.hover-bg-black-20:hover,\n.hover-bg-black-20:focus { background-color: $black-20; }\n.hover-bg-black-10:hover,\n.hover-bg-black-10:focus { background-color: $black-10; }\n.hover-bg-white-90:hover,\n.hover-bg-white-90:focus { background-color: $white-90; }\n.hover-bg-white-80:hover,\n.hover-bg-white-80:focus { background-color: $white-80; }\n.hover-bg-white-70:hover,\n.hover-bg-white-70:focus { background-color: $white-70; }\n.hover-bg-white-60:hover,\n.hover-bg-white-60:focus { background-color: $white-60; }\n.hover-bg-white-50:hover,\n.hover-bg-white-50:focus { background-color: $white-50; }\n.hover-bg-white-40:hover,\n.hover-bg-white-40:focus { background-color: $white-40; }\n.hover-bg-white-30:hover,\n.hover-bg-white-30:focus { background-color: $white-30; 
}\n.hover-bg-white-20:hover,\n.hover-bg-white-20:focus { background-color: $white-20; }\n.hover-bg-white-10:hover,\n.hover-bg-white-10:focus { background-color: $white-10; }\n\n.hover-dark-red:hover,\n.hover-dark-red:focus { color: $dark-red; }\n.hover-red:hover,\n.hover-red:focus { color: $red; }\n.hover-light-red:hover,\n.hover-light-red:focus { color: $light-red; }\n.hover-orange:hover,\n.hover-orange:focus { color: $orange; }\n.hover-gold:hover,\n.hover-gold:focus { color: $gold; }\n.hover-yellow:hover,\n.hover-yellow:focus { color: $yellow; }\n.hover-light-yellow:hover,\n.hover-light-yellow:focus { color: $light-yellow; }\n.hover-purple:hover,\n.hover-purple:focus { color: $purple; }\n.hover-light-purple:hover,\n.hover-light-purple:focus { color: $light-purple; }\n.hover-dark-pink:hover,\n.hover-dark-pink:focus { color: $dark-pink; }\n.hover-hot-pink:hover,\n.hover-hot-pink:focus { color: $hot-pink; }\n.hover-pink:hover,\n.hover-pink:focus { color: $pink; }\n.hover-light-pink:hover,\n.hover-light-pink:focus { color: $light-pink; }\n.hover-dark-green:hover,\n.hover-dark-green:focus { color: $dark-green; }\n.hover-green:hover,\n.hover-green:focus { color: $green; }\n.hover-light-green:hover,\n.hover-light-green:focus { color: $light-green; }\n.hover-navy:hover,\n.hover-navy:focus { color: $navy; }\n.hover-dark-blue:hover,\n.hover-dark-blue:focus { color: $dark-blue; }\n.hover-blue:hover,\n.hover-blue:focus { color: $blue; }\n.hover-light-blue:hover,\n.hover-light-blue:focus { color: $light-blue; }\n.hover-lightest-blue:hover,\n.hover-lightest-blue:focus { color: $lightest-blue; }\n.hover-washed-blue:hover,\n.hover-washed-blue:focus { color: $washed-blue; }\n.hover-washed-green:hover,\n.hover-washed-green:focus { color: $washed-green; }\n.hover-washed-yellow:hover,\n.hover-washed-yellow:focus { color: $washed-yellow; }\n.hover-washed-red:hover,\n.hover-washed-red:focus { color: $washed-red; }\n\n.hover-bg-dark-red:hover,\n.hover-bg-dark-red:focus { 
background-color: $dark-red; }\n.hover-bg-red:hover,\n.hover-bg-red:focus { background-color: $red; }\n.hover-bg-light-red:hover,\n.hover-bg-light-red:focus { background-color: $light-red; }\n.hover-bg-orange:hover,\n.hover-bg-orange:focus { background-color: $orange; }\n.hover-bg-gold:hover,\n.hover-bg-gold:focus { background-color: $gold; }\n.hover-bg-yellow:hover,\n.hover-bg-yellow:focus { background-color: $yellow; }\n.hover-bg-light-yellow:hover,\n.hover-bg-light-yellow:focus { background-color: $light-yellow; }\n.hover-bg-purple:hover,\n.hover-bg-purple:focus { background-color: $purple; }\n.hover-bg-light-purple:hover,\n.hover-bg-light-purple:focus { background-color: $light-purple; }\n.hover-bg-dark-pink:hover,\n.hover-bg-dark-pink:focus { background-color: $dark-pink; }\n.hover-bg-hot-pink:hover,\n.hover-bg-hot-pink:focus { background-color: $hot-pink; }\n.hover-bg-pink:hover,\n.hover-bg-pink:focus { background-color: $pink; }\n.hover-bg-light-pink:hover,\n.hover-bg-light-pink:focus { background-color: $light-pink; }\n.hover-bg-dark-green:hover,\n.hover-bg-dark-green:focus { background-color: $dark-green; }\n.hover-bg-green:hover,\n.hover-bg-green:focus { background-color: $green; }\n.hover-bg-light-green:hover,\n.hover-bg-light-green:focus { background-color: $light-green; }\n.hover-bg-navy:hover,\n.hover-bg-navy:focus { background-color: $navy; }\n.hover-bg-dark-blue:hover,\n.hover-bg-dark-blue:focus { background-color: $dark-blue; }\n.hover-bg-blue:hover,\n.hover-bg-blue:focus { background-color: $blue; }\n.hover-bg-light-blue:hover,\n.hover-bg-light-blue:focus { background-color: $light-blue; }\n.hover-bg-lightest-blue:hover,\n.hover-bg-lightest-blue:focus { background-color: $lightest-blue; }\n.hover-bg-washed-blue:hover,\n.hover-bg-washed-blue:focus { background-color: $washed-blue; }\n.hover-bg-washed-green:hover,\n.hover-bg-washed-green:focus { background-color: $washed-green; }\n.hover-bg-washed-yellow:hover,\n.hover-bg-washed-yellow:focus { 
background-color: $washed-yellow; }\n.hover-bg-washed-red:hover,\n.hover-bg-washed-red:focus { background-color: $washed-red; }\n.hover-bg-inherit:hover,\n.hover-bg-inherit:focus { background-color: inherit; }\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/* Variables */\n\n/*\n SPACING\n Docs: http://tachyons.io/docs/layout/spacing/\n\n An eight step powers of two scale ranging from 0 to 16rem.\n\n Base:\n p = padding\n m = margin\n\n Modifiers:\n a = all\n h = horizontal\n v = vertical\n t = top\n r = right\n b = bottom\n l = left\n\n 0 = none\n 1 = 1st step in spacing scale\n 2 = 2nd step in spacing scale\n 3 = 3rd step in spacing scale\n 4 = 4th step in spacing scale\n 5 = 5th step in spacing scale\n 6 = 6th step in spacing scale\n 7 = 7th step in spacing scale\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n\n.pa0 { padding: $spacing-none; }\n.pa1 { padding: $spacing-extra-small; }\n.pa2 { padding: $spacing-small; }\n.pa3 { padding: $spacing-medium; }\n.pa4 { padding: $spacing-large; }\n.pa5 { padding: $spacing-extra-large; }\n.pa6 { padding: $spacing-extra-extra-large; }\n.pa7 { padding: $spacing-extra-extra-extra-large; }\n\n.pl0 { padding-left: $spacing-none; }\n.pl1 { padding-left: $spacing-extra-small; }\n.pl2 { padding-left: $spacing-small; }\n.pl3 { padding-left: $spacing-medium; }\n.pl4 { padding-left: $spacing-large; }\n.pl5 { padding-left: $spacing-extra-large; }\n.pl6 { padding-left: $spacing-extra-extra-large; }\n.pl7 { padding-left: $spacing-extra-extra-extra-large; }\n\n.pr0 { padding-right: $spacing-none; }\n.pr1 { padding-right: $spacing-extra-small; }\n.pr2 { padding-right: $spacing-small; }\n.pr3 { padding-right: $spacing-medium; }\n.pr4 { padding-right: $spacing-large; }\n.pr5 { padding-right: $spacing-extra-large; }\n.pr6 { padding-right: $spacing-extra-extra-large; }\n.pr7 { padding-right: $spacing-extra-extra-extra-large; }\n\n.pb0 { padding-bottom: $spacing-none; }\n.pb1 { 
padding-bottom: $spacing-extra-small; }\n.pb2 { padding-bottom: $spacing-small; }\n.pb3 { padding-bottom: $spacing-medium; }\n.pb4 { padding-bottom: $spacing-large; }\n.pb5 { padding-bottom: $spacing-extra-large; }\n.pb6 { padding-bottom: $spacing-extra-extra-large; }\n.pb7 { padding-bottom: $spacing-extra-extra-extra-large; }\n\n.pt0 { padding-top: $spacing-none; }\n.pt1 { padding-top: $spacing-extra-small; }\n.pt2 { padding-top: $spacing-small; }\n.pt3 { padding-top: $spacing-medium; }\n.pt4 { padding-top: $spacing-large; }\n.pt5 { padding-top: $spacing-extra-large; }\n.pt6 { padding-top: $spacing-extra-extra-large; }\n.pt7 { padding-top: $spacing-extra-extra-extra-large; }\n\n.pv0 {\n padding-top: $spacing-none;\n padding-bottom: $spacing-none;\n}\n.pv1 {\n padding-top: $spacing-extra-small;\n padding-bottom: $spacing-extra-small;\n}\n.pv2 {\n padding-top: $spacing-small;\n padding-bottom: $spacing-small;\n}\n.pv3 {\n padding-top: $spacing-medium;\n padding-bottom: $spacing-medium;\n}\n.pv4 {\n padding-top: $spacing-large;\n padding-bottom: $spacing-large;\n}\n.pv5 {\n padding-top: $spacing-extra-large;\n padding-bottom: $spacing-extra-large;\n}\n.pv6 {\n padding-top: $spacing-extra-extra-large;\n padding-bottom: $spacing-extra-extra-large;\n}\n\n.pv7 {\n padding-top: $spacing-extra-extra-extra-large;\n padding-bottom: $spacing-extra-extra-extra-large;\n}\n\n.ph0 {\n padding-left: $spacing-none;\n padding-right: $spacing-none;\n}\n\n.ph1 {\n padding-left: $spacing-extra-small;\n padding-right: $spacing-extra-small;\n}\n\n.ph2 {\n padding-left: $spacing-small;\n padding-right: $spacing-small;\n}\n\n.ph3 {\n padding-left: $spacing-medium;\n padding-right: $spacing-medium;\n}\n\n.ph4 {\n padding-left: $spacing-large;\n padding-right: $spacing-large;\n}\n\n.ph5 {\n padding-left: $spacing-extra-large;\n padding-right: $spacing-extra-large;\n}\n\n.ph6 {\n padding-left: $spacing-extra-extra-large;\n padding-right: $spacing-extra-extra-large;\n}\n\n.ph7 {\n 
padding-left: $spacing-extra-extra-extra-large;\n padding-right: $spacing-extra-extra-extra-large;\n}\n\n.ma0 { margin: $spacing-none; }\n.ma1 { margin: $spacing-extra-small; }\n.ma2 { margin: $spacing-small; }\n.ma3 { margin: $spacing-medium; }\n.ma4 { margin: $spacing-large; }\n.ma5 { margin: $spacing-extra-large; }\n.ma6 { margin: $spacing-extra-extra-large; }\n.ma7 { margin: $spacing-extra-extra-extra-large; }\n\n.ml0 { margin-left: $spacing-none; }\n.ml1 { margin-left: $spacing-extra-small; }\n.ml2 { margin-left: $spacing-small; }\n.ml3 { margin-left: $spacing-medium; }\n.ml4 { margin-left: $spacing-large; }\n.ml5 { margin-left: $spacing-extra-large; }\n.ml6 { margin-left: $spacing-extra-extra-large; }\n.ml7 { margin-left: $spacing-extra-extra-extra-large; }\n\n.mr0 { margin-right: $spacing-none; }\n.mr1 { margin-right: $spacing-extra-small; }\n.mr2 { margin-right: $spacing-small; }\n.mr3 { margin-right: $spacing-medium; }\n.mr4 { margin-right: $spacing-large; }\n.mr5 { margin-right: $spacing-extra-large; }\n.mr6 { margin-right: $spacing-extra-extra-large; }\n.mr7 { margin-right: $spacing-extra-extra-extra-large; }\n\n.mb0 { margin-bottom: $spacing-none; }\n.mb1 { margin-bottom: $spacing-extra-small; }\n.mb2 { margin-bottom: $spacing-small; }\n.mb3 { margin-bottom: $spacing-medium; }\n.mb4 { margin-bottom: $spacing-large; }\n.mb5 { margin-bottom: $spacing-extra-large; }\n.mb6 { margin-bottom: $spacing-extra-extra-large; }\n.mb7 { margin-bottom: $spacing-extra-extra-extra-large; }\n\n.mt0 { margin-top: $spacing-none; }\n.mt1 { margin-top: $spacing-extra-small; }\n.mt2 { margin-top: $spacing-small; }\n.mt3 { margin-top: $spacing-medium; }\n.mt4 { margin-top: $spacing-large; }\n.mt5 { margin-top: $spacing-extra-large; }\n.mt6 { margin-top: $spacing-extra-extra-large; }\n.mt7 { margin-top: $spacing-extra-extra-extra-large; }\n\n.mv0 {\n margin-top: $spacing-none;\n margin-bottom: $spacing-none;\n}\n.mv1 {\n margin-top: $spacing-extra-small;\n margin-bottom: 
$spacing-extra-small;\n}\n.mv2 {\n margin-top: $spacing-small;\n margin-bottom: $spacing-small;\n}\n.mv3 {\n margin-top: $spacing-medium;\n margin-bottom: $spacing-medium;\n}\n.mv4 {\n margin-top: $spacing-large;\n margin-bottom: $spacing-large;\n}\n.mv5 {\n margin-top: $spacing-extra-large;\n margin-bottom: $spacing-extra-large;\n}\n.mv6 {\n margin-top: $spacing-extra-extra-large;\n margin-bottom: $spacing-extra-extra-large;\n}\n.mv7 {\n margin-top: $spacing-extra-extra-extra-large;\n margin-bottom: $spacing-extra-extra-extra-large;\n}\n\n.mh0 {\n margin-left: $spacing-none;\n margin-right: $spacing-none;\n}\n.mh1 {\n margin-left: $spacing-extra-small;\n margin-right: $spacing-extra-small;\n}\n.mh2 {\n margin-left: $spacing-small;\n margin-right: $spacing-small;\n}\n.mh3 {\n margin-left: $spacing-medium;\n margin-right: $spacing-medium;\n}\n.mh4 {\n margin-left: $spacing-large;\n margin-right: $spacing-large;\n}\n.mh5 {\n margin-left: $spacing-extra-large;\n margin-right: $spacing-extra-large;\n}\n.mh6 {\n margin-left: $spacing-extra-extra-large;\n margin-right: $spacing-extra-extra-large;\n}\n.mh7 {\n margin-left: $spacing-extra-extra-extra-large;\n margin-right: $spacing-extra-extra-extra-large;\n}\n\n@media #{$breakpoint-not-small} {\n .pa0-ns { padding: $spacing-none; }\n .pa1-ns { padding: $spacing-extra-small; }\n .pa2-ns { padding: $spacing-small; }\n .pa3-ns { padding: $spacing-medium; }\n .pa4-ns { padding: $spacing-large; }\n .pa5-ns { padding: $spacing-extra-large; }\n .pa6-ns { padding: $spacing-extra-extra-large; }\n .pa7-ns { padding: $spacing-extra-extra-extra-large; }\n\n .pl0-ns { padding-left: $spacing-none; }\n .pl1-ns { padding-left: $spacing-extra-small; }\n .pl2-ns { padding-left: $spacing-small; }\n .pl3-ns { padding-left: $spacing-medium; }\n .pl4-ns { padding-left: $spacing-large; }\n .pl5-ns { padding-left: $spacing-extra-large; }\n .pl6-ns { padding-left: $spacing-extra-extra-large; }\n .pl7-ns { padding-left: 
$spacing-extra-extra-extra-large; }\n\n .pr0-ns { padding-right: $spacing-none; }\n .pr1-ns { padding-right: $spacing-extra-small; }\n .pr2-ns { padding-right: $spacing-small; }\n .pr3-ns { padding-right: $spacing-medium; }\n .pr4-ns { padding-right: $spacing-large; }\n .pr5-ns { padding-right: $spacing-extra-large; }\n .pr6-ns { padding-right: $spacing-extra-extra-large; }\n .pr7-ns { padding-right: $spacing-extra-extra-extra-large; }\n\n .pb0-ns { padding-bottom: $spacing-none; }\n .pb1-ns { padding-bottom: $spacing-extra-small; }\n .pb2-ns { padding-bottom: $spacing-small; }\n .pb3-ns { padding-bottom: $spacing-medium; }\n .pb4-ns { padding-bottom: $spacing-large; }\n .pb5-ns { padding-bottom: $spacing-extra-large; }\n .pb6-ns { padding-bottom: $spacing-extra-extra-large; }\n .pb7-ns { padding-bottom: $spacing-extra-extra-extra-large; }\n\n .pt0-ns { padding-top: $spacing-none; }\n .pt1-ns { padding-top: $spacing-extra-small; }\n .pt2-ns { padding-top: $spacing-small; }\n .pt3-ns { padding-top: $spacing-medium; }\n .pt4-ns { padding-top: $spacing-large; }\n .pt5-ns { padding-top: $spacing-extra-large; }\n .pt6-ns { padding-top: $spacing-extra-extra-large; }\n .pt7-ns { padding-top: $spacing-extra-extra-extra-large; }\n\n .pv0-ns {\n padding-top: $spacing-none;\n padding-bottom: $spacing-none;\n }\n .pv1-ns {\n padding-top: $spacing-extra-small;\n padding-bottom: $spacing-extra-small;\n }\n .pv2-ns {\n padding-top: $spacing-small;\n padding-bottom: $spacing-small;\n }\n .pv3-ns {\n padding-top: $spacing-medium;\n padding-bottom: $spacing-medium;\n }\n .pv4-ns {\n padding-top: $spacing-large;\n padding-bottom: $spacing-large;\n }\n .pv5-ns {\n padding-top: $spacing-extra-large;\n padding-bottom: $spacing-extra-large;\n }\n .pv6-ns {\n padding-top: $spacing-extra-extra-large;\n padding-bottom: $spacing-extra-extra-large;\n }\n .pv7-ns {\n padding-top: $spacing-extra-extra-extra-large;\n padding-bottom: $spacing-extra-extra-extra-large;\n }\n .ph0-ns {\n 
padding-left: $spacing-none;\n padding-right: $spacing-none;\n }\n .ph1-ns {\n padding-left: $spacing-extra-small;\n padding-right: $spacing-extra-small;\n }\n .ph2-ns {\n padding-left: $spacing-small;\n padding-right: $spacing-small;\n }\n .ph3-ns {\n padding-left: $spacing-medium;\n padding-right: $spacing-medium;\n }\n .ph4-ns {\n padding-left: $spacing-large;\n padding-right: $spacing-large;\n }\n .ph5-ns {\n padding-left: $spacing-extra-large;\n padding-right: $spacing-extra-large;\n }\n .ph6-ns {\n padding-left: $spacing-extra-extra-large;\n padding-right: $spacing-extra-extra-large;\n }\n .ph7-ns {\n padding-left: $spacing-extra-extra-extra-large;\n padding-right: $spacing-extra-extra-extra-large;\n }\n\n .ma0-ns { margin: $spacing-none; }\n .ma1-ns { margin: $spacing-extra-small; }\n .ma2-ns { margin: $spacing-small; }\n .ma3-ns { margin: $spacing-medium; }\n .ma4-ns { margin: $spacing-large; }\n .ma5-ns { margin: $spacing-extra-large; }\n .ma6-ns { margin: $spacing-extra-extra-large; }\n .ma7-ns { margin: $spacing-extra-extra-extra-large; }\n\n .ml0-ns { margin-left: $spacing-none; }\n .ml1-ns { margin-left: $spacing-extra-small; }\n .ml2-ns { margin-left: $spacing-small; }\n .ml3-ns { margin-left: $spacing-medium; }\n .ml4-ns { margin-left: $spacing-large; }\n .ml5-ns { margin-left: $spacing-extra-large; }\n .ml6-ns { margin-left: $spacing-extra-extra-large; }\n .ml7-ns { margin-left: $spacing-extra-extra-extra-large; }\n\n .mr0-ns { margin-right: $spacing-none; }\n .mr1-ns { margin-right: $spacing-extra-small; }\n .mr2-ns { margin-right: $spacing-small; }\n .mr3-ns { margin-right: $spacing-medium; }\n .mr4-ns { margin-right: $spacing-large; }\n .mr5-ns { margin-right: $spacing-extra-large; }\n .mr6-ns { margin-right: $spacing-extra-extra-large; }\n .mr7-ns { margin-right: $spacing-extra-extra-extra-large; }\n\n .mb0-ns { margin-bottom: $spacing-none; }\n .mb1-ns { margin-bottom: $spacing-extra-small; }\n .mb2-ns { margin-bottom: $spacing-small; }\n 
.mb3-ns { margin-bottom: $spacing-medium; }\n .mb4-ns { margin-bottom: $spacing-large; }\n .mb5-ns { margin-bottom: $spacing-extra-large; }\n .mb6-ns { margin-bottom: $spacing-extra-extra-large; }\n .mb7-ns { margin-bottom: $spacing-extra-extra-extra-large; }\n\n .mt0-ns { margin-top: $spacing-none; }\n .mt1-ns { margin-top: $spacing-extra-small; }\n .mt2-ns { margin-top: $spacing-small; }\n .mt3-ns { margin-top: $spacing-medium; }\n .mt4-ns { margin-top: $spacing-large; }\n .mt5-ns { margin-top: $spacing-extra-large; }\n .mt6-ns { margin-top: $spacing-extra-extra-large; }\n .mt7-ns { margin-top: $spacing-extra-extra-extra-large; }\n\n .mv0-ns {\n margin-top: $spacing-none;\n margin-bottom: $spacing-none;\n }\n .mv1-ns {\n margin-top: $spacing-extra-small;\n margin-bottom: $spacing-extra-small;\n }\n .mv2-ns {\n margin-top: $spacing-small;\n margin-bottom: $spacing-small;\n }\n .mv3-ns {\n margin-top: $spacing-medium;\n margin-bottom: $spacing-medium;\n }\n .mv4-ns {\n margin-top: $spacing-large;\n margin-bottom: $spacing-large;\n }\n .mv5-ns {\n margin-top: $spacing-extra-large;\n margin-bottom: $spacing-extra-large;\n }\n .mv6-ns {\n margin-top: $spacing-extra-extra-large;\n margin-bottom: $spacing-extra-extra-large;\n }\n .mv7-ns {\n margin-top: $spacing-extra-extra-extra-large;\n margin-bottom: $spacing-extra-extra-extra-large;\n }\n\n .mh0-ns {\n margin-left: $spacing-none;\n margin-right: $spacing-none;\n }\n .mh1-ns {\n margin-left: $spacing-extra-small;\n margin-right: $spacing-extra-small;\n }\n .mh2-ns {\n margin-left: $spacing-small;\n margin-right: $spacing-small;\n }\n .mh3-ns {\n margin-left: $spacing-medium;\n margin-right: $spacing-medium;\n }\n .mh4-ns {\n margin-left: $spacing-large;\n margin-right: $spacing-large;\n }\n .mh5-ns {\n margin-left: $spacing-extra-large;\n margin-right: $spacing-extra-large;\n }\n .mh6-ns {\n margin-left: $spacing-extra-extra-large;\n margin-right: $spacing-extra-extra-large;\n }\n .mh7-ns {\n margin-left: 
$spacing-extra-extra-extra-large;\n margin-right: $spacing-extra-extra-extra-large;\n }\n\n}\n\n@media #{$breakpoint-medium} {\n .pa0-m { padding: $spacing-none; }\n .pa1-m { padding: $spacing-extra-small; }\n .pa2-m { padding: $spacing-small; }\n .pa3-m { padding: $spacing-medium; }\n .pa4-m { padding: $spacing-large; }\n .pa5-m { padding: $spacing-extra-large; }\n .pa6-m { padding: $spacing-extra-extra-large; }\n .pa7-m { padding: $spacing-extra-extra-extra-large; }\n\n .pl0-m { padding-left: $spacing-none; }\n .pl1-m { padding-left: $spacing-extra-small; }\n .pl2-m { padding-left: $spacing-small; }\n .pl3-m { padding-left: $spacing-medium; }\n .pl4-m { padding-left: $spacing-large; }\n .pl5-m { padding-left: $spacing-extra-large; }\n .pl6-m { padding-left: $spacing-extra-extra-large; }\n .pl7-m { padding-left: $spacing-extra-extra-extra-large; }\n\n .pr0-m { padding-right: $spacing-none; }\n .pr1-m { padding-right: $spacing-extra-small; }\n .pr2-m { padding-right: $spacing-small; }\n .pr3-m { padding-right: $spacing-medium; }\n .pr4-m { padding-right: $spacing-large; }\n .pr5-m { padding-right: $spacing-extra-large; }\n .pr6-m { padding-right: $spacing-extra-extra-large; }\n .pr7-m { padding-right: $spacing-extra-extra-extra-large; }\n\n .pb0-m { padding-bottom: $spacing-none; }\n .pb1-m { padding-bottom: $spacing-extra-small; }\n .pb2-m { padding-bottom: $spacing-small; }\n .pb3-m { padding-bottom: $spacing-medium; }\n .pb4-m { padding-bottom: $spacing-large; }\n .pb5-m { padding-bottom: $spacing-extra-large; }\n .pb6-m { padding-bottom: $spacing-extra-extra-large; }\n .pb7-m { padding-bottom: $spacing-extra-extra-extra-large; }\n\n .pt0-m { padding-top: $spacing-none; }\n .pt1-m { padding-top: $spacing-extra-small; }\n .pt2-m { padding-top: $spacing-small; }\n .pt3-m { padding-top: $spacing-medium; }\n .pt4-m { padding-top: $spacing-large; }\n .pt5-m { padding-top: $spacing-extra-large; }\n .pt6-m { padding-top: $spacing-extra-extra-large; }\n .pt7-m { 
padding-top: $spacing-extra-extra-extra-large; }\n\n .pv0-m {\n padding-top: $spacing-none;\n padding-bottom: $spacing-none;\n }\n .pv1-m {\n padding-top: $spacing-extra-small;\n padding-bottom: $spacing-extra-small;\n }\n .pv2-m {\n padding-top: $spacing-small;\n padding-bottom: $spacing-small;\n }\n .pv3-m {\n padding-top: $spacing-medium;\n padding-bottom: $spacing-medium;\n }\n .pv4-m {\n padding-top: $spacing-large;\n padding-bottom: $spacing-large;\n }\n .pv5-m {\n padding-top: $spacing-extra-large;\n padding-bottom: $spacing-extra-large;\n }\n .pv6-m {\n padding-top: $spacing-extra-extra-large;\n padding-bottom: $spacing-extra-extra-large;\n }\n .pv7-m {\n padding-top: $spacing-extra-extra-extra-large;\n padding-bottom: $spacing-extra-extra-extra-large;\n }\n\n .ph0-m {\n padding-left: $spacing-none;\n padding-right: $spacing-none;\n }\n .ph1-m {\n padding-left: $spacing-extra-small;\n padding-right: $spacing-extra-small;\n }\n .ph2-m {\n padding-left: $spacing-small;\n padding-right: $spacing-small;\n }\n .ph3-m {\n padding-left: $spacing-medium;\n padding-right: $spacing-medium;\n }\n .ph4-m {\n padding-left: $spacing-large;\n padding-right: $spacing-large;\n }\n .ph5-m {\n padding-left: $spacing-extra-large;\n padding-right: $spacing-extra-large;\n }\n .ph6-m {\n padding-left: $spacing-extra-extra-large;\n padding-right: $spacing-extra-extra-large;\n }\n .ph7-m {\n padding-left: $spacing-extra-extra-extra-large;\n padding-right: $spacing-extra-extra-extra-large;\n }\n\n .ma0-m { margin: $spacing-none; }\n .ma1-m { margin: $spacing-extra-small; }\n .ma2-m { margin: $spacing-small; }\n .ma3-m { margin: $spacing-medium; }\n .ma4-m { margin: $spacing-large; }\n .ma5-m { margin: $spacing-extra-large; }\n .ma6-m { margin: $spacing-extra-extra-large; }\n .ma7-m { margin: $spacing-extra-extra-extra-large; }\n\n .ml0-m { margin-left: $spacing-none; }\n .ml1-m { margin-left: $spacing-extra-small; }\n .ml2-m { margin-left: $spacing-small; }\n .ml3-m { margin-left: 
$spacing-medium; }\n .ml4-m { margin-left: $spacing-large; }\n .ml5-m { margin-left: $spacing-extra-large; }\n .ml6-m { margin-left: $spacing-extra-extra-large; }\n .ml7-m { margin-left: $spacing-extra-extra-extra-large; }\n\n .mr0-m { margin-right: $spacing-none; }\n .mr1-m { margin-right: $spacing-extra-small; }\n .mr2-m { margin-right: $spacing-small; }\n .mr3-m { margin-right: $spacing-medium; }\n .mr4-m { margin-right: $spacing-large; }\n .mr5-m { margin-right: $spacing-extra-large; }\n .mr6-m { margin-right: $spacing-extra-extra-large; }\n .mr7-m { margin-right: $spacing-extra-extra-extra-large; }\n\n .mb0-m { margin-bottom: $spacing-none; }\n .mb1-m { margin-bottom: $spacing-extra-small; }\n .mb2-m { margin-bottom: $spacing-small; }\n .mb3-m { margin-bottom: $spacing-medium; }\n .mb4-m { margin-bottom: $spacing-large; }\n .mb5-m { margin-bottom: $spacing-extra-large; }\n .mb6-m { margin-bottom: $spacing-extra-extra-large; }\n .mb7-m { margin-bottom: $spacing-extra-extra-extra-large; }\n\n .mt0-m { margin-top: $spacing-none; }\n .mt1-m { margin-top: $spacing-extra-small; }\n .mt2-m { margin-top: $spacing-small; }\n .mt3-m { margin-top: $spacing-medium; }\n .mt4-m { margin-top: $spacing-large; }\n .mt5-m { margin-top: $spacing-extra-large; }\n .mt6-m { margin-top: $spacing-extra-extra-large; }\n .mt7-m { margin-top: $spacing-extra-extra-extra-large; }\n\n .mv0-m {\n margin-top: $spacing-none;\n margin-bottom: $spacing-none;\n }\n .mv1-m {\n margin-top: $spacing-extra-small;\n margin-bottom: $spacing-extra-small;\n }\n .mv2-m {\n margin-top: $spacing-small;\n margin-bottom: $spacing-small;\n }\n .mv3-m {\n margin-top: $spacing-medium;\n margin-bottom: $spacing-medium;\n }\n .mv4-m {\n margin-top: $spacing-large;\n margin-bottom: $spacing-large;\n }\n .mv5-m {\n margin-top: $spacing-extra-large;\n margin-bottom: $spacing-extra-large;\n }\n .mv6-m {\n margin-top: $spacing-extra-extra-large;\n margin-bottom: $spacing-extra-extra-large;\n }\n .mv7-m {\n margin-top: 
$spacing-extra-extra-extra-large;\n margin-bottom: $spacing-extra-extra-extra-large;\n }\n\n .mh0-m {\n margin-left: $spacing-none;\n margin-right: $spacing-none;\n }\n .mh1-m {\n margin-left: $spacing-extra-small;\n margin-right: $spacing-extra-small;\n }\n .mh2-m {\n margin-left: $spacing-small;\n margin-right: $spacing-small;\n }\n .mh3-m {\n margin-left: $spacing-medium;\n margin-right: $spacing-medium;\n }\n .mh4-m {\n margin-left: $spacing-large;\n margin-right: $spacing-large;\n }\n .mh5-m {\n margin-left: $spacing-extra-large;\n margin-right: $spacing-extra-large;\n }\n .mh6-m {\n margin-left: $spacing-extra-extra-large;\n margin-right: $spacing-extra-extra-large;\n }\n .mh7-m {\n margin-left: $spacing-extra-extra-extra-large;\n margin-right: $spacing-extra-extra-extra-large;\n }\n\n}\n\n@media #{$breakpoint-large} {\n .pa0-l { padding: $spacing-none; }\n .pa1-l { padding: $spacing-extra-small; }\n .pa2-l { padding: $spacing-small; }\n .pa3-l { padding: $spacing-medium; }\n .pa4-l { padding: $spacing-large; }\n .pa5-l { padding: $spacing-extra-large; }\n .pa6-l { padding: $spacing-extra-extra-large; }\n .pa7-l { padding: $spacing-extra-extra-extra-large; }\n\n .pl0-l { padding-left: $spacing-none; }\n .pl1-l { padding-left: $spacing-extra-small; }\n .pl2-l { padding-left: $spacing-small; }\n .pl3-l { padding-left: $spacing-medium; }\n .pl4-l { padding-left: $spacing-large; }\n .pl5-l { padding-left: $spacing-extra-large; }\n .pl6-l { padding-left: $spacing-extra-extra-large; }\n .pl7-l { padding-left: $spacing-extra-extra-extra-large; }\n\n .pr0-l { padding-right: $spacing-none; }\n .pr1-l { padding-right: $spacing-extra-small; }\n .pr2-l { padding-right: $spacing-small; }\n .pr3-l { padding-right: $spacing-medium; }\n .pr4-l { padding-right: $spacing-large; }\n .pr5-l { padding-right: $spacing-extra-large; }\n .pr6-l { padding-right: $spacing-extra-extra-large; }\n .pr7-l { padding-right: $spacing-extra-extra-extra-large; }\n\n .pb0-l { padding-bottom: 
$spacing-none; }\n .pb1-l { padding-bottom: $spacing-extra-small; }\n .pb2-l { padding-bottom: $spacing-small; }\n .pb3-l { padding-bottom: $spacing-medium; }\n .pb4-l { padding-bottom: $spacing-large; }\n .pb5-l { padding-bottom: $spacing-extra-large; }\n .pb6-l { padding-bottom: $spacing-extra-extra-large; }\n .pb7-l { padding-bottom: $spacing-extra-extra-extra-large; }\n\n .pt0-l { padding-top: $spacing-none; }\n .pt1-l { padding-top: $spacing-extra-small; }\n .pt2-l { padding-top: $spacing-small; }\n .pt3-l { padding-top: $spacing-medium; }\n .pt4-l { padding-top: $spacing-large; }\n .pt5-l { padding-top: $spacing-extra-large; }\n .pt6-l { padding-top: $spacing-extra-extra-large; }\n .pt7-l { padding-top: $spacing-extra-extra-extra-large; }\n\n .pv0-l {\n padding-top: $spacing-none;\n padding-bottom: $spacing-none;\n }\n .pv1-l {\n padding-top: $spacing-extra-small;\n padding-bottom: $spacing-extra-small;\n }\n .pv2-l {\n padding-top: $spacing-small;\n padding-bottom: $spacing-small;\n }\n .pv3-l {\n padding-top: $spacing-medium;\n padding-bottom: $spacing-medium;\n }\n .pv4-l {\n padding-top: $spacing-large;\n padding-bottom: $spacing-large;\n }\n .pv5-l {\n padding-top: $spacing-extra-large;\n padding-bottom: $spacing-extra-large;\n }\n .pv6-l {\n padding-top: $spacing-extra-extra-large;\n padding-bottom: $spacing-extra-extra-large;\n }\n .pv7-l {\n padding-top: $spacing-extra-extra-extra-large;\n padding-bottom: $spacing-extra-extra-extra-large;\n }\n\n .ph0-l {\n padding-left: $spacing-none;\n padding-right: $spacing-none;\n }\n .ph1-l {\n padding-left: $spacing-extra-small;\n padding-right: $spacing-extra-small;\n }\n .ph2-l {\n padding-left: $spacing-small;\n padding-right: $spacing-small;\n }\n .ph3-l {\n padding-left: $spacing-medium;\n padding-right: $spacing-medium;\n }\n .ph4-l {\n padding-left: $spacing-large;\n padding-right: $spacing-large;\n }\n .ph5-l {\n padding-left: $spacing-extra-large;\n padding-right: $spacing-extra-large;\n }\n .ph6-l {\n 
padding-left: $spacing-extra-extra-large;\n padding-right: $spacing-extra-extra-large;\n }\n .ph7-l {\n padding-left: $spacing-extra-extra-extra-large;\n padding-right: $spacing-extra-extra-extra-large;\n }\n\n .ma0-l { margin: $spacing-none; }\n .ma1-l { margin: $spacing-extra-small; }\n .ma2-l { margin: $spacing-small; }\n .ma3-l { margin: $spacing-medium; }\n .ma4-l { margin: $spacing-large; }\n .ma5-l { margin: $spacing-extra-large; }\n .ma6-l { margin: $spacing-extra-extra-large; }\n .ma7-l { margin: $spacing-extra-extra-extra-large; }\n\n .ml0-l { margin-left: $spacing-none; }\n .ml1-l { margin-left: $spacing-extra-small; }\n .ml2-l { margin-left: $spacing-small; }\n .ml3-l { margin-left: $spacing-medium; }\n .ml4-l { margin-left: $spacing-large; }\n .ml5-l { margin-left: $spacing-extra-large; }\n .ml6-l { margin-left: $spacing-extra-extra-large; }\n .ml7-l { margin-left: $spacing-extra-extra-extra-large; }\n\n .mr0-l { margin-right: $spacing-none; }\n .mr1-l { margin-right: $spacing-extra-small; }\n .mr2-l { margin-right: $spacing-small; }\n .mr3-l { margin-right: $spacing-medium; }\n .mr4-l { margin-right: $spacing-large; }\n .mr5-l { margin-right: $spacing-extra-large; }\n .mr6-l { margin-right: $spacing-extra-extra-large; }\n .mr7-l { margin-right: $spacing-extra-extra-extra-large; }\n\n .mb0-l { margin-bottom: $spacing-none; }\n .mb1-l { margin-bottom: $spacing-extra-small; }\n .mb2-l { margin-bottom: $spacing-small; }\n .mb3-l { margin-bottom: $spacing-medium; }\n .mb4-l { margin-bottom: $spacing-large; }\n .mb5-l { margin-bottom: $spacing-extra-large; }\n .mb6-l { margin-bottom: $spacing-extra-extra-large; }\n .mb7-l { margin-bottom: $spacing-extra-extra-extra-large; }\n\n .mt0-l { margin-top: $spacing-none; }\n .mt1-l { margin-top: $spacing-extra-small; }\n .mt2-l { margin-top: $spacing-small; }\n .mt3-l { margin-top: $spacing-medium; }\n .mt4-l { margin-top: $spacing-large; }\n .mt5-l { margin-top: $spacing-extra-large; }\n .mt6-l { margin-top: 
$spacing-extra-extra-large; }\n .mt7-l { margin-top: $spacing-extra-extra-extra-large; }\n\n .mv0-l {\n margin-top: $spacing-none;\n margin-bottom: $spacing-none;\n }\n .mv1-l {\n margin-top: $spacing-extra-small;\n margin-bottom: $spacing-extra-small;\n }\n .mv2-l {\n margin-top: $spacing-small;\n margin-bottom: $spacing-small;\n }\n .mv3-l {\n margin-top: $spacing-medium;\n margin-bottom: $spacing-medium;\n }\n .mv4-l {\n margin-top: $spacing-large;\n margin-bottom: $spacing-large;\n }\n .mv5-l {\n margin-top: $spacing-extra-large;\n margin-bottom: $spacing-extra-large;\n }\n .mv6-l {\n margin-top: $spacing-extra-extra-large;\n margin-bottom: $spacing-extra-extra-large;\n }\n .mv7-l {\n margin-top: $spacing-extra-extra-extra-large;\n margin-bottom: $spacing-extra-extra-extra-large;\n }\n\n .mh0-l {\n margin-left: $spacing-none;\n margin-right: $spacing-none;\n }\n .mh1-l {\n margin-left: $spacing-extra-small;\n margin-right: $spacing-extra-small;\n }\n .mh2-l {\n margin-left: $spacing-small;\n margin-right: $spacing-small;\n }\n .mh3-l {\n margin-left: $spacing-medium;\n margin-right: $spacing-medium;\n }\n .mh4-l {\n margin-left: $spacing-large;\n margin-right: $spacing-large;\n }\n .mh5-l {\n margin-left: $spacing-extra-large;\n margin-right: $spacing-extra-large;\n }\n .mh6-l {\n margin-left: $spacing-extra-extra-large;\n margin-right: $spacing-extra-extra-large;\n }\n .mh7-l {\n margin-left: $spacing-extra-extra-extra-large;\n margin-right: $spacing-extra-extra-extra-large;\n }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n NEGATIVE MARGINS\n\n Base:\n n = negative\n\n Modifiers:\n a = all\n t = top\n r = right\n b = bottom\n l = left\n\n 1 = 1st step in spacing scale\n 2 = 2nd step in spacing scale\n 3 = 3rd step in spacing scale\n 4 = 4th step in spacing scale\n 5 = 5th step in spacing scale\n 6 = 6th step in spacing scale\n 7 = 7th step in spacing scale\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = 
large\n\n*/\n\n\n\n.na1 { margin: -$spacing-extra-small; }\n.na2 { margin: -$spacing-small; }\n.na3 { margin: -$spacing-medium; }\n.na4 { margin: -$spacing-large; }\n.na5 { margin: -$spacing-extra-large; }\n.na6 { margin: -$spacing-extra-extra-large; }\n.na7 { margin: -$spacing-extra-extra-extra-large; }\n\n.nl1 { margin-left: -$spacing-extra-small; }\n.nl2 { margin-left: -$spacing-small; }\n.nl3 { margin-left: -$spacing-medium; }\n.nl4 { margin-left: -$spacing-large; }\n.nl5 { margin-left: -$spacing-extra-large; }\n.nl6 { margin-left: -$spacing-extra-extra-large; }\n.nl7 { margin-left: -$spacing-extra-extra-extra-large; }\n\n.nr1 { margin-right: -$spacing-extra-small; }\n.nr2 { margin-right: -$spacing-small; }\n.nr3 { margin-right: -$spacing-medium; }\n.nr4 { margin-right: -$spacing-large; }\n.nr5 { margin-right: -$spacing-extra-large; }\n.nr6 { margin-right: -$spacing-extra-extra-large; }\n.nr7 { margin-right: -$spacing-extra-extra-extra-large; }\n\n.nb1 { margin-bottom: -$spacing-extra-small; }\n.nb2 { margin-bottom: -$spacing-small; }\n.nb3 { margin-bottom: -$spacing-medium; }\n.nb4 { margin-bottom: -$spacing-large; }\n.nb5 { margin-bottom: -$spacing-extra-large; }\n.nb6 { margin-bottom: -$spacing-extra-extra-large; }\n.nb7 { margin-bottom: -$spacing-extra-extra-extra-large; }\n\n.nt1 { margin-top: -$spacing-extra-small; }\n.nt2 { margin-top: -$spacing-small; }\n.nt3 { margin-top: -$spacing-medium; }\n.nt4 { margin-top: -$spacing-large; }\n.nt5 { margin-top: -$spacing-extra-large; }\n.nt6 { margin-top: -$spacing-extra-extra-large; }\n.nt7 { margin-top: -$spacing-extra-extra-extra-large; }\n\n@media #{$breakpoint-not-small} {\n\n .na1-ns { margin: -$spacing-extra-small; }\n .na2-ns { margin: -$spacing-small; }\n .na3-ns { margin: -$spacing-medium; }\n .na4-ns { margin: -$spacing-large; }\n .na5-ns { margin: -$spacing-extra-large; }\n .na6-ns { margin: -$spacing-extra-extra-large; }\n .na7-ns { margin: -$spacing-extra-extra-extra-large; }\n\n .nl1-ns { 
margin-left: -$spacing-extra-small; }\n .nl2-ns { margin-left: -$spacing-small; }\n .nl3-ns { margin-left: -$spacing-medium; }\n .nl4-ns { margin-left: -$spacing-large; }\n .nl5-ns { margin-left: -$spacing-extra-large; }\n .nl6-ns { margin-left: -$spacing-extra-extra-large; }\n .nl7-ns { margin-left: -$spacing-extra-extra-extra-large; }\n\n .nr1-ns { margin-right: -$spacing-extra-small; }\n .nr2-ns { margin-right: -$spacing-small; }\n .nr3-ns { margin-right: -$spacing-medium; }\n .nr4-ns { margin-right: -$spacing-large; }\n .nr5-ns { margin-right: -$spacing-extra-large; }\n .nr6-ns { margin-right: -$spacing-extra-extra-large; }\n .nr7-ns { margin-right: -$spacing-extra-extra-extra-large; }\n\n .nb1-ns { margin-bottom: -$spacing-extra-small; }\n .nb2-ns { margin-bottom: -$spacing-small; }\n .nb3-ns { margin-bottom: -$spacing-medium; }\n .nb4-ns { margin-bottom: -$spacing-large; }\n .nb5-ns { margin-bottom: -$spacing-extra-large; }\n .nb6-ns { margin-bottom: -$spacing-extra-extra-large; }\n .nb7-ns { margin-bottom: -$spacing-extra-extra-extra-large; }\n\n .nt1-ns { margin-top: -$spacing-extra-small; }\n .nt2-ns { margin-top: -$spacing-small; }\n .nt3-ns { margin-top: -$spacing-medium; }\n .nt4-ns { margin-top: -$spacing-large; }\n .nt5-ns { margin-top: -$spacing-extra-large; }\n .nt6-ns { margin-top: -$spacing-extra-extra-large; }\n .nt7-ns { margin-top: -$spacing-extra-extra-extra-large; }\n\n}\n\n@media #{$breakpoint-medium} {\n .na1-m { margin: -$spacing-extra-small; }\n .na2-m { margin: -$spacing-small; }\n .na3-m { margin: -$spacing-medium; }\n .na4-m { margin: -$spacing-large; }\n .na5-m { margin: -$spacing-extra-large; }\n .na6-m { margin: -$spacing-extra-extra-large; }\n .na7-m { margin: -$spacing-extra-extra-extra-large; }\n\n .nl1-m { margin-left: -$spacing-extra-small; }\n .nl2-m { margin-left: -$spacing-small; }\n .nl3-m { margin-left: -$spacing-medium; }\n .nl4-m { margin-left: -$spacing-large; }\n .nl5-m { margin-left: -$spacing-extra-large; }\n .nl6-m 
{ margin-left: -$spacing-extra-extra-large; }\n .nl7-m { margin-left: -$spacing-extra-extra-extra-large; }\n\n .nr1-m { margin-right: -$spacing-extra-small; }\n .nr2-m { margin-right: -$spacing-small; }\n .nr3-m { margin-right: -$spacing-medium; }\n .nr4-m { margin-right: -$spacing-large; }\n .nr5-m { margin-right: -$spacing-extra-large; }\n .nr6-m { margin-right: -$spacing-extra-extra-large; }\n .nr7-m { margin-right: -$spacing-extra-extra-extra-large; }\n\n .nb1-m { margin-bottom: -$spacing-extra-small; }\n .nb2-m { margin-bottom: -$spacing-small; }\n .nb3-m { margin-bottom: -$spacing-medium; }\n .nb4-m { margin-bottom: -$spacing-large; }\n .nb5-m { margin-bottom: -$spacing-extra-large; }\n .nb6-m { margin-bottom: -$spacing-extra-extra-large; }\n .nb7-m { margin-bottom: -$spacing-extra-extra-extra-large; }\n\n .nt1-m { margin-top: -$spacing-extra-small; }\n .nt2-m { margin-top: -$spacing-small; }\n .nt3-m { margin-top: -$spacing-medium; }\n .nt4-m { margin-top: -$spacing-large; }\n .nt5-m { margin-top: -$spacing-extra-large; }\n .nt6-m { margin-top: -$spacing-extra-extra-large; }\n .nt7-m { margin-top: -$spacing-extra-extra-extra-large; }\n\n}\n\n@media #{$breakpoint-large} {\n .na1-l { margin: -$spacing-extra-small; }\n .na2-l { margin: -$spacing-small; }\n .na3-l { margin: -$spacing-medium; }\n .na4-l { margin: -$spacing-large; }\n .na5-l { margin: -$spacing-extra-large; }\n .na6-l { margin: -$spacing-extra-extra-large; }\n .na7-l { margin: -$spacing-extra-extra-extra-large; }\n\n .nl1-l { margin-left: -$spacing-extra-small; }\n .nl2-l { margin-left: -$spacing-small; }\n .nl3-l { margin-left: -$spacing-medium; }\n .nl4-l { margin-left: -$spacing-large; }\n .nl5-l { margin-left: -$spacing-extra-large; }\n .nl6-l { margin-left: -$spacing-extra-extra-large; }\n .nl7-l { margin-left: -$spacing-extra-extra-extra-large; }\n\n .nr1-l { margin-right: -$spacing-extra-small; }\n .nr2-l { margin-right: -$spacing-small; }\n .nr3-l { margin-right: -$spacing-medium; }\n 
.nr4-l { margin-right: -$spacing-large; }\n .nr5-l { margin-right: -$spacing-extra-large; }\n .nr6-l { margin-right: -$spacing-extra-extra-large; }\n .nr7-l { margin-right: -$spacing-extra-extra-extra-large; }\n\n .nb1-l { margin-bottom: -$spacing-extra-small; }\n .nb2-l { margin-bottom: -$spacing-small; }\n .nb3-l { margin-bottom: -$spacing-medium; }\n .nb4-l { margin-bottom: -$spacing-large; }\n .nb5-l { margin-bottom: -$spacing-extra-large; }\n .nb6-l { margin-bottom: -$spacing-extra-extra-large; }\n .nb7-l { margin-bottom: -$spacing-extra-extra-extra-large; }\n\n .nt1-l { margin-top: -$spacing-extra-small; }\n .nt2-l { margin-top: -$spacing-small; }\n .nt3-l { margin-top: -$spacing-medium; }\n .nt4-l { margin-top: -$spacing-large; }\n .nt5-l { margin-top: -$spacing-extra-large; }\n .nt6-l { margin-top: -$spacing-extra-extra-large; }\n .nt7-l { margin-top: -$spacing-extra-extra-extra-large; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n TABLES\n Docs: http://tachyons.io/docs/elements/tables/\n\n*/\n\n.collapse {\n border-collapse: collapse;\n border-spacing: 0;\n}\n\n.striped--light-silver:nth-child(odd) {\n background-color: $light-silver;\n}\n\n.striped--moon-gray:nth-child(odd) {\n background-color: $moon-gray;\n}\n\n.striped--light-gray:nth-child(odd) {\n background-color: $light-gray;\n}\n\n.striped--near-white:nth-child(odd) {\n background-color: $near-white;\n}\n\n.stripe-light:nth-child(odd) {\n background-color: $white-10;\n}\n\n.stripe-dark:nth-child(odd) {\n background-color: $black-10;\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n TEXT DECORATION\n Docs: http://tachyons.io/docs/typography/text-decoration/\n\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.strike { text-decoration: line-through; }\n.underline { text-decoration: underline; }\n.no-underline { text-decoration: none; }\n\n\n@media #{$breakpoint-not-small} {\n .strike-ns { 
text-decoration: line-through; }\n .underline-ns { text-decoration: underline; }\n .no-underline-ns { text-decoration: none; }\n}\n\n@media #{$breakpoint-medium} {\n .strike-m { text-decoration: line-through; }\n .underline-m { text-decoration: underline; }\n .no-underline-m { text-decoration: none; }\n}\n\n@media #{$breakpoint-large} {\n .strike-l { text-decoration: line-through; }\n .underline-l { text-decoration: underline; }\n .no-underline-l { text-decoration: none; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n TEXT ALIGN\n Docs: http://tachyons.io/docs/typography/text-align/\n\n Base\n t = text-align\n\n Modifiers\n l = left\n r = right\n c = center\n j = justify\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.tl { text-align: left; }\n.tr { text-align: right; }\n.tc { text-align: center; }\n.tj { text-align: justify; }\n\n@media #{$breakpoint-not-small} {\n .tl-ns { text-align: left; }\n .tr-ns { text-align: right; }\n .tc-ns { text-align: center; }\n .tj-ns { text-align: justify; }\n}\n\n@media #{$breakpoint-medium} {\n .tl-m { text-align: left; }\n .tr-m { text-align: right; }\n .tc-m { text-align: center; }\n .tj-m { text-align: justify; }\n}\n\n@media #{$breakpoint-large} {\n .tl-l { text-align: left; }\n .tr-l { text-align: right; }\n .tc-l { text-align: center; }\n .tj-l { text-align: justify; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n TEXT TRANSFORM\n Docs: http://tachyons.io/docs/typography/text-transform/\n\n Base:\n tt = text-transform\n\n Modifiers\n c = capitalize\n l = lowercase\n u = uppercase\n n = none\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.ttc { text-transform: capitalize; }\n.ttl { text-transform: lowercase; }\n.ttu { text-transform: uppercase; }\n.ttn { text-transform: none; }\n\n@media #{$breakpoint-not-small} {\n .ttc-ns { text-transform: capitalize; }\n .ttl-ns { text-transform: 
lowercase; }\n .ttu-ns { text-transform: uppercase; }\n .ttn-ns { text-transform: none; }\n}\n\n@media #{$breakpoint-medium} {\n .ttc-m { text-transform: capitalize; }\n .ttl-m { text-transform: lowercase; }\n .ttu-m { text-transform: uppercase; }\n .ttn-m { text-transform: none; }\n}\n\n@media #{$breakpoint-large} {\n .ttc-l { text-transform: capitalize; }\n .ttl-l { text-transform: lowercase; }\n .ttu-l { text-transform: uppercase; }\n .ttn-l { text-transform: none; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n TYPE SCALE\n Docs: http://tachyons.io/docs/typography/scale/\n\n Base:\n f = font-size\n\n Modifiers\n 1 = 1st step in size scale\n 2 = 2nd step in size scale\n 3 = 3rd step in size scale\n 4 = 4th step in size scale\n 5 = 5th step in size scale\n 6 = 6th step in size scale\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n*/\n\n/*\n * For Hero/Marketing Titles\n *\n * These generally are too large for mobile\n * so be careful using them on smaller screens.\n * */\n\n.f-6,\n.f-headline {\n font-size: $font-size-headline;\n}\n.f-5,\n.f-subheadline {\n font-size: $font-size-subheadline;\n}\n\n\n/* Type Scale */\n\n\n.f1 { font-size: $font-size-1; }\n.f2 { font-size: $font-size-2; }\n.f3 { font-size: $font-size-3; }\n.f4 { font-size: $font-size-4; }\n.f5 { font-size: $font-size-5; }\n.f6 { font-size: $font-size-6; }\n.f7 { font-size: $font-size-7; }\n\n@media #{$breakpoint-not-small}{\n .f-6-ns,\n .f-headline-ns { font-size: $font-size-headline; }\n .f-5-ns,\n .f-subheadline-ns { font-size: $font-size-subheadline; }\n .f1-ns { font-size: $font-size-1; }\n .f2-ns { font-size: $font-size-2; }\n .f3-ns { font-size: $font-size-3; }\n .f4-ns { font-size: $font-size-4; }\n .f5-ns { font-size: $font-size-5; }\n .f6-ns { font-size: $font-size-6; }\n .f7-ns { font-size: $font-size-7; }\n}\n\n@media #{$breakpoint-medium} {\n .f-6-m,\n .f-headline-m { font-size: $font-size-headline; }\n .f-5-m,\n 
.f-subheadline-m { font-size: $font-size-subheadline; }\n .f1-m { font-size: $font-size-1; }\n .f2-m { font-size: $font-size-2; }\n .f3-m { font-size: $font-size-3; }\n .f4-m { font-size: $font-size-4; }\n .f5-m { font-size: $font-size-5; }\n .f6-m { font-size: $font-size-6; }\n .f7-m { font-size: $font-size-7; }\n}\n\n@media #{$breakpoint-large} {\n .f-6-l,\n .f-headline-l {\n font-size: $font-size-headline;\n }\n .f-5-l,\n .f-subheadline-l {\n font-size: $font-size-subheadline;\n }\n .f1-l { font-size: $font-size-1; }\n .f2-l { font-size: $font-size-2; }\n .f3-l { font-size: $font-size-3; }\n .f4-l { font-size: $font-size-4; }\n .f5-l { font-size: $font-size-5; }\n .f6-l { font-size: $font-size-6; }\n .f7-l { font-size: $font-size-7; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n TYPOGRAPHY\n http://tachyons.io/docs/typography/measure/\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n\n\n/* Measure is limited to ~66 characters */\n.measure {\n max-width: $measure;\n}\n\n/* Measure is limited to ~80 characters */\n.measure-wide {\n max-width: $measure-wide;\n}\n\n/* Measure is limited to ~45 characters */\n.measure-narrow {\n max-width: $measure-narrow;\n}\n\n/* Book paragraph style - paragraphs are indented with no vertical spacing. */\n.indent {\n text-indent: 1em;\n margin-top: 0;\n margin-bottom: 0;\n}\n\n.small-caps {\n font-variant: small-caps;\n}\n\n/* Combine this class with a width to truncate text (or just leave as is to truncate at width of containing element. 
*/\n\n.truncate {\n white-space: nowrap;\n overflow: hidden;\n text-overflow: ellipsis;\n}\n\n@media #{$breakpoint-not-small} {\n .measure-ns {\n max-width: $measure;\n }\n .measure-wide-ns {\n max-width: $measure-wide;\n }\n .measure-narrow-ns {\n max-width: $measure-narrow;\n }\n .indent-ns {\n text-indent: 1em;\n margin-top: 0;\n margin-bottom: 0;\n }\n .small-caps-ns {\n font-variant: small-caps;\n }\n .truncate-ns {\n white-space: nowrap;\n overflow: hidden;\n text-overflow: ellipsis;\n }\n}\n\n@media #{$breakpoint-medium} {\n .measure-m {\n max-width: $measure;\n }\n .measure-wide-m {\n max-width: $measure-wide;\n }\n .measure-narrow-m {\n max-width: $measure-narrow;\n }\n .indent-m {\n text-indent: 1em;\n margin-top: 0;\n margin-bottom: 0;\n }\n .small-caps-m {\n font-variant: small-caps;\n }\n .truncate-m {\n white-space: nowrap;\n overflow: hidden;\n text-overflow: ellipsis;\n }\n}\n\n@media #{$breakpoint-large} {\n .measure-l {\n max-width: $measure;\n }\n .measure-wide-l {\n max-width: $measure-wide;\n }\n .measure-narrow-l {\n max-width: $measure-narrow;\n }\n .indent-l {\n text-indent: 1em;\n margin-top: 0;\n margin-bottom: 0;\n }\n .small-caps-l {\n font-variant: small-caps;\n }\n .truncate-l {\n white-space: nowrap;\n overflow: hidden;\n text-overflow: ellipsis;\n }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n UTILITIES\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n/* Equivalent to .overflow-y-scroll */\n.overflow-container {\n overflow-y: scroll;\n}\n\n.center {\n margin-right: auto;\n margin-left: auto;\n}\n\n.mr-auto { margin-right: auto; }\n.ml-auto { margin-left: auto; }\n\n@media #{$breakpoint-not-small}{\n .center-ns {\n margin-right: auto;\n margin-left: auto;\n }\n .mr-auto-ns { margin-right: auto; }\n .ml-auto-ns { margin-left: auto; }\n}\n\n@media #{$breakpoint-medium}{\n .center-m {\n margin-right: auto;\n margin-left: auto;\n }\n .mr-auto-m { margin-right: auto; }\n 
.ml-auto-m { margin-left: auto; }\n}\n\n@media #{$breakpoint-large}{\n .center-l {\n margin-right: auto;\n margin-left: auto;\n }\n .mr-auto-l { margin-right: auto; }\n .ml-auto-l { margin-left: auto; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n VISIBILITY\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n\n/*\n Text that is hidden but accessible\n Ref: http://snook.ca/archives/html_and_css/hiding-content-for-accessibility\n*/\n\n.clip {\n position: fixed !important;\n _position: absolute !important;\n clip: rect(1px 1px 1px 1px); /* IE6, IE7 */\n clip: rect(1px, 1px, 1px, 1px);\n}\n\n@media #{$breakpoint-not-small} {\n .clip-ns {\n position: fixed !important;\n _position: absolute !important;\n clip: rect(1px 1px 1px 1px); /* IE6, IE7 */\n clip: rect(1px, 1px, 1px, 1px);\n }\n}\n\n@media #{$breakpoint-medium} {\n .clip-m {\n position: fixed !important;\n _position: absolute !important;\n clip: rect(1px 1px 1px 1px); /* IE6, IE7 */\n clip: rect(1px, 1px, 1px, 1px);\n }\n}\n\n@media #{$breakpoint-large} {\n .clip-l {\n position: fixed !important;\n _position: absolute !important;\n clip: rect(1px 1px 1px 1px); /* IE6, IE7 */\n clip: rect(1px, 1px, 1px, 1px);\n }\n}\n\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n WHITE SPACE\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n\n.ws-normal { white-space: normal; }\n.nowrap { white-space: nowrap; }\n.pre { white-space: pre; }\n\n@media #{$breakpoint-not-small} {\n .ws-normal-ns { white-space: normal; }\n .nowrap-ns { white-space: nowrap; }\n .pre-ns { white-space: pre; }\n}\n\n@media #{$breakpoint-medium} {\n .ws-normal-m { white-space: normal; }\n .nowrap-m { white-space: nowrap; }\n .pre-m { white-space: pre; }\n}\n\n@media #{$breakpoint-large} {\n .ws-normal-l { white-space: normal; }\n .nowrap-l { white-space: nowrap; }\n .pre-l { white-space: pre; }\n}\n\n","\n// Converted 
Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n VERTICAL ALIGN\n\n Media Query Extensions:\n -ns = not-small\n -m = medium\n -l = large\n\n*/\n\n.v-base { vertical-align: baseline; }\n.v-mid { vertical-align: middle; }\n.v-top { vertical-align: top; }\n.v-btm { vertical-align: bottom; }\n\n@media #{$breakpoint-not-small} {\n .v-base-ns { vertical-align: baseline; }\n .v-mid-ns { vertical-align: middle; }\n .v-top-ns { vertical-align: top; }\n .v-btm-ns { vertical-align: bottom; }\n}\n\n@media #{$breakpoint-medium} {\n .v-base-m { vertical-align: baseline; }\n .v-mid-m { vertical-align: middle; }\n .v-top-m { vertical-align: top; }\n .v-btm-m { vertical-align: bottom; }\n}\n\n@media #{$breakpoint-large} {\n .v-base-l { vertical-align: baseline; }\n .v-mid-l { vertical-align: middle; }\n .v-top-l { vertical-align: top; }\n .v-btm-l { vertical-align: bottom; }\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n HOVER EFFECTS\n Docs: http://tachyons.io/docs/themes/hovers/\n\n - Dim\n - Glow\n - Hide Child\n - Underline text\n - Grow\n - Pointer\n - Shadow\n\n*/\n\n/*\n\n Dim element on hover by adding the dim class.\n\n*/\n.dim {\n opacity: 1;\n transition: opacity .15s ease-in;\n}\n.dim:hover,\n.dim:focus {\n opacity: .5;\n transition: opacity .15s ease-in;\n}\n.dim:active {\n opacity: .8; transition: opacity .15s ease-out;\n}\n\n/*\n\n Animate opacity to 100% on hover by adding the glow class.\n\n*/\n.glow {\n transition: opacity .15s ease-in;\n}\n.glow:hover,\n.glow:focus {\n opacity: 1;\n transition: opacity .15s ease-in;\n}\n\n/*\n\n Hide child & reveal on hover:\n\n Put the hide-child class on a parent element and any nested element with the\n child class will be hidden and displayed on hover or focus.\n\n
\n
Hidden until hover or focus
\n
Hidden until hover or focus
\n
Hidden until hover or focus
\n
Hidden until hover or focus
\n
\n*/\n\n.hide-child .child {\n opacity: 0;\n transition: opacity .15s ease-in;\n}\n.hide-child:hover .child,\n.hide-child:focus .child,\n.hide-child:active .child {\n opacity: 1;\n transition: opacity .15s ease-in;\n}\n\n.underline-hover:hover,\n.underline-hover:focus {\n text-decoration: underline;\n}\n\n/* Can combine this with overflow-hidden to make background images grow on hover\n * even if you are using background-size: cover */\n\n.grow {\n -moz-osx-font-smoothing: grayscale;\n backface-visibility: hidden;\n transform: translateZ(0);\n transition: transform 0.25s ease-out;\n}\n\n.grow:hover,\n.grow:focus {\n transform: scale(1.05);\n}\n\n.grow:active {\n transform: scale(.90);\n}\n\n.grow-large {\n -moz-osx-font-smoothing: grayscale;\n backface-visibility: hidden;\n transform: translateZ(0);\n transition: transform .25s ease-in-out;\n}\n\n.grow-large:hover,\n.grow-large:focus {\n transform: scale(1.2);\n}\n\n.grow-large:active {\n transform: scale(.95);\n}\n\n/* Add pointer on hover */\n\n.pointer:hover {\n cursor: pointer;\n}\n\n/*\n Add shadow on hover.\n\n Performant box-shadow animation pattern from\n http://tobiasahlin.com/blog/how-to-animate-box-shadow/\n*/\n\n.shadow-hover {\n cursor: pointer;\n position: relative;\n transition: all 0.5s cubic-bezier(0.165, 0.84, 0.44, 1);\n}\n\n.shadow-hover::after {\n content: '';\n box-shadow: 0px 0px 16px 2px rgba( 0, 0, 0, .2 );\n border-radius: inherit;\n opacity: 0;\n position: absolute;\n top: 0;\n left: 0;\n width: 100%;\n height: 100%;\n z-index: -1;\n transition: opacity 0.5s cubic-bezier(0.165, 0.84, 0.44, 1);\n}\n\n.shadow-hover:hover::after,\n.shadow-hover:focus::after {\n opacity: 1;\n}\n\n/* Combine with classes in skins and skins-pseudo for\n * many different transition possibilities. 
*/\n\n.bg-animate,\n.bg-animate:hover,\n.bg-animate:focus {\n transition: background-color .15s ease-in-out;\n}\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n Z-INDEX\n\n Base\n z = z-index\n\n Modifiers\n -0 = literal value 0\n -1 = literal value 1\n -2 = literal value 2\n -3 = literal value 3\n -4 = literal value 4\n -5 = literal value 5\n -999 = literal value 999\n -9999 = literal value 9999\n\n -max = largest accepted z-index value as integer\n\n -inherit = string value inherit\n -initial = string value initial\n -unset = string value unset\n\n MDN: https://developer.mozilla.org/en/docs/Web/CSS/z-index\n Spec: http://www.w3.org/TR/CSS2/zindex.html\n Articles:\n https://philipwalton.com/articles/what-no-one-told-you-about-z-index/\n\n Tips on extending:\n There might be a time worth using negative z-index values.\n Or if you are using tachyons with another project, you might need to\n adjust these values to suit your needs.\n\n*/\n\n.z-0 { z-index: 0; }\n.z-1 { z-index: 1; }\n.z-2 { z-index: 2; }\n.z-3 { z-index: 3; }\n.z-4 { z-index: 4; }\n.z-5 { z-index: 5; }\n\n.z-999 { z-index: 999; }\n.z-9999 { z-index: 9999; }\n\n.z-max {\n z-index: 2147483647;\n}\n\n.z-inherit { z-index: inherit; }\n.z-initial { z-index: initial; }\n.z-unset { z-index: unset; }\n\n","\n// Converted Variables\n\n\n// Custom Media Query Variables\n\n\n/*\n\n NESTED\n Tachyons module for styling nested elements\n that are generated by a cms.\n\n*/\n\n.nested-copy-line-height p,\n.nested-copy-line-height ul,\n.nested-copy-line-height ol {\n line-height: $line-height-copy;\n}\n\n.nested-headline-line-height h1,\n.nested-headline-line-height h2,\n.nested-headline-line-height h3,\n.nested-headline-line-height h4,\n.nested-headline-line-height h5,\n.nested-headline-line-height h6 {\n line-height: $line-height-title;\n}\n\n.nested-list-reset ul,\n.nested-list-reset ol {\n padding-left: 0;\n margin-left: 0;\n list-style-type: none;\n}\n\n.nested-copy-indent p+p {\n 
text-indent: $letter-spacing-1;\n margin-top: $spacing-none;\n margin-bottom: $spacing-none;\n}\n\n.nested-copy-seperator p+p {\n margin-top: $spacing-copy-separator;\n}\n\n.nested-img img {\n width: 100%;\n max-width: 100%;\n display: block;\n}\n\n.nested-links a {\n color: $blue;\n transition: color .15s ease-in;\n}\n\n.nested-links a:hover,\n.nested-links a:focus {\n color: $light-blue;\n transition: color .15s ease-in;\n}\n","@use \"sass:meta\";\n@use \"variables\" as *;\n@use \"type\";\n@use \"mixins\";\n\n.wrapper {\n width: 100%;\n max-width: 1460px;\n margin: 0 auto;\n box-sizing: border-box;\n}\n\n.opblock-tag-section {\n display: flex;\n flex-direction: column;\n}\n\n.try-out.btn-group {\n padding: 0;\n display: flex;\n flex: 0.1 2 auto;\n}\n\n.try-out__btn {\n margin-left: 1.25rem;\n}\n\n.opblock-tag {\n display: flex;\n align-items: center;\n\n padding: 10px 20px 10px 10px;\n\n cursor: pointer;\n transition: all 0.2s;\n\n border-bottom: 1px solid rgba($opblock-tag-border-bottom-color, 0.3);\n\n &:hover {\n background: rgba($opblock-tag-background-color-hover, 0.02);\n }\n}\n\n.opblock-tag {\n font-size: 24px;\n\n margin: 0 0 5px 0;\n\n @include type.text_headline();\n\n &.no-desc {\n span {\n flex: 1;\n }\n }\n\n svg {\n transition: all 0.4s;\n }\n\n small {\n font-size: 14px;\n font-weight: normal;\n\n flex: 2;\n\n padding: 0 10px;\n\n @include type.text_body();\n }\n\n > div {\n overflow: hidden;\n white-space: nowrap;\n text-overflow: ellipsis;\n flex: 1 1 150px;\n font-weight: 400;\n }\n\n @media (max-width: 640px) {\n small {\n flex: 1;\n }\n\n > div {\n flex: 1;\n }\n }\n\n .info__externaldocs {\n text-align: right;\n }\n}\n\n.parameter__type {\n font-size: 12px;\n\n padding: 5px 0;\n\n @include type.text_code();\n}\n\n.parameter-controls {\n margin-top: 0.75em;\n}\n\n.examples {\n &__title {\n display: block;\n font-size: 1.1em;\n font-weight: bold;\n margin-bottom: 0.75em;\n }\n\n &__section {\n margin-top: 1.5em;\n }\n &__section-header {\n 
font-weight: bold;\n font-size: 0.9rem;\n margin-bottom: 0.5rem;\n }\n}\n\n.examples-select {\n margin-bottom: 0.75em;\n display: inline-block;\n .examples-select-element {\n width: 100%;\n }\n &__section-label {\n font-weight: bold;\n font-size: 0.9rem;\n margin-right: 0.5rem;\n }\n}\n\n.example {\n &__section {\n margin-top: 1.5em;\n }\n &__section-header {\n font-weight: bold;\n font-size: 0.9rem;\n margin-bottom: 0.5rem;\n }\n}\n\n.view-line-link {\n position: relative;\n top: 3px;\n\n width: 20px;\n margin: 0 5px;\n\n cursor: pointer;\n transition: all 0.5s;\n}\n\n.opblock {\n margin: 0 0 15px 0;\n\n border: 1px solid $opblock-border-color;\n border-radius: 4px;\n box-shadow: 0 0 3px rgba($opblock-box-shadow-color, 0.19);\n\n .tab-header {\n display: flex;\n\n flex: 1;\n\n .tab-item {\n padding: 0 40px;\n\n cursor: pointer;\n\n &:first-of-type {\n padding: 0 40px 0 0;\n }\n &.active {\n h4 {\n span {\n position: relative;\n\n &:after {\n position: absolute;\n bottom: -15px;\n left: 50%;\n\n width: 120%;\n height: 4px;\n\n content: \"\";\n transform: translateX(-50%);\n\n background: $opblock-tab-header-tab-item-active-h4-span-after-background-color;\n }\n }\n }\n }\n }\n }\n\n &.is-open {\n .opblock-summary {\n border-bottom: 1px solid $opblock-isopen-summary-border-bottom-color;\n }\n }\n\n .opblock-section-header {\n display: flex;\n align-items: center;\n\n padding: 8px 20px;\n\n min-height: 50px;\n\n background: rgba(255, 255, 255, .05);\n box-shadow: 0 1px 2px\n rgba($opblock-isopen-section-header-box-shadow-color, 0.1);\n\n > label {\n font-size: 12px;\n font-weight: bold;\n\n display: flex;\n align-items: center;\n\n margin: 0 0 0 auto;\n\n @include type.text_headline();\n\n > span {\n padding: 0 10px 0 0;\n }\n }\n\n h4 {\n font-size: 14px;\n\n flex: 1;\n\n margin: 0;\n\n @include type.text_headline();\n }\n }\n\n .opblock-summary-method {\n font-size: 14px;\n font-weight: bold;\n\n min-width: 80px;\n padding: 6px 0;\n\n text-align: center;\n\n 
border-radius: 3px;\n background: $opblock-summary-method-background-color;\n text-shadow: 0 1px 0 rgba($opblock-summary-method-text-shadow-color, 0.1);\n\n @include type.text_headline($opblock-summary-method-font-color);\n\n @media (max-width: 768px) {\n font-size: 12px;\n }\n }\n\n .opblock-summary-path,\n .opblock-summary-operation-id,\n .opblock-summary-path__deprecated {\n font-size: 16px;\n\n display: flex;\n align-items: center;\n\n word-break: break-word;\n\n @include type.text_code();\n\n @media (max-width: 768px) {\n font-size: 12px;\n }\n }\n\n .opblock-summary-path {\n flex-shrink: 1;\n }\n\n @media (max-width: 640px) {\n .opblock-summary-path {\n max-width: 100%;\n }\n }\n\n .opblock-summary-path__deprecated {\n text-decoration: line-through;\n }\n\n .opblock-summary-operation-id {\n font-size: 14px;\n }\n\n .opblock-summary-description {\n font-size: 13px;\n\n word-break: break-word;\n\n @include type.text_body();\n }\n\n .opblock-summary-path-description-wrapper {\n display: flex;\n flex-direction: row;\n align-items: center;\n flex-wrap: wrap;\n gap: 0px 10px;\n\n padding: 0 10px;\n\n width: 100%;\n }\n\n @media (max-width: 550px) {\n .opblock-summary-path-description-wrapper {\n flex-direction: column;\n align-items: flex-start;\n }\n }\n\n .opblock-summary {\n display: flex;\n align-items: center;\n\n padding: 5px;\n\n cursor: pointer;\n\n .view-line-link {\n position: relative;\n top: 2px;\n\n width: 0;\n margin: 0;\n\n cursor: pointer;\n transition: all 0.5s;\n }\n\n &:hover {\n .view-line-link {\n width: 18px;\n margin: 0 5px;\n\n &.copy-to-clipboard {\n width: 24px;\n }\n }\n }\n }\n\n &.opblock-post {\n @include mixins.method($color-post);\n }\n\n &.opblock-put {\n @include mixins.method($color-put);\n }\n\n &.opblock-delete {\n @include mixins.method($color-delete);\n }\n\n &.opblock-get {\n @include mixins.method($color-get);\n }\n\n &.opblock-patch {\n @include mixins.method($color-patch);\n }\n\n &.opblock-head {\n @include 
mixins.method($color-head);\n }\n\n &.opblock-options {\n @include mixins.method($color-options);\n }\n\n &.opblock-deprecated {\n opacity: 0.6;\n\n @include mixins.method($color-disabled);\n }\n\n .opblock-schemes {\n padding: 8px 20px;\n\n .schemes-title {\n padding: 0 10px 0 0;\n }\n }\n}\n\n.filter {\n .operation-filter-input {\n width: 100%;\n margin: 20px 0;\n padding: 10px 10px;\n\n border: 2px solid $operational-filter-input-border-color;\n }\n}\n\n.filter,\n.download-url-wrapper {\n .failed {\n color: red;\n }\n\n .loading {\n color: #aaa;\n }\n}\n\n.model-example {\n margin-top: 1em;\n\n .model-container {\n width: 100%;\n overflow-x: auto;\n\n .model-hint:not(.model-hint--embedded) {\n top: -1.15em;\n }\n }\n}\n\n.tab {\n display: flex;\n\n padding: 0;\n\n list-style: none;\n\n li {\n font-size: 12px;\n\n min-width: 60px;\n padding: 0;\n\n cursor: pointer;\n\n @include type.text_headline();\n\n &:first-of-type {\n position: relative;\n\n padding-left: 0;\n padding-right: 12px;\n\n &:after {\n position: absolute;\n top: 0;\n right: 6px;\n\n width: 1px;\n height: 100%;\n\n content: \"\";\n\n background: rgba($tab-list-item-first-background-color, 0.2);\n }\n }\n\n &.active {\n font-weight: bold;\n }\n\n button.tablinks {\n background: none;\n border: 0;\n padding: 0;\n\n color: inherit;\n font-family: inherit;\n font-weight: inherit;\n }\n }\n}\n\n.opblock-description-wrapper,\n.opblock-external-docs-wrapper,\n.opblock-title_normal {\n font-size: 12px;\n\n margin: 0 0 5px 0;\n padding: 15px 20px;\n\n @include type.text_body();\n\n h4 {\n font-size: 12px;\n\n margin: 0 0 5px 0;\n\n @include type.text_body();\n }\n\n p {\n font-size: 14px;\n\n margin: 0;\n\n @include type.text_body();\n }\n}\n\n.opblock-external-docs-wrapper {\n h4 {\n padding-left: 0px;\n }\n}\n\n.execute-wrapper {\n padding: 20px;\n\n text-align: right;\n\n .btn {\n width: 100%;\n padding: 8px 40px;\n }\n}\n\n.body-param-options {\n display: flex;\n flex-direction: column;\n\n 
.body-param-edit {\n padding: 10px 0;\n }\n\n label {\n padding: 8px 0;\n select {\n margin: 3px 0 0 0;\n }\n }\n}\n\n.responses-inner {\n padding: 20px;\n\n h5,\n h4 {\n font-size: 12px;\n\n margin: 10px 0 5px 0;\n\n @include type.text_body();\n }\n\n .curl {\n overflow-y: auto;\n max-height: 400px;\n min-height: 6em;\n }\n}\n\n.response-col_status {\n font-size: 14px;\n\n @include type.text_body();\n\n .response-undocumented {\n font-size: 11px;\n\n @include type.text_code($response-col-status-undocumented-font-color);\n }\n}\n\n.response-col_links {\n padding-left: 2em;\n max-width: 40em;\n font-size: 14px;\n\n @include type.text_body();\n\n .response-undocumented {\n font-size: 11px;\n\n @include type.text_code($response-col-links-font-color);\n }\n\n .operation-link {\n margin-bottom: 1.5em;\n\n .description {\n margin-bottom: 0.5em;\n }\n }\n}\n\n.opblock-body {\n background: rgb(30, 33, 41);\n\n .opblock-loading-animation {\n display: block;\n margin: 3em auto;\n }\n}\n\n.opblock-body pre.microlight {\n font-size: 12px;\n\n margin: 0;\n padding: revert !important;\n background: revert !important;\n\n white-space: pre-wrap;\n word-wrap: break-word;\n word-break: break-all;\n word-break: break-word;\n hyphens: auto;\n\n overflow-wrap: break-word;\n @include type.text_code($opblock-body-font-color);\n\n // disabled to have syntax highliting with react-syntax-highlight\n // span\n // {\n // color: $opblock-body-font-color !important;\n // }\n\n .headerline {\n display: block;\n }\n}\n\n.highlight-code {\n position: relative;\n border-radius: 4px;\n background: $opblock-body-background-color !important;\n\n > .microlight {\n overflow-y: auto;\n max-height: 400px;\n min-height: 6em;\n\n code {\n white-space: pre-wrap !important;\n word-break: break-all;\n background: revert !important;\n }\n }\n\n &:before {\n content: \"\";\n display: block;\n width: 100%;\n height: 10px;\n }\n\n &:after {\n content: \"\";\n display: block;\n width: 100%;\n height: 10px;\n 
}\n}\n.curl-command {\n position: relative;\n}\n\n.download-contents {\n position: absolute;\n bottom: 10px;\n right: 10px;\n background: #7d8293;\n text-align: center;\n padding: 5px;\n border: none;\n border-radius: 4px;\n font-family: sans-serif;\n font-weight: 600;\n color: white;\n font-size: 14px;\n height: 30px;\n justify-content: center;\n align-items: center;\n display: flex;\n}\n\n.scheme-container {\n margin: 0 0 20px 0;\n padding: 30px 0;\n\n .schemes {\n display: flex;\n align-items: flex-end;\n justify-content: space-between;\n flex-wrap: wrap;\n\n gap: 10px;\n\n /*\n This wraps the servers or schemes selector.\n This was added to make sure the Authorize button is always on the right\n and the servers or schemes selector is always on the left.\n */\n > .schemes-server-container {\n display: flex;\n flex-wrap: wrap;\n\n gap: 10px;\n\n > label {\n font-size: 12px;\n font-weight: bold;\n\n display: flex;\n flex-direction: column;\n\n margin: -20px 15px 0 0;\n\n @include type.text_headline();\n\n select {\n min-width: 130px;\n\n text-transform: uppercase;\n }\n }\n }\n\n /*\n This checks if the schemes-server-container is not present and\n aligns the authorize button to the right\n */\n &:not(:has(.schemes-server-container)) {\n justify-content: flex-end;\n }\n\n /*\n Target Authorize Button in schemes wrapper\n This was added here to fix responsiveness issues with the authorize button\n within the schemes wrapper without affecting other instances of it's usage\n */\n .auth-wrapper {\n flex: none;\n justify-content: start;\n\n .authorize {\n padding-right: 20px;\n margin: 0;\n\n display: flex;\n\n flex-wrap: nowrap;\n }\n }\n }\n}\n\n.loading-container {\n padding: 40px 0 60px;\n margin-top: 1em;\n min-height: 1px;\n display: flex;\n justify-content: center;\n align-items: center;\n flex-direction: column;\n\n .loading {\n position: relative;\n\n &:after {\n font-size: 10px;\n font-weight: bold;\n\n position: absolute;\n top: 50%;\n left: 50%;\n\n 
content: \"loading\";\n transform: translate(-50%, -50%);\n text-transform: uppercase;\n\n @include type.text_headline();\n }\n\n &:before {\n position: absolute;\n top: 50%;\n left: 50%;\n\n display: block;\n\n width: 60px;\n height: 60px;\n margin: -30px -30px;\n\n content: \"\";\n animation:\n rotation 1s infinite linear,\n opacity 0.5s;\n\n opacity: 1;\n border: 2px solid rgba($loading-container-before-border-color, 0.1);\n border-top-color: rgba($loading-container-before-border-top-color, 0.6);\n border-radius: 100%;\n\n backface-visibility: hidden;\n\n @keyframes rotation {\n to {\n transform: rotate(360deg);\n }\n }\n }\n }\n}\n\n.response-controls {\n padding-top: 1em;\n display: flex;\n}\n\n.response-control-media-type {\n margin-right: 1em;\n\n &--accept-controller {\n select {\n border-color: $response-content-type-controls-accept-header-select-border-color;\n border-width: 0;\n box-shadow: none;\n color: rgba(226, 228, 233, 0.82);\n background: rgba(255,255,255,0.05)\n url('data:image/svg+xml, ')\n right 10px center no-repeat;\n }\n }\n\n &__accept-message {\n color: $response-content-type-controls-accept-header-small-font-color;\n font-size: 0.7em;\n }\n\n &__title {\n display: block;\n margin-bottom: 0.2em;\n font-size: 0.7em;\n }\n}\n\n.response-control-examples {\n &__title {\n display: block;\n margin-bottom: 0.2em;\n font-size: 0.7em;\n }\n}\n\n@keyframes blinker {\n 50% {\n opacity: 0;\n }\n}\n\n.hidden {\n display: none;\n}\n\n.no-margin {\n height: auto;\n border: none;\n margin: 0;\n padding: 0;\n}\n\n.float-right {\n float: right;\n}\n\n.svg-assets {\n position: absolute;\n width: 0;\n height: 0;\n}\n\nsection {\n h3 {\n @include type.text_headline();\n }\n}\n\na.nostyle {\n text-decoration: inherit;\n color: inherit;\n cursor: pointer;\n display: inline;\n\n &:visited {\n text-decoration: inherit;\n color: inherit;\n cursor: pointer;\n }\n}\n\n.fallback {\n padding: 1em;\n color: #aaa;\n}\n\n.version-pragma {\n height: 100%;\n padding: 5em 
0px;\n\n &__message {\n display: flex;\n justify-content: center;\n height: 100%;\n font-size: 1.2em;\n text-align: center;\n line-height: 1.5em;\n\n padding: 0px 0.6em;\n\n > div {\n max-width: 55ch;\n flex: 1;\n }\n\n code {\n background-color: #dedede;\n padding: 4px 4px 2px;\n white-space: pre;\n }\n }\n}\n\n.opblock-link {\n font-weight: normal;\n\n &.shown {\n font-weight: bold;\n }\n}\n\nspan {\n &.token-string {\n color: #555;\n }\n\n &.token-not-formatted {\n color: #555;\n font-weight: bold;\n }\n}\n\n.information-container {\n display: none;\n}\n","@use \"sass:color\";\n@use \"variables\" as *;\n\n// - - - - - - - - - - - - - - - - - - -\n// - - _mixins.scss module\n// styles for the _mixins.scss module\n@function calculateRem($size) {\n $remSize: $size / 16px;\n @return $remSize * 1rem;\n}\n\n@mixin font-size($size) {\n font-size: $size;\n font-size: calculateRem($size);\n}\n\n%clearfix {\n &:before,\n &:after {\n display: table;\n\n content: \" \";\n }\n &:after {\n clear: both;\n }\n}\n\n@mixin size($width, $height: $width) {\n width: $width;\n height: $height;\n}\n\n$ease: (\n in-quad: cubic-bezier(0.55, 0.085, 0.68, 0.53),\n in-cubic: cubic-bezier(0.55, 0.055, 0.675, 0.19),\n in-quart: cubic-bezier(0.895, 0.03, 0.685, 0.22),\n in-quint: cubic-bezier(0.755, 0.05, 0.855, 0.06),\n in-sine: cubic-bezier(0.47, 0, 0.745, 0.715),\n in-expo: cubic-bezier(0.95, 0.05, 0.795, 0.035),\n in-circ: cubic-bezier(0.6, 0.04, 0.98, 0.335),\n in-back: cubic-bezier(0.6, -0.28, 0.735, 0.045),\n out-quad: cubic-bezier(0.25, 0.46, 0.45, 0.94),\n out-cubic: cubic-bezier(0.215, 0.61, 0.355, 1),\n out-quart: cubic-bezier(0.165, 0.84, 0.44, 1),\n out-quint: cubic-bezier(0.23, 1, 0.32, 1),\n out-sine: cubic-bezier(0.39, 0.575, 0.565, 1),\n out-expo: cubic-bezier(0.19, 1, 0.22, 1),\n out-circ: cubic-bezier(0.075, 0.82, 0.165, 1),\n out-back: cubic-bezier(0.175, 0.885, 0.32, 1.275),\n in-out-quad: cubic-bezier(0.455, 0.03, 0.515, 0.955),\n in-out-cubic: cubic-bezier(0.645, 0.045, 
0.355, 1),\n in-out-quart: cubic-bezier(0.77, 0, 0.175, 1),\n in-out-quint: cubic-bezier(0.86, 0, 0.07, 1),\n in-out-sine: cubic-bezier(0.445, 0.05, 0.55, 0.95),\n in-out-expo: cubic-bezier(1, 0, 0, 1),\n in-out-circ: cubic-bezier(0.785, 0.135, 0.15, 0.86),\n in-out-back: cubic-bezier(0.68, -0.55, 0.265, 1.55),\n);\n\n@function ease($key) {\n @if map-has-key($ease, $key) {\n @return map-get($ease, $key);\n }\n\n @warn 'Unkown \\'#{$key}\\' in $ease.';\n @return null;\n}\n\n@mixin ease($key) {\n transition-timing-function: ease($key);\n}\n\n@mixin text-truncate {\n overflow: hidden;\n\n white-space: nowrap;\n text-overflow: ellipsis;\n}\n\n@mixin aspect-ratio($width, $height) {\n position: relative;\n &:before {\n display: block;\n\n width: 100%;\n padding-top: ($height / $width) * 100%;\n\n content: \"\";\n }\n > iframe {\n position: absolute;\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n }\n}\n\n$browser-context: 16;\n\n@function em($pixels, $context: $browser-context) {\n @if (unitless($pixels)) {\n $pixels: $pixels * 1px;\n }\n\n @if (unitless($context)) {\n $context: $context * 1px;\n }\n\n @return $pixels / $context * 1em;\n}\n\n@mixin maxHeight($height) {\n @media (max-height: $height) {\n @content;\n }\n}\n\n@mixin breakpoint($class) {\n @if $class == tablet {\n @media (min-width: 768px) and (max-width: 1024px) {\n @content;\n }\n } @else if $class == mobile {\n @media (min-width: 320px) and (max-width: 736px) {\n @content;\n }\n } @else if $class == desktop {\n @media (min-width: 1400px) {\n @content;\n }\n } @else {\n @warn 'Breakpoint mixin supports: tablet, mobile, desktop';\n }\n}\n\n@mixin invalidFormElement() {\n animation: shake 0.4s 1;\n border-color: $color-delete;\n background: color.adjust($color-delete, $lightness: 35%);\n}\n\n@mixin method($color) {\n border-color: $color;\n background: rgba($color, 0.1);\n\n .opblock-summary-method {\n background: $color;\n }\n\n .opblock-summary {\n border-color: $color;\n }\n\n .tab-header .tab-item.active 
h4 span:after {\n background: $color;\n }\n}\n","@use \"variables\" as *;\n@use \"type\";\n@use \"mixins\";\n\n.btn {\n font-size: 14px;\n font-weight: bold;\n\n padding: 5px 23px;\n\n transition: all 0.3s;\n\n border: 2px solid $btn-border-color;\n border-radius: 4px;\n background: transparent;\n box-shadow: 0 1px 2px rgba($btn-box-shadow-color, 0.1);\n\n @include type.text_headline();\n\n &.btn-sm {\n font-size: 12px;\n padding: 4px 23px;\n }\n\n &[disabled] {\n cursor: not-allowed;\n\n opacity: 0.3;\n }\n\n &:hover {\n box-shadow: 0 0 5px rgba($btn-box-shadow-color, 0.3);\n }\n\n &.cancel {\n border-color: $btn-cancel-border-color;\n background-color: $btn-cancel-background-color;\n @include type.text_headline($btn-cancel-font-color);\n }\n\n &.authorize {\n line-height: 1;\n\n display: inline;\n\n color: $btn-authorize-font-color;\n border-color: $btn-authorize-border-color;\n background-color: $btn-authorize-background-color;\n\n span {\n float: left;\n\n padding: 4px 20px 0 0;\n }\n\n svg {\n fill: $btn-authorize-svg-fill-color;\n }\n }\n\n &.execute {\n background-color: $btn-execute-background-color-alt;\n color: $btn-execute-font-color;\n border-color: $btn-execute-border-color;\n }\n}\n\n.btn-group {\n display: flex;\n\n padding: 30px;\n\n .btn {\n flex: 1;\n\n &:first-child {\n border-radius: 4px 0 0 4px;\n }\n\n &:last-child {\n border-radius: 0 4px 4px 0;\n }\n }\n}\n\n.authorization__btn {\n padding: 0 0 0 10px;\n\n border: none;\n background: none;\n\n .locked {\n opacity: 1;\n }\n\n .unlocked {\n opacity: 0.4;\n }\n\n svg {\n fill: $text-code-default-font-color;\n }\n}\n\n.opblock-summary-control,\n.models-control,\n.model-box-control {\n all: inherit;\n flex: 1;\n border-bottom: 0;\n padding: 0;\n cursor: pointer;\n\n &:focus {\n outline: auto;\n }\n}\n\n.expand-methods,\n.expand-operation {\n border: none;\n background: none;\n\n svg {\n width: 20px;\n height: 20px;\n fill: $text-code-default-font-color;\n }\n}\n\n.expand-methods {\n padding: 0 
10px;\n\n &:hover {\n svg {\n fill: $expand-methods-svg-fill-color-hover;\n }\n }\n\n svg {\n transition: all 0.3s;\n\n fill: $expand-methods-svg-fill-color;\n }\n}\n\nbutton {\n cursor: pointer;\n\n &.invalid {\n @include mixins.invalidFormElement();\n }\n}\n\n.copy-to-clipboard {\n position: absolute;\n display: flex;\n justify-content: center;\n align-items: center;\n bottom: 10px;\n right: 100px;\n width: 30px;\n height: 30px;\n background: #7d8293;\n border-radius: 4px;\n border: none;\n\n button {\n flex-grow: 1;\n flex-shrink: 1;\n border: none;\n height: 25px;\n background: url(\"data:image/svg+xml, \")\n center center no-repeat;\n }\n}\n\n.copy-to-clipboard:active {\n background: #5e626f;\n}\n\n.opblock-control-arrow {\n border: none;\n text-align: center;\n background: none;\n\n svg {\n fill: $text-code-default-font-color;\n }\n}\n\n// overrides for smaller copy button for curl command\n.curl-command .copy-to-clipboard {\n bottom: 5px;\n right: 10px;\n width: 20px;\n height: 20px;\n\n button {\n height: 18px;\n }\n}\n\n// overrides for copy to clipboard button\n.opblock .opblock-summary .view-line-link.copy-to-clipboard {\n height: 26px;\n position: unset;\n}\n","@use \"variables\" as *;\n@use \"mixins\";\n@use \"type\";\n\nselect {\n font-size: 14px;\n font-weight: bold;\n\n padding: 5px 40px 5px 10px;\n\n border: 2px solid $form-select-border-color;\n border-radius: 4px;\n background: $form-select-background-color\n url('data:image/svg+xml, ')\n right 10px center no-repeat;\n background-size: 20px;\n box-shadow: 0 1px 2px 0 rgba($form-select-box-shadow-color, 0.25);\n\n @include type.text_headline();\n appearance: none;\n\n &[multiple] {\n margin: 5px 0;\n padding: 5px;\n\n background: $form-select-background-color;\n }\n\n &.invalid {\n @include mixins.invalidFormElement();\n }\n}\n\n.opblock-body select {\n min-width: 230px;\n @media (max-width: 768px) {\n min-width: 180px;\n }\n @media (max-width: 640px) {\n width: 100%;\n min-width: 100%;\n 
}\n}\n\nlabel {\n font-size: 12px;\n font-weight: bold;\n\n margin: 0 0 5px 0;\n\n @include type.text_headline();\n}\n\ninput[type=\"text\"],\ninput[type=\"password\"],\ninput[type=\"search\"],\ninput[type=\"email\"],\ninput[type=\"file\"] {\n line-height: 1;\n\n @media (max-width: 768px) {\n max-width: 175px;\n }\n}\n\ninput[type=\"text\"],\ninput[type=\"password\"],\ninput[type=\"search\"],\ninput[type=\"email\"],\ninput[type=\"file\"],\ntextarea {\n min-width: 100px;\n margin: 5px 0;\n padding: 8px 10px;\n\n border: 1px solid $form-input-border-color;\n border-radius: 4px;\n background: $form-input-background-color;\n\n &.invalid {\n @include mixins.invalidFormElement();\n }\n}\n\ninput,\ntextarea,\nselect {\n &[disabled] {\n background-color: #fafafa;\n color: #888;\n cursor: not-allowed;\n }\n}\n\nselect[disabled] {\n border-color: #888;\n}\n\ntextarea[disabled] {\n background-color: #41444e;\n color: #fff;\n}\n\n@keyframes shake {\n 10%,\n 90% {\n transform: translate3d(-1px, 0, 0);\n }\n\n 20%,\n 80% {\n transform: translate3d(2px, 0, 0);\n }\n\n 30%,\n 50%,\n 70% {\n transform: translate3d(-4px, 0, 0);\n }\n\n 40%,\n 60% {\n transform: translate3d(4px, 0, 0);\n }\n}\n\ntextarea {\n font-size: 12px;\n\n width: 100%;\n min-height: 280px;\n padding: 10px;\n\n border: none;\n border-radius: 4px;\n outline: none;\n background: rgba($form-textarea-background-color, 0.8);\n\n @include type.text_code();\n\n &:focus {\n border: 2px solid $form-textarea-focus-border-color;\n }\n\n &.curl {\n font-size: 12px;\n\n min-height: 100px;\n margin: 0;\n padding: 10px;\n\n resize: none;\n\n border-radius: 4px;\n background: $form-textarea-curl-background-color;\n\n @include type.text_code($form-textarea-curl-font-color);\n }\n}\n\n.checkbox {\n padding: 5px 0 10px;\n\n transition: opacity 0.5s;\n\n color: $form-checkbox-label-font-color;\n\n label {\n display: flex;\n }\n\n p {\n font-weight: normal !important;\n font-style: italic;\n\n margin: 0 !important;\n\n @include 
type.text_code();\n }\n\n input[type=\"checkbox\"] {\n display: none;\n\n & + label > .item {\n position: relative;\n top: 3px;\n\n display: inline-block;\n\n width: 16px;\n height: 16px;\n margin: 0 8px 0 0;\n padding: 5px;\n\n cursor: pointer;\n\n border-radius: 1px;\n background: $form-checkbox-background-color;\n box-shadow: 0 0 0 2px $form-checkbox-box-shadow-color;\n\n flex: none;\n\n &:active {\n transform: scale(0.9);\n }\n }\n\n &:checked + label > .item {\n background: $form-checkbox-background-color\n url('data:image/svg+xml, ')\n center center no-repeat;\n }\n }\n}\n","@use \"variables\" as *;\n@use \"type\";\n\n.dialog-ux {\n position: fixed;\n z-index: 9999;\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n\n .backdrop-ux {\n position: fixed;\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n\n background: rgba($dialog-ux-backdrop-background-color, 0.8);\n }\n\n .modal-ux {\n position: absolute;\n z-index: 9999;\n top: 50%;\n left: 50%;\n\n width: 100%;\n min-width: 300px;\n max-width: 650px;\n\n transform: translate(-50%, -50%);\n\n border: 1px solid $dialog-ux-modal-border-color;\n border-radius: 4px;\n background: $dialog-ux-modal-background-color;\n box-shadow: 0 10px 30px 0 rgba($dialog-ux-modal-box-shadow-color, 0.2);\n }\n\n .modal-ux-content {\n overflow-y: auto;\n\n max-height: 540px;\n padding: 20px;\n\n p {\n font-size: 12px;\n\n margin: 0 0 5px 0;\n\n color: $dialog-ux-modal-content-font-color;\n\n @include type.text_body();\n }\n\n h4 {\n font-size: 18px;\n font-weight: 600;\n\n margin: 15px 0 0 0;\n\n @include type.text_headline();\n }\n }\n\n .modal-ux-header {\n display: flex;\n\n padding: 12px 0;\n\n border-bottom: 1px solid $dialog-ux-modal-header-border-bottom-color;\n\n align-items: center;\n\n .close-modal {\n padding: 0 10px;\n\n border: none;\n background: none;\n\n appearance: none;\n\n svg {\n fill: #ffffff;\n }\n }\n\n h3 {\n font-size: 20px;\n font-weight: 600;\n\n margin: 0;\n padding: 0 20px;\n\n flex: 1;\n @include 
type.text_headline();\n }\n }\n}\n","@use \"variables\" as *;\n@use \"type\";\n\n.model {\n font-size: 12px;\n font-weight: 300;\n\n @include type.text_code();\n\n .deprecated {\n span,\n td {\n color: $model-deprecated-font-color !important;\n }\n\n > td:first-of-type {\n text-decoration: line-through;\n }\n }\n &-toggle {\n font-size: 10px;\n\n position: relative;\n top: 6px;\n\n display: inline-block;\n\n margin: auto 0.3em;\n\n cursor: pointer;\n transition: transform 0.15s ease-in;\n transform: rotate(90deg);\n transform-origin: 50% 50%;\n\n &.collapsed {\n transform: rotate(0deg);\n }\n\n &:after {\n display: block;\n\n width: 20px;\n height: 20px;\n\n content: \"\";\n\n background: url('data:image/svg+xml, ')\n center no-repeat;\n background-size: 100%;\n }\n }\n\n &-jump-to-path {\n position: relative;\n\n cursor: pointer;\n\n .view-line-link {\n position: absolute;\n top: -0.4em;\n\n cursor: pointer;\n }\n }\n\n &-title {\n position: relative;\n\n &:hover .model-hint {\n display: block;\n }\n }\n\n &-hint {\n position: absolute;\n top: -1.8em;\n\n display: none;\n\n padding: 0.1em 0.5em;\n\n white-space: nowrap;\n\n color: $model-hint-font-color;\n border-radius: 4px;\n background: rgba($model-hint-background-color, 0.7);\n }\n\n p {\n margin: 0 0 1em 0;\n }\n\n .property {\n color: #999;\n font-style: italic;\n\n &.primitive {\n color: #6b6b6b;\n\n &.extension {\n display: block;\n\n > td:first-child {\n padding-left: 0;\n padding-right: 0;\n width: auto;\n\n &:after {\n content: \":\\00a0\";\n }\n }\n }\n }\n }\n\n .external-docs {\n color: #666;\n font-weight: normal;\n }\n}\n\ntable.model {\n tr {\n &.description {\n color: #666;\n font-weight: normal;\n\n td:first-child {\n font-weight: bold;\n }\n }\n\n &.property-row {\n &.required td:first-child {\n font-weight: bold;\n }\n\n td {\n vertical-align: top;\n\n &:first-child {\n padding-right: 0.2em;\n }\n }\n\n .star {\n color: red;\n }\n }\n\n &.extension {\n color: #777;\n\n td:last-child {\n 
vertical-align: top;\n }\n }\n\n &.external-docs {\n td:first-child {\n font-weight: bold;\n }\n }\n\n .renderedMarkdown p:first-child {\n margin-top: 0;\n }\n }\n}\n\nsection.models {\n margin: 30px 0;\n\n border: 1px solid rgba($section-models-border-color, 0.3);\n border-radius: 4px;\n\n .pointer {\n cursor: pointer;\n }\n\n &.is-open {\n padding: 0 0 20px;\n h4 {\n margin: 0 0 5px 0;\n\n border-bottom: 1px solid\n rgba($section-models-isopen-h4-border-bottom-color, 0.3);\n }\n }\n h4 {\n font-size: 16px;\n\n display: flex;\n align-items: center;\n\n margin: 0;\n padding: 10px 20px 10px 10px;\n\n cursor: pointer;\n transition: all 0.2s;\n\n @include type.text_headline($section-models-h4-font-color);\n\n svg {\n transition: all 0.4s;\n }\n\n span {\n flex: 1;\n }\n\n &:hover {\n background: rgba($section-models-h4-background-color-hover, 0.02);\n }\n }\n\n h5 {\n font-size: 16px;\n\n margin: 0 0 10px 0;\n\n @include type.text_headline($section-models-h5-font-color);\n }\n\n .model-jump-to-path {\n position: relative;\n top: 5px;\n }\n\n .model-container {\n margin: 0 20px 15px;\n position: relative;\n\n transition: all 0.5s;\n\n border-radius: 4px;\n background: rgba($section-models-model-container-background-color, 0.05);\n\n &:hover {\n background: rgba($section-models-model-container-background-color, 0.07);\n }\n\n &:first-of-type {\n margin: 20px;\n }\n\n &:last-of-type {\n margin: 0 20px;\n }\n\n .models-jump-to-path {\n position: absolute;\n top: 8px;\n right: 5px;\n opacity: 0.65;\n }\n }\n\n .model-box {\n background: none;\n\n &:has(.model-box) {\n width: 100%;\n overflow-x: auto;\n }\n }\n}\n\n.model-box {\n padding: 10px;\n display: inline-block;\n\n border-radius: 4px;\n background: rgba($section-models-model-box-background-color, 0.1);\n\n .model-jump-to-path {\n position: relative;\n top: 4px;\n }\n\n &.deprecated {\n opacity: 0.5;\n }\n}\n\n.model-title {\n font-size: 16px;\n\n @include 
type.text_headline($section-models-model-title-font-color);\n\n img {\n margin-left: 1em;\n position: relative;\n bottom: 0px;\n }\n}\n\n.model-deprecated-warning {\n font-size: 16px;\n font-weight: 600;\n\n margin-right: 1em;\n\n @include type.text_headline($color-delete);\n}\n\nspan {\n > span.model {\n .brace-close {\n padding: 0 0 0 10px;\n }\n }\n}\n\n.prop-name {\n display: inline-block;\n\n margin-right: 1em;\n}\n\n.prop-type {\n color: $prop-type-font-color;\n}\n\n.prop-enum {\n display: block;\n}\n.prop-format {\n color: $prop-format-font-color;\n}\n","@use \"variables\" as *;\n@use \"type\";\n\n.servers {\n > label {\n font-size: 12px;\n\n margin: -20px 15px 0 0;\n\n @include type.text_headline();\n\n select {\n min-width: 130px;\n max-width: 100%;\n width: 100%;\n border-color: $response-content-type-controls-accept-header-select-border-color;\n border-width: 0;\n box-shadow: none;\n color: rgba(226, 228, 233, 0.82);\n background: rgba(255,255,255,0.05)\n url('data:image/svg+xml, ')\n right 10px center no-repeat;\n }\n }\n\n h4.message {\n padding-bottom: 2em;\n }\n\n table {\n tr {\n width: 30em;\n }\n td {\n display: inline-block;\n max-width: 15em;\n vertical-align: middle;\n padding-top: 10px;\n padding-bottom: 10px;\n\n &:first-of-type {\n padding-right: 1em;\n }\n\n input {\n width: 100%;\n height: 100%;\n }\n }\n }\n\n .computed-url {\n margin: 2em 0;\n\n code {\n display: inline-block;\n padding: 4px;\n font-size: 16px;\n margin: 0 1em;\n }\n }\n}\n\n.servers-title {\n font-size: 12px;\n font-weight: bold;\n}\n\n.operation-servers {\n h4.message {\n margin-bottom: 2em;\n }\n}\n","@use \"type\";\n@use \"variables\" as *;\n\ntable {\n width: 100%;\n padding: 0 10px;\n\n border-collapse: collapse;\n\n &.model {\n tbody {\n tr {\n td {\n padding: 0;\n\n vertical-align: top;\n\n &:first-of-type {\n width: 174px;\n padding: 0 0 0 2em;\n }\n }\n }\n }\n }\n\n &.headers {\n td {\n font-size: 12px;\n font-weight: 300;\n\n vertical-align: middle;\n\n 
@include type.text_code();\n }\n\n .header-example {\n color: #999;\n font-style: italic;\n }\n }\n\n tbody {\n tr {\n td {\n padding: 10px 0 0 0;\n\n vertical-align: top;\n\n &:first-of-type {\n min-width: 6em;\n padding: 10px 0 10px 10px;\n }\n\n &:has(.model-box) {\n max-width: 1px; // fits content in available space instead of growing the table beyond its container\n }\n }\n }\n }\n\n thead {\n tr {\n th,\n td {\n font-size: 12px;\n font-weight: bold;\n\n background-color: revert;\n padding: 12px 0;\n\n text-align: left;\n\n border-bottom: 1px solid rgba($table-thead-td-border-bottom-color, 0.2);\n\n @include type.text_body();\n\n &:first-of-type {\n padding: 12px 0 12px 10px;\n }\n }\n }\n }\n}\n\n.parameters-col_description {\n width: 99%; // forces other columns to shrink to their content widths\n margin-bottom: 2em;\n input {\n width: 100%;\n max-width: 340px;\n background-color: rgba(255,255,255,0.05);\n border: none;\n color: rgba(226, 228, 233, 0.82);\n\n &::placeholder {\n color: rgba(226, 228, 233, 0.5);\n }\n }\n\n select {\n border-width: 1px;\n }\n\n .markdown,\n .renderedMarkdown {\n p {\n margin: 0;\n }\n }\n}\n\n.parameter__name {\n font-size: 16px;\n font-weight: normal;\n\n // hack to give breathing room to the name column\n // TODO: refactor all of this to flexbox\n margin-right: 0.75em;\n\n @include type.text_headline();\n\n &.required {\n font-weight: bold;\n\n span {\n color: #F03535;\n }\n\n &:after {\n font-size: 10px;\n\n position: relative;\n top: -6px;\n\n padding: 5px;\n\n content: \"required\";\n\n color: #F03535;\n }\n }\n}\n\n.parameter__in,\n.parameter__extension {\n font-size: 12px;\n font-style: italic;\n\n @include type.text_code($table-parameter-in-font-color);\n}\n\n.parameter__deprecated {\n font-size: 12px;\n font-style: italic;\n\n @include type.text_code($table-parameter-deprecated-font-color);\n}\n\n.parameter__empty_value_toggle {\n display: block;\n font-size: 13px;\n padding-top: 5px;\n padding-bottom: 12px;\n\n input 
{\n margin-right: 7px;\n width: auto;\n }\n\n &.disabled {\n opacity: 0.7;\n }\n}\n\n.table-container {\n padding: 20px;\n}\n\n.response-col_description {\n width: 99%; // forces other columns to shrink to their content widths\n\n .markdown,\n .renderedMarkdown {\n p {\n margin: 0;\n }\n }\n}\n\n.response-col_links {\n min-width: 6em;\n}\n\n.response__extension {\n font-size: 12px;\n font-style: italic;\n\n @include type.text_code($table-parameter-in-font-color);\n}\n","@use \"variables\" as *;\n@use \"type\";\n\n.topbar {\n padding: 10px 0;\n\n background-color: $topbar-background-color;\n .topbar-wrapper {\n display: flex;\n align-items: center;\n flex-wrap: wrap;\n gap: 10px;\n }\n @media (max-width: 550px) {\n .topbar-wrapper {\n flex-direction: column;\n align-items: start;\n }\n }\n\n a {\n font-size: 1.5em;\n font-weight: bold;\n\n display: flex;\n align-items: center;\n flex: 1;\n\n max-width: 300px;\n\n text-decoration: none;\n\n @include type.text_headline($topbar-link-font-color);\n\n span {\n margin: 0;\n padding: 0 10px;\n }\n }\n\n .download-url-wrapper {\n display: flex;\n flex: 3;\n justify-content: flex-end;\n\n input[type=\"text\"] {\n width: 100%;\n max-width: 100%;\n margin: 0;\n\n border: 2px solid $topbar-download-url-wrapper-element-border-color;\n border-radius: 4px 0 0 4px;\n outline: none;\n }\n\n .select-label {\n display: flex;\n align-items: center;\n\n width: 100%;\n max-width: 600px;\n margin: 0;\n color: #f0f0f0;\n span {\n font-size: 16px;\n\n flex: 1;\n\n padding: 0 10px 0 0;\n\n text-align: right;\n }\n\n select {\n flex: 2;\n\n width: 100%;\n\n border: 2px solid $topbar-download-url-wrapper-element-border-color;\n outline: none;\n box-shadow: none;\n }\n }\n\n .download-url-button {\n font-size: 16px;\n font-weight: bold;\n\n padding: 4px 30px;\n\n border: none;\n border-radius: 0 4px 4px 0;\n background: $topbar-download-url-button-background-color;\n\n @include type.text_headline($topbar-download-url-button-font-color);\n }\n 
}\n @media (max-width: 550px) {\n .download-url-wrapper {\n width: 100%;\n }\n }\n}\n","@use \"sass:color\";\n@use \"variables\" as *;\n@use \"type\";\n\n.info {\n margin: 50px 0;\n\n &.failed-config {\n max-width: 880px;\n margin-left: auto;\n margin-right: auto;\n text-align: center;\n }\n\n hgroup.main {\n margin: 0 0 20px 0;\n a {\n font-size: 12px;\n }\n }\n pre {\n font-size: 14px;\n }\n p,\n li,\n table {\n font-size: 14px;\n\n @include type.text_body();\n }\n\n h1,\n h2,\n h3,\n h4,\n h5 {\n @include type.text_body();\n }\n\n a {\n font-size: 14px;\n\n transition: all 0.4s;\n\n @include type.text_body($info-link-font-color);\n\n &:hover {\n color: color.adjust($info-link-font-color-hover, $lightness: -15%);\n }\n }\n > div {\n margin: 0 0 5px 0;\n }\n\n .base-url {\n font-size: 12px;\n font-weight: 300 !important;\n\n margin: 0;\n\n @include type.text_code();\n }\n\n .description {\n display: none;\n }\n\n .title {\n display: none;\n font-size: 36px;\n\n margin: 0;\n\n @include type.text_body();\n\n small {\n font-size: 10px;\n\n position: relative;\n top: -5px;\n\n display: inline-block;\n\n margin: 0 0 0 5px;\n padding: 2px 4px;\n\n vertical-align: super;\n\n border-radius: 57px;\n background: $info-title-small-background-color;\n\n &.version-stamp {\n background-color: #89bf04;\n }\n\n pre {\n margin: 0;\n padding: 0;\n\n @include type.text_headline($info-title-small-pre-font-color);\n }\n }\n }\n}\n","@use \"variables\" as *;\n@use \"type\";\n\n.auth-btn-wrapper {\n display: flex;\n\n padding: 10px 0;\n\n justify-content: center;\n\n .authorize {\n margin-right: 10px !important;\n }\n}\n\n.auth-wrapper {\n display: flex;\n\n flex: 1;\n justify-content: flex-end;\n\n .authorize {\n padding-right: 20px;\n margin-left: 10px;\n margin-right: 10px;\n }\n}\n\n.auth-container {\n margin: 0 0 10px 0;\n padding: 10px 20px;\n\n border-bottom: 1px solid $auth-container-border-color;\n\n &:last-of-type {\n margin: 0;\n padding: 10px 20px;\n\n border: 0;\n }\n\n h4 
{\n margin: 5px 0 15px 0 !important;\n }\n\n .wrapper {\n margin: 0;\n padding: 0;\n }\n\n input[type=\"text\"],\n input[type=\"password\"] {\n min-width: 230px;\n }\n\n .errors {\n font-size: 12px;\n\n padding: 10px;\n\n border-radius: 4px;\n\n background-color: #ffeeee;\n\n color: red;\n\n margin: 1em;\n\n @include type.text_code();\n\n b {\n text-transform: capitalize;\n margin-right: 1em;\n }\n }\n}\n\n.scopes {\n h2 {\n font-size: 14px;\n\n @include type.text_headline();\n\n a {\n font-size: 12px;\n color: $auth-select-all-none-link-font-color;\n cursor: pointer;\n padding-left: 10px;\n text-decoration: underline;\n }\n }\n}\n\n.scope-def {\n padding: 0 0 20px 0;\n}\n","@use \"variables\" as *;\n@use \"type\";\n\n.errors-wrapper {\n margin: 20px;\n padding: 10px 20px;\n\n animation: scaleUp 0.5s;\n\n border: 2px solid $color-delete;\n border-radius: 4px;\n background: rgba($color-delete, 0.1);\n\n .error-wrapper {\n margin: 0 0 10px 0;\n }\n\n .errors {\n h4 {\n font-size: 14px;\n\n margin: 0;\n\n @include type.text_code();\n }\n\n small {\n color: $errors-wrapper-errors-small-font-color;\n }\n\n .message {\n white-space: pre-line;\n\n &.thrown {\n max-width: 100%;\n }\n }\n\n .error-line {\n text-decoration: underline;\n cursor: pointer;\n }\n }\n\n hgroup {\n display: flex;\n\n align-items: center;\n\n h4 {\n font-size: 20px;\n\n margin: 0;\n\n flex: 1;\n @include type.text_headline();\n }\n }\n}\n\n@keyframes scaleUp {\n 0% {\n transform: scale(0.8);\n\n opacity: 0;\n }\n 100% {\n transform: scale(1);\n\n opacity: 1;\n }\n}\n",".Resizer.vertical.disabled {\n display: none;\n}\n","@use \"variables\" as *;\n@use \"type\";\n\n.markdown,\n.renderedMarkdown {\n p,\n pre {\n margin: 1em auto;\n\n word-break: break-all; /* Fallback trick */\n word-break: break-word;\n }\n pre {\n color: black;\n font-weight: normal;\n white-space: pre-wrap;\n background: none;\n padding: 0px;\n }\n\n code {\n font-size: 14px;\n padding: 5px 7px;\n\n border-radius: 4px;\n 
background: rgba($info-code-background-color, 0.05);\n\n @include type.text_code($info-code-font-color);\n }\n\n pre > code {\n display: block;\n }\n}\n","@use \"./../../../components/mixins\";\n\n.json-schema-2020-12 {\n &-keyword--\\$vocabulary {\n ul {\n @include mixins.expansion-border;\n }\n }\n\n &-\\$vocabulary-uri {\n margin-left: 35px;\n\n &--disabled {\n text-decoration: line-through;\n }\n }\n}\n","@use \"./../../../style/variables\" as *;\n@use \"./../../../style/type\";\n\n@mixin expansion-border {\n margin: 0 0 0 20px;\n border-left: 1px dashed\n rgba($section-models-model-container-background-color, 0.1);\n}\n\n@mixin json-schema-2020-12-keyword--primary {\n color: $text-code-default-font-color;\n font-style: normal;\n}\n\n@mixin json-schema-2020-12-keyword--extension {\n color: #929292;\n font-style: italic;\n}\n\n@mixin json-schema-2020-12-keyword {\n margin: 5px 0 5px 0;\n\n &__children {\n @include expansion-border;\n padding: 0;\n\n &--collapsed {\n display: none;\n }\n }\n\n &__name {\n font-size: 12px;\n margin-left: 20px;\n font-weight: bold;\n\n &--primary {\n @include json-schema-2020-12-keyword--primary;\n }\n\n &--secondary {\n color: #6b6b6b;\n font-style: italic;\n }\n\n &--extension {\n @include json-schema-2020-12-keyword--extension;\n }\n }\n\n &__value {\n color: #6b6b6b;\n font-style: italic;\n font-size: 12px;\n font-weight: normal;\n\n &--primary {\n @include json-schema-2020-12-keyword--primary;\n }\n\n &--secondary {\n color: #6b6b6b;\n font-style: italic;\n }\n\n &--extension {\n @include json-schema-2020-12-keyword--extension;\n }\n\n &--warning {\n @include type.text_code();\n font-style: normal;\n display: inline-block;\n margin-left: 10px;\n line-height: 1.5;\n padding: 1px 4px 1px 4px;\n border-radius: 4px;\n color: red;\n border: 1px dashed red;\n }\n }\n}\n","@use \"./../../mixins\";\n\n.json-schema-2020-12-keyword--default {\n .json-schema-2020-12-json-viewer__name {\n @include 
mixins.json-schema-2020-12-keyword--primary;\n }\n\n .json-schema-2020-12-json-viewer__value {\n @include mixins.json-schema-2020-12-keyword--primary;\n }\n}\n",".json-schema-2020-12-keyword--dependentRequired {\n & > ul {\n display: inline-block;\n padding: 0;\n margin: 0;\n\n li {\n display: inline;\n list-style-type: none;\n }\n }\n}\n",".json-schema-2020-12-keyword--description {\n color: #6b6b6b;\n font-size: 12px;\n margin-left: 20px;\n\n & p {\n margin: 0;\n }\n}\n","@use \"./../../mixins\";\n\n.json-schema-2020-12-keyword--examples {\n .json-schema-2020-12-json-viewer__name {\n @include mixins.json-schema-2020-12-keyword--primary;\n }\n\n .json-schema-2020-12-json-viewer__value {\n @include mixins.json-schema-2020-12-keyword--primary;\n }\n}\n","@use \"./../../mixins\";\n\n.json-schema-2020-12-json-viewer-extension-keyword {\n .json-schema-2020-12-json-viewer__name {\n @include mixins.json-schema-2020-12-keyword--extension;\n }\n\n .json-schema-2020-12-json-viewer__value {\n @include mixins.json-schema-2020-12-keyword--extension;\n }\n}\n","@use \"./../../../../../style/variables\" as *;\n\n.json-schema-2020-12 {\n &-keyword--patternProperties {\n ul {\n margin: 0;\n padding: 0;\n border: none;\n }\n\n .json-schema-2020-12__title:first-of-type::before {\n color: $prop-type-font-color;\n content: \"/\";\n }\n\n .json-schema-2020-12__title:first-of-type::after {\n color: $prop-type-font-color;\n content: \"/\";\n }\n }\n}\n",".json-schema-2020-12 {\n &-keyword--properties {\n & > ul {\n margin: 0;\n padding: 0;\n border: none;\n }\n }\n\n &-property {\n list-style-type: none;\n\n &--required {\n &\n > .json-schema-2020-12:first-of-type\n > .json-schema-2020-12-head\n .json-schema-2020-12__title:after {\n content: \"*\";\n color: red;\n font-weight: bold;\n }\n }\n }\n}\n","@use \"./../../../../../style/variables\" as *;\n@use \"./../../../../../style/type\";\n\n.json-schema-2020-12 {\n &__title {\n @include 
type.text_headline($section-models-model-title-font-color);\n display: inline-block;\n font-weight: bold;\n font-size: 12px;\n line-height: normal;\n\n & .json-schema-2020-12-keyword__name {\n margin: 0;\n }\n }\n\n &-property {\n margin: 7px 0;\n\n .json-schema-2020-12__title {\n @include type.text_code();\n font-size: 12px;\n vertical-align: middle;\n }\n }\n}\n","@use \"./../../../../style/variables\" as *;\n@use \"./../mixins\";\n@use \"./$vocabulary/$vocabulary\" as vocabulary;\n@use \"./Const/const\";\n@use \"./Constraint/constraint\";\n@use \"./Default/default\";\n@use \"./DependentRequired/dependent-required\";\n@use \"./Description/description\";\n@use \"./Enum/enum\";\n@use \"./Examples/examples\";\n@use \"./ExtensionKeywords/extension-keywords\";\n@use \"./PatternProperties/pattern-properties\";\n@use \"./Properties/properties\";\n@use \"./Title/title\";\n\n.json-schema-2020-12-keyword {\n @include mixins.json-schema-2020-12-keyword;\n}\n\n.json-schema-2020-12-keyword__name--secondary\n + .json-schema-2020-12-keyword__value--secondary::before {\n content: \"=\";\n}\n\n.json-schema-2020-12__attribute {\n font-family: monospace;\n color: $text-code-default-font-color;\n font-size: 12px;\n text-transform: lowercase;\n padding-left: 10px;\n\n &--primary {\n color: $prop-type-font-color;\n }\n\n &--muted {\n color: gray;\n }\n\n &--warning {\n color: red;\n }\n}\n","@use \"./../mixins\";\n@use \"./../keywords/all\";\n\n.json-schema-2020-12-json-viewer {\n @include mixins.json-schema-2020-12-keyword;\n}\n\n.json-schema-2020-12-json-viewer__name--secondary\n + .json-schema-2020-12-json-viewer__value--secondary::before {\n content: \"=\";\n}\n","@use \"./../../../../style/variables\" as *;\n@use \"./../../components/mixins\";\n\n.json-schema-2020-12 {\n margin: 0 20px 15px 20px;\n border-radius: 4px;\n padding: 12px 0 12px 20px;\n background-color: rgba(\n $section-models-model-container-background-color,\n 0.05\n );\n\n &:first-of-type {\n margin: 20px;\n }\n\n 
&:last-of-type {\n margin: 0 20px;\n }\n\n &--embedded {\n background-color: inherit;\n padding: 0 inherit 0 inherit;\n }\n\n &-body {\n @include mixins.expansion-border;\n margin: 2px 0;\n\n &--collapsed {\n display: none;\n }\n }\n}\n",".json-schema-2020-12-accordion {\n outline: none;\n border: none;\n padding-left: 0;\n\n &__children {\n display: inline-block;\n }\n\n &__icon {\n width: 18px;\n height: 18px;\n display: inline-block;\n vertical-align: bottom;\n\n &--expanded {\n transition: transform 0.15s ease-in;\n transform: rotate(-90deg);\n transform-origin: 50% 50%;\n }\n\n &--collapsed {\n transition: transform 0.15s ease-in;\n transform: rotate(0deg);\n transform-origin: 50% 50%;\n }\n\n & svg {\n height: 20px;\n width: 20px;\n }\n }\n}\n","@use \"./../../../../style/variables\" as *;\n@use \"./../../../../style/type\";\n\n.json-schema-2020-12-expand-deep-button {\n @include type.text_headline($section-models-model-title-font-color);\n font-size: 12px;\n color: rgb(175, 174, 174);\n border: none;\n padding-right: 0;\n}\n",".model-box {\n // inferred names of Schema Objects\n &\n .json-schema-2020-12:not(.json-schema-2020-12--embedded)\n > .json-schema-2020-12-head\n .json-schema-2020-12__title:first-of-type {\n font-size: 16px;\n }\n\n & > .json-schema-2020-12 {\n margin: 0;\n }\n\n .json-schema-2020-12 {\n padding: 0;\n background-color: transparent;\n }\n\n .json-schema-2020-12-accordion,\n .json-schema-2020-12-expand-deep-button {\n background-color: transparent;\n }\n}\n",".models\n .json-schema-2020-12:not(.json-schema-2020-12--embedded)\n > .json-schema-2020-12-head\n .json-schema-2020-12__title:first-of-type {\n font-size: 16px;\n}\n\n.models .json-schema-2020-12:not(.json-schema-2020-12--embedded) {\n width: calc(100% - 40px);\n overflow-x: auto;\n}\n"],"names":[],"sourceRoot":""} \ No newline at end of file diff --git a/deployment/25.10.3/deployments/air-gap/index.html b/deployment/25.10.3/deployments/air-gap/index.html new file mode 100644 
index 00000000..7d588391 --- /dev/null +++ b/deployment/25.10.3/deployments/air-gap/index.html @@ -0,0 +1,4681 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Air Gap Installation - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Air Gap Installation

+ +

Simplyblock can be installed in an air-gapped environment. However, the necessary images must be downloaded to +install and run the control plane, the storage nodes, and the Kubernetes CSI driver. In addition, for Kubernetes +deployments, you will also want to download or clone the +simplyblock helm repository ⧉ which contains the helm charts for +Kubernetes-based storage and caching nodes, as well as the Kubernetes CSI driver.

+

For an air-gapped installation, we recommend running a local container registry inside the air-gapped environment. Tools such as +JFrog Artifactory ⧉ or +Sonatype Nexus ⧉ help with the setup and management of +container images in air-gapped environments.

+

The general installation instructions are similar to non-air-gapped installations, with the need to update the +container download locations to point to your local container repository.
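As a sketch of such a rewrite, image references can be retargeted from the public registry to the local one before deployment. The registry host and image name below are placeholders for illustration, not actual simplyblock image names:

```shell
# Rewrite a public image reference to point at a local air-gapped registry.
# "registry.internal:5000" and the image name are illustrative placeholders.
LOCAL_REGISTRY="registry.internal:5000"

rewrite_image() {
  # Replace the public registry prefix with the local registry host.
  echo "$1" | sed "s|^docker.io/|${LOCAL_REGISTRY}/|"
}

rewrite_image "docker.io/simplyblock/storage-node:latest"
# → registry.internal:5000/simplyblock/storage-node:latest
```

The same substitution can be applied to image references in helm values files or deployment manifests before installing.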

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/baremetal/index.html b/deployment/25.10.3/deployments/baremetal/index.html new file mode 100644 index 00000000..97a7595c --- /dev/null +++ b/deployment/25.10.3/deployments/baremetal/index.html @@ -0,0 +1,4831 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Plain Linux Initiators - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Plain Linux Initiators

+ +

Simplyblock storage can be attached over the network to Linux hosts which are not running Kubernetes, Proxmox or +OpenStack.

+

While no simplyblock components need to be installed on these hosts, some OS-level configuration steps are required. +On Kubernetes or Proxmox, these steps are taken care of by the CSI driver or the Proxmox integration, respectively.

+

On plain Linux initiators, those steps have to be performed manually on each host that will connect simplyblock logical +volumes.

+

Install the NVMe Client Package

+
=== "RHEL / Alma / Rocky"
+
+    ```bash
+    sudo dnf install -y nvme-cli
+    ```
+
+=== "Debian / Ubuntu"
+
+    ```bash
+    sudo apt install -y nvme-cli
+    ```
+
+

Load the NVMe over Fabrics Kernel Modules

+

For NVMe over TCP and NVMe over RoCE:

+

Simplyblock is built upon the NVMe over Fabrics standard and uses NVMe over TCP (NVMe/TCP) by default.

+

While the driver is part of the Linux kernel with kernel versions 5.x and later, it is not enabled by default. Hence, +when using simplyblock, the driver needs to be loaded.

+
Loading the NVMe/TCP driver
modprobe nvme-tcp
+
+
Loading the NVMe/RDMA driver
modprobe nvme-rdma
+
+

When loading the NVMe/TCP or NVMe/RDMA driver, the NVMe over Fabrics driver is automatically loaded too, as the former depends on the +foundations it provides.

+

It is possible to check for successful loading of both drivers with the following command:

+
Checking the drivers being loaded
lsmod | grep 'nvme_'
+
+

The response should list the drivers as nvme_tcp and nvme_fabrics as seen in the following example:

+
Example output of the driver listing
[demo@demo ~]# lsmod | grep 'nvme_'
+nvme_tcp               57344  0
+nvme_keyring           16384  1 nvme_tcp
+nvme_fabrics           45056  1 nvme_tcp
+nvme_core             237568  3 nvme_tcp,nvme,nvme_fabrics
+nvme_auth              28672  1 nvme_core
+t10_pi                 20480  2 sd_mod,nvme_core
+
+

To make the driver loading persistent and survive system reboots, it has to be configured to be loaded at system startup +time. This can be achieved by either adding it to /etc/modules (Debian / Ubuntu) or creating a config file under +/etc/modules-load.d/ (Red Hat / Alma / Rocky).

+
+
+
+
echo "nvme-tcp" | sudo tee -a /etc/modules-load.d/nvme-tcp.conf
+
+
+
+
echo "nvme-tcp" | sudo tee -a /etc/modules
+
+
+
+
+

After rebooting the system, the driver should be loaded automatically. It can be checked again via the above provided +lsmod command.
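The persistence configuration itself is just a one-line file naming the module. As a minimal sketch (writing to a scratch directory instead of /etc so it can run anywhere; on a real host the target would be /etc/modules-load.d/nvme-tcp.conf):

```shell
# Sketch: generate a modules-load.d style config file in a scratch directory.
# On a real host, the target directory would be /etc/modules-load.d/.
TARGET_DIR=$(mktemp -d)
echo "nvme-tcp" > "${TARGET_DIR}/nvme-tcp.conf"

# The file format simply lists one kernel module name per line.
cat "${TARGET_DIR}/nvme-tcp.conf"
# → nvme-tcp
```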

+

Create a Storage Pool

+

Before logical volumes can be created and connected, a storage pool is required. If a pool already exists, it can be +reused. Otherwise, a storage pool can be created on any control plane node as follows:

+
Create a Storage Pool
sbctl pool add <POOL_NAME> <CLUSTER_UUID>
+
+

The last line of a successful storage pool creation returns the new pool id.

+
Example output of creating a storage pool
[demo@demo ~]# sbctl pool add test 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a
+2025-03-05 06:36:06,093: INFO: Adding pool
+2025-03-05 06:36:06,098: INFO: {"cluster_id": "4502977c-ae2d-4046-a8c5-ccc7fa78eb9a", "event": "OBJ_CREATED", "object_name": "Pool", "message": "Pool created test", "caused_by": "cli"}
+2025-03-05 06:36:06,100: INFO: Done
+ad35b7bb-7703-4d38-884f-d8e56ffdafc6 # <- Pool Id
+
+

Create and Connect a Logical Volume

+

To create a new logical volume, the following command can be run on any control plane node.

+
sbctl volume add \
+  --max-rw-iops <IOPS> \
+  --max-r-mbytes <THROUGHPUT> \
+  --max-w-mbytes <THROUGHPUT> \
+  --ndcs <DATA CHUNKS IN STRIPE> \
+  --npcs <PARITY CHUNKS IN STRIPE>
+  --fabric {tcp, rdma}
+  --lvol-priority-class <1-6>
+  <VOLUME_NAME> \
+  <VOLUME_SIZE> \
+  <POOL_NAME>
+
+
+

Info

+

The parameters ndcs and npcs define the erasure-coding schema (e.g., --ndcs=4 --npcs=2). The settings are +optional. If not specified, the cluster default is chosen. Valid values for ndcs are 1, 2, and 4, and for npcs 0, 1, +and 2. However, it must be considered that the number of cluster nodes must be equal to or larger than (ndcs + +npcs).

+

The parameter --fabric defines the fabric by which the volume is connected to the cluster. It is optional and the +default is tcp. The fabric type rdma can only be chosen for hosts with an RDMA-capable NIC and for clusters that +support RDMA. A priority class is optional as well and can be selected only if the cluster defines it. A cluster can +define 0-6 priority classes. The default is 0.
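The node-count constraint above can be checked up front. A small sketch, with example schema and node-count values:

```shell
# Check that a cluster has enough nodes for an erasure coding schema:
# the node count must be >= ndcs + npcs.
ndcs=4
npcs=2
nodes=6

if [ "$nodes" -ge $((ndcs + npcs)) ]; then
  echo "schema ${ndcs}+${npcs} fits on ${nodes} nodes"
else
  echo "schema ${ndcs}+${npcs} needs at least $((ndcs + npcs)) nodes"
fi
# → schema 4+2 fits on 6 nodes
```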

+
+
Example of creating a logical volume
sbctl volume add --ndcs 2 --npcs 1 --fabric tcp lvol01 1000G test
+
+

In this example, a logical volume with the name lvol01 and 1TB of thinly provisioned capacity is created in the pool +named test. The uuid of the logical volume is returned at the end of the operation.

+

For additional parameters, see Add a new Logical Volume.

+

To connect a logical volume on the initiator (or Linux client), execute the following command on any control plane +node. This command returns one or more connection commands to be executed on the client.

+
sbctl volume connect \
+  <VOLUME_ID>
+
+
Example of retrieving the connection strings of a logical volume
sbctl volume connect a898b44d-d7ee-41bb-bc0a-989ad4711780
+
+sudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=3600 --nr-io-queues=32 --keep-alive-tmo=5 --transport=tcp --traddr=10.10.20.2 --trsvcid=9101 --nqn=nqn.2023-02.io.simplyblock:fa66b0a0-477f-46be-8db5-b1e3a32d771a:lvol:a898b44d-d7ee-41bb-bc0a-989ad4711780
+sudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=3600 --nr-io-queues=32 --keep-alive-tmo=5 --transport=tcp --traddr=10.10.20.3 --trsvcid=9101 --nqn=nqn.2023-02.io.simplyblock:fa66b0a0-477f-46be-8db5-b1e3a32d771a:lvol:a898b44d-d7ee-41bb-bc0a-989ad4711780
+
+

The output can be copy-pasted to the host to which the volumes should be attached.
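Each connection command targets the same volume NQN via a different storage node address. For scripting, the target address and NQN can be extracted from such a command string; the sketch below operates on a shortened form of the example output above:

```shell
# Extract the target address and NQN from an `nvme connect` command string.
cmd='sudo nvme connect --transport=tcp --traddr=10.10.20.2 --trsvcid=9101 --nqn=nqn.2023-02.io.simplyblock:fa66b0a0-477f-46be-8db5-b1e3a32d771a:lvol:a898b44d-d7ee-41bb-bc0a-989ad4711780'

# sed captures the value following each flag up to the next space.
traddr=$(echo "$cmd" | sed -n 's/.*--traddr=\([^ ]*\).*/\1/p')
nqn=$(echo "$cmd" | sed -n 's/.*--nqn=\([^ ]*\).*/\1/p')

echo "target: $traddr"
echo "nqn:    $nqn"
```

This is handy, for instance, when wrapping the connect commands into host provisioning scripts.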

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/cluster-deployment-options/index.html b/deployment/25.10.3/deployments/cluster-deployment-options/index.html new file mode 100644 index 00000000..1901a290 --- /dev/null +++ b/deployment/25.10.3/deployments/cluster-deployment-options/index.html @@ -0,0 +1,4937 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Cluster deployment options - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Cluster deployment options

+ +

The following options can be set when creating a cluster. This applies to both plain Linux and Kubernetes deployments. +Most cannot be changed later, so careful planning is recommended.

+

--enable-node-affinity

+

As long as a node is not full (out of capacity), the first chunk +of data is always stored on the local node (the node to which the volume is attached). +This reduces network traffic and latency - particularly accelerating reads - but may lead to an +unequal distribution of capacity within the cluster. Generally, node affinity accelerates +reads but leads to higher variability in performance across nodes in the cluster. +It is recommended on shared networks and networks below 100 Gbit/s.

+

--data-chunks-per-stripe, --parity-chunks-per-stripe

+

These two parameters together define the default erasure coding scheme of the cluster (e.g., 1+1, 2+2, 4+2). Starting from R25.10, it is also +possible to set individual schemes per volume, but this feature is still in alpha stage.

+

--cap-warn, --cap-crit

+

Warning and critical limits for overall cluster utilization. Exceeding the warning +limit only causes warnings to be written to the event log; exceeding the critical limit +places the cluster into read-only mode. For large clusters, a critical limit of 99% is acceptable; for small +clusters (less than 50 TB), 97% is preferable.
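The guidance above can be expressed as a small shell helper. The function below is purely illustrative and not part of sbctl; it assumes the raw cluster capacity is given in TB and returns a suggested critical limit in percent:

```shell
# Hypothetical helper (not part of sbctl): suggest a --cap-crit value
# based on the total raw cluster capacity in TB.
recommend_cap_crit() {
  local capacity_tb="$1"
  if [ "$capacity_tb" -lt 50 ]; then
    echo 97   # small clusters: leave more headroom
  else
    echo 99   # large clusters: 99% is acceptable
  fi
}

recommend_cap_crit 25    # -> 97
recommend_cap_crit 200   # -> 99
```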

+

--prov-cap-warn, --prov-cap-crit

+

Warning and critical limits for over-provisioning. Exceeding +these limits causes entries in the cluster log. If the critical limit is exceeded, +new volumes cannot be provisioned and existing volumes cannot be enlarged. A limit of 500% is typical.
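The over-provisioning ratio itself is simple arithmetic: total provisioned (logical) capacity divided by usable raw capacity. A hypothetical helper, not part of sbctl, illustrates the check against a 500% limit:

```shell
# Hypothetical helper: compute the over-provisioning percentage from the
# total provisioned (logical) capacity and the usable raw capacity (both GB).
overprovision_pct() {
  local provisioned_gb="$1" usable_gb="$2"
  echo $(( provisioned_gb * 100 / usable_gb ))
}

overprovision_pct 4000 1000   # -> 400 (still within a 500% critical limit)
```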

+

--log-del-interval

+

Number of days for which logs are retained. Log storage can grow significantly, and it is recommended to keep logs for no longer than one week.

+

--metrics-retention-period

+

Number of days for which the I/O statistics and other metrics are retained. The amount of data per day is significant; typically, limit retention to a few days or a week.

+

--contact-point

+

This is a webhook endpoint for alerting on critical events, such as storage nodes becoming unreachable.

+

--fabric

+

Choose tcp, rdma, or both. If both fabrics are chosen, volumes can connect to the cluster +using either option (defined per volume or storage class), but the cluster internally uses RDMA.

+

--qpair-count

+

The default number of queue pairs (sockets) per volume used by an initiator (host) to connect to the +target (server). More queue pairs per volume increase concurrency and volume performance, but require more +server resources (RAM, CPU) and thus limit the total number of volumes per storage node. The default is 3. +If you need a few very performant volumes, increase the number; if you need a large number of less performant +volumes, decrease it. More than 12 parallel connections have limited impact on overall performance. Also, the +host requires at least one core per queue pair.
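Based on the rule of thumb above (at least one host core per queue pair), the number of volumes a single host can reasonably drive can be roughly estimated. The helper below is a hypothetical sizing sketch, not a simplyblock tool:

```shell
# Hypothetical sizing sketch: estimate how many volumes a host can drive,
# assuming at least one host core per queue pair.
max_volumes_for_host() {
  local host_cores="$1" qpairs_per_volume="$2"
  echo $(( host_cores / qpairs_per_volume ))
}

max_volumes_for_host 16 3   # -> 5 (with the default of 3 queue pairs)
```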

+

--name

+

A human-readable name for the cluster.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/data-migration/index.html b/deployment/25.10.3/deployments/data-migration/index.html new file mode 100644 index 00000000..ddde1479 --- /dev/null +++ b/deployment/25.10.3/deployments/data-migration/index.html @@ -0,0 +1,5015 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Data Migration - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Data Migration

+ +

When migrating existing data to simplyblock, the process can be performed at the block level or the file system +level, depending on the source system and migration requirements. Because simplyblock provides logical volumes +as virtual block devices, data can be migrated using standard block-device cloning tools such as dd, as well +as file-based tools like rsync after the block device has been formatted.

+

Data migration to simplyblock is therefore a straightforward process using common block-level and file-level tools. +For full disk cloning, dd and similar utilities are effective. For selective file migrations, rsync provides +flexibility and reliability. Proper planning and validation of available storage capacity are essential to ensure +successful and complete data transfers.

+

Block-Level Migration Using dd

+

A block-level copy duplicates the entire content of a source block device, including partition tables, file systems, and +data. This method is ideal when migrating entire disks or volumes.

+
Creating a block-level clone of a block device
dd if=/dev/source-device of=/dev/simplyblock-device bs=4M status=progress
+
+
    +
  • if= specifies the input (source) device.
  • +
  • of= specifies the output (Simplyblock Logical Volume) device.
  • +
  • bs=4M sets the block size for efficiency.
  • +
  • status=progress provides real-time progress updates.
  • +
+
+

Info

+

Ensure that the simplyblock logical volume is at least as large as the source device to prevent data loss.

+
+
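The size requirement can also be verified programmatically before starting the copy. The following is an illustrative pre-flight sketch; the sizes would normally come from blockdev --getsize64 on the two devices:

```shell
# Illustrative pre-flight check before dd: refuse to continue when the
# target device is smaller than the source. Sizes are passed in bytes.
check_clone_sizes() {
  local src_bytes="$1" dst_bytes="$2"
  if [ "$dst_bytes" -ge "$src_bytes" ]; then
    echo "ok"
  else
    echo "target too small"
  fi
}

# In practice, the sizes would come from, e.g.:
#   check_clone_sizes "$(blockdev --getsize64 /dev/source-device)" \
#                     "$(blockdev --getsize64 /dev/simplyblock-device)"
check_clone_sizes 26843545600 26843545600   # -> ok
```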

Alternative Block-Level Cloning Tools

+

Other block-level tools such as Clonezilla, partclone, or dcfldd may also be used for disk duplication, depending +on the specific environment and desired features like compression or network transfer.

+

File-Level Migration Using rsync

+

For scenarios where only file contents need to be migrated (for example, after creating a new file system on a +simplyblock logical volume), rsync is a reliable tool.

+
    +
  1. +

    First, format the Simplyblock Logical Volume: +

    Format the simplyblock block device with ext4
    mkfs.ext4 /dev/simplyblock-device
    +

    +
  2. +
  3. +

    Mount the Logical Volume: +

    Mount the block device
    mount /dev/simplyblock-device /mnt/simplyblock
    +

    +
  4. +
  5. +

    Use rsync to copy files from the source directory: +

    Synchronize the source disks content using rsync
    rsync -avh --progress /source/data/ /mnt/simplyblock/
    +

    +
      +
    • -a preserves permissions, timestamps, and symbolic links.
    • +
    • -v provides verbose output.
    • +
    • -h makes output human-readable.
    • +
    • --progress shows transfer progress.
    • +
    +
  6. +
+
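After the rsync run completes, the migrated tree can be verified against the source by comparing per-file checksums. The helper below is an illustrative sketch; the paths are placeholders, and the temporary files under /tmp are an assumption:

```shell
# Illustrative post-migration check: compare per-file SHA-256 checksums of
# the source tree and the migrated tree. Paths and /tmp files are placeholders.
verify_tree() {
  local src="$1" dst="$2"
  ( cd "$src" && find . -type f -exec sha256sum {} + | sort ) > /tmp/src.sums
  ( cd "$dst" && find . -type f -exec sha256sum {} + | sort ) > /tmp/dst.sums
  if diff -q /tmp/src.sums /tmp/dst.sums >/dev/null; then
    echo "trees match"
  else
    echo "trees differ"
  fi
}

# Usage after migration (illustrative):
#   verify_tree /source/data /mnt/simplyblock
```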

Minimal-Downtime Migration Strategy

+

An alternative, but more complex, solution enables minimal downtime. This option utilizes the Linux md (Multiple Devices, i.e., software RAID) +subsystem, managed with the mdadm tool.

+

Using mdadm, the current and new block devices are combined into a RAID-1 and synchronized (re-silvered) +in the background. This solution requires two short downtimes to create and remount the devices.

+
+

Warning

+

This method is quite involved, requires many steps, and can lead to data loss in case of wrong commands or +parameters. It should only be used by advanced users who understand the danger of the commands below.

+Furthermore, this migration method MUST NOT be used for boot devices!

+
+

In this walkthrough, we assume the new simplyblock logical volume is already connected to the system.

+

Preparation

+

To successfully execute this data migration, a few values are required. First of all, the two device names of the +currently used and new device need to be collected.

+

This can be done by executing the command lsblk to list all attached block devices.

+
lsblk provides information about all attached block devices
lsblk
+
+

In this example, sda is the boot device which hosts the operating system, while sdb is the currently used block +device and nvme0n1 is the newly attached simplyblock logical volume. The latter two should be noted down.

+
+

Danger

+

It is important to understand the difference between the currently used and the new device. Using them in the wrong +order in the following steps will cause any or all data to be lost!

+
+
Find the source and target block devices using lsblk
[root@demo ~]# lsblk
+NAME                      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
+sda                         8:0    0   25G  0 disk
+├─sda1                      8:1    0    1G  0 part  /boot/efi
+├─sda2                      8:2    0    2G  0 part  /boot
+└─sda3                      8:3    0 21.9G  0 part
+  └─ubuntu--vg-ubuntu--lv 252:0    0   11G  0 lvm   /
+sdb                         8:16   0   25G  0 disk
+└─sdb1                      8:17   0   25G  0 part  /data/pg
+sr0                        11:0    1 57.4M  0 rom
+nvme0n1                   259:0    0   25G  0 disk
+
+

Next, the block size of the current file system is required. The value must be set as the chunk size of the RAID to be created. It needs +to be noted down.

+
Find the block size of the source filesystem
tune2fs -l /dev/sdb1 | grep -i 'block size'
+
+

In this example, the block size is 4 KiB (4096 bytes).

+
Example output of the block size
[root@demo ~]# tune2fs -l /dev/sdb1 | grep -i 'block size'
+Block size:               4096
+
+

Last, it is important to ensure that the new target device is at least as large or larger than the current device. +lsblk can be used again to get the required numbers.

+
lsblk with byte sizes of the block devices
lsblk -b
+
+

In this example, both devices are the same size, 26843545600 bytes in total disk capacity.

+
Example output of lsblk -b
[root@demo ~]# lsblk -b
+NAME                      MAJ:MIN RM        SIZE RO TYPE  MOUNTPOINTS
+sda                         8:0    0 26843545600  0 disk
+├─sda1                      8:1    0  1127219200  0 part  /boot/efi
+├─sda2                      8:2    0  2147483648  0 part  /boot
+└─sda3                      8:3    0 23566745600  0 part
+  └─ubuntu--vg-ubuntu--lv 252:0    0 11781799936  0 lvm   /
+sdb                         8:16   0 26843545600  0 disk
+└─sdb1                      8:17   0 26843513344  0 part  /data/pg
+sr0                        11:0    1    60225536  0 rom
+nvme0n1                   259:0    0 26843545600  0 disk
+
+

Device Mapper RAID Setup

+
+

Danger

+

From here on out, mistakes can cause any or all data to be lost!
+It is strongly recommended to only go further, if ensured that the values above are correct and after a full data +backup is created. It is also recommended to test the backup before continuing. A failure to do so can cause issues +in case it cannot be replayed.

+
+

Now, it's time to create the temporary RAID for disk synchronization. Anything beyond this point is dangerous.

+
+

Warning

+

Any service accessing the current block device or any of its partitions needs to be shut down, and the block device +and its partitions need to be unmounted. The device must not be busy.

+

Example of PostgreSQL shutdown and partition unmount
service postgresql stop
+umount /data/pg
+

+
+
Building a RAID-1 with mdadm
mdadm --build --chunk=<CHUNK_SIZE> --level=1 \
+    --raid-devices=2 --bitmap=none \
+    <RAID_NAME> <CURRENT_DEVICE_FILE> missing
+
+

In this example, the RAID is created using the /dev/sdb device file and 4096 as the chunk size. The newly created +RAID is called migration. The RAID level is 1 (RAID-1), and it includes 2 devices. The missing at the end +of the command tells mdadm that the second device of the RAID is missing for now. It will be +added later.

+
Example output of a RAID-1 with mdadm
[root@demo ~]# mdadm --build --chunk=4096 --level=1 --raid-devices=2 --bitmap=none migration /dev/sdb missing
+mdadm: array /dev/md/migration built and started.
+
+

To ensure that the RAID was created successfully, all device files matching /dev/md* can be listed. In this case, +/dev/md127 is the actual RAID device, while /dev/md/migration is a named symlink to it created by mdadm.

+
Finding the new device mapper device files
[root@demo ~]# ls /dev/md*
+/dev/md127  /dev/md127p1
+
+/dev/md:
+migration  migration1
+
+

After the RAID device name is confirmed, the new RAID device can be mounted. In this example, the original block device +was partitioned. Hence, the RAID device also has one partition /dev/md127p1. This is what needs to be mounted to the +same mount point as the original disk before, /data/pg in this example.

+
Mount the new device mapper device file
[root@demo ~]# mount /dev/md127p1 /data/pg/
+
+
+

Info

+

All services that require access to the data can be started again. The RAID itself is still in a degraded state, but +it provides the same data security as the original device.

+
+

Now the second, new device must be added to the RAID setup to start the re-silvering (data synchronization) process. +This is again done using the mdadm tool.

+
Add the new simplyblock block device to RAID-1
mdadm <RAID_DEVICE_MAPPER_FILE> --add <NEW_DEVICE_FILE>
+
+

In the example, we add /dev/nvme0n1 (the simplyblock logical volume) to the RAID named "migration."

+
Example output of mdadm --add
[root@demo ~]# mdadm /dev/md/migration --add /dev/nvme0n1
+mdadm: added /dev/nvme0n1
+
+

After the device is added to the RAID setup, a background process automatically starts to synchronize the newly +added device with the first device in the setup. This process is called re-silvering.

+
+

Info

+

While the devices are synchronized, read and write performance may be impacted due to the additional I/O +operations of the synchronization process. However, the process runs at a very low priority and shouldn't impact +live operation too extensively.

+For AWS users: if the migration uses an Amazon EBS volume as the source, ensure enough IOPS to cover live +operation and migration.

+
+

The synchronization process status can be monitored using one of two commands:

+
Check status of re-silvering
mdadm -D <RAID_DEVICE_FILE>
+cat /proc/mdstat
+
+
Example output of a status check via mdadm
[root@demo ~]# mdadm -D /dev/md127
+/dev/md127:
+           Version :
+     Creation Time : Sat Mar 15 17:24:17 2025
+        Raid Level : raid1
+        Array Size : 26214400 (25.00 GiB 26.84 GB)
+     Used Dev Size : 26214400 (25.00 GiB 26.84 GB)
+      Raid Devices : 2
+     Total Devices : 2
+
+             State : clean, degraded, recovering
+    Active Devices : 1
+   Working Devices : 2
+    Failed Devices : 0
+     Spare Devices : 1
+
+Consistency Policy : resync
+
+    Rebuild Status : 98% complete
+
+    Number   Major   Minor   RaidDevice State
+       0       8       16        0      active sync   /dev/sdb
+       2     259        0        1      spare rebuilding   /dev/nvme0n1
+
+
Example output of a status check via /proc/mdstat
[root@demo ~]# cat /proc/mdstat 
+Personalities : [raid1] 
+md0 : active raid1 sdb[1] nvme0n1[0]
+      10484664 blocks super 1.2 [2/2] [UU]
+      [========>............]  resync = 42.3% (4440832/10484664) finish=0.4min speed=201856K/sec
+
+unused devices: <none>
+
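The progress value can also be extracted programmatically, for example to feed a monitoring script. The following helper is an illustrative sketch that parses /proc/mdstat-style output from standard input:

```shell
# Illustrative helper: extract the resync percentage from /proc/mdstat-style
# output read from standard input.
resync_progress() {
  grep -o '[0-9][0-9]*\.[0-9][0-9]*%' | head -n 1
}

# Usage on a live system (illustrative):
#   resync_progress < /proc/mdstat
```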
+

After the Synchronization is done

+

Eventually, the synchronization finishes. At this point, the two devices (original and new) are kept in sync by the +md software RAID subsystem.

+
Example output of a finished synchronization
[root@demo ~]# mdadm -D /dev/md127
+/dev/md127:
+           Version :
+     Creation Time : Sat Mar 15 17:24:17 2025
+        Raid Level : raid1
+        Array Size : 26214400 (25.00 GiB 26.84 GB)
+     Used Dev Size : 26214400 (25.00 GiB 26.84 GB)
+      Raid Devices : 2
+     Total Devices : 2
+
+             State : clean
+    Active Devices : 2
+   Working Devices : 2
+    Failed Devices : 0
+     Spare Devices : 0
+
+Consistency Policy : resync
+
+    Number   Major   Minor   RaidDevice State
+       0       8       16        0      active sync   /dev/sdb
+       2     259        0        1      active sync   /dev/nvme0n1
+
+

To fully switch to the new simplyblock logical volume, a second, minimal, downtime is required.

+

The RAID device needs to be unmounted and the RAID array stopped.

+
Stopping the mdadm RAID-1
umount <MOUNT_POINT>
+mdadm --stop <DEVICE_MAPPER_FILE>
+
+

In this example /data/pg and /dev/md/migration are used.

+
Example output of a stopped RAID-1
[root@demo ~]# umount /data/pg/
+[root@demo ~]# mdadm --stop /dev/md/migration
+mdadm: stopped /dev/md/migration
+
+

Now, the system should be restarted. If a system reboot takes too long and is out of the scope of the available +maintenance window, a re-read of the partition tables can be forced.

+
Re-read partition table
blockdev --rereadpt <NEW_DEVICE_FILE>
+
+

After re-reading the partition table of a device, the partition should be recognized and visible.

+
Example output of re-reading the partition table
[root@demo ~]# blockdev --rereadpt /dev/nvme0n1
+[root@demo ~]# ls /dev/nvme0n1p1
+/dev/nvme0n1p1
+
+

As a last step, the partition must be mounted to the same mount point as the RAID device before. If the mount is +successful, the services can be started again.

+
Mounting the plain block device and restarting services
[root@demo ~]# mount /dev/nvme0n1p1 /data/pg/
+[root@demo ~]# service postgresql start
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/deployment-preparation/cloud-instance-recommendations/index.html b/deployment/25.10.3/deployments/deployment-preparation/cloud-instance-recommendations/index.html new file mode 100644 index 00000000..a1bf69f6 --- /dev/null +++ b/deployment/25.10.3/deployments/deployment-preparation/cloud-instance-recommendations/index.html @@ -0,0 +1,4961 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Cloud Instance Recommendations - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Cloud Instance Recommendations

+ +

Simplyblock has been tested on and recommends the following instance types. There is generally no restriction on other instance types as long as they fulfill the system requirements.

+

AWS Amazon EC2 Recommendations

+

Simplyblock can work with local instance storage (local NVMe devices) and Amazon EBS volumes. For performance reasons, +Amazon EBS is not recommended for high-performance clusters.

+
+

Critical

+

If local NVMe devices are chosen, make sure that the nodes in the cluster are provisioned into a placement group of type +Spread!

+
+

Generally, with AWS, there are three considerations when selecting virtual machine types:

+
    +
  • Minimum requirements of vCPU and RAM
  • +
  • Locally attached NVMe devices
  • +
  • Network performance (dedicated and "up to")
  • +
+

Based on those criteria, simplyblock commonly recommends the following virtual machine types for storage nodes:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
VM TypevCPU(s)RAMLocally Attached StorageNetwork Performance
i4g.8xlarge32256 GB2x 3750 GB18.5 GBit/s
i4g.16xlarge64512 GB4x 3750 GB37.5 GBit/s
i3en.6xlarge24192 GB2x 7500 GB25 GBit/s
i3en.12xlarge48384 GB4x 7500 GB50 GBit/s
i3en.24xlarge96768 GB8x 7500 GB100 GBit/s
m5d.4xlarge1664 GB2x 300 GB10 GBit/s
i4i.8xlarge32256 GB2x 3750 GB18.75 GBit/s
i4i.12xlarge48384 GB3x 3750 GB28.12 GBit/s
+

Google Compute Engine Recommendations

+

In GCP, physical hosts are highly shared and sliced into virtual machines. This is true not only for CPU, RAM, +and network bandwidth, but also for virtualized NVMe devices. Google Compute Engine NVMe devices provide a specific number +of queue pairs (logical connections between the virtual machine and the physical NVMe device) depending on the size of the +disk. Hence, separately attached NVMe devices are highly recommended to achieve the number of queue pairs required by +simplyblock.

+
+

Critical

+

If local NVMe devices are chosen, make sure that the nodes in the cluster are provisioned with a spread placement +policy!

+
+

Generally, with GCP, there are three considerations when selecting virtual machine types:

+
    +
  • Minimum requirements of vCPU and RAM
  • +
  • The size of the locally attached NVMe devices (SSD Storage)
  • +
  • Network performance
  • +
+

Based on those criteria, simplyblock commonly recommends the following virtual machine types for storage nodes:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
VM TypevCPU(s)RAMAdditional Local SSD StorageNetwork Performance
n2-standard-8832 GB2x 2500 GB16 GBit/s
n2-standard-161664 GB2x 2500 GB32 GBit/s
n2-standard-3232128 GB4x 2500 GB32 GBit/s
n2-standard-4848192 GB4x 2500 GB50 GBit/s
n2-standard-6464256 GB6x 2500 GB75 GBit/s
n2-standard-8080320 GB8x 2500 GB100 GBit/s
+

Attaching an additional Local SSD on Google Compute Engine

+

The above recommended instance types do not provide NVMe storage by default. It has to specifically be added to the +virtual machine at creation time. It cannot be changed after the virtual machine is created.

+

To add additional Local SSD Storage to a virtual machine, the operating system section must be selected in the wizard, +then "Add local SSD" must be clicked. Now an additional disk can be added.

+
+

Warning

+

It is important that NVMe is selected as the interface type. SCSI will not work!

+
+

Google Compute Engine wizard screenshot for adding additional local SSDs to a virtual machine

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/deployment-preparation/erasure-coding-scheme/index.html b/deployment/25.10.3/deployments/deployment-preparation/erasure-coding-scheme/index.html new file mode 100644 index 00000000..54121fb5 --- /dev/null +++ b/deployment/25.10.3/deployments/deployment-preparation/erasure-coding-scheme/index.html @@ -0,0 +1,4992 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Erasure Coding Scheme - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Erasure Coding Scheme

+ +

Choosing the appropriate erasure coding scheme is crucial when deploying a simplyblock storage cluster, as it +directly impacts data redundancy, storage efficiency, and overall system performance. Simplyblock currently supports +the following erasure coding schemes: 1+1, 2+1, 4+1, 1+2, 2+2, and 4+2. Understanding the +trade-offs between redundancy and storage utilization helps determine the best option for your workload. All schemes +have been performance-optimized by specialized algorithms. There is, however, a remaining capacity-to-performance +trade-off.

+
+

Info

+

Starting from 25.10.1, it is possible to select alternative erasure coding schemes per volume. However, this feature +is still experimental (technical preview) and not recommended for production. A cluster must provide sufficient +nodes for the largest scheme used in any of the volumes (e.g., 4+2: min. 6 nodes, recommended 7 nodes).

+
+

Erasure Coding Schemes

+

Erasure coding (EC) is a data protection mechanism that distributes data and parity across multiple storage nodes, +allowing data recovery in case of hardware failures. The notation k+m represents:

+
    +
  • k: The number of data fragments.
  • +
  • m: The number of parity fragments.
  • +
+

If you need more information on erasure coding, see the dedicated concept page for +erasure coding.

+
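The raw-to-effective ratios listed for the individual schemes all follow from the same arithmetic: a k+m scheme stores k+m chunks for every k chunks of payload. As a quick sketch:

```shell
# Raw-to-effective ratio of a k+m erasure coding scheme, in percent:
# raw capacity needed per unit of effective capacity is (k+m)/k.
raw_to_effective_pct() {
  local k="$1" m="$2"
  echo $(( (k + m) * 100 / k ))
}

raw_to_effective_pct 1 1   # -> 200 (1+1)
raw_to_effective_pct 2 1   # -> 150 (2+1)
raw_to_effective_pct 4 1   # -> 125 (4+1)
raw_to_effective_pct 4 2   # -> 150 (4+2)
```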

Scheme: 1+1

+
    +
  • Description: In the 1+1 scheme, data is mirrored, effectively creating an exact copy of every data block.
  • +
  • Redundancy Level: Can tolerate the failure of one storage node.
  • +
  • Raw-to-Effective Ratio: 200%
  • +
  • Available Storage Capacity: 50%
  • +
  • Performance Considerations: Offers fast recovery and high read performance due to data mirroring.
  • +
  • Best Use Cases:
      +
    • Workloads requiring high availability and minimal recovery time.
    • +
    • Applications where performance is prioritized over storage efficiency.
    • +
    • Requires 3 or more nodes for full redundancy.
    • +
    +
  • +
+

Scheme: 2+1

+
    +
  • Description: In the 2+1 scheme, data is divided into two fragments with one parity fragment, offering a + balance between performance and storage efficiency.
  • +
  • Redundancy Level: Can tolerate the failure of one storage node.
  • +
  • Raw-to-Effective Ratio: 150%
  • +
  • Available Storage Capacity: 66.6%
  • +
  • Performance Considerations: For writes of 8K or higher, lower write amplification compared to 1+1, as data is distributed across multiple nodes. This typically results in similar or higher IOPS. However, for small random writes (4K), the write performance is worse than 1+1. Write latency is somewhat higher than with 1+1. Read performance is similar to 1+1, if local node affinity is disabled. With node affinity enabled, read performance is slightly worse (up to 25%). In a degraded state (one node offline / unavailable or failed disk), the performance is worse than with 1+1. Recovery time to full redundancy from single disk error is slightly higher than with 1+1.
  • +
  • Best Use Cases:
      +
    • Deployments where storage efficiency is relevant without significantly compromising performance.
    • +
    • Requires 4 or more nodes for full redundancy.
    • +
    +
  • +
+

Scheme: 4+1

+
    +
  • Description: In the 4+1 scheme, data is divided into four fragments with one parity fragment, offering + optimal storage efficiency.
  • +
  • Redundancy Level: Can tolerate the failure of one storage node.
  • +
  • Raw-to-Effective Ratio: 125%
  • +
  • Available Storage Capacity: 80%
  • +
  • Performance Considerations: For writes of 16K or higher, lower write amplification compared to 2+1, as data is distributed across more nodes. This typically results in similar or higher write IOPS. However, for 4-8K random writes, the write performance is typically worse than 2+1. Write latency is somewhat similar to 2+1. Read performance is similar to 2+1, if local node affinity is disabled. With node affinity enabled, read performance is slightly worse (up to 13%). In a degraded state (one node offline / unavailable or failed disk), the performance is worse than with 2+1. Recovery time to full redundancy from single disk error is slightly higher than with 2+1.
  • +
  • Best Use Cases:
      +
    • Deployments where storage efficiency is a priority without significantly compromising performance.
    • +
    • Requires 6 or more nodes for full redundancy.
    • +
    +
  • +
+

Scheme: 1+2

+
    +
  • Description: In the 1+2 scheme, data is replicated twice, effectively creating multiple copies of every data block.
  • +
  • Redundancy Level: Can tolerate the failure of two storage nodes.
  • +
  • Raw-to-Effective Ratio: 300%
  • +
  • Available Storage Capacity: 33.3%
  • +
  • Performance Considerations: Offers fast recovery and high read performance due to data replication, but write performance is lower than with 1+1 in all cases (~33%).
  • +
  • Best Use Cases:
      +
    • Workloads requiring high redundancy and minimal recovery time.
    • +
    • Applications where performance is prioritized over storage efficiency.
    • +
    • Requires 4 or more nodes for full redundancy.
    • +
    +
  • +
+

Scheme: 2+2

+
    +
  • Description: In the 2+2 scheme, data is divided into two fragments with two parity fragments, offering a great + balance between redundancy and storage efficiency.
  • +
  • Redundancy Level: Can tolerate the failure of two storage nodes.
  • +
  • Raw-to-Effective Ratio: 200%
  • +
  • Available Storage Capacity: 50%
  • +
  • Performance Considerations: Similar to 2+1, but with higher write latencies and lower effective write IOPS due to higher write amplification.
  • +
  • Best Use Cases:
      +
• Deployments where both high redundancy and storage efficiency are important.
    • +
    • Applications that can tolerate slightly higher recovery times compared to 1+2.
    • +
    • Requires 5 or more nodes for full redundancy.
    • +
    +
  • +
+

Scheme: 4+2

+
    +
  • Description: In the 4+2 scheme, data is divided into four fragments with two parity fragments, offering a great + balance between redundancy and storage efficiency.
  • +
  • Redundancy Level: Can tolerate the failure of two storage nodes.
  • +
  • Raw-to-Effective Ratio: 150%
  • +
  • Available Storage Capacity: 66.6%
  • +
  • Performance Considerations: Similar to 4+1, but with higher write latencies and lower effective write IOPS due to higher write amplification.
  • +
  • Best Use Cases:
      +
    • Deployments where high redundancy and storage efficiency is a priority.
    • +
    • Requires 7 or more nodes in a cluster.
    • +
    +
  • +
+

Choosing the Scheme

+

When selecting an erasure coding scheme for simplyblock, consider the following:

+
    +
  1. Redundancy Requirements: If the priority is maximum data protection and quick recovery, 1+1 or 1+2 are ideal. For a + balance between protection and efficiency, 2+1 or 2+2 is preferred.
  2. +
  3. Storage Capacity: 1+1 requires double the storage space, whereas 2+1 provides better storage efficiency. 1+2 requires triple the storage space, whereas 2+2 provides great storage efficiency and fault tolerance.
  4. +
5. Performance Needs: 1+1 and 1+2 offer faster reads and writes due to mirroring, while 2+1 and 2+2 reduce write amplification and optimize for storage usage.
  6. +
  7. Cluster Size: Smaller clusters benefit from 1+1 or 1+2 due to its simplicity and faster rebuild times, whereas 2+1 and 2+2 are more effective in larger clusters.
  8. +
9. Recovery Time Objectives (RTOs): If minimizing downtime is critical, 1+1 and 1+2 offer near-instant recovery compared to 2+1 and 2+2, which require rebuilding the lost data from parity information.
  10. +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/deployment-preparation/index.html b/deployment/25.10.3/deployments/deployment-preparation/index.html new file mode 100644 index 00000000..d9c0a47e --- /dev/null +++ b/deployment/25.10.3/deployments/deployment-preparation/index.html @@ -0,0 +1,4679 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Deployment Preparation - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Deployment Preparation

+ +

Proper deployment planning is essential for ensuring the performance, scalability, and resilience of a simplyblock +storage cluster.

+

Before installation, key factors such as node sizing, storage capacity, and fault tolerance mechanisms should be +carefully evaluated to match workload requirements. This section provides guidance on sizing management nodes and +storage nodes, helping administrators allocate adequate CPU, memory, and disk resources for optimal cluster performance.

+

Additionally, it explores selectable erasure coding schemes, detailing how different configurations impact storage +efficiency, redundancy, and recovery performance. Other critical considerations, such as network infrastructure, +high-availability strategies, and workload-specific optimizations, are also covered to assist in designing a simplyblock +deployment that meets both operational and business needs.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/deployment-preparation/numa-considerations/index.html b/deployment/25.10.3/deployments/deployment-preparation/numa-considerations/index.html new file mode 100644 index 00000000..e28974d5 --- /dev/null +++ b/deployment/25.10.3/deployments/deployment-preparation/numa-considerations/index.html @@ -0,0 +1,4805 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + NUMA Considerations - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

NUMA Considerations

+ +

Modern multi-socket servers use a memory architecture called +NUMA (Non-Uniform Memory Access) ⧉. +In a NUMA system, each CPU socket has its own local memory and I/O paths. Accessing local resources is faster than +reaching across sockets to remote memory or devices. Simplyblock is fully NUMA-aware.

+

On a host with more than one socket, by default one or two storage nodes are deployed per socket.

+

Two storage nodes per socket are deployed if:

+
    +
  • more than 32 vCPUs (cores) per NUMA socket are dedicated to simplyblock
  • +
  • more than 10 NVMe devices are connected to the NUMA socket
  • +
+

Users can change this behavior, either by setting the appropriate Helm chart parameters (in case of a Kubernetes-based +storage node deployment) or by manually modifying the initially created configuration file on the storage node +(after running sbctl sn configure).

+

It is critical for performance that all NVMe devices of a storage node are directly connected to the NUMA socket to +which the storage node is deployed.

+

If a socket has no NVMe devices connected, it will not qualify to run a simplyblock storage node.

+

It is also important that the NIC(s) used by simplyblock for storage traffic are connected to the same NUMA socket. +However, simplyblock does not auto-assign a NIC, and users must take care of this manually.
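As a quick check, the NUMA affinity of each network interface can be read from sysfs. This is a minimal sketch; on virtualized hardware or platforms without NUMA reporting, the value may be -1.

```shell
# List the NUMA node of each network interface; a value of -1 means the
# platform does not report NUMA affinity for that device.
for nic in /sys/class/net/*/device/numa_node; do
    echo "$(echo "$nic" | cut -d/ -f5): $(cat "$nic")"
done
```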

+

Checking NUMA Configuration

+

Before configuring simplyblock, the system configuration should be checked for multiple NUMA nodes. This can be done +using the lscpu tool.

+
How to check the NUMA configuration
lscpu | grep -i numa
+
+
Example output of the NUMA configuration
root@demo:~# lscpu | grep -i numa
+NUMA node(s):                         2
+NUMA node0 CPU(s):                    0-31
+NUMA node1 CPU(s):                    32-63
+
+

In the example above, the system has two NUMA nodes.

+
+

Recommendation

+

If the system consists of multiple NUMA nodes, it is recommended to configure simplyblock with multiple storage +nodes per storage host. The number of storage nodes should match the number of NUMA nodes.

+
+

Ensuring NUMA-Aware Devices

+

For optimal performance, there should be a similar number of NVMe devices per NUMA node. Additionally, it is recommended +to provide one Ethernet NIC per NUMA node.

+

To check the NUMA assignment of PCI-e devices, the lspci tool and a small script can be used.

+
Install pciutils which includes lspci
sudo yum -y install pciutils
+
+
Small script to list all PCI-e devices and their NUMA nodes
#!/bin/bash
+
+for i in /sys/class/*/*/device; do
+    pci=$(basename "$(readlink "$i")")
+    if [ -e "$i/numa_node" ]; then
+        echo "NUMA Node: $(cat "$i/numa_node") ($i): $(lspci -s "$pci")"
+    fi
+done | sort
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/deployment-preparation/system-requirements/index.html b/deployment/25.10.3/deployments/deployment-preparation/system-requirements/index.html new file mode 100644 index 00000000..7b7aa9ff --- /dev/null +++ b/deployment/25.10.3/deployments/deployment-preparation/system-requirements/index.html @@ -0,0 +1,5359 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + System Requirements - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +
+

Info

+

In cloud environments including GCP and AWS, instance types are pre-configured. In general,
+there are no restrictions on instance types as long as these system requirements are met. However, it is highly +recommended to stay with the Recommended Cloud Instance Types for production.

+

For hyper-converged deployments, it is important that node sizing applies to the dedicated +resources consumed by simplyblock. Hyper-converged instances must provide enough resources +to satisfy both simplyblock and other compute demands, including the Kubernetes worker itself and the +operating system.

+
+

Hardware Architecture Support

+
    +
  • For the control plane, simplyblock requires x86-64 compatible CPUs.
  • +
  • For the storage plane, simplyblock supports x86-64 or ARM64 (AArch64) compatible CPUs.
  • +
+

Virtualization Support

+

Both simplyblock storage nodes and control plane nodes can run fully virtualized. It has been tested on plain KVM, +Proxmox, Nitro (AWS EC2) and GCP.

+

For storage node production deployments, SR-IOV is required for NVMe devices and network interfaces (NICs). Furthermore, +dedicated cores must be assigned exclusively to the virtual machines running storage nodes (no over-provisioning).

+

Deployment Models

+

Two deployment options are supported:

+
    +
  • Plain Linux: In this mode, which is also called Docker mode, all nodes are deployed to separate hosts. Storage nodes + are usually bare-metal and control plane nodes are usually VMs.
  • +
+

Basic Docker knowledge is helpful, but all management can be performed within the system via its CLI or API.

+
    +
  • Kubernetes: In Kubernetes, both disaggregated deployments with dedicated workers or clusters for storage nodes, + or hyper-converged deployments (co-located with compute workloads) are supported. A wide range of Kubernetes distros + and operating systems are supported.
  • +
+

Kubernetes knowledge is required.

+

The minimum system requirements below apply to simplyblock only; the listed resources must be dedicated to simplyblock.

+

Minimum System Requirements

+
+

Info

+

If the use of erasure coding is intended, DDR5 RAM is recommended for maximum performance. In addition, it is +recommended to use CPUs with large L1 caches, as those will perform better.

+
+

The following minimum resources must be exclusively available to simplyblock and are not available to the host +operating system or other processes. This includes vCPUs, RAM, locally attached virtual or physical NVMe devices, +network bandwidth, and free space on the boot disk.

+

Overview

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Node TypevCPU(s)RAM (GB)Locally Attached StorageNetwork PerformanceFree Boot DiskNumber of Nodes
Storage Node8+6+1x fully dedicated NVMe10 GBit/s10 GB1 (2 for HA)
Control Plane*416-1 GBit/s35 GB1 (3 for HA)
+

*Plain Linux Deployment, up to 5 nodes, 1,000 logical volumes, 2,500 snapshots

+

Storage Nodes

+

IOPS performance depends on the number of storage node vCPUs. Maximum performance is reached with +32 physical cores per socket. In such a scenario, the deployment dedicates (isolates) 24 cores to the +Simplyblock Data Plane (spdk_80xx containers), while the rest remain under the control of Linux.

+
+

Info

+

Simplyblock auto-detects NUMA nodes. It will configure and deploy storage nodes per NUMA node.

+

Each NUMA socket requires directly attached NVMe devices and NICs to deploy a storage node. +For more information on simplyblock on NUMA, see NUMA Considerations.

+
+
+

Info

+

It is recommended to deploy multiple storage nodes per storage host if there are more than 32 cores available +per socket.

+

During deployment, simplyblock detects the underlying configuration and prepares a configuration file with the +recommended deployment strategy, including the recommended amount of storage nodes per storage host based on the +detected configuration. This file is later processed when adding the storage nodes to the storage host. +Manual changes to the configuration are possible if the proposed configuration is not applicable.

+
+

As hyper-converged deployments have to share vCPUs, it is recommended to dedicate 8 vCPUs per socket +to simplyblock. For example, on a system with 32 cores (64 vCPUs) per socket, this amounts to +12.5% of the vCPU capacity per host. For very IO-intensive applications, this amount should be increased.

+
+

Warning

+

On storage nodes, required vCPUs will be automatically isolated from the operating system. No +kernel-space or user-space processes, nor interrupt handlers, can be scheduled on these vCPUs.

+
+
+

Info

+

For RAM, it is required to estimate the expected average number of logical volumes per node, as well as the +average raw storage capacity that can be utilized per node. For example, if each node in +a cluster has 100 TiB of raw capacity, this is also the average. In a 5-node cluster +with a maximum of 2,500 logical volumes, the average per node would be 500.

+
+ + + + + + + + + + + + + + + + + + + + + +
UnitMemory Requirement
Fixed amount3 GiB
Per logical volume (cluster average per node)25 MiB
% of maximum storage capacity (cluster average per node)1.5 GiB / TiB
+
+

Info

+

For disaggregated setups, it is recommended to add 50% to these numbers as a reserve. In +a purely hyper-converged setup, stay at the requirement.
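As a worked example of the sizing formula above, the following sketch computes the RAM requirement for a hypothetical node with 500 logical volumes and 100 TiB of raw capacity (cluster averages), plus the 50% disaggregated reserve:

```shell
# Estimate storage node RAM: 3 GiB fixed + 25 MiB per logical volume
# + 1.5 GiB per TiB of raw capacity (cluster averages per node).
LVOLS=500        # hypothetical average logical volumes per node
RAW_TIB=100      # hypothetical average raw capacity per node in TiB

LVOL_GIB=$(( LVOLS * 25 / 1024 ))    # 500 * 25 MiB -> ~12 GiB
CAPACITY_GIB=$(( RAW_TIB * 3 / 2 ))  # 1.5 GiB per TiB -> 150 GiB
TOTAL_GIB=$(( 3 + LVOL_GIB + CAPACITY_GIB ))

echo "Estimated RAM per storage node: ${TOTAL_GIB} GiB"
echo "With 50% disaggregated reserve: $(( TOTAL_GIB * 3 / 2 )) GiB"
```

For this example, the base requirement is 165 GiB per node, and roughly 247 GiB with the disaggregated reserve.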

+
+

Control Plane

+

General control plane requirements provided above apply to the plain linux deployment. +For a Kubernetes-based control plane, the minimum requirements per replica are:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ServicevCPU(s)RAM (GB)Disk (GB)
Simplyblock Meta-Database145
Observability Stack4825
Simplyblock Web-API120.5
Other Simplyblock Services120.5
+

If more than 2,500 volumes or more than 5 storage nodes are attached to the control plane, additional RAM and vCPU +are advised. Also, the required observability disk space must be increased, if retention of logs and statistics for +more than 7 days is required.
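The per-replica figures from the table above can be summed to size a single control plane worker, assuming (purely for illustration) that all four services are co-located on one replica:

```shell
# Sum the per-replica control plane requirements from the table above
# (meta-database + observability stack + web-API + other services).
VCPU=$(( 1 + 4 + 1 + 1 ))
RAM_GB=$(( 4 + 8 + 2 + 2 ))
DISK_GB=$(awk 'BEGIN { print 5 + 25 + 0.5 + 0.5 }')  # awk handles the fractional disk sizes

echo "Per replica: ${VCPU} vCPUs, ${RAM_GB} GB RAM, ${DISK_GB} GB disk"
```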

+
+

Info

+

Three replicas are mandatory for the key-value store. The WebAPI runs as a DaemonSet on all workers if no taint is applied. +The observability stack can optionally be replicated, and the sb-services run without replication.

+
+

Hyperthreading

+

If 32 or more physical cores are available per storage node, it is highly recommended to turn off hyperthreading in the +BIOS or UEFI setup.
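Whether hyperthreading (SMT) is currently active can be checked from the running system with lscpu:

```shell
# A value greater than 1 means hyperthreading/SMT is enabled.
THREADS_PER_CORE=$(lscpu | awk -F: '/Thread\(s\) per core/ { gsub(/ /, "", $2); print $2 }')
echo "Threads per core: ${THREADS_PER_CORE}"
```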

+

NVMe Devices

+

NVMe devices must support 4KB native block size and are recommended to be sized between 1.9 TiB and 7.68 TiB. +Large NVMe devices are supported, but performance per TiB is lower and rebalancing can take longer.
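The supported and active block sizes of a device can be inspected with nvme-cli (the device name below is an example):

```shell
# List the supported LBA formats of an NVMe device; the entry marked
# "(in use)" shows the currently active block size.
sudo nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"
```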

+

In general, all NVMe devices used in a single cluster should exhibit a similar performance profile per TiB. +Therefore, within a single cluster, all NVMe devices are recommended to be of the same size, +but this is not a hard requirement.

+

Clusters are lightweight, and it is recommended to use different clusters for different types of +hardware (NVMe, networking, compute) or with a different performance profile per TiB of raw storage.

+
+

Warning

+

Simplyblock only works with non-partitioned, exclusive NVMe devices (virtual via SR-IOV or physical) as its backing +storage.

+

Individual NVMe namespaces or partitions cannot be claimed by simplyblock, only dedicated NVMe controllers.

+

Devices must not be mounted under Linux, and the entire device will be low-level formatted and +re-partitioned during deployment.

+

Additionally, devices will be detached from the operating system's control and will no longer show up in lsblk +once simplyblock's storage nodes are running.

+
+
+

Info

+

It is required to Low-Level Format Devices with 4KB block size before +deploying Simplyblock.
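A hedged example of such a low-level format with nvme-cli follows; the device name and LBA format index are placeholders and must be chosen from the 4096-byte entry reported by nvme id-ns -H for the specific device:

```shell
# WARNING: this destroys all data on the device. The --lbaf index differs
# per device; pick the entry with "Data Size: 4096 bytes" from `nvme id-ns -H`.
sudo nvme format /dev/nvme0n1 --lbaf=1
```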

+
+

Network Requirements

+

In production, simplyblock requires a redundant network for storage traffic (e.g., via LACP, Stacked Switches, MLAG, +active/active or active/passive NICs, STP or MSTP).

+

Simplyblock implements NVMe over Fabrics (NVMe-oF), specifically NVMe over TCP, and works over any Ethernet +interconnect.

+
+

Recommendation

+

Simplyblock highly recommends NICs with RDMA/ROCEv2 support such as NVIDIA Mellanox network adapters (ConnectX-6 or higher). +Those network adapters are available from brands such as NVIDIA, Intel and Broadcom.

+
+

For production, software-defined switches such as Linux Bridge or OVS cannot be used. Instead, an interface on top of a Linux +bond over two ports of the NIC(s), or using SR-IOV, must be created.
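A minimal sketch of creating such a bonded storage interface with NetworkManager is shown below. The interface names, bond mode, and address are examples and must match the local environment and switch configuration:

```shell
# Create an LACP (802.3ad) bond over two NIC ports for storage traffic.
sudo nmcli connection add type bond ifname bond0 con-name bond0 \
    bond.options "mode=802.3ad,miimon=100"
sudo nmcli connection add type ethernet ifname eth1 master bond0
sudo nmcli connection add type ethernet ifname eth2 master bond0
sudo nmcli connection modify bond0 ipv4.method manual ipv4.addresses 10.10.10.11/24
sudo nmcli connection up bond0
```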

+

Also, it is recommended to use a separate physical NIC with two ports (bonded) and a highly available network for +management traffic. For management traffic, a 1 GBit/s network is sufficient and a Linux Bridge may be used.

+
+

Warning

+

All storage nodes within a cluster and all hosts accessing storage shall reside within the same hardware VLAN.

+

Avoid any gateways, firewalls, or proxies higher than L2 on the network path.

+
+

PCIe Version

+

The minimum required PCIe standard for NVMe devices is PCIe 3.0. However, PCIe 4.0 or higher is strongly recommended.

+

Operating System Requirements (Control Plane, Storage Plane)

+

Control plane nodes, as well as storage nodes in a plain linux deployment, require one of the following +operating systems:

+ + + + + + + + + + + + + + + + + + + + + +
Operating SystemVersions
Alma Linux9
Rocky Linux9
Redhat Enterprise Linux (RHEL)9
+

In a hyper-converged deployment, a broad range of operating systems is supported. The availability also depends on the +utilized Kubernetes distribution.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Operating SystemVersions
Alma Linux9, 10
Rocky Linux9, 10
Redhat Enterprise Linux (RHEL)9, 10
Ubuntu22.04, 24.04
Debian12, 13
Talosfrom 1.6.7
+

The operating system must be on the latest patch-level.

+

Operating System Requirements (Initiator)

+

An initiator is the operating system to which simplyblock logical volumes are attached over the network (NVMe/TCP).

+

For further information on the requirements of the initiator-side (client-only), see:

+ +

Kubernetes Requirements

+

For Kubernetes-based deployments, the following Kubernetes environments and distributions are supported:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DistributionVersions
Amazon EKS1.28 and higher
Google GKE1.28 and higher
K3s1.29 and higher
Kubernetes (vanilla)1.28 and higher
Talos1.6.7 and higher
Openshift4.15 and higher
+

Proxmox Requirements

+

The Proxmox integration supports any Proxmox installation of version 8.0 and higher.

+

OpenStack Requirements

+

The OpenStack integration supports any OpenStack installation of version 25.1 (Epoxy) or higher. Support for older +versions may be available on request.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/index.html b/deployment/25.10.3/deployments/index.html new file mode 100644 index 00000000..b702ae0b --- /dev/null +++ b/deployment/25.10.3/deployments/index.html @@ -0,0 +1,4758 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Deployments - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+ +
+ + + +
+
+ + + + + + + + + + + + + +

Deployments

+ +

Simplyblock is a highly flexible storage solution.

+

Different initiator (host) drivers (Kubernetes CSI, Proxmox, OpenStack) are available. The storage cluster +can be installed into Kubernetes (disaggregated or hyper-converged) +or via Docker (also called the "Plain Linux" deployment). The Docker-based deployment is fully +managed via the Simplyblock CLI or API; only minimal Docker knowledge is required.

+

Control Plane Installation

+

Each storage cluster requires a control plane to run. Multiple storage clusters may be connected to a single control +plane. The deployment of the control plane must happen before a storage cluster deployment. +The control plane can be installed into a Kubernetes cluster or on plain Linux VMs (using Docker internally). +For details, see the Control Plane Deployment on VM or Install Control Plane on Kubernetes.

+

Storage Node Installation

+

For details on how to install the storage cluster into Plain Linux, see Install Simplyblock Storage Nodes on Linux.

+

For installation of Storage Nodes into Kubernetes, see here: Install Storage Nodes on Kubernetes

+

Installation of Drivers

+

Simplyblock logical volumes are NVMe over TCP or RDMA (ROCEv2) volumes. +They are attached to the Linux kernel via the provided nvme-tcp or nvme-rdma +modules and managed via the nvme-cli tool. For more information, see + Linux NVMe-oF Attach. +On top of the NVMe-oF devices, which show up as Linux block devices such as /dev/nvme1n1,
+life-cycle automation is performed by the orchestrator-specific Simplyblock drivers:

+ +

Generally, before creating volumes, it is important to understand the difference between an +NVMe-oF Subsystem and a Namespace.
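For reference, attaching a logical volume manually with nvme-cli might look like the following sketch. The transport address, port, and NQN are placeholders; the real values are provided by the simplyblock control plane:

```shell
# Load the NVMe/TCP initiator module and connect to a logical volume.
sudo modprobe nvme-tcp
sudo nvme connect --transport=tcp --traddr=10.10.10.11 --trsvcid=4420 \
    --nqn=nqn.2023-02.io.simplyblock:<LVOL_NQN>
sudo nvme list   # the new namespace shows up as a block device, e.g. /dev/nvme1n1
```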

+

System Requirements and Sizing

+

Simplyblock is designed for high-performance storage operations. Therefore, it has specific system requirements that +must be met. The following sections describe the system and node sizing requirements.

+ +

For deployments on hyper-scalers, like Amazon AWS and Google GCP, there are instance type recommendations. While other +instance types may work, it is highly recommended to use the recommended instance types.

+ + + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/install-on-linux/index.html b/deployment/25.10.3/deployments/install-on-linux/index.html new file mode 100644 index 00000000..846ea701 --- /dev/null +++ b/deployment/25.10.3/deployments/install-on-linux/index.html @@ -0,0 +1,4760 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Install Simplyblock on Linux - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Install Simplyblock on Linux

+ +

Installing simplyblock for production on plain Linux (Docker) requires a few components to be installed. Furthermore, +there are a couple of configuration steps to secure the network, ensure performance, and protect data in the case +of hardware or software failures.

+

Simplyblock provides two test scripts to automatically check your system's configuration. While those may not catch all +edge cases, they help to streamline the configuration check. These scripts can be run multiple times during the +preparation phase to find missing configuration steps along the way.

+
Automatically check your configurations
# Configuration check for the control plane (management nodes)
+curl -s -L https://install.simplyblock.io/scripts/prerequisites-cp.sh | bash
+
+# Configuration check for the storage plane (storage nodes)
+curl -s -L https://install.simplyblock.io/scripts/prerequisites-sn.sh | bash
+
+

Before We Start

+

A simplyblock production cluster consists of three different types of nodes in the plain linux (Docker) variant +of the deployment:

+
    +
  1. Management nodes are part of the control plane, which manages the cluster(s).
  2. +
  3. Storage nodes are part of a specific storage cluster and provide capacity to the distributed storage pool. A + production cluster requires at least three nodes.
  4. +
  5. Secondary nodes are part of a specific storage cluster and enable automatic failover for NVMe-oF connections. In a + high-availability cluster, every primary storage node automatically provides a secondary storage node.
  6. +
+

In a plain Linux deployment, multiple storage nodes can reside on the same host. This is required on multi-socket +systems, as nodes have to be aligned with NUMA sockets. However, the management nodes require separate VMs.

+

A single control plane can manage one or more clusters. If started afresh, a control plane must be set up before +creating a storage cluster. If there is a preexisting control plane, an additional storage cluster can be added +to it directly.

+

More information on the control plane, storage plane, and the different node types is available under +Simplyblock Cluster in the architecture section.

+

Network Preparation

+

For network requirements, +see System Requirements.

+

On storage nodes, simplyblock can use either one network interface for both storage and management +or separate interfaces (VLANs or subnets).

+

To install simplyblock in your environment, you may have to adapt these commands to match your configuration.

+ + + + + + + + + + + + + + + + + + + + + + + +
Network interfaceNetwork definitionAbbreviationSubnet
eth0Control Planecontrol192.168.10.0/24
eth1Storage Planestorage10.10.10.0/24
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/install-on-linux/install-cp/index.html b/deployment/25.10.3/deployments/install-on-linux/install-cp/index.html new file mode 100644 index 00000000..ea63f9f4 --- /dev/null +++ b/deployment/25.10.3/deployments/install-on-linux/install-cp/index.html @@ -0,0 +1,5088 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Install Control Plane - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Install Control Plane

+ +

Control Plane Installation

+

The first step when installing simplyblock on plain Linux (Docker) is to install the control plane. The control +plane manages one or more storage clusters. If an existing control plane is available and the new cluster should be +added to it, this section can be skipped.

+

In this case, skip ahead to the Storage Plane Installation.

+

Firewall Configuration (CP)

+

Simplyblock requires a number of TCP and UDP ports to be opened from certain networks. Additionally, it requires IPv6 +to be disabled on management nodes.

+

The following is a list of all ports (TCP and UDP) required to operate as a management node. Attention is required, as +this list is for management nodes only. Storage nodes have a different port configuration.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ServiceDirectionSource / Target NetworkPortProtocol(s)
ICMPingresscontrol-ICMP
Cluster APIingressstorage, control, admin80TCP
SSHingressstorage, control, admin22TCP
Greylogingressstorage, control12201TCP / UDP
Greylogingressstorage, control12202TCP
Greylogingressstorage, control13201TCP
Greylogingressstorage, control13202TCP
Docker Daemon Remote Accessingressstorage, control2375TCP
Docker Swarm Remote Accessingressstorage, control2377TCP
Docker Overlay Networkingressstorage, control4789UDP
Docker Network Discoveryingressstorage, control7946TCP / UDP
FoundationDBingressstorage, control4500TCP
Prometheusingressstorage, control9100TCP
Cluster Controlegressstorage, control8080-8890TCP
spdk-http-proxyegressstorage, control5000TCP
spdk-firewall-proxyegressstorage, control5001TCP
Docker Daemon Remote Accessegressstorage, control2375TCP
Docker Swarm Remote Accessegressstorage, control2377TCP
Docker Overlay Networkegressstorage, control4789UDP
Docker Network Discoveryegressstorage, control7946TCP / UDP
+

With the previously defined subnets, the following snippet disables IPv6 and configures the iptables automatically.

+
+

Danger

+

The example assumes that you have an external firewall between the admin network and the public internet!
+If this is not the case, ensure the correct source access for ports 22 and 80.

+
+
Network Configuration
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
+sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
+
+# Clean up
+sudo iptables -F SIMPLYBLOCK
+sudo iptables -D DOCKER-FORWARD -j SIMPLYBLOCK
+sudo iptables -X SIMPLYBLOCK
+# Setup
+sudo iptables -N SIMPLYBLOCK
+sudo iptables -I DOCKER-FORWARD 1 -j SIMPLYBLOCK
+sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
+sudo iptables -A SIMPLYBLOCK -m state --state ESTABLISHED,RELATED -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 80 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 2375 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 2377 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 4500 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p udp --dport 4789 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 7946 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p udp --dport 7946 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 9100 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 12201 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p udp --dport 12201 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 12202 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 13201 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 13202 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -s 0.0.0.0/0 -j DROP
+
+
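The sysctl settings above do not persist across reboots. To keep IPv6 disabled permanently, a sysctl drop-in file can be used (the file name is a suggestion):

```shell
cat <<'EOF' | sudo tee /etc/sysctl.d/99-simplyblock-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
sudo sysctl --system
```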

Management Node Installation

+

Now that the network is configured, the management node software can be installed.

+

Simplyblock provides a command line interface called sbctl. It's built in Python and requires +Python 3 and pip (the Python package manager) to be installed on the machine. This can be achieved with yum.

+
Install Python and Pip
sudo yum -y install python3-pip
+
+

Afterward, the sbctl command line interface can be installed. Upgrading the CLI later on uses the +same command.

+
Install Simplyblock CLI
sudo pip install sbctl --upgrade
+
+
+

Recommendation

+

Simplyblock recommends upgrading sbctl only when a system upgrade is executed, to prevent potential +incompatibilities between the running simplyblock cluster and the version of sbctl.

+
+

At this point, a quick check with the simplyblock provided system check can reveal potential issues quickly.

+
Automatically check your configuration
curl -s -L https://install.simplyblock.io/scripts/prerequisites-cp.sh | bash
+
+

If the check succeeds, it's time to set up the primary management node:

+
Deploy the primary management node
sbctl cluster create --ifname=<IF_NAME> --ha-type=ha
+
+

Additional cluster deployment options can be found in the Cluster Deployment Options.

+

The output should look something like this:

+
Example output of control plane deployment
[root@vm11 ~]# sbctl cluster create --ifname=eth0 --ha-type=ha
+2025-02-26 12:37:06,097: INFO: Installing dependencies...
+2025-02-26 12:37:13,338: INFO: Installing dependencies > Done
+2025-02-26 12:37:13,358: INFO: Node IP: 192.168.10.1
+2025-02-26 12:37:13,510: INFO: Configuring docker swarm...
+2025-02-26 12:37:14,199: INFO: Configuring docker swarm > Done
+2025-02-26 12:37:14,200: INFO: Adding new cluster object
+File moved to /usr/local/lib/python3.9/site-packages/simplyblock_core/scripts/alerting/alert_resources.yaml successfully.
+2025-02-26 12:37:14,269: INFO: Deploying swarm stack ...
+2025-02-26 12:38:52,601: INFO: Deploying swarm stack > Done
+2025-02-26 12:38:52,604: INFO: deploying swarm stack succeeded
+2025-02-26 12:38:52,605: INFO: Configuring DB...
+2025-02-26 12:39:06,003: INFO: Configuring DB > Done
+2025-02-26 12:39:06,106: INFO: Settings updated for existing indices.
+2025-02-26 12:39:06,147: INFO: Template created for future indices.
+2025-02-26 12:39:06,505: INFO: {"cluster_id": "7bef076c-82b7-46a5-9f30-8c938b30e655", "event": "OBJ_CREATED", "object_name": "Cluster", "message": "Cluster created 7bef076c-82b7-46a5-9f30-8c938b30e655", "caused_by": "cli"}
+2025-02-26 12:39:06,529: INFO: {"cluster_id": "7bef076c-82b7-46a5-9f30-8c938b30e655", "event": "OBJ_CREATED", "object_name": "MgmtNode", "message": "Management node added vm11", "caused_by": "cli"}
+2025-02-26 12:39:06,533: INFO: Done
+2025-02-26 12:39:06,535: INFO: New Cluster has been created
+2025-02-26 12:39:06,535: INFO: 7bef076c-82b7-46a5-9f30-8c938b30e655
+7bef076c-82b7-46a5-9f30-8c938b30e655
+
+

If the deployment was successful, the last line returns the cluster id. This should be noted down. It's required in +further steps of the installation.

+

In addition to the cluster id, the cluster secret is required in many further steps. The following command can be used +to retrieve it.

+
Get the cluster secret
sbctl cluster get-secret <CLUSTER_ID>
+
+
Example output get cluster secret
[root@vm11 ~]# sbctl cluster get-secret 7bef076c-82b7-46a5-9f30-8c938b30e655
+e8SQ1ElMm8Y9XIwyn8O0
+
+

Secondary Management Nodes

+

A production cluster requires at least three management nodes in the control plane. Hence, additional management +nodes need to be added.

+

On the secondary nodes, the network requires the same configuration as on the primary. Executing the commands under +Firewall Configuration (CP) will get the node prepared.

+

Afterward, Python, Pip, and sbctl need to be installed.

+
Deployment preparation
sudo yum -y install python3-pip
+sudo pip install sbctl --upgrade
+
+

Finally, we deploy the management node software and join the control plane cluster.

+
Secondary management node deployment
sbctl mgmt add <CP_PRIMARY_IP> <CLUSTER_ID> <CLUSTER_SECRET> <IF_NAME>
+
+

Running against the primary management node in the control plane should create an output similar to the following +example:

+
Example output joining a control plane cluster
[demo@demo ~]# sbctl mgmt add 192.168.10.1 7bef076c-82b7-46a5-9f30-8c938b30e655 e8SQ1ElMm8Y9XIwyn8O0 eth0
+2025-02-26 12:40:17,815: INFO: Cluster found, NQN:nqn.2023-02.io.simplyblock:7bef076c-82b7-46a5-9f30-8c938b30e655
+2025-02-26 12:40:17,816: INFO: Installing dependencies...
+2025-02-26 12:40:25,606: INFO: Installing dependencies > Done
+2025-02-26 12:40:25,626: INFO: Node IP: 192.168.10.2
+2025-02-26 12:40:26,802: INFO: Joining docker swarm...
+2025-02-26 12:40:27,719: INFO: Joining docker swarm > Done
+2025-02-26 12:40:32,726: INFO: Adding management node object
+2025-02-26 12:40:32,745: INFO: {"cluster_id": "7bef076c-82b7-46a5-9f30-8c938b30e655", "event": "OBJ_CREATED", "object_name": "MgmtNode", "message": "Management node added vm12", "caused_by": "cli"}
+2025-02-26 12:40:32,752: INFO: Done
+2025-02-26 12:40:32,755: INFO: Node joined the cluster
+cdde125a-0bf3-4841-a6ef-a0b2f41b8245
+
+

From here, additional management nodes can be added to the control plane cluster. If the control plane cluster is ready, +the storage plane can be installed.
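A production control plane needs at least three management nodes. As a quick check, the management node count can be read from the `sbctl cluster list` table. The helper below is a sketch that parses the human-readable table output (column order as printed by current sbctl versions: UUID, NQN, ha_type, tls, mgmt nodes, ...); the layout may change between releases, so treat it as illustrative.

```shell
# Extract the "mgmt nodes" column from the first data row of `sbctl cluster list`.
# Note: this parses human-readable table output and may break if the layout changes.
mgmt_node_count() {
  awk -F'|' 'NR > 3 && NF > 2 { gsub(/ /, "", $6); print $6; exit }'
}

# Usage (on a management node):
#   sudo sbctl cluster list | mgmt_node_count
```

For a three-node control plane, the helper should print `3` once all secondary nodes have joined.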

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/install-on-linux/install-sp/index.html b/deployment/25.10.3/deployments/install-on-linux/install-sp/index.html new file mode 100644 index 00000000..a9d68a1e --- /dev/null +++ b/deployment/25.10.3/deployments/install-on-linux/install-sp/index.html @@ -0,0 +1,5308 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Install Storage Plane - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Install Storage Plane

+ +

Storage Plane Installation

+

The installation of a storage plane requires a functioning control plane. If no control plane cluster is available yet, +it must be installed beforehand. Jump right to the Control Plane Installation.

+

The following examples assume two subnets are available.

+

Firewall Configuration (SP)

+

Simplyblock requires a number of TCP and UDP ports to be opened from certain networks. Additionally, it requires IPv6 +to be disabled on storage nodes.

+

Following is a list of all ports (TCP and UDP) required for operation as a storage node. Attention is required, as this +list is for storage nodes only. Management nodes have a different port configuration.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ServiceDirectionSource / Target NetworkPort(s)Protocol(s)
ICMPingresscontrol-ICMP
Storage node APIingressstorage5000TCP
spdk-firewall-proxyingressstorage5001TCP
spdk-http-proxyingressstorage, control8080-8180TCP
hublvol-nvmf-subsys-portingressstorage, control9030-9059TCP
internal-nvmf-subsys-portingressstorage, control9060-9099TCP
lvol-nvmf-subsys-portingressstorage, control9100-9200TCP
SSHingressstorage, control, admin22TCP
Docker Daemon Remote Accessingressstorage, control2375TCP
Docker Swarm Remote Accessingressstorage, control2377TCP
Docker Overlay Networkingressstorage, control4789UDP
Docker Network Discoveryingressstorage, control7946TCP / UDP
Graylogingresscontrol12202TCP
FoundationDBegressstorage4500TCP
Docker Daemon Remote Accessegressstorage, control2375TCP
Docker Swarm Remote Accessegressstorage, control2377TCP
Docker Overlay Networkegressstorage, control4789UDP
Docker Network Discoveryegressstorage, control7946TCP / UDP
Graylogegresscontrol12202TCP
+

With the previously defined subnets, the following snippets disable IPv6 and configure iptables automatically.

+
+

Danger

+

The example assumes that you have an external firewall between the admin network and the public internet!
+If this is not the case, ensure the correct source access for port 22.

+
+
Disable IPv6
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
+sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
+
+
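Changes made with sysctl -w do not survive a reboot. To keep IPv6 disabled permanently, the same settings can be written to a drop-in file under /etc/sysctl.d. The sketch below uses an assumed file name (any *.conf name works) and requires root:

```shell
# Persist the IPv6 settings across reboots (the file name is an assumption;
# any /etc/sysctl.d/*.conf file is picked up by sysctl).
printf '%s\n' \
  'net.ipv6.conf.all.disable_ipv6 = 1' \
  'net.ipv6.conf.default.disable_ipv6 = 1' \
  | sudo tee /etc/sysctl.d/90-simplyblock-ipv6.conf

# Reload all sysctl configuration files
sudo sysctl --system
```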

Docker Swarm, by default, creates iptables entries open to the world. If no external firewall is available, the created +iptables configuration needs to be restricted.

+

The following script creates additional iptables rules, prepended to Docker's forwarding rules, which allow +access only from internal networks. This script should be stored in /usr/local/sbin/simplyblock-iptables.sh.

+
Configuration script for Iptables
#!/usr/bin/env bash
+
+# Clean up
+sudo iptables -F SIMPLYBLOCK
+sudo iptables -D DOCKER-FORWARD -j SIMPLYBLOCK
+sudo iptables -X SIMPLYBLOCK
+
+# Setup
+sudo iptables -N SIMPLYBLOCK
+sudo iptables -I DOCKER-FORWARD 1 -j SIMPLYBLOCK
+sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 2375 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 2377 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 4420 -s 10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p udp --dport 4789 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 5000 -s 192.168.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 7946 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p udp --dport 7946 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 8080:8890 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -p tcp --dport 9090:9900 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN
+sudo iptables -A SIMPLYBLOCK -s 0.0.0.0/0 -j DROP
+
+

To automatically run this script whenever Docker is started or restarted, it must be attached to a Systemd service, +stored as /etc/systemd/system/simplyblock-iptables.service.

+
Systemd script to set up Iptables
[Unit]
+Description=Simplyblock Iptables Restrictions for Docker 
+After=docker.service
+BindsTo=docker.service
+ReloadPropagatedFrom=docker.service
+
+[Service]
+Type=oneshot
+ExecStart=/usr/local/sbin/simplyblock-iptables.sh
+ExecReload=/usr/local/sbin/simplyblock-iptables.sh
+RemainAfterExit=yes
+
+[Install]
+WantedBy=multi-user.target
+
+

After both files are stored in their respective locations, the bash script needs to be made executable, and the Systemd +service needs to be enabled to start automatically.

+
Enabling service file
chmod +x /usr/local/sbin/simplyblock-iptables.sh
+systemctl enable simplyblock-iptables.service
+systemctl start simplyblock-iptables.service
+
+

Storage Node Installation

+

Now that the network is configured, the storage node software can be installed.

+
+

Info

+

All storage nodes can be prepared at this point, as they are added to the cluster in the next step. Therefore, it +is recommended to execute this step on all storage nodes before moving to the next step.

+
+

Simplyblock provides a command line interface called sbctl. It's built in Python and requires that +Python 3 and Pip (the Python package manager) be installed on the machine. This can be achieved with yum.

+
Install Python and Pip
sudo yum -y install python3-pip
+
+

Afterward, the sbctl command line interface can be installed. Upgrading the CLI later on uses the +same command.

+
Install Simplyblock CLI
sudo pip install sbctl --upgrade
+
+
+

Recommendation

+

Simplyblock recommends upgrading sbctl only when a system upgrade is executed, to prevent potential +incompatibilities between the running simplyblock cluster and the version of sbctl.

+
+

At this point, the simplyblock-provided system check can quickly reveal potential configuration issues.

+
Automatically check your configuration
curl -s -L https://install.simplyblock.io/scripts/prerequisites-sn.sh | bash
+
+

NVMe Device Preparation

+

Once the check is complete, the NVMe devices in each storage node can be prepared. To prevent data loss in case of a +sudden power outage, NVMe devices need to be formatted with a specific LBA format.

+
+

Warning

+

Failing to format NVMe devices with the correct LBA format can lead to data loss or data corruption in the case +of a sudden power outage or other loss of power. If you can't find the necessary LBA format, it is best to ask +your simplyblock contact for further instructions.

+

On AWS, the necessary LBA format is not available. Simplyblock is, however, fully tested and supported with AWS.

+
+

The lsblk command is the best way to find all NVMe devices attached to a system.

+
Example output of lsblk
[demo@demo-3 ~]# sudo lsblk
+NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
+sda           8:0    0   30G  0 disk
+├─sda1        8:1    0    1G  0 part /boot
+└─sda2        8:2    0   29G  0 part
+  ├─rl-root 253:0    0   26G  0 lvm  /
+  └─rl-swap 253:1    0    3G  0 lvm  [SWAP]
+nvme3n1     259:0    0  6.5G  0 disk
+nvme2n1     259:1    0   70G  0 disk
+nvme1n1     259:2    0   70G  0 disk
+nvme0n1     259:3    0   70G  0 disk
+
+

In the example, there are four NVMe devices: three with 70GiB and one with 6.5GiB of storage capacity.

+

To find the correct LBA format (lbaf) for each of the devices, the nvme CLI can be used.

+
Show NVMe namespace information
sudo nvme id-ns /dev/nvmeXnY
+
+

The output depends on the NVMe device itself, but looks something like this:

+
Example output of NVMe namespace information
[demo@demo-3 ~]# sudo nvme id-ns /dev/nvme0n1
+NVME Identify Namespace 1:
+...
+lbaf  0 : ms:0   lbads:9  rp:0
+lbaf  1 : ms:8   lbads:9  rp:0
+lbaf  2 : ms:16  lbads:9  rp:0
+lbaf  3 : ms:64  lbads:9  rp:0
+lbaf  4 : ms:0   lbads:12 rp:0 (in use)
+lbaf  5 : ms:8   lbads:12 rp:0
+lbaf  6 : ms:16  lbads:12 rp:0
+lbaf  7 : ms:64  lbads:12 rp:0
+
+

From this output, the required lbaf configuration can be found. The necessary configuration has to have the following +values:

+ + + + + + + + + + + + + + + + + + + + + +
PropertyValue
ms0
lbads12
rp0
+

In the example, the required LBA format is 4. If an NVMe device doesn't have that combination, any other lbads=12 +combination will work. However, simplyblock recommends asking your simplyblock contact for the best available combination.
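With many devices, the lookup can be scripted. The helper below is a sketch that scans the human-readable nvme id-ns output for the first format with ms:0 and lbads:12; the exact output format may differ between nvme-cli versions, so verify the result manually.

```shell
# Print the index of the first LBA format with ms:0 and lbads:12.
# Assumes nvme-cli's human-readable output lines like:
#   lbaf  4 : ms:0   lbads:12 rp:0 (in use)
find_lbaf() {
  awk '/^lbaf/ && /ms:0 / && /lbads:12/ { print $2; exit }'
}

# Usage (on a storage node):
#   sudo nvme id-ns /dev/nvme0n1 | find_lbaf
```

If the helper prints nothing, no ms:0/lbads:12 format exists on the device and the current setup can be kept, as noted below.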

+
+

Info

+

In some rare cases, no lbads=12 combination will be available. In this case, it is ok to leave the current +setup. This is specifically true for certain cloud providers such as AWS.

+
+

In our example, the device is already formatted with the correct lbaf (see the "in use" marker). It is, however, +recommended to always format the device before use.

+

To format the drive, the nvme CLI is used again.

+
Formatting the NVMe device
sudo nvme format --lbaf=<lbaf> --ses=0 /dev/nvmeXnY
+
+

When executed as in the example below, the command should report a successful format.

+
Example output of NVMe device formatting
[demo@demo-3 ~]# sudo nvme format --lbaf=4 --ses=0 /dev/nvme0n1
+You are about to format nvme0n1, namespace 0x1.
+WARNING: Format may irrevocably delete this device's data.
+You have 10 seconds to press Ctrl-C to cancel this operation.
+
+Use the force [--force] option to suppress this warning.
+Sending format operation ...
+Success formatting namespace:1
+
+
+

Warning

+

This operation needs to be repeated for each NVMe device that will be handled by simplyblock.

+
+
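When many devices need the same treatment, the per-device formatting can be looped. The sketch below lists NVMe disk devices (excluding partitions) via lsblk; the format loop is shown commented out because it irrevocably erases data, and <LBAF> is a placeholder that must be replaced with the index determined per device.

```shell
# List NVMe disks (TYPE == "disk" filters out partitions and LVM volumes).
list_nvme_disks() {
  awk '$2 == "disk" && $1 ~ /^nvme/ { print "/dev/" $1 }'
}

# DANGER: `nvme format` irrevocably erases the device. Replace <LBAF> with the
# LBA format index determined for each device before uncommenting.
# for dev in $(lsblk -dn -o NAME,TYPE | list_nvme_disks); do
#   sudo nvme format --lbaf=<LBAF> --ses=0 "$dev"
# done
```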

Configuration and Deployment

+

The low-level format of the devices is required only once.

+

With all NVMe devices prepared, the storage node software can be deployed.

+

The actual deployment process happens in three steps: creating the storage node configuration, deploying the first stage (the storage node API), and deploying the second stage (the actual storage node services). Remember that the second stage is performed from a control plane node.

+

The configuration process creates the configuration file, which contains all the assignments of NVMe devices, NICs, and +potentially available NUMA nodes. By default, simplyblock +will configure one storage node per NUMA node.

+
Configure the storage node
sudo sbctl storage-node configure \
+  --max-lvol <MAX_LOGICAL_VOLUMES> \
+  --max-size <MAX_PROVISIONING_CAPACITY>
+
+
Example output of storage node configure
[demo@demo-3 ~]# sudo sbctl sn configure --nodes-per-socket=2 --max-lvol=50 --max-size=1T
+2025-05-14 10:40:17,460: INFO: 0000:00:04.0 is already bound to nvme.
+0000:00:1e.0
+0000:00:1e.0
+0000:00:1f.0
+0000:00:1f.0
+0000:00:1e.0
+0000:00:1f.0
+2025-05-14 10:40:17,841: INFO: JSON file successfully written to /etc/simplyblock/sn_config_file
+2025-05-14 10:40:17,905: INFO: JSON file successfully written to /etc/simplyblock/system_info
+True
+
+

A full set of the parameters for the configure subcommand can be found in the +CLI reference.

+

It is also possible to adjust the configuration file manually, e.g., to remove NVMe devices. +After the configuration has been created, the first stage deployment can be executed.

+
Deploy the storage node
sudo sbctl storage-node deploy --ifname eth0
+
+

The output will look something like the following example:

+
Example output of a storage node deployment
[demo@demo-3 ~]# sudo sbctl storage-node deploy --ifname eth0
+2025-02-26 13:35:06,991: INFO: NVMe SSD devices found on node:
+2025-02-26 13:35:07,038: INFO: Installing dependencies...
+2025-02-26 13:35:13,508: INFO: Node IP: 192.168.10.2
+2025-02-26 13:35:13,623: INFO: Pulling image public.ecr.aws/simply-block/simplyblock:hmdi
+2025-02-26 13:35:15,219: INFO: Recreating SNodeAPI container
+2025-02-26 13:35:15,543: INFO: Pulling image public.ecr.aws/simply-block/ultra:main-latest
+192.168.10.2:5000
+
+

On a successful deployment, the last line will provide the storage node's control channel address. This should be noted +for all storage nodes, as it is required in the next step to attach the storage node to the simplyblock storage cluster.

+

When all storage nodes are added, it's finally time to activate the storage plane.

+

Attach the Storage Node to the Control Plane

+

When all storage nodes are prepared, they can be added to the storage cluster.

+
+

Warning

+

The following commands are executed from a management node. Attaching a storage node to the control plane is always performed +from a management node, not from the storage node itself.

+
+
Attaching a storage node to the storage plane
sudo sbctl storage-node add-node <CLUSTER_ID> <SN_CTR_ADDR> <MGT_IF> \
+  --partitions <NUM_OF_PARTITIONS> \
+  --data-nics <DATA_IF>
+
+

If a separate NIC (e.g., a bond device) is used for storage traffic (whether within the cluster or between hosts and +cluster nodes), the --data-nics parameter must be specified. In R25.10, zero or one data NIC is supported. With zero data +NICs, the management interface is used for all traffic.

+
+

Info

+

The number of partitions (NUM_OF_PARTITIONS) depends on the storage node setup. If a storage node has a +separate journaling device (e.g., a SLC NVMe device), the value should be zero (0) to prevent the storage +devices from being partitioned. This improves the performance and prevents device sharing between the journal and +the actual data storage location. However, in most cases, a separate journaling device is not available or required +and the value of --partitions has to be 1.

+
+

The output will look something like the following example:

+
Example output of adding a storage node to the storage plane
[demo@demo ~]# sudo sbctl storage-node add-node 7bef076c-82b7-46a5-9f30-8c938b30e655 192.168.10.2:5000 eth0 --number-of-devices 3 --data-nics eth1
+2025-02-26 14:55:17,236: INFO: Adding Storage node: 192.168.10.2:5000
+2025-02-26 14:55:17,340: INFO: Instance id: 0b0c825e-3d16-4d91-a237-51e55c6ffefe
+2025-02-26 14:55:17,341: INFO: Instance cloud: None
+2025-02-26 14:55:17,341: INFO: Instance type: None
+2025-02-26 14:55:17,342: INFO: Instance privateIp: 192.168.10.2
+2025-02-26 14:55:17,342: INFO: Instance public_ip: 192.168.10.2
+2025-02-26 14:55:17,347: INFO: Node Memory info
+2025-02-26 14:55:17,347: INFO: Total: 24.3 GB
+2025-02-26 14:55:17,348: INFO: Free: 23.2 GB
+2025-02-26 14:55:17,348: INFO: Minimum required huge pages memory is : 14.8 GB
+2025-02-26 14:55:17,349: INFO: Joining docker swarm...
+2025-02-26 14:55:21,060: INFO: Deploying SPDK
+2025-02-26 14:55:31,969: INFO: adding alceml_2d1c235a-1f4d-44c7-9ac1-1db40e23a2c4
+2025-02-26 14:55:32,010: INFO: creating subsystem nqn.2023-02.io.simplyblock:vm12:dev:2d1c235a-1f4d-44c7-9ac1-1db40e23a2c4
+2025-02-26 14:55:32,022: INFO: adding listener for nqn.2023-02.io.simplyblock:vm12:dev:2d1c235a-1f4d-44c7-9ac1-1db40e23a2c4 on IP 10.10.10.2
+2025-02-26 14:55:32,303: INFO: Connecting to remote devices
+2025-02-26 14:55:32,321: INFO: Connecting to remote JMs
+2025-02-26 14:55:32,342: INFO: Make other nodes connect to the new devices
+2025-02-26 14:55:32,346: INFO: Setting node status to Active
+2025-02-26 14:55:32,357: INFO: {"cluster_id": "3196b77c-e6ee-46c3-8291-736debfe2472", "event": "STATUS_CHANGE", "object_name": "StorageNode", "message": "Storage node status changed from: in_creation to: online", "caused_by": "monitor"}
+2025-02-26 14:55:32,361: INFO: Sending event updates, node: 37b404b9-36aa-40b3-8b74-7f3af86bd5a5, status: online
+2025-02-26 14:55:32,368: INFO: Sending to: 37b404b9-36aa-40b3-8b74-7f3af86bd5a5
+2025-02-26 14:55:32,389: INFO: Connecting to remote devices
+2025-02-26 14:55:32,442: WARNING: The cluster status is not active (unready), adding the node without distribs and lvstore
+2025-02-26 14:55:32,443: INFO: Done
+
+

Repeat this process for all prepared storage nodes to add them to the storage plane.

+

Activate the Storage Cluster

+

The last step, after all nodes are added to the storage cluster, is to activate the storage plane.

+
Storage cluster activation
sudo sbctl cluster activate <CLUSTER_ID>
+
+

The command output should look like the following example and report a successful activation of the storage cluster.

+
Example output of a storage cluster activation
[demo@demo ~]# sbctl cluster activate 7bef076c-82b7-46a5-9f30-8c938b30e655
+2025-02-28 13:35:26,053: INFO: {"cluster_id": "7bef076c-82b7-46a5-9f30-8c938b30e655", "event": "STATUS_CHANGE", "object_name": "Cluster", "message": "Cluster status changed from unready to in_activation", "caused_by": "cli"}
+2025-02-28 13:35:26,322: INFO: Connecting remote_jm_43560b0a-f966-405f-b27a-2c571a2bb4eb to 2f4dafb1-d610-42a7-9a53-13732459523e
+2025-02-28 13:35:31,133: INFO: Connecting remote_jm_43560b0a-f966-405f-b27a-2c571a2bb4eb to b7db725a-96e2-40d1-b41b-738495d97093
+2025-02-28 13:35:55,791: INFO: {"cluster_id": "7bef076c-82b7-46a5-9f30-8c938b30e655", "event": "STATUS_CHANGE", "object_name": "Cluster", "message": "Cluster status changed from in_activation to active", "caused_by": "cli"}
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/kubernetes/index.html b/deployment/25.10.3/deployments/kubernetes/index.html new file mode 100644 index 00000000..afed725d --- /dev/null +++ b/deployment/25.10.3/deployments/kubernetes/index.html @@ -0,0 +1,4682 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Install Simplyblock on Kubernetes - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Install Simplyblock on Kubernetes

+ +

Three simplyblock components can be installed into existing Kubernetes environments:

+
    +
  • Control Plane: In Kubernetes-based deployments, the simplyblock control plane can be installed into a Kubernetes + cluster. This is always the first step.
  • +
  • Storage Plane: In Kubernetes-based deployments, the simplyblock storage plane can be installed into Kubernetes + clusters once the control plane is ready. It is possible to use separate workers or even separate clusters as storage nodes or to combine them with + compute. The storage plane installation also installs the necessary components of the CSI driver; no extra Helm chart is needed.
  • +
+

In general, all Kubernetes deployments follow the same procedure. However, there are some specifics worth mentioning around OpenShift and Talos. + Also, to use volume-based e2e encryption with customer-managed keys, please see here.

+

The Simplyblock CSI Driver can also be separately installed to connect to any external storage cluster +(this can be another hyperconverged or disaggregated cluster under Kubernetes or a Linux-based disaggregated deployment), see: Install Simplyblock CSI.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/kubernetes/install-csi/index.html b/deployment/25.10.3/deployments/kubernetes/install-csi/index.html new file mode 100644 index 00000000..85090113 --- /dev/null +++ b/deployment/25.10.3/deployments/kubernetes/install-csi/index.html @@ -0,0 +1,5151 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Install Simplyblock CSI - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Install Simplyblock CSI

+ +

Simplyblock provides a seamless integration with Kubernetes through its Kubernetes CSI driver.

+

Before installing the Kubernetes CSI Driver, a control plane must be present, an (empty) storage cluster must have +been added to the control plane, and a storage pool must have been created.

+

This section explains how to install a CSI driver and connect it to a disaggregated storage cluster, which must already +exist prior to the CSI driver installation. The disaggregated cluster must be installed onto +Plain Linux Hosts or into an Existing Kubernetes Cluster. +It must not be co-located on the same Kubernetes worker nodes as the CSI driver installation.

+

For co-located (hyper-converged) deployment (which includes the CSI driver and storage node deployment), see +Hyper-Converged Deployment.

+

CSI Driver System Requirements

+

The CSI driver consists of two parts:

+
    +
  • A controller part, which communicates to the control plane via the control plane API
  • +
  • A node part, which is deployed to and must be present on all nodes with pods attaching simplyblock storage (DaemonSet)
  • +
+

The worker node of the node part must satisfy the following requirements:

+ +

Installation Options

+

To install the Simplyblock CSI Driver, a Helm chart is provided. While it can be installed manually, the Helm chart is +strongly recommended. If a manual installation is preferred, see the +CSI Driver Repository ⧉.

+

Retrieving Credentials

+

Credentials are available via sbctl cluster get-secret from any of the control plane nodes. For further +information on the command, see Retrieving a Cluster Secret.

+

First, the unique cluster id must be retrieved. Note down the cluster UUID of the cluster to access.

+
Retrieving the Cluster UUID
sudo sbctl cluster list
+
+

An example of the output is below.

+
Example output of a cluster listing
[demo@demo ~]# sbctl cluster list
++--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+
+| UUID                                 | NQN                                                             | ha_type | tls   | mgmt nodes | storage nodes | Mod | Status |
++--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+
+| 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a | nqn.2023-02.io.simplyblock:4502977c-ae2d-4046-a8c5-ccc7fa78eb9a | ha      | False | 1          | 4             | 1x1 | active |
++--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+
+
+

In addition, the cluster secret must be retrieved. Note down the cluster secret.

+
Retrieve the Cluster Secret
sbctl cluster get-secret <CLUSTER_UUID>
+
+

Retrieving the cluster secret will look something like this.

+
Example output of retrieving a cluster secret
[demo@demo ~]# sbctl cluster get-secret 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a
+oal4PVNbZ80uhLMah2Bs
+
+

Creating a Storage Pool

+

Additionally, a storage pool is required. If a pool already exists, it can be reused. Otherwise, a storage +pool can be created as follows:

+
Create a Storage Pool
sbctl pool add <POOL_NAME> <CLUSTER_UUID>
+
+

The last line of a successful storage pool creation returns the new pool id.

+
Example output of creating a storage pool
[demo@demo ~]# sbctl pool add test 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a
+2025-03-05 06:36:06,093: INFO: Adding pool
+2025-03-05 06:36:06,098: INFO: {"cluster_id": "4502977c-ae2d-4046-a8c5-ccc7fa78eb9a", "event": "OBJ_CREATED", "object_name": "Pool", "message": "Pool created test", "caused_by": "cli"}
+2025-03-05 06:36:06,100: INFO: Done
+ad35b7bb-7703-4d38-884f-d8e56ffdafc6 # <- Pool Id
+
+

The last item necessary before deploying the CSI driver is the control plane address. It is recommended to front the +simplyblock API with an AWS load balancer, HAProxy, or a similar service. Hence, your control plane address is the +"public" endpoint of this load balancer.

+

Deploying the Helm Chart

+

Deploying the Simplyblock CSI Driver using the provided Helm chart comes down to providing the four necessary +values, adding the Helm chart repository, and installing the driver.

+
Install Simplyblock's CSI Driver
CLUSTER_UUID="<UUID>"
+CLUSTER_SECRET="<SECRET>"
+CNTR_ADDR="<CONTROL-PLANE-ADDR>"
+POOL_NAME="<POOL-NAME>"
+helm repo add simplyblock-csi https://install.simplyblock.io/helm/csi
+helm repo update
+helm install -n simplyblock --create-namespace simplyblock simplyblock-csi/spdk-csi \
+    --set csiConfig.simplybk.uuid=${CLUSTER_UUID} \
+    --set csiConfig.simplybk.ip=${CNTR_ADDR} \
+    --set csiSecret.simplybk.secret=${CLUSTER_SECRET} \
+    --set logicalVolume.pool_name=${POOL_NAME}
+
+
Example output of the CSI driver deployment
demo@demo ~> export CLUSTER_UUID="4502977c-ae2d-4046-a8c5-ccc7fa78eb9a"
+demo@demo ~> export CLUSTER_SECRET="oal4PVNbZ80uhLMah2Bs"
+demo@demo ~> export CNTR_ADDR="http://192.168.10.1/"
+demo@demo ~> export POOL_NAME="test"
+demo@demo ~> helm repo add simplyblock-csi https://install.simplyblock.io/helm/csi
+"simplyblock-csi" has been added to your repositories
+demo@demo ~> helm repo update
+Hang tight while we grab the latest from your chart repositories...
+...Successfully got an update from the "simplyblock-csi" chart repository
+Update Complete. ⎈Happy Helming!⎈
+demo@demo ~> helm install -n simplyblock --create-namespace simplyblock simplyblock-csi/spdk-csi \
+  --set csiConfig.simplybk.uuid=${CLUSTER_UUID} \
+  --set csiConfig.simplybk.ip=${CNTR_ADDR} \
+  --set csiSecret.simplybk.secret=${CLUSTER_SECRET} \
+  --set logicalVolume.pool_name=${POOL_NAME}
+NAME: simplyblock-csi
+LAST DEPLOYED: Wed Mar  5 15:06:02 2025
+NAMESPACE: simplyblock
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+NOTES:
+The Simplyblock SPDK Driver is getting deployed to your cluster.
+
+To check CSI SPDK Driver pods status, please run:
+
+  kubectl --namespace=simplyblock get pods --selector="release=simplyblock-csi" --watch
+demo@demo ~> kubectl --namespace=simplyblock get pods --selector="release=simplyblock-csi" --watch
+NAME                   READY   STATUS    RESTARTS   AGE
+spdkcsi-controller-0   6/6     Running   0          30s
+spdkcsi-node-tzclt     2/2     Running   0          30s
+
+

There are many additional parameters for the Helm chart deployment. Most parameters, however, aren't required in +real-world CSI driver deployments and should only be used at the request of simplyblock.

+

The full list of parameters is available here: Kubernetes Helm Chart Parameters.

+

Please note that the storagenode.create parameter must be set to false (the default) to deploy only the CSI driver.

+

Multi Cluster Support

+

The Simplyblock CSI driver now offers multi-cluster support and zone-aware configurations, allowing it to connect to multiple simplyblock clusters based on their cluster ID +or their topology zone. +Previously, the CSI driver could only connect to a single cluster.

+

To enable interaction with multiple clusters, there are two key changes:

+
    +
  1. Parameter cluster_id in a storage class: A new parameter, cluster_id, has been added to the storage class. + This parameter specifies which Simplyblock cluster a given request should be directed to.
  2. +
  3. Secret simplyblock-csi-secret-v2: A new Kubernetes secret, simplyblock-csi-secret-v2, has been added to + store credentials for all configured simplyblock clusters.
  4. +
+

Adding a Cluster

+

When the Simplyblock CSI driver is initially installed, only a single cluster can be referenced.

+
helm install simplyblock-csi ./ \
+    --set csiConfig.simplybk.uuid=${CLUSTER_ID} \
+    --set csiConfig.simplybk.ip=${CLUSTER_IP} \
+    --set csiSecret.simplybk.secret=${CLUSTER_SECRET}
+
+

The CLUSTER_ID (UUID), gateway endpoint (CLUSTER_IP), and secret (CLUSTER_SECRET) of the initial cluster must be +provided. This command automatically creates the simplyblock-csi-secret-v2 secret.

+

The structure of the simplyblock-csi-secret-v2 secret is as follows:

+
apiVersion: v1
+data:
+  secret.json: <base64 encoded secret>
+kind: Secret
+metadata:
+  name: simplyblock-csi-secret-v2
+type: Opaque
+
+

The decoded secret must be valid JSON content and contain an array of JSON items, one per cluster. Each item consists +of three properties: cluster_id, cluster_endpoint, and cluster_secret.

+
{
+   "clusters": [
+     {
+       "cluster_id": "4ec308a1-61cf-4ec6-bff9-aa837f7bc0ea",
+       "cluster_endpoint": "http://127.0.0.1",
+       "cluster_secret": "super_secret"
+     }
+   ]
+}
+
+

To add a new cluster, the current secret must be retrieved from Kubernetes, edited (adding the new cluster information), +and uploaded to the Kubernetes cluster.

+
# Save cluster secret to a file
+kubectl get secret simplyblock-csi-secret-v2 -o jsonpath='{.data.secret\.json}' | base64 --decode > secret.json
+
+# Edit the clusters and add the new cluster's cluster_id, cluster_endpoint, cluster_secret
+# vi secret.json 
+
+cat secret.json | base64 | tr -d '\n' > secret-encoded.json
+
+# Replace data.secret.json with the content of secret-encoded.json
+kubectl -n simplyblock edit secret simplyblock-csi-secret-v2
+
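The edit step can also be done non-interactively. The sketch below assumes jq is installed and appends a new cluster entry to the decoded JSON; the result still has to be base64-encoded and placed back into the secret.

```shell
# Append one cluster entry to the decoded secret JSON (requires jq).
add_cluster() {
  jq --arg id "$1" --arg ep "$2" --arg sec "$3" \
    '.clusters += [{cluster_id: $id, cluster_endpoint: $ep, cluster_secret: $sec}]'
}

# Usage (placeholders must be replaced with the new cluster's values):
#   kubectl get secret simplyblock-csi-secret-v2 -o jsonpath='{.data.secret\.json}' \
#     | base64 --decode \
#     | add_cluster "<NEW_CLUSTER_ID>" "<NEW_CLUSTER_ENDPOINT>" "<NEW_CLUSTER_SECRET>" \
#     | base64 | tr -d '\n'
```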
+

Using Multi Cluster

+

Option 1: Cluster ID–Based Method (One StorageClass per Cluster)

+

In this approach, each SimplyBlock cluster has its own dedicated StorageClass that specifies which cluster to use for provisioning. +This is ideal for setups where workloads are manually directed to specific clusters.

+

For example:

+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: simplyblock-csi-sc-cluster1
+provisioner: csi.simplyblock.io
+parameters:
+  cluster_id: "cluster-uuid-1"
+  ... other parameters
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
+
+

You can define another StorageClass for a different cluster:

+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: simplyblock-csi-sc-cluster2
+provisioner: csi.simplyblock.io
+parameters:
+  cluster_id: "cluster-uuid-2"
+  ... other parameters
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
+
+

Each StorageClass references a unique cluster_id. +The CSI driver uses that ID to determine which SimplyBlock cluster to connect to.

+

Option 2: Zone-Aware Method (Automatic Multi-Cluster Selection)

+

This approach allows a single StorageClass to automatically select the appropriate SimplyBlock cluster based on the Kubernetes zone where the workload runs. +It is recommended for multi-zone Kubernetes deployments that span multiple SimplyBlock clusters.

+

storageclass.zoneClusterMap

+

Sets the mapping between Kubernetes zones and SimplyBlock cluster IDs. +Each zone is associated with one cluster.

+

storageclass.allowedTopologyZones

+

Sets the list of zones where the StorageClass is permitted to provision volumes. +This ensures that scheduling aligns with the clusters defined in zoneClusterMap.

+

For example:

+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: simplyblock-csi-sc
+provisioner: csi.simplyblock.io
+parameters:
+  zone_cluster_map: |
+    {"us-east-1a":"cluster-uuid-1","us-east-1b":"cluster-uuid-2"}
+  ... other parameters
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
+allowedTopologies:
+- matchLabelExpressions:
+  - key: topology.kubernetes.io/zone
+    values:
+      - us-east-1a
+      - us-east-1b
+
+

This method allows Kubernetes to automatically pick the right cluster based on the pod’s scheduling zone.
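As an illustrative sketch (names and zone are assumptions), a PVC against the zone-aware class plus a pod pinned to us-east-1a: because the StorageClass uses WaitForFirstConsumer, the volume is bound only after the pod is scheduled, so the pod's zone selects cluster-uuid-1.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zoned-data
spec:
  storageClassName: simplyblock-csi-sc  # the zone-aware class from above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: zoned-app
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a  # determines the backing cluster
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: zoned-data
```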

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/kubernetes/k8s-control-plane/index.html b/deployment/25.10.3/deployments/kubernetes/k8s-control-plane/index.html new file mode 100644 index 00000000..235fbefc --- /dev/null +++ b/deployment/25.10.3/deployments/kubernetes/k8s-control-plane/index.html @@ -0,0 +1,4754 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Install Simplyblock Control Plane on Kubernetes - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Install Simplyblock Control Plane on Kubernetes

+ +
Install CLI
pip install sbctl --upgrade
+
+

After installing the CLI, navigate to the Helm chart directory within the installed package:

+
cd /usr/local/lib/python3.9/site-packages/simplyblock_core/scripts/charts/
+
+

Then build the Helm dependencies and deploy the simplyblock control plane:

+
helm dependency build ./
+helm upgrade --install sbcli --namespace simplyblock --create-namespace ./
+
+

Before running the helm install, you can edit the values.yaml file to match your specific configuration — +for example, to set cluster parameters, storage options, or node selectors according to your environment.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ServiceDirectionSource / Target NetworkPortProtocol(s)
ICMPingresscontrol-ICMP
Cluster APIingressstorage, control, admin80TCP
FoundationDBingressstorage, control4500TCP
Cluster Controlegressstorage, control8080-8890TCP
spdk-http-proxyegressstorage, control5000TCP
spdk-firewall-proxyegressstorage, control5001TCP
+

Find and exec into the admin control pod (replace the pod name if different):

+
kubectl -n simplyblock exec -it simplyblock-admin-control-<uuid> -- bash
+
+
Install Control Plane
sbctl cluster create --mgmt-ip <WORKER_IP> --ha-type ha --mode kubernetes
+
+
+

Info

+

You need to add additional parameters when using a load balancer: --ingress-host-source loadbalancer and --dns-name <LB_INGRESS_DNS>

+
+
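Combining this with the base command above, a load-balancer-fronted installation would look roughly as follows (placeholders must be replaced with actual values):

```shell
sbctl cluster create --mgmt-ip <WORKER_IP> --ha-type ha --mode kubernetes \
    --ingress-host-source loadbalancer --dns-name <LB_INGRESS_DNS>
```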

Additional parameters for the cluster create command can be found at Cluster Deployment Options.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/kubernetes/k8s-storage-plane/index.html b/deployment/25.10.3/deployments/kubernetes/k8s-storage-plane/index.html new file mode 100644 index 00000000..02f3d09a --- /dev/null +++ b/deployment/25.10.3/deployments/kubernetes/k8s-storage-plane/index.html @@ -0,0 +1,5078 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Install Simplyblock Storage Plane on Kubernetes - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Install Simplyblock Storage Plane on Kubernetes

+ +

When installed on Kubernetes, simplyblock installations consist of three parts, the control plane, the storage nodes +and the CSI driver.

+
+

Info

+

In a Kubernetes deployment, not all Kubernetes workers have to become part of the storage cluster. +Simplyblock uses node labels to identify Kubernetes workers that are deemed as storage hosting instances.

+

It is common to add dedicated Kubernetes worker nodes for storage to the same +Kubernetes cluster. They can be separated into a different node pool, and using a different type of host. In this case, +it is important to remember to taint the Kubernetes worker accordingly to prevent other services from being +scheduled on this worker.

+
+
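The taint itself is applied with kubectl. The key, value, and effect below are hypothetical examples; the storage workloads must carry a matching toleration:

```shell
kubectl taint nodes <NODE_NAME> dedicated=simplyblock-storage:NoSchedule
```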

Retrieving Credentials

+

Credentials are available via sbctl cluster get-secret from any of the control plane nodes. For further +information on the command, see Retrieving a Cluster Secret.

+

First, the unique cluster id must be retrieved. Note down the cluster UUID of the cluster to access.

+
Retrieving the Cluster UUID
sudo sbctl cluster list
+
+

An example of the output is below.

+
Example output of a cluster listing
[demo@demo ~]# sbctl cluster list
++--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+
+| UUID                                 | NQN                                                             | ha_type | tls   | mgmt nodes | storage nodes | Mod | Status |
++--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+
+| 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a | nqn.2023-02.io.simplyblock:4502977c-ae2d-4046-a8c5-ccc7fa78eb9a | ha      | False | 1          | 4             | 1x1 | active |
++--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+
+
+

In addition, the cluster secret must be retrieved. Note down the cluster secret.

+
Retrieve the Cluster Secret
sbctl cluster get-secret <CLUSTER_UUID>
+
+

Retrieving the cluster secret will look somewhat like that.

+
Example output of retrieving a cluster secret
[demo@demo ~]# sbctl cluster get-secret 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a
+oal4PVNbZ80uhLMah2Bs
+
+

Creating a Storage Pool

+

Additionally, a storage pool is required. If a pool already exists, it can be reused. Otherwise, a new storage +pool can be created as follows:

+
Create a Storage Pool
sbctl pool add <POOL_NAME> <CLUSTER_UUID>
+
+

The last line of a successful storage pool creation returns the new pool id.

+
Example output of creating a storage pool
[demo@demo ~]# sbctl pool add test 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a
+2025-03-05 06:36:06,093: INFO: Adding pool
+2025-03-05 06:36:06,098: INFO: {"cluster_id": "4502977c-ae2d-4046-a8c5-ccc7fa78eb9a", "event": "OBJ_CREATED", "object_name": "Pool", "message": "Pool created test", "caused_by": "cli"}
+2025-03-05 06:36:06,100: INFO: Done
+ad35b7bb-7703-4d38-884f-d8e56ffdafc6 # <- Pool Id
+
+
+

Info

+

It is possible to configure QoS limits on a storage pool level. This limit will collectively cap all volumes +assigned to this pool without being limited individually. In fact, if pool-level QoS is active, it is not +allowed to set volume-level QoS in the storage class!

+
+

Example:

+
Create a Storage Pool with QoS Limits
sbctl pool add <POOL_NAME> <CLUSTER_UUID> --max-iops 10000 --max-rw-mb 500 --max-w-mb 100
+
+

Labeling Nodes

+

Before the Helm Chart can be installed, it is required to label all Kubernetes worker nodes deemed as storage nodes.

+

It is also possible to label additional nodes at a later stage to add them to the storage cluster. However, expanding +a storage cluster always requires at least two new nodes to be added as part of the same expansion operation.

+
Label the Kubernetes worker node
kubectl label nodes <NODE_NAME> io.simplyblock.node-type=simplyblock-storage-plane
+
+

Networking Configuration

+

Multiple ports are required to be opened on storage node hosts.

+

Ports using the same source and target networks (VLANs) will not require any additional firewall settings.

+

Opening ports may be required between the control plane and storage networks as those typically reside on different +VLANs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ServiceDirectionSource / Target NetworkPort(s)Protocol(s)
ICMPingresscontrol-ICMP
Storage node APIingressstorage5000TCP
spdk-firewall-proxyingressstorage5001TCP
spdk-http-proxyingressstorage, control8080-8180TCP
hublvol-nvmf-subsys-portingressstorage, control9030-9059TCP
internal-nvmf-subsys-portingressstorage, control9060-9099TCP
lvol-nvmf-subsys-portingressstorage, control9100-9200TCP
FoundationDBegressstorage4500TCP
Control plane APIegresscontrol80TCP
+

Installing CSI Driver and Storage Nodes via Helm

+

In the simplest deployment, compared to a pure Simplyblock CSI Driver installation, the deployment of +a storage node via the Helm Chart requires only one additional parameter --set storagenode.create=true:

+
Install the helm chart
CLUSTER_UUID="<UUID>"
+CLUSTER_SECRET="<SECRET>"
+CNTR_ADDR="<CONTROL-PLANE-ADDR>"
+POOL_NAME="<POOL-NAME>"
+helm repo add simplyblock-csi https://install.simplyblock.io/helm/csi
+helm repo add simplyblock-controller https://install.simplyblock.io/helm/controller
+helm repo update
+
+# Install Simplyblock CSI Driver and Storage Node API
+helm install -n simplyblock \
+    --create-namespace simplyblock \
+    simplyblock-csi/spdk-csi \
+    --set csiConfig.simplybk.uuid=${CLUSTER_UUID} \
+    --set csiConfig.simplybk.ip=${CNTR_ADDR} \
+    --set csiSecret.simplybk.secret=${CLUSTER_SECRET} \
+    --set logicalVolume.pool_name=${POOL_NAME} \
+    --set storagenode.create=true
+
+
Example output of the Simplyblock Kubernetes deployment
demo@demo ~> export CLUSTER_UUID="4502977c-ae2d-4046-a8c5-ccc7fa78eb9a"
+demo@demo ~> export CLUSTER_SECRET="oal4PVNbZ80uhLMah2Bs"
+demo@demo ~> export CNTR_ADDR="http://192.168.10.1/"
+demo@demo ~> export POOL_NAME="test"
+demo@demo ~> helm repo add simplyblock-csi https://install.simplyblock.io/helm/csi
+"simplyblock-csi" has been added to your repositories
+demo@demo ~> helm repo add simplyblock-controller https://install.simplyblock.io/helm/controller
+"simplyblock-controller" has been added to your repositories
+demo@demo ~> helm repo update
+Hang tight while we grab the latest from your chart repositories...
+...Successfully got an update from the "simplyblock-csi" chart repository
+...Successfully got an update from the "simplyblock-controller" chart repository
+Update Complete. ⎈Happy Helming!⎈
+demo@demo ~> helm install -n simplyblock --create-namespace simplyblock simplyblock-csi/spdk-csi \
+  --set csiConfig.simplybk.uuid=${CLUSTER_UUID} \
+  --set csiConfig.simplybk.ip=${CNTR_ADDR} \
+  --set csiSecret.simplybk.secret=${CLUSTER_SECRET} \
+  --set logicalVolume.pool_name=${POOL_NAME}
+NAME: simplyblock-csi
+LAST DEPLOYED: Wed Mar  5 15:06:02 2025
+NAMESPACE: simplyblock
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+NOTES:
+The Simplyblock SPDK Driver is getting deployed to your cluster.
+
+To check CSI SPDK Driver pods status, please run:
+
+  kubectl --namespace=simplyblock get pods --selector="release=simplyblock-csi" --watch
+demo@demo ~> kubectl --namespace=simplyblock get pods --selector="release=simplyblock-csi" --watch
+NAME                   READY   STATUS    RESTARTS   AGE
+spdkcsi-controller-0   6/6     Running   0          30s
+spdkcsi-node-tzclt     2/2     Running   0          30s
+
+

There are a number of other Helm Chart parameters that are important for storage node deployment in hyper-converged +mode. The most important ones are:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
storagenode.ifnameSets the interface name of the management interface (traffic between storage nodes and control plane, see storage mgmt VLAN). Highly available ports and networks are required in production. While this value can be changed at a later point in time, it requires a storage node restart.eth0
storagenode.maxSizeSets the maximum utilized storage capacity of this storage node. A conservative setting is the expected cluster capacity. This setting has a significant impact on RAM demand, as 0.02% of maxSize is required as additional RAM.150g
storagenode.isolateCoresEnables core isolation. Isolating the cores used by simplyblock from other processes and the system, including IRQs, can significantly increase performance. Core isolation requires a Kubernetes worker node restart after the deployment is completed. Changes are performed via a privileged container on the OS-level (grub).false
storagenode.dataNicsSets the interface name of the storage network(s). This includes traffic inside the storage cluster and between csi-nodes and storage nodes. Highly available ports and networks are required for production.
storagenode.pciAllowedSets the list of allowed NVMe PCIe addresses.<empty>
storagenode.pciBlockedSets the list of blocked NVMe PCIe addresses.<empty>
storagenode.socketsToUseSets the list of NUMA sockets to use. If a worker node has more than 1 NUMA socket, it is possible to deploy more than one simplyblock storage node per host, depending on the distribution of NVMe devices and NICs across NUMA sockets and the resource demand of other workloads.1
storagenode.nodesPerSocketSets the number of storage nodes to be deployed per NUMA socket. It is possible to deploy one or two storage nodes per socket. This improves performance if each NUMA socket has more than 32 cores.1
storagenode.coresPercentageSets the percentage of total cores (vCPUs) available to simplyblock storage node services. It must be ensured that the configured percentage yields at least 8 vCPUs per storage node. For example, if a host has 128 vCPUs on two NUMA sockets (64 each) and --storagenode.socketsToUse=2 and --storagenode.nodesPerSocket=1, at least 13% (as 13% * 64 > 8) must be set. Simplyblock does not use more than 32 vCPUs per storage node efficiently.<empty>
+
+

Warning

+

The resources consumed by simplyblock are exclusively used and have to be aligned with resources required by other +workloads. For further information, see Minimum System Requirements.

+
+
+

Info

+

The RAM requirement itself is split in between huge page memory and system memory. However, this is transparent to +users.

+

Simplyblock takes care of allocating, reserving, and freeing huge pages as part of its overall RAM management.

+

The total amount of RAM required depends on the number of vCPUs used, the number of active logical volumes +(Persistent Volume Claims or PVCs) and the utilized virtual storage on this node. This doesn't mean the physical +storage provided on the storage host, but the storage connected to via this storage node.

+
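As a rough, illustrative calculation only, based on the 0.02%-of-maxSize rule from the parameter table above (actual demand also depends on vCPU count and active volumes):

```shell
# Hypothetical sizing helper: additional RAM from the 0.02%-of-maxSize rule.
max_size_gib=2048                                    # planned storagenode.maxSize in GiB
extra_ram_mib=$(( max_size_gib * 1024 * 2 / 10000 )) # 0.02% of maxSize, in MiB
echo "additional RAM: ${extra_ram_mib} MiB"
```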
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/kubernetes/openshift/index.html b/deployment/25.10.3/deployments/kubernetes/openshift/index.html new file mode 100644 index 00000000..59e13f54 --- /dev/null +++ b/deployment/25.10.3/deployments/kubernetes/openshift/index.html @@ -0,0 +1,4754 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + OpenShift - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

OpenShift

+ +

When installing simplyblock on OpenShift, the process is very similar to Kubernetes, with one key difference: +OpenShift requires explicitly granting the privileged Security Context Constraint (SCC) to service accounts to enable +storage and SPDK operations.

+
+

Info

+

In OpenShift deployments, not all worker nodes must host storage components. +Simplyblock uses node labels to identify nodes that participate in the storage cluster. +You can isolate storage workloads on dedicated worker nodes or node pools.

+
+

Prerequisites

+

Ensure your OpenShift cluster is operational and that you have administrator privileges.

+

Before deploying Simplyblock components, grant the required SCC permissions:

+
Grant SCC permissions
oc adm policy add-scc-to-group privileged system:serviceaccounts
+
+

This step is mandatory to allow SPDK and storage-related containers to run with the privileges required for NVMe device +access.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/kubernetes/talos/index.html b/deployment/25.10.3/deployments/kubernetes/talos/index.html new file mode 100644 index 00000000..ddc9515b --- /dev/null +++ b/deployment/25.10.3/deployments/kubernetes/talos/index.html @@ -0,0 +1,4825 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Talos - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Talos

+ +

Talos Linux ⧉ is a minimal Linux distribution optimized for Kubernetes. Built as an immutable +distribution image, it provides a minimal attack surface but requires some changes to the image to run simplyblock.

+

Simplyblock requires a set of additional Linux kernel modules, as well as tools being available in the Talos image. +That means that a custom Talos image has to be built to run simplyblock. The following section explains the required +changes to make Talos compatible.

+

Required Kernel Modules (Worker Node)

+

On Kubernetes worker nodes, simplyblock requires a few kernel modules to be loaded.

+
Content of kernel-module-config.yaml
machine:
+  kernel:
+    modules:
+      - name: nbd 
+      - name: uio_pci_generic
+      - name: vfio_pci
+      - name: vfio_iommu_type1
+
+

Huge Pages Reservations

+

Simplyblock requires huge pages memory to operate. The storage engine expects to find huge pages of 2 MiB page size. The +required amount of huge pages depends on a number of factors.
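As an illustrative helper (the reservation size is an assumption and must be sized for the actual cluster), a target reservation in GiB translates into a count of 2 MiB pages as follows:

```shell
# Convert a target huge page reservation (GiB) into a count of 2 MiB pages
reservation_gib=8
nr_hugepages=$(( reservation_gib * 1024 / 2 ))
echo "vm.nr_hugepages = ${nr_hugepages}"
```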

+

To apply the change to Talos' worker nodes, a YAML configuration file with the following content is required. The number +of pages is to be replaced with the number calculated above.

+
Content of huge-pages-config.yaml
machine:
+  sysctls:
+     vm.nr_hugepages: "<number-of-pages>"
+
+

To activate the huge pages, the talosctl command should be used.

+
Enable Huge Pages in Talos
demo@demo ~> talosctl apply-config --nodes <worker_node_ip> \
+    --file huge-pages-config.yaml -m reboot
+demo@demo ~> talosctl service kubelet restart --nodes <worker_node_ip>
+
+

Required Talos Permissions

+

Simplyblock's CSI driver requires connecting NVMe over Fabrics devices, as well as mounting and formatting them. +Therefore, the CSI driver has to run as a privileged container. Hence, Talos must be configured to start the +simplyblock's CSI driver in privileged mode.

+

Talos allows overriding the pod security admission settings at a Kubernetes namespace level. To enable privileged mode +and grant the required access to the simplyblock CSI driver, a specific simplyblock namespace with the appropriate +security exemptions must be created:

+
Content of simplyblock-namespace.yaml
apiVersion: v1
+kind: Namespace
+metadata:
+  name: simplyblock
+  labels:
+    pod-security.kubernetes.io/enforce: privileged
+    pod-security.kubernetes.io/enforce-version: latest
+    pod-security.kubernetes.io/audit: privileged
+    pod-security.kubernetes.io/audit-version: latest
+    pod-security.kubernetes.io/warn: privileged
+    pod-security.kubernetes.io/warn-version: latest
+
+

To enable the required permissions, apply the namespace configuration using kubectl.

+
Enable privileged mode for simplyblock
demo@demo ~> kubectl apply -f simplyblock-namespace.yaml
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/kubernetes/volume-encryption/index.html b/deployment/25.10.3/deployments/kubernetes/volume-encryption/index.html new file mode 100644 index 00000000..b96e34c5 --- /dev/null +++ b/deployment/25.10.3/deployments/kubernetes/volume-encryption/index.html @@ -0,0 +1,4881 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Volume Encryption - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Volume Encryption

+ +

Simplyblock supports encryption of logical volumes (LVs) to protect data at rest, ensuring that sensitive +information remains secure across the distributed storage cluster. Encryption is applied during volume creation as +part of the storage class specification.

+

Encrypting Logical Volumes ensures that simplyblock storage meets data protection and compliance requirements, +safeguarding sensitive workloads without compromising performance.

+
+

Warning

+

Encryption must be specified at the time of volume creation. Existing logical volumes cannot be retroactively +encrypted.

+
+

Encrypting Volumes with Simplyblock

+

Simplyblock supports the encryption of logical volumes. Internally, simplyblock utilizes the industry-proven +crypto bdev ⧉ provided by SPDK to implement its encryption +functionality.

+

The encryption uses an AES_XTS variable-length block cipher. This cipher requires two keys of 16 to 32 bytes each. The +keys need to have the same length, meaning that if one key is 32 bytes long, the other one has to be 32 bytes, too.

+
+

Recommendation

+

Simplyblock strongly recommends two keys of 32 bytes.

+
+

Generate Random Keys

+

Simplyblock does not provide an integrated way to generate encryption keys, but recommends using the OpenSSL tool chain. +For Kubernetes, the encryption key needs to be provided as base64. Hence, it's encoded right away.

+

To generate the two keys, the following command is run twice. The result must be stored for later.

+
Create an Encryption Key
openssl rand -hex 32 | base64 -w0
+
+
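A quick local sanity check, assuming OpenSSL and GNU coreutils are installed: each generated key must decode back to 64 hexadecimal characters, that is, a 32-byte key:

```shell
# Generate a key as above, then verify its decoded length
key=$(openssl rand -hex 32 | base64 -w0)
hex_len=$(printf '%s' "$key" | base64 --decode | tr -d '\n' | wc -c)
echo "key hex length: ${hex_len}"   # expect 64
```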

Create the Kubernetes Secret

+

Next up, a Kubernetes Secret is created, providing the two just-created encryption keys.

+
Create a Kubernetes Secret Resource
apiVersion: v1
+kind: Secret
+metadata:
+  name: my-encryption-keys
+data:
+  crypto_key1: YzIzYzllY2I4MWJmYmY1ZDM5ZDA0NThjNWZlNzQwNjY2Y2RjZDViNWE4NTZkOTA5YmRmODFjM2UxM2FkZGU4Ngo=
+  crypto_key2: ZmFhMGFlMzZkNmIyODdhMjYxMzZhYWI3ZTcwZDEwZjBmYWJlMzYzMDRjNTBjYTY5Nzk2ZGRlZGJiMDMwMGJmNwo=
+
+

The Kubernetes Secret can be used for one or more logical volumes. Using different encryption keys, multiple tenants +can be secured with an additional isolation layer against each other.

+

StorageClass Configuration

+

A new Kubernetes StorageClass needs to be created, or an existing one needs to be configured. To use encryption on a +persistent volume claim level, the storage class has to be set for encryption.

+
Example StorageClass
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: my-encrypted-volumes
+provisioner: csi.simplyblock.io
+parameters:
+  encryption: "True" # This is important!
+  ... other parameters
+reclaimPolicy: Delete
+volumeBindingMode: Immediate
+allowVolumeExpansion: true
+
+

Create a PersistentVolumeClaim

+

When requesting a logical volume through a Kubernetes PersistentVolumeClaim, the storage class and the secret resources +have to be connected to the PVC. When picked up, simplyblock will automatically collect the keys and create the logical +volume as a fully encrypted volume.

+
Create an encrypting PersistentVolumeClaim
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  annotations:
+    simplybk/secret-name: my-encryption-keys # Encryption keys
+  name: my-encrypted-volume-claim
+spec:
+  storageClassName: my-encrypted-volumes # StorageClass
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 200Gi
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/nvme-namespaces-and-subsystems/index.html b/deployment/25.10.3/deployments/nvme-namespaces-and-subsystems/index.html new file mode 100644 index 00000000..c3656fec --- /dev/null +++ b/deployment/25.10.3/deployments/nvme-namespaces-and-subsystems/index.html @@ -0,0 +1,4712 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Nvme namespaces and subsystems - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

NVMe Namespaces and Subsystems

+ +

To connect to a storage volume, both locally and via NVMe-oF, you need a subsystem and a namespace.

+

An NVMe-oF subsystem is the exported entity that the host connects to over the fabric (RDMA, TCP). +A subsystem is identified by its unique worldwide name (NQN) and can be roughly seen as a +controller, which exposes and connects one or multiple namespaces (actual volumes) to hosts.

+

The NQN of a subsystem can contain the namespace UUID and is worldwide unique. +In simplyblock it looks as follows (the last part, :lvol:<uuid>, indicates the namespace representing the volume):

+

nqn.2023-02.io.simplyblock:136012a7-f386-4091-ae0f-4e763059e9c8:lvol:6809b758-1c73-451f-810c-210c18d6aa14

+

Together with the IP address, the fully qualified subsystem address has to be given to connect, but +in simplyblock this process is either automated (CSI, OpenStack, or Proxmox) or guided (plain Linux attach).

+
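For a plain Linux attach, the connection is typically made with nvme-cli, passing the subsystem NQN together with the transport address (all values below are placeholders):

```shell
nvme connect -t tcp -a <STORAGE_NODE_IP> -s <PORT> \
    -n nqn.2023-02.io.simplyblock:<cluster-uuid>:lvol:<lvol-uuid>
```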

It’s roughly equivalent to an NVMe controller complex — a logical device that can contain one or more namespaces.

+

Subsystems are backed by multiple queue pairs, each of which is backed by a network connection such as a TCP socket. +More queue pairs require more resources from the cluster but make the volumes faster.

+

Namespaces, on the other hand, are the actual block storage regions that hold user data. +A namespace is the NVMe analog of a "LUN" in SCSI — the thing that actually stores and serves data blocks. +Each namespace has an NSID, a size, a block format, and a UUID.

+

When a host connects to the subsystem, each namespace appears as a separate block device:

+
/dev/nvme0n1
+/dev/nvme0n2
+
+

All namespaces on the same subsystem use the same network connections to transfer IO.

+

It’s what you would use for:

+

Creating a filesystem (e.g., mkfs.ext4 /dev/nvme0n1) or +performing raw block I/O (e.g., via fio, dd, or SPDK bdevs). +The namespace is the thing you actually read and write data to.

+
+

Info

+

In simplyblock, you can define how many namespace volumes are to be created for a particular +subsystem. This allows sharing of subsystems by Linux block devices (e.g. nvme0nX), where each of them +is less performance-critical. In Kubernetes, to use different relationships (e.g. 1:10) between subsystem +and namespace, different storage classes are required.

+
+

To manually create volumes with multiple namespaces per subsystem, use:

+

sbctl lvol add lvol01 100G pool01 --max-namespace-per-subsys 10

+

This adds a new subsystem with a namespace and allows up to nine more namespaces on this subsystem. +To add new namespaces to the same subsystem, use:

+

sbctl lvol add lvol02 100G pool01 --uuid <UUID>

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/openstack/index.html b/deployment/25.10.3/deployments/openstack/index.html new file mode 100644 index 00000000..c1eb5bfe --- /dev/null +++ b/deployment/25.10.3/deployments/openstack/index.html @@ -0,0 +1,4753 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + OpenStack Integration - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

OpenStack Integration

+ +
+

Info

+

This driver is still not part of the official OpenStack support matrix.

+

We are working on getting it there.

+
+

Features Supported

+

The following features are supported: +- Thin provisioning +- Creating a volume +- Resizing (extend) a volume +- Deleting a volume +- Snapshotting a volume +- Reverting to snapshot +- Cloning a volume (copy-on-write) +- Extending an attached volume +- Multi-attaching a volume +- Volume migration (driver-supported) +- QoS +- Active/active HA support

+

Deployment

+

Depending on the fabric, it is necessary to load the Linux kernel modules on compute nodes and controller:

+

Load NVMe/TCP on Ubuntu or Debian
sudo apt-get install -y linux-modules-extra-$(uname -r)
+sudo modprobe nvme_tcp
+
+
Load NVMe/TCP on RHEL, Rocky or Alma
sudo modprobe nvme_tcp
+
+If you need the NVMe/RoCE (RDMA) fabric, or both fabrics, also run:

+

Load NVMe/RoCE on Ubuntu or Debian
sudo apt-get install -y linux-modules-extra-$(uname -r)
+sudo modprobe nvme_rdma
+
+
Load NVMe/RoCE on RHEL, Rocky or Alma
sudo modprobe nvme_rdma
+

+
Update globals.yaml
enable_cinder: "yes"
+...
+#This is a fork of the cinder-volume driver container including Simplyblock:
+cinder_volume_image: "docker.io/simplyblock/cinder-volume"
+#If Simplyblock is the only Cinder Storage Backend:
+skip_cinder_backend_check: "yes"
+
+
Update Cinder Override for Simplyblock Backend Located in /etc/kolla/config/cinder.conf
[DEFAULT]
+debug = True
+# Add Simplyblock to enabled_backends list
+enabled_backends = simplyblock
+
+[simplyblock]
+volume_driver = cinder.volume.drivers.simplyblock.driver.SimplyblockDriver
+volume_backend_name = simplyblock
+simplyblock_endpoint = <simplyblock_endpoint>
+simplyblock_cluster_uuid = <simplyblock_cluster_uuid>
+simplyblock_cluster_secret = <simplyblock_cluster_secret>
+simplyblock_pool_name = <simplyblock_pool_name>
+
+
Rerun Kolla-Ansible Deploy Command for Cinder
kolla-ansible deploy -i <inventory_file> --tags cinder
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/deployments/proxmox/index.html b/deployment/25.10.3/deployments/proxmox/index.html new file mode 100644 index 00000000..e895280d --- /dev/null +++ b/deployment/25.10.3/deployments/proxmox/index.html @@ -0,0 +1,4786 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Proxmox Integration - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Proxmox Integration

+ +

Proxmox Virtual Environment (Proxmox VE) is an open-source server virtualization platform that integrates KVM-based +virtual machines and LXC containers with a web-based management interface.

+

Simplyblock seamlessly integrates with Proxmox through its storage plugin. The storage plugin enables the automatic +provisioning of storage volumes for Proxmox's KVM virtual machines and LXC containers. Simplyblock is fully integrated +into the Proxmox user interface.

+

Once deployed, virtual machine and container images can be provisioned on simplyblock logical volumes, inheriting all performance and reliability characteristics. Volumes provisioned through the simplyblock Proxmox integration are automatically managed and provided to the hypervisor on demand. The volume lifecycle can be managed through the Proxmox UI and command line interface.

+

Install Simplyblock for Proxmox

+

Simplyblock's Proxmox storage plugin can be installed from the simplyblock apt repository. To register the simplyblock +apt repository, simplyblock offers a script to handle the repository registration automatically.

+
+

Info

+

All the following commands require root permissions for execution. It is recommended to log in as root or open a +root shell using sudo su.

+
+
Automatically register the Simplyblock Debian Repository
curl https://install.simplyblock.io/install-debian-repository | bash
+
+

If a manual registration is preferred, the repository public key must be downloaded and made available to apt. This key +is used for signature verification.

+
Install the Simplyblock Public Key
curl -o /etc/apt/keyrings/simplyblock.gpg https://install.simplyblock.io/simplyblock.key
+
+

Afterward, the repository needs to be registered for apt itself. The following line registers the apt repository.

+
Register the Simplyblock Debian Repository
echo 'deb [signed-by=/etc/apt/keyrings/simplyblock.gpg] https://install.simplyblock.io/debian stable main' | \
+    tee /etc/apt/sources.list.d/simplyblock.list
+
+

Install the Simplyblock-Proxmox Package

+

After the registration of the repository, an apt update will refresh all available package information and make the +simplyblock-proxmox package available. The update must not show any errors related to the simplyblock apt repository.

+

With the updated repository information, apt install simplyblock-proxmox installs the simplyblock storage plugin.

+
Install the Simplyblock Proxmox Integration
apt update
+apt install simplyblock-proxmox
+
+

Now, register a simplyblock storage pool with Proxmox. The new Proxmox storage can have an arbitrary name and multiple +simplyblock storage pools can be registered as long as their Proxmox names are different.

+
Enable Simplyblock as a Storage Provider
pvesm add simplyblock <NAME> \
+    --entrypoint=<CONTROL_PLANE_ADDR> \
+    --cluster=<CLUSTER_ID> \
+    --secret=<CLUSTER_SECRET> \
+    --pool=<STORAGE_POOL_NAME>
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescription
NAMEThe name of the storage pool in Proxmox.
CONTROL_PLANE_ADDRThe API address of the simplyblock control plane.
CLUSTER_IDThe simplyblock storage cluster id. The cluster id can be found using sbctl cluster list.
CLUSTER_SECRETThe simplyblock storage cluster secret. The cluster secret can be retrieved using sbctl cluster get-secret.
STORAGE_POOL_NAMEThe simplyblock storage pool name to attach.
+

After Installation

+

In the Proxmox user interface, a storage of type simplyblock is now available.

+

+

The hypervisor is now configured and can use a simplyblock storage cluster as a storage backend.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/important-notes/acronyms/index.html b/deployment/25.10.3/important-notes/acronyms/index.html new file mode 100644 index 00000000..c6465000 --- /dev/null +++ b/deployment/25.10.3/important-notes/acronyms/index.html @@ -0,0 +1,4868 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Acronyms & Abbreviations - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Acronyms & Abbreviations

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Acronym or AbbreviationExplanation
APIApplication Programming Interface
AWSAmazon Web Services
CIDRClassless Inter-Domain Routing
CLICommand Line Interface
COWCopy On Write
CPControl Plane
CSIContainer Storage Interface
DMADirect Memory Access
ECErasure Coding
HAHigh Availability
HTTPHypertext Transfer Protocol
IDIdentifier
IOInput-Output
IOMMUInput-Output Memory Management Unit
IPInternet Protocol
K8sKubernetes
LVLogical Volume
MFTMaximum Tolerable Failure
NICNetwork Interface Card
NQNNVMe Qualified Name
NVMeNon-Volatile Memory Express
NVMe-oFNVMe over Fabrics
NVMe/RoCENVMe over RDMA on Converged Ethernet
NVMe/TCPNVMe over TCP
OSOperating System
PVPersistent Volume
PVCPersistent Volume Claim
QOSQuality of Service
RAIDRedundant Array of Independent Disks
RDMARemote Direct Memory Access
ROWRedirect On Write
ROXRead Only Many
RWORead Write Once
RWXRead Write Many
SCStorage Class
SDKSoftware Development Kit
SDSSoftware Defined Storage
SPStorage Plane
SPDKStorage Performance Development Kit
SSDSolid State Drive
SSLSecure Socket Layer
TCPTransmission Control Protocol
TLSTransport Layer Security
UDPUser Datagram Protocol
UUIDUniversally Unique Identifier
VMVirtual Machine
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/important-notes/contributing/index.html b/deployment/25.10.3/important-notes/contributing/index.html new file mode 100644 index 00000000..4661f00f --- /dev/null +++ b/deployment/25.10.3/important-notes/contributing/index.html @@ -0,0 +1,4928 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Contributing - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Contributing to Simplyblock Documentation

+

Overview

+

Simplyblock's documentation is publicly available, and we welcome contributions from the community to improve clarity, fix errors, and enhance the overall quality of our documentation. While simplyblock itself is not open source, our documentation is publicly hosted on GitHub ⧉. We encourage users to provide feedback, report typos, suggest improvements, and submit fixes for documentation inconsistencies.

+

How to Contribute

+

The simplyblock documentation is built using mkdocs ⧉, specifically using the +mkdocs-material ⧉ variant.

+

Changes to the documentation can be made by changing or adding the necessary Markdown files.

+
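Since the site is built with mkdocs-material, special content such as admonitions uses its extended Markdown syntax (a brief sketch; see the mkdocs-material documentation for all supported admonition types):

```markdown
!!! note
    Notes include additional information that may be interesting
    but not crucial.
```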

1. Provide Feedback or Report Issues

+

If you notice any inaccuracies, typos, missing information, or outdated content, you can submit an issue on our GitHub +repository:

+
    +
  1. Navigate to the Simplyblock Documentation GitHub Repository ⧉.
  2. +
  3. Click on the Issues tab.
  4. +
  5. Click New Issue and provide a clear description of the problem or suggestion.
  6. +
  7. Submit the issue, and our team will review it.
  8. +
+

2. Make Edits and Submit a Pull Request (PR)

+

If you'd like to make direct changes to the documentation, follow these steps:

+
    +
  1. +

    Fork the Repository

    +
  2. +
  3. +

    Visit Simplyblock Documentation GitHub ⧉ and click Fork to create + your own copy of the repository.

    +
  4. +
  5. +

    Clone the Repository

    +
  6. +
  7. +

    Clone your fork to your local machine: +

    git clone https://github.com/YOUR_USERNAME/documentation.git
    +cd documentation
    +

    +
  8. +
  9. +

    Create a New Branch

    +
  10. +
  11. +

    Always create a new branch for your changes: +

    git checkout -b update-docs
    +

    +
  12. +
  13. +

    Make Changes

    +
  14. +
  15. +

    Edit the relevant Markdown (.md) files using a text editor or IDE. The documentation files can be found in the + /docs directory.

    +
  16. +
  17. +

    Ensure that formatting follows existing conventions.

    +
  18. +
  19. +

    Commit and Push Your Changes

    +
  20. +
  21. +

    Commit your changes with a clear message: +

    git commit -m "Fix typo in installation guide"
    +

    +
  22. +
  23. +

    Push the changes to your fork: +

    git push origin update-docs
    +

    +
  24. +
  25. +

    Create a Pull Request (PR)

    +
  26. +
  27. +

    Navigate to the original simplyblock documentation repository.

    +
  28. +
  29. Click New Pull Request and select your branch.
  30. +
  31. Provide a concise description of the changes and submit the PR.
  32. +
  33. Our team will review and merge accepted contributions.
  34. +
+

Contribution Guidelines

+
    +
  • Ensure all content remains clear, concise, and professional.
  • +
  • Follow Markdown syntax conventions used throughout the documentation.
  • +
  • Keep changes focused on documentation improvements (not product functionality).
  • +
  • Be respectful and constructive in all discussions and contributions.
  • +
+

Getting in Touch

+

If you have questions about contributing, feel free to open an issue or contact us via the simplyblock support channels.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/important-notes/documentation-conventions/index.html b/deployment/25.10.3/important-notes/documentation-conventions/index.html new file mode 100644 index 00000000..5c7e6bee --- /dev/null +++ b/deployment/25.10.3/important-notes/documentation-conventions/index.html @@ -0,0 +1,4909 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Documentation Conventions - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Documentation Conventions

+ +

Feature Stages

+

Features in simplyblock are released when reaching general availability. However, sometimes, features are made available +earlier to receive feedback from testers. Those features must be explicitly enabled and are marked in the +documentation accordingly. Features without a specific label are considered ready for production.

+

The documentation uses the following feature stage labels:

+
    +
  • General Availability: This is the default stage if nothing else is defined for the feature. In this stage, the + feature is considered ready for production.
  • +
  • Technical Preview: The feature is provided for testing and feedback. It is not regarded as stable or complete, and changes that break backward compatibility may occur. Features in this stage are not considered ready for production and must be explicitly enabled before use.
  • +
+

Admonitions (Call-Outs)

+

Notes

+

Notes include additional information that may be interesting but not crucial.

+
+

Note

+

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod +nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor +massa, nec semper lorem quam in massa.

+
+

Recommendations

+

Recommendations include best practices and recommendations.

+
+

Recommendation

+

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod +nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor +massa, nec semper lorem quam in massa.

+
+

Infos

+

Information boxes include background and links to additional information.

+
+

Info

+

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod +nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor +massa, nec semper lorem quam in massa.

+
+

Warnings

+

Warnings contain crucial information that should be considered before proceeding.

+
+

Warning

+

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod +nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor +massa, nec semper lorem quam in massa.

+
+

Dangers

+

Dangers contain crucial information about operations that can lead to harmful consequences, such as data loss and irreversible damage.

+
+

Danger

+

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod +nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor +massa, nec semper lorem quam in massa.

+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/important-notes/index.html b/deployment/25.10.3/important-notes/index.html new file mode 100644 index 00000000..4463f881 --- /dev/null +++ b/deployment/25.10.3/important-notes/index.html @@ -0,0 +1,4673 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Important Notes - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Important Notes

+ +

Simplyblock is a high-performance yet reliable distributed block storage optimized for Kubernetes that is compatible with bare-metal and virtualized Linux environments. It also provides integrations with other environments, such as Proxmox.

+

To enable the successful operation of your new simplyblock cluster, this section defines some initial conventions and +terminology when working with this documentation.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/important-notes/known-issues/index.html b/deployment/25.10.3/important-notes/known-issues/index.html new file mode 100644 index 00000000..086d9053 --- /dev/null +++ b/deployment/25.10.3/important-notes/known-issues/index.html @@ -0,0 +1,4739 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Known Issues - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Known Issues

+ +

Kubernetes

+
    +
  • Currently, it is not possible to resize a logical volume clone. The resize command succeeds and lsblk shows the new size, but remounting the filesystem with the resize option fails.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/important-notes/terminology/index.html b/deployment/25.10.3/important-notes/terminology/index.html new file mode 100644 index 00000000..bb3ce250 --- /dev/null +++ b/deployment/25.10.3/important-notes/terminology/index.html @@ -0,0 +1,5607 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Terminology - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Terminology

+ + +

Storage Cluster

+

A simplyblock storage cluster is a group of interconnected storage nodes that work together to provide a scalable, +fault-tolerant, and high-performance storage system. Unlike traditional single-node storage solutions, storage clusters +distribute data across multiple nodes, ensuring redundancy, load balancing, and resilience against hardware failures. To +optimize data availability and efficiency, these clusters can be configured using different architectures, including +replication and erasure coding. Storage clusters are commonly used in cloud storage, high-performance computing (HPC), +and enterprise data centers, enabling seamless scalability and improved data accessibility across distributed +environments.

+

Storage Node

+

A storage node in a simplyblock distributed storage cluster is a physical or virtual machine that contributes storage resources to the cluster. It provides a portion of the overall storage capacity and participates in the data distribution, redundancy, and retrieval processes. In simplyblock, each logical volume is attached to particular primary and secondary storage nodes via the NVMe-oF protocol. The nodes run the in-memory data services for this volume on the hot data path and provide access to the underlying data. The data stored on such a volume is distributed within the cluster following a defined placement logic.

+

Storage Pool

+

A storage pool in simplyblock groups logical volumes and assigns them optional quotas (caps) of capacity, IOPS, and +read-write throughput. Storage pools are defined on a cluster level and can span logical volumes across multiple +storage nodes. Therefore, storage pools implement a tenant concept.

+

Storage Device

+

A storage device is a physical or virtualized NVMe drive in simplyblock, but not a partition. It is identified by its +PCIe address and serial number. Simplyblock currently supports a wide range of different types of NVMe drives with +varying characteristics of performance, features, and capacities.

+

NVMe (Non-Volatile Memory Express)

+

NVMe (Non-Volatile Memory Express) is a high-performance storage protocol explicitly designed for flash-based storage +devices like SSDs, leveraging the PCIe (Peripheral Component Interconnect Express) interface for ultra-low latency and +high throughput. Unlike traditional protocols such as SATA or SAS, NVMe takes advantage of parallelism and multiple +queues, significantly improving data transfer speeds and reducing CPU overhead. It is widely used in enterprise storage, +cloud computing, and high-performance computing (HPC) environments, where speed and efficiency are critical. NVMe is +also the foundation for NVMe-over-Fabrics (NVMe-oF), which extends its benefits across networked storage systems, +enhancing scalability and flexibility in distributed environments.

+

NVMe-oF (NVMe over Fabrics)

+

NVMe-oF (NVMe over Fabrics) is an extension of the NVMe (Non-Volatile Memory Express) protocol that enables +high-performance, low-latency access to remote NVMe storage devices over network fabrics such as TCP, RDMA (RoCE, +iWARP), and Fibre Channel (FC). Unlike traditional networked storage protocols, NVMe-oF maintains the efficiency and +parallelism of direct-attached NVMe storage while allowing disaggregation of compute and storage resources. This +architecture improves scalability, resource utilization, and flexibility in cloud, enterprise, and high-performance +computing (HPC) environments. NVMe-oF is a key technology in modern software-defined and disaggregated storage +infrastructures, providing fast and efficient remote storage access.

+

NVMe/TCP (NVMe over TCP)

+

NVMe/TCP (NVMe over TCP) is a transport protocol that extends NVMe-over-Fabrics (NVMe-oF) using standard TCP/IP networks +to enable high-performance, low-latency access to remote NVMe storage. By leveraging existing Ethernet infrastructure, +NVMe/TCP eliminates the need for specialized networking hardware such as RDMA (RoCE or iWARP) or Fibre Channel (FC), +making it a cost-effective and easily deployable solution for cloud, enterprise, and data center storage environments. +It maintains the efficiency of NVMe, providing scalable, high-throughput, and low-latency remote storage access while +ensuring broad compatibility with modern network architectures.

+

NVMe/RoCE (NVMe over RDMA over Converged Ethernet)

+

NVMe/RoCE (NVMe over RoCE) is a high-performance storage transport protocol that extends NVMe-over-Fabrics (NVMe-oF) +using RDMA over Converged Ethernet (RoCE) to enable ultra-low-latency and high-throughput access to remote NVMe storage +devices. By leveraging Remote Direct Memory Access (RDMA), NVMe/RoCE bypasses the CPU for data transfers, reducing +latency and improving efficiency compared to traditional TCP-based storage protocols. This makes it ideal for +high-performance computing (HPC), enterprise storage, and latency-sensitive applications such as financial trading and +AI workloads. NVMe/RoCE requires lossless Ethernet networking and specialized NICs to fully utilize its performance +advantages.

+

Multipathing

+

Multipathing is a storage networking technique that enables multiple physical paths between a compute system and a +storage device to improve redundancy, load balancing, and fault tolerance. Multipathing enhances performance and +reliability by using multiple connections, ensuring continuous access to storage even if one path fails. It is commonly +implemented in Fibre Channel (FC), iSCSI, and NVMe-oF (including NVMe/TCP and NVMe/RoCE) environments, where high +availability and optimized data transfer are critical.

+

Management Node

+

A management node is a containerized component that orchestrates, monitors, and controls the distributed storage +cluster. It forms part of the control plane, managing cluster-wide configurations, provisioning logical volumes, +handling metadata operations, and ensuring overall system health. Management nodes facilitate communication between +storage nodes and client applications, enforcing policies such as access control, data placement, and fault tolerance. +They also provide an interface for administrators to interact with the storage system via the Simplyblock CLI or API, +enabling seamless deployment, scaling, and maintenance of the storage infrastructure.

+

Distributed Erasure Coding

+

Distributed erasure coding is a data protection technique used in distributed storage systems to provide fault tolerance and redundancy while minimizing storage overhead. It works by breaking data into k data fragments and generating m parity fragments using mathematical algorithms. These k + m fragments are then distributed across multiple storage nodes, allowing the system to reconstruct lost or corrupted data from any k available fragments. Compared to traditional replication, erasure coding offers greater storage efficiency while maintaining high availability, making it ideal for cloud storage, object storage, and high-performance computing (HPC) environments where durability and cost-effectiveness are critical.

+

Simplyblock supports all combinations of k = 1,2,4 and m = 1,2. The erasure coding implementation uses highly +performance-optimized algorithms specific to the selected schema.

+
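As a worked example of the arithmetic (not specific to simplyblock's implementation), the raw-to-usable capacity ratio of a k+m scheme is (k + m) / k, so k=4, m=2 stores 4 TB of data on 6 TB of raw capacity while tolerating the loss of any two fragments:

```shell
# Raw-to-usable overhead of an erasure-coding scheme: (k + m) / k.
# Illustrative arithmetic only.
k=4; m=2
awk -v k="$k" -v m="$m" \
    'BEGIN { printf "k=%d m=%d overhead=%.2fx\n", k, m, (k + m) / k }'
# k=4 m=2 overhead=1.50x
```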

Replication

+

Replication in storage is the process of creating and maintaining identical copies of data across multiple storage +devices or nodes to ensure fault tolerance, high availability, and disaster recovery. Replication can occur +synchronously, where data is copied in real-time to ensure consistency, or asynchronously, where updates are delayed to +optimize performance. It is commonly used in distributed storage systems, cloud storage, and database management to +protect against hardware failures and data loss. By maintaining redundant copies, replication enhances data resilience, +load balancing, and accessibility, making it a fundamental technique for enterprise and cloud-scale storage solutions. +Simplyblock supports synchronous replication.

+

RAID (Redundant Array of Independent Disks)

+

RAID (Redundant Array of Independent Disks) is a data storage technology that combines multiple physical drives into a +single logical unit to improve performance, fault tolerance, or both. RAID configurations vary based on their purpose: +RAID 0 (striping) enhances speed but offers no redundancy, RAID 1 (mirroring) duplicates data for high availability, and +RAID 5, 6, and 10 use combinations of striping and parity to balance performance and fault tolerance. RAID is widely +used in enterprise storage, servers, and high-performance computing to protect against drive failures and optimize data +access. It can be implemented in hardware controllers or software-defined storage solutions, depending on system +requirements.

+

Quality of Service

+

Quality of Service (QoS) refers to the ability to define and enforce performance guarantees for storage workloads by +controlling key metrics such as IOPS (Input/Output Operations Per Second), throughput, and latency. QoS ensures that +different applications receive appropriate levels of performance, preventing resource contention in multi-tenant +environments. By setting limits and priorities for Logical Volumes (LVs), Simplyblock allows administrators to allocate +storage resources efficiently, ensuring critical workloads maintain consistent performance even under high demand. +This capability is essential for optimizing storage operations, improving reliability, and meeting service-level +agreements (SLAs) in distributed cloud-native environments. In simplyblock, it is possible to limit (cap) IOPS or throughput +of individual logical volumes or entire storage pools, and additionally to create QoS classes and provide a fair +relative resource allocation (IOPS and/or throughput) to each class. Logical volumes can be assigned to classes.

+
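The fair relative allocation can be illustrated with plain arithmetic (this only sketches proportional sharing; it is not simplyblock's actual scheduler): each class receives a share of the total budget proportional to its weight.

```shell
# Proportional sharing of an IOPS budget across QoS classes.
# Weights and budget are illustrative, not simplyblock defaults.
total=100000
awk -v total="$total" 'BEGIN {
  n = split("2 1 1", w, " ")          # class weights (assumed)
  for (i = 1; i <= n; i++) sum += w[i]
  for (i = 1; i <= n; i++)
    printf "class%d: %d IOPS\n", i, total * w[i] / sum
}'
# class1: 50000 IOPS
# class2: 25000 IOPS
# class3: 25000 IOPS
```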

SPDK (Storage Performance Development Kit)

+

Storage Performance Development Kit (SPDK) is an open-source set of libraries and tools designed to optimize +high-performance, low-latency storage applications by bypassing traditional kernel-based I/O processing. SPDK leverages +user-space and polled-mode drivers to eliminate context switching and interrupts, significantly reducing CPU overhead +and improving throughput. It is particularly suited for NVMe storage, NVMe-over-Fabrics (NVMe-oF), and iSCSI target +acceleration, making it a key technology in software-defined storage solutions. By providing a highly efficient +framework for storage processing, SPDK enables modern storage architectures to achieve high IOPS, reduced latency, and +better resource utilization in cloud and enterprise environments.

+

Volume Snapshot (Copy-On-Write, Reverse)

+

A volume snapshot is a point-in-time copy of a storage volume, file system, or virtual machine that captures its state +without duplicating the entire data set. Snapshots enable rapid data recovery, backup, and versioning by preserving only +the changes made since the last snapshot.

+

In the world of storage, different snapshot concepts exist. Simplyblock uses copy-on-write snapshots, which means that +taking the snapshot is an instant operation since no data has to be moved.

+

Later on, volumes can be instantly reverted to a snapshot and copy-on-write volumes can be instantly created (cloned) +from a snapshot.

+

Due to the entirely distributed nature of the underlying storage in simplyblock, dependent snapshots and copy-on-write +clones do not affect the performance of the originating volume or each other.

+
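The space behavior of copy-on-write snapshots can be sketched with generic arithmetic (not specific to simplyblock's on-disk format): a snapshot consumes no capacity when taken and afterwards grows only with the data overwritten in the origin volume.

```shell
# Capacity held by a copy-on-write snapshot: only blocks overwritten
# after the snapshot was taken are preserved. Illustrative numbers.
volume_gib=100
changed_pct=2
awk -v v="$volume_gib" -v p="$changed_pct" \
    'BEGIN { printf "snapshot space: %.1f GiB of %d GiB\n", v * p / 100, v }'
# snapshot space: 2.0 GiB of 100 GiB
```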

Volume Clone

+

A volume clone is an exact, fully independent copy of a storage volume, virtual machine, or dataset that can be used for testing, development, backup, or deployment purposes. Unlike snapshots, which capture a point-in-time state and depend on the original data, a clone is a complete duplication that can operate separately without relying on the source. Cloning is commonly used in enterprise storage, cloud environments, and containerized applications to create quick, reproducible environments for workloads without affecting the original data. Storage systems often use thin cloning to optimize space by sharing unchanged data blocks between the original and the clone, reducing storage overhead.

+

CoW (Copy-on-Write)

+

Copy-on-Write (COW) is an efficient data management technique used in snapshots, cloning, and memory management to +optimize storage usage and performance. Instead of immediately duplicating data, COW defers copying until a modification +is made, ensuring that only changed data blocks are written to a new location. This approach minimizes storage overhead, +speeds up snapshot creation, and reduces unnecessary data duplication.

+

+ +

Kubernetes

+

Kubernetes (K8s) ⧉ is an open-source container orchestration +platform that automates the deployment, scaling, and management of containerized applications across clusters of +machines. Initially developed by Google and now maintained by +the Cloud Native Computing Foundation (CNCF) ⧉, +Kubernetes provides a robust framework for load balancing, self-healing, storage orchestration, and automated rollouts +and rollbacks. It manages application workloads using Pods, Deployments, Services, and Persistent Volumes (PVs), +ensuring scalability and resilience. By abstracting underlying infrastructure, Kubernetes enables organizations to +efficiently run containerized applications across on-premises, cloud, and hybrid environments, making it a cornerstone +of modern cloud-native computing.

+

Kubernetes CSI (Container Storage Interface)

+

The Kubernetes Container Storage Interface (CSI) ⧉ +is a standardized API enabling external storage providers to integrate their storage solutions with Kubernetes. CSI +allows Kubernetes to dynamically provision, attach, mount, and manage Persistent Volumes (PVs) across different storage +backends without requiring changes to the Kubernetes core. Using a CSI driver, storage vendors can offer block and file +storage to Kubernetes workloads, supporting advanced features like snapshotting, cloning, and volume expansion. CSI +enhances Kubernetes’ flexibility by enabling seamless integration with cloud, on-premises, and software-defined storage +solutions, making it the de facto method for managing storage in containerized environments.

+

Pod

+

A Pod in Kubernetes is the smallest and most basic deployable unit, representing a single instance of a running process +in a cluster. A Pod can contain one or multiple containerized applications that share networking, storage, and runtime +configurations, enabling efficient communication and resource sharing. Kubernetes schedules and manages Pods, ensuring +they are deployed on suitable worker nodes based on resource availability and constraints. Since Pods are ephemeral, +they are often managed by higher-level controllers like Deployments, StatefulSets, or DaemonSets to maintain +availability and scalability. Pods facilitate scalable, resilient, and cloud-native application deployments across +diverse infrastructure environments.

+

Persistent Volume

+

A Persistent Volume (PV) is a cluster-wide Kubernetes storage resource that provides durable and independent storage for
+Pods, allowing data to persist beyond the lifecycle of individual containers. Unlike ephemeral storage, which is tied to
+a Pod’s runtime, a PV is provisioned either statically by an administrator or dynamically using StorageClasses.
+Applications request storage by creating Persistent Volume Claims (PVCs), which Kubernetes binds to an available PV
+based on capacity and access requirements. Persistent Volumes support different access modes, such as ReadWriteOnce
+(RWO), ReadOnlyMany (ROX), and ReadWriteMany (RWX), and are backed by various storage solutions, including local disks,
+network-attached storage (NAS), and cloud-based storage services.

+

Persistent Volume Claim

+

A Persistent Volume Claim (PVC) is a request for Kubernetes storage made by a Pod, allowing it to dynamically or +statically access a Persistent Volume (PV). PVCs specify storage requirements such as size, access mode (ReadWriteOnce, +ReadOnlyMany, or ReadWriteMany), and storage class. Kubernetes automatically binds a PVC to a suitable PV based on these +criteria, abstracting the underlying storage details from applications. This separation enables dynamic storage +provisioning, ensuring that Pods can seamlessly consume persistent storage resources without needing direct knowledge of +the storage infrastructure. When a PVC is deleted, its associated PV handling depends on its reclaim policy (Retain, +Recycle, or Delete), determining whether the storage is preserved, cleared, or removed.

+

Storage Class

+

A StorageClass is a Kubernetes abstraction that defines different types of storage available within a cluster, enabling +dynamic provisioning of Persistent Volumes (PVs). It allows administrators to specify storage requirements such as +performance characteristics, replication policies, and backend storage providers (e.g., cloud block storage, network +file systems, or distributed storage systems). Each StorageClass includes a provisioner, which determines how volumes +are created and parameters that define specific configurations for the underlying storage system. By referencing a +StorageClass in a Persistent Volume Claim (PVC), users can automatically provision storage that meets their +application's needs without manually pre-allocating PVs, streamlining storage management in cloud-native environments.

+ +

TCP (Transmission Control Protocol)

+

Transmission Control Protocol (TCP) is a core communication protocol in the Internet Protocol (IP) suite that ensures +reliable, ordered, and error-checked data delivery between devices over a network. TCP operates at the transport +layer and establishes a connection-oriented communication channel using a three-way handshake process to synchronize +data exchange. It segments large data streams into smaller packets, ensures their correct sequencing, and retransmits +lost packets to maintain data integrity. TCP is widely used in applications requiring stable and accurate data +transmission, such as web browsing, email, and file transfers, making it a fundamental protocol for modern networked +systems.
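The connection-oriented, ordered byte-stream behavior described above can be seen in a minimal loopback sketch. This uses Python's standard `socket` module purely as an illustration and is unrelated to simplyblock:

```python
# Minimal TCP loopback exchange: connect (handshake), send, receive.
import socket
import threading

def echo_once(server):
    conn, _ = server.accept()    # completes the TCP three-way handshake
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)       # echo back; delivery and ordering guaranteed

server = socket.create_server(("127.0.0.1", 0))  # port 0: OS picks a free port
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello tcp")
    reply = client.recv(1024)

server.close()
print(reply)  # b'hello tcp'
```

Unlike UDP below, the client cannot send anything until `create_connection` has established the session, and the kernel transparently handles retransmission and sequencing.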

+

UDP (User Datagram Protocol)

+

User Datagram Protocol (UDP) is a lightweight, connectionless communication protocol in the Internet Protocol (IP) suite +that enables fast, low-latency data transmission without guaranteeing delivery, order, or error correction. Unlike +Transmission Control Protocol (TCP), UDP does not establish a connection before sending data, making it more efficient +for applications prioritizing speed over reliability. It is commonly used in real-time communications, streaming +services, online gaming, and DNS lookups, where occasional data loss is acceptable in exchange for reduced latency and +overhead.
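The connectionless nature of UDP can be demonstrated with a short loopback sketch (Python's standard `socket` module, shown purely for illustration): the sender transmits a datagram without any prior handshake.

```python
# Connectionless datagram exchange over loopback: no connection setup.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0: OS picks a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", ("127.0.0.1", port))   # fire-and-forget, no handshake

data, addr = receiver.recvfrom(1024)     # one self-contained datagram
sender.close()
receiver.close()
print(data)  # b'ping'
```

On a real network, such a datagram may be lost or reordered without notice; loopback delivery is reliable here only because it never leaves the host.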

+

IP (Internet Protocol), IPv4, IPv6

+

Internet Protocol (IP) is the fundamental networking protocol that enables devices to communicate over the Internet and +private networks by assigning unique IP addresses to each device. Operating at the network layer of the Internet +Protocol suite, IP is responsible for routing and delivering data packets from a source to a destination based on their +addresses. It functions in a connectionless manner, meaning each packet is sent independently and may take different +paths to reach its destination. IP exists in two primary versions: IPv4, which uses 32-bit addresses, and IPv6, which +uses 128-bit addresses for expanded address space. IP works alongside transport layer protocols like TCP and UDP to +ensure effective data transmission across networks.

+

Netmask

+

A netmask is a numerical value used in IP networking to define a subnet's range of IP addresses. It works by +masking a portion of an IP address to distinguish the network part from the host part. A netmask consists of a series of +binary ones (1s) followed by zeros (0s), where the ones represent the network portion and the zeros indicate the host +portion. Common netmasks include 255.255.255.0 (/24) for standard subnets and 255.255.0.0 (/16) for larger networks. +Netmasks are essential in subnetting, routing, and IP address allocation, ensuring efficient traffic management and +communication within networks.
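The relationship between dotted-quad netmasks and prefix lengths can be checked with a small sketch; Python's standard `ipaddress` module is used here purely for illustration:

```python
# Converting between dotted-quad netmasks and prefix lengths.
import ipaddress

net = ipaddress.ip_network("192.168.1.0/255.255.255.0")
print(net.prefixlen)        # 24 (the /24 form of 255.255.255.0)
print(net.netmask)          # 255.255.255.0
print(net.num_addresses)    # 256 addresses in this subnet

wide = ipaddress.ip_network("10.0.0.0/255.255.0.0")
print(wide.prefixlen)       # 16 (a larger network, /16)
```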

+

CIDR (Classless Inter-Domain Routing)

+

Classless Inter-Domain Routing (CIDR) is a method for allocating and managing IP addresses more efficiently than the +traditional class-based system. CIDR uses variable-length subnet masking (VLSM) to define IP address ranges with +flexible subnet sizes, reducing wasted addresses and improving routing efficiency. CIDR notation represents an IP +address followed by a slash (/) and a number indicating the number of significant bits in the subnet mask (e.g., +192.168.1.0/24 means the first 24 bits define the network, leaving 8 bits for host addresses). Widely used in modern +networking and the internet, CIDR helps optimize IP address distribution and enhance routing aggregation, reducing the +size of global routing tables.
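As a quick illustration (again using Python's standard `ipaddress` module, which is unrelated to simplyblock), CIDR arithmetic such as subnet splitting and membership tests can be explored directly:

```python
# Working with CIDR notation: masks, subnet splitting, membership.
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
print(net.netmask)          # 255.255.255.0 (the /24 mask in dotted form)
print(net.num_addresses)    # 256 (network + broadcast + 254 usable hosts)

# VLSM in action: a /24 splits into two /25 subnets
subnets = [str(s) for s in net.subnets(prefixlen_diff=1)]
print(subnets)              # ['192.168.1.0/25', '192.168.1.128/25']

# Membership test: does an address fall inside the CIDR range?
print(ipaddress.ip_address("192.168.1.42") in net)  # True
```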

+

Hyper-Converged

+

Hyper-converged refers to an IT infrastructure model that integrates compute, storage, and networking into a single, +software-defined system. Unlike traditional architectures that rely on separate hardware components for each function, +hyper-converged infrastructure (HCI) leverages virtualization and centralized management to streamline operations, +improve scalability, and reduce complexity. This approach enhances performance, fault tolerance, and resource efficiency +by distributing workloads across multiple nodes, allowing seamless scaling by adding more nodes. HCI is widely +used in cloud environments, virtual desktop infrastructure (VDI), and enterprise data centers for its ease of +deployment, automation capabilities, and cost-effectiveness.

+

Disaggregated

+

Disaggregated refers to an IT architecture approach where compute, storage, and networking resources are separated into +independent components rather than tightly integrated within the same physical system. In disaggregated storage, +for example, storage resources are managed independently of compute nodes, allowing for flexible scaling, improved +resource utilization, and reduced hardware dependencies. This contrasts with traditional or hyper-converged +architectures, where these resources are combined. Disaggregated architectures are widely used in cloud computing, +high-performance computing (HPC), and modern data centers to enhance scalability, cost-efficiency, and operational +flexibility while optimizing performance for dynamic workloads.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/index.html b/deployment/25.10.3/index.html new file mode 100644 index 00000000..0451bc40 --- /dev/null +++ b/deployment/25.10.3/index.html @@ -0,0 +1,4805 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Home - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Welcome to the Simplyblock Documentation

+

Welcome to the Simplyblock Documentation, your comprehensive resource for understanding, deploying, and managing +simplyblock's cloud-native, high-performance storage platform. This documentation provides detailed information on +architecture, installation, configuration, and best practices, ensuring you have the necessary guidance to maximize +the efficiency and reliability of your simplyblock deployment.

+

Getting Started

+
+
    +
  • +

    Learn the basics

    +
    +

    General information about simplyblock, the documentation, and +important terms. Read here first.

    +

    Important Notes

    +
  • +
  • +

    Plan the deployment

    +
    +

    Before starting to deploy simplyblock, take a moment to make yourself +familiar with the required node sizing and other considerations for +a performant and stable cluster operation.

    +

    Deployment Planning

    +
  • +
  • +

    Deploy Simplyblock

    +
    +

    Deploy simplyblock on Kubernetes, bare metal, or virtualized +Linux machines. Choose between hyper-converged, disaggregated, +or hybrid deployment models.

    +

    Simplyblock Deployment

    +
  • +
  • +

    Operate Simplyblock

    +
    +

    After the installation of a simplyblock cluster, learn how to +operate and maintain it.

    +

    Simplyblock Usage
    + Simplyblock Operations

    +
  • +
+
+

Keep Updated

+

Sign up for our newsletter and keep updated on what's happening at simplyblock.

+ + + + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/cluster-upgrade/index.html b/deployment/25.10.3/maintenance-operations/cluster-upgrade/index.html new file mode 100644 index 00000000..0e2cea0a --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/cluster-upgrade/index.html @@ -0,0 +1,4828 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Upgrading a Cluster - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Upgrading a Cluster

+ +

Simplyblock clusters consist of two independent parts: a control plane with management nodes, and a storage plane with
+storage nodes. A single control plane can be used to manage multiple storage planes.

+

The control plane and storage planes can be updated independently. It is, however, not recommended to run an upgraded +control plane without upgrading the storage planes.

+
+

Recommendation

+

If multiple storage planes are connected to a single control plane, it is recommended to upgrade the control plane +first.

+
+

Upgrading the control plane and storage cluster is currently not an online operation and requires downtime. Planning an
+upgrade as part of a maintenance window is recommended. Upgrades are expected to become an online operation in a future release.

+

Upgrading the CLI

+

Before starting a cluster upgrade, the CLI (sbctl) must be updated on all storage and control plane nodes.

+

This can be achieved using the same command used during the initial installation. It is important, though, to provide
+the --upgrade parameter to pip to ensure that an upgrade happens.

+
sudo pip install sbctl --upgrade
+
+

Upgrading a Control Plane

+

This section outlines the process of upgrading the control plane. An upgrade introduces new versions of the management +and monitoring services.

+

To upgrade a control plane, the following command must be executed:

+
sudo sbctl cluster update <CLUSTER_ID> --cp-only true
+
+

After issuing the command, the individual management services will be upgraded and restarted on all management nodes.

+

Upgrading a Storage Plane

+

To upgrade the storage plane, the following steps are performed for each storage node. From the control plane,
+issue the following commands.

+
+

Warning

+

Never take multiple storage nodes offline at the same time. Storage nodes must be updated in a round-robin fashion. In
+between nodes, it is important to wait until the cluster is in the ACTIVE state again and has finished the REBALANCING task.

+
+
sudo sbctl storage-node suspend <NODE_ID>
+sudo sbctl storage-node shutdown <NODE_ID> 
+
+

If the shutdown doesn't work by itself, you may safely force a shutdown using the --force parameter.

+
sudo sbctl storage-node shutdown <NODE_ID> --force 
+
+

Ensure the node has become offline before continuing.

+
sudo sbctl storage-node list 
+
+

Next, a redeployment must be executed on the storage node itself. To achieve that, SSH into the storage node and run the following command.

+
sudo sbctl storage-node deploy
+
+

Finally, the new storage node deployment can be restarted from the control plane.

+
sudo sbctl --dev storage-node restart <NODE-ID> --spdk-image <UPGRADE SPDK IMAGE>
+
+
+

Note

+

The upgrade SPDK image can be found in the env_var file on the storage node, located at /usr/local/lib/python3.9/site-packages/simplyblock_core/env_var.

+
+

Once the node is restarted, wait until the cluster has stabilized. Depending on the capacity of a storage node, this can take a few minutes.
+The status of the cluster can be checked via the cluster listing or by listing the tasks and checking their progress.

+
sudo sbctl cluster list
+sudo sbctl cluster list-tasks <CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/find-secondary-node/index.html b/deployment/25.10.3/maintenance-operations/find-secondary-node/index.html new file mode 100644 index 00000000..edf5afe2 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/find-secondary-node/index.html @@ -0,0 +1,4685 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Finding the Secondary Node - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Finding the Secondary Node

+ +

Simplyblock, in high-availability mode, creates two connections per logical volume: a primary and a secondary +connection.

+

The secondary connection will be used in case of issues or failures of the primary storage node which owns the logical +volume.

+

For debugging purposes, sometimes it is useful to find out which host is used as the secondary for a specific primary +storage node. This can be achieved using the command line tool sbctl by asking for the details of +the primary storage node and grepping for the secondary id.

+
Find secondary for a primary
sbctl storage-node get <NODE_ID> | grep secondary_node_id
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/index.html b/deployment/25.10.3/maintenance-operations/index.html new file mode 100644 index 00000000..bc7e0cbf --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/index.html @@ -0,0 +1,4675 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Operations - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Operations

+ +

Ensuring data resilience and maintaining cluster health are critical aspects of managing a simplyblock storage +deployment. This section covers best practices for backing up and restoring individual volumes or entire clusters, +helping organizations safeguard their data against failures, corruption, or accidental deletions.

+

Additionally, simplyblock provides comprehensive monitoring capabilities using built-in Prometheus and Grafana for +real-time visualization of cluster health, I/O statistics, and performance metrics.

+

This section details how to configure and use these monitoring tools, ensuring optimal performance, early issue +detection, and proactive storage management in cloud-native and enterprise environments.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/manual-restarting-nodes/index.html b/deployment/25.10.3/maintenance-operations/manual-restarting-nodes/index.html new file mode 100644 index 00000000..7c497753 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/manual-restarting-nodes/index.html @@ -0,0 +1,4862 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Stopping and Manually Restarting a Storage Node - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Stopping and Manually Restarting a Storage Node

+ +

There are a few reasons to manually restart a storage node:
+- A storage node became unavailable and the auto-restart did not work
+- A cluster upgrade
+- Planned storage node maintenance

+
+

Critical

+

There is an auto-restart functionality, which restarts a storage node in case the monitoring service detects +an issue with that specific node. This can be the case if one of the containers exited, after a reboot +of the host, or because of an internal node error which causes the management interface to become +unresponsive. The auto-restart functionality retries multiple times. It will not work in one of +the following cases:

+
    +
  • The cluster is suspended (e.g. two or more storage nodes are offline)
  • +
  • The RPC interface is responsive and the container is up, but the storage node has another health issue
  • +
The host or Docker service is not available or hanging (e.g., a network issue)
  • +
Too many retries (e.g., because there is a problem with the lvolstore recovering some of the logical volumes)
  • +
+

In these cases, a manual restart is required.

+
+

Shutdown of Storage Nodes

+
+

Warning

+

Nodes can only be restarted from offline state!

+

It is important to ensure that the cluster is not in a degraded state and that all other nodes are online
+before shutting down a storage node for maintenance or upgrades! Otherwise, loss of availability (I/O interruption) may occur!

+
+

Suspending a storage node and then shutting it down:

+
Shutdown storage node
sbctl storage-node suspend <NODE_ID> 
+sbctl storage-node shutdown <NODE_ID> 
+
+

If that does not work, it is okay to forcefully shut down the storage node.

+
Shutdown storage node forcefully
sbctl storage-node shutdown <NODE_ID> --force
+
+

Storage Node in Offline State

+

Note that while a storage node is in the offline state, the cluster is in a degraded state.
+Write and read performance can be impacted, and if another node goes offline, I/O will be interrupted.
+Therefore, it is recommended to keep nodes in the offline state for as short a time as possible!

+

If a longer maintenance window (hours to weeks) is required, it is recommended to migrate
+the storage node to another host for the time being. The alternative host does not require NVMe devices.
+Node migration is entirely automated. Later, the storage node can be migrated back to its original host.

+

Restarting a Storage Node

+

A storage node can be restarted using the following command:

+
Restarting storage node
sbctl storage-node restart <NODE_ID> 
+
+

In rare cases, the restart may hang. If this happens, it is okay to forcefully shut down and forcefully
+restart the storage node:

+
Restarting storage node
sbctl storage-node restart <NODE_ID> --force 
+
+

Restarting Docker Service

+
+

Warning

+

This applies only to disaggregated storage nodes running under Docker (non-Kubernetes setups).

+
+

If there is a problem with the entire Docker service on a host, the Docker service may require a restart. +In such a case, auto-restart will not be able to automatically self-heal the storage node. This happens because the +container responsible for self-healing and auto-restarting (SNodeAPI) itself does not respond anymore.

+
Restarting docker service
sudo systemctl restart docker --force
+
+

After restarting the Docker service, the auto-restart will start to self-heal the storage node after a short delay. +A manual restart of the storage node is not required.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/migrating-storage-node/index.html b/deployment/25.10.3/maintenance-operations/migrating-storage-node/index.html new file mode 100644 index 00000000..ae9a3092 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/migrating-storage-node/index.html @@ -0,0 +1,4978 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Migrating a Storage Node - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Migrating a Storage Node

+ +

Simplyblock storage clusters are designed to be always-on. That means a storage node migration is an online operation
+that doesn't require explicit maintenance windows or storage downtime.

+

Storage Node Migration

+

Migrating a storage node is a three-step process. First, the new storage node is pre-deployed. After that, the old
+storage node must be shut down properly and restarted (migrated) with the new storage node's API address.
+Finally, the new storage node is made the primary storage node.

+
+

Warning

+

Between each process step, it is required to wait for the storage node migration tasks to complete. Otherwise, the
+system's performance may be impacted or, worse, data loss may occur.

+
+

As part of the process, the existing storage node id will be moved to the new host machine. All logical volumes +allocated on the old storage node will be moved to the new storage node and will automatically be reconnected.

+

First-Stage Storage Node Deployment

+

To install the first stage of a storage node, the installation guide for the selected environment should be followed.

+

The process will diverge after executing the initial deployment command sbctl storage-node deploy. +If the command finishes successfully, resume from the next section of this page.

+ +

Preparing the New Storage Host

+

The new storage host must be prepared before a storage node can be migrated. It must fulfill the +pre-requisites for a storage node according to the installation documentation for the selected +installation method.

+

To prepare the new storage host, the following commands must be executed.

+
Preparing the configuration
sbctl storage-node configure \
+    --max-lvol=<MAX_LVOL> \
+    --max-size=<MAX_SIZE> \
+    [--nodes-per-socket=<NUM_OF_NODES>] 
+
+
Preparing the instance
sbctl storage-node deploy [--isolate-cores --ifname=<IFNAME>] 
+
+

The full list of parameters for either command can be found in the +CLI documentation.

+

Restart Old Storage Node

+
+

Warning

+

Before migrating the storage node on a storage host, the old storage node must be put into the offline state.

+

If the storage node is not yet offline, it can be forced into offline state using the following command.

+
Shutdown storage node on old instance
sbctl storage-node shutdown <NODE_ID> --force
+
+
+

To start the migration process of logical volumes, the old storage node needs to be restarted with the new storage +node's API address.

+

In this example, it is assumed that the new storage node's IP address is 192.168.10.100. The IP address must be +changed according to the real-world setup.

+
+

Danger

+

Providing the wrong IP address can lead to service interruption and data loss.

+
+

To restart the node, the following command must be run:

+
Restarting a storage node to initiate the migration
sbctl storage-node restart <NODE_ID> --node-addr=<NEW_NODE_IP>:5000
+
+
+

Warning

+

The parameter --node-addr expects the API endpoint of the new storage node. This API is reachable on port 5000. +It must be ensured that the given parameter is the new IP address and the port, separated by a colon.

+
+
Example output of the node restart
demo@cp-1 ~> sbctl storage-node restart 788c3686-9d75-4392-b0ab-47798fd4a3c1 --node-addr 192.168.10.64:5000
+2025-04-02 13:24:26,785: INFO: Restarting storage node
+2025-04-02 13:24:26,796: INFO: Setting node state to restarting
+2025-04-02 13:24:26,807: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "STATUS_CHANGE", "object_name": "StorageNode", "message": "Storage node status changed from: unreachable to: in_restart", "caused_by": "monitor"}
+2025-04-02 13:24:26,812: INFO: Sending event updates, node: 788c3686-9d75-4392-b0ab-47798fd4a3c1, status: in_restart
+2025-04-02 13:24:26,843: INFO: Sending to: f4b37b6c-6e36-490f-adca-999859747eb4
+2025-04-02 13:24:26,859: INFO: Sending to: 71c31962-7313-4317-8330-9f09a3e77a72
+2025-04-02 13:24:26,870: INFO: Sending to: 93a812f9-2981-4048-a8fa-9f39f562f1aa
+2025-04-02 13:24:26,893: INFO: Restarting on new node with ip: 192.168.10.64:5000
+2025-04-02 13:24:27,037: INFO: Restarting Storage node: 192.168.10.64
+2025-04-02 13:24:27,097: INFO: Restarting SPDK
+...
+2025-04-02 13:24:40,012: INFO: creating subsystem nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:13945596-4fbc-46a5-bbb1-ebe4d3e2af26
+2025-04-02 13:24:40,025: INFO: creating subsystem nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:2c593f82-d96c-4eb7-8d1c-30c534f6592d
+2025-04-02 13:24:40,037: INFO: creating subsystem nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:e3d2d790-4d14-4875-a677-0776335e4588
+2025-04-02 13:24:40,048: INFO: creating subsystem nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:1086d1bf-e77f-4ddf-b374-3575cfd68d30
+2025-04-02 13:24:40,414: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "StorageNode", "message": "Port blocked: 9091", "caused_by": "cli"}
+2025-04-02 13:24:40,494: INFO: Add BDev to subsystem
+2025-04-02 13:24:40,495: INFO: 1
+2025-04-02 13:24:40,495: INFO: adding listener for nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:13945596-4fbc-46a5-bbb1-ebe4d3e2af26 on IP 10.10.10.64
+2025-04-02 13:24:40,499: INFO: Add BDev to subsystem
+2025-04-02 13:24:40,499: INFO: 1
+2025-04-02 13:24:40,500: INFO: adding listener for nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:e3d2d790-4d14-4875-a677-0776335e4588 on IP 10.10.10.64
+2025-04-02 13:24:40,503: INFO: Add BDev to subsystem
+2025-04-02 13:24:40,504: INFO: 1
+2025-04-02 13:24:40,504: INFO: adding listener for nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:2c593f82-d96c-4eb7-8d1c-30c534f6592d on IP 10.10.10.64
+2025-04-02 13:24:40,507: INFO: Add BDev to subsystem
+2025-04-02 13:24:40,508: INFO: 1
+2025-04-02 13:24:40,509: INFO: adding listener for nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:1086d1bf-e77f-4ddf-b374-3575cfd68d30 on IP 10.10.10.64
+2025-04-02 13:24:41,861: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "StorageNode", "message": "Port allowed: 9091", "caused_by": "cli"}
+2025-04-02 13:24:41,894: INFO: Done
+Success
+
+

Make new Storage Node Primary

+

After the migration has successfully finished, the new storage node must be made the primary storage node for the owned +set of logical volumes.

+

This can be initiated using the following command:

+
Make the new storage node the primary
sbctl storage-node make-primary <NODE_ID>
+
+

The following is the example output.

+
Example output of primary change
demo@cp-1 ~> sbctl storage-node make-primary 788c3686-9d75-4392-b0ab-47798fd4a3c1
+2025-04-02 13:25:02,220: INFO: Adding device 65965029-4ab3-44b9-a9d4-29550e6c14ae
+2025-04-02 13:25:02,251: INFO: bdev already exists alceml_65965029-4ab3-44b9-a9d4-29550e6c14ae
+2025-04-02 13:25:02,252: INFO: bdev already exists alceml_65965029-4ab3-44b9-a9d4-29550e6c14ae_PT
+2025-04-02 13:25:02,266: INFO: subsystem already exists True
+2025-04-02 13:25:02,267: INFO: bdev already added to subsys alceml_65965029-4ab3-44b9-a9d4-29550e6c14ae_PT
+2025-04-02 13:25:02,285: INFO: Setting device online
+2025-04-02 13:25:02,301: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "NVMeDevice", "message": "Device created: 65965029-4ab3-44b9-a9d4-29550e6c14ae", "caused_by": "cli"}
+2025-04-02 13:25:02,305: INFO: Make other nodes connect to the node devices
+2025-04-02 13:25:02,383: INFO: Connecting to node 71c31962-7313-4317-8330-9f09a3e77a72
+2025-04-02 13:25:02,384: INFO: bdev found remote_alceml_197c2d40-d39a-4a10-84eb-41c68a6834c7_qosn1
+2025-04-02 13:25:02,385: INFO: bdev found remote_alceml_5202854e-e3b3-4063-b6b9-9a83c1bbefe9_qosn1
+2025-04-02 13:25:02,386: INFO: bdev found remote_alceml_15c5f6de-63b6-424c-b4c0-49c3169c0135_qosn1
+2025-04-02 13:25:02,386: INFO: Connecting to node 93a812f9-2981-4048-a8fa-9f39f562f1aa
+2025-04-02 13:25:02,439: INFO: Connecting to node f4b37b6c-6e36-490f-adca-999859747eb4
+2025-04-02 13:25:02,440: INFO: bdev found remote_alceml_0544ef17-6130-4a79-8350-536c51a30303_qosn1
+2025-04-02 13:25:02,441: INFO: bdev found remote_alceml_e9d69493-1ce8-4386-af1a-8bd4feec82c6_qosn1
+2025-04-02 13:25:02,442: INFO: bdev found remote_alceml_5cc0aed8-f579-4a4c-9c31-04fb8d781af8_qosn1
+2025-04-02 13:25:02,443: INFO: Connecting to node 93a812f9-2981-4048-a8fa-9f39f562f1aa
+2025-04-02 13:25:02,493: INFO: Connecting to node f4b37b6c-6e36-490f-adca-999859747eb4
+2025-04-02 13:25:02,494: INFO: bdev found remote_alceml_0544ef17-6130-4a79-8350-536c51a30303_qosn1
+2025-04-02 13:25:02,494: INFO: bdev found remote_alceml_e9d69493-1ce8-4386-af1a-8bd4feec82c6_qosn1
+2025-04-02 13:25:02,495: INFO: bdev found remote_alceml_5cc0aed8-f579-4a4c-9c31-04fb8d781af8_qosn1
+2025-04-02 13:25:02,495: INFO: Connecting to node 71c31962-7313-4317-8330-9f09a3e77a72
+2025-04-02 13:25:02,496: INFO: bdev found remote_alceml_197c2d40-d39a-4a10-84eb-41c68a6834c7_qosn1
+2025-04-02 13:25:02,496: INFO: bdev found remote_alceml_5202854e-e3b3-4063-b6b9-9a83c1bbefe9_qosn1
+2025-04-02 13:25:02,497: INFO: bdev found remote_alceml_15c5f6de-63b6-424c-b4c0-49c3169c0135_qosn1
+2025-04-02 13:25:02,667: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: 773ae420-3491-4ea6-aaf4-b7b1103132f6", "caused_by": "cli"}
+2025-04-02 13:25:02,675: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: 95eaf69f-6926-454e-a023-8d9341f7c4c6", "caused_by": "cli"}
+2025-04-02 13:25:02,682: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: 0a0f7942-46d7-46b2-9dc6-c5787bc3691e", "caused_by": "cli"}
+2025-04-02 13:25:02,690: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: 0f10c95e-937b-4e9b-99ca-e13815ae3578", "caused_by": "cli"}
+2025-04-02 13:25:02,698: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: fb36c4c7-d128-4a43-894f-50fb406bab30", "caused_by": "cli"}
+2025-04-02 13:25:02,707: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: d5480f1f-e113-49ab-8c9d-3663e7ba512b", "caused_by": "cli"}
+2025-04-02 13:25:02,717: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: 8e910437-7957-4701-b626-5dffce0284dc", "caused_by": "cli"}
+2025-04-02 13:25:02,727: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: 919fceb4-ee48-4c72-96b0-a4367b8d0f67", "caused_by": "cli"}
+2025-04-02 13:25:02,737: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: da076017-c0ba-4e5b-8bcd-7748fa56305e", "caused_by": "cli"}
+2025-04-02 13:25:02,748: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: fa43687f-33ff-486d-8460-2b07bbc18cff", "caused_by": "cli"}
+2025-04-02 13:25:02,757: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: e53431ce-c7c9-40a9-8e11-4dafefce79d8", "caused_by": "cli"}
+2025-04-02 13:25:02,768: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: 38e320ca-1fd1-4f8e-9ef1-2defa50f1d22", "caused_by": "cli"}
+2025-04-02 13:25:02,813: INFO: Adding device 7e5145e7-d8fc-4d60-8af1-3f5015cb3021
+2025-04-02 13:25:02,837: INFO: bdev already exists alceml_7e5145e7-d8fc-4d60-8af1-3f5015cb3021
+2025-04-02 13:25:02,837: INFO: bdev already exists alceml_7e5145e7-d8fc-4d60-8af1-3f5015cb3021_PT
+2025-04-02 13:25:02,851: INFO: subsystem already exists True
+2025-04-02 13:25:02,852: INFO: bdev already added to subsys alceml_7e5145e7-d8fc-4d60-8af1-3f5015cb3021_PT
+2025-04-02 13:25:02,879: INFO: Setting device online
+2025-04-02 13:25:02,893: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "NVMeDevice", "message": "Device created: 7e5145e7-d8fc-4d60-8af1-3f5015cb3021", "caused_by": "cli"}
+2025-04-02 13:25:02,897: INFO: Make other nodes connect to the node devices
+2025-04-02 13:25:02,968: INFO: Connecting to node 71c31962-7313-4317-8330-9f09a3e77a72
+2025-04-02 13:25:02,969: INFO: bdev found remote_alceml_197c2d40-d39a-4a10-84eb-41c68a6834c7_qosn1
+2025-04-02 13:25:02,970: INFO: bdev found remote_alceml_5202854e-e3b3-4063-b6b9-9a83c1bbefe9_qosn1
+2025-04-02 13:25:02,971: INFO: bdev found remote_alceml_15c5f6de-63b6-424c-b4c0-49c3169c0135_qosn1
+2025-04-02 13:25:02,971: INFO: Connecting to node 93a812f9-2981-4048-a8fa-9f39f562f1aa
+...
+2025-04-02 13:25:10,255: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: a4692e1d-a527-44f7-8a86-28060eb466cf", "caused_by": "cli"}
+2025-04-02 13:25:10,277: INFO: {"cluster_id": "a84537e2-62d8-4ef0-b2e4-8462b9e8ea96", "event": "OBJ_CREATED", "object_name": "JobSchedule", "message": "task created: bab06208-bd27-4002-bc7b-dd92cf7b9b66", "caused_by": "cli"}
+True
+
+

At this point, the old storage node is automatically removed from the cluster, and the storage node id is taken over by +the new storage node. Any operation on the old storage node, such as an OS reinstall, can be safely executed.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/monitoring/accessing-grafana/index.html b/deployment/25.10.3/maintenance-operations/monitoring/accessing-grafana/index.html new file mode 100644 index 00000000..d2625b99 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/monitoring/accessing-grafana/index.html @@ -0,0 +1,4826 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Accessing Grafana - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Accessing Grafana

+ +

Simplyblock's control plane includes a Prometheus, Grafana, and Graylog installation.

+

Grafana retrieves metric data from Prometheus, including capacity, I/O statistics, and the cluster event log. +Additionally, Grafana is used for alerting via Slack or email.

+

The standard retention period for metrics is 7 days. However, this can be changed when creating a cluster.

+

How to access Grafana

+

Grafana can be accessed through the API endpoint of any management node. It is recommended to set up a load balancer +with session stickiness in front of the Grafana installation(s).

+
Grafana URLs
http://<MGMT_NODE_IP>/grafana
+
+

To retrieve the endpoint address from the cluster itself, use the following command:

+
Retrieving the Grafana endpoint
sbctl cluster get <CLUSTER_ID> | grep grafana_endpoint
+
+

Credentials

+

The Grafana installation uses the cluster secret as its password for the user admin. To retrieve the cluster secret, +the following commands should be used:

+
Get the cluster uuid
sbctl cluster list
+
+
Get the cluster secret
sbctl cluster get-secret <CLUSTER_ID>
+
+

Credentials
+Username:
+Password:

+

Grafana Dashboards

+

All dashboards are stored in per-cluster folders. Each cluster contains the following dashboard entries:

+
    +
  • Cluster
  • +
  • Storage node
  • +
  • Device
  • +
  • Logical Volume
  • +
  • Storage Pool
  • +
  • Storage Plane node(s) system monitoring
  • +
  • Control Plane node(s) system monitoring
  • +
+

Dashboard widgets are designed to be self-explanatory.

+

By default, each dashboard contains data for all objects (e.g., all devices) in a cluster. It is, however, possible to +filter them by particular objects (e.g., devices, storage nodes, or logical volumes) and to change the timescale and +window.

+

Dashboards include physical and logical capacity utilization dynamics, IOPS, I/O throughput, and latency dynamics (all +separate for read, write, and unmap). While all data from the event log is stored in Prometheus, it was not yet +used in the dashboards at the time of writing.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/monitoring/accessing-graylog/index.html b/deployment/25.10.3/maintenance-operations/monitoring/accessing-graylog/index.html new file mode 100644 index 00000000..4b9f690f --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/monitoring/accessing-graylog/index.html @@ -0,0 +1,4784 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Accessing Graylog - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Accessing Graylog

+ +

Simplyblock's control plane includes a Prometheus, Grafana, and Graylog installation.

+

Graylog retrieves logs for all control plane and storage node services.

+

The standard retention period for logs is 7 days. However, this can be changed when creating a cluster.

+

How to access Graylog

+

Graylog can be accessed through the API endpoint of any management node. It is recommended to set up a load balancer +with session stickiness in front of the Graylog installation(s).

+
Graylog URLs
http://<MGMT_NODE_IP>/graylog
+
+

Credentials

+

The Graylog installation uses the cluster secret as its password for the user admin. To retrieve the cluster secret, +the following command should be used:

+
Get the cluster secret
sbctl cluster get-secret <CLUSTER_ID>
+
+

Credentials
+Username: admin
+Password:

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/monitoring/alerts/index.html b/deployment/25.10.3/maintenance-operations/monitoring/alerts/index.html new file mode 100644 index 00000000..798b40b5 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/monitoring/alerts/index.html @@ -0,0 +1,4802 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Alerting - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Alerting

+ +

Simplyblock uses Grafana to configure and manage alerting rules.

+

By default, Grafana is configured to send alerts to Slack channels. Grafana also supports alerting via email +notifications, but this requires an authorized SMTP server to send messages.

+

An SMTP server is currently not part of the management stack and must be deployed separately. Alerts can be triggered +based on one-time or interval-based thresholds of the collected statistical data (I/O statistics, capacity information) or +based on events from the cluster event log.
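Grafana itself picks up SMTP settings from its standard `GF_SMTP_*` environment variables. The sketch below is a hypothetical example of wiring Grafana to an external SMTP relay; the host, user, password, and sender address are placeholders, not values from this documentation:

```shell
# Hypothetical sketch: Grafana reads SMTP settings from GF_SMTP_* environment
# variables. All values below are placeholders for a separately deployed relay.
export GF_SMTP_ENABLED=true
export GF_SMTP_HOST=smtp.example.com:587
export GF_SMTP_USER=alerts@example.com
export GF_SMTP_PASSWORD=changeme
export GF_SMTP_FROM_ADDRESS=alerts@example.com

echo "Grafana SMTP relay: ${GF_SMTP_HOST}"
```

How these variables reach the Grafana service depends on how the control plane containers are deployed in a given environment.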

+

Pre-Defined Alerts

+

The following pre-defined alerts are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
AlertTrigger
device-unavailableStorage device became unavailable.
device-read-onlyStorage device changed to status: read-only.
cluster-status-degradedCluster changed to status: degraded.
cluster-status-suspendedCluster changed to status: suspended.
storage-node-unreachableStorage node became unreachable.
storage-node-offlineStorage node became unavailable.
storage-node-healthcheck-failureStorage node with negative healthcheck.
logical-volume-offlineLogical volume became unavailable.
critical-capacity-reachedCritical absolute capacity utilization in a cluster was reached. The threshold value can be configured at cluster creation time using --cap-crit.
critical-provisioning-capacity-reachedCritical absolute provisioned capacity utilization in a cluster was reached. The threshold value can be configured at cluster creation time using --prov-cap-crit.
root-fs-low-disk-spaceRoot filesystem free disk space is below 20%.
+

It is possible to configure the Slack webhook for alerting during cluster creation or to modify it at a later point in +time.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/monitoring/cluster-health/index.html b/deployment/25.10.3/maintenance-operations/monitoring/cluster-health/index.html new file mode 100644 index 00000000..452dbfcf --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/monitoring/cluster-health/index.html @@ -0,0 +1,4847 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Cluster Health - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Cluster Health

+ +

A simplyblock cluster consists of interconnected management nodes (control plane) and storage nodes (storage plane) +working together to deliver a resilient, distributed storage platform. Monitoring the overall health, availability, and +performance of the cluster is essential for ensuring data integrity, fault tolerance, and optimal operation under +varying workloads. Simplyblock provides detailed metrics and status indicators at both the node and cluster levels to +help administrators proactively detect issues and maintain system stability.

+

Accessing Cluster Status

+

To access a cluster's status, the sbctl command line tool can be used:

+
Accessing the status of a cluster
sbctl cluster status <CLUSTER_ID>
+
+

All details of the command are available in the +CLI reference.

+

Accessing Cluster Statistics

+

To access a cluster's general statistics and overview, the sbctl command line tool can be used:

+
Accessing the statistics of a cluster
sbctl cluster show <CLUSTER_ID>
+
+

All details of the command are available in the +CLI reference.

+

The information is also available through Grafana in the cluster's dashboard.

+

Accessing Cluster I/O Statistics

+

To access a cluster's performance and I/O statistics, the sbctl command line tool can be used:

+
Accessing the I/O statistics of a cluster
sbctl cluster get-io-stats <CLUSTER_ID>
+
+

All details of the command are available in the +CLI reference.

+

The information is also available through Grafana in the cluster's dashboard.

+

Accessing Cluster Capacity Information

+

To access a cluster's capacity information, the sbctl command line tool can be used:

+
Accessing the capacity information of a cluster
sbctl cluster get-capacity <CLUSTER_ID>
+
+

All details of the command are available in the +CLI reference.

+

Accessing Cluster Health Information

+

To access a cluster's health status, the sbctl command line tool can be used:

+
Accessing the health status of a cluster
sbctl cluster check <CLUSTER_ID>
+
+

All details of the command are available in the +CLI reference.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/monitoring/index.html b/deployment/25.10.3/maintenance-operations/monitoring/index.html new file mode 100644 index 00000000..5371f93a --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/monitoring/index.html @@ -0,0 +1,4675 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Monitoring - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Monitoring

+ +

Monitoring the health, performance, and resource utilization of a Simplyblock cluster is crucial for ensuring optimal +operation, early issue detection, and efficient capacity planning. The sbctl command line interface +provides a comprehensive set of tools to retrieve real-time and historical metrics related to Logical Volumes (LVs), +storage nodes, I/O performance, and system status. By leveraging sbctl, administrators can quickly +diagnose bottlenecks, monitor resource consumption, and maintain overall system stability.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/monitoring/io-stats/index.html b/deployment/25.10.3/maintenance-operations/monitoring/io-stats/index.html new file mode 100644 index 00000000..164f0b05 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/monitoring/io-stats/index.html @@ -0,0 +1,4833 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Accessing I/O Stats ({{ cliname }}) - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Accessing I/O Stats (sbctl)

+ +

Simplyblock's sbctl tool provides extensive I/O statistics. These +contain relevant metrics of historical and current I/O activity per device, storage node, logical volume, +and cluster.

+

These metrics include:

+
    +
  • Read and write throughput (in MB/s)
  • +
  • I/O operations per second (IOPS) for read, write, and unmap
  • +
  • Total amount of bytes read and written
  • +
  • Total number of I/O operations since the start of a node
  • +
  • Latency ticks
  • +
  • Average read, write, and unmap latency
  • +
+
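The relationship between latency ticks and average latency can be illustrated with a small sketch. All numbers below are made up for illustration and are not taken from a real cluster; the assumed formula is mean latency = accumulated latency ticks / completed I/Os / tick rate:

```shell
# Illustration only: deriving an average latency from accumulated latency
# ticks, the number of completed I/Os, and the CPU tick rate (ticks/second).
# All values are hypothetical.
TICK_RATE=2000000000         # assumed 2 GHz tick rate
READ_LATENCY_TICKS=400000000 # accumulated read latency ticks
READ_IOS=10000               # completed read operations

# mean latency [us] = ticks / ios / tick_rate * 1e6
AVG_US=$(( READ_LATENCY_TICKS / READ_IOS * 1000000 / TICK_RATE ))
echo "average read latency: ${AVG_US} us"   # prints: average read latency: 20 us
```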

Accessing Cluster Statistics

+

To access cluster-wide statistics, use the following command:

+
Accessing cluster-wide I/O statistics
sbctl cluster get-io-stats <CLUSTER_ID>
+
+

More information about the command is available in the +CLI reference section.

+

Accessing Storage Node Statistics

+

To access the I/O statistics of a storage node (which includes all physical NVMe devices), use the following command:

+
Accessing storage node I/O statistics
sbctl storage-node get-io-stats <NODE_ID>
+
+

More information about the command is available in the +CLI reference section.

+

To access the I/O statistics of a specific device in a storage node, use the following command:

+
Accessing storage node device I/O statistics
sbctl storage-node get-io-stats-device <DEVICE_ID>
+
+

More information about the command is available in the +CLI reference section.

+

Accessing Storage Pool Statistics

+

To access storage pool-specific statistics, use the following command:

+
Accessing storage pool I/O statistics
sbctl storage-pool get-io-stats <POOL_ID>
+
+

More information about the command is available in the +CLI reference section.

+

Accessing Logical Volume Statistics

+

To access logical volume-specific statistics, use the following command:

+
Accessing logical volume I/O statistics
sbctl volume get-io-stats <VOLUME_ID>
+
+

More information about the command is available in the +CLI reference section.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/monitoring/lvol-conditions/index.html b/deployment/25.10.3/maintenance-operations/monitoring/lvol-conditions/index.html new file mode 100644 index 00000000..cfa2cedf --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/monitoring/lvol-conditions/index.html @@ -0,0 +1,4774 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Logical Volume Conditions - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Logical Volume Conditions

+ +

Logical volumes are the core storage abstraction in simplyblock, representing high-performance, distributed NVMe +block devices backed by the cluster. Maintaining visibility into the health, status, and performance of these volumes is +critical for ensuring workload reliability, troubleshooting issues, and planning resource utilization. Simplyblock +continuously monitors volume-level metrics and exposes them through both CLI and observability tools, giving operators +detailed insight into system behavior.

+

Accessing Logical Volume Statistics

+

To access a logical volume's performance and I/O statistics, the sbctl command line tool can be used:

+
Accessing the statistics of a logical volume
sbctl volume get-io-stats <VOLUME_ID>
+
+

All details of the command are available in the +CLI reference.

+

The information is also available through Grafana in the logical volume's dashboard.

+

Accessing Logical Volume Health Information

+

To access a logical volume's health status, the sbctl command line tool can be used:

+
Accessing the health status of a logical volume
sbctl volume check <VOLUME_ID>
+
+

All details of the command are available in the +CLI reference.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/node-affinity/index.html b/deployment/25.10.3/maintenance-operations/node-affinity/index.html new file mode 100644 index 00000000..c0c059b1 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/node-affinity/index.html @@ -0,0 +1,4793 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Configure Node Affinity - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Configure Node Affinity

+ +

Simplyblock features node affinity, sometimes also referred to as data locality. This feature ensures that storage +volumes are physically co-located on storage or Kubernetes worker nodes running the corresponding workloads. This +minimizes network latency and maximizes I/O performance by keeping data close to the application. Ideal for +latency-sensitive workloads, node affinity enables smarter, faster, and more efficient storage access in hyper-converged +and hybrid environments.

+
+

Info

+

Node affinity is only available with hyper-converged or hybrid setups.

+
+

Node affinity does not sacrifice fault tolerance, as parity data is still distributed to other storage cluster nodes, +enabling transparent failover in case of a failure, or spill-over when the locally available storage +runs out of capacity.

+

Enabling Node Affinity

+

To use node affinity, the storage cluster needs to be created with node affinity activated. When node affinity is +enabled for a logical volume, it influences how the data distribution algorithm handles read and write requests.

+

To enable node affinity at creation time of the cluster, the --enable-node-affinity parameter needs to be added:

+
Enabling node affinity when the cluster is created
sbctl cluster create \
+    --ifname=<IF_NAME> \
+    --ha-type=ha \
+    --enable-node-affinity # <- this is important
+
+

To see all available parameters for cluster creation, see +Cluster Create.

+

Once a cluster is created with node affinity enabled, logical volumes can be created with node affinity, in which case +the cluster always tries to place their data co-located with the requested storage node.

+

Create a Node Affine Logical Volume

+

When creating a logical volume, it is possible to provide a host id (storage node UUID) to request the storage cluster +to co-locate the volume with this storage node. This configuration will have no influence on storage clusters without +node affinity enabled.

+

To create a co-located logical volume, the parameter --host-id needs to be added to the creation command:

+
Create a node affine logical volume
sbctl volume add <NAME> <SIZE> <POOL> \
+    --host-id=<HOST_ID> \
+    ... # other parameters
+
+

To see all available parameters for a logical volume creation, see +Logical Volume Creation.

+

The storage node UUID (or host id) can be found using the sbctl storage-node list command.

+
List all storage nodes in a storage cluster
sbctl storage-node list --cluster-id=<CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/reconnect-nvme-device/index.html b/deployment/25.10.3/maintenance-operations/reconnect-nvme-device/index.html new file mode 100644 index 00000000..92369cd9 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/reconnect-nvme-device/index.html @@ -0,0 +1,4772 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Reconnecting Logical Volume - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Reconnecting Logical Volume

+ +

After outages of storage nodes, primary and secondary NVMe over Fabrics connections may need to be re-established. With +integrations such as simplyblock's Kubernetes CSI driver and the Proxmox integration, this is automatically handled.

+

With plain Linux clients, the connections have to be re-established manually. This is especially important when a storage +node is unavailable for more than 60 seconds (the default controller-loss timeout).

+

Reconnect a Missing NVMe Controller

+

To reconnect the NVMe controllers for the logical volume, the normal nvme connect commands are executed again. This +will immediately reconnect missing controllers and connection paths.

+
Retrieve connection strings
sbctl volume connect <VOLUME_ID>
+
+
Example output for connection string retrieval
[demo@demo ~]# sbctl volume connect 82e587c5-4a94-42a1-86e5-a5b8a6a75fc4
+sudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=60 --nr-io-queues=6 --keep-alive-tmo=5 --transport=tcp --traddr=192.168.10.112 --trsvcid=9100 --nqn=nqn.2023-02.io.simplyblock:0f2c4cb0-a71c-4830-bcff-11112f0ee51a:lvol:82e587c5-4a94-42a1-86e5-a5b8a6a75fc4
+sudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=60 --nr-io-queues=6 --keep-alive-tmo=5 --transport=tcp --traddr=192.168.10.113 --trsvcid=9100 --nqn=nqn.2023-02.io.simplyblock:0f2c4cb0-a71c-4830-bcff-11112f0ee51a:lvol:82e587c5-4a94-42a1-86e5-a5b8a6a75fc4
+
+

Increase Loss Timeout

+

Alternatively, depending on the environment, it is possible to increase the timeout after which Linux assumes the +NVMe controller to be lost and stops with reconnection attempts.

+

To increase the timeout, the parameter --ctrl-loss-tmo can be increased. The value is the number of seconds until +the Linux kernel stops the reconnection attempt and removes the controller from the list of valid multipath routes.
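For example, the connect command from the earlier example output can be reissued with a raised timeout of 600 seconds (10 minutes). The sketch below only prints the command so the values can be reviewed before running it with `sudo`; the address and NQN are copied from the example above and must be replaced with your own:

```shell
# Sketch: same nvme connect invocation as in the example output above, but
# with --ctrl-loss-tmo raised from 60 to 600 seconds. Printed instead of
# executed so it can be reviewed first; substitute your own traddr and NQN.
NVME_CMD="nvme connect --reconnect-delay=2 --ctrl-loss-tmo=600 --nr-io-queues=6 --keep-alive-tmo=5 --transport=tcp --traddr=192.168.10.112 --trsvcid=9100 --nqn=nqn.2023-02.io.simplyblock:0f2c4cb0-a71c-4830-bcff-11112f0ee51a:lvol:82e587c5-4a94-42a1-86e5-a5b8a6a75fc4"

echo "$NVME_CMD"
```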

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/replacing-storage-node/index.html b/deployment/25.10.3/maintenance-operations/replacing-storage-node/index.html new file mode 100644 index 00000000..6b99d060 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/replacing-storage-node/index.html @@ -0,0 +1,4786 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Replacing a Storage Node - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Replacing a Storage Node

+ +

A simplyblock storage cluster is designed to be always up. Hence, operations such as extending a cluster or +replacing a storage node are online operations and don't require system downtime. However, there are a few +things to keep in mind when replacing a storage node.

+
+

Danger

+

If a storage node should be migrated, Migrating a Storage Node must be followed. +Removing a storage node from a simplyblock cluster without migrating it will make the logical volumes owned by this +storage node inaccessible!

+
+

Starting the new Storage Node

+

It is always recommended to start the new storage node before removing the old one, even if the remaining +cluster has enough storage available to absorb the additional (temporary) storage requirement.

+

Every operation that changes the cluster topology comes with a set of migration tasks, moving data across +the cluster to ensure equal usage distribution.

+

If a storage node failed and cannot be recovered, adding a new storage node is perfectly fine, though.

+

To start a new storage node, follow the storage node installation according to your chosen setup:

+ +

Remove the old Storage Node

+
+

Danger

+

All volumes on this storage node, which haven't been migrated before the removal, will become inaccessible!

+
+

To remove the old storage node, use the sbctl command line tool.

+
Remove a storage node
sbctl storage-node remove <NODE_ID>
+
+

Wait until the operation has successfully finished. Afterward, the storage node is removed from the cluster.

+

This can be checked again with the sbctl command line tool.

+
List storage nodes
sbctl storage-node list --cluster-id=<CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/scaling/expanding-storage-cluster/index.html b/deployment/25.10.3/maintenance-operations/scaling/expanding-storage-cluster/index.html new file mode 100644 index 00000000..b5f27fb3 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/scaling/expanding-storage-cluster/index.html @@ -0,0 +1,4689 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Expanding a Storage Cluster - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Expanding a Storage Cluster

+ +

Simplyblock is designed as an always-on storage solution. Hence, storage cluster expansion is an online operation +without a need for maintenance downtime.

+

However, every operation that changes the cluster topology comes with a set of migration tasks, moving data across +the cluster to ensure equal usage distribution. While these migration tasks are low priority and their overhead is +designed to be minimal, it is still recommended to expand the cluster at times when the storage cluster isn't under +full utilization.

+

To start a new storage node, follow the storage node installation according to your chosen set-up:

+ + + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/scaling/expanding-storage-pool/index.html b/deployment/25.10.3/maintenance-operations/scaling/expanding-storage-pool/index.html new file mode 100644 index 00000000..97f7e0d9 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/scaling/expanding-storage-pool/index.html @@ -0,0 +1,4748 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Expanding a Storage Pool - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Expanding a Storage Pool

+ +

Simplyblock is designed as an always-on storage system. Therefore, expanding a storage pool is an online operation and does not require a maintenance window or system downtime.

+

When a storage pool is expanded, its capacity limit is increased, granting the pool a larger quota of the overall storage cluster's capacity.

+

Storage Pool Expansion

+

To expand a storage pool, use the sbctl command line interface:

+
Expanding the storage pool
sbctl storage-pool set <POOL_ID> --pool-max=<NEW_SIZE>
+
+

The value of NEW_SIZE must be given with a unit suffix, for example 20G or 20T.

+
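For scripting around this command, a small helper can convert such suffixed sizes into bytes, e.g. to compare against reported capacity numbers. This is an illustrative sketch assuming binary units (1G = 1024³ bytes); it is not part of the sbctl tooling:

```shell
# Sketch: convert a human-readable size (G/T suffix, binary units assumed)
# into bytes.
to_bytes() {
  case "$1" in
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *T) echo $(( ${1%T} * 1024 * 1024 * 1024 * 1024 )) ;;
    *)  echo "unsupported size: $1" >&2; return 1 ;;
  esac
}

to_bytes 20G   # prints 21474836480
to_bytes 20T   # prints 21990232555520
```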

All valid parameters can be found in the +Storage Pool CLI Reference.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/scaling/index.html b/deployment/25.10.3/maintenance-operations/scaling/index.html new file mode 100644 index 00000000..4f227286 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/scaling/index.html @@ -0,0 +1,4675 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Scaling - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Scaling

+ +

Simplyblock is designed with a scale-out architecture that enables seamless growth of both storage capacity and +performance by simply adding more nodes to the cluster. Built for modern, cloud-native environments, simplyblock +supports linear scalability across compute, network, and storage layers—without downtime or disruption to active +workloads. Whether you're scaling to accommodate petabytes of data, high IOPS requirements, or enhanced throughput, +simplyblock delivers predictable performance and resilience at scale.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/security/encryption-kubernetes-secrets/index.html b/deployment/25.10.3/maintenance-operations/security/encryption-kubernetes-secrets/index.html new file mode 100644 index 00000000..cc6d68ec --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/security/encryption-kubernetes-secrets/index.html @@ -0,0 +1,4881 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Encrypting with Kubernetes Secrets - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Encrypting with Kubernetes Secrets

+ +

Simplyblock supports encryption of logical volumes (LVs) to protect data at rest, ensuring that sensitive +information remains secure across the distributed storage cluster. Encryption is applied during volume creation as +part of the storage class specification.

+

Encrypting Logical Volumes ensures that simplyblock storage meets data protection and compliance requirements, +safeguarding sensitive workloads without compromising performance.

+
+

Warning

+

Encryption must be specified at the time of volume creation. Existing logical volumes cannot be retroactively +encrypted.

+
+

Encrypting Volumes with Simplyblock

+

Simplyblock supports the encryption of logical volumes. Internally, simplyblock utilizes the industry-proven +crypto bdev ⧉ provided by SPDK to implement its encryption +functionality.

+

The encryption uses the AES-XTS block cipher mode. This cipher requires two keys of 16 to 32 bytes each. The keys need to have the same length, meaning that if one key is 32 bytes long, the other one has to be 32 bytes, too.

+
+

Recommendation

+

Simplyblock strongly recommends using two keys of 32 bytes each.

+
+

Generate Random Keys

+

Simplyblock does not provide an integrated way to generate encryption keys but recommends using the OpenSSL toolchain. For Kubernetes, the encryption keys need to be provided as base64. Hence, they are encoded right away.

+

To generate the two keys, the following command is run twice. The result must be stored for later.

+
Create an Encryption Key
openssl rand -hex 32 | base64 -w0
+
+
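As a quick sanity check (an illustrative sketch, not part of the simplyblock tooling), a generated value can be decoded again to confirm that it carries 64 hexadecimal characters, i.e. 32 random bytes:

```shell
# Generate a key as above and keep it in a variable.
KEY=$(openssl rand -hex 32 | base64 -w0)

# Decode the base64 value; the payload should be 64 hex characters (32 bytes).
DECODED=$(echo "${KEY}" | base64 -d)
echo "${#DECODED}"   # prints 64
```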

Create the Kubernetes Secret

+

Next up, a Kubernetes Secret is created, providing the two just-created encryption keys.

+
Create a Kubernetes Secret Resource
apiVersion: v1
+kind: Secret
+metadata:
+  name: my-encryption-keys
+data:
+  crypto_key1: YzIzYzllY2I4MWJmYmY1ZDM5ZDA0NThjNWZlNzQwNjY2Y2RjZDViNWE4NTZkOTA5YmRmODFjM2UxM2FkZGU4Ngo=
+  crypto_key2: ZmFhMGFlMzZkNmIyODdhMjYxMzZhYWI3ZTcwZDEwZjBmYWJlMzYzMDRjNTBjYTY5Nzk2ZGRlZGJiMDMwMGJmNwo=
+
+
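The key generation and the Secret manifest can also be combined in one small script that generates fresh keys and prints a ready-to-apply manifest. This is a sketch: the secret name is an assumption, and in practice the output would be piped into `kubectl apply -f -`:

```shell
# Generate two independent keys, base64-encoded as Kubernetes expects.
KEY1=$(openssl rand -hex 32 | base64 -w0)
KEY2=$(openssl rand -hex 32 | base64 -w0)

# Print a Secret manifest carrying the freshly generated keys.
cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: my-encryption-keys
data:
  crypto_key1: ${KEY1}
  crypto_key2: ${KEY2}
EOF
```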

The Kubernetes Secret can be used for one or more logical volumes. By using different encryption keys, multiple tenants can be secured with an additional isolation layer between each other.

+

StorageClass Configuration

+

A new Kubernetes StorageClass needs to be created, or an existing one needs to be configured. To use encryption on a +persistent volume claim level, the storage class has to be set for encryption.

+
Example StorageClass
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: my-encrypted-volumes
+provisioner: csi.simplyblock.io
+parameters:
+  encryption: "True" # This is important!
+  ... other parameters
+reclaimPolicy: Delete
+volumeBindingMode: Immediate
+allowVolumeExpansion: true
+
+

Create a PersistentVolumeClaim

+

When requesting a logical volume through a Kubernetes PersistentVolumeClaim, the storage class and the secret resources have to be connected to the PVC. When picked up, simplyblock automatically collects the keys and creates the logical volume as a fully encrypted volume.

+
Create an encrypting PersistentVolumeClaim
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  annotations:
+    simplybk/secret-name: my-encryption-keys # Encryption keys
+  name: my-encrypted-volume-claim
+spec:
+  storageClassName: my-encrypted-volumes # StorageClass
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 200Gi
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/security/index.html b/deployment/25.10.3/maintenance-operations/security/index.html new file mode 100644 index 00000000..ac4ee241 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/security/index.html @@ -0,0 +1,4675 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Security - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Security

+ +

Security is a core pillar of the simplyblock platform, designed to protect data across every layer of the storage +stack. From encryption at rest to multi-tenant isolation and secure communications, simplyblock provides robust, +enterprise-grade features that help meet stringent compliance and data protection requirements. Security is +enforced by design, ensuring your workloads and sensitive data remain protected against internal and external +threats.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/maintenance-operations/security/multi-tenancy/index.html b/deployment/25.10.3/maintenance-operations/security/multi-tenancy/index.html new file mode 100644 index 00000000..67707921 --- /dev/null +++ b/deployment/25.10.3/maintenance-operations/security/multi-tenancy/index.html @@ -0,0 +1,4851 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Multi-Tenancy - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Multi-Tenancy

+ +

Simplyblock is designed to support secure and efficient multitenancy, enabling multiple independent tenants to share the +same physical infrastructure without compromising data isolation, performance guarantees, or security. This capability +is essential in cloud environments, managed services, and enterprise deployments where infrastructure is consolidated +across internal departments or external customers.

+

Storage Isolation

+

Simplyblock provides multiple layers of isolation between multiple tenants, depending on requirements and how tenants +are defined.

+

Storage Pool Isolation

+

If tenants are expected to have multiple volumes, defining the overall available storage quota a tenant can access and +assign to volumes might be required. Hence, simplyblock enables the creation of a storage pool with a maximum capacity +per tenant. All volumes for this tenant should be created in their respective storage pool and automatically count +towards the storage quota.

+

Logical Volume Isolation

+

If a tenant is expected to have only one volume or strong isolation between volumes is required, each logical volume +can be seen as fully isolated at the storage layer. Access to volumes is tightly controlled, and each LV is only exposed +to the workloads explicitly granted access.

+

Quality of Service (QoS)

+

To prevent noisy neighbor effects and ensure fair resource allocation, simplyblock supports per-volume Quality of +Service (QoS) configurations. Administrators can define IOPS and bandwidth limits for each logical volume, providing +predictable performance and protecting tenants from resource contention.

+

Quality of service is available for +Kubernetes-based installation quality of service and +plain Linux installation quality of service.

+

Encryption and Data Security

+

All data is protected with encryption at rest, using strong AES-based cryptographic algorithms. Encryption is applied at +the volume level, ensuring that tenant data remains secure and inaccessible to other users, even at the physical storage +layer. Encryption keys are logically separated between tenants to support strong cryptographic isolation.

+

Encryption is available for Kubernetes-based installation encryption and +plain Linux installation encryption.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/api/index.html b/deployment/25.10.3/reference/api/index.html new file mode 100644 index 00000000..c48c0044 --- /dev/null +++ b/deployment/25.10.3/reference/api/index.html @@ -0,0 +1,4739 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + API / Developer SDK - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

API / Developer SDK

+ +

Simplyblock offers a comprehensive API to manage and automate cluster operations. This includes all cluster-wide operations, logical volume-specific operations, and health information. The API makes it possible to:

+
    +
  • Retrieve information about the cluster and its health status
  • +
  • Automatically manage a logical volume lifecycle
  • +
  • Integrate simplyblock into deployment processes and workflow automations
  • +
  • Create custom alerts and warnings
  • +
+

Authentication

+

Any request to the simplyblock API requires authorization information to be provided. Unauthorized requests +return an HTTP status 401 (Unauthorized).

+

To provide authorization information, the simplyblock API uses the Authorization HTTP header with a +combination of the cluster UUID and the cluster secret.

+

HTTP Authorization header:

+
Authorization: <CLUSTER_UUID> <CLUSTER_SECRET>
+
+

The cluster UUID is provided during the initial cluster installation. The cluster secret can be obtained using the simplyblock command line interface tool sbctl.

+
sbctl cluster get-secret CLUSTER_UUID
+
+
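Putting both together, a request can then be authenticated as follows. This is an illustrative sketch: the UUID, secret, API host, and endpoint path are placeholders, not real values from the API reference:

```shell
# Placeholder credentials for illustration only.
CLUSTER_ID="00000000-0000-0000-0000-000000000000"
CLUSTER_SECRET="exampleclustersecret"

# Build the Authorization header in the expected "<UUID> <SECRET>" form.
AUTH_HEADER="Authorization: ${CLUSTER_ID} ${CLUSTER_SECRET}"
echo "${AUTH_HEADER}"

# A real request would pass the header to curl, e.g.:
#   curl -H "${AUTH_HEADER}" "http://<API_HOST>/cluster/${CLUSTER_ID}"
```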

PUT and POST Requests

+

For requests that send a JSON payload to the backend endpoint, it is important to set the Content-Type header +accordingly. Requests that require this header to be set are of type HTTP PUT or HTTP POST.

+

The expected content type is application/json:

+
Content-Type: application/json
+
+
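A payload can be validated locally before it is sent. The following sketch rests on assumptions: the payload and endpoint are invented for illustration, and `python3` is used merely as a convenient JSON validator:

```shell
# Hypothetical JSON payload for a POST request.
PAYLOAD='{"name": "example-volume", "size": "200G"}'

# Confirm the payload is well-formed JSON before sending it.
echo "${PAYLOAD}" | python3 -m json.tool > /dev/null && echo "valid JSON"

# A real request would set both headers:
#   curl -X POST \
#        -H "Authorization: <CLUSTER_UUID> <CLUSTER_SECRET>" \
#        -H "Content-Type: application/json" \
#        -d "${PAYLOAD}" "http://<API_HOST>/..."
```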

API Documentation

+

The full API documentation is hosted on Postman. You can find the full API collection on the +Postman API project ⧉.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/api/reference/index.html b/deployment/25.10.3/reference/api/reference/index.html new file mode 100644 index 00000000..5cac532e --- /dev/null +++ b/deployment/25.10.3/reference/api/reference/index.html @@ -0,0 +1,4679 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + API Reference - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

API Reference

+ +

!! SWAGGER ERROR: File ../../../scripts/sbcli-repo/simplyblock_web/static/swagger.yaml not found. !!

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/cli/cluster/index.html b/deployment/25.10.3/reference/cli/cluster/index.html new file mode 100644 index 00000000..e04d611e --- /dev/null +++ b/deployment/25.10.3/reference/cli/cluster/index.html @@ -0,0 +1,6081 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Cluster commands - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Cluster commands

+ +
sbctl cluster --help
+
+

Cluster commands

+

Creates a new cluster

+

Creates a new control plane cluster with the current node as the primary control plane node.

+
sbctl cluster create
+    --cap-warn=<CAP_WARN>
+    --cap-crit=<CAP_CRIT>
+    --prov-cap-warn=<PROV_CAP_WARN>
+    --prov-cap-crit=<PROV_CAP_CRIT>
+    --ifname=<IFNAME>
+    --mgmt-ip=<MGMT_IP>
+    --tls-secret-name=<TLS_SECRET_NAME>
+    --log-del-interval=<LOG_DEL_INTERVAL>
+    --metrics-retention-period=<METRICS_RETENTION_PERIOD>
+    --contact-point=<CONTACT_POINT>
+    --grafana-endpoint=<GRAFANA_ENDPOINT>
+    --data-chunks-per-stripe=<DATA_CHUNKS_PER_STRIPE>
+    --parity-chunks-per-stripe=<PARITY_CHUNKS_PER_STRIPE>
+    --ha-type=<HA_TYPE>
+    --is-single-node
+    --mode=<MODE>
+    --ingress-host-source=<INGRESS_HOST_SOURCE>
+    --dns-name=<DNS_NAME>
+    --enable-node-affinity
+    --fabric=<FABRIC>
+    --strict-node-anti-affinity
+    --name=<NAME>
+    --qpair-count=<QPAIR_COUNT>
+    --client-qpair-count=<CLIENT_QPAIR_COUNT>
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--cap-warnCapacity warning level in percent, default: 89integerFalse89
--cap-critCapacity critical level in percent, default: 99integerFalse99
--prov-cap-warnCapacity warning level in percent, default: 250integerFalse250
--prov-cap-critCapacity critical level in percent, default: 500integerFalse500
--ifnameManagement interface name, e.g. eth0stringFalse-
--mgmt-ipManagement IP address to use for the node (e.g., 192.168.1.10)stringFalse-
--tls-secret-nameName of the Kubernetes TLS Secret to be used by the Ingress for HTTPS termination (e.g., my-tls-secret)stringFalse-
--log-del-intervalLogging retention policy, default: 3dstringFalse3d
--metrics-retention-periodRetention period for I/O statistics (Prometheus), default: 7dstringFalse7d
--contact-pointEmail or slack webhook url to be used for alertingstringFalse
--grafana-endpointEndpoint url for GrafanastringFalse
--data-chunks-per-stripeErasure coding schema parameter k (distributed raid), default: 1integerFalse1
--parity-chunks-per-stripeErasure coding schema parameter n (distributed raid), default: 1integerFalse1
--ha-typeLogical volume HA type (single, ha), default is cluster ha type

Available Options:
- single
- ha
stringFalseha
--is-single-nodeFor single node clusters onlymarkerFalseFalse
--modeEnvironment to deploy management services, default: docker

Available Options:
- docker
- kubernetes
stringFalsedocker
--ingress-host-sourceIngress host source: 'hostip' for node IP, 'loadbalancer' for external LB, or 'dns' for custom domain

Available Options:
- hostip
- loadbalancer
- dns
stringFalsehostip
--dns-nameFully qualified DNS name to use as the Ingress host (required if --ingress-host-source=dns)stringFalse
--enable-node-affinityEnable node affinity for storage nodesmarkerFalse-
--fabricfabric: tcp, rdma or both (specify: tcp, rdma)

Available Options:
- tcp
- rdma
- tcp,rdma
stringFalsetcp
--strict-node-anti-affinityEnable strict node anti affinity for storage nodes. Never more than one chunk is placed on a node. This requires a minimum of data-chunks-in-stripe + parity-chunks-in-stripe + 1 nodes in the cluster.markerFalse-
--name, -nAssigns a name to the newly created cluster.stringFalse-
--qpair-countIncrease for clusters with few but very large logical volumes or decrease for clusters with a large number of very small logical volumes.range(0..128)False32
--client-qpair-countIncrease for clusters with few but very large logical volumes or decrease for clusters with a large number of very small logical volumes.range(0..128)False3
+

Adds a new cluster

+

Adds a new cluster

+
sbctl cluster add
+    --cap-warn=<CAP_WARN>
+    --cap-crit=<CAP_CRIT>
+    --prov-cap-warn=<PROV_CAP_WARN>
+    --prov-cap-crit=<PROV_CAP_CRIT>
+    --data-chunks-per-stripe=<DATA_CHUNKS_PER_STRIPE>
+    --parity-chunks-per-stripe=<PARITY_CHUNKS_PER_STRIPE>
+    --ha-type=<HA_TYPE>
+    --enable-node-affinity
+    --fabric=<FABRIC>
+    --is-single-node
+    --qpair-count=<QPAIR_COUNT>
+    --client-qpair-count=<CLIENT_QPAIR_COUNT>
+    --strict-node-anti-affinity
+    --name=<NAME>
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--cap-warnCapacity warning level in percent, default: 89integerFalse89
--cap-critCapacity critical level in percent, default: 99integerFalse99
--prov-cap-warnCapacity warning level in percent, default: 250integerFalse250
--prov-cap-critCapacity critical level in percent, default: 500integerFalse500
--data-chunks-per-stripeErasure coding schema parameter k (distributed raid), default: 1integerFalse1
--parity-chunks-per-stripeErasure coding schema parameter n (distributed raid), default: 1integerFalse1
--ha-typeLogical volume HA type (single, ha), default is cluster single type

Available Options:
- single
- ha
stringFalseha
--enable-node-affinityEnables node affinity for storage nodesmarkerFalse-
--fabricfabric: tcp, rdma or both (specify: tcp, rdma)

Available Options:
- tcp
- rdma
- tcp,rdma
stringFalsetcp
--is-single-nodeFor single node clusters onlymarkerFalseFalse
--qpair-countIncrease for clusters with few but very large logical volumes or decrease for clusters with a large number of very small logical volumes.range(0..128)False32
--client-qpair-countIncrease for clusters with few but very large logical volumes or decrease for clusters with a large number of very small logical volumes.range(0..128)False3
--strict-node-anti-affinityEnable strict node anti affinity for storage nodes. Never more than one chunk is placed on a node. This requires a minimum of data-chunks-in-stripe + parity-chunks-in-stripe + 1 nodes in the cluster."markerFalse-
--name, -nAssigns a name to the newly created cluster.stringFalse-
+

Activates a cluster.

+

Once sufficient nodes have been added to a cluster, it needs to be activated. This command can also be used to re-activate a suspended cluster.

+
sbctl cluster activate
+    <CLUSTER_ID>
+    --force
+    --force-lvstore-create
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--forceForce recreate distr and lv storesmarkerFalse-
--force-lvstore-createForce recreate lv storesmarkerFalse-
+

Shows the cluster list

+

Shows the cluster list

+
sbctl cluster list
+    --json
+
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--jsonPrint json outputmarkerFalse-
+

Shows a cluster's status

+

Shows a cluster's status

+
sbctl cluster status
+    <CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+

Create lvstore on newly added nodes to the cluster

+

Create lvstore on newly added nodes to the cluster

+
sbctl cluster complete-expand
+    <CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+

Shows a cluster's statistics

+

Shows a cluster's statistics

+
sbctl cluster show
+    <CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+

Gets a cluster's information

+

Gets a cluster's information

+
sbctl cluster get
+    <CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+

Gets a cluster's capacity

+

Gets a cluster's capacity

+
sbctl cluster get-capacity
+    <CLUSTER_ID>
+    --json
+    --history=<HISTORY>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--jsonPrint json outputmarkerFalse-
--history(XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total).stringFalse-
+

Gets a cluster's I/O statistics

+

Gets a cluster's I/O statistics

+
sbctl cluster get-io-stats
+    <CLUSTER_ID>
+    --records=<RECORDS>
+    --history=<HISTORY>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--recordsNumber of records, default: 20integerFalse20
--history(XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total).stringFalse-
+

Returns a cluster's status logs

+

Returns a cluster's status logs

+
sbctl cluster get-logs
+    <CLUSTER_ID>
+    --json
+    --limit=<LIMIT>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--jsonReturn JSON formatted logsmarkerFalse-
--limitshow last number of logs, default 50integerFalse50
+

Gets a cluster's secret

+

Gets a cluster's secret

+
sbctl cluster get-secret
+    <CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+

Updates a cluster's secret

+

Updates a cluster's secret

+
sbctl cluster update-secret
+    <CLUSTER_ID>
+    <SECRET>
+
+ + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
SECRETnew 20 characters passwordstringTrue
+

Updates a cluster's fabric

+

Updates a cluster's fabric

+
sbctl cluster update-fabric
+    <CLUSTER_ID>
+    <FABRIC>
+
+ + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
FABRICfabric: tcp, rdma or both (specify: tcp, rdma)stringTrue
+

Checks a cluster's health

+

Checks a cluster's health

+
sbctl cluster check
+    <CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+

Updates a cluster to new version

+

Updates the control plane to a new version. To update the storage nodes, they have to be shut down and restarted. This can be done in a rolling manner. Attention: verify that an upgrade path is available and has been tested!

+
sbctl cluster update
+    <CLUSTER_ID>
+    --cp-only=<CP_ONLY>
+    --restart=<RESTART>
+    --spdk-image=<SPDK_IMAGE>
+    --mgmt-image=<MGMT_IMAGE>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--cp-onlyUpdate the control plane onlybooleanFalseFalse
--restartRestart the management servicesbooleanFalseFalse
--spdk-imageRestart the storage nodes using the provided imagestringFalse-
--mgmt-imageRestart the management services using the provided imagestringFalse-
+

Lists tasks of a cluster

+

Lists tasks of a cluster

+
sbctl cluster list-tasks
+    <CLUSTER_ID>
+    --limit=<LIMIT>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--limitshow last number of tasks, default 50integerFalse50
+

Cancels task by task id

+

Cancels task by task id

+
sbctl cluster cancel-task
+    <TASK_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
TASK_IDTask idstringTrue
+

Get rebalancing subtasks list

+

Get rebalancing subtasks list

+
sbctl cluster get-subtasks
+    <TASK_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
TASK_IDTask idstringTrue
+

Deletes a cluster

+

This is only possible if no storage nodes or pools are attached to the cluster.

+
sbctl cluster delete
+    <CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
+

Assigns or changes a name to a cluster

+

Assigns or changes a name to a cluster

+
sbctl cluster change-name
+    <CLUSTER_ID>
+    <NAME>
+
+ + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
NAMENamestringTrue
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/cli/control-plane/index.html b/deployment/25.10.3/reference/cli/control-plane/index.html new file mode 100644 index 00000000..edf842d6 --- /dev/null +++ b/deployment/25.10.3/reference/cli/control-plane/index.html @@ -0,0 +1,4900 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Control plane commands - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Control plane commands

+ +
sbctl control-plane --help
+
+

Aliases: cp mgmt

+

Control plane commands

+

Adds a control plane to the cluster (local run)

+

Adds a control plane to the cluster (local run)

+
sbctl control-plane add
+    <CLUSTER_IP>
+    <CLUSTER_ID>
+    <CLUSTER_SECRET>
+    --ifname=<IFNAME>
+    --mgmt-ip=<MGMT_IP>
+    --mode=<MODE>
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IPCluster IP addressstringTrue
CLUSTER_IDCluster idstringTrue
CLUSTER_SECRETCluster secretstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--ifnameManagement interface namestringFalse-
--mgmt-ipManagement IP address to use for the node (e.g., 192.168.1.10)stringFalse-
--modeEnvironment to deploy management services, default: docker

Available Options:
- docker
- kubernetes
stringFalsedocker
+

Lists all control plane nodes

+

Lists all control plane nodes

+
sbctl control-plane list
+    --json
+
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--jsonPrint outputs in json formatmarkerFalse-
+

Removes a control plane node

+

Removes a control plane node

+
sbctl control-plane remove
+    <NODE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDControl plane node idstringTrue
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/cli/index.html b/deployment/25.10.3/reference/cli/index.html new file mode 100644 index 00000000..1e517a70 --- /dev/null +++ b/deployment/25.10.3/reference/cli/index.html @@ -0,0 +1,4671 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + CLI / Command-line interface - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

CLI / Command-line interface

+ +

Simplyblock provides a feature-rich CLI (command line interface) client to manage all aspects of the storage cluster.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/cli/qos/index.html b/deployment/25.10.3/reference/cli/qos/index.html new file mode 100644 index 00000000..a350cbe8 --- /dev/null +++ b/deployment/25.10.3/reference/cli/qos/index.html @@ -0,0 +1,4888 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + qos commands - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

qos commands

+ +
sbctl qos --help
+
+

qos commands

+

Creates a new QOS class

+

Creates a new QOS class

+
sbctl qos add
+    <NAME>
+    <WEIGHT>
+    <CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NAMEQOS class namestringTrue
WEIGHTQOS class weightintegerTrue
CLUSTER_IDCluster UUIDstringTrue
+

Lists all qos classes

+

Lists all qos classes

+
sbctl qos list
+    <CLUSTER_ID>
+    --json
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster UUIDstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--jsonPrint json outputmarkerFalse-
+

Delete a class

+

Delete a class

+
sbctl qos delete
+    <NAME>
+    <CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NAMEQOS class namestringTrue
CLUSTER_IDCluster UUIDstringTrue
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/cli/snapshot/index.html b/deployment/25.10.3/reference/cli/snapshot/index.html new file mode 100644 index 00000000..9b42e7e7 --- /dev/null +++ b/deployment/25.10.3/reference/cli/snapshot/index.html @@ -0,0 +1,4945 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Snapshot commands - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Snapshot commands

+ +
sbctl snapshot --help
+
+

Snapshot commands

+

Creates a new snapshot

+

Creates a new snapshot

+
sbctl snapshot add
+    <VOLUME_ID>
+    <NAME>
+
+ + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
VOLUME_IDLogical volume idstringTrue
NAMENew snapshot namestringTrue
+

Lists all snapshots

+

Lists all snapshots

+
sbctl snapshot list
+    --all
+
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--allList soft deleted snapshotsmarkerFalse-
+

Deletes a snapshot

+

Deletes a snapshot

+
sbctl snapshot delete
+    <SNAPSHOT_ID>
+    --force
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
SNAPSHOT_IDSnapshot idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--forceForce removemarkerFalse-
+

Provisions a new logical volume from an existing snapshot

+

Provisions a new logical volume from an existing snapshot

+
sbctl snapshot clone
+    <SNAPSHOT_ID>
+    <LVOL_NAME>
+    --resize=<RESIZE>
+
+ + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
SNAPSHOT_IDSnapshot idstringTrue
LVOL_NAMELogical volume namestringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--resizeNew logical volume size: 10M, 10G, 10(bytes). Can only increase.sizeFalse0
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/cli/storage-node/index.html b/deployment/25.10.3/reference/cli/storage-node/index.html new file mode 100644 index 00000000..a7dbc9c9 --- /dev/null +++ b/deployment/25.10.3/reference/cli/storage-node/index.html @@ -0,0 +1,6404 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Storage node commands - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Storage node commands

+ +
sbctl storage-node --help
+
+

Aliases: sn

+

Storage node commands

+

Prepares a host to be used as a storage node

+

Runs locally on prospective storage node hosts. Installs the storage node dependencies and prepares the host to be used as a storage node. Only required in standalone deployments outside of Kubernetes.

+
sbctl storage-node deploy
+    --ifname=<IFNAME>
+    --isolate-cores
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--ifnameThe network interface to be used for communication between the control plane and the storage node.stringFalse-
--isolate-coresIsolate cores in kernel args for provided cpu maskmarkerFalseFalse
+

Prepare a configuration file to be used when adding the storage node

+

Runs locally on prospective storage node hosts. Reads system information (CPU topology, NVMe devices) and prepares a YAML configuration to be used when adding the storage node.

+
sbctl storage-node configure
+    --max-lvol=<MAX_LVOL>
+    --max-size=<MAX_SIZE>
+    --nodes-per-socket=<NODES_PER_SOCKET>
+    --sockets-to-use=<SOCKETS_TO_USE>
+    --pci-allowed=<PCI_ALLOWED>
+    --pci-blocked=<PCI_BLOCKED>
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--max-lvolMax logical volume per storage nodeintegerTrue-
--max-sizeMaximum amount of GB to be utilized on this storage node. This cannot be larger than the total effective cluster capacity. A safe value is 1/n * 2.0 of the effective cluster capacity. For example, if you have three storage nodes, each with 100 TiB of raw capacity, and a cluster with erasure coding scheme 1+1 (two replicas), the effective cluster capacity is 100 TiB * 3 / 2 = 150 TiB. Setting this parameter to 150 TiB / 3 * 2 = 100 TiB would be a safe choice.stringTrue-
--nodes-per-socketNumber of nodes to be added per socket.integerFalse1
--sockets-to-useSystem socket to use when adding storage nodes. Comma-separated list: e.g. 0,1stringFalse0
--pci-allowedStorage PCI addresses to use for storage devices (normal and full addresses are accepted). Comma-separated list, e.g. 0000:00:01.0,00:02.0stringFalse
--pci-blockedStorage PCI addresses not to use for storage devices (normal and full addresses are accepted). Comma-separated list, e.g. 0000:00:01.0,00:02.0stringFalse
+
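The safe-value rule for `--max-size` above can be sanity-checked with a quick shell calculation. This is a sketch with hypothetical numbers: a three-node cluster with 100 TiB raw capacity per node and a 1+1 erasure coding scheme (two replicas) is assumed.

```shell
# Hypothetical cluster: 3 storage nodes, 100 TiB raw each, erasure coding 1+1 (2 replicas)
RAW_PER_NODE=100   # TiB of raw capacity per node
NODES=3
REPLICAS=2

EFFECTIVE=$(( RAW_PER_NODE * NODES / REPLICAS ))   # effective cluster capacity: 150 TiB
MAX_SIZE=$(( EFFECTIVE / NODES * REPLICAS ))       # safe --max-size per node (1/n * 2.0): 100 TiB
echo "--max-size: ${MAX_SIZE} TiB"
```

With these example numbers, the calculation reproduces the 100 TiB value from the parameter description.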

Upgrade the automated configuration file with new changes of cpu mask or storage devices

+

Regenerates the core distribution and automatic calculation according to changes in cpu_mask and ssd_pcis only.

+
sbctl storage-node configure-upgrade
+
+

Cleans a previous simplyblock deploy (local run)

+

Runs locally on storage node and control plane hosts. Removes a previous deployment to support a fresh from-scratch deployment of the cluster software.

+
sbctl storage-node deploy-cleaner
+
+

Adds a storage node by its IP address

+

Adds a storage node by its IP address

+
sbctl storage-node add-node
+    <CLUSTER_ID>
+    <NODE_ADDR>
+    <IFNAME>
+    --journal-partition=<JOURNAL_PARTITION>
+    --data-nics=<DATA_NICS>
+    --ha-jm-count=<HA_JM_COUNT>
+    --namespace=<NAMESPACE>
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
CLUSTER_IDCluster idstringTrue
NODE_ADDRAddress of the storage node API to add, like :5000stringTrue
IFNAMEManagement interface namestringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--journal-partition1: auto-create small partitions for the journal on NVMe devices. 0: use a separate (the smallest) NVMe device of the node for the journal. The journal needs a maximum of 3 percent of the total available raw disk space.integerFalse1
--data-nicsStorage network interface names, e.g. eth0,eth1stringFalse-
--ha-jm-countHA JM countintegerFalse3
--namespaceKubernetes namespace to deploy onstringFalse-
+
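Since the journal needs at most 3 percent of the total available raw disk space, the space budget for journal partitions can be estimated up front. The device count and sizes below are hypothetical example values.

```shell
# Hypothetical node: 4 NVMe devices of 1920 GiB raw capacity each
RAW_TOTAL_GIB=$(( 4 * 1920 ))                    # 7680 GiB total raw capacity
JOURNAL_MAX_GIB=$(( RAW_TOTAL_GIB * 3 / 100 ))   # 3% upper bound for the journal
echo "journal budget: ${JOURNAL_MAX_GIB} GiB"
```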

Deletes a storage node object from the state database.

+

Deletes a storage node object from the state database. It must only be used on clusters without any logical volumes. Warning: This is dangerous and could lead to an unstable cluster if used on an active cluster.

+
sbctl storage-node delete
+    <NODE_ID>
+    --force
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--forceForce delete storage node from DB...Hopefully you know what you domarkerFalse-
+

Removes a storage node from the cluster

+

The storage node can no longer be used or added again. Any data residing on this storage node will be migrated to the remaining storage nodes. The user must ensure that there is sufficient free space in the remaining cluster to allow for successful node removal.

+
+

Danger

+

If there isn't enough storage available, the cluster may run full and switch to read-only mode.

+
+
sbctl storage-node remove
+    <NODE_ID>
+    --force-remove
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--force-removeForce remove all logical volumes and snapshotsmarkerFalse-
+

Lists all storage nodes

+

Lists all storage nodes

+
sbctl storage-node list
+    --cluster-id=<CLUSTER_ID>
+    --json
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--cluster-idCluster idstringFalse-
--jsonPrint outputs in json formatmarkerFalse-
+

Gets a storage node's information

+

Gets a storage node's information

+
sbctl storage-node get
+    <NODE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+

Restarts a storage node

+

A storage node must be offline to be restarted. All functions and device drivers are reset as a result of the restart. New physical devices can only be added with a storage node restart. During the restart, the node will not accept any I/O.

+
sbctl storage-node restart
+    <NODE_ID>
+    --max-lvol=<MAX_LVOL>
+    --node-addr=<NODE_ADDR>
+    --force
+    --ssd-pcie=<SSD_PCIE>
+    --force-lvol-recreate
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--max-lvolMax logical volume per storage nodeintegerFalse0
--node-addr, --node-ipAllows restarting an existing storage node on a new host or hardware. Devices attached to the storage node have to be attached to the new host. Otherwise, they have to be marked as failed and removed from the cluster, which triggers a proactive migration of data from those devices onto other storage nodes.

The provided value must be presented in the form of IP:PORT. By default, the port number is 5000.
stringFalse-
--forceForce restartmarkerFalse-
--ssd-pcieNew NVMe PCIe address to add to the storage node. Can be more than one.stringFalse
--force-lvol-recreateForce logical volume recreation on node restart even if the lvol bdev was not recoveredmarkerFalseFalse
+
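The IP:PORT form expected by `--node-addr` can be assembled as below, falling back to the documented default port of 5000 when none is given. The management IP is a hypothetical example value.

```shell
# Hypothetical new host for the restarted storage node
NODE_IP=192.0.2.21
NODE_PORT="${NODE_PORT:-5000}"        # port defaults to 5000 if unset
NODE_ADDR="${NODE_IP}:${NODE_PORT}"
echo "$NODE_ADDR"                     # value to pass via --node-addr
```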

Initiates a storage node shutdown

+

Once the command is issued, the node stops accepting new I/O, but previously received I/O will still be processed. In a high-availability setup, this will not impact operations.

+
sbctl storage-node shutdown
+    <NODE_ID>
+    --force
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--forceForce node shutdownmarkerFalse-
+

Suspends a storage node

+

The node stops accepting new I/O, but finishes processing any I/O it has already received.

+
sbctl storage-node suspend
+    <NODE_ID>
+    --force
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--forceForce node suspendmarkerFalse-
+

Resumes a storage node

+

Resumes a storage node

+
sbctl storage-node resume
+    <NODE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+

Gets storage node IO statistics

+

Gets storage node IO statistics

+
sbctl storage-node get-io-stats
+    <NODE_ID>
+    --history=<HISTORY>
+    --records=<RECORDS>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--historyList history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total), format: XXdYYhstringFalse-
--recordsNumber of records, default: 20integerFalse20
+
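The XXdYYh history format used by `--history` can be assembled from a day and an hour count. The sketch below uses arbitrary example values of 2 days and 12 hours, which stay within the documented 10-day limit.

```shell
DAYS=2
HOURS=12
HISTORY="${DAYS}d${HOURS}h"   # e.g. passed as --history=2d12h
echo "$HISTORY"
```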

Gets a storage node's capacity statistics

+

Gets a storage node's capacity statistics

+
sbctl storage-node get-capacity
+    <NODE_ID>
+    --history=<HISTORY>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--historyList history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total), format: XXdYYhstringFalse-
+

Lists storage devices

+

Lists storage devices

+
sbctl storage-node list-devices
+    <NODE_ID>
+    --json
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--jsonPrint outputs in json formatmarkerFalse-
+

Gets storage device by its id

+

Gets storage device by its id

+
sbctl storage-node get-device
+    <DEVICE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
DEVICE_IDDevice idstringTrue
+

Resets a storage device

+

Hardware device reset. If successful, resetting the device can return it from the unavailable state to the online state.

+
sbctl storage-node reset-device
+    <DEVICE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
DEVICE_IDDevice idstringTrue
+

Restarts a storage device

+

A previously logically or physically removed or unavailable device, which has been re-inserted, may be returned to the online state. If the device is not physically present, accessible, or healthy, it will flip back into the unavailable state.

+
sbctl storage-node restart-device
+    <DEVICE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
DEVICE_IDDevice idstringTrue
+

Adds a new storage device

+

Adds a device, including a previously detected device (currently in "new" state), into the cluster and launches an auto-rebalancing background process in which some cluster capacity is redistributed to this newly added device.

+
sbctl storage-node add-device
+    <DEVICE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
DEVICE_IDDevice idstringTrue
+

Logically removes a storage device

+

Logically removes a storage device. The device becomes unavailable, irrespective of whether it was physically removed from the server. This function can be used if auto-detection of removal did not work or if the device must be maintained while remaining inserted into the server.

+
sbctl storage-node remove-device
+    <DEVICE_ID>
+    --force
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
DEVICE_IDDevice idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--forceForce device removemarkerFalse-
+

Sets storage device to failed state

+

Sets a storage device to the failed state. This command can be used if an administrator believes the device must be replaced. Attention: the failed state is final, meaning all data on the device will be automatically recovered to other devices in the cluster.

+
sbctl storage-node set-failed-device
+    <DEVICE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
DEVICE_IDDevice IDstringTrue
+

Gets a device's capacity

+

Gets a device's capacity

+
sbctl storage-node get-capacity-device
+    <DEVICE_ID>
+    --history=<HISTORY>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
DEVICE_IDDevice idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--historyList history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total), format: XXdYYhstringFalse-
+

Gets a device's IO statistics

+

Gets a device's IO statistics

+
sbctl storage-node get-io-stats-device
+    <DEVICE_ID>
+    --history=<HISTORY>
+    --records=<RECORDS>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
DEVICE_IDDevice idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--historyList history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total), format: XXdYYhstringFalse-
--recordsNumber of records, default: 20integerFalse20
+

Gets the data interfaces list of a storage node

+

Gets the data interfaces list of a storage node

+
sbctl storage-node port-list
+    <NODE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+

Gets the data interfaces' IO stats

+

Gets the data interfaces' IO stats

+
sbctl storage-node port-io-stats
+    <PORT_ID>
+    --history=<HISTORY>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
PORT_IDData port idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--historyList history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total), format: XXdYYhstringFalse-
+

Checks the health status of a storage node

+

Verifies that all NVMe-oF connections to and from the storage node, including those to and from other storage devices in the cluster and the metadata journal, are available and healthy, and that all internal objects of the node, such as data placement and erasure coding services, are in a healthy state.

+
sbctl storage-node check
+    <NODE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+

Checks the health status of a device

+

Checks the health status of a device

+
sbctl storage-node check-device
+    <DEVICE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
DEVICE_IDDevice idstringTrue
+

Gets the node's information

+

Gets the node's information

+
sbctl storage-node info
+    <NODE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+

Restarts a journaling device

+

Restarts a journaling device

+
sbctl storage-node restart-jm-device
+    <JM_DEVICE_ID>
+    --force
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
JM_DEVICE_IDJournaling device idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--forceForce device removemarkerFalse-
+

Forces to make the provided node id primary

+

Makes the storage node the primary node. This is required after certain storage cluster operations, such as a storage node migration.

+
sbctl storage-node make-primary
+    <NODE_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NODE_IDStorage node idstringTrue
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/cli/storage-pool/index.html b/deployment/25.10.3/reference/cli/storage-pool/index.html new file mode 100644 index 00000000..f81d188f --- /dev/null +++ b/deployment/25.10.3/reference/cli/storage-pool/index.html @@ -0,0 +1,5290 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Storage pool commands - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Storage pool commands

+ +
sbctl storage-pool --help
+
+

Aliases: pool

+

Storage pool commands

+

Adds a new storage pool

+

Adds a new storage pool

+
sbctl storage-pool add
+    <NAME>
+    <CLUSTER_ID>
+    --pool-max=<POOL_MAX>
+    --lvol-max=<LVOL_MAX>
+    --max-rw-iops=<MAX_RW_IOPS>
+    --max-rw-mbytes=<MAX_RW_MBYTES>
+    --max-r-mbytes=<MAX_R_MBYTES>
+    --max-w-mbytes=<MAX_W_MBYTES>
+    --qos-host=<QOS_HOST>
+
+ + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NAMENew pool namestringTrue
CLUSTER_IDCluster idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--pool-maxPool maximum size: 20M, 20G, 0. Default: 0sizeFalse0
--lvol-maxLogical volume maximum size: 20M, 20G, 0. Default: 0sizeFalse0
--max-rw-iopsMaximum Read Write IO Per SecondintegerFalse-
--max-rw-mbytesMaximum Read Write Megabytes Per SecondintegerFalse-
--max-r-mbytesMaximum Read Megabytes Per SecondintegerFalse-
--max-w-mbytesMaximum Write Megabytes Per SecondintegerFalse-
--qos-hostNode UUID for QoS poolstringFalse-
+

Sets a storage pool's attributes

+

Sets a storage pool's attributes

+
sbctl storage-pool set
+    <POOL_ID>
+    --pool-max=<POOL_MAX>
+    --lvol-max=<LVOL_MAX>
+    --max-rw-iops=<MAX_RW_IOPS>
+    --max-rw-mbytes=<MAX_RW_MBYTES>
+    --max-r-mbytes=<MAX_R_MBYTES>
+    --max-w-mbytes=<MAX_W_MBYTES>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
POOL_IDPool idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--pool-maxPool maximum size: 20M, 20GsizeFalse-
--lvol-maxLogical volume maximum size: 20M, 20GsizeFalse-
--max-rw-iopsMaximum Read Write IO Per SecondintegerFalse-
--max-rw-mbytesMaximum Read Write Megabytes Per SecondintegerFalse-
--max-r-mbytesMaximum Read Megabytes Per SecondintegerFalse-
--max-w-mbytesMaximum Write Megabytes Per SecondintegerFalse-
+

Lists all storage pools

+

Lists all storage pools

+
sbctl storage-pool list
+    --json
+    --cluster-id=<CLUSTER_ID>
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--jsonPrint outputs in json formatmarkerFalse-
--cluster-idCluster idstringFalse-
+

Gets a storage pool's details

+

Gets a storage pool's details

+
sbctl storage-pool get
+    <POOL_ID>
+    --json
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
POOL_IDPool idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--jsonPrint outputs in json formatmarkerFalse-
+

Deletes a storage pool

+

It is only possible to delete a pool if it is empty (no provisioned logical volumes contained).

+
sbctl storage-pool delete
+    <POOL_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
POOL_IDPool idstringTrue
+

Sets a storage pool's status to Active

+

Sets a storage pool's status to Active

+
sbctl storage-pool enable
+    <POOL_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
POOL_IDPool idstringTrue
+

Sets a storage pool's status to Inactive.

+

Sets a storage pool's status to Inactive.

+
sbctl storage-pool disable
+    <POOL_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
POOL_IDPool idstringTrue
+

Gets a storage pool's capacity

+

Gets a storage pool's capacity

+
sbctl storage-pool get-capacity
+    <POOL_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
POOL_IDPool idstringTrue
+

Gets a storage pool's I/O statistics

+

Gets a storage pool's I/O statistics

+
sbctl storage-pool get-io-stats
+    <POOL_ID>
+    --history=<HISTORY>
+    --records=<RECORDS>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
POOL_IDPool idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--history(XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total).stringFalse-
--recordsNumber of records, default: 20integerFalse20
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/cli/volume/index.html b/deployment/25.10.3/reference/cli/volume/index.html new file mode 100644 index 00000000..c4ac19f7 --- /dev/null +++ b/deployment/25.10.3/reference/cli/volume/index.html @@ -0,0 +1,5647 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Logical volume commands - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Logical volume commands

+ +
sbctl volume --help
+
+

Aliases: lvol

+

Logical volume commands

+

Adds a new logical volume

+

Adds a new logical volume

+
sbctl volume add
+    <NAME>
+    <SIZE>
+    <POOL>
+    --snapshot
+    --max-size=<MAX_SIZE>
+    --host-id=<HOST_ID>
+    --encrypt
+    --crypto-key1=<CRYPTO_KEY1>
+    --crypto-key2=<CRYPTO_KEY2>
+    --max-rw-iops=<MAX_RW_IOPS>
+    --max-rw-mbytes=<MAX_RW_MBYTES>
+    --max-r-mbytes=<MAX_R_MBYTES>
+    --max-w-mbytes=<MAX_W_MBYTES>
+    --max-namespace-per-subsys=<MAX_NAMESPACE_PER_SUBSYS>
+    --ha-type=<HA_TYPE>
+    --fabric=<FABRIC>
+    --lvol-priority-class=<LVOL_PRIORITY_CLASS>
+    --namespace=<NAMESPACE>
+    --pvc-name=<PVC_NAME>
+    --data-chunks-per-stripe=<DATA_CHUNKS_PER_STRIPE>
+    --parity-chunks-per-stripe=<PARITY_CHUNKS_PER_STRIPE>
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
NAMENew logical volume namestringTrue
SIZELogical volume size: 10M, 10G, 10(bytes)sizeTrue
POOLPool id or namestringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--snapshot, -sMake logical volume with snapshot capability, default: falsemarkerFalseFalse
--max-sizeLogical volume max sizesizeFalse1000T
--host-idPrimary storage node id or HostnamestringFalse-
--encryptUse inline data encryption and decryption on the logical volumemarkerFalse-
--crypto-key1Hex value of key1 to be used for logical volume encryptionstringFalse-
--crypto-key2Hex value of key2 to be used for logical volume encryptionstringFalse-
--max-rw-iopsMaximum Read Write IO Per SecondintegerFalse-
--max-rw-mbytesMaximum Read Write Megabytes Per SecondintegerFalse-
--max-r-mbytesMaximum Read Megabytes Per SecondintegerFalse-
--max-w-mbytesMaximum Write Megabytes Per SecondintegerFalse-
--max-namespace-per-subsysMaximum Namespace per subsystemintegerFalse32
--ha-typeLogical volume HA type (single, ha), default is cluster HA type

Available Options:
- single
- default
- ha
stringFalsedefault
--fabrictcp or rdma (tcp is default). Cluster must support chosen fabric.

Available Options:
- tcp
- rdma
- tcp,rdma
stringFalsetcp
--lvol-priority-classLogical volume priority classintegerFalse0
--namespaceSet logical volume namespace for k8s clientsstringFalse-
--pvc-name, --pvc_nameSet the logical volume persistent volume claim name for Kubernetes clients.

+
+

Warning

+

The old parameter name --pvc_name is deprecated and shouldn't be used anymore. It will eventually be removed. Please exchange the use of --pvc_name with --pvc-name. | string | False | - |

+
+

| --data-chunks-per-stripe | Erasure coding schema parameter k (distributed raid), default: 1 | integer | False | 0 |
| --parity-chunks-per-stripe | Erasure coding schema parameter n (distributed raid), default: 1 | integer | False | 0 |

+
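The k (data chunks) and n (parity chunks) stripe parameters determine how much of the raw capacity remains usable. The estimate below is a sketch under the assumption that usable capacity scales as k/(k+n); the chunk counts and raw size are hypothetical example values.

```shell
K=2          # --data-chunks-per-stripe (data chunks per stripe)
N=1          # --parity-chunks-per-stripe (parity chunks per stripe)
RAW_TIB=300  # hypothetical raw cluster capacity

USABLE_TIB=$(( RAW_TIB * K / (K + N) ))   # a 2+1 scheme keeps 2/3 of raw capacity
echo "usable: ${USABLE_TIB} TiB"
```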

Changes QoS settings for an active logical volume

+

Changes QoS settings for an active logical volume

+
sbctl volume qos-set
+    <VOLUME_ID>
+    --max-rw-iops=<MAX_RW_IOPS>
+    --max-rw-mbytes=<MAX_RW_MBYTES>
+    --max-r-mbytes=<MAX_R_MBYTES>
+    --max-w-mbytes=<MAX_W_MBYTES>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
VOLUME_IDLogical volume idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--max-rw-iopsMaximum Read Write IO Per SecondintegerFalse-
--max-rw-mbytesMaximum Read Write Megabytes Per SecondintegerFalse-
--max-r-mbytesMaximum Read Megabytes Per SecondintegerFalse-
--max-w-mbytesMaximum Write Megabytes Per SecondintegerFalse-
+

Lists logical volumes

+

Lists logical volumes

+
sbctl volume list
+    --cluster-id=<CLUSTER_ID>
+    --pool=<POOL>
+    --json
+    --all
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--cluster-idList logical volumes in particular clusterstringFalse-
--poolList logical volumes in particular pool id or namestringFalse-
--jsonPrint outputs in json formatmarkerFalse-
--allList soft deleted logical volumesmarkerFalse-
+

Gets the logical volume details

+

Gets the logical volume details

+
sbctl volume get
+    <VOLUME_ID>
+    --json
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
VOLUME_IDLogical volume id or namestringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--jsonPrint outputs in json formatmarkerFalse-
+

Deletes a logical volume

+

Deletes a logical volume. Attention: All data will be lost! This is an irreversible operation! Actual storage capacity will be freed as an asynchronous background task. It may take a while until the actual storage is released.

+
sbctl volume delete
+    <VOLUME_ID>
+    --force
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
VOLUME_IDLogical volumes id or idsstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--forceForce delete logical volume from the clustermarkerFalse-
+

Gets the logical volume's NVMe/TCP connection string(s)

+

Multiple connections to the cluster are always available for multi-pathing and high-availability.

+
sbctl volume connect
+    <VOLUME_ID>
+    --ctrl-loss-tmo=<CTRL_LOSS_TMO>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
VOLUME_IDLogical volume idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--ctrl-loss-tmoControl loss timeout for this volumeintegerFalse-
+
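A typical invocation is sketched below. The timeout value is purely illustrative, not a recommendation; suitable values depend on the initiator's failover requirements.

```shell
# Retrieve the NVMe/TCP connection string(s); --ctrl-loss-tmo raises the
# controller loss timeout so initiators tolerate longer path outages.
# 3600 is an illustrative value; <VOLUME_ID> is a placeholder.
sbctl volume connect <VOLUME_ID> --ctrl-loss-tmo=3600
```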

Resizes a logical volume

+

Resizes a logical volume. A volume's size can only be increased, never decreased. The new capacity must fit into the storage pool's free capacity.

+
sbctl volume resize
+    <VOLUME_ID>
+    <SIZE>
+
+ + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
VOLUME_IDLogical volume idstringTrue
SIZENew logical volume size: 10M, 10G, 10(bytes)sizeTrue
+
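The SIZE argument accepts suffixed values such as 10M or 10G. A quick local sanity check using GNU coreutils `numfmt` is sketched below; whether sbctl interprets G as GiB or GB is an assumption here (IEC is used in the sketch), and sbctl validates the new size server-side in any case.

```shell
# Compare a requested new size against the current one in bytes,
# mirroring the "only increasing is possible" rule stated above.
current=$(numfmt --from=iec 10G)     # current volume size in bytes
requested=$(numfmt --from=iec 25G)   # requested new size in bytes
if [ "$requested" -le "$current" ]; then
  echo "resize rejected: only increasing a volume is possible" >&2
else
  echo "ok: growing from $current to $requested bytes"
fi
```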

Creates a snapshot from a logical volume

+

Creates a snapshot from a logical volume

+
sbctl volume create-snapshot
+    <VOLUME_ID>
+    <NAME>
+
+ + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
VOLUME_IDLogical volume idstringTrue
NAMESnapshot namestringTrue
+

Provisions a logical volume from an existing snapshot

+

Provisions a logical volume from an existing snapshot

+
sbctl volume clone
+    <SNAPSHOT_ID>
+    <CLONE_NAME>
+    --resize=<RESIZE>
+
+ + + + + + + + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
SNAPSHOT_IDSnapshot idstringTrue
CLONE_NAMEClone namestringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--resizeNew logical volume size: 10M, 10G, 10(bytes). Can only increase.sizeFalse0
+

Gets a logical volume's capacity

+

Gets a logical volume's capacity

+
sbctl volume get-capacity
+    <VOLUME_ID>
+    --history=<HISTORY>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
VOLUME_IDLogical volume idstringTrue
+ + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--history(XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total).stringFalse-
+
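The XXdYYh window format can be converted mechanically. The helper below is hypothetical (not part of sbctl): it turns a spec such as 3d12h into a total number of hours and clamps it to the documented 10-day maximum.

```shell
# Hypothetical helper: convert an XXdYYh history spec into hours,
# clamped to 240 hours (10 days). Expects both the day and hour
# components to be present, e.g. 3d12h.
history_hours() {
  echo "$1" | awk -F'[dh]' '{ t = $1 * 24 + $2; if (t > 240) t = 240; print t }'
}
history_hours 3d12h   # prints 84
history_hours 12d0h   # clamped, prints 240
```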

Gets a logical volume's I/O statistics

+

Gets a logical volume's I/O statistics

+
sbctl volume get-io-stats
+    <VOLUME_ID>
+    --history=<HISTORY>
+    --records=<RECORDS>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
VOLUME_IDLogical volume idstringTrue
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionData TypeRequiredDefault
--history(XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total).stringFalse-
--recordsNumber of records, default: 20integerFalse20
+

Checks a logical volume's health

+

Checks a logical volume's health

+
sbctl volume check
+    <VOLUME_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
VOLUME_IDLogical volume idstringTrue
+

Inflates a logical volume

+

All clusters not yet allocated in the logical volume are allocated and either copied from the parent or zero-filled if they are unallocated in the parent as well. Afterward, all dependencies on the parent are removed.

+
sbctl volume inflate
+    <VOLUME_ID>
+
+ + + + + + + + + + + + + + + + + +
ArgumentDescriptionData TypeRequired
VOLUME_IDLogical volume idstringTrue
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/index.html b/deployment/25.10.3/reference/index.html new file mode 100644 index 00000000..d6fde9d6 --- /dev/null +++ b/deployment/25.10.3/reference/index.html @@ -0,0 +1,4675 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Reference - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Reference

+ +

Simplyblock provides multiple interfaces for managing and interacting with its distributed storage system, including the +sbctl command-line interface (CLI) and Management API. The sbctl CLI offers a +powerful, scriptable way to perform essential operations such as provisioning, expanding, snapshotting, and cloning +logical volumes, making it ideal for administrators who prefer direct command-line access.

+

The simplyblock Management API enables integration with external automation and orchestration tools, allowing seamless +management of storage resources at scale. Additionally, this section includes a reference list of supported Linux +kernels and distributions, ensuring compatibility across various environments.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/kubernetes/index.html b/deployment/25.10.3/reference/kubernetes/index.html new file mode 100644 index 00000000..6b4b2822 --- /dev/null +++ b/deployment/25.10.3/reference/kubernetes/index.html @@ -0,0 +1,5439 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Kubernetes Helm Chart Parameters - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Kubernetes Helm Chart Parameters

+ +

Simplyblock provides a Helm chart to install one or more components into Kubernetes. Available components are the CSI +driver, storage nodes, and caching nodes.

+

This reference provides an overview of all available parameters that can be set on the Helm chart during installation +or upgrade.

+

CSI Parameters

+

Commonly configured CSI driver parameters:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
csiConfig.simplybk.uuidSets the simplyblock cluster id on which the volumes are provisioned.
csiConfig.simplybk.ipSets the HTTP(S) API Gateway endpoint connected to the management node.https://o5ls1ykzbb.execute-api.eu-central-1.amazonaws.com
csiSecret.simplybk.secretSets the cluster secret associated with the cluster.
logicalVolume.encryptionSpecifies whether logical volumes should be encrypted.False
csiSecret.simplybkPvc.crypto_key1Sets the first encryption key.
csiSecret.simplybkPvc.crypto_key2Sets the second encryption key.
logicalVolume.pool_nameSets the storage pool name where logical volumes are created. This storage pool needs to exist.testing1
logicalVolume.qos_rw_iopsSets the maximum read-write IOPS. Zero means unlimited.0
logicalVolume.qos_rw_mbytesSets the maximum read-write Mbps. Zero means unlimited.0
logicalVolume.qos_r_mbytesSets the maximum read Mbps. Zero means unlimited.0
logicalVolume.qos_w_mbytesSets the maximum write Mbps. Zero means unlimited.0
logicalVolume.numDataChunksSets the erasure coding schema parameter k, the number of data chunks (distributed RAID).1
logicalVolume.numParityChunksSets the erasure coding schema parameter n, the number of parity chunks (distributed RAID).1
logicalVolume.lvol_priority_classSets the logical volume priority class.0
logicalVolume.fabricSets the NVMe-oF transport type.tcp
logicalVolume.tune2fs_reserved_blocksSets the percentage of disk blocks reserved for the system.0
logicalVolume.max_namespace_per_subsysSets the maximum number of namespaces per subsystem.1
storageclass.createSpecifies whether to create a StorageClass.true
snapshotclass.createSpecifies whether to create a SnapshotClass.true
snapshotcontroller.createSpecifies whether to create a snapshot controller and CRDs for snapshot support.true
+
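The commonly configured parameters above are typically set at install time via Helm's `--set` flags. The sketch below is hypothetical: the release name and chart reference are placeholders, and the exact chart location should be taken from the simplyblock installation guide.

```shell
helm install simplyblock-csi <CHART_REFERENCE> \
  --set csiConfig.simplybk.uuid=<CLUSTER_ID> \
  --set csiConfig.simplybk.ip=https://<API_GATEWAY_ENDPOINT> \
  --set csiSecret.simplybk.secret=<CLUSTER_SECRET> \
  --set logicalVolume.pool_name=<POOL_NAME>
```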

Additional, uncommonly configured CSI driver parameters:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
driverNameSets an alternative driver name.csi.simplyblock.io
serviceAccount.createSpecifies whether to create a service account for the CSI controller (spdkcsi-controller).true
rbac.createSpecifies whether to create RBAC permissions for the CSI controller (spdkcsi-controller).true
controller.replicasSets the replica number of the CSI controller (spdkcsi-controller).1
controller.tolerations.createSpecifies whether to create tolerations for the csi controller.false
controller.tolerations.effectSets the effect of tolerations on the csi controller.<empty>
controller.tolerations.keySets the key of tolerations for the csi controller.<empty>
controller.tolerations.operatorSets the operator for the csi controller tolerations.Exists
controller.tolerations.valueSets the value of tolerations for the csi controller.<empty>
controller.nodeSelector.createSpecifies whether to create nodeSelector for the csi controller.false
controller.nodeSelector.keySets the key of nodeSelector for the csi controller.<empty>
controller.nodeSelector.valueSets the value of nodeSelector for the csi controller.<empty>
externallyManagedConfigmap.createSpecifies whether an externallyManagedConfigmap should be created.true
externallyManagedSecret.createSpecifies whether an externallyManagedSecret should be created.true
podAnnotationsAnnotations to apply to all pods in the chart.{}
simplyBlockAnnotationsAnnotations to apply to simplyblock Kubernetes resources like DaemonSets, Deployments, or StatefulSets.{}
node.tolerations.createSpecifies whether to create tolerations for the CSI driver node.false
node.tolerations.effectSets the effect of tolerations on the CSI driver node.<empty>
node.tolerations.keySets the key of tolerations for the CSI driver node.<empty>
node.tolerations.operatorSets the operator for the csi node tolerations.Exists
node.tolerations.valueSets the value of tolerations for the CSI driver node.<empty>
node.nodeSelector.createSpecifies whether to create nodeSelector for the CSI driver node.false
node.nodeSelector.keySets the key of nodeSelector for the CSI driver node.<empty>
node.nodeSelector.valueSets the value of nodeSelector for the CSI driver node.<empty>
storageclass.volumeBindingModeSets when PersistentVolumes are bound and provisioned.WaitForFirstConsumer
storageclass.zoneClusterMapSets the mapping between Kubernetes zones and simplyblock clusters for multi-cluster or multi-zone deployments.{}
storageclass.allowedTopologyZonesSets the list of topology zones where the StorageClass is allowed to provision volumes.[]
+

Storage Node Parameters

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
storagenode.daemonsets[0].nameSets the name of the storage node DaemonSet.storage-node-ds
storagenode.daemonsets[0].appLabelSets the label applied to the storage node DaemonSet for identification.storage-node
storagenode.daemonsets[0].nodeSelector.keySets the key used in the nodeSelector to constrain which nodes the DaemonSet should run on.type
storagenode.daemonsets[0].nodeSelector.valueSets the value for the nodeSelector key to match against specific nodes.simplyblock-storage-plane
storagenode.daemonsets[0].tolerations.createSpecifies whether to create tolerations for the storage node.false
storagenode.daemonsets[0].tolerations.effectSets the effect of tolerations on the storage node.<empty>
storagenode.daemonsets[0].tolerations.keySets the key of tolerations for the storage node.<empty>
storagenode.daemonsets[0].tolerations.operatorSets the operator for the storage node tolerations.Exists
storagenode.daemonsets[0].tolerations.valueSets the value of tolerations for the storage node.<empty>
storagenode.daemonsets[1].nameSets the name of the restart storage node DaemonSet.storage-node-ds-restart
storagenode.daemonsets[1].appLabelSets the label applied to the restart storage node DaemonSet for identification.storage-node-restart
storagenode.daemonsets[1].nodeSelector.keySets the key used in the nodeSelector to constrain which nodes the DaemonSet should run on.type
storagenode.daemonsets[1].nodeSelector.valueSets the value for the nodeSelector key to match against specific nodes.simplyblock-storage-plane-restart
storagenode.daemonsets[1].tolerations.createSpecifies whether to create tolerations for the restart storage node.false
storagenode.daemonsets[1].tolerations.effectSets the effect of tolerations on the restart storage node.<empty>
storagenode.daemonsets[1].tolerations.keySets the key of tolerations for the restart storage node.<empty>
storagenode.daemonsets[1].tolerations.operatorSets the operator for the restart storage node tolerations.Exists
storagenode.daemonsets[1].tolerations.valueSets the value of tolerations for the restart storage node.<empty>
storagenode.createSpecifies whether to create storage nodes on Kubernetes worker nodes.false
storagenode.ifnameSets the default host interface used to bind the storage node.eth0
storagenode.maxLogicalVolumesSets the default maximum number of logical volumes per storage node.10
storagenode.maxSnapshotsSets the default maximum number of snapshots per storage node.10
storagenode.maxSizeSets the max provisioning size of all storage nodes.150g
storagenode.numPartitionsSets the number of partitions to create per device.1
storagenode.numDevicesSets the number of devices per storage node.1
storagenode.numDistribsSets the number of distribs per storage node.2
storagenode.isolateCoresEnables automatic core isolation.false
storagenode.dataNicsSets the data interface names.<empty>
storagenode.pciAllowedSets the list of allowed NVMe PCIe addresses.<empty>
storagenode.pciBlockedSets the list of blocked NVMe PCIe addresses.<empty>
storagenode.socketsToUseSets the list of sockets to use.<empty>
storagenode.nodesPerSocketSets the number of nodes to use per socket.<empty>
storagenode.coresPercentageSets the percentage of total cores (vCPUs) available to simplyblock storage node services.<empty>
storagenode.ubuntuHostSet to true if the worker node runs Ubuntu and needs the nvme-tcp kernel module installed.false
storagenode.openShiftClusterSet to true if it is an OpenShift cluster that needs core isolation.false
+
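Per the defaults above, the storage-node DaemonSet schedules onto nodes carrying the label `type=simplyblock-storage-plane`. A worker node is opted in by labeling it; the node name below is a placeholder.

```shell
kubectl label node <NODE_NAME> type=simplyblock-storage-plane
```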

Caching Node Parameters

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
cachingnode.tolerations.createSpecifies whether to create tolerations for the caching node.false
cachingnode.tolerations.effectSets the effect of tolerations on caching nodes.<empty>
cachingnode.tolerations.keySets the tolerations key for caching nodes.<empty>
cachingnode.tolerations.operatorSets the operator for caching node tolerations.Exists
cachingnode.tolerations.valueSets the value of tolerations for caching nodes.<empty>
cachingnode.createSpecifies whether to create caching nodes on Kubernetes worker nodes.false
cachingnode.nodeSelector.keySets the key used in the nodeSelector to constrain which nodes the DaemonSet should run on.type
cachingnode.nodeSelector.valueSets the value for the nodeSelector key to match against specific nodes.simplyblock-cache
cachingnode.ifnameSets the default host interface used to bind the caching node.eth0
cachingnode.cpuMaskSets the CPU mask for the SPDK app used by the caching node.<empty>
cachingnode.spdkMemSets the amount of hugepage memory to allocate for the caching node.<empty>
cachingnode.multipathingSpecifies whether to enable multipathing for logical volume connections.true
+

Image Overrides

+
+

Danger

+
+

Overriding pinned image tags can result in an unusable state. +The following parameters should only be used after an explicit request from simplyblock.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
image.csi.repositorySimplyblock CSI driver image.simplyblock/spdkcsi
image.csi.tagSimplyblock CSI driver image tag.v0.1.0
image.csi.pullPolicySimplyblock CSI driver image pull policy.Always
image.csiProvisioner.repositoryCSI provisioner image.registry.k8s.io/sig-storage/csi-provisioner
image.csiProvisioner.tagCSI provisioner image tag.v4.0.1
image.csiProvisioner.pullPolicyCSI provisioner image pull policy.Always
image.csiAttacher.repositoryCSI attacher image.gcr.io/k8s-staging-sig-storage/csi-attacher
image.csiAttacher.tagCSI attacher image tag.v4.5.1
image.csiAttacher.pullPolicyCSI attacher image pull policy.Always
image.nodeDriverRegistrar.repositoryCSI node driver registrar image.registry.k8s.io/sig-storage/csi-node-driver-registrar
image.nodeDriverRegistrar.tagCSI node driver registrar image tag.v2.10.1
image.nodeDriverRegistrar.pullPolicyCSI node driver registrar image pull policy.Always
image.csiSnapshotter.repositoryCSI snapshotter image.registry.k8s.io/sig-storage/csi-snapshotter
image.csiSnapshotter.tagCSI snapshotter image tag.v7.0.2
image.csiSnapshotter.pullPolicyCSI snapshotter image pull policy.Always
image.csiResizer.repositoryCSI resizer image.gcr.io/k8s-staging-sig-storage/csi-resizer
image.csiResizer.tagCSI resizer image tag.v1.10.1
image.csiResizer.pullPolicyCSI resizer image pull policy.Always
image.csiHealthMonitor.repositoryCSI external health-monitor controller image.gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller
image.csiHealthMonitor.tagCSI external health-monitor controller image tag.v0.11.0
image.csiHealthMonitor.pullPolicyCSI external health-monitor controller image pull policy.Always
image.simplyblock.repositorySimplyblock management image.simplyblock/simplyblock
image.simplyblock.tagSimplyblock management image tag.R25.5-Hotfix
image.simplyblock.pullPolicySimplyblock management image pull policy.Always
image.storageNode.repositorySimplyblock storage-node controller image.simplyblock/simplyblock
image.storageNode.tagSimplyblock storage-node controller image tag.v0.1.0
image.storageNode.pullPolicySimplyblock storage-node controller image pull policy.Always
image.cachingNode.repositorySimplyblock caching-node controller image.simplyblock/simplyblock
image.cachingNode.tagSimplyblock caching-node controller image tag.v0.1.0
image.cachingNode.pullPolicySimplyblock caching-node controller image pull policy.Always
image.mgmtAPI.repositorySimplyblock management api image.python
image.mgmtAPI.tagSimplyblock management api image tag.3.10
image.mgmtAPI.pullPolicySimplyblock management api image pull policy.Always
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/nvme-low-level-format/index.html b/deployment/25.10.3/reference/nvme-low-level-format/index.html new file mode 100644 index 00000000..a552a17b --- /dev/null +++ b/deployment/25.10.3/reference/nvme-low-level-format/index.html @@ -0,0 +1,4765 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + NVMe Low-Level Format - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

NVMe Low-Level Format

+ +

Once the check is complete, the NVMe devices in each storage node can be prepared. To prevent data loss in the case of a +sudden power outage, NVMe devices need to be formatted with a specific LBA format.

+
+

Warning

+

Failing to format NVMe devices with the correct LBA format can lead to data loss or data corruption in the case +of a sudden power outage or other loss of power. If you can't find the necessary LBA format, it is best to ask +your simplyblock contact for further instructions.

+

On AWS, the necessary LBA format is not available. Simplyblock is, however, fully tested and supported with AWS.

+
+

The lsblk command is the best way to find all NVMe devices attached to a system.

+
Example output of lsblk
[demo@demo-3 ~]# sudo lsblk
+NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
+sda           8:0    0   30G  0 disk
+├─sda1        8:1    0    1G  0 part /boot
+└─sda2        8:2    0   29G  0 part
+  ├─rl-root 253:0    0   26G  0 lvm  /
+  └─rl-swap 253:1    0    3G  0 lvm  [SWAP]
+nvme3n1     259:0    0  6.5G  0 disk
+nvme2n1     259:1    0   70G  0 disk
+nvme1n1     259:2    0   70G  0 disk
+nvme0n1     259:3    0   70G  0 disk
+
+

In the example, we see four NVMe devices: three with 70GiB and one with 6.5GiB of storage capacity.

+
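To narrow the listing down to NVMe disks only, lsblk's selectable columns can be combined with a small filter; a convenience sketch:

```shell
# List only NVMe block devices, one per line: name and size.
# -d suppresses partitions, -n suppresses the header. awk is used
# instead of grep so the pipeline succeeds even when no NVMe
# devices are present.
lsblk -d -n -o NAME,SIZE 2>/dev/null | awk '$1 ~ /^nvme/ { print $1, $2 }'
```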

To find the correct LBA format (lbaf) for each of the devices, the nvme CLI can be used.

+
Show NVMe namespace information
sudo nvme id-ns /dev/nvmeXnY
+
+

The output depends on the NVMe device itself, but looks something like this:

+
Example output of NVMe namespace information
[demo@demo-3 ~]# sudo nvme id-ns /dev/nvme0n1
+NVME Identify Namespace 1:
+...
+lbaf  0 : ms:0   lbads:9  rp:0
+lbaf  1 : ms:8   lbads:9  rp:0
+lbaf  2 : ms:16  lbads:9  rp:0
+lbaf  3 : ms:64  lbads:9  rp:0
+lbaf  4 : ms:0   lbads:12 rp:0 (in use)
+lbaf  5 : ms:8   lbads:12 rp:0
+lbaf  6 : ms:16  lbads:12 rp:0
+lbaf  7 : ms:64  lbads:12 rp:0
+
+

From this output, the required lbaf configuration can be found. The required configuration must have the following +values:

+ + + + + + + + + + + + + + + + + + + + + +
PropertyValue
ms0
lbads12
rp0
+

In the example, the required LBA format is 4. If an NVMe device doesn't have that combination, any other lbads=12 +combination will work. However, simplyblock recommends asking your simplyblock contact for the best available combination.

+
+

Info

+

In some rare cases, no lbads=12 combination will be available. In this case, it is ok to leave the current +setup. This is specifically true for certain cloud providers such as AWS.

+
+

In our example, the device is already formatted with the correct lbaf (see the "in use" marker). It is, however, +recommended to always format the device before use.

+

To format the drive, the nvme cli is used again.

+
Formatting the NVMe device
sudo nvme format --lbaf=<lbaf> --ses=0 /dev/nvmeXnY
+
+

The command should report a successful response when executed, similar to the example below.

+
Example output of NVMe device formatting
[demo@demo-3 ~]# sudo nvme format --lbaf=4 --ses=0 /dev/nvme0n1
+You are about to format nvme0n1, namespace 0x1.
+WARNING: Format may irrevocably delete this device's data.
+You have 10 seconds to press Ctrl-C to cancel this operation.
+
+Use the force [--force] option to suppress this warning.
+Sending format operation ...
+Success formatting namespace:1
+
+
+

Warning

+

This operation needs to be repeated for each NVMe device that will be handled by simplyblock.

+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/supported-linux-distributions/index.html b/deployment/25.10.3/reference/supported-linux-distributions/index.html new file mode 100644 index 00000000..e85b11e5 --- /dev/null +++ b/deployment/25.10.3/reference/supported-linux-distributions/index.html @@ -0,0 +1,4999 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Supported Linux Distributions - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Supported Linux Distributions

+ +

Simplyblock requires a Linux kernel 5.19 or later with NVMe over Fabrics and NVMe over TCP enabled. However, +sbctl, the simplyblock command-line interface, requires some additional tools and expects certain +conventions for configuration files and locations. Therefore, simplyblock officially only supports Red Hat-based Linux +distributions as of now.

+

While others may work, manual intervention may be required, and simplyblock cannot support those.

+

Control Plane (Plain Linux)

+

The following Linux distributions are considered tested and supported to run a control plane:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DistributionVersionArchitectureSupport Level
Red Hat Enterprise Linux9 and laterx64Fully supported
Rocky Linux9 and laterx64Fully supported
AlmaLinux9 and laterx64Fully supported
+

Storage Plane (Plain Linux)

+

The following Linux distributions are considered tested and supported to run a disaggregated storage plane:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DistributionVersionArchitectureSupport Level
Red Hat Enterprise Linux9 and laterx64, arm64Fully supported
Rocky Linux9 and laterx64, arm64Fully supported
AlmaLinux9 and laterx64, arm64Fully supported
+

Kubernetes: Control Plane and Storage Plane

+

The following Linux distributions are considered tested and supported to run a hyper-converged storage plane:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DistributionVersionArchitectureSupport Level
Red Hat Enterprise Linux9 and laterx64, arm64Fully supported
Rocky Linux9 and laterx64, arm64Fully supported
AlmaLinux9 and laterx64, arm64Fully supported
Ubuntu22.04 and laterx64, arm64Fully supported
Debian12 or laterx64, arm64Fully supported
Amazon Linux 2 (AL2)-x64, arm64Fully supported
Amazon Linux 2023-x64, arm64Fully supported
Talos1.6.7 or laterx64, arm64Fully supported
+

Hosts (Initiators accessing the Storage Cluster over NVMe-oF)

+

The following Linux distributions are considered tested and supported as NVMe-oF storage clients:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DistributionVersionArchitectureSupport Level
Red Hat Enterprise Linux8.1 and laterx64, arm64Fully supported
CentOS8 and laterx64, arm64Fully supported
Rocky Linux9 and laterx64, arm64Fully supported
AlmaLinux9 and laterx64, arm64Fully supported
Ubuntu18.04x64, arm64Fully supported
Ubuntu20.04x64, arm64Fully supported
Ubuntu22.04x64, arm64Fully supported
Debian12 or laterx64, arm64Fully supported
Amazon Linux 2 (AL2)-x64, arm64Partially supported1
Amazon Linux 2023-x64, arm64Partially supported1
+

1 Amazon Linux 2 and Amazon Linux 2023 have a bug with +NVMe over Fabrics Multipathing. That means that NVMe over Fabrics +on any Amazon Linux operates in a degraded state with the risk of connection outages. Alternatively, +multipathing must be configured using the Linux Device Manager (dm) via DM-MPIO.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/supported-linux-kernels/index.html b/deployment/25.10.3/reference/supported-linux-kernels/index.html new file mode 100644 index 00000000..91007609 --- /dev/null +++ b/deployment/25.10.3/reference/supported-linux-kernels/index.html @@ -0,0 +1,4742 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Supported Linux Kernels - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Supported Linux Kernels

+ +

Simplyblock is built upon NVMe over Fabrics. Hence, it requires a Linux kernel with NVMe and NVMe-oF support.

+

As a general rule, every Linux kernel 5.19 or later is expected to work, as long as the kernel modules for NVMe (nvme), +NVMe over Fabrics (nvme-of), and NVMe over TCP (nvme-tcp) are available. In most cases, the latter two kernel +modules need to be loaded manually or persisted. Please see +the Bare Metal or Virtualized (Linux) installation section on how to do this.

+

The following kernels are known to be compatible and tested. Additional kernel versions may work, but are untested.

+ + + + + + + + + + + + + + + + + + + + + + + + + +
OSLinux KernelPrerequisite
Red Hat Enterprise Linux4.18.0-xxx Kernel on x86_64modprobe nvme-tcp
Amazon Linux 2Kernel 5.10 AMI 2.0.20230822.0modprobe nvme-tcp
Amazon Linux 20232023.1.20230825.0 x86_64 HVM kernel-6.1modprobe nvme-tcp
+
+

Warning

+

Amazon Linux 2 and Amazon Linux 2023 have a bug with +NVMe over Fabrics Multipathing. That means that NVMe over Fabrics on any Amazon Linux operates in a degraded +state with the risk of connection outages. As an alternative, multipathing must be configured using the Linux Device +Manager (dm) via DM-MPIO. Use the following DM-MPIO configuration:

+
cat /etc/multipath.conf 
+defaults {
+    polling_interval 1
+    user_friendly_names yes
+    find_multipaths yes
+    enable_foreign nvme
+    checker_timeout 3
+    failback immediate
+    max_polling_interval 3
+    detect_checker yes
+}
+
+devices {
+    device {
+        vendor "NVMe"
+        product ".*"
+        path_grouping_policy group_by_prio
+        path_selector "service-time 0"
+        failback "immediate"
+        no_path_retry "queue"
+        hardware_handler "1 ana"
+    }
+}
+
+blacklist {
+}
+
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/troubleshooting/control-plane/index.html b/deployment/25.10.3/reference/troubleshooting/control-plane/index.html new file mode 100644 index 00000000..89ffbb11 --- /dev/null +++ b/deployment/25.10.3/reference/troubleshooting/control-plane/index.html @@ -0,0 +1,4830 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Control Plane - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Control Plane

+ +

FoundationDB Error

+

Symptom: FoundationDB error. All services that rely upon the FoundationDB key-value storage are offline or refuse to start.

+
    +
  1. Ensure that IPv6 is disabled: +
    Network Configuration
    sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
    +sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
    +
  2. +
  3. Ensure sufficient disk space on the root partition on all control plane nodes. Free disk space can be checked with df -h.
  4. +
If not enough free disk space is available, start by checking the Graylog, MongoDB, and Elasticsearch containers. If those consume most of the disk space, the oldest two to three indices can be deleted.
  6. +
  7. Increase the root partition size.
  8. +
  9. If you cannot increase the root partition size, remove any data or service not relevant to the simplyblock control plane and run a docker system prune.
  10. +
  11. Restart the Docker daemon: systemctl restart docker
  12. +
  13. Reboot the node
  14. +
+
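The disk space check from steps 2–3 can be scripted. The following sketch warns when the root partition is close to full; the 90% threshold is an assumption for illustration, not a simplyblock default:

```shell
# Sketch: warn when the root partition exceeds an assumed 90% usage threshold.
# df -P guarantees POSIX single-line-per-filesystem output; awk strips the '%'.
usage=$(df -P / | awk 'NR==2 {gsub("%","",$5); print $5}')
if [ "$usage" -ge 90 ]; then
    echo "root partition ${usage}% full - free up space before restarting services"
else
    echo "root partition usage OK (${usage}%)"
fi
```

A check like this can run periodically on each control plane node to catch the disk filling up before FoundationDB-dependent services fail.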

Graylog Service Is Unresponsive

+

Symptom: The Graylog service cannot be reached anymore or is unresponsive.

+
    +
  1. Ensure that there is enough available memory.
  2. +
  3. If short on available memory, stop services that are not relevant to the simplyblock control plane.
  4. +
  5. If that doesn't help, reboot the host.
  6. +
+
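The memory check in step 1 can be read directly from /proc/meminfo. The sketch below warns below a floor of 1 GiB available memory; the floor is an assumption for illustration, not a simplyblock requirement:

```shell
# Sketch: read MemAvailable (KiB) from /proc/meminfo and convert to MiB.
avail=$(awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo)
if [ "$avail" -lt 1024 ]; then
    echo "low memory: ${avail} MiB available - consider stopping non-essential services"
else
    echo "memory OK: ${avail} MiB available"
fi
```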

Graylog Storage is Full

+

Symptom: The Graylog service cannot start or is unresponsive, and the storage disk is full.

+
    +
  1. Identify the cause of the disk running full. Run the following commands to find the largest files on the Graylog disk. +
    Find the largest files
    df -h
    +du -sh /var/lib/docker
    +du -sh /var/lib/docker/containers
    +du -sh /var/lib/docker/volumes
    +
  2. +
  3. Delete the old Graylog indices via the Graylog UI.
      +
    • Go to System -> Indices
    • +
    • Select your index set
    • +
    • Adjust the Max Number of Indices to a lower number
    • +
    +
  4. +
  5. Reduce Docker disk usage by removing unused Docker volumes and images, as well as old containers. +
    Remove old Docker entities
    docker volume prune -f
    +docker image prune -f
    +docker rm $(sudo docker ps -aq --filter "status=exited")
    +
  6. +
  7. Cleanup OpenSearch, Graylog, and MongoDB volume paths and restart the services. +
    Cleaning up adjacent services
    # Scale services down
    +docker service update monitoring_graylog --replicas=0
    +docker service update monitoring_opensearch --replicas=0
    +docker service update monitoring_mongodb --replicas=0
    +
    +# Remove old data
    +rm -rf /var/lib/docker/volumes/monitoring_graylog_data
    +rm -rf /var/lib/docker/volumes/monitoring_os_data
    +rm -rf /var/lib/docker/volumes/monitoring_mongodb_data
    +
    +# Restart services
    +docker service update monitoring_mongodb --replicas=1
    +docker service update monitoring_opensearch --replicas=1
    +docker service update monitoring_graylog --replicas=1
    +
  8. +
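To drill down below the directories from step 1, the largest entries can be listed with du and sort. The sketch demonstrates the pattern on a throwaway directory; on a control plane node, point it at /var/lib/docker/volumes instead:

```shell
# Sketch: find the largest entries below a directory (du -sk + numeric sort).
# Demonstrated on a temporary directory with two dummy files so it is
# self-contained; substitute /var/lib/docker/volumes on a real host.
target=$(mktemp -d)
dd if=/dev/zero of="$target/big.log" bs=1024 count=512 2>/dev/null
dd if=/dev/zero of="$target/small.log" bs=1024 count=8 2>/dev/null
largest=$(du -sk "$target"/* | sort -rn | head -n 1)
echo "$largest"
rm -rf "$target"
```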
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/troubleshooting/index.html b/deployment/25.10.3/reference/troubleshooting/index.html new file mode 100644 index 00000000..e64bd562 --- /dev/null +++ b/deployment/25.10.3/reference/troubleshooting/index.html @@ -0,0 +1,4676 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Troubleshooting - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Troubleshooting

+ +

Simplyblock is designed to require minimal manual intervention. However, once in a while, there may be issues that +require some special treatment.

+

This section provides practical solutions for common issues you might encounter when deploying or operating simplyblock. +Whether you're dealing with deployment hiccups, performance anomalies, connectivity problems, or configuration errors, +you'll find step-by-step guidance to help you diagnose and resolve them quickly. Use this guide to keep your simplyblock +environment running smoothly and with confidence.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/troubleshooting/simplyblock-csi/index.html b/deployment/25.10.3/reference/troubleshooting/simplyblock-csi/index.html new file mode 100644 index 00000000..c2d80390 --- /dev/null +++ b/deployment/25.10.3/reference/troubleshooting/simplyblock-csi/index.html @@ -0,0 +1,4854 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Kubernetes CSI - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Kubernetes CSI

+ +

High-Level CSI Driver Architecture

+

Controller Plugin: Runs as a Deployment and manages volume provisioning and deletion.

+

Node Plugin: Runs as a DaemonSet and handles volume attachment, mounting, and unmounting.

+

Sidecars: Handle tasks like external provisioning (csi-provisioner), attaching (csi-attacher), and node registration +(csi-node-driver-registrar).

+

Finding CSI Driver Logs for a Specific PVC

+
    +
  1. Identify the Node Where the PVC is Mounted +
    Get the pod name using the persistent volume claim
    kubectl get pods -A -o \
    +jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}' | \
    +grep <PVC_NAME>
    +
    +
    Find the node the pod is bound to
    kubectl get pods -A -o \
    +jsonpath='{range .items[*]}{.spec.nodeName}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}' | \
    +grep <PVC_NAME>
    +
  2. +
  3. Find the CSI driver pod on that node +
    Find the CSI driver pod
    kubectl get pods -n <CSI_NAMESPACE> -o wide | grep <NODE_NAME>
    +
  4. +
  5. Get Logs from the node plugin +
    Get the CSI driver pod logs
    kubectl logs -n <CSI_NAMESPACE> <CSI_NODE_POD> -c <DRIVER_CONTAINER>
    +
  6. +
+ +
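The grep step above matches anywhere on the line, which can be ambiguous when PVC names share a prefix; an exact match on the claim-name column is safer. This sketch shows the filtering against canned output (the pod, node, and PVC names are made up); on a real cluster, replace the printf with the kubectl command from step 1:

```shell
# Sketch: exact-match the claim-name column of 'node<TAB>claimName' output.
pvc="data-postgres-0"   # hypothetical PVC name
node=$(printf 'worker-1\tdata-postgres-0\nworker-2\tdata-postgres-01\n' | \
    awk -F'\t' -v pvc="$pvc" '$2 == pvc {print $1}')
echo "$node"   # prints worker-1 (a plain grep would also match data-postgres-01)
```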

If the error is NVMe-related (e.g., volume attachment failure, device not found), follow these steps.

+
    +
  1. +

    Ensure that nvme-cli is installed

    +
    +
    +
    +
    sudo dnf install -y nvme-cli
    +
    +
    +
    +
    sudo apt install -y nvme-cli
    +
    +
    +
    +
    +
  2. +
  3. +

    Verify if the nvme-tcp kernel module is loaded +

    Check NVMe/TCP kernel module is loaded
    lsmod | grep nvme_tcp
    +

    +

    If not available, the driver can be loaded temporarily using the following command:

    +
    Load NVMe/TCP kernel module
    sudo modprobe nvme-tcp
    +
    +

    However, to ensure it is automatically loaded at system startup, it should be persisted as follows:

    +
    +
    +
    +
    echo "nvme-tcp" | sudo tee -a /etc/modules-load.d/nvme-tcp.conf
    +
    +
    +
    +
    echo "nvme-tcp" | sudo tee -a /etc/modules
    +
    +
    +
    +
    +
  4. +
  5. +

    Check NVMe Connection Status +

    Check NVMe-oF connection
    sudo nvme list-subsys
    +

    +

    If the expected NVMe subsystem is missing, reconnect manually:

    +
    Manually reconnect the NVMe-oF device
    sudo nvme connect -t tcp \
    +    -n <NVME_SUBSYS_NAME> \
    +    -a <TARGET_IP> \
    +    -s <TARGET_PORT> \
    +    -l <CTRL_LOSS_TIMEOUT> \
    +    -c <RECONNECT_DELAY> \
    +    -i <NR_IO_QUEUES>
    +
    +
  6. +
  7. +

    If the issue persists, gather kernel logs and provide them to the simplyblock support team: +

    Collect logs for support
    sudo dmesg | grep -i nvme
    +

    +
  8. +
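Whether the expected subsystem from step 3 is present can also be checked non-interactively. The sketch below greps sample nvme list-subsys output for an NQN; the NQN and addresses are made-up examples, and on a real host you would pipe sudo nvme list-subsys instead of the printf:

```shell
# Sketch: check sample 'nvme list-subsys' output for a given subsystem NQN.
nqn="nqn.2014-08.org.nvmexpress:uuid:example-volume"   # hypothetical NQN
sample='nvme-subsys0 - NQN=nqn.2014-08.org.nvmexpress:uuid:example-volume
 +- nvme0 tcp traddr=10.0.0.5,trsvcid=4420 live'
if printf '%s\n' "$sample" | grep -qF "$nqn"; then
    status="subsystem present"
else
    status="subsystem missing - reconnect with nvme connect"
fi
echo "$status"
```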
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/troubleshooting/storage-plane/index.html b/deployment/25.10.3/reference/troubleshooting/storage-plane/index.html new file mode 100644 index 00000000..fd350f22 --- /dev/null +++ b/deployment/25.10.3/reference/troubleshooting/storage-plane/index.html @@ -0,0 +1,4780 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Storage Plane - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Storage Plane

+ +

Fresh Cluster Cannot Be Activated

+

Symptom: After a fresh deployment, the cluster cannot be activated. The activation process hangs or fails, and the +storage nodes show n/0 disks available in the disks column (sbctl storage-node list).

+
    +
  1. Shutdown all storage nodes: sbctl storage-node shutdown --force
  2. +
  3. Force remove all storage nodes: sbctl storage-node remove --force
  4. +
  5. Delete all storage nodes: sbctl storage-node delete
  6. +
  7. Re-add all storage nodes. The disks should become active.
  8. +
  9. Try to activate the cluster.
  10. +
+

Storage Node Health Check Shows Health=False

+

Symptom: The storage node health check returns health=false (sbctl storage-node list).

+
    +
  1. First run sbctl storage-node check.
  2. +
  3. If the command keeps showing an unhealthy storage node, suspend, shutdown, and restart the storage node.
  4. +
+
+

Danger

+

Never shut down or restart a storage node while the cluster is in a degraded state. This can lead to interrupted +I/O operations. This is independent of the cluster's high-availability status.

+Check the cluster status with any of the following commands:

+
sbctl cluster list
+sbctl cluster get <cluster-id>
+sbctl cluster show <cluster-id>
+
+
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/reference/upgrade-matrix/index.html b/deployment/25.10.3/reference/upgrade-matrix/index.html new file mode 100644 index 00000000..618966db --- /dev/null +++ b/deployment/25.10.3/reference/upgrade-matrix/index.html @@ -0,0 +1,4702 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Upgrade Matrix - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Upgrade Matrix

+ +

Simplyblock supports in-place upgrades of existing clusters. However, not all versions can be upgraded directly to +the latest version. Hence, some upgrades may require multiple steps.

+

Possible upgrade paths are described in the following table. If the currently installed version is not listed for the +requested version, an upgrade to an intermediate supported version must be executed first.

+ + + + + + + + + + + + + + + + + + + + + +
Requested VersionInstalled Version
25.5.x25.5.x, 25.3-PRE
25.7.725.7.5
25.10.125.7.5, 25.7.7
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/release-notes/25-10-2/index.html b/deployment/25.10.3/release-notes/25-10-2/index.html new file mode 100644 index 00000000..d1fc8cc9 --- /dev/null +++ b/deployment/25.10.3/release-notes/25-10-2/index.html @@ -0,0 +1,4817 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 25.10.2 - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

25.10.2

+ +

Simplyblock is happy to release the general availability release of Simplyblock 25.10.2.

+

New Features

+
    +
  • Control Plane: The control plane can alternatively be deployed into existing Kubernetes clusters + and co-located on worker nodes alongside storage nodes.
  • +
  • Kubernetes Support Matrix: Added OpenShift starting from version 4.20.0
  • +
  • OpenStack driver: The OpenStack driver is now available. It supports most optional features + and is tested from OpenStack 25.1 (Epoxy) onwards. Older versions of OpenStack may be supported on request.
  • +
  • The required memory footprint on storage nodes has been reduced from 0.2% of storage capacity to 0.05% of storage capacity.
  • +
  • QoS: A pool-level QoS feature has been added.
  • +
  • QoS Service Classes: A new feature to assign a service class to a volume. Service classes + provide full performance isolation within the cluster.
  • +
  • Support for flexible erasure coding schemas within a cluster.
  • +
  • Support for RDMA fabric and mixed fabrics (RDMA, TCP)
  • +
  • Improved write performance during the first write to a volume and during node outages
  • +
  • Support for namespace volumes. A single NVMe subsystem can now expose up to 32 namespace volumes
  • +
+

Fixes

+
    +
  • Control Plane: Fixed a problem, which could lead to stuck deletes.
  • +
+

Upgrade Considerations

+

It is possible to upgrade from 25.7.6 and 25.7.7. It is possible to add RDMA support for the fabric +during an online upgrade.

+

Known Issues

+

Use of different erasure coding schemas per cluster is available, but can cause I/O interruption issues in some tests. +This feature is experimental and not GA.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/release-notes/25-10-3/index.html b/deployment/25.10.3/release-notes/25-10-3/index.html new file mode 100644 index 00000000..327c3c88 --- /dev/null +++ b/deployment/25.10.3/release-notes/25-10-3/index.html @@ -0,0 +1,4806 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 25.10.3 - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

25.10.3

+ +

Simplyblock is happy to release the general availability release of Simplyblock 25.10.3.

+

New Features

+

No new features.

+

Fixes

+
    +
  • Control Plane: Fixed an issue with FoundationDB where clusters were able to exceed the maximum size of a single entry.
  • +
  • Control Plane: Fixed an issue where a default QoS class was assigned to logical volumes.
  • +
  • Control Plane: Fixed an issue where a connection wasn't re-established when a node went down.
  • +
  • Storage Plane: Optimized the storage usage calculation of the storage node.
  • +
+

Upgrade Considerations

+

It is possible to upgrade from 25.7.6 and 25.7.7. It is possible to add RDMA support for the fabric +during the online upgrade.

+

Known Issues

+

Use of different erasure coding schemas per cluster is available but can cause I/O interruption issues in some tests. +This feature is experimental and not GA.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/release-notes/25-3-pre/index.html b/deployment/25.10.3/release-notes/25-3-pre/index.html new file mode 100644 index 00000000..527c785e --- /dev/null +++ b/deployment/25.10.3/release-notes/25-3-pre/index.html @@ -0,0 +1,4965 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 25.3-PRE - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

25.3-PRE

+ +

Simplyblock is happy to release the pre-release version of the upcoming Simplyblock 25.3.

+
+

Warning

+

This is a pre-release and may contain issues. It is not recommended for production use.

+
+

New Features

+

Simplyblock strives to provide a strong product. Following is a list of the enhancements and features that made it into +this release.

+
    +
  • High availability has been significantly hardened for production. Main improvements concern the support for safe and interruption-free fail-over and fail-back in different types of outage scenarios. Those include: partial network outage, full network outage, host failure, container failure, reboots, and graceful and ungraceful shutdowns of nodes. Tested for single and dual node outages.
  • +
  • Multiple journal compression bugs have been identified and fixed.
  • +
  • Multiple journal fail-over bugs have been identified and fixed.
  • +
  • Logical volume creation, deletion, snapshotting, and resizing can now be performed via a secondary storage node (when the primary storage node is offline).
  • +
  • The system has been hardened against high load scenarios, determined by the number of parallel NVMe-oF volumes per node, the amount of storage, and parallel I/O. Tested up to 400 concurrent and fully active logical volumes per node and up to 20 concurrent I/O processes per logical volume.
  • +
  • Erasure coding schemes 2+1, 2+2, 4+2, 4+4 have been made power-fail-safe with high availability enabled.
  • +
  • System has been extensively tested outside of AWS with KVM-based virtualization and on bare-metal deployments.
  • +
  • Significant rework of the command line tool sbcli to simplify commands and parameters and make it more consistent. For more information see Important Changes.
  • +
  • Support for Linux Core Isolation to improve performance and system stability.
  • +
  • Added support for Proxmox via the Simplyblock Proxmox Integration.
  • +
+

Important Changes

+

Simplyblock made significant changes to the command line tool sbcli to simplify working with it. Many parameters and +commands were meant for internal testing and were confusing to users. Hence, simplyblock decided to make those private.

+

Parameters and commands that were made private should not affect users. If the change to private for a parameter or +command affects your deployment, please reach out.

+

Most changes are backwards-compatible; however, some are not. Following is a list of all changes.

+
    +
  • Command: storage-node
      +
    • Renamed command sn to storage-node (sn still works as an alias)
    • +
    • Changed subcommand device-testing-mode to private
    • +
    • Changed subcommand info-spdk to private
    • +
    • Changed subcommand remove-jm-device to private
    • +
    • Changed subcommand send-cluster-map to private
    • +
    • Changed subcommand get-cluster-map to private
    • +
    • Changed subcommand dump-lvstore to private
    • +
    • Changed subcommand set to private
    • +
    • Subcommand: deploy
        +
      • Added parameter --cpu-mask
      • +
      • Added parameter --isolate-cores
      • +
      +
    • +
    • Subcommand: add-node
        +
      • Renamed parameter --partitions to --journal-partition
      • +
      • Renamed parameter --storage-nics to --data-nics
      • +
      • Renamed parameter --number-of-vcpus to --vcpu-count
      • +
      • Added parameter --max-snap (private)
      • +
      • Changed parameter --jm-percent to private
      • +
      • Changed parameter --number-of-devices to private
      • +
      • Changed parameter --size-of-device to private
      • +
      • Changed parameter --cpu-mask to private
      • +
      • Changed parameter --spdk-image to private
      • +
      • Changed parameter --spdk-debug to private
      • +
      • Changed parameter --iobuf_small_bufsize to private
      • +
      • Changed parameter --iobuf_large_bufsize to private
      • +
      • Changed parameter --enable-test-device to private
      • +
      • Changed parameter --disable-ha-jm to private
      • +
      • Changed parameter --id-device-by-nqn to private
      • +
      • Changed parameter --max-snap to private
      • +
      +
    • +
    • Subcommand: restart
        +
      • Renamed parameter --node-ip to --node-addr (--node-ip still works but is deprecated and should be replaced)
      • +
      • Changed parameter --max-snap to private
      • +
      • Changed parameter --max-size to private
      • +
      • Changed parameter --spdk-image to private
      • +
      • Changed parameter --spdk-debug to private
      • +
      • Changed parameter --iobuf_small_bufsize to private
      • +
      • Changed parameter --iobuf_large_bufsize to private
      • +
      +
    • +
    • Subcommand: list-devices
        +
      • Removed parameter --sort / -s
      • +
      +
    • +
    +
  • +
  • Command: cluster
  • +
  • Changed subcommand graceful-shutdown to private
  • +
  • Changed subcommand graceful-startup to private
      +
    • Subcommand: deploy
        +
      • Renamed parameter --separate-journal-device to --journal-partition
      • +
      • Renamed parameter --storage-nics to --data-nics
      • +
      • Renamed parameter --number-of-vcpus to --vcpu-count
      • +
      • Changed parameter --ha-jm-count to private
      • +
      • Changed parameter --enable-qos to private
      • +
      • Changed parameter --blk-size to private
      • +
      • Changed parameter --page_size to private
      • +
      • Changed parameter --CLI_PASS to private
      • +
      • Changed parameter --grafana-endpoint to private
      • +
      • Changed parameter --distr-bs to private
      • +
      • Changed parameter --max-queue-size to private
      • +
      • Changed parameter --inflight-io-threshold to private
      • +
      • Changed parameter --jm-percent to private
      • +
      • Changed parameter --max-snap to private
      • +
      • Changed parameter --number-of-distribs to private
      • +
      • Changed parameter --size-of-device to private
      • +
      • Changed parameter --cpu-mask to private
      • +
      • Changed parameter --spdk-image to private
      • +
      • Changed parameter --spdk-debug to private
      • +
      • Changed parameter --iobuf_small_bufsize to private
      • +
      • Changed parameter --iobuf_large_bufsize to private
      • +
      • Changed parameter --enable-test-device to private
      • +
      • Changed parameter --disable-ha-jm to private
      • +
      • Changed parameter --lvol-name to private
      • +
      • Changed parameter --lvol-size to private
      • +
      • Changed parameter --pool-name to private
      • +
      • Changed parameter --pool-max to private
      • +
      • Changed parameter --snapshot / -s to private
      • +
      • Changed parameter --max-volume-size to private
      • +
      • Changed parameter --encrypt to private
      • +
      • Changed parameter --crypto-key1 to private
      • +
      • Changed parameter --crypto-key2 to private
      • +
      • Changed parameter --max-rw-iops to private
      • +
      • Changed parameter --max-rw-mbytes to private
      • +
      • Changed parameter --max-r-mbytes to private
      • +
      • Changed parameter --max-w-mbytes to private
      • +
      • Changed parameter --distr-vuid to private
      • +
      • Changed parameter --lvol-ha-type to private
      • +
      • Changed parameter --lvol-priority-class to private
      • +
      • Changed parameter --fstype to private
      • +
      +
    • +
    • Subcommand: create
        +
      • Changed parameter --page_size to private
      • +
      • Changed parameter --CLI_PASS to private
      • +
      • Changed parameter --distr-bs to private
      • +
      • Changed parameter --distr-chunk-bs to private
      • +
      • Changed parameter --ha-type to private
      • +
      • Changed parameter --max-queue-size to private
      • +
      • Changed parameter --inflight-io-threshold to private
      • +
      • Changed parameter --enable-qos to private
      • +
      +
    • +
    • Subcommand: add
        +
      • Changed parameter --page_size to private
      • +
      • Changed parameter --distr-bs to private
      • +
      • Changed parameter --distr-chunk-bs to private
      • +
      • Changed parameter --max-queue-size to private
      • +
      • Changed parameter --inflight-io-threshold to private
      • +
      • Changed parameter --enable-qos to private
      • +
      +
    • +
    +
  • +
  • Command: storage-pool
    • Removed subcommand get-secret
    • Removed subcommand update-secret
      +
    • Subcommand: add
        +
      • Changed parameter --has-secret to private
  • Command: caching-node
      • +
      +
    • +
    • Subcommand: add-node
        +
      • Renamed parameter --number-of-vcpus to --vcpu-count
      • +
      • Changed parameter --cpu-mask to private
      • +
      • Changed parameter --memory to private
      • +
      • Changed parameter --spdk-image to private
      • +
      +
    • +
    • Command: volume
        +
      • Changed subcommand list-mem to private
      • +
      • Changed subcommand move to private
      • +
      +
    • +
    • Subcommand: add
        +
      • Renamed parameter --pvc_name to --pvc-name (--pvc_name still works but is deprecated and should be replaced)
      • +
      • Changed parameter --distr-vuid to private
      • +
      • Changed parameter --uid to private
      • +
      +
    • +
    +
  • +
+

Known Issues

+

Simplyblock always seeks to provide a stable and strong release. However, smaller known issues happen. Following is +a list of known issues of the current simplyblock release.

+
+

Info

+

This is a pre-release and many of those known issues are expected to be resolved with the final release.

+
+
    +
  • The control plane reaches a limit at around 2,200 logical volumes.
  • +
  • If a storage node goes offline while a logical volume is being deleted, the storage cluster may keep some garbage.
  • +
  • In rare cases, resizing a logical volume under high I/O load may cause a storage node restart.
  • +
  • If a storage cluster reaches its capacity limit and runs full, file systems on logical volumes may return I/O errors.
  • +
  • The fail-back time after a fail-over may increase to >10s (with freezing I/O) with a larger number of logical volumes per storage node (>100 logical volumes).
  • +
  • The fail-over time may increase to >5s (with freezing I/O) on large logical volumes (>5 TB).
  • +
  • During a node outage, I/O performance may drop significantly with certain I/O patterns due to a performance issue in the journal compression.
  • +
  • Journal compression may cause significant I/O performance drops (10-20s) in periodic intervals under certain I/O load patterns, especially when the logical volume capacity reaches its limits for the first time.
  • +
  • A peak read IOPS performance regression has been observed.
  • +
  • In rare cases, a primary-secondary storage node combination may get into a flip-flop situation with multiple fail-over/fail-back iterations due to network or configuration issues of particular logical volumes or clients.
  • +
  • A secondary node may get stuck when trying to restart under high load (>100 logical volumes).
  • +
  • Node affinity rules are not considered after a storage node migration to a new host.
  • +
  • Return code of sbcli commands is always 0.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/release-notes/25-6-ga/index.html b/deployment/25.10.3/release-notes/25-6-ga/index.html new file mode 100644 index 00000000..e40b294c --- /dev/null +++ b/deployment/25.10.3/release-notes/25-6-ga/index.html @@ -0,0 +1,4875 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 25.6 - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

25.6

+ +

Simplyblock is happy to release the general availability release of Simplyblock 25.6.

+

New Features

+

Simplyblock strives to provide a strong product. Following is a list of the enhancements and features that made it into +this release.

+
    +
  • General: Renamed sbcli to sbctl. The old sbcli command is deprecated but still available as a fallback for scripts.
  • +
  • Storage Plane: Increased maximum of available logical volumes per storage node to 1,000.
  • +
  • Storage Plane: Added the option to start multiple storage nodes in parallel on the same host. This is useful for machines with multiple NUMA nodes and many CPU cores to increase scalability.
  • +
  • Storage Plane: Added NVMe multipathing independent high-availability between storage nodes to harden resilience against network issues and improve failover.
  • +
  • Storage Plane: Removed separate secondary storage nodes for failover. From now on, all storage nodes act as primary and secondary storage nodes.
  • +
  • Storage Plane: Added I/O redirection in case of failover to secondary to improve cluster stability and failover times.
  • +
  • Storage Plane: Added support for CPU Core Isolation to improve performance consistency. Core masks and core isolation are auto-applied on disaggregated setups.
  • +
  • Storage Plane: Added sbctl storage-node configure command to automate the configuration of new storage nodes. See Configure a Storage Node for more information.
  • +
  • Storage Plane: Added optimized algorithms for the 4+1 and 4+2 erasure coding configurations.
  • +
  • Storage Plane: Reimplemented the Quality of Service (QoS) subsystem with significantly less overhead than the old one.
  • +
  • Storage Plane: Added support for namespaced logical volumes (experimental).
  • +
  • Storage Plane: Reimplemented the initialization of a new page to significantly improve first-write-to-page performance.
  • +
  • Storage Plane: Added support for optional labels when using strict anti-affinity.
  • +
  • Storage Plane: Added support for node affinity in case of a device failure to try to recover onto another device on the host.
  • +
  • Proxmox: Added support for native Proxmox node migration.
  • +
  • Talos: Added support to deploy on Talos-based OS-images.
  • +
  • AWS: Added Bottlerocket support.
  • +
  • AWS: Added multipathing support for Amazon Linux 2, Amazon Linux 2023, Bottlerocket.
  • +
  • GCP: Added support for Google Compute Engine.
  • +
+

Fixes

+
    +
  • Storage Plane: Fixed a critical issue on cluster expansion during rebalancing.
  • +
  • Storage Plane: Optimized internal journal compression of meta-data to use fewer CPU resources.
  • +
  • Storage Plane: Significantly improved the fail-back time in failover situations.
  • +
  • Storage Plane: Fixed a CRC checksum error that occurred in rare situations after a node outage.
  • +
  • Storage Plane: Fixed a conflict resolution issue which could lead to data corruption in failover scenarios.
  • +
  • Storage Plane: Fixed a segmentation fault after resizing multiple logical volumes in a fast sequence.
  • +
  • Storage Plane: Fixed data placement issues which could lead to unexpected I/O interruptions after a sequence of outages.
  • +
  • Storage Plane: Reduced huge pages consumption by about 1.5x. Huge pages are automatically recalculated on node restart.
  • +
  • Storage Plane: Fixed an RPC issue on clusters with eight or more storage nodes.
  • +
  • Storage Plane: Fixed a race condition on storage node restarts or cluster reactivations.
  • +
  • Storage Plane: Hardened the health check to fix an issue which affected multiple services.
  • +
  • Storage Plane: Improved shared buffers calculation on large storage nodes.
  • +
  • Storage Plane: Fixed an issue in the metadata journal which could lead to a temporary conflict and I/O interruption on the fail-back of large logical volumes.
  • +
  • Storage Plane: Fixed an issue where a storage node would stay unhealthy after a cluster upgrade.
  • +
  • Storage Plane: Fixed an issue where the internal journal devices would fail to automatically reconnect after a cluster outage.
  • +
  • Storage Plane: Fixed an issue where a restart of one storage node could lead to a crash on another storage node.
  • +
  • Storage Plane: Fixed reattaching Amazon EBS volumes when migrating a storage node to a new host.
  • +
  • Control Plane: Improved error handling on the internal controller code base.
  • +
  • Control Plane: Fixed a range of false-positive detections that led to unexpected storage node restarts or, in rare cases, cluster suspension.
  • +
  • Control Plane: Cleaned up the API from unnecessary calls and fixed smaller response content issues.
  • +
  • Control Plane: Improved handling of Graylog in case of primary management node failover.
  • +
  • Control Plane: Fixed multiple issues regarding spill-over and outages in case of a management node disk running full.
  • +
  • Control Plane: Fixed an issue where the generated logical volume connection string is missing the logical volume id in the NQN.
  • +
  • Control Plane: Fixed multiple build issues with ARM64 CPUs.
  • +
  • Control Plane: Fixed multiple issues when deleting logical volumes and snapshots which could lead to dangling garbage and inconsistencies.
  • +
  • Control Plane: Fixed a primary storage node restart bug.
  • +
  • Control Plane: Improved NVMe device detection by switching from serial number to PCIe address.
  • +
  • Control Plane: Fixed an issue related to logical volume operations where FoundationDB's memory consumption would continue to increase over time.
  • +
  • Control Plane: Fixed an issue where migration tasks would stall with status "mig error: 8, retrying".
  • +
  • Control Plane: Improved observability by polling thread information from SPDK and storing it in Prometheus.
  • +
  • Control Plane: Improved the performance of parallel logical volume creations.
  • +
  • Control Plane: Fixed data unit of read_speed in the Grafana cluster dashboard.
  • +
  • Control Plane: Fixed an issue where the PromAgent image wouldn't be upgraded on a cluster upgrade.
  • +
  • Kubernetes: Fixed an issue where the CSI driver would hang when trying to delete a snapshot in an error state.
  • +
  • Kubernetes: Fixed hanging NVMe/TCP connections in the CSI driver on storage node restarts or failovers.
  • +
  • Kubernetes: Fixed an issue with failing volume snapshots.
  • +
  • Kubernetes: Improved the version pinning of required services.
  • +
  • Kubernetes: Fixed an issue where NVMe/TCP connections in multipathing setups would be disconnected in the wrong order.
  • +
  • Proxmox: Improved automatic reconnecting of volumes after storage node restarts and failovers.
  • +
  • +
+

Important Changes

+
    +
  • Architecture: Separate secondary nodes have been removed as a concept. Instead, in a high-availability cluster, every deployed primary storage node also acts as a secondary storage node to another primary.
  • +
  • Storage Plane: NVMe devices are now identified by their serial number to enable PCIe renumbering in case of changes to the system configuration.
  • +
  • Firewall rules adjustment: An existing port range TCP/8080-8890 was changed to TCP/8080-8180. The firewall configuration and AWS Security Groups need to be adjusted accordingly.
  • +
  • Firewall rules adjustment: An existing port range TCP/9090-9900 was changed to TCP/9100-9200. The firewall configuration and AWS Security Groups need to be adjusted accordingly.
  • +
  • Firewall rules adjustment: A new port range TCP/9030-9059 was added. The firewall configuration and AWS Security Groups need to be adjusted accordingly.
  • +
  • Firewall rules adjustment: A new port range TCP/9060-9099 was added. The firewall configuration and AWS Security Groups need to be adjusted accordingly.
  • +
  • Firewall rules adjustment: An existing port TCP/4420 has been removed. The firewall configuration and AWS Security Groups need to be adjusted accordingly.
  • +
  • Image registry: The image registry moved from Amazon ECR to Docker Hub.
  • +
+
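The port-range adjustments above have to be mirrored in the host firewall configuration. As a hedged sketch for nodes using firewalld (the zone name public is an assumption; adapt to your tooling, or to AWS Security Group rules in EC2 deployments):

```shell
# Sketch: apply the adjusted simplyblock port ranges with firewalld.
# Zone name is an assumption; adjust for your environment.

# Remove ports and ranges that are no longer valid
firewall-cmd --permanent --zone=public --remove-port=8080-8890/tcp
firewall-cmd --permanent --zone=public --remove-port=9090-9900/tcp
firewall-cmd --permanent --zone=public --remove-port=4420/tcp

# Add the changed and newly added ranges
firewall-cmd --permanent --zone=public --add-port=8080-8180/tcp
firewall-cmd --permanent --zone=public --add-port=9100-9200/tcp
firewall-cmd --permanent --zone=public --add-port=9030-9059/tcp
firewall-cmd --permanent --zone=public --add-port=9060-9099/tcp

# Activate the permanent configuration
firewall-cmd --reload
```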

Known Issues

+

Simplyblock always seeks to provide a stable and strong release. However, smaller known issues happen. Following is a list of known issues for the current simplyblock release.

+
    +
  • GCP: On GCP, multiple Local SSDs are connected as NVMe Namespace devices. There is a bug that prevents more than one Local SSD from being added to a storage node. For the time being, use one Local SSD per storage node. The storage node must be sized accordingly.
  • +
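Whether an instance is affected by the Local SSD limitation can be checked with nvme-cli; this is a diagnostic sketch and assumes the nvme-cli package is installed on the storage node:

```shell
# Diagnostic sketch (assumes nvme-cli is installed).
# On affected GCP instance types, multiple Local SSDs appear as
# namespaces (nvme0n1, nvme0n2, ...) behind a single controller (nvme0).
nvme list

# List all namespace IDs behind the first controller; more than one
# entry indicates the instance exposes multiple Local SSDs this way.
nvme list-ns /dev/nvme0
```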
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/release-notes/25-7-1/index.html b/deployment/25.10.3/release-notes/25-7-1/index.html new file mode 100644 index 00000000..5211411f --- /dev/null +++ b/deployment/25.10.3/release-notes/25-7-1/index.html @@ -0,0 +1,4827 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 25.7.1 - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

25.7.1

+ +

Simplyblock is happy to release the general availability release of Simplyblock 25.7.1.

+

New Features

+
    +
  • General: Clarified deployment documentation.
  • +
  • General: Added documentation for cluster expansion.
  • +
  • General: Added documentation for storage node migration.
  • +
  • Kubernetes: Improved the Helm Chart for simplified Kubernetes deployments.
  • +
  • Kubernetes: Automated applying Core Isolation on Kubernetes worker nodes.
  • +
  • Proxmox: Added support for quality of service settings.
  • +
+

Fixes

+
    +
  • Control Plane: Fixed an issue where the storage utilization of a logical volume wasn't shown when the primary storage node was offline.
  • +
  • Control Plane: Fixed an issue where the snapshot health status was shown as unhealthy while it was healthy.
  • +
  • Control Plane: Fixed an issue where a client would fail to reconnect after a network outage due to a missing property in the configuration.
  • +
  • Control Plane: Fixed an issue where the cluster would not be shown as degraded while a data migration operation is ongoing.
  • +
  • Control Plane: Fixed an issue where it was possible to restart a storage node even if it was not in an offline state.
  • +
  • Storage Plane: Fixed an issue which caused a cluster suspension (hence I/O interruption) in case of a partial or full network outage.
  • +
  • Storage Plane: Fixed an issue where a logical volume wasn't correctly deleted if the operation was issued as asynchronous.
  • +
  • Storage Plane: Fixed a segfault on secondary nodes.
  • +
  • Storage Plane: Fixed an error on the journal for large numbers of records.
  • +
  • Storage Plane: Fixed an I/O leakage between primary and secondary storage nodes for certain I/O patterns.
  • +
  • Storage Plane: Fixed an issue where rebalancing stopped early after cluster expansion, causing the cluster to become imbalanced.
  • +
  • Storage Plane: Fixed an issue where a snapshot wasn't correctly re-registered after a failback.
  • +
  • Storage Plane: Fixed a Distrib error on network outages.
  • +
  • Storage Plane: Fixed an issue where a storage node would get stuck in down state after a restart.
  • +
  • Storage Plane: Fixed an issue where a checksum error could happen after failing back from a partial outage.
  • +
  • Proxmox: Fixed an issue when adding additional storage pools.
  • +
+

Important Changes

+

No changes in this release.

+

Known Issues

+

Simplyblock always seeks to provide a stable and strong release. However, smaller known issues happen. Following is a list of known issues for the current simplyblock release.

+
    +
  • GCP: On GCP, multiple Local SSDs are connected as NVMe Namespace devices. Simplyblock recommends using C4A-based ARM servers.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/release-notes/25-7-2/index.html b/deployment/25.10.3/release-notes/25-7-2/index.html new file mode 100644 index 00000000..657871ba --- /dev/null +++ b/deployment/25.10.3/release-notes/25-7-2/index.html @@ -0,0 +1,4809 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 25.7.2 - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

25.7.2

+ +

Simplyblock is happy to release the general availability release of Simplyblock 25.7.2.

+

New Features

+

No new features in this release.

+

Fixes

+
    +
  • Control Plane: Fixed an issue where the reference counter for snapshots wasn't correctly decremented on deletion of a child.
  • +
  • Control Plane: Improved checks for inflight I/O operations in a specific Distrib group.
  • +
  • Control Plane: Improved handling of the job task list.
  • +
  • Proxmox: Fixed an issue where a volume rename wouldn't work correctly.
  • +
  • Proxmox: Fixed logical volume lookup for deployments with multiple Proxmox client clusters.
  • +
+

Important Changes

+

No changes in this release.

+

Known Issues

+

Simplyblock always seeks to provide a stable and strong release. However, smaller known issues happen. Following is a list of known issues for the current simplyblock release.

+
    +
  • GCP: On GCP, multiple Local SSDs are connected as NVMe Namespace devices. Simplyblock recommends using C4A-based ARM servers.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/release-notes/25-7-3/index.html b/deployment/25.10.3/release-notes/25-7-3/index.html new file mode 100644 index 00000000..b2ff9568 --- /dev/null +++ b/deployment/25.10.3/release-notes/25-7-3/index.html @@ -0,0 +1,4813 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 25.7.3 - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

25.7.3

+ +

Simplyblock is happy to release the general availability release of Simplyblock 25.7.3.

+

New Features

+
    +
  • Proxmox: Added volume size checking.
  • +
+

Fixes

+
    +
  • Control Plane: Hardened deployment of a storage node if the SPDK container already exists.
  • +
  • Storage Plane: Fixed access to /etc which is unavailable on Talos. Thanks brunnels ⧉!
  • +
  • Storage Plane: Added Talos detection to skip nsenter when running on Talos.
  • +
  • Proxmox: Fixed false-negative reporting of the volume usage.
  • +
  • Proxmox: Hardened parameter validation.
  • +
  • Proxmox: Hardened the deletion of already deleted volumes.
  • +
  • Proxmox: Hardened the handling of storage pool references.
  • +
+

Important Changes

+

No changes in this release.

+

Known Issues

+

Simplyblock always seeks to provide a stable and strong release. However, smaller known issues happen. Following is a list of known issues for the current simplyblock release.

+
    +
  • GCP: On GCP, multiple Local SSDs are connected as NVMe Namespace devices. Simplyblock recommends using C4A-based ARM servers.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/release-notes/25-7-4/index.html b/deployment/25.10.3/release-notes/25-7-4/index.html new file mode 100644 index 00000000..15e9e0b8 --- /dev/null +++ b/deployment/25.10.3/release-notes/25-7-4/index.html @@ -0,0 +1,4807 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 25.7.4 - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

25.7.4

+ +

Simplyblock is happy to release the general availability release of Simplyblock 25.7.4.

+

New Features

+

No new features in this release.

+

Fixes

+
    +
  • Control Plane: Added a check on cluster upgrades to delete dangling snapshots.
  • +
  • Control Plane: Added a check to the health check service to automatically fix an issue in the Distrib cluster map.
  • +
  • Control Plane: Fixed an issue where an already deleted logical volume would prevent snapshots from being deleted.
  • +
+

Important Changes

+

No changes in this release.

+

Known Issues

+

Simplyblock always seeks to provide a stable and strong release. However, smaller known issues happen. Following is a list of known issues for the current simplyblock release.

+
    +
  • GCP: On GCP, multiple Local SSDs are connected as NVMe Namespace devices. Simplyblock recommends using C4A-based ARM servers.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/release-notes/25-7-5/index.html b/deployment/25.10.3/release-notes/25-7-5/index.html new file mode 100644 index 00000000..c810abeb --- /dev/null +++ b/deployment/25.10.3/release-notes/25-7-5/index.html @@ -0,0 +1,4808 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 25.7.5 - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

25.7.5

+ +

Simplyblock is happy to release the general availability release of Simplyblock 25.7.5.

+

New Features

+
    +
  • Storage Plane: Added support for vfio device driver, with a fallback to the legacy uio driver.
  • +
+
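The vfio driver selection above happens on the host before devices are claimed in user-space. As a hedged sketch (module names as in mainline Linux; not simplyblock's internal implementation), checking for vfio-pci with a fallback to the legacy uio driver could look like this:

```shell
# Sketch: prefer the vfio-pci driver, falling back to uio_pci_generic
# when vfio is unusable (for example, when no IOMMU is enabled).
if modprobe vfio-pci 2>/dev/null; then
    echo "vfio-pci loaded"
else
    echo "vfio-pci unavailable, falling back to legacy uio"
    modprobe uio_pci_generic
fi

# vfio-pci requires an active IOMMU to bind devices; verify that
# IOMMU groups exist before relying on it:
ls /sys/kernel/iommu_groups/
```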

Fixes

+
    +
  • Control Plane: Improved the reliability of migrations when a storage node restarts or recovers from a network outage.
  • +
  • Storage Plane: Fixed an issue where the vfio driver wouldn't be available on some systems.
  • +
+

Important Changes

+

No changes in this release.

+

Known Issues

+

Simplyblock always seeks to provide a stable and strong release. However, smaller known issues happen. Following is a list of known issues for the current simplyblock release.

+
    +
  • GCP: On GCP, multiple Local SSDs are connected as NVMe namespaced devices on x86-64 virtual machine types. Simplyblock recommends using C4A-based ARM servers, which provide an individual NVMe controller per NVMe device.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/release-notes/25-7-6/index.html b/deployment/25.10.3/release-notes/25-7-6/index.html new file mode 100644 index 00000000..b4c677b9 --- /dev/null +++ b/deployment/25.10.3/release-notes/25-7-6/index.html @@ -0,0 +1,4813 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 25.7.6 - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

25.7.6

+ +

Simplyblock is happy to release the general availability release of Simplyblock 25.7.6.

+

New Features

+
    +
  • Storage Plane: Added the logical volume name to the logical volume metrics endpoint.
  • +
+

Fixes

+
    +
  • Control Plane: Fixed an issue where a new logging configuration wasn't applied to existing GELF containers when adding a new storage node to a cluster.
  • +
  • Storage Plane: Fixed an issue where the /etc volume mount would still be mounted on Talos.
  • +
  • Storage Plane: Fixed an issue where the nsenter command would be issued on Talos.
  • +
  • Storage Plane: Fixed an issue where physical storage node labels would be configured in a single-node setup.
  • +
  • Control Plane: Fixed an issue where the snapshot monitoring service would be restarted if missing in the service listing.
  • +
  • Control Plane: Fixed an issue where bdevs might have been left unconnected after a port allow operation.
  • +
  • Control Plane: Improved device migration task handling.
  • +
+

Important Changes

+

No changes in this release.

+

Known Issues

+

Simplyblock always seeks to provide a stable and strong release. However, smaller known issues happen. Following is a list of known issues for the current simplyblock release.

+
    +
  • GCP: On GCP, multiple Local SSDs are connected as NVMe namespaced devices on x86-64 virtual machine types. Simplyblock recommends using C4A-based ARM servers, which provide an individual NVMe controller per NVMe device.
  • +
+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/release-notes/index.html b/deployment/25.10.3/release-notes/index.html new file mode 100644 index 00000000..1f566ff2 --- /dev/null +++ b/deployment/25.10.3/release-notes/index.html @@ -0,0 +1,4672 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Release Notes - Simplyblock Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + +
+ + + + + + + +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Release Notes

+ +

Simplyblock regularly provides new releases with new features, performance enhancements, bugfixes, and more.

+

This section provides detailed information about each Simplyblock release, including new features, enhancements, bug +fixes, and known issues. Stay informed about the latest developments to ensure optimal performance and take full +advantage of simplyblock's capabilities.

+ + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/deployment/25.10.3/search/search_index.json b/deployment/25.10.3/search/search_index.json new file mode 100644 index 00000000..38e27b3c --- /dev/null +++ b/deployment/25.10.3/search/search_index.json @@ -0,0 +1 @@ +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#welcome-to-the-simplyblock-documentation","title":"Welcome to the Simplyblock Documentation","text":"

Welcome to the Simplyblock Documentation, your comprehensive resource for understanding, deploying, and managing simplyblock's cloud-native, high-performance storage platform. This documentation provides detailed information on architecture, installation, configuration, and best practices, ensuring you have the necessary guidance to maximize the efficiency and reliability of your simplyblock deployment.

"},{"location":"#getting-started","title":"Getting Started","text":"
  • Learn the basics

    General information about simplyblock, the documentation, and important terms. Read here first.

    Important Notes

  • Plan the deployment

    Before starting to deploy simplyblock, take a moment to make yourself familiar with the required node sizing and other considerations for a performant and stable cluster operation.

    Deployment Planning

  • Deploy Simplyblock

    Deploy simplyblock on Kubernetes, bare metal, or virtualized Linux machines. Choose between hyper-converged, disaggregated, or hybrid deployment models.

    Simplyblock Deployment

  • Operate Simplyblock

    After the installation of a simplyblock cluster, learn how to operate and maintain it.

    Simplyblock Usage Simplyblock Operations

"},{"location":"#keep-updated","title":"Keep Updated","text":"

Sign up for our newsletter and keep updated on what's happening at simplyblock.

"},{"location":"architecture/","title":"Architecture","text":"

Simplyblock is a cloud-native, software-defined storage platform designed for high performance, scalability, and resilience. It provides NVMe over TCP (NVMe/TCP) and NVMe over RDMA (RoCEv2) block storage, enabling efficient data access across distributed environments. Understanding the architecture, key concepts, and common terminology is essential for effectively deploying and managing simplyblock in various infrastructure setups, including Kubernetes clusters, virtualized environments, and bare-metal deployments. This documentation provides a comprehensive overview of simplyblock\u2019s internal architecture, the components that power it, and the best practices for integrating it into your storage infrastructure.

This section covers several critical topics, including the architecture of simplyblock, core concepts such as Logical Volumes (LVs), Storage Nodes, and Management Nodes, as well as Quality of Service (QoS) mechanisms and redundancy strategies. Additionally, we define common terminology used throughout the documentation to ensure clarity and consistency. Readers will also find guidelines on document conventions, such as formatting, naming standards, and command syntax, which help maintain uniformity across all technical content.

Simplyblock is an evolving platform, and community contributions play a vital role in improving its documentation. Whether you are a developer, storage administrator, or end user, your insights and feedback are valuable. This section provides details on how to contribute to the documentation, report issues, suggest improvements, and submit pull requests. By working together, we can ensure that simplyblock\u2019s documentation remains accurate, up-to-date, and beneficial for all users.

"},{"location":"architecture/high-availability-fault-tolerance/","title":"High Availability and Fault Tolerance","text":"

Simplyblock is designed to provide enterprise-grade high availability (HA) and fault tolerance for enterprise and cloud-native storage environments. Through a combination of distributed architecture and advanced data protection mechanisms, simplyblock ensures continuous data access, resilience against failures, and minimal service disruption. Fault tolerance is embedded at multiple levels of the system, from data redundancy to control plane and storage path resilience.

"},{"location":"architecture/high-availability-fault-tolerance/#fault-tolerance-and-high-availability-mechanisms","title":"Fault Tolerance and High Availability Mechanisms","text":"

Simplyblock\u2019s architecture provides robust fault tolerance and high availability by combining distributed erasure coding, multipath access with failover, and redundant management and storage planes. These capabilities ensure that Simplyblock storage clusters deliver the reliability and resiliency required for critical, high-demand workloads in modern distributed environments.

"},{"location":"architecture/high-availability-fault-tolerance/#1-distributed-erasure-coding","title":"1. Distributed Erasure Coding","text":"

Simplyblock protects data using distributed erasure coding, which ensures that data is striped across multiple storage nodes along with parity fragments. This provides:

  • Redundancy: Data can be reconstructed even if one or more nodes fail, depending on the configured erasure coding scheme (such as 1+1, 1+2, 2+1, or 2+2).
  • Efficiency: Storage overhead is minimized compared to full replication while maintaining strong fault tolerance.
  • Automatic Rebuilds: In the event of node or disk failures, missing data is rebuilt automatically using parity information stored across the cluster.
"},{"location":"architecture/high-availability-fault-tolerance/#2-multipathing-with-primary-and-secondary-nodes","title":"2. Multipathing with Primary and Secondary Nodes","text":"

Simplyblock supports NVMe-over-Fabrics (NVMe-oF) multipathing to provide path redundancy between clients and storage:

  • Primary and Secondary Paths: Each Logical Volume (LV) is accessible through both a primary node and one or more secondary nodes.
  • Automatic Failover: If the primary node becomes unavailable, traffic is automatically redirected to a secondary node with minimal disruption.
  • Load Balancing: Multipathing also distributes I/O across available paths to optimize performance and reliability.
"},{"location":"architecture/high-availability-fault-tolerance/#3-redundant-control-plane-and-storage-plane","title":"3. Redundant Control Plane and Storage Plane","text":"

To ensure cluster-wide availability, Simplyblock operates with full redundancy in both its control plane and storage plane:

  • Control Plane (Management Nodes):

    • Deployed as a highly available set of management nodes, typically in a quorum-based configuration.
    • Responsible for cluster health, topology management, and coordination.
    • Remains operational even if one or more management nodes fail.
  • Storage Plane (Storage Nodes):

    • Storage services are distributed across multiple storage nodes.
    • Data and workloads are automatically rebalanced and protected in case of node or device failures.
    • Failures are handled transparently with automatic recovery processes.
"},{"location":"architecture/high-availability-fault-tolerance/#benefits-of-simplyblocks-high-availability-design","title":"Benefits of Simplyblock\u2019s High Availability Design","text":"
  • No single point of failure across the control plane, storage plane, and data paths.
  • Seamless failover and recovery from node, network, or disk failures.
  • Efficient use of storage capacity while ensuring redundancy through erasure coding.
  • Continuous operation during maintenance and upgrade procedures.
"},{"location":"architecture/simplyblock-architecture/","title":"Simplyblock Architecture","text":"

Simplyblock is a cloud-native, distributed block storage platform designed to deliver scalable, high-performance, and resilient storage through a software-defined architecture. Centered around NVMe-over-Fabrics (NVMe-oF), simplyblock separates compute and storage to enable scale-out elasticity, high availability, and low-latency operations in modern, containerized environments. The architecture is purpose-built to support Kubernetes-native deployments with seamless integration but supports virtual and even physical machines as clients as well.

"},{"location":"architecture/simplyblock-architecture/#control-plane","title":"Control Plane","text":"

The control plane hosts the Simplyblock Management API and CLI endpoints with identical features. The CLI is equally available on all management nodes. The API and CLI are secured using HTTPS / TLS.

The control plane operates through redundant management nodes that handle cluster health, metadata, and orchestration. A quorum-based model ensures no single point of failure.

"},{"location":"architecture/simplyblock-architecture/#control-plane-responsibilities","title":"Control Plane Responsibilities","text":"

The control plane provides the following functionality:

  • Lifecycle management of clusters:
    • Deploy storage clusters
    • Manages nodes and devices
    • Resize and reconfigure clusters
  • Lifecycle management of logical volumes and pools
    • For Kubernetes, the Simplyblock CSI driver integrates with the persistent volume lifecycle management
  • Cluster operations
    • I/O Statistics
    • Capacity Statistics
    • Alerts
    • Logging
    • others

The control plane also provides real-time collection and aggregation of I/O stats (performance, capacity, utilization), proactive cluster monitoring and health checks, monitoring dashboards, alerting, a log file repository with a management interface, data migration, and automated node and device restart services.

For monitoring dashboards and alerting, the simplyblock control plane provides Grafana and Prometheus. Both systems are configured to provide a set of standard alerts that can be delivered via Slack or email. Additionally, customers are free to define their own custom alerts.

For log management, simplyblock uses Graylog. For a comprehensive insight, Graylog is configured to collect container logs from the control plane and storage plane services, the RPC communication between the control plane and storage cluster and the data services logs (SPDK\u00a0\u29c9 or Storage Performance Development Kit).

"},{"location":"architecture/simplyblock-architecture/#control-plane-state-storage","title":"Control Plane State Storage","text":"

The control plane is implemented as a stack of containers running on one or more management nodes. For production environments, simplyblock requires at least three management nodes for high availability. The management nodes run as a set of replicated, stateful services.

For internal state storage, the control plane uses FoundationDB\u00a0\u29c9 as its key-value store. FoundationDB itself operates in a replicated, highly available cluster across all management nodes.

Within Kubernetes deployments, the control plane can now also be deployed alongside the storage nodes on the same k8s workers. It will, however, run in separate pods.

"},{"location":"architecture/simplyblock-architecture/#storage-plane","title":"Storage Plane","text":"

The storage plane consists of distributed storage nodes that run on Linux-based systems and provide logical volumes ( LVs) as virtual NVMe devices. Using SPDK and DPDK (Data Plane Development Kit), simplyblock achieves high-speed, user-space storage operations with minimal latency.

To achieve that, simplyblock detaches NVMe devices from the Linux kernel, bypassing the typical kernel-based handling. It then takes full control of the device directly, handling all communication with the hardware in user-space. That removes transitions from user-space to kernel and back, improving latency and reducing processing time and context switches.

"},{"location":"architecture/simplyblock-architecture/#scaling-and-performance","title":"Scaling and Performance","text":"

Simplyblock supports linear scale-out by adding storage nodes without service disruption. Performance increases with additional cores, network interfaces, and NVMe devices, with SPDK minimizing CPU overhead for maximum throughput.

Data written to a simplyblock logical volume is split into chunks and distributed across the storage plane cluster nodes. This improves throughput by parallelizing the access to data through multiple storage nodes.

"},{"location":"architecture/simplyblock-architecture/#data-protection-fault-tolerance","title":"Data Protection & Fault Tolerance","text":"

Simplyblock's storage engine implements erasure coding, a RAID-like system, which uses parity information to protect data and restore it in case of a failure. Due to the fully distributed nature of simplyblock's erasure coding implementation, parity information is not only stored on disks other than the original data chunk, but also on other nodes. This improves data protection and enables higher fault tolerance than typical implementations. While most erasure coding implementations provide a Maximum Tolerable Failure (MTF) in terms of how many disks can fail, simplyblock defines it as the number of nodes that can fail.

As a second layer, simplyblock leverages NVMe-oF multipathing to ensure continuous access to logical volumes by automatically handling failover between primary and secondary nodes. Each volume is presented with multiple active paths, allowing I/O operations to seamlessly reroute through secondary nodes if the primary node becomes unavailable due to failure, maintenance, or network disruption. This multipath configuration is managed transparently by the NVMe-oF subsystem, providing path redundancy, eliminating single points of failure, and maintaining high availability without requiring manual intervention. The system continuously monitors path health, and when the primary path is restored, it can be automatically reintegrated, ensuring optimal performance and reliability.

Last, simplyblock provides robust encryption for data-at-rest, ensuring that all data stored on logical volumes is protected using industry-standard AES-XTS encryption with minimal performance overhead. This encryption is applied at the volume level and is managed transparently within the simplyblock cluster, allowing compliance with strict regulatory requirements such as GDPR, HIPAA, and PCI-DSS. Furthermore, simplyblock\u2019s architecture is designed for strong multitenant isolation, ensuring that encryption keys, metadata, and data are securely segregated between tenants. This guarantees that unauthorized access between workloads and users is prevented, making simplyblock an ideal solution for shared environments where security, compliance, and tenant separation are critical.

"},{"location":"architecture/simplyblock-architecture/#technologies-in-simplyblock","title":"Technologies in Simplyblock","text":"

Strong and reliable distributed storage technology has to be built on a strong foundation. That's why simplyblock uses a variety of key open-source technologies as its basis.

Component Technologies Networking NVMe-oF\u00a0\u29c9, NVMe/TCP, NVMe/RoCE, DPDK\u00a0\u29c9 Storage SPDK\u00a0\u29c9, FoundationDB\u00a0\u29c9, MongoDB\u00a0\u29c9 Observability Prometheus\u00a0\u29c9, Thanos\u00a0\u29c9, Grafana\u00a0\u29c9 Logging Graylog\u00a0\u29c9, OpenSearch\u00a0\u29c9 Kubernetes SPDK CSI\u00a0\u29c9, Kubernetes CSI\u00a0\u29c9 Operating System Linux\u00a0\u29c9"},{"location":"architecture/storage-performance-and-qos/","title":"Performance and QoS","text":""},{"location":"architecture/storage-performance-and-qos/#storage-performance-indicators","title":"Storage Performance Indicators","text":"

Storage performance can be categorized by latency (the aggregate response time of an IO request from the host to the storage system) and throughput. Throughput can be broken down into random IOPS throughput and sequential throughput.

IOPS and sequential throughput must be measured relative to capacity (e.g., IOPS per TB).
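The capacity-relative view can be made concrete with a small worked example. The numbers and the `iops_per_tb` helper are illustrative assumptions, not measurements:

```python
# Worked example: two clusters with identical raw IOPS but different
# capacities compare very differently once normalized per TB.
def iops_per_tb(total_iops: int, capacity_tb: float) -> float:
    return total_iops / capacity_tb

small = iops_per_tb(1_000_000, 10)    # dense performance per TB
large = iops_per_tb(1_000_000, 100)   # ten times less performance per TB
```

The second cluster delivers only a tenth of the per-TB performance, which matters when workloads scale with stored data.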

Latency and IOPS throughput depend heavily on the IO operation (read, write, unmap) and the IO size (4K, 8K, 16K, 32K, ...). For comparability, it is typically tested with a 4K IO size, but tests with 8K to 128K are standard too.

Latency is strongly influenced by the overall load on the storage system. If there is intense IO pressure, queues build up and response times increase. This is no different from a traffic jam on the highway or a queue at the airline counter. Therefore, to compare latency results, latency must be measured under a fixed system load (amount of parallel IO, its size, and IO type mix).

Important

For latency, consistency matters. High latency variability, especially in the tail, can severely impact workloads. Therefore, 99th percentile latency may be more important than the average or median.
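Why the 99th percentile matters more than the average can be shown with made-up latency samples. The values and the nearest-rank `percentile` helper below are illustrative assumptions only:

```python
import math

# Illustrative only: average, median, and p99 latency from a synthetic
# sample of IO completion times in microseconds.
samples_us = sorted([90, 95, 100, 105, 110] * 19 + [100, 120, 180, 2500, 5000])

def percentile(sorted_values, p):
    """Nearest-rank percentile: the value at ceil(p% of the sample count)."""
    rank = math.ceil(p / 100 * len(sorted_values))
    return sorted_values[rank - 1]

avg = sum(samples_us) / len(samples_us)   # pulled up by a few slow outliers
p50 = percentile(samples_us, 50)          # the median still looks healthy
p99 = percentile(samples_us, 99)          # the tail tells the real story
```

A handful of slow IOs barely move the median, yet the 99th percentile jumps by more than an order of magnitude; a workload issuing thousands of IOs per second hits that tail constantly.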

"},{"location":"architecture/storage-performance-and-qos/#challenges-with-hyper-converged-and-software-defined-storage","title":"Challenges with Hyper-Converged and Software-Defined Storage","text":"

Unequal load distribution across cluster nodes, and the dynamics of specific nodes under Linux or Windows (dynamic multithreading, network bandwidth fluctuations, etc.), create significant challenges for consistent, high storage performance in such an environment.

Mixed IO patterns from different workloads increase these challenges further.

This can cause substantial variability in latency and IOPS throughput, as well as high tail latency, with a negative impact on workloads.

"},{"location":"architecture/storage-performance-and-qos/#simplyblock-how-we-ensure-ultra-low-latency-in-the-99th-percentile","title":"Simplyblock: How We Ensure Ultra-Low Latency In The 99th Percentile","text":"

Simplyblock exhibits a range of architectural characteristics and features to guarantee consistently low latency and IOPS in both disaggregated and hyper-converged environments.

"},{"location":"architecture/storage-performance-and-qos/#pseudo-randomized-distributed-data-placement-with-fast-re-balancing","title":"Pseudo-Randomized, Distributed Data Placement With Fast Re-Balancing","text":"

Simplyblock is a fully distributed solution. Back-storage is balanced across all nodes in the cluster on a very granular level. Relative to their capacity and performance, each device and node in the cluster receives a similar amount and size of IO. This feature ensures an entirely equal distribution of load across the network, compute, and NVMe drives.

In case of drive or node failures, distributed rebalancing occurs to reach the fully balanced state as quickly as possible. When adding drives and nodes, performance increases in a linear manner. This mechanism avoids local overload and keeps latency and IOPS throughput consistent across the cluster, independent of which node is accessed.

"},{"location":"architecture/storage-performance-and-qos/#built-end-to-end-with-and-for-nvme","title":"Built End-To-End With And For NVMe","text":"

Storage access is entirely based on NVMe (local back-storage) and NVMe over Fabric (hosts to storage nodes and storage nodes to storage nodes). This protocol is inherently asynchronous and supports highly parallel processing, eliminating bottlenecks specific to mixed IO patterns on other protocols (such as iSCSI) and ensuring consistently low latency.

"},{"location":"architecture/storage-performance-and-qos/#support-for-rocev2","title":"Support for ROCEv2","text":"

Simplyblock also supports NVMe over RDMA (RoCEv2). RDMA, as a transport layer, offers significant latency and tail latency advantages over TCP. Today, RDMA can be used in most data center environments because it requires only specific hardware features from NICs, which are available across a broad range of models. It runs over UDP/IP and, as such, does not require any changes to the networking.

"},{"location":"architecture/storage-performance-and-qos/#full-core-isolation-and-numa-awareness","title":"Full Core-Isolation And NUMA Awareness","text":"

Simplyblock implements full CPU core isolation and NUMA socket affinity. Simplyblock\u2019s storage nodes are auto-deployed per NUMA socket and utilize only socket-specific resources, meaning compute, memory, network interfaces, and NVMe.

All CPU cores assigned to simplyblock are isolated from the operating system (user-space compute and IRQ handling), and internal threads are pinned to cores. This avoids any scheduling-induced delays or variability in storage processing.
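Thread-to-core pinning rests on an ordinary OS primitive. The sketch below is a minimal, Linux-only illustration using Python's `os.sched_setaffinity`; simplyblock's actual data plane uses SPDK/DPDK thread pinning, not this call, so treat it purely as a demonstration of the concept:

```python
import os

# Minimal sketch (Linux-only): restrict the calling process/thread to a
# single CPU core, the building block of core isolation and pinning.
def pin_to_core(core_id: int) -> set:
    """Pin the calling process to one core and return the new affinity mask."""
    os.sched_setaffinity(0, {core_id})  # pid 0 = the calling process
    return os.sched_getaffinity(0)

# Pin to the lowest core this process is currently allowed to run on.
core = min(os.sched_getaffinity(0))
mask = pin_to_core(core)                # now a single-core affinity mask
```

Once pinned, the scheduler can no longer migrate the thread between cores, eliminating migration-induced cache misses and scheduling jitter on the hot path.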

"},{"location":"architecture/storage-performance-and-qos/#user-space-zero-copy-framework-kockless-and-asynchronous","title":"User-Space, Zero-Copy Framework (Kockless and Asynchronous)","text":"

Simplyblock uses a user-space framework (SPDK\u00a0\u29c9). SPDK implements a zero-copy model across the entire storage processing chain. This includes the data plane, the Linux vfio driver, and the entirely lock-free, asynchronous DPDK threading model. Avoiding Linux pthreads and any inter-thread synchronization provides much higher latency predictability and a lower baseline latency.

"},{"location":"architecture/storage-performance-and-qos/#advanced-qos-quality-of-service","title":"Advanced QoS (Quality of Service)","text":"

Simplyblock implements two independent, critical QoS mechanisms.

"},{"location":"architecture/storage-performance-and-qos/#volume-and-pool-level-caps","title":"Volume and Pool-Level Caps","text":"

A cap, such as an IOPS, throughput limit, or a combination of both, can be set on an individual volume or an entire pool within the cluster. Through this limit, general-purpose volumes can be pooled and limited in their total IOPS or throughput to avoid noisy-neighbor effects and protect more critical workloads.
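A common way to enforce such a cap is a token bucket refilled once per interval. The sketch below is a hedged, conceptual illustration; simplyblock's actual limiter implementation is not described in this document, and the `IopsCap` class is invented for the example:

```python
# Conceptual sketch: enforcing an IOPS cap with a per-second token bucket.
class IopsCap:
    def __init__(self, iops_limit: int):
        self.capacity = iops_limit      # tokens granted per second
        self.tokens = iops_limit

    def refill(self):
        """Called once per second: restore the full per-second budget."""
        self.tokens = self.capacity

    def try_io(self) -> bool:
        """Admit one IO if budget remains this second, otherwise throttle."""
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

cap = IopsCap(iops_limit=3)
admitted = [cap.try_io() for _ in range(5)]  # only the first 3 IOs pass
cap.refill()                                  # next second: budget restored
```

Applying one shared bucket to a whole pool is what lets many general-purpose volumes be capped collectively rather than individually.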

"},{"location":"architecture/storage-performance-and-qos/#qos-service-classes","title":"QoS Service Classes","text":"

On each cluster, up to 7 service classes can be defined (class 0 is the default). For each class, cluster performance (a combination of IOPS and throughput) can be allocated in relative terms (e.g., 20%) for performance guarantees.

General-purpose volumes can be allocated in the default class, while more critical workloads can be split across other service classes. If other classes do not use up their quotas, the default class can still allocate all available resources.

"},{"location":"architecture/storage-performance-and-qos/#why-qos-service-classes-are-critical","title":"Why QoS Service Classes are Critical","text":"

Why is a limit not sufficient? Imagine a heavily mixed workload in the cluster. Some workloads are read-intensive, while others are write-intensive. Some workloads require a lot of small random IO, while others read and write large sequential IO. There is no absolute number of IOPS or throughput a cluster can provide, considering the dynamics of workloads.

Therefore, using absolute limits on one pool of volumes is effective for protecting others from spillover effects and undesired behavior. Still, it does not guarantee performance for a particular class of volumes.

Service classes provide a much better degree of isolation under the consideration of dynamic workloads. As long as you do not overload a particular service class, the general IO pressure on the cluster will not matter for the performance of volumes in that class.

"},{"location":"architecture/what-is-simplyblock/","title":"What is Simplyblock?","text":"

Simplyblock is a high-performance, distributed storage orchestration layer designed for cloud-native environments. It provides NVMe over TCP (NVMe/TCP) block storage to hosts and offers block storage to containers through its Container Storage Interface (CSI) and Proxmox drivers.

"},{"location":"architecture/what-is-simplyblock/#what-makes-simplyblock-special","title":"What makes Simplyblock Special?","text":"
  • Environment Agnostic: Simplyblock operates seamlessly across major cloud providers, regional, and specialized providers, bare-metal and virtual provisioners, and private clouds, including both virtualized and bare-metal Kubernetes environments.

  • NVMe-Optimized: Simplyblock is built from scratch around NVMe. All internal and external storage access is entirely based on NVMe and NVMe over Fabric (TCP, RDMA). This includes local back-storage on storage nodes, host-to-cluster, and node-to-node traffic. Together with the user-space data plane, distributed data placement, and advanced quality of service (QoS) and other characteristics, this makes simplyblock the storage platform with the most advanced performance guarantees in hyperconverged solutions available today.

  • User-Space Data Plane: Simplyblock's data plane is built entirely in user-space with an interrupt-free, lockless, zero-copy architecture with thread-to-core pinning. The hot data path entirely avoids Linux kernel involvement, data copies, dynamic thread scheduling, and inter-thread synchronization. Its deployment is fully NUMA-node-aware.

  • Advanced QoS: Simplyblock provides not only IOPS or throughput-based caps, but also true QoS service classes, effectively isolating IO traffic.

  • Distributed Data Placement: Simplyblock's advanced data placement, which is based on small, fixed-size data chunks, ensures a perfectly balanced utilization of storage, compute, and network bandwidth, avoiding any performance bottlenecks local to specific nodes. This provides almost linear performance scalability for the cluster.

  • Containerized Architecture: The solution comprises:

    • Storage Nodes: Container stacks delivering distributed data services via NVMe over Fabrics (NVMe over TCP), forming storage clusters.
    • Management Nodes: Container stacks offering control and management services, collectively known as the control plane.
  • Platform Support: Simplyblock supports deployment on virtual machines, bare-metal instances, and Kubernetes containers, compatible with x86 and ARM architectures.

  • Deployment Flexibility: Simplyblock offers the greatest deployment flexibility in the industry. It can be deployed hyper-converged, disaggregated, and in a hybrid fashion, combining the best of both worlds.

"},{"location":"architecture/what-is-simplyblock/#customer-benefits-across-industries","title":"Customer Benefits Across Industries","text":"

Simplyblock offers tailored advantages to various sectors:

  • Financial Services: Enhances data management by boosting performance, strengthening security, and optimizing cloud storage costs.

  • Media and Gaming: Improves storage performance, reduces costs, and streamlines data management, facilitating efficient handling of large media files and gaming data.

  • Technology and SaaS Companies: Provides cost savings and performance enhancements, simplifying storage management and improving application performance without significant infrastructure changes.

  • Telecommunications: Offers ultra-low-latency access to data, enhances security, and simplifies complex storage infrastructures, aiding in the efficient management of customer records and network telemetry.

  • Blockchain and Cryptocurrency: Delivers cost efficiency, enhanced performance, scalability, and robust data security, addressing the unique storage demands of blockchain networks.

"},{"location":"architecture/concepts/","title":"Concepts","text":"

Understanding the fundamental concepts behind simplyblock is essential for effectively utilizing its distributed storage architecture. Simplyblock provides a cloud-native, software-defined storage solution that enables highly scalable, high-performance storage for containerized and virtualized environments. By leveraging NVMe over TCP (NVMe/TCP) and advanced data management features, simplyblock ensures low-latency access, high availability, and seamless scalability. This documentation section provides detailed explanations of key storage concepts within simplyblock, helping users understand how its storage components function and interact within a distributed system.

The concepts covered in this section include Logical Volumes (LVs), Snapshots, Clones, Hyper-Convergence, Disaggregation, and more. Each concept plays a crucial role in optimizing storage performance, ensuring data durability, and enabling efficient resource allocation. Whether you are deploying simplyblock in a Kubernetes environment, a virtualized infrastructure, or a bare-metal setup, understanding these core principles will help you design, configure, and manage your storage clusters effectively.

By familiarizing yourself with these concepts, you will gain insight into how simplyblock abstracts storage resources, provides scalable and resilient data services, and integrates with modern cloud-native environments. This knowledge is essential for leveraging simplyblock to meet your organization's storage performance, reliability, and scalability requirements.

"},{"location":"architecture/concepts/automatic-rebalancing/","title":"Automatic Rebalancing","text":"

Automatic rebalancing is a fundamental feature of distributed data storage systems designed to maintain an even distribution of data across storage nodes. This process ensures optimal performance, prevents resource overutilization, and enhances system resilience by dynamically redistributing data in response to changes in cluster topology or workload patterns.

In a distributed storage system, data is typically spread across multiple storage nodes for redundancy, scalability, and performance. Over time, various factors can lead to an imbalance in data distribution, such as:

  • The addition of new storage nodes, which initially lack any data.
  • The removal or failure of existing nodes, requiring data redistribution to maintain availability.
  • Uneven growth or deletion of data over time, leading to skewed capacity utilization across nodes.

Automatic rebalancing addresses these issues by dynamically redistributing data across the cluster. This process is driven by an algorithm that continuously monitors data distribution and redistributes data when imbalances are detected. The goal is to achieve uniform data placement while minimizing performance overhead during the rebalancing process.
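A heavily simplified version of such an algorithm is sketched below: repeatedly move one chunk from the most-loaded node to the least-loaded one until utilization is level. This is a hypothetical illustration of the principle; real rebalancers (including simplyblock's) also weigh capacity, performance, and data-movement cost:

```python
# Simplified rebalancing pass: level per-node chunk counts by moving one
# chunk at a time from the fullest node to the emptiest.
def rebalance(load: dict) -> list:
    """Return the (source, destination) chunk moves needed to level load."""
    moves = []
    while True:
        hi = max(load, key=load.get)
        lo = min(load, key=load.get)
        if load[hi] - load[lo] <= 1:
            return moves  # balanced to within one chunk
        load[hi] -= 1
        load[lo] += 1
        moves.append((hi, lo))

# A freshly added, empty node ("D") draws chunks from the loaded nodes.
load = {"A": 40, "B": 40, "C": 40, "D": 0}
moves = rebalance(load)
```

After the pass, every node holds the same number of chunks, and the move list shows that only the minimum necessary data was relocated.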

"},{"location":"architecture/concepts/disaggregated/","title":"Disaggregated","text":"

Disaggregated storage represents a modern approach to distributed storage architectures, where compute and storage resources are decoupled. This separation allows for greater flexibility, scalability, and efficiency in managing data across large-scale distributed environments.

Traditional storage architectures typically integrate compute and storage within the same nodes, leading to resource contention and inefficiencies. Disaggregated storage solutions address these limitations by separating storage resources from compute resources, enabling independent scaling of each component based on workload demands.

Key characteristics of disaggregated storage solutions include:

  • Independent Scalability: Compute and storage can be scaled separately, optimizing resource utilization and reducing unnecessary hardware expansion.
  • Resource Efficiency: Storage is pooled and accessible across multiple compute nodes, reducing data duplication and improving overall efficiency.
  • Improved Performance: By reducing bottlenecks associated with tightly coupled storage, applications can achieve better latency and throughput.
  • Flexibility and Adaptability: Different storage technologies (e.g., NVMe-over-Fabrics, object storage) can be integrated seamlessly, allowing organizations to adopt the best-fit storage solutions for specific workloads.
  • Simplified Management: Centralized storage management reduces complexity, enabling easier provisioning, monitoring, and maintenance of storage resources.
"},{"location":"architecture/concepts/erasure-coding/","title":"Erasure Coding","text":"

Erasure coding is a data protection mechanism used in distributed storage systems to enhance fault tolerance and optimize storage efficiency. It provides redundancy by dividing data into multiple fragments and encoding it with additional parity fragments, enabling data recovery in the event of node failures.

Traditional data redundancy methods, such as replication, require multiple full copies of data, leading to significant storage overhead. Erasure coding improves upon this by using mathematical algorithms to generate parity fragments, allowing data reconstruction with less storage overhead.

The core principle of erasure coding involves breaking data into k data fragments and computing m parity fragments. These k+m fragments are distributed across multiple storage nodes. The system can recover lost data using any k available fragments, even if up to m fragments are missing or corrupted.

Erasure coding has a number of key characteristics:

  • High Fault Tolerance: Erasure coding can tolerate multiple node failures while allowing full data recovery.
  • Storage Efficiency: Compared to replication, erasure coding requires less additional storage to achieve similar levels of redundancy.
  • Computational Overhead: Encoding and decoding operations involve computational complexity, which may impact performance in latency-sensitive applications.
  • Flexibility: The parameters k and m can be adjusted to balance redundancy, performance, and storage overhead.
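The k+m principle can be demonstrated in its smallest form with XOR parity (k=2, m=1). Production erasure codes such as Reed-Solomon support m > 1; this sketch only shows the core idea that any k surviving fragments suffice:

```python
# Minimal erasure-coding illustration: 2 data fragments + 1 XOR parity
# fragment (k=2, m=1). Losing any single fragment is recoverable.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d1, d2 = b"DATA-ONE", b"DATA-TWO"   # k = 2 data fragments
parity = xor(d1, d2)                 # m = 1 parity fragment

# Fragment d1 is lost; the k = 2 surviving fragments reconstruct it,
# because parity XOR d2 == (d1 XOR d2) XOR d2 == d1.
recovered = xor(parity, d2)
```

With three fragments on three different nodes, this tiny scheme already tolerates one node failure while storing only 1.5x the data, versus 2x for simple mirroring.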
"},{"location":"architecture/concepts/hyper-converged/","title":"Hyper-Converged","text":"

Hyper-converged storage is a key component of hyper-converged infrastructure (HCI), where compute, storage, and networking resources are tightly integrated into a unified system. This approach simplifies management, enhances scalability, and optimizes resource utilization in distributed data storage environments.

Traditional storage architectures often separate compute and storage into distinct hardware layers, requiring complex management and specialized hardware. Hyper-converged storage consolidates these resources within the same nodes, forming a software-defined storage (SDS) layer that dynamically distributes and manages data across the cluster.

Key characteristics of hyper-converged storage include:

  • Integrated Storage and Compute: Storage resources are virtualized and distributed across the compute nodes, eliminating the need for dedicated storage arrays.
  • Scalability: New nodes can be added seamlessly, increasing both compute and storage capacity without complex reconfiguration.
  • Software-Defined Storage (SDS): A software layer abstracts and manages storage resources, enabling automation, fault tolerance, and efficiency.
  • High Availability and Resilience: Data is replicated across nodes to ensure redundancy and fault tolerance, minimizing downtime.
  • Simplified Management: A unified management interface enables streamlined provisioning, monitoring, and maintenance of storage and compute resources.
"},{"location":"architecture/concepts/logical-volumes/","title":"Logical Volumes","text":"

Logical Volumes (LVs) in Simplyblock are virtual NVMe devices that provide scalable, high-performance storage within a distributed storage cluster. They enable flexible storage allocation, efficient resource utilization, and seamless data management for cloud-native applications.

A Logical Volume (LV) in simplyblock is an abstracted storage entity dynamically allocated from a storage pool managed by the simplyblock system. Unlike traditional block storage, simplyblock\u2019s LVs offer advanced features such as thin provisioning, snapshotting, and replication to enhance resilience and scalability.

Key characteristics of Logical Volumes include:

  • Dynamic Allocation: LVs can be created, resized, and deleted on demand without manual intervention in the underlying hardware.
  • Thin Provisioning: Storage space is allocated only when needed, optimizing resource utilization.
  • High Performance: Simplyblock\u2019s architecture ensures low-latency access to LVs, making them suitable for demanding workloads.
  • Fault Tolerance: Data is distributed across multiple nodes to prevent data loss and improve reliability.

Two basic types of logical volumes are supported by simplyblock:

  • NVMe-oF Subsystems: Each logical volume is backed by a separate set of queue pairs. By default, each subsystem provides three queue pairs and one network connection.

Volumes show up in Linux using lsblk as /dev/nvme0n1, /dev/nvme1n1, /dev/nvmeXn1, ...

  • NVMe-oF Namespaces: Each logical volume is backed by an NVMe namespace. A namespace is a feature similar to a logical partition of a drive, although it is defined on the NVMe level (device or target). Up to 32 namespaces share a single NVMe subsystem and its queue pairs and connections.

This is a more resource-efficient but performance-limited type of volume. It is useful if many small volumes are required. Both methods can be combined in a single cluster.

Volumes show up in Linux using lsblk as /dev/nvme0n1, /dev/nvme0n2, /dev/nvme0nX, ...

"},{"location":"architecture/concepts/persistent-volumes/","title":"Persistent Volumes","text":"

Persistent Volumes (PVs) in Kubernetes provide a mechanism for managing storage resources independently of individual Pods. Unlike ephemeral storage, which is tied to the lifecycle of a Pod, PVs ensure data persistence across Pod restarts and rescheduling, enabling stateful applications to function reliably in a Kubernetes cluster.

In Kubernetes, storage resources are abstracted through the Persistent Volume framework, which decouples storage provisioning from application deployment. A Persistent Volume (PV) represents a piece of storage that has been provisioned in the cluster, while a Persistent Volume Claim (PVC) is a request for storage made by an application.

Key characteristics of Persistent Volumes include:

  • Decoupled Storage Management: PVs exist independently of Pods, allowing storage to persist even when Pods are deleted or rescheduled.
  • Dynamic and Static Provisioning: Storage can be provisioned manually by administrators (static provisioning) or automatically by storage classes (dynamic provisioning).
  • Access Modes: PVs support multiple access modes, such as ReadWriteOnce (RWO), ReadOnlyMany (ROX), and ReadWriteMany (RWX), defining how storage can be accessed by Pods.
  • Reclaim Policies: When a PV is no longer needed, it can be retained, recycled, or deleted based on its configured reclaim policy.
  • Storage Classes: Kubernetes allows administrators to define different types of storage using StorageClasses, enabling automated provisioning of PVs based on workload requirements.
"},{"location":"architecture/concepts/simplyblock-cluster/","title":"Simplyblock Cluster","text":"

The simplyblock storage platform consists of three different types of cluster nodes, each belonging to either the control plane or the storage plane.

"},{"location":"architecture/concepts/simplyblock-cluster/#control-plane","title":"Control Plane","text":"

The control plane orchestrates, monitors, and controls the overall storage infrastructure. It provides centralized administration, policy enforcement, and automation for managing storage nodes, logical volumes (LVs), and cluster-wide configurations. The control plane operates independently of the storage plane, ensuring that control and metadata operations do not interfere with data processing. It facilitates provisioning, fault management, and system scaling while offering APIs and CLI tools for seamless integration with external management systems. A single control plane can manage multiple clusters.

"},{"location":"architecture/concepts/simplyblock-cluster/#storage-plane","title":"Storage Plane","text":"

The storage plane is the layer responsible for managing and distributing data across storage nodes within a cluster. It handles data placement, replication, fault tolerance, and access control, ensuring that logical volumes (LVs) provide high-performance, low-latency storage to applications. The storage plane operates independently of the control plane, allowing seamless scalability and dynamic resource allocation without disrupting system operations. By leveraging NVMe-over-TCP and software-defined storage principles, simplyblock\u2019s storage plane ensures efficient data distribution, high availability, and resilience, making it ideal for cloud-native and high-performance computing environments.

"},{"location":"architecture/concepts/simplyblock-cluster/#management-node","title":"Management Node","text":"

A management node is a node of the control plane cluster. The management node runs the necessary management services including the Cluster API, services such as Grafana, Prometheus, and Graylog, as well as the FoundationDB database cluster.

"},{"location":"architecture/concepts/simplyblock-cluster/#storage-node","title":"Storage Node","text":"

A storage node is a node of the storage plane cluster. The storage node provides storage capacity to the distributed storage pool of a specific storage cluster. The storage node runs the necessary data management services including the Storage Node Management API, the SPDK service, and handles logical volume primary connections of NVMe-oF multipathing.

"},{"location":"architecture/concepts/simplyblock-cluster/#secondary-node","title":"Secondary Node","text":"

A secondary node is a node of the storage plane cluster. The secondary node provides automatic failover and high availability for logical volumes using NVMe-oF multipathing. In a highly available cluster, simplyblock automatically provisions secondary nodes alongside primary nodes and assigns one secondary node per primary.

"},{"location":"architecture/concepts/snapshots-clones/","title":"Snapshots and Clones","text":"

Volume snapshots and volume clones are essential data management features in distributed storage systems that enable data protection, recovery, and replication. While both techniques involve capturing the state of a volume at a specific point in time, they serve distinct purposes and operate using different mechanisms.

"},{"location":"architecture/concepts/snapshots-clones/#volume-snapshots","title":"Volume Snapshots","text":"

A volume snapshot is a read-only, point-in-time copy of a storage volume. It preserves the state of the volume at the moment the snapshot is taken, allowing users to restore data or create new volumes based on the captured state. Snapshots are typically implemented using copy-on-write (COW) or redirect-on-write (ROW) techniques, minimizing storage overhead and improving efficiency.

Key characteristics of volume snapshots include:

  • Space Efficiency: Snapshots share common data blocks with the original volume, requiring minimal additional storage.
  • Fast Creation: As snapshots do not duplicate data immediately, they can be created almost instantaneously.
  • Versioning and Recovery: Users can revert a volume to a previous state using snapshots, aiding in disaster recovery and data protection.
  • Performance Considerations: While snapshots are efficient, excessive snapshot accumulation can impact performance due to metadata overhead and fragmentation.
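The copy-on-write mechanism behind space-efficient snapshots can be sketched in miniature. The `CowVolume` class below is a hypothetical, block-map-level illustration, not simplyblock's implementation:

```python
# Conceptual copy-on-write sketch: a snapshot shares blocks with its volume
# until the volume overwrites them; only then is extra space consumed.
class CowVolume:
    def __init__(self, blocks: dict):
        self.blocks = blocks
        self.snapshot = {}   # blocks preserved exclusively for the snapshot

    def take_snapshot(self):
        self.snapshot = {}   # starts empty: every block is still shared

    def write(self, lba: int, data: bytes):
        """Preserve the old block for the snapshot before overwriting it."""
        if lba in self.blocks and lba not in self.snapshot:
            self.snapshot[lba] = self.blocks[lba]
        self.blocks[lba] = data

    def read_snapshot(self, lba: int) -> bytes:
        """Read the snapshot view: preserved copy if one exists, else shared."""
        return self.snapshot.get(lba, self.blocks[lba])

vol = CowVolume({0: b"old0", 1: b"old1"})
vol.take_snapshot()
vol.write(0, b"new0")   # only block 0 is copied; block 1 remains shared
```

Note that snapshot creation itself allocates nothing; space is consumed lazily, one block per first overwrite, which is why frequent small writes to a snapshotted volume gradually grow the snapshot's footprint.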
"},{"location":"architecture/concepts/snapshots-clones/#volume-clones","title":"Volume Clones","text":"

A volume clone is a writable, independent copy of a storage volume, created from either an existing volume or a snapshot. Unlike snapshots, clones are fully functional duplicates that can operate as separate storage entities.

Key characteristics of volume clones include:

  • Writable and Independent: Clones can be modified without affecting the original volume.
  • Use Case for Testing and Development: Clones are commonly used for staging environments, testing, and application sandboxing.
  • Storage Overhead: Unlike snapshots, clones may require additional storage capacity to accommodate changes made after cloning.
  • Immediate Availability: A clone provides an instant copy of the original volume, avoiding long data copying processes.
"},{"location":"architecture/concepts/storage-pooling/","title":"Storage Pooling","text":"

Storage pooling is a technique used in distributed data storage systems to aggregate multiple storage devices into a single, unified storage resource. This approach enhances resource utilization, improves scalability, and simplifies management by abstracting physical storage infrastructure into a logical storage pool.

Traditional storage architectures often rely on dedicated storage devices assigned to specific applications or workloads, leading to inefficiencies in resource allocation and potential underutilization. Storage pooling addresses these challenges by combining storage resources from multiple nodes into a shared pool, allowing dynamic allocation based on demand.

Key characteristics of storage pooling include:

  • Resource Aggregation: Multiple physical storage devices, such as HDDs, SSDs, or NVMe drives, are combined into a single logical storage entity.
  • Dynamic Allocation: Storage capacity can be allocated dynamically to workloads based on usage patterns and demand.
  • Improved Efficiency: By eliminating the constraints of static storage assignments, storage pooling optimizes resource utilization and reduces wasted capacity.
  • Scalability: Additional storage devices or nodes can seamlessly integrate into the storage pool without disrupting operations.
  • Simplified Management: Centralized control and monitoring enable streamlined administration of storage resources.
"},{"location":"deployments/","title":"Deployments","text":"

Simplyblock is a highly flexible storage solution.

Different initiator (host) drivers (Kubernetes CSI, Proxmox, OpenStack) are available. The storage cluster can be installed into Kubernetes (disaggregated or hyper-converged) or via Docker (also called "Plain Linux" deployment). The Docker-based deployment is fully deployed and managed via the Simplyblock CLI or API, so only minimal Docker knowledge is required.

"},{"location":"deployments/#control-plane-installation","title":"Control Plane Installation","text":"

Each storage cluster requires a control plane to run. Multiple storage clusters may be connected to a single control plane. The control plane must be deployed before any storage cluster. The control plane can be installed into a Kubernetes cluster or on Plain Linux VMs (using Docker internally). For details, see Control Plane Deployment on VM or Install Control Plane on Kubernetes.

"},{"location":"deployments/#storage-node-installation","title":"Storage Node Installation","text":"

For details on how to install the storage cluster into Plain Linux, see Install Simplyblock Storage Nodes on Linux.

For installation of Storage Nodes into Kubernetes, see here: Install Storage Nodes on Kubernetes

"},{"location":"deployments/#installation-of-drivers","title":"Installation of Drivers","text":"

Simplyblock logical volumes are NVMe over TCP or RDMA (ROCEv2) volumes. They are attached to the Linux kernel via the provided nvme-tcp or nvme-rdma modules and managed via the nvme-cli tool. For more information, see Linux NVMe-oF Attach. On top of the NVMe-oF devices, which show up as Linux block devices such as /dev/nvme1n1, lifecycle automation is performed by the orchestrator-specific Simplyblock drivers:

  • On Kubernetes: Simplyblock CSI Driver
  • On Proxmox: Proxmox Integration
  • On OpenStack: Cinder Driver

Generally, before creating volumes, it is important to understand the difference between an NVMe-oF subsystem and a namespace.

"},{"location":"deployments/#system-requirements-and-sizing","title":"System Requirements and Sizing","text":"

Simplyblock is designed for high-performance storage operations. Therefore, it has specific system requirements that must be met. The following sections describe the system and node sizing requirements.

  • System Requirements
  • Erasure Coding Configuration
  • Air Gapped Installation

For deployments on hyper-scalers, such as Amazon AWS and Google GCP, there are instance type recommendations. While other instance types may work, it is highly recommended to stick to the recommended instance types.

  • Amazon EC2
  • Google Compute Engine
"},{"location":"deployments/cluster-deployment-options/","title":"Cluster deployment options","text":"

The following options can be set when creating a cluster. This applies to both Plain Linux and Kubernetes deployments. Most options cannot be changed later on, so careful planning is recommended.

"},{"location":"deployments/cluster-deployment-options/#-enable-node-affinity","title":"--enable-node-affinity","text":"

As long as a node is not full (out of capacity), the first chunk of data is always stored on the local node (the node to which the volume is attached). This reduces network traffic and latency, particularly accelerating reads, but may lead to an unequal distribution of capacity within the cluster. Generally, using node affinity accelerates reads but leads to higher variability in performance across nodes in the cluster. It is recommended on shared networks and networks below 100 Gbit/s.

"},{"location":"deployments/cluster-deployment-options/#-data-chunks-per-stripe-parity-chunks-per-stripe","title":"--data-chunks-per-stripe, --parity-chunks-per-stripe","text":"

These two parameters together make up the cluster's default erasure coding schema (e.g., 1+1, 2+2, 4+2). Starting from R25.10, it is also possible to set individual schemas per volume, but this feature is still in alpha stage.
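For capacity planning, the usable share of raw capacity under a k+p schema is k/(k+p). The following shell sketch makes this concrete; usable_pct is a hypothetical helper for illustration, not an sbctl command:

```shell
# usable_pct K P: percentage of raw capacity usable with a K+P erasure coding schema.
# The usable fraction is K / (K + P); parity chunks consume the rest.
usable_pct() {
  awk -v k="$1" -v p="$2" 'BEGIN { printf "%.0f", 100 * k / (k + p) }'
}

usable_pct 1 1; echo "% usable with 1+1"   # 50% (mirroring-like overhead)
usable_pct 2 2; echo "% usable with 2+2"   # 50%, but tolerates two chunk failures per stripe
usable_pct 4 2; echo "% usable with 4+2"   # 67%
```

For example, a 4+2 schema turns 60 TB of raw capacity into roughly 40 TB of usable capacity while tolerating the loss of two chunks per stripe.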

"},{"location":"deployments/cluster-deployment-options/#-cap-warn-cap-crit","title":"--cap-warn, --cap-crit","text":"

Warning and critical limits for overall cluster utilization. Exceeding the warning limit only issues warnings in the event log, while exceeding the critical limit places the cluster into read-only mode. For large clusters, a critical limit of 99% is fine; for small clusters (less than 50 TB), 97% is recommended.
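The interplay of the two limits can be sketched as a simple threshold check. cap_state below is a hypothetical illustration of the resulting cluster state, not part of sbctl:

```shell
# cap_state USED_PCT WARN_PCT CRIT_PCT: classify cluster capacity utilization.
# At or above the critical limit the cluster goes read-only; at or above the
# warning limit only event-log warnings are issued.
cap_state() {
  if [ "$1" -ge "$3" ]; then
    echo "read-only"
  elif [ "$1" -ge "$2" ]; then
    echo "warning"
  else
    echo "ok"
  fi
}

cap_state 85 90 97   # ok
cap_state 92 90 97   # warning
cap_state 98 90 97   # read-only
```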

"},{"location":"deployments/cluster-deployment-options/#-prov-cap-warn-prov-cap-crit","title":"--prov-cap-warn, --prov-cap-crit","text":"

Warning and critical limits for over-provisioning. Exceeding these limits causes entries in the cluster log. If the critical limit is exceeded, new volumes cannot be provisioned and existing volumes cannot be enlarged. A limit of 500% is typical.
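Over-provisioning is the ratio of provisioned (logical) capacity to physical cluster capacity. A small illustrative sketch (prov_pct is a hypothetical helper):

```shell
# prov_pct PROVISIONED PHYSICAL: over-provisioning ratio in percent.
# Both arguments must use the same unit (e.g., GiB).
prov_pct() {
  awk -v prov="$1" -v phys="$2" 'BEGIN { printf "%.0f", 100 * prov / phys }'
}

# 40 TiB of thin-provisioned volumes on 10 TiB of physical capacity:
prov_pct 40960 10240   # 400 -> below a 500% critical limit, volumes can still be created
```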

"},{"location":"deployments/cluster-deployment-options/#-log-del-interval","title":"--log-del-interval","text":"

Number of days for which logs are retained. Log storage can grow significantly, so it is recommended to keep logs for no longer than one week.

"},{"location":"deployments/cluster-deployment-options/#-metrics-retention-period","title":"--metrics-retention-period","text":"

Number of days for which I/O statistics and other metrics are retained. The amount of data per day is significant, so retention is typically limited to a few days or a week.

"},{"location":"deployments/cluster-deployment-options/#-contact-point","title":"--contact-point","text":"

This is a webhook endpoint for alerting (critical events such as storage nodes becoming unreachable).

"},{"location":"deployments/cluster-deployment-options/#-fabric","title":"--fabric","text":"

Choose tcp, rdma or both. If both fabrics are chosen, volumes can connect to the cluster using both options (defined per volume or storage class), but the cluster internally uses rdma.

"},{"location":"deployments/cluster-deployment-options/#-qpair-count","title":"--qpair-count","text":"

The default number of queue pairs (sockets) per volume used by an initiator (host) to connect to the target (server). More queue pairs per volume increase concurrency and volume performance, but require more server resources (RAM, CPU) and thus limit the total number of volumes per storage node. The default is 3. If you need a few highly performant volumes, increase the number; if you need many less performance-critical volumes, decrease it. More than 12 parallel connections have limited impact on overall performance. Also, the host requires at least one core per queue pair.

"},{"location":"deployments/cluster-deployment-options/#-name","title":"--name","text":"

A human-readable name for the cluster.

"},{"location":"deployments/nvme-namespaces-and-subsystems/","title":"Nvme namespaces and subsystems","text":"

To connect to a storage volume, both locally and via NVMe-oF, you need a subsystem and a namespace.

An NVMe-oF subsystem is the exported entity that the host connects to over the fabric (RDMA or TCP). A subsystem is identified by its globally unique NVMe Qualified Name (NQN) and can be roughly seen as a controller that exposes and connects one or multiple namespaces (the actual volumes) to hosts.

The NQN of a subsystem is globally unique and, in simplyblock, embeds the volume UUID. It looks as follows (the last part behind :lvol: indicates the namespace representing the volume):

nqn.2023-02.io.simplyblock:136012a7-f386-4091-ae0f-4e763059e9c8:lvol:6809b758-1c73-451f-810c-210c18d6aa14

Together with the IP address, the fully qualified subsystem address has to be given to connect, but in simplyblock this process is either automated (CSI, OpenStack, or Proxmox) or guided (Plain Linux attach).
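Because the volume UUID sits after the :lvol: marker, it can be extracted from an NQN with plain shell parameter expansion. A small sketch using the example NQN above (lvol_uuid is a hypothetical helper):

```shell
# lvol_uuid NQN: extract the logical volume UUID, i.e. everything after ":lvol:".
lvol_uuid() {
  printf '%s\n' "${1##*:lvol:}"
}

nqn="nqn.2023-02.io.simplyblock:136012a7-f386-4091-ae0f-4e763059e9c8:lvol:6809b758-1c73-451f-810c-210c18d6aa14"
lvol_uuid "$nqn"   # 6809b758-1c73-451f-810c-210c18d6aa14
```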

It\u2019s roughly equivalent to an NVMe controller complex \u2014 a logical device that can contain one or more namespaces.

Subsystems are backed by multiple queue pairs, each of which is backed by a network connection such as a TCP socket. More queue pairs require more resources from the cluster but make the volumes faster.

Namespaces, on the other hand, are the actual block storage regions that hold user data. A namespace is the NVMe analog of a SCSI "LUN": the entity that actually stores and serves data blocks. It has an NSID, a size, a block format, and a UUID.

When a host connects to the subsystem, each namespace appears as a separate block device:

/dev/nvme0n1\n/dev/nvme0n2\n

All namespaces on the same subsystem use the same network connections to transfer IO.

The namespace is the thing you actually read and write data to. Typical uses include:

  • Creating a filesystem (e.g., mkfs.ext4 /dev/nvme0n1)
  • Raw block I/O (e.g., via fio, dd, or SPDK bdevs)

Info

In simplyblock, you can define how many namespace volumes are to be created for a particular subsystem. This allows multiple Linux block devices (e.g., nvme0nX) to share a subsystem, where each of them is less performance-critical. In Kubernetes, to use different relationships (e.g., 1:10) between subsystem and namespace, different storage classes are required.

To manually create volumes with multiple namespaces per subsystem, use:

sbctl lvol add lvol01 100G pool01 --max-namespace-per-subsys 10

This adds a new subsystem with a namespace and allows up to 9 more namespaces on this subsystem. To add new namespaces to the same subsystem, use:

sbctl lvol add lvol02 100G --uuid <UUID>

"},{"location":"deployments/air-gap/","title":"Air Gap Installation","text":"

Simplyblock can be installed in an air-gapped environment. However, the necessary images must be downloaded to install and run the control plane, the storage nodes, and the Kubernetes CSI driver. In addition, for Kubernetes deployments, you want to download or clone the simplyblock helm repository\u00a0\u29c9 which contains the helm charts for Kubernetes-based storage and caching nodes, as well as the Kubernetes CSI driver.

For an air-gapped installation, we recommend an air-gapped container repository installation. Tools such as JFrog Artifactory\u00a0\u29c9 or Sonatype Nexus\u00a0\u29c9 help with the setup and management of container images in air-gapped environments.

The general installation instructions are similar to non-air-gapped installations, with the need to update the container download locations to point to your local container repository.

"},{"location":"deployments/baremetal/","title":"Plain Linux Initiators","text":"

Simplyblock storage can be attached over the network to Linux hosts which are not running Kubernetes, Proxmox or OpenStack.

While no simplyblock components need to be installed on these hosts, some OS-level configuration steps are required. These manual steps are otherwise taken care of by the CSI driver or the Proxmox integration.

On plain Linux initiators, those steps have to be performed manually on each host that will connect simplyblock logical volumes.

"},{"location":"deployments/baremetal/#install-nvme-client-package","title":"Install Nvme Client Package","text":"
=== \"RHEL / Alma / Rocky\"\n\n    ```bash\n    sudo dnf install -y nvme-cli\n    ```\n\n=== \"Debian / Ubuntu\"\n\n    ```bash\n    sudo apt install -y nvme-cli\n    ```\n
"},{"location":"deployments/baremetal/#load-the-nvme-over-fabrics-kernel-modules","title":"Load the NVMe over Fabrics Kernel Modules","text":"

For NVMe over TCP and NVMe over RoCE:

Simplyblock is built upon the NVMe over Fabrics standard and uses NVMe over TCP (NVMe/TCP) by default.

While the driver is part of the Linux kernel with kernel versions 5.x and later, it is not enabled by default. Hence, when using simplyblock, the driver needs to be loaded.

Loading the NVMe/TCP driver
modprobe nvme-tcp\n
Loading the NVMe/RDMA driver
modprobe nvme-rdma\n

When loading the NVMe/TCP or NVMe/RDMA driver, the NVMe over Fabrics driver automatically gets loaded too, as both depend on the foundations it provides.

It is possible to check for successful loading of both drivers with the following command:

Checking the drivers being loaded
lsmod | grep 'nvme_'\n

The response should list the drivers as nvme_tcp and nvme_fabrics as seen in the following example:

Example output of the driver listing
[demo@demo ~]# lsmod | grep 'nvme_'\nnvme_tcp               57344  0\nnvme_keyring           16384  1 nvme_tcp\nnvme_fabrics           45056  1 nvme_tcp\nnvme_core             237568  3 nvme_tcp,nvme,nvme_fabrics\nnvme_auth              28672  1 nvme_core\nt10_pi                 20480  2 sd_mod,nvme_core\n

To make the driver loading persistent and survive system reboots, it has to be configured to be loaded at system startup time. This can be achieved by either adding it to /etc/modules (Debian / Ubuntu) or creating a config file under /etc/modules-load.d/ (Red Hat / Alma / Rocky).

Red Hat / Alma / Rocky
echo \"nvme-tcp\" | sudo tee -a /etc/modules-load.d/nvme-tcp.conf\n
Debian / Ubuntu
echo \"nvme-tcp\" | sudo tee -a /etc/modules\n

After rebooting the system, the driver should be loaded automatically. It can be checked again via the above provided lsmod command.
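The manual lsmod check can also be scripted, for example in a provisioning playbook. The following sketch parses lsmod-style output from stdin; check_module is an illustrative helper, not a standard tool:

```shell
# check_module NAME: read `lsmod` output on stdin and report whether NAME is loaded.
check_module() {
  awk -v mod="$1" '$1 == mod { found = 1 } END { print mod (found ? " loaded" : " missing") }'
}

# Typical use on a live system (not executed here):
#   lsmod | check_module nvme_tcp
```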

"},{"location":"deployments/baremetal/#create-a-storage-pool","title":"Create a Storage Pool","text":"

Before logical volumes can be created and connected, a storage pool is required. If a pool already exists, it can be reused. Otherwise, a storage pool can be created on any control plane node as follows:

Create a Storage Pool
sbctl pool add <POOL_NAME> <CLUSTER_UUID>\n

The last line of a successful storage pool creation returns the new pool id.

Example output of creating a storage pool
[demo@demo ~]# sbctl pool add test 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a\n2025-03-05 06:36:06,093: INFO: Adding pool\n2025-03-05 06:36:06,098: INFO: {\"cluster_id\": \"4502977c-ae2d-4046-a8c5-ccc7fa78eb9a\", \"event\": \"OBJ_CREATED\", \"object_name\": \"Pool\", \"message\": \"Pool created test\", \"caused_by\": \"cli\"}\n2025-03-05 06:36:06,100: INFO: Done\nad35b7bb-7703-4d38-884f-d8e56ffdafc6 # <- Pool Id\n
"},{"location":"deployments/baremetal/#create-and-connect-a-logical-volume","title":"Create and Connect a Logical Volume","text":"

To create a new logical volume, the following command can be run on any control plane node.

sbctl volume add \\\n  --max-rw-iops <IOPS> \\\n  --max-r-mbytes <THROUGHPUT> \\\n  --max-w-mbytes <THROUGHPUT> \\\n  --ndcs <DATA CHUNKS IN STRIPE> \\\n  --npcs <PARITY CHUNKS IN STRIPE> \\\n  --fabric {tcp, rdma} \\\n  --lvol-priority-class <1-6> \\\n  <VOLUME_NAME> \\\n  <VOLUME_SIZE> \\\n  <POOL_NAME>\n

Info

The parameters ndcs and npcs define the erasure-coding schema (e.g., --ndcs=4 --npcs=2). The settings are optional; if not specified, the cluster default is chosen. Valid values for ndcs are 1, 2, and 4; valid values for npcs are 0, 1, and 2. However, the number of cluster nodes must be equal to or larger than (ndcs + npcs).

The parameter --fabric defines the fabric by which the volume is connected to the cluster. It is optional and the default is tcp. The fabric type rdma can only be chosen for hosts with an RDMA-capable NIC and for clusters that support RDMA. A priority class is optional as well and can be selected only if the cluster defines it. A cluster can define 0-6 priority classes. The default is 0.
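The constraints above can be expressed as a small validation sketch. ec_valid is hypothetical and only illustrates the documented rules; it is not an sbctl feature:

```shell
# ec_valid NDCS NPCS NODES: check an erasure coding schema against the documented
# constraints: ndcs in {1,2,4}, npcs in {0,1,2}, and nodes >= ndcs + npcs.
ec_valid() {
  case "$1" in 1|2|4) ;; *) echo invalid; return ;; esac
  case "$2" in 0|1|2) ;; *) echo invalid; return ;; esac
  if [ "$3" -ge "$(($1 + $2))" ]; then echo valid; else echo invalid; fi
}

ec_valid 4 2 6   # valid: 6 nodes suffice for a 4+2 schema
ec_valid 4 2 5   # invalid: fewer nodes than ndcs + npcs
ec_valid 3 1 8   # invalid: 3 is not a valid ndcs value
```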

Example of creating a logical volume
sbctl volume add --ndcs 2 --npcs 1 --fabric tcp lvol01 1000G test\n

In this example, a logical volume with the name lvol01 and 1TB of thinly provisioned capacity is created in the pool named test. The uuid of the logical volume is returned at the end of the operation.

For additional parameters, see Add a new Logical Volume.

To connect a logical volume on the initiator (or Linux client), execute the following command on any control plane node. This command returns one or more connection commands to be executed on the client.

sbctl volume connect \\\n  <VOLUME_ID>\n
Example of retrieving the connection strings of a logical volume
sbctl volume connect a898b44d-d7ee-41bb-bc0a-989ad4711780\n\nsudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=3600 --nr-io-queues=32 --keep-alive-tmo=5 --transport=tcp --traddr=10.10.20.2 --trsvcid=9101 --nqn=nqn.2023-02.io.simplyblock:fa66b0a0-477f-46be-8db5-b1e3a32d771a:lvol:a898b44d-d7ee-41bb-bc0a-989ad4711780\nsudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=3600 --nr-io-queues=32 --keep-alive-tmo=5 --transport=tcp --traddr=10.10.20.3 --trsvcid=9101 --nqn=nqn.2023-02.io.simplyblock:fa66b0a0-477f-46be-8db5-b1e3a32d771a:lvol:a898b44d-d7ee-41bb-bc0a-989ad4711780\n

The output can be copy-pasted to the host to which the volumes should be attached.

"},{"location":"deployments/data-migration/","title":"Data Migration","text":"

When migrating existing data to simplyblock, the process can be performed at the block level or the file system level, depending on the source system and migration requirements. Because simplyblock provides logical volumes as virtual block devices, data can be migrated using standard block device cloning tools such as dd, as well as file-based tools like rsync after the block device has been formatted.

Overall, data migration to simplyblock is a straightforward process using common block-level and file-level tools. For full disk cloning, dd and similar utilities are effective. For selective file migrations, rsync provides flexibility and reliability. Proper planning and validation of available storage capacity are essential to ensure successful and complete data transfers.

"},{"location":"deployments/data-migration/#block-level-migration-using-dd","title":"Block-Level Migration Using dd","text":"

A block-level copy duplicates the entire content of a source block device, including partition tables, file systems, and data. This method is ideal when migrating entire disks or volumes.

Creating a block-level clone of a block device
dd if=/dev/source-device of=/dev/simplyblock-device bs=4M status=progress\n
  • if= specifies the input (source) device.
  • of= specifies the output (Simplyblock Logical Volume) device.
  • bs=4M sets the block size for efficiency.
  • status=progress provides real-time progress updates.

Info

Ensure that the simplyblock logical volume is at least as large as the source device to prevent data loss.
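That size check can be automated before invoking dd. The sketch below uses plain files as stand-ins for block devices; on real devices, `blockdev --getsize64` reports the size in bytes. safe_to_copy and size_of are illustrative helpers, not standard tools:

```shell
# size_of PATH: size in bytes (GNU stat first, BSD stat as fallback).
size_of() {
  stat -c %s "$1" 2>/dev/null || stat -f %z "$1"
}

# safe_to_copy SRC DST: "ok" if DST can hold a full block-level copy of SRC.
safe_to_copy() {
  if [ "$(size_of "$2")" -ge "$(size_of "$1")" ]; then echo ok; else echo too-small; fi
}

# Typical use on real devices (not executed here):
#   safe_to_copy /dev/source-device /dev/simplyblock-device
```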

"},{"location":"deployments/data-migration/#alternative-block-level-cloning-tools","title":"Alternative Block-Level Cloning Tools","text":"

Other block-level tools such as Clonezilla, partclone, or dcfldd may also be used for disk duplication, depending on the specific environment and desired features like compression or network transfer.

"},{"location":"deployments/data-migration/#file-level-migration-using-rsync","title":"File-Level Migration Using rsync","text":"

For scenarios where only file contents need to be migrated (for example, after creating a new file system on a simplyblock logical volume), rsync is a reliable tool.

  1. First, format the Simplyblock Logical Volume:

    Format the simplyblock block device with ext4
    mkfs.ext4 /dev/simplyblock-device\n

  2. Mount the Logical Volume:

    Mount the block device
    mount /dev/simplyblock-device /mnt/simplyblock\n

  3. Use rsync to copy files from the source directory:

    Synchronize the source disks content using rsync
    rsync -avh --progress /source/data/ /mnt/simplyblock/\n

    • -a preserves permissions, timestamps, and symbolic links.
    • -v provides verbose output.
    • -h makes output human-readable.
    • --progress shows transfer progress.
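After the rsync run completes, the copy can be cross-checked before retiring the source, for example with a recursive diff. verify_copy is an illustrative helper, not a standard tool:

```shell
# verify_copy SRC_DIR DST_DIR: compare two directory trees after a migration copy.
verify_copy() {
  if diff -r "$1" "$2" >/dev/null 2>&1; then echo match; else echo differs; fi
}

# Typical use (not executed here):
#   verify_copy /source/data /mnt/simplyblock
```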
"},{"location":"deployments/data-migration/#minimal-downtime-migration-strategy","title":"Minimal-Downtime Migration Strategy","text":"

An alternative, more complex approach enables minimal downtime. This option utilizes the Linux dm (Device Mapper) subsystem.

Using the Device Mapper, the current and new block devices will be moved into a RAID-1 and synchronized (re-silvered) in the background. This solution requires two minimal downtimes to create and remount the devices.

Warning

This method is quite involved, requires many steps, and can lead to data loss in case of wrong commands or parameters. It should only be used by advanced users who understand the danger of the commands below. Furthermore, this migration method MUST NOT be used for boot devices!

In this walkthrough, we assume the new simplyblock logical volume is already connected to the system.

"},{"location":"deployments/data-migration/#preparation","title":"Preparation","text":"

To successfully execute this data migration, a few values are required. First, the device names of the currently used device and the new device need to be collected.

This can be done by executing the command lsblk to list all attached block devices.

lsblk provides information about all attached block devices
lsblk\n

In this example, sda is the boot device which hosts the operating system, while sdb is the currently used block device and nvme0n1 is the newly attached simplyblock logical volume. The latter two should be noted down.

Danger

It is important to understand the difference between the currently used and the new device. Using them in the wrong order in the following steps will cause any or all data to be lost!

Find the source and target block devices using lsblk
[root@demo ~]# lsblk\nNAME                      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS\nsda                         8:0    0   25G  0 disk\n\u251c\u2500sda1                      8:1    0    1G  0 part  /boot/efi\n\u251c\u2500sda2                      8:2    0    2G  0 part  /boot\n\u2514\u2500sda3                      8:3    0 21.9G  0 part\n  \u2514\u2500ubuntu--vg-ubuntu--lv 252:0    0   11G  0 lvm   /\nsdb                         8:16   0   25G  0 disk\n\u2514\u2500sdb1                      8:17   0   25G  0 part  /data/pg\nsr0                        11:0    1 57.4M  0 rom\nnvme0n1                   259:0    0   25G  0 disk\n

Next, the block size of the current filesystem is required. This value must be set as the chunk size of the RAID to be created, so it needs to be noted down.

Find the block size of the source filesystem
tune2fs -l /dev/sdb1 | grep -i 'block size'\n

In this example, the block size is 4 KiB (4096 bytes).

Example output of the block size
[root@demo ~]# tune2fs -l /dev/sdb1 | grep -i 'block size'\nBlock size:               4096\n

Last, it is important to ensure that the new target device is at least as large or larger than the current device. lsblk can be used again to get the required numbers.

lsblk with byte sizes of the block devices
lsblk -b\n

In this example, both devices are the same size, 26843545600 bytes in total disk capacity.

Example output of lsblk -b
[root@demo ~]# lsblk -b\nNAME                      MAJ:MIN RM        SIZE RO TYPE  MOUNTPOINTS\nsda                         8:0    0 26843545600  0 disk\n\u251c\u2500sda1                      8:1    0  1127219200  0 part  /boot/efi\n\u251c\u2500sda2                      8:2    0  2147483648  0 part  /boot\n\u2514\u2500sda3                      8:3    0 23566745600  0 part\n  \u2514\u2500ubuntu--vg-ubuntu--lv 252:0    0 11781799936  0 lvm   /\nsdb                         8:16   0 26843545600  0 disk\n\u2514\u2500sdb1                      8:17   0 26843513344  0 part  /data/pg\nsr0                        11:0    1    60225536  0 rom\nnvme0n1                   259:0    0 26843545600  0 disk\n
"},{"location":"deployments/data-migration/#device-mapper-raid-setup","title":"Device Mapper RAID Setup","text":"

Danger

From here on out, mistakes can cause any or all data to be lost! It is strongly recommended to only go further, if ensured that the values above are correct and after a full data backup is created. It is also recommended to test the backup before continuing. A failure to do so can cause issues in case it cannot be replayed.

Now, it's time to create the temporary RAID for disk synchronization. Anything beyond this point is dangerous.

Warning

Any service accessing the current block device or any of its partitions needs to be shut down, and the block device and its partitions need to be unmounted. The device must not be busy.

Example of PostgreSQL shutdown and partition unmount
service postgresql stop\numount /data/pg\n

Building a RAID-1 with mdadm
mdadm --build --chunk=<CHUNK_SIZE> --level=1 \\\n    --raid-devices=2 --bitmap=none \\\n    <RAID_NAME> <CURRENT_DEVICE_FILE> missing\n

In this example, the RAID is created using the /dev/sdb device file and 4096 as the chunk size. The newly created RAID is called migration. The RAID-level is 1 (meaning, RAID-1) and it includes 2 devices. The missing at the end of the command is required to tell the device mapper that the second device of the RAID is missing for now. It will be added later.

Example output of a RAID-1 with mdadm
[root@demo ~]# mdadm --build --chunk=4096 --level=1 --raid-devices=2 --bitmap=none migration /dev/sdb missing\nmdadm: array /dev/md/migration built and started.\n

To ensure that the RAID was created successfully, all device files with /dev/md* can be listed. In this case, /dev/md127 is the actual RAID device, while /dev/md/migration is the device mapper file.

Finding the new device mapper device files
[root@demo ~]# ls /dev/md*\n/dev/md127  /dev/md127p1\n\n/dev/md:\nmigration  migration1\n

After the RAID device name is confirmed, the new RAID device can be mounted. In this example, the original block device was partitioned. Hence, the RAID device also has one partition /dev/md127p1. This is what needs to be mounted to the same mount point as the original disk before, /data/pg in this example.

Mount the new device mapper device file
[root@demo ~]# mount /dev/md127p1 /data/pg/\n

Info

All services that require access to the data can be started again. The RAID itself is still in a degraded state, but it provides the same data security as the original device.

Now the second, new device must be added to the RAID setup to start the re-silvering (data synchronization) process. This is again done using the mdadm tool.

Add the new simplyblock block device to RAID-1
mdadm <RAID_DEVICE_MAPPER_FILE> --add <NEW_DEVICE_FILE>\n

In the example, we add /dev/nvme0n1 (the simplyblock logical volume) to the RAID named \"migration.\"

Example output of mdadm --add
[root@demo ~]# mdadm /dev/md/migration --add /dev/nvme0n1\nmdadm: added /dev/nvme0n1\n

After the device was added to the RAID setup, a background process is automatically started to synchronize the newly added device to the first device in the setup. This process is called re-silvering.

Info

While the devices are synchronized, the read and write performance may be impacted due to the additional I/O operations of the synchronization process. However, the process runs on a very low priority and shouldn't impact the live operation too extensively. For AWS users: if the migration uses an Amazon EBS volume as the source, ensure enough IOPS to cover live operation and migration.

The synchronization process status can be monitored using one of two commands:

Check status of re-silvering
mdadm -D <RAID_DEVICE_FILE>\ncat /proc/mdstat\n
Example output of a status check via mdadm
[root@demo ~]#mdadm -D /dev/md127\n/dev/md127:\n           Version :\n     Creation Time : Sat Mar 15 17:24:17 2025\n        Raid Level : raid1\n        Array Size : 26214400 (25.00 GiB 26.84 GB)\n     Used Dev Size : 26214400 (25.00 GiB 26.84 GB)\n      Raid Devices : 2\n     Total Devices : 2\n\n             State : clean, degraded, recovering\n    Active Devices : 1\n   Working Devices : 2\n    Failed Devices : 0\n     Spare Devices : 1\n\nConsistency Policy : resync\n\n    Rebuild Status : 98% complete\n\n    Number   Major   Minor   RaidDevice State\n       0       8       16        0      active sync   /dev/sdb\n       2     259        0        1      spare rebuilding   /dev/nvme0n1\n
Example output of a status check via /proc/mdstat
[root@demo ~]# cat /proc/mdstat \nPersonalities : [raid1] \nmd0 : active raid1 sdb[1] nvme0n1[0]\n      10484664 blocks super 1.2 [2/2] [UU]\n      [========>............]  resync = 42.3% (4440832/10484664) finish=0.4min speed=201856K/sec\n\nunused devices: <none>\n
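For scripted monitoring, the progress percentage can be extracted from /proc/mdstat with awk. An illustrative sketch (resync_pct is a hypothetical helper):

```shell
# resync_pct: read /proc/mdstat-style output on stdin; print the resync
# percentage if a resync is in progress, otherwise "done".
resync_pct() {
  awk 'match($0, /resync = [0-9.]+%/) {
         print substr($0, RSTART + 9, RLENGTH - 9); found = 1
       }
       END { if (!found) print "done" }'
}

# Typical use (not executed here):
#   resync_pct < /proc/mdstat
```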
"},{"location":"deployments/data-migration/#after-the-synchronization-is-done","title":"After the Synchronization is done","text":"

Eventually, the synchronization finishes. At this point, the two devices (original and new) are kept in sync by the device mapper system.

Example output of a finished synchronization
[root@demo ~]# mdadm -D /dev/md127\n/dev/md127:\n           Version :\n     Creation Time : Sat Mar 15 17:24:17 2025\n        Raid Level : raid1\n        Array Size : 26214400 (25.00 GiB 26.84 GB)\n     Used Dev Size : 26214400 (25.00 GiB 26.84 GB)\n      Raid Devices : 2\n     Total Devices : 2\n\n             State : clean\n    Active Devices : 2\n   Working Devices : 2\n    Failed Devices : 0\n     Spare Devices : 0\n\nConsistency Policy : resync\n\n    Number   Major   Minor   RaidDevice State\n       0       8       16        0      active sync   /dev/sdb\n       2     259        0        1      active sync   /dev/nvme0n1\n

To fully switch to the new simplyblock logical volume, a second, minimal, downtime is required.

The RAID device needs to be unmounted and the device mapper stopped.

Stopping the device mapper RAID-1
umount <MOUNT_POINT>\nmdadm --stop <DEVICE_MAPPER_FILE>\n

In this example /data/pg and /dev/md/migration are used.

Example output of a stopped RAID-1
[root@demo ~]# umount /data/pg/\n[root@demo ~]# mdadm --stop /dev/md/migration\nmdadm: stopped /dev/md/migration\n

Now, the system should be restarted. If a system reboot takes too long or is out of the scope of the available maintenance window, a re-read of the partition tables can be forced instead.

Re-read partition table
blockdev --rereadpt <NEW_DEVICE_FILE>\n

After re-reading the partition table of a device, the partition should be recognized and visible.

Example output of re-reading the partition table
[root@demo ~]# blockdev --rereadpt /dev/nvme0n1\n[root@demo ~]# ls /dev/nvme0n1p1\n/dev/nvme0n1p1\n

As a last step, the partition must be mounted to the same mount point as the RAID device before. If the mount is successful, the services can be started again.

Mounting the plain block device and restarting services
[root@demo ~]# mount /dev/nvme0n1p1 /data/pg/\n[root@demo ~]# service postgresql start\n
"},{"location":"deployments/deployment-preparation/","title":"Deployment Preparation","text":"

Proper deployment planning is essential for ensuring the performance, scalability, and resilience of a simplyblock storage cluster.

Before installation, key factors such as node sizing, storage capacity, and fault tolerance mechanisms should be carefully evaluated to match workload requirements. This section provides guidance on sizing management nodes and storage nodes, helping administrators allocate adequate CPU, memory, and disk resources for optimal cluster performance.

Additionally, it explores selectable erasure coding schemes, detailing how different configurations impact storage efficiency, redundancy, and recovery performance. Other critical considerations, such as network infrastructure, high-availability strategies, and workload-specific optimizations, are also covered to assist in designing a simplyblock deployment that meets both operational and business needs.

"},{"location":"deployments/deployment-preparation/cloud-instance-recommendations/","title":"Cloud Instance Recommendations","text":"

Simplyblock has been tested on and recommends the following instance types. There is generally no restriction on other instance types as long as they fulfill the system requirements.

"},{"location":"deployments/deployment-preparation/cloud-instance-recommendations/#aws-amazon-ec2-recommendations","title":"AWS Amazon EC2 Recommendations","text":"

Simplyblock can work with local instance storage (local NVMe devices) and Amazon EBS volumes. For performance reasons, Amazon EBS is not recommended for high-performance clusters.

Critical

If local NVMe devices are chosen, make sure that the nodes in the cluster are provisioned into a placement group of type Spread!

Generally, with AWS, there are three considerations when selecting virtual machine types:

  • Minimum requirements of vCPU and RAM
  • Locally attached NVMe devices
  • Network performance (dedicated and \"up to\")

Based on those criteria, simplyblock commonly recommends the following virtual machine types for storage nodes:

VM Type vCPU(s) RAM Locally Attached Storage Network Performance i4g.8xlarge 32 256 GB 2x 3750 GB 18.5 GBit/s i4g.16xlarge 64 512 GB 4x 3750 GB 37.5 GBit/s i3en.6xlarge 24 192 GB 2x 7500 GB 25 GBit/s i3en.12xlarge 48 384 GB 4x 7500 GB 50 GBit/s i3en.24xlarge 96 768 GB 8x 7500 GB 100 GBit/s m5d.4xlarge 16 64 GB 2x 300 GB 10 GBit/s i4i.8xlarge 32 256 GB 2x 3750 GB 18.75 GBit/s i4i.12xlarge 48 384 GB 3x 3750 GB 28.12 GBit/s"},{"location":"deployments/deployment-preparation/cloud-instance-recommendations/#google-compute-engine-recommendations","title":"Google Compute Engine Recommendations","text":"

In GCP, physical hosts are highly shared and sliced into virtual machines. This is true not only for CPU, RAM, and network bandwidth, but also for the virtualized NVMe devices. Google Compute Engine NVMe devices provide a specific number of queue pairs (logical connections between the virtual machine and the physical NVMe device) depending on the size of the disk. Hence, separately attached NVMe devices are highly recommended to achieve the number of queue pairs required by simplyblock.

Critical

If local NVMe devices are chosen, make sure that the nodes in the cluster are provisioned with a placement policy of type Spread!

Generally, with GCP, there are three considerations when selecting virtual machine types:

  • Minimum requirements of vCPU and RAM
  • The size of the locally attached NVMe devices (SSD Storage)
  • Network performance

Based on those criteria, simplyblock commonly recommends the following virtual machine types for storage nodes:

VM Type vCPU(s) RAM Additional Local SSD Storage Network Performance n2-standard-8 8 32 GB 2x 2500 GB 16 GBit/s n2-standard-16 16 64 GB 2x 2500 GB 32 GBit/s n2-standard-32 32 128 GB 4x 2500 GB 32 GBit/s n2-standard-48 48 192 GB 4x 2500 GB 50 GBit/s n2-standard-64 64 256 GB 6x 2500 GB 75 GBit/s n2-standard-80 80 320 GB 8x 2500 GB 100 GBit/s"},{"location":"deployments/deployment-preparation/cloud-instance-recommendations/#attaching-an-additional-local-ssd-on-google-compute-engine","title":"Attaching an additional Local SSD on Google Compute Engine","text":"

The recommended instance types above do not provide NVMe storage by default. Local SSD storage has to be added specifically to the virtual machine at creation time. It cannot be changed after the virtual machine is created.

To add additional Local SSD Storage to a virtual machine, the operating system section must be selected in the wizard, then \"Add local SSD\" must be clicked. Now an additional disk can be added.

Warning

It is important that NVMe is selected as the interface type. SCSI will not work!

"},{"location":"deployments/deployment-preparation/erasure-coding-scheme/","title":"Erasure Coding Scheme","text":"

Choosing the appropriate erasure coding scheme is crucial when deploying a simplyblock storage cluster, as it directly impacts data redundancy, storage efficiency, and overall system performance. Simplyblock currently supports the following erasure coding schemes: 1+1, 2+1, 4+1, 1+2, 2+2, and 4+2. Understanding the trade-offs between redundancy and storage utilization will help determine the best option for your workload. All schemes have been performance-optimized by specialized algorithms. There is, however, a remaining capacity-to-performance trade-off.

Info

Starting from 25.10.1, it is possible to select alternative erasure coding schemes per volume. However, this feature is still experimental (technical preview) and not recommended for production. A cluster must provide sufficient nodes for the largest scheme used in any of the volumes (e.g., 4+2: min. 6 nodes, recommended 7 nodes).

"},{"location":"deployments/deployment-preparation/erasure-coding-scheme/#erasure-coding-schemes","title":"Erasure Coding Schemes","text":"

Erasure coding (EC) is a data protection mechanism that distributes data and parity across multiple storage nodes, allowing data recovery in case of hardware failures. The notation k+m represents:

  • k: The number of data fragments.
  • m: The number of parity fragments.

If you need more information on erasure coding, see the dedicated concept page for erasure coding.
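The capacity figures quoted in the sections below follow directly from the k+m notation: the usable fraction of raw storage is k/(k+m), and the raw-to-effective ratio is its inverse, (k+m)/k. The following is a quick sketch, using shell and awk arithmetic, that reproduces these values for all supported schemes (percentages are rounded to one decimal place).

```shell
# Compute usable capacity and raw-to-effective ratio for each k+m scheme.
for scheme in 1+1 2+1 4+1 1+2 2+2 4+2; do
    k=${scheme%+*}   # data fragments
    m=${scheme#*+}   # parity fragments
    awk -v k="$k" -v m="$m" -v s="$scheme" \
        'BEGIN { printf "%s: usable %.1f%%, raw-to-effective %.0f%%\n", s, 100*k/(k+m), 100*(k+m)/k }'
done
```

For example, 4+1 yields 80% usable capacity at a 125% raw-to-effective ratio, matching the figures below.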

"},{"location":"deployments/deployment-preparation/erasure-coding-scheme/#scheme-11","title":"Scheme: 1+1","text":"
  • Description: In the 1+1 scheme, data is mirrored, effectively creating an exact copy of every data block.
  • Redundancy Level: Can tolerate the failure of one storage node.
  • Raw-to-Effective Ratio: 200%
  • Available Storage Capacity: 50%
  • Performance Considerations: Offers fast recovery and high read performance due to data mirroring.
  • Best Use Cases:
    • Workloads requiring high availability and minimal recovery time.
    • Applications where performance is prioritized over storage efficiency.
    • Requires 3 or more nodes for full redundancy.
"},{"location":"deployments/deployment-preparation/erasure-coding-scheme/#scheme-21","title":"Scheme: 2+1","text":"
  • Description: In the 2+1 scheme, data is divided into two fragments with one parity fragment, offering a balance between performance and storage efficiency.
  • Redundancy Level: Can tolerate the failure of one storage node.
  • Raw-to-Effective Ratio: 150%
  • Available Storage Capacity: 66.6%
  • Performance Considerations: For writes of 8K or higher, lower write amplification compared to 1+1, as data is distributed across multiple nodes. This typically results in similar or higher IOPS. However, for small random writes (4K), the write performance is worse than 1+1. Write latency is somewhat higher than with 1+1. Read performance is similar to 1+1, if local node affinity is disabled. With node affinity enabled, read performance is slightly worse (up to 25%). In a degraded state (one node offline / unavailable or failed disk), the performance is worse than with 1+1. Recovery time to full redundancy from single disk error is slightly higher than with 1+1.
  • Best Use Cases:
    • Deployments where storage efficiency is relevant without significantly compromising performance.
    • Requires 4 or more nodes for full redundancy.
"},{"location":"deployments/deployment-preparation/erasure-coding-scheme/#scheme-41","title":"Scheme: 4+1","text":"
  • Description: In the 4+1 scheme, data is divided into four fragments with one parity fragment, offering optimal storage efficiency.
  • Redundancy Level: Can tolerate the failure of one storage node.
  • Raw-to-Effective Ratio: 125%
  • Available Storage Capacity: 80%
  • Performance Considerations: For writes of 16K or higher, lower write amplification compared to 2+1, as data is distributed across more nodes. This typically results in similar or higher write IOPS. However, for 4-8K random writes, the write performance is typically worse than 2+1. Write latency is somewhat similar to 2+1. Read performance is similar to 2+1, if local node affinity is disabled. With node affinity enabled, read performance is slightly worse (up to 13%). In a degraded state (one node offline / unavailable or failed disk), the performance is worse than with 2+1. Recovery time to full redundancy from single disk error is slightly higher than with 2+1.
  • Best Use Cases:
    • Deployments where storage efficiency is a priority without significantly compromising performance.
    • Requires 6 or more nodes for full redundancy.
"},{"location":"deployments/deployment-preparation/erasure-coding-scheme/#scheme-12","title":"Scheme: 1+2","text":"
  • Description: In the 1+2 scheme, data is replicated twice, effectively creating multiple copies of every data block.
  • Redundancy Level: Can tolerate the failure of two storage nodes.
  • Raw-to-Effective Ratio: 300%
  • Available Storage Capacity: 33.3%
  • Performance Considerations: Offers fast recovery and high read performance due to data replication, but write performance is lower than with 1+1 in all cases (~33%).
  • Best Use Cases:
    • Workloads requiring high redundancy and minimal recovery time.
    • Applications where performance is prioritized over storage efficiency.
    • Requires 4 or more nodes for full redundancy.
"},{"location":"deployments/deployment-preparation/erasure-coding-scheme/#scheme-22","title":"Scheme: 2+2","text":"
  • Description: In the 2+2 scheme, data is divided into two fragments with two parity fragments, offering a great balance between redundancy and storage efficiency.
  • Redundancy Level: Can tolerate the failure of two storage nodes.
  • Raw-to-Effective Ratio: 200%
  • Available Storage Capacity: 50%
  • Performance Considerations: Similar to 2+1, but with higher write latencies and lower effective write IOPS due to higher write amplification.
  • Best Use Cases:
    • Deployments where both high redundancy and storage efficiency are important.
    • Applications that can tolerate slightly higher recovery times compared to 1+2.
    • Requires 5 or more nodes for full redundancy.
"},{"location":"deployments/deployment-preparation/erasure-coding-scheme/#scheme-42","title":"Scheme: 4+2","text":"
  • Description: In the 4+2 scheme, data is divided into four fragments with two parity fragments, offering a great balance between redundancy and storage efficiency.
  • Redundancy Level: Can tolerate the failure of two storage nodes.
  • Raw-to-Effective Ratio: 150%
  • Available Storage Capacity: 66.6%
  • Performance Considerations: Similar to 4+1, but with higher write latencies and lower effective write IOPS due to higher write amplification.
  • Best Use Cases:
    • Deployments where high redundancy and storage efficiency are a priority.
    • Requires 7 or more nodes in a cluster.
"},{"location":"deployments/deployment-preparation/erasure-coding-scheme/#choosing-the-scheme","title":"Choosing the Scheme","text":"

When selecting an erasure coding scheme for simplyblock, consider the following:

  1. Redundancy Requirements: If the priority is maximum data protection and quick recovery, 1+1 or 1+2 are ideal. For a balance between protection and efficiency, 2+1 or 2+2 is preferred.
  2. Storage Capacity: 1+1 requires double the storage space, whereas 2+1 provides better storage efficiency. 1+2 requires triple the storage space, whereas 2+2 provides better storage efficiency at the same fault tolerance.
  3. Performance Needs: 1+1 and 1+2 offer faster reads and writes due to mirroring, while 2+1 and 2+2 reduce write amplification and optimize for storage usage.
  4. Cluster Size: Smaller clusters benefit from 1+1 or 1+2 due to their simplicity and faster rebuild times, whereas 2+1 and 2+2 are more effective in larger clusters.
  5. Recovery Time Objectives (RTOs): If minimizing downtime is critical, 1+1 and 1+2 offer near-instant recovery compared to 2+1 and 2+2, which require rebuilding the lost data from parity information.
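The selection criteria above can be narrowed down mechanically from the cluster size and the required fault tolerance. The following is a sketch encoding the per-scheme minimum node counts and failure tolerances stated in the sections above; `list_schemes` is a hypothetical helper name, not part of any simplyblock tooling.

```shell
# List the erasure coding schemes that fit a given node count and the
# number of simultaneous node failures that must be tolerated.
# Columns in the embedded table: scheme, parity fragments (m), min. nodes.
list_schemes() {
    local nodes=$1 failures=$2
    while read -r scheme m min_nodes; do
        if [ "$m" -ge "$failures" ] && [ "$nodes" -ge "$min_nodes" ]; then
            echo "$scheme"
        fi
    done <<'EOF'
1+1 1 3
2+1 1 4
4+1 1 6
1+2 2 4
2+2 2 5
4+2 2 7
EOF
}

# A 5-node cluster that must survive one node failure:
list_schemes 5 1
```

For a 5-node cluster tolerating one failure, this yields 1+1, 2+1, 1+2, and 2+2; 4+1 and 4+2 drop out because they need at least 6 and 7 nodes, respectively.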
"},{"location":"deployments/deployment-preparation/numa-considerations/","title":"NUMA Considerations","text":"

Modern multi-socket servers use a memory architecture called NUMA (Non-Uniform Memory Access). In a NUMA system, each CPU socket has its own local memory and I/O paths. Accessing local resources is faster than reaching across sockets to remote memory or devices. Simplyblock is fully NUMA-aware.

On a host with more than one socket, by default one or two storage nodes are deployed per socket.

Two storage nodes per socket are deployed if:

  • more than 32 vCPUs (cores) per NUMA socket are dedicated to simplyblock
  • more than 10 NVMe devices are connected to the NUMA socket

Users can change this behavior, either by setting the appropriate Helm chart parameters (in the case of a Kubernetes-based storage node deployment) or by manually modifying the initially created configuration file on the storage node (after running sbctl sn configure).

It is critical for performance that all NVMe devices of a storage node are directly connected to the NUMA socket to which the storage node is deployed.

If a socket has no NVMe devices connected, it will not qualify to run a simplyblock storage node.

It is also important that the NIC(s) used by simplyblock for storage traffic are connected to the same NUMA socket. However, simplyblock does not auto-assign a NIC; users have to take care of this manually.

"},{"location":"deployments/deployment-preparation/numa-considerations/#checking-numa-configuration","title":"Checking NUMA Configuration","text":"

Before configuring simplyblock, the system configuration should be checked for multiple NUMA nodes. This can be done using the lscpu tool.

How to check the NUMA configuration
lscpu | grep -i numa\n
Example output of the NUMA configuration
root@demo:~# lscpu | grep -i numa\nNUMA node(s):                         2\nNUMA node0 CPU(s):                    0-31\nNUMA node1 CPU(s):                    32-63\n

In the example above, the system has two NUMA nodes.

Recommendation

If the system consists of multiple NUMA nodes, it is recommended to configure simplyblock with multiple storage nodes per storage host. The number of storage nodes should match the number of NUMA nodes.
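The recommendation above can be scripted: the NUMA node count reported by lscpu directly gives the recommended number of storage nodes per storage host. The following is a small sketch; it assumes a Linux host with lscpu (util-linux) available and falls back to 1 if no NUMA line is reported.

```shell
# Derive the recommended number of storage nodes per host from the
# NUMA node count reported by lscpu.
numa_nodes=$(lscpu | awk -F: '/^NUMA node\(s\)/ { gsub(/ /, "", $2); print $2 }')
echo "NUMA nodes detected: ${numa_nodes:-1}"
echo "Recommended storage nodes on this host: ${numa_nodes:-1}"
```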

"},{"location":"deployments/deployment-preparation/numa-considerations/#ensuring-numa-aware-devices","title":"Ensuring NUMA-Aware Devices","text":"

For optimal performance, there should be a similar number of NVMe devices per NUMA node. Additionally, it is recommended to provide one Ethernet NIC per NUMA node.

To check the NUMA assignment of PCI-e devices, the lspci tool and a small script can be used.

Install pciutils which includes lspci
yum install pciutils\n
Small script to list all PCI-e devices and their NUMA nodes
#!/bin/bash\n\nfor i in  /sys/class/*/*/device; do\n    pci=$(basename \"$(readlink $i)\")\n    if [ -e $i/numa_node ]; then\n        echo \"NUMA Node: `cat $i/numa_node` ($i): `lspci -s $pci`\" ;\n    fi\ndone | sort\n
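Building on the script above, the per-NUMA-node NVMe balance mentioned earlier can be checked directly. The following is a sketch that counts NVMe controllers per NUMA node via sysfs; it prints nothing if no NVMe devices are present, and a numa_node value of -1 means the platform reports no NUMA affinity for that device.

```shell
#!/bin/bash
# Count NVMe controllers per NUMA node using sysfs.
for dev in /sys/class/nvme/nvme*; do
    [ -e "$dev/device/numa_node" ] || continue
    cat "$dev/device/numa_node"
done | sort | uniq -c | awk '{ printf "NUMA node %s: %s NVMe device(s)\n", $2, $1 }'
```

Ideally, the counts printed per NUMA node should be similar, in line with the recommendation above.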
"},{"location":"deployments/deployment-preparation/system-requirements/","title":"System Requirements","text":"

Info

In cloud environments including GCP and AWS, instance types are pre-configured. In general, there are no restrictions on instance types as long as these system requirements are met. However, it is highly recommended to stay with the Recommended Cloud Instance Types for production.

For hyper-converged deployments, it is important that node sizing applies to the dedicated resources consumed by simplyblock. Hyper-converged instances must provide enough resources to satisfy both simplyblock and other compute demands, including the Kubernetes worker itself and the operating system.

"},{"location":"deployments/deployment-preparation/system-requirements/#hardware-architecture-support","title":"Hardware Architecture Support","text":"
  • For the control plane, simplyblock requires x86-64 compatible CPUs.
  • For the storage plane, simplyblock supports x86-64 or ARM64 (AArch64) compatible CPUs.
"},{"location":"deployments/deployment-preparation/system-requirements/#virtualization-support","title":"Virtualization Support","text":"

Both simplyblock storage nodes and control plane nodes can run fully virtualized. It has been tested on plain KVM, Proxmox, Nitro (AWS EC2) and GCP.

For storage node production deployments, SR-IOV is required for NVMe devices and network interfaces (NICs). Furthermore, dedicated cores must be assigned exclusively to the virtual machines running storage nodes (no over-provisioning).

"},{"location":"deployments/deployment-preparation/system-requirements/#deployment-models","title":"Deployment Models","text":"

Two deployment options are supported:

  • Plain Linux: In this mode, which is also called Docker mode, all nodes are deployed to separate hosts. Storage nodes are usually bare-metal and control plane nodes are usually VMs.

Basic Docker knowledge is helpful, but all management can be performed within the system via its CLI or API.

  • Kubernetes: In Kubernetes, both disaggregated deployments with dedicated workers or clusters for storage nodes, or hyper-converged deployments (co-located with compute workloads) are supported. A wide range of Kubernetes distros and operating systems are supported.

Kubernetes knowledge is required.

The minimum system requirements below concern simplyblock only and must be dedicated to simplyblock.

"},{"location":"deployments/deployment-preparation/system-requirements/#minimum-system-requirements","title":"Minimum System Requirements","text":"

Info

If the use of erasure coding is intended, DDR5 RAM is recommended for maximum performance. In addition, it is recommended to use CPUs with large L1 caches, as those will perform better.

The following minimum resources must be dedicated exclusively to simplyblock and are not available to the host operating system or other processes. This includes vCPUs, RAM, locally attached virtual or physical NVMe devices, network bandwidth, and free space on the boot disk.

"},{"location":"deployments/deployment-preparation/system-requirements/#overview","title":"Overview","text":"Node Type vCPU(s) RAM (GB) Locally Attached Storage Network Performance Free Boot Disk Number of Nodes Storage Node 8+ 6+ 1x fully dedicated NVMe 10 GBit/s 10 GB 1 (2 for HA) Control Plane* 4 16 - 1 GBit/s 35 GB 1 (3 for HA)

*Plain Linux Deployment, up to 5 nodes, 1,000 logical volumes, 2,500 snapshots

"},{"location":"deployments/deployment-preparation/system-requirements/#storage-nodes","title":"Storage Nodes","text":"

IOPS performance depends on the number of storage node vCPUs. The maximum performance is reached with 32 physical cores per socket. In such a scenario, the deployment dedicates (isolates) 24 cores to the simplyblock data plane (spdk_80xx containers), while the rest remain under the control of Linux.

Info

Simplyblock auto-detects NUMA nodes. It will configure and deploy storage nodes per NUMA node.

Each NUMA socket requires directly attached NVMe devices and NICs to deploy a storage node. For more information on simplyblock on NUMA, see NUMA Considerations.

Info

It is recommended to deploy multiple storage nodes per storage host if there are more than 32 cores available per socket.

During deployment, simplyblock detects the underlying configuration and prepares a configuration file with the recommended deployment strategy, including the recommended amount of storage nodes per storage host based on the detected configuration. This file is later processed when adding the storage nodes to the storage host. Manual changes to the configuration are possible if the proposed configuration is not applicable.

As hyper-converged deployments have to share vCPUs, it is recommended to dedicate 8 vCPUs per socket to simplyblock. For example, on a system with 32 cores (64 vCPUs) per socket, this amounts to 12.5% of vCPU capacity per host. For very IO-intensive applications, this amount should be increased.

Warning

On storage nodes, required vCPUs will be automatically isolated from the operating system. No kernel-space or user-space processes, nor interrupt handlers, can be scheduled on these vCPUs.

Info

For RAM, it is required to estimate the expected average number of logical volumes per node, as well as the average raw storage capacity utilized per node. For example, if each node in a cluster has 100 TiB of raw capacity, the average is also 100 TiB. In a 5-node cluster with a maximum of 2,500 logical volumes, the average per node would be 500.

Unit Memory Requirement Fixed amount 3 GiB Per logical volume (cluster average per node) 25 MiB % of maximum storage capacity (cluster average per node) 1.5 GiB / TiB

Info

For disaggregated setups, it is recommended to add 50% to these numbers as a reserve. In a purely hyper-converged setup, stay at the requirement.
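Using the table above with the worked example from the info box (a cluster average of 500 logical volumes and 100 TiB of raw capacity per node), the RAM requirement can be computed as a sketch using awk arithmetic; the 50% reserve applies to disaggregated setups.

```shell
# RAM per storage node: 3 GiB fixed, plus 25 MiB per logical volume,
# plus 1.5 GiB per TiB of raw capacity (cluster averages per node).
volumes=500        # average logical volumes per node
capacity_tib=100   # average raw capacity per node, in TiB
awk -v v="$volumes" -v c="$capacity_tib" 'BEGIN {
    base = 3 + v * 25 / 1024 + c * 1.5
    printf "Base RAM requirement: %.1f GiB\n", base
    printf "With 50%% reserve (disaggregated): %.1f GiB\n", base * 1.5
}'
```

For this example, the base requirement comes to roughly 165 GiB per node, or about 248 GiB including the 50% reserve.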

"},{"location":"deployments/deployment-preparation/system-requirements/#control-plane","title":"Control Plane","text":"

General control plane requirements provided above apply to the plain linux deployment. For a Kubernetes-based control plane, the minimum requirements per replica are:

Service vCPU(s) RAM (GB) Disk (GB) Simplyblock Meta-Database 1 4 5 Observability Stack 4 8 25 Simplyblock Web-API 1 2 0.5 Other Simplyblock Services 1 2 0.5

If more than 2,500 volumes or more than 5 storage nodes are attached to the control plane, additional RAM and vCPUs are advised. Also, the required observability disk space must be increased if retention of logs and statistics for more than 7 days is required.

Info

Three replicas are mandatory for the key-value store. The Web-API runs as a DaemonSet on all workers if no taint is applied. The Observability Stack can optionally be replicated, and the sb-services run without replication.

"},{"location":"deployments/deployment-preparation/system-requirements/#hyperthreading","title":"Hyperthreading","text":"

If 32 or more physical cores are available per storage node, it is highly recommended to turn off hyperthreading in the BIOS or UEFI setup services.

"},{"location":"deployments/deployment-preparation/system-requirements/#nvme-devices","title":"NVMe Devices","text":"

NVMe devices must support 4KB native block size and are recommended to be sized between 1.9 TiB and 7.68 TiB. Large NVMe devices are supported, but performance per TiB is lower and rebalancing can take longer.

In general, all NVMe devices used in a single cluster should exhibit a similar performance profile per TiB. Therefore, within a single cluster, all NVMe devices are recommended to be of the same size, but this is not a hard requirement.

Clusters are lightweight, and it is recommended to use different clusters for different types of hardware (NVMe, networking, compute) or with a different performance profile per TiB of raw storage.

Warning

Simplyblock only works with non-partitioned, exclusive NVMe devices (virtual via SR-IOV or physical) as its backing storage.

Individual NVMe namespaces or partitions cannot be claimed by simplyblock, only dedicated NVMe controllers.

Devices must not be mounted under Linux; the entire device will be low-level formatted and re-partitioned during deployment.

Additionally, devices will be detached from the operating system's control and will no longer show up in lsblk once simplyblock's storage nodes are running.

Info

It is required to Low-Level Format Devices with 4KB block size before deploying Simplyblock.
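Before low-level formatting, the current logical block size of each NVMe block device can be inspected via sysfs. The following is a sketch; simplyblock requires 4096 bytes (4KB), and no output means no NVMe block devices were found.

```shell
# Print the current logical block size of each NVMe block device.
for dev in /sys/block/nvme*n*; do
    [ -e "$dev/queue/logical_block_size" ] || continue
    echo "$(basename "$dev"): $(cat "$dev/queue/logical_block_size") bytes"
done
```

Any device reporting 512 bytes must be low-level formatted to a 4KB LBA format before deployment.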

"},{"location":"deployments/deployment-preparation/system-requirements/#network-requirements","title":"Network Requirements","text":"

In production, simplyblock requires a redundant network for storage traffic (e.g., via LACP, Stacked Switches, MLAG, active/active or active/passive NICs, STP or MSTP).

Simplyblock implements NVMe over Fabrics (NVMe-oF), specifically NVMe over TCP, and works over any Ethernet interconnect.

Recommendation

Simplyblock highly recommends NICs with RDMA/RoCEv2 support, such as NVIDIA Mellanox network adapters (ConnectX-6 or higher). Suitable network adapters are available from brands such as NVIDIA, Intel, and Broadcom.

For production, software-defined switches such as Linux Bridge or OVS cannot be used. Instead, an interface on top of a Linux bond over two ports of the NIC(s), or SR-IOV, must be used.

Also, it is recommended to use a separate physical NIC with two ports (bonded) and a highly available network for management traffic. For management traffic, a 1 GBit/s network is sufficient and a Linux Bridge may be used.

Warning

All storage nodes within a cluster and all hosts accessing storage shall reside within the same hardware VLAN.

Avoid any gateways, firewalls, or proxies higher than L2 on the network path.

"},{"location":"deployments/deployment-preparation/system-requirements/#pcie-version","title":"PCIe Version","text":"

The minimum required PCIe standard for NVMe devices is PCIe 3.0. However, PCIe 4.0 or higher is strongly recommended.

"},{"location":"deployments/deployment-preparation/system-requirements/#operating-system-requirements-control-plane-storage-plane","title":"Operating System Requirements (Control Plane, Storage Plane)","text":"

Control plane nodes, as well as storage nodes in a plain linux deployment, require one of the following operating systems:

Operating System Versions Alma Linux 9 Rocky Linux 9 Redhat Enterprise Linux (RHEL) 9

In a hyper-converged deployment, a broad range of operating systems is supported. Availability also depends on the Kubernetes distribution in use.

Operating System Versions Alma Linux 9, 10 Rocky Linux 9, 10 Redhat Enterprise Linux (RHEL) 9, 10 Ubuntu 22.04, 24.04 Debian 12, 13 Talos from 1.6.7

The operating system must be on the latest patch-level.

"},{"location":"deployments/deployment-preparation/system-requirements/#operating-system-requirements-initiator","title":"Operating System Requirements (Initiator)","text":"

An initiator is the operating system to which simplyblock logical volumes are attached over the network (NVMe/TCP).

For further information on the requirements of the initiator-side (client-only), see:

  • Linux Distributions and Versions
  • Linux Kernel Versions
"},{"location":"deployments/deployment-preparation/system-requirements/#kubernetes-requirements","title":"Kubernetes Requirements","text":"

For Kubernetes-based deployments, the following Kubernetes environments and distributions are supported:

Distribution Versions Amazon EKS 1.28 and higher Google GKE 1.28 and higher K3s 1.29 and higher Kubernetes (vanilla) 1.28 and higher Talos 1.6.7 and higher OpenShift 4.15 and higher"},{"location":"deployments/deployment-preparation/system-requirements/#proxmox-requirements","title":"Proxmox Requirements","text":"

The Proxmox integration supports any Proxmox installation of version 8.0 and higher.

"},{"location":"deployments/deployment-preparation/system-requirements/#openstack-requirements","title":"OpenStack Requirements","text":"

The OpenStack integration supports any OpenStack installation of version 25.1 (Epoxy) or higher. Support for older versions may be available on request.

"},{"location":"deployments/install-on-linux/","title":"Install Simplyblock on Linux","text":"

Installing simplyblock for production on plain linux (Docker) requires a few components to be installed. Furthermore, there are a couple of configuration steps to secure the network, ensure performance, and protect data in the case of hardware or software failures.

Simplyblock provides two test scripts to automatically check your system's configuration. While these may not catch all edge cases, they can help to streamline the configuration check. The scripts can be run multiple times during the preparation phase to find missing configurations.

Automatically check your configurations
# Configuration check for the control plane (management nodes)\ncurl -s -L https://install.simplyblock.io/scripts/prerequisites-cp.sh | bash\n\n# Configuration check for the storage plane (storage nodes)\ncurl -s -L https://install.simplyblock.io/scripts/prerequisites-sn.sh | bash\n
"},{"location":"deployments/install-on-linux/#before-we-start","title":"Before We Start","text":"

A simplyblock production cluster consists of three different types of nodes in the plain linux (Docker) variant of the deployment:

  1. Management nodes are part of the control plane which manages the cluster(s).
  2. Storage nodes are part of a specific storage cluster and provide capacity to the distributed storage pool. A production cluster requires at least three nodes.
  3. Secondary nodes are part of a specific storage cluster and enable automatic failover for NVMe-oF connections. In a high-availability cluster, every primary storage node automatically provides a secondary storage node.

In a plain-linux deployment, multiple storage nodes can reside on the same host. This has to be done on multi-socket systems, as nodes have to be aligned with NUMA sockets. However, the management nodes require separate VMs.

A single control plane can manage one or more clusters. If started afresh, a control plane must be set up before creating a storage cluster. If there is a preexisting control plane, an additional storage cluster can be added to it directly.

More information on the control plane, storage plane, and the different node types is available under Simplyblock Cluster in the architecture section.

"},{"location":"deployments/install-on-linux/#network-preparation","title":"Network Preparation","text":"

For network requirements, see System Requirements.

On storage nodes, simplyblock can use either one network interface for both storage and management or separate interfaces (VLANs or subnets).

To install simplyblock in your environment, you may have to adapt these commands to match your configuration.

Network interface Network definition Abbreviation Subnet eth0 Control Plane control 192.168.10.0/24 eth1 Storage Plane storage 10.10.10.0/24"},{"location":"deployments/install-on-linux/install-cp/","title":"Install Control Plane","text":""},{"location":"deployments/install-on-linux/install-cp/#control-plane-installation","title":"Control Plane Installation","text":"

The first step when installing simplyblock on plain linux (Docker), is to install the control plane. The control plane manages one or more storage clusters. If an existing control plane is available and the new cluster should be added to it, this section can be skipped.

In this case, the following section can be skipped to Storage Plane Installation.

"},{"location":"deployments/install-on-linux/install-cp/#firewall-configuration-cp","title":"Firewall Configuration (CP)","text":"

Simplyblock requires a number of TCP and UDP ports to be opened from certain networks. Additionally, it requires IPv6 to be disabled on management nodes.

The following is a list of all ports (TCP and UDP) required to operate as a management node. Attention is required, as this list is for management nodes only. Storage nodes have a different port configuration.

Service Direction Source / Target Network Port Protocol(s) ICMP ingress control - ICMP Cluster API ingress storage, control, admin 80 TCP SSH ingress storage, control, admin 22 TCP Graylog ingress storage, control 12201 TCP / UDP Graylog ingress storage, control 12202 TCP Graylog ingress storage, control 13201 TCP Graylog ingress storage, control 13202 TCP Docker Daemon Remote Access ingress storage, control 2375 TCP Docker Swarm Remote Access ingress storage, control 2377 TCP Docker Overlay Network ingress storage, control 4789 UDP Docker Network Discovery ingress storage, control 7946 TCP / UDP FoundationDB ingress storage, control 4500 TCP Prometheus ingress storage, control 9100 TCP Cluster Control egress storage, control 8080-8890 TCP spdk-http-proxy egress storage, control 5000 TCP spdk-firewall-proxy egress storage, control 5001 TCP Docker Daemon Remote Access egress storage, control 2375 TCP Docker Swarm Remote Access egress storage, control 2377 TCP Docker Overlay Network egress storage, control 4789 UDP Docker Network Discovery egress storage, control 7946 TCP / UDP

With the previously defined subnets, the following snippet disables IPv6 and configures the iptables automatically.

Danger

The example assumes that you have an external firewall between the admin network and the public internet! If this is not the case, ensure the correct source access for ports 22 and 80.

Network Configuration
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1\nsudo sysctl -w net.ipv6.conf.default.disable_ipv6=1\n\n# Clean up\nsudo iptables -F SIMPLYBLOCK\nsudo iptables -D DOCKER-FORWARD -j SIMPLYBLOCK\nsudo iptables -X SIMPLYBLOCK\n# Setup\nsudo iptables -N SIMPLYBLOCK\nsudo iptables -I DOCKER-FORWARD 1 -j SIMPLYBLOCK\nsudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT\nsudo iptables -A SIMPLYBLOCK -m state --state ESTABLISHED,RELATED -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 80 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 2375 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 2377 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 4500 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p udp --dport 4789 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 7946 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p udp --dport 7946 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 9100 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 12201 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p udp --dport 12201 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 12202 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 13201 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 13202 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -s 0.0.0.0/0 -j DROP\n
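The repetitive RETURN rules in the snippet above can also be generated from a compact port/protocol list. The following sketch uses the same ports and source networks as the snippet; `echo` keeps it a dry run, so remove the `echo` to actually apply the rules:

```shell
# Dry-run sketch: generate the per-port RETURN rules for the SIMPLYBLOCK
# chain from a compact "port/protocol" list (same ports as the snippet above).
SRC="192.168.10.0/24,10.10.10.0/24"
rules=$(
  for spec in 2375/tcp 2377/tcp 4500/tcp 4789/udp 7946/tcp 7946/udp \
              9100/tcp 12201/tcp 12201/udp 12202/tcp 13201/tcp 13202/tcp; do
    port=${spec%/*}; proto=${spec#*/}
    echo "sudo iptables -A SIMPLYBLOCK -p $proto --dport $port -s $SRC -j RETURN"
  done
)
printf '%s\n' "$rules"
```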
"},{"location":"deployments/install-on-linux/install-cp/#management-node-installation","title":"Management Node Installation","text":"

Now that the network is configured, the management node software can be installed.

Simplyblock provides a command line interface called sbctl. It's built in Python and requires Python 3 and Pip (the Python package manager) to be installed on the machine. This can be achieved with yum.

Install Python and Pip
sudo yum -y install python3-pip\n

Afterward, the sbctl command line interface can be installed. Upgrading the CLI later on uses the same command.

Install Simplyblock CLI
sudo pip install sbctl --upgrade\n

Recommendation

Simplyblock recommends upgrading sbctl only as part of a system upgrade, to prevent potential incompatibilities between the running simplyblock cluster and the version of sbctl.

At this point, the simplyblock-provided system check can quickly reveal potential issues.

Automatically check your configuration
curl -s -L https://install.simplyblock.io/scripts/prerequisites-cp.sh | bash\n

If the check succeeds, it's time to set up the primary management node:

Deploy the primary management node
sbctl cluster create --ifname=<IF_NAME> --ha-type=ha\n

Additional cluster deployment options can be found in the Cluster Deployment Options.

The output should look something like this:

Example output of control plane deployment
[root@vm11 ~]# sbctl cluster create --ifname=eth0 --ha-type=ha\n2025-02-26 12:37:06,097: INFO: Installing dependencies...\n2025-02-26 12:37:13,338: INFO: Installing dependencies > Done\n2025-02-26 12:37:13,358: INFO: Node IP: 192.168.10.1\n2025-02-26 12:37:13,510: INFO: Configuring docker swarm...\n2025-02-26 12:37:14,199: INFO: Configuring docker swarm > Done\n2025-02-26 12:37:14,200: INFO: Adding new cluster object\nFile moved to /usr/local/lib/python3.9/site-packages/simplyblock_core/scripts/alerting/alert_resources.yaml successfully.\n2025-02-26 12:37:14,269: INFO: Deploying swarm stack ...\n2025-02-26 12:38:52,601: INFO: Deploying swarm stack > Done\n2025-02-26 12:38:52,604: INFO: deploying swarm stack succeeded\n2025-02-26 12:38:52,605: INFO: Configuring DB...\n2025-02-26 12:39:06,003: INFO: Configuring DB > Done\n2025-02-26 12:39:06,106: INFO: Settings updated for existing indices.\n2025-02-26 12:39:06,147: INFO: Template created for future indices.\n2025-02-26 12:39:06,505: INFO: {\"cluster_id\": \"7bef076c-82b7-46a5-9f30-8c938b30e655\", \"event\": \"OBJ_CREATED\", \"object_name\": \"Cluster\", \"message\": \"Cluster created 7bef076c-82b7-46a5-9f30-8c938b30e655\", \"caused_by\": \"cli\"}\n2025-02-26 12:39:06,529: INFO: {\"cluster_id\": \"7bef076c-82b7-46a5-9f30-8c938b30e655\", \"event\": \"OBJ_CREATED\", \"object_name\": \"MgmtNode\", \"message\": \"Management node added vm11\", \"caused_by\": \"cli\"}\n2025-02-26 12:39:06,533: INFO: Done\n2025-02-26 12:39:06,535: INFO: New Cluster has been created\n2025-02-26 12:39:06,535: INFO: 7bef076c-82b7-46a5-9f30-8c938b30e655\n7bef076c-82b7-46a5-9f30-8c938b30e655\n

If the deployment was successful, the last line returns the cluster ID. Note it down, as it is required in further steps of the installation.
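The cluster ID can also be captured into a shell variable instead of being copied by hand, since it is printed as the last line. A sketch; `create_output` stands in for the real call, i.e. `create_output=$(sbctl cluster create ...)`:

```shell
# Sketch: capture the cluster ID from the last line of the command output.
# The sample output lines are taken from the example above.
create_output='2025-02-26 12:39:06,535: INFO: New Cluster has been created
2025-02-26 12:39:06,535: INFO: 7bef076c-82b7-46a5-9f30-8c938b30e655
7bef076c-82b7-46a5-9f30-8c938b30e655'
CLUSTER_ID=$(printf '%s\n' "$create_output" | tail -n 1)
# Sanity check: a cluster ID is a 36-character UUID
printf '%s\n' "$CLUSTER_ID" | grep -Eq '^[0-9a-f-]{36}$' && echo "Cluster ID: $CLUSTER_ID"
```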

In addition to the cluster ID, the cluster secret is required in many further steps. The following command can be used to retrieve it.

Get the cluster secret
sbctl cluster get-secret <CLUSTER_ID>\n
Example output get cluster secret
[root@vm11 ~]# sbctl cluster get-secret 7bef076c-82b7-46a5-9f30-8c938b30e655\ne8SQ1ElMm8Y9XIwyn8O0\n
"},{"location":"deployments/install-on-linux/install-cp/#secondary-management-nodes","title":"Secondary Management Nodes","text":"

A production cluster requires at least three management nodes in the control plane. Hence, additional management nodes need to be added.

On the secondary nodes, the network requires the same configuration as on the primary. Executing the commands under Firewall Configuration (CP) prepares the node.

Afterward, Python, Pip, and sbctl need to be installed.

Deployment preparation
sudo yum -y install python3-pip\nsudo pip install sbctl --upgrade\n

Finally, we deploy the management node software and join the control plane cluster.

Secondary management node deployment
sbctl mgmt add <CP_PRIMARY_IP> <CLUSTER_ID> <CLUSTER_SECRET> <IF_NAME>\n

Running the command against the primary management node of the control plane should produce output similar to the following example:

Example output joining a control plane cluster
[demo@demo ~]# sbctl mgmt add 192.168.10.1 7bef076c-82b7-46a5-9f30-8c938b30e655 e8SQ1ElMm8Y9XIwyn8O0 eth0\n2025-02-26 12:40:17,815: INFO: Cluster found, NQN:nqn.2023-02.io.simplyblock:7bef076c-82b7-46a5-9f30-8c938b30e655\n2025-02-26 12:40:17,816: INFO: Installing dependencies...\n2025-02-26 12:40:25,606: INFO: Installing dependencies > Done\n2025-02-26 12:40:25,626: INFO: Node IP: 192.168.10.2\n2025-02-26 12:40:26,802: INFO: Joining docker swarm...\n2025-02-26 12:40:27,719: INFO: Joining docker swarm > Done\n2025-02-26 12:40:32,726: INFO: Adding management node object\n2025-02-26 12:40:32,745: INFO: {\"cluster_id\": \"7bef076c-82b7-46a5-9f30-8c938b30e655\", \"event\": \"OBJ_CREATED\", \"object_name\": \"MgmtNode\", \"message\": \"Management node added vm12\", \"caused_by\": \"cli\"}\n2025-02-26 12:40:32,752: INFO: Done\n2025-02-26 12:40:32,755: INFO: Node joined the cluster\ncdde125a-0bf3-4841-a6ef-a0b2f41b8245\n
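Joining several secondary management nodes can be scripted in one loop. A dry-run sketch; the node names (vm12, vm13) and interface name are illustrative, while cluster ID and secret are the example values from above. `echo` prints the commands instead of executing them over SSH:

```shell
# Dry-run sketch: print the join command for each secondary management node.
# Node names and interface are assumptions for illustration.
CP_PRIMARY_IP=192.168.10.1
CLUSTER_ID=7bef076c-82b7-46a5-9f30-8c938b30e655
CLUSTER_SECRET=e8SQ1ElMm8Y9XIwyn8O0
join_cmds=$(
  for node in vm12 vm13; do
    echo "ssh $node sbctl mgmt add $CP_PRIMARY_IP $CLUSTER_ID $CLUSTER_SECRET eth0"
  done
)
printf '%s\n' "$join_cmds"
```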

From here, additional management nodes can be added to the control plane cluster. If the control plane cluster is ready, the storage plane can be installed.

"},{"location":"deployments/install-on-linux/install-sp/","title":"Install Storage Plane","text":""},{"location":"deployments/install-on-linux/install-sp/#storage-plane-installation","title":"Storage Plane Installation","text":"

The installation of a storage plane requires a functioning control plane. If no control plane cluster is available yet, it must be installed beforehand. Jump right to the Control Plane Installation.

The following examples assume two subnets are available: a management network (192.168.10.0/24) and a storage network (10.10.10.0/24).

"},{"location":"deployments/install-on-linux/install-sp/#firewall-configuration-sp","title":"Firewall Configuration (SP)","text":"

Simplyblock requires a number of TCP and UDP ports to be opened from certain networks. Additionally, it requires IPv6 to be disabled on storage nodes.

The following is a list of all ports (TCP and UDP) required to operate a storage node. Note that this list applies to storage nodes only; management nodes have a different port configuration.

| Service | Direction | Source / Target Network | Port(s) | Protocol(s) |
| --- | --- | --- | --- | --- |
| ICMP | ingress | control | - | ICMP |
| Storage node API | ingress | storage | 5000 | TCP |
| spdk-firewall-proxy | ingress | storage | 5001 | TCP |
| spdk-http-proxy | ingress | storage, control | 8080-8180 | TCP |
| hublvol-nvmf-subsys-port | ingress | storage, control | 9030-9059 | TCP |
| internal-nvmf-subsys-port | ingress | storage, control | 9060-9099 | TCP |
| lvol-nvmf-subsys-port | ingress | storage, control | 9100-9200 | TCP |
| SSH | ingress | storage, control, admin | 22 | TCP |
| Docker Daemon Remote Access | ingress | storage, control | 2375 | TCP |
| Docker Swarm Remote Access | ingress | storage, control | 2377 | TCP |
| Docker Overlay Network | ingress | storage, control | 4789 | UDP |
| Docker Network Discovery | ingress | storage, control | 7946 | TCP / UDP |
| Greylog | ingress | control | 12202 | TCP |
| FoundationDB | egress | storage | 4500 | TCP |
| Docker Daemon Remote Access | egress | storage, control | 2375 | TCP |
| Docker Swarm Remote Access | egress | storage, control | 2377 | TCP |
| Docker Overlay Network | egress | storage, control | 4789 | UDP |
| Docker Network Discovery | egress | storage, control | 7946 | TCP / UDP |
| Greylog | egress | control | 12202 | TCP |

With the previously defined subnets, the following snippet disables IPv6 and configures the iptables automatically.

Danger

The example assumes that you have an external firewall between the admin network and the public internet! If this is not the case, ensure the correct source access for port 22.

Disable IPv6
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1\nsudo sysctl -w net.ipv6.conf.default.disable_ipv6=1\n
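Note that `sysctl -w` settings apply at runtime only and are lost on reboot. The following hedged sketch persists them via a sysctl drop-in file; the file name is an assumption, and for illustration (and testability) the fragment is written to a temp file. In practice, install it as `/etc/sysctl.d/90-simplyblock.conf` and run `sudo sysctl --system`:

```shell
# Sketch: persist the IPv6-disable settings across reboots via a sysctl
# drop-in fragment. Written to a temp file here for illustration only.
conf_file=$(mktemp)
cat > "$conf_file" <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
cat "$conf_file"
# In practice:
#   sudo install -m 0644 "$conf_file" /etc/sysctl.d/90-simplyblock.conf
#   sudo sysctl --system
```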

Docker Swarm, by default, creates iptables entries open to the world. If no external firewall is available, the created iptables configuration needs to be restricted.

The following script creates additional iptables rules, prepended to Docker's forwarding rules, that allow access only from internal networks. This script should be stored in /usr/local/sbin/simplyblock-iptables.sh.

Configuration script for Iptables
#!/usr/bin/env bash\n\n# Clean up\nsudo iptables -F SIMPLYBLOCK\nsudo iptables -D DOCKER-FORWARD -j SIMPLYBLOCK\nsudo iptables -X SIMPLYBLOCK\n\n# Setup\nsudo iptables -N SIMPLYBLOCK\nsudo iptables -I DOCKER-FORWARD 1 -j SIMPLYBLOCK\nsudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 2375 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 2377 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 4420 -s 10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p udp --dport 4789 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 5000 -s 192.168.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 7946 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p udp --dport 7946 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 8080:8890 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -p tcp --dport 9090:9900 -s 192.168.10.0/24,10.10.10.0/24 -j RETURN\nsudo iptables -A SIMPLYBLOCK -s 0.0.0.0/0 -j DROP\n

To automatically run this script whenever Docker is started or restarted, it must be attached to a Systemd service, stored as /etc/systemd/system/simplyblock-iptables.service.

Systemd script to set up Iptables
[Unit]\nDescription=Simplyblock Iptables Restrictions for Docker \nAfter=docker.service\nBindsTo=docker.service\nReloadPropagatedFrom=docker.service\n\n[Service]\nType=oneshot\nExecStart=/usr/local/sbin/simplyblock-iptables.sh\nExecReload=/usr/local/sbin/simplyblock-iptables.sh\nRemainAfterExit=yes\n\n[Install]\nWantedBy=multi-user.target\n

After both files are stored in their respective locations, the bash script needs to be made executable, and the Systemd service needs to be enabled to start automatically.

Enabling service file
sudo chmod +x /usr/local/sbin/simplyblock-iptables.sh\nsudo systemctl daemon-reload\nsudo systemctl enable simplyblock-iptables.service\nsudo systemctl start simplyblock-iptables.service\n
"},{"location":"deployments/install-on-linux/install-sp/#storage-node-installation","title":"Storage Node Installation","text":"

Now that the network is configured, the storage node software can be installed.

Info

All storage nodes can be prepared at this point, as they are added to the cluster in the next step. It is therefore recommended to execute this step on all storage nodes before moving on.

Simplyblock provides a command line interface called sbctl. It's built in Python and requires Python 3 and Pip (the Python package manager) to be installed on the machine. This can be achieved with yum.

Install Python and Pip
sudo yum -y install python3-pip\n

Afterward, the sbctl command line interface can be installed. Upgrading the CLI later on uses the same command.

Install Simplyblock CLI
sudo pip install sbctl --upgrade\n

Recommendation

Simplyblock recommends upgrading sbctl only as part of a system upgrade, to prevent potential incompatibilities between the running simplyblock cluster and the version of sbctl.

At this point, the simplyblock-provided system check can quickly reveal potential issues.

Automatically check your configuration
curl -s -L https://install.simplyblock.io/scripts/prerequisites-sn.sh | bash\n
"},{"location":"deployments/install-on-linux/install-sp/#nvme-device-preparation","title":"NVMe Device Preparation","text":"

Once the check is complete, the NVMe devices in each storage node can be prepared. To prevent data loss in case of a sudden power outage, NVMe devices need to be formatted with a specific LBA format.

Warning

Failing to format NVMe devices with the correct LBA format can lead to data loss or data corruption in the case of a sudden power outage or other loss of power. If you can't find the necessary LBA format, it is best to ask your simplyblock contact for further instructions.

On AWS, the necessary LBA format is not available. Simplyblock is, however, fully tested and supported with AWS.

The lsblk command is the easiest way to find all NVMe devices attached to a system.

Example output of lsblk
[demo@demo-3 ~]# sudo lsblk\nNAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS\nsda           8:0    0   30G  0 disk\n\u251c\u2500sda1        8:1    0    1G  0 part /boot\n\u2514\u2500sda2        8:2    0   29G  0 part\n  \u251c\u2500rl-root 253:0    0   26G  0 lvm  /\n  \u2514\u2500rl-swap 253:1    0    3G  0 lvm  [SWAP]\nnvme3n1     259:0    0  6.5G  0 disk\nnvme2n1     259:1    0   70G  0 disk\nnvme1n1     259:2    0   70G  0 disk\nnvme0n1     259:3    0   70G  0 disk\n

In the example, we see four NVMe devices: three devices with 70 GiB and one device with 6.5 GiB of storage capacity.

To find the correct LBA format (lbaf) for each of the devices, the nvme CLI can be used.

Show NVMe namespace information
sudo nvme id-ns /dev/nvmeXnY\n

The output depends on the NVMe device itself, but looks something like this:

Example output of NVMe namespace information
[demo@demo-3 ~]# sudo nvme id-ns /dev/nvme0n1\nNVME Identify Namespace 1:\n...\nlbaf  0 : ms:0   lbads:9  rp:0\nlbaf  1 : ms:8   lbads:9  rp:0\nlbaf  2 : ms:16  lbads:9  rp:0\nlbaf  3 : ms:64  lbads:9  rp:0\nlbaf  4 : ms:0   lbads:12 rp:0 (in use)\nlbaf  5 : ms:8   lbads:12 rp:0\nlbaf  6 : ms:16  lbads:12 rp:0\nlbaf  7 : ms:64  lbads:12 rp:0\n

From this output, the required lbaf configuration can be found. The necessary configuration has to have the following values:

| Property | Value |
| --- | --- |
| ms | 0 |
| lbads | 12 |
| rp | 0 |

In the example, the required LBA format is 4. If an NVMe device doesn't have that combination, any other lbads=12 combination will work. However, simplyblock recommends asking for the best available combination.
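Selecting the format can also be scripted. A sketch that picks the first lbaf index with ms:0 and lbads:12, assuming the `nvme id-ns` output format shown above; the sample lines are embedded for illustration, and in practice `sudo nvme id-ns /dev/nvmeXnY` would be piped into the awk program instead:

```shell
# Sketch: extract the lbaf index with ms:0 and lbads:12 from nvme id-ns output.
id_ns_output='lbaf  0 : ms:0   lbads:9  rp:0
lbaf  1 : ms:8   lbads:9  rp:0
lbaf  2 : ms:16  lbads:9  rp:0
lbaf  3 : ms:64  lbads:9  rp:0
lbaf  4 : ms:0   lbads:12 rp:0 (in use)
lbaf  5 : ms:8   lbads:12 rp:0'
best_lbaf=$(printf '%s\n' "$id_ns_output" \
  | awk '$1 == "lbaf" && $4 == "ms:0" && $5 == "lbads:12" { print $2; exit }')
echo "Preferred LBA format: ${best_lbaf:-none found}"
```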

Info

In some rare cases, no lbads=12 combination will be available. In this case, it is ok to leave the current setup. This is specifically true for certain cloud providers such as AWS.

In our example, the device is already formatted with the correct lbaf (see the \"in use\"). It is, however, recommended to always format the device before use.

To format the drive, the nvme CLI is used again.

Formatting the NVMe device
sudo nvme format --lbaf=<lbaf> --ses=0 /dev/nvmeXnY\n

The output of the command should give a successful response when executed similarly to the example below.

Example output of NVMe device formatting
[demo@demo-3 ~]# sudo nvme format --lbaf=4 --ses=0 /dev/nvme0n1\nYou are about to format nvme0n1, namespace 0x1.\nWARNING: Format may irrevocably delete this device's data.\nYou have 10 seconds to press Ctrl-C to cancel this operation.\n\nUse the force [--force] option to suppress this warning.\nSending format operation ...\nSuccess formatting namespace:1\n

Warning

This operation needs to be repeated for each NVMe device that will be handled by simplyblock.
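The repetition can be scripted as a loop over the devices. A dry-run sketch; the device list and lbaf value are illustrative (taken from the examples above), and `echo` prints the commands instead of executing them, so remove it only after the correct lbaf has been verified per device:

```shell
# Dry-run sketch: print the format command for every NVMe device that will
# be handled by simplyblock. Device list and LBAF are assumptions.
LBAF=4
format_cmds=$(
  for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
    echo "sudo nvme format --lbaf=$LBAF --ses=0 $dev"
  done
)
printf '%s\n' "$format_cmds"
```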

"},{"location":"deployments/install-on-linux/install-sp/#configuration-and-deployment","title":"Configuration and Deployment","text":"

The low-level format of the devices is required only once.

With all NVMe devices prepared, the storage node software can be deployed.

The actual deployment process happens in three steps:

  1. Creating the storage node configuration
  2. Deploying the first stage (the storage node API)
  3. Deploying the second stage (the actual storage node services); remember that this step is performed from a control plane node

The configuration process creates the configuration file, which contains all the assignments of NVMe devices, NICs, and potentially available NUMA nodes. By default, simplyblock will configure one storage node per NUMA node.

Configure the storage node
sudo sbctl storage-node configure \\\n  --max-lvol <MAX_LOGICAL_VOLUMES> \\\n  --max-size <MAX_PROVISIONING_CAPACITY>\n
Example output of storage node configure
[demo@demo-3 ~]# sudo sbctl sn configure --nodes-per-socket=2 --max-lvol=50 --max-size=1T\n2025-05-14 10:40:17,460: INFO: 0000:00:04.0 is already bound to nvme.\n0000:00:1e.0\n0000:00:1e.0\n0000:00:1f.0\n0000:00:1f.0\n0000:00:1e.0\n0000:00:1f.0\n2025-05-14 10:40:17,841: INFO: JSON file successfully written to /etc/simplyblock/sn_config_file\n2025-05-14 10:40:17,905: INFO: JSON file successfully written to /etc/simplyblock/system_info\nTrue\n

A full set of the parameters for the configure subcommand can be found in the CLI reference.

It is also possible to adjust the configuration file manually, e.g., to remove NVMe devices. After the configuration has been created, the first stage deployment can be executed.

Deploy the storage node
sudo sbctl storage-node deploy --ifname eth0\n

The output will look something like the following example:

Example output of a storage node deployment
[demo@demo-3 ~]# sudo sbctl storage-node deploy --ifname eth0\n2025-02-26 13:35:06,991: INFO: NVMe SSD devices found on node:\n2025-02-26 13:35:07,038: INFO: Installing dependencies...\n2025-02-26 13:35:13,508: INFO: Node IP: 192.168.10.2\n2025-02-26 13:35:13,623: INFO: Pulling image public.ecr.aws/simply-block/simplyblock:hmdi\n2025-02-26 13:35:15,219: INFO: Recreating SNodeAPI container\n2025-02-26 13:35:15,543: INFO: Pulling image public.ecr.aws/simply-block/ultra:main-latest\n192.168.10.2:5000\n

On a successful deployment, the last line will provide the storage node's control channel address. This should be noted for all storage nodes, as it is required in the next step to attach the storage node to the simplyblock storage cluster.

Once this deployment step has been completed on all storage nodes, they can be attached to the storage cluster.

"},{"location":"deployments/install-on-linux/install-sp/#attach-the-storage-node-to-the-control-plane","title":"Attach the Storage Node to the Control Plane","text":"

When all storage nodes are prepared, they can be added to the storage cluster.

Warning

The following commands are executed from a management node, not from the storage node being attached.

Attaching a storage node to the storage plane
sudo sbctl storage-node add-node <CLUSTER_ID> <SN_CTR_ADDR> <MGT_IF> \\\n  --partitions <NUM_OF_PARTITIONS> \\\n  --data-nics <DATA_IF>\n

If a separate NIC (e.g., a BOND device) is used for storage traffic (whether inside the cluster or between hosts and cluster nodes), the --data-nics parameter must be specified. In R25.10, zero or one data NICs are supported. Zero data NICs will utilize the management interface for all traffic.

Info

The number of partitions (NUM_OF_PARTITIONS) depends on the storage node setup. If a storage node has a separate journaling device (e.g., a SLC NVMe device), the value should be zero (0) to prevent the storage devices from being partitioned. This improves the performance and prevents device sharing between the journal and the actual data storage location. However, in most cases, a separate journaling device is not available or required and the value of --partitions has to be 1.

The output will look something like the following example:

Example output of adding a storage node to the storage plane
[demo@demo ~]# sudo sbctl storage-node add-node 7bef076c-82b7-46a5-9f30-8c938b30e655 192.168.10.2:5000 eth0 --number-of-devices 3 --data-nics eth1\n2025-02-26 14:55:17,236: INFO: Adding Storage node: 192.168.10.2:5000\n2025-02-26 14:55:17,340: INFO: Instance id: 0b0c825e-3d16-4d91-a237-51e55c6ffefe\n2025-02-26 14:55:17,341: INFO: Instance cloud: None\n2025-02-26 14:55:17,341: INFO: Instance type: None\n2025-02-26 14:55:17,342: INFO: Instance privateIp: 192.168.10.2\n2025-02-26 14:55:17,342: INFO: Instance public_ip: 192.168.10.2\n2025-02-26 14:55:17,347: INFO: Node Memory info\n2025-02-26 14:55:17,347: INFO: Total: 24.3 GB\n2025-02-26 14:55:17,348: INFO: Free: 23.2 GB\n2025-02-26 14:55:17,348: INFO: Minimum required huge pages memory is : 14.8 GB\n2025-02-26 14:55:17,349: INFO: Joining docker swarm...\n2025-02-26 14:55:21,060: INFO: Deploying SPDK\n2025-02-26 14:55:31,969: INFO: adding alceml_2d1c235a-1f4d-44c7-9ac1-1db40e23a2c4\n2025-02-26 14:55:32,010: INFO: creating subsystem nqn.2023-02.io.simplyblock:vm12:dev:2d1c235a-1f4d-44c7-9ac1-1db40e23a2c4\n2025-02-26 14:55:32,022: INFO: adding listener for nqn.2023-02.io.simplyblock:vm12:dev:2d1c235a-1f4d-44c7-9ac1-1db40e23a2c4 on IP 10.10.10.2\n2025-02-26 14:55:32,303: INFO: Connecting to remote devices\n2025-02-26 14:55:32,321: INFO: Connecting to remote JMs\n2025-02-26 14:55:32,342: INFO: Make other nodes connect to the new devices\n2025-02-26 14:55:32,346: INFO: Setting node status to Active\n2025-02-26 14:55:32,357: INFO: {\"cluster_id\": \"3196b77c-e6ee-46c3-8291-736debfe2472\", \"event\": \"STATUS_CHANGE\", \"object_name\": \"StorageNode\", \"message\": \"Storage node status changed from: in_creation to: online\", \"caused_by\": \"monitor\"}\n2025-02-26 14:55:32,361: INFO: Sending event updates, node: 37b404b9-36aa-40b3-8b74-7f3af86bd5a5, status: online\n2025-02-26 14:55:32,368: INFO: Sending to: 37b404b9-36aa-40b3-8b74-7f3af86bd5a5\n2025-02-26 14:55:32,389: INFO: Connecting to remote devices\n2025-02-26 14:55:32,442: WARNING: The cluster status is not active (unready), adding the node without distribs and lvstore\n2025-02-26 14:55:32,443: INFO: Done\n

Repeat this process for all prepared storage nodes to add them to the storage plane.

"},{"location":"deployments/install-on-linux/install-sp/#activate-the-storage-cluster","title":"Activate the Storage Cluster","text":"

The last step, after all nodes are added to the storage cluster, is to activate the storage plane.

Storage cluster activation
sudo sbctl cluster activate <CLUSTER_ID>\n

The command output should look like the following example and report a successful activation of the storage cluster.

Example output of a storage cluster activation
[demo@demo ~]# sbctl cluster activate 7bef076c-82b7-46a5-9f30-8c938b30e655\n2025-02-28 13:35:26,053: INFO: {\"cluster_id\": \"7bef076c-82b7-46a5-9f30-8c938b30e655\", \"event\": \"STATUS_CHANGE\", \"object_name\": \"Cluster\", \"message\": \"Cluster status changed from unready to in_activation\", \"caused_by\": \"cli\"}\n2025-02-28 13:35:26,322: INFO: Connecting remote_jm_43560b0a-f966-405f-b27a-2c571a2bb4eb to 2f4dafb1-d610-42a7-9a53-13732459523e\n2025-02-28 13:35:31,133: INFO: Connecting remote_jm_43560b0a-f966-405f-b27a-2c571a2bb4eb to b7db725a-96e2-40d1-b41b-738495d97093\n2025-02-28 13:35:55,791: INFO: {\"cluster_id\": \"7bef076c-82b7-46a5-9f30-8c938b30e655\", \"event\": \"STATUS_CHANGE\", \"object_name\": \"Cluster\", \"message\": \"Cluster status changed from in_activation to active\", \"caused_by\": \"cli\"}\n
"},{"location":"deployments/kubernetes/","title":"Install Simplyblock on Kubernetes","text":"

The following simplyblock components can be installed into existing Kubernetes environments:

  • Control Plane: In Kubernetes-based deployments, the simplyblock control plane can be installed into a Kubernetes cluster. This is always the first step.
  • Storage Plane: In Kubernetes-based deployments, the simplyblock storage plane can be installed into Kubernetes clusters once the control plane is ready. It is possible to use separate workers or even separate clusters as storage nodes or to combine them with compute. The storage plane installation also installs the necessary components of the CSI driver; no extra Helm chart is needed.

In general, all Kubernetes deployments follow the same procedure. However, there are some specifics worth mentioning around OpenShift and Talos. Also, if you want to use volume-based end-to-end encryption with customer-managed keys, please see here.

The Simplyblock CSI Driver can also be separately installed to connect to any external storage cluster (this can be another hyperconverged or disaggregated cluster under Kubernetes or a Linux-based disaggregated deployment), see: Install Simplyblock CSI.

"},{"location":"deployments/kubernetes/install-csi/","title":"Install Simplyblock CSI","text":"

Simplyblock provides a seamless integration with Kubernetes through its Kubernetes CSI driver.

Before installing the Kubernetes CSI Driver, a control plane must be present, an (empty) storage cluster must have been added to the control plane, and a storage pool must have been created.

This section explains how to install a CSI driver and connect it to a disaggregated storage cluster, which must already exist prior to the CSI driver installation. The disaggregated cluster must be installed onto Plain Linux Hosts or into an Existing Kubernetes Cluster. It must not be co-located on the same Kubernetes worker nodes as the CSI driver installation.

For co-located (hyper-converged) deployment (which includes the CSI driver and storage node deployment), see Hyper-Converged Deployment.

"},{"location":"deployments/kubernetes/install-csi/#csi-driver-system-requirements","title":"CSI Driver System Requirements","text":"

The CSI driver consists of two parts:

  • A controller part, which communicates with the control plane via the control plane API
  • A node part, which is deployed to and must be present on all nodes with pods attaching simplyblock storage (DaemonSet)

Worker nodes running the node part must satisfy the following requirements:

  • Linux Distributions and Versions
  • Linux Kernel Versions
"},{"location":"deployments/kubernetes/install-csi/#installation-options","title":"Installation Options","text":"

To install the Simplyblock CSI Driver, a Helm chart is provided. While it can be installed manually, the Helm chart is strongly recommended. If a manual installation is preferred, see the CSI Driver Repository\u00a0\u29c9.

"},{"location":"deployments/kubernetes/install-csi/#retrieving-credentials","title":"Retrieving Credentials","text":"

Credentials are available via sbctl cluster get-secret from any of the control plane nodes. For further information on the command, see Retrieving a Cluster Secret.

First, the unique cluster id must be retrieved. Note down the cluster UUID of the cluster to access.

Retrieving the Cluster UUID
sudo sbctl cluster list\n

An example of the output is below.

Example output of a cluster listing
[demo@demo ~]# sbctl cluster list\n+--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+\n| UUID                                 | NQN                                                             | ha_type | tls   | mgmt nodes | storage nodes | Mod | Status |\n+--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+\n| 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a | nqn.2023-02.io.simplyblock:4502977c-ae2d-4046-a8c5-ccc7fa78eb9a | ha      | False | 1          | 4             | 1x1 | active |\n+--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+\n

In addition, the cluster secret must be retrieved. Note down the cluster secret.

Retrieve the Cluster Secret
sbctl cluster get-secret <CLUSTER_UUID>\n

Retrieving the cluster secret will look similar to the following.

Example output of retrieving a cluster secret
[demo@demo ~]# sbctl cluster get-secret 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a\noal4PVNbZ80uhLMah2Bs\n
"},{"location":"deployments/kubernetes/install-csi/#creating-a-storage-pool","title":"Creating a Storage Pool","text":"

Additionally, a storage pool is required. If a pool already exists, it can be reused. Otherwise, a storage pool can be created as follows:

Create a Storage Pool
sbctl pool add <POOL_NAME> <CLUSTER_UUID>\n

The last line of a successful storage pool creation returns the new pool id.

Example output of creating a storage pool
[demo@demo ~]# sbctl pool add test 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a\n2025-03-05 06:36:06,093: INFO: Adding pool\n2025-03-05 06:36:06,098: INFO: {\"cluster_id\": \"4502977c-ae2d-4046-a8c5-ccc7fa78eb9a\", \"event\": \"OBJ_CREATED\", \"object_name\": \"Pool\", \"message\": \"Pool created test\", \"caused_by\": \"cli\"}\n2025-03-05 06:36:06,100: INFO: Done\nad35b7bb-7703-4d38-884f-d8e56ffdafc6 # <- Pool Id\n

The last item necessary before deploying the CSI driver is the control plane address. It is recommended to front the simplyblock API with an AWS load balancer, HAProxy, or a similar service. Hence, the control plane address is the \"public\" endpoint of this load balancer.

"},{"location":"deployments/kubernetes/install-csi/#deploying-the-helm-chart","title":"Deploying the Helm Chart","text":"

Deploying the Simplyblock CSI Driver using the provided Helm chart comes down to providing the four necessary values, adding the Helm chart repository, and installing the driver.

Install Simplyblock's CSI Driver
CLUSTER_UUID=\"<UUID>\"\nCLUSTER_SECRET=\"<SECRET>\"\nCNTR_ADDR=\"<CONTROL-PLANE-ADDR>\"\nPOOL_NAME=\"<POOL-NAME>\"\nhelm repo add simplyblock-csi https://install.simplyblock.io/helm/csi\nhelm repo update\nhelm install -n simplyblock --create-namespace simplyblock simplyblock-csi/spdk-csi \\\n    --set csiConfig.simplybk.uuid=${CLUSTER_UUID} \\\n    --set csiConfig.simplybk.ip=${CNTR_ADDR} \\\n    --set csiSecret.simplybk.secret=${CLUSTER_SECRET} \\\n    --set logicalVolume.pool_name=${POOL_NAME}\n
Example output of the CSI driver deployment
demo@demo ~> export CLUSTER_UUID=\"4502977c-ae2d-4046-a8c5-ccc7fa78eb9a\"\ndemo@demo ~> export CLUSTER_SECRET=\"oal4PVNbZ80uhLMah2Bs\"\ndemo@demo ~> export CNTR_ADDR=\"http://192.168.10.1/\"\ndemo@demo ~> export POOL_NAME=\"test\"\ndemo@demo ~> helm repo add simplyblock-csi https://install.simplyblock.io/helm/csi\n\"simplyblock-csi\" has been added to your repositories\ndemo@demo ~> helm repo update\nHang tight while we grab the latest from your chart repositories...\n...Successfully got an update from the \"simplyblock-csi\" chart repository\nUpdate Complete. \u2388Happy Helming!\u2388\ndemo@demo ~> helm install -n simplyblock --create-namespace simplyblock simplyblock-csi/spdk-csi \\\n  --set csiConfig.simplybk.uuid=${CLUSTER_UUID} \\\n  --set csiConfig.simplybk.ip=${CNTR_ADDR} \\\n  --set csiSecret.simplybk.secret=${CLUSTER_SECRET} \\\n  --set logicalVolume.pool_name=${POOL_NAME}\nNAME: simplyblock-csi\nLAST DEPLOYED: Wed Mar  5 15:06:02 2025\nNAMESPACE: simplyblock\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\nNOTES:\nThe Simplyblock SPDK Driver is getting deployed to your cluster.\n\nTo check CSI SPDK Driver pods status, please run:\n\n  kubectl --namespace=simplyblock get pods --selector=\"release=simplyblock-csi\" --watch\ndemo@demo ~> kubectl --namespace=simplyblock get pods --selector=\"release=simplyblock-csi\" --watch\nNAME                   READY   STATUS    RESTARTS   AGE\nspdkcsi-controller-0   6/6     Running   0          30s\nspdkcsi-node-tzclt     2/2     Running   0          30s\n
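The same four values can also be kept in a Helm values file instead of repeating --set flags. A sketch; the nested keys mirror the --set paths used above (standard Helm flag-to-values mapping), the concrete values are taken from the example output, and the temp path is for illustration only (in practice, save the file as values.yaml):

```shell
# Sketch: values file equivalent of the four --set flags shown above.
values_file=$(mktemp)
cat > "$values_file" <<'EOF'
csiConfig:
  simplybk:
    uuid: 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a
    ip: http://192.168.10.1/
csiSecret:
  simplybk:
    secret: oal4PVNbZ80uhLMah2Bs
logicalVolume:
  pool_name: test
EOF
# helm install -n simplyblock --create-namespace simplyblock \
#   simplyblock-csi/spdk-csi -f "$values_file"
cat "$values_file"
```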

There are many additional parameters for the Helm Chart deployment. Most parameters, however, aren't required in real-world CSI driver deployments and should only be used at the request of simplyblock.

The full list of parameters is available here: Kubernetes Helm Chart Parameters.

Please note that the storagenode.create parameter must be set to false (the default) to deploy only the CSI driver.

"},{"location":"deployments/kubernetes/install-csi/#multi-cluster-support","title":"Multi Cluster Support","text":"

The Simplyblock CSI driver now offers multi-cluster support and zone-aware configurations, allowing it to connect to multiple simplyblock clusters based on cluster ID or topology zone. Previously, the CSI driver could only connect to a single cluster.

To enable interaction with multiple clusters, there are two key changes:

  1. Parameter cluster_id in a storage class: A new parameter, cluster_id, has been added to the storage class. This parameter specifies which Simplyblock cluster a given request should be directed to.
  2. Secret simplyblock-csi-secret-v2: A new Kubernetes secret, simplyblock-csi-secret-v2, has been added to store credentials for all configured simplyblock clusters.
"},{"location":"deployments/kubernetes/install-csi/#adding-a-cluster","title":"Adding a Cluster","text":"

When the Simplyblock CSI driver is initially installed, only a single cluster can be referenced.

helm install simplyblock-csi ./ \\\n    --set csiConfig.simplybk.uuid=${CLUSTER_ID} \\\n    --set csiConfig.simplybk.ip=${CLUSTER_IP} \\\n    --set csiSecret.simplybk.secret=${CLUSTER_SECRET} \\\n

The CLUSTER_ID (UUID), gateway endpoint (CLUSTER_IP), and secret (CLUSTER_SECRET) of the initial cluster must be provided. This command automatically creates the simplyblock-csi-secret-v2 secret.

The structure of the simplyblock-csi-secret-v2 secret is as following:

apiVersion: v1\ndata:\n  secret.json: <base64 encoded secret>\nkind: Secret\nmetadata:\n  name: simplyblock-csi-secret-v2\ntype: Opaque\n

The decoded secret must be valid JSON content and contain an array of JSON items, one per cluster. Each item consists of three properties: cluster_id, cluster_endpoint, and cluster_secret.

{\n   \"clusters\": [\n     {\n       \"cluster_id\": \"4ec308a1-61cf-4ec6-bff9-aa837f7bc0ea\",\n       \"cluster_endpoint\": \"http://127.0.0.1\",\n       \"cluster_secret\": \"super_secret\"\n     }\n   ]\n}\n

To add a new cluster, the current secret must be retrieved from Kubernetes, edited (adding the new cluster information), and uploaded back to the Kubernetes cluster.

# Save cluster secret to a file\nkubectl get secret simplyblock-csi-secret-v2 -o jsonpath='{.data.secret\\.json}' | base64 --decode > secret.json\n\n# Edit the clusters and add the new cluster's cluster_id, cluster_endpoint, cluster_secret\n# vi secret.json\n\ncat secret.json | base64 | tr -d '\\n' > secret-encoded.json\n\n# Replace data.secret.json with the content of secret-encoded.json\nkubectl -n simplyblock edit secret simplyblock-csi-secret-v2\n
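Before pasting the re-encoded content into the secret, the encoding can be sanity-checked with a round trip. A sketch with hypothetical cluster values (the file names follow the commands above):

```shell
# Example decoded secret content (hypothetical cluster values)
cat > secret.json <<'EOF'
{"clusters":[{"cluster_id":"uuid-1","cluster_endpoint":"http://10.0.0.1","cluster_secret":"s1"}]}
EOF

# Re-encode without line wraps, then verify decoding restores the original
base64 -w0 < secret.json > secret-encoded.json
base64 --decode < secret-encoded.json > secret-roundtrip.json
diff -q secret.json secret-roundtrip.json && echo "encoding OK"
```

If the round trip fails, the secret would be rejected by the CSI driver, so this check is a cheap safeguard before editing the Kubernetes secret.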
"},{"location":"deployments/kubernetes/install-csi/#using-multi-cluster","title":"Using Multi Cluster","text":""},{"location":"deployments/kubernetes/install-csi/#option-1-cluster-idbased-method-one-storageclass-per-cluster","title":"Option 1: Cluster ID\u2013Based Method (One StorageClass per Cluster)","text":"

In this approach, each simplyblock cluster has its own dedicated StorageClass that specifies which cluster to use for provisioning. This is ideal for setups where workloads are manually directed to specific clusters.

For example:

apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: simplyblock-csi-sc-cluster1\nprovisioner: csi.simplyblock.io\nparameters:\n  cluster_id: \"cluster-uuid-1\"\n  ... other parameters\nreclaimPolicy: Delete\nvolumeBindingMode: WaitForFirstConsumer\nallowVolumeExpansion: true\n

You can define another StorageClass for a different cluster:

apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: simplyblock-csi-sc-cluster2\nprovisioner: csi.simplyblock.io\nparameters:\n  cluster_id: \"cluster-uuid-2\"\n  ... other parameters\nreclaimPolicy: Delete\nvolumeBindingMode: WaitForFirstConsumer\nallowVolumeExpansion: true\n

Each StorageClass references a unique cluster_id. The CSI driver uses that ID to determine which simplyblock cluster to connect to.
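For instance, a PersistentVolumeClaim then selects the target cluster simply by naming the corresponding StorageClass. A sketch, reusing the class name from the example above (the claim name and size are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data                            # hypothetical claim name
spec:
  storageClassName: simplyblock-csi-sc-cluster1  # provisions on cluster-uuid-1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```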

"},{"location":"deployments/kubernetes/install-csi/#option-2-zone-aware-method-automatic-multi-cluster-selection","title":"Option 2: Zone-Aware Method (Automatic Multi-Cluster Selection)","text":"

This approach allows a single StorageClass to automatically select the appropriate simplyblock cluster based on the Kubernetes zone where the workload runs. It is recommended for multi-zone Kubernetes deployments that span multiple simplyblock clusters.

storageclass.zoneClusterMap

Sets the mapping between Kubernetes zones and simplyblock cluster IDs. Each zone is associated with one cluster.

storageclass.allowedTopologyZones

Sets the list of zones where the StorageClass is permitted to provision volumes. This ensures that scheduling aligns with the clusters defined in zoneClusterMap.

For example:

apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: simplyblock-csi-sc\nprovisioner: csi.simplyblock.io\nparameters:\n  zone_cluster_map: |\n    {\"us-east-1a\":\"cluster-uuid-1\",\"us-east-1b\":\"cluster-uuid-2\"}\n  ... other parameters\nreclaimPolicy: Delete\nvolumeBindingMode: WaitForFirstConsumer\nallowVolumeExpansion: true\nallowedTopologies:\n- matchLabelExpressions:\n  - key: topology.kubernetes.io/zone\n    values:\n      - us-east-1a\n      - us-east-1b\n

This method allows Kubernetes to automatically pick the right cluster based on the pod\u2019s scheduling zone.

"},{"location":"deployments/kubernetes/k8s-control-plane/","title":"Install Simplyblock Control Plane on Kubernetes","text":"Install CLI
pip install sbctl --upgrade\n

After installing the CLI, navigate to the Helm chart directory within the installed package:

cd /usr/local/lib/python3.9/site-packages/simplyblock_core/scripts/charts/\n

Then build the Helm dependencies and deploy the simplyblock control plane:

helm dependency build ./\nhelm upgrade --install sbcli --namespace simplyblock --create-namespace ./\n

Before running the helm install, you can edit the values.yaml file to match your specific configuration \u2014 for example, to set cluster parameters, storage options, or node selectors according to your environment.

Service Direction Source / Target Network Port Protocol(s) ICMP ingress control - ICMP Cluster API ingress storage, control, admin 80 TCP FoundationDB ingress storage, control 4500 TCP Cluster Control egress storage, control 8080-8890 TCP spdk-http-proxy egress storage, control 5000 TCP spdk-firewall-proxy egress storage, control 5001 TCP

Find and exec into the admin control pod (replace the pod name if different):

kubectl -n simplyblock exec -it simplyblock-admin-control-<uuid> -- bash\n
Install Control Plane
sbctl cluster create --mgmt-ip <WORKER_IP> --ha-type ha --mode kubernetes\n

Info

When using a load balancer, you need to add the additional parameters --ingress-host-source loadbalancer and --dns-name <LB_INGRESS_DNS>.

Additional parameters for the cluster create command can be found at Cluster Deployment Options.

"},{"location":"deployments/kubernetes/k8s-storage-plane/","title":"Install Simplyblock Storage Plane on Kubernetes","text":"

When installed on Kubernetes, simplyblock installations consist of three parts: the control plane, the storage nodes, and the CSI driver.

Info

In a Kubernetes deployment, not all Kubernetes workers have to become part of the storage cluster. Simplyblock uses node labels to identify Kubernetes workers that are deemed as storage hosting instances.

It is common to add dedicated Kubernetes worker nodes for storage to the same Kubernetes cluster. They can be separated into a different node pool and use a different host type. In this case, it is important to taint these Kubernetes workers accordingly to prevent other services from being scheduled on them.

"},{"location":"deployments/kubernetes/k8s-storage-plane/#retrieving-credentials","title":"Retrieving Credentials","text":"

Credentials are available via sbctl cluster get-secret from any of the control plane nodes. For further information on the command, see Retrieving a Cluster Secret.

First, the unique cluster id must be retrieved. Note down the cluster UUID of the cluster to access.

Retrieving the Cluster UUID
sudo sbctl cluster list\n

An example of the output is below.

Example output of a cluster listing
[demo@demo ~]# sbctl cluster list\n+--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+\n| UUID                                 | NQN                                                             | ha_type | tls   | mgmt nodes | storage nodes | Mod | Status |\n+--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+\n| 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a | nqn.2023-02.io.simplyblock:4502977c-ae2d-4046-a8c5-ccc7fa78eb9a | ha      | False | 1          | 4             | 1x1 | active |\n+--------------------------------------+-----------------------------------------------------------------+---------+-------+------------+---------------+-----+--------+\n

In addition, the cluster secret must be retrieved. Note down the cluster secret.

Retrieve the Cluster Secret
sbctl cluster get-secret <CLUSTER_UUID>\n

Retrieving the cluster secret will look similar to the following.

Example output of retrieving a cluster secret
[demo@demo ~]# sbctl cluster get-secret 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a\noal4PVNbZ80uhLMah2Bs\n
"},{"location":"deployments/kubernetes/k8s-storage-plane/#creating-a-storage-pool","title":"Creating a Storage Pool","text":"

Additionally, a storage pool is required. If a pool already exists, it can be reused. Otherwise, a new storage pool can be created as follows:

Create a Storage Pool
sbctl pool add <POOL_NAME> <CLUSTER_UUID>\n

The last line of a successful storage pool creation returns the new pool id.

Example output of creating a storage pool
[demo@demo ~]# sbctl pool add test 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a\n2025-03-05 06:36:06,093: INFO: Adding pool\n2025-03-05 06:36:06,098: INFO: {\"cluster_id\": \"4502977c-ae2d-4046-a8c5-ccc7fa78eb9a\", \"event\": \"OBJ_CREATED\", \"object_name\": \"Pool\", \"message\": \"Pool created test\", \"caused_by\": \"cli\"}\n2025-03-05 06:36:06,100: INFO: Done\nad35b7bb-7703-4d38-884f-d8e56ffdafc6 # <- Pool Id\n

Info

It is possible to configure QoS limits at the storage pool level. This limit collectively caps all volumes assigned to the pool rather than limiting them individually. Note that if pool-level QoS is active, setting volume-level QoS in the storage class is not allowed!

Example:

Create a Storage Pool with QoS Limits
sbctl pool add <POOL_NAME> <CLUSTER_UUID> --max-iops 10000 --max-rw-mb 500 --max-w-mb 100\n
"},{"location":"deployments/kubernetes/k8s-storage-plane/#labeling-nodes","title":"Labeling Nodes","text":"

Before the Helm Chart can be installed, it is required to label all Kubernetes worker nodes deemed as storage nodes.

It is also possible to label additional nodes at a later stage to add them to the storage cluster. However, expanding a storage cluster always requires at least two new nodes to be added as part of the same expansion operation.

Label the Kubernetes worker node
kubectl label nodes <NODE_NAME> io.simplyblock.node-type=simplyblock-storage-plane\n
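For dedicated storage workers, the taint mentioned earlier can be applied alongside the label. A sketch only: the taint key and effect shown here are assumptions, not a simplyblock convention, and the storage-plane pods must carry a matching toleration for this to work in your environment.

```shell
# Hypothetical: keep non-storage workloads off a dedicated storage worker
kubectl taint nodes <NODE_NAME> io.simplyblock/dedicated-storage=true:NoSchedule
```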
"},{"location":"deployments/kubernetes/k8s-storage-plane/#networking-configuration","title":"Networking Configuration","text":"

Multiple ports are required to be opened on storage node hosts.

Ports using the same source and target networks (VLANs) will not require any additional firewall settings.

Opening ports may be required between the control plane and storage networks as those typically reside on different VLANs.

Service Direction Source / Target Network Port(s) Protocol(s) ICMP ingress control - ICMP Storage node API ingress storage 5000 TCP spdk-firewall-proxy ingress storage 5001 TCP spdk-http-proxy ingress storage, control 8080-8180 TCP hublvol-nvmf-subsys-port ingress storage, control 9030-9059 TCP internal-nvmf-subsys-port ingress storage, control 9060-9099 TCP lvol-nvmf-subsys-port ingress storage, control 9100-9200 TCP FoundationDB egress storage 4500 TCP Control plane API egress control 80 TCP"},{"location":"deployments/kubernetes/k8s-storage-plane/#installing-csi-driver-and-storage-nodes-via-helm","title":"Installing CSI Driver and Storage Nodes via Helm","text":"

In the simplest deployment, compared to a pure Simplyblock CSI Driver installation, the deployment of a storage node via the Helm Chart requires only one additional parameter, --set storagenode.create=true:

Install the helm chart
CLUSTER_UUID=\"<UUID>\"\nCLUSTER_SECRET=\"<SECRET>\"\nCNTR_ADDR=\"<CONTROL-PLANE-ADDR>\"\nPOOL_NAME=\"<POOL-NAME>\"\nhelm repo add simplyblock-csi https://install.simplyblock.io/helm/csi\nhelm repo add simplyblock-controller https://install.simplyblock.io/helm/controller\nhelm repo update\n\n# Install Simplyblock CSI Driver and Storage Node API\nhelm install -n simplyblock \\\n    --create-namespace simplyblock \\\n    simplyblock-csi/spdk-csi \\\n    --set csiConfig.simplybk.uuid=${CLUSTER_UUID} \\\n    --set csiConfig.simplybk.ip=${CNTR_ADDR} \\\n    --set csiSecret.simplybk.secret=${CLUSTER_SECRET} \\\n    --set logicalVolume.pool_name=${POOL_NAME} \\\n    --set storagenode.create=true\n
Example output of the Simplyblock Kubernetes deployment
demo@demo ~> export CLUSTER_UUID=\"4502977c-ae2d-4046-a8c5-ccc7fa78eb9a\"\ndemo@demo ~> export CLUSTER_SECRET=\"oal4PVNbZ80uhLMah2Bs\"\ndemo@demo ~> export CNTR_ADDR=\"http://192.168.10.1/\"\ndemo@demo ~> export POOL_NAME=\"test\"\ndemo@demo ~> helm repo add simplyblock-csi https://install.simplyblock.io/helm/csi\n\"simplyblock-csi\" has been added to your repositories\ndemo@demo ~> helm repo add simplyblock-controller https://install.simplyblock.io/helm/controller\n\"simplyblock-controller\" has been added to your repositories\ndemo@demo ~> helm repo update\nHang tight while we grab the latest from your chart repositories...\n...Successfully got an update from the \"simplyblock-csi\" chart repository\n...Successfully got an update from the \"simplyblock-controller\" chart repository\nUpdate Complete. \u2388Happy Helming!\u2388\ndemo@demo ~> helm install -n simplyblock --create-namespace simplyblock simplyblock-csi/spdk-csi \\\n  --set csiConfig.simplybk.uuid=${CLUSTER_UUID} \\\n  --set csiConfig.simplybk.ip=${CNTR_ADDR} \\\n  --set csiSecret.simplybk.secret=${CLUSTER_SECRET} \\\n  --set logicalVolume.pool_name=${POOL_NAME}\nNAME: simplyblock-csi\nLAST DEPLOYED: Wed Mar  5 15:06:02 2025\nNAMESPACE: simplyblock\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\nNOTES:\nThe Simplyblock SPDK Driver is getting deployed to your cluster.\n\nTo check CSI SPDK Driver pods status, please run:\n\n  kubectl --namespace=simplyblock get pods --selector=\"release=simplyblock-csi\" --watch\ndemo@demo ~> kubectl --namespace=simplyblock get pods --selector=\"release=simplyblock-csi\" --watch\nNAME                   READY   STATUS    RESTARTS   AGE\nspdkcsi-controller-0   6/6     Running   0          30s\nspdkcsi-node-tzclt     2/2     Running   0          30s\n

There are a number of other Helm Chart parameters that are important for storage node deployment in hyper-converged mode. The most important ones are:

Parameter Description Default storagenode.ifname Sets the interface name of the management interface (traffic between storage nodes and control plane, see storage mgmt VLAN). Highly available ports and networks are required in production. While this value can be changed at a later point in time, it requires a storage node restart. eth0 storagenode.maxSize Sets the maximum utilized storage capacity of this storage node. A conservative setting is the expected cluster capacity. This setting has a significant impact on RAM demand, as 0.02% of maxSize is required in additional RAM. 150g storagenode.isolateCores Enables isolation of the cores used by simplyblock from other processes and the system, including IRQs, which can significantly increase performance. Core isolation requires a Kubernetes worker node restart after the deployment is completed. Changes are performed via a privileged container on the OS level (grub). false storagenode.dataNics Sets the interface name of the storage network(s). This includes traffic inside the storage cluster and between csi-nodes and storage nodes. Highly available ports and networks are required for production. storagenode.pciAllowed Sets the list of allowed NVMe PCIe addresses. <empty> storagenode.pciBlocked Sets the list of blocked NVMe PCIe addresses. <empty> storagenode.socketsToUse Sets the list of NUMA sockets to use. If a worker node has more than one NUMA socket, it is possible to deploy more than one simplyblock storage node per host, depending on the distribution of NVMe devices and NICs across NUMA sockets and the resource demand of other workloads. 1 storagenode.nodesPerSocket Sets the number of storage nodes to be deployed per NUMA socket. It is possible to deploy one or two storage nodes per socket. This improves performance if each NUMA socket has more than 32 cores. 1 storagenode.coresPercentage Sets the percentage of total cores (vCPUs) available to simplyblock storage node services. 
It must be ensured that the configured percentage yields at least 8 vCPUs per storage node. For example, if a host has 128 vCPUs on two NUMA sockets (64 each) and --storagenode.socketsToUse=2 and --storagenode.nodesPerSocket=1, at least 13% (as 13% * 64 > 8) must be set. Simplyblock does not use more than 32 vCPUs per storage node efficiently. <empty>
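The minimum percentage from the example above can be computed mechanically. A sketch of the arithmetic, using the host sizes assumed in the example (128 vCPUs, 2 NUMA sockets, 1 storage node per socket):

```shell
# Hypothetical host sizing from the example above
TOTAL_VCPUS=128
SOCKETS=2
NODES_PER_SOCKET=1

# vCPUs available per storage node
VCPUS_PER_NODE=$(( TOTAL_VCPUS / (SOCKETS * NODES_PER_SOCKET) ))

# Smallest whole percentage that yields at least 8 vCPUs per storage node
# (ceiling of 8 * 100 / VCPUS_PER_NODE, using integer arithmetic)
MIN_PCT=$(( (8 * 100 + VCPUS_PER_NODE - 1) / VCPUS_PER_NODE ))

echo "$MIN_PCT"   # prints 13, matching the example (13% of 64 > 8)
```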

Warning

The resources consumed by simplyblock are exclusively used and have to be aligned with resources required by other workloads. For further information, see Minimum System Requirements.

Info

The RAM requirement itself is split in between huge page memory and system memory. However, this is transparent to users.

Simplyblock takes care of allocating, reserving, and freeing huge pages as part of its overall RAM management.

The total amount of RAM required depends on the number of vCPUs used, the number of active logical volumes (Persistent Volume Claims or PVCs), and the utilized virtual storage on this node. This does not refer to the physical storage provided by the storage host, but to the storage connected via this storage node.

"},{"location":"deployments/kubernetes/openshift/","title":"OpenShift","text":"

When installing simplyblock on OpenShift, the process is very similar to Kubernetes, with one key difference: OpenShift requires explicitly granting the privileged Security Context Constraint (SCC) to service accounts to enable storage and SPDK operations.

Info

In OpenShift deployments, not all worker nodes must host storage components. Simplyblock uses node labels to identify nodes that participate in the storage cluster. You can isolate storage workloads on dedicated worker nodes or node pools.

"},{"location":"deployments/kubernetes/openshift/#prerequisites","title":"Prerequisites","text":"

Ensure your OpenShift cluster is operational and that you have administrator privileges.

Before deploying Simplyblock components, grant the required SCC permissions:

Grant SCC permissions
oc adm policy add-scc-to-group privileged system:serviceaccounts\n

This step is mandatory to allow SPDK and storage-related containers to run with the privileges required for NVMe device access.

"},{"location":"deployments/kubernetes/talos/","title":"Talos","text":"

Talos Linux\u00a0\u29c9 is a minimal Linux distribution optimized for Kubernetes. Built as an immutable distribution image, it provides a minimal attack surface but requires some changes to the image to run simplyblock.

Simplyblock requires a set of additional Linux kernel modules, as well as certain tools, to be available in the Talos image. That means a custom Talos image has to be built to run simplyblock. The following sections explain the required changes to make Talos compatible with simplyblock.

"},{"location":"deployments/kubernetes/talos/#required-kernel-modules-worker-node","title":"Required Kernel Modules (Worker Node)","text":"

On Kubernetes worker nodes, simplyblock requires a few kernel modules to be loaded.

Content of kernel-module-config.yaml
machine:\n  kernel:\n    modules:\n      - name: nbd \n      - name: uio_pci_generic\n      - name: vfio_pci\n      - name: vfio_iommu_type1\n
"},{"location":"deployments/kubernetes/talos/#huge-pages-reservations","title":"Huge Pages Reservations","text":"

Simplyblock requires huge pages memory to operate. The storage engine expects to find huge pages of 2 MiB page size. The required amount of huge pages depends on a number of factors.

To apply the change to Talos' worker nodes, a YAML configuration file with the following content is required. The number of pages is to be replaced with the number calculated above.

Content of huge-pages-config.yaml
machine:\n  sysctls:\n     vm.nr_hugepages: \"<number-of-pages>\"\n
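The page count itself is simple arithmetic: the desired reservation divided by the 2 MiB page size. A sketch, assuming a hypothetical reservation of 8 GiB (adjust to your own sizing):

```shell
# 2 MiB huge pages; reserve a hypothetical 8 GiB for simplyblock
HUGEPAGE_SIZE_MIB=2
DESIRED_GIB=8

# Number of pages to put into vm.nr_hugepages
NR_HUGEPAGES=$(( DESIRED_GIB * 1024 / HUGEPAGE_SIZE_MIB ))

echo "$NR_HUGEPAGES"   # prints 4096
```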

To activate the huge pages, the talosctl command should be used.

Enable Huge Pages in Talos
demo@demo ~> talosctl apply-config --nodes <worker_node_ip> \\\n    --file huge-pages-config.yaml -m reboot\ndemo@demo ~> talosctl service kubelet restart --nodes <worker_node_ip>\n
"},{"location":"deployments/kubernetes/talos/#required-talos-permissions","title":"Required Talos Permissions","text":"

Simplyblock's CSI driver requires connecting NVMe over Fabrics devices, as well as mounting and formatting them. Therefore, the CSI driver has to run as a privileged container. Hence, Talos must be configured to start simplyblock's CSI driver in privileged mode.

Talos allows overriding the pod security admission settings at a Kubernetes namespace level. To enable privileged mode and grant the required access to the simplyblock CSI driver, a specific simplyblock namespace with the appropriate security exemptions must be created:

Content of simplyblock-namespace.yaml
apiVersion: v1\nkind: Namespace\nmetadata:\n  name: simplyblock\n  labels:\n    pod-security.kubernetes.io/enforce: privileged\n    pod-security.kubernetes.io/enforce-version: latest\n    pod-security.kubernetes.io/audit: privileged\n    pod-security.kubernetes.io/audit-version: latest\n    pod-security.kubernetes.io/warn: privileged\n    pod-security.kubernetes.io/warn-version: latest\n

To enable the required permissions, apply the namespace configuration using kubectl.

Enable privileged mode for simplyblock
demo@demo ~> kubectl apply -f simplyblock-namespace.yaml\n
"},{"location":"deployments/kubernetes/volume-encryption/","title":"Volume Encryption","text":"

Simplyblock supports encryption of logical volumes (LVs) to protect data at rest, ensuring that sensitive information remains secure across the distributed storage cluster. Encryption is applied during volume creation as part of the storage class specification.

Encrypting Logical Volumes ensures that simplyblock storage meets data protection and compliance requirements, safeguarding sensitive workloads without compromising performance.

Warning

Encryption must be specified at the time of volume creation. Existing logical volumes cannot be retroactively encrypted.

"},{"location":"deployments/kubernetes/volume-encryption/#encrypting-volumes-with-simplyblock","title":"Encrypting Volumes with Simplyblock","text":"

Simplyblock supports the encryption of logical volumes. Internally, simplyblock utilizes the industry-proven crypto bdev\u00a0\u29c9 provided by SPDK to implement its encryption functionality.

The encryption uses an AES_XTS variable-length block cipher. This cipher requires two keys of 16 to 32 bytes each. The keys need to have the same length, meaning that if one key is 32 bytes long, the other one has to be 32 bytes, too.

Recommendation

Simplyblock strongly recommends two keys of 32 bytes.

"},{"location":"deployments/kubernetes/volume-encryption/#generate-random-keys","title":"Generate Random Keys","text":"

Simplyblock does not provide an integrated way to generate encryption keys, but recommends using the OpenSSL tool chain. For Kubernetes, the encryption key needs to be provided as base64. Hence, it's encoded right away.

To generate the two keys, the following command is run twice. The result must be stored for later.

Create an Encryption Key
openssl rand -hex 32 | base64 -w0\n
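A sketch tying both runs together and checking the length constraint from above: both keys must decode to the same length before being placed into the Kubernetes Secret.

```shell
# Generate two independent keys, base64-encoded for Kubernetes
KEY1=$(openssl rand -hex 32 | base64 -w0)
KEY2=$(openssl rand -hex 32 | base64 -w0)

# Sanity check: both decoded keys must have the same length
LEN1=$(echo "$KEY1" | base64 --decode | tr -d '\n' | wc -c)
LEN2=$(echo "$KEY2" | base64 --decode | tr -d '\n' | wc -c)
echo "$LEN1 $LEN2"
```

Note that `openssl rand -hex 32` emits 64 hex characters, so both decoded values are 64 bytes of hex text; what matters for the AES_XTS cipher is that the two key lengths match.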
"},{"location":"deployments/kubernetes/volume-encryption/#create-the-kubernetes-secret","title":"Create the Kubernetes Secret","text":"

Next up, a Kubernetes Secret is created, providing the two just-created encryption keys.

Create a Kubernetes Secret Resource
apiVersion: v1\nkind: Secret\nmetadata:\n  name: my-encryption-keys\ndata:\n  crypto_key1: YzIzYzllY2I4MWJmYmY1ZDM5ZDA0NThjNWZlNzQwNjY2Y2RjZDViNWE4NTZkOTA5YmRmODFjM2UxM2FkZGU4Ngo=\n  crypto_key2: ZmFhMGFlMzZkNmIyODdhMjYxMzZhYWI3ZTcwZDEwZjBmYWJlMzYzMDRjNTBjYTY5Nzk2ZGRlZGJiMDMwMGJmNwo=\n

The Kubernetes Secret can be used for one or more logical volumes. Using different encryption keys, multiple tenants can be secured with an additional isolation layer against each other.

"},{"location":"deployments/kubernetes/volume-encryption/#storageclass-configuration","title":"StorageClass Configuration","text":"

A new Kubernetes StorageClass needs to be created, or an existing one needs to be configured. To use encryption on a persistent volume claim level, the storage class has to be set for encryption.

Example StorageClass
apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: my-encrypted-volumes\nprovisioner: csi.simplyblock.io\nparameters:\n  encryption: \"True\" # This is important!\n  ... other parameters\nreclaimPolicy: Delete\nvolumeBindingMode: Immediate\nallowVolumeExpansion: true\n
"},{"location":"deployments/kubernetes/volume-encryption/#create-a-persistentvolumeclaim","title":"Create a PersistentVolumeClaim","text":"

When requesting a logical volume through a Kubernetes PersistentVolumeClaim, the storage class and the secret resources have to be connected to the PVC. When picked up, simplyblock will automatically collect the keys and create the logical volume as a fully encrypted logical volume.

Create an encrypting PersistentVolumeClaim
apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  annotations:\n    simplybk/secret-name: my-encryption-keys # Encryption keys\n  name: my-encrypted-volume-claim\nspec:\n  storageClassName: my-encrypted-volumes # StorageClass\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 200Gi\n
"},{"location":"deployments/openstack/","title":"OpenStack Integration","text":"

Info

This driver is not yet part of the official OpenStack support matrix.

We are working on getting it there.

"},{"location":"deployments/openstack/#features-supported","title":"Features Supported","text":"

The following features are supported:

  - Thin provisioning
  - Creating a volume
  - Resizing (extending) a volume
  - Deleting a volume
  - Snapshotting a volume
  - Reverting to a snapshot
  - Cloning a volume (copy-on-write)
  - Extending an attached volume
  - Multi-attaching a volume
  - Volume migration (driver-supported)
  - QoS
  - Active/active HA support

"},{"location":"deployments/openstack/#deployment","title":"Deployment","text":"

Depending on the fabric, it is necessary to load the respective Linux kernel modules on the compute nodes and the controller:

Load NVMe/TCP on Ubuntu or Debian
sudo apt-get install -y linux-modules-extra-$(uname -r)\nsudo modprobe nvme_tcp\n
Load NVMe/TCP on RHEL, Rocky or Alma
sudo modprobe nvme_tcp\n
If you need the RoCE/RDMA fabric, or both fabrics, also run:

Load NVMe/RoCE on Ubuntu or Debian
sudo apt-get install -y linux-modules-extra-$(uname -r)\nsudo modprobe nvme_rdma\n
Load NVMe/RoCE on RHEL, Rocky or Alma
sudo modprobe nvme_rdma\n

Update globals.yaml
enable_cinder: \"yes\"\n...\n#This is a fork of the cinder-volume driver container including Simplyblock:\ncinder_volume_image: \"docker.io/simplyblock/cinder-volume\"\n#If Simplyblock is the only Cinder Storage Backend:\nskip_cinder_backend_check: \"yes\"\n
Update Cinder Override for Simplyblock Backend Located in /etc/kolla/config/cinder.conf
[DEFAULT]\ndebug = True\n# Add Simplyblock to enabled_backends list\nenabled_backends = simplyblock\n\n[simplyblock]\nvolume_driver = cinder.volume.drivers.simplyblock.driver.SimplyblockDriver\nvolume_backend_name = simplyblock\nsimplyblock_endpoint = <simplyblock_endpoint>\nsimplyblock_cluster_uuid = <simplyblock_cluster_uuid>\nsimplyblock_cluster_secret = <simplyblock_cluster_secret>\nsimplyblock_pool_name = <simplyblock_pool_name>\n
Rerun Kolla-Ansible Deploy Command for Cinder
kolla-ansible deploy -i <inventory_file> --tags cinder\n
"},{"location":"deployments/proxmox/","title":"Proxmox Integration","text":"

Proxmox Virtual Environment (Proxmox VE) is an open-source server virtualization platform that integrates KVM-based virtual machines and LXC containers with a web-based management interface.

Simplyblock seamlessly integrates with Proxmox through its storage plugin. The storage plugin enables the automatic provisioning of storage volumes for Proxmox's KVM virtual machines and LXC containers. Simplyblock is fully integrated into the Proxmox user interface.

After being deployed, virtual machine and container images can be provisioned to simplyblock logical volumes, inheriting all performance and reliability characteristics. Volumes provisioned using the simplyblock Proxmox integration are automatically managed and provided to the hypervisor in an ad-hoc fashion. The Proxmox UI and command line interface can manage the volume lifecycle.

"},{"location":"deployments/proxmox/#install-simplyblock-for-proxmox","title":"Install Simplyblock for Proxmox","text":"

Simplyblock's Proxmox storage plugin can be installed from the simplyblock apt repository. To register the simplyblock apt repository, simplyblock offers a script to handle the repository registration automatically.

Info

All the following commands require root permissions for execution. It is recommended to log in as root or open a root shell using sudo su.

Automatically register the Simplyblock Debian Repository
curl https://install.simplyblock.io/install-debian-repository | bash\n

If a manual registration is preferred, the repository public key must be downloaded and made available to apt. This key is used for signature verification.

Install the Simplyblock Public Key
curl -o /etc/apt/keyrings/simplyblock.gpg https://install.simplyblock.io/simplyblock.key\n

Afterward, the repository itself needs to be registered with apt. The following line does so.

Register the Simplyblock Debian Repository
echo 'deb [signed-by=/etc/apt/keyrings/simplyblock.gpg] https://install.simplyblock.io/debian stable main' | \\\n    tee /etc/apt/sources.list.d/simplyblock.list\n
"},{"location":"deployments/proxmox/#install-the-simplyblock-proxmox-package","title":"Install the Simplyblock-Proxmox Package","text":"

After the registration of the repository, an apt update will refresh all available package information and make the simplyblock-proxmox package available. The update must not show any errors related to the simplyblock apt repository.

With the updated repository information, an apt install simplyblock-proxmox installs the simplyblock storage plugin.

Install the Simplyblock Proxmox Integration
apt update\napt install simplyblock-proxmox\n

Now, register a simplyblock storage pool with Proxmox. The new Proxmox storage can have an arbitrary name and multiple simplyblock storage pools can be registered as long as their Proxmox names are different.

Enable Simplyblock as a Storage Provider
pvesm add simplyblock <NAME> \\\n    --entrypoint=<CONTROL_PLANE_ADDR> \\\n    --cluster=<CLUSTER_ID> \\\n    --secret=<CLUSTER_SECRET> \\\n    --pool=<STORAGE_POOL_NAME>\n
| Parameter | Description |
| --- | --- |
| NAME | The name of the storage pool in Proxmox. |
| CONTROL_PLANE_ADDR | The API address of the simplyblock control plane. |
| CLUSTER_ID | The simplyblock storage cluster id. The cluster id can be found using sbctl cluster list. |
| CLUSTER_SECRET | The simplyblock storage cluster secret. The cluster secret can be retrieved using sbctl cluster get-secret. |
| STORAGE_POOL_NAME | The simplyblock storage pool name to attach. |
"},{"location":"deployments/proxmox/#after-installation","title":"After Installation","text":"

In the Proxmox user interface, a storage of type simplyblock is now available.

The hypervisor is now configured and can use a simplyblock storage cluster as a storage backend.

"},{"location":"important-notes/","title":"Important Notes","text":"

Simplyblock is a high-performance yet reliable distributed block storage optimized for Kubernetes that is compatible with any bare metal and virtualized Linux environments. It also provides integrations with other environments, such as Proxmox.

To enable the successful operation of your new simplyblock cluster, this section defines some initial conventions and terminology when working with this documentation.

"},{"location":"important-notes/acronyms/","title":"Acronyms & Abbreviations","text":"Acronym or Abbreviation Explanation API Application Programming Interface AWS Amazon Web Services CIDR Classless Inter-Domain Routing CLI Command Line Interface COW Copy On Write CP Control Plane CSI Container Storage Interface DMA Direct Memory Access EA Erasure Coding HA High Availability HTTP Hypertext Transfer Protocol ID Identifier IO Input-Output IOMMU Input-Output Memory Management Unit IP Internet Protocol K8s Kubernetes LV Logical Volume MFT Maximum Tolerable Failure NIC Network Interface Card NQN NVMe Qualified Name NVMe Non-Volatile Memory Express NVMe-oF NVMe over Fabrics NVMe/RoCE NVMe over RDMA on Converged Ethernet NVMe/TCP NVMe over TCP OS Operating System PV Persistent Volume PVC Persistent Volume Claim QOS Quality of Service RAID Redundant Array of Independent Disks RDMA Remote Direct Memory Access ROW Redirect On Write ROX Read Only Many RWO Read Write Once RWX Read Write Many SC Storage Class SDK Software Development Kit SDS Software Defined Storage SP Storage Plane SPDK Storage Performance Development Kit SSD Solid State Drive SSL Secure Socket Layer TCP Transmission Control Protocol TLS Transport Layer Security UDP User Datagram Protocol UUID Universally Unique Identifier VM Virtual Machine"},{"location":"important-notes/contributing/","title":"Contributing","text":""},{"location":"important-notes/contributing/#contributing-to-simplyblock-documentation","title":"Contributing to Simplyblock Documentation","text":""},{"location":"important-notes/contributing/#overview","title":"Overview","text":"

Simplyblock's documentation is publicly available, and we welcome contributions from the community to improve clarity, fix errors, and enhance the overall quality of our documentation. While simplyblock itself is not open source, our documentation is publicly hosted on GitHub\u00a0\u29c9. We encourage users to provide feedback, report typos, suggest improvements, and submit fixes for documentation inconsistencies.

"},{"location":"important-notes/contributing/#how-to-contribute","title":"How to Contribute","text":"

The simplyblock documentation is built using mkdocs\u00a0\u29c9, specifically using the mkdocs-material\u00a0\u29c9 variant.

Changes to the documentation can be made by changing or adding the necessary Markdown files.

"},{"location":"important-notes/contributing/#1-provide-feedback-or-report-issues","title":"1. Provide Feedback or Report Issues","text":"

If you notice any inaccuracies, typos, missing information, or outdated content, you can submit an issue on our GitHub repository:

  1. Navigate to the Simplyblock Documentation GitHub Repository\u00a0\u29c9.
  2. Click on the Issues tab.
  3. Click New Issue and provide a clear description of the problem or suggestion.
  4. Submit the issue, and our team will review it.
"},{"location":"important-notes/contributing/#2-make-edits-and-submit-a-pull-request-pr","title":"2. Make Edits and Submit a Pull Request (PR)","text":"

If you'd like to make direct changes to the documentation, follow these steps:

  1. Fork the Repository

     Visit Simplyblock Documentation GitHub\u00a0\u29c9 and click Fork to create your own copy of the repository.

  2. Clone the Repository

     Clone your fork to your local machine:

    git clone https://github.com/YOUR_USERNAME/documentation.git\ncd documentation\n

  3. Create a New Branch

     Always create a new branch for your changes:

    git checkout -b update-docs\n

  4. Make Changes

     Edit the relevant Markdown (.md) files using a text editor or IDE. The documentation files can be found in the /docs directory. Ensure that formatting follows existing conventions.

  5. Commit and Push Your Changes

     Commit your changes with a clear message:

    git commit -m \"Fix typo in installation guide\"\n

     Push the changes to your fork:

    git push origin update-docs\n

  6. Create a Pull Request (PR)

     Navigate to the original simplyblock documentation repository, click New Pull Request, and select your branch. Provide a concise description of the changes and submit the PR. Our team will review and merge accepted contributions.
"},{"location":"important-notes/contributing/#contribution-guidelines","title":"Contribution Guidelines","text":"
  • Ensure all content remains clear, concise, and professional.
  • Follow Markdown syntax conventions used throughout the documentation.
  • Keep changes focused on documentation improvements (not product functionality).
  • Be respectful and constructive in all discussions and contributions.
"},{"location":"important-notes/contributing/#getting-in-touch","title":"Getting in Touch","text":"

If you have questions about contributing, feel free to open an issue or contact us via the simplyblock support channels.

"},{"location":"important-notes/documentation-conventions/","title":"Documentation Conventions","text":""},{"location":"important-notes/documentation-conventions/#feature-stages","title":"Feature Stages","text":"

Features in simplyblock are released when they reach general availability. However, some features are made available earlier to receive feedback from testers. Those features must be explicitly enabled and are marked accordingly in the documentation. Features without a specific label are considered ready for production.

The documentation uses the following feature stage labels:

  • General Availability: This is the default stage if nothing else is defined for the feature. In this stage, the feature is considered ready for production.
  • Technical Preview: The feature is provided for testing and feedback acquisition. It is not regarded as stable or complete. Breaking changes may occur, which could break backward compatibility. Features in this stage are not considered ready for production. Features in this stage need to be specifically enabled before use.
"},{"location":"important-notes/documentation-conventions/#admonitions-call-outs","title":"Admonitions (Call-Outs)","text":""},{"location":"important-notes/documentation-conventions/#notes","title":"Notes","text":"

Notes include additional information that may be interesting but not crucial.

Note

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

"},{"location":"important-notes/documentation-conventions/#recommendations","title":"Recommendations","text":"

Recommendations include best practices and recommendations.

Recommendation

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

"},{"location":"important-notes/documentation-conventions/#infos","title":"Infos","text":"

Information boxes include background and links to additional information.

Info

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

"},{"location":"important-notes/documentation-conventions/#warnings","title":"Warnings","text":"

Warnings contain crucial information that should be considered before proceeding.

Warning

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

"},{"location":"important-notes/documentation-conventions/#dangers","title":"Dangers","text":"

Dangers contain crucial information that can lead to harmful consequences, such as data loss and irreversible damage.

Danger

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

"},{"location":"important-notes/known-issues/","title":"Known Issues","text":""},{"location":"important-notes/known-issues/#kubernetes","title":"Kubernetes","text":"
  • Currently, it is not possible to resize a logical volume clone. The resize command does not fail, and the new size is shown by lsblk, but remounting the filesystem with the resize option fails.
"},{"location":"important-notes/terminology/","title":"Terminology","text":""},{"location":"important-notes/terminology/#storage-related-terms","title":"Storage Related Terms","text":""},{"location":"important-notes/terminology/#storage-cluster","title":"Storage Cluster","text":"

A simplyblock storage cluster is a group of interconnected storage nodes that work together to provide a scalable, fault-tolerant, and high-performance storage system. Unlike traditional single-node storage solutions, storage clusters distribute data across multiple nodes, ensuring redundancy, load balancing, and resilience against hardware failures. To optimize data availability and efficiency, these clusters can be configured using different architectures, including replication and erasure coding. Storage clusters are commonly used in cloud storage, high-performance computing (HPC), and enterprise data centers, enabling seamless scalability and improved data accessibility across distributed environments.

"},{"location":"important-notes/terminology/#storage-node","title":"Storage Node","text":"

A storage node in a simplyblock distributed storage cluster is a physical or virtual machine that contributes storage resources to the cluster. It provides a portion of the overall storage capacity and participates in the data distribution, redundancy, and retrieval processes. In simplyblock, each logical volume is attached to particular primary and secondary storage nodes via the NVMe-oF protocol. These nodes run the in-memory data services for the volume on the hot data path and provide access to the underlying data. The data stored on such a volume is distributed within the cluster following a defined placement logic.

"},{"location":"important-notes/terminology/#storage-pool","title":"Storage Pool","text":"

A storage pool in simplyblock groups logical volumes and assigns them optional quotas (caps) of capacity, IOPS, and read-write throughput. Storage pools are defined on a cluster level and can span logical volumes across multiple storage nodes. Therefore, storage pools implement a tenant concept.

"},{"location":"important-notes/terminology/#storage-device","title":"Storage Device","text":"

A storage device is a physical or virtualized NVMe drive in simplyblock, but not a partition. It is identified by its PCIe address and serial number. Simplyblock currently supports a wide range of different types of NVMe drives with varying characteristics of performance, features, and capacities.

"},{"location":"important-notes/terminology/#nvme-non-volatile-memory-express","title":"NVMe (Non-Volatile Memory Express)","text":"

NVMe (Non-Volatile Memory Express) is a high-performance storage protocol explicitly designed for flash-based storage devices like SSDs, leveraging the PCIe (Peripheral Component Interconnect Express) interface for ultra-low latency and high throughput. Unlike traditional protocols such as SATA or SAS, NVMe takes advantage of parallelism and multiple queues, significantly improving data transfer speeds and reducing CPU overhead. It is widely used in enterprise storage, cloud computing, and high-performance computing (HPC) environments, where speed and efficiency are critical. NVMe is also the foundation for NVMe-over-Fabrics (NVMe-oF), which extends its benefits across networked storage systems, enhancing scalability and flexibility in distributed environments.

"},{"location":"important-notes/terminology/#nvme-of-nvme-over-fabrics","title":"NVMe-oF (NVMe over Fabrics)","text":"

NVMe-oF (NVMe over Fabrics) is an extension of the NVMe (Non-Volatile Memory Express) protocol that enables high-performance, low-latency access to remote NVMe storage devices over network fabrics such as TCP, RDMA (RoCE, iWARP), and Fibre Channel (FC). Unlike traditional networked storage protocols, NVMe-oF maintains the efficiency and parallelism of direct-attached NVMe storage while allowing disaggregation of compute and storage resources. This architecture improves scalability, resource utilization, and flexibility in cloud, enterprise, and high-performance computing (HPC) environments. NVMe-oF is a key technology in modern software-defined and disaggregated storage infrastructures, providing fast and efficient remote storage access.

"},{"location":"important-notes/terminology/#nvmetcp-nvme-over-tcp","title":"NVMe/TCP (NVMe over TCP)","text":"

NVMe/TCP (NVMe over TCP) is a transport protocol that extends NVMe-over-Fabrics (NVMe-oF) using standard TCP/IP networks to enable high-performance, low-latency access to remote NVMe storage. By leveraging existing Ethernet infrastructure, NVMe/TCP eliminates the need for specialized networking hardware such as RDMA (RoCE or iWARP) or Fibre Channel (FC), making it a cost-effective and easily deployable solution for cloud, enterprise, and data center storage environments. It maintains the efficiency of NVMe, providing scalable, high-throughput, and low-latency remote storage access while ensuring broad compatibility with modern network architectures.

"},{"location":"important-notes/terminology/#nvmeroce-nvme-over-rdma-over-converged-ethernet","title":"NVMe/RoCE (NVMe over RDMA over Converged Ethernet)","text":"

NVMe/RoCE (NVMe over RoCE) is a high-performance storage transport protocol that extends NVMe-over-Fabrics (NVMe-oF) using RDMA over Converged Ethernet (RoCE) to enable ultra-low-latency and high-throughput access to remote NVMe storage devices. By leveraging Remote Direct Memory Access (RDMA), NVMe/RoCE bypasses the CPU for data transfers, reducing latency and improving efficiency compared to traditional TCP-based storage protocols. This makes it ideal for high-performance computing (HPC), enterprise storage, and latency-sensitive applications such as financial trading and AI workloads. NVMe/RoCE requires lossless Ethernet networking and specialized NICs to fully utilize its performance advantages.

"},{"location":"important-notes/terminology/#multipathing","title":"Multipathing","text":"

Multipathing is a storage networking technique that enables multiple physical paths between a compute system and a storage device to improve redundancy, load balancing, and fault tolerance. Multipathing enhances performance and reliability by using multiple connections, ensuring continuous access to storage even if one path fails. It is commonly implemented in Fibre Channel (FC), iSCSI, and NVMe-oF (including NVMe/TCP and NVMe/RoCE) environments, where high availability and optimized data transfer are critical.

"},{"location":"important-notes/terminology/#management-node","title":"Management Node","text":"

A management node is a containerized component that orchestrates, monitors, and controls the distributed storage cluster. It forms part of the control plane, managing cluster-wide configurations, provisioning logical volumes, handling metadata operations, and ensuring overall system health. Management nodes facilitate communication between storage nodes and client applications, enforcing policies such as access control, data placement, and fault tolerance. They also provide an interface for administrators to interact with the storage system via the Simplyblock CLI or API, enabling seamless deployment, scaling, and maintenance of the storage infrastructure.

"},{"location":"important-notes/terminology/#distributed-erasure-coding","title":"Distributed Erasure Coding","text":"

Distributed Erasure coding is a data protection technique used in distributed storage systems to provide fault tolerance and redundancy while minimizing storage overhead. It works by breaking data into k data fragments and generating m parity fragments using mathematical algorithms. These k + m fragments are then distributed across multiple storage nodes, allowing the system to reconstruct lost or corrupted data from any k available fragments. Compared to traditional replication, erasure coding offers greater storage efficiency while maintaining high availability, making it ideal for cloud storage, object storage, and high-performance computing (HPC) environments where durability and cost-effectiveness are critical.

Simplyblock supports all combinations of k = 1,2,4 and m = 1,2. The erasure coding implementation uses highly performance-optimized algorithms specific to the selected schema.
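The k data + m parity scheme can be sketched with the simplest case, m = 1, where the parity fragment is the XOR of the data fragments. This is an illustrative sketch only, not simplyblock's performance-optimized implementation:

```python
# Illustrative sketch (not simplyblock's implementation): XOR-based parity
# for k data fragments and m = 1 parity fragment. If any single fragment
# is lost, it can be rebuilt from the remaining k fragments.

def encode(data_fragments):
    """Return the parity fragment as the XOR of all data fragments."""
    parity = bytes(len(data_fragments[0]))
    for frag in data_fragments:
        parity = bytes(a ^ b for a, b in zip(parity, frag))
    return parity

def reconstruct(available_fragments):
    """Rebuild the single missing fragment by XOR-ing the k available ones."""
    missing = bytes(len(available_fragments[0]))
    for frag in available_fragments:
        missing = bytes(a ^ b for a, b in zip(missing, frag))
    return missing

data = [b"frag-one", b"frag-two", b"fragthre"]  # k = 3 data fragments
parity = encode(data)                           # m = 1 parity fragment

# Simulate losing data fragment 1; rebuild it from the survivors.
survivors = [data[0], data[2], parity]
assert reconstruct(survivors) == data[1]
```

Real erasure coding schemes with m = 2 use more general algebra (e.g., Reed-Solomon) so that any two lost fragments can be recovered, but the storage-efficiency argument is the same: k + m fragments protect k fragments of data.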

"},{"location":"important-notes/terminology/#replication","title":"Replication","text":"

Replication in storage is the process of creating and maintaining identical copies of data across multiple storage devices or nodes to ensure fault tolerance, high availability, and disaster recovery. Replication can occur synchronously, where data is copied in real-time to ensure consistency, or asynchronously, where updates are delayed to optimize performance. It is commonly used in distributed storage systems, cloud storage, and database management to protect against hardware failures and data loss. By maintaining redundant copies, replication enhances data resilience, load balancing, and accessibility, making it a fundamental technique for enterprise and cloud-scale storage solutions. Simplyblock supports synchronous replication.

"},{"location":"important-notes/terminology/#raid-redundant-array-of-independent-disks","title":"RAID (Redundant Array of Independent Disks)","text":"

RAID (Redundant Array of Independent Disks) is a data storage technology that combines multiple physical drives into a single logical unit to improve performance, fault tolerance, or both. RAID configurations vary based on their purpose: RAID 0 (striping) enhances speed but offers no redundancy, RAID 1 (mirroring) duplicates data for high availability, and RAID 5, 6, and 10 use combinations of striping and parity to balance performance and fault tolerance. RAID is widely used in enterprise storage, servers, and high-performance computing to protect against drive failures and optimize data access. It can be implemented in hardware controllers or software-defined storage solutions, depending on system requirements.

"},{"location":"important-notes/terminology/#quality-of-service","title":"Quality of Service","text":"

Quality of Service (QoS) refers to the ability to define and enforce performance guarantees for storage workloads by controlling key metrics such as IOPS (Input/Output Operations Per Second), throughput, and latency. QoS ensures that different applications receive appropriate levels of performance, preventing resource contention in multi-tenant environments. By setting limits and priorities for logical volumes (LVs), simplyblock allows administrators to allocate storage resources efficiently, ensuring critical workloads maintain consistent performance even under high demand. This capability is essential for optimizing storage operations, improving reliability, and meeting service-level agreements (SLAs) in distributed cloud-native environments. In simplyblock, it is possible to limit (cap) the IOPS or throughput of individual logical volumes or entire storage pools, and additionally to create QoS classes that provide a fair relative resource allocation (IOPS and/or throughput) to each class. Logical volumes can be assigned to these classes.
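An IOPS cap as described above is commonly enforced with a token bucket. The following is a minimal sketch of that mechanism, not simplyblock's actual QoS engine:

```python
# Illustrative sketch (not simplyblock's QoS engine): a token bucket that
# caps a volume at a fixed IOPS budget. Each I/O consumes one token;
# tokens refill at the configured rate.

class TokenBucket:
    def __init__(self, iops_limit):
        self.capacity = iops_limit      # burst size: one second worth of IOPS
        self.tokens = float(iops_limit)
        self.rate = iops_limit          # tokens refilled per second

    def refill(self, elapsed_seconds):
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_seconds)

    def try_io(self):
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                    # I/O is throttled until the next refill

bucket = TokenBucket(iops_limit=100)
served = sum(bucket.try_io() for _ in range(250))
print(served)  # only 100 of 250 back-to-back I/Os pass before throttling
```

A throughput cap works the same way with tokens denominated in bytes instead of I/O operations.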

"},{"location":"important-notes/terminology/#spdk-storage-performance-development-kit","title":"SPDK (Storage Performance Development Kit)","text":"

Storage Performance Development Kit (SPDK) is an open-source set of libraries and tools designed to optimize high-performance, low-latency storage applications by bypassing traditional kernel-based I/O processing. SPDK leverages user-space and polled-mode drivers to eliminate context switching and interrupts, significantly reducing CPU overhead and improving throughput. It is particularly suited for NVMe storage, NVMe-over-Fabrics (NVMe-oF), and iSCSI target acceleration, making it a key technology in software-defined storage solutions. By providing a highly efficient framework for storage processing, SPDK enables modern storage architectures to achieve high IOPS, reduced latency, and better resource utilization in cloud and enterprise environments.

"},{"location":"important-notes/terminology/#volume-snapshot-copy-on-write-reverse","title":"Volume Snapshot (Copy-On-Write, Reverse)","text":"

A volume snapshot is a point-in-time copy of a storage volume, file system, or virtual machine that captures its state without duplicating the entire data set. Snapshots enable rapid data recovery, backup, and versioning by preserving only the changes made since the last snapshot.

In the world of storage, different snapshot concepts exist. Simplyblock uses copy-on-write snapshots, which means that taking the snapshot is an instant operation since no data has to be moved.

Later on, volumes can be instantly reverted to a snapshot and copy-on-write volumes can be instantly created (cloned) from a snapshot.

Due to the entirely distributed nature of the underlying storage in simplyblock, dependent snapshots and copy-on-write clones do not affect the performance of the originating volume or each other.

"},{"location":"important-notes/terminology/#volume-clone","title":"Volume Clone","text":"

A volume clone is an exact, fully independent copy of a storage volume, virtual machine, or dataset that can be used for testing, development, backup, or deployment purposes. Unlike snapshots, which capture a point-in-time state and depend on the original data, a clone is a complete duplication that can operate separately without relying on the source. Cloning is commonly used in enterprise storage, cloud environments, and containerized applications to create quick, reproducible environments for workloads without affecting the original data. Storage systems often use thin cloning to optimize space by sharing unchanged data blocks between the original and the clone, reducing storage overhead.

"},{"location":"important-notes/terminology/#cow-copy-on-write","title":"CoW (Copy-on-Write)","text":"

Copy-on-Write (COW) is an efficient data management technique used in snapshots, cloning, and memory management to optimize storage usage and performance. Instead of immediately duplicating data, COW defers copying until a modification is made, ensuring that only changed data blocks are written to a new location. This approach minimizes storage overhead, speeds up snapshot creation, and reduces unnecessary data duplication. COW is widely implemented in storage virtualization and containerized environments, enabling fast, space-efficient backups, cloning, and data protection while maintaining high system performance.
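The deferred-copy behavior can be sketched with a toy block map; this is a conceptual illustration, not simplyblock's on-disk format:

```python
# Illustrative sketch (not simplyblock's on-disk format): a copy-on-write
# block map. A snapshot shares all blocks with the volume; only a write
# allocates a fresh block, which is why taking a snapshot moves no data.

class CowVolume:
    def __init__(self, blocks):
        self.block_map = dict(enumerate(blocks))  # logical index -> block data

    def snapshot(self):
        snap = CowVolume([])
        snap.block_map = dict(self.block_map)     # copy references, not data
        return snap

    def write(self, index, data):
        self.block_map[index] = data              # new block; snapshot keeps the old one

vol = CowVolume([b"AAAA", b"BBBB"])
snap = vol.snapshot()                             # instant: no data copied
vol.write(1, b"CCCC")
print(vol.block_map[1], snap.block_map[1])        # b'CCCC' b'BBBB'
```

Unmodified blocks (index 0 here) remain shared between the volume and its snapshot, which is where the space savings come from.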

"},{"location":"important-notes/terminology/#kubernetes-related-terms","title":"Kubernetes Related Terms","text":""},{"location":"important-notes/terminology/#kubernetes","title":"Kubernetes","text":"

Kubernetes (K8s)\u00a0\u29c9 is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications across clusters of machines. Initially developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF)\u00a0\u29c9, Kubernetes provides a robust framework for load balancing, self-healing, storage orchestration, and automated rollouts and rollbacks. It manages application workloads using Pods, Deployments, Services, and Persistent Volumes (PVs), ensuring scalability and resilience. By abstracting underlying infrastructure, Kubernetes enables organizations to efficiently run containerized applications across on-premises, cloud, and hybrid environments, making it a cornerstone of modern cloud-native computing.

"},{"location":"important-notes/terminology/#kubernetes-csi-container-storage-interface","title":"Kubernetes CSI (Container Storage Interface)","text":"

The Kubernetes Container Storage Interface (CSI)\u00a0\u29c9 is a standardized API enabling external storage providers to integrate their storage solutions with Kubernetes. CSI allows Kubernetes to dynamically provision, attach, mount, and manage Persistent Volumes (PVs) across different storage backends without requiring changes to the Kubernetes core. Using a CSI driver, storage vendors can offer block and file storage to Kubernetes workloads, supporting advanced features like snapshotting, cloning, and volume expansion. CSI enhances Kubernetes\u2019 flexibility by enabling seamless integration with cloud, on-premises, and software-defined storage solutions, making it the de facto method for managing storage in containerized environments.

"},{"location":"important-notes/terminology/#pod","title":"Pod","text":"

A Pod in Kubernetes is the smallest and most basic deployable unit, representing a single instance of a running process in a cluster. A Pod can contain one or multiple containerized applications that share networking, storage, and runtime configurations, enabling efficient communication and resource sharing. Kubernetes schedules and manages Pods, ensuring they are deployed on suitable worker nodes based on resource availability and constraints. Since Pods are ephemeral, they are often managed by higher-level controllers like Deployments, StatefulSets, or DaemonSets to maintain availability and scalability. Pods facilitate scalable, resilient, and cloud-native application deployments across diverse infrastructure environments.

"},{"location":"important-notes/terminology/#persistent-volume","title":"Persistent Volume","text":"

A Persistent Volume (PV) is a cluster-wide Kubernetes storage resource that provides durable and independent storage for Pods, allowing data to persist beyond the lifecycle of individual containers. Unlike ephemeral storage, which is tied to a Pod\u2019s runtime, a PV is provisioned either statically by an administrator or dynamically using StorageClasses. Applications request storage by creating Persistent Volume Claims (PVCs), which Kubernetes binds to an available PV based on capacity and access requirements. Persistent Volumes support different access modes, such as ReadWriteOnce (RWO), ReadOnlyMany (ROX), and ReadWriteMany (RWX), and are backed by various storage solutions, including local disks, network-attached storage (NAS), and cloud-based storage services.

"},{"location":"important-notes/terminology/#persistent-volume-claim","title":"Persistent Volume Claim","text":"

A Persistent Volume Claim (PVC) is a request for Kubernetes storage made by a Pod, allowing it to dynamically or statically access a Persistent Volume (PV). PVCs specify storage requirements such as size, access mode (ReadWriteOnce, ReadOnlyMany, or ReadWriteMany), and storage class. Kubernetes automatically binds a PVC to a suitable PV based on these criteria, abstracting the underlying storage details from applications. This separation enables dynamic storage provisioning, ensuring that Pods can seamlessly consume persistent storage resources without needing direct knowledge of the storage infrastructure. When a PVC is deleted, its associated PV handling depends on its reclaim policy (Retain, Recycle, or Delete), determining whether the storage is preserved, cleared, or removed.

"},{"location":"important-notes/terminology/#storage-class","title":"Storage Class","text":"

A StorageClass is a Kubernetes abstraction that defines different types of storage available within a cluster, enabling dynamic provisioning of Persistent Volumes (PVs). It allows administrators to specify storage requirements such as performance characteristics, replication policies, and backend storage providers (e.g., cloud block storage, network file systems, or distributed storage systems). Each StorageClass includes a provisioner, which determines how volumes are created and parameters that define specific configurations for the underlying storage system. By referencing a StorageClass in a Persistent Volume Claim (PVC), users can automatically provision storage that meets their application's needs without manually pre-allocating PVs, streamlining storage management in cloud-native environments.

"},{"location":"important-notes/terminology/#network-related-terms","title":"Network Related Terms","text":""},{"location":"important-notes/terminology/#tcp-transmission-control-protocol","title":"TCP (Transmission Control Protocol)","text":"

Transmission Control Protocol (TCP) is a core communication protocol in the Internet Protocol (IP) suite that ensures reliable, ordered, and error-checked data delivery between devices over a network. TCP operates at the transport layer and establishes a connection-oriented communication channel using a three-way handshake process to synchronize data exchange. It segments large data streams into smaller packets, ensures their correct sequencing, and retransmits lost packets to maintain data integrity. TCP is widely used in applications requiring stable and accurate data transmission, such as web browsing, email, and file transfers, making it a fundamental protocol for modern networked systems.

"},{"location":"important-notes/terminology/#udp-user-datagram-protocol","title":"UDP (User Datagram Protocol)","text":"

User Datagram Protocol (UDP) is a lightweight, connectionless communication protocol in the Internet Protocol (IP) suite that enables fast, low-latency data transmission without guaranteeing delivery, order, or error correction. Unlike Transmission Control Protocol (TCP), UDP does not establish a connection before sending data, making it more efficient for applications prioritizing speed over reliability. It is commonly used in real-time communications, streaming services, online gaming, and DNS lookups, where occasional data loss is acceptable in exchange for reduced latency and overhead.

"},{"location":"important-notes/terminology/#ip-internet-protocol-ipv4-ipv6","title":"IP (Internet Protocol), IPv4, IPv6","text":"

Internet Protocol (IP) is the fundamental networking protocol that enables devices to communicate over the Internet and private networks by assigning unique IP addresses to each device. Operating at the network layer of the Internet Protocol suite, IP is responsible for routing and delivering data packets from a source to a destination based on their addresses. It functions in a connectionless manner, meaning each packet is sent independently and may take different paths to reach its destination. IP exists in two primary versions: IPv4, which uses 32-bit addresses, and IPv6, which uses 128-bit addresses for expanded address space. IP works alongside transport layer protocols like TCP and UDP to ensure effective data transmission across networks.

"},{"location":"important-notes/terminology/#netmask","title":"Netmask","text":"

A netmask is a numerical value used in IP networking to define a subnet's range of IP addresses. It works by masking a portion of an IP address to distinguish the network part from the host part. A netmask consists of a series of binary ones (1s) followed by zeros (0s), where the ones represent the network portion and the zeros indicate the host portion. Common netmasks include 255.255.255.0 (/24) for standard subnets and 255.255.0.0 (/16) for larger networks. Netmasks are essential in subnetting, routing, and IP address allocation, ensuring efficient traffic management and communication within networks.

"},{"location":"important-notes/terminology/#cidr-classless-inter-domain-routing","title":"CIDR (Classless Inter-Domain Routing)","text":"

Classless Inter-Domain Routing (CIDR) is a method for allocating and managing IP addresses more efficiently than the traditional class-based system. CIDR uses variable-length subnet masking (VLSM) to define IP address ranges with flexible subnet sizes, reducing wasted addresses and improving routing efficiency. CIDR notation represents an IP address followed by a slash (/) and a number indicating the number of significant bits in the subnet mask (e.g., 192.168.1.0/24 means the first 24 bits define the network, leaving 8 bits for host addresses). Widely used in modern networking and the internet, CIDR helps optimize IP address distribution and enhance routing aggregation, reducing the size of global routing tables.

"},{"location":"important-notes/terminology/#hyper-converged","title":"Hyper-Converged","text":"

Hyper-converged refers to an IT infrastructure model that integrates compute, storage, and networking into a single, software-defined system. Unlike traditional architectures that rely on separate hardware components for each function, hyper-converged infrastructure (HCI) leverages virtualization and centralized management to streamline operations, improve scalability, and reduce complexity. This approach enhances performance, fault tolerance, and resource efficiency by distributing workloads across multiple nodes, allowing seamless scaling by adding more nodes. HCI is widely used in cloud environments, virtual desktop infrastructure (VDI), and enterprise data centers for its ease of deployment, automation capabilities, and cost-effectiveness.

"},{"location":"important-notes/terminology/#disaggregated","title":"Disaggregated","text":"

Disaggregated refers to an IT architecture approach where compute, storage, and networking resources are separated into independent components rather than tightly integrated within the same physical system. In disaggregated storage, for example, storage resources are managed independently of compute nodes, allowing for flexible scaling, improved resource utilization, and reduced hardware dependencies. This contrasts with traditional or hyper-converged architectures, where these resources are combined. Disaggregated architectures are widely used in cloud computing, high-performance computing (HPC), and modern data centers to enhance scalability, cost-efficiency, and operational flexibility while optimizing performance for dynamic workloads.

"},{"location":"maintenance-operations/","title":"Operations","text":"

Ensuring data resilience and maintaining cluster health are critical aspects of managing a simplyblock storage deployment. This section covers best practices for backing up and restoring individual volumes or entire clusters, helping organizations safeguard their data against failures, corruption, or accidental deletions.

Additionally, simplyblock provides comprehensive monitoring capabilities using built-in Prometheus and Grafana for real-time visualization of cluster health, I/O statistics, and performance metrics.

This section details how to configure and use these monitoring tools, ensuring optimal performance, early issue detection, and proactive storage management in cloud-native and enterprise environments.

"},{"location":"maintenance-operations/cluster-upgrade/","title":"Upgrading a Cluster","text":"

Simplyblock clusters consist of two independent parts: a control plane with management nodes, and a storage plane with storage nodes. A single control plane can be used to manage multiple storage planes.

The control plane and storage planes can be updated independently. It is, however, not recommended to run an upgraded control plane without upgrading the storage planes.

Recommendation

If multiple storage planes are connected to a single control plane, it is recommended to upgrade the control plane first.

Upgrading the control plane and storage cluster is currently not an online operation and requires downtime. Planning the upgrade as part of a maintenance window is recommended. Upgrades are expected to become online operations in an upcoming release.

"},{"location":"maintenance-operations/cluster-upgrade/#upgrading-the-cli","title":"Upgrading the CLI","text":"

Before starting a cluster upgrade, the CLI (sbctl) must be updated on all storage and control plane nodes.

This can be achieved using the same command used during the initial installation. It is important, though, to provide the --upgrade parameter to pip to ensure an upgrade actually happens.

sudo pip install sbctl --upgrade\n
"},{"location":"maintenance-operations/cluster-upgrade/#upgrading-a-control-plane","title":"Upgrading a Control Plane","text":"

This section outlines the process of upgrading the control plane. An upgrade introduces new versions of the management and monitoring services.

To upgrade a control plane, the following command must be executed:

sudo sbctl cluster update <CLUSTER_ID> --cp-only true\n

After issuing the command, the individual management services will be upgraded and restarted on all management nodes.

"},{"location":"maintenance-operations/cluster-upgrade/#upgrading-a-storage-plane","title":"Upgrading a Storage Plane","text":"

To upgrade the storage plane, the following steps are performed for each of the storage nodes. From the control plane, issue the following commands.

Warning

Ensure that not all storage nodes are offline at the same time. Storage nodes must be updated in a round-robin fashion. Between nodes, it is important to wait until the cluster is in ACTIVE state again and has finished the REBALANCING task.

sudo sbctl storage-node suspend <NODE_ID>\nsudo sbctl storage-node shutdown <NODE_ID> \n

If the shutdown doesn't complete by itself, you may safely force a shutdown using the --force parameter.

sudo sbctl storage-node shutdown <NODE_ID> --force \n

Ensure the node has become offline before continuing.

sudo sbctl storage-node list \n

Next, a redeployment must be executed on the storage node itself. To achieve that, SSH into the storage node and run the following command.

sudo sbctl storage-node deploy\n

Finally, the new storage node deployment can be restarted from the control plane.

sudo sbctl --dev storage-node restart <NODE_ID> --spdk-image <UPGRADE_SPDK_IMAGE>\n

Note

The SPDK image to upgrade to can be found in the env_var file on the storage node, located at /usr/local/lib/python3.9/site-packages/simplyblock_core/env_var.

Once the node is restarted, wait until the cluster has stabilized. Depending on the capacity of a storage node, this can take a few minutes. The status of the cluster can be checked via the cluster listing or by listing the tasks and checking their progress.

sudo sbctl cluster list\nsudo sbctl cluster list-tasks <CLUSTER_ID>\n
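The waiting step between node restarts can be scripted. The following is a minimal sketch, assuming `sbctl cluster list` prints the cluster status in the fourth whitespace-separated column (this may differ between versions, so verify the `awk` expression against your installation); the helper names `cluster_status` and `wait_for_active` are hypothetical.

```shell
# Sketch: wait for the cluster to return to ACTIVE state between node
# restarts. The awk column ($4) is an assumption about the output format
# of `sbctl cluster list`; verify it against your installation.

cluster_status() {
    # Thin wrapper around sbctl so the parsing can be adapted (or stubbed).
    sudo sbctl cluster list | awk -v id="$1" '$0 ~ id { print tolower($4) }'
}

wait_for_active() {
    local cluster_id="$1"
    local status
    status="$(cluster_status "$cluster_id")"
    while [ "$status" != "active" ]; do
        echo "cluster $cluster_id is '$status', waiting..."
        sleep 10
        status="$(cluster_status "$cluster_id")"
    done
    echo "cluster $cluster_id is active"
}
```

Between each storage node upgrade, `wait_for_active <CLUSTER_ID>` would then be invoked before suspending the next node.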
"},{"location":"maintenance-operations/find-secondary-node/","title":"Finding the Secondary Node","text":"

Simplyblock, in high-availability mode, creates two connections per logical volume: a primary and a secondary connection.

The secondary connection will be used in case of issues or failures of the primary storage node which owns the logical volume.

For debugging purposes, sometimes it is useful to find out which host is used as the secondary for a specific primary storage node. This can be achieved using the command line tool sbctl by asking for the details of the primary storage node and grepping for the secondary id.

Find secondary for a primary
sbctl storage-node get <NODE_ID> | grep secondary_node_id\n
"},{"location":"maintenance-operations/manual-restarting-nodes/","title":"Stopping and Manually Restarting a Storage Node","text":"

There are a few reasons to manually restart a storage node:

  • After a storage node became unavailable, the auto-restart did not work
  • A cluster upgrade
  • A planned storage node maintenance

Critical

There is an auto-restart functionality, which restarts a storage node in case the monitoring service detects an issue with that specific node. This can be the case if one of the containers exited, after a reboot of the host, or because of an internal node error which causes the management interface to become unresponsive. The auto-restart functionality retries multiple times. It will not work in one of the following cases:

  • The cluster is suspended (e.g. two or more storage nodes are offline)
  • The RPC interface is responsive and the container is up, but the storage node has another health issue
  • The host or Docker service is not available or hanging (e.g. a network issue)
  • Too many retries (e.g. because there is a problem with the lvolstore recovering some of the logical volumes)

In these cases, a manual restart is required.

"},{"location":"maintenance-operations/manual-restarting-nodes/#shutdown-of-storage-nodes","title":"Shutdown of Storage Nodes","text":"

Warning

Nodes can only be restarted from offline state!

It is important to ensure that the cluster is not in a degraded state and all other nodes are online before shutting down a storage node for maintenance or upgrades! Otherwise, a loss of availability (I/O interruption) may occur!

Suspending a storage node and then shutting it down:

Shutdown storage node
sbctl storage-node suspend <NODE_ID> \nsbctl storage-node shutdown <NODE_ID> \n

If that does not work, it is okay to forcefully shut down the storage node.

Shutdown storage node forcefully
sbctl storage-node shutdown <NODE_ID> --force\n
"},{"location":"maintenance-operations/manual-restarting-nodes/#storage-node-in-offline-state","title":"Storage Node in Offline State","text":"

It is important to note that while a storage node is in offline state, the cluster is degraded. Write and read performance can be impacted, and if another node goes offline, I/O will be interrupted. Therefore, it is recommended to keep nodes in offline state for as short a time as possible!

If a longer maintenance window (hours to weeks) is required, it is recommended to migrate the storage node to another host for the time being. This alternative host can be without NVMe devices. Node migration is entirely automated. Later the storage node can be migrated back to its original host.

"},{"location":"maintenance-operations/manual-restarting-nodes/#restarting-a-storage-node","title":"Restarting a Storage Node","text":"

A storage node can be restarted using the following command:

Restarting storage node
sbctl storage-node restart <NODE_ID> \n

In rare cases, the restart may hang. If this happens, it is okay to forcefully shut down and restart the storage node:

Restarting storage node forcefully
sbctl storage-node restart <NODE_ID> --force \n
"},{"location":"maintenance-operations/manual-restarting-nodes/#restarting-docker-service","title":"Restarting Docker Service","text":"

Warning

This applies only to disaggregated storage nodes running under Docker (non-Kubernetes setups).

If there is a problem with the entire Docker service on a host, the Docker service may require a restart. In such a case, auto-restart will not be able to automatically self-heal the storage node. This happens because the container responsible for self-healing and auto-restarting (SNodeAPI) itself does not respond anymore.

Restarting docker service
sudo systemctl restart docker --force\n

After restarting the Docker service, the auto-restart will start to self-heal the storage node after a short delay. A manual restart of the storage node is not required.

"},{"location":"maintenance-operations/migrating-storage-node/","title":"Migrating a Storage Node","text":"

Simplyblock storage clusters are designed as always-on. That means that a storage node migration is an online operation that doesn't require explicit maintenance windows or storage downtime.

"},{"location":"maintenance-operations/migrating-storage-node/#storage-node-migration","title":"Storage Node Migration","text":"

Migrating a storage node is a three-step process. First, the new storage node is pre-deployed. Then, the old storage node must be shut down properly and restarted (migrated) with the new storage node's API address. Finally, the new storage node is made the primary storage node.

Warning

Between each process step, it is required to wait for the storage node migration tasks to complete. Otherwise, there may be an impact on the system's performance or, worse, a risk of data loss.

As part of the process, the existing storage node id will be moved to the new host machine. All logical volumes allocated on the old storage node will be moved to the new storage node and will automatically be reconnected.
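The three stages map onto the following command sequence. This is an outline only: the placeholders follow the conventions used below, each stage's full parameter list is given in its corresponding section, and the migration tasks must be allowed to complete between stages.

```shell
# Stage 1: pre-deploy the new storage node on the new host
sbctl storage-node configure --max-lvol=<MAX_LVOL> --max-size=<MAX_SIZE>
sbctl storage-node deploy

# Stage 2: shut down the old node, then restart it against the new host's API
sbctl storage-node shutdown <NODE_ID> --force
sbctl storage-node restart <NODE_ID> --node-addr=<NEW_NODE_IP>:5000

# Stage 3: promote the new node to primary once migration tasks have completed
sbctl storage-node make-primary <NODE_ID>
```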

"},{"location":"maintenance-operations/migrating-storage-node/#first-stage-storage-node-deployment","title":"First-Stage Storage Node Deployment","text":"

To install the first stage of a storage node, the installation guide for the selected environment should be followed.

The process will diverge after executing the initial deployment command sbctl storage-node deploy. If the command finishes successfully, resume from the next section of this page.

  • storage nodes in kubernetes
  • storage nodes on Bare Metal or Virtualized Linux
"},{"location":"maintenance-operations/migrating-storage-node/#preparing-the-new-storage-host","title":"Preparing the New Storage Host","text":"

The new storage host must be prepared before a storage node can be migrated. It must fulfill the pre-requisites for a storage node according to the installation documentation for the selected installation method.

To prepare the new storage host, the following commands must be executed.

Preparing the configuration
sbctl storage-node configure \\\n    --max-lvol=<MAX_LVOL> \\\n    --max-size=<MAX_SIZE> \\\n    [--nodes-per-socket=<NUM_OF_NODES>] \n
Preparing the instance
sbctl storage-node deploy [--isolate-cores --ifname=<IFNAME>] \n

The full list of parameters for either command can be found in the CLI documentation.

"},{"location":"maintenance-operations/migrating-storage-node/#restart-old-storage-node","title":"Restart Old Storage Node","text":"

Warning

Before migrating the storage node on a storage host, the old storage node must be put into offline state.

If the storage node is not yet offline, it can be forced into offline state using the following command.

Shutdown storage node on old instance
sbctl storage-node shutdown <NODE_ID> --force\n

To start the migration process of logical volumes, the old storage node needs to be restarted with the new storage node's API address.

In this example, it is assumed that the new storage node's IP address is 192.168.10.100. The IP address must be changed according to the real-world setup.

Danger

Providing the wrong IP address can lead to service interruption and data loss.

To restart the node, the following command must be run:

Restarting a storage node to initiate the migration
sbctl storage-node restart <NODE_ID> --node-addr=<NEW_NODE_IP>:5000\n

Warning

The parameter --node-addr expects the API endpoint of the new storage node. This API is reachable on port 5000. It must be ensured that the given parameter is the new IP address and the port, separated by a colon.

Example output of the node restart
demo@cp-1 ~> sbctl storage-node restart 788c3686-9d75-4392-b0ab-47798fd4a3c1 --node-addr 192.168.10.64:5000\n2025-04-02 13:24:26,785: INFO: Restarting storage node\n2025-04-02 13:24:26,796: INFO: Setting node state to restarting\n2025-04-02 13:24:26,807: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"STATUS_CHANGE\", \"object_name\": \"StorageNode\", \"message\": \"Storage node status changed from: unreachable to: in_restart\", \"caused_by\": \"monitor\"}\n2025-04-02 13:24:26,812: INFO: Sending event updates, node: 788c3686-9d75-4392-b0ab-47798fd4a3c1, status: in_restart\n2025-04-02 13:24:26,843: INFO: Sending to: f4b37b6c-6e36-490f-adca-999859747eb4\n2025-04-02 13:24:26,859: INFO: Sending to: 71c31962-7313-4317-8330-9f09a3e77a72\n2025-04-02 13:24:26,870: INFO: Sending to: 93a812f9-2981-4048-a8fa-9f39f562f1aa\n2025-04-02 13:24:26,893: INFO: Restarting on new node with ip: 192.168.10.64:5000\n2025-04-02 13:24:27,037: INFO: Restarting Storage node: 192.168.10.64\n2025-04-02 13:24:27,097: INFO: Restarting SPDK\n...\n2025-04-02 13:24:40,012: INFO: creating subsystem nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:13945596-4fbc-46a5-bbb1-ebe4d3e2af26\n2025-04-02 13:24:40,025: INFO: creating subsystem nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:2c593f82-d96c-4eb7-8d1c-30c534f6592d\n2025-04-02 13:24:40,037: INFO: creating subsystem nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:e3d2d790-4d14-4875-a677-0776335e4588\n2025-04-02 13:24:40,048: INFO: creating subsystem nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:1086d1bf-e77f-4ddf-b374-3575cfd68d30\n2025-04-02 13:24:40,414: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"StorageNode\", \"message\": \"Port blocked: 9091\", \"caused_by\": \"cli\"}\n2025-04-02 13:24:40,494: INFO: Add BDev to subsystem\n2025-04-02 13:24:40,495: INFO: 
1\n2025-04-02 13:24:40,495: INFO: adding listener for nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:13945596-4fbc-46a5-bbb1-ebe4d3e2af26 on IP 10.10.10.64\n2025-04-02 13:24:40,499: INFO: Add BDev to subsystem\n2025-04-02 13:24:40,499: INFO: 1\n2025-04-02 13:24:40,500: INFO: adding listener for nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:e3d2d790-4d14-4875-a677-0776335e4588 on IP 10.10.10.64\n2025-04-02 13:24:40,503: INFO: Add BDev to subsystem\n2025-04-02 13:24:40,504: INFO: 1\n2025-04-02 13:24:40,504: INFO: adding listener for nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:2c593f82-d96c-4eb7-8d1c-30c534f6592d on IP 10.10.10.64\n2025-04-02 13:24:40,507: INFO: Add BDev to subsystem\n2025-04-02 13:24:40,508: INFO: 1\n2025-04-02 13:24:40,509: INFO: adding listener for nqn.2023-02.io.simplyblock:a84537e2-62d8-4ef0-b2e4-8462b9e8ea96:lvol:1086d1bf-e77f-4ddf-b374-3575cfd68d30 on IP 10.10.10.64\n2025-04-02 13:24:41,861: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"StorageNode\", \"message\": \"Port allowed: 9091\", \"caused_by\": \"cli\"}\n2025-04-02 13:24:41,894: INFO: Done\nSuccess\n
"},{"location":"maintenance-operations/migrating-storage-node/#make-new-storage-node-primary","title":"Make new Storage Node Primary","text":"

After the migration has successfully finished, the new storage node must be made the primary storage node for the owned set of logical volumes.

This can be initiated using the following command:

Make the new storage node the primary
sbctl storage-node make-primary <NODE_ID>\n

The following is the example output.

Example output of primary change
demo@cp-1 ~> sbctl storage-node make-primary 788c3686-9d75-4392-b0ab-47798fd4a3c1\n2025-04-02 13:25:02,220: INFO: Adding device 65965029-4ab3-44b9-a9d4-29550e6c14ae\n2025-04-02 13:25:02,251: INFO: bdev already exists alceml_65965029-4ab3-44b9-a9d4-29550e6c14ae\n2025-04-02 13:25:02,252: INFO: bdev already exists alceml_65965029-4ab3-44b9-a9d4-29550e6c14ae_PT\n2025-04-02 13:25:02,266: INFO: subsystem already exists True\n2025-04-02 13:25:02,267: INFO: bdev already added to subsys alceml_65965029-4ab3-44b9-a9d4-29550e6c14ae_PT\n2025-04-02 13:25:02,285: INFO: Setting device online\n2025-04-02 13:25:02,301: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"NVMeDevice\", \"message\": \"Device created: 65965029-4ab3-44b9-a9d4-29550e6c14ae\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,305: INFO: Make other nodes connect to the node devices\n2025-04-02 13:25:02,383: INFO: Connecting to node 71c31962-7313-4317-8330-9f09a3e77a72\n2025-04-02 13:25:02,384: INFO: bdev found remote_alceml_197c2d40-d39a-4a10-84eb-41c68a6834c7_qosn1\n2025-04-02 13:25:02,385: INFO: bdev found remote_alceml_5202854e-e3b3-4063-b6b9-9a83c1bbefe9_qosn1\n2025-04-02 13:25:02,386: INFO: bdev found remote_alceml_15c5f6de-63b6-424c-b4c0-49c3169c0135_qosn1\n2025-04-02 13:25:02,386: INFO: Connecting to node 93a812f9-2981-4048-a8fa-9f39f562f1aa\n2025-04-02 13:25:02,439: INFO: Connecting to node f4b37b6c-6e36-490f-adca-999859747eb4\n2025-04-02 13:25:02,440: INFO: bdev found remote_alceml_0544ef17-6130-4a79-8350-536c51a30303_qosn1\n2025-04-02 13:25:02,441: INFO: bdev found remote_alceml_e9d69493-1ce8-4386-af1a-8bd4feec82c6_qosn1\n2025-04-02 13:25:02,442: INFO: bdev found remote_alceml_5cc0aed8-f579-4a4c-9c31-04fb8d781af8_qosn1\n2025-04-02 13:25:02,443: INFO: Connecting to node 93a812f9-2981-4048-a8fa-9f39f562f1aa\n2025-04-02 13:25:02,493: INFO: Connecting to node f4b37b6c-6e36-490f-adca-999859747eb4\n2025-04-02 13:25:02,494: INFO: bdev found 
remote_alceml_0544ef17-6130-4a79-8350-536c51a30303_qosn1\n2025-04-02 13:25:02,494: INFO: bdev found remote_alceml_e9d69493-1ce8-4386-af1a-8bd4feec82c6_qosn1\n2025-04-02 13:25:02,495: INFO: bdev found remote_alceml_5cc0aed8-f579-4a4c-9c31-04fb8d781af8_qosn1\n2025-04-02 13:25:02,495: INFO: Connecting to node 71c31962-7313-4317-8330-9f09a3e77a72\n2025-04-02 13:25:02,496: INFO: bdev found remote_alceml_197c2d40-d39a-4a10-84eb-41c68a6834c7_qosn1\n2025-04-02 13:25:02,496: INFO: bdev found remote_alceml_5202854e-e3b3-4063-b6b9-9a83c1bbefe9_qosn1\n2025-04-02 13:25:02,497: INFO: bdev found remote_alceml_15c5f6de-63b6-424c-b4c0-49c3169c0135_qosn1\n2025-04-02 13:25:02,667: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: 773ae420-3491-4ea6-aaf4-b7b1103132f6\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,675: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: 95eaf69f-6926-454e-a023-8d9341f7c4c6\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,682: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: 0a0f7942-46d7-46b2-9dc6-c5787bc3691e\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,690: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: 0f10c95e-937b-4e9b-99ca-e13815ae3578\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,698: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: fb36c4c7-d128-4a43-894f-50fb406bab30\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,707: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", 
\"object_name\": \"JobSchedule\", \"message\": \"task created: d5480f1f-e113-49ab-8c9d-3663e7ba512b\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,717: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: 8e910437-7957-4701-b626-5dffce0284dc\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,727: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: 919fceb4-ee48-4c72-96b0-a4367b8d0f67\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,737: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: da076017-c0ba-4e5b-8bcd-7748fa56305e\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,748: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: fa43687f-33ff-486d-8460-2b07bbc18cff\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,757: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: e53431ce-c7c9-40a9-8e11-4dafefce79d8\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,768: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: 38e320ca-1fd1-4f8e-9ef1-2defa50f1d22\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,813: INFO: Adding device 7e5145e7-d8fc-4d60-8af1-3f5015cb3021\n2025-04-02 13:25:02,837: INFO: bdev already exists alceml_7e5145e7-d8fc-4d60-8af1-3f5015cb3021\n2025-04-02 13:25:02,837: INFO: bdev already exists alceml_7e5145e7-d8fc-4d60-8af1-3f5015cb3021_PT\n2025-04-02 13:25:02,851: INFO: subsystem already exists True\n2025-04-02 13:25:02,852: INFO: bdev already added to 
subsys alceml_7e5145e7-d8fc-4d60-8af1-3f5015cb3021_PT\n2025-04-02 13:25:02,879: INFO: Setting device online\n2025-04-02 13:25:02,893: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"NVMeDevice\", \"message\": \"Device created: 7e5145e7-d8fc-4d60-8af1-3f5015cb3021\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:02,897: INFO: Make other nodes connect to the node devices\n2025-04-02 13:25:02,968: INFO: Connecting to node 71c31962-7313-4317-8330-9f09a3e77a72\n2025-04-02 13:25:02,969: INFO: bdev found remote_alceml_197c2d40-d39a-4a10-84eb-41c68a6834c7_qosn1\n2025-04-02 13:25:02,970: INFO: bdev found remote_alceml_5202854e-e3b3-4063-b6b9-9a83c1bbefe9_qosn1\n2025-04-02 13:25:02,971: INFO: bdev found remote_alceml_15c5f6de-63b6-424c-b4c0-49c3169c0135_qosn1\n2025-04-02 13:25:02,971: INFO: Connecting to node 93a812f9-2981-4048-a8fa-9f39f562f1aa\n...\n2025-04-02 13:25:10,255: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: a4692e1d-a527-44f7-8a86-28060eb466cf\", \"caused_by\": \"cli\"}\n2025-04-02 13:25:10,277: INFO: {\"cluster_id\": \"a84537e2-62d8-4ef0-b2e4-8462b9e8ea96\", \"event\": \"OBJ_CREATED\", \"object_name\": \"JobSchedule\", \"message\": \"task created: bab06208-bd27-4002-bc7b-dd92cf7b9b66\", \"caused_by\": \"cli\"}\nTrue\n

At this point, the old storage node is automatically removed from the cluster, and the storage node id is taken over by the new storage node. Any operation on the old storage node, such as an OS reinstall, can be safely executed.

"},{"location":"maintenance-operations/node-affinity/","title":"Configure Node Affinity","text":"

Simplyblock features node affinity, sometimes also referred to as data locality. This feature ensures that storage volumes are physically co-located on storage or Kubernetes worker nodes running the corresponding workloads. This minimizes network latency and maximizes I/O performance by keeping data close to the application. Ideal for latency-sensitive workloads, node affinity enables smarter, faster, and more efficient storage access in hyper-converged and hybrid environments.

Info

Node affinity is only available with hyper-converged or hybrid setups.

Node affinity does not sacrifice fault tolerance: parity data is still distributed to other storage cluster nodes, enabling transparent failover in case of a failure, or spill-over when the locally available storage runs out of capacity.

"},{"location":"maintenance-operations/node-affinity/#enabling-node-affinity","title":"Enabling Node Affinity","text":"

To use node affinity, the storage cluster needs to be created with node affinity activated. When node affinity is enabled for a logical volume, it will influence how the data distribution algorithm will handle read and write requests.

To enable node affinity at creation time of the cluster, the --enable-node-affinity parameter needs to be added:

Enabling node affinity when the cluster is created
sbctl cluster create \\\n    --ifname=<IF_NAME> \\\n    --ha-type=ha \\\n    --enable-node-affinity # <- this is important\n

To see all available parameters for cluster creation, see Cluster Create.

Once the cluster has been created with node affinity enabled, logical volumes can be created with node affinity, which will always attempt to co-locate the data with the requested storage node.

"},{"location":"maintenance-operations/node-affinity/#create-a-node-affine-logical-volume","title":"Create a Node Affine Logical Volume","text":"

When creating a logical volume, it is possible to provide a host id (storage node UUID) to request the storage cluster to co-locate the volume with this storage node. This configuration will have no influence on storage clusters without node affinity enabled.

To create a co-located logical volume, the parameter --host-id needs to be added to the creation command:

Create a node affine logical volume
sbctl volume add <NAME> <SIZE> <POOL> \\\n    --host-id=<HOST_ID> \\\n    ... # other parameters\n

To see all available parameters for a logical volume creation, see Logical Volume Creation.

The storage node UUID (or host id) can be found using the sbctl storage-node list command.

List all storage nodes in a storage cluster
sbctl storage-node list --cluster-id=<CLUSTER_ID>\n
"},{"location":"maintenance-operations/reconnect-nvme-device/","title":"Reconnecting Logical Volume","text":"

After outages of storage nodes, primary and secondary NVMe over Fabrics connections may need to be re-established. With integrations such as simplyblock's Kubernetes CSI driver and the Proxmox integration, this is automatically handled.

With plain Linux clients, the connections have to be reconnected manually. This is especially important when a storage node is unavailable for more than 60 seconds (by default).

"},{"location":"maintenance-operations/reconnect-nvme-device/#reconnect-a-missing-nvme-controller","title":"Reconnect a Missing NVMe Controller","text":"

To reconnect the NVMe controllers for the logical volume, the normal nvme connect commands are executed again. This will immediately reconnect missing controllers and connection paths.

Retrieve connection strings
sbctl volume connect <VOLUME_ID>\n
Example output for connection string retrieval
[demo@demo ~]# sbctl volume connect 82e587c5-4a94-42a1-86e5-a5b8a6a75fc4\nsudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=60 --nr-io-queues=6 --keep-alive-tmo=5 --transport=tcp --traddr=192.168.10.112 --trsvcid=9100 --nqn=nqn.2023-02.io.simplyblock:0f2c4cb0-a71c-4830-bcff-11112f0ee51a:lvol:82e587c5-4a94-42a1-86e5-a5b8a6a75fc4\nsudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=60 --nr-io-queues=6 --keep-alive-tmo=5 --transport=tcp --traddr=192.168.10.113 --trsvcid=9100 --nqn=nqn.2023-02.io.simplyblock:0f2c4cb0-a71c-4830-bcff-11112f0ee51a:lvol:82e587c5-4a94-42a1-86e5-a5b8a6a75fc4\n
"},{"location":"maintenance-operations/reconnect-nvme-device/#increase-loss-timeout","title":"Increase Loss Timeout","text":"

Alternatively, depending on the environment, it is possible to increase the timeout after which Linux assumes the NVMe controller to be lost and stops with reconnection attempts.

To increase the timeout, the parameter --ctrl-loss-tmo can be increased. The value is the number of seconds until the Linux kernel stops the reconnection attempt and removes the controller from the list of valid multipath routes.
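For example, the connect commands from the previous section can be re-issued with a larger timeout. The value 600 (ten minutes) is illustrative only; all other parameters must match those returned by `sbctl volume connect`.

```shell
# Reconnect with a 10-minute controller-loss timeout (600 is an example
# value; the default shown in this documentation is 60 seconds). Replace
# the placeholders with the values from `sbctl volume connect`.
sudo nvme connect \
    --reconnect-delay=2 \
    --ctrl-loss-tmo=600 \
    --nr-io-queues=6 \
    --keep-alive-tmo=5 \
    --transport=tcp \
    --traddr=<NODE_IP> \
    --trsvcid=9100 \
    --nqn=<SUBSYSTEM_NQN>
```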

"},{"location":"maintenance-operations/replacing-storage-node/","title":"Replacing a Storage Node","text":"

A simplyblock storage cluster is designed to be always up. Hence, operations such as extending a cluster or replacing a storage node are online operations and don't require a system downtime. However, there are a few things to keep in mind when replacing a storage node.

Danger

If a storage node is to be migrated, the procedure in Migrating a Storage Node must be followed. Removing a storage node from a simplyblock cluster without migrating it first will make the logical volumes owned by this storage node inaccessible!

"},{"location":"maintenance-operations/replacing-storage-node/#starting-the-new-storage-node","title":"Starting the new Storage Node","text":"

It is always recommended to start the new storage node before removing the old one, even if the remaining cluster has enough storage available to absorb the additional (temporary) storage requirement.

Every operation that changes the cluster topology comes with a set of migration tasks, moving data across the cluster to ensure equal usage distribution.

If a storage node has failed and cannot be recovered, however, adding the new storage node afterward is perfectly fine.

To start a new storage node, follow the storage node installation according to your chosen setup:

  • storage nodes in Kubernetes
  • storage nodes on Bare Metal or Virtualized Linux
"},{"location":"maintenance-operations/replacing-storage-node/#remove-the-old-storage-node","title":"Remove the old Storage Node","text":"

Danger

All volumes on this storage node, which haven't been migrated before the removal, will become inaccessible!

To remove the old storage node, use the sbctl command line tool.

Remove a storage node
sbctl storage-node remove <NODE_ID>\n

Wait until the operation has successfully finished. Afterward, the storage node is removed from the cluster.

This can be checked again with the sbctl command line tool.

List storage nodes
sbctl storage-node list --cluster-id=<CLUSTER_ID>\n
"},{"location":"maintenance-operations/monitoring/","title":"Monitoring","text":"

Monitoring the health, performance, and resource utilization of a simplyblock cluster is crucial for ensuring optimal operation, early issue detection, and efficient capacity planning. The sbctl command line interface provides a comprehensive set of tools to retrieve real-time and historical metrics related to logical volumes (LVs), storage nodes, I/O performance, and system status. By leveraging sbctl, administrators can quickly diagnose bottlenecks, monitor resource consumption, and maintain overall system stability.

"},{"location":"maintenance-operations/monitoring/accessing-grafana/","title":"Accessing Grafana","text":"

Simplyblock's control plane includes a Prometheus, Grafana, and Graylog installation.

Grafana retrieves metric data from Prometheus, including capacity, I/O statistics, and the cluster event log. Additionally, Grafana is used for alerting via Slack or email.

The standard retention period for metrics is 7 days. However, this can be changed when creating a cluster.

"},{"location":"maintenance-operations/monitoring/accessing-grafana/#how-to-access-grafana","title":"How to access Grafana","text":"

Grafana can be accessed through the API of any management node. It is recommended to set up a load balancer with session stickiness in front of the Grafana installation(s).

Grafana URLs
http://<MGMT_NODE_IP>/grafana\n

To retrieve the endpoint address from the cluster itself, use the following command:

Retrieving the Grafana endpoint
sbctl cluster get <CLUSTER_ID> | grep grafana_endpoint\n
"},{"location":"maintenance-operations/monitoring/accessing-grafana/#credentials","title":"Credentials","text":"

The Grafana installation uses the cluster secret as its password for the user admin. To retrieve the cluster secret, the following commands should be used:

Get the cluster uuid
sbctl cluster list\n
Get the cluster secret
sbctl cluster get-secret <CLUSTER_ID>\n

Credentials: Username: admin, Password: the cluster secret.

"},{"location":"maintenance-operations/monitoring/accessing-grafana/#grafana-dashboards","title":"Grafana Dashboards","text":"

All dashboards are stored in per-cluster folders. Each cluster contains the following dashboard entries:

  • Cluster
  • Storage node
  • Device
  • Logical Volume
  • Storage Pool
  • Storage Plane node(s) system monitoring
  • Control Plane node(s) system monitoring

Dashboard widgets are designed to be self-explanatory.

By default, each dashboard contains data for all objects (e.g., all devices) in a cluster. It is, however, possible to filter them by particular objects (e.g., devices, storage nodes, or logical volumes) and to change the timescale and window.

Dashboards include physical and logical capacity utilization dynamics, as well as IOPS, I/O throughput, and latency dynamics (each separate for read, write, and unmap). While all data from the event log is currently stored in Prometheus, it was not yet used in dashboards at the time of writing.

"},{"location":"maintenance-operations/monitoring/accessing-graylog/","title":"Accessing Graylog","text":"

Simplyblock's control plane includes a Prometheus, Grafana, and Graylog installation.

Graylog retrieves logs for all control plane and storage node services.

The standard retention period for logs is 7 days. However, this can be changed when creating a cluster.

"},{"location":"maintenance-operations/monitoring/accessing-graylog/#how-to-access-graylog","title":"How to access Graylog","text":"

Graylog can be accessed through the API of any management node. It is recommended to set up a load balancer with session stickiness in front of the Graylog installation(s).

Graylog URLs
http://<MGMT_NODE_IP>/graylog\n
"},{"location":"maintenance-operations/monitoring/accessing-graylog/#credentials","title":"Credentials","text":"

The Graylog installation uses the cluster secret as its password for the user admin. To retrieve the cluster secret, the following command should be used:

Get the cluster secret
sbctl cluster get-secret <CLUSTER_ID>\n

Credentials: Username: admin, Password: the cluster secret.

"},{"location":"maintenance-operations/monitoring/alerts/","title":"Alerting","text":"

Simplyblock uses Grafana to configure and manage alerting rules.

By default, Grafana is configured to send alerts to Slack channels. Grafana also supports alerting via email notifications, but this requires an authorized SMTP server to send messages.

An SMTP server is currently not part of the management stack and must be deployed separately. Alerts can be triggered based on one-time or interval-based thresholds on collected statistical data (I/O statistics, capacity information) or based on events from the cluster event log.

"},{"location":"maintenance-operations/monitoring/alerts/#pre-defined-alerts","title":"Pre-Defined Alerts","text":"

The following pre-defined alerts are available:

  • device-unavailable: Storage device became unavailable.
  • device-read-only: Storage device changed to status: read-only.
  • cluster-status-degraded: Storage node changed to status: degraded.
  • cluster-status-suspended: Storage node changed to status: suspended.
  • storage-node-unreachable: Storage node became unreachable.
  • storage-node-offline: Storage node became unavailable.
  • storage-node-healthcheck-failure: Storage node with negative healthcheck.
  • logical-volume-offline: Logical volume became unavailable.
  • critical-capacity-reached: Critical absolute capacity utilization in a cluster was reached. The threshold value can be configured at cluster creation time using --cap-crit.
  • critical-provisioning-capacity-reached: Critical absolute provisioned capacity utilization in a cluster was reached. The threshold value can be configured at cluster creation time using --prov-cap-crit.
  • root-fs-low-disk-space: Root filesystem free disk space is below 20%.

It is possible to configure the Slack webhook for alerting during cluster creation or to modify it at a later point in time.

"},{"location":"maintenance-operations/monitoring/cluster-health/","title":"Cluster Health","text":"

A simplyblock cluster consists of interconnected management nodes (control plane) and storage nodes (storage plane) working together to deliver a resilient, distributed storage platform. Monitoring the overall health, availability, and performance of the cluster is essential for ensuring data integrity, fault tolerance, and optimal operation under varying workloads. Simplyblock provides detailed metrics and status indicators at both the node and cluster levels to help administrators proactively detect issues and maintain system stability.

"},{"location":"maintenance-operations/monitoring/cluster-health/#accessing-cluster-status","title":"Accessing Cluster Status","text":"

To access a cluster's status, the sbctl command line tool can be used:

Accessing the status of a cluster
sbctl cluster status <CLUSTER_ID>\n

All details of the command are available in the CLI reference.

"},{"location":"maintenance-operations/monitoring/cluster-health/#accessing-cluster-statistics","title":"Accessing Cluster Statistics","text":"

To access a cluster's performance and I/O statistics, the sbctl command line tool can be used:

Accessing the statistics of a cluster
sbctl cluster show <CLUSTER_ID>\n

All details of the command are available in the CLI reference.

The information is also available through Grafana in the cluster's dashboard.

"},{"location":"maintenance-operations/monitoring/cluster-health/#accessing-cluster-io-statistics","title":"Accessing Cluster I/O Statistics","text":"

To access a cluster's performance and I/O statistics, the sbctl command line tool can be used:

Accessing the I/O statistics of a cluster
sbctl cluster get-io-stats <CLUSTER_ID>\n

All details of the command are available in the CLI reference.

The information is also available through Grafana in the cluster's dashboard.

"},{"location":"maintenance-operations/monitoring/cluster-health/#accessing-cluster-capacity-information","title":"Accessing Cluster Capacity Information","text":"

To access a cluster's capacity information, the sbctl command line tool can be used:

Accessing the capacity information of a cluster
sbctl cluster get-capacity <CLUSTER_ID>\n

All details of the command are available in the CLI reference.

"},{"location":"maintenance-operations/monitoring/cluster-health/#accessing-cluster-health-information","title":"Accessing Cluster Health Information","text":"

To access a cluster's health status, the sbctl command line tool can be used:

Accessing the health status of a cluster
sbctl cluster check <CLUSTER_ID>\n

All details of the command are available in the CLI reference.

"},{"location":"maintenance-operations/monitoring/io-stats/","title":"Accessing I/O Stats (sbctl)","text":"

Simplyblock's sbctl tool can retrieve extensive I/O statistics. These cover a number of relevant metrics on historic and current I/O activity per device, storage node, logical volume, and cluster.

These metrics include:

  • Read and write throughput (in MB/s)
  • I/O operations per second (IOPS) for read, write, and unmap
  • Total amount of bytes read and written
  • Total number of I/O operations since the start of a node
  • Latency ticks
  • Average read, write, and unmap latency
"},{"location":"maintenance-operations/monitoring/io-stats/#accessing-cluster-statistics","title":"Accessing Cluster Statistics","text":"

To access cluster-wide statistics, use the following command:

Accessing cluster-wide I/O statistics
sbctl cluster get-io-stats <CLUSTER_ID>\n

More information about the command is available in the CLI reference section.

"},{"location":"maintenance-operations/monitoring/io-stats/#accessing-storage-node-statistics","title":"Accessing Storage Node Statistics","text":"

To access the I/O statistics of a storage node (which includes all physical NVMe devices), use the following command:

Accessing storage node I/O statistics
sbctl storage-node get-io-stats <NODE_ID>\n

More information about the command is available in the CLI reference section.

To access the I/O statistics of a specific device in a storage node, use the following command:

Accessing storage node device I/O statistics
sbctl storage-node get-io-stats-device <DEVICE_ID>\n

More information about the command is available in the CLI reference section.

"},{"location":"maintenance-operations/monitoring/io-stats/#accessing-storage-pool-statistics","title":"Accessing Storage Pool Statistics","text":"

To access storage pool-specific statistics, use the following command:

Accessing storage pool I/O statistics
sbctl storage-pool get-io-stats <POOL_ID>\n

More information about the command is available in the CLI reference section.

"},{"location":"maintenance-operations/monitoring/io-stats/#accessing-logical-volume-statistics","title":"Accessing Logical Volume Statistics","text":"

To access logical volume-specific statistics, use the following command:

Accessing logical volume I/O statistics
sbctl volume get-io-stats <VOLUME_ID>\n

More information about the command is available in the CLI reference section.

"},{"location":"maintenance-operations/monitoring/lvol-conditions/","title":"Logical Volume Conditions","text":"

Logical volumes are the core storage abstraction in simplyblock, representing high-performance, distributed NVMe block devices backed by the cluster. Maintaining visibility into the health, status, and performance of these volumes is critical for ensuring workload reliability, troubleshooting issues, and planning resource utilization. Simplyblock continuously monitors volume-level metrics and exposes them through both CLI and observability tools, giving operators detailed insight into system behavior.

"},{"location":"maintenance-operations/monitoring/lvol-conditions/#accessing-logical-volume-statistics","title":"Accessing Logical Volume Statistics","text":"

To access a logical volume's performance and I/O statistics, the sbctl command line tool can be used:

Accessing the statistics of a logical volume
sbctl volume get-io-stats <VOLUME_ID>\n

All details of the command are available in the CLI reference.

The information is also available through Grafana in the logical volume's dashboard.

"},{"location":"maintenance-operations/monitoring/lvol-conditions/#accessing-logical-volume-health-information","title":"Accessing Logical Volume Health Information","text":"

To access a logical volume's health status, the sbctl command line tool can be used:

Accessing the health status of a logical volume
sbctl volume check <VOLUME_ID>\n

All details of the command are available in the CLI reference.

"},{"location":"maintenance-operations/scaling/","title":"Scaling","text":"

Simplyblock is designed with a scale-out architecture that enables seamless growth of both storage capacity and performance by simply adding more nodes to the cluster. Built for modern, cloud-native environments, simplyblock supports linear scalability across compute, network, and storage layers, without downtime or disruption to active workloads. Whether you're scaling to accommodate petabytes of data, high IOPS requirements, or enhanced throughput, simplyblock delivers predictable performance and resilience at scale.

"},{"location":"maintenance-operations/scaling/expanding-storage-cluster/","title":"Expanding a Storage Cluster","text":"

Simplyblock is designed as an always-on storage solution. Hence, storage cluster expansion is an online operation without a need for maintenance downtime.

However, every operation that changes the cluster topology comes with a set of migration tasks, moving data across the cluster to ensure equal usage distribution. While these migration tasks are low priority and their overhead is designed to be minimal, it is still recommended to expand the cluster at times when the storage cluster isn't under full utilization.

To start a new storage node, follow the storage node installation according to your chosen setup:

  • storage nodes in Kubernetes
  • storage nodes on Bare Metal or Virtualized Linux
"},{"location":"maintenance-operations/scaling/expanding-storage-pool/","title":"Expanding a Storage Pool","text":"

Simplyblock is designed as an always-on storage system. Therefore, expanding a storage pool is an online operation and does not require a maintenance window or system downtime.

When expanding a storage pool, its maximum capacity is extended, granting the pool a larger quota of the overall storage cluster.

"},{"location":"maintenance-operations/scaling/expanding-storage-pool/#storage-pool-expansion","title":"Storage Pool Expansion","text":"

To expand a storage pool, use the sbctl command line interface:

Expanding the storage pool
sbctl storage-pool set <POOL_ID> --pool-max=<NEW_SIZE>\n

The value of NEW_SIZE must be given with a unit suffix, such as 20G or 20T.

All valid parameters can be found in the Storage Pool CLI Reference.

"},{"location":"maintenance-operations/security/","title":"Security","text":"

Security is a core pillar of the simplyblock platform, designed to protect data across every layer of the storage stack. From encryption at rest to multi-tenant isolation and secure communications, simplyblock provides robust, enterprise-grade features that help meet stringent compliance and data protection requirements. Security is enforced by design, ensuring your workloads and sensitive data remain protected against internal and external threats.

"},{"location":"maintenance-operations/security/encryption-kubernetes-secrets/","title":"Encrypting with Kubernetes Secrets","text":"

Simplyblock supports encryption of logical volumes (LVs) to protect data at rest, ensuring that sensitive information remains secure across the distributed storage cluster. Encryption is applied during volume creation as part of the storage class specification.

Encrypting Logical Volumes ensures that simplyblock storage meets data protection and compliance requirements, safeguarding sensitive workloads without compromising performance.

Warning

Encryption must be specified at the time of volume creation. Existing logical volumes cannot be retroactively encrypted.

"},{"location":"maintenance-operations/security/encryption-kubernetes-secrets/#encrypting-volumes-with-simplyblock","title":"Encrypting Volumes with Simplyblock","text":"

Simplyblock supports the encryption of logical volumes. Internally, simplyblock utilizes the industry-proven crypto bdev provided by SPDK to implement its encryption functionality.

The encryption uses the AES-XTS variable-length block cipher. This cipher requires two keys of 16 to 32 bytes each. Both keys must have the same length: if one key is 32 bytes long, the other has to be 32 bytes, too.

Recommendation

Simplyblock strongly recommends two keys of 32 bytes.

"},{"location":"maintenance-operations/security/encryption-kubernetes-secrets/#generate-random-keys","title":"Generate Random Keys","text":"

Simplyblock does not provide an integrated way to generate encryption keys but recommends using the OpenSSL tool chain. For Kubernetes, the encryption key needs to be provided as base64. Hence, the generated keys are base64-encoded right away.

To generate the two keys, the following command is run twice. The result must be stored for later.

Create an Encryption Key
openssl rand -hex 32 | base64 -w0\n
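The two commands can also be combined into one small sketch that captures both keys for the Secret manifest. The variable names KEY1 and KEY2 are illustrative, not required by simplyblock:

```shell
# Generate two independent 32-byte keys (hex-encoded by openssl),
# then base64-encode each for use in a Kubernetes Secret's data section.
KEY1=$(openssl rand -hex 32 | base64 -w0)
KEY2=$(openssl rand -hex 32 | base64 -w0)

printf 'crypto_key1: %s\n' "$KEY1"
printf 'crypto_key2: %s\n' "$KEY2"
```

The printed values can be pasted directly into the `data:` section of the Secret shown below.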
"},{"location":"maintenance-operations/security/encryption-kubernetes-secrets/#create-the-kubernetes-secret","title":"Create the Kubernetes Secret","text":"

Next up, a Kubernetes Secret is created, providing the two just-created encryption keys.

Create a Kubernetes Secret Resource
apiVersion: v1\nkind: Secret\nmetadata:\n  name: my-encryption-keys\ndata:\n  crypto_key1: YzIzYzllY2I4MWJmYmY1ZDM5ZDA0NThjNWZlNzQwNjY2Y2RjZDViNWE4NTZkOTA5YmRmODFjM2UxM2FkZGU4Ngo=\n  crypto_key2: ZmFhMGFlMzZkNmIyODdhMjYxMzZhYWI3ZTcwZDEwZjBmYWJlMzYzMDRjNTBjYTY5Nzk2ZGRlZGJiMDMwMGJmNwo=\n
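Alternatively, the same Secret can be created imperatively with kubectl instead of a YAML manifest. The secret and key names mirror the manifest above; the key values are placeholders:

```shell
# Create the Secret imperatively. kubectl base64-encodes literal values
# itself, so the raw hex key material is passed here, not the base64 form.
kubectl create secret generic my-encryption-keys \
    --from-literal=crypto_key1=<RAW_HEX_KEY_1> \
    --from-literal=crypto_key2=<RAW_HEX_KEY_2>
```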

The Kubernetes Secret can be used for one or more logical volumes. Using different encryption keys, multiple tenants can be secured with an additional isolation layer against each other.

"},{"location":"maintenance-operations/security/encryption-kubernetes-secrets/#storageclass-configuration","title":"StorageClass Configuration","text":"

A new Kubernetes StorageClass needs to be created, or an existing one needs to be reconfigured. To use encryption on the persistent volume claim level, the storage class must have encryption enabled.

Example StorageClass
apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: my-encrypted-volumes\nprovisioner: csi.simplyblock.io\nparameters:\n  encryption: \"True\" # This is important!\n  ... other parameters\nreclaimPolicy: Delete\nvolumeBindingMode: Immediate\nallowVolumeExpansion: true\n
"},{"location":"maintenance-operations/security/encryption-kubernetes-secrets/#create-a-persistentvolumeclaim","title":"Create a PersistentVolumeClaim","text":"

When requesting a logical volume through a Kubernetes PersistentVolumeClaim, the storage class and the secret resource have to be connected to the PVC. When the claim is picked up, simplyblock automatically collects the keys and creates the logical volume fully encrypted.

Create an encrypting PersistentVolumeClaim
apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  annotations:\n    simplybk/secret-name: my-encryption-keys # Encryption keys\n  name: my-encrypted-volume-claim\nspec:\n  storageClassName: my-encrypted-volumes # StorageClass\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 200Gi\n
"},{"location":"maintenance-operations/security/multi-tenancy/","title":"Multi-Tenancy","text":"

Simplyblock is designed to support secure and efficient multitenancy, enabling multiple independent tenants to share the same physical infrastructure without compromising data isolation, performance guarantees, or security. This capability is essential in cloud environments, managed services, and enterprise deployments where infrastructure is consolidated across internal departments or external customers.

"},{"location":"maintenance-operations/security/multi-tenancy/#storage-isolation","title":"Storage Isolation","text":"

Simplyblock provides multiple layers of isolation between multiple tenants, depending on requirements and how tenants are defined.

"},{"location":"maintenance-operations/security/multi-tenancy/#storage-pool-isolation","title":"Storage Pool Isolation","text":"

If tenants are expected to have multiple volumes, defining the overall available storage quota a tenant can access and assign to volumes might be required. Hence, simplyblock enables the creation of a storage pool with a maximum capacity per tenant. All volumes for this tenant should be created in their respective storage pool and automatically count towards the storage quota.

"},{"location":"maintenance-operations/security/multi-tenancy/#logical-volume-isolation","title":"Logical Volume Isolation","text":"

If a tenant is expected to have only one volume or strong isolation between volumes is required, each logical volume can be seen as fully isolated at the storage layer. Access to volumes is tightly controlled, and each LV is only exposed to the workloads explicitly granted access.

"},{"location":"maintenance-operations/security/multi-tenancy/#quality-of-service-qos","title":"Quality of Service (QoS)","text":"

To prevent noisy neighbor effects and ensure fair resource allocation, simplyblock supports per-volume Quality of Service (QoS) configurations. Administrators can define IOPS and bandwidth limits for each logical volume, providing predictable performance and protecting tenants from resource contention.

Quality of service is available for both Kubernetes-based installations and plain Linux installations.

"},{"location":"maintenance-operations/security/multi-tenancy/#encryption-and-data-security","title":"Encryption and Data Security","text":"

All data is protected with encryption at rest, using strong AES-based cryptographic algorithms. Encryption is applied at the volume level, ensuring that tenant data remains secure and inaccessible to other users, even at the physical storage layer. Encryption keys are logically separated between tenants to support strong cryptographic isolation.

Encryption is available for both Kubernetes-based installations and plain Linux installations.

"},{"location":"reference/","title":"Reference","text":"

Simplyblock provides multiple interfaces for managing and interacting with its distributed storage system, including the sbctl command-line interface (CLI) and Management API. The sbctl CLI offers a powerful, scriptable way to perform essential operations such as provisioning, expanding, snapshotting, and cloning logical volumes, making it ideal for administrators who prefer direct command-line access.

The simplyblock Management API enables integration with external automation and orchestration tools, allowing seamless management of storage resources at scale. Additionally, this section includes a reference list of supported Linux kernels and distributions, ensuring compatibility across various environments.

"},{"location":"reference/nvme-low-level-format/","title":"NVMe Low-Level Format","text":"

Once the check is complete, the NVMe devices in each storage node can be prepared. To prevent data loss in case of a sudden power outage, NVMe devices need to be formatted with a specific LBA format.

Warning

Failing to format NVMe devices with the correct LBA format can lead to data loss or data corruption in the case of a sudden power outage or other loss of power. If you can't find the necessary LBA format, it is best to ask your simplyblock contact for further instructions.

On AWS, the necessary LBA format is not available. Simplyblock is, however, fully tested and supported with AWS.

The lsblk command is the best way to find all NVMe devices attached to a system.

Example output of lsblk
[demo@demo-3 ~]# sudo lsblk\nNAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS\nsda           8:0    0   30G  0 disk\n\u251c\u2500sda1        8:1    0    1G  0 part /boot\n\u2514\u2500sda2        8:2    0   29G  0 part\n  \u251c\u2500rl-root 253:0    0   26G  0 lvm  /\n  \u2514\u2500rl-swap 253:1    0    3G  0 lvm  [SWAP]\nnvme3n1     259:0    0  6.5G  0 disk\nnvme2n1     259:1    0   70G  0 disk\nnvme1n1     259:2    0   70G  0 disk\nnvme0n1     259:3    0   70G  0 disk\n

In the example, we see four NVMe devices: three devices with 70 GiB and one device with 6.5 GiB of storage capacity.

To find the correct LBA format (lbaf) for each of the devices, the nvme CLI can be used.

Show NVMe namespace information
sudo nvme id-ns /dev/nvmeXnY\n

The output depends on the NVMe device itself, but looks something like this:

Example output of NVMe namespace information
[demo@demo-3 ~]# sudo nvme id-ns /dev/nvme0n1\nNVME Identify Namespace 1:\n...\nlbaf  0 : ms:0   lbads:9  rp:0\nlbaf  1 : ms:8   lbads:9  rp:0\nlbaf  2 : ms:16  lbads:9  rp:0\nlbaf  3 : ms:64  lbads:9  rp:0\nlbaf  4 : ms:0   lbads:12 rp:0 (in use)\nlbaf  5 : ms:8   lbads:12 rp:0\nlbaf  6 : ms:16  lbads:12 rp:0\nlbaf  7 : ms:64  lbads:12 rp:0\n

From this output, the required lbaf configuration can be found. The necessary configuration has to have the following values:

  • ms: 0
  • lbads: 12
  • rp: 0

In the example, the required LBA format is 4. If an NVMe device doesn't offer that exact combination, any other combination with lbads=12 will work. However, simplyblock recommends asking your simplyblock contact for the best available combination.
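As a sketch, assuming the nvme CLI is installed, the candidate LBA formats on all attached NVMe namespaces can be listed in one pass:

```shell
# For every NVMe namespace, print the LBA formats with a 4 KiB block size
# (lbads:12); prefer an entry that also reports ms:0.
for dev in /dev/nvme[0-9]n[0-9]*; do
    echo "== ${dev} =="
    sudo nvme id-ns "${dev}" | grep 'lbads:12'
done
```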

Info

In some rare cases, no lbads=12 combination will be available. In this case, it is ok to leave the current setup. This is specifically true for certain cloud providers such as AWS.

In our example, the device is already formatted with the correct lbaf (see the "in use" marker). It is, however, recommended to always format the device before use.

To format the drive, the nvme cli is used again.

Formatting the NVMe device
sudo nvme format --lbaf=<lbaf> --ses=0 /dev/nvmeXnY\n

The output of the command should give a successful response when executed similarly to the example below.

Example output of NVMe device formatting
[demo@demo-3 ~]# sudo nvme format --lbaf=4 --ses=0 /dev/nvme0n1\nYou are about to format nvme0n1, namespace 0x1.\nWARNING: Format may irrevocably delete this device's data.\nYou have 10 seconds to press Ctrl-C to cancel this operation.\n\nUse the force [--force] option to suppress this warning.\nSending format operation ...\nSuccess formatting namespace:1\n

Warning

This operation needs to be repeated for each NVMe device that will be handled by simplyblock.

"},{"location":"reference/supported-linux-distributions/","title":"Supported Linux Distributions","text":"

Simplyblock requires a Linux kernel 5.19 or later with NVMe over Fabrics and NVMe over TCP enabled. However, sbctl, the simplyblock command-line interface, requires some additional tools and expects certain conventions for configuration files and locations. Therefore, simplyblock currently only supports Red Hat-based Linux distributions officially.

While other distributions may work, manual intervention may be required, and simplyblock cannot provide support for them.

"},{"location":"reference/supported-linux-distributions/#control-plane-plain-linux","title":"Control Plane (Plain Linux)","text":"

The following Linux distributions are considered tested and supported to run a control plane:

  • Red Hat Enterprise Linux (9 and later, x64): Fully supported
  • Rocky Linux (9 and later, x64): Fully supported
  • AlmaLinux (9 and later, x64): Fully supported
"},{"location":"reference/supported-linux-distributions/#storage-plane-plain-linux","title":"Storage Plane (Plain Linux)","text":"

The following Linux distributions are considered tested and supported to run a disaggregated storage plane:

  • Red Hat Enterprise Linux (9 and later, x64/arm64): Fully supported
  • Rocky Linux (9 and later, x64/arm64): Fully supported
  • AlmaLinux (9 and later, x64/arm64): Fully supported
"},{"location":"reference/supported-linux-distributions/#kubernetes-control-plane-and-storage-plane","title":"Kubernetes: Control Plane and Storage Plane","text":"

The following Linux distributions are considered tested and supported to run a hyper-converged storage plane:

  • Red Hat Enterprise Linux (9 and later, x64/arm64): Fully supported
  • Rocky Linux (9 and later, x64/arm64): Fully supported
  • AlmaLinux (9 and later, x64/arm64): Fully supported
  • Ubuntu (22.04 and later, x64/arm64): Fully supported
  • Debian (12 or later, x64/arm64): Fully supported
  • Amazon Linux 2 (AL2) (x64/arm64): Fully supported
  • Amazon Linux 2023 (x64/arm64): Fully supported
  • Talos (1.6.7 or later, x64/arm64): Fully supported
"},{"location":"reference/supported-linux-distributions/#hosts-initiators-accessing-storage-cluster-over-nvmf","title":"Hosts (Initiators accessing Storage Cluster over NVMf)","text":"

The following Linux distributions are considered tested and supported as NVMe-oF storage clients:

  • Red Hat Enterprise Linux (8.1 and later, x64/arm64): Fully supported
  • CentOS (8 and later, x64/arm64): Fully supported
  • Rocky Linux (9 and later, x64/arm64): Fully supported
  • AlmaLinux (9 and later, x64/arm64): Fully supported
  • Ubuntu (18.04, x64/arm64): Fully supported
  • Ubuntu (20.04, x64/arm64): Fully supported
  • Ubuntu (22.04, x64/arm64): Fully supported
  • Debian (12 or later, x64/arm64): Fully supported
  • Amazon Linux 2 (AL2) (x64/arm64): Partially supported (see note 1)
  • Amazon Linux 2023 (x64/arm64): Partially supported (see note 1)

1 Amazon Linux 2 and Amazon Linux 2023 have a bug with NVMe over Fabrics multipathing. This means that NVMe over Fabrics on any Amazon Linux operates in a degraded state with the risk of connection outages. As a workaround, multipathing must be configured using the Linux Device Mapper (dm) via DM-MPIO.

"},{"location":"reference/supported-linux-kernels/","title":"Supported Linux Kernels","text":"

Simplyblock is built upon NVMe over Fabrics. Hence, it requires a Linux kernel with NVMe and NVMe-oF support.

As a general rule, every Linux kernel 5.19 or later is expected to work, as long as the kernel modules for NVMe (nvme), NVMe over Fabrics (nvme-of), and NVMe over TCP (nvme-tcp) are available. In most cases, the latter two kernel modules need to be loaded manually or persisted. Please see the Bare Metal or Virtualized (Linux) installation section on how to do this.
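Assuming systemd's modules-load.d mechanism is available (standard on the distributions listed above), a configuration fragment like the following persists the required modules across reboots. The file name is illustrative:

```
# /etc/modules-load.d/nvme-tcp.conf
# Load the NVMe over Fabrics and NVMe/TCP kernel modules at boot.
nvme-fabrics
nvme-tcp
```

For the current boot, the same modules can be loaded immediately with modprobe, as described in the installation sections.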

The following kernels are known to be compatible and tested. Additional kernel versions may work, but are untested.

OS Linux Kernel Prerequisite Red Hat Enterprise Linux 4.18.0-xxx Kernel on x86_64 modprobe nvme-tcp Amazon Linux 2 Kernel 5.10 AMI 2.0.20230822.0 modprobe nvme-tcp Amazon Linux 2023 2023.1.20230825.0 x86_64 HVM kernel-6.1 modprobe nvme-tcp

Warning

Amazon Linux 2 and Amazon Linux 2023 have a bug with NVMe over Fabrics multipathing. This means that NVMe over Fabrics on any Amazon Linux operates in a degraded state with the risk of connection outages. As a workaround, multipathing must be configured using the Linux Device Mapper (dm) via DM-MPIO. Use the following DM-MPIO configuration:

cat /etc/multipath.conf \ndefaults {\n    polling_interval 1\n    user_friendly_names yes\n    find_multipaths yes\n    enable_foreign nvme\n    checker_timeout 3\n    failback immediate\n    max_polling_interval 3\n    detect_checker yes\n}\n\ndevices {\n    device {\n        vendor \"NVMe\"\n        product \".*\"\n        path_grouping_policy group_by_prio\n        path_selector \"service-time 0\"\n        failback \"immediate\"\n        no_path_retry \"queue\"\n        hardware_handler \"1 ana\"\n    }\n}\n\nblacklist {\n}\n
"},{"location":"reference/upgrade-matrix/","title":"Upgrade Matrix","text":"

Simplyblock supports in-place upgrades of existing clusters. However, not all versions can be upgraded directly to the latest version. Hence, some upgrades require multiple steps.

Possible upgrade paths are described in the following table. If the currently installed version is not listed for the requested version, an intermediate upgrade to a supported version must be executed first.

Requested Version Installed Version 25.5.x 25.5.x, 25.3-PRE 25.7.7 25.7.5 25.10.1 25.7.5, 25.7.7"},{"location":"reference/api/","title":"API / Developer SDK","text":"

Simplyblock offers a comprehensive API to manage and automate cluster operations. This includes all cluster-wide operations, logical volume-specific operations, and health information. The API can be used to:

  • Retrieve information about the cluster and its health status
  • Automatically manage a logical volume lifecycle
  • Integrate simplyblock into deployment processes and workflow automations
  • Create custom alerts and warnings
"},{"location":"reference/api/#authentication","title":"Authentication","text":"

Any request to the simplyblock API requires authorization information to be provided. Unauthorized requests return an HTTP status 401 (Unauthorized).

To provide authorization information, the simplyblock API uses the Authorization HTTP header with a combination of the cluster UUID and the cluster secret.

HTTP Authorization header:

Authorization: <CLUSTER_UUID> <CLUSTER_SECRET>\n

The cluster id is provided during the initial cluster installation. The cluster secret can be obtained using the simplyblock command-line interface tool sbctl.

sbctl cluster get-secret CLUSTER_UUID\n
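As a minimal sketch, the Authorization header can be attached to any API request as follows. The endpoint URL and path are illustrative placeholders; only the `<CLUSTER_UUID> <CLUSTER_SECRET>` header format comes from the documentation above:

```python
# Build an authenticated simplyblock API request (sketch).
# The base URL and path below are placeholders, not real endpoints.
import urllib.request


def build_request(base_url: str, path: str,
                  cluster_uuid: str, cluster_secret: str) -> urllib.request.Request:
    """Build a GET request carrying the '<uuid> <secret>' Authorization header."""
    req = urllib.request.Request(base_url + path)
    req.add_header("Authorization", f"{cluster_uuid} {cluster_secret}")
    return req


# urllib.request.urlopen(req) would then send the request; an invalid
# or missing secret yields HTTP status 401 (Unauthorized).
req = build_request("http://192.0.2.10", "/cluster", "my-cluster-uuid", "my-secret")
```
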
"},{"location":"reference/api/#put-and-post-requests","title":"PUT and POST Requests","text":"

For requests that send a JSON payload to the backend endpoint, it is important to set the Content-Type header accordingly. Requests that require this header to be set are of type HTTP PUT or HTTP POST.

The expected content type is application/json:

Content-Type: application/json\n
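A JSON-carrying request can be sketched as follows; the URL and payload are illustrative, while both required headers follow the documentation above:

```python
# Build a POST request with a JSON body and the required headers (sketch).
# The URL and payload are placeholders, not real endpoints or schemas.
import json
import urllib.request


def build_json_post(url: str, payload: dict,
                    cluster_uuid: str, cluster_secret: str) -> urllib.request.Request:
    """Build a POST request with a JSON body, Authorization, and Content-Type."""
    req = urllib.request.Request(url,
                                 data=json.dumps(payload).encode("utf-8"),
                                 method="POST")
    req.add_header("Authorization", f"{cluster_uuid} {cluster_secret}")
    req.add_header("Content-Type", "application/json")
    return req
```
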
"},{"location":"reference/api/#api-documentation","title":"API Documentation","text":"

The full API documentation is hosted on Postman. You can find the full API collection in the Postman API project.

"},{"location":"reference/api/reference/","title":"API Reference","text":"


"},{"location":"reference/cli/","title":"CLI / Command-line interface","text":"

Simplyblock provides a feature-rich CLI (command line interface) client to manage all aspects of the storage cluster.

"},{"location":"reference/cli/cluster/","title":"Cluster commands","text":"
sbctl cluster --help\n

Cluster commands

"},{"location":"reference/cli/cluster/#creates-a-new-cluster","title":"Creates a new cluster","text":"

Creates a new control plane cluster with the current node as the primary control plane node.

sbctl cluster create\n    --cap-warn=<CAP_WARN>\n    --cap-crit=<CAP_CRIT>\n    --prov-cap-warn=<PROV_CAP_WARN>\n    --prov-cap-crit=<PROV_CAP_CRIT>\n    --ifname=<IFNAME>\n    --mgmt-ip=<MGMT_IP>\n    --tls-secret-name=<TLS_SECRET_NAME>\n    --log-del-interval=<LOG_DEL_INTERVAL>\n    --metrics-retention-period=<METRICS_RETENTION_PERIOD>\n    --contact-point=<CONTACT_POINT>\n    --grafana-endpoint=<GRAFANA_ENDPOINT>\n    --data-chunks-per-stripe=<DATA_CHUNKS_PER_STRIPE>\n    --parity-chunks-per-stripe=<PARITY_CHUNKS_PER_STRIPE>\n    --ha-type=<HA_TYPE>\n    --is-single-node\n    --mode=<MODE>\n    --ingress-host-source=<INGRESS_HOST_SOURCE>\n    --dns-name=<DNS_NAME>\n    --enable-node-affinity\n    --fabric=<FABRIC>\n    --strict-node-anti-affinity\n    --name=<NAME>\n    --qpair-count=<QPAIR_COUNT>\n    --client-qpair-count=<CLIENT_QPAIR_COUNT>\n
Parameter Description Data Type Required Default --cap-warn Capacity warning level in percent, default: 89 integer False 89 --cap-crit Capacity critical level in percent, default: 99 integer False 99 --prov-cap-warn Capacity warning level in percent, default: 250 integer False 250 --prov-cap-crit Capacity critical level in percent, default: 500 integer False 500 --ifname Management interface name, e.g. eth0 string False - --mgmt-ip Management IP address to use for the node (e.g., 192.168.1.10) string False - --tls-secret-name Name of the Kubernetes TLS Secret to be used by the Ingress for HTTPS termination (e.g., my-tls-secret) string False - --log-del-interval Logging retention policy, default: 3d string False 3d --metrics-retention-period Retention period for I/O statistics (Prometheus), default: 7d string False 7d --contact-point Email or slack webhook url to be used for alerting string False --grafana-endpoint Endpoint url for Grafana string False --data-chunks-per-stripe Erasure coding schema parameter k (distributed raid), default: 1 integer False 1 --parity-chunks-per-stripe Erasure coding schema parameter n (distributed raid), default: 1 integer False 1 --ha-type Logical volume HA type (single, ha), default is cluster ha typeAvailable Options:- single- ha string False ha --is-single-node For single node clusters only marker False False --mode Environment to deploy management services, default: dockerAvailable Options:- docker- kubernetes string False docker --ingress-host-source Ingress host source: 'hostip' for node IP, 'loadbalancer' for external LB, or 'dns' for custom domainAvailable Options:- hostip- loadbalancer- dns string False hostip --dns-name Fully qualified DNS name to use as the Ingress host (required if --ingress-host-source=dns) string False --enable-node-affinity Enable node affinity for storage nodes marker False - --fabric fabric: tcp, rdma or both (specify: tcp, rdma)Available Options:- tcp- rdma- tcp,rdma string False tcp 
--strict-node-anti-affinity Enable strict node anti affinity for storage nodes. Never more than one chunk is placed on a node. This requires a minimum of data-chunks-in-stripe + parity-chunks-in-stripe + 1 nodes in the cluster. marker False - --name, -n Assigns a name to the newly created cluster. string False - --qpair-count Increase for clusters with few but very large logical volumes or decrease for clusters with a large number of very small logical volumes. range(0..128) False 32 --client-qpair-count Increase for clusters with few but very large logical volumes or decrease for clusters with a large number of very small logical volumes. range(0..128) False 3"},{"location":"reference/cli/cluster/#adds-a-new-cluster","title":"Adds a new cluster","text":"

Adds a new cluster

sbctl cluster add\n    --cap-warn=<CAP_WARN>\n    --cap-crit=<CAP_CRIT>\n    --prov-cap-warn=<PROV_CAP_WARN>\n    --prov-cap-crit=<PROV_CAP_CRIT>\n    --data-chunks-per-stripe=<DATA_CHUNKS_PER_STRIPE>\n    --parity-chunks-per-stripe=<PARITY_CHUNKS_PER_STRIPE>\n    --ha-type=<HA_TYPE>\n    --enable-node-affinity\n    --fabric=<FABRIC>\n    --is-single-node\n    --qpair-count=<QPAIR_COUNT>\n    --client-qpair-count=<CLIENT_QPAIR_COUNT>\n    --strict-node-anti-affinity\n    --name=<NAME>\n
Parameter Description Data Type Required Default --cap-warn Capacity warning level in percent, default: 89 integer False 89 --cap-crit Capacity critical level in percent, default: 99 integer False 99 --prov-cap-warn Capacity warning level in percent, default: 250 integer False 250 --prov-cap-crit Capacity critical level in percent, default: 500 integer False 500 --data-chunks-per-stripe Erasure coding schema parameter k (distributed raid), default: 1 integer False 1 --parity-chunks-per-stripe Erasure coding schema parameter n (distributed raid), default: 1 integer False 1 --ha-type Logical volume HA type (single, ha), default is cluster single typeAvailable Options:- single- ha string False ha --enable-node-affinity Enables node affinity for storage nodes marker False - --fabric fabric: tcp, rdma or both (specify: tcp, rdma)Available Options:- tcp- rdma- tcp,rdma string False tcp --is-single-node For single node clusters only marker False False --qpair-count Increase for clusters with few but very large logical volumes or decrease for clusters with a large number of very small logical volumes. range(0..128) False 32 --client-qpair-count Increase for clusters with few but very large logical volumes or decrease for clusters with a large number of very small logical volumes. range(0..128) False 3 --strict-node-anti-affinity Enable strict node anti affinity for storage nodes. Never more than one chunk is placed on a node. This requires a minimum of data-chunks-in-stripe + parity-chunks-in-stripe + 1 nodes in the cluster.\" marker False - --name, -n Assigns a name to the newly created cluster. string False -"},{"location":"reference/cli/cluster/#activates-a-cluster","title":"Activates a cluster.","text":"

Once a cluster has sufficient nodes added, it needs to be activated. This command can also be used to re-activate a suspended cluster.

sbctl cluster activate\n    <CLUSTER_ID>\n    --force\n    --force-lvstore-create\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True Parameter Description Data Type Required Default --force Force recreate distr and lv stores marker False - --force-lvstore-create Force recreate lv stores marker False -"},{"location":"reference/cli/cluster/#shows-the-cluster-list","title":"Shows the cluster list","text":"

Shows the cluster list

sbctl cluster list\n    --json\n
Parameter Description Data Type Required Default --json Print json output marker False -"},{"location":"reference/cli/cluster/#shows-a-clusters-status","title":"Shows a cluster's status","text":"

Shows a cluster's status

sbctl cluster status\n    <CLUSTER_ID>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True"},{"location":"reference/cli/cluster/#create-lvstore-on-newly-added-nodes-to-the-cluster","title":"Create lvstore on newly added nodes to the cluster","text":"

Create lvstore on newly added nodes to the cluster

sbctl cluster complete-expand\n    <CLUSTER_ID>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True"},{"location":"reference/cli/cluster/#shows-a-clusters-statistics","title":"Shows a cluster's statistics","text":"

Shows a cluster's statistics

sbctl cluster show\n    <CLUSTER_ID>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True"},{"location":"reference/cli/cluster/#gets-a-clusters-information","title":"Gets a cluster's information","text":"

Gets a cluster's information

sbctl cluster get\n    <CLUSTER_ID>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True"},{"location":"reference/cli/cluster/#gets-a-clusters-capacity","title":"Gets a cluster's capacity","text":"

Gets a cluster's capacity

sbctl cluster get-capacity\n    <CLUSTER_ID>\n    --json\n    --history=<HISTORY>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True Parameter Description Data Type Required Default --json Print json output marker False - --history (XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total). string False -"},{"location":"reference/cli/cluster/#gets-a-clusters-io-statistics","title":"Gets a cluster's I/O statistics","text":"

Gets a cluster's I/O statistics

sbctl cluster get-io-stats\n    <CLUSTER_ID>\n    --records=<RECORDS>\n    --history=<HISTORY>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True Parameter Description Data Type Required Default --records Number of records, default: 20 integer False 20 --history (XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total). string False -"},{"location":"reference/cli/cluster/#returns-a-clusters-status-logs","title":"Returns a cluster's status logs","text":"

Returns a cluster's status logs

sbctl cluster get-logs\n    <CLUSTER_ID>\n    --json\n    --limit=<LIMIT>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True Parameter Description Data Type Required Default --json Return JSON formatted logs marker False - --limit show last number of logs, default 50 integer False 50"},{"location":"reference/cli/cluster/#gets-a-clusters-secret","title":"Gets a cluster's secret","text":"

Gets a cluster's secret

sbctl cluster get-secret\n    <CLUSTER_ID>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True"},{"location":"reference/cli/cluster/#updates-a-clusters-secret","title":"Updates a cluster's secret","text":"

Updates a cluster's secret

sbctl cluster update-secret\n    <CLUSTER_ID>\n    <SECRET>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True SECRET new 20 characters password string True"},{"location":"reference/cli/cluster/#updates-a-clusters-fabric","title":"Updates a cluster's fabric","text":"

Updates a cluster's fabric

sbctl cluster update-fabric\n    <CLUSTER_ID>\n    <FABRIC>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True FABRIC fabric: tcp, rdma or both (specify: tcp, rdma) string True"},{"location":"reference/cli/cluster/#checks-a-clusters-health","title":"Checks a cluster's health","text":"

Checks a cluster's health

sbctl cluster check\n    <CLUSTER_ID>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True"},{"location":"reference/cli/cluster/#updates-a-cluster-to-new-version","title":"Updates a cluster to new version","text":"

Updates the control plane to a new version. To update the storage nodes, they have to be shut down and restarted. This can be done in a rolling manner. Attention: verify that an upgrade path is available and has been tested!

sbctl cluster update\n    <CLUSTER_ID>\n    --cp-only=<CP_ONLY>\n    --restart=<RESTART>\n    --spdk-image=<SPDK_IMAGE>\n    --mgmt-image=<MGMT_IMAGE>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True Parameter Description Data Type Required Default --cp-only Update the control plane only boolean False False --restart Restart the management services boolean False False --spdk-image Restart the storage nodes using the provided image string False - --mgmt-image Restart the management services using the provided image string False -"},{"location":"reference/cli/cluster/#lists-tasks-of-a-cluster","title":"Lists tasks of a cluster","text":"

Lists tasks of a cluster

sbctl cluster list-tasks\n    <CLUSTER_ID>\n    --limit=<LIMIT>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True Parameter Description Data Type Required Default --limit show last number of tasks, default 50 integer False 50"},{"location":"reference/cli/cluster/#cancels-task-by-task-id","title":"Cancels task by task id","text":"

Cancels task by task id

sbctl cluster cancel-task\n    <TASK_ID>\n
Argument Description Data Type Required TASK_ID Task id string True"},{"location":"reference/cli/cluster/#get-rebalancing-subtasks-list","title":"Get rebalancing subtasks list","text":"

Get rebalancing subtasks list

sbctl cluster get-subtasks\n    <TASK_ID>\n
Argument Description Data Type Required TASK_ID Task id string True"},{"location":"reference/cli/cluster/#deletes-a-cluster","title":"Deletes a cluster","text":"

This is only possible, if no storage nodes and pools are attached to the cluster

sbctl cluster delete\n    <CLUSTER_ID>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True"},{"location":"reference/cli/cluster/#assigns-or-changes-a-name-to-a-cluster","title":"Assigns or changes a name to a cluster","text":"

Assigns or changes a name to a cluster

sbctl cluster change-name\n    <CLUSTER_ID>\n    <NAME>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True NAME Name string True"},{"location":"reference/cli/control-plane/","title":"Control plane commands","text":"
sbctl control-plane --help\n

Aliases: cp mgmt

Control plane commands

"},{"location":"reference/cli/control-plane/#adds-a-control-plane-to-the-cluster-local-run","title":"Adds a control plane to the cluster (local run)","text":"

Adds a control plane to the cluster (local run)

sbctl control-plane add\n    <CLUSTER_IP>\n    <CLUSTER_ID>\n    <CLUSTER_SECRET>\n    --ifname=<IFNAME>\n    --mgmt-ip=<MGMT_IP>\n    --mode=<MODE>\n
Argument Description Data Type Required CLUSTER_IP Cluster IP address string True CLUSTER_ID Cluster id string True CLUSTER_SECRET Cluster secret string True Parameter Description Data Type Required Default --ifname Management interface name string False - --mgmt-ip Management IP address to use for the node (e.g., 192.168.1.10) string False - --mode Environment to deploy management services, default: dockerAvailable Options:- docker- kubernetes string False docker"},{"location":"reference/cli/control-plane/#lists-all-control-plane-nodes","title":"Lists all control plane nodes","text":"

Lists all control plane nodes

sbctl control-plane list\n    --json\n
Parameter Description Data Type Required Default --json Print outputs in json format marker False -"},{"location":"reference/cli/control-plane/#removes-a-control-plane-node","title":"Removes a control plane node","text":"

Removes a control plane node

sbctl control-plane remove\n    <NODE_ID>\n
Argument Description Data Type Required NODE_ID Control plane node id string True"},{"location":"reference/cli/qos/","title":"qos commands","text":"
sbctl qos --help\n

qos commands

"},{"location":"reference/cli/qos/#creates-a-new-qos-class","title":"Creates a new QOS class","text":"

Creates a new QOS class

sbctl qos add\n    <NAME>\n    <WEIGHT>\n    <CLUSTER_ID>\n
Argument Description Data Type Required NAME QOS class name string True WEIGHT QOS class weight integer True CLUSTER_ID Cluster UUID string True"},{"location":"reference/cli/qos/#lists-all-qos-classes","title":"Lists all qos classes","text":"

Lists all qos classes

sbctl qos list\n    <CLUSTER_ID>\n    --json\n
Argument Description Data Type Required CLUSTER_ID Cluster UUID string True Parameter Description Data Type Required Default --json Print json output marker False -"},{"location":"reference/cli/qos/#delete-a-class","title":"Delete a class","text":"

Delete a class

sbctl qos delete\n    <NAME>\n    <CLUSTER_ID>\n
Argument Description Data Type Required NAME QOS class name string True CLUSTER_ID Cluster UUID string True"},{"location":"reference/cli/snapshot/","title":"Snapshot commands","text":"
sbctl snapshot --help\n

Snapshot commands

"},{"location":"reference/cli/snapshot/#creates-a-new-snapshot","title":"Creates a new snapshot","text":"

Creates a new snapshot

sbctl snapshot add\n    <VOLUME_ID>\n    <NAME>\n
Argument Description Data Type Required VOLUME_ID Logical volume id string True NAME New snapshot name string True"},{"location":"reference/cli/snapshot/#lists-all-snapshots","title":"Lists all snapshots","text":"

Lists all snapshots

sbctl snapshot list\n    --all\n
Parameter Description Data Type Required Default --all List soft deleted snapshots marker False -"},{"location":"reference/cli/snapshot/#deletes-a-snapshot","title":"Deletes a snapshot","text":"

Deletes a snapshot

sbctl snapshot delete\n    <SNAPSHOT_ID>\n    --force\n
Argument Description Data Type Required SNAPSHOT_ID Snapshot id string True Parameter Description Data Type Required Default --force Force remove marker False -"},{"location":"reference/cli/snapshot/#provisions-a-new-logical-volume-from-an-existing-snapshot","title":"Provisions a new logical volume from an existing snapshot","text":"

Provisions a new logical volume from an existing snapshot

sbctl snapshot clone\n    <SNAPSHOT_ID>\n    <LVOL_NAME>\n    --resize=<RESIZE>\n
Argument Description Data Type Required SNAPSHOT_ID Snapshot id string True LVOL_NAME Logical volume name string True Parameter Description Data Type Required Default --resize New logical volume size: 10M, 10G, 10(bytes). Can only increase. size False 0"},{"location":"reference/cli/storage-node/","title":"Storage node commands","text":"
sbctl storage-node --help\n

Aliases: sn

Storage node commands

"},{"location":"reference/cli/storage-node/#prepares-a-host-to-be-used-as-a-storage-node","title":"Prepares a host to be used as a storage node","text":"

Runs locally on to-be storage node hosts. Installs storage node dependencies and prepares the host to be used as a storage node. Only required in standalone deployments outside of Kubernetes.

sbctl storage-node deploy\n    --ifname=<IFNAME>\n    --isolate-cores\n
Parameter Description Data Type Required Default --ifname The network interface to be used for communication between the control plane and the storage node. string False - --isolate-cores Isolate cores in kernel args for provided cpu mask marker False False"},{"location":"reference/cli/storage-node/#prepare-a-configuration-file-to-be-used-when-adding-the-storage-node","title":"Prepare a configuration file to be used when adding the storage node","text":"

Runs locally on to-be storage node hosts. Reads system information (CPU topology, NVMe devices) and prepares a YAML config to be used when adding the storage node.

sbctl storage-node configure\n    --max-lvol=<MAX_LVOL>\n    --max-size=<MAX_SIZE>\n    --nodes-per-socket=<NODES_PER_SOCKET>\n    --sockets-to-use=<SOCKETS_TO_USE>\n    --pci-allowed=<PCI_ALLOWED>\n    --pci-blocked=<PCI_BLOCKED>\n
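The sizing guidance for --max-size (a safe value is 1/n × 2.0 of the effective cluster capacity) can be sketched as a small calculation. The function name and TiB units are illustrative:

```python
# Sketch of the --max-size sizing rule: effective capacity under an
# erasure coding scheme k+p is raw capacity * k / (k + p); a safe
# per-node value is 1/n * 2.0 of that effective capacity.
def safe_max_size_tib(raw_per_node_tib: float, node_count: int,
                      data_chunks: int, parity_chunks: int) -> float:
    raw_total = raw_per_node_tib * node_count
    # Erasure coding keeps k of every k+p chunks as data.
    effective = raw_total * data_chunks / (data_chunks + parity_chunks)
    return effective / node_count * 2.0


# Worked example from the parameter description: 3 nodes x 100 TiB raw,
# scheme 1+1 -> effective 150 TiB -> safe per-node value 100 TiB.
assert safe_max_size_tib(100, 3, 1, 1) == 100.0
```
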
Parameter Description Data Type Required Default --max-lvol Max logical volume per storage node integer True - --max-size Maximum amount of GB to be utilized on this storage node. This cannot be larger than the total effective cluster capacity. A safe value is 1/n * 2.0 of effective cluster capacity. Meaning, if you have three storage nodes, each with 100 TiB of raw capacity and a cluster with erasure coding scheme 1+1 (two replicas), the effective cluster capacity is 100 TiB * 3 / 2 = 150 TiB. Setting this parameter to 150 TiB / 3 * 2 = 100TiB would be a safe choice. string True - --nodes-per-socket number of each node to be added per each socket. integer False 1 --sockets-to-use System socket to use when adding storage nodes. Comma-separated list: e.g. 0,1 string False 0 --pci-allowed Storage PCI addresses to use for storage devices(Normal address and full address are accepted). Comma-separated list: e.g. 0000:00:01.0,00:02.0 string False --pci-blocked Storage PCI addresses to not use for storage devices(Normal address and full address are accepted). Comma-separated list: e.g. 0000:00:01.0,00:02.0 string False"},{"location":"reference/cli/storage-node/#upgrade-the-automated-configuration-file-with-new-changes-of-cpu-mask-or-storage-devices","title":"Upgrade the automated configuration file with new changes of cpu mask or storage devices","text":"

Regenerates the core distribution and automatic calculations according to changes in cpu_mask and ssd_pcis only.

sbctl storage-node configure-upgrade\n
"},{"location":"reference/cli/storage-node/#cleans-a-previous-simplyblock-deploy-local-run","title":"Cleans a previous simplyblock deploy (local run)","text":"

Runs locally on storage node and control plane hosts. Removes a previous deployment to support a fresh from-scratch deployment of the cluster software.

sbctl storage-node deploy-cleaner\n
"},{"location":"reference/cli/storage-node/#adds-a-storage-node-by-its-ip-address","title":"Adds a storage node by its IP address","text":"

Adds a storage node by its IP address

sbctl storage-node add-node\n    <CLUSTER_ID>\n    <NODE_ADDR>\n    <IFNAME>\n    --journal-partition=<JOURNAL_PARTITION>\n    --data-nics=<DATA_NICS>\n    --ha-jm-count=<HA_JM_COUNT>\n    --namespace=<NAMESPACE>\n
Argument Description Data Type Required CLUSTER_ID Cluster id string True NODE_ADDR Address of storage node api to add, like :5000 string True IFNAME Management interface name string True Parameter Description Data Type Required Default --journal-partition 1: auto-create small partitions for journal on nvme devices. 0: use a separate (the smallest) nvme device of the node for journal. The journal needs a maximum of 3 percent of total available raw disk space. integer False 1 --data-nics Storage network interface names, e.g. eth0,eth1 string False - --ha-jm-count HA JM count integer False 3 --namespace Kubernetes namespace to deploy on string False -"},{"location":"reference/cli/storage-node/#deletes-a-storage-node-object-from-the-state-database","title":"Deletes a storage node object from the state database.","text":"

Deletes a storage node object from the state database. It must only be used on clusters without any logical volumes. Warning: This is dangerous and could lead to an unstable cluster if used on an active cluster.

sbctl storage-node delete\n    <NODE_ID>\n    --force\n
Argument Description Data Type Required NODE_ID Storage node id string True Parameter Description Data Type Required Default --force Force delete storage node from DB...Hopefully you know what you do marker False -"},{"location":"reference/cli/storage-node/#removes-a-storage-node-from-the-cluster","title":"Removes a storage node from the cluster","text":"

The storage node cannot be used or added anymore. Any data residing on this storage node will be migrated to the remaining storage nodes. The user must ensure that there is sufficient free space in the remaining cluster to allow for a successful node removal.

Danger

If there isn't enough storage available, the cluster may run full and switch to read-only mode.

sbctl storage-node remove\n    <NODE_ID>\n    --force-remove\n
Argument Description Data Type Required NODE_ID Storage node id string True Parameter Description Data Type Required Default --force-remove Force remove all logical volumes and snapshots marker False -"},{"location":"reference/cli/storage-node/#lists-all-storage-nodes","title":"Lists all storage nodes","text":"

Lists all storage nodes

sbctl storage-node list\n    --cluster-id=<CLUSTER_ID>\n    --json\n
Parameter Description Data Type Required Default --cluster-id Cluster id string False - --json Print outputs in json format marker False -"},{"location":"reference/cli/storage-node/#gets-a-storage-nodes-information","title":"Gets a storage node's information","text":"

Gets a storage node's information

sbctl storage-node get\n    <NODE_ID>\n
Argument Description Data Type Required NODE_ID Storage node id string True"},{"location":"reference/cli/storage-node/#restarts-a-storage-node","title":"Restarts a storage node","text":"

A storage node must be offline to be restarted. All functions and device drivers will be reset as a result of the restart. New physical devices can only be added via a storage node restart. During the restart, the node will not accept any I/O.

sbctl storage-node restart\n    <NODE_ID>\n    --max-lvol=<MAX_LVOL>\n    --node-addr=<NODE_ADDR>\n    --force\n    --ssd-pcie=<SSD_PCIE>\n    --force-lvol-recreate\n
Argument Description Data Type Required NODE_ID Storage node id string True Parameter Description Data Type Required Default --max-lvol Max logical volume per storage node integer False 0 --node-addr, --node-ip Allows to restart an existing storage node on new host or hardware. Devices attached to storage nodes have to be attached to new hosts. Otherwise, they have to be marked as failed and removed from cluster. Triggers a pro-active migration of data from those devices onto other storage nodes. The provided value must be presented in the form of IP:PORT. Be default the port number is 5000. string False - --force Force restart marker False - --ssd-pcie New Nvme PCIe address to add to the storage node. Can be more than one. string False --force-lvol-recreate Force LVol recreate on node restart even if lvol bdev was not recovered marker False False"},{"location":"reference/cli/storage-node/#initiates-a-storage-node-shutdown","title":"Initiates a storage node shutdown","text":"

Once the command is issued, the node will stop accepting I/O, but previously received I/O will still be processed. In a high-availability setup, this will not impact operations.

sbctl storage-node shutdown\n    <NODE_ID>\n    --force\n
Argument Description Data Type Required NODE_ID Storage node id string True Parameter Description Data Type Required Default --force Force node shutdown marker False -"},{"location":"reference/cli/storage-node/#suspends-a-storage-node","title":"Suspends a storage node","text":"

The node will stop accepting new I/O, but will finish processing any I/O that has already been received.

sbctl storage-node suspend\n    <NODE_ID>\n    --force\n
Argument Description Data Type Required NODE_ID Storage node id string True Parameter Description Data Type Required Default --force Force node suspend marker False -"},{"location":"reference/cli/storage-node/#resumes-a-storage-node","title":"Resumes a storage node","text":"

Resumes a storage node

sbctl storage-node resume\n    <NODE_ID>\n
Argument Description Data Type Required NODE_ID Storage node id string True"},{"location":"reference/cli/storage-node/#gets-storage-node-io-statistics","title":"Gets storage node IO statistics","text":"

Gets storage node IO statistics

sbctl storage-node get-io-stats\n    <NODE_ID>\n    --history=<HISTORY>\n    --records=<RECORDS>\n
Argument Description Data Type Required NODE_ID Storage node id string True Parameter Description Data Type Required Default --history list history records -one for every 15 minutes- for XX days and YY hours -up to 10 days in total-, format: XXdYYh string False - --records Number of records, default: 20 integer False 20"},{"location":"reference/cli/storage-node/#gets-a-storage-nodes-capacity-statistics","title":"Gets a storage node's capacity statistics","text":"

Gets a storage node's capacity statistics

sbctl storage-node get-capacity\n    <NODE_ID>\n    --history=<HISTORY>\n
Argument Description Data Type Required NODE_ID Storage node id string True Parameter Description Data Type Required Default --history list history records -one for every 15 minutes- for XX days and YY hours -up to 10 days in total-, format: XXdYYh string False -"},{"location":"reference/cli/storage-node/#lists-storage-devices","title":"Lists storage devices","text":"

Lists storage devices

sbctl storage-node list-devices\n    <NODE_ID>\n    --json\n
Argument Description Data Type Required NODE_ID Storage node id string True Parameter Description Data Type Required Default --json Print outputs in json format marker False -"},{"location":"reference/cli/storage-node/#gets-storage-device-by-its-id","title":"Gets storage device by its id","text":"

Gets storage device by its id

sbctl storage-node get-device\n    <DEVICE_ID>\n
Argument Description Data Type Required DEVICE_ID Device id string True"},{"location":"reference/cli/storage-node/#resets-a-storage-device","title":"Resets a storage device","text":"

Performs a hardware device reset. If successful, resetting can return the device from the unavailable state to the online state.

sbctl storage-node reset-device\n    <DEVICE_ID>\n
Argument Description Data Type Required DEVICE_ID Device id string True"},{"location":"reference/cli/storage-node/#restarts-a-storage-device","title":"Restarts a storage device","text":"

A previously removed (logically or physically) or unavailable device that has been re-inserted may be returned to the online state. If the device is not physically present, accessible, or healthy, it will flip back into the unavailable state.

sbctl storage-node restart-device\n    <DEVICE_ID>\n
Argument Description Data Type Required DEVICE_ID Device id string True"},{"location":"reference/cli/storage-node/#adds-a-new-storage-device","title":"Adds a new storage device","text":"

Adds a device, including a previously detected device (currently in \"new\" state), into the cluster and launches an auto-rebalancing background process in which some cluster capacity is redistributed to this newly added device.

sbctl storage-node add-device\n    <DEVICE_ID>\n
Argument Description Data Type Required DEVICE_ID Device id string True"},{"location":"reference/cli/storage-node/#logically-removes-a-storage-device","title":"Logically removes a storage device","text":"

Logically removes a storage device. The device becomes unavailable, irrespective of whether it was physically removed from the server. This function can be used if auto-detection of removal did not work or if the device must be maintained while remaining inserted in the server.

sbctl storage-node remove-device\n    <DEVICE_ID>\n    --force\n
Argument Description Data Type Required DEVICE_ID Device id string True Parameter Description Data Type Required Default --force Force device remove marker False -"},{"location":"reference/cli/storage-node/#sets-storage-device-to-failed-state","title":"Sets storage device to failed state","text":"

Sets a storage device to the failed state. This command can be used if an administrator believes that the device must be replaced. Attention: the failed state is final, meaning all data on the device will be automatically recovered to other devices in the cluster.

sbctl storage-node set-failed-device\n    <DEVICE_ID>\n
Argument Description Data Type Required DEVICE_ID Device ID string True"},{"location":"reference/cli/storage-node/#gets-a-devices-capacity","title":"Gets a device's capacity","text":"

Gets a device's capacity

sbctl storage-node get-capacity-device\n    <DEVICE_ID>\n    --history=<HISTORY>\n
Argument Description Data Type Required DEVICE_ID Device id string True Parameter Description Data Type Required Default --history list history records -one for every 15 minutes- for XX days and YY hours -up to 10 days in total-, format: XXdYYh string False -"},{"location":"reference/cli/storage-node/#gets-a-devices-io-statistics","title":"Gets a device's IO statistics","text":"

Gets a device's IO statistics

sbctl storage-node get-io-stats-device\n    <DEVICE_ID>\n    --history=<HISTORY>\n    --records=<RECORDS>\n
Argument Description Data Type Required DEVICE_ID Device id string True Parameter Description Data Type Required Default --history list history records -one for every 15 minutes- for XX days and YY hours -up to 10 days in total-, format: XXdYYh string False - --records Number of records, default: 20 integer False 20"},{"location":"reference/cli/storage-node/#gets-the-data-interfaces-list-of-a-storage-node","title":"Gets the data interfaces list of a storage node","text":"

Gets the data interfaces list of a storage node

sbctl storage-node port-list\n    <NODE_ID>\n
Argument Description Data Type Required NODE_ID Storage node id string True"},{"location":"reference/cli/storage-node/#gets-the-data-interfaces-io-stats","title":"Gets the data interfaces' IO stats","text":"

Gets the data interfaces' IO stats

sbctl storage-node port-io-stats\n    <PORT_ID>\n    --history=<HISTORY>\n
Argument Description Data Type Required PORT_ID Data port id string True Parameter Description Data Type Required Default --history list history records -one for every 15 minutes- for XX days and YY hours -up to 10 days in total, format: XXdYYh string False -"},{"location":"reference/cli/storage-node/#checks-the-health-status-of-a-storage-node","title":"Checks the health status of a storage node","text":"

Verifies that all NVMe-oF connections to and from the storage node, including those to and from other storage devices in the cluster and the metadata journal, are available and healthy, and that all internal objects of the node, such as data placement and erasure coding services, are in a healthy state.

sbctl storage-node check\n    <NODE_ID>\n
Argument Description Data Type Required NODE_ID Storage node id string True"},{"location":"reference/cli/storage-node/#checks-the-health-status-of-a-device","title":"Checks the health status of a device","text":"

Checks the health status of a device

sbctl storage-node check-device\n    <DEVICE_ID>\n
Argument Description Data Type Required DEVICE_ID Device id string True"},{"location":"reference/cli/storage-node/#gets-the-nodes-information","title":"Gets the node's information","text":"

Gets the node's information

sbctl storage-node info\n    <NODE_ID>\n
Argument Description Data Type Required NODE_ID Storage node id string True"},{"location":"reference/cli/storage-node/#restarts-a-journaling-device","title":"Restarts a journaling device","text":"

Restarts a journaling device

sbctl storage-node restart-jm-device\n    <JM_DEVICE_ID>\n    --force\n
Argument Description Data Type Required JM_DEVICE_ID Journaling device id string True Parameter Description Data Type Required Default --force Force device remove marker False -"},{"location":"reference/cli/storage-node/#forces-to-make-the-provided-node-id-primary","title":"Forces the provided node id to become primary","text":"

Makes the storage node the primary node. This is required after certain storage cluster operations, such as a storage node migration.

sbctl storage-node make-primary\n    <NODE_ID>\n
Argument Description Data Type Required NODE_ID Storage node id string True"},{"location":"reference/cli/storage-pool/","title":"Storage pool commands","text":"
sbctl storage-pool --help\n

Aliases: pool

Storage pool commands

"},{"location":"reference/cli/storage-pool/#adds-a-new-storage-pool","title":"Adds a new storage pool","text":"

Adds a new storage pool

sbctl storage-pool add\n    <NAME>\n    <CLUSTER_ID>\n    --pool-max=<POOL_MAX>\n    --lvol-max=<LVOL_MAX>\n    --max-rw-iops=<MAX_RW_IOPS>\n    --max-rw-mbytes=<MAX_RW_MBYTES>\n    --max-r-mbytes=<MAX_R_MBYTES>\n    --max-w-mbytes=<MAX_W_MBYTES>\n    --qos-host=<QOS_HOST>\n
Argument Description Data Type Required NAME New pool name string True CLUSTER_ID Cluster id string True Parameter Description Data Type Required Default --pool-max Pool maximum size: 20M, 20G, 0. Default: 0 size False 0 --lvol-max Logical volume maximum size: 20M, 20G, 0. Default: 0 size False 0 --max-rw-iops Maximum Read Write IO Per Second integer False - --max-rw-mbytes Maximum Read Write Megabytes Per Second integer False - --max-r-mbytes Maximum Read Megabytes Per Second integer False - --max-w-mbytes Maximum Write Megabytes Per Second integer False - --qos-host Node UUID for QoS pool string False -"},{"location":"reference/cli/storage-pool/#sets-a-storage-pools-attributes","title":"Sets a storage pool's attributes","text":"

Sets a storage pool's attributes

sbctl storage-pool set\n    <POOL_ID>\n    --pool-max=<POOL_MAX>\n    --lvol-max=<LVOL_MAX>\n    --max-rw-iops=<MAX_RW_IOPS>\n    --max-rw-mbytes=<MAX_RW_MBYTES>\n    --max-r-mbytes=<MAX_R_MBYTES>\n    --max-w-mbytes=<MAX_W_MBYTES>\n
Argument Description Data Type Required POOL_ID Pool id string True Parameter Description Data Type Required Default --pool-max Pool maximum size: 20M, 20G size False - --lvol-max Logical volume maximum size: 20M, 20G size False - --max-rw-iops Maximum Read Write IO Per Second integer False - --max-rw-mbytes Maximum Read Write Megabytes Per Second integer False - --max-r-mbytes Maximum Read Megabytes Per Second integer False - --max-w-mbytes Maximum Write Megabytes Per Second integer False -"},{"location":"reference/cli/storage-pool/#lists-all-storage-pools","title":"Lists all storage pools","text":"

Lists all storage pools

sbctl storage-pool list\n    --json\n    --cluster-id=<CLUSTER_ID>\n
Parameter Description Data Type Required Default --json Print outputs in json format marker False - --cluster-id Cluster id string False -"},{"location":"reference/cli/storage-pool/#gets-a-storage-pools-details","title":"Gets a storage pool's details","text":"

Gets a storage pool's details

sbctl storage-pool get\n    <POOL_ID>\n    --json\n
Argument Description Data Type Required POOL_ID Pool id string True Parameter Description Data Type Required Default --json Print outputs in json format marker False -"},{"location":"reference/cli/storage-pool/#deletes-a-storage-pool","title":"Deletes a storage pool","text":"

It is only possible to delete a pool if it is empty (it contains no provisioned logical volumes).

sbctl storage-pool delete\n    <POOL_ID>\n
Argument Description Data Type Required POOL_ID Pool id string True"},{"location":"reference/cli/storage-pool/#set-a-storage-pools-status-to-active","title":"Set a storage pool's status to Active","text":"
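Since a pool can only be deleted while empty, a client-side guard can first list the pool's volumes. The sketch below is runnable as-is because the `sbctl()` stub fakes the `volume list` output; pool names and the output format are illustrative, not the real CLI output.

```shell
# Sketch: delete a pool only when it is empty. The sbctl stub fakes
# `volume list --pool` output so the guard logic runs anywhere;
# pool names and output format are hypothetical.
sbctl() {
  case "$*" in
    "volume list --pool=empty-pool"*) ;;               # no volumes
    "volume list --pool=busy-pool"*) echo "lvol-1" ;;  # one volume
    *) echo "sbctl $*" ;;
  esac
}

delete_if_empty() {  # delete_if_empty <pool>
  if [ -z "$(sbctl volume list --pool="$1")" ]; then
    sbctl storage-pool delete "$1"
  else
    echo "pool $1 is not empty; refusing to delete"
  fi
}
delete_if_empty empty-pool
delete_if_empty busy-pool
```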

Set a storage pool's status to Active

sbctl storage-pool enable\n    <POOL_ID>\n
Argument Description Data Type Required POOL_ID Pool id string True"},{"location":"reference/cli/storage-pool/#sets-a-storage-pools-status-to-inactive","title":"Sets a storage pool's status to Inactive.","text":"

Sets a storage pool's status to Inactive.

sbctl storage-pool disable\n    <POOL_ID>\n
Argument Description Data Type Required POOL_ID Pool id string True"},{"location":"reference/cli/storage-pool/#gets-a-storage-pools-capacity","title":"Gets a storage pool's capacity","text":"

Gets a storage pool's capacity

sbctl storage-pool get-capacity\n    <POOL_ID>\n
Argument Description Data Type Required POOL_ID Pool id string True"},{"location":"reference/cli/storage-pool/#gets-a-storage-pools-io-statistics","title":"Gets a storage pool's I/O statistics","text":"

Gets a storage pool's I/O statistics

sbctl storage-pool get-io-stats\n    <POOL_ID>\n    --history=<HISTORY>\n    --records=<RECORDS>\n
Argument Description Data Type Required POOL_ID Pool id string True Parameter Description Data Type Required Default --history (XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total). string False - --records Number of records, default: 20 integer False 20"},{"location":"reference/cli/volume/","title":"Logical volume commands","text":"
sbctl volume --help\n

Aliases: lvol

Logical volume commands

"},{"location":"reference/cli/volume/#adds-a-new-logical-volume","title":"Adds a new logical volume","text":"

Adds a new logical volume

sbctl volume add\n    <NAME>\n    <SIZE>\n    <POOL>\n    --snapshot\n    --max-size=<MAX_SIZE>\n    --host-id=<HOST_ID>\n    --encrypt\n    --crypto-key1=<CRYPTO_KEY1>\n    --crypto-key2=<CRYPTO_KEY2>\n    --max-rw-iops=<MAX_RW_IOPS>\n    --max-rw-mbytes=<MAX_RW_MBYTES>\n    --max-r-mbytes=<MAX_R_MBYTES>\n    --max-w-mbytes=<MAX_W_MBYTES>\n    --max-namespace-per-subsys=<MAX_NAMESPACE_PER_SUBSYS>\n    --ha-type=<HA_TYPE>\n    --fabric=<FABRIC>\n    --lvol-priority-class=<LVOL_PRIORITY_CLASS>\n    --namespace=<NAMESPACE>\n    --pvc-name=<PVC_NAME>\n    --data-chunks-per-stripe=<DATA_CHUNKS_PER_STRIPE>\n    --parity-chunks-per-stripe=<PARITY_CHUNKS_PER_STRIPE>\n
Argument Description Data Type Required NAME New logical volume name string True SIZE Logical volume size: 10M, 10G, 10(bytes) size True POOL Pool id or name string True Parameter Description Data Type Required Default --snapshot, -s Make logical volume with snapshot capability, default: false marker False False --max-size Logical volume max size size False 1000T --host-id Primary storage node id or hostname string False - --encrypt Use inline data encryption and decryption on the logical volume marker False - --crypto-key1 Hex value of key1 to be used for logical volume encryption string False - --crypto-key2 Hex value of key2 to be used for logical volume encryption string False - --max-rw-iops Maximum Read Write IO Per Second integer False - --max-rw-mbytes Maximum Read Write Megabytes Per Second integer False - --max-r-mbytes Maximum Read Megabytes Per Second integer False - --max-w-mbytes Maximum Write Megabytes Per Second integer False - --max-namespace-per-subsys Maximum Namespace per subsystem integer False 32 --ha-type Logical volume HA type (single, ha), default is the cluster HA type. Available options: single, default, ha string False default --fabric tcp or rdma (tcp is default). The cluster must support the chosen fabric. Available options: tcp, rdma, tcp,rdma string False tcp --lvol-priority-class Logical volume priority class integer False 0 --namespace Set logical volume namespace for k8s clients string False - --pvc-name, --pvc_name Set the logical volume persistent volume claim name for Kubernetes clients.

Warning

The old parameter name --pvc_name is deprecated and shouldn't be used anymore. It will eventually be removed. Please replace --pvc_name with --pvc-name. string False -

--data-chunks-per-stripe Erasure coding schema parameter k (distributed raid), default: 1 integer False 0 --parity-chunks-per-stripe Erasure coding schema parameter n (distributed raid), default: 1 integer False 0
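The erasure coding parameters determine the raw-capacity overhead of a volume: with k data chunks and n parity chunks per stripe, each logical byte consumes (k+n)/k raw bytes. The helper below is a sketch of that arithmetic, not part of sbctl.

```shell
# Illustrative arithmetic for --data-chunks-per-stripe (k) and
# --parity-chunks-per-stripe (n): each logical byte consumes
# (k+n)/k raw bytes. The function is a sketch, not part of sbctl.
raw_bytes() {  # raw_bytes <logical_bytes> <k> <n>
  echo $(( $1 * ($2 + $3) / $2 ))
}
raw_bytes 1000 4 2   # 4+2 schema: 1000 logical bytes -> 1500 raw bytes
```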

"},{"location":"reference/cli/volume/#changes-qos-settings-for-an-active-logical-volume","title":"Changes QoS settings for an active logical volume","text":"

Changes QoS settings for an active logical volume

sbctl volume qos-set\n    <VOLUME_ID>\n    --max-rw-iops=<MAX_RW_IOPS>\n    --max-rw-mbytes=<MAX_RW_MBYTES>\n    --max-r-mbytes=<MAX_R_MBYTES>\n    --max-w-mbytes=<MAX_W_MBYTES>\n
Argument Description Data Type Required VOLUME_ID Logical volume id string True Parameter Description Data Type Required Default --max-rw-iops Maximum Read Write IO Per Second integer False - --max-rw-mbytes Maximum Read Write Megabytes Per Second integer False - --max-r-mbytes Maximum Read Megabytes Per Second integer False - --max-w-mbytes Maximum Write Megabytes Per Second integer False -"},{"location":"reference/cli/volume/#lists-logical-volumes","title":"Lists logical volumes","text":"

Lists logical volumes

sbctl volume list\n    --cluster-id=<CLUSTER_ID>\n    --pool=<POOL>\n    --json\n    --all\n
Parameter Description Data Type Required Default --cluster-id List logical volumes in particular cluster string False - --pool List logical volumes in particular pool id or name string False - --json Print outputs in json format marker False - --all List soft deleted logical volumes marker False -"},{"location":"reference/cli/volume/#gets-the-logical-volume-details","title":"Gets the logical volume details","text":"

Gets the logical volume details

sbctl volume get\n    <VOLUME_ID>\n    --json\n
Argument Description Data Type Required VOLUME_ID Logical volume id or name string True Parameter Description Data Type Required Default --json Print outputs in json format marker False -"},{"location":"reference/cli/volume/#deletes-a-logical-volume","title":"Deletes a logical volume","text":"

Deletes a logical volume. Attention: All data will be lost! This is an irreversible operation! Actual storage capacity will be freed as an asynchronous background task. It may take a while until the actual storage is released.

sbctl volume delete\n    <VOLUME_ID>\n    --force\n
Argument Description Data Type Required VOLUME_ID Logical volumes id or ids string True Parameter Description Data Type Required Default --force Force delete logical volume from the cluster marker False -"},{"location":"reference/cli/volume/#gets-the-logical-volumes-nvmetcp-connection-strings","title":"Gets the logical volume's NVMe/TCP connection string(s)","text":"

Multiple connections to the cluster are always available for multi-pathing and high-availability.

sbctl volume connect\n    <VOLUME_ID>\n    --ctrl-loss-tmo=<CTRL_LOSS_TMO>\n
Argument Description Data Type Required VOLUME_ID Logical volume id string True Parameter Description Data Type Required Default --ctrl-loss-tmo Control loss timeout for this volume integer False -"},{"location":"reference/cli/volume/#resizes-a-logical-volume","title":"Resizes a logical volume","text":"

Resizes a logical volume. Only increasing the size is possible. The new capacity must fit within the storage pool's free capacity.

sbctl volume resize\n    <VOLUME_ID>\n    <SIZE>\n
Argument Description Data Type Required VOLUME_ID Logical volume id string True SIZE New logical volume size size: 10M, 10G, 10(bytes) size True"},{"location":"reference/cli/volume/#creates-a-snapshot-from-a-logical-volume","title":"Creates a snapshot from a logical volume","text":"
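Because a resize can only grow a volume, a client-side check can reject shrink requests before calling the CLI. The size parser below handles the 10M/10G/bytes notation used above; both helper functions are illustrative, not part of sbctl.

```shell
# Sketch: guard a resize request client-side, since only growing a
# volume is possible. to_bytes() handles the size notation used in this
# reference (10M, 10G, plain bytes); both helpers are hypothetical.
to_bytes() {
  case "$1" in
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *)  echo "$1" ;;
  esac
}
check_resize() {  # check_resize <current_size> <requested_size>
  if [ "$(to_bytes "$2")" -gt "$(to_bytes "$1")" ]; then
    echo "ok: $1 -> $2"
  else
    echo "rejected: new size must be larger than $1"
  fi
}
check_resize 10G 20G
check_resize 10G 512M
```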

Creates a snapshot from a logical volume

sbctl volume create-snapshot\n    <VOLUME_ID>\n    <NAME>\n
Argument Description Data Type Required VOLUME_ID Logical volume id string True NAME Snapshot name string True"},{"location":"reference/cli/volume/#provisions-a-logical-volumes-from-an-existing-snapshot","title":"Provisions a logical volume from an existing snapshot","text":"

Provisions a logical volume from an existing snapshot

sbctl volume clone\n    <SNAPSHOT_ID>\n    <CLONE_NAME>\n    --resize=<RESIZE>\n
Argument Description Data Type Required SNAPSHOT_ID Snapshot id string True CLONE_NAME Clone name string True Parameter Description Data Type Required Default --resize New logical volume size: 10M, 10G, 10(bytes). Can only increase. size False 0"},{"location":"reference/cli/volume/#gets-a-logical-volumes-capacity","title":"Gets a logical volume's capacity","text":"

Gets a logical volume's capacity

sbctl volume get-capacity\n    <VOLUME_ID>\n    --history=<HISTORY>\n
Argument Description Data Type Required VOLUME_ID Logical volume id string True Parameter Description Data Type Required Default --history (XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total). string False -"},{"location":"reference/cli/volume/#gets-a-logical-volumes-io-statistics","title":"Gets a logical volume's I/O statistics","text":"

Gets a logical volume's I/O statistics

sbctl volume get-io-stats\n    <VOLUME_ID>\n    --history=<HISTORY>\n    --records=<RECORDS>\n
Argument Description Data Type Required VOLUME_ID Logical volume id string True Parameter Description Data Type Required Default --history (XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total). string False - --records Number of records, default: 20 integer False 20"},{"location":"reference/cli/volume/#checks-a-logical-volumes-health","title":"Checks a logical volume's health","text":"

Checks a logical volume's health

sbctl volume check\n    <VOLUME_ID>\n
Argument Description Data Type Required VOLUME_ID Logical volume id string True"},{"location":"reference/cli/volume/#inflate-a-logical-volume","title":"Inflate a logical volume","text":"

All unallocated clusters are allocated and copied from the parent or zero filled if not allocated in the parent. Then all dependencies on the parent are removed.

sbctl volume inflate\n    <VOLUME_ID>\n
Argument Description Data Type Required VOLUME_ID Logical volume id string True"},{"location":"reference/kubernetes/","title":"Kubernetes Helm Chart Parameters","text":"

Simplyblock provides a Helm chart to install one or more components into Kubernetes. Available components are the CSI driver, storage nodes, and caching nodes.

This reference provides an overview of all available parameters that can be set on the Helm chart during installation or upgrade.

"},{"location":"reference/kubernetes/#csi-parameters","title":"CSI Parameters","text":"

Commonly configured CSI driver parameters:

Parameter Description Default csiConfig.simplybk.uuid Sets the simplyblock cluster id on which the volumes are provisioned. csiConfig.simplybk.ip Sets the HTTP(S) API Gateway endpoint connected to the management node. https://o5ls1ykzbb.execute-api.eu-central-1.amazonaws.com csiSecret.simplybk.secret Sets the cluster secret associated with the cluster. logicalVolume.encryption Specifies whether logical volumes should be encrypted. False csiSecret.simplybkPvc.crypto_key1 Sets the first encryption key. csiSecret.simplybkPvc.crypto_key2 Sets the second encryption key. logicalVolume.pool_name Sets the storage pool name where logical volumes are created. This storage pool needs to exist. testing1 logicalVolume.qos_rw_iops Sets the maximum read-write IOPS. Zero means unlimited. 0 logicalVolume.qos_rw_mbytes Sets the maximum read-write Mbps. Zero means unlimited. 0 logicalVolume.qos_r_mbytes Sets the maximum read Mbps. Zero means unlimited. 0 logicalVolume.qos_w_mbytes Sets the maximum write Mbps. Zero means unlimited. 0 logicalVolume.numDataChunks Sets the number of Erasure coding schema parameter k (distributed raid). 1 logicalVolume.numParityChunks Sets the number of Erasure coding schema parameter n (distributed raid). 1 logicalVolume.lvol_priority_class Sets the logical volume priority class. 0 logicalVolume.fabric Sets the NVMe-oF transport type. tcp logicalVolume.tune2fs_reserved_blocks Sets the percentage of disk blocks reserved for the system. 0 logicalVolume.max_namespace_per_subsys Sets the maximum namespace per subsystem. 1 storageclass.create Specifies whether to create a StorageClass. true snapshotclass.create Specifies whether to create a SnapshotClass. true snapshotcontroller.create Specifies whether to create a snapshot controller and CRD for snapshot support. true
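The commonly configured parameters above can be supplied through a values file at install or upgrade time. The fragment below is an illustrative sketch using keys from the table; the cluster UUID, endpoint, secret, and pool name are placeholders, not real values.

```yaml
# Illustrative values.yaml overrides for the parameters above.
# Cluster UUID, endpoint, secret, and pool name are placeholders.
csiConfig:
  simplybk:
    uuid: "11111111-2222-3333-4444-555555555555"
    ip: "https://cluster-gateway.example.com"
csiSecret:
  simplybk:
    secret: "<CLUSTER_SECRET>"
logicalVolume:
  pool_name: "my-pool"     # the pool must already exist
  qos_rw_iops: 0           # 0 = unlimited
  numDataChunks: 1
  numParityChunks: 1
```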

Additional, uncommonly configured CSI driver parameters:

Parameter Description Default driverName Sets an alternative driver name. csi.simplyblock.io serviceAccount.create Specifies whether to create a service account for spdkcsi-controller. true rbac.create Specifies whether to create RBAC permissions for the spdkcsi-controller. true controller.replicas Sets the replica number of the spdkcsi-controller. 1 serviceAccount.create Specifies whether to create a service account for the csi controller. true rbac.create Specifies whether to create RBAC permissions for the csi controller. true controller.replicas Sets the replica number of the csi controller. 1 controller.tolerations.create Specifies whether to create tolerations for the csi controller. false controller.tolerations.effect Sets the effect of tolerations on the csi controller. <empty> controller.tolerations.key Sets the key of tolerations for the csi controller. <empty> controller.tolerations.operator Sets the operator for the csi controller tolerations. Exists controller.tolerations.value Sets the value of tolerations for the csi controller. <empty> controller.nodeSelector.create Specifies whether to create nodeSelector for the csi controller. false controller.nodeSelector.key Sets the key of nodeSelector for the csi controller. <empty> controller.nodeSelector.value Sets the value of nodeSelector for the csi controller. <empty> externallyManagedConfigmap.create Specifies whether an externallyManagedConfigmap should be created. true externallyManagedSecret.create Specifies whether an externallyManagedSecret should be created. true podAnnotations Annotations to apply to all pods in the chart. {} simplyBlockAnnotations Annotations to apply to simplyblock Kubernetes resources like DaemonSets, Deployments, or StatefulSets. {} node.tolerations.create Specifies whether to create tolerations for the CSI driver node. false node.tolerations.effect Sets the effect of tolerations on the CSI driver node. 
<empty> node.tolerations.key Sets the key of tolerations for the CSI driver node. <empty> node.tolerations.operator Sets the operator for the csi node tolerations. Exists node.tolerations.value Sets the value of tolerations for the CSI driver node. <empty> node.nodeSelector.create Specifies whether to create nodeSelector for the CSI driver node. false node.nodeSelector.key Sets the key of nodeSelector for the CSI driver node. <empty> node.nodeSelector.value Sets the value of nodeSelector for the CSI driver node. <empty> storageclass.volumeBindingMode Sets when PersistentVolumes are bound and provisioned. WaitForFirstConsumer storageclass.zoneClusterMap Sets the mapping between Kubernetes zones and SimplyBlock clusters for multi-cluster or multi-zone deployments. {} storageclass.allowedTopologyZones Sets the list of topology zones where the StorageClass is allowed to provision volumes. []"},{"location":"reference/kubernetes/#storage-node-parameters","title":"Storage Node Parameters","text":"Parameter Description Default storagenode.daemonsets[0].name Sets the name of the storage node DaemonSet. storage-node-ds storagenode.daemonsets[0].appLabel Sets the label applied to the storage node DaemonSet for identification. storage-node storagenode.daemonsets[0].nodeSelector.key Sets the key used in the nodeSelector to constrain which nodes the DaemonSet should run on. type storagenode.daemonsets[0].nodeSelector.value Sets the value for the nodeSelector key to match against specific nodes. simplyblock-storage-plane storagenode.daemonsets[0].tolerations.create Specifies whether to create tolerations for the storage node. false storagenode.daemonsets[0].tolerations.effect Sets the effect of tolerations on the storage node. <empty> storagenode.daemonsets[0].tolerations.key Sets the key of tolerations for the storage node. <empty> storagenode.daemonsets[0].tolerations.operator Sets the operator for the storage node tolerations. 
Exists storagenode.daemonsets[0].tolerations.value Sets the value of tolerations for the storage node. <empty> storagenode.daemonsets[1].name Sets the name of the restart storage node DaemonSet. storage-node-ds-restart storagenode.daemonsets[1].appLabel Sets the label applied to the restart storage node DaemonSet for identification. storage-node-restart storagenode.daemonsets[1].nodeSelector.key Sets the key used in the nodeSelector to constrain which nodes the DaemonSet should run on. type storagenode.daemonsets[1].nodeSelector.value Sets the value for the nodeSelector key to match against specific nodes. simplyblock-storage-plane-restart storagenode.daemonsets[1].tolerations.create Specifies whether to create tolerations for the restart storage node. false storagenode.daemonsets[1].tolerations.effect Sets the effect of tolerations on the restart storage node. <empty> storagenode.daemonsets[1].tolerations.key Sets the key of tolerations for the restart storage node. <empty> storagenode.daemonsets[1].tolerations.operator Sets the operator for the restart storage node tolerations. Exists storagenode.daemonsets[1].tolerations.value Sets the value of tolerations for the restart storage node. <empty> storagenode.create Specifies whether to create a storage node on a Kubernetes worker node. false storagenode.ifname Sets the default interface to be used for binding the storage node to the host interface. eth0 storagenode.maxLogicalVolumes Sets the default maximum number of logical volumes per storage node. 10 storagenode.maxSnapshots Sets the default maximum number of snapshots per storage node. 10 storagenode.maxSize Sets the max provisioning size of all storage nodes. 150g storagenode.numPartitions Sets the number of partitions to create per device. 1 storagenode.numDevices Sets the number of devices per storage node. 1 storagenode.numDistribs Sets the number of distribs per storage node. 2 storagenode.isolateCores Enables automatic core isolation. 
false storagenode.dataNics Sets the data interface names. <empty> storagenode.pciAllowed Sets the list of allowed NVMe PCIe addresses. <empty> storagenode.pciBlocked Sets the list of blocked NVMe PCIe addresses. <empty> storagenode.socketsToUse Sets the list of sockets to use. <empty> storagenode.nodesPerSocket Sets the number of nodes to use per socket. <empty> storagenode.coresPercentage Sets the percentage of total cores (vCPUs) available to simplyblock storage node services. <empty> storagenode.ubuntuHost Set to true if the worker node runs Ubuntu and needs the nvme-tcp kernel module installed. false storagenode.openShiftCluster Set to true if it is an OpenShift cluster and needs core isolation. false"},{"location":"reference/kubernetes/#caching-node-parameters","title":"Caching Node Parameters","text":"Parameter Description Default cachingnode.tolerations.create Specifies whether to create tolerations for the caching node. false cachingnode.tolerations.effect Sets the effect of tolerations on caching nodes. <empty> cachingnode.tolerations.key Sets the tolerations key for caching nodes. <empty> cachingnode.tolerations.operator Sets the operator for caching node tolerations. Exists cachingnode.tolerations.value Sets the value of tolerations for caching nodes. <empty> cachingnode.create Specifies whether to create caching nodes on Kubernetes worker nodes. false cachingnode.nodeSelector.key Sets the key used in the nodeSelector to constrain which nodes the DaemonSet should run on. type cachingnode.nodeSelector.value Sets the value for the nodeSelector key to match against specific nodes. simplyblock-cache cachingnode.ifname Sets the default interface to be used for binding the caching node to the host interface. eth0 cachingnode.cpuMask Sets the CPU mask for the spdk app to use for the caching node. <empty> cachingnode.spdkMem Sets the amount of hugepages memory to allocate for the caching node. 
<empty> cachingnode.multipathing Specifies whether to enable multipathing for logical volume connections. true"},{"location":"reference/kubernetes/#image-overrides","title":"Image Overrides","text":"

Danger

Overriding pinned image tags can result in an unusable state. The following parameters should only be used after an explicit request from simplyblock.

| Parameter | Description | Default |
|---|---|---|
| image.csi.repository | Simplyblock CSI driver image. | simplyblock/spdkcsi |
| image.csi.tag | Simplyblock CSI driver image tag. | v0.1.0 |
| image.csi.pullPolicy | Simplyblock CSI driver image pull policy. | Always |
| image.csiProvisioner.repository | CSI provisioner image. | registry.k8s.io/sig-storage/csi-provisioner |
| image.csiProvisioner.tag | CSI provisioner image tag. | v4.0.1 |
| image.csiProvisioner.pullPolicy | CSI provisioner image pull policy. | Always |
| image.csiAttacher.repository | CSI attacher image. | gcr.io/k8s-staging-sig-storage/csi-attacher |
| image.csiAttacher.tag | CSI attacher image tag. | v4.5.1 |
| image.csiAttacher.pullPolicy | CSI attacher image pull policy. | Always |
| image.nodeDriverRegistrar.repository | CSI node driver registrar image. | registry.k8s.io/sig-storage/csi-node-driver-registrar |
| image.nodeDriverRegistrar.tag | CSI node driver registrar image tag. | v2.10.1 |
| image.nodeDriverRegistrar.pullPolicy | CSI node driver registrar image pull policy. | Always |
| image.csiSnapshotter.repository | CSI snapshotter image. | registry.k8s.io/sig-storage/csi-snapshotter |
| image.csiSnapshotter.tag | CSI snapshotter image tag. | v7.0.2 |
| image.csiSnapshotter.pullPolicy | CSI snapshotter image pull policy. | Always |
| image.csiResizer.repository | CSI resizer image. | gcr.io/k8s-staging-sig-storage/csi-resizer |
| image.csiResizer.tag | CSI resizer image tag. | v1.10.1 |
| image.csiResizer.pullPolicy | CSI resizer image pull policy. | Always |
| image.csiHealthMonitor.repository | CSI external health-monitor controller image. | gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller |
| image.csiHealthMonitor.tag | CSI external health-monitor controller image tag. | v0.11.0 |
| image.csiHealthMonitor.pullPolicy | CSI external health-monitor controller image pull policy. | Always |
| image.simplyblock.repository | Simplyblock management image. | simplyblock/simplyblock |
| image.simplyblock.tag | Simplyblock management image tag. | R25.5-Hotfix |
| image.simplyblock.pullPolicy | Simplyblock management image pull policy. | Always |
| image.storageNode.repository | Simplyblock storage-node controller image. | simplyblock/simplyblock |
| image.storageNode.tag | Simplyblock storage-node controller image tag. | v0.1.0 |
| image.storageNode.pullPolicy | Simplyblock storage-node controller image pull policy. | Always |
| image.cachingNode.repository | Simplyblock caching-node controller image. | simplyblock/simplyblock |
| image.cachingNode.tag | Simplyblock caching-node controller image tag. | v0.1.0 |
| image.cachingNode.pullPolicy | Simplyblock caching-node controller image pull policy. | Always |
| image.mgmtAPI.repository | Simplyblock management API image. | python |
| image.mgmtAPI.tag | Simplyblock management API image tag. | 3.10 |
| image.mgmtAPI.pullPolicy | Simplyblock management API image pull policy. | Always |
"},{"location":"reference/troubleshooting/","title":"Troubleshooting","text":"

Simplyblock is designed as a system for minimal manual intervention. However, once in a while, there may be issues that require some special treatment.

This section provides practical solutions for common issues you might encounter when deploying or operating simplyblock. Whether you're dealing with deployment hiccups, performance anomalies, connectivity problems, or configuration errors, you'll find step-by-step guidance to help you diagnose and resolve them quickly. Use this guide to keep your simplyblock environment running smoothly and with confidence.

"},{"location":"reference/troubleshooting/control-plane/","title":"Control Plane","text":""},{"location":"reference/troubleshooting/control-plane/#foundationdb-error","title":"FoundationDB Error","text":"

Symptom: FoundationDB error. All services that rely upon the FoundationDB key-value storage are offline or refuse to start.

  1. Ensure that IPv6 is disabled: Network Configuration
    sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1\nsudo sysctl -w net.ipv6.conf.default.disable_ipv6=1\n
  2. Ensure sufficient disk space on the root partition on all control plane nodes. Free disk space can be checked with df -h.
  3. If not enough free disk space is available, start by checking the Graylog, MongoDB, and Elasticsearch containers. If those consume most of the disk space, the oldest indices (2-3) can be deleted.
  4. Increase the root partition size.
  5. If you cannot increase the root partition size, remove any data or service not relevant to the simplyblock control plane and run a docker system prune.
  6. Restart the Docker daemon: systemctl restart docker
  7. Reboot the node
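To make the IPv6 setting from step 1 survive reboots, it can be persisted via a sysctl drop-in file. This is a sketch with a hypothetical file name:

```
# /etc/sysctl.d/90-disable-ipv6.conf (hypothetical file name)
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```

Apply it without rebooting via `sudo sysctl --system`.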
"},{"location":"reference/troubleshooting/control-plane/#graylog-service-is-unresponsive","title":"Graylog Service Is Unresponsive","text":"

Symptom: The Graylog service cannot be reached anymore or is unresponsive.

  1. Ensure enough free available memory
  2. If short on available memory, stop services non-relevant to the simplyblock control plane.
  3. If that doesn't help, reboot the host.
"},{"location":"reference/troubleshooting/control-plane/#graylog-storage-is-full","title":"Graylog Storage is Full","text":"

Symptom: The Graylog service cannot start or is unresponsive, and the storage disk is full.

  1. Identify the cause of the disk running full. Run the following commands to find the largest files on the Graylog disk. Find the largest files
    df -h\ndu -sh /var/lib/docker\ndu -sh /var/lib/docker/containers\ndu -sh /var/lib/docker/volumes\n
  2. Delete the old Graylog indices via the Graylog UI.
    • Go to System -> Indices
    • Select your index set
    • Adjust the Max Number of Indices to a lower number
  3. Reduce Docker disk usage by removing unused Docker volumes and images, as well as old containers. Remove old Docker entities
    docker volume prune -f\ndocker image prune -f\ndocker rm $(sudo docker ps -aq --filter \"status=exited\")\n
  4. Cleanup OpenSearch, Graylog, and MongoDB volume paths and restart the services. Cleaning up adjacent services
    # Scale services down\ndocker service update monitoring_graylog --replicas=0\ndocker service update monitoring_opensearch --replicas=0\ndocker service update monitoring_mongodb --replicas=0\n\n# Remove old data\nrm -rf /var/lib/docker/volumes/monitoring_graylog_data\nrm -rf /var/lib/docker/volumes/monitoring_os_data\nrm -rf /var/lib/docker/volumes/monitoring_mongodb_data\n\n# Restart services\ndocker service update monitoring_mongodb --replicas=1\ndocker service update monitoring_opensearch --replicas=1\ndocker service update monitoring_graylog --replicas=1\n
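To quickly locate which directories consume the most space before deleting anything, a small helper like the following sketch can be used. The `largest_dirs` function name is ours, for illustration only; run it as root so Docker's data directories are readable:

```shell
# Print the ten largest immediate subdirectories of a given path, largest first.
largest_dirs() {
  du -sh "$1"/* 2>/dev/null | sort -rh | head -n 10
}

# Example: largest_dirs /var/lib/docker/volumes
```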
"},{"location":"reference/troubleshooting/simplyblock-csi/","title":"Kubernetes CSI","text":""},{"location":"reference/troubleshooting/simplyblock-csi/#high-level-csi-driver-architecture","title":"High-Level CSI Driver Architecture","text":"

Controller Plugin: Runs as a Deployment and manages volume provisioning and deletion.

Node Plugin: Runs as a DaemonSet and handles volume attachment, mounting, and unmounting.

Sidecars: Handle tasks like external provisioning (csi-provisioner), attaching (csi-attacher), and node driver registration (csi-node-driver-registrar).

"},{"location":"reference/troubleshooting/simplyblock-csi/#finding-csi-driver-logs-for-a-specific-pvc","title":"Finding CSI Driver Logs for a Specific PVC","text":"
  1. Identify the Node Where the PVC is Mounted Get the pod name using the persistent volume claim
    kubectl get pods -A -o \\\njsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.spec.volumes[*].persistentVolumeClaim.claimName}{\"\\n\"}{end}' | \\\ngrep <PVC_NAME>\n
    Find the node the pod is bound to
    kubectl get pods -A -o \\\njsonpath='{range .items[*]}{.spec.nodeName}{\"\\t\"}{.spec.volumes[*].persistentVolumeClaim.claimName}{\"\\n\"}{end}' | \\\ngrep <PVC_NAME>\n
  2. Find the CSI driver pod on that node Find the CSI driver pod
    kubectl get pods -n <CSI_NAMESPACE> -o wide | grep <NODE_NAME>\n
  3. Get Logs from the node plugin Get the CSI driver pod logs
    kubectl logs -n <CSI_NAMESPACE> <CSI_NODE_POD> -c <DRIVER_CONTAINER>\n
"},{"location":"reference/troubleshooting/simplyblock-csi/#troubleshooting-nvme-related-errors","title":"Troubleshooting NVMe-Related Errors","text":"

If the error is NVMe-related (e.g., volume attachment failure, device not found), follow these steps.

  1. Ensure that nvme-cli is installed

    RHEL / Alma / Rocky
    sudo dnf install -y nvme-cli\n
    Debian / Ubuntu
    sudo apt install -y nvme-cli\n
  2. Verify if the nvme-tcp kernel module is loaded

    Check NVMe/TCP kernel module is loaded
    lsmod | grep nvme_tcp\n

    If not loaded, the module can be loaded temporarily using the following command:

    Load NVMe/TCP kernel module
    sudo modprobe nvme-tcp\n

    However, to ensure it is automatically loaded at system startup, it should be persisted as follows:

    Red Hat / Alma / Rocky
    echo \"nvme-tcp\" | sudo tee -a /etc/modules-load.d/nvme-tcp.conf\n
    Debian / Ubuntu
    echo \"nvme-tcp\" | sudo tee -a /etc/modules\n
  3. Check NVMe Connection Status

    Check NVMe-oF connection
    sudo nvme list-subsys\n

    If the expected NVMe subsystem is missing, reconnect manually:

    Manually reconnect the NVMe-oF device
    sudo nvme connect -t tcp \\\n    -n <NVME_SUBSYS_NAME> \\\n    -a <TARGET_IP> \\\n    -s <TARGET_PORT> \\\n    -l <CTRL_LOSS_TIMEOUT> \\\n    -c <RECONNECT_DELAY> \\\n    -i <NR_IO_QUEUES>\n
  4. If the issue persists, gather kernel logs and provide them to the simplyblock support team:

    Collect logs for support
    sudo dmesg | grep -i nvme\n
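The module check from step 2 can also be scripted. The following is a minimal sketch; the function name is ours, and the optional file argument (defaulting to `/proc/modules` on a live system) exists purely to make the helper testable:

```shell
# Succeeds if the nvme_tcp kernel module appears in the modules list.
nvme_tcp_loaded() {
  grep -q '^nvme_tcp' "${1:-/proc/modules}" 2>/dev/null
}

if nvme_tcp_loaded; then
  echo "nvme-tcp loaded"
else
  echo "nvme-tcp missing, try: sudo modprobe nvme-tcp"
fi
```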

"},{"location":"reference/troubleshooting/storage-plane/","title":"Storage Plane","text":""},{"location":"reference/troubleshooting/storage-plane/#fresh-cluster-cannot-be-activated","title":"Fresh Cluster Cannot Be Activated","text":"

Symptom: After a fresh deployment, the cluster cannot be activated. The activation process hangs or fails, and the storage nodes show n/0 disks available in the disks column (sbctl storage-node list).

  1. Shutdown all storage nodes: sbctl storage-node shutdown --force
  2. Force remove all storage nodes: sbctl storage-node remove --force
  3. Delete all storage nodes: sbctl storage-node delete
  4. Re-add all storage nodes. The disks should become active.
  5. Try to activate the cluster.
"},{"location":"reference/troubleshooting/storage-plane/#storage-node-health-check-shows-healthfalse","title":"Storage Node Health Check Shows Health=False","text":"

Symptom: The storage node health check returns health=false (sbctl storage-node list).

  1. First run sbctl storage-node check.
  2. If the command keeps showing an unhealthy storage node, suspend, shut down, and restart the storage node.

Danger

Never shut down or restart a storage node while the cluster is in a degraded state. This can lead to I/O interruption, independent of the cluster's high-availability status. Check the cluster status with any of the following commands:

sbctl cluster list\nsbctl cluster get <cluster-id>\nsbctl cluster show <cluster-id>\n
"},{"location":"release-notes/","title":"Release Notes","text":"

Simplyblock regularly provides new releases with new features, performance enhancements, bugfixes, and more.

This section provides detailed information about each Simplyblock release, including new features, enhancements, bug fixes, and known issues. Stay informed about the latest developments to ensure optimal performance and take full advantage of simplyblock's capabilities.

"},{"location":"release-notes/25-10-2/","title":"25.10.2","text":"

Simplyblock is happy to announce the general availability of Simplyblock 25.10.2.

"},{"location":"release-notes/25-10-2/#new-features","title":"New Features","text":"
  • Control Plane: The control plane can alternatively be deployed into existing Kubernetes clusters and co-located on worker nodes alongside storage nodes.
  • Kubernetes Support Matrix: Added OpenShift starting from version 4.20.0
  • OpenStack Driver: The OpenStack driver is now available. It supports most optional features and has been tested from OpenStack 25.1 (Epoxy). Older OpenStack versions may be supported on request.
  • The required memory footprint on storage nodes has been reduced from 0.2% of storage capacity to 0.05% of storage capacity.
  • QoS: A pool-level QoS feature has been added.
  • QoS Service Classes: A new feature to assign a service class to a volume. Service classes provide full performance isolation within the cluster.
  • Support for flexible erasure coding schemas within a cluster.
  • Support for RDMA fabric and mixed fabrics (RDMA, TCP).
  • Improved write performance during the first write to a volume and during node outages.
  • Support for namespace volumes: a single NVMe subsystem can now expose up to 32 namespace volumes.
"},{"location":"release-notes/25-10-2/#fixes","title":"Fixes","text":"
  • Control Plane: Fixed a problem, which could lead to stuck deletes.
"},{"location":"release-notes/25-10-2/#upgrade-considerations","title":"Upgrade Considerations","text":"

Upgrades are possible from 25.7.6 and 25.7.7. RDMA support for the fabric can be added during an online upgrade.

"},{"location":"release-notes/25-10-2/#known-issues","title":"Known Issues","text":"

Using different erasure coding schemas per cluster is available but can cause I/O interruption issues in some tests. This feature is experimental and not GA.

"},{"location":"release-notes/25-10-3/","title":"25.10.3","text":"

Simplyblock is happy to announce the general availability of Simplyblock 25.10.3.

"},{"location":"release-notes/25-10-3/#new-features","title":"New Features","text":"

No new features.

"},{"location":"release-notes/25-10-3/#fixes","title":"Fixes","text":"
  • Control Plane: Fixed an issue with FoundationDB where clusters were able to exceed the maximum size of a single entry.
  • Control Plane: Fixed an issue where a default QoS class was assigned to logical volumes.
  • Control Plane: Fixed an issue where a connection wasn't re-established when a node went down.
  • Storage Plane: Optimized the storage usage calculation of the storage node.
"},{"location":"release-notes/25-10-3/#upgrade-considerations","title":"Upgrade Considerations","text":"

Upgrades are possible from 25.7.6 and 25.7.7. RDMA support for the fabric can be added during an online upgrade.

"},{"location":"release-notes/25-10-3/#known-issues","title":"Known Issues","text":"

Using different erasure coding schemas per cluster is available but can cause I/O interruption issues in some tests. This feature is experimental and not GA.

"},{"location":"release-notes/25-3-pre/","title":"25.3-PRE","text":"

Simplyblock is happy to announce the pre-release of the upcoming Simplyblock 25.3.

Warning

This is a pre-release and may contain issues. It is not recommended for production use.

"},{"location":"release-notes/25-3-pre/#new-features","title":"New Features","text":"

Simplyblock strives to provide a strong product. Following is a list of the enhancements and features that made it into this release.

  • High availability has been significantly hardened for production. The main improvements concern support for safe and interruption-free fail-over and fail-back in different outage scenarios, including partial network outage, full network outage, host failure, container failure, reboots, and graceful and ungraceful node shutdowns. Tested for single and dual node outages.
  • Multiple journal compression bugs have been identified and fixed.
  • Multiple journal fail-over bugs have been identified and fixed.
  • Logical volume creation, deletion, snapshotting, and resizing can now be performed via a secondary storage node (when the primary storage node is offline).
  • The system has been hardened against high load scenarios, determined by the number of parallel NVMe-oF volumes per node, the amount of storage, and parallel I/O. Tested with up to 400 concurrent and fully active logical volumes per node and up to 20 concurrent I/O processes per logical volume.
  • Erasure coding schemes 2+1, 2+2, 4+2, 4+4 have been made power-fail-safe with high availability enabled.
  • System has been extensively tested outside of AWS with KVM-based virtualization and on bare-metal deployments.
  • Significant rework of the command line tool sbcli to simplify commands and parameters and make it more consistent. For more information see Important Changes.
  • Support for Linux Core Isolation to improve performance and system stability.
  • Added support for Proxmox via the Simplyblock Proxmox Integration.
"},{"location":"release-notes/25-3-pre/#important-changes","title":"Important Changes","text":"

Simplyblock made significant changes to the command line tool sbcli to simplify working with it. Many parameters and commands were meant for internal testing and were confusing to users. Hence, simplyblock decided to make those private.

Parameters and commands that were made private should not affect most users. If making a parameter or command private does affect your deployment, please reach out.

Most changes are backwards-compatible; however, some are not. Following is a list of all changes.

  • Command: storage-node
    • Renamed command sn to storage-node (sn still works as an alias)
    • Changed subcommand device-testing-mode to private
    • Changed subcommand info-spdk to private
    • Changed subcommand remove-jm-device to private
    • Changed subcommand send-cluster-map to private
    • Changed subcommand get-cluster-map to private
    • Changed subcommand dump-lvstore to private
    • Changed subcommand set to private
    • Subcommand: deploy
      • Added parameter --cpu-mask
      • Added parameter --isolate-cores
    • Subcommand: add-node
      • Renamed parameter --partitions to --journal-partition
      • Renamed parameter --storage-nics to --data-nics
      • Renamed parameter --number-of-vcpus to --vcpu-count
      • Added parameter --max-snap (private)
      • Changed parameter --jm-percent to private
      • Changed parameter --number-of-devices to private
      • Changed parameter --size-of-device to private
      • Changed parameter --cpu-mask to private
      • Changed parameter --spdk-image to private
      • Changed parameter --spdk-debug to private
      • Changed parameter --iobuf_small_bufsize to private
      • Changed parameter --iobuf_large_bufsize to private
      • Changed parameter --enable-test-device to private
      • Changed parameter --disable-ha-jm to private
      • Changed parameter --id-device-by-nqn to private
      • Changed parameter --max-snap to private
    • Subcommand: restart
      • Renamed parameter --node-ip to --node-addr (--node-ip still works but is deprecated and should be exchanged)
      • Changed parameter --max-snap to private
      • Changed parameter --max-size to private
      • Changed parameter --spdk-image to private
      • Changed parameter --spdk-debug to private
      • Changed parameter --iobuf_small_bufsize to private
      • Changed parameter --iobuf_large_bufsize to private
    • Subcommand: list-devices
      • Removed parameter --sort / -s
  • Command: cluster
    • Changed subcommand graceful-shutdown to private
    • Changed subcommand graceful-startup to private
    • Subcommand: deploy
      • Renamed parameter --separate-journal-device to --journal-partition
      • Renamed parameter --storage-nics to --data-nics
      • Renamed parameter --number-of-vcpus to --vcpu-count
      • Changed parameter --ha-jm-count to private
      • Changed parameter --enable-qos to private
      • Changed parameter --blk-size to private
      • Changed parameter --page_size to private
      • Changed parameter --CLI_PASS to private
      • Changed parameter --grafana-endpoint to private
      • Changed parameter --distr-bs to private
      • Changed parameter --max-queue-size to private
      • Changed parameter --inflight-io-threshold to private
      • Changed parameter --jm-percent to private
      • Changed parameter --max-snap to private
      • Changed parameter --number-of-distribs to private
      • Changed parameter --size-of-device to private
      • Changed parameter --cpu-mask to private
      • Changed parameter --spdk-image to private
      • Changed parameter --spdk-debug to private
      • Changed parameter --iobuf_small_bufsize to private
      • Changed parameter --iobuf_large_bufsize to private
      • Changed parameter --enable-test-device to private
      • Changed parameter --disable-ha-jm to private
      • Changed parameter --lvol-name to private
      • Changed parameter --lvol-size to private
      • Changed parameter --pool-name to private
      • Changed parameter --pool-max to private
      • Changed parameter --snapshot / -s to private
      • Changed parameter --max-volume-size to private
      • Changed parameter --encrypt to private
      • Changed parameter --crypto-key1 to private
      • Changed parameter --crypto-key2 to private
      • Changed parameter --max-rw-iops to private
      • Changed parameter --max-rw-mbytes to private
      • Changed parameter --max-r-mbytes to private
      • Changed parameter --max-w-mbytes to private
      • Changed parameter --distr-vuid to private
      • Changed parameter --lvol-ha-type to private
      • Changed parameter --lvol-priority-class to private
      • Changed parameter --fstype to private
    • Subcommand: create
      • Changed parameter --page_size to private
      • Changed parameter --CLI_PASS to private
      • Changed parameter --distr-bs to private
      • Changed parameter --distr-chunk-bs to private
      • Changed parameter --ha-type to private
      • Changed parameter --max-queue-size to private
      • Changed parameter --inflight-io-threshold to private
      • Changed parameter --enable-qos to private
    • Subcommand: add
      • Changed parameter --page_size to private
      • Changed parameter --distr-bs to private
      • Changed parameter --distr-chunk-bs to private
      • Changed parameter --max-queue-size to private
      • Changed parameter --inflight-io-threshold to private
      • Changed parameter --enable-qos to private
  • Command: storage-pool
    • Removed subcommand get-secret
    • Removed subcommand update-secret
    • Subcommand: add
      • Changed parameter --has-secret to private
  • Command: caching-node
    • Subcommand: add-node
      • Renamed parameter --number-of-vcpus to --vcpu-count
      • Changed parameter --cpu-mask to private
      • Changed parameter --memory to private
      • Changed parameter --spdk-image to private
  • Command: volume
    • Changed subcommand list-mem to private
    • Changed subcommand move to private
    • Subcommand: add
      • Renamed parameter --pvc_name to --pvc-name (--pvc_name still works but is deprecated and should be exchanged)
      • Changed parameter --distr-vuid to private
      • Changed parameter --uid to private
"},{"location":"release-notes/25-3-pre/#known-issues","title":"Known Issues","text":"

Simplyblock always seeks to provide a stable and strong release. However, smaller known issues happen. Following is a list of known issues of the current simplyblock release.

Info

This is a pre-release and many of those known issues are expected to be resolved with the final release.

  • The control plane reaches a limit at around 2,200 logical volumes.
  • If a storage node goes offline while a logical volume is being deleted, the storage cluster may keep some garbage.
  • In rare cases, resizing a logical volume under high I/O load may cause a storage node restart.
  • If a storage cluster reaches its capacity limit and runs full, file systems on logical volumes may return I/O errors.
  • A fail-back after a fail-over may increase to >10s (with freezing I/O) with a larger number of logical volumes per storage node (>100 logical volumes).
  • A fail-over time may increase to >5s (with freezing I/O) on large logical volumes (>5 TB).
  • During a node outage, I/O performance may drop significantly with certain I/O patterns due to a performance issue in the journal compression.
  • Journal compression may cause significant I/O performance drops (10-20s) in periodic intervals under certain I/O load patterns, especially when the logical volume capacity reaches its limits for the first time.
  • A peak read IOPS performance regression has been observed.
  • In rare cases, a primary-secondary storage node combination may get into a flip-flop situation with multiple fail-over/fail-back iterations due to network or configuration issues of particular logical volumes or clients.
  • A secondary node may get stuck when trying to restart under high load (>100 logical volumes).
  • Node affinity rules are not considered after a storage node migration to a new host.
  • Return code of sbcli commands is always 0.
"},{"location":"release-notes/25-6-ga/","title":"25.6","text":"

Simplyblock is happy to announce the general availability of Simplyblock 25.6.

"},{"location":"release-notes/25-6-ga/#new-features","title":"New Features","text":"

Simplyblock strives to provide a strong product. Following is a list of the enhancements and features that made it into this release.

  • General: Renamed sbcli to sbctl. The old sbcli command is deprecated but still available as a fallback for scripts.
  • Storage Plane: Increased maximum of available logical volumes per storage node to 1,000.
  • Storage Plane: Added the option to start multiple storage nodes in parallel on the same host. This is useful for machines with multiple NUMA nodes and many CPU cores to increase scalability.
  • Storage Plane: Added NVMe multipathing-independent high availability between storage nodes to harden resilience against network issues and improve failover.
  • Storage Plane: Removed separate secondary storage nodes for failover. From now on, all storage nodes act as primary and secondary storage nodes.
  • Storage Plane: Added I/O redirection in case of failover to secondary to improve cluster stability and failover times.
  • Storage Plane: Added support for CPU Core Isolation to improve performance consistency. Core masks and core isolation are auto-applied on disaggregated setups.
  • Storage Plane: Added sbctl storage-node configure command to automate the configuration of new storage nodes. See Configure a Storage Node for more information.
  • Storage Plane: Added optimized algorithms for the 4+1 and 4+2 erasure coding configurations.
  • Storage Plane: Reimplemented the Quality of Service (QoS) subsystem with significantly less overhead than the old one.
  • Storage Plane: Added support for namespaced logical volumes (experimental).
  • Storage Plane: Reimplemented the initialization of new pages to significantly reduce the performance impact of first writes to a page.
  • Storage Plane: Added support for optional labels when using strict anti-affinity.
  • Storage Plane: Added support for node affinity in case of a device failure to try to recover onto another device on the host.
  • Proxmox: Added support for native Proxmox node migration.
  • Talos: Added support to deploy on Talos-based OS-images.
  • AWS: Added Bottlerocket support.
  • AWS: Added multipathing support for Amazon Linux 2, Amazon Linux 2023, Bottlerocket.
  • GCP: Added support for Google Compute Engine.
"},{"location":"release-notes/25-6-ga/#fixes","title":"Fixes","text":"
+ + + + + + + +