**********************************
Using Robin CNS in Kubernetes
**********************************

The Container Storage Interface (CSI) is a standard for exposing storage to workloads on Kubernetes. To enable automatic creation and deletion of volumes for CSI storage, a Kubernetes resource called `StorageClass` must be created and registered within the Kubernetes cluster. Associated with the StorageClass is a CSI provisioner plugin that does the heavy lifting at the disk and storage management layers to provision storage volumes based on the various attributes defined in the StorageClass.

By default, Robin ships with the following four StorageClasses:

1. ``robin`` - This StorageClass is considered the default Robin StorageClass. It does not have any features enabled and can be used for standard RWO volumes.

2. ``robin-repl-3`` - This StorageClass can be used to create volumes that need 3 replicas with a fault domain of ``host`` exclusively.

3. ``robin-immediate`` - This StorageClass will create volumes as soon as their respective volume claim is created, without waiting for a first consumer.

4. ``robin-rwx`` - This StorageClass can be used to create RWX volumes with 2 replicas and a fault domain of ``host`` exclusively.

Each StorageClass that uses Robin as the primary provisioner can be configured with the parameters described below. These parameters enable users to customize a StorageClass as needed. They are optional, and some have default values.

.. code-block:: yaml

   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: <"storage-class-name">
   provisioner: robin
   reclaimPolicy: Delete
   parameters:
     media: <"HDD", "SSD">
     blocksize: <"512", "4096">
     fstype: <"ext4", "xfs">
     protection: <"replication", "quorum-replication">
     replication: <"2", "3">
     faultdomain: <"disk", "host", "rack">
     faultdomain_customlabel:
     compression: <"LZ4">
     encryption: <"CHACHA20", "AES128", "AES256">
     workload: <"ordinary", "throughput", "latency", "dedicated">
     snapshot_space_limit: <"50">
     rpool: <"default">
     robin.io/storagetolerations:
     host_tags:
     hydration: <"true", "false">

=============================== ==================================================

``media`` The media type Robin should use to allocate PersistentVolumes. Two values are supported: ``HDD`` for spinning disks and ``SSD`` for solid state devices. Robin automatically discovers the media type of the underlying local disks. If not provided, Robin chooses the type of the first discovered media. For example, a `GCE Standard Persistent Disk` is treated as HDD media and a `GCE SSD Persistent Disk` is treated as SSD media.

``blocksize`` By default, Robin uses ``4096`` as the block size of the underlying logical block device it creates. You can override it by setting it to ``512`` for certain workloads that require it.

``fstype`` By default, the logical block device created by Robin is formatted with the ``ext4`` filesystem. It can also be changed to ``xfs``.

``protection`` Type of protection for a volume for data consistency. The valid values are ``replication`` and ``quorum-replication``. The default value is ``replication``.

- **replication** - Robin CNS provides a strictly consistent data replication guarantee, which means a write IO is only acknowledged to the client once it is made durable on all **healthy** replicas.

- **quorum-replication** - Robin CNS makes sure that a majority of replicas are always up to acknowledge a write IO, which means a write IO is only acknowledged to the client once it is made durable on a **majority** of replicas.
When the number of active replicas is less than the quorum value, only read IOs are served; write IOs are not served in the region of the volume that is out of quorum because of the faults in the cluster.

.. Note:: You must set the ``replication`` parameter to ``3`` to configure 3-way replication for volumes with the ``quorum-replication`` protection.

``replication`` Number of replicas for a volume. By default, Robin does not enable replication for a Robin volume. You can set it to ``2`` or ``3`` to set up 2-way or 3-way replication. You must set the ``replication`` parameter to ``3`` to configure 3-way replication for volumes with the ``quorum-replication`` protection.

``faultdomain`` The fault domain to be used when "replication" is turned on. Setting the right fault domain maximizes data safety. Setting it to ``disk`` ensures that Robin picks different disks to keep the replicated copies. Robin also tries to pick disks on different nodes to ensure higher availability in the event of node failures. However, on a very busy cluster, if there are no spare disks on different nodes, setting the fault domain to ``disk`` can result in disks from the same node being picked for storing the replicated copies of the volume. To prevent this and to ensure that your application can tolerate an entire node going down, you can set the fault domain to ``host``. Doing so guarantees that Robin never picks disks from the same node when storing replicated data of a volume. If disks across different nodes are not available, volume creation fails rather than degrading to the ``disk``-level fault domain. For additional safety, you can set the fault domain to ``rack`` in order to maintain application availability even if a set of nodes is shut down. Doing so guarantees that Robin picks disks from different racks for each replica of the target volume. The rack a node belongs to is determined by the value of the Kubernetes label ``robin.io/rack`` assigned to the node. This applies to both cloud-based and on-premises clusters. As before, if this condition cannot be met, volume creation fails. The default value is ``disk``.

``faultdomain_customlabel`` Enables a user to form a custom fault domain by specifying a key that is used as a Kubernetes label for nodes within the cluster. For example, if two nodes are tagged with the labels "color:red" and "color:blue" respectively, the key "color" can be used as a custom fault domain label to ensure these hosts are considered as residing in different domains. **Note:** When you specify the ``faultdomain`` and ``faultdomain_customlabel`` parameters together, the ``faultdomain_customlabel`` parameter takes precedence and overrides the value of the ``faultdomain`` parameter.

``compression`` By default, inline data compression is disabled. It can be enabled by setting this parameter to ``LZ4``, which turns on inline block-level data compression using the LZ4 compression algorithm. Support for other compression algorithms is on the roadmap.

``encryption`` By default, data-at-rest encryption is not enabled. To enable it, set this parameter to ``CHACHA20``, ``AES128`` or ``AES256``, which uses the chosen algorithm to perform block-level encryption of data for that PersistentVolume.

``workload``

- ``throughput`` : If a volume has high throughput requirements, this workload type can be used. Typically these are volumes that are used for large streams of IOs, such as data volumes. Robin allocates no more than one such volume per device. This type of volume can share the device with other ordinary volumes.

- ``latency`` : If a volume has low latency requirements, this workload type can be used. Typically these are volumes that are used for heartbeats. The amount of data written is small, but the IO has to complete within strict time limits. By default, Robin allocates no more than two such volumes per device. This type of volume can share the device with other ordinary volumes.

- ``dedicated`` : The entire device is dedicated to a volume when this workload type is used. This ensures that the volume gets all the IO bandwidth offered by the device. No other volume can share the device.

- ``ordinary`` : All other volumes that are not classified as any of the above are given this workload type. Any number of ordinary volumes can share the same device.

``snapshot_space_limit`` The amount of space set aside for snapshots of this volume. For example, if the volume size is 100GB, a value of "30" reserves 30GB of space for snapshots. New snapshot creation will fail once this limit is reached.

``rpool`` Resource pools are a construct in Robin that allows you to group nodes in the cluster together for allocation purposes. Pools provide resource isolation. The default resource pool is ``default``.

``robin.io/storagetolerations`` Comma-separated key-value pairs of the storage taints added to nodes with the ``robin.io/storagetaint`` annotation. Storage tolerations allow a volume to be placed onto a node with matching storage taints, but do not guarantee placement on that node. If a node has storage taints, a volume must have the matching storage tolerations to be placed on that node. Storage taints and tolerations are similar to the Kubernetes node taints and tolerations for Pods.

``host_tags`` Comma-separated key-value pairs of labels or tags added to nodes using the ``kubectl label node`` command. This guarantees that a volume is placed on specific nodes. This is similar to the Kubernetes node selector.

``hydration`` Copies data from the backup stored in an external cloud storage repository to the disks of the cluster where the volume needs to be imported. This parameter is optional and is only used for importing a volume using the Kubernetes specification. Valid values are ``true`` and ``false``. By default, this parameter is set to ``true``, which means that all volumes imported from the storage repository are hydrated. To override this behavior, set it to ``false``.

=============================== ==================================================

.. Note:: For the ``blocksize`` and ``replication`` attributes, the values they are configured with must be quoted strings to adhere to the CSI specification. For example, the value for blocksize should be passed as "4096" (quoted) and NOT as 4096 (unquoted).
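As a concrete illustration of the template above, the following is a minimal sketch of a custom StorageClass that combines several of the documented parameters. The class name ``robin-ssd-repl2`` and the specific values chosen here are illustrative only; substitute values that match your cluster.

.. code-block:: yaml

   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: robin-ssd-repl2            # example name, not shipped with Robin
   provisioner: robin
   reclaimPolicy: Delete
   parameters:
     media: SSD                       # allocate only from SSD devices
     blocksize: "4096"                # quoted string, per the CSI note above
     fstype: ext4
     replication: "2"                 # quoted string, per the CSI note above
     faultdomain: host                # replicas never share a node
     compression: LZ4                 # inline block-level compression
     snapshot_space_limit: "30"       # space reserved for snapshots
     rpool: default

A PersistentVolumeClaim that references this class through ``storageClassName`` picks up all of these settings without needing any per-PVC annotations.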
============
Fault domain
============

When you use the replication parameter to make replicas of your data on an on-premises cluster, you also need to provide the fault domain parameter. Setting the right fault domain for your cluster maximizes the safety of your data.

.. Note:: The default value of the fault domain is ``disk``.

The following are the valid values for the fault domain:

``disk`` - When you set the fault domain to ``disk``, Robin creates the replicas of data on different disks as per the replication parameter.
For example, if you set the replication parameter to ``2`` and the fault domain parameter to ``disk``, Robin creates two replicas on two different disks, on two different nodes when possible.

``host`` - When you set the fault domain to ``host``, Robin creates the replicas of data on different hosts as per the replication parameter. For example, if you set the replication parameter to ``3`` and the fault domain parameter to ``host``, Robin creates three replicas on three different hosts.

``rack`` - When you set the fault domain to ``rack``, Robin creates the replicas of data on different racks as per the replication parameter. For example, if you set the replication parameter to ``3`` and the fault domain parameter to ``rack``, Robin creates three replicas on three different racks.

Points to consider for fault domain
-----------------------------------

- When you set the fault domain to ``disk``, Robin tries to pick disks on different nodes to ensure higher availability in the event of node failure.

- When you set the fault domain to ``disk`` and there are no spare disks on different nodes, Robin picks disks from a single node for storing the replicated copies of the volume.

- To avoid the replicated copies being stored on disks from a single node, Robin recommends setting the fault domain to ``host``.

- When you set the fault domain to ``host``, Robin never picks disks from the same node to store replicated data.

- When you set the fault domain to ``host`` and disks are not available on different nodes, volume creation fails rather than degrading to the ``disk`` fault domain.

Set rack as fault domain
------------------------

You can set the fault domain to ``rack``, which is similar to Availability Zones (AZs) on cloud platforms. This is applicable to on-premises clusters. You first need to add rack labels to the Kubernetes nodes; Robin then picks up these labels automatically.

Perform the following steps to set the fault domain to ``rack``:

1. Run the following command to add labels to Kubernetes nodes:

.. code-block:: text

   # kubectl label nodes <node-name> robin.io/rack=<rack-name>

**Example:**

.. code-block:: text

   # kubectl label nodes hypervvm-61-61 robin.io/rack=rack1
   node/hypervvm-61-61 labeled

2. Robin picks up the labels and adds them automatically.

Verify rack labels
------------------

After adding the rack labels to the respective nodes, you can verify whether the nodes are tagged with the correct rack labels by running the following command:

.. code-block:: text

   # robin host list --tags

**Example:**

.. code-block:: text # robin host list --tags Id | Hostname | Version | Status | RPool | Avail.
Zone | Rack | Lab | DC | Tags -------------+---------------------------------+-----------+--------+---------+-------------+------------------+-----+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1676556882:1 | hypervvm-61-61.robinsystems.com | 5.4.3-144 | Ready | default | N/A | rack : ['rack1'] | - | - | {'robin.io/hostname': ['hypervvm-61-61.robinsystems.com'], 'beta.kubernetes.io/arch': ['amd64'], 'node-role.kubernetes.io/control-plane': [''], 'kubernetes.io/hostname': ['hypervvm-61-61'], 'kubernetes.io/os': ['linux'], 'node.kubernetes.io/exclude-from-external-load-balancers': [''], 'beta.kubernetes.io/os': ['linux'], 'kubernetes.io/arch': ['amd64'], 'robin.io/robinhost': ['hypervvm-61-61'], 'robin.io/domain': ['ROBIN'], 'robin.io/robinrpool': ['default']} 1676556882:2 | hypervvm-61-62.robinsystems.com | 5.4.3-144 | Ready | default | N/A | - | - | - | {'beta.kubernetes.io/arch': ['amd64'], 'node-role.kubernetes.io/control-plane': [''], 'kubernetes.io/os': ['linux'], 'node.kubernetes.io/exclude-from-external-load-balancers': [''], 'beta.kubernetes.io/os': ['linux'], 'kubernetes.io/arch': ['amd64'], 'robin.io/domain': ['ROBIN'], 'robin.io/hostname': ['hypervvm-61-62.robinsystems.com'], 'kubernetes.io/hostname': ['hypervvm-61-62'], 'robin.io/robinhost': ['hypervvm-61-62'], 'robin.io/robinrpool': ['default']} 1676556882:3 | hypervvm-61-63.robinsystems.com | 5.4.3-144 | Ready | default | N/A | - | - | - | {'beta.kubernetes.io/arch': ['amd64'], 'node-role.kubernetes.io/control-plane': [''], 'kubernetes.io/os': ['linux'], 'node.kubernetes.io/exclude-from-external-load-balancers': [''], 'beta.kubernetes.io/os': ['linux'], 'kubernetes.io/arch': ['amd64'], 'robin.io/domain': ['ROBIN'], 'robin.io/hostname': ['hypervvm-61-63.robinsystems.com'], 'kubernetes.io/hostname': ['hypervvm-61-63'], 'robin.io/robinhost': ['hypervvm-61-63'], 'robin.io/robinrpool': ['default']} * Note: all values indicated above in the format XX/XX/XX represent the Free/Allocated/Total values of the respective resource unless otherwise specified. In addition allocated values for compute resource such as cpu, memory and pod usage includes reserved values for the corresponding resource. You can also use the following command: .. code-block:: text # kubectl get nodes --show-labels **Example:** .. 
code-block:: text

   # kubectl get nodes --show-labels
   NAME             STATUS   ROLES           AGE    VERSION           LABELS
   hypervvm-61-61   Ready    control-plane   165m   v1.26.0-alpha.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hypervvm-61-61,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=,robin.io/domain=ROBIN,robin.io/hostname=hypervvm-61-61.robinsystems.com,robin.io/rack=rack1,robin.io/robinhost=hypervvm-61-61,robin.io/robinrpool=default
   hypervvm-61-62   Ready    control-plane   162m   v1.26.0-alpha.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hypervvm-61-62,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=,robin.io/domain=ROBIN,robin.io/hostname=hypervvm-61-62.robinsystems.com,robin.io/robinhost=hypervvm-61-62,robin.io/robinrpool=default
   hypervvm-61-63   Ready    control-plane   161m   v1.26.0-alpha.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hypervvm-61-63,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=,robin.io/domain=ROBIN,robin.io/hostname=hypervvm-61-63.robinsystems.com,robin.io/robinhost=hypervvm-61-63,robin.io/robinrpool=default

=======================================================
Using Robin CNS Storage Class to Provision Storage
=======================================================

Supported access modes to Provision Storage
--------------------------------------------

Robin supports the following access modes to provision storage:

- ReadWriteOnce (RWO)
- ReadOnlyMany (ROX)
- ReadWriteMany (RWX)
- ReadWriteOncePod (RWOP)

For more information, see `Access Modes `__.
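For instance, a claim that should only ever be mounted by a single Pod can request the ``ReadWriteOncePod`` mode. The following is a minimal sketch; the claim name ``rwop-pvc`` is illustrative, and the mode also requires a Kubernetes version that supports RWOP.

.. code-block:: yaml

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: rwop-pvc                # illustrative name
   spec:
     accessModes:
       - ReadWriteOncePod          # only one Pod may use the claim at a time
     resources:
       requests:
         storage: 10Gi
     storageClassName: robin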
Basic Use Case
--------------

The most straightforward use case for a PVC is to have it utilized by a Pod. The following steps can be used to achieve this.

- **Create a PersistentVolumeClaim with Robin CNS StorageClass**

First, configure YAML similar to the one shown below for a PersistentVolumeClaim (PVC) using the Robin CNS StorageClass.

.. code-block:: yaml
   :emphasize-lines: 11

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: mypvc
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 10Gi
     storageClassName: robin

Run the following command to create the PVC defined in the above YAML:

.. code-block:: text

   $ kubectl create -f mypvc.yaml
   persistentvolumeclaim/mypvc created

.. Note:: Notice that under ``spec`` we have specified the storage class as ``storageClassName: robin``. This results in the Robin CNS StorageClass being picked up.

Verify the desired PVC exists and was created successfully by running the following command:

.. code-block:: text

   $ kubectl get pvc
   NAME    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   mypvc   Pending                                      robin          7s

- **Attach the PersistentVolumeClaim to a simple Pod**:

Configure a Pod YAML, similar to the one showcased below, in which the volume we created previously is referenced.

.. code-block:: yaml
   :emphasize-lines: 9

   kind: Pod
   apiVersion: v1
   metadata:
     name: myweb
   spec:
     volumes:
       - name: htdocs
         persistentVolumeClaim:
           claimName: mypvc
     containers:
       - name: myweb0
         image: nginx
         ports:
           - containerPort: 80
             name: "http-server"
         volumeMounts:
           - mountPath: "/usr/share/nginx/html"
             name: htdocs

Run the following command to create the Pod:

.. code-block:: text

   $ kubectl create -f mypod.yaml

We can confirm that the PersistentVolumeClaim is bound and a PersistentVolume is created by issuing the following commands:

.. code-block:: text

   $ kubectl get pvc
   NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   mypvc   Bound    pvc-7a18d80c-6c26-4585-a949-24d9005e3d7f   10Gi       RWO            robin          6m1s

   $ kubectl get pv
   NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
   pvc-7a18d80c-6c26-4585-a949-24d9005e3d7f   10Gi       RWO            Delete           Bound    default/mypvc   robin                   5m32s

Attach a volume to a Pod as ``readOnly``
-----------------------------------------

Robin supports the ``readOnly`` mode for a volume using the `Kubernetes CSI provisioner `__. When a volume is in ``readOnly`` mode, only read requests are served on this volume; write requests are not served.

To attach a volume to a Pod as ``readOnly``, you must specify the following parameter in the Pod YAML under ``spec.volumes.persistentVolumeClaim``:

- ``readOnly: true``

Complete the following steps to attach a volume to a Pod as ``readOnly``:

1. Create a Pod YAML with the PVC name you want to attach to a Pod as ``readOnly``, using the following example:

.. code-block:: yaml
   :emphasize-lines: 20,21

   kind: Pod
   apiVersion: v1
   metadata:
     name: my-csi-robin-app-clone-ro
     labels:
       app.kubernetes.io/instance: robin
       app.kubernetes.io/managed-by: robin.io
       app.kubernetes.io/name: robin
   spec:
     containers:
       - name: my-frontend
         image: busybox:stable
         volumeMounts:
         - mountPath: "/data"
           name: my-csi-robin-volume-clone
         command: [ "sleep", "1000000" ]
     volumes:
       - name: my-csi-robin-volume-clone
         persistentVolumeClaim:
           claimName: csi-pvc-robin-clone
           readOnly: true

2. Run the following command to create the Pod:

.. code-block:: text

   # kubectl create -f mypod.yaml

Customizing Volume Provisioning
-------------------------------

Let's say that we'd like to create a PVC which meets the following requirements:

* Data is replicated 3 ways
* The Pod should continue to have access to data even if 2 of the 3 disks or the nodes on which these disks are hosted go down
* The data must be compressed
* The data should only reside on SSD media

This is accomplished by specifying these requirements under the ``metadata/annotations`` section of the PVC spec as described in the YAML below. Note that each annotation is prefixed with ``robin.io/``. Except for ``fstype``, annotations can take the exact same parameters as the Robin CNS StorageClass YAML detailed above and override the corresponding parameters specified in the StorageClass.

.. code-block:: yaml
   :emphasize-lines: 5-9, 16

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: protected-compressed-pvc
     annotations:
       robin.io/replication: "3"
       robin.io/faultdomain: host
       robin.io/compression: LZ4
       robin.io/media: SSD
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
     storageClassName: robin

.. note:: PVC annotations will be deprecated in future releases.

Run the following command to create the PVC:

.. code-block:: text

   $ kubectl create -f newpvc.yaml
   persistentvolumeclaim/protected-compressed-pvc created

   $ kubectl get pvc
   NAME                       STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   mypvc                      Bound     pvc-7a18d80c-6c26-4585-a949-24d9005e3d7f   10Gi       RWO            robin          62m
   protected-compressed-pvc   Pending                                              robin                                    47s

.. Note:: The number 3 is quoted as "3" when specifying the ``robin.io/replication`` annotation. This is as per the Kubernetes spec. Not doing so would result in an error being thrown by Kubernetes.
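Because PVC annotations are slated for deprecation, the same requirements can instead be captured in a dedicated StorageClass and selected through ``storageClassName``. The following is a sketch under that assumption; the class name ``robin-protected-compressed`` is illustrative.

.. code-block:: yaml

   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: robin-protected-compressed   # illustrative name
   provisioner: robin
   reclaimPolicy: Delete
   parameters:
     replication: "3"      # quoted string, as required by the CSI spec
     faultdomain: host     # replicas placed on different nodes
     compression: LZ4      # inline compression
     media: SSD            # SSD-only placement
   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: protected-compressed-pvc
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
     storageClassName: robin-protected-compressed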
Restricting Volume Placement
-----------------------------

Restricting volume placement allows you to limit volume creation to specific nodes. This feature is implemented using storage taints and storage tolerations, and it is similar to the Kubernetes taints and tolerations for Pods. A new parameter named ``host_tags`` is added to the StorageClass to allow volume creation on the desired nodes.

Points to consider for restricting volume placement
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- You must add the equivalent storage tolerations to the StorageClass to create volumes on nodes that have storage taints.

- A storage toleration allows a volume to be created on a node that has a storage taint but does not guarantee it. Adding that node's labels or tags (via ``host_tags``) guarantees that a volume is placed on the respective node.

- After adding storage taints to nodes, existing volumes on those nodes are not moved anywhere.

- After creating a volume using storage tolerations, adding or removing tags from the nodes does not move existing volumes from those nodes.

Complete the following steps to create a PVC on a specific node:

1. Add the labels to the desired node:

.. code-block:: text

   # kubectl label node <node-name> <key>=<value>

**Example**

.. code-block:: text

   # kubectl label node rakuten-37.robinsystems.com tier=gold
   node/rakuten-37.robinsystems.com labeled

2. Add a storage taint to the desired node:

.. code-block:: text

   # kubectl annotate node <node-name> robin.io/storagetaint=<key:value>

**Example**

.. code-block:: text

   # kubectl annotate node rakuten-37.robinsystems.com robin.io/storagetaint='mysql:True'
   node/rakuten-37.robinsystems.com annotated

3. Create a storageclass.yaml file with the ``robin.io/storagetolerations`` and ``host_tags`` parameters using the following example:

.. code-block:: yaml
   :emphasize-lines: 20,21

   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: robin-storage-toleration
     labels:
       app.kubernetes.io/instance: robin
       app.kubernetes.io/managed-by: robin.io
       app.kubernetes.io/name: robin
   provisioner: robin
   reclaimPolicy: Delete
   allowVolumeExpansion: true
   volumeBindingMode: Immediate
   parameters:
     media: SSD
     rpool: default
     compression: LZ4
     replication: "2"
     faultdomain: host
     encryption: CHACHA20
     robin.io/storagetolerations: "mysql:True"
     host_tags: "tier:gold"

4. Create a StorageClass using the storageclass.yaml file created in step 3:

.. code-block:: text

   # kubectl create -f <storageclass.yaml>

**Example**

.. code-block:: text

   # kubectl create -f robin-sc.yaml
   storageclass.storage.k8s.io/robin-storage-toleration created

5. Create a pvc.yaml file to create a PVC using the following example:

.. code-block:: yaml
   :emphasize-lines: 11

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: test-pvc
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 100Gi
     storageClassName: robin-storage-toleration

6. Create a PVC using the pvc.yaml file created in step 5:

.. code-block:: text

   # kubectl create -f <pvc.yaml>

**Example**

.. code-block:: text

   # kubectl create -f pvc.yaml
   persistentvolumeclaim/test-pvc created

Create an Encrypted Volume
--------------------------

An encrypted volume can be used to secure application data. The following steps can be used to create an encrypted volume.

- **Configure the PersistentVolumeClaim YAML**

First, you need to configure YAML similar to the one shown below for a PersistentVolumeClaim (PVC) using the Robin CNS StorageClass.
.. code-block:: yaml
   :emphasize-lines: 6-7, 14

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: robinvol
     annotations:
       robin.io/media: HDD
       robin.io/encryption: AES128
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 5Gi
     storageClassName: robin

- **Create the PersistentVolumeClaim**

Run the following command to create the encrypted PVC defined in the above YAML:

.. code-block:: text

   # kubectl create -f robinvol.yaml
   persistentvolumeclaim/robinvol created

Using Robin CNS in a StatefulSet
------------------------------------

In a StatefulSet, a PVC is not directly referenced as in the above examples; instead, a volumeClaimTemplate is used to describe the type of PVC that needs to be created as part of the creation of the StatefulSet resource. This is accomplished via the following YAML:

.. code-block:: yaml
   :emphasize-lines: 43-44, 50

   apiVersion: v1
   kind: Service
   metadata:
     name: nginx
     labels:
       app: nginx
   spec:
     ports:
     - port: 80
       name: web
     clusterIP: None
     selector:
       app: nginx
   ---
   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     name: web
   spec:
     serviceName: "nginx"
     replicas: 2
     selector:
       matchLabels:
         app: nginx
     template:
       metadata:
         labels:
           app: nginx
       spec:
         containers:
         - name: nginx
           image: k8s.gcr.io/nginx-slim:0.8
           ports:
           - containerPort: 80
             name: web
           volumeMounts:
           - name: www
             mountPath: /usr/share/nginx/html
     volumeClaimTemplates:
     - metadata:
         name: www
         annotations:
           robin.io/replication: "2"
           robin.io/media: SSD
       spec:
         accessModes: [ "ReadWriteOnce" ]
         resources:
           requests:
             storage: 1Gi
         storageClassName: robin

The following commands can be used to create the StatefulSet and ensure the correct PVCs are used:

.. code-block:: text

   $ kubectl create -f myweb.yaml
   service/nginx created
   statefulset.apps/web created

   $ kubectl get statefulset
   NAME   READY   AGE
   web    2/2     12s

   $ kubectl get pvc
   NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   www-web-0   Bound    pvc-2b97d8fc-479d-11e9-bac1-00155d61160d   1Gi        RWO            robin          8s
   www-web-1   Bound    pvc-436536e6-479d-11e9-bac1-00155d61160d   1Gi        RWO            robin          8s

   $ kubectl get pv
   NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
   pvc-2b97d8fc-479d-11e9-bac1-00155d61160d   1Gi        RWO            Delete           Bound    default/www-web-0   robin                   10s
   pvc-436536e6-479d-11e9-bac1-00155d61160d   1Gi        RWO            Delete           Bound    default/www-web-1   robin                   10s

.. _StorageForHelmCharts:

Provisioning Storage for Helm Charts
------------------------------------

Helm charts are a popular way to deploy an entire stack of Kubernetes resources in one shot. A Helm chart is installed using the ``helm install`` command. To use Robin CNS for persistent storage, one needs to pass the ``--set persistence.storageClass=robin`` command line option as shown below:

.. code-block:: text

   $ helm install --name pgsqldb stable/mysql --set persistence.storageClass=robin

This results in Robin being used as the storage provisioner for PersistentVolumeClaims created by this Helm chart.

Identifying released volumes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When a PVC is created with a ReclaimPolicy of ``Retain``, Kubernetes will not call the CSI driver to delete the backing PersistentVolume when the linked PVC is removed. As a result, in such cases the Robin administrator might need to manually clean up the associated ``released`` volumes in order to complete the deletion process. To aid this process, a utility named ``get-unbound-vols`` is provided within the Robin daemonset Pods.
As its name suggests, it discovers any unbound volumes which do not have any associated PVCs, and thus enables the administrator to complete the deletion process by removing the identified PV objects and their linked Robin volumes (which have the same name) using the ``kubectl delete pv`` and ``robin volume delete`` commands respectively. More information on the ReclaimPolicy concept can be found `here `__.

To run the utility, exec into any Robin daemonset Pod, log in as the administrator, and issue the ``get-unbound-vols`` command as shown in the example below:

.. code-block:: text

   # kubectl describe robinclusters -n robinio
   Name:         robin
   Namespace:    robinio
   Labels:       app.kubernetes.io/instance=robin
                 app.kubernetes.io/managed-by=robin.io
                 app.kubernetes.io/name=robin
   Annotations:
   API Version:  manage.robin.io/v1
   Kind:         RobinCluster
   Metadata:
     Creation Timestamp:  2021-02-04T09:41:06Z
     Generation:          1
     Resource Version:    378697
     Self Link:           /apis/manage.robin.io/v1/namespaces/robinio/robinclusters/robin
     UID:                 7533d2da-8521-47e2-bb52-426df5a07a2b
   Spec:
     host_type:            ibm
     image_pull_secret:    all-icr-io
     image_registry_path:  uk.icr.io/docker_registry/docker
     image_robin:          robinsys/robinimg:5.3.4-75
     k8s_provider:         iks
     Options:
       cloud_cred_secret:  cloud-cred-secret
       update_coredns:     1
   Status:
     connect_command:   kubectl exec -it robin-nfbxx -n robinio -- bash
     get_robin_client:  curl -k https://10.242.64.22:29442/api/v3/robin_server/download?file=robincli&os=linux > robin
     master_ip:         10.242.64.22
     Phase:             Ready
     pod_status:
       robin-nfbxx 10.242.64.22 Running 10.242.64.22
       robin-drdzx 10.242.64.21 Running 10.242.64.21
       robin-57w8h 10.242.64.20 Running 10.242.64.20
     robin_node_status:
       host_name:      kube-c0d69ael0c0ce2a8b6b0-asitclus-default-00000b25
       join_time:      1612431722
       k8s_node_name:  10.242.64.22
       Roles:          M*,S
       Rpool:          default
       State:          ONLINE
       Status:         Ready
       host_name:      kube-c0d69ael0c0ce2a8b6b0-asitclus-default-00000a68
       join_time:      1612431745
       k8s_node_name:  10.242.64.20
       Roles:          S,M
       Rpool:          default
       State:          ONLINE
       Status:         Ready
       host_name:      kube-c0d69ael0c0ce2a8b6b0-asitclus-default-00000c21
       join_time:      1612431760
       k8s_node_name:  10.242.64.21
       Roles:          S,M
       Rpool:          default
       State:          ONLINE
       Status:         Ready
   Events:

   # kubectl exec -it robin-nfbxx -n robinio -- bash
   [robinds@hypervvm-72-43]#
   [robinds@hypervvm-72-43]# robin login admin --p Robin123
   User admin is logged into Administrators tenant
   [robinds@hypervvm-72-43]# get-unbound-vols
   Unbound volumes (without PVCs):
   pvc-6732bdf4-58ce-40da-abff-c93089dcdcdf
   pvc-15b32f8f-4e85-4d97-a28e-9d760f63769b
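Once the unbound volumes have been identified, the administrator can finish the cleanup with the two commands mentioned above. The following is a sketch using the first volume from the example output; substitute the PV and volume names reported in your own cluster, and note that the Robin CLI may prompt for confirmation depending on your version. The first command removes the released PV object from Kubernetes, and the second removes the Robin volume of the same name.

.. code-block:: text

   # kubectl delete pv pvc-6732bdf4-58ce-40da-abff-c93089dcdcdf
   # robin volume delete pvc-6732bdf4-58ce-40da-abff-c93089dcdcdf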
=====================================================
Protecting PVCs using Robin's Volume Replication
=====================================================

Robin CNS supports storage volume-level replication to ensure that data is always available in the event of node and disk failures. To achieve this, Robin CNS uses the ``protection`` parameter to configure volume replication and the ``replication`` parameter to set the replica count of a volume.

Protection parameter
--------------------

To configure volume replication, the ``protection`` parameter, with either ``replication`` or ``quorum-replication`` as its value, needs to be specified in the `StorageClass `__:

- **Replication** - Write IOs are only acknowledged to the client once they are made durable on all **healthy** replicas. Read and write IOs are allowed to the last standing replica. This is the default value of the protection parameter.

- **Quorum-replication** - Write IOs are only acknowledged to the client once they are made durable on a **majority** of replicas. When the number of active replicas is less than the quorum value, only read IOs are served from the last standing replica; write IOs are not served in the region of the volume that is out of quorum because of the faults in the cluster. The leader replica makes sure that write IOs are made durable on at least a quorum of replicas before they are acknowledged to the client.

Points to consider for ``quorum-replication`` protection
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- After creating a volume, you cannot change the protection type and replication count.

- Import of a quorum-based replication volume is supported only if it is imported with the hydration option.

- A thick clone volume can have its own protection type, independent of the parent volume.

Replication parameter
----------------------

To set the replica count of a volume, the ``replication`` parameter, with either ``2`` or ``3`` as its value, needs to be specified in the `StorageClass `__.

Robin CNS uses synchronous replication to ensure high availability of data. Synchronous replication ensures data is written and committed to all the replicas in real time. The benefit synchronous replication provides is guaranteed data consistency between the replicated volumes.

When the ``replication`` parameter is set to 2, at least 2 copies of the volume are maintained on different disks or hosts. If it is set to 3, at least 3 copies are maintained. This ensures that the volume's data is available in the event of one or two disk or node failures.

Configuring replication can be done by either specifying the ``replication`` parameter in the `StorageClass `__ or annotating the PVC spec with ``robin.io/replication: "<2|3>"`` and optionally ``robin.io/faultdomain: disk|host|rack`` as shown in the YAML below:

.. code-block:: yaml
   :emphasize-lines: 6-7, 14

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: replicated-pvc
     annotations:
       robin.io/replication: "3"
       robin.io/faultdomain: host
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
     storageClassName: robin

Setting the correct value for ``robin.io/faultdomain`` to ``disk``, ``host`` or ``rack`` ensures that this PVC's data is available in the event of just a disk failure or also node failures.

**How are faults handled?**

Robin CNS uses strict-consistency semantics to guarantee correctness for your mission critical stateful applications. This means that a "write" IO is not acknowledged back to the application until it is made durable on all the healthy replica disks.

It is possible that one or more replica disks for a volume can go down for short periods of time (a node going through a reboot cycle), or for longer periods of time (a node has a hardware fault and can't be brought online until the part is replaced). Robin CNS handles both cases gracefully. When a replica disk becomes unavailable during IO, Robin CNS automatically evicts it from the replication group. The IOs continue to go to the remaining healthy replicas. When the faulted disks become available, Robin CNS automatically brings them up to the same state as the other healthy disks before adding them back into the replication group. This is automatically handled and transparent to the application.

A disk can also suffer a more serious error, for example an IO error returned by the disk during a write or read operation. In this case, Robin CNS marks that disk as `faulted` and generates an alert for the storage admin to investigate.
The storage admin can then determine the nature of the error and mark that disk as healthy, in which case Robin CNS adds it back into the replication group and initiates a data resync to bring it up to the same level as the other healthy disks. If the error is serious (for example, SMART counters return corruption), or if the node has a motherboard or IO card fault that needs to be replaced, the storage admin can permanently decommission that disk or node from the Kubernetes cluster. Doing so also automatically evicts that disk from the replication group of the PVC. The storage admin can then add a new healthy disk to the replication group so that the PVC can be brought back to the same level of availability as before.

There is a practical reason why Robin CNS doesn't automatically trigger rebuilds of faulted disks. Robin CNS is currently being used in mission critical workloads with multiple petabytes under management by the Robin storage stack. We have seen scenarios where an IO controller card has failed while it has 12 disks of 10TiB each. That is 120 TiB of storage capacity under a single IO controller card. Rebuilding 120 TiB of data takes more time than replacing a faulted IO controller card with a healthy one. Also, moving 120 TiB of data over the network from healthy disks on other nodes puts a significant load on the network switches and the applications running on the nodes from which the data is pulled. This results in noticeable performance degradation. With our experience managing storage under large scale deployments and feedback from admins managing those clusters, we have determined that it is best to inform an admin of a failure and let them decide, based on cost and time, whether they want to replace the faulty hardware or have Robin CNS initiate a rebuild.

=====================================
Making Robin the default StorageClass
=====================================

To avoid typing the name of the StorageClass each time a new chart is deployed, it is highly recommended to set a StorageClass provided by Robin as the default Kubernetes StorageClass. The following steps can be used to achieve this.

- **Check the current default StorageClass**

Inspect whether there is already a different StorageClass marked as default by running the following command:

.. code-block:: text

   $ kubectl get storageclass
   NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
   gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   8d
   robin           robin                   Delete          WaitForFirstConsumer   true                   5d5h

- **Set the non-Robin StorageClass as "non-default"**

In order to mark the current default StorageClass as "non-default", run the following command:

.. code-block:: text

   $ kubectl patch storageclass gp2 \
     -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
   storageclass.storage.k8s.io/gp2

.. Note:: Before patching the storage class, ensure that the annotation specified is correct for your Kubernetes version. The above example is specific to a cluster where the AWS ``gp2`` StorageClass is the default.

- **Make the Robin StorageClass the new default**

To set a Robin provided StorageClass as the default for the cluster, run the following command:

.. code-block:: text

   $ kubectl patch storageclass robin \
     -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

.. Note:: Before patching the Robin provided storage class, ensure that the name specified is correct, as the name of the standard StorageClass changes depending on the Kubernetes version.
For example, for Kubernetes versions newer than 1.13 the standard StorageClass appears as ``robin``, but for older versions it is displayed as ``robin-0-3`` instead.

- **Verify that the Robin StorageClass is now the default**

Issue the following command to confirm that Robin is now the default StorageClass:

.. code-block:: text
   :emphasize-lines: 4

   $ kubectl get storageclass
   NAME              PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
   gp2               kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   8d
   robin (default)   robin                   Delete          WaitForFirstConsumer   true                   5d5h

To learn more, the official documentation on this process can be found `here `_.

=============================
ReadWriteMany (RWX) Volumes
=============================

Robin supports the ReadWriteMany (RWX) access mode of Persistent Volumes (PVs). An RWX PVC can be used by any Pod on any node in the same namespace for read and write operations. More information on these types of volumes can be found `here `_.

Robin provides support for RWX volumes by utilizing a shared file system. Specifically, the network file system (NFS) is used. These volumes can be mounted within any Pod deployed via Helm or YAML files and consequently can support multiple read/write clients. In addition, support for RWX volumes also extends to non-root application Pods, details for which can be found `here `_.

.. Note:: RWX volumes are not supported within Robin Bundle applications.

Default Configuration for RWX volumes
-------------------------------------

When you install Robin CNS, Robin creates a sample StorageClass named ``robin-rwx`` for RWX volumes. You can use this StorageClass for creating RWX volumes. The default values of the ``robin-rwx`` StorageClass for the ``replication`` factor and ``faultdomain`` are ``2`` and ``host`` respectively.

.. Note:: For RWX volumes, you must specify a ``replication`` factor greater than ``1`` and the ``faultdomain`` as either ``host`` or ``rack``. These conditions must be met for RWX volumes to be provisioned.

The default values are used if the ``robin.io/replication`` and ``robin.io/faultdomain`` annotations are not explicitly defined within the PVC definition YAML. The following is an example of a PVC definition YAML where the default ``replication`` and ``faultdomain`` are overridden:

.. code-block:: yaml
   :emphasize-lines: 7,8

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: nfs-shared-1
     annotations:
       robin.io/nfs-server-type: "shared"
       robin.io/replication: "3"
       robin.io/faultdomain: "rack"
   spec:
     storageClassName: robin-rwx
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 500Gi

To override the default ``replication`` factor such that only ``1`` replica is provisioned, the ``robin.io/rwx_force_single_replica: "1"`` annotation must be specified. If an increased number of replicas is needed (the maximum being 3), the ``robin.io/replication`` annotation must be specified with the appropriate value. The following is an example of a PVC definition YAML where the default ``replication`` factor has been overridden such that only ``1`` replica is provisioned:

.. code-block:: yaml
   :emphasize-lines: 7

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: nfs-shared-1
     annotations:
       robin.io/nfs-server-type: "shared"
       robin.io/rwx_force_single_replica: "1"
   spec:
     storageClassName: robin-rwx
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 500Gi

NFS server Pod
---------------

Every RWX PVC results in an NFS export from an NFS server Pod.
When you create a PVC, Robin automatically creates an NFS server Pod and configures it on demand. All Robin NFS server Pods are High Availability (HA) compliant. Robin monitors the health of the NFS server Pods and executes a failover automatically if any NFS server Pod is offline.

There are two types of NFS server Pod, ``shared`` and ``exclusive``, which are described in the sections below. The default NFS server Pod type is ``shared``. In order to update this value, the following command can be used:

.. code-block:: text

   # robin config update nfs default_server_type <value>

**Example**

.. code-block:: text

   # robin config update nfs default_server_type exclusive

Shared NFS server Pod
^^^^^^^^^^^^^^^^^^^^^

With a shared NFS server Pod, multiple RWX PVC exports can be allocated to one NFS server Pod. To allocate a shared NFS server Pod for a PVC, use the following annotation. An example of its usage is shown below as well.

.. code-block:: text

   robin.io/nfs-server-type: "shared"

**Example**

.. code-block:: yaml
   :emphasize-lines: 6, 12

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: nfs-shared-1
     annotations:
       robin.io/nfs-server-type: "shared"
   spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 1Gi

Exclusive NFS server Pod
^^^^^^^^^^^^^^^^^^^^^^^^

With an exclusive NFS server Pod, only one RWX PVC is associated with the Pod, resulting in a dedicated NFS server Pod for the PVC. To allocate an exclusive NFS server Pod for a PVC, use the following annotation. An example of its usage is shown below as well.

.. code-block:: text

   robin.io/nfs-server-type: "exclusive"

**Example**

.. code-block:: yaml
   :emphasize-lines: 6, 12

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: nfs-excl-1
     annotations:
       robin.io/nfs-server-type: "exclusive"
   spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 1Gi

Set resource limits for exclusive NFS server Pods
-------------------------------------------------

The requests and limits for the CPU and memory consumption of exclusive NFS server Pods can be set using PVC annotations. This allows users to control the resource utilization of dedicated RWX volumes and override the default config values for each of the attributes.

To set the requests and limits for CPU, use the respective annotation shown below.

.. code-block:: text

   robin.io/nfs-cpu-limit: <cpu limit>
   robin.io/nfs-cpu-request: <cpu request>

To set the requests and limits for memory, use the respective annotation shown below.

.. code-block:: text

   robin.io/nfs-memory-limit: <memory limit>
   robin.io/nfs-memory-request: <memory request>

**Example**

.. code-block:: yaml
   :emphasize-lines: 12-16, 23

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: csi-pvc-robin
     labels:
       app.kubernetes.io/instance: robin
       app.kubernetes.io/managed-by: robin.io
       app.kubernetes.io/name: robin
     annotations:
       robin.io/media: HDD
       robin.io/compression: LZ4
       robin.io/nfs-server-type: "exclusive"
       robin.io/nfs-cpu-limit: 300m
       robin.io/nfs-memory-limit: 300Mi
       robin.io/nfs-cpu-request: 240m
       robin.io/nfs-memory-request: 240Mi
   spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 1Gi
     storageClassName: robin

.. Note:: The annotations above must be specified during the initial creation of the PVC; otherwise they will not take effect, even if given at a later time.

Managing tolerations for NFS server Pods
-----------------------------------------

In certain environments, some nodes within the Kubernetes cluster might have custom taints set in order to differentiate them from the rest of the nodes in the cluster.
In this scenario, NFS server Pods (or any Pods for that matter) cannot be deployed on these nodes if they do not have the appropriate tolerations in their definition. As a result, in order to support the deployment of NFS server Pods on nodes with custom taints, Robin provides two methods to set the necessary tolerations for the aforementioned Pods: updating a config attribute, or updating the deployment YAML natively. Both are described in detail in the sections below. The former method affects new NFS server Pods to be created, whilst the latter focuses on Pods which are already deployed. More information on Kubernetes taints and tolerations can be found `here `__.

Utilizing Robin configuration attribute
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Robin allows users to utilize the ``nfs_pod_tolerations`` config attribute to set the necessary tolerations for all NFS server Pods. This attribute accepts multiple key-effect value pairs in a comma-separated manner, which results in multiple tolerations being added to the NFS server Pod. More details on the configuration attribute and how to update it can be found `here `__. For convenience, an example of how to update it is shown below:

.. code-block:: text

   # robin config update nfs nfs_pod_tolerations "taintgrp:NoSchedule,node-role.kubernetes.io/control-plane:NoExecute"
   The 'nfs' attribute 'nfs_pod_tolerations' has been updated

.. Note:: The tolerations set via the ``nfs_pod_tolerations`` attribute will only be reflected in NFS server Pods created after the config attribute is updated. For existing NFS server Pods, you must add tolerations manually via native commands as described in the section below.

If the node in question is untainted altogether, all of the respective tolerations will have to be removed from the NFS server Pod in order for it to function properly. To ensure this is the case for any new NFS server Pods, the ``nfs_pod_tolerations`` config attribute can be reset to its default value of ``None`` as shown in the example below:

.. code-block:: text

   # robin config update nfs nfs_pod_tolerations none
   The 'nfs' attribute 'nfs_pod_tolerations' has been updated

Utilizing native commands
^^^^^^^^^^^^^^^^^^^^^^^^^

If an NFS server Pod without the matching tolerations is already deployed on a node with taints, it will continuously fail to reach a ``Ready`` state until the appropriate tolerations are appended to its definition. The steps to complete this process are shown below:

1. Run the following command to edit the NFS server Pod's YAML:

.. code-block:: text

   # kubectl edit pods <nfs-server-pod-name> -n robinio

2. In the ``spec`` section, add the key-effect value pairs that mirror the taints of the appropriate node to the ``tolerations`` list within the YAML:

.. code-block:: yaml

   tolerations:
   - key: "<taint key>"
     operator: "Exists"
     effect: "<taint effect>"

.. Note:: Multiple tolerations can be specified by repeating the above key-effect block with the respective values.

3. Save the updated YAML in order to bounce the NFS server Pod and ensure it reaches the ``Ready`` state.

If the tolerations for an existing NFS server Pod ever need to be removed, as is necessary when the respective node is untainted, the above steps can also be followed, except that instead of appending the ``tolerations`` entries to the YAML file, they need to be removed.

Configure NFS server Pod attributes
-----------------------------------

Listed below are all the attributes a user can configure with regard to the NFS server Pods to be created.
.. Note:: These attributes can be seen using the command ``robin config list nfs``

================================== =================== ====================================================================================================

Attribute                          Default value       Valid Values

================================== =================== ====================================================================================================

``default_server_type`` shared - ``shared`` : multiple RWX PVC exports are allocated to one NFS server Pod. - ``exclusive`` : each RWX PVC results in a dedicated NFS server Pod.

``excl_pod_cpu`` 100m As needed but should conform to the standard Kubernetes notation for CPU requests as documented `here `_.

``excl_pod_memory`` 200Mi As needed but should conform to the standard Kubernetes notation for memory requests as documented `here `_.

``exclusive_pod_cpu_limit`` UNLIMITED As needed but should conform to the standard Kubernetes notation for CPU limits as documented `here `_.

``exclusive_pod_memory_limit`` UNLIMITED As needed but should conform to the standard Kubernetes notation for memory limits as documented `here `_.

``failover_enabled`` 1 Can be set to '0' in order to disable failovers and '1' in order to enable them.

``max_exports_per_pod`` 8 As needed per the planned requirements but has to be an integer.

``nfs_server_storage_affinity`` none - ``none`` : an NFS server Pod is created on a node where sufficient storage and other resources are available, without considering where the volume is allocated. - ``preferred`` : an NFS server Pod is created on the same node as the volume is allocated. If sufficient storage and other resources are not available on that node, the NFS server Pod is created on an available node. - ``required`` : an NFS server Pod is created on the same node as the volume is allocated. If sufficient storage and other resources are not available on that node, the NFS server Pod creation fails.

``pod_creation_timeout`` 600 As needed per the planned requirements but must be an integer.

``service_creation_timeout`` 60 As needed per the planned requirements but has to be an integer.

``shared_pod_cpu`` 100m As needed but should conform to the standard Kubernetes notation for CPU requests as documented `here `_.

``shared_pod_cpu_limit`` UNLIMITED As needed but should conform to the standard Kubernetes notation for CPU limits as documented `here `_.

``shared_pod_failover_serialized`` 1 Can be set to '0' in order to disable shared Pod failovers and '1' in order to enable them.

``shared_pod_memory`` 200Mi As needed but should conform to the standard Kubernetes notation for memory requests as documented `here `_.

``shared_pod_memory_limit`` UNLIMITED As needed but should conform to the standard Kubernetes notation for memory limits as documented `here `_.

``shared_pod_placement`` shared - ``PACK`` : selects the NFS server Pod that has the most PVC assignments while remaining under the maximum export limit.
- ``SPREAD`` : selects the NFS server Pod with minimum PVC assignments and ensures that all NFS server Pods are equally loaded.

``nfs_pod_tolerations`` None Any valid tolerations that match the taints and respective effects already placed on target nodes. The tolerations should be specified in the following format: "key1:effect1,key2:effect2".

================================== =================== ====================================================================================================

In order to update an NFS server Pod attribute, run the following command:

.. code-block:: text

   # robin config update nfs <attribute> <value>

**Example**

.. code-block:: text

   # robin config update nfs shared_pod_placement SPREAD
   The 'nfs' attribute 'shared_pod_placement' has been updated

.. Note:: The modified values are only applicable to NFS server Pods created after the update. The updated values are not applied to existing NFS server Pods.

List all NFS server Pods
-------------------------

.. tabs::

   .. tab:: CLI

      To view the list of all NFS server Pods currently present on the cluster, alongside additional details such as the ID, state, Kubernetes name, type and associated host for each Pod, run the following command:

      .. code-block:: text

         # robin nfs server-list --json

      ================= ==========================
      ``--json``        Output in JSON
      ================= ==========================

      **Example**

      .. code-block:: text

         # robin nfs server-list
         +--------+-------------------------+---------------------------------+---------------------+--------+
         | Pod ID | NFS Server Pod          | Hostname                        | NFS Server Pod Type | State  |
         +--------+-------------------------+---------------------------------+---------------------+--------+
         | 180    | robin-nfs-excl-v108-180 | hypervvm-62-35.robinsystems.com | EXCLUSIVE           | ONLINE |
         | 181    | robin-nfs-shared-181    | hypervvm-62-35.robinsystems.com | SHARED              | ONLINE |
         | 170    | robin-nfs-excl-v107-170 | hypervvm-62-33.robinsystems.com | EXCLUSIVE           | ONLINE |
         | 185    | robin-nfs-excl-v104-185 | hypervvm-62-34.robinsystems.com | EXCLUSIVE           | ONLINE |
         | 184    | robin-nfs-excl-v110-184 | hypervvm-62-35.robinsystems.com | EXCLUSIVE           | ONLINE |
         +--------+-------------------------+---------------------------------+---------------------+--------+

List all NFS exports
--------------------

.. tabs::

   .. tab:: CLI

      To view the list of NFS exports currently present on the cluster, alongside additional details such as the ID, state, associated RWX volume, NFS server Pod and client host for each export, run the following command:

      .. code-block:: text

         # robin nfs export-list --server <nfs server pod> --volume <volume> --verbose

      ============================== ========================================================================================================================================
      ``--server <nfs server pod>``  Filter the list of NFS exports by the specified NFS server Pod. Either the NFS server Pod name or ID should be given
      ``--volume <volume>``          Filter the list of NFS exports by the specified volume. Either the volume name or ID should be given
      ``--verbose``                  Include additional information in the output
      ============================== ========================================================================================================================================

      **Example**

      ..

List all NFS server Pods
-------------------------

.. tabs::

   .. tab:: CLI

      To view the list of all NFS server Pods currently present on the cluster, alongside additional details such as the ID, state, Kubernetes name, type, and associated host for each Pod, run the following command:

      .. code-block:: text

         # robin nfs server-list --json

      ================= ==========================
      ``--json``        Output in JSON
      ================= ==========================

      **Example**

      .. code-block:: text

         # robin nfs server-list
         +--------+-------------------------+---------------------------------+---------------------+--------+
         | Pod ID | NFS Server Pod          | Hostname                        | NFS Server Pod Type | State  |
         +--------+-------------------------+---------------------------------+---------------------+--------+
         | 180    | robin-nfs-excl-v108-180 | hypervvm-62-35.robinsystems.com | EXCLUSIVE           | ONLINE |
         | 181    | robin-nfs-shared-181    | hypervvm-62-35.robinsystems.com | SHARED              | ONLINE |
         | 170    | robin-nfs-excl-v107-170 | hypervvm-62-33.robinsystems.com | EXCLUSIVE           | ONLINE |
         | 185    | robin-nfs-excl-v104-185 | hypervvm-62-34.robinsystems.com | EXCLUSIVE           | ONLINE |
         | 184    | robin-nfs-excl-v110-184 | hypervvm-62-35.robinsystems.com | EXCLUSIVE           | ONLINE |
         +--------+-------------------------+---------------------------------+---------------------+--------+

List all NFS exports
--------------------

.. tabs::

   .. tab:: CLI

      To view the list of NFS exports currently present on the cluster, alongside additional details such as the ID, state, associated RWX volume, NFS server Pod, and client host for each export, run the following command:

      .. code-block:: text

         # robin nfs export-list --server <server> --volume <volume> --verbose

      ========================= ========================================================================================================================
      ``--server <server>``     Filter the list of NFS exports by the specified NFS server Pod. Either the NFS server Pod name or ID should be given
      ``--volume <volume>``     Filter the list of NFS exports by the specified volume. Either the volume name or ID should be given
      ``--verbose``             Include additional information in the output
      ========================= ========================================================================================================================

      **Example**

      .. code-block:: text

         # robin nfs export-list
         +--------------+-----------+------------------------------------------+-------------------------+-----------------------------------------------------------------------+
         | Export State | Export ID | Volume                                   | NFS Server Pod          | Export Clients                                                        |
         +--------------+-----------+------------------------------------------+-------------------------+-----------------------------------------------------------------------+
         | READY        | 18        | pvc-2985f5f7-fd32-4645-95d1-551bce9ed002 | robin-nfs-excl-v112-175 | ["hypervvm-62-33.robinsystems.com"]                                   |
         | READY        | 21        | pvc-9b1ea557-f0f5-419d-ad2a-54eed25c22c1 | robin-nfs-excl-v115-179 | ["hypervvm-62-33.robinsystems.com"]                                   |
         | READY        | 2         | pvc-6ec4aa2d-6b56-4746-85c2-4bc8bf494115 | robin-nfs-shared-181    | ["hypervvm-62-35.robinsystems.com"]                                   |
         | READY        | 5         | pvc-6f5747b5-88c9-4e92-9547-0cf7cdf7b12a | robin-nfs-shared-181    | ["hypervvm-62-35.robinsystems.com","hypervvm-62-33.robinsystems.com"] |
         | READY        | 4         | pvc-a5e14684-761b-4f3e-8f0a-a1828de3fa3f | robin-nfs-shared-181    | ["hypervvm-62-35.robinsystems.com"]                                   |
         | READY        | 13        | pvc-7fb597c8-f976-4ea6-9422-5349e37e544d | robin-nfs-excl-v107-170 | ["hypervvm-62-34.robinsystems.com"]                                   |
         | READY        | 19        | pvc-33827f5a-4004-49fb-b6ef-22f3b0ff6c40 | robin-nfs-excl-v113-182 | ["hypervvm-62-34.robinsystems.com"]                                   |
         +--------------+-----------+------------------------------------------+-------------------------+-----------------------------------------------------------------------+

List all applications associated with an NFS server Pod
--------------------------------------------------------

.. tabs::

   .. tab:: CLI

      To view the list of all applications currently present on the cluster that are linked to an NFS server Pod, alongside additional details such as the name, type, and associated NFS server Pod for each application, run the following command:

      .. code-block:: text

         # robin nfs app-list --server <server> --json

      ========================= ========================================================================================================================
      ``--server <server>``     Filter applications that are using the respective NFS server Pod.
      ``--json``                Output in ``JSON`` format.
      ========================= ========================================================================================================================

      .. Note:: Only applications whose objects (Pods, controllers, etc.) have both the ``app`` and ``release`` labels within their definition, or contain the updated ``app.kubernetes.io/name`` and ``app.kubernetes.io/instance`` labels within their specification, will be shown by the above command. A minimal labeled example is shown after the output below.

      **Example**

      .. code-block:: text

         # robin nfs app-list
         +-------------+------------------+----------------------+
         | Application | Application Type | NFS Server           |
         +-------------+------------------+----------------------+
         | myapp1      | helm             | robin-nfs-shared-181 |
         | myapp2      | helm             | robin-nfs-shared-181 |
         | myapp3      | helm             | robin-nfs-shared-181 |
         +-------------+------------------+----------------------+
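
      As a reference for the label requirement in the note above, the following is a minimal Deployment sketch; the name and image are illustrative. An application whose objects carry these labels would be listed by ``robin nfs app-list``.

      .. code-block:: yaml

         apiVersion: apps/v1
         kind: Deployment
         metadata:
           name: my-rwx-app                       # illustrative name
           labels:
             app.kubernetes.io/name: my-rwx-app
             app.kubernetes.io/instance: my-rwx-app
         spec:
           replicas: 1
           selector:
             matchLabels:
               app.kubernetes.io/name: my-rwx-app
               app.kubernetes.io/instance: my-rwx-app
           template:
             metadata:
               labels:
                 app.kubernetes.io/name: my-rwx-app
                 app.kubernetes.io/instance: my-rwx-app
             spec:
               containers:
                 - name: app
                   image: nginx                   # illustrative image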

Show information about a specific application and its NFS usage
----------------------------------------------------------------

.. tabs::

   .. tab:: CLI

      Issue the following command to get information about the RWX volume and NFS server Pod associated with a specific application instance:

      .. code-block:: text

         # robin nfs server-info <name>

      ================= ==========================================
      ``name``          Name of application to fetch details for
      ================= ==========================================

      **Example**

      .. code-block:: text

         # robin nfs server-info myapp1
         +-------------+------------------------------------------+----------------------+
         | Application | Volume Name                              | NFS Server Pod       |
         +-------------+------------------------------------------+----------------------+
         | myapp1      | pvc-6ec4aa2d-6b56-4746-85c2-4bc8bf494115 | robin-nfs-shared-181 |
         +-------------+------------------------------------------+----------------------+

=============================================================
Robin StorageClass with GID and UID to Run Non-Root App Pods
=============================================================

In Kubernetes, by default only the root user can access persistent volumes. However, using new parameters in Robin's StorageClass, you can allow a specific set of users to access a persistent volume. You can provide read and write access to a persistent volume for a non-root user by providing a GID and UID when creating a new Robin StorageClass. Use this StorageClass in the PVC and set the Pod's security context with the ``runAsUser`` value; a minimal sketch of such a PVC and Pod follows the sample StorageClass below.

When you provide a GID for read and write access to the persistent volumes, any non-root user that belongs to that group ID, including a Pod, is granted access to the file storage.

The following is a sample YAML file:

.. code-block:: yaml

   allowVolumeExpansion: true
   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: robin-gid
     labels:
       app.kubernetes.io/instance: robin
       app.kubernetes.io/managed-by: robin.io
       app.kubernetes.io/name: robin
   parameters:
     gidAllocate: "true"
     gidFixed: "1001"
     media: HDD
     uidFixed: "1001"
   provisioner: robin
   reclaimPolicy: Delete
   volumeBindingMode: WaitForFirstConsumer

.. Note:: There are known Kubernetes permission issues when you mount a volume with ``subPath`` for non-root users. As a workaround, you can remove the ``subPath`` parameter or use an ``init`` container to ``chown`` the path. More details about this issue can be found in the upstream Kubernetes issue tracker.
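
The following is a minimal sketch, assuming the ``robin-gid`` StorageClass above, of a PVC and a Pod whose security context matches the configured UID and GID values. The PVC name, Pod name, image, and mount path are illustrative.

.. code-block:: yaml

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: gid-data-pvc                # illustrative name
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: robin-gid
     resources:
       requests:
         storage: 5Gi
   ---
   apiVersion: v1
   kind: Pod
   metadata:
     name: nonroot-app                 # illustrative name
   spec:
     securityContext:
       runAsUser: 1001                 # matches uidFixed in the StorageClass
       runAsGroup: 1001
       fsGroup: 1001                   # matches gidFixed in the StorageClass
     containers:
       - name: app
         image: busybox                # illustrative image
         command: ["sh", "-c", "sleep 3600"]
         volumeMounts:
           - name: data
             mountPath: /data
     volumes:
       - name: data
         persistentVolumeClaim:
           claimName: gid-data-pvc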

=================
Snapshot Volumes
=================

Just like storage management, which is done by an external storage provisioner such as Robin, taking snapshots of a single Robin volume is also done using a CSI-Snapshotter that is registered with Kubernetes. The official documentation on volume snapshots can be found in the Kubernetes documentation. Robin supports native Kubernetes snapshots for Kubernetes versions v1.13 and beyond.

How it works
------------

Snapshots in Robin are incremental in nature. Every snapshot is built on top of the previous snapshot: the previous snapshot is the parent of the new snapshot, and the new snapshot is its child. Because snapshots are incremental, every snapshot stores only its own data until the previous snapshot is deleted. When you delete a snapshot, its data is not deleted; the child snapshot inherits all the data from its parent snapshot.

Sherlock displays an entry for every snapshot. If valid data of a deleted snapshot still exists, Sherlock also displays an entry for that snapshot and shows its size as zero.

**Example**

.. code-block:: text

   robin_snap_size{name="snapshot-1684315576.deleted.1684321319",id="1",ctime="0",volumename="pvc-3dc3c881-9ba5-422f-97ea-976ac65079a3",volumeid="1"} 0
   robin_snap_state{name="snapshot-1684315576.deleted.1684321319",id="1",ctime="0",volumename="pvc-3dc3c881-9ba5-422f-97ea-976ac65079a3",volumeid="1"} 5
   robin_snap_nclones{name="snapshot-1684315576.deleted.1684321319",id="1",ctime="0",volumename="pvc-3dc3c881-9ba5-422f-97ea-976ac65079a3",volumeid="1"} 0

**Steps to verify how snapshots work in Robin**

The following are the steps to verify how snapshots work in Robin:

1. Write some data on a volume and capture a snapshot of this volume. Let's consider that 2 GB of data is written on this volume.

2. Run the following command to verify the size of snapshot1; it must be 2048M (2 GB):

   .. code-block:: text

      # robin snapshot info snapshot1 | grep -i size
      Snapshot Size : 2048M

3. Again, write 1 GB of data on the same volume without overwriting the existing data, and create another snapshot.

4. Run the following command to verify the size of snapshot2:

   .. code-block:: text

      # robin snapshot info snapshot2 | grep -i size
      Snapshot Size : 1024M

5. Run the following command to see the total data written on the volume:

   .. code-block:: text

      # robin volume info pvc-90111007-3890-4a48-b6e0-785c51d52456 | grep -i size
      Logical Size : 10.0G
      Physical Size* : 3.00G
      *Size calculation is inclusive of all replicas.

6. Due to the incremental nature of snapshots in Robin, snapshot1 and snapshot2 show snapshot sizes of 2 GB and 1 GB respectively, instead of 2 GB and 3 GB.

.. figure:: ./images/volume_snapshot.png
   :width: 550

How data in snapshots is handled with Garbage Collection
----------------------------------------------------------

Let's consider a 10 GB volume with 2 GB of data written on it. The first snapshot is captured with 2 GB of data and is named snapshot1. New data is added to the volume in the following scenarios:

- **Case 1 (New data is appended without overwriting the existing data)** - Let's consider 1 GB of new data is added to the volume and a new snapshot is captured as snapshot2. Due to the incremental snapshots, snapshot1 contains 2 GB of data and snapshot2 contains 1 GB of data. After deleting the parent snapshot, the child snapshot inherits all data from its parent snapshot. In this case, snapshot2 inherits all data from snapshot1 and contains 3 GB of total data.

  .. figure:: ./images/volume_snapshot_case1.png
     :width: 700

- **Case 2 (New data overwrites some part of the existing data)** - Let's consider 2 GB of new data is added to the volume (1 GB overwrites existing data and 1 GB of new data is appended) and a new snapshot is captured as snapshot2. Due to the incremental nature of the snapshots, snapshot1 contains 2 GB of existing data and snapshot2 also contains 2 GB of data (1 GB of overwritten data and 1 GB of new data). After deleting the parent snapshot (snapshot1), the child snapshot (snapshot2) inherits all data from its parent snapshot (snapshot1). From the Robin storage perspective, snapshot2 inherits a total of 4 GB of data (2 GB of data from snapshot1, 2 GB of data from snapshot2). At the same time, from the filesystem perspective, only 3 GB of space is occupied (1 GB of non-overwritten data from snapshot1, 1 GB of overwritten data, and 1 GB of new data from snapshot2).

  When you run the Garbage Collection (GC) on snapshot2, it cleans up the overwritten data of snapshot1, which is no longer valid because snapshot1 was deleted. From the Robin storage perspective, snapshot2 then also contains only 3 GB of valid data.

  .. figure:: ./images/volume_snapshot_case2.png
     :width: 800

- **Case 3 (New data completely overwrites existing data)** - Let's consider 2 GB of new data overwrites the entire existing data and a new snapshot is captured as snapshot2. Due to the incremental snapshots, snapshot1 contains 2 GB of existing data and snapshot2 also contains 2 GB of new data, as it overwrites the entire existing data. After deleting the parent snapshot (snapshot1), the child snapshot (snapshot2) inherits all data from its parent snapshot (snapshot1). From the Robin storage perspective, snapshot2 inherits a total of 4 GB of data (2 GB of data from snapshot1, 2 GB of data from snapshot2). At the same time, from the filesystem perspective, only 2 GB of space is occupied (2 GB of overwritten data from snapshot2).

  When you run the Garbage Collection (GC) on snapshot2, it cleans up the overwritten data of snapshot1, which is no longer valid because snapshot1 was deleted. From the Robin storage perspective, snapshot2 then also contains only 2 GB of valid data.

  .. figure:: ./images/volume_snapshot_case3.png
     :width: 800

Create a volume snapshot
-------------------------

Perform the following steps to create a volume snapshot using the native commands:

.. Note:: To create, list, and delete volume snapshots using the Robin CLI, see `Managing Volume Snapshots`_.

Step 1 - Register a volume snapshot class with Kubernetes

Step 2 - Take a snapshot of a PersistentVolumeClaim

Step 1 - Register a volume snapshot class with Kubernetes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

a. Configure a VolumeSnapshotClass object using the following volume snapshot class YAML:

   .. code-block:: yaml

      apiVersion: snapshot.storage.k8s.io/v1beta1
      kind: VolumeSnapshotClass
      metadata:
        name: robin-snapshotclass
        labels:
          app.kubernetes.io/instance: robin
          app.kubernetes.io/managed-by: robin.io
          app.kubernetes.io/name: robin
      driver: robin
      deletionPolicy: Delete

b. Run the following command to create the VolumeSnapshotClass:

   .. code-block:: text

      $ kubectl create -f csi-robin-snapshotclass.yaml
      volumesnapshotclass.snapshot.storage.k8s.io/robin-snapshotclass created

c. Run the following command to verify that the VolumeSnapshotClass is registered:

   .. code-block:: text

      $ kubectl get volumesnapshotclass
      NAME                  DRIVER   DELETIONPOLICY   AGE
      robin-snapshotclass   robin    Delete           18s

Step 2 - Take a snapshot of a PersistentVolumeClaim
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

a. Configure a VolumeSnapshot object with the name of the VolumeSnapshotClass and the PVC that needs to be snapshotted using the following volume snapshot YAML:

   .. code-block:: yaml
      :emphasize-lines: 10,12

      apiVersion: snapshot.storage.k8s.io/v1beta1
      kind: VolumeSnapshot
      metadata:
        name: snapshot-mypvc
        labels:
          app.kubernetes.io/instance: robin
          app.kubernetes.io/managed-by: robin.io
          app.kubernetes.io/name: robin
      spec:
        volumeSnapshotClassName: robin-snapshotclass
        source:
          persistentVolumeClaimName: mypvc

b. Run the following command to create the VolumeSnapshot:

   .. code-block:: text

      $ kubectl create -f take-snapshot.yaml
      volumesnapshot.snapshot.storage.k8s.io/snapshot-mypvc created

.. Note:: Robin CNS allows deletion of a volume even if a VolumeSnapshot object or a clone created from it still exists.
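
For reference, the ``mypvc`` claim used above can be an ordinary RWO claim; the following is a minimal sketch consistent with the listing shown in the next section (the size and StorageClass are illustrative).

.. code-block:: yaml

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: mypvc
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: robin            # default Robin StorageClass
     resources:
       requests:
         storage: 1Gi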

List a volume snapshot
-----------------------

To verify that the VolumeSnapshot for the PersistentVolumeClaim is created, run the following commands:

.. code-block:: text

   $ kubectl get volumesnapshot
   NAME             READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS         SNAPSHOTCONTENT                                     CREATIONTIME   AGE
   snapshot-mypvc   false        mypvc                                             robin-snapshotclass   snapcontent-06c17c2b-e7bb-4dc9-86df-e5fd05821977                   4m28s

   $ kubectl get volumesnapshotcontent
   NAME                                                READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER   VOLUMESNAPSHOTCLASS   VOLUMESNAPSHOT   AGE
   snapcontent-06c17c2b-e7bb-4dc9-86df-e5fd05821977                               Delete           robin    robin-snapshotclass   snapshot-mypvc   41s

Delete a volume snapshot
------------------------

To delete a volume snapshot, run the native ``kubectl delete`` command as shown in the example below:

.. Important:: Robin CNS allows deletion of a VolumeSnapshot object even if a volume provisioned from it still exists.

.. code-block:: text

   # kubectl delete volumesnapshot -n prod snapshot-1
   volumesnapshot.snapshot.storage.k8s.io "snapshot-1" deleted

.. Note:: VolumeSnapshots are namespaced objects, so the appropriate namespace needs to be specified for the above command to succeed.

================
Volume Clones
================

Robin CNS supports creating a clone of an existing volume. You can clone any volume (normal or encrypted). You can create a volume clone from an existing volume of type Regular or from its snapshot. When you create a volume clone using a volume, Robin CNS creates a snapshot of the volume and uses it to create the volume clone. After the volume clone is created, all new writes to the parent volume and to the volume clone are independent.

The official Kubernetes documentation on volume snapshot restores and clones can be found in the Kubernetes documentation. Robin supports Kubernetes Clones for Kubernetes v1.13 and beyond.

Starting from Robin CNS v5.4.8, the following two types of volume clones are supported:

- **Thick clone** - A Thick clone is a writable point-in-time copy of an existing volume or volume snapshot. A Thick clone has no dependency on the parent volume or snapshot. You can use a Thick clone only after the data copy to the clone volume is complete.

- **Thin clone** - A Thin clone is a writable point-in-time copy of an existing volume or volume snapshot. A Thin clone has a dependency on the parent volume or snapshot. You can create a Thin clone and start using it immediately.

To create a Thick or Thin volume clone, specify in the ``dataSource`` section of the PVC YAML file which resource the clone will be created from. Valid choices for the ``dataSource`` ``kind`` are ``PersistentVolumeClaim`` or ``VolumeSnapshot``. In addition, create a custom ``StorageClass`` and add a ``clonetype`` parameter specifying which type of clone volume to create. Valid values for the ``clonetype`` parameter are ``thick`` or ``thin`` (``clonetype: <thick|thin>``).

.. note:: If you do not provide the ``clonetype`` parameter, a Thin volume clone is created by default.

If you have created volume clones prior to Robin CNS v5.4.8, those clones are displayed as **Volume Type: Clone** in the ``robin volume info`` and ``robin volume list`` command output.

.. Note:: Robin CNS allows deletion of a volume even if a VolumeSnapshot object or a clone created from it still exists.

Thick volume clone
--------------------

A Thick volume clone is essentially a completely new copy of the parent volume.

During the Thick clone creation process, Robin CNS creates a volume snapshot automatically if the data source is a PVC, and all of the data in the parent volume is copied to the Thick clone, a process known as hydration. While the data copy (hydration) is in progress, you cannot mount the volume in a Pod or access the volume clone for reads and writes. Once the hydration process is complete, the newly created Thick clone volume has no dependency on the parent volume or volume snapshot.

Creating Thick clones can be a time-consuming process, depending on the size of the parent volume, as all the data from the parent volume is copied to the volume clone.

The following volume parameters must be the same in the StorageClass YAML as in the parent volume when you are creating a clone in your PVC. If you do not provide these parameters, Robin CNS takes the parent volume's values.

- Blocksize
- fstype
- Compression
- Encryption

.. Note:: You must add the parameter ``clonetype:thick`` in the Robin StorageClass if you want to create a Thick clone. In the PVC YAML, you must provide a volume clone size that is the same as the parent volume. Also, you must provide the value for the key ``kind`` under the ``dataSource`` parameter as ``PersistentVolumeClaim`` or ``VolumeSnapshot``.

While the data copy is in progress, the ``robin volume info`` command output displays the **Volume Type** as **Clone Deferred**. Once the data copy is complete, the **Volume Type** parameter displays **CLONE_THICK** while the clone is being hydrated and **REGULAR** once hydration is complete. The **Clone type** parameter displays **THICK**.

Thin volume clone
------------------

A Thin volume clone is a copy of a volume or a volume snapshot. The Thin clone type provides a read-and-write copy of the parent volume to users instantly; old data is accessed from the snapshot, and any new data is written to and read from the cloned volume. You cannot delete the parent or dependent snapshot while the Thin clone exists.

.. Note:: You should have the parameter ``clonetype:thin`` in the Robin StorageClass. However, if you do not provide a value for the ``clonetype`` parameter, a Thin clone is still created by default.

For a Thin clone, the ``robin volume info`` command output displays the **Volume Type** as **CLONE_THIN** and the **Clone type** parameter as **THIN**.

Limitations for volume clones
------------------------------

Be aware of the following limitations related to volume cloning:

- If you have clone volumes that were created in an earlier release (any release prior to Robin CNS v5.4.8), they remain available for use after upgrading to Robin CNS v5.4.8. These legacy clones are marked as Volume type CLONE in the volume list. However, you cannot convert them into a Thick clone. The Volume type Clone is deprecated and is not supported starting from Robin CNS v5.4.8 onwards.

- Only the volume type Regular is supported for creating a Thick or Thin clone.

- You cannot convert a Thin clone to a Thick clone.

- A Thick clone is available for reads and writes only after the hydration process completes.

- When the hydration process is in progress for a Thick volume clone, you cannot create another volume clone using the same volume.

- Hydration is not supported for Thin clones in this release.

Create a volume clone using a volume
-------------------------------------

You can create a Thick or Thin volume clone according to your requirements.

To create a volume clone, you can use the volume itself or an existing volume snapshot. For more information, see `Volume Clones`_. Before creating a volume clone, you should be aware of the limitations of volume cloning. For more information, see `Limitations for volume clones`_.

The following procedure provides steps for creating a volume clone using a volume.

.. Note:: For all volume clone types, the following volume parameters must be the same as the parent volume. Other parameters can be different from the parent volume.

   - Blocksize
   - fstype
   - Compression
   - Encryption

   If any of these parameters are not specified in the StorageClass, the volume clone uses the values from the parent.

Complete the following steps to create a volume clone using a volume:

1. Create a ``storageclass.yaml`` file with the required parameters and add the parameter ``clonetype:`` using the following sample:

   .. code-block:: yaml
      :emphasize-lines: 18

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: <"storageclassname-clone">
      provisioner: robin
      reclaimPolicy: Delete
      parameters:
        media:
        blocksize: <"512", "4096">
        fstype:
        replication: <"2", "3">
        compression:
        encryption:
        workload:
        snapshot_space_limit: <"50">
        rpool: <"default">
        robin.io/storagetolerations:
        clonetype: <"thick", "thin">

2. Create a ``StorageClass`` using the ``storageclass.yaml`` file created in the previous step:

   .. code-block:: text

      # kubectl create -f <storageclass.yaml>

   **Example**

   .. code-block:: text

      # kubectl create -f robin-sc.yaml
      storageclass.storage.k8s.io/clone-sc created

3. Create a ``pvc.yaml`` file to create a PVC using the following sample file:

   .. Note:: For volume clones (Thick or Thin), the volume clone size must be the same as the parent volume.

   .. code-block:: yaml
      :emphasize-lines: 14

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: thick-clone-pvc
      spec:
        storageClassName: storageclassname-clone
        dataSource:
          name: volume-parent-test
          kind: PersistentVolumeClaim
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi

4. Create a PVC using the ``pvc.yaml`` file created in the previous step:

   .. code-block:: text

      # kubectl create -f <pvc.yaml>

   **Example**

   .. code-block:: text

      # kubectl create -f pvc.yaml
      persistentvolumeclaim/test-pvc created
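
After the PVC is created, you can optionally check that it reaches the ``Bound`` state with ``kubectl``; note that a Thick clone is accessible for reads and writes only after hydration completes. You can also inspect the clone with the ``robin volume info`` command as described above.

.. code-block:: text

   # kubectl get pvc thick-clone-pvc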

Create a volume clone using a volume snapshot
-----------------------------------------------

The following procedure provides steps for creating a Thick or Thin volume clone using an existing volume snapshot.

.. Note:: Before creating a volume clone, you should be aware of the limitations of volume cloning. For more information, see `Limitations for volume clones`_.

Prerequisite
^^^^^^^^^^^^^

- You must have an existing volume snapshot to create a volume clone. For more information on creating a volume snapshot, see `Create a volume snapshot`_.

.. Note:: For all volume clone types, the following volume parameters must be the same as the parent volume. Other parameters can be different from the parent volume.

   - Blocksize
   - fstype
   - Compression
   - Encryption

   If any of these parameters are not specified in the StorageClass, the volume clone uses the values from the parent.

Complete the following steps to create a volume clone using a volume snapshot:

1. Create a ``storageclass.yaml`` file with the required parameters and add the parameter ``clonetype:thick`` or ``clonetype:thin`` using the following example:

   .. code-block:: yaml
      :emphasize-lines: 18

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: <"storageclassname-clone">
      provisioner: robin
      reclaimPolicy: Delete
      parameters:
        media:
        blocksize: <"512", "4096">
        fstype:
        replication: <"2", "3">
        compression:
        encryption:
        workload:
        snapshot_space_limit: <"50">
        rpool: <"default">
        robin.io/storagetolerations:
        clonetype: <"thick", "thin">

2. Create a StorageClass using the ``storageclass.yaml`` file created in the previous step:

   .. code-block:: text

      # kubectl create -f <storageclass.yaml>

   **Example**

   .. code-block:: text

      # kubectl create -f robin-sc.yaml
      storageclass.storage.k8s.io/clone-sc created

3. Create a ``pvc.yaml`` file to create a PVC using the following sample:

   .. Note:: For volume clones (Thick or Thin), the volume clone size must be the same as the parent volume.

   .. code-block:: yaml
      :emphasize-lines: 15

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: thick-clone-pvc
      spec:
        storageClassName: storageclassname-clone
        dataSource:
          name: volume-snapshot
          kind: VolumeSnapshot
          apiGroup: snapshot.storage.k8s.io
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi

4. Create a PVC using the ``pvc.yaml`` file created in the previous step:

   .. code-block:: text

      # kubectl create -f <pvc.yaml>

   **Example**

   .. code-block:: text

      # kubectl create -f pvc.yaml
      persistentvolumeclaim/test-pvc created

- **Confirm that the cloned PersistentVolumeClaim is created**

  You can verify that the clone was successfully created by issuing the following command:

  .. code-block:: text

     $ kubectl get pvc
     NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
     mypvc               Bound    pvc-83ed719a-5500-11e9-a0b7-00155d320462   1Gi        RWO            robin          49m
     mypvc-clone-snap1   Bound    pvc-6dd554d1-5506-11e9-a0b7-00155d320462   1Gi        RWO            robin          7m19s

================
Expand Volumes
================

Robin supports volume expansion and thus allows users to resize their data storage to meet their needs. The official Kubernetes documentation on volume expansion can be found in the Kubernetes documentation. The following steps can be used to expand a volume using native commands.

- **List the PersistentVolumes**

  In order to list all the PersistentVolumes available on the cluster, run the following command:

  .. code-block:: text

     $ kubectl get pv -n robinapps
     NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
     pvc-651722f9-dad2-4d62-85d9-de556bb8d555   8Gi        RWO            Delete           Bound    robinapps/mysqldb   robin                   14h

- **Edit the PersistentVolume** Next we need to edit the desired PersistentVolume. Under the ``spec`` section, change the ``storage`` attribute under the ``capacity`` field to the desired value, as highlighted below: .. code-block:: text $ kubectl edit persistentVolume/pvc-651722f9-dad2-4d62-85d9-de556bb8d555 -n robinapps .. code-block:: yaml :emphasize-lines: 85 ------- # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures.
# apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: robin creationTimestamp: "2020-10-06T04:44:39Z" finalizers: - kubernetes.io/pv-protection - external-attacher/robin managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: v:"external-attacher/robin": {} manager: csi-attacher operation: Update time: "2020-10-06T04:44:39Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:pv.kubernetes.io/provisioned-by: {} f:spec: f:accessModes: {} f:capacity: {} f:claimRef: .: {} f:apiVersion: {} f:kind: {} f:name: {} f:namespace: {} f:resourceVersion: {} f:uid: {} f:csi: .: {} f:driver: {} f:fsType: {} f:volumeAttributes: .: {} f:csi.storage.k8s.io/pv/name: {} f:csi.storage.k8s.io/pvc/name: {} f:csi.storage.k8s.io/pvc/namespace: {} f:storage.kubernetes.io/csiProvisionerIdentity: {} f:volumeHandle: {} f:persistentVolumeReclaimPolicy: {} f:storageClassName: {} f:volumeMode: {} manager: csi-provisioner operation: Update time: "2020-10-06T04:44:39Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:phase: {} manager: kube-controller-manager operation: Update time: "2020-10-06T04:44:39Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:spec: f:capacity: f:storage: {} manager: kubectl operation: Update time: "2020-10-06T19:25:31Z" name: pvc-651722f9-dad2-4d62-85d9-de556bb8d555 resourceVersion: "4678372" selfLink: /api/v1/persistentvolumes/pvc-651722f9-dad2-4d62-85d9-de556bb8d555 uid: 0151065d-fd69-4479-85e2-d4c47c414a90 spec: accessModes: - ReadWriteOnce capacity: storage: 16Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: mysqldb namespace: robinapps resourceVersion: "4415500" uid: 651722f9-dad2-4d62-85d9-de556bb8d555 csi: driver: robin fsType: ext4 volumeAttributes: csi.storage.k8s.io/pv/name: pvc-651722f9-dad2-4d62-85d9-de556bb8d555 csi.storage.k8s.io/pvc/name: mysqldb csi.storage.k8s.io/pvc/namespace: robinapps storage.kubernetes.io/csiProvisionerIdentity: 1601911186270-8081-robin volumeHandle: "1601911167:5" persistentVolumeReclaimPolicy: Delete storageClassName: robin volumeMode: Filesystem status: phase: Bound ------- persistentvolume/pvc-651722f9-dad2-4d62-85d9-de556bb8d555 edited - **Verify the change to the PersistentVolume** Lastly confirm that the PersistantVolume's capacity has been increased by running the following command: .. code-block:: text $ kubectl get pv -n robinapps NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-651722f9-dad2-4d62-85d9-de556bb8d555 16Gi RWO Delete Bound robinapps/mysqldb robin 14h ========================================================= Backup and Import Volumes using Kubernetes specification ========================================================= Robin supports backup and import of volumes using the Kubernetes specifications. Volume backup allows a user to recover the volume from its backup when required. Robin allows you to push the volume snapshots to a registered external cloud storage repository. Volume backup can coexist with application backup in a cloud storage repository. Robin enables you to import data from a volume backup and create a new volume on the same source cluster from where the backup is taken or on a different cluster that has access to the same external cloud storage repository. Once the import process is complete, the new volume contains the same data as the source volume. However, there is no relationship between the original volume and the volume created using the backup. 

You can import a volume from a backup on the same source cluster from where the backup was taken or on a different cluster that has access to the same external cloud storage repository.

.. Note:: When importing a volume from its backup, the size of the imported volume must be the same as the size of the backed-up volume.

Creating a volume backup using the Kubernetes spec is achieved through volume snapshot specifications that point to a volume snapshot class referencing the external cloud storage repository. To create a backup, first create a volume snapshot class using the ``storagerepo`` and ``purgeOnDelete`` parameters. Using this snapshot class, take a volume snapshot of the PVC you want to back up. Once you take the volume snapshot, Robin automatically pushes it to the external cloud storage repository for backup.

Points to consider for backup and import volume using Kubernetes specification
-------------------------------------------------------------------------------

- You cannot create a backup for an existing volume snapshot.

- Volume backup using the Kubernetes spec works on a per-PVC basis, which means a volume backup includes a single volume.

- By default, the ``purgeOnDelete`` parameter is set to ``true`` in the volume snapshot class. When you delete the volume snapshot that was used to take the volume backup, the volume backup is purged (deleted) from the external cloud storage repository. If you do not want the volume backup to be deleted from the external cloud storage repository, you must set the ``purgeOnDelete`` parameter to ``false`` in the volume snapshot class.

- The ``purgeOnDelete`` parameter cannot be changed after taking a backup.

- When importing a volume from a backup, the size of the imported volume must be the same as the size of the backed-up volume.

- By default, the ``hydration`` parameter is set to ``true`` in the StorageClass, which means Robin copies the data from the external cloud repository to the cluster disks where the import is done. To override this behavior, you must set the ``hydration`` parameter to ``false`` in the StorageClass.

- Import of a quorum-based replication volume is supported only if it is imported with the ``hydration`` option.

Create volume backup
---------------------

A volume backup is a backup of a volume snapshot that is stored in a registered external cloud storage repository.

To create a backup, first create a volume snapshot class using the ``storagerepo`` and ``purgeOnDelete`` parameters. Using this snapshot class, take a volume snapshot of the PVC you want to back up. When you take a volume snapshot by specifying the ``storagerepo`` parameter in the volume snapshot class, Robin creates a volume snapshot on the fly and automatically pushes the created volume snapshot to the external storage repository for backup.

By default, the ``purgeOnDelete`` parameter is set to ``true`` in the volume snapshot class. When you delete the volume snapshot that was used to take the volume backup, the respective volume backup is also deleted from the external cloud storage repository. If you do not want the volume backup to be deleted from the external cloud storage repository, you must set the ``purgeOnDelete`` parameter to ``false`` in the volume snapshot class.

Prerequisites
^^^^^^^^^^^^^^

The following is the prerequisite for backing up a volume:

- An external cloud storage repository must be registered with Robin for backup purposes. For more information, see `Register a repo`_.

Complete the following steps to back up a volume:

1. Create a volume snapshot class YAML for creating a VolumeSnapshotClass object:

   .. code-block:: yaml
      :emphasize-lines: 6,7

      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshotClass
      metadata:
        name: backup-snapshotclass      # snapshot_class_name
      parameters:
        storagerepo: gcs-bucket-1       # storage_repo_name
        purgeOnDelete: true/false       # default true
      driver: robin
      deletionPolicy: Delete

   .. Note:: By default, the ``purgeOnDelete`` parameter is set to ``true``, which means that when a volume snapshot is deleted from the cluster, the respective volume backup is also deleted from the external cloud storage repository. To keep the volume backup permanently in the external storage repository, the ``purgeOnDelete`` parameter must be set to ``false``.

2. Create a VolumeSnapshotClass object using the volume snapshot class YAML:

   .. code-block:: text

      # kubectl create -f backup-snapshotclass.yaml
      volumesnapshotclass.snapshot.storage.k8s.io/backup-snapshotclass created

3. Create a volume snapshot YAML for creating the VolumeSnapshot object:

   .. code-block:: yaml
      :emphasize-lines: 7,9

      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshot
      metadata:
        name: backup-snapshot           # volume_snapshot_name
        namespace: default              # namespace_name
      spec:
        volumeSnapshotClassName: backup-snapshotclass   # snapshot_class_name
        source:
          persistentVolumeClaimName: csi-pvc-robin      # volume_name

4. Create a VolumeSnapshot object for taking a backup of a volume:

   .. code-block:: text

      # kubectl create -f backup-snapshot.yaml
      volumesnapshot.snapshot.storage.k8s.io/backup-snapshot created

After receiving the request to take a volume snapshot, Robin takes the volume snapshot and pushes it as a volume backup to the external cloud storage repository. Once the volume backup is successful, you can see the ``ReadyToUse`` flag as ``true`` in the ``kubectl get volumesnapshot`` command output, and the **token#** in the ``snapshotHandle`` parameter of the ``kubectl get volumesnapshotcontent -o yaml`` command output. You can also see the volume backup in the ``robin backup list`` command output.

**Example**

.. code-block:: text

   # kubectl get volumesnapshots
   NAME                        READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                  SNAPSHOTCONTENT                                    CREATIONTIME   AGE
   my-volume-snapshot-backup   true         demo-app                            0             volume-backup-snapshot-class   snapcontent-bc900057-acbf-4e28-8f16-bcde0f2b9582   9s             27s

   # kubectl get volumesnapshotcontent snapcontent-bc900057-acbf-4e28-8f16-bcde0f2b9582 -o yaml
   apiVersion: snapshot.storage.k8s.io/v1
   kind: VolumeSnapshotContent
   metadata:
     creationTimestamp: "2024-02-01T13:13:38Z"
     finalizers:
     - snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection
     generation: 1
     name: snapcontent-bc900057-acbf-4e28-8f16-bcde0f2b9582
     resourceVersion: "58383088"
     uid: 3be6265b-a4d5-4d76-aec2-92fb75f2604a
   spec:
     deletionPolicy: Delete
     driver: robin
     source:
       volumeHandle: 1705528739:273
     volumeSnapshotClassName: volume-backup-snapshot-class
     volumeSnapshotRef:
       apiVersion: snapshot.storage.k8s.io/v1
       kind: VolumeSnapshot
       name: my-volume-snapshot-backup
       namespace: default
       resourceVersion: "58382994"
       uid: bc900057-acbf-4e28-8f16-bcde0f2b9582
   status:
     creationTime: 1706793236096508000
     readyToUse: true
     restoreSize: 0
     snapshotHandle: token#b9f0bed0c10311ee8afad3bccf5f0d64

   # robin backup list
   +----------------------------------+----------------------------------------------------+--------+---------+--------------------------------------------------------------------+--------+
   | Backup ID                        | Backup Name                                        | Type   | Repo    | Snapshot Name                                                      | State  |
   +----------------------------------+----------------------------------------------------+--------+---------+--------------------------------------------------------------------+--------+
   | b9f0bed0c10311ee8afad3bccf5f0d64 | volume-backup-bc900057-acbf-4e28-8f16-bcde0f2b9582 | VOLUME | my-repo | volume-snapshot-volume-backup-bc900057-acbf-4e28-8f16-bcde0f2b9582 | Pushed |
   +----------------------------------+----------------------------------------------------+--------+---------+--------------------------------------------------------------------+--------+

.. Note:: The ``robin backup list`` command displays the volume backup with Type ``VOLUME``.

Import volume on same cluster
-----------------------------

You can import a volume from a backup on the same source cluster where the backup was taken.

Complete the following steps to import a volume on the same cluster:

1. Create a PVC YAML to create a PVC with the ``dataSource`` referencing the volume snapshot name:

   .. code-block:: yaml
      :emphasize-lines: 8,14

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: restored-pvc                          # PVC_name
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: robin-volume-import       # storage_class_name
        resources:
          requests:
            storage: 6Gi
        dataSource:
          kind: VolumeSnapshot
          name: restored-snapshot                   # volume_snapshot_name
          apiGroup: snapshot.storage.k8s.io

2. Create a PVC to import the volume from its backup using the above YAML file:

   .. code-block:: text

      # kubectl create -f restored-pvc.yaml
      persistentvolumeclaim/restored-pvc created

Import volume on different cluster
-------------------------------------

You can import a volume from a backup on a different cluster.

Prerequisite
^^^^^^^^^^^^^

- The external cloud storage repository that contains the volume backup must be registered. For more information, see `Register a repo`_.

- An export token must be generated for the respective volume backup on the source cluster. For more information, see `Export a backup`_.

Complete the following steps to import a volume on a different cluster:

1. Import the volume backup using the backup token generated on the source cluster:

   .. code-block:: text

      # robin backup import <backup_token>

   After importing the backup, you can see the state of the backup as **Imported** in the ``robin backup list`` command output. Now, the volume is ready to be imported from the backup.

2. Create a volume snapshot content YAML to create a VolumeSnapshotContent object by specifying the **token#** in the ``snapshotHandle`` parameter:

   .. code-block:: yaml
      :emphasize-lines: 9

      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshotContent
      metadata:
        name: restored-snapshot-content       # volume_snapshot_content_name
      spec:
        deletionPolicy: Retain
        driver: robin
        source:
          snapshotHandle: token#              # backup_token
        volumeSnapshotRef:
          kind: VolumeSnapshot
          name: restored-snapshot
          namespace: default

3. Create a VolumeSnapshotContent object using the volume snapshot content YAML file created in step 2:

   .. code-block:: text

      # kubectl create -f restored-snapshot-content.yaml
      volumesnapshotcontent.snapshot.storage.k8s.io/restored-snapshot-content created

4. Create a volume snapshot YAML to create a VolumeSnapshot object by specifying the VolumeSnapshotContent name in the ``source`` parameter:

   .. code-block:: yaml
      :emphasize-lines: 8

      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshot
      metadata:
        name: restored-snapshot                              # volume_snapshot_name
      spec:
        volumeSnapshotClassName: robin-snapshotclass         # volume_snapshot_class_name
        source:
          volumeSnapshotContentName: restored-snapshot-content   # volume_snapshot_content_name

5. Create a VolumeSnapshot object using the volume snapshot YAML file created in step 4:

   .. code-block:: text

      # kubectl create -f restored-snapshot.yaml
      volumesnapshot.snapshot.storage.k8s.io/restored-snapshot created

6. (Optional) Create a storage class YAML file to create a StorageClass object for the PVC:

   .. code-block:: yaml
      :emphasize-lines: 6

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: robin-volume-import          # storage_class_name
      parameters:
        hydration: true/false              # default true
      provisioner: robin
      reclaimPolicy: Delete

   .. Note:: By default, the ``hydration`` parameter is set to ``true`` in the StorageClass, which means Robin copies the data from the external cloud repository to the cluster disks where the import is done. To override this behavior, you must set the ``hydration`` parameter to ``false`` in the StorageClass.

7. (Optional) Create a StorageClass object using the storage class YAML file created in step 6:

   .. code-block:: text

      # kubectl create -f robin-volume-import.yaml
      storageclass.storage.k8s.io/robin-volume-import created

8. Create a PVC YAML to create a PVC by specifying the VolumeSnapshot name in the ``dataSource`` parameter:

   .. code-block:: yaml
      :emphasize-lines: 14

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: restored-pvc                          # PVC_name
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: robin-volume-import       # storage_class_name
        resources:
          requests:
            storage: 6Gi
        dataSource:
          kind: VolumeSnapshot
          name: restored-snapshot                   # volume_snapshot_name
          apiGroup: snapshot.storage.k8s.io

9. Create a PVC to import the volume from backup using the PVC YAML file created in step 8:

   .. code-block:: text

      # kubectl create -f restored-pvc.yaml
      persistentvolumeclaim/restored-pvc created

Delete a volume backup
-----------------------

You can delete volume backups from the cluster if they are no longer needed.

.. Important:: When you delete a volume backup from the cluster that has the ``purgeOnDelete`` parameter set to ``true``, the volume backup is also deleted from the external cloud storage repository.

Run the following command to delete a volume backup:

.. code-block:: text

   # kubectl delete volumesnapshot -n prod snapshot-1
   volumesnapshot.snapshot.storage.k8s.io "snapshot-1" deleted

.. Note:: VolumeSnapshots are namespaced objects; therefore, the appropriate namespace must be specified to delete a volume backup successfully.
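
To confirm the deletion, you can list the remaining VolumeSnapshot objects in the namespace and, if required, run ``robin backup list`` again to verify whether the corresponding volume backup was removed from the repository (it is retained if ``purgeOnDelete`` was set to ``false``). The namespace below is illustrative.

.. code-block:: text

   # kubectl get volumesnapshot -n prod
   # robin backup list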