20. Release Notes¶
20.1. Robin Cloud Native Storage v5.4.18-278¶
The Robin CNS v5.4.18-278 Release Notes document provides information about upgrade paths, improvements, fixed issues, and one known issue.
Release Date: December 19, 2025
20.1.1. Upgrade Paths¶
The following are the supported upgrade paths for Robin CNS v5.4.18-278:
Robin CNS v5.4.16-105 to Robin CNS v5.4.18-278
Robin CNS v5.4.16-166 to Robin CNS v5.4.18-278
Note
After upgrading to Robin CNS v5.4.18-278, if you are using the Robin Client outside the robincli Pod, you must upgrade to the latest version of the Robin Client.
If you have installed Robin CNS with the skip_postgres_operator parameter to use the Zalando PostgreSQL operator, you must first upgrade the Zalando PostgreSQL operator to v1.11.0 or later before upgrading to Robin CNS v5.4.18-278. A version-check sketch follows this note.
After upgrading from any supported Robin CNS version to Robin CNS v5.4.18, certificates are automatically renewed.
20.1.2. Improvements¶
20.1.2.1. Readiness probe support for IOMGR service¶
Starting with v5.4.18, Robin CNS supports a readiness probe for the IOMGR service. The probe verifies that both the RIO and RDVM services are operational and ready to serve I/O operations; Robin CNS marks the IOMGR service as ready only when both services are ready. For more information, see Readiness probe.
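To observe the effect of the new probe, you can check the Ready condition reported for the IOMGR Pods. The sketch below uses the official Kubernetes Python client; the namespace (robinio) and label selector (app=iomgr) are assumptions and may differ in your cluster.

```python
# Sketch: list IOMGR Pods and report whether the readiness probe marks them Ready.
# The namespace and label selector are assumptions; adjust to your deployment.
from kubernetes import client, config

def iomgr_readiness(namespace="robinio", label_selector="app=iomgr"):
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace, label_selector=label_selector)
    for pod in pods.items:
        ready = any(
            c.type == "Ready" and c.status == "True"
            for c in (pod.status.conditions or [])
        )
        print(f"{pod.metadata.name}: {'Ready' if ready else 'NotReady'}")

if __name__ == "__main__":
    iomgr_readiness()
```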
20.1.2.2. Improved Stability and Performance for Windows VMs¶
In Robin CNS, I/O operations can experience latency during storage device initialization (for example, after a node restart or during recovery). This impacts application responsiveness and can cause Windows VMs to freeze.
The following improvements were made in Robin CNS to address this issue:
Ensured that the volume mount and the volume slice leader are on the same node for optimal I/O performance.
Fixed the block map cache assertion to prevent IOMGR restarts.
Improved the block map load time.
Optimized garbage collection (GC) for volumes with a 512-byte block size.
Avoided IOMGR restarts during Robin master Pod failovers.
20.1.3. Fixed Issues¶
| Reference ID | Description |
|---|---|
| RSD-9809, RSD-10087 | The out-of-sync issue with Patroni Pods that led to a Robin service outage is fixed. |
| RSD-9861, RSD-9886, RSD-9944 | The Remote Procedure Call (RPC) client was dropping pending I/O requests without processing the received response. Because of these pending I/O requests, the virtual machine instance (VMI) Pod could not reconcile its state and showed the following error: `unknown error encountered sending command SyncVMI: rpc error: code = DeadlineExceeded desc = context deadline exceeded`. The issue of the VM instance hitting the SyncVMI reconciliation error is fixed. |
| RSD-9882 | The issue of IOMgr failing to restart the volume remount operation due to a flaky Kubernetes API service after the StorMgr recovers from a network partition is fixed. |
| RSD-9867 | The issue of a thick clone volume created from its snapshot getting stuck in the hydrating state is fixed. |
| RSD-9775 | The Robin CNS v5.4.16 installation failed because the |
| RSD-9603 | When a node with an active master Pod is rebooted, in the |
| RSD-9518 | The Sherlock tool was incorrectly reporting a node as down. This issue is fixed by removing the SSH service from its diagnostic checks. |
| RSD-9422 | In a rare scenario, when a thick clone volume’s hydration is in progress, if that clone volume’s PVC is deleted, the |
| RSD-7375 | Creating a volume clone with a larger size than the source volume was not working. This issue is fixed. |
| RSD-9165 | When a node is removed and added back to the cluster, Robin Patroni Pods are stuck in the |
| RSD-9654 | In a rare scenario, I/O operations hang on the Robin volume slice due to a network blip. This issue is fixed. |
| RSD-8083 | To ensure quicker failover, tasks related to device slice leader change have been optimized. This is especially beneficial during node reboots in environments with nodes containing many large devices, as subsequent slice operations now finish faster. |
| RSD-10021 | In a rare scenario, the Patroni cluster might go down if some of the nodes are cordoned. This issue is fixed. |
| RSD-10248 | When a node abruptly powers off, Pods that use persistent volumes are rescheduled to other nodes. However, in some cases, some of these Pods fail to start on the new nodes with the following error: `Multi-Attach error for volume … Volume is already exclusively attached to one node and can’t be attached to another`. This issue is fixed. |
| RSD-10342 | The issue of noisy logs generated for the |
| RSD-10348 | The issue of a block checksum mismatch occurring after upgrading to Robin CNS v5.4.18 is fixed. |
| RSD-10357 | When creating a cloned volume on a device with a write unit different from that of the device where the parent volume is provisioned, the IOMGR service was restarting due to an assertion failure. This issue is fixed. |
| RSD-10530 | The issue of PersistentVolumeClaim (PVC) creation failing when provisioning a Persistent Volume (PV) due to the unavailability of the Kubernetes API server is fixed. |
| RSD-10565 | The issue of malformed JSON in one of the metrics payloads is fixed. |
| PP-38537 | After deleting a backup, unregistering a storage repo fails with the following error message: `Storage repo is associated with volume group`. This issue is fixed. |
20.1.4. Known Issue¶
| Reference ID | Description |
|---|---|
| PP-40480 | **Symptom:** In rare scenarios, you might observe that one of the Pods is stuck with the following error: `Failed to mount volume pvc-d16fa6b1-5bcb-4c69-805d-ab4df9018cee: Node <default:vnode-87-237> has mount_blocked STORMGR_NODE_BLOCK_MOUNT. No new mounts are allowed.` **Workaround:** Bounce the worker Pod running on the affected node (a sketch follows this table). |
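The following is a minimal sketch of the workaround, assuming the Robin worker Pods run in the robinio namespace with the label app=robin-worker (both are assumptions; substitute the values used by your installation). It deletes the worker Pod on the affected node so that its controller recreates it.

```python
# Sketch of the PP-40480 workaround: delete (bounce) the Robin worker Pod on the
# affected node so its controller recreates it. The namespace and label selector
# below are assumptions; substitute the values used by your installation.
from kubernetes import client, config

def bounce_worker_pod(node_name, namespace="robinio", label_selector="app=robin-worker"):
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace, label_selector=label_selector)
    for pod in pods.items:
        if pod.spec.node_name == node_name:
            print(f"Deleting {pod.metadata.name} on {node_name} ...")
            v1.delete_namespaced_pod(pod.metadata.name, namespace)

if __name__ == "__main__":
    bounce_worker_pod("vnode-87-237")  # node name taken from the example error above
```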
20.1.5. Technical Support¶
Contact Robin Technical Support for any assistance.