20. Release Notes¶
20.1. Robin Cloud Native Storage v6.0.0-211¶
The Robin CNS v6.0.0-211 release notes provide information about upgrade paths, new features, improvements, fixed issues, and known issues.
Release Date: February 23, 2026
20.1.1. Upgrade Paths¶
The following are the supported upgrade paths for Robin CNS v6.0.0-211:
Robin CNS v5.4.18-257 to Robin CNS v6.0.0-211
Robin CNS v5.4.18-278 to Robin CNS v6.0.0-211
Robin CNS v5.4.18-281 to Robin CNS v6.0.0-211
Note
After upgrading to Robin CNS v6.0.0-211, if you are using the Robin Client outside the robincli Pod, you must upgrade to the latest version of the Robin Client.
If you have installed Robin CNS with the skip_postgres_operator parameter to use the Zalando PostgreSQL operator, you must first upgrade the Zalando PostgreSQL operator to v1.11.0 or later before upgrading to Robin CNS v6.0.0-211.
20.1.2. New Features¶
20.1.2.1. Data Locality Tracking for Volumes¶
Starting with Robin CNS v6.0.0, Robin CNS provides visibility into the data locality ratio for each volume mount. The data locality ratio indicates the percentage of a volume’s data that is physically stored on the same node where the volume is currently mounted.
If the data locality percentage for a volume is high, I/O operations will be served locally without network hops. This lowers latency and improves storage performance.
The data locality ratio is reported as a percentage between 0% and 100%.
100% - All leader data for the volume resides on the mount node (fully local).
0% - None of the leader data for the volume resides on the mount node (fully remote).
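The ratio described above can be sketched as a simple calculation. The following is an illustrative example only; the function name, the per-node byte counts, and the node names are hypothetical and are not part of the Robin CNS API:

```python
# Hypothetical sketch of how a data locality ratio could be derived.
# leader_bytes_by_node maps each node to the leader data (in bytes) it
# holds for one volume; mount_node is where the volume is mounted.

def data_locality_ratio(leader_bytes_by_node: dict, mount_node: str) -> float:
    """Percentage of a volume's leader data stored on the mount node."""
    total = sum(leader_bytes_by_node.values())
    if total == 0:
        return 0.0
    return 100.0 * leader_bytes_by_node.get(mount_node, 0) / total

# Fully local: all leader data on the mount node.
print(data_locality_ratio({"node-1": 4096}, "node-1"))  # 100.0
# Fully remote: no leader data on the mount node.
print(data_locality_ratio({"node-2": 4096}, "node-1"))  # 0.0
```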
Viewing data locality ratio
You can view the data locality ratio for a volume using the following ways:
The Mount column in the robin volume list command output.
The Data Locality column in the Mounts table of the robin volume info command output.
The mount_data_locality label in the robin_vol_mount_node_ids metric.
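For the metric-based option, the locality value arrives as a label on each robin_vol_mount_node_ids sample. The snippet below is a minimal sketch of pulling that label out of a Prometheus exposition line; the sample line, its volume name, and its label values are invented for illustration, and only the metric and label names come from these release notes:

```python
import re

# Hypothetical exposition line; only robin_vol_mount_node_ids and
# mount_data_locality are real names from the release notes.
sample = 'robin_vol_mount_node_ids{volume="pvc-abc",mount_data_locality="87"} 3'

def label_value(line: str, label: str):
    """Extract a label's value from a Prometheus exposition line, or None."""
    m = re.search(rf'{label}="([^"]*)"', line)
    return m.group(1) if m else None

print(label_value(sample, "mount_data_locality"))  # 87
```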
20.1.2.2. New Metrics¶
Robin CNS v6.0.0 provides the following new metrics for the CSI sidecar containers and Manager services categories:
20.1.2.2.1. CSI sidecar containers¶
_total (Counters) - Total number of CSI operations (for example, CreateVolume and DeleteVolume).
_errors_total (Counters) - Total number of errors encountered during CSI operations.
_duration_seconds (Histograms) - Duration and latency of CSI calls in seconds.
Example
csi_sidecar_operations_seconds_count{driver_name="robin",grpc_status_code="OK",method_name="/csi.v1.Controller/ControllerGetCapabilities"} 1
csi_sidecar_operations_seconds_sum{driver_name="robin",grpc_status_code="OK",method_name="/csi.v1.Controller/ControllerGetCapabilities"} 0.000508898
csi_sidecar_operations_seconds_bucket{driver_name="robin",grpc_status_code="OK",method_name="/csi.v1.Identity/Probe",le="0.1"} 1
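A common use of a histogram's _sum and _count pair is to compute the average call latency. The sketch below does this with the example values shown above; dividing the two samples for ControllerGetCapabilities gives the mean duration per call:

```python
# Average CSI call latency from the _sum/_count pair in the example above.
sum_seconds = 0.000508898  # csi_sidecar_operations_seconds_sum
count = 1                  # csi_sidecar_operations_seconds_count

avg_latency = sum_seconds / count
print(avg_latency)  # 0.000508898
```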
20.1.2.2.2. Manager services¶
robin_manager_services_auth_server
robin_manager_services_node_monitor_server
robin_manager_services_control_infra_server
For more information, see Storage Metrics.
20.1.3. Improvements¶
20.1.3.1. Enhanced Master Pods upgrade strategy¶
Starting with Robin CNS v6.0.0, the Robin master Pods upgrade sequentially to ensure continuous service and prevent downtime.
With this enhancement, Robin CNS first upgrades the standby master Pods. Once the standby master Pods are upgraded successfully, it upgrades the active master Pod.
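The standby-first ordering described above can be sketched as follows. This is an illustrative model only, assuming a simple role field per Pod; the Pod names, the role values, and the function are hypothetical and do not reflect Robin CNS internals:

```python
# Hedged sketch of the standby-first upgrade ordering: standby master
# Pods are upgraded before the active master Pod, so a healthy master
# is available throughout the rollout.

def upgrade_order(pods: list) -> list:
    """Return Pod names in upgrade order: standby masters first, active last."""
    standby = [p["name"] for p in pods if p["role"] == "standby"]
    active = [p["name"] for p in pods if p["role"] == "active"]
    return standby + active

pods = [
    {"name": "robin-master-0", "role": "active"},
    {"name": "robin-master-1", "role": "standby"},
    {"name": "robin-master-2", "role": "standby"},
]
print(upgrade_order(pods))  # ['robin-master-1', 'robin-master-2', 'robin-master-0']
```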
20.1.4. Fixed Issues¶
| Reference ID | Description |
|---|---|
| RSD-10850 | The issue that caused unnecessary slice leader changes during iomgr-server restarts is fixed. |
| RSD-10242 | The Robin worker Pod was crashing due to the empty … |
| RSD-10160 | The issue where … |
| RSD-10372 | The lease mechanism in Robin CNS failed to handle temporary etcd connection interruptions, resulting in frequent Robin master failovers. This issue is fixed. |
| RSD-11041 | The issue of the Robin cluster resource reporting the … |
| RSD-10603 | During the Robin CNS upgrade, IOMgr failed to start because of the empty … |
| RSD-11044 | The DaemonSet … |
| RSD-11042 | The issue of the standby Robin master Pod failing its readiness probe and becoming unhealthy due to a mismatch between the in-memory phase state and the value stored in … |
20.1.5. Known Issues¶
PP-41215
Symptom: Under rare scenarios, a VM volume's slice leader can temporarily appear on two nodes. This happens if the I/O Manager (IOMgr) is down for an extended period, active I/O occurred on the volumes, and the IOMgr is then restarted. The issue is caused by a race condition between the Robin Cluster Manager (RCM) updating node and device states and the IOMgr initiating remounts. The issue corrects itself: the slice leader automatically consolidates to the mounted node once the volume slices resynchronize.
PP-40480
Symptom: In rare scenarios, you might observe that one of the Pods is stuck with the following error:
Failed to mount volume pvc-d16fa6b1-5bcb-4c69-805d-ab4df9018cee: Node <default:vnode-87-237> has mount_blocked STORMGR_NODE_BLOCK_MOUNT. No new mounts are allowed.
Workaround: Bounce the worker Pod running on the affected node.
PP-39632
Symptom: After upgrading to Robin CNP v6.0.0, the NFS client might hang even though the CSI server log reports no pending I/O:
CsiServer_9 - robin.utils - INFO - Executing command /usr/bin/nc -z -w 6 2049 with timeout 60 seconds
CsiServer_9 - robin.utils - INFO - Command /usr/bin/nc -z -w 6 2049 completed with return code 0.
CsiServer_9 - robin.utils - INFO - Standard out:
Also, you can find the following message repeated in the …:
nfs: server 192.02.1.218 not responding, timed out
Workaround: …
PP-34414
Symptom: In rare scenarios, the …
To confirm the above issue, complete the following steps: …
Workaround: If the device is not in use, restart the iomgr-server:
# supervisorctl restart iomgr-server
20.1.6. Technical Support¶
Contact Robin Technical Support for any assistance.