***********************
Release Notes
***********************

This document provides consolidated release notes for Robin CNP v5.3.5. The following are the other hotfix releases in the Robin CNP v5.3.5 release line:

* `Robin CNP v5.3.5 HF1`_
* `Robin CNP v5.3.5 HF2`_
* `Robin CNP v5.3.5 HF3`_
* `Robin CNP v5.3.5 HF4`_
* `Robin CNP v5.3.5 HF5`_

================
Robin CNP v5.3.5
================

Robin CNP v5.3.5 provides new features, improvements, and bug fixes. This document also lists the known issues in this release.

Infrastructure Versions
-----------------------

The following software applications are included in this CNP release.

==================== ========
Software application Version
==================== ========
Kubernetes           1.20.5
Docker               19.03.6
Prometheus           2.16.0
Node-exporter        0.17.0
Calico               3.12.3
HA-Proxy             1.5.18
PostgreSQL           9.6.11
Grafana              6.5.3
==================== ========

Upgrade Paths
-------------

The following are the upgrade paths for Robin CNP v5.3.5:

* Robin v5.3.3-57 (GA) **to** Robin v5.3.5-140 (GA)
* Robin v5.3.3-103 (HF2) **to** Robin v5.3.5-140 (GA)
* Robin v5.3.3-113 (HF3) **to** Robin v5.3.5-140 (GA)
* Robin v5.3.5-133 **to** Robin v5.3.5-140 (GA)

.. Note:: If you are running a 4G VDU setup, you must not upgrade to Robin v5.3.5-140 (GA). The 4G VDU setup needs an emulator pin, which is not supported in Robin v5.3.5-140 (GA). Therefore, 4G VDU is not supported for upgrades.

Features
--------

--------------------
Support for RHEL 8.x
--------------------

Robin CNP v5.3.5 supports Red Hat Enterprise Linux 8.x, kernel version 4.18.0-240.el8.x86_64.

--------------------------------------
New Option and Policies in CNP Upgrade
--------------------------------------

The Robin CNP v5.3.5 upgrade includes the following new policies and a new option.

.. Note:: You can use these policies and options in both the single-step and multistage upgrade processes using GoRobin.

* Reserved-cpus-cluster - Reserved CPUs kubelet option.
  Pass the CPUs to be reserved by kubelet; these CPUs are not used for guaranteed Pods.

  - You can also specify reserved CPUs separately for master and worker nodes:

    + reserved-cpus-masters
    + reserved-cpus-workers

  .. Note:: You can use any of these options with the reserved CPUs cluster option.

* Topology-manager-policy - Topology Manager policy. Valid values are:

  + none
  + best-effort
  + restricted
  + single-numa-node

  .. Note:: best-effort is the default value.

* cpu-manager-policy - CPU Manager policy. Valid values are:

  + none
  + static

  .. Note:: static is the default value.

------------------
RWX Volume Support
------------------

This feature integrates with the Intel SRIOV/FPGA device plugin to discover and allocate FPGA VFs to Pods. This feature allocates FPGA VFs in the cloud-native way and hence supports running more than one DU container on the host, as well as running CNF and VNF workloads using the FPGA device.

-------------------------------
Advanced Compute and Networking
-------------------------------

Robin Cloud Native Platform v5.3.5 integrates with the Kubernetes Topology Manager and CPU Manager to support Non-Uniform Memory Access (NUMA) aware allocation, Single Root I/O Virtualization (SR-IOV) devices, dedicated CPUs, and FPGA devices on Robin CNP clusters for deployments using YAMLs or Helm charts.

-------------------------
Topology Manager Policies
-------------------------

Robin CNP v5.3.5 supports the following Topology Manager policies:

- best-effort
- restricted

The restricted policy is the default policy type when installing CNP and also when upgrading CNP.

-------------------------------------------------------------
Gateway Environmental Variable for Secondary IP Pool Gateways
-------------------------------------------------------------

Starting with Robin CNP v5.3.5, you can add secondary IP Pool gateway information with an environmental variable.
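As a sketch of how to confirm the gateway details from inside a running Pod, you can inspect the Pod's environment, since Robin exposes IP Pool details as Pod environment variables (per the HF1 fix for PP-22514, such variables are prefixed with ``ROBIN_IPPOOL_NAME``). The exact variable names and the command below are assumptions for illustration, not documented syntax:

.. code-block:: text

   # Sketch only; variable names are assumptions
   # kubectl exec <pod-name> -- env | grep ROBIN_IPPOOL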
-------------------------------------------------------
Option to Provide Container Name in Network Annotations
-------------------------------------------------------

Robin CNP v5.3.5 provides an option to specify the container name in a network annotation when a Pod has multiple containers. In such a Pod, you can specify the network annotation for a specific container of the Pod.

------------------------------
Custom Network Interface Names
------------------------------

The custom network interface name feature enables you to provide a custom name for a network interface for easy identification and to use the network interface for desired purposes. This feature allows you to implement custom interface names inside the Pod based on the requirements of a network function. If you do not provide a custom name for a network interface, Robin uses the default interface naming scheme.

------------------------------
Storage Affinity Customization
------------------------------

Robin Configuration Management (RCM) allocates compute and storage together on a single node when you install a Helm chart with storage affinity. However, in certain scenarios, the Helm chart may install compute and storage on different nodes. If you do not want the Helm chart to install compute and storage on different nodes, specify the annotation ``robin.io/storage-affinity-policy: hard`` next to the affinity annotation.

.. Note:: You can also enable this customization globally by using the Robin config update command on ``extender_strict_affinity`` under the manager section.

----------------------------------------------------
Application Bundles Support to Map the Custom Values
----------------------------------------------------

Robin Application Bundles enable you to provide a mapping of the custom values for an application Pod main container port to the host port necessary for accessing the application.
You can configure this as part of the Bundle manifest YAML or through the robin template CLI via an input YAML.

--------------------------------------
Support for Workload Types Through CSI
--------------------------------------

This Robin software version supports adding workload types to Robin storage through the Container Storage Interface (CSI). You can set the workload type using a storage class or a persistent volume claim (PVC).

- To set or modify the workload type in the storage class template, use the following parameter:

  .. code-block:: text

     # workload: ordinary|latency|throughput|dedicated

- To set or modify the workload type in the PVC, use the following annotation:

  .. code-block:: text

     # robin.io/workload: ordinary|latency|throughput|dedicated

The ``robin volume info`` command displays the workload type.

----------------------------------
Bundles UI Performance Improvement
----------------------------------

The Bundles UI is updated for better performance. With this improvement, the bundle icons are downloaded only once and stored in a folder for reuse. The Bundles UI is also redesigned to make it easier to use.

---------------------------------
Chargeback Feature Using Robin UI
---------------------------------

Starting with Robin CNP v5.3.5, you can use the Chargeback feature from the Robin UI. You can perform the following Chargeback tasks from the UI:

- Display all resources and resource types
- Use the desired currency type
- Export a report or receipt into an Excel sheet with filters (tenant, user)
- Modify the unit price per resource type (for example, SSD or CPU)

----------------
Robin Audit Logs
----------------

The Robin audit logs feature enables users to view the user audit logs in the log file ``robin-user-audit.log``, located at ``/var/log/robin/robin-user-audit.log``. By default, the Audit Logs feature is disabled. You need to enable this feature to view the audit logs.
Once you enable the feature, all log messages are provided in JSON or text format. You can specify the number of archive log files to retain and the size of each log file using config attributes in the ``user_audit`` section.

Use the following command to enable Audit Logs:

.. code-block:: text

   # robin config update user_audit log_enable True

---------------------
Archive Robin Job Log
---------------------

Robin CNP v5.3.5 supports archiving and purging Robin Job logs. The Robin Job logs are present in the following directories:

- ``/var/log/robin/server`` - Present on the Robin master nodes only
- ``/var/log/robin/agent`` - Present on all Robin nodes

Both log directories have an archived sub-directory. You can archive job logs by running the following command:

.. code-block:: text

   # robin job archive

The archive process moves the logs of all successfully completed jobs to the archived sub-directory. The logs of failed jobs remain in the main directory. For cron job logs, Robin archives the logs of all successfully completed jobs with a completion time older than 24 hours. You can configure the cron job time and the archival time duration using the following parameters:

- ``job_archive_cron``
- ``job_archive_age``

-------------------
Purge Robin Job Log
-------------------

Robin CNP v5.3.5 provides an option to purge Robin Job logs and the respective database entries using the following options:

- The ``job_purge_cron`` parameter in the robin config
- The ``robin job purge`` command

-------------------
Robin Client Change
-------------------

After the upgrade, users need to download the latest Robin client because the old version of the Robin client will not work.

-----------------------------------
Change in IP Pool Naming Convention
-----------------------------------

To align with the Kubernetes naming convention, Robin CNP v5.3.5 supports naming an IP Pool only in lowercase letters. If you are adding a new IP Pool in Robin CNP v5.3.5, you must use lowercase letters to name it.
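For example, assuming an ``ip-pool add`` subcommand of the ``robin`` CLI (the subcommand name and its arguments are assumptions for illustration; only ``robin ip-pool info`` and ``robin ip-pool claim`` appear elsewhere in these notes):

.. code-block:: text

   # Accepted: name uses only lowercase letters
   # robin ip-pool add data-pool-v6 ...

   # Rejected in Robin CNP v5.3.5: name contains uppercase letters
   # robin ip-pool add Data-Pool-V6 ...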
This change applies only to new IP Pools and does not impact any existing IP Pools.

Known Issues
------------

.. list-table::
   :widths: 15 80
   :header-rows: 1

   * - Reference ID
     - Description
   * - PP-21916
     - **Symptom**

       The Pod IP is not pingable from any node in the cluster other than the node where the Pod is running.

       **Workaround**

       Bounce the Calico Pod running on the node where the issue is seen.
   * - PP-21935
     - **Symptom**

       Pods are stuck in the ContainerCreating state with the error: ``kubernetes.io/csi: mounter.SetUpAt`` failed to check for the ``STAGE_UNSTAGE_VOLUME`` capability.

       **Workaround**

       * Flush the connection entries:

         .. code-block:: text

            # conntrack -F

       * Bounce the nodeplugin Pod. If the nodeplugin Pod has become unusable, future filesystem mounts will fail; this is a symptom of the many retries of NFS mount calls that hang. Bouncing the Pod clears out the hung processes.
   * - PP-21832
     - **Symptom**

       The node is marked in the ``NotReady`` state after a reboot.

       **Workaround**

       To recover the cluster, run the following commands:

       .. code-block:: text

          # systemctl restart kubelet
          # systemctl restart dockershim
          # docker restart robin-cri
   * - PP-21469
     - **Symptom**

       A change in isolcpus is not reflected on the host after rediscovery.

       **Workaround**

       1. Update ``/etc/sysconfig/kubelet`` and update the ``reserved-cpus`` parameter to include the CPU IDs.
       2. If your new ``reserved-cpus`` set is a subset of the existing one, just restart ``kubelet``.
       3. If the new ``reserved-cpus`` set is not a subset of the existing cpuset:

          a. Drain the K8s node or reboot the K8s node. The aim is to get rid of all Pods that are using CPUs from the new ``reserved-cpus``; you can also delete those specific Pods, but this is for an advanced user.
          b. Once all Pods are drained, restart ``kubelet`` (not needed if you are rebooting).
          c. Uncordon the K8s node.
   * - PP-20783
     - **Symptom**

       After powering off a node, app Pods fail to come up on the rest of the available nodes.

       **Workaround**

       Deploy Pods without vnode hooks.
       This results in fewer containers per node.
   * - PP-20131
     - **Symptom**

       A Pod is stuck in the Init state after a reboot of one worker node.

       **Workaround**

       Kill the Pod that is stuck in ``Init:CrashLoopBackOff`` and the Pod will be redeployed.

       **Workaround 2**

       Add a security context for the init container as well:

       .. code-block:: text

          securityContext:
            runAsUser: 0
            privileged: true
   * - PP-22109
     - **Symptom**

       GoRobin does not support RHEL 8/CentOS 8 installation.

       **Workaround**

       Use the Robin installer:

       .. code-block:: text

          # robin-install-k8s_5.3.5-NNN.sh … … --robin-image-archive= --k8s-image-archive= …
   * - PP-22109
     - **Symptom**

       Post upgrade, in some cases, applications come up with ``PLAN_FAILED``.

       **Workaround**

       Restart the plan-failed application.
   * - PP-22104
     - **Symptom**

       IOMGR is down while upgrading.

       **Workaround**

       Perform the following steps:

       - Run ``systemctl restart iomgr-server`` (execute inside the robinds container).
       - Perform the Robin upgrade:

         .. code-block:: text

            # ./gorobin onprem upgrade-robin --hosts --gorobintar --robin-admin-user --robin-admin-passwd

       - Perform the K8s upgrade:

         .. code-block:: text

            # ./gorobin onprem upgrade-k8s --hosts --gorobintar --robin-admin-user --robin-admin-passwd

       - Perform the post-upgrade step:

         .. code-block:: text

            # ./gorobin onprem post-upgrade-robin --hosts --gorobintar --robin-admin-user --robin-admin-passwd
   * - PP-22354
     - **Symptom**

       The crictl version does not reflect 1.21 in an uninstall and reinstall environment.

       **Workaround**

       After the Robin uninstall, make sure the cri-tools 1.13 version package is cleaned up manually before a fresh install:

       .. code-block:: text

          # yum remove cri-tools

Technical Support
-----------------

Contact `Robin Technical support `_ for any assistance.

====================
Robin CNP v5.3.5 HF1
====================

The Robin CNP v5.3.5 HF1 has three bug fixes and two known issues.
Upgrade Paths
-------------

The following are the upgrade paths for Robin CNP v5.3.5 HF1:

- Robin v5.3.3-115 (HF4) **to** Robin v5.3.5-151 (HF1)
- Robin v5.3.5-140 (GA) **to** Robin v5.3.5-151 (HF1)

Fixed Issues
------------

.. list-table::
   :widths: 15 80
   :header-rows: 1

   * - Reference ID
     - Description
   * - PP-22497
     - The vNode deployment process fails if a storage mount path has a trailing slash.
   * - PP-22514
     - Starting with 5.3.5 HF1, if a network interface has more than one IP address or gateway, the Pod will have more than one ENV variable to accommodate the IP address and the gateway, prefixed by ``ROBIN_IPPOOL_NAME``.
   * - PP-22506
     - After you delete a container using Helm, Robin's resources are not freed up on the cluster/node.

Known Issues
------------

.. list-table::
   :widths: 15 80
   :header-rows: 1

   * - Reference ID
     - Description
   * - PP-22541
     - **Symptom**

       Sometimes the Robin CNP installation process on Red Hat Enterprise Linux 8 fails with this message: Exception: System token file not found.

       **Workaround**

       Run the following commands and try to install again:

       .. code-block:: text

          # systemctl restart journald.socket
          # systemctl restart rsyslog
          # systemctl restart auditd
   * - PP-20783
     - **Symptom**

       After powering off a node, app Pods fail to come up on the rest of the available nodes.

       **Workaround**

       Deploy Pods without vnode hooks. This results in fewer containers per node.

Technical Support
-----------------

Contact `Robin Technical support `_ for any assistance.

====================
Robin CNP v5.3.5 HF2
====================

The Robin CNP v5.3.5 HF2 release has an improvement, bug fixes, and known issues.
Upgrade Paths
-------------

The following are the upgrade paths for Robin CNP v5.3.5 HF2:

- Robin v5.3.3-115 (HF4) **to** Robin v5.3.5-159 (HF2)
- Robin v5.3.5-151 (HF1) **to** Robin v5.3.5-159 (HF2)

Improvement
-----------

--------------------------------------------------------------
Application Auto Registration Prefers app.kubernetes.io Labels
--------------------------------------------------------------

During the auto-registration of Helm applications in Robin CNP, if both the app or release labels and the app.kubernetes.io labels are present, the auto-registration process prefers the app.kubernetes.io labels over the app or release labels to ensure the correct object hierarchy in the Robin database.

Fixed Issues
------------

.. list-table::
   :widths: 15 80
   :header-rows: 1

   * - Reference ID
     - Description
   * - PP-21239
     - In Robin 5.3.5, you are unable to add a bundle with the option ``isol: true``. Starting with Robin 5.3.5 HF2, you can add bundles with the ``isol, nonisol`` parameters, as the Topology Manager using the CPU Manager looks for reserve and ignores the ``isol/nonisol`` parameters.
   * - PP-22612
     - When you delete a StatefulSet Pod and try to create a new Pod, the Pod fails to come up and displays a message saying the IP already exists. The root cause of the issue is that the deletion event comes after a considerable delay (30 seconds to 4 minutes) and releases the IP of the newly recreated Pod from the Robin platform.
   * - PP-22711
     - The deadlock of the configuration lock and dev lock while trying to start freeing up stale data is fixed.
   * - PP-22732
     - The Robin CNP UI does not display the total count of resources when creating a template for an app with multiple roles.
   * - PP-22737
     - After upgrading to Robin 5.3.5, the Pods and apps restarted after the upgrade are prefixed with docker.io and fail to start.
   * - PP-22761
     - After upgrading to Robin 5.3.5, the ``robin.pem`` file (``/etc/ssl/private/robin.pem``) is not updated.
   * - PP-22870
     - Robin Configuration Management (RCM) fails to assign a static IP address when a user requests a static IP under the following conditions:

       - An invalid IP pool is provided in deployments.
       - A static IP and the same IP Pool are requested for multiple networks in robin.io/networks.
       - Invalid network configurations (the same subnet multiple times) and network tagging details are provided.
       - Correct IP Pool details are provided, but the PVC is configured with replication and storage affinity is requested.
   * - PP-22764
     - If the docker-registry info is hardcoded in the bundle and not selected while deploying the Robin bundle app, then docker.io gets prefixed before the image for the application Pods. This issue is resolved in Robin 5.3.5 HF2. The docker.io prefix is applied only when all of the following conditions are met:

       - The docker.io registry is present in Robin CNP.
       - The docker.io registry credentials are set in Robin CNP.
       - The ``prefix_dockerio`` config variable is set to True.
   * - PP-22844
     - When you reuse or restart a Pod, it fails with this error: Failed to allocate static IP for pod.
   * - PP-22123
     - When you specify a node selector, Robin selects a different node. A node selector is available with Robin 5.3.5 HF2, which ensures Robin planning considers the correct subset of nodes.
   * - PP-22721
     - The issue of the manager timestamp not being updated on time during certain PostgreSQL operations is fixed in this release.

Known Issues
------------

.. list-table::
   :widths: 15 80
   :header-rows: 1

   * - Reference ID
     - Description
   * - PP-22885
     - **Symptom**

       If a container image has ``/bin/bash`` or ``/bin/sh`` as an entry point, as there is no tty console associated by default in Kubernetes, the containers go into the Completed state and eventually into ``CrashLoopBackOff``.

       **Workaround**

       Add the following env parameter in the bundle under each role section that has ``/bin/bash`` or ``/bin/sh`` as the entry point:

       .. code-block:: text

          - DOCKER_OPTS: --tty
   * - PP-22893
     - **Symptom**

       With this release, only the RWX Shared mode is supported for IPv6.
   * - PP-20783
     - **Symptom**

       After powering off a node, app Pods fail to come up on the rest of the available nodes.

       **Workaround**

       Deploy Pods without vnode hooks. This results in fewer containers per node.

Technical Support
-----------------

Contact `Robin Technical support `_ for any assistance.

====================
Robin CNP v5.3.5 HF3
====================

The Robin CNP v5.3.5 HF3 release has improvements, bug fixes, and known issues.

Upgrade Paths
-------------

The following is the upgrade path for Robin CNP v5.3.5 HF3:

- Robin v5.3.5-159 (HF2) **to** Robin v5.3.5-207 (HF3)

Improvements
------------

--------------------------------------
Storage Reclaim Job after App Deletion
--------------------------------------

Starting from Robin 5.3.5 HF3, when you run the ``robin app delete --force`` command, a new job, ``K8SVolumeDeleteHelmRelease``, runs after the ``K8SApplicationDeleteHelm`` job to reclaim storage space.

--------------------------------
Auto Discover Tuna Isolated CPUs
--------------------------------

The Robin CNP installer autodetects the isolcpus if the host is already configured with isolcpus using tuned/tuna settings, and it sets the kubelet reserved CPUs appropriately. With the autodetection of isolcpus, you do not need to specify the isolcpus manually in the installer.

Fixed Issues
------------

.. list-table::
   :widths: 15 80
   :header-rows: 1

   * - Reference ID
     - Description
   * - PP-23356
     - When you create a RAID-0 device using the ``mdadm`` command with the Robin TCMU devices as slaves, the ``partprobe`` and ``pvs`` commands get stuck during the Robin upgrade process. This Robin version fixes this issue by omitting the RAID devices created on top of the TCMU devices.
   * - PP-23329
     - The issue of apps not being scheduled evenly to all available nodes is fixed in this Robin version.
       From the Robin 5.3.5 HF3 version, when selecting a host for application deployment, nodes are preferred depending on the number of currently available Pods.
   * - PP-23325
     - After upgrading to Robin CNP 5.3.5, the auth webhook configuration is missing from the ``apiserver.yaml`` file on some of the master nodes. Due to this issue, the ``kubectl`` command fails to fetch the current master service IP address, which results in a failure to set up the auth-webhook configuration. Robin fixed this issue in this version.
   * - PP-23264
     - This Robin version checks and sets the robin_k8s_extension to true if it is set to false as part of the pre-upgrade stage.
   * - PP-23250
     - The GoRobin upgrade script fails when upgrading Docker images and displays the following error: Failed to download upgrade images on host. The issue is due to missing WWN links in the TCMU devices. This Robin version fixes the issue by omitting all TCMU devices without WWN links during the scan.
   * - PP-22980
     - The issue of Robin schedules remaining active even after deleting the app is fixed.
   * - PP-22979
     - The issue of the systemd-logind process being unresponsive inside the Robin container is fixed.
   * - PP-22948
     - The issue of the GoRobin upgrade script failing to upload Docker images to cluster nodes due to sudoers is fixed.
   * - PP-23388
     - The intermittent slowness issue of the Robin Configuration Management (RCM) server is fixed.
   * - PP-23308
     - The issue of the metrics of a Pod not matching the containers in it is fixed.

Known Issues
------------

.. list-table::
   :widths: 15 80
   :header-rows: 1

   * - Reference ID
     - Description
   * - PP-23477
     - **Symptom**

       If you are upgrading from Robin CNP v5.3.5 HF2 to Robin v5.3.5 HF3 on a cluster reporting containers stuck in the ``ContainerCreating`` state due to NFS-Server Pod failover and Exports failing jobs, after upgrading to Robin v5.3.5 HF3, you might notice Pods in the ``CrashLoopBackOff`` state.
       **Workaround**

       After upgrading to Robin v5.3.5 HF3, if you notice Pods in the ``CrashLoopBackOff`` state, you must bounce the Pods.
   * - PP-21983
     - **Symptom**

       When an IP address is not in ``robin ip-pool info --ip-allocations`` and no other running Pod in the cluster is using the IP, a Pod that is controlled by a Deployment, StatefulSet, or DaemonSet may not get created.

       **Workarounds**

       You can apply any of the following workarounds:

       * Run this command: ``robin ip-pool claim --ip``
       * Wait; the IP will be cleared automatically within 60 minutes.
   * - PP-23277
     - **Symptom**

       The Robin UI incorrectly displays multiple primary namespaces.

       **Workaround**

       Use the CLI to verify the correct primary namespace.
   * - PP-23429
     - **Symptom**

       If you add the superadmin user to a tenant as a regular user and then remove it, the cluster role binding for the user is also erroneously removed.

       **Workaround**

       You need to manually re-add the cluster role binding. Use the following command to re-add it:

       .. code-block:: text

          # kubectl create clusterrolebinding clusterrolebinding- -cluster-admin --clusterrole=cluster-admin --user=

       **Example**

       .. code-block:: text

          # kubectl create clusterrolebinding clusterrolebinding-robin-cluster-admin --clusterrole=clust

Technical Support
-----------------

Contact `Robin Technical support `_ for any assistance.

====================
Robin CNP v5.3.5 HF4
====================

**Release Date**: 22 October 2021

The Robin CNP v5.3.5 HF4 release has an improvement, bug fixes, and known issues.
Upgrade Paths
-------------

The following are the upgrade paths for Robin CNP v5.3.5 HF4:

* Robin v5.3.3-115 (HF4) **to** Robin v5.3.5-213 (HF4)
* Robin v5.3.5-159 (HF2) **to** Robin v5.3.5-213 (HF4)
* Robin v5.3.5-207 (HF3) **to** Robin v5.3.5-213 (HF4)

Improvement
-----------

------------------------------
Support for Kubernetes 1.20.11
------------------------------

Robin CNP v5.3.5 HF4 now supports Kubernetes version 1.20.11.

Fixed Issues
------------

The Robin CNP v5.3.5 HF4 release has the following bug fixes.

.. list-table::
   :widths: 15 80
   :header-rows: 1

   * - Reference ID
     - Description
   * - PP-23277
     - The issue of the Robin UI incorrectly displaying multiple primary namespaces is now fixed.
   * - PP-23642
     - When creating a Pod, Robin CNP executes planning ahead of Kubernetes scheduling. However, in some cases, Robin CNP is in the middle of collecting the Kubernetes inventory, and the node that Kubernetes picked may not be in Robin's plan. Therefore, it might fail to allocate a static IP address for a Pod. This issue is now fixed.
   * - PP-24041
     - The issue of disabling the usage of the default HTTPS port (443) not working using the ``--disable-default-https`` option is now fixed.

Known Issues
------------

The Robin CNP v5.3.5 HF4 release has the following known issues.

.. list-table::
   :widths: 15 80
   :header-rows: 1

   * - Reference ID
     - Description
   * - PP-23942
     - **Symptom**

       The Postgres slave instances did not come up after the upgrade.

       **Workaround**

       Restart the Postgres instance on the respective node.
   * - PP-22516
     - **Symptom**

       When installing Kubernetes master nodes, node taints may not apply successfully.

       **Workaround**

       You can manually taint the Kubernetes master node after the installation is complete by using the following command from any Kubernetes master node:

       .. code-block:: text

          # kubectl taint nodes ${K8S master node name} node-role.kubernetes.io/master=:NoSchedule

Technical Support
-----------------

Contact `Robin Technical support `_ for any assistance.
====================
Robin CNP v5.3.5 HF5
====================

**Release Date**: 30 November 2021

The Robin CNP v5.3.5 HF5 has an improvement, bug fixes, and a known issue.

Infrastructure Versions
-----------------------

The following software applications are included in this CNP release.

==================== ========
Software application Version
==================== ========
Kubernetes           1.21.5
Docker               19.03.9
Prometheus           2.16.0
Node-exporter        1.1.2
Calico               3.12.3
HA-Proxy             1.5.18
PostgreSQL           9.6.11
Grafana              6.5.3
==================== ========

Upgrade Paths
-------------

The following is the upgrade path for Robin CNP v5.3.5 HF5:

* Robin v5.3.5-213 (HF4) **to** Robin v5.3.5 (HF5)

Improvement
-----------

---------------------------------------------------------------------
Network Planning Support for Apps with Pod Affinity and Anti-affinity
---------------------------------------------------------------------

Robin CNP v5.3.5 HF5 supports network planning for apps with Pod affinity and anti-affinity.

Fixed Issues
------------

The Robin CNP v5.3.5 HF5 release has the following bug fixes.

.. list-table::
   :widths: 15 80
   :header-rows: 1

   * - Reference ID
     - Description
   * - PP-24202
     - The security issue with SSL Medium Strength Cipher Suites is fixed by supporting strong cipher suites with more than 128-bit keys in the Robin CNP services.
       The following is the list of the supported strong cipher suites:

       * ``TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256``
       * ``TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384``
       * ``TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305``
       * ``TLS_RSA_WITH_AES_128_CBC_SHA``
       * ``TLS_RSA_WITH_AES_256_CBC_SHA``
       * ``TLS_RSA_WITH_AES_128_GCM_SHA256``
       * ``TLS_RSA_WITH_AES_256_GCM_SHA384``
       * ``TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA``

       The above-mentioned strong cipher suites are supported in the following Robin CNP services:

       * K8s API server
       * K8s controller manager
       * K8s scheduler
       * K8s kubelet
       * Robin UI HTTPS server
       * Robin event server
   * - PP-21983
     - When an IP address is not in ``robin ip-pool info --ip-allocations`` and no other running Pod in the cluster is using the IP, a Pod that is controlled by a Deployment, StatefulSet, or DaemonSet may not get created. This issue is fixed.
   * - PP-22941
     - The issue of a Pod not coming up successfully when you do not provide any limits and requests in the container resource section but do provide the Robin annotation for network planning is fixed.

Known Issue
-----------

The Robin CNP v5.3.5 HF5 release has the following known issue.

.. list-table::
   :widths: 15 80
   :header-rows: 1

   * - Reference ID
     - Description
   * - PP-24202
     - **Symptom**

       A Pod is running without the requested additional network interfaces.

       **Workaround**

       Bounce the Pod using the ``kubectl delete pod`` command.

Technical Support
-----------------

Contact `Robin Technical support `_ for any assistance.