2. Installing Robin CNP Using GoRobin

Robin.io provides the GoRobin utility for quick and simple installation of Robin CNP. The utility simplifies the install, uninstall, and upgrade procedures.

The GoRobin utility performs all the pre-install checks, copies the required installation files to all nodes, and runs the post-installation checks automatically.

You can install Robin CNP in the following two modes:

  • High availability (HA) - This mode must be used for all production environments.

  • Non-HA

Note

Currently, Robin CNP 5.4.3 can only be installed on non-cloud based setups using the GoRobin utility.

2.1. Supported Operating Systems

The following are the supported operating systems and Kernel versions for installing Robin CNP:

OS Version

Kernel Version

CentOS 7.9

3.10.0-1160.71.1.el7.x86_64 and lower

Red Hat Enterprise Linux 8.10

4.18.0-553.el8_10.x86_64

Rocky Linux 8.6

4.18.0-372.9.1.el8.x86_64 and lower

You can run the uname -r command to determine the kernel version. If an update is required, use the yum update kernel command on CentOS, or the dnf update kernel command on Rocky Linux and Red Hat Enterprise Linux. Reboot the nodes after updating the kernel.
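For example, the kernel check can be scripted as follows (the update commands are shown as comments because they require root privileges and a reboot afterwards):

```shell
# Print the running kernel version so it can be compared against the
# supported list above.
kernel="$(uname -r)"
echo "Running kernel: ${kernel}"

# If an update is needed (requires root, reboot the node afterwards):
#   CentOS 7:               yum update kernel
#   RHEL 8 / Rocky Linux 8: dnf update kernel
```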

Note

Nodes within a Robin cluster do not need to be homogeneous with regard to the OS installed on them. Nodes running different supported operating systems can be part of the same Robin cluster and can be installed at the same time.

2.2. Prerequisites

The following are the hardware, networking, and license prerequisites:

2.2.1. General Prerequisites

  • Robin CNP installation files, namely the GoRobin utility and the GoRobin tar file. These files use the naming conventions gorobin_<version> and gorobintar-<version>.tar, respectively.

  • Login credentials for hosts on which Robin CNP will be installed.

Note

Contact the Robin Support team or your Account Manager for the installation files. These files must not be edited in any way, including renaming.

2.2.2. Hardware and Networking Prerequisites

The following are the hardware and networking prerequisites:

  • All nodes in the cluster must be DNS resolvable, and the nodes must have network connectivity among themselves.

  • A minimum of 16 GB memory for all nodes.

  • A minimum of 8 cores.

  • A minimum of three nodes for HA mode and one node for non-HA mode. Robin recommends at least five nodes when installing Robin CNP in HA mode.

  • Credentials of the nodes (IP addresses, username, and password).

  • A 10 Gb network for the interconnect network among hosts.

  • A virtual IP (VIP) address for HA mode. The VIP must be in the same subnet as the hosts on which you are going to install Robin CNP.

  • A virtual router ID (VRID) for your network. It must be an integer between 1 and 255 that is unique per cluster.
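The VIP and VRID constraints above can be sanity-checked before installation. The following is an informal sketch (the addresses, prefix length, and VRID are placeholders, not values from your network):

```shell
# Verify that a candidate VIP falls in the hosts' subnet and that the
# VRID is in the valid 1-255 range.
python3 - <<'EOF'
import ipaddress

# Placeholder host address with its subnet prefix, and candidate VIP.
host_net = ipaddress.ip_interface("192.0.2.61/24").network
vip = ipaddress.ip_address("192.0.2.202")
assert vip in host_net, "VIP must be in the hosts' subnet"

vrid = 218
assert 1 <= vrid <= 255, "VRID must be an integer between 1 and 255"
print("VIP/VRID checks passed")
EOF
```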

2.2.3. License Proxy (Optional)

If you want to activate a production license as part of the installation process, you must first set up a License Proxy and include its URL endpoint in the config.json file. For more information, see License Proxy.

2.3. Host Login JSON File

You can use the host login JSON file instead of the --hosts option when installing Robin CNP if the hosts in the Robin cluster have different SSH passwords and port configurations.

Note

  • If your hosts have different SSH passwords and port configurations, keep this host login JSON file ready before you start the installation steps.

  • Hostnames must be in lowercase letters and cannot contain uppercase letters.

Sample hosts login credentials JSON file.

{
      "vm-2-61.robinsystems.com": {
            "password": "admin12",
            "role": "master",
            "user": "root",
            "port": "22"
      },
      "vm-2-62.robinsystems.com": {
            "password": "admin34",
            "role": "master",
            "user": "root",
            "port": "22"
      },
      "vm-2-63.robinsystems.com": {
            "password": "admin56",
            "role": "master",
            "user": "root",
            "port": "22"
      },
      "vm-2-64.robinsystems.com": {
            "password": "admin78",
            "user": "root",
            "port": "22"
      }
}
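Before starting the installation, you can sanity-check such a file with a short script. The sketch below is illustrative only (the file path and credentials are placeholders); GoRobin performs its own validation as well:

```shell
# Write a sample hosts login JSON file (placeholder credentials) and verify
# that it parses and that every host entry carries the required keys.
cat > /tmp/hosts.json <<'EOF'
{
  "vm-2-61.robinsystems.com": {"password": "admin12", "role": "master", "user": "root", "port": "22"},
  "vm-2-64.robinsystems.com": {"password": "admin78", "user": "root", "port": "22"}
}
EOF

python3 - <<'EOF'
import json

with open("/tmp/hosts.json") as f:
    hosts = json.load(f)

for name, cfg in hosts.items():
    # "role" is optional (worker nodes omit it); the rest must be present.
    missing = {"password", "user", "port"} - cfg.keys()
    assert not missing, f"{name} is missing {missing}"
    # Hostnames must be lowercase per the note above.
    assert name == name.lower(), f"{name} must be lowercase"

print(f"{len(hosts)} host entries OK")
EOF
```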

2.4. Config JSON File

The config JSON file enables you to provide install options and pass configuration details for hosts when installing Robin CNP. Provide the config.json file to the installer using the --config-json command option.

Note

For IPv6 environments, you must use the config JSON file during CNP installation.

The following is the sample format of the config.json file. You can use this format to include any configuration parameter in the config.json file.

Sample Format

{
      "<host1>": {"<parameter>": "<parameter value>"},

      "<host2>": {"<parameter>": "<parameter value>"}
}

Sample config.json file

{
 "vm-2-61.robinsystems.com":{
       "robin-install-dir":"/home/custom_robininstall",
       "robin-backup-dir":"/home/custom_robinbackup",
       "robindsdir":"/home/custom_robinds",
       "ip-protocol":"ipv6",
       "disablerepo":"*",
       "enablerepo":"robin-repo",
       "ca-cert-path":"/root/certs/abhi/abhi-inter-ca.crt",
       "ca-key-path":"/root/certs/abhi/abhi-inter-ca.key",
       "best-effort-qos": "True",
       "update-coredns":"True",
       "topology-manager-policy":"restricted",
       "setup-gpu-operator":"True",
       "nics":"eth1",
       "reserved-cpus":"0-3",
       "vault-addr": "https://192.0.2.100:8200",
       "vault-keys-path": "secret/robin",
       "vault-ca-cert": "/root/MyRootCA.pem",
       "vault-client-cert": "/root/robin_tls.pem",
       "vault-client-key": "/root/robin_tls.key",
       "kms": "vault",
       "identity-cert-path": "/root/certs/identity-certs/cluster/cluster-identity.cert",
       "identity-key-path": "/root/certs/identity-certs/cluster/cluster-identity.key",
       "identity-ca-path": "/root/certs/identity-certs/ca.crt",
       "zero-trust":"True",
       "single-node-cluster":"True",
       "loadbalancer-iprange":"<range_of_IPs>",
       "k8s-image-pull-policy":"Never"
 },
 "vm-2-62.robinsystems.com":{
       "robin-install-dir":"/home/custom_robininstall",
       "robin-backup-dir":"/home/custom_robinbackup",
       "robindsdir":"/home/custom_robinds",
       "ip-protocol":"ipv6",
       "disablerepo":"*",
       "enablerepo":"robin-repo",
       "nics":"eth1",
       "reserved-cpus":"0-3"
 },
 "vm-2-63.robinsystems.com":{
       "robin-install-dir":"/home/custom_robininstall",
       "robin-backup-dir":"/home/custom_robinbackup",
       "robindsdir":"/home/custom_robinds",
       "ip-protocol":"ipv6",
       "disablerepo":"*",
       "enablerepo":"robin-repo",
       "reserved-cpus":"0-3"
 },
 "vm-2-64.robinsystems.com":{
       "robin-install-dir":"/home/custom_robininstall",
       "robin-backup-dir":"/home/custom_robinbackup",
       "robindsdir":"/home/custom_robinds",
       "ip-protocol":"ipv6",
       "disablerepo":"*",
       "enablerepo":"robin-repo",
       "nics":"eth1",
       "reserved-cpus":"0-3"
 }
}

Important:

You must provide the following parameters using only the config.json file, not the corresponding command-line options:

  • ca-key-path - Provide only in the first node

  • ca-cert-path - Provide only in the first node

  • update-coredns - Provide only in the first node

  • topology-manager-policy - Provide only in the first node. Valid values: none, best-effort, restricted, or single-numa-node
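As an illustration, a minimal config.json that carries only these first-node parameters might look like the following (the hostname and paths are placeholders):

```shell
# Write a minimal config.json with the first-node-only parameters and
# confirm it is valid JSON before handing it to the installer.
cat > /tmp/config.json <<'EOF'
{
  "node1.example.com": {
    "ca-cert-path": "/root/certs/inter-ca.crt",
    "ca-key-path": "/root/certs/inter-ca.key",
    "update-coredns": "True",
    "topology-manager-policy": "restricted"
  }
}
EOF
python3 -m json.tool /tmp/config.json > /dev/null && echo "config.json is valid JSON"
```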

2.5. Preparing for Robin CNP Installation

2.5.1. Step 1 - Pre-installation Checks

The GoRobin utility takes care of the pre-install checks. However, it is recommended that you manually check the following before installing:

  • All required prerequisites are met. For more information, see Prerequisites.

  • Disable the network firewall if it is enabled.

  • Disable SELinux only during installation.

  • Network Time Protocol (NTP) Server is up and running.

  • Disable Swap on your hosts.

  • The root user needs to be present in the sudoers file located at /etc/sudoers.

  • Remove conflicting packages such as podman and buildah packages on CentOS 8 hosts. The CNP Installer might prompt you to remove more conflicting packages during installation. You must rerun the installation after removing the conflicting packages.

  • When installing Robin CNP on Rocky Linux host systems, if runc package is installed, you must uninstall it before installing Robin CNP.

  • If the directory /var/lib/docker is present, it must be on an XFS filesystem.

  • The locations /, /var, and /var/crash should be on separate partitions.

  • Robin supports GPU allocations only via the NVIDIA GPU operator that works on CentOS Kernel version 3.10.0-1160 and above.

  • Automatic detection of isolated CPUs occurs only when the respective hosts have been configured with isolated cores via tuned and/or tuna settings. The cores that are not part of this isolated set are set as the reserved CPUs for Kubelet. If the isolated cores are configured in another manner, they must be passed to the installer explicitly. If isolated CPUs are not configured, you can configure the reserved CPUs using the config.json parameter, for example: "reserved-cpus": "0-3".
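Several of the checks above can be verified from a shell before you run GoRobin. The following is an informal sketch, not part of GoRobin; some commands (getenforce, findmnt) may be absent on minimal systems:

```shell
# Swap should be disabled (swapon prints no entries when swap is off).
swapon --show || echo "swap check: swapon not available"

# SELinux should be disabled (or at least not enforcing) during installation.
getenforce 2>/dev/null || echo "SELinux tools not installed"

# /, /var and /var/crash should be on separate partitions.
for m in / /var /var/crash; do
    findmnt -n -o TARGET,SOURCE,FSTYPE "$m" || echo "$m is not a separate mount"
done

# If /var/lib/docker exists, it must be on an XFS filesystem.
if [ -d /var/lib/docker ]; then
    stat -f -c '%T' /var/lib/docker
fi

# Isolated CPUs, if any, are usually visible on the kernel command line.
grep -o 'isolcpus=[^ ]*' /proc/cmdline || echo "no isolated CPUs configured"
```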

2.5.2. Step 2 - Prepare hosts and Config JSON files (Optional)

When installing Robin CNP, optionally, you can use the hosts.json file instead of the --hosts option and the config.json file to provide all required installation options instead of providing each option individually as part of the install command.

  • Prepare your hosts JSON file if you have non-homogeneous SSH password and port configurations. For more information, see Host Login JSON File.

  • Prepare Config JSON file for passing configuration options. For more information, see Config JSON File.

2.5.3. Step 3 - Storage Configuration

You need to create the following storage before installing Robin CNP for automatic storage configuration.

  1. Create /var directory of size 60 GB.

  2. Create /home directory of size 240 GB.

The Robin installer uses the folders and creates the required subfolders.

Note

If you prefer to manually configure storage, create separate volumes as per the following requirements:

/var/lib/docker

Directory in which the Docker images and metadata will be stored. Minimum 50 GB in size, but can be sized according to the application spread.

/home/robinds

Directory in which Robin config and Consul data will be stored. Minimum 20 GB in size.

/home/robinds/var/log

Directory in which all the Robin log files will be stored. Minimum 60 GB in size. Robin log files are capped at ~55G on master nodes and 30G on worker nodes.

/home/robinds/var/crash

Directory in which Robin core dump files will be stored. Minimum 100GB in size. This is sufficient to store data for at least 4 crashes.

/home/robinds/var/lib/pgsql

Directory in which the Robin database will be stored. Minimum 80GB in size. Needs to have sufficient space to hold the contents of the database, as well as a backup to support failover.

Note

Robin CNP discovers and initializes only the unpartitioned drives for Pod deployments. You should not tag or label these drives.

2.5.4. Step 4 - Download GoRobin Utility and Robin CNP Software

To download Robin CNP installation files, you should contact the Robin Support Team or your Robin Account Manager.

2.6. Installing Robin CNP in an On-Premises Setup

Before installing Robin CNP, your cluster nodes must meet all the listed prerequisites and pass the pre-install checks. Detailed below are the steps to install Robin CNP in its two primary modes of installation:

  • High-Availability (HA)

  • Non High-Availability (Non-HA).

Note

Additional customizations to the aforementioned installation types are described in the Custom Installations section.

2.6.1. Installing Robin CNP in HA Mode

To install Robin CNP in an on-premises environment, run the following command:

# ./gorobin_<version> onprem install-ha --hosts <hostnames> --gorobintar <path-to-gorobin-tarball> --pemfile <pemfile-path> --vip <virtual-ip> --vrid <virtual-router-id>

Note

These are the required parameters for HA installation: --hosts, --gorobintar, --vip, and --vrid, along with the positional arguments. In addition, you can optionally use the host login JSON and config JSON files in IPv4 environments.

However, for IPv6 environments, you must use the config JSON file. For more information, see Host Login JSON File and Config JSON File.

The following are the important parameters for installing Robin CNP in the HA mode:

--hosts

A comma-separated list of fully qualified hostnames to deploy Robin software.

--hosts-json

Path to a JSON file containing hosts. For the format, see Host Login JSON File.

--dir DIR

Directory to extract installation files to; at least 10 GB is required. Default is /tmp.

--gorobintar

Path to GoRobin tarball containing installation files.

--vip

Virtual IP address of the Kubernetes control plane (must be an IP address).

--vrid

Cluster-unique virtual router ID used by Keepalived (must be an integer from 1 to 255)

--ignore-warnings

Ignore precheck warnings and go forward with the installation.

--config-json

Path to a JSON file containing extra configuration parameters for each host. For the format, see Config JSON File.

--license-id

License ID, obtained from get.robin.io, to activate the cluster with.

--pemfile

Path to the PEM key file used to log in to the hosts.

Important:

You must provide the following parameters only using the config.json file.

  • --ca-key-path

  • --ca-cert-path

  • --update-coredns

  • --topology-manager-policy

Example

# ./gorobin_5.4.1-35 onprem install-ha --hosts 192.0.2.61,192.0.2.62,192.0.2.63 --gorobintar gorobintar-5.4.1-35.tar --ignore-warnings --vrid 218 --vip 192.0.2.202
Please Enter the sshpassword for hosts in your Robin cluster. GoRobin will use this sshpassword to access all hosts in your cluster
Password:
- Checking network connectivity to host vm-2-61.robinsystems.com ... DONE (0 secs)
- Checking network connectivity to host vm-2-62.robinsystems.com ... DONE (0 secs)
- Checking network connectivity to host vm-2-63.robinsystems.com ... DONE (0 secs)
- Running GoRobin Precheck on host vm-2-61.robinsystems.com ... DONE (0 secs)
- Running GoRobin Precheck on host vm-2-62.robinsystems.com ... DONE (0 secs)
- Running GoRobin Precheck on host vm-2-63.robinsystems.com ... DONE (0 secs)
This step will remove any existing Robin tars and re-copy new Robin tars from /root/ folder on all nodes. Are you sure you want to proceed (y/n) ? y
- Removing ROBIN tarball from '3' node(s) in the cluster ... DONE (0 secs)
- Copying ROBIN tarball to '3' node(s) at /root/ in the cluster ... DONE (636 secs)
- Running Precheck on host vm-2-61.robinsystems.com ... WARNING (20 secs)
  - Precheck on this host had warnings. Please check /root//vm-2-61.robinsystems.com_precheck_20220829-085140 for details.
- Running Precheck on host vm-2-62.robinsystems.com ... WARNING (21 secs)
  - Precheck on this host had warnings. Please check /root//vm-2-62.robinsystems.com_precheck_20220829-085201 for details.
- Running Precheck on host vm-2-63.robinsystems.com ... WARNING (20 secs)
  - Precheck on this host had warnings. Please check /root//vm-2-63.robinsystems.com_precheck_20220829-085222 for details.
- Executing hostname ping checks for all nodes in the cluster ... DONE (0 secs)
- Checking time drift between all the nodes ... DONE (0 secs)
- Installing host packages on '3' node(s) in the cluster. You may check /var/log/robin-host-script.log on each host to track progress ... DONE (512 secs)
- Configuring 'vm-2-61' as Master Node of cluster. You may check /var/log/robin-k8s-script.log on vm-2-61.robinsystems.com ... DONE (84 secs)
- Copying kubeconfig file to all nodes ... DONE (6 secs)
- Adding '2' additional master node(s) 'vm-2-62.robinsystems.com, vm-2-63.robinsystems.com' to cluster. You may check /var/log/robin-k8s-script.log on the host. ... DONE (120 secs)
- Configuring additional Kubernetes plugins. You may check /root/robin-k8splus-script.log on vm-2-61.robinsystems.com ... DONE (6 secs)
- Installing Robin on cluster. You may check /root/robin-script.log on vm-2-61 ... DONE (361 secs)
- Validating Installation ... DONE (0 secs)
- Initializing Compute and Storage services on 3 node(s) ... DONE (55 secs)

-----------------------------------------------------------------
 Cluster with 3 node(s) is ready for use
   1. vm-2-61.robinsystems.com
   2. vm-2-62.robinsystems.com
   3. vm-2-63.robinsystems.com
Robin Cluster Name ..... cluster-Bp1254
Robin Admin Username ... admin
Robin Admin Password ... admin1
Robin Admin Access ..... https://192.0.2.202


Note: Since a license was not provided at runtime, the cluster is not fully activated.

-----------------------------------------------------------------
ROBIN was installed on the following hosts which had precheck warnings: vm-2-61.robinsystems.com, vm-2-62.robinsystems.com, vm-2-63.robinsystems.com. This is due to the '--ignore-warnings' flag being passed during the installation. This might lead to erroneous behavior and hence an unsupported installation.
- Removing ROBIN tarball from '3' node(s) at /root/ in the cluster ... DONE (0 secs)
- Pulling ROBIN Install logs from '3' node(s) at /root/ ... DONE (0 secs)

2.6.2. Installing Robin CNP in Single Node HA-Ready Mode

Robin provides an option to install CNP in HA mode in an on-premises environment using a single hostname or IP address. You can later scale up the cluster by adding more master nodes and worker nodes as per your requirements. A Single Node HA-Ready cluster is thus a cluster with a single host and HA enabled.

You can use the same install command to install a Single Node HA-Ready cluster, but provide only a single hostname or IP address in the hosts.json file.
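For example, a hosts.json with a single entry might look like this (the hostname and credentials are placeholders):

```shell
# Write a one-entry hosts login JSON and confirm it parses with exactly
# one host, as required for a Single Node HA-Ready install.
cat > /tmp/single-hosts.json <<'EOF'
{
  "node1.example.com": {"password": "<password>", "role": "master", "user": "root", "port": "22"}
}
EOF
python3 -c "import json; d = json.load(open('/tmp/single-hosts.json')); assert len(d) == 1; print('single-host hosts.json OK')"
```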

Note

All the general prerequisites that are applicable for a regular HA installation are also applicable for a Single Node HA-Ready installation.

Run the following command to install a Single node HA-Ready cluster:

# ./gorobin_<version> onprem install-ha --hosts <hostname> --gorobintar <path-to-gorobin-tarball> --pemfile <pemfile-path> --vip <virtual-ip> --vrid <virtual-router-id>

Example

# ./gorobin_5.4.3-198 onprem install-ha --hosts-json /root/robin/hosts.json --gorobintar gorobintar-5.4.3-198-bin.tar --vip 192.0.2.100 --vrid 227 --ignore-warnings
- Checking network connectivity to host hypervvm-61-61.robinsystems.com ... DONE (0 secs)
- Checking network connectivity to host hypervvm-61-62.robinsystems.com ... DONE (0 secs)
- Checking network connectivity to host hypervvm-61-63.robinsystems.com ... DONE (0 secs)
- Running GoRobin Precheck on the following hosts: hypervvm-61-61.robinsystems.com, hypervvm-61-62.robinsystems.com, hypervvm-61-63.robinsystems.com ... DONE (25 secs)
This step will remove any existing Robin tars and re-copy new Robin tars from /tmp/ folder on all nodes. Are you sure you want to proceed (y/n) ? y
- Removing ROBIN tarball from '3' node(s) in the cluster ... DONE (0 secs)
- Copying ROBIN tarball to '3' node(s) at /tmp/ in the cluster ... DONE (662 secs)
- Running Precheck on hosts: hypervvm-61-61.robinsystems.com, hypervvm-61-62.robinsystems.com, hypervvm-61-63.robinsystems.com ... WARNING (24 secs)
  - Precheck had warnings for host hypervvm-61-63.robinsystems.com. Please check the logs at /tmp//hypervvm-61-63.robinsystems.com_precheck_20230316-010145 for details of the warnings.
  - Precheck had warnings for host hypervvm-61-62.robinsystems.com. Please check the logs at /tmp//hypervvm-61-62.robinsystems.com_precheck_20230316-010145 for details of the warnings.
  - Precheck had warnings for host hypervvm-61-61.robinsystems.com. Please check the logs at /tmp//hypervvm-61-61.robinsystems.com_precheck_20230316-010148 for details of the warnings.
- Executing hostname ping checks for all nodes in the cluster ... DONE (10 secs)
- Checking time drift between all the nodes  ... DONE (0 secs)
- Installing host packages on '3' node(s) in the cluster. You may check /var/log/robin-host-script.log on each host to track progress ... DONE (622 secs)
- Configuring 'hypervvm-61-61' as Master Node of cluster. You may check /var/log/robin-k8s-script.log on hypervvm-61-61.robinsystems.com ... DONE (115 secs)
- Copying kubeconfig file to all nodes ... DONE (6 secs)
- Adding '2' additional master node(s) 'hypervvm-61-62.robinsystems.com' to cluster. You may check /var/log/robin-k8s-script.log on the host. ... DONE (120 secs)
- Adding '2' additional master node(s) 'hypervvm-61-63.robinsystems.com' to cluster. You may check /var/log/robin-k8s-script.log on the host. ... DONE (120 secs)
- Configuring additional Kubernetes plugins. You may check /root/robin-k8splus-script.log on hypervvm-61-61.robinsystems.com ... DONE (10 secs)
- Installing Robin on cluster. You may check /root/robin-script.log on hypervvm-61-61 ... DONE (427 secs)
- Validating Installation ... DONE (0 secs)
- Initializing Compute and Storage services on 3 node(s) ... DONE (60 secs)

-----------------------------------------------------------------
Cluster with 3 node(s) is ready for use
  1. hypervvm-61-61.robinsystems.com
  2. hypervvm-61-62.robinsystems.com
  3. hypervvm-61-63.robinsystems.com
Robin Cluster Name ..... cluster-C8Vk
Robin Admin Username ... admin
Robin Admin Password ... Robin123
Robin Admin Access ..... https://192.0.2.100


Note: Since a license was not provided at runtime, the cluster is not fully activated.

-----------------------------------------------------------------
ROBIN was installed on the following hosts which had precheck warnings: hypervvm-61-63.robinsystems.com, hypervvm-61-62.robinsystems.com, hypervvm-61-61.robinsystems.com. This is due to the '--ignore-warnings' flag being passed during the installation. This might lead to erroneous behavior and hence an unsupported installation.
- Removing ROBIN tarball from '3' node(s) at /tmp/ in the cluster ... DONE (0 secs)
- Pulling ROBIN Install logs from '3' node(s) at /tmp/ ... DONE (0 secs)

2.6.3. Installing Robin CNP in non-HA Mode

You can install Robin CNP in a non-HA mode using the GoRobin utility.

All the prerequisites applicable for HA mode installation are also applicable for non-HA mode.

To install Robin CNP in a non-HA mode, run the following command:

# ./gorobin_<version> onprem install-nonha --hosts <hostnames> --gorobintar <path-to-gorobin-tarball>

Example

# ./gorobin_5.4.1-23 onprem install-nonha --hosts 192.0.2.61 --gorobintar gorobintar-5.4.1-23-bin.tar --ignore-warnings
Please Enter the sshpassword for hosts in your Robin cluster. GoRobin will use this sshpassword to access all hosts in your cluster
Password:
- Checking network connectivity to host vm-2-61.robinsystems.com ... DONE (0 secs)
- Running GoRobin Precheck on host vm-2-61.robinsystems.com ... DONE (0 secs)
This step will remove any existing Robin tars and re-copy new Robin tars from /root/ folder on all nodes. Are you sure you want to proceed (y/n) ? n
- Running Precheck on host vm-2-61.robinsystems.com ... WARNING (23 secs)
  - Precheck on this host had warnings. Please check /root//vm-2-61.robinsystems.com_precheck_20221010-124813 for details.
- Executing hostname ping checks for all nodes in the cluster ... DONE (0 secs)
- Checking time drift between all the nodes  ... DONE (0 secs)
- Installing host packages on '1' node(s) in the cluster. You may check /var/log/robin-host-script.log on each host to track progress ... DONE (571 secs)
- Configuring 'vm-2-61' as Master Node of cluster. You may check /var/log/robin-k8s-script.log on vm-2-61.robinsystems.com ... DONE (80 secs)
- Copying kubeconfig file to all nodes ... DONE (2 secs)
- Configuring additional Kubernetes plugins. You may check /root/robin-k8splus-script.log on vm-2-61.robinsystems.com ... DONE (6 secs)
- Installing Robin on cluster. You may check /root/robin-script.log on vm-2-61.robinsystems.com ... DONE (243 secs)
- Validating Installation ... DONE (0 secs)
- Initializing Compute and Storage services on 1 node(s) ... DONE (40 secs)

-----------------------------------------------------------------
Cluster with 1 node(s) is ready for use
  1. vm-2-61.robinsystems.com
Robin Cluster Name ..... cluster-q54g
Robin Admin Username ... admin
Robin Admin Password ... admin2
Robin Admin Access ..... https://vm-2-61.robinsystems.com

Note: Since a license was not provided at runtime, the cluster is not fully activated.

-----------------------------------------------------------------
ROBIN was installed on the following hosts which had precheck warnings: vm-2-61.robinsystems.com. This is due to the '--ignore-warnings' flag being passed during the installation. This might lead to erroneous behavior and hence an unsupported installation.
- Removing ROBIN tarball from '1' node(s) at /root/ in the cluster ... DONE (0 secs)
- Pulling ROBIN Install logs from '1' node(s) at /root/ ... DONE (0 secs)

2.7. Verify Robin CNP Installation

You can log in to Robin CNP and verify installation by running the following command:

Note

In rare scenarios, the GoRobin installer might not validate the cluster after installation, and hosts could be in the WaitingForMonitor status. If you notice this status after installation, activate the Robin license to fix the issue.

# robin login admin --password Robin123
User robin is logged in

# robin host list
Id           | Hostname                        | Version  | Status | RPool   | Avail. Zone | LastOpr | Roles | Cores  | GPUs  | Mem         | HDD(#/Alloc/Total) | SSD(#/Alloc/Total) | Pod Usage | Joined Time
-------------+---------------------------------+----------+--------+---------+-------------+---------+-------+--------+-------+-------------+--------------------+--------------------+-----------+----------------------
1665460022:1 | vm-2-61.robinsystems.com        | 5.4.1-35 | Ready  | default | N/A         | ONLINE  | C,S   | 7/5/12 | 0/0/0 | 20G/10G/31G | 2/-/200G           | -/-/-              | 75/25/100 | 10 Oct 2022 13:48:58

* Note: all values indicated above in the format XX/XX/XX represent the Free/Allocated/Total values of the respective resource unless otherwise specified. In addition, allocated values for compute resources such as CPU, memory, and Pod usage include reserved values for the corresponding resource.

Note

If your setup has GPUs which are not detected during installation, run the following command in order to retry the discovery process: robin host probe <hostname> --rediscover.

2.8. Post Installation Steps

  1. License Activation.

2.9. Uninstalling Robin CNP

You can uninstall Robin CNP using the GoRobin utility when required.

Prerequisite:

  • You must delete and clean up all Pods (including file collection and metrics) and mounts before uninstalling Robin CNP.

To uninstall Robin CNP, run the following command:

# ./gorobin_<version> onprem teardown --hosts <host-names> --gorobintar <path-to-gorobin-tarball>

--hosts

Comma-separated list of fully qualified hostnames to remove ROBIN software from.

--tar-url

URL for GoRobin tarball file.

--gorobintar

Path to the GoROBIN tarball containing installation files.

2.10. Custom installations

2.10.1. VLAN-based installation

VLANs allow a user to logically group a set of devices in the same L2 domain irrespective of how they are physically connected. Consequently, this provides a variety of benefits from a networking perspective, including isolation, security, and flexibility.

A Robin cluster can be configured during the installation process so that VLAN support is natively enabled. This is done by providing the nics parameter within the config.json file. The following is a list of all possible types of physical interface configurations and the corresponding key-value pairs that need to be specified.

  • For an interface without VLANs, the following option should be specified: "nics": "<nic>:<vlanNo>:untagged". The VLAN number specified here should be based on what is configured on the upstream switch.

  • For an interface with an IP Address and VLAN, the following option should be specified: "nics": "<nic>:<vlanNo>:untagged". Use this option for the interface without a VLAN configured. Its counterparts tagged with the VLANs will be automatically detected and registered.

  • For an interface with only VLANs: "nics": "<nic>:<vlanNo>". In this scenario, all traffic leaving the interface is tagged with a VLAN number. Here the untagged option need not be specified, as the upstream switch expects only tagged traffic.
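Putting these cases together, a config.json using the nics parameter might look like the following sketch (interface names and VLAN numbers are placeholders):

```shell
# Write a sample config.json covering the nics forms described above:
# an untagged interface (cases 1 and 2) and a tagged-only interface
# (case 3), then confirm it parses as JSON.
cat > /tmp/vlan-config.json <<'EOF'
{
  "node1.example.com": { "nics": "eth1:100:untagged" },
  "node2.example.com": { "nics": "eth1:200" }
}
EOF
python3 -m json.tool /tmp/vlan-config.json > /dev/null && echo "nics config parses"
```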

2.10.2. Single Node Cluster option

When you install Robin CNP, a set of Kubernetes and Robin control ports are configured to be open. For edge deployments, typically only a single node is required to be part of the cluster. In this kind of environment, you can use the --single-node-cluster option to limit the number of ports that are configured to be open.

Note

It is assumed that the nodes are configured with default network rules to block the network traffic outside of the Cloud installation.

Points to consider

  • The --single-node-cluster option is recommended only for non-HA installations. You can use the option as part of an HA installation; however, contact the Robin Support team for your specific use case.

  • Use the option as part of the install command or config.json file.

  • It is recommended not to add extra nodes when you use the --single-node-cluster option.

  • When you use the --zero-trust option, only traffic destined to Kubernetes services and Robin control ports is allowed.

2.10.3. Zero Trust option

You can enable the --zero-trust option when installing Robin CNP. The --zero-trust option blocks network traffic. When you use this option, all network ports will be closed except Kubernetes control ports, Robin control ports, and SSH. You can use this option in conjunction with the --single-node-cluster option or independently. You can include this option as part of the config.json file that you use during Robin CNP installation.
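As an illustrative sketch, a config.json enabling both options might look like this (the hostname is a placeholder):

```shell
# Write a config.json enabling the edge-oriented options described above
# and confirm it parses as JSON.
cat > /tmp/edge-config.json <<'EOF'
{
  "node1.example.com": {
    "single-node-cluster": "True",
    "zero-trust": "True"
  }
}
EOF
python3 -m json.tool /tmp/edge-config.json > /dev/null && echo "edge config parses"
```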

2.10.4. Installation with custom root CA certificate and key

Robin allows you to use a custom root certificate authority (CA) certificate and key when installing a Robin CNP cluster. A digital certificate is an electronic file that proves the authenticity of a device, server, or user. Digital certificate authentication helps organizations to ensure that only trusted devices and users can connect to their networks. It also provides assurance to the client applications that they are connecting to the right server.

A certificate authority (CA) is an entity that stores, signs, and issues digital certificates. When a CA signs a certificate, it certifies the ownership of the domain name specified in the subject of the certificate.

The following are the types of certificates:

  • Root CA Certificate: A Root CA certificate forms the basis for a trust chain. The root CA certificates issued by private trusted CAs or commercial entities that sell certificates are considered trustworthy. When the root CA certificate is trustworthy, any certificate signed and issued by it is also trustworthy.

  • Intermediate Root CA Certificate: Intermediate root CA certificates have the same signing capabilities as root CA certificates with a limited scope. The intermediate root CA certificates are not self-signed, however, they are signed by the root CA certificate or another intermediate root CA certificate. There can be multiple intermediate root CA certificates in a trust chain.

  • Domain Certificate: A domain certificate is issued for a specific domain that needs validation. The following are the types of domain certificates:

    • Server certificate: When a client application sends a request to a server, the server returns its domain certificate. If the certificate’s subject does not match the domain name in the URL, the request is rejected.

    • Node certificate: A node certificate is the same as a domain certificate for a server, except that its domain is a physical or virtual host. All valid hostnames and IP addresses that map to the node are included in the certificate as alternative names. Binding the certificate to the node rather than to the domain name allows all services running on the node to use the same certificate for authentication.

  • User Certificate: A user certificate is used to validate the identity of a user. When a user sends a request to a server, a copy of the user certificate is passed along with the request. If the certificate is validated, the request is handled; otherwise, it is rejected.

Robin uses certificates to secure access to API endpoints and the Robin UI. When you install the first master node of a Robin cluster, Robin generates a self-signed root CA certificate and private key. Robin uses the root CA certificate’s private key to sign all other certificates issued by the cluster, such as node certificates for every node and user certificates.

2.10.4.1. How CA certificate works

When a client application sends a request to the Robin API, it is presented with the node certificate where the service is running. The node certificate contains alternative names that include the hostnames and IP addresses of the nodes. If the hostname or IP address mentioned in the URL does not match the details available in the alternative names of the node certificate, the certificate is deemed untrustworthy, and the request is rejected.

A client application also rejects the node certificate if it is not signed by a certificate stored in the client’s Trusted Root CA Certificate store. This is the case for any certificate signed by the self-signed root CA of a Robin CNP cluster, because that root CA is not present in the client’s trust store by default.

You have the following three options to avoid the rejection of the node certificate:

  • Configure the client so that it does not perform a validation check on the certificate, for example, by ignoring certificate validation errors.

  • Add the cluster’s root CA certificate to the client’s Trusted Root CA Certificate store.

  • Provide an intermediate root CA certificate signed by a known and trusted CA when installing the first master node of the Robin CNP cluster.

Note

Robin recommends the third option when installing a Robin CNP cluster in a production environment.

2.10.4.2. Custom root CA certificate and key

Robin allows you to use a custom root CA certificate and key when installing a Robin CNP cluster. You can obtain the custom root CA certificate and key from an external trusted CA or from a private trusted CA (public key infrastructure service).

After obtaining the root CA certificate, make sure that it is configured as an intermediate root CA certificate. The intermediate root CA certificate is used as a signing certificate to sign other certificates issued by the cluster.

Note

The intermediate root CA certificate must have a pathlen greater than or equal to the number of intermediate CA certificates, including itself, in the chain of certificates used to authenticate an entity. For example, if the signing certificate is a root CA certificate, the intermediate root CA certificate must have a pathlen of at least 1.
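As a quick way to check the pathlen constraint of a certificate, you can inspect its Basic Constraints extension with openssl. The sketch below generates a throwaway CA certificate with pathlen:1 purely for illustration; it assumes OpenSSL 1.1.1 or later (for the -addext flag) and has no connection to your actual CA:

```shell
# Illustration only: create a throwaway CA certificate with pathlen:1
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
  -subj "/CN=Demo Intermediate CA" \
  -addext "basicConstraints=critical,CA:TRUE,pathlen:1"

# Inspect the Basic Constraints extension of any certificate;
# the output shows the CA flag and the pathlen value
openssl x509 -in /tmp/demo-ca.crt -noout -text | grep -A1 "Basic Constraints"
```

You can run the same `openssl x509 ... -noout -text` inspection on the intermediate root CA certificate you obtained to confirm its pathlen before starting the installation.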

When installing Robin CNP, you need to specify the following key-value pairs in the config.json file for one of the master nodes:

  • ca-cert-path - Path to the CA certificate.

  • ca-key-path - Path to the CA certificate’s key.

Note

Make sure that the custom root CA certificate and key are present on the node where the Robin CNP installation will be initiated.
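For illustration, the two keys might be set as follows in config.json; the file paths are placeholders, and any surrounding structure of config.json is omitted here:

```json
{
  "ca-cert-path": "/root/certs/intermediate-ca.crt",
  "ca-key-path": "/root/certs/intermediate-ca.key"
}
```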

2.10.5. Installation with Best-Effort QoS for non-application Pods

To increase application performance, Robin allows users to reserve and allocate isolated CPU cores to application Pods. When application Pods are deployed using isolated CPU cores, application performance improves. However, non-application Pods or control plane Pods might also use some isolated CPU cores along with non-isolated CPU cores. When these Pods use isolated CPU cores, resource availability for application Pods decreases, nullifying the purpose of isolating CPU cores. To counteract this, you can enable the Best-Effort Quality of Service (QoS) feature, which attempts to increase the number of isolated CPU cores available for application Pods.

2.10.5.1. How the configuration works

The Best-Effort QoS feature tries to reduce the number of isolated CPU cores used by non-application Pods by automatically setting the CPU requests for these Pods to zero. With their CPU requests set to zero, these Pods are less likely to utilize isolated CPU cores, thereby increasing the chance that more isolated CPU cores are available for application Pods.

To enable the Best-Effort QoS feature, you must specify the following key-value pair in the config.json file for one of the master nodes:

  • "best-effort-qos": "True"

2.10.5.2. Points to consider

Some points to consider with regard to enabling the Best-Effort QoS configuration include:

  • You can enable this feature during the Robin CNP installation only.

  • You cannot enable or disable this feature post-installation because this is a non-configurable feature.

  • You cannot use this feature after upgrading from Robin CNP v5.3.X to Robin CNP v5.4.3.

2.10.6. Installation with HashiCorp Vault integration

Robin Cloud Native Platform (CNP) supports the integration of HashiCorp Vault as a key management service (KMS). Vault is a tool for securely accessing secrets, including but not limited to API keys, passwords, and certificates. Vault provides a unified interface to any secret while providing tight access control and recording a detailed audit log. More information on the product can be found here.

2.10.6.1. Prerequisites

The following prerequisites must be met to install Robin CNP with HashiCorp Vault integration:

  • All prerequisites that apply to general installations, detailed here, must be met.

  • A Vault Server must be pre-installed and functioning correctly.

  • The IP address of the Vault Server must be reachable from the Kubernetes node on which the Robin CNP installation will take place.

  • Signed x509 CA Certificates for the Vault Server and Vault Client must be present on the Kubernetes node where the Robin CNP installation will be initiated.

  • The key for the Vault Client CA Certificate must be present on the Kubernetes node where the Robin CNP installation will be initiated.

  • A Robin CNP folder, in which the master key will be placed, must be present on the Vault Server.

Note

HashiCorp Vault can only be integrated as part of the initial Robin CNP installation; it cannot be configured post-deployment.

2.10.6.2. How the Integration works

As part of the initial Robin CNP installation, a one-time Kubernetes Job is spawned that sends an API request (using the given Vault parameters) to the Vault Server to create a key, which is stored within the Vault Server database. Following this, the robin-kms init container fetches this key and encodes it to create the Robin CNP master key, which is then stored in-memory within the Robin CNP master Pod. When a PersistentVolume (PV) is provisioned on the Robin CNP cluster, a volume key is generated based on this master key to encrypt the PV, and the volume key is stored within the Robin CNP database.
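The derivation chain described above can be illustrated with a conceptual sketch. This is not Robin’s actual implementation; the encoding step, the HMAC-based derivation, and the volume name are all assumptions made purely for illustration:

```shell
# Conceptual sketch only -- not Robin's actual key-handling code.
root_key=$(openssl rand -hex 32)                 # stands in for the key created in the Vault Server
master_key=$(printf '%s' "$root_key" | base64)   # "encoded" master key, held in-memory
# Derive a per-volume key from the master key (HMAC is an assumption)
vol_key=$(printf '%s' "pvc-example" | openssl dgst -sha256 -hmac "$master_key" | awk '{print $2}')
echo "${#vol_key}"   # prints 64: a 256-bit key in hex
```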

2.10.6.3. Points to consider

Some points to consider with regard to Vault integration include:

  • Robin supports V1 and V2 KV engines for HashiCorp Vault integration.

  • Either certificates or keys can be used to complete the integration.

  • Integration with existing CNP clusters is not supported.

  • Volumes can only be encrypted using the master key at the time of provisioning and not after the fact.

  • The Vault Server need not be accessible at all times after the initial bootstrap of the Robin server is complete, as the master key is loaded in-memory. However, if the connection to the Vault Server is lost while the Robin server is restarting, the process will hang, as it will attempt to fetch the key from the Vault Server indefinitely.

  • The in-memory master key will be deleted when the Robin CNP master Pod is deleted. However, it will be fetched again when the Robin server restarts if a new Robin CNP master Pod is spawned.

  • Robin CNP supports the rotation of keys within the Vault Server in a periodic manner as well as on demand.

  • Etcd secrets are encrypted on all master nodes using the EncryptionConfiguration API alongside appropriate configuration of kube-apiserver during installation. Keys stored in the Vault Server are not used for this purpose.
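The etcd secret encryption mentioned in the last point uses the standard Kubernetes EncryptionConfiguration mechanism. A generic example is shown below; the aescbc provider and key name are illustrative assumptions, not necessarily what Robin configures:

```yaml
# Generic Kubernetes EncryptionConfiguration (illustrative only)
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}
```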

2.10.6.4. Necessary Parameters

The following parameters must be specified within the config.json with appropriate values in order to integrate Robin CNP with HashiCorp Vault:

  • vault-addr - URL at which to contact the Vault Server, consisting of its IP address and the port number at which it can be reached.

  • vault-keys-path - The path to the folder in which the key for Robin CNP will be stored in the Vault Server.

  • vault-ca-cert - The path to the Vault Server CA certificate.

  • vault-client-cert - The path to the Vault Client CA certificate.

  • vault-client-key - The path to the Vault Client key.

  • kms - Type of key management service (KMS). In this case the value should be set to vault.

Note

The above parameters need only be set for the initial master node within the config.json file.
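For illustration, the Vault parameters might be set as follows in config.json; the address, keys path, and file locations are all placeholders:

```json
{
  "kms": "vault",
  "vault-addr": "https://203.0.113.10:8200",
  "vault-keys-path": "secret/robin-cnp",
  "vault-ca-cert": "/root/vault/vault-server-ca.crt",
  "vault-client-cert": "/root/vault/vault-client.crt",
  "vault-client-key": "/root/vault/vault-client.key"
}
```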

2.10.7. Installation with custom Cluster Identity certificate

A Cluster Identity certificate is used to validate the requests sent to the external-facing Kubernetes and Robin CNP services from clients outside the cluster. These services include:

  • Kubernetes API Server

  • Robin CNP control plane services including the RCM, Event and File Servers.

  • Graphical User Interface (GUI) access

  • Robin client

  • Kubernetes client

  • Helm client

By default, Robin creates its own Cluster Identity certificate and uses this certificate to validate external requests. The private key of the Robin CNP cluster certificate authority (CA) signs this certificate. If this is not sufficient, a custom Cluster Identity certificate can be used for all external-facing services of a cluster. To use a custom Cluster Identity certificate, it must be signed by an external trusted CA and provided along with a private key.

2.10.7.1. Points to consider

  • The Cluster Identity certificate and private key are only used for external-facing services of a cluster.

  • For security reasons, you should never store the Cluster Identity certificate and private key files on any host in the cluster. Always store these files securely in an external secret store, such as a TPM or Vault, or use a Kubernetes secret.

  • Robin copies the Cluster Identity certificate and private key files from the utilized external store into a temporary memory file system such that it can be accessed by the necessary services.

  • The Cluster Identity certificate that is signed by an external trusted CA must be capable of signing a CertificateSigningRequest (CSR) generated by the Robin CNP cluster; that is, it must have certificate signing authority.

2.10.7.2. Necessary Parameters

When installing Robin CNP, you need to specify the following key-value pairs in the config.json file for one of the master nodes in order to utilize a custom Cluster Identity certificate:

  • identity-cert-path - Path to the Cluster Identity certificate.

  • identity-key-path - Path to the Cluster Identity certificate’s private key.

  • identity-ca-path - Path to the CA certificate that signed the Cluster Identity certificate.
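For illustration, the three keys might appear as follows in config.json; the file paths are placeholders:

```json
{
  "identity-cert-path": "/root/certs/cluster-identity.crt",
  "identity-key-path": "/root/certs/cluster-identity.key",
  "identity-ca-path": "/root/certs/external-trusted-ca.crt"
}
```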

2.11. Load Balancer Support via MetalLB

Robin utilizes the layer 2 mode of MetalLB to provide support for network load balancing on bare metal clusters. It allows users to deploy and effectively use Kubernetes services of type LoadBalancer in a production bare metal environment. The IP range specified for the load balancer is separate from Robin IP Pool ranges and must be in the expanded format (for example, 192.0.2.20-192.0.2.30).

Note

You can set up MetalLB during the Robin CNP installation by specifying the IP address range in the loadbalancer-iprange parameter in the config.json for one of the master nodes.

Alternatively, you can set up MetalLB after the Robin CNP installation using the setup script. You can also remove MetalLB from an existing Robin cluster using the cleanup script.
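For illustration, the install-time parameter might appear as follows in config.json, using the expanded range format described above:

```json
{
  "loadbalancer-iprange": "192.0.2.20-192.0.2.30"
}
```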

2.11.1. Setup MetalLB Post Robin Installation

You can set up MetalLB post Robin CNP installation.

Perform the following steps to set up MetalLB post Robin CNP installation:

  1. Run the following command to extract the GoRobin tar file:

    # tar -xvf <gorobintar_file>
    
  2. Run the following command to change your current working directory to gorobintar:

    # cd gorobintar
    
  3. Run the following command to set up MetalLB:

    # ./k8splus-script.sh setup-metallb --loadbalancer-iprange=<range_of_ips>
    

    Example:

    # ./k8splus-script.sh setup-metallb --loadbalancer-iprange=192.0.2.20-192.0.2.30
                        Robin K8s Plugins Setup-metallb
    Extracting Payload                                : DONE
    Setting up Metallb                                : DONE
    Successfully setup Metallb
    
    # kubectl get pods -n metallb-system
    NAME                          READY   STATUS    RESTARTS   AGE
    controller-6f47b9bf54-t2qrj   1/1     Running   0          4d
    speaker-kvmpg                 1/1     Running   0          4d
    

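To verify that MetalLB is handing out addresses, you can create a throwaway Service of type LoadBalancer; the Service name and selector below are hypothetical placeholders:

```yaml
# Hypothetical test Service; MetalLB should assign it an EXTERNAL-IP
# from the configured range (for example, 192.0.2.20-192.0.2.30)
apiVersion: v1
kind: Service
metadata:
  name: metallb-test
spec:
  type: LoadBalancer
  selector:
    app: metallb-test
  ports:
    - port: 80
      targetPort: 8080
```

After applying the manifest with kubectl apply, `kubectl get svc metallb-test` should show an EXTERNAL-IP from the configured range; delete the Service once verified.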
2.11.2. Cleanup MetalLB

If MetalLB is set up on your Robin cluster, you can remove it.

Run the following command to remove MetalLB from the Robin cluster:

# cd gorobintar
# ./k8splus-script.sh cleanup-metallb

Example:

# cd gorobintar
# ./k8splus-script.sh cleanup-metallb
                  Robin K8s Plugins Cleanup-metallb
Are you sure you want to cleanup metallb configuration [y/n] ?: y
Extracting Payload                                : DONE
Cleaning up Metallb                               : DONE
Successfully cleaned up Metallb

2.12. Configure Calico Typha

As a part of the Robin CNP installation, Calico is deployed as a DaemonSet, and the associated Calico Felix agent Pods are created on each node to watch for events from the Kubernetes API server. When the size of the Kubernetes cluster exceeds 50 nodes, Calico recommends deploying Typha to act as an intermediary between the datastore (the Kubernetes API server in this case) and the Calico Felix agent Pods. It enables increased scale by reducing each node’s impact on the datastore. For more information on Calico Typha, see here.

Starting from Robin CNP v5.4.3-395 (HF4), you can enable and disable Calico Typha post Robin CNP installation.

2.12.1. Enable Calico Typha

You can enable Calico Typha post Robin CNP installation.

Perform the following steps to enable Calico Typha:

  1. Check the Calico Peer status.

    # calicoctl node status
    
  2. Update the replica count of the Calico Typha deployment to 2. Robin recommends that at least 1 replica of Calico Typha be present in the cluster for every set of 50 nodes.

    # kubectl scale deployment -n kube-system calico-typha --replicas=2
    
  3. Verify that all Calico Typha Pods are up and running.

    # kubectl get pods -n kube-system -l k8s-app=calico-typha -w
    
  4. Patch the Calico configmap to set the Typha service name to calico-typha.

    # kubectl patch cm -n kube-system calico-config --type json -p="[{'op': 'replace', 'path': '/data/typha_service_name', 'value': 'calico-typha'}]"
    
  5. Verify that the configmap is correctly updated.

    # kubectl get cm -n kube-system calico-config -o yaml
    
  6. Bounce all Calico Pods.

    # kubectl rollout restart daemonset -n kube-system calico-node
    
  7. Verify that all Calico Pods are up and running.

    # kubectl get pods -n kube-system -l k8s-app=calico-node
    
  8. Check the Calico Peer status and verify that all peers are present as seen in step 1.

    # calicoctl node status
    

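The guideline in step 2 (at least one Typha replica per 50 nodes) amounts to a ceiling division. The helper below is only an illustration of that arithmetic:

```shell
# At least one Calico Typha replica per 50 nodes (ceiling division)
nodes=120
replicas=$(( (nodes + 49) / 50 ))
echo "$replicas"   # prints 3 for a 120-node cluster
# Then scale accordingly, for example:
# kubectl scale deployment -n kube-system calico-typha --replicas="$replicas"
```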
2.12.2. Disable Calico Typha

If the Calico Typha is enabled in your cluster, you can disable it.

Perform the following steps to disable Calico Typha:

  1. Check the Calico peer status.

    # calicoctl node status
    
  2. Patch the Calico configmap and set the Typha service name to none.

    # kubectl patch cm -n kube-system calico-config --type json -p="[{'op': 'replace', 'path': '/data/typha_service_name', 'value': 'none'}]"
    
  3. Verify that the configmap is correctly updated.

    # kubectl get cm -n kube-system calico-config -o yaml
    
  4. Bounce all Calico Pods.

    # kubectl rollout restart daemonset -n kube-system calico-node
    
  5. Verify that all Calico Pods are up and running.

    # kubectl get pods -n kube-system -l k8s-app=calico-node
    
  6. Check the Calico Peer status and verify that all peers are present as seen in step 1.

    # calicoctl node status
    
  7. Update the Calico Typha replica count to 0.

    # kubectl scale deployment -n kube-system calico-typha --replicas=0
    
  8. Verify that the Calico Typha Pods are no longer running in the cluster.

    # kubectl get pods -n kube-system -l k8s-app=calico-typha
    

2.13. High availability of Robin services

Robin manages the high availability of all management services that are deployed as part of a Robin installation. Robin Pods are deployed as part of a DaemonSet. Some Pods are designated to run master services; Robin configures 3 of these Pods as manager Pods, which can host the master services.

  • If one of the Kubernetes nodes goes down, Robin seamlessly starts the master services on other manager Pods.

  • If a Pod hosting Robin master services is removed from the Kubernetes cluster, Robin will automatically designate another Pod as a manager Pod, so that the cluster always has 3 manager Pods.

2.14. Data Integrity

A checksum is a fixed-size value generated by a simple mathematical operation performed on a set of data to represent the integrity of that data. The generated checksum is compared with a reference checksum to detect errors or inconsistencies that might have occurred during data transmission or storage.

  • Robin CNS does over-the-wire inline checksum validation for all read and write IOs to detect corruption that could happen during data transfers.

  • Robin calculates the checksum of every data block persisted on a storage device and stores it along with the metadata for the data block. To improve performance by minimizing disk IO, the Robin CNS storage stack pulls a batch of metadata blocks into memory at a time. It reads each block’s checksum from the disk but does not keep it in-memory, because caching the checksums for all blocks would increase the memory footprint of this metadata. Also, modern flash media already performs checksums, and error recovery is done automatically at the FTL layer. For these reasons, inline checksum validation is intentionally disabled during read operations.

  • Data blocks can be validated for checksum correctness offline by a utility called devck (currently available via Robin Customer Support). In the event of corruption, the volume rebuild command can rebuild corrupted blocks from healthy replica copies.
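The read-path validation described above can be illustrated in miniature with standard tools. This sketch is purely conceptual and has no relation to Robin’s on-disk format:

```shell
# Conceptual illustration of checksum-based integrity checking
printf 'example data block' > /tmp/block.dat
ref_sum=$(sha256sum /tmp/block.dat | awk '{print $1}')   # reference checksum, stored with metadata

# Later, on read: recompute the checksum and compare with the reference
cur_sum=$(sha256sum /tmp/block.dat | awk '{print $1}')
if [ "$cur_sum" = "$ref_sum" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH: block is corrupt"
fi
```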