Robin Tutorials

1. MySQL on OpenShift

After successfully deploying and running stateless applications, a number of developers are exploring the possibility of running stateful workloads, such as MySQL, on OpenShift. If you are considering extending OpenShift for stateful workloads, this tutorial will help you experiment on your existing OpenShift environment by providing step-by-step instructions.

This tutorial will walk you through:

  1. How to deploy a MySQL database on OpenShift using the Robin Operator and Helm3

  2. Add sample data to the MySQL database

  3. Verify the Helm release has registered as a Robin application

  4. Create a point-in-time snapshot of the MySQL database

  5. Simulate a user error and roll back to a stable state using the snapshot

  6. Clone the database for the purpose of collaboration

  7. Back up the database to the cloud using a Google Cloud Storage (GCS) bucket

  8. Simulate data loss/corruption and use the backup to restore the database

  9. Create a MySQL database application from the backup

1.1. Prerequisites: Install the Robin Operator on OpenShift and set up Helm

Robin Storage is application-aware container storage that offers advanced data management capabilities and runs natively on OpenShift. Robin Storage delivers bare-metal performance and enables you to protect (via snapshots and backups), encrypt, collaborate on (via clones and git-like push/pull workflows), and make portable (via Cloud Repositories) stateful applications that are deployed using Helm Charts or Operators.

Before we deploy MySQL on OpenShift, let’s first install the Robin operator on your existing OpenShift environment. You can install Robin directly from the OpenShift console by clicking on the OperatorHub tab. You can find further instructions here.

Let’s confirm that the OpenShift cluster is up and running:

# oc get nodes

You should see an output similar to below, with the status of each node marked as Ready:

[demo@ocp-svc tutorial]# oc get nodes
NAME                   STATUS   ROLES           AGE   VERSION
ocp-cp-1.lab.ocp.lan   Ready    master,worker   18d   v1.18.3+47c0e71
ocp-cp-2.lab.ocp.lan   Ready    master,worker   18d   v1.18.3+47c0e71
ocp-cp-3.lab.ocp.lan   Ready    master,worker   18d   v1.18.3+47c0e71
ocp-w-1.lab.ocp.lan    Ready    worker          18d   v1.18.3+47c0e71
ocp-w-2.lab.ocp.lan    Ready    worker          18d   v1.18.3+47c0e71

Let’s confirm that Robin is up and running. Run the following command to verify that Robin is ready.

# oc describe robincluster -n robinio

You should see an output similar to below:

[demo@ocp-svc ~]# oc describe robincluster -n robinio
Name:         robin
Namespace:    robinio
Labels:       app.kubernetes.io/instance=robin
              app.kubernetes.io/managed-by=robin.io
              app.kubernetes.io/name=robin
Annotations:  <none>
API Version:  manage.robin.io/v1
Kind:         RobinCluster
..
..
  Resource Version:  7790638
  Self Link:         /apis/manage.robin.io/v1/namespaces/robinio/robinclusters/robin
  UID:               c53fb011-56ae-490c-a2a2-0b5b19f03082
Spec:
  host_type:     physical
  image_robin:   robinsys/robinimg:5.3.2-513
  k8s_provider:  openshift
Status:
  connect_command:   kubectl exec -it robin-2vf8v -n robinio -- bash
  get_robin_client:  curl -k https://192.168.22.201:29442/api/v3/robin_server/download?file=robincli&os=linux > robin
  master_ip:         192.168.22.201
  Phase:             Ready
  pod_status:
    robin-2vf8v  ocp-cp-1.lab.ocp.lan  Running 192.168.22.201 true
    robin-kvtdq  ocp-w-2.lab.ocp.lan  Running 192.168.22.212 false
    robin-zz65s  ocp-cp-3.lab.ocp.lan  Running 192.168.22.203 true
    robin-kdtxw  ocp-cp-2.lab.ocp.lan  Running 192.168.22.202 true
    robin-6rq7b  ocp-w-1.lab.ocp.lan  Running 192.168.22.211 false
..
..
    k8s_node_name:  ocp-cp-1.lab.ocp.lan
    Roles:          M*,S
    Rpool:          default
    State:          ONLINE
    Status:         Ready
Events:             <none>

You should see that Phase is marked as Ready.
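
If you prefer a scripted check, the same field can be queried directly with jsonpath (this one-liner assumes the resource is named ‘robin’, as in the output above):

# oc get robincluster robin -n robinio -o jsonpath='{.status.phase}'
Ready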

To download the Robin client, find the field ‘get_robin_client’ in the output above and run the corresponding command. Note the quotes added around the curl URL; they are needed because the URL contains an ‘&’. Then change the file permissions on the robin binary and move it to /usr/local/bin to make it available as a system command.

[demo@ocp-svc tutorial]# curl -k "https://192.168.22.201:29442/api/v3/robin_server/download?file=robincli&os=linux" > robin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100 10.2M  100 10.2M    0     0  45.9M      0 --:--:-- --:--:-- --:--:-- 45.7M
[demo@ocp-svc tutorial]#
[demo@ocp-svc tutorial]# chmod +x robin
[demo@ocp-svc tutorial]# mv robin /usr/local/bin/
[demo@ocp-svc tutorial]# robin login admin --password Robin123
User admin is logged into Administrators tenant
[demo@ocp-svc tutorial]# robin host list
Id           | Hostname             | Version   | Status | RPool   | LastOpr | Roles | Isol Cores(SHR/DED/Total) | Non-Isol Cores | GPUs | Mem(Free/Alloc/Total) | HDD(#/Alloc/Total) | SSD(#/Alloc/Total) | Pod Usage | Joined Time
-------------+----------------------+-----------+--------+---------+---------+-------+---------------------------+----------------+------+-----------------------+--------------------+--------------------+-----------+----------------------
1601408284:1 | ocp-cp-3.lab.ocp.lan | 5.3.2-513 | Ready  | default | ONLINE  | M,S   | 0/0/0                     | 7/200          | 0/0  | 1G/13G/15G            | 1/-/50G            | -/-/-              | 67/250    | 29 Sep 2020 14:38:59
1601408284:2 | ocp-w-1.lab.ocp.lan  | 5.3.2-513 | Ready  | default | ONLINE  | S     | 0/0/0                     | 3/200          | 0/0  | 9G/5G/15G             | 1/-/50G            | -/-/-              | 27/250    | 29 Sep 2020 14:39:37
1601408284:3 | ocp-w-2.lab.ocp.lan  | 5.3.2-513 | Ready  | default | ONLINE  | S     | 0/0/0                     | 2/200          | 0/0  | 9G/6G/15G             | 1/-/50G            | -/-/-              | 23/250    | 29 Sep 2020 14:39:38
1601408284:4 | ocp-cp-2.lab.ocp.lan | 5.3.2-513 | Ready  | default | ONLINE  | M,S   | 0/0/0                     | 7/200          | 0/0  | 1G/13G/15G            | 1/12G/50G          | -/-/-              | 56/250    | 29 Sep 2020 14:39:39
1601408284:5 | ocp-cp-1.lab.ocp.lan | 5.3.2-513 | Ready  | default | ONLINE  | M*,S  | 0/0/0                     | 6/200          | 0/0  | 2G/12G/15G            | 1/-/50G            | -/-/-              | 66/250    | 29 Sep 2020 14:39:40

Next, create a namespace in which we will create the application by running the following command:

[demo@ocp-svc mysql]# robin namespace add mysql --import-namespace
Namespace 'mysql' has been added for user 'admin' in tenant 'Administrators'

Let’s add a Helm repository to pull Helm charts from. For this tutorial, we will use Bitnami’s IBM repository, which hosts Helm charts designed to run on OpenShift.

You should see an output similar to below:

[demo@ocp-svc tutorial]# helm repo add bitnami https://charts.bitnami.com/ibm
"bitnami" has been added to your repositories

1.2. Deploy a MySQL database on OpenShift

Now, let’s create a MySQL database using Helm and Robin Storage. When we installed the Robin operator and created a RobinCluster custom resource, a ‘robin’ StorageClass was created and registered with OpenShift. We can now use this StorageClass to create PersistentVolumes and PersistentVolumeClaims for the pods in OpenShift. Using this StorageClass gives us access to the data management capabilities (such as snapshot, clone, and backup) provided by Robin Storage.
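
For reference, a PersistentVolumeClaim that uses the ‘robin’ StorageClass looks like the sketch below. This is only an illustration; the Helm chart used in this tutorial creates its PVCs automatically, and the claim name ‘example-robin-pvc’ is hypothetical.

# cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-robin-pvc     # hypothetical name, for illustration only
  namespace: mysql
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: robin     # the StorageClass registered by the Robin operator
  resources:
    requests:
      storage: 1Gi
EOF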

On OpenShift 4.x, the security context for the MySQL Helm chart should be updated to allow the containers to run in privileged mode. Fetch the MySQL chart and make the changes below.

[demo@ocp-svc tutorial]# helm pull bitnami/mysql --untar

Update the following file: mysql/values.yaml

securityContext:
  runAsUser: 0
  privileged: true

In addition, edit the mysql/values.yaml file so that the storageClass attribute is set to ‘robin’, in order to take advantage of the data management capabilities Robin Storage offers.

global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass
  storageClass: robin
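
As an aside, if you prefer not to edit the file, the same two changes can be expressed as --set flags when you run helm install later (the key paths below simply mirror the values.yaml entries shown above):

# helm install imdb-movies /root/tutorial/mysql --namespace mysql \
    --set global.storageClass=robin \
    --set securityContext.runAsUser=0 \
    --set securityContext.privileged=true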

Switch to the ‘mysql’ project/namespace using the oc command below, then add the privileged security context constraint to the namespace’s default service account.

[demo@ocp-svc mysql]#  oc project mysql
Now using project "mysql" on server "https://api.lab.ocp.lan:6443".

[demo@ocp-svc ~]# oc adm policy add-scc-to-user privileged -z default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "default"

[demo@ocp-svc mysql]# oc adm policy add-scc-to-user privileged system:serviceaccount:mysql:default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "default"

Using the Helm command below, we will deploy a MySQL instance.

Note: Helm 3 is used for this tutorial. Please upgrade to Helm 3 or adapt the commands for Helm 2 as appropriate.
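
You can confirm which Helm version you are running with the command below (your version string will differ):

[demo@ocp-svc mysql]# helm version --short
v3.3.4+ga61ce56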

[demo@ocp-svc mysql]# helm install imdb-movies /root/tutorial/mysql --set mysqlRootPassword=Robin123 --namespace mysql
NAME: imdb-movies
LAST DEPLOYED: Wed Oct 14 01:58:18 2020
NAMESPACE: mysql
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please be patient while the chart is being deployed

Tip:

  Watch the deployment status using the command: kubectl get pods -w --namespace mysql

Services:

  echo Master: imdb-movies-mysql.mysql.svc.cluster.local:3306
  echo Slave:  imdb-movies-mysql-slave.mysql.svc.cluster.local:3306

Administrator credentials:

  echo Username: root
  echo Password : $(kubectl get secret --namespace mysql imdb-movies-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)

To connect to your database:

  1. Run a pod that you can use as a client:

      kubectl run imdb-movies-mysql-client --rm --tty -i --restart='Never' --image  docker.io/bitnami/mysql:8.0.21-debian-10-r46 --namespace mysql --command -- bash

  2. To connect to master service (read/write):

      mysql -h imdb-movies-mysql.mysql.svc.cluster.local -uroot -p my_database

  3. To connect to slave service (read-only):

      mysql -h imdb-movies-mysql-slave.mysql.svc.cluster.local -uroot -p my_database

To upgrade this helm chart:

  1. Obtain the password as described on the 'Administrator credentials' section and set the 'root.password' parameter as shown below:

      ROOT_PASSWORD=$(kubectl get secret --namespace mysql imdb-movies-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
      helm upgrade imdb-movies bitnami/mysql --set root.password=$ROOT_PASSWORD

Note: In the above command, ‘/root/tutorial/mysql’ is the directory where the modified chart files are present and ‘imdb-movies’ is the name of the Helm release.

Run the following command to verify the application is deployed and all relevant Kubernetes resources are ready. You should be able to see an output showing the status of your MySQL database.

[demo@ocp-svc mysql]# helm list -n mysql
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
imdb-movies     mysql           1               2020-10-14 01:58:18.706871 -0500 CDT    deployed        mysql-6.14.10   8.0.21

You will also want to make sure the relevant MySQL application pods are in a good state before proceeding further. Run the following command to verify the pods are running.

[demo@ocp-svc mysql]# oc get pods -n mysql
NAME                         READY   STATUS    RESTARTS   AGE
imdb-movies-mysql-master-0   1/1     Running   0          3m20s
imdb-movies-mysql-slave-0    1/1     Running   0          3m20s

Extract the MySQL root password, which we will need to connect to the database:

[demo@ocp-svc tutorial]# ROOT_PASSWORD=$(kubectl get secret --namespace mysql imdb-movies-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
[demo@ocp-svc tutorial]# echo $ROOT_PASSWORD
asGh1kSWrI

Please make a note of the password, as we will need it in later steps.

1.3. Add sample data to the MySQL database

Now that we know the MySQL application is up and running, let’s insert some data into the database. There are a couple of ways to insert data:

  1. Create a client pod and use it to insert data into the database.

  2. Install the mysql client utility on the host node and use it to connect to the database and insert data.

For this tutorial we will go with option (1).

Let’s create a client pod, create a database called ‘imdb’, and create a table called ‘movies’ in that database.

[demo@ocp-svc tutorial]# oc run imdb-movies-mysql-client --rm --tty -i --restart='Never' --image  docker.io/bitnami/mysql:8.0.21-debian-10-r46 --namespace mysql --command -- bash
If you don't see a command prompt, try pressing enter.
1001@imdb-movies-mysql-client:/$
1001@imdb-movies-mysql-client:/$ mysql -h imdb-movies-mysql.mysql.svc.cluster.local -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10906
Server version: 8.0.21 Source distribution

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| my_database        |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> create database imdb;
Query OK, 1 row affected (0.01 sec)

mysql> use imdb;
Database changed

mysql> CREATE TABLE movies (movieid TEXT, year INT, title TEXT, genre TEXT);
Query OK, 0 rows affected (0.03 sec)

mysql> SHOW TABLES;
+----------------+
| Tables_in_imdb |
+----------------+
| movies         |
+----------------+
1 row in set (0.01 sec)

Now that the database and table are created, let’s add some data to the table.

mysql> INSERT INTO movies (movieid, year, title, genre) VALUES
    -> ('tt0360556', 2018, 'Fahrenheit 451', 'Drama'),
    -> ('tt0365545', 2018, 'Nappily Ever After', 'Comedy'),
    -> ('tt0427543', 2018, 'A Million Little Pieces','Drama'),
    -> ('tt0432010', 2018, 'The Queen of Sheba Meets the Atom Man', 'Comedy'),
    -> ('tt0825334', 2018, 'Caravaggio and My Mother the Pope', 'Comedy'),
    -> ('tt0859635', 2018, 'Super Troopers 2', 'Comedy'),
    -> ('tt0862930', 2018, 'Dukun', 'Horror'),
    -> ('tt0891581', 2018, 'RxCannabis: A Freedom Tale', 'Documentary'),
    -> ('tt0933876', 2018, 'June 9', 'Horror');
Query OK, 9 rows affected (0.01 sec)
Records: 9  Duplicates: 0  Warnings: 0

mysql> select * from movies;
+-----------+------+---------------------------------------+-------------+
| movieid   | year | title                                 | genre       |
+-----------+------+---------------------------------------+-------------+
| tt0360556 | 2018 | Fahrenheit 451                        | Drama       |
| tt0365545 | 2018 | Nappily Ever After                    | Comedy      |
| tt0427543 | 2018 | A Million Little Pieces               | Drama       |
| tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy      |
| tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy      |
| tt0859635 | 2018 | Super Troopers 2                      | Comedy      |
| tt0862930 | 2018 | Dukun                                 | Horror      |
| tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary |
| tt0933876 | 2018 | June 9                                | Horror      |
+-----------+------+---------------------------------------+-------------+
9 rows in set (0.01 sec)

As you can see from the output above, 9 movie records have been added to our MySQL database named ‘imdb’. Now, let’s take a look at the data management capabilities Robin brings, such as taking snapshots, making clones, and creating backups.

1.4. Verify the MySQL Helm release has registered as an application

To benefit from the data management capabilities, we will register our MySQL database with Robin. Doing so lets Robin map and track all resources associated with the Helm release, enabling the advanced data management capabilities of the product.

Since we added the ‘mysql’ namespace in Robin for the admin user, Robin will auto-discover the Helm applications deployed in that namespace. Verify this is the case by getting information and status for the application using the release name, by running the following commands:

[demo@ocp-svc tutorial]# robin app list
Helm/Flex Apps:
+-------------+---------+--------+----------------------+--------------+-----------+---------+
| Name        | Type    | State  | Owner/Tenant         | Namespace    | Snapshots | Backups |
+-------------+---------+--------+----------------------+--------------+-----------+---------+
| imdb-movies | helm    | ONLINE | admin/Administrators | mysql        | 0         | 0       |
+-------------+---------+--------+----------------------+--------------+-----------+---------+

[demo@ocp-svc tutorial]# robin app info imdb-movies --status
+-----------------------+---------------------------------+--------+---------+
| Kind                  | Name                            | Status | Message |
+-----------------------+---------------------------------+--------+---------+
| ServiceAccount        | imdb-movies-mysql               | Ready  | -       |
| ConfigMap             | imdb-movies-mysql-slave         | Ready  | -       |
| ConfigMap             | imdb-movies-mysql-master        | Ready  | -       |
| Secret                | imdb-movies-mysql               | Ready  | -       |
| PersistentVolumeClaim | data-imdb-movies-mysql-slave-0  | Bound  | -       |
| PersistentVolumeClaim | data-imdb-movies-mysql-master-0 | Bound  | -       |
| Pod                   | imdb-movies-mysql-master-0      | Ready  | -       |
| Pod                   | imdb-movies-mysql-slave-0       | Ready  | -       |
| Service               | imdb-movies-mysql-slave         | Ready  | -       |
| Service               | imdb-movies-mysql               | Ready  | -       |
| StatefulSet           | imdb-movies-mysql-slave         | Ready  | -       |
| StatefulSet           | imdb-movies-mysql-master        | Ready  | -       |
+-----------------------+---------------------------------+--------+---------+

Key:
  Green: Object is running
  Yellow: Object is potentially down
  Red: Object is down

1.5. Snapshot the MySQL Application

If you make a mistake, such as unintentionally deleting important data, you may be able to undo it by restoring the app to a previous snapshot. Snapshots allow you to restore the state of your application to the point-in-time state saved within the snapshot.

Robin lets you snapshot not just the storage volumes (PVCs) but the entire database application, including all its resources such as Pods, StatefulSets, PVCs, Services, and ConfigMaps, with a single command. To create a snapshot, run the following command:

[demo@ocp-svc tutorial]# robin snapshot create imdb-movies --snapname snap9movies --desc "contains 9 movies" --wait
Job: 1115 Name: K8SApplicationSnapshot State: VALIDATED       Error: 0
Job: 1115 Name: K8SApplicationSnapshot State: PROCESSED       Error: 0
Job: 1115 Name: K8SApplicationSnapshot State: WAITING         Error: 0
Job: 1115 Name: K8SApplicationSnapshot State: DONE            Error: 0
Job: 1115 Name: K8SApplicationSnapshot State: COMPLETED       Error: 0

Let’s verify we have successfully created the snapshot.

[demo@ocp-svc tutorial]# robin snapshot list --app imdb-movies
+----------------------------------+--------+-------------+----------+-------------------------+
| Snapshot ID                      | State  | App Name    | App Kind | Snapshot name           |
+----------------------------------+--------+-------------+----------+-------------------------+
| ca0ce01e0e6b11eb8db6fd18389aafba | ONLINE | imdb-movies | helm     | imdb-movies_snap9movies |
+----------------------------------+--------+-------------+----------+-------------------------+

We now have a snapshot of our entire database with information of all 9 movies.

1.6. Rollback the MySQL database

We have 9 rows in our “movies” table. To test the snapshot and rollback functionality, let’s simulate a user error by deleting some of the movies from the “movies” table.

mysql> delete from movies where genre = "Comedy";
Query OK, 4 rows affected (0.01 sec)

mysql> select * from movies;
+-----------+------+----------------------------+-------------+
| movieid   | year | title                      | genre       |
+-----------+------+----------------------------+-------------+
| tt0360556 | 2018 | Fahrenheit 451             | Drama       |
| tt0427543 | 2018 | A Million Little Pieces    | Drama       |
| tt0862930 | 2018 | Dukun                      | Horror      |
| tt0891581 | 2018 | RxCannabis: A Freedom Tale | Documentary |
| tt0933876 | 2018 | June 9                     | Horror      |
+-----------+------+----------------------------+-------------+
5 rows in set (0.00 sec)

Let’s list the snapshots for our application. Note the snapshot ID, as we will use it in the next command.

[demo@ocp-svc tutorial]# robin app info imdb-movies
Name                              : imdb-movies
Kind                              : helm
State                             : ONLINE
Number of repos                   : 0
Number of snapshots               : 1
Number of usable backups          : 0
Number of archived/failed backups : 0

Query:
-------
{'apps': ['helm/imdb-movies@mysql'], 'resources': [], 'selectors': [], 'namespace': 'mysql'}

Snapshots:
+----------------------------------+-------------------------+-------------------+--------+----------------------+
| Id                               | Name                    | Description       | State  | Creation Time        |
+----------------------------------+-------------------------+-------------------+--------+----------------------+
| ca0ce01e0e6b11eb8db6fd18389aafba | imdb-movies_snap9movies | contains 9 movies | ONLINE | 14 Oct 2020 17:23:05 |
+----------------------------------+-------------------------+-------------------+--------+----------------------+

Now, let’s roll back to the point where we had 9 movies, using the snapshot ID displayed above.

# robin app restore <app_name> --snapshotid <snapshot_id> --wait
[demo@ocp-svc tutorial]# robin app restore imdb-movies --snapshotid ca0ce01e0e6b11eb8db6fd18389aafba --wait
Job: 1119 Name: K8SApplicationRollback State: PROCESSED       Error: 0
Job: 1119 Name: K8SApplicationRollback State: PREPARED        Error: 0
Job: 1119 Name: K8SApplicationRollback State: AGENT_WAIT      Error: 0
Job: 1119 Name: K8SApplicationRollback State: COMPLETED       Error: 0

Validate that the MySQL pods are up and running after the restore operation.

[demo@ocp-svc tutorial]# oc get pod
NAME                         READY   STATUS    RESTARTS   AGE
imdb-movies-mysql-client     1/1     Running   0          32m
imdb-movies-mysql-master-0   1/1     Running   0          2m58s
imdb-movies-mysql-slave-0    1/1     Running   0          2m58s

Verify that we have rolled back to 9 movies in the “movies” table.

1001@imdb-movies-mysql-client:/$  mysql -h imdb-movies-mysql.mysql.svc.cluster.local -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 34
Server version: 8.0.21 Source distribution

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use imdb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from movies;
+-----------+------+---------------------------------------+-------------+
| movieid   | year | title                                 | genre       |
+-----------+------+---------------------------------------+-------------+
| tt0360556 | 2018 | Fahrenheit 451                        | Drama       |
| tt0365545 | 2018 | Nappily Ever After                    | Comedy      |
| tt0427543 | 2018 | A Million Little Pieces               | Drama       |
| tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy      |
| tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy      |
| tt0859635 | 2018 | Super Troopers 2                      | Comedy      |
| tt0862930 | 2018 | Dukun                                 | Horror      |
| tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary |
| tt0933876 | 2018 | June 9                                | Horror      |
+-----------+------+---------------------------------------+-------------+
9 rows in set (0.00 sec)

We have successfully rolled back to our original state with 9 movies!

1.7. Clone the MySQL Database

Robin lets you clone not just the storage volumes (PVCs) but the entire database application, including all its resources such as Pods, StatefulSets, PVCs, Services, and ConfigMaps, with a single command.

Application cloning improves collaboration across Dev/Test/Ops teams. Teams can share applications and data quickly, reducing the procedural delays involved in re-creating environments. Each team can work on its clone without affecting other teams. Clones are useful when you want to run a report on a database without affecting the source database application, perform UAT tests, or validate patches before applying them to the production database.

Robin clones are ready-to-use “thin copies” of the entire app/database, not just storage volumes. Thin-copy means that data from the snapshot is NOT physically copied, therefore clones can be made very quickly. Robin clones are fully-writable and any modifications made to the clone are not visible to the source app/database.

To create a clone from the existing snapshot created above, run the following command, using the snapshot ID we retrieved earlier:

# robin app create from-snapshot <clone_name> <snapshot_id> --wait
[demo@ocp-svc tutorial]# robin app create from-snapshot imdb-movies-clone ca0ce01e0e6b11eb8db6fd18389aafba --wait
Job: 1120 Name: K8SApplicationClone  State: VALIDATED       Error: 0
Job: 1120 Name: K8SApplicationClone  State: PREPARED        Error: 0
Job: 1120 Name: K8SApplicationClone  State: AGENT_WAIT      Error: 0
Job: 1120 Name: K8SApplicationClone  State: FINALIZED       Error: 0
Job: 1120 Name: K8SApplicationClone  State: COMPLETED       Error: 0

Let’s verify Robin has cloned all relevant Kubernetes resources.

[demo@ocp-svc tutorial]# oc get all |grep "imdb-movies-clone"
pod/imdb-movies-clone-imdb-movies-mysql-master-0   1/1     Running   0          86s
pod/imdb-movies-clone-imdb-movies-mysql-slave-0    1/1     Running   0          96s
service/imdb-movies-clone-imdb-movies-mysql         ClusterIP   172.30.226.99    <none>        3306/TCP   2m25s
service/imdb-movies-clone-imdb-movies-mysql-slave   ClusterIP   172.30.229.50    <none>        3306/TCP   2m25s
statefulset.apps/imdb-movies-clone-imdb-movies-mysql-master   1/1     2m24s
statefulset.apps/imdb-movies-clone-imdb-movies-mysql-slave    1/1     2m24s

Let’s verify that Robin has registered this cloned app as a new application:

[demo@ocp-svc tutorial]# robin app info imdb-movies-clone --status

+-----------------------+---------------------------------------------------+--------+---------+
| Kind                  | Name                                              | Status | Message |
+-----------------------+---------------------------------------------------+--------+---------+
| ServiceAccount        | imdb-movies-clone-imdb-movies-mysql               | Ready  | -       |
| ConfigMap             | imdb-movies-clone-imdb-movies-mysql-master        | Ready  | -       |
| ConfigMap             | imdb-movies-clone-imdb-movies-mysql-slave         | Ready  | -       |
| Secret                | imdb-movies-clone-imdb-movies-mysql               | Ready  | -       |
| PersistentVolumeClaim | data-imdb-movies-clone-imdb-movies-mysql-master-0 | Bound  | -       |
| PersistentVolumeClaim | data-imdb-movies-clone-imdb-movies-mysql-slave-0  | Bound  | -       |
| Pod                   | imdb-movies-clone-imdb-movies-mysql-master-0      | Ready  | -       |
| Pod                   | imdb-movies-clone-imdb-movies-mysql-slave-0       | Ready  | -       |
| Service               | imdb-movies-clone-imdb-movies-mysql-slave         | Ready  | -       |
| Service               | imdb-movies-clone-imdb-movies-mysql               | Ready  | -       |
| StatefulSet           | imdb-movies-clone-imdb-movies-mysql-master        | Ready  | -       |
| StatefulSet           | imdb-movies-clone-imdb-movies-mysql-slave         | Ready  | -       |
+-----------------------+---------------------------------------------------+--------+---------+

Notice that Robin automatically clones all the Kubernetes resources, not just the storage volumes (PVCs), that are needed to stand up a fully-functional clone of our database. After the clone operation completes, the cloned database is ready for use.

Get the name of the service for the master pod of the cloned MySQL database and connect to the cloned database to verify the data:

[demo@ocp-svc tutorial]# oc get service | grep clone
imdb-movies-clone-imdb-movies-mysql         ClusterIP   172.30.226.99    <none>        3306/TCP   6m56s
imdb-movies-clone-imdb-movies-mysql-slave   ClusterIP   172.30.229.50    <none>        3306/TCP   6m56s

[demo@ocp-svc tutorial]# kubectl exec -it imdb-movies-mysql-client bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
1001@imdb-movies-mysql-client:/$ mysql -h imdb-movies-clone-imdb-movies-mysql.mysql.svc.cluster.local -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 35
Server version: 8.0.21 Source distribution

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| imdb               |
| information_schema |
| my_database        |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
6 rows in set (0.01 sec)

mysql> use imdb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from movies;
+-----------+------+---------------------------------------+-------------+
| movieid   | year | title                                 | genre       |
+-----------+------+---------------------------------------+-------------+
| tt0360556 | 2018 | Fahrenheit 451                        | Drama       |
| tt0365545 | 2018 | Nappily Ever After                    | Comedy      |
| tt0427543 | 2018 | A Million Little Pieces               | Drama       |
| tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy      |
| tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy      |
| tt0859635 | 2018 | Super Troopers 2                      | Comedy      |
| tt0862930 | 2018 | Dukun                                 | Horror      |
| tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary |
| tt0933876 | 2018 | June 9                                | Horror      |
+-----------+------+---------------------------------------+-------------+
9 rows in set (0.00 sec)

We have successfully created a clone of our original MySQL database, and the cloned database also has ‘imdb’ database and a table called “movies” with 9 rows, just like the original.

Now, let’s make changes to the clone and verify the original database remains unaffected by changes to the clone. Let’s delete the movies with genre “Drama”.

mysql> delete from movies where genre = "Drama";
Query OK, 2 rows affected (0.01 sec)

mysql> select * from movies;
+-----------+------+---------------------------------------+-------------+
| movieid   | year | title                                 | genre       |
+-----------+------+---------------------------------------+-------------+
| tt0365545 | 2018 | Nappily Ever After                    | Comedy      |
| tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy      |
| tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy      |
| tt0859635 | 2018 | Super Troopers 2                      | Comedy      |
| tt0862930 | 2018 | Dukun                                 | Horror      |
| tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary |
| tt0933876 | 2018 | June 9                                | Horror      |
+-----------+------+---------------------------------------+-------------+
7 rows in set (0.00 sec)

As you can see from the output above, 2 records were deleted. To verify that our original MySQL database is unaffected by changes to the clone, let’s inspect it by running the commands below:

1001@imdb-movies-mysql-client:/$ mysql -h imdb-movies-mysql.mysql.svc.cluster.local -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 887
Server version: 8.0.21 Source distribution

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use imdb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from movies;
+-----------+------+---------------------------------------+-------------+
| movieid   | year | title                                 | genre       |
+-----------+------+---------------------------------------+-------------+
| tt0360556 | 2018 | Fahrenheit 451                        | Drama       |
| tt0365545 | 2018 | Nappily Ever After                    | Comedy      |
| tt0427543 | 2018 | A Million Little Pieces               | Drama       |
| tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy      |
| tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy      |
| tt0859635 | 2018 | Super Troopers 2                      | Comedy      |
| tt0862930 | 2018 | Dukun                                 | Horror      |
| tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary |
| tt0933876 | 2018 | June 9                                | Horror      |
+-----------+------+---------------------------------------+-------------+
9 rows in set (0.00 sec)

As you can see from the output above, our original MySQL database still has all 9 records and thus was unaffected by the deletion performed on the clone.

This means we can work on the original MySQL database and the cloned database simultaneously without affecting each other. This is valuable for collaboration across teams where each team needs to perform a unique set of operations.

Now let’s delete the clone. Since the clone is just like any other Robin application, it can be deleted using the native ‘app delete’ command provided by Robin, as shown below:

[demo@ocp-svc tutorial]# robin app list
Helm/Flex Apps:

+-------------------+---------+--------+----------------------+--------------+-----------+---------+
| Name              | Type    | State  | Owner/Tenant         | Namespace    | Snapshots | Backups |
+-------------------+---------+--------+----------------------+--------------+-----------+---------+
| imdb-movies-clone | flexapp | ONLINE | admin/Administrators | mysql        | 0         | 0       |
| imdb-movies       | helm    | ONLINE | admin/Administrators | mysql        | 1         | 0       |
+-------------------+---------+--------+----------------------+--------------+-----------+---------+

[demo@ocp-svc tutorial]# robin app delete imdb-movies-clone -y --force --wait
Job: 1121 Name: K8SAppDelete         State: PROCESSED       Error: 0
Job: 1121 Name: K8SAppDelete         State: PREPARED        Error: 0
Job: 1121 Name: K8SAppDelete         State: AGENT_WAIT      Error: 0
Job: 1121 Name: K8SAppDelete         State: FINALIZED       Error: 0
Job: 1121 Name: K8SAppDelete         State: COMPLETED       Error: 0

1.8. Backup the MySQL Database to Google Cloud Storage

Robin elevates the experience from backing up just storage volumes (PVCs) to backing up entire applications/databases, including their metadata, configuration, and data.

A backup is a full copy of the application snapshot that resides on completely different storage media than the application’s data. Therefore, backups are useful for restoring an entire application from external storage media in the event of catastrophic failures, such as disk errors, server failures, or an entire data center going offline (assuming, of course, that your backup doesn’t reside in the data center that is offline).

Let’s now back up our database to an external secondary storage repository (repo). Snapshots (metadata + configuration + data) are backed up to the repo.

Robin can back up Kubernetes applications to both AWS S3 and Google Cloud Storage (GCS). In this demo we will use GCS to store the backup.

Before we proceed, we need to prepare the credentials file in order to create the bucket and register the repository with Robin. Follow the documentation here.
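
For GCS, the credentials file is a standard Google Cloud service-account key in JSON form. A redacted sketch of /root/gcs.json is shown below; every value is a placeholder, and a real key (which contains a few additional fields) must be generated for a service account with permission to create and write to buckets.

{
  "type": "service_account",
  "project_id": "<your-project-id>",
  "private_key_id": "<key-id>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "<service-account>@<your-project-id>.iam.gserviceaccount.com",
  "client_id": "<client-id>",
  "token_uri": "https://oauth2.googleapis.com/token"
}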

After preparing the file, register the repo with Robin by running the below command:

[demo@ocp-svc tutorial]# robin repo register demo-backup gcs://demos-backup-tutorial /root/gcs.json readwrite --wait
Job: 1122 Name: StorageRepoAdd       State: PROCESSED       Error: 0
Job: 1122 Name: StorageRepoAdd       State: FINALIZED       Error: 0
Job: 1122 Name: StorageRepoAdd       State: COMPLETED       Error: 0

Confirm that our secondary storage repository is successfully registered by issuing the following command:

[demo@ocp-svc tutorial]# robin repo list
+------------------+--------+----------------------+--------------+-----------------------+-----------------+-------------+
| Name             | Type   | Owner/Tenant         | BackupTarget | Bucket                | Path            | Permissions |
+------------------+--------+----------------------+--------------+-----------------------+-----------------+-------------+
| demo-backup      | GCS    | admin/Administrators | 1            | demos-backup-tutorial | -               | readwrite   |
+------------------+--------+----------------------+--------------+-----------------------+-----------------+-------------+

Let’s attach this repo to the application so that we can utilize the cloud storage to store backups of its snapshots:

[demo@ocp-svc tutorial]# robin app attach-repo imdb-movies demo-backup --wait
Job: 1123 Name: K8SApplicationAddRepo State: VALIDATED       Error: 0
Job: 1123 Name: K8SApplicationAddRepo State: COMPLETED       Error: 0

Confirm that the secondary storage repository is successfully attached to the app with the command below:

[demo@ocp-svc tutorial]# robin app info imdb-movies
Name                              : imdb-movies
Kind                              : helm
..
..
Repos:
+-------------+-----------------------+------+------------+
| Name        | Bucket                | Path | Permission |
+-------------+-----------------------+------+------------+
| demo-backup | demos-backup-tutorial | -    | readwrite  |
+-------------+-----------------------+------+------------+

Snapshots:
+----------------------------------+-------------------------+-------------------+--------+----------------------+
| Id                               | Name                    | Description       | State  | Creation Time        |
+----------------------------------+-------------------------+-------------------+--------+----------------------+
| ca0ce01e0e6b11eb8db6fd18389aafba | imdb-movies_snap9movies | contains 9 movies | ONLINE | 14 Oct 2020 17:23:05 |
+----------------------------------+-------------------------+-------------------+--------+----------------------+

Back up the snapshot to the remote GCS bucket using the following command:

# robin backup create <app_name> <repo_name> --snapshotid <snapshot_id> --backupname <backup_name> --wait

You should see an output similar to the following:

[demo@ocp-svc tutorial]# robin backup create imdb-movies demo-backup --snapshotid ca0ce01e0e6b11eb8db6fd18389aafba --backupname imdb_9movies_backup --wait
Creating app backup 'imdb_9movies_backup' from snapshot 'ca0ce01e0e6b11eb8db6fd18389aafba'
Job: 1124 Name: K8SApplicationBackup State: PROCESSED       Error: 0
Job: 1124 Name: K8SApplicationBackup State: AGENT_WAIT      Error: 0
Job: 1124 Name: K8SApplicationBackup State: COMPLETED       Error: 0

Confirm that the backup has been copied to the GCS repo by running the following command:

[demo@ocp-svc tutorial]# robin repo contents demo-backup
+----------------------------------+------------+-------------+----------------------+-------------+-------------------------+----------+
| BackupID                         | ZoneID     | RepoName    | Owner/Tenant         | App         | Snapshot                | Imported |
+----------------------------------+------------+-------------+----------------------+-------------+-------------------------+----------+
| 112bee1a0e7911eb909f8f9c9b2be776 | 1601408284 | demo-backup | admin/Administrators | imdb-movies | imdb-movies_snap9movies | False    |
+----------------------------------+------------+-------------+----------------------+-------------+-------------------------+----------+

As you can see from the output above, the snapshot created previously has now been backed up to the specified GCS bucket. Note the backup ID, as we will use it in the next section.
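
If you are scripting these steps, the backup ID can be captured into a shell variable. The one-liner below is a convenience sketch that assumes the table layout shown above, with the ID in the second whitespace-delimited column of the row mentioning the app:

[demo@ocp-svc tutorial]# BACKUP_ID=$(robin repo contents demo-backup | awk '/imdb-movies/ {print $2}')
[demo@ocp-svc tutorial]# echo $BACKUP_ID
112bee1a0e7911eb909f8f9c9b2be776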

1.9. Restore the MySQL Database

Let’s simulate a system failure where all local data is lost by first deleting the snapshot locally.

[demo@ocp-svc tutorial]# robin snapshot delete ca0ce01e0e6b11eb8db6fd18389aafba --wait
Are you sure you want to delete [y/n] ? y
Job: 1125 Name: K8SSnapshotDelete    State: VALIDATED       Error: 0
Job: 1125 Name: K8SSnapshotDelete    State: COMPLETED       Error: 0

Now let’s simulate a data loss situation by deleting all data from the “movies” table and verifying that all the data is gone.

1001@imdb-movies-mysql-client:/$  mysql -h imdb-movies-mysql.mysql.svc.cluster.local -uroot -p
..
..
mysql> use imdb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> delete from movies;
Query OK, 9 rows affected (0.01 sec)

mysql> select * from movies;
Empty set (0.00 sec)

Since we don’t have any data locally anymore, we have to use the data stored within the backup in the external cloud repository. Let’s restore the snapshot from the backup and roll our application back to it by issuing the following commands:

[demo@ocp-svc tutorial]# robin backup list
+----------------------------------+---------------------+--------------+-------------------------+--------+
| Backup ID                        | Backup Name         | Repo         | Snapshot Name           | State  |
+----------------------------------+---------------------+--------------+-------------------------+--------+
| 112bee1a0e7911eb909f8f9c9b2be776 | imdb_9movies_backup | demo-backup  | imdb-movies_snap9movies | Pushed |
+----------------------------------+---------------------+--------------+-------------------------+--------+

[demo@ocp-svc tutorial]# robin app restore imdb-movies --backupid 112bee1a0e7911eb909f8f9c9b2be776 --wait
Job: 1132 Name: K8SApplicationRestore State: VALIDATED       Error: 0
Job: 1132 Name: K8SApplicationRestore State: WAITING         Error: 0
Job: 1132 Name: K8SApplicationRestore State: COMPLETED       Error: 0

Remember, we deleted the local snapshot of our data. Let’s verify that the above command has restored the snapshot stored in the cloud by running the following command:

[demo@ocp-svc tutorial]# robin snapshot list --app imdb-movies
+----------------------------------+--------+-------------+----------+-------------------------+
| Snapshot ID                      | State  | App Name    | App Kind | Snapshot name           |
+----------------------------------+--------+-------------+----------+-------------------------+
| ca0ce01e0e6b11eb8db6fd18389aafba | ONLINE | imdb-movies | helm     | imdb-movies_snap9movies |
+----------------------------------+--------+-------------+----------+-------------------------+

Let’s verify all 9 rows are restored to the “movies” table by running the following command:

[demo@ocp-svc tutorial]# kubectl exec -it imdb-movies-mysql-client bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
1001@imdb-movies-mysql-client:/$ mysql -h imdb-movies-mysql.mysql.svc.cluster.local -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 25
Server version: 8.0.21 Source distribution

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use imdb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from movies;
+-----------+------+---------------------------------------+-------------+
| movieid   | year | title                                 | genre       |
+-----------+------+---------------------------------------+-------------+
| tt0360556 | 2018 | Fahrenheit 451                        | Drama       |
| tt0365545 | 2018 | Nappily Ever After                    | Comedy      |
| tt0427543 | 2018 | A Million Little Pieces               | Drama       |
| tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy      |
| tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy      |
| tt0859635 | 2018 | Super Troopers 2                      | Comedy      |
| tt0862930 | 2018 | Dukun                                 | Horror      |
| tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary |
| tt0933876 | 2018 | June 9                                | Horror      |
+-----------+------+---------------------------------------+-------------+
9 rows in set (0.01 sec)

As you can see, we can restore the database to a desired state in the event of data corruption. This is achieved by simply pulling the backup from the cloud to recreate the snapshot and rolling the application back to the state stored in that snapshot.

1.10. Create MySQL Database from the backup

In addition to recovering the snapshot from the MySQL backup, we can also create an entirely new and independent application, on the same or a completely different cluster, from that backup.

Here is the command to create a completely new application on the same cluster:

# robin app create from-backup <new_app_name> <backup_id> --wait

You should see an output similar to the following:

[demo@ocp-svc tutorial]# robin app create from-backup new-imdb-movies 112bee1a0e7911eb909f8f9c9b2be776 --wait
Job: 1135 Name: K8SApplicationCreate State: PROCESSED       Error: 0
Job: 1135 Name: K8SApplicationCreate State: PREPARED        Error: 0
Job: 1135 Name: K8SApplicationCreate State: AGENT_WAIT      Error: 0
Job: 1135 Name: K8SApplicationCreate State: COMPLETED       Error: 0

Ensure the application is up and running before continuing, as shown below:

[demo@ocp-svc tutorial]# oc get pod -A | grep new-imdb-movies
t001-u000003                                       new-imdb-movies-imdb-movies-mysql-master-0                   1/1     Running            0          3m6s
t001-u000003                                       new-imdb-movies-imdb-movies-mysql-slave-0                    1/1     Running            0          3m9s

[demo@ocp-svc tutorial]# robin app info new-imdb-movies --status
+-----------------------+-------------------------------------------------+--------+---------+
| Kind                  | Name                                            | Status | Message |
+-----------------------+-------------------------------------------------+--------+---------+
| ServiceAccount        | new-imdb-movies-imdb-movies-mysql               | Ready  | -       |
| ConfigMap             | new-imdb-movies-imdb-movies-mysql-slave         | Ready  | -       |
| ConfigMap             | new-imdb-movies-imdb-movies-mysql-master        | Ready  | -       |
| Secret                | new-imdb-movies-imdb-movies-mysql               | Ready  | -       |
| PersistentVolumeClaim | data-new-imdb-movies-imdb-movies-mysql-slave-0  | Bound  | -       |
| PersistentVolumeClaim | data-new-imdb-movies-imdb-movies-mysql-master-0 | Bound  | -       |
| Pod                   | new-imdb-movies-imdb-movies-mysql-slave-0       | Ready  | -       |
| Pod                   | new-imdb-movies-imdb-movies-mysql-master-0      | Ready  | -       |
| Service               | new-imdb-movies-imdb-movies-mysql               | Ready  | -       |
| Service               | new-imdb-movies-imdb-movies-mysql-slave         | Ready  | -       |
| StatefulSet           | new-imdb-movies-imdb-movies-mysql-slave         | Ready  | -       |
| StatefulSet           | new-imdb-movies-imdb-movies-mysql-master        | Ready  | -       |
+-----------------------+-------------------------------------------------+--------+---------+

Let’s verify the contents and integrity of the MySQL database app by accessing the master pod and inspecting the database and the rows in the table.

[demo@ocp-svc tutorial]# oc get pod -A | grep new-imdb
t001-u000003                                       new-imdb-movies-imdb-movies-mysql-master-0                   1/1     Running            0          7m8s
t001-u000003                                       new-imdb-movies-imdb-movies-mysql-slave-0                    1/1     Running            0          7m11s

[demo@ocp-svc tutorial]# oc exec -it -n t001-u000003 new-imdb-movies-imdb-movies-mysql-master-0 bash
1000610000@new-imdb-movies-imdb-movies-mysql-master-0:/$
1000610000@new-imdb-movies-imdb-movies-mysql-master-0:/$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 85
Server version: 8.0.21 Source distribution

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use imdb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from movies;
+-----------+------+---------------------------------------+-------------+
| movieid   | year | title                                 | genre       |
+-----------+------+---------------------------------------+-------------+
| tt0360556 | 2018 | Fahrenheit 451                        | Drama       |
| tt0365545 | 2018 | Nappily Ever After                    | Comedy      |
| tt0427543 | 2018 | A Million Little Pieces               | Drama       |
| tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy      |
| tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy      |
| tt0859635 | 2018 | Super Troopers 2                      | Comedy      |
| tt0862930 | 2018 | Dukun                                 | Horror      |
| tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary |
| tt0933876 | 2018 | June 9                                | Horror      |
+-----------+------+---------------------------------------+-------------+
9 rows in set (0.00 sec)

To learn more about using Robin Storage on OpenShift, visit the Robin Storage solution page.

2. Elasticsearch on OpenShift

After successfully deploying and running stateless applications, a number of developers are exploring the possibility of running stateful workloads, such as Elasticsearch, on OpenShift. If you are considering extending OpenShift for stateful workloads, this tutorial will help you experiment on your existing OpenShift environment by providing step-by-step instructions.

This tutorial will walk you through:

  1. How to deploy an Elasticsearch database on OpenShift using the Robin Operator and Helm3

  2. Add sample data to the Elasticsearch database

  3. Verify the Helm release has been registered as an application

  4. Create a point-in-time snapshot of the Elasticsearch database

  5. Simulate a user error and roll back the application to a stable state using the snapshot

  6. Clone the database for the purpose of collaboration

  7. Back up the database to the cloud using a Google Cloud Storage bucket

  8. Simulate data loss/corruption and use the backup to restore the database

  9. Create an Elasticsearch database application from the backup

2.1. Prerequisites: Install the Robin Operator on OpenShift and set up Helm

Robin Storage is application-aware container storage that offers advanced data management capabilities and runs natively on OpenShift. Robin Storage delivers bare-metal performance and enables you to protect (via snapshots and backups), encrypt, collaborate on (via clones and git-like push/pull workflows), and make portable (via Cloud Repositories) stateful applications that are deployed using Helm Charts or Operators.

Note

Documentation for all commands native to the Robin Storage solution used within this tutorial is available here.

Before we deploy Elasticsearch on OpenShift, let’s first install the Robin operator on your existing OpenShift environment. You can install Robin directly from the OpenShift console by clicking on the OperatorHub tab. You can find further instructions here.

Let’s confirm that the OpenShift cluster is up and running using the following command:

# oc get nodes

You should see an output similar to below, with the status of each node marked as Ready:

[demo@ocp-svc ~]# oc get nodes
NAME                                                           STATUS   ROLES    AGE   VERSION
poc-oshift-pnh4s-master-0.c.rock-range-207622.internal         Ready    master   10d   v1.18.3+47c0e71
poc-oshift-pnh4s-master-1.c.rock-range-207622.internal         Ready    master   10d   v1.18.3+47c0e71
poc-oshift-pnh4s-master-2.c.rock-range-207622.internal         Ready    master   10d   v1.18.3+47c0e71
poc-oshift-pnh4s-worker-a-zj94h.c.rock-range-207622.internal   Ready    worker   10d   v1.18.3+47c0e71
poc-oshift-pnh4s-worker-b-bcstt.c.rock-range-207622.internal   Ready    worker   10d   v1.18.3+47c0e71
poc-oshift-pnh4s-worker-c-6xm95.c.rock-range-207622.internal   Ready    worker   10d   v1.18.3+47c0e71

In addition to the underlying Kubernetes cluster being ready, we need to make sure the Robin cluster is up and running. Run the following command to verify that the Robin cluster is ready:

# oc describe robincluster -n robinio

You should see an output similar to below.

[demo@ocp-svc ~]# oc describe robincluster -n robinio
Name:         robin
Namespace:    robinio
Labels:       app.kubernetes.io/instance=robin
              app.kubernetes.io/managed-by=robin.io
              app.kubernetes.io/name=robin
Annotations:  <none>
API Version:  manage.robin.io/v1
Kind:         RobinCluster
Metadata:
  Creation Timestamp:  2020-10-01T21:32:35Z
  Generation:          1
  Managed Fields:
    API Version:  manage.robin.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .:
          f:app.kubernetes.io/instance:
          f:app.kubernetes.io/managed-by:
          f:app.kubernetes.io/name:
      f:spec:
        .:
        f:host_type:
        f:image_robin:
        f:k8s_provider:
    Manager:      kubectl
    Operation:    Update
    Time:         2020-10-01T21:32:35Z
    API Version:  manage.robin.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:connect_command:
        f:get_robin_client:
        f:master_ip:
        f:phase:
        f:pod_status:
        f:robin_node_status:
    Manager:         robin-operator
    Operation:       Update
    Time:            2020-10-09T14:49:36Z
  Resource Version:  1315346
  Self Link:         /apis/manage.robin.io/v1/namespaces/robinio/robinclusters/robin
  UID:               4c9bd687-f04b-4b05-9c9b-fa68bbb1890d
Spec:
  host_type:     gcp
  image_robin:   robinsys/robinimg:5.3.2-521
  k8s_provider:  openshift
Status:
  connect_command:   kubectl exec -it robin-9nr9s -n robinio -- bash
  get_robin_client:  curl -k https://10.0.0.5:29442/api/v3/robin_server/download?file=robincli&os=linux > robin
  master_ip:         10.0.0.5
  Phase:             Ready

In the above output, the field get_robin_client contains the command to download the Robin client. Utilize this command to get the Robin client onto your local machine. An example is shown below:

[demo@ocp-svc ~]# curl -k "https://10.0.0.5:29442/api/v3/robin_server/download?file=robincli&os=linux" > robin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100 47.4M  100 47.4M    0     0  18.0M      0  0:00:02  0:00:02 --:--:-- 18.0M

[demo@ocp-svc ~]# cp ./robin /usr/local/bin/robin
[demo@ocp-svc ~]# chmod +x /usr/local/bin/robin

Note

After running the command, the client binary will be downloaded into your current working directory. The commands displayed above copy the file to the /usr/local/bin directory and make it executable, so that robin becomes a system command. These final two steps are not mandatory but can be performed for user convenience.

After downloading the client, we have to associate it with the desired Robin cluster. In the oc describe robincluster output above, a field named master_ip is present and indicates the external IP address of the Master node of the Robin cluster. Use the aforementioned address to connect the Robin client with the concerned OpenShift cluster, via the command:

# robin client add-context <master-ip> --set-current

An example usage of the command is shown below:

[demo@ocp-svc ~]$ robin client add-context 10.0.0.5 --set-current
Context robin-cluster-10.0.0.5 created successfully and set as the current context

Next, log in to your Robin cluster and create a namespace in which we will create the application, by running the following commands:

# robin login <user> --password <password>

# robin namespace add <namespace-name>

Note

The username and password combination of the admin user can be found within

You should see an output similar to below:

[demo@ocp-svc ~]# robin login admin --password Robin123
User admin is logged into Administrators tenant

[demo@ocp-svc ~]# robin namespace add demo
Namespace 'demo' has been added for user 'admin' in tenant 'Administrators'

Finally, we will add a stable Helm repository to pull Helm charts from. For this tutorial, we will use the Bitnami Helm repository, which contains charts designed to run on OpenShift. Utilize the command shown below to register the repository:

[demo@ocp-svc ~]# helm repo add bitnami https://charts.bitnami.com/ibm
"bitnami" has been added to your repositories

2.2. Deploy an Elasticsearch database on OpenShift

Now, let’s create an Elasticsearch database using Helm and Robin Storage. During the installation process of the Robin operator, not only is a RobinCluster custom resource definition created, but a custom StorageClass named “robin” is also registered with OpenShift. We can use the aforementioned StorageClass to create PersistentVolumes and PersistentVolumeClaims for pods in OpenShift. Using this StorageClass allows one to access the data management capabilities (such as snapshot, clone, backup) provided by Robin Storage.
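
Before proceeding, you can confirm the StorageClass has been registered by listing the storage classes known to the cluster (a quick sanity check; output varies by environment):

# oc get storageclass | grep robin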

On OpenShift 4.x, the security context for the Elasticsearch Helm chart should be updated to allow the containers to run in privileged mode. Fetch the Elasticsearch chart via the following:

# helm fetch bitnami/elasticsearch

Note

The above command will download a tarball into the working directory where it is run.
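
The chart can then be extracted with a command along the following lines (the exact tarball name depends on the chart version fetched; elasticsearch-12.7.2 is the version used later in this tutorial):

# tar -xzf elasticsearch-12.7.2.tgz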

Once the downloaded Elasticsearch chart has been extracted, update the following files within the templates directory: coordinating-deploy.yaml, data-statefulset.yaml, and master-statefulset.yaml. Within each of the aforementioned files, edit the container’s securityContext section to be as shown below.

securityContext:
  privileged: true
  capabilities:
    add: ["SYS_ADMIN"]
  allowPrivilegeEscalation: true
  runAsUser: {{ .Values.coordinating.securityContext.runAsUser }}

These changes enable the containers to run under the privileged security context constraint. Note that the runAsUser template reference shown above is the one found in coordinating-deploy.yaml; in data-statefulset.yaml and master-statefulset.yaml, the reference should point at the corresponding values section for that component. In addition to this, the values.yaml file needs to be edited such that the storageClass attribute is set to robin so as to take advantage of the data management capabilities Robin Storage offers.
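
A minimal values.yaml fragment for the storage change might look like the following (assuming the chart honors a global storageClass setting; the exact key layout depends on the chart version):

global:
  storageClass: robin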

Next, log in to the ‘demo’ project/namespace (which was created initially) and add the privileged security context constraint to the ‘default’ service account using the commands showcased below.

[demo@ocp-svc ~]# oc project demo
Now using project "demo" on server "https://api.lab.ocp.lan:6443"
[demo@ocp-svc ~]# oc adm policy add-scc-to-user privileged -z default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "default"

Once the privileges are successfully added we are ready to deploy the Elasticsearch application using the below Helm command.

# helm install <release_name> <path_to_modified_directory> --namespace <namespace>

Displayed below is the result of running the command.

# helm install employees /root/elasticsearch --namespace demo

Note

All helm commands/output from this point onwards pertain to Helm3, as Robin will set up Helm3 on the host where it is installed.

After a few seconds run the following command to verify the application is deployed and all relevant Kubernetes resources are ready within the appropriate namespace.

# helm list -n <namespace>

This should result in an output showing the status of your Elasticsearch database as shown below.

[demo@ocp-svc ~]# helm list -n demo
NAME      NAMESPACE REVISION  UPDATED                                 STATUS    CHART                 APP VERSION
employees demo      1         2020-10-09 09:26:35.110072211 -0700 PDT deployed  elasticsearch-12.7.2  7.9.1

Although the status above is set to deployed, the relevant Elasticsearch application pods need to be in a good state before we can proceed. Run the following command to verify the pods are running:

# oc get pods -n <namespace> | grep <release_name>

You should see an output similar to the following.

[demo@ocp-svc ~]# oc get pods -n demo | grep employees
employees-elasticsearch-coordinating-only-5d67cb7d47-bbkgz   1/1     Running   0          6m43s
employees-elasticsearch-coordinating-only-5d67cb7d47-zznsp   1/1     Running   0          6m43s
employees-elasticsearch-data-0                               1/1     Running   0          6m43s
employees-elasticsearch-data-1                               1/1     Running   0          6m42s
employees-elasticsearch-master-0                             1/1     Running   0          6m43s
employees-elasticsearch-master-1                             1/1     Running   0          6m42s

Now that we know the Elasticsearch application is up and running, let’s save the name of the master pod using the below command.

# export CLIENT_ADDRESS=$(oc get pods --all-namespaces --field-selector=status.phase=Running | grep employees-elasticsearch-master-0 -m 1| awk 'BEGIN {FS=" "}; {print $2}')
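
Since pods in a StatefulSet have deterministic names, an equivalent shortcut (assuming the release name ‘employees’ used in this tutorial) would be:

# export CLIENT_ADDRESS=employees-elasticsearch-master-0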

We will use this environment variable later in the tutorial to add data into the application.

2.3. Add sample data to the Elasticsearch database

Let’s create an index called ‘test-index’, in which we will store the user data, using the below command.

[demo@ocp-svc ~]# oc exec -n demo -i ${CLIENT_ADDRESS} -- curl -X PUT http://localhost:9200/test-index
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100    69  100    69    0     0     70      0 --:--:-- --:--:-- --:--:--    70
{"acknowledged":true,"shards_acknowledged":true,"index":"test-index"}

Note

All commands pertaining to the deployed application will henceforth be run within the demo namespace context in the examples.

In addition to the index, we need some sample data to perform operations on. Utilize the following command to add a user named ‘Bob’ to the index we created previously.

[demo@ocp-svc ~]# oc exec -n demo -i ${CLIENT_ADDRESS} -- curl -H 'Content-Type: application/json' -XPOST http://localhost:9200/test-index/test/1 -d '{"name":"Bob"}'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100   172  100   158  100    14    521     46 --:--:-- --:--:-- --:--:--   569
{"_index":"test-index","_type":"test","_id":"1","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":0,"_primary_term":1}

Finally verify that a user named Bob has been added to the database on the specified index, by running the following command:

[demo@ocp-svc ~]# oc exec -n demo -i ${CLIENT_ADDRESS} -- curl -X GET http://localhost:9200/test-index/_search\?pretty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100   443  100   443    0     0    712      0 --:--:-- --:--:-- --:--:--   711
{
  "took" : 605,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "test-index",
        "_type" : "test",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "name" : "Bob"
        }
      }
    ]
  }
}

As you can see from the above output, the desired user has been added to the Elasticsearch database. As a result, we now have an Elasticsearch database with an index and some sample data. Now, let’s take a look at the data management capabilities Robin brings, such as taking snapshots, making clones, and creating backups.

2.4. Verify the Elasticsearch Helm release has registered as an application

To benefit from the data management capabilities, we will register our Elasticsearch application with Robin. Doing so will let Robin map and track all resources associated with the Helm release in order to enable the advanced data management capabilities of the product.

Since we initially added the demo namespace in Robin for the admin user, Robin will auto discover the Helm applications deployed in the aforementioned namespace. This can be verified by querying for the application details and status using the native Robin command shown below:

# robin app info <release_name> --status

You should see an output similar to this:

[demo@ocp-svc ~]# robin app info employees --status
+-----------------------+------------------------------------------------------------+--------+---------+
| Kind                  | Name                                                       | Status | Message |
+-----------------------+------------------------------------------------------------+--------+---------+
| ConfigMap             | employees-elasticsearch-initcontainer                      | Ready  | -       |
| PersistentVolumeClaim | data-employees-elasticsearch-master-1                      | Bound  | -       |
| PersistentVolumeClaim | data-employees-elasticsearch-master-0                      | Bound  | -       |
| PersistentVolumeClaim | data-employees-elasticsearch-data-0                        | Bound  | -       |
| PersistentVolumeClaim | data-employees-elasticsearch-data-1                        | Bound  | -       |
| Pod                   | employees-elasticsearch-coordinating-only-5d67cb7d47-zznsp | Ready  | -       |
| Pod                   | employees-elasticsearch-data-0                             | Ready  | -       |
| Pod                   | employees-elasticsearch-coordinating-only-5d67cb7d47-bbkgz | Ready  | -       |
| Pod                   | employees-elasticsearch-master-1                           | Ready  | -       |
| Pod                   | employees-elasticsearch-master-0                           | Ready  | -       |
| Pod                   | employees-elasticsearch-data-1                             | Ready  | -       |
| Service               | employees-elasticsearch-master                             | Ready  | -       |
| Service               | employees-elasticsearch-data                               | Ready  | -       |
| Service               | employees-elasticsearch-coordinating-only                  | Ready  | -       |
| ReplicaSet            | employees-elasticsearch-coordinating-only-5d67cb7d47       | Ready  | -       |
| StatefulSet           | employees-elasticsearch-master                             | Ready  | -       |
| StatefulSet           | employees-elasticsearch-data                               | Ready  | -       |
| Deployment            | employees-elasticsearch-coordinating-only                  | Ready  | -       |
+-----------------------+------------------------------------------------------------+--------+---------+

2.5. Snapshot the Elasticsearch Database

If you make a mistake, such as unintentionally deleting important data, you may be able to undo it by restoring a snapshot. Snapshots allow you to restore your application to the point-in-time state saved within the snapshot.

Robin lets you snapshot not just the storage volumes (PVCs) but the entire database application including all its resources such as Pods, StatefulSets, PVCs, Services, ConfigMaps etc. with a single command. To create a snapshot, run the following command.

# robin snapshot create <release_name> --snapname <snapshot_name> --desc <snapshot_description> --wait

An example usage of the command is shown below:

[demo@ocp-svc ~]# robin snapshot create employees --snapname single-user-bob --desc "contains an employee named Bob" --wait
Job:  136 Name: K8SApplicationSnapshot State: VALIDATED       Error: 0
Job:  136 Name: K8SApplicationSnapshot State: PREPARED        Error: 0
Job:  136 Name: K8SApplicationSnapshot State: AGENT_WAIT      Error: 0
Job:  136 Name: K8SApplicationSnapshot State: COMPLETED       Error: 0

Let’s verify we have successfully created the snapshot via the following command:

# robin snapshot list --app <release_name>

You should see an output similar to this:

[demo@ocp-svc ~]# robin snapshot list --app employees
+----------------------------------+--------+-----------+----------+---------------------------+
| Snapshot ID                      | State  | App Name  | App Kind | Snapshot name             |
+----------------------------------+--------+-----------+----------+---------------------------+
| 4ce8805e0a7511ebb05c39c799a31bb9 | ONLINE | employees | helm     | employees_single-user-bob |
+----------------------------------+--------+-----------+----------+---------------------------+

We now have a snapshot of our entire database that contains a user named Bob.

2.6. Rollback the Elasticsearch database

Let’s simulate user error by adding an invalid user named “Jo$n” instead of “John” via the command:

[demo@ocp-svc ~]# oc exec -n demo -i ${CLIENT_ADDRESS} -- curl -H 'Content-Type: application/json' -XPOST http://localhost:9200/test-index/test/2 -d '{"name":"Jo$n"}'

Verify the invalid user has been added by issuing the command:

[demo@ocp-svc ~]# oc exec -n demo -i ${CLIENT_ADDRESS} -- curl -X GET http://localhost:9200/test-index/_search\?pretty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100   622  100   622    0     0   1217      0 --:--:-- --:--:-- --:--:--  1214
{
  "took" : 505,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 2,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "test-index",
        "_type" : "test",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "name" : "Bob"
        }
      },
      {
        "_index" : "test-index",
        "_type" : "test",
        "_id" : "2",
        "_score" : 1.0,
        "_source" : {
          "name" : "Jo$n"
        }
      }
    ]
  }
}

Next run the following command to see the available snapshots for the registered application:

# robin app info <release_name>

You should see an output similar to the following. Make note of the snapshot ID, as we will use it in the next command.

[demo@ocp-svc ~]# robin app info employees
Name                              : employees
Kind                              : helm
State                             : ONLINE
Number of repos                   : 0
Number of snapshots               : 1
Number of usable backups          : 0
Number of archived/failed backups : 0

Query:
-------
{'apps': ['helm/employees@demo'], 'resources': [], 'selectors': [], 'namespace': 'demo'}

Snapshots:
+----------------------------------+---------------------------+--------------------------------+--------+----------------------+
| Id                               | Name                      | Description                    | State  | Creation Time        |
+----------------------------------+---------------------------+--------------------------------+--------+----------------------+
| 4ce8805e0a7511ebb05c39c799a31bb9 | employees_single-user-bob | contains an employee named Bob | ONLINE | 09 Oct 2020 21:21:06 |
+----------------------------------+---------------------------+--------------------------------+--------+----------------------+

Now, let’s rollback to the point where we only had one user (‘Bob’) that was valid. To do this run the following command:

# robin app restore <release_name> --snapshotid <snapshotid> --wait

An example usage of the command is shown below:

[demo@ocp-svc ~]# robin app restore employees --snapshotid 4ce8805e0a7511ebb05c39c799a31bb9 --wait
Job:  136 Name: K8SApplicationRollback State: VALIDATED       Error: 0
Job:  136 Name: K8SApplicationRollback State: PREPARED        Error: 0
Job:  136 Name: K8SApplicationRollback State: AGENT_WAIT      Error: 0
Job:  136 Name: K8SApplicationRollback State: COMPLETED       Error: 0

To verify we have rolled back our database such that it only has one user, run the following command:

[demo@ocp-svc ~]# oc exec -n demo -i ${CLIENT_ADDRESS} -- curl -X GET http://localhost:9200/test-index/_search\?pretty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100   443  100   443    0     0   1853      0 --:--:-- --:--:-- --:--:--  1853
{
  "took" : 125,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "test-index",
        "_type" : "test",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "name" : "Bob"
        }
      }
    ]
  }
}

We have successfully rolled back the application to a stable state containing only valid users!

2.7. Clone the Elasticsearch Database

Robin lets you clone not just the storage volumes (PVCs) but the entire database application including all its resources such as Pods, StatefulSets, PVCs, Services, ConfigMaps, etc. with a single command.

Application cloning improves the collaboration across Dev/Test/Ops teams. Teams can share applications and data quickly, reducing the procedural delays involved in re-creating environments. Each team can work on their clone without affecting other teams. Clones are useful when you want to run a report on a database without affecting the source database application, or for performing UAT tests or for validating patches before applying them to the production database, etc.

Robin clones are ready-to-use “thin copies” of the entire app/database, not just storage volumes. Thin-copy here means that data from the snapshot is NOT physically copied, therefore clones can be made very quickly. Robin clones are fully-writable and any modifications made to the clone are not visible to the source application/database.

To create a clone from the existing snapshot created above, run the following command:

# robin app create from-snapshot <clone_name> <snapshotid> --wait

Note

The snapshot ID that is used should be the same as before.

The following output should be displayed when the above command is executed:

[demo@ocp-svc ~]# robin app create from-snapshot employees-clone 4ce8805e0a7511ebb05c39c799a31bb9 --wait
Job:  137 Name: K8SApplicationClone  State: VALIDATED       Error: 0
Job:  137 Name: K8SApplicationClone  State: PREPARED        Error: 0
Job:  137 Name: K8SApplicationClone  State: AGENT_WAIT      Error: 0
Job:  137 Name: K8SApplicationClone  State: FINALIZED       Error: 0
Job:  137 Name: K8SApplicationClone  State: COMPLETED       Error: 0

Let’s verify Robin has cloned all relevant Kubernetes resources by running the following command:

# robin app info <clone_name> --status

You should see an output similar to below:

[demo@ocp-svc ~]# robin app info employees-clone --status
+-----------------------+----------------------------------------------------------------------------+--------+---------+
| Kind                  | Name                                                                       | Status | Message |
+-----------------------+----------------------------------------------------------------------------+--------+---------+
| ConfigMap             | employees-clone-employees-elasticsearch-initcontainer                      | Ready  | -       |
| PersistentVolumeClaim | data-employees-clone-employees-elasticsearch-data-0                        | Bound  | -       |
| PersistentVolumeClaim | data-employees-clone-employees-elasticsearch-data-1                        | Bound  | -       |
| PersistentVolumeClaim | data-employees-clone-employees-elasticsearch-master-0                      | Bound  | -       |
| PersistentVolumeClaim | data-employees-clone-employees-elasticsearch-master-1                      | Bound  | -       |
| Pod                   | employees-clone-employees-elasticsearch-master-0                           | Ready  | -       |
| Pod                   | employees-clone-employees-elasticsearch-coordinating-only-5d67cb7d47-zznsp | Ready  | -       |
| Pod                   | employees-clone-employees-elasticsearch-master-1                           | Ready  | -       |
| Pod                   | employees-clone-employees-elasticsearch-data-1                             | Ready  | -       |
| Pod                   | employees-clone-employees-elasticsearch-coordinating-only-5d67cb7d47-bbkgz | Ready  | -       |
| Pod                   | employees-clone-employees-elasticsearch-data-0                             | Ready  | -       |
| Service               | employees-clone-employees-elasticsearch-coordinating-only                  | Ready  | -       |
| Service               | employees-clone-employees-elasticsearch-data                               | Ready  | -       |
| Service               | employees-clone-employees-elasticsearch-master                             | Ready  | -       |
| ReplicaSet            | employees-clone-employees-elasticsearch-coordinating-only-5d67cb7d47       | Ready  | -       |
| StatefulSet           | employees-clone-employees-elasticsearch-data                               | Ready  | -       |
| StatefulSet           | employees-clone-employees-elasticsearch-master                             | Ready  | -       |
| Deployment            | employees-clone-employees-elasticsearch-coordinating-only                  | Ready  | -       |
+-----------------------+----------------------------------------------------------------------------+--------+---------+

Notice that Robin automatically clones the required Kubernetes resources, not just storage volumes (PVCs), that are required to stand up a fully-functional clone of our database. After the clone operation is complete, the cloned database is ready for use.

Save the name of the master pod of the cloned Elasticsearch database by running the following command:

# export CLONE_CLIENT_ADDRESS=$(oc get pods --all-namespaces --field-selector=status.phase=Running | grep clone-employees-elasticsearch-master-0 -m 1| awk 'BEGIN {FS=" "}; {print $2}')

To verify we have successfully created a clone of our Elasticsearch database with all the data saved within the snapshot, run the following command:

[demo@ocp-svc ~]# oc exec -n demo -i ${CLONE_CLIENT_ADDRESS} -- curl -X GET http://localhost:9200/test-index/_search\?pretty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0{
  "took" : 137,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "test-index",
        "_type" : "test",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "name" : "Bob"
        }
      }
    ]
  }
}

Given that the data is identical (the same index and single user are present), we know we have successfully cloned the original Elasticsearch database.

Now, let’s make changes to the clone and ensure the original database remains unaffected by changes to the clone. Let’s add a user called ‘Sarah’ by running the following command:

# oc exec -i ${CLONE_CLIENT_ADDRESS} -- curl -H 'Content-Type: application/json' -XPOST http://localhost:9200/test-index/test/2 -d '{"name":"Sarah"}'

Let’s verify the second user has been added. You should see an output similar to the following:

[demo@ocp-svc ~]# oc exec -n demo -i ${CLONE_CLIENT_ADDRESS} -- curl -X GET http://localhost:9200/test-index/_search\?pretty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100   623  100   623    0     0    795      0 --:--:-- --:--:-- --:--:--   794
{
  "took" : 777,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 2,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "test-index",
        "_type" : "test",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "name" : "Bob"
        }
      },
      {
        "_index" : "test-index",
        "_type" : "test",
        "_id" : "2",
        "_score" : 1.0,
        "_source" : {
          "name" : "Sarah"
        }
      }
    ]
  }
}

As you can see from the output above, a second user named ‘Sarah’ has been added. To verify that our original Elasticsearch database is unaffected by changes to the clone, run the following command:

[demo@ocp-svc ~]# oc exec -n demo -i ${CLIENT_ADDRESS} -- curl -X GET http://localhost:9200/test-index/_search\?pretty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0{
  "took" : 82,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "test-index",
        "_type" : "test",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "name" : "Bob"
        }
      }
    ]
  }
}

As shown above, the original Elasticsearch database only has one user and thus was unaffected by the data insertion into the clone.

This means we can work on the original Elasticsearch database and the cloned database simultaneously and independently. This is valuable for collaboration across teams where each team needs to perform a unique set of operations.

To see a list of all the clones created by Robin run the following command:

# robin app list --app-types CLONE

You should see an output similar to the following:

[demo@ocp-svc ~]# robin app list --app-types CLONE
Helm/Flex Apps:

+-----------------+---------+--------+----------------------+--------------+-----------+---------+
| Name            | Type    | State  | Owner/Tenant         | Namespace    | Snapshots | Backups |
+-----------------+---------+--------+----------------------+--------------+-----------+---------+
| employees-clone | flexapp | ONLINE | admin/Administrators | demo         | 0         | 0       |
+-----------------+---------+--------+----------------------+--------------+-----------+---------+

Finally let’s delete the clone. Since the clone is just like any other Robin application, it can be deleted using the native app delete command provided by Robin, as shown below:

[demo@ocp-svc ~]# robin app delete employees-clone -y --force --wait
Job:  138 Name: K8SAppDelete         State: PROCESSED       Error: 0
Job:  138 Name: K8SAppDelete         State: PREPARED        Error: 0
Job:  138 Name: K8SAppDelete         State: AGENT_WAIT      Error: 0
Job:  138 Name: K8SAppDelete         State: FINALIZED       Error: 0
Job:  138 Name: K8SAppDelete         State: COMPLETED       Error: 0

2.8. Backup the Elasticsearch Database to Google Cloud Storage

Robin elevates the experience from backing up just storage volumes (PVCs) to backing up entire applications/databases, including their metadata and configuration alongside the stored data.

A backup is a full copy of the application snapshot that resides on completely different storage media than the application’s data. Therefore, backups are useful to restore an entire application from an external storage media in the event of catastrophic failures, such as disk errors, server failures, or entire data centers going offline, etc. This is assuming your backup doesn’t reside in the data center that is offline, of course.

Let’s now backup our database to an external secondary storage repository (repo). Snapshots (metadata + configuration + data) are backed up to the repo.

Robin allows one to back up Kubernetes applications to both AWS S3 and Google Cloud Storage (GCS). In this demo we will use GCS to create the backup.

Before we proceed, we need to prepare the credentials file in order to create the bucket and register the repository with Robin. Follow the documentation here for more details on how to create the credentials file.
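
If the Google Cloud SDK is configured on your machine, the bucket itself can be created up front; for example, using the bucket name referenced later in this tutorial (any globally unique name works):

# gsutil mb gs://demos-backup-tutorial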

After preparing the file, register the repo with Robin by running the below command:

# robin repo register <reponame> gcs://<path_to_bucket> <path_to_credentials_file> readwrite --wait

The following output should be displayed on execution of the above command:

[demo@ocp-svc ~]# robin repo register demo-backup gcs://demos-backup-tutorial /root/gcs.json readwrite --wait
Job:  139 Name: StorageRepoAdd       State: PROCESSED       Error: 0
Job:  139 Name: StorageRepoAdd       State: COMPLETED       Error: 0

Confirm that our secondary storage repository is successfully registered by issuing the following command:

[demo@ocp-svc ~]# robin repo list
+------------------+------+----------------------+--------------+-----------------------+--------+-------------+
| Name             | Type | Owner/Tenant         | BackupTarget | Bucket                | Path   | Permissions |
+------------------+------+----------------------+--------------+-----------------------+--------+-------------+
| demo-backup      | GCS  | admin/Administrators | 1            | demos-backup-tutorial | -      | readwrite   |
+------------------+------+----------------------+--------------+-----------------------+--------+-------------+

Let’s attach this repo to the application so that we can utilize the cloud storage to store backups of its respective snapshots with the following command:

# robin app attach-repo <release_name> <reponame> --wait

An example usage of the command is shown below:

[demo@ocp-svc ~]# robin app attach-repo employees demo-backup --wait
Job: 1185 Name: K8SApplicationAddRepo State: PROCESSED       Error: 0
Job: 1185 Name: K8SApplicationAddRepo State: COMPLETED       Error: 0

Confirm that the secondary storage repository is successfully attached to app with the below command:

# robin app info <release_name>

You should see an output similar to the following:

[demo@ocp-svc ~]# robin app info employees
Name                              : employees
Kind                              : helm
State                             : ONLINE
Number of repos                   : 1
Number of snapshots               : 1
Number of usable backups          : 0
Number of archived/failed backups : 0

Query:
-------
{'resources': [], 'apps': ['helm/employees@demo'], 'namespace': 'demo', 'selectors': []}

Repos:
+-------------+-----------------------+------+------------+
| Name        | Bucket                | Path | Permission |
+-------------+-----------------------+------+------------+
| demo-backup | demos-backup-tutorial | -    | readwrite  |
+-------------+-----------------------+------+------------+

Snapshots:
+----------------------------------+---------------------------+--------------------------------+--------+----------------------+
| Id                               | Name                      | Description                    | State  | Creation Time        |
+----------------------------------+---------------------------+--------------------------------+--------+----------------------+
| 4ce8805e0a7511ebb05c39c799a31bb9 | employees_single-user-bob | contains an employee named Bob | ONLINE | 09 Oct 2020 21:21:06 |
+----------------------------------+---------------------------+--------------------------------+--------+----------------------+

Next backup the snapshot to the remote GCS bucket by utilizing the following command:

# robin backup create <release_name> <reponame> --snapshotid <snapshotid> --backupname <backupname> --wait

You should see an output similar to the following:

[demo@ocp-svc ~]# robin backup create employees demo-backup --snapshotid 4ce8805e0a7511ebb05c39c799a31bb9 --backupname single_user_backup --wait
Creating app backup 'single_user_backup' from snapshot '4ce8805e0a7511ebb05c39c799a31bb9'
Job:  142 Name: K8SApplicationBackup State: PROCESSED       Error: 0
Job:  142 Name: K8SApplicationBackup State: AGENT_WAIT      Error: 0
Job:  142 Name: K8SApplicationBackup State: COMPLETED       Error: 0

Confirm that the backup has been copied to the GCS repo by querying the contents of the respective repository with the below command:

# robin repo contents <reponame>

Running the command, should result in output similar to the following:

[demo@ocp-svc ~]# robin repo contents demo-backup
+----------------------------------+------------+-------------+----------------------+-----------+---------------------------+----------+
| BackupID                         | ZoneID     | RepoName    | Owner/Tenant         | App       | Snapshot                  | Imported |
+----------------------------------+------------+-------------+----------------------+-----------+---------------------------+----------+
| 7647529c0a7d11eba8642f3d184b3333 | 1601588082 | demo-backup | admin/Administrators | employees | employees_single-user-bob | False    |
+----------------------------------+------------+-------------+----------------------+-----------+---------------------------+----------+

As you can see, the snapshot created previously has now been backed up to the specified GCS bucket. Make note of the backup ID, as we will use it in the next section.

2.9. Restore the Elasticsearch Database

Let’s simulate a system failure in which all local data is lost by first deleting the snapshot locally via the native snapshot delete command provided by Robin, as shown below:

[demo@ocp-svc ~]# robin snapshot delete 4ce8805e0a7511ebb05c39c799a31bb9 --yes --wait
Job:   143 Name: K8SSnapshotDelete    State: PREPARED        Error: 0
Job:   143 Name: K8SSnapshotDelete    State: COMPLETED       Error: 0

Next let’s remove the index we created earlier and verify all data is lost by running the following set of commands:

[demo@ocp-svc ~]# oc  exec -n demo -i ${CLIENT_ADDRESS} -- curl -H 'Content-Type: application/json' -XDELETE http://localhost:9200/test-index
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100    21  100    21    0     0    148      0 --:--:-- --:--:-- --:--:--   150
{"acknowledged":true}

[demo@ocp-svc ~]# oc exec -n demo -i ${CLIENT_ADDRESS} -- curl -X GET http://localhost:9200/test-index/_search\?pretty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100   538  100   538    0     0  19214      0 --:--:-- --:--:-- --:--:-- 19214
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [test-index]",
        "resource.type" : "index_or_alias",
        "resource.id" : "test-index",
        "index_uuid" : "_na_",
        "index" : "test-index"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [test-index]",
    "resource.type" : "index_or_alias",
    "resource.id" : "test-index",
    "index_uuid" : "_na_",
    "index" : "test-index"
  },
  "status" : 404
}

Since the data doesn’t exist locally anymore, we have to use the data stored within the backup in the external cloud repository. Let’s restore the snapshot from the backup and roll back our application to that snapshot by issuing the following command:

# robin app restore <release_name> --backupid <backupid> --wait

You should see an output similar to the following:

[demo@ocp-svc ~]# robin app restore employees --backupid 7647529c0a7d11eba8642f3d184b3333 --wait
Job:  144 Name: K8SApplicationRollback State: VALIDATED       Error: 0
Job:  144 Name: K8SApplicationRollback State: PREPARED        Error: 0
Job:  144 Name: K8SApplicationRollback State: AGENT_WAIT      Error: 0
Job:  144 Name: K8SApplicationRollback State: COMPLETED       Error: 0

Remember, we had deleted the local snapshot of our data. Let’s verify the above command has restored the snapshot stored in the cloud by running the following command:

[demo@ocp-svc ~]# robin snapshot list --app employees
+----------------------------------+--------+-----------+----------+---------------------------+
| Snapshot ID                      | State  | App Name  | App Kind | Snapshot name             |
+----------------------------------+--------+-----------+----------+---------------------------+
| 4ce8805e0a7511ebb05c39c799a31bb9 | ONLINE | employees | helm     | employees_single-user-bob |
+----------------------------------+--------+-----------+----------+---------------------------+

In addition, we need to verify that the index within our application contains the single user ‘Bob’. Issue the below command to do so:

[demo@ocp-svc ~]# oc exec -n demo -i ${CLIENT_ADDRESS} -- curl -X GET http://localhost:9200/test-index/_search\?pretty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0{
  "took" : 137,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "test-index",
        "_type" : "test",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "name" : "Bob"
        }
      }
    ]
  }
}

As you can see from the above output, we can restore the database to a stable state in the event of data corruption. This is achieved by simply pulling the backup from the cloud to recreate the snapshot and rolling the application back to the state stored in the aforementioned snapshot.

2.10. Create an Elasticsearch Database from the backup

In addition to recovering the snapshot from the Elasticsearch backup, we can also create an entirely new and independent application using the aforementioned backup by running the following command:

# robin app create from-backup <app_name> <backupid> --wait

You should see output similar to the following:

[demo@ocp-svc ~]# robin app create from-backup employees-bkp 7647529c0a7d11eba8642f3d184b3333 --wait
Job: 1093 Name: K8SApplicationCreate State: VALIDATED       Error: 0
Job: 1093 Name: K8SApplicationCreate State: PREPARED        Error: 0
Job: 1093 Name: K8SApplicationCreate State: AGENT_WAIT      Error: 0
Job: 1093 Name: K8SApplicationCreate State: COMPLETED       Error: 0

Ensure the application is up and running before continuing, as shown below:

[demo@ocp-svc ~]# robin app info employees-bkp --status
+-----------------------+--------------------------------------------------------------------------+--------+---------+
| Kind                  | Name                                                                     | Status | Message |
+-----------------------+--------------------------------------------------------------------------+--------+---------+
| ConfigMap             | employees-bkp-employees-elasticsearch-initcontainer                      | Ready  | -       |
| PersistentVolumeClaim | data-employees-bkp-employees-elasticsearch-data-0                        | Bound  | -       |
| PersistentVolumeClaim | data-employees-bkp-employees-elasticsearch-data-1                        | Bound  | -       |
| PersistentVolumeClaim | data-employees-bkp-employees-elasticsearch-master-0                      | Bound  | -       |
| PersistentVolumeClaim | data-employees-bkp-employees-elasticsearch-master-1                      | Bound  | -       |
| Pod                   | employees-bkp-employees-elasticsearch-master-0                           | Ready  | -       |
| Pod                   | employees-bkp-employees-elasticsearch-coordinating-only-5d67cb7d47-zznsp | Ready  | -       |
| Pod                   | employees-bkp-employees-elasticsearch-master-1                           | Ready  | -       |
| Pod                   | employees-bkp-employees-elasticsearch-data-1                             | Ready  | -       |
| Pod                   | employees-bkp-employees-elasticsearch-coordinating-only-5d67cb7d47-bbkgz | Ready  | -       |
| Pod                   | employees-bkp-employees-elasticsearch-data-0                             | Ready  | -       |
| Service               | employees-bkp-employees-elasticsearch-coordinating-only                  | Ready  | -       |
| Service               | employees-bkp-employees-elasticsearch-data                               | Ready  | -       |
| Service               | employees-bkp-employees-elasticsearch-master                             | Ready  | -       |
| ReplicaSet            | employees-bkp-employees-elasticsearch-coordinating-only-5d67cb7d47       | Ready  | -       |
| StatefulSet           | employees-bkp-employees-elasticsearch-data                               | Ready  | -       |
| StatefulSet           | employees-bkp-employees-elasticsearch-master                             | Ready  | -       |
| Deployment            | employees-bkp-employees-elasticsearch-coordinating-only                  | Ready  | -       |
+-----------------------+--------------------------------------------------------------------------+--------+---------+

Let’s verify the contents and integrity of the Elasticsearch database app by first saving the name of the master pod of the new application:

# export BACKUP_CLIENT_ADDRESS=$(oc get pods --all-namespaces --field-selector=status.phase=Running | grep bkp-employees-elasticsearch-master-0 -m 1| awk 'BEGIN {FS=" "}; {print $2}')

Query the ‘test-index’ we previously created and ensure that the record of the initial user ‘Bob’ still exists:

[demo@ocp-svc ~]# oc exec -n demo -i ${BACKUP_CLIENT_ADDRESS} -- curl -X GET http://localhost:9200/test-index/_search\?pretty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0{
  "took" : 137,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "test-index",
        "_type" : "test",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "name" : "Bob"
        }
      }
    ]
  }
}

To learn more about using Robin Storage on OpenShift, visit us at the Robin Storage solution page.

3. MongoDB on OpenShift

After successfully deploying and running stateless applications, a number of developers are exploring the possibility of running stateful workloads, such as MongoDB, on OpenShift. If you are considering extending OpenShift for stateful workloads, this tutorial will help you experiment on your existing OpenShift environment by providing step-by-step instructions.

This tutorial will walk you through:

  1. Deploy a MongoDB Database on OpenShift

  2. Connect to MongoDB Database using a client

  3. Add sample data to the MongoDB Database

  4. Verify the MongoDB Helm release has registered as an application

  5. Snapshot the MongoDB Database

  6. Rollback the MongoDB Database

  7. Clone the MongoDB Database

  8. Backup the MongoDB Database to AWS S3

  9. Restore the MongoDB Database

  10. Create MongoDB Database from the backup

3.1. Prerequisites: Install the Robin Operator on OpenShift and set up Helm

Robin Storage is an application-aware container storage that offers advanced data management capabilities and runs natively on OpenShift. Robin Storage delivers bare-metal performance and enables you to protect (via snapshots and backups), encrypt, collaborate (via clones and git like push/pull workflows) and make portable (via Cloud Repositories) stateful applications that are deployed using Helm Charts or Operators.

Before we deploy MongoDB on OpenShift, let’s first install the Robin operator on your existing OpenShift environment. You can install Robin directly from the OpenShift console by clicking on the OperatorHub tab. You can find further instructions here.

Let’s confirm that the OpenShift cluster is up and running.

# oc get nodes

You should see an output similar to below, with the status of each node marked as Ready

[demo@ocp-svc ~]# oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-128-218.us-west-1.compute.internal   Ready    worker   21d   v1.18.3+2cf11e2
ip-10-0-142-235.us-west-1.compute.internal   Ready    master   21d   v1.18.3+2cf11e2
ip-10-0-162-21.us-west-1.compute.internal    Ready    master   21d   v1.18.3+2cf11e2
ip-10-0-167-211.us-west-1.compute.internal   Ready    worker   21d   v1.18.3+2cf11e2
ip-10-0-220-43.us-west-1.compute.internal    Ready    worker   21d   v1.18.3+2cf11e2
ip-10-0-223-247.us-west-1.compute.internal   Ready    master   21d   v1.18.3+2cf11e2

Let’s confirm that Robin is up and running. Run the following command to verify that Robin is ready.

# oc get robincluster -n robinio

You should see an output similar to below.

[demo@ocp-svc ~]# oc get robincluster -n robinio
NAME    AGE
robin   12h

To get the link to download the Robin client, run:

# oc describe robincluster -n robinio

You should see an output similar to below:

[demo@ocp-svc ~]# oc describe robincluster -n robinio
Name:         robin
Namespace:    robinio
Labels:       app.kubernetes.io/instance=robin
              app.kubernetes.io/managed-by=robin.io
              app.kubernetes.io/name=robin
Annotations:  <none>
API Version:  manage.robin.io/v1
Kind:         RobinCluster
Metadata:
  Creation Timestamp:  2020-10-08T05:22:11Z
  Generation:          1
  Managed Fields:
    API Version:  manage.robin.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .:
          f:app.kubernetes.io/instance:
          f:app.kubernetes.io/managed-by:
          f:app.kubernetes.io/name:
      f:spec:
        .:
        f:host_type:
        f:image_robin:
        f:k8s_provider:
        f:options:
          .:
          f:cloud_cred_secret:
    Manager:      kubectl
    Operation:    Update
    Time:         2020-10-08T05:22:11Z
    API Version:  manage.robin.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:connect_command:
        f:get_robin_client:
        f:master_ip:
        f:phase:
        f:pod_status:
        f:robin_node_status:
    Manager:         robin-operator
    Operation:       Update
    Time:            2020-10-12T23:30:50Z
  Resource Version:  6405881
  Self Link:         /apis/manage.robin.io/v1/namespaces/robinio/robinclusters/robin
  UID:               5900dab8-5d7b-4ba8-81a2-8232bbf7f4d8
Spec:
  host_type:     ec2
  image_robin:   robinsys/robinimg:5.3.2-4
  k8s_provider:  openshift
  Options:
    cloud_cred_secret:  aws-secret
Status:
  connect_command:   kubectl exec -it robin-prj8c -n robinio -- bash
  get_robin_client:  curl -k https://10.0.167.211:29442/api/v3/robin_server/download?file=robincli&os=linux > robin
  master_ip:         10.0.167.211
  Phase:             Ready

Find the field get_robin_client and run the corresponding command to get the Robin client.

# curl -k https://10.0.167.211:29442/api/v3/robin_server/download?file=robincli&os=linux > robin

Change the file permissions for robin and copy it to /usr/local/bin to make it a system command.
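
For example, mirroring the steps from the Elasticsearch tutorial:

# chmod +x ./robin
# cp ./robin /usr/local/bin/robin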

In the same output above, notice the field master_ip and use it to set up your Robin client to work with your OpenShift cluster, by running the following command:

# robin client add-context 10.0.167.211 --set-current

Next, log in to your Robin cluster by running the following command:

# robin login admin --password Robin123

You should see an output similar to below:

[demo@ocp-svc ~]# robin login admin --password Robin123
User admin is logged into Administrators tenant
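
Mirroring the Elasticsearch tutorial, also register the namespace in which Robin should auto-discover Helm releases; ‘mongodbns’ is the namespace used for the MongoDB deployment later in this section:

# robin namespace add mongodbns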

Let’s add a stable Helm repository to pull Helm charts from. For this tutorial, we will use the Bitnami Helm repo. This repository has Helm charts designed to run on OpenShift.

# helm repo add bitnami https://charts.bitnami.com/ibm

You should see an output similar to below:

[demo@ocp-svc ~]# helm repo add bitnami https://charts.bitnami.com/ibm
"bitnami" has been added to your repositories

3.2. Deploy a MongoDB database on OpenShift

Now, let’s create a MongoDB database using Helm and Robin Storage. When we installed the Robin operator and created a “RobinCluster” custom resource definition, a StorageClass named “robin” was created and registered with OpenShift. We can now use this StorageClass to create PersistentVolumes and PersistentVolumeClaims for the pods in OpenShift. Using this StorageClass allows us to access the data management capabilities (such as snapshot, clone, backup) provided by Robin Storage.

On OpenShift 4.x, the security context for the MongoDB Helm chart should be updated to allow the containers to run in privileged mode. Fetch the MongoDB chart and make the below changes.

# helm fetch bitnami/mongodb
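
This downloads a tarball into the current working directory; it can be extracted with a command along the following lines (the exact name depends on the chart version fetched; mongodb-9.2.4 is the version used later in this tutorial):

# tar -xzf mongodb-9.2.4.tgz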

Once the mongodb chart has been extracted, update the following files: templates/standalone/dep-sts.yaml, templates/replicaset/statefulset.yaml, and templates/arbiter/statefulset.yaml. Within each of these files, edit the container’s securityContext section to be as follows in order to enable the container to run under the privileged security context constraint:

securityContext:
  privileged: true
  readOnlyRootFilesystem: false
  allowPrivilegeEscalation: true
  runAsNonRoot: true
  capabilities:
    add: ["SYS_ADMIN"]

Edit the values.yaml file such that the storageClass attribute is set to robin in order to take advantage of the data management capabilities Robin CNS offers. In addition to this, edit the fsGroup under podSecurityContext and the runAsUser under containerSecurityContext as shown below:

global:
  storageClass: robin

podSecurityContext:
  fsGroup: 1000620000

containerSecurityContext:
  runAsUser: 1000620000
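
The fsGroup and runAsUser values shown above are illustrative; they must fall within the UID range OpenShift assigns to the project (created in the next step). Once the project exists, the assigned range can be inspected via its annotations, for example:

# oc get namespace mongodbns -o yaml | grep sa.scc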

Create and log in to the ‘mongodbns’ project/namespace using the below oc commands and add the privileged security context constraint to the default service account.

# oc new-project mongodbns
# oc project mongodbns
# oc adm policy add-scc-to-user privileged -z default
# oc adm policy add-scc-to-user anyuid system:serviceaccount:mongodbns:default

Using the below Helm command, we will deploy a MongoDB instance.

Note: Robin will set up Helm3 on the host where it is installed; as a result, the below commands/output pertain to Helm3.

# helm install mongoapp /root/mongodb   --set persistence.storageClass=robin  --namespace mongodbns

Note: In the above command, ‘/root/mongodb’ is the directory where the modified files are present and ‘mongoapp’ is the name of the helm release.

Run the following command to verify the application is deployed and all relevant Kubernetes resources are ready.

# helm list -n mongodbns

You should be able to see an output showing the status of your MongoDB database.

# helm list -n mongodbns
NAME      NAMESPACE REVISION  UPDATED                                 STATUS    CHART         APP VERSION
mongoapp  mongodbns 1         2020-10-13 02:36:21.875446544 +0000 UTC deployed  mongodb-9.2.4 4.4.1

You will also want to make sure the relevant MongoDB application pods, services, deployment, and replicaset are in a good state before proceeding further. Run the following command to verify the pods are running.

# oc get all -n mongodbns | grep mongoapp

You should see an output similar to the following.

# oc get all -n mongodbns | grep mongoapp
pod/mongoapp-mongodb-5497469966-68wtb   1/1     Running   0          73m
service/mongoapp-mongodb   ClusterIP   172.30.38.122   <none>        27017/TCP   73m
deployment.apps/mongoapp-mongodb   1/1     1            1           73m
replicaset.apps/mongoapp-mongodb-5497469966   1         1         1       73m

Now that we know the MongoDB application is up and running, let’s get the root password.

# export MONGODB_ROOT_PASSWORD=$(oc get secret --namespace mongodbns mongoapp-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

We will use this environment variable later in the tutorial to add data into our database.

3.3. Connect to the MongoDB database using a client

To connect to your database, create a MongoDB client container:

# oc run --namespace mongodbns mongoapp-mongodb-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb:4.4.1-debian-10-r13 --command -- bash
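
Note that MONGODB_ROOT_PASSWORD was exported on the host shell, so it is not automatically visible inside the client pod. One option is to pass it through when creating the client container, for example:

# oc run --namespace mongodbns mongoapp-mongodb-client --rm --tty -i --restart='Never' --env MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD --image docker.io/bitnami/mongodb:4.4.1-debian-10-r13 --command -- bash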

Then, run the following command:

# mongo admin --host "mongoapp-mongodb" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

You should see an output similar to the below:

$ oc run --namespace mongodbns mongoapp-mongodb-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb:4.4.1-debian-10-r13 --command -- bash
If you don't see a command prompt, try pressing enter.
1001@mongoapp-mongodb-client:/$

You can connect to the mongodb database using the below command:

# mongo admin --host "mongoapp-mongodb" --authenticationDatabase admin -u root -p

You should see an output similar to the below:

$ mongo admin --host "mongoapp-mongodb" --authenticationDatabase admin -u root -p
MongoDB shell version v4.4.1
Enter password:
connecting to: mongodb://mongoapp-mongodb:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("cabd4d67-9ec0-4235-95d5-f73827e4a45a") }
MongoDB server version: 4.4.1
Welcome to the MongoDB shell.

3.4. Add sample data to the MongoDB database

Let’s create a database called robin.

> use robin
switched to db robin
> db
robin

Create a collection called link.

> db.createCollection("link")
{ "ok" : 1 }
> show collections
link

Let’s now add some data to this collection.

> db.link.insert({url: "https://robin.io/", description: "Automate Enterprise Applications"});
> db.link.insert({url: "https://www.google.com/", description: "Google Search"});
> db.link.insert({url: "https://www.bing.com/", description: "Bing Search"});

Verify that the data is added to the collection.

> db.link.find();
{ "_id" : ObjectId("5f8517e7332f846b8f9008e2"), "url" : "https://robin.io/", "description" : "Automate Enterprise Applications" }
{ "_id" : ObjectId("5f851928332f846b8f9008e3"), "url" : "https://www.google.com/", "description" : "Google Search" }
{ "_id" : ObjectId("5f85192f332f846b8f9008e4"), "url" : "https://www.bing.com/", "description" : "Bing Search" }

As you can see from the above output, we successfully created a database and a collection, and then added some data to the collection. Now, let’s take a look at the data management capabilities Robin brings, such as taking snapshots, making clones, and creating backups.

3.5. Verify the MongoDB Helm release has registered as an application

To benefit from the data management capabilities, we will register our MongoDB database with Robin. Doing so will let Robin map and track all resources associated with the Helm release in order to enable the advanced data management capabilities of the product.

Since we initially added the ‘mongodbns’ namespace in Robin for the admin user, Robin will auto-discover the Helm applications registered in that namespace. Verify this is the case by getting information and status for the application using the release name, by running the following command:

# robin app info mongoapp --status

You should see an output similar to this:

# robin app info mongoapp --status
+-----------------------+-----------------------------------+--------+---------+
| Kind                  | Name                              | Status | Message |
+-----------------------+-----------------------------------+--------+---------+
| ServiceAccount        | mongoapp-mongodb                  | Ready  | -       |
| Secret                | mongoapp-mongodb                  | Ready  | -       |
| PersistentVolumeClaim | mongoapp-mongodb                  | Bound  | -       |
| Pod                   | mongoapp-mongodb-5497469966-68wtb | Ready  | -       |
| Service               | mongoapp-mongodb                  | Ready  | -       |
| ReplicaSet            | mongoapp-mongodb-5497469966       | Ready  | -       |
| Deployment            | mongoapp-mongodb                  | Ready  | -       |
+-----------------------+-----------------------------------+--------+---------+

3.6. Snapshot the MongoDB Database

If you make a mistake, such as unintentionally deleting important data, you may be able to undo it by restoring a snapshot. Snapshots allow you to restore the state of your application to a point-in-time state saved within the snapshot.

Robin lets you snapshot not just the storage volumes (PVCs) but the entire database application including all its resources such as Pods, StatefulSets, PVCs, Services, ConfigMaps etc. with a single command. To create a snapshot, run the following command.

# robin snapshot create mongoapp --snapname link-3-recs  --desc "contains 3 records in the link collection" --wait
Job:  239 Name: K8SApplicationSnapshot State: VALIDATED       Error: 0
Job:  239 Name: K8SApplicationSnapshot State: PROCESSED       Error: 0
Job:  239 Name: K8SApplicationSnapshot State: WAITING         Error: 0
Job:  239 Name: K8SApplicationSnapshot State: COMPLETED       Error: 0

Let’s verify we have successfully created the snapshot.

# robin snapshot list --app mongoapp

You should see an output similar to this:

# robin snapshot list --app mongoapp
+----------------------------------+--------+----------+----------+----------------------+
| Snapshot ID                      | State  | App Name | App Kind | Snapshot name        |
+----------------------------------+--------+----------+----------+----------------------+
| c9689aca0d0911ebace6af20df9778b7 | ONLINE | mongoapp | helm     | mongoapp_link-3-recs |
+----------------------------------+--------+----------+----------+----------------------+

We now have a snapshot of our entire database that contains 3 records in the link collection.

3.7. Rollback the MongoDB database

Let’s simulate a user error by deleting one of the records from the link collection:

> use robin
switched to db robin
> db.link.remove({"url" : "https://www.bing.com/"});
WriteResult({ "nRemoved" : 1 })

Query the link collection to make sure there are only 2 records:

> db.link.find();
{ "_id" : ObjectId("5f8517e7332f846b8f9008e2"), "url" : "https://robin.io/", "description" : "Automate Enterprise Applications" }
{ "_id" : ObjectId("5f851928332f846b8f9008e3"), "url" : "https://www.google.com/", "description" : "Google Search" }

Let’s run the following command to see the available snapshots:

# robin app info mongoapp

You should see an output similar to the following. Note the snapshot ID, as we will use it in the next command.

# robin app info mongoapp
Name                              : mongoapp
Kind                              : helm
State                             : ONLINE
Number of repos                   : 0
Number of snapshots               : 1
Number of usable backups          : 0
Number of archived/failed backups : 0

Query:
-------
{'selectors': [], 'namespace': 'mongodbns', 'resources': [], 'apps': ['helm/mongoapp@mongodbns']}

Snapshots:
+----------------------------------+----------------------+-------------------------------------------+--------+----------------------+
| Id                               | Name                 | Description                               | State  | Creation Time        |
+----------------------------------+----------------------+-------------------------------------------+--------+----------------------+
| c9689aca0d0911ebace6af20df9778b7 | mongoapp_link-3-recs | contains 3 records in the link collection | ONLINE | 13 Oct 2020 04:09:03 |
+----------------------------------+----------------------+-------------------------------------------+--------+----------------------+

Now, let’s rollback to the point where we had 3 records in the link collection. To do this run the following command using the snapshot ID displayed above:

# robin app restore mongoapp --snapshotid Your_Snapshot_ID --wait
# robin app restore mongoapp --snapshotid c9689aca0d0911ebace6af20df9778b7 --wait
Job:  243 Name: K8SApplicationRollback State: VALIDATED       Error: 0
Job:  243 Name: K8SApplicationRollback State: PREPARED        Error: 0
Job:  243 Name: K8SApplicationRollback State: AGENT_WAIT      Error: 0
Job:  243 Name: K8SApplicationRollback State: COMPLETED       Error: 0

To verify we have rolled back our database such that the link collection again contains all 3 records, run the following commands:

> use robin
switched to db robin
> db.link.find();
{ "_id" : ObjectId("5f8517e7332f846b8f9008e2"), "url" : "https://robin.io/", "description" : "Automate Enterprise Applications" }
{ "_id" : ObjectId("5f851928332f846b8f9008e3"), "url" : "https://www.google.com/", "description" : "Google Search" }
{ "_id" : ObjectId("5f85192f332f846b8f9008e4"), "url" : "https://www.bing.com/", "description" : "Bing Search" }

We have successfully rolled back to our original state containing 3 records!

3.8. Clone the MongoDB Database

Robin lets you clone not just the storage volumes (PVCs) but the entire database application including all its resources such as Pods, StatefulSets, PVCs, Services, ConfigMaps, etc. with a single command.

Application cloning improves the collaboration across Dev/Test/Ops teams. Teams can share applications and data quickly, reducing the procedural delays involved in re-creating environments. Each team can work on their clone without affecting other teams. Clones are useful when you want to run a report on a database without affecting the source database application, or for performing UAT tests or for validating patches before applying them to the production database, etc.

Robin clones are ready-to-use “thin copies” of the entire app/database, not just storage volumes. Thin-copy means that data from the snapshot is NOT physically copied, therefore clones can be made very quickly. Robin clones are fully-writable and any modifications made to the clone are not visible to the source app/database.

To create a clone from the existing snapshot created above, run the following command. Use the snapshot ID we retrieved above.

# robin app create from-snapshot mongoapp-clone Your_Snapshot_ID --wait
# robin app create from-snapshot mongoapp-clone c9689aca0d0911ebace6af20df9778b7 --wait
Job:  244 Name: K8SApplicationClone  State: VALIDATED       Error: 0
Job:  244 Name: K8SApplicationClone  State: PREPARED        Error: 0
Job:  244 Name: K8SApplicationClone  State: AGENT_WAIT      Error: 0
Job:  244 Name: K8SApplicationClone  State: COMPLETED       Error: 0

Let’s verify Robin has cloned all relevant Kubernetes resources by running the following command:

# robin app info mongoapp-clone --status

You should see an output similar to below.

# robin app info mongoapp-clone --status
+-----------------------+--------------------------------------------------+--------+---------+
| Kind                  | Name                                             | Status | Message |
+-----------------------+--------------------------------------------------+--------+---------+
| ServiceAccount        | mongoapp-clone-mongoapp-mongodb                  | Ready  | -       |
| Secret                | mongoapp-clone-mongoapp-mongodb                  | Ready  | -       |
| PersistentVolumeClaim | mongoapp-clone-mongoapp-mongodb                  | Bound  | -       |
| Pod                   | mongoapp-clone-mongoapp-mongodb-5497469966-68wtb | Ready  | -       |
| Service               | mongoapp-clone-mongoapp-mongodb                  | Ready  | -       |
| ReplicaSet            | mongoapp-clone-mongoapp-mongodb-5497469966       | Ready  | -       |
| Deployment            | mongoapp-clone-mongoapp-mongodb                  | Ready  | -       |
+-----------------------+--------------------------------------------------+--------+---------+

Notice that Robin automatically clones the required Kubernetes resources, not just storage volumes (PVCs), that are required to stand up a fully-functional clone of our database. After the clone is complete, the cloned database is ready for use.

Get the root password of the cloned MongoDB database by running the following command:

# export CLONE_MONGODB_ROOT_PASSWORD=$(oc get secret --namespace mongodbns mongoapp-clone-mongoapp-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

Start a mongo client if one is not already running:

# oc run --namespace mongodbns mongoapp-mongodb-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb:4.4.1-debian-10-r13 --command -- bash

Connect to the cloned database from the mongo client:

# mongo admin --host "mongoapp-clone-mongoapp-mongodb" --authenticationDatabase admin -u root -p

You should see an output similar to the following:

$ mongo admin --host "mongoapp-clone-mongoapp-mongodb" --authenticationDatabase admin -u root -p
MongoDB shell version v4.4.1
Enter password:
connecting to: mongodb://mongoapp-clone-mongoapp-mongodb:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("99971c9d-1e5f-4645-8e1c-981ab8fce436") }
MongoDB server version: 4.4.1
Welcome to the MongoDB shell.

Now, run the below commands to switch the database and query the collection:

> use robin;
switched to db robin
>
> db.link.find();
{ "_id" : ObjectId("5f8517e7332f846b8f9008e2"), "url" : "https://robin.io/", "description" : "Automate Enterprise Applications" }
{ "_id" : ObjectId("5f851928332f846b8f9008e3"), "url" : "https://www.google.com/", "description" : "Google Search" }
{ "_id" : ObjectId("5f85192f332f846b8f9008e4"), "url" : "https://www.bing.com/", "description" : "Bing Search" }
>

We have successfully created a clone of our original MongoDB database, and the cloned database also has a collection called ‘link’ and 3 records. Thus it is identical to our original database.

Now, let’s make changes to the clone and verify the original database remains unaffected by changes to the clone. Let’s add a record to the link collection:

> db.link.insert({url: "https://get.robin.io/", description: "Supercharge Kubernetes to Run Stateful apps"});
WriteResult({ "nInserted" : 1 })
>

Let’s verify that the new record has been added. You should see an output similar to the following:

> db.link.find();
{ "_id" : ObjectId("5f8517e7332f846b8f9008e2"), "url" : "https://robin.io/", "description" : "Automate Enterprise Applications" }
{ "_id" : ObjectId("5f851928332f846b8f9008e3"), "url" : "https://www.google.com/", "description" : "Google Search" }
{ "_id" : ObjectId("5f85192f332f846b8f9008e4"), "url" : "https://www.bing.com/", "description" : "Bing Search" }
{ "_id" : ObjectId("5f852f24163d3000eb30f77d"), "url" : "https://get.robin.io/", "description" : "Supercharge Kubernetes to Run Stateful apps" }
>

As you can see from the output above, a fourth record has been added. To verify that our original MongoDB database is unaffected by changes to the clone, run the following command to exit the mongo shell of the cloned database:

> exit
bye

Now, connect to the original database by running the below command:

# mongo admin --host "mongoapp-mongodb" --authenticationDatabase admin -u root -p

You should see an output similar to the following:

$ mongo admin --host "mongoapp-mongodb" --authenticationDatabase admin -u root -p
MongoDB shell version v4.4.1
Enter password:
connecting to: mongodb://mongoapp-mongodb:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("3d7a1a85-8189-4263-8060-83054aabf742") }
MongoDB server version: 4.4.1
Welcome to the MongoDB shell.
For interactive help, type "help".

Now, switch the database to robin and query the link collection:

> use robin;
switched to db robin
> db.link.find();
{ "_id" : ObjectId("5f8517e7332f846b8f9008e2"), "url" : "https://robin.io/", "description" : "Automate Enterprise Applications" }
{ "_id" : ObjectId("5f851928332f846b8f9008e3"), "url" : "https://www.google.com/", "description" : "Google Search" }
{ "_id" : ObjectId("5f85192f332f846b8f9008e4"), "url" : "https://www.bing.com/", "description" : "Bing Search" }
>

As you can see from the output above, our original MongoDB database only has 3 records and thus was unaffected by the data insertion into the clone.

This means we can work on the original MongoDB database and the cloned database simultaneously without affecting each other. This is valuable for collaboration across teams where each team needs to perform a unique set of operations.

To see a list of all the clones created by Robin run the following command:

# robin app list --app-types CLONE

You should see an output similar to the following:

# robin app list --app-types CLONE
Helm/Flex Apps:

+----------------+---------+--------+----------------------+-----------+-----------+---------+
| Name           | Type    | State  | Owner/Tenant         | Namespace | Snapshots | Backups |
+----------------+---------+--------+----------------------+-----------+-----------+---------+
| mongoapp       | helm    | ONLINE | admin/Administrators | mongodbns | 1         | 0       |
| mongoapp-clone | flexapp | ONLINE | admin/Administrators | mongodbns | 0         | 0       |
+----------------+---------+--------+----------------------+-----------+-----------+---------+

Now let’s delete the clone. Since the clone is just like any other Robin application, it can be deleted using the native app delete command provided by Robin, as shown below:

# robin app delete mongoapp-clone -y --force --wait
Job:  245 Name: K8SAppDelete         State: PROCESSED       Error: 0
Job:  245 Name: K8SAppDelete         State: PREPARED        Error: 0
Job:  245 Name: K8SAppDelete         State: AGENT_WAIT      Error: 0
Job:  245 Name: K8SAppDelete         State: FINALIZED       Error: 0
Job:  245 Name: K8SAppDelete         State: COMPLETED       Error: 0

3.9. Backup the MongoDB Database to Google Cloud Storage (GCS)

Robin elevates the experience from backing up just storage volumes (PVCs) to backing up entire applications/databases, including their metadata, configuration, and data.

A backup is a full copy of the application snapshot that resides on completely different storage media than the application’s data. Therefore, backups are useful to restore an entire application from an external storage media in the event of catastrophic failures, such as disk errors, server failures, or entire data centers going offline, etc. This is assuming your backup doesn’t reside in the data center that is offline, of course.

Let’s now backup our database to an external secondary storage repository (repo). Snapshots (metadata + configuration + data) are backed up to the repo.

Robin allows one to back up Kubernetes applications to both AWS S3 and Google Cloud Storage (GCS). In this tutorial we will use GCS to create the backup.

Before we proceed, we need to prepare the credentials file in order to create the bucket and register the repository with Robin. Follow the documentation here.
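
For reference, a GCS service-account key file (the gcs.json referenced below) typically has the following shape; the values here are placeholders, not working credentials:

{
  "type": "service_account",
  "project_id": "your-gcp-project",
  "private_key_id": "<key-id>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "backup-sa@your-gcp-project.iam.gserviceaccount.com",
  "client_id": "<client-id>"
}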

After preparing the file, register the repo with Robin by running the below command:

# robin repo register mongodbns-backup gcs://demos-backup-tutorial /root/gcs.json readwrite --wait
Job:  246 Name: StorageRepoAdd       State: PREPARED        Error: 0
Job:  246 Name: StorageRepoAdd       State: COMPLETED       Error: 0

Confirm that our secondary storage repository is successfully registered by issuing the following command:

# robin repo list
+------------------+------+----------------------+--------------+-----------------------+------+-------------+
| Name             | Type | Owner/Tenant         | BackupTarget | Bucket                | Path | Permissions |
+------------------+------+----------------------+--------------+-----------------------+------+-------------+
| mongodbns-backup | GCS  | admin/Administrators | 1            | demos-backup-tutorial | -    | readwrite   |
+------------------+------+----------------------+--------------+-----------------------+------+-------------+

Let’s attach this repo to the application so that we can utilize the cloud storage to store backups of its respective snapshots:

# robin app attach-repo mongoapp mongodbns-backup --wait
Job:  247 Name: K8SApplicationAddRepo State: VALIDATED       Error: 0
Job:  247 Name: K8SApplicationAddRepo State: COMPLETED       Error: 0

Confirm that the secondary storage repository is successfully attached to the app with the below command:

# robin app info mongoapp

You should see an output similar to the following:

# robin app info mongoapp
Name                              : mongoapp
Kind                              : helm
State                             : ONLINE
Number of repos                   : 1
Number of snapshots               : 1
Number of usable backups          : 0
Number of archived/failed backups : 0

Query:
-------
{'selectors': [], 'apps': ['helm/mongoapp@mongodbns'], 'namespace': 'mongodbns', 'resources': []}

Repos:
+------------------+-----------------------+------+------------+
| Name             | Bucket                | Path | Permission |
+------------------+-----------------------+------+------------+
| mongodbns-backup | demos-backup-tutorial | -    | readwrite  |
+------------------+-----------------------+------+------------+

Snapshots:
+----------------------------------+----------------------+-------------------------------------------+--------+----------------------+
| Id                               | Name                 | Description                               | State  | Creation Time        |
+----------------------------------+----------------------+-------------------------------------------+--------+----------------------+
| c9689aca0d0911ebace6af20df9778b7 | mongoapp_link-3-recs | contains 3 records in the link collection | ONLINE | 13 Oct 2020 04:09:03 |
+----------------------------------+----------------------+-------------------------------------------+--------+----------------------+

Back up the snapshot to the remote GCS bucket using the following command:

# robin backup create mongoapp mongodbns-backup --snapshotid Your_Snapshot_ID --backupname link_backup --wait

You should see an output similar to the following:

# robin backup create mongoapp mongodbns-backup --snapshotid c9689aca0d0911ebace6af20df9778b7 --backupname link_backup --wait
Creating app backup 'link_backup' from snapshot 'c9689aca0d0911ebace6af20df9778b7'
Job:  248 Name: K8SApplicationBackup State: PROCESSED       Error: 0
Job:  248 Name: K8SApplicationBackup State: AGENT_WAIT      Error: 0
Job:  248 Name: K8SApplicationBackup State: COMPLETED       Error: 0

Confirm that the backup has been copied to the GCS repo by running the following command:

# robin repo contents mongodbns-backup
+----------------------------------+------------+------------------+----------------------+----------+----------------------+----------+
| BackupID                         | ZoneID     | RepoName         | Owner/Tenant         | App      | Snapshot             | Imported |
+----------------------------------+------------+------------------+----------------------+----------+----------------------+----------+
| 89d6cbde0d1111eba8b7b71d742fe09d | 1602134570 | mongodbns-backup | admin/Administrators | mongoapp | mongoapp_link-3-recs | False    |
+----------------------------------+------------+------------------+----------------------+----------+----------------------+----------+

As you can see from the output above, the snapshot created previously has now been backed up to the specified GCS bucket. Note the backup ID, as we will use it in the next section.
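
To avoid retyping the backup ID in later commands, you could, for example, stash it in an environment variable (using the value from your own output):

# export BACKUP_ID=89d6cbde0d1111eba8b7b71d742fe09d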

3.10. Restore the MongoDB Database

Let’s simulate a system failure where all local data is lost by first deleting the snapshot locally.

# robin snapshot delete c9689aca0d0911ebace6af20df9778b7 --yes --wait
Job:  249 Name: K8SSnapshotDelete    State: VALIDATED       Error: 0
Job:  249 Name: K8SSnapshotDelete    State: COMPLETED       Error: 0

Next, let’s remove the collection we created earlier and verify all data is lost by running the following set of commands.

Now, connect to the original database by running the below command:

# mongo admin --host "mongoapp-mongodb" --authenticationDatabase admin -u root -p

You should see an output similar to the following:

$ mongo admin --host "mongoapp-mongodb" --authenticationDatabase admin -u root -p
MongoDB shell version v4.4.1
Enter password:
connecting to: mongodb://mongoapp-mongodb:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("3d7a1a85-8189-4263-8060-83054aabf742") }
MongoDB server version: 4.4.1
Welcome to the MongoDB shell.

Now, switch the database to robin and query the link collection:

> use robin;
switched to db robin
> db.link.find();
{ "_id" : ObjectId("5f8517e7332f846b8f9008e2"), "url" : "https://robin.io/", "description" : "Automate Enterprise Applications" }
{ "_id" : ObjectId("5f851928332f846b8f9008e3"), "url" : "https://www.google.com/", "description" : "Google Search" }
{ "_id" : ObjectId("5f85192f332f846b8f9008e4"), "url" : "https://www.bing.com/", "description" : "Bing Search" }
>
> show collections
link
>

Drop the collection and verify that the link collection doesn’t exist anymore:

>
> db.link.drop()
true
>
> show collections
>

Since we don’t have any data locally anymore, we have to use the data stored within the backup in the external cloud repository. As a result, let’s restore the snapshot from the backup and roll back our application to that snapshot by issuing the following command.

# robin app restore mongoapp --backupid 89d6cbde0d1111eba8b7b71d742fe09d --wait

You should see an output similar to the following:

# robin app restore mongoapp --backupid 89d6cbde0d1111eba8b7b71d742fe09d --wait
Job:  250 Name: K8SApplicationRestore State: PROCESSED       Error: 0
Job:  250 Name: K8SApplicationRestore State: WAITING         Error: 0
Job:  250 Name: K8SApplicationRestore State: COMPLETED       Error: 0

Remember, we had deleted the local snapshot of our data. Let’s verify the above command has restored the snapshot stored in the cloud by running the following command:

# robin snapshot list --app mongoapp
+----------------------------------+--------+----------+----------+----------------------+
| Snapshot ID                      | State  | App Name | App Kind | Snapshot name        |
+----------------------------------+--------+----------+----------+----------------------+
| c9689aca0d0911ebace6af20df9778b7 | ONLINE | mongoapp | helm     | mongoapp_link-3-recs |
+----------------------------------+--------+----------+----------+----------------------+

In addition, we need to verify that the collection is restored:

Connect to mongodb from the client:

# mongo admin --host "mongoapp-mongodb" --authenticationDatabase admin -u root -p

Run the below commands in the mongo shell to verify that the link collection is restored and shows all the records:

> use robin;
switched to db robin
>
>
> show collections
link
>
>
> db.link.find();
{ "_id" : ObjectId("5f8517e7332f846b8f9008e2"), "url" : "https://robin.io/", "description" : "Automate Enterprise Applications" }
{ "_id" : ObjectId("5f851928332f846b8f9008e3"), "url" : "https://www.google.com/", "description" : "Google Search" }
{ "_id" : ObjectId("5f85192f332f846b8f9008e4"), "url" : "https://www.bing.com/", "description" : "Bing Search" }
>

As you can see from the above output, we can restore the database to a stable state in the event of data corruption. This is achieved by simply pulling the backup from the cloud to recreate the snapshot and rolling the application back to the state stored in the aforementioned snapshot.

3.11. Create MongoDB Database from the backup

Since we have taken a backup of the MongoDB database, we can create a new app from the backup and verify the data integrity of the MongoDB database via the following command:

# robin app create from-backup <app_name> <Your_backupID> --wait

An example of its usage is shown below:

# robin app create from-backup mongoapp-bkp 89d6cbde0d1111eba8b7b71d742fe09d -n mongodbns --wait
Job:  255 Name: K8SApplicationCreate State: VALIDATED       Error: 0
Job:  255 Name: K8SApplicationCreate State: PREPARED        Error: 0
Job:  255 Name: K8SApplicationCreate State: AGENT_WAIT      Error: 0
Job:  255 Name: K8SApplicationCreate State: FINALIZED       Error: 0
Job:  255 Name: K8SApplicationCreate State: COMPLETED       Error: 0
# robin app info  mongoapp-bkp --status
+-----------------------+------------------------------------------------+--------+---------+
| Kind                  | Name                                           | Status | Message |
+-----------------------+------------------------------------------------+--------+---------+
| ServiceAccount        | mongoapp-bkp-mongoapp-mongodb                  | Ready  | -       |
| Secret                | mongoapp-bkp-mongoapp-mongodb                  | Ready  | -       |
| PersistentVolumeClaim | mongoapp-bkp-mongoapp-mongodb                  | Bound  | -       |
| Pod                   | mongoapp-bkp-mongoapp-mongodb-5497469966-68wtb | Ready  | -       |
| Service               | mongoapp-bkp-mongoapp-mongodb                  | Ready  | -       |
| ReplicaSet            | mongoapp-bkp-mongoapp-mongodb-5497469966       | Ready  | -       |
| Deployment            | mongoapp-bkp-mongoapp-mongodb                  | Ready  | -       |
+-----------------------+------------------------------------------------+--------+---------+

Let’s verify the contents of the new MongoDB database app.

Get the root password of the MongoDB database created from the backup by running the following command:

# export BKP_MONGODB_ROOT_PASSWORD=$(oc get secret --namespace mongodbns mongoapp-bkp-mongoapp-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

Start a mongo client if one is not already running:

# oc run --namespace mongodbns mongoapp-mongodb-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb:4.4.1-debian-10-r13 --command -- bash

Connect to the new database from the mongo client:

# mongo admin --host "mongoapp-bkp-mongoapp-mongodb" --authenticationDatabase admin -u root -p

You should see an output similar to the following:

$ mongo admin --host "mongoapp-bkp-mongoapp-mongodb" --authenticationDatabase admin -u root -p
MongoDB shell version v4.4.1
Enter password:
connecting to: mongodb://mongoapp-bkp-mongoapp-mongodb:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("dec3671a-fa88-45ba-809e-dbe354ec16f1") }
MongoDB server version: 4.4.1
Welcome to the MongoDB shell.

Now, run the below commands to switch the database and query the collection:

> use robin;
switched to db robin
>
> db.link.find();
{ "_id" : ObjectId("5f8517e7332f846b8f9008e2"), "url" : "https://robin.io/", "description" : "Automate Enterprise Applications" }
{ "_id" : ObjectId("5f851928332f846b8f9008e3"), "url" : "https://www.google.com/", "description" : "Google Search" }
{ "_id" : ObjectId("5f85192f332f846b8f9008e4"), "url" : "https://www.bing.com/", "description" : "Bing Search" }
>

We have successfully created a new instance of the MongoDB database using the backup, and the new database also has a collection called ‘link’ with 3 records. Thus it is identical to our original database.

To learn more about using Robin Storage on OpenShift, visit the Robin Storage solution page.

4. PostgreSQL on OpenShift

After successfully deploying and running stateless applications, a number of developers are exploring the possibility of running stateful workloads, such as PostgreSQL, on OpenShift. If you are considering extending OpenShift for stateful workloads, this tutorial will help you experiment on your existing OpenShift environment by providing step-by-step instructions.

This tutorial will walk you through:

  1. How to deploy a PostgreSQL database on OpenShift using the Robin Operator and Helm3

  2. Add sample data to the PostgreSQL database

  3. Verify the Helm release has registered as an application

  4. Create a point-in-time snapshot of the PostgreSQL database

  5. Simulate a user error and rollback to a stable state using the snapshot

  6. Clone the database for the purpose of collaboration

  7. Backup the database to the cloud using AWS S3 bucket

  8. Simulate data loss/corruption and use the backup to restore the database

  9. Create PostgreSQL Database from the backup

4.1. Prerequisites: Install the Robin Operator on OpenShift and set up Helm

Robin Storage is an application-aware container storage that offers advanced data management capabilities and runs natively on OpenShift. Robin Storage delivers bare-metal performance and enables you to protect (via snapshots and backups), encrypt, collaborate (via clones and git like push/pull workflows) and make portable (via Cloud Repositories) stateful applications that are deployed using Helm Charts or Operators.

Before we deploy PostgreSQL on OpenShift, let’s first install the Robin operator on your existing OpenShift environment. You can install Robin directly from the OpenShift console by clicking on the OperatorHub tab. You can find further instructions here.

Let’s confirm that the OpenShift cluster is up and running.

# oc get nodes

You should see an output similar to below, with the status of each node marked as Ready

[demo@ocp-svc ~]# oc get nodes
NAME                   STATUS   ROLES           AGE     VERSION
ocp-cp-1.lab.ocp.lan   Ready    master,worker   4d13h   v1.18.3+47c0e71
ocp-cp-2.lab.ocp.lan   Ready    master,worker   4d13h   v1.18.3+47c0e71
ocp-cp-3.lab.ocp.lan   Ready    master,worker   4d13h   v1.18.3+47c0e71
ocp-w-1.lab.ocp.lan    Ready    worker          4d13h   v1.18.3+47c0e71
ocp-w-2.lab.ocp.lan    Ready    worker          4d13h   v1.18.3+47c0e71

Let’s confirm that Robin is up and running. Run the following command to verify that Robin is ready.

# oc get robincluster -n robinio

You should see an output similar to below.

[demo@ocp-svc ~]# oc get robincluster -n robinio
NAME    AGE
robin   12h

To get the link to download the Robin client, run:

# oc describe robincluster -n robinio

You should see an output similar to below:

[demo@ocp-svc ~]# oc describe robinclusters -n robinio
Name:         robin
Namespace:    robinio
Labels:       app.kubernetes.io/instance=robin
              app.kubernetes.io/managed-by=robin.io
              app.kubernetes.io/name=robin
Annotations:  <none>
API Version:  manage.robin.io/v1
Kind:         RobinCluster
Metadata:
  Creation Timestamp:  2020-09-29T19:31:27Z
  Generation:          1
  Managed Fields:
    API Version:  manage.robin.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .:
          f:app.kubernetes.io/instance:
          f:app.kubernetes.io/managed-by:
          f:app.kubernetes.io/name:
      f:spec:
        .:
        f:host_type:
        f:image_robin:
        f:k8s_provider:
    Manager:      oc
    Operation:    Update
    Time:         2020-09-29T19:31:27Z
    API Version:  manage.robin.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:connect_command:
        f:get_robin_client:
        f:master_ip:
        f:phase:
        f:pod_status:
        f:robin_node_status:
    Manager:         robin-operator
    Operation:       Update
    Time:            2020-09-30T12:20:21Z
  Resource Version:  1788633
  Self Link:         /apis/manage.robin.io/v1/namespaces/robinio/robinclusters/robin
  UID:               c53fb011-56ae-490c-a2a2-0b5b19f03082
Spec:
  host_type:     physical
  image_robin:   robinsys/robinimg:5.3.2-506
  k8s_provider:  openshift
Status:
  connect_command:   kubectl exec -it robin-z27qg -n robinio -- bash
  get_robin_client:  curl -k https://192.168.22.203:29442/api/v3/robin_server/download?file=robincli&os=linux > robin
  master_ip:         192.168.22.203
  Phase:             Ready

Find the field ‘get_robin_client’ and run the corresponding command to get the Robin client.

# curl -k "https://192.168.22.203:29442/api/v3/robin_server/download?file=robincli&os=linux" > robin

Change the file permissions for robin and copy it to /usr/local/bin to make it available as a system command.
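
For example:

# chmod +x robin
# cp robin /usr/local/bin/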

In the same output above, notice the field ‘master_ip’ and use it to set up your Robin client to work with your OpenShift cluster, by running the following commands.

# robin client add-context 192.168.22.203 --set-current
# robin login admin --password Robin123
# robin namespace add demo

Let’s add a stable Helm repository to pull Helm charts from. For this tutorial, we will use the IBM Community Helm repo. This repository has Helm charts designed to run on OpenShift.

# helm repo add ibm-community https://raw.githubusercontent.com/IBM/charts/master/repo/community
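
After adding the repository, refresh the local chart index so the latest chart versions are visible:

# helm repo update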

4.2. Deploy a PostgreSQL database on OpenShift

Now, let’s create a PostgreSQL database using Helm and Robin Storage. When we installed the Robin operator and created a “Robincluster” custom resource, a StorageClass named “robin” was created and registered with OpenShift. We can now use this StorageClass to create PersistentVolumes and PersistentVolumeClaims for the pods in OpenShift. Using this StorageClass allows us to access the data management capabilities (such as snapshot, clone, backup) provided by Robin Storage.
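
You can confirm the robin StorageClass is present before deploying:

# oc get storageclass robin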

On OpenShift 4.x, the postgresql chart’s security context should be updated to allow the containers to run in privileged mode. Fetch the postgresql chart and make the below changes.

# helm fetch ibm-community/postgresql

Untar the postgresql chart files and update statefulset.yaml with the below values to allow the container to run under the privileged SCC:

containers:
  - name: {{ template "postgresql.fullname" . }}
    image: {{ template "postgresql.image" . }}
    imagePullPolicy: "{{ .Values.image.pullPolicy }}"
    {{- if .Values.resources }}
    resources: {{- toYaml .Values.resources | nindent 12 }}
    {{- end }}
    {{- if .Values.securityContext.enabled }}
    securityContext:
      privileged: true
      readOnlyRootFilesystem: false
      allowPrivilegeEscalation: true
      runAsNonRoot: true
      capabilities:
        add: ["SYS_ADMIN"]
    {{- end }}

Log in to the demo project/namespace using the oc command and add the default service account to the privileged SCC:

[demo@ocp-svc ~]# oc project demo
Now using project "demo" on server "https://api.lab.ocp.lan:6443"
[demo@ocp-svc ~]# oc adm policy add-scc-to-user privileged -z default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "default"

Using the below Helm command, we will deploy a PostgreSQL instance. (postgresql is the directory where the modified statefulset.yaml file is present and movies is the name of the helm release).

# helm install movies postgresql --set persistence.storageClass=robin,persistence.useDynamicProvisioning=true --namespace demo --set postgresqlDataDir=/bitnami/postgresql,persistence.mountPath=/bitnami/postgresql/,volumePermissions.enabled=true,volumePermissions.securityContext.runAsUser=1001

Run the following command to verify our database called “movies” is deployed and all relevant Kubernetes resources are ready.

# helm list

You should be able to see an output showing the status of your Postgres database.

[demo@ocp-svc ~]#  helm list
NAME    NAMESPACE         REVISION        UPDATED                         STATUS          CHART                 APP VERSION
movies  demo                              1               Wed Sep 30 07:00:07 2020        DEPLOYED        postgresql-8.6.4      11.7.0

You will also want to make sure the Postgres database services are running before proceeding further. Run the following command to verify the services are running.

# oc get service | grep movies

You should see an output similar to the following.

[demo@ocp-svc ~]# oc get service |grep movies
movies-postgresql            ClusterIP   172.30.239.171   <none>        5432/TCP    3m15s
movies-postgresql-headless   ClusterIP   None             <none>        5432/TCP    3m15s

Now that we know the PostgreSQL services are up and running, let’s get the Service IP address of our database.

# export IP_ADDRESS=$(oc get service movies-postgresql -o jsonpath={.spec.clusterIP})

Let’s get the password of our PostgreSQL database from the Kubernetes Secret.

# export POSTGRES_PASSWORD=$(oc get secret --namespace demo movies-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
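
Optionally confirm both variables are set (each should print a non-empty value):

# echo $IP_ADDRESS
# echo $POSTGRES_PASSWORD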

4.3. Add sample data to the PostgreSQL database

Let’s create a database “testdb” and connect to “testdb”.

[demo@ocp-svc ~]# oc run movies-postgresql-client --rm --tty -i --restart='Never' --namespace demo --image docker.io/bitnami/postgresql:11.7.0-debian-10-r9 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host movies-postgresql -U postgres -d postgres -p 5432
If you don't see a command prompt, try pressing enter.

postgres=# CREATE DATABASE testdb;
CREATE DATABASE
postgres=# \l
                                  List of databases
  Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
          |          |          |             |             | postgres=CTc/postgres
template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
          |          |          |             |             | postgres=CTc/postgres
testdb    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
(4 rows)

For the purpose of this tutorial, let’s create a table named “movies”.

postgres=# \c testdb;
You are now connected to database "testdb" as user "postgres".
postgres=# CREATE TABLE movies (movieid TEXT, year INT, title TEXT, genre TEXT);
CREATE TABLE
postgres=# \d
        List of relations
Schema |  Name  | Type  |  Owner
--------+--------+-------+----------
public | movies | table | postgres
(1 row)

We need some sample data to perform operations on. Let’s add 9 movies to the “movies” table.

postgres=# INSERT INTO movies (movieid, year, title, genre) VALUES
('tt0360556', 2018, 'Fahrenheit 451', 'Drama'),
('tt0365545', 2018, 'Nappily Ever After', 'Comedy'),
('tt0427543', 2018, 'A Million Little Pieces','Drama'),
('tt0432010', 2018, 'The Queen of Sheba Meets the Atom Man', 'Comedy'),
('tt0825334', 2018, 'Caravaggio and My Mother the Pope', 'Comedy'),
('tt0859635', 2018, 'Super Troopers 2', 'Comedy'),
('tt0862930', 2018, 'Dukun', 'Horror'),
('tt0891581', 2018, 'RxCannabis: A Freedom Tale', 'Documentary'),
('tt0933876', 2018, 'June 9', 'Horror');

Let’s verify data was added to the “movies” table by running the following command. You should see an output with the “movies” table and the nine rows in it as follows:

postgres=# SELECT * from movies;
  movieid  | year |                 title                 |    genre
-----------+------+---------------------------------------+-------------
tt0360556 | 2018 | Fahrenheit 451                        | Drama
tt0365545 | 2018 | Nappily Ever After                    | Comedy
tt0427543 | 2018 | A Million Little Pieces               | Drama
tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy
tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy
tt0859635 | 2018 | Super Troopers 2                      | Comedy
tt0862930 | 2018 | Dukun                                 | Horror
tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary
tt0933876 | 2018 | June 9                                | Horror
(9 rows)

We now have a PostgreSQL database with a table and some sample data. Now, let’s take a look at the data management capabilities Robin brings, such as taking snapshots, making clones, and creating backups.

4.4. Verify the PostgreSQL Helm release has been registered as an application

To benefit from the data management capabilities, we’ll register our PostgreSQL database with Robin. Doing so will let Robin map and track all resources associated with the Helm release for this PostgreSQL database.

As we have already added the ‘demo’ namespace in Robin for the current user (admin), Robin will auto-discover the Helm apps registered in the ‘demo’ namespace. Verify that the Helm release “movies” is present by running the following command:

# robin app info movies --status

You should see an output similar to this:

[demo@ocp-svc ~]# robin app info --status  movies
+-----------------------+----------------------------+--------+---------+
| Kind                  | Name                       | Status | Message |
+-----------------------+----------------------------+--------+---------+
| Secret                | movies-postgresql          | Ready  | -       |
| PersistentVolumeClaim | data-movies-postgresql-0   | Bound  | -       |
| Pod                   | movies-postgresql-0        | Ready  | -       |
| Service               | movies-postgresql-headless | Ready  | -       |
| Service               | movies-postgresql          | Ready  | -       |
| StatefulSet           | movies-postgresql          | Ready  | -       |
+-----------------------+----------------------------+--------+---------+

4.5. Snapshot the PostgreSQL Database

If you make a mistake, such as unintentionally deleting important data, you may be able to undo it by restoring a snapshot. Snapshots allow you to restore the state of your application to a point-in-time.

Robin lets you snapshot not just the storage volumes (PVCs) but the entire database application including all its resources such as Pods, StatefulSets, PVCs, Services, ConfigMaps etc. with a single command. To create a snapshot, run the following command.

# robin snapshot create movies --snapname snap9movies --desc "contains 9 movies" --wait

Let’s verify we have successfully created the snapshot.

# robin snapshot list --app movies

You should see an output similar to this:

[demo@ocp-svc ~]# robin snapshot list  --app movies
+----------------------------------+--------+----------+----------+--------------------+
| Snapshot ID                      | State  | App Name | App Kind | Snapshot name      |
+----------------------------------+--------+----------+----------+--------------------+
| 13cc8da2031a11eb99a99b354219f676 | ONLINE | movies   | helm     | movies_snap9movies |
+----------------------------------+--------+----------+----------+--------------------+

We now have a snapshot of our entire database with information of all 9 movies.

4.6. Rollback the PostgreSQL database

We have 9 rows in our “movies” table. To test the snapshot and rollback functionality, let’s simulate a user error by deleting a movie from the “movies” table.

testdb=# DELETE from movies where title = 'June 9';

Let’s verify the movie titled “June 9” has been deleted.

testdb=# SELECT * from movies;
  movieid  | year |                 title                 |    genre
-----------+------+---------------------------------------+-------------
tt0360556 | 2018 | Fahrenheit 451                        | Drama
tt0365545 | 2018 | Nappily Ever After                    | Comedy
tt0427543 | 2018 | A Million Little Pieces               | Drama
tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy
tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy
tt0859635 | 2018 | Super Troopers 2                      | Comedy
tt0862930 | 2018 | Dukun                                 | Horror
tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary
(8 rows)

Let’s run the following command to see the available snapshots:

# robin app info movies

You should see an output similar to the following. Note the snapshot ID, as we will use it in the next command.

[demo@ocp-svc ~]# robin app info movies
Name                              : movies
Kind                              : helm
State                             : ONLINE
Number of repos                   : 0
Number of snapshots               : 1
Number of usable backups          : 0
Number of archived/failed backups : 0

Query:
-------
{'selectors': [], 'namespace': 'demo', 'apps': ['helm/movies@demo'], 'resources': []}

Snapshots:
+----------------------------------+--------------------+-------------------+--------+----------------------+
| Id                               | Name               | Description       | State  | Creation Time        |
+----------------------------------+--------------------+-------------------+--------+----------------------+
| 13cc8da2031a11eb99a99b354219f676 | movies_snap9movies | contains 9 movies | ONLINE | 30 Sep 2020 07:40:27 |
+----------------------------------+--------------------+-------------------+--------+----------------------+

Now, let’s rollback to the point where we had 9 movies, including “June 9”, using the snapshot ID displayed above via the following command:

# robin app restore movies --snapshotid Your_Snapshot_ID --wait

You should see an output similar to the following:

[demo@ocp-svc ~]# robin app restore movies --snapshotid 13cc8da2031a11eb99a99b354219f676 --wait
Job:  136 Name: K8SApplicationRollback State: VALIDATED       Error: 0
Job:  136 Name: K8SApplicationRollback State: PREPARED        Error: 0
Job:  136 Name: K8SApplicationRollback State: AGENT_WAIT      Error: 0
Job:  136 Name: K8SApplicationRollback State: COMPLETED       Error: 0

To verify we have rolled back to 9 movies in the “movies” table, run the following command.

[demo@ocp-svc ~]# oc run movies-postgresql-client --rm --tty -i --restart='Never' --namespace demo --image docker.io/bitnami/postgresql:11.7.0-debian-10-r9 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host movies-postgresql -U postgres -d testdb -p 5432
If you don't see a command prompt, try pressing enter.

psql (11.7)
Type "help" for help.

testdb=#
testdb=# SELECT * from movies;
  movieid  | year |                 title                 |    genre
-----------+------+---------------------------------------+-------------
tt0360556 | 2018 | Fahrenheit 451                        | Drama
tt0365545 | 2018 | Nappily Ever After                    | Comedy
tt0427543 | 2018 | A Million Little Pieces               | Drama
tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy
tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy
tt0859635 | 2018 | Super Troopers 2                      | Comedy
tt0862930 | 2018 | Dukun                                 | Horror
tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary
tt0933876 | 2018 | June 9                                | Horror
(9 rows)

We have successfully rolled back to our original state with 9 movies!

4.7. Clone the PostgreSQL Database

Robin lets you clone not just the storage volumes (PVCs) but the entire database application including all its resources such as Pods, StatefulSets, PVCs, Services, ConfigMaps, etc. with a single command.

Application cloning improves the collaboration across Dev/Test/Ops teams. Teams can share applications and data quickly, reducing the procedural delays involved in re-creating environments. Each team can work on their clone without affecting other teams. Clones are useful when you want to run a report on a database without affecting the source database application, or for performing UAT tests or for validating patches before applying them to the production database, etc.

Robin clones are ready-to-use “thin copies” of the entire app/database, not just storage volumes. Thin-copy means that data from the snapshot is NOT physically copied, therefore clones can be made very quickly. Robin clones are fully-writable and any modifications made to the clone are not visible to the source app/database.

To create a clone from the existing snapshot created above, run the following command. Use the snapshot ID we retrieved above.

# robin app create from-snapshot movies-clone Your_Snapshot_ID --wait

You should see output similar to the following:

[demo@ocp-svc ~]# robin app create from-snapshot movies-clone 13cc8da2031a11eb99a99b354219f676 --wait
Job:  137 Name: K8SApplicationClone  State: VALIDATED       Error: 0
Job:  137 Name: K8SApplicationClone  State: PREPARED        Error: 0
Job:  137 Name: K8SApplicationClone  State: AGENT_WAIT      Error: 0
Job:  137 Name: K8SApplicationClone  State: FINALIZED       Error: 0
Job:  137 Name: K8SApplicationClone  State: COMPLETED       Error: 0

Let’s verify Robin has cloned all relevant Kubernetes resources.

# oc get all | grep "movies-clone"

You should see an output similar to below.

[demo@ocp-svc ~]# oc get all |grep "movies-clone"
pod/movies-clone-movies-postgresql-0   1/1     Running   0          94s
service/movies-clone-movies-postgresql            ClusterIP   172.30.149.75    <none>        5432/TCP    109s
service/movies-clone-movies-postgresql-headless   ClusterIP   None             <none>        5432/TCP    109s
statefulset.apps/movies-clone-movies-postgresql   1/1     109s

Notice that Robin automatically clones the required Kubernetes resources, not just storage volumes (PVCs), that are required to stand up a fully-functional clone of our database. After the clone is complete, the cloned database is ready for use.

Get the Service IP address of our PostgreSQL database clone, and note the IP address.

# export IP_ADDRESS=$(oc get service movies-clone-movies-postgresql -o jsonpath={.spec.clusterIP})

Get the password of our PostgreSQL database clone from the Kubernetes Secret.

# export POSTGRES_PASSWORD=$(oc get secret movies-clone-movies-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode;)

To verify we have successfully created a clone of our PostgreSQL database, run the following command. You should see an output similar to the following:

[demo@ocp-svc ~]# oc run movies-clone-postgresql-client --rm --tty -i --restart='Never' --namespace demo --image docker.io/bitnami/postgresql:11.7.0-debian-10-r9 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host movies-clone-movies-postgresql -U postgres -d testdb -p 5432
If you don't see a command prompt, try pressing enter.

testdb=# SELECT * from movies;
  movieid  | year |                 title                 |    genre
-----------+------+---------------------------------------+-------------
tt0360556 | 2018 | Fahrenheit 451                        | Drama
tt0365545 | 2018 | Nappily Ever After                    | Comedy
tt0427543 | 2018 | A Million Little Pieces               | Drama
tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy
tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy
tt0859635 | 2018 | Super Troopers 2                      | Comedy
tt0862930 | 2018 | Dukun                                 | Horror
tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary
tt0933876 | 2018 | June 9                                | Horror
(9 rows)

We have successfully created a clone of our original PostgreSQL database, and the cloned database also has a table called “movies” with 9 rows, just like the original.

Now, let’s make changes to the clone and verify the original database remains unaffected by changes to the clone. Let’s delete the movie called “Super Troopers 2”.

testdb=# DELETE from movies where title = 'Super Troopers 2';

Let’s verify the movie has been deleted. You should see an output similar to the following with 8 movies.

testdb=# SELECT * from movies;
  movieid  | year |                 title                 |    genre
-----------+------+---------------------------------------+-------------
tt0360556 | 2018 | Fahrenheit 451                        | Drama
tt0365545 | 2018 | Nappily Ever After                    | Comedy
tt0427543 | 2018 | A Million Little Pieces               | Drama
tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy
tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy
tt0862930 | 2018 | Dukun                                 | Horror
tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary
tt0933876 | 2018 | June 9                                | Horror
(8 rows)

Now, let’s connect to our original PostgreSQL database and verify it is unaffected.

Get the Service IP address of our original PostgreSQL database.

# export IP_ADDRESS=$(oc get service movies-postgresql -o jsonpath={.spec.clusterIP})

Get the password of our original PostgreSQL database from the Kubernetes Secret.

# export POSTGRES_PASSWORD=$(oc get secret --namespace demo movies-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode;)

To verify that our original PostgreSQL database is unaffected by changes to the clone, connect to “testdb” and check the records. You should see an output similar to the following, with all 9 movies present:

[demo@ocp-svc ~]# oc run movies-postgresql-client --rm --tty -i --restart='Never' --namespace demo --image docker.io/bitnami/postgresql:11.7.0-debian-10-r9 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host movies-postgresql -U postgres -d testdb -p 5432
If you don't see a command prompt, try pressing enter.

testdb=# SELECT * from movies;
  movieid  | year |                 title                 |    genre
-----------+------+---------------------------------------+-------------
tt0360556 | 2018 | Fahrenheit 451                        | Drama
tt0365545 | 2018 | Nappily Ever After                    | Comedy
tt0427543 | 2018 | A Million Little Pieces               | Drama
tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy
tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy
tt0859635 | 2018 | Super Troopers 2                      | Comedy
tt0862930 | 2018 | Dukun                                 | Horror
tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary
tt0933876 | 2018 | June 9                                | Horror
(9 rows)

This means we can work on the original PostgreSQL database and the cloned database simultaneously without affecting each other. This is valuable for collaboration across teams where each team needs to perform a unique set of operations.

To see a list of all clones created by Robin, run the following command:

# robin app list --app-types CLONE

Now let’s delete the clone. A clone is just like any other Robin app, so it can be deleted using the native ‘app delete’ command shown below.

# robin app delete movies-clone -y --force --wait

The output should be similar to the following:

[demo@ocp-svc ~]# robin app delete movies-clone -y --force --wait
Job:  138 Name: K8SAppDelete         State: PROCESSED       Error: 0
Job:  138 Name: K8SAppDelete         State: PREPARED        Error: 0
Job:  138 Name: K8SAppDelete         State: AGENT_WAIT      Error: 0
Job:  138 Name: K8SAppDelete         State: FINALIZED       Error: 0
Job:  138 Name: K8SAppDelete         State: COMPLETED       Error: 0

4.8. Backup the PostgreSQL Database to AWS S3

Robin elevates the experience from backing up just storage volumes (PVCs) to backing up entire applications/databases, including their metadata, configuration, and data.

A backup is a full copy of the application snapshot that resides on completely different storage media than the application’s data. Therefore, backups are useful to restore an entire application from an external storage media in the event of catastrophic failures, such as disk errors, server failures, or entire data centers going offline, etc. (This is assuming your backup doesn’t reside in the data center that is offline, of course.)

Let’s now back up our database to an external secondary storage repository (repo). Snapshots (metadata + configuration + data) are backed up into the repo.

Robin enables you to back up your Kubernetes applications to AWS S3 or Google Cloud Storage (GCS). In this demo we will use AWS S3 to create the backup.

Before we proceed, we need to create an S3 bucket and get access parameters for it. Follow the documentation here.
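
The access parameters go into a JSON credentials file (awstier.json in the command below). The exact schema is defined by the Robin documentation referenced above; the sketch below is only illustrative, with assumed field names and placeholder values:

# cat > awstier.json <<'EOF'
{
    "access_key": "YOUR_AWS_ACCESS_KEY_ID",
    "secret_key": "YOUR_AWS_SECRET_ACCESS_KEY"
}
EOF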

Let’s first register an AWS repo with Robin via the following command:

# robin repo register pgsqlbackups s3://robin-pgsql/pgsqlbackups awstier.json readwrite --wait

The following should be displayed when the above command is run:

[demo@ocp-svc ~]# robin repo register pgsqlbackups s3://robin-pgsql/pgsqlbackups awstier.json readwrite --wait
Job:  139 Name: StorageRepoAdd       State: PROCESSED       Error: 0
Job:  139 Name: StorageRepoAdd       State: COMPLETED       Error: 0

Let’s confirm that our secondary storage repository is successfully registered:

# robin repo list
[demo@ocp-svc ~]# robin repo list
+--------------+--------+----------------------+--------------+-------------+---------------+-------------+
| Name         | Type   | Owner/Tenant         | BackupTarget | Bucket      | Path          | Permissions |
+--------------+--------+----------------------+--------------+-------------+---------------+-------------+
| pgsqlbackups | AWS_S3 | admin/Administrators | 1            | robin-pgsql | pgsqlbackups/ | readwrite   |
+--------------+--------+----------------------+--------------+-------------+---------------+-------------+

Let’s attach this repo to our app so that we can back up its snapshots there:

# robin app attach-repo movies pgsqlbackups --wait

Let’s confirm that our secondary storage repository is successfully attached to the app:

# robin app info movies

You should see an output similar to the following:

[demo@ocp-svc ~]# robin app info movies
Name                              : movies
Kind                              : helm
State                             : ONLINE
Number of repos                   : 1
Number of snapshots               : 1
Number of usable backups          : 0
Number of archived/failed backups : 0

Query:
-------
{'namespace': 'demo', 'apps': ['helm/movies@demo'], 'resources': [], 'selectors': []}

Repos:
+--------------+-------------+---------------+------------+
| Name         | Bucket      | Path          | Permission |
+--------------+-------------+---------------+------------+
| pgsqlbackups | robin-pgsql | pgsqlbackups/ | readwrite  |
+--------------+-------------+---------------+------------+

Snapshots:
+----------------------------------+--------------------+-------------------+--------+----------------------+
| Id                               | Name               | Description       | State  | Creation Time        |
+----------------------------------+--------------------+-------------------+--------+----------------------+
| 13cc8da2031a11eb99a99b354219f676 | movies_snap9movies | contains 9 movies | ONLINE | 30 Sep 2020 07:40:27 |
+----------------------------------+--------------------+-------------------+--------+----------------------+

Let’s back up the snapshot to the remote S3 repo.

# robin backup create movies pgsqlbackups --snapshotid Your_Snapshot_ID --backupname Name_of_Backup --wait

You should see an output similar to the following:

[demo@ocp-svc ~]# robin backup create movies pgsqlbackups --snapshotid 13cc8da2031a11eb99a99b354219f676 --backupname movies_backup --wait
Creating app backup 'movies_backup' from snapshot '13cc8da2031a11eb99a99b354219f676'
Job:  142 Name: K8SApplicationBackup State: PROCESSED       Error: 0
Job:  142 Name: K8SApplicationBackup State: AGENT_WAIT      Error: 0
Job:  142 Name: K8SApplicationBackup State: COMPLETED       Error: 0

Let’s also confirm that the backup has been copied to the remote S3 repo:

# robin repo contents pgsqlbackups

You should see an output similar to the following:

[demo@ocp-svc ~]# robin repo contents pgsqlbackups
+----------------------------------+------------+--------------+----------------------+--------+--------------------+----------+
| BackupID                         | ZoneID     | RepoName     | Owner/Tenant         | App    | Snapshot           | Imported |
+----------------------------------+------------+--------------+----------------------+--------+--------------------+----------+
| 60fe30f4032011eb9ac3e5c47d7a9357 | 1601408284 | pgsqlbackups | admin/Administrators | movies | movies_snap9movies | False    |
+----------------------------------+------------+--------------+----------------------+--------+--------------------+----------+

The snapshot has now been backed up into our AWS S3 bucket. Let’s note the “BackupID”, because we will need it to restore the database in the next step.
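
If you want to double-check from the AWS side, you can list the repo path with the AWS CLI (assuming it is installed and configured with credentials for the same bucket):

# aws s3 ls s3://robin-pgsql/pgsqlbackups/ --recursive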

4.9. Restore the PostgreSQL Database

Let’s simulate a system failure where you lose local data. First, let’s delete the snapshot locally.

# robin snapshot delete Your_Snapshot_ID --wait
[demo@ocp-svc ~]# robin snapshot delete 13cc8da2031a11eb99a99b354219f676 --wait
Job:   143 Name: K8SSnapshotDelete    State: PREPARED        Error: 0
Job:   143 Name: K8SSnapshotDelete    State: COMPLETED       Error: 0

Now let’s simulate a data loss situation by deleting all data from the “movies” table and verify all data is lost.

[demo@ocp-svc ~]# oc run movies-postgresql-client --rm --tty -i --restart='Never' --namespace demo --image docker.io/bitnami/postgresql:11.7.0-debian-10-r9 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host movies-postgresql -U postgres -d testdb -p 5432
If you don't see a command prompt, try pressing enter.
psql (11.7)
Type "help" for help.

testdb=# DELETE from movies;
DELETE 9
testdb=# SELECT * from movies;
movieid | year | title | genre
---------+------+-------+-------
(0 rows)

We will now use our backed-up snapshot on S3 to restore the data we just lost.

Let’s restore the snapshot from the backup in the cloud and roll back our application to that snapshot via the following command:

# robin app restore movies --backupid Your_Backup_ID --wait

You should see output similar to the following:

[demo@ocp-svc ~]# robin app restore movies --backupid 60fe30f4032011eb9ac3e5c47d7a9357 --wait
Job:  144 Name: K8SApplicationRollback State: VALIDATED       Error: 0
Job:  144 Name: K8SApplicationRollback State: PREPARED        Error: 0
Job:  144 Name: K8SApplicationRollback State: AGENT_WAIT      Error: 0
Job:  144 Name: K8SApplicationRollback State: COMPLETED       Error: 0

Remember, we deleted the local snapshot of our data. Let’s verify that the above command has pulled the snapshot back from the cloud. Run the following command:

# robin snapshot list --app movies

You should see output similar to the following:

[demo@ocp-svc ~]# robin snapshot list  --app movies
+----------------------------------+--------+----------+----------+--------------------+
| Snapshot ID                      | State  | App Name | App Kind | Snapshot name      |
+----------------------------------+--------+----------+----------+--------------------+
| 13cc8da2031a11eb99a99b354219f676 | ONLINE | movies   | helm     | movies_snap9movies |
+----------------------------------+--------+----------+----------+--------------------+

Let’s verify all 9 rows are restored to the “movies” table by running the following command:

[demo@ocp-svc ~]# oc run movies-postgresql-client --rm --tty -i --restart='Never' --namespace demo --image docker.io/bitnami/postgresql:11.7.0-debian-10-r9 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host movies-postgresql -U postgres -d testdb -p 5432
If you don't see a command prompt, try pressing enter.

testdb=# SELECT * from movies;
  movieid  | year |                 title                 |    genre
-----------+------+---------------------------------------+-------------
tt0360556 | 2018 | Fahrenheit 451                        | Drama
tt0365545 | 2018 | Nappily Ever After                    | Comedy
tt0427543 | 2018 | A Million Little Pieces               | Drama
tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy
tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy
tt0859635 | 2018 | Super Troopers 2                      | Comedy
tt0862930 | 2018 | Dukun                                 | Horror
tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary
tt0933876 | 2018 | June 9                                | Horror
(9 rows)

As you can see, we can restore the database to a desired state in the event of data corruption. We simply pull the backup from the cloud and use it to restore the database.

4.10. Create a PostgreSQL Database from the backup

Since we have taken a backup of the PostgreSQL database, we can create a new app from the backup and verify the data integrity of the PostgreSQL database.

# robin app create from-backup <app_name> <Your_backupID> --wait
[demo@ocp-svc ~]# robin app create from-backup movies-bkp 60fe30f4032011eb9ac3e5c47d7a9357 --wait
Job: 1093 Name: K8SApplicationCreate State: VALIDATED       Error: 0
Job: 1093 Name: K8SApplicationCreate State: PREPARED        Error: 0
Job: 1093 Name: K8SApplicationCreate State: AGENT_WAIT      Error: 0
Job: 1093 Name: K8SApplicationCreate State: COMPLETED       Error: 0
[demo@ocp-svc ~]# robin app info  movies-bkp --status
+-----------------------+---------------------------------------+--------+---------+
| Kind                  | Name                                  | Status | Message |
+-----------------------+---------------------------------------+--------+---------+
| Secret                | movies-bkp-movies-postgresql          | Ready  | -       |
| PersistentVolumeClaim | data-movies-bkp-movies-postgresql-0   | Bound  | -       |
| Pod                   | movies-bkp-movies-postgresql-0        | Ready  | -       |
| Service               | movies-bkp-movies-postgresql          | Ready  | -       |
| Service               | movies-bkp-movies-postgresql-headless | Ready  | -       |
| StatefulSet           | movies-bkp-movies-postgresql          | Ready  | -       |
+-----------------------+---------------------------------------+--------+---------+

Let’s verify the contents of the PostgreSQL database app.

# export POSTGRES_PASSWORD=$(oc get secret movies-bkp-movies-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode;)

Connect to “testdb” and check the records. You should see an output similar to the following, with all 9 movies present:

[demo@ocp-svc ~]# oc run movies-bkp-postgresql-client --rm --tty -i --restart='Never' --namespace t001-u000003 --image docker.io/bitnami/postgresql:11.7.0-debian-10-r9 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host movies-bkp-movies-postgresql -U postgres -d testdb -p 5432
If you don't see a command prompt, try pressing enter.

testdb=# select * from movies;
  movieid  | year |                 title                 |    genre
-----------+------+---------------------------------------+-------------
tt0360556 | 2018 | Fahrenheit 451                        | Drama
tt0365545 | 2018 | Nappily Ever After                    | Comedy
tt0427543 | 2018 | A Million Little Pieces               | Drama
tt0432010 | 2018 | The Queen of Sheba Meets the Atom Man | Comedy
tt0825334 | 2018 | Caravaggio and My Mother the Pope     | Comedy
tt0859635 | 2018 | Super Troopers 2                      | Comedy
tt0862930 | 2018 | Dukun                                 | Horror
tt0891581 | 2018 | RxCannabis: A Freedom Tale            | Documentary
tt0933876 | 2018 | June 9                                | Horror
(9 rows)

To learn more about using Robin Storage on OpenShift, visit the Robin Storage solution page.

5. Redis on OpenShift

After successfully deploying and running stateless applications, a number of developers are exploring the possibility of running stateful workloads, such as Redis, on OpenShift. If you are considering extending OpenShift for stateful workloads, this tutorial will help you experiment on your existing OpenShift environment by providing step-by-step instructions.

This tutorial will walk you through:

  1. How to deploy a Redis database on OpenShift using the Robin Operator and Helm3

  2. Add sample data to the Redis database

  3. Verify the Helm release has registered as an application

  4. Create a point-in-time snapshot of the Redis database

  5. Simulate a user error and rollback to a stable state using the snapshot

  6. Clone the database for the purpose of collaboration

  7. Backup the database to the cloud using AWS S3 bucket

  8. Simulate data loss/corruption and use the backup to restore the database

5.1. Prerequisites: Install the Robin Operator on OpenShift and set up Helm

Robin Storage is an application-aware container storage that offers advanced data management capabilities and runs natively on OpenShift. Robin Storage delivers bare-metal performance and enables you to protect (via snapshots and backups), encrypt, collaborate (via clones and git like push/pull workflows) and make portable (via Cloud Repositories) stateful applications that are deployed using Helm Charts or Operators.

Before we deploy Redis on OpenShift, let’s first install the Robin operator on your existing OpenShift environment. You can install Robin directly from the OpenShift console by clicking on the OperatorHub tab. You can find further instructions here.

Let’s confirm that OpenShift cluster is up and running.

# oc get nodes

You should see an output similar to below, with the status of each node marked as Ready

[jk@oc] oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-134-149.us-west-2.compute.internal   Ready    worker   46h   v1.18.3+6c42de8
ip-10-0-153-127.us-west-2.compute.internal   Ready    master   46h   v1.18.3+6c42de8
ip-10-0-162-125.us-west-2.compute.internal   Ready    worker   46h   v1.18.3+6c42de8
ip-10-0-173-71.us-west-2.compute.internal    Ready    master   46h   v1.18.3+6c42de8
ip-10-0-194-125.us-west-2.compute.internal   Ready    master   46h   v1.18.3+6c42de8
ip-10-0-209-83.us-west-2.compute.internal    Ready    worker   46h   v1.18.3+6c42de8

Let’s confirm that Robin is up and running. Run the following command to verify that Robin is ready.

# oc get robincluster -n robinio

You should see an output similar to below.

[jk@oc] oc get robincluster -n robinio
NAME    AGE
robin   44h

To get the link to download the Robin client, run:

# oc describe robincluster -n robinio

You should see an output similar to below:

[jk@oc] oc describe robinclusters -n robinio
Name:         robin
Namespace:    robinio
Labels:       app.kubernetes.io/instance=robin
              app.kubernetes.io/managed-by=robin.io
              app.kubernetes.io/name=robin
Annotations:  <none>
API Version:  manage.robin.io/v1
Kind:         RobinCluster
Metadata:
  Creation Timestamp:  2020-10-07T22:19:20Z
  Generation:          1
  Managed Fields:
    API Version:  manage.robin.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .:
          f:app.kubernetes.io/instance:
          f:app.kubernetes.io/managed-by:
          f:app.kubernetes.io/name:
      f:spec:
        .:
        f:host_type:
        f:image_robin:
        f:k8s_provider:
        f:node_selector:
          .:
          f:node-role.kubernetes.io/worker:
        f:options:
          .:
          f:access_key:
          f:secret_key:
    Manager:      kubectl
    Operation:    Update
    Time:         2020-10-07T22:19:20Z
    API Version:  manage.robin.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:connect_command:
        f:get_robin_client:
        f:master_ip:
        f:phase:
        f:pod_status:
        f:robin_node_status:
    Manager:         robin-operator
    Operation:       Update
    Time:            2020-10-09T19:06:57Z
  Resource Version:  866428
  Self Link:         /apis/manage.robin.io/v1/namespaces/robinio/robinclusters/robin
  UID:               d86e1d66-94eb-422f-a112-e6061f6c178b
Spec:
  host_type:     ec2
  image_robin:   robinsys/robinimg:5.3.2-3
  k8s_provider:  openshift
  node_selector:
    node-role.kubernetes.io/worker:
  Options:
    access_key:  <redacted>
    secret_key:  <redacted>
Status:
  connect_command:   kubectl exec -it robin-n624g -n robinio -- bash
  get_robin_client:  curl -k https://10.0.162.125:29442/api/v3/robin_server/download?file=robincli&os=linux > robin
  master_ip:         10.0.162.125
  Phase:             Ready
  pod_status:
    robin-rhqtk  ip-10-0-209-83.us-west-2.compute.internal  Running 10.0.209.83 false
    robin-n624g  ip-10-0-162-125.us-west-2.compute.internal  Running 10.0.162.125 false
    robin-tzmcn  ip-10-0-134-149.us-west-2.compute.internal  Running 10.0.134.149 false
  robin_node_status:
    host_name:      ip-10-0-162-125.us-west-2.compute.internal
    join_time:      1602109415
    k8s_node_name:  ip-10-0-162-125.us-west-2.compute.internal
    Roles:          M*,S
    Rpool:          default
    State:          ONLINE
    Status:         Ready
    host_name:      ip-10-0-134-149.us-west-2.compute.internal
    join_time:      1602109422
    k8s_node_name:  ip-10-0-134-149.us-west-2.compute.internal
    Roles:          S,M
    Rpool:          default
    State:          ONLINE
    Status:         Ready
    host_name:      ip-10-0-209-83.us-west-2.compute.internal
    join_time:      1602109424
    k8s_node_name:  ip-10-0-209-83.us-west-2.compute.internal
    Roles:          M,S
    Rpool:          default
    State:          ONLINE
    Status:         Ready
Events:             <none>

Find the field ‘get_robin_client’ and run the corresponding command to download the Robin client.

# curl -k "https://10.0.162.125:29442/api/v3/robin_server/download?file=robincli&os=linux" > robin

Change the file permissions for robin and copy it to /usr/local/bin to make it available as a system command.
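
For example:

# chmod +x robin
# sudo mv robin /usr/local/bin/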

In the same output, notice the field ‘master_ip’ and use it to set up your Robin client to work with your OpenShift cluster by running the following commands.

# robin client add-context 10.0.162.125 --set-current
# robin login admin --password Robin123

Next, create a new OpenShift project for this tutorial.

[jk@oc] oc new-project redis-tutorial
Now using project "redis-tutorial" on server "https://api.j-prod-os4.rbnio.net:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    # oc new-app ruby~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    # kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

Let’s add a Helm repository to pull Helm charts from. For this tutorial, we will use the following repo.

[jk@oc] helm repo add bitnami https://charts.bitnami.com/bitnami

5.2. Deploy Redis on OpenShift

Now, let’s create a Redis cluster using Helm and Robin Storage. When we installed the Robin operator and created a “Robincluster” custom resource definition, we created and registered a StorageClass named “robin” with OpenShift. We can now use this StorageClass to create PersistentVolumes and PersistentVolumeClaims for the pods in OpenShift. Using this StorageClass allows us to access the data management capabilities (such as snapshot, clone, backup) provided by Robin Storage.

Log in to the redis-tutorial project/namespace using the oc command and add the default service account to the anyuid SCC.

[jk@oc] oc project redis-tutorial
Already on project "redis-tutorial" on server "https://api.j-prod-os4.rbnio.net:6443".

[jk@oc] oc adm policy add-scc-to-user anyuid system:serviceaccount:redis-tutorial:default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "default"

Using the below Helm command, we will deploy a Redis cluster. Note: Robin sets up Helm3 on the host where it is installed. The commands/output below are based on Helm3.

# helm install employees bitnami/redis  --set master.persistence.storageClass=robin --set slave.persistence.storageClass=robin --set master.service.type=LoadBalancer --namespace redis-tutorial

Run the following command to verify Redis is deployed and all relevant Kubernetes resources are ready. You should be able to see an output showing the status of your Redis store.

[jk@oc] helm list --namespace redis-tutorial
NAME      NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
employees redis-tutorial  1               2020-10-09 20:01:42.456804365 +0000 UTC deployed        redis-11.1.0    6.0.8

Get the service IP address of our Redis store.

# export SERVICE_IP=$(kubectl get svc --namespace redis-tutorial employees-redis --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")

Get the password of the Redis store from the Kubernetes Secret.

# export REDIS_PASSWORD=$(kubectl get secret --namespace redis-tutorial employees-redis -o jsonpath="{.data.redis-password}" | base64 --decode)

5.3. Add data to the Redis store

To add data to the Redis store, we will create a Redis pod that can be used as a Redis client. We will use it to connect to the Redis store and add data.

# kubectl run --namespace redis-tutorial employees-redis-client --rm --tty -i --restart='Never'     --env REDIS_PASSWORD=$REDIS_PASSWORD  --env SERVICE_IP=$SERVICE_IP  --image docker.io/bitnami/redis:6.0.8-debian-10-r0 -- bash

From this point in the document, redis-cli invocations always run inside the employees-redis-client pod. We will insert employee entries into the Redis store using redis-cli from the client pod.

# redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD

Add the records as below.

# hmset employees e000001 'Carmina Chilcote' e000002 'Werner Whobrey' e000003 'Jenna Jarmon' e000004 'Randell Reimers' e000005 'Janay Jacobi' e000006 'Tammara Theobald' e000007 'Margret Michelin' e000008 'Daron Desrosier' e000009 'Raymon Riggenbach'

Let’s run the following command to verify that the records were added.

# redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hvals employees

The output of the above command will be as follows.

1001@employees-redis-client:/$ redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hvals employees
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
1) "Carmina Chilcote"
2) "Werner Whobrey"
3) "Jenna Jarmon"
4) "Randell Reimers"
5) "Janay Jacobi"
6) "Tammara Theobald"
7) "Margret Michelin"
8) "Daron Desrosier"
9) "Raymon Riggenbach"
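
HVALS prints only the values; if you also want the employee IDs (the hash field names), HGETALL returns both:

# redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hgetall employees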

We now have a Redis database deployed on OpenShift with some sample data. Now, let’s take a look at the data management capabilities Robin brings, such as taking snapshots, making clones, and creating backups.

5.4. Registering the Redis Helm release as an application

To benefit from the data management capabilities, we’ll register our Redis database with Robin. Doing so will let Robin map and track all resources associated with the Helm release for this Redis database. Let’s make Robin aware of the new project.

# robin namespace add --import-namespace redis-tutorial

Let’s register the Helm app with Robin.

# robin app register empapp --app helm/employees -n redis-tutorial

Let’s now verify that Robin is tracking our Redis Helm release as a single entity (app). You should see an output similar to the following.

[jk@oc] robin app info empapp --status
+-----------------------+-------------------------------------+--------+---------+
| Kind                  | Name                                | Status | Message |
+-----------------------+-------------------------------------+--------+---------+
| ConfigMap             | employees-redis-health              | Ready  | -       |
| ConfigMap             | employees-redis                     | Ready  | -       |
| ConfigMap             | employees-redis-scripts             | Ready  | -       |
| Secret                | employees-redis                     | Ready  | -       |
| PersistentVolumeClaim | redis-data-employees-redis-slave-1  | Bound  | -       |
| PersistentVolumeClaim | redis-data-employees-redis-master-0 | Bound  | -       |
| PersistentVolumeClaim | redis-data-employees-redis-slave-0  | Bound  | -       |
| Pod                   | employees-redis-slave-0             | Ready  | -       |
| Pod                   | employees-redis-master-0            | Ready  | -       |
| Pod                   | employees-redis-slave-1             | Ready  | -       |
| Service               | employees-redis-headless            | Ready  | -       |
| Service               | employees-redis-slave               | Ready  | -       |
| Service               | employees-redis-master              | Ready  | -       |
| StatefulSet           | employees-redis-slave               | Ready  | -       |
| StatefulSet           | employees-redis-master              | Ready  | -       |
+-----------------------+-------------------------------------+--------+---------+

Key:
  Green: Object is running
  Yellow: Object is potentially down
  Red: Object is down

We have successfully registered our Helm release as an app called “empapp”.

5.5. Snapshot the Redis store

If you make a mistake, such as unintentionally deleting important data, you may be able to undo it by restoring a snapshot. Snapshots allow you to restore the state of your application to a point-in-time.

Robin lets you snapshot not just the storage volumes (PVCs) but the entire Redis application, including all its resources such as Pods, StatefulSets, PVCs, Services, ConfigMaps, etc., with a single command. To create a snapshot, run the following command.

# robin snapshot create empapp --snapname 9-employees --desc "Has 9 employees" --wait

Let’s verify we have successfully created the snapshot.

# robin snapshot list --app empapp

You should see an output similar to this:

[jk@oc] robin snapshot list --app empapp
+----------------------------------+--------+----------+----------+--------------------+
| Snapshot ID                      | State  | App Name | App Kind | Snapshot name      |
+----------------------------------+--------+----------+----------+--------------------+
| 5ef4167a0a7911eba569bb195d92adcd | ONLINE | empapp   | helm     | empapp_9-employees |
+----------------------------------+--------+----------+----------+--------------------+

We now have a snapshot of our entire Redis store with all 9 employee records.

5.6. Rollback the Redis store

We have 9 rows in our “employees” hash. To test the snapshot and rollback functionality, let’s simulate a user error by deleting three employees, “e000007”, “e000008” and “e000009” from the “employees” hash.

redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hdel employees e000009
redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hdel employees e000008
redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hdel employees e000007
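
Equivalently, HDEL accepts multiple fields in a single call:

redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hdel employees e000007 e000008 e000009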

Run the following command to check the employee records.

redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hvals employees

You should now see that the 3 employee records we just deleted do not exist in the hash anymore.

1001@employees-redis-client:/$ redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hvals employees
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
1) "Carmina Chilcote"
2) "Werner Whobrey"
3) "Jenna Jarmon"
4) "Randell Reimers"
5) "Janay Jacobi"
6) "Tammara Theobald"

Let’s run the following command to see the available snapshots:

# robin snapshot list --app empapp

You should see output similar to the following:

[jk@oc] robin snapshot list --app empapp
+----------------------------------+--------+----------+----------+--------------------+
| Snapshot ID                      | State  | App Name | App Kind | Snapshot name      |
+----------------------------------+--------+----------+----------+--------------------+
| 5ef4167a0a7911eba569bb195d92adcd | ONLINE | empapp   | helm     | empapp_9-employees |
+----------------------------------+--------+----------+----------+--------------------+

Now, let’s roll back to the point where we had 9 employees, including “e000009”, using the snapshot id displayed above.

# robin app restore empapp --snapshotid <Your_Snapshot_ID> --wait

An example usage of the command is shown below.

[jk@oc] robin app restore empapp --snapshotid 5ef4167a0a7911eba569bb195d92adcd --wait
Job:   85 Name: K8SApplicationRollback State: VALIDATED       Error: 0
Job:   85 Name: K8SApplicationRollback State: PREPARED        Error: 0
Job:   85 Name: K8SApplicationRollback State: AGENT_WAIT      Error: 0
Job:   85 Name: K8SApplicationRollback State: COMPLETED       Error: 0

To verify we have rolled back to 9 employees in the “employees” hash, run the following command.

redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hvals employees

You should see an output similar to the following:

1001@employees-redis-client:/$ redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hvals employees
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
1) "Carmina Chilcote"
2) "Werner Whobrey"
3) "Jenna Jarmon"
4) "Randell Reimers"
5) "Janay Jacobi"
6) "Tammara Theobald"
7) "Margret Michelin"
8) "Daron Desrosier"
9) "Raymon Riggenbach"

We have successfully rolled back our Redis database to the original state with 9 employee records!

5.7. Clone the Redis Database

Robin lets you clone not just the storage volumes (PVCs) but the entire database application including all its resources such as Pods, StatefulSets, PVCs, Services, ConfigMaps, etc. with a single command.

Application cloning improves the collaboration across Dev/Test/Ops teams. Teams can share app+data quickly, reducing the procedural delays involved in re-creating environments. Each team can work on their clone without affecting other teams. Clones are useful when you want to run a report on a database without affecting the source database application, or for performing UAT tests or for validating patches before applying them to the production database, etc.

Robin clones are ready-to-use “thin copies” of the entire app/database, not just the storage volumes. Thin copy means that data from the snapshot is NOT physically copied; therefore, clones can be made very quickly. Robin clones are fully writable, and any modifications made to the clone are not visible to the source app/database.

To create a clone from the existing Redis database snapshot created earlier, run the following command.

# robin app create from-snapshot empapp-clone <Your_Snapshot_ID> --wait

An example usage of the command is shown below.

[jk@oc] robin app create from-snapshot empapp-clone  5ef4167a0a7911eba569bb195d92adcd --wait
Job:   86 Name: K8SApplicationClone  State: VALIDATED       Error: 0
Job:   86 Name: K8SApplicationClone  State: PREPARED        Error: 0
Job:   86 Name: K8SApplicationClone  State: AGENT_WAIT      Error: 0
Job:   86 Name: K8SApplicationClone  State: COMPLETED       Error: 0

Let’s verify Robin has cloned all relevant Kubernetes resources. You should see an output similar to below.

[jk@oc] oc get all |grep clone
pod/empapp-clone-employees-redis-master-0   1/1     Running   0          22m
pod/empapp-clone-employees-redis-slave-0    1/1     Running   0          21m
pod/empapp-clone-employees-redis-slave-1    1/1     Running   0          21m
service/empapp-clone-employees-redis-headless   ClusterIP      None            <none>                                                                    6379/TCP         23m
service/empapp-clone-employees-redis-master     LoadBalancer   172.30.125.57   a6bed3e9ba77c48d0a0e280d54602089-1302921165.us-west-2.elb.amazonaws.com   6379:30892/TCP   23m
service/empapp-clone-employees-redis-slave      ClusterIP      172.30.237.94   <none>                                                                    6379/TCP         23m
statefulset.apps/empapp-clone-employees-redis-master   1/1     23m
statefulset.apps/empapp-clone-employees-redis-slave    2/2     23m

Notice that Robin automatically clones all the Kubernetes resources, not just the storage volumes (PVCs), required to stand up a fully functional clone of our database. After the clone is complete, the cloned database is ready for use.

Get the Service IP address of our Redis database clone and note it.

# export SERVICE_IP=$(oc get svc --namespace redis-tutorial empapp-clone-employees-redis-master --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")

Get the password of our Redis store clone from the Kubernetes Secret.

# export REDIS_PASSWORD=$(oc get secret --namespace redis-tutorial empapp-clone-employees-redis -o jsonpath="{.data.redis-password}" | base64 --decode)

To verify we have successfully created a clone of our Redis database, run the following command.

1001@employees-redis-client:/$ redis-cli -h $SERVICE_IP -a $REDIS_PASSWORD hvals employees
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
1) "Carmina Chilcote"
2) "Werner Whobrey"
3) "Jenna Jarmon"
4) "Randell Reimers"
5) "Janay Jacobi"
6) "Tammara Theobald"
7) "Margret Michelin"
8) "Daron Desrosier"
9) "Raymon Riggenbach"

We have successfully created a clone of our original Redis store, and the cloned store also has a hash called “employees” with 9 records, just like the original. We can work on the original Redis store and the cloned database simultaneously without affecting each other. This is valuable for collaboration across teams where each team needs to perform a unique set of operations.

5.8. Backup the Redis Database to AWS S3

Robin elevates the experience from backing up just storage volumes (PVCs) to backing up entire applications/databases, including their metadata, configuration, and data.

A backup is a full copy of the application snapshot that resides on completely different storage media than the application’s data. Therefore, backups are useful to restore an entire application from an external storage media in the event of catastrophic failures, such as disk errors, server failures, or entire data centers going offline, etc. (This is assuming your backup doesn’t reside in the data center that is offline, of course.)

Let’s now back up our Redis store to an external secondary storage repository (repo). Snapshots (metadata + configuration + data) are backed up into the repo.

Robin enables you to back up your Kubernetes applications to AWS S3 or Google Cloud Storage (GCS). In this demo we will use AWS S3 to create the backup.

Before we proceed, we need to create an S3 bucket and get access parameters for it. Follow the documentation here.

Let’s first register an AWS repo with Robin:

# robin repo register awsrepo s3://robin-redis-backups/empapp awstier.json readwrite --wait

Running the command should result in an output similar to the one below.

[jk@oc] robin repo register awsrepo s3://robin-redis-backups/empapp awstier.json readwrite --wait
Job:   87 Name: StorageRepoAdd       State: PREPARED        Error: 0
Job:   87 Name: StorageRepoAdd       State: COMPLETED       Error: 0

Let’s confirm that our secondary storage repository is successfully registered:

[jk@oc] robin repo list
+---------+--------+----------------------+--------------+---------------------+---------+-------------+
| Name    | Type   | Owner/Tenant         | BackupTarget | Bucket              | Path    | Permissions |
+---------+--------+----------------------+--------------+---------------------+---------+-------------+
| awsrepo | AWS_S3 | admin/Administrators | 1            | robin-redis-backups | empapp/ | readwrite   |
+---------+--------+----------------------+--------------+---------------------+---------+-------------+

Let’s attach this repo to our app so that we can back up its snapshots there:

# robin app attach-repo empapp awsrepo --wait

Let’s confirm that our secondary storage repository is successfully attached to the app:

# robin app info empapp

You should see an output similar to the following:

[jk@oc] robin app attach-repo empapp awsrepo --wait
Job:   89 Name: K8SApplicationAddRepo State: PROCESSED       Error: 0
Job:   89 Name: K8SApplicationAddRepo State: COMPLETED       Error: 0
[jk@oc] robin app info empapp
Name                              : empapp
Kind                              : helm
State                             : ONLINE
Number of repos                   : 1
Number of snapshots               : 1
Number of usable backups          : 0
Number of archived/failed backups : 0

Query:
-------
{'apps': ['helm/employees@redis-tutorial'], 'namespace': 'redis-tutorial', 'resources': [], 'selectors': []}

Repos:
+---------+---------------------+---------+------------+
| Name    | Bucket              | Path    | Permission |
+---------+---------------------+---------+------------+
| awsrepo | robin-redis-backups | empapp/ | readwrite  |
+---------+---------------------+---------+------------+

Snapshots:
+----------------------------------+--------------------+-----------------+--------+----------------------+
| Id                               | Name               | Description     | State  | Creation Time        |
+----------------------------------+--------------------+-----------------+--------+----------------------+
| 5ef4167a0a7911eba569bb195d92adcd | empapp_9-employees | Has 9 employees | ONLINE | 09 Oct 2020 21:50:18 |
+----------------------------------+--------------------+-----------------+--------+----------------------+

Let’s back up the snapshot to the remote S3 repo.

# robin backup create empapp awsrepo --snapshotid Your_Snapshot_ID --backupname Name_of_Backup --wait

You should see an output similar to the following:

[jk@oc] robin backup create empapp awsrepo --snapshotid 5ef4167a0a7911eba569bb195d92adcd --backupname "emp bbackup" --wait
Creating app backup 'emp bbackup' from snapshot '5ef4167a0a7911eba569bb195d92adcd'
Job:   90 Name: K8SApplicationBackup State: PROCESSED       Error: 0
Job:   90 Name: K8SApplicationBackup State: AGENT_WAIT      Error: 0
Job:   90 Name: K8SApplicationBackup State: COMPLETED       Error: 0

Let’s also confirm that the backup has been transferred to the remote S3 repo:

# robin repo contents awsrepo

You should see an output similar to the following:

[jk@oc] robin repo contents awsrepo
+----------------------------------+------------+----------+----------------------+--------+--------------------+----------+
| BackupID                         | ZoneID     | RepoName | Owner/Tenant         | App    | Snapshot           | Imported |
+----------------------------------+------------+----------+----------------------+--------+--------------------+----------+
| cb6dece40a8411eb921c9bddf98dabb2 | 1602109367 | awsrepo  | admin/Administrators | empapp | empapp_9-employees | False    |
+----------------------------------+------------+----------+----------------------+--------+--------------------+----------+

The snapshot has now been backed up into our AWS S3 bucket. Let’s note the “BackupID”, because we will need it to restore the database in the next step.

5.9. Restore the Redis application

With a backup, it is possible to create a new Redis store if the original app is corrupted or deleted.

Let’s simulate app corruption by deleting the app with the following commands.

# robin app unregister empapp  --force --wait
Job:   90 Name: K8SAppUnregister State: PROCESSED       Error: 0
Job:   90 Name: K8SAppUnregister State: AGENT_WAIT      Error: 0
Job:   90 Name: K8SAppUnregister State: COMPLETED       Error: 0

# helm uninstall employees --namespace redis-tutorial

Delete all the PVCs so that no data from the app remains.
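
One way to do this, assuming no other PVCs in the redis-tutorial namespace need to be kept:

# oc delete pvc --all -n redis-tutorial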

[jk@oc] kubectl get pvc -n redis-tutorial
No resources found in redis-tutorial namespace.

Now we have no Redis app. Let’s use the backup to restore the app.

[jk@oc] robin app create from-backup empapp cb6dece40a8411eb921c9bddf98dabb2 --namespace redis-tutorial --wait
Job:  107 Name: K8SApplicationCreate State: PREPARED        Error: 0
Job:  107 Name: K8SApplicationCreate State: AGENT_WAIT      Error: 0
Job:  107 Name: K8SApplicationCreate State: COMPLETED       Error: 0

We have a Redis app created from the backup. Let’s check the status.

[jk@oc] oc get all -n redis-tutorial
NAME                                  READY   STATUS    RESTARTS   AGE
pod/empapp-employees-redis-master-0   1/1     Running   0          3m11s
pod/empapp-employees-redis-slave-0    1/1     Running   0          2m11s
pod/empapp-employees-redis-slave-1    1/1     Running   0          2m33s

NAME                                      TYPE           CLUSTER-IP       EXTERNAL-IP                                                             PORT(S)          AGE
service/empapp-employees-redis-headless   ClusterIP      None             <none>                                                                  6379/TCP         3m45s
service/empapp-employees-redis-master     LoadBalancer   172.30.234.238   a2a9995d6d8cc4ab3b40687e9b6ef7fe-36048600.us-west-2.elb.amazonaws.com   6379:31370/TCP   3m44s
service/empapp-employees-redis-slave      ClusterIP      172.30.186.174   <none>                                                                  6379/TCP         3m45s

NAME                                             READY   AGE
statefulset.apps/empapp-employees-redis-master   1/1     3m44s
statefulset.apps/empapp-employees-redis-slave    2/2     3m44s

As you can see, we can restore Redis to a desired state in the event of data corruption. We simply pull the backup from the cloud.

To learn more about using Robin Storage on OpenShift, visit the Robin Storage solution page.

6. Kafka on OpenShift

After successfully deploying and running stateless applications, a number of developers are exploring the possibility of running stateful workloads, such as Kafka, on OpenShift. If you are considering extending OpenShift for stateful workloads, this tutorial will help you experiment on your existing OpenShift environment by providing step-by-step instructions.

This tutorial will walk you through:

  1. How to deploy Apache Kafka on OpenShift using the Robin Operator and Helm3

  2. Add simple producer and consumer

  3. Verify the Helm release has registered as a Robin application

  4. Create a point-in-time snapshot of the Apache Kafka application

  5. Simulate a user error and rollback to a stable state using the snapshot

6.1. Prerequisites: Install the Robin Operator on OpenShift and set up Helm

Robin Storage is an application-aware container storage that offers advanced data management capabilities and runs natively on OpenShift. Robin Storage delivers bare-metal performance and enables you to protect (via snapshots and backups), encrypt, collaborate (via clones and git like push/pull workflows) and make portable (via Cloud Repositories) stateful applications that are deployed using Helm Charts or Operators.

Before we deploy Apache Kafka on OpenShift, let’s first install the Robin operator on your existing OpenShift environment. You can install Robin directly from the OpenShift console by clicking on the “OperatorHub” tab. You can find further instructions here.

Let’s confirm that OpenShift cluster is up and running.

oc get nodes

You should see an output similar to below, with the list of nodes and their status as Ready

gkesavan@watchthetimefly new-kafka % oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-139-8.us-west-2.compute.internal     Ready    worker   13d   v1.18.3+47c0e71
ip-10-0-158-166.us-west-2.compute.internal   Ready    master   13d   v1.18.3+47c0e71
ip-10-0-176-9.us-west-2.compute.internal     Ready    worker   13d   v1.18.3+47c0e71
ip-10-0-179-1.us-west-2.compute.internal     Ready    master   13d   v1.18.3+47c0e71
ip-10-0-201-72.us-west-2.compute.internal    Ready    master   13d   v1.18.3+47c0e71
ip-10-0-214-206.us-west-2.compute.internal   Ready    worker   13d   v1.18.3+47c0e71

Let’s confirm that Robin is up and running. Run the following command to verify that Robin is ready.

oc describe robincluster -n robinio

You should see an output similar to below:

gkesavan@watchthetimefly new-kafka % oc describe robincluster -n robinio
Name:         robin
Namespace:    robinio
Labels:       app.kubernetes.io/instance=robin
            app.kubernetes.io/managed-by=robin.io
            app.kubernetes.io/name=robin
Annotations:  <none>
API Version:  manage.robin.io/v1
Kind:         RobinCluster
Metadata:
    ...
    Manager:      kubectl
    Operation:    Update
    Time:         2020-10-08T04:20:46Z
    ...
    API Version:  manage.robin.io/v1
    Manager:         robin-operator
    Operation:       Update
    Time:            2020-10-21T20:41:27Z
Resource Version:  5981508
Self Link:         /apis/manage.robin.io/v1/namespaces/robinio/robinclusters/robin
UID:               7629de96-fa5c-489f-9cc9-0e6d1fb2dc22
Spec:
host_type:     ec2
image_robin:   robinsys/robinimg:5.3.2-4
k8s_provider:  openshift
Options:
    cloud_cred_secret:  aws-secret
Status:
connect_command:   kubectl exec -it robin-zv7gb -n robinio -- bash
get_robin_client:  curl -k https://10.0.158.166:29442/api/v3/robin_server/download?file=robincli&os=linux > robin
master_ip:         10.0.158.166
Phase:             Ready
pod_status:
    robin-6jt8z  ip-10-0-176-9.us-west-2.compute.internal  Running 10.0.176.9 false
    ...
    robin-bwf5s  ip-10-0-179-1.us-west-2.compute.internal  Running 10.0.179.1 true
robin_node_status:
    host_name:      ip-10-0-158-166.us-west-2.compute.internal
    join_time:      1602130907
    k8s_node_name:  ip-10-0-158-166.us-west-2.compute.internal
    Roles:          M*,S
    Rpool:          default
    State:          ONLINE
    Status:         Ready
    host_name:      ip-10-0-179-1.us-west-2.compute.internal
    join_time:      1602130934
    k8s_node_name:  ip-10-0-179-1.us-west-2.compute.internal
    Roles:          M,S
    Rpool:          default
    State:          ONLINE
    Status:         Ready
    host_name:      ip-10-0-139-8.us-west-2.compute.internal
    join_time:      1602130935
    k8s_node_name:  ip-10-0-139-8.us-west-2.compute.internal
    Roles:          S
    Rpool:          default
    State:          ONLINE
    Status:         Ready
    host_name:      ip-10-0-176-9.us-west-2.compute.internal
    join_time:      1602130938
    k8s_node_name:  ip-10-0-176-9.us-west-2.compute.internal
    Roles:          S
    Rpool:          default
    State:          ONLINE
    Status:         Ready
    host_name:      ip-10-0-214-206.us-west-2.compute.internal
    join_time:      1602130939
    k8s_node_name:  ip-10-0-214-206.us-west-2.compute.internal
    Roles:          S
    Rpool:          default
    State:          ONLINE
    Status:         Ready
    host_name:      ip-10-0-201-72.us-west-2.compute.internal
    join_time:      1602130942
    k8s_node_name:  ip-10-0-201-72.us-west-2.compute.internal
    Roles:          M,S
    Rpool:          default
    State:          ONLINE
    Status:         Ready
Events:             <none>

You should see that Phase is marked as Ready.

To download the Robin client, find the field get_robin_client in the output above and run the corresponding command; note that the URL must be quoted because it contains an ‘&’. Afterwards, change the file permissions for robin and copy it to /usr/local/bin to make it available as a system command.
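
For example, quoting the URL from the get_robin_client field shown earlier (the transcript below instead copies the macOS client straight out of a Robin pod):

# curl -k "https://10.0.158.166:29442/api/v3/robin_server/download?file=robincli&os=linux" > robin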

gkesavan@watchthetimefly new-kafka % kubectl cp robin-zv7gb:/opt/robin/5.3.2-4/downloads/mac/robincli -n robinio robin
gkesavan@watchthetimefly new-kafka % sudo chmod +x robin ; sudo mv robin /usr/local/bin/

gkesavan@watchthetimefly new-kafka % robin login admin --p Robin123
User admin is logged into Administrators tenant
gkesavan@watchthetimefly new-kafka % robin host list
Id           | Hostname                                   | Version | Status | RPool   | LastOpr | Roles | Isol Cores(SHR/DED/Total) | Non-Isol Cores | GPUs | Mem(Free/Alloc/Total) | HDD(#/Alloc/Total) | SSD(#/Alloc/Total) | Pod Usage | Joined Time
-------------+--------------------------------------------+---------+--------+---------+---------+-------+---------------------------+----------------+------+-----------------------+--------------------+--------------------+-----------+----------------------
1602130884:1 | ip-10-0-158-166.us-west-2.compute.internal | 5.3.2-4 | Ready  | default | ONLINE  | M*,S  | 0/0/0                     | 6/40           | 0/0  | 2G/12G/15G            | -/-/-              | -/-/-              | 61/250    | 08 Oct 2020 04:21:47
1602130884:2 | ip-10-0-179-1.us-west-2.compute.internal   | 5.3.2-4 | Ready  | default | ONLINE  | M,S   | 0/0/0                     | 8/40           | 0/0  | 0.69G/14G/15G         | -/-/-              | -/-/-              | 72/250    | 08 Oct 2020 04:22:14
1602130884:3 | ip-10-0-139-8.us-west-2.compute.internal   | 5.3.2-4 | Ready  | default | ONLINE  | S     | 0/0/0                     | 2/20           | 0/0  | 2G/5G/7G              | -/-/-              | 1/24G/101G         | 26/250    | 08 Oct 2020 04:22:15
1602130884:4 | ip-10-0-176-9.us-west-2.compute.internal   | 5.3.2-4 | Ready  | default | ONLINE  | S     | 0/0/0                     | 2/20           | 0/0  | 2G/5G/7G              | -/-/-              | 1/12G/101G         | 19/250    | 08 Oct 2020 04:22:18
1602130884:5 | ip-10-0-214-206.us-west-2.compute.internal | 5.3.2-4 | Ready  | default | ONLINE  | S     | 0/0/0                     | 3/20           | 0/0  | 1G/5G/7G              | -/-/-              | -/-/-              | 23/250    | 08 Oct 2020 04:22:19
1602130884:6 | ip-10-0-201-72.us-west-2.compute.internal  | 5.3.2-4 | Ready  | default | ONLINE  | M,S   | 0/0/0                     | 7/40           | 0/0  | 2G/13G/15G            | -/-/-              | -/-/-              | 63/250    | 08 Oct 2020 04:22:22

Next, create a namespace in which we will create the application by running the following command. You should see an output similar to below:

# robin namespace add kafkans --import-namespace
Namespace 'kafkans' has been added for user 'admin' in tenant 'Administrators'

Let’s add a Bitnami Helm repository to pull Helm charts from. This repository has Helm charts designed to run on OpenShift.

You should see an output similar to below:

gkesavan@watchthetimefly ~ % helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

6.2. Deploy Apache Kafka on OpenShift

Now, let’s create Apache Kafka using Helm and Robin Storage. When we installed the Robin operator and created a “Robincluster” custom resource definition, a ‘robin’ StorageClass was created and registered with OpenShift. We can now use this StorageClass to create PersistentVolumes and PersistentVolumeClaims for the pods in OpenShift. Using this StorageClass allows us to access the data management capabilities (such as snapshot, clone, backup) provided by Robin Storage.

On OpenShift 4.x, the security context for the Kafka Helm chart should be updated to allow the containers to run in privileged mode. Fetch the Kafka chart and make the changes below.

gkesavan@watchthetimefly ~ % helm pull bitnami/kafka --version 11.8.5
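
helm pull downloads a packaged chart archive (kafka-11.8.5.tgz); extract it before editing the values file, or pass --untar to helm pull to fetch and extract in one step:

gkesavan@watchthetimefly ~ % tar -xzf kafka-11.8.5.tgz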

Edit the kafka/values.yaml file to suit your deployment. In addition, set the storageClass attribute to robin in order to take advantage of the data management capabilities Robin Storage offers.

global:
  storageClass: robin

Log in to the ‘kafkans’ project/namespace using the below oc command and add the required security context constraints to the service accounts.

gkesavan@watchthetimefly kafka % oc project kafkans
Now using project "kafkans" on server "https://api.giri.rbnio.net:6443".

gkesavan@watchthetimefly kafka % oc adm policy add-scc-to-user privileged -z default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "default"

gkesavan@watchthetimefly kafka % oc adm policy add-scc-to-user anyuid system:serviceaccount:kafkans:default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "default"

gkesavan@watchthetimefly kafka % oc adm policy add-scc-to-user privileged -z kaf01-kafka
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "kaf01-kafka"

Using the below Helm command, we will deploy Kafka.

gkesavan@watchthetimefly ~ % helm install kaf01 ./kafka --set global.storageClass=robin --set replicaCount=3,zookeeper.replicaCount=3,deleteTopicEnable=true -n kafkans
NAME: kaf01
LAST DEPLOYED: Wed Oct 21 17:54:13 2020
NAMESPACE: kafkans
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kaf01-kafka.kafkans.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kaf01-kafka-0.kaf01-kafka-headless.kafkans.svc.cluster.local:9092
    kaf01-kafka-1.kaf01-kafka-headless.kafkans.svc.cluster.local:9092
    kaf01-kafka-2.kaf01-kafka-headless.kafkans.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kaf01-kafka-client --restart='Never' --image docker.io/bitnami/kafka:2.6.0-debian-10-r57 --namespace kafkans --command -- sleep infinity
    kubectl exec --tty -i kaf01-kafka-client --namespace kafkans -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kaf01-kafka-0.kaf01-kafka-headless.kafkans.svc.cluster.local:9092,kaf01-kafka-1.kaf01-kafka-headless.kafkans.svc.cluster.local:9092,kaf01-kafka-2.kaf01-kafka-headless.kafkans.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kaf01-kafka.kafkans.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

Note

Helm3 is used for this tutorial. Please update to Helm3 or replace the commands shown with their Helm2 equivalents.
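
You can confirm which Helm version your client is running with:

# helm version --short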

Run the following command to verify the application is deployed and all relevant Kubernetes resources are ready. You should be able to see an output showing the status of your Kafka App.

gkesavan@watchthetimefly ~ % helm list -n kafkans
NAME        NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
kaf01       kafkans         1               2020-10-21 18:04:43.534632 -0700 PDT    deployed        kafka-11.8.5    2.6.0

You will also want to make sure the Kafka application pods are in a good state before proceeding further. Run the following command to verify the pods are running.

gkesavan@watchthetimefly ~ % oc get po -n kafkans
NAME                READY   STATUS    RESTARTS   AGE
kaf01-kafka-0       1/1     Running   1          2m13s
kaf01-kafka-1       1/1     Running   1          2m13s
kaf01-kafka-2       1/1     Running   0          2m13s
kaf01-zookeeper-0   1/1     Running   0          2m13s
kaf01-zookeeper-1   1/1     Running   0          2m13s
kaf01-zookeeper-2   0/1     Pending   0          2m12s

If any pods are still Pending or restarting (as kaf01-zookeeper-2 is above), give them a few moments; all pods should report Running and READY 1/1 before you proceed.
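
One convenient way to wait is to watch the pods until they all settle (press Ctrl+C to stop watching):

# oc get po -n kafkans -w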

6.3. Add a simple producer and consumer

Now that we know Kafka is up and running, let’s create a producer for the test topic and a consumer streaming the data from it. There are a couple of ways to do this:

  1. Create a Kafka client pod and use that to create a producer and consumer.

  2. Exec into one of the 3 brokers, start a producer to input the data, and use the same broker to start a consumer.

For this tutorial, we will go with option (2).

gkesavan@watchthetimefly ~ % kubectl exec -it kaf01-kafka-0 -n kafkans -- bash

root@kaf01-kafka-0:/# kafka-console-producer.sh \
            --broker-list kaf01-kafka-0.kaf01-kafka-headless.kafkans.svc.cluster.local:9092,kaf01-kafka-1.kaf01-kafka-headless.kafkans.svc.cluster.local:9092,kaf01-kafka-2.kaf01-kafka-headless.kafkans.svc.cluster.local:9092 \
            --topic test
>test1
>test2
>test3
>test4
>test5

Now start a consumer to read the data from the test topic from the beginning.

gkesavan@watchthetimefly ~ % kubectl exec -it kaf01-kafka-0 -n kafkans -- bash

root@kaf01-kafka-0:/# kafka-console-consumer.sh \
>             --bootstrap-server kaf01-kafka.kafkans.svc.cluster.local:9092 \
>             --topic test \
>             --from-beginning
test1
test2
test3
test4
test5
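
Optionally, you can also describe the topic from inside the broker to check its partition and replica layout (output omitted here, as it varies by cluster):

root@kaf01-kafka-0:/# kafka-topics.sh --zookeeper kaf01-zookeeper --describe --topic test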

As you can see from the output above, the messages from the producer are read back by the consumer from the beginning. Now, let’s take a look at the data management capabilities Robin brings, such as taking snapshots, making clones, and creating backups.

6.4. Verify the Apache Kafka Helm release has registered as an application

To benefit from the data management capabilities, we will register our Apache Kafka release with Robin. Doing so lets Robin map and track all resources associated with the Helm release in order to enable the advanced data management capabilities of the product.

Since we initially added the ‘kafkans’ namespace in Robin for the admin user, Robin will auto-discover the Helm applications deployed in the ‘kafkans’ namespace. Verify this is the case by getting information and status for the application using the release name, with the following commands:

[robinds@ip-10-0-158-166 ~]# robin app list
Helm/Flex Apps:

+-------+------+--------+----------------------+-----------+-----------+---------+
| Name  | Type | State  | Owner/Tenant         | Namespace | Snapshots | Backups |
+-------+------+--------+----------------------+-----------+-----------+---------+
| kaf01 | helm | ONLINE | admin/Administrators | kafkans   | 0         | 0       |
+-------+------+--------+----------------------+-----------+-----------+---------+

[robinds@ip-10-0-158-166 ~]# robin app info kaf01 --status
+-----------------------+--------------------------+--------+---------+
| Kind                  | Name                     | Status | Message |
+-----------------------+--------------------------+--------+---------+
| ServiceAccount        | kaf01-kafka              | Ready  | -       |
| ConfigMap             | kaf01-kafka-scripts      | Ready  | -       |
| PersistentVolumeClaim | data-kaf01-zookeeper-1   | Bound  | -       |
| PersistentVolumeClaim | data-kaf01-kafka-2       | Bound  | -       |
| PersistentVolumeClaim | data-kaf01-kafka-0       | Bound  | -       |
| PersistentVolumeClaim | data-kaf01-kafka-1       | Bound  | -       |
| PersistentVolumeClaim | data-kaf01-zookeeper-2   | Bound  | -       |
| PersistentVolumeClaim | data-kaf01-zookeeper-0   | Bound  | -       |
| Pod                   | kaf01-kafka-1            | Ready  | -       |
| Pod                   | kaf01-zookeeper-1        | Ready  | -       |
| Pod                   | kaf01-kafka-0            | Ready  | -       |
| Pod                   | kaf01-zookeeper-0        | Ready  | -       |
| Pod                   | kaf01-kafka-2            | Ready  | -       |
| Pod                   | kaf01-zookeeper-2        | Ready  | -       |
| Service               | kaf01-zookeeper          | Ready  | -       |
| Service               | kaf01-zookeeper-headless | Ready  | -       |
| Service               | kaf01-kafka-headless     | Ready  | -       |
| Service               | kaf01-kafka              | Ready  | -       |
| StatefulSet           | kaf01-zookeeper          | Ready  | -       |
| StatefulSet           | kaf01-kafka              | Ready  | -       |
| PodDisruptionBudget   | kaf01-zookeeper          | Ready  | -       |
| PodDisruptionBudget   | kaf01-kafka              | Ready  | -       |
+-----------------------+--------------------------+--------+---------+

Key:
Green: Object is running
Yellow: Object is potentially down
Red: Object is down

6.5. Snapshot the Apache Kafka Application

If you make a mistake, such as unintentionally deleting important data, you may be able to undo it by restoring the app to a previous snapshot. Snapshots allow you to restore the state of your application to the point-in-time state saved within the snapshot.

Robin lets you snapshot not just the storage volumes (PVCs) but the entire Kafka application, including all of its resources such as Pods, StatefulSets, PVCs, Services, and ConfigMaps, with a single command. To create a snapshot, run the following command.

[robinds@ip-10-0-158-166 ~]# robin snapshot create kaf01 --snapname snaptesttopic --desc "snap test topic" --wait
Job:  247 Name: K8SApplicationSnapshot State: PROCESSED       Error: 0
Job:  247 Name: K8SApplicationSnapshot State: WAITING         Error: 0
Job:  247 Name: K8SApplicationSnapshot State: COMPLETED       Error: 0

Let’s verify we have successfully created the snapshot.

[robinds@ip-10-0-158-166 ~]# robin snapshot list --app kaf01
+----------------------------------+--------+----------+----------+---------------------+
| Snapshot ID                      | State  | App Name | App Kind | Snapshot name       |
+----------------------------------+--------+----------+----------+---------------------+
| 3abad1c6140d11eb8c389198c7b36ce6 | ONLINE | kaf01    | helm     | kaf01_snaptesttopic |
+----------------------------------+--------+----------+----------+---------------------+

We now have a snapshot of our entire Kafka app with the test topic.

6.6. Rollback the Apache Kafka application

We have the test topic in our existing Kafka application snapshot. Let’s simulate a scenario where a user has accidentally deleted the test topic. (Topic deletion works here because we set deleteTopicEnable=true when installing the chart.)

root@kaf01-kafka-0:/# kafka-topics.sh --zookeeper kaf01-zookeeper --list
__consumer_offsets
test

root@kaf01-kafka-0:/# kafka-topics.sh --zookeeper kaf01-zookeeper --topic test --delete
Topic test is marked for deletion.

root@kaf01-kafka-0:/# kafka-topics.sh --zookeeper kaf01-zookeeper --list
__consumer_offsets
root@kaf01-kafka-0:/#

Let’s list the snapshot for our application. Note the snapshot id, as we will use it in the next command.

[robinds@ip-10-0-158-166 ~]# robin app info kaf01
Name                              : kaf01
Kind                              : helm
State                             : ONLINE
Number of repos                   : 0
Number of snapshots               : 1
Number of usable backups          : 0
Number of archived/failed backups : 0

Query:
-------
{'resources': [], 'namespace': 'kafkans', 'selectors': [], 'apps': ['helm/kaf01@kafkans']}

Snapshots:
+----------------------------------+---------------------+-----------------+--------+----------------------+
| Id                               | Name                | Description     | State  | Creation Time        |
+----------------------------------+---------------------+-----------------+--------+----------------------+
| 3abad1c6140d11eb8c389198c7b36ce6 | kaf01_snaptesttopic | snap test topic | ONLINE | 22 Oct 2020 02:21:16 |
+----------------------------------+---------------------+-----------------+--------+----------------------+

Now, let’s roll back to the point where we had the test topic, using the snapshot id displayed above and the command below.

# robin app restore <app_name> --snapshotid <snapshot_id> --wait

You should see output similar to the following:

[robinds@ip-10-0-158-166 ~]# robin app restore kaf01 --snapshotid 3abad1c6140d11eb8c389198c7b36ce6 --wait
Job:  257 Name: K8SApplicationRollback State: PROCESSED       Error: 0
Job:  257 Name: K8SApplicationRollback State: PREPARED        Error: 0
Job:  257 Name: K8SApplicationRollback State: AGENT_WAIT      Error: 0
Job:  257 Name: K8SApplicationRollback State: COMPLETED       Error: 0

Validate that Apache Kafka pods are up and running after the restore operation.

gkesavan@watchthetimefly ~ % oc get po -n kafkans
NAME                READY   STATUS    RESTARTS   AGE
kaf01-kafka-0       1/1     Running   2          27m
kaf01-kafka-1       1/1     Running   2          27m
kaf01-kafka-2       1/1     Running   2          27m
kaf01-zookeeper-0   1/1     Running   0          27m
kaf01-zookeeper-1   1/1     Running   0          27m
kaf01-zookeeper-2   1/1     Running   0          27m

Verify that we have rolled back to the version of the Kafka application that has the test topic.

root@kaf01-kafka-0:/# kafka-topics.sh --list --zookeeper kaf01-zookeeper
__consumer_offsets
test

Run the Kafka console consumer to validate that the messages in the test topic are present from the beginning.

root@kaf01-kafka-0:/# kafka-console-consumer.sh --bootstrap-server kaf01-kafka.kafkans.svc.cluster.local:9092 --topic test --from-beginning
test1
test2
test3
test4
test5

We have successfully rolled back to our original state, with the messages in the test topic!

To learn more about using Robin Storage on OpenShift, visit the Robin Storage solution page.