I'm trying to deploy MySQL InnoDB Cluster on Kubernetes using the Oracle MySQL Operator with manifest files and kubectl. However, after applying the mycluster.yaml file, the pods are stuck in the Initializing/Pending state and never progress to Running. When I checked the events, the following lines are shown.
LAST SEEN TYPE REASON OBJECT MESSAGE
3m4s Normal FailedBinding persistentvolumeclaim/datadir-mycluster-0 no persistent volumes available for this claim and no storage class is set
33m Normal FailedBinding persistentvolumeclaim/datadir-mycluster-1 no persistent volumes available for this claim and no storage class is set
3m4s Normal FailedBinding persistentvolumeclaim/datadir-mycluster-1 no persistent volumes available for this claim and no storage class is set
33m Normal FailedBinding persistentvolumeclaim/datadir-mycluster-2 no persistent volumes available for this claim and no storage class is set
3m4s Normal FailedBinding persistentvolumeclaim/datadir-mycluster-2 no persistent volumes available for this claim and no storage class is set
I have found that the issue is due to the absence of a storage class. I've followed the official documentation from MySQL (MySQL Operator), but there's no mention of a storage class manifest file there.
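For context, this is the kind of StorageClass plus PersistentVolume manifest I believe is missing. It's only a minimal sketch assuming manually provisioned local volumes; the names, path, capacity, and node hostname are placeholders, and each of the three instances would need its own PV on some node:

```yaml
# Sketch only: StorageClass with no dynamic provisioner, plus one
# local PersistentVolume (one PV like this is needed per PVC).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-pv-0            # placeholder name
spec:
  capacity:
    storage: 2Gi                # placeholder size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/mysql-data       # placeholder; directory must exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1      # placeholder node name
```

I'm not sure whether this is the intended way to satisfy the operator's PVCs, or whether the PVCs also need to reference the storage class explicitly.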
The mycluster.yaml file is,
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster
spec:
  secretName: mypwds
  tlsUseSelfSigned: true
  instances: 3
  router:
    instances: 1
The MySQL Operator itself is running without issues; the problem occurs only with the InnoDBCluster mycluster and its three pods. I am using a real Kubernetes cluster with one master node and three worker-node virtual machines. I want to run multiple read/write instances of MySQL in Kubernetes that use the same storage directory for data persistence.
What should I do to resolve this issue and get it working? All suggestions and help are welcome.