I have created an Enhanced Kubernetes cluster (OKE) in Oracle Cloud Infrastructure (OCI). Some of the cluster's pods connect to VM instances that sit outside OKE but in the same VCN subnet as the pods.
I want these connections to use domain names instead of IP addresses, so I added IP-to-hostname mappings in a hosts block in the coredns ConfigMap that OKE creates automatically in kube-system.
Issue: after editing the entry in the ConfigMap and doing a proper rollout (commands below), the ConfigMap reverts to its original state after around 12 hours.
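For reference, the rollout after the edit was roughly the following (a sketch; I am assuming the default coredns Deployment that OKE creates in kube-system):

kubectl rollout restart deployment coredns -n kube-system
kubectl rollout status deployment coredns -n kube-system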
I am unable to determine what is causing this behaviour, since this is an Enhanced cluster and not a Basic one. Is there a recommended way to do this host mapping so that the custom changes are preserved?
The CoreDNS ConfigMap edit is shown below (the custom part is marked between "Custom code starts" and "Custom code ends" in the YAML):
kubectl edit cm coredns -n kube-system
data:
  Corefile: |-
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        # Custom code starts
        hosts {
            10.110.2.60  citus.exotel.local
            10.110.2.60  citus-coordinator-master.exotel.local
            10.110.2.252 citus-coordinator-replica.exotel.local
            10.110.2.96  citus-worker-0.exotel.local
            10.110.2.59  redis.exotel.local
            10.110.2.8   kafka.exotel.local
            10.110.2.8   kafka-0.exotel.local
            10.110.2.250 kafka-1.exotel.local
            10.110.2.142 kafka-2.exotel.local
            fallthrough
        }
        # Custom code ends
    }
    import custom/*.server
kind: ConfigMap
metadata:
  name: coredns
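To confirm the mapping is active right after the edit, a quick lookup from a throwaway pod works (a sketch; the pod name dns-test is arbitrary, and I am assuming an image that ships nslookup, such as busybox):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup citus.exotel.local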