K8s PVC stuck in Pending in Sydney (Dallas OK)
I'm trying to deploy a Redis container in k8s (there are a couple of other containers in my pod, but I've excluded them here since the PVC is the only part giving me problems).
As per the question title, it gets stuck in "Pending" if I try this on a Sydney (AU) cluster, but works fine if I try against a Dallas (US) cluster. Here's the YAML:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  labels:
    app: myapp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: redis:6.0.8-alpine
          name: redis
          args: ['--bind', '0.0.0.0']
          ports:
            - containerPort: 6379
              name: redis
          volumeMounts:
            - name: redis-pv
              mountPath: /data
      volumes:
        - name: redis-pv
          persistentVolumeClaim:
            claimName: redis-pvc
If I run kubectl apply -f redis.yaml while my k8s context is set to my Sydney cluster, and then run kubectl get pods, I see the pod is Pending:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-5c8ccbf6bb-97g2x 0/1 Pending 0 8m27s
At the end of the output from kubectl describe pods, I see:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 15s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 15s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Now if I run through the above with my k8s context set to Dallas, everything works fine:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-5c8ccbf6bb-ldhxw 1/1 Running 0 34s
Is there something wrong with Sydney?
2 Replies
Hey @reetpetite - thanks for hanging in there on this one; I know this response is a week behind at this point. Let me say that your post details (especially the inclusion of the yaml file) were extremely helpful.
Using the info and the Redis YAML you provided, I was able to recreate the environment you described. However, I could switch contexts between the Dallas and Sydney clusters and apply redis.yaml to either cluster without hitting any Pending statuses, and a couple of my colleagues achieved the same result via our API with no issues.
I also checked on our backend to see if there were any overarching issues with the Sydney data center at the time of your posting, and I couldn't find anything on our end. This tells me that this is either somehow related to an internal config or potentially a brief external networking blip at that specific date/time.
I looked up the "pod has unbound immediate PersistentVolumeClaims" error you provided, and it led me to this Stack Overflow thread: Kubernetes - pod has unbound immediate PersistentVolumeClaims. The answers there suggest that a Pending or hanging status means no PersistentVolume is available to satisfy the claim, or none matches the claim's requirements.
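If you do see this again, a quick way to check whether the claim itself is the problem (a generic sketch, not specific to your clusters) is to describe the PVC and list the cluster's storage classes:

```shell
# Inspect why the claim is stuck; the Events section at the bottom
# usually names the reason (e.g. no storage class, no matching PV).
kubectl describe pvc redis-pvc

# List the storage classes the cluster offers. A PVC that doesn't set
# storageClassName can only bind dynamically if one class is marked "(default)".
kubectl get storageclass
```

If the Sydney cluster has no default StorageClass while Dallas does, that would explain the behavior you saw.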
If you're able, I would suggest trying this again with a Sydney cluster to see if the issue persists. Hopefully you were able to get things up and running in the meantime. :)
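One more thing worth trying if the claim still won't bind: pin a StorageClass explicitly in the PVC instead of relying on the cluster default. The class name below is only a placeholder, so substitute one of the names that kubectl get storageclass reports on the target cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  labels:
    app: myapp
spec:
  storageClassName: standard   # placeholder - use a class that exists in the target cluster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```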
Thanks @jdutton - I've been using Singapore, but I'll try Sydney again over the next few days :)