How to share data with a local Kubernetes cluster, using a custom PersistentVolume.
I will use the clk extension to easily bootstrap a kind cluster.
clk k8s --distribution kind flow
A hardcoded behavior of clk mounts the folder /kind on /shared in the kind docker container.
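To sanity check that mount (this assumes the kind node container is named clk-k8s-control-plane, matching the node name used further down, and that docker runs in a colima VM, as in my setup):

```shell
# a file created on the VM under /kind should show up
# under /shared inside the kind node container
ssh colima touch /kind/hello
docker exec clk-k8s-control-plane ls /shared
```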
Then, say that you are debugging a deployment that contains a PersistentVolumeClaim named my-claim in the default namespace.
The PersistentVolume to create is simply:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-volume
spec:
  accessModes:
    - ReadWriteOnce
  claimRef:
    name: my-claim
    namespace: default
  capacity:
    storage: 1Gi
  hostPath:
    path: /shared/myvolume/
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - clk-k8s-control-plane
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  volumeMode: Filesystem
The nodeAffinity is not strictly needed, as the standard kind cluster has only one node, but it might help you remember that you explicitly put this persistent volume on this node.
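Assuming the manifest above is saved as shared-volume-pv.yaml (the file name is my choice), creating the volume is a single apply:

```shell
kubectl apply -f shared-volume-pv.yaml
# the volume shows up as Available until a claim binds it
kubectl get pv shared-volume
```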
kubectl get pvc my-claim

| NAME | STATUS | VOLUME | CAPACITY | ACCESS MODES | STORAGECLASS | VOLUMEATTRIBUTESCLASS | AGE |
|---|---|---|---|---|---|---|---|
| my-claim | Bound | shared-volume | 1Gi | RWO | | &lt;unset&gt; | 56s |
bonus: create the PersistentVolumeClaim
It is as simple as that:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  storageClassName: ""
  volumeName: shared-volume
  resources:
    requests:
      storage: 100Mi
  accessModes:
    - ReadWriteOnce
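Note that storageClassName: "" disables dynamic provisioning and, together with volumeName, asks for a static binding to the shared-volume PV above. Assuming the manifest is saved as my-claim-pvc.yaml (the file name is my choice):

```shell
kubectl apply -f my-claim-pvc.yaml
# the claim should quickly become Bound to shared-volume
kubectl get pvc my-claim
```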
Let’s schedule a pod that uses it.
apiVersion: v1
kind: Pod
metadata:
  name: testpod
  namespace: default
  labels:
    app.kubernetes.io/name: testpod
spec:
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-claim
  containers:
    - image: nginx:latest
      imagePullPolicy: IfNotPresent
      name: maincontainer
      ports:
        - containerPort: 80
          name: http
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 5
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 5
      volumeMounts:
        - name: my-volume
          mountPath: /usr/share/nginx/html/init
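Assuming the pod manifest is saved as testpod.yaml (the file name is my choice), apply it and wait for the probes to pass:

```shell
kubectl apply -f testpod.yaml
kubectl wait --for=condition=Ready pod/testpod --timeout=60s
```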
And check that the content created in the pod is actually visible on the host.
kubectl exec pod/testpod -- touch /usr/share/nginx/html/init/foo
ssh colima ls /kind/myvolume
foo
bonus2: using that path without a PV or PVC
I can actually access that directory directly from a pod without much work, using a plain hostPath volume.
apiVersion: v1
kind: Pod
metadata:
  name: testpod
  namespace: default
  labels:
    app.kubernetes.io/name: testpod
spec:
  volumes:
    - name: shared-volume
      hostPath:
        path: /shared/myothervolume
  containers:
    - image: nginx:latest
      imagePullPolicy: IfNotPresent
      name: maincontainer
      ports:
        - containerPort: 80
          name: http
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 5
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 5
      volumeMounts:
        - name: shared-volume
          mountPath: /usr/share/nginx/html/init
kubectl exec pod/testpod -- touch /usr/share/nginx/html/init/bar
ssh colima ls /kind/myothervolume
bar
Notes linking here
- persistent local volumes with k3d kubernetes (braindump)