Using Multi-Node Clusters
Overview
- This tutorial will show you how to start a multi-node cluster on minikube and deploy a service to it.
Prerequisites
- minikube 1.10.1 or higher
- kubectl
Caveat
The default host-path volume provisioner doesn’t support multi-node clusters (#12360). To provision or claim volumes in a multi-node cluster, you can use the CSI Hostpath Driver addon instead.
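For example, you could enable it before deploying workloads that need storage (a sketch; the volumesnapshots addon is a prerequisite of the CSI Hostpath Driver addon):
minikube addons enable volumesnapshots -p multinode-demo
minikube addons enable csi-hostpath-driver -p multinode-demo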
Tutorial
- Start a cluster with 2 nodes in the driver of your choice:
minikube start --nodes 2 -p multinode-demo
😄  [multinode-demo] minikube v1.18.1 on Opensuse-Tumbleweed
✨  Automatically selected the docker driver
👍  Starting control plane node multinode-demo in cluster multinode-demo
🔥  Creating docker container (CPUs=2, Memory=8000MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
👍  Starting node multinode-demo-m02 in cluster multinode-demo
🔥  Creating docker container (CPUs=2, Memory=8000MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.49.2
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
    ▪ env NO_PROXY=192.168.49.2
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "multinode-demo" cluster and "default" namespace by default
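- If you need more nodes later, you can also add them to a running cluster (an optional extra step; minikube node add is available in minikube 1.10 and later):
minikube node add -p multinode-demo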
- Get the list of your nodes:
kubectl get nodes
NAME                 STATUS   ROLES                  AGE   VERSION
multinode-demo       Ready    control-plane,master   99s   v1.20.2
multinode-demo-m02   Ready    <none>                 73s   v1.20.2
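- minikube itself can also list the nodes it manages for a profile, which is handy if kubectl is currently pointed at a different cluster (output omitted here):
minikube node list -p multinode-demo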
- You can also check the status of your nodes:
minikube status -p multinode-demo
multinode-demo
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
multinode-demo-m02
type: Worker
host: Running
kubelet: Running
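- For scripting, recent minikube versions can emit the same status as JSON via the --output flag:
minikube status -p multinode-demo --output json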
- Deploy our hello world deployment (hello-deployment.yaml is listed under Referenced YAML files below):
kubectl apply -f hello-deployment.yaml
deployment.apps/hello created
kubectl rollout status deployment/hello
deployment "hello" successfully rolled out
- Deploy our hello world service, which simply echoes back the IP address of the pod that serves each request:
kubectl apply -f hello-svc.yaml
service/hello created
- Check out the IP addresses of our pods, and note them for future reference:
kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE   READINESS GATES
hello-695c67cf9c-bzrzk   1/1     Running   0          22s   10.244.1.2   multinode-demo-m02   <none>           <none>
hello-695c67cf9c-frcvw   1/1     Running   0          22s   10.244.0.3   multinode-demo       <none>           <none>
- Look at our service to find out what URL to hit:
minikube service list -p multinode-demo
|-------------|------------|--------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|------------|--------------|---------------------------|
| default | hello | 80 | http://192.168.49.2:31000 |
| default | kubernetes | No node port | |
| kube-system | kube-dns | No node port | |
|-------------|------------|--------------|---------------------------|
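- If you'd rather capture the URL programmatically, minikube service can print it with the --url flag (it should print the same http://192.168.49.2:31000 URL shown above):
minikube service hello -p multinode-demo --url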
- Let’s hit the URL a few times and see what comes back:
curl http://192.168.49.2:31000
Hello from hello-695c67cf9c-frcvw (10.244.0.3)
curl http://192.168.49.2:31000
Hello from hello-695c67cf9c-bzrzk (10.244.1.2)
curl http://192.168.49.2:31000
Hello from hello-695c67cf9c-bzrzk (10.244.1.2)
curl http://192.168.49.2:31000
Hello from hello-695c67cf9c-frcvw (10.244.0.3)
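- A short shell loop makes the load-balancing across nodes easier to see (a sketch; we derive the cluster IP with minikube ip rather than hard-coding it):
for _ in $(seq 1 5); do curl -s http://$(minikube ip -p multinode-demo):31000; done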
Multiple nodes!
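When you are done experimenting, you can delete the whole cluster (optional cleanup):
minikube delete -p multinode-demo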
Referenced YAML files
hello-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100%
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      affinity:
        # ⬇⬇⬇ This ensures pods will land on separate hosts
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions: [{ key: app, operator: In, values: [hello] }]
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: hello-from
          image: pbitty/hello-from:latest
          ports:
            - name: http
              containerPort: 80
      terminationGracePeriodSeconds: 1
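Because the anti-affinity rule is required rather than preferred, at most one hello pod can be scheduled per node, so scaling the deployment beyond the node count leaves the extra replicas unschedulable. For example (a hypothetical follow-up, not part of the steps above), on this two-node cluster the third replica would stay Pending until another node joins:
kubectl scale deployment hello --replicas=3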
hello-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - protocol: TCP
      nodePort: 31000
      port: 80
      targetPort: http
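Note that targetPort: http refers to the named container port (http, i.e. 80) in the deployment above, while nodePort: 31000 is the externally reachable port shown in the service list earlier. You can inspect the resulting 80:31000/TCP mapping with:
kubectl get service hello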