Pushing images

This page compares eight ways to push your image into a minikube cluster.

Glossary:

Pull means downloading a container image directly from a remote registry.

Push means uploading a container image directly to a remote registry.

Load takes an image that is available as an archive, and makes it available in the cluster.

Save saves an image into an archive.

Build takes a “build context” (directory) and creates a new image in the cluster from it.

Tag means assigning a name and tag to an image.

Comparison table for different methods

The best method to push your image to minikube depends on the container-runtime you built your cluster with (the default is docker). Here is a comparison table to help you choose:

| Method | Supported Runtimes | Performance | Load | Build |
|---|---|---|---|---|
| docker-env command | only docker | good | yes | yes |
| cache command | all | ok | yes | no |
| podman-env command | only cri-o | good | yes | yes |
| registry addon | all | ok | yes | no |
| minikube ssh | all | best | yes* | yes* |
| ctr/buildctl command | only containerd | good | yes | yes |
| image load command | all | ok | yes | no |
| image build command | all | ok | no | yes |
  • note 1: the default container runtime on minikube is ‘docker’.
  • note 2: the ‘none’ driver (bare metal) does not need images to be pushed to the cluster, as any image on your system is already available to Kubernetes.
  • note 3: when using ssh to run the commands, the files to load or build must already be available on the node (not only on the client host).

1. Pushing directly to the in-cluster Docker daemon (docker-env)

This is similar to podman-env but only for the Docker runtime. When using a container or VM driver (all drivers except none), you can reuse the Docker daemon inside the minikube cluster. This means you don’t have to build on your host machine and push the image into a Docker registry. You can just build inside the same Docker daemon as minikube, which speeds up local experiments.

To point your terminal to use the docker daemon inside minikube run this:

eval $(minikube docker-env)

PowerShell

& minikube -p minikube docker-env --shell powershell | Invoke-Expression

cmd

@FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env --shell cmd') DO @%i

Now any ‘docker’ command you run in this terminal will run against the Docker daemon inside the minikube cluster.

So if you run the following command, it will show you the containers running inside minikube’s VM or container:

docker ps

Now you can ‘build’ against the Docker daemon inside minikube, and the resulting image is instantly accessible to the Kubernetes cluster.

docker build -t my_image .

To verify that your terminal is using minikube’s docker-env, check that the environment variable MINIKUBE_ACTIVE_DOCKERD reflects the cluster name.
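A minimal shell check for this, where the variable name comes from minikube but the messages are illustrative:

```shell
# Print whether this shell is currently pointed at minikube's Docker daemon.
# MINIKUBE_ACTIVE_DOCKERD is set by `eval $(minikube docker-env)`.
if [ -n "${MINIKUBE_ACTIVE_DOCKERD:-}" ]; then
  echo "docker-env active for cluster: ${MINIKUBE_ACTIVE_DOCKERD}"
else
  echo "docker-env not active in this shell"
fi
```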

Tip 1: Remember to turn off imagePullPolicy: Always (use imagePullPolicy: IfNotPresent or imagePullPolicy: Never) in your yaml file. Otherwise Kubernetes won’t use your locally built image and will pull it from the network instead.
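For instance, a deployment manifest using a locally built image might look like this (the deployment name and labels are hypothetical; the image name matches the build example above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my_image           # the image built inside minikube
        imagePullPolicy: Never    # never pull from a remote registry
```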

Tip 2: Evaluating the docker-env is only valid for the current terminal. By closing the terminal, you will go back to using your own system’s docker daemon.

Tip 3: In container-based drivers such as Docker or Podman, you will need to re-do docker-env each time you restart your minikube cluster.

More information on docker-env


2. Pushing images using the ‘cache’ command

From your host, you can push a Docker image directly to minikube. This image will be cached and automatically pulled into all future minikube clusters created on the machine.

minikube cache add alpine:latest

The add command will store the requested image to $MINIKUBE_HOME/cache/images, and load it into the minikube cluster’s container runtime environment automatically.
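The storage location can be derived in the shell like this (MINIKUBE_HOME defaults to ~/.minikube; the echoed text is illustrative):

```shell
# Resolve the directory where `minikube cache add` stores image archives.
cache_dir="${MINIKUBE_HOME:-$HOME/.minikube}/cache/images"
echo "cached image archives live in: ${cache_dir}"
```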

Tip 1: If your image changes after you cached it, you need to run ‘cache reload’.

minikube refreshes the cached images on each start. However, to reload all the cached images on demand, run this command:

minikube cache reload

Tip 2: If you have multiple clusters, the cache command will load the image into all of them.

To display images you have added to the cache:

minikube cache list

This listing will not include minikube’s built-in system images.

To remove an image from the cache:

minikube cache delete <image name>

For more information, see:


3. Pushing directly to in-cluster CRI-O (podman-env)

Linux

This is similar to docker-env but only for the CRI-O runtime. To push directly to CRI-O, configure the Podman client on your host using the podman-env command in your shell:

eval $(minikube podman-env)

You should now be able to use podman client on the command line on your host machine talking to the podman service inside the minikube VM:

podman-remote help

Now you can ‘build’ against the storage inside minikube, which is instantly accessible to the Kubernetes cluster.

podman-remote build -t my_image .

Note: On Linux the remote client is called “podman-remote”, while the local program is called “podman”.

macOS

This is similar to docker-env but only for the CRI-O runtime. To push directly to CRI-O, configure the Podman client on your host using the podman-env command in your shell:

eval $(minikube podman-env)

You should now be able to use Podman client on the command line on your host machine talking to the Podman service inside the minikube VM:

podman help

Now you can ‘build’ against the storage inside minikube, which is instantly accessible to the Kubernetes cluster.

podman build -t my_image .

Note: On macOS the remote client is called “podman”, since there is no local “podman” program available.

Windows

This is similar to docker-env but only for the CRI-O runtime. To push directly to CRI-O, configure the Podman client on your host using the podman-env command in your shell:

PowerShell

& minikube -p minikube podman-env --shell powershell | Invoke-Expression

cmd

@FOR /f "tokens=*" %i IN ('minikube -p minikube podman-env --shell cmd') DO @%i

You should now be able to use Podman client on the command line on your host machine talking to the Podman service inside the minikube VM:

Now you can ‘build’ against the storage inside minikube, which is instantly accessible to the Kubernetes cluster.

podman help
podman build -t my_image .

Note: On Windows the remote client is called “podman”, since there is no local “podman” program available.

Remember to turn off imagePullPolicy: Always (use imagePullPolicy: IfNotPresent or imagePullPolicy: Never), as otherwise Kubernetes won’t use the images you built locally.


4. Pushing to an in-cluster registry using the Registry addon

For illustration purposes, we will assume that the minikube VM has an IP from the 192.168.39.0/24 subnet. If you have not overridden these subnets as per the networking guide, you can find the default subnet used by minikube for a specific OS and driver combination here, which is subject to change. Replace 192.168.39.0/24 with the appropriate values for your environment wherever applicable.

Ensure that Docker is configured to use 192.168.39.0/24 as an insecure registry. Refer here for instructions.

Ensure that 192.168.39.0/24 is enabled as an insecure registry in minikube. Refer here for instructions.

Enable minikube registry addon:

minikube addons enable registry

Build the docker image and tag it appropriately:

docker build --tag $(minikube ip):5000/test-img .

Push the docker image to the minikube registry:

docker push $(minikube ip):5000/test-img
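Putting the two steps together, the tag the addon registry expects has the form <minikube-ip>:5000/<image-name>. A sketch of building that tag, using a hard-coded IP in place of $(minikube ip):

```shell
# Build the registry-qualified tag for an image. In a real session you
# would use: minikube_ip="$(minikube ip)"
minikube_ip="192.168.39.10"       # hypothetical cluster IP
image_name="test-img"
registry_tag="${minikube_ip}:5000/${image_name}"
echo "${registry_tag}"
```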

5. Building images inside of minikube using SSH

Use minikube ssh to run commands inside the minikube node, and run the build command directly there. Any command you run there will run against the same daemon / storage that the Kubernetes cluster is using.

For Docker, use:

docker build

For more information on the docker build command, read the Docker documentation (docker.com).

For CRI-O, use:

sudo podman build

For more information on the podman build command, read the Podman documentation (podman.io).

For Containerd, use:

sudo ctr images import
sudo buildctl build

For more information on the ctr images command, read the containerd documentation (containerd.io)

For more information on the buildctl build command, read the Buildkit documentation (mobyproject.org).

To exit minikube ssh and come back to your terminal, type:

exit

6. Pushing directly to in-cluster containerd (buildkitd)

This is similar to docker-env and podman-env but only for Containerd runtime.

Currently it requires starting the daemon and setting up the tunnels manually.

ctr instructions

In order to access containerd, you need to log in as root. This requires adding the ssh key to /root/.ssh/authorized_keys.

docker@minikube:~$ sudo mkdir /root/.ssh
docker@minikube:~$ sudo chmod 700 /root/.ssh
docker@minikube:~$ sudo cp .ssh/authorized_keys /root/.ssh/authorized_keys
docker@minikube:~$ sudo chmod 600 /root/.ssh/authorized_keys

Note the flags that are needed for the ssh command.

minikube --alsologtostderr ssh --native-ssh=false

Tunnel the containerd socket to the host, from the machine (use the ssh flags noted above, most notably the -p port and root@host):

ssh -nNT -L ./containerd.sock:/run/containerd/containerd.sock ... &

Now you can run commands against this unix socket, tunneled over ssh.

ctr --address ./containerd.sock help

Images in the “k8s.io” namespace are accessible to the Kubernetes cluster.

buildctl instructions

Start the BuildKit daemon, using the containerd backend.

docker@minikube:~$ sudo -b buildkitd --oci-worker=false --containerd-worker=true --containerd-worker-namespace=k8s.io

Make the BuildKit socket accessible to the regular user.

docker@minikube:~$ sudo groupadd buildkit
docker@minikube:~$ sudo chgrp -R buildkit /run/buildkit
docker@minikube:~$ sudo usermod -aG buildkit $USER
docker@minikube:~$ exit

Note the flags that are needed for the ssh command.

minikube --alsologtostderr ssh --native-ssh=false

Tunnel the BuildKit socket to the host, from the machine (use the ssh flags noted above, most notably the -p port and user@host):

ssh -nNT -L ./buildkitd.sock:/run/buildkit/buildkitd.sock ... &

After that, it should now be possible to use buildctl:

buildctl --addr unix://buildkitd.sock build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --output type=image,name=registry.k8s.io/username/imagename:latest

Now you can ‘build’ against the storage inside minikube, which is instantly accessible to the Kubernetes cluster.
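The dockerfile.v0 frontend used above builds from an ordinary Dockerfile in the directory passed as the dockerfile local source. A minimal, hypothetical example (base image and paths are placeholders):

```dockerfile
# Hypothetical Dockerfile for the buildctl invocation above.
FROM alpine:3.19
COPY . /app
WORKDIR /app
CMD ["./run.sh"]
```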


7. Loading directly to in-cluster container runtime

The minikube client will talk directly to the container runtime in the cluster, and run the load commands there - against the same storage.

minikube image load my_image

For more information, see:


8. Building images to in-cluster container runtime

The minikube client will talk directly to the container runtime in the cluster, and run the build commands there - against the same storage.

minikube image build -t my_image .

For more information, see: