K3s beginner

What is K3s and why do we need it?

K3s is a lightweight Kubernetes (k8s) distribution designed to make k8s accessible in resource-constrained environments. Advantages of k3s:

  1. Size and efficiency
  2. Easy to install
  3. Built-in tools (Traefik, local-path storage, metrics-server)

Install

curl -sfL https://get.k3s.io | sh -

Check k3s process status

 systemctl status k3s.service
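
Once the service is active, a quick sanity check (k3s bundles its own kubectl; sudo is needed because the default kubeconfig at /etc/rancher/k3s/k3s.yaml is root-readable only):

sudo k3s kubectl get nodes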

Got this issue:

FATA[0000] starting kubernetes: preparing server: init cluster datastore and https: listen tcp :6443: bind: address already in use

Solutions:

sudo lsof -i :6443 
sudo kill -9 $(sudo lsof -t -i:6443)

sudo kill -9 <PID>
sudo systemctl restart k3s

Issue: Not able to get k3s cluster status.

 kubectl get all -n kube-system

E0616 16:26:56.936188 1761783 memcache.go:265] couldn't get current server API group list: Get "https://192.168.49.2:8443/api?timeout=32s": dial tcp 192.168.49.2:8443: connect: no route to host
E0616 16:27:00.007979 1761783 memcache.go:265] couldn't get current server API group list: Get "https://192.168.49.2:8443/api?timeout=32s": dial tcp 192.168.49.2:8443: connect: no route to host
E0616 16:27:03.080047 1761783 memcache.go:265] couldn't get current server API group list: Get "https://192.168.49.2:8443/api?timeout=32s": dial tcp 192.168.49.2:8443: connect: no route to host

I had installed k8s on this node before, so I think the issue is that I did not copy the k3s config to the .kube folder. Tried that:

# Copy K3s kubeconfig to ~/.kube/config
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

# Change the owner of the config file to the current user
sudo chown $USER ~/.kube/config

 kubectl get all -n kube-system

Now I am able to get the k3s cluster status.

NAME                                          READY   STATUS      RESTARTS   AGE
pod/coredns-6799fbcd5-2s8tn                   1/1     Running     0          46m
pod/local-path-provisioner-6c86858495-sfghq   1/1     Running     0          46m
pod/helm-install-traefik-crd-22tc9            0/1     Completed   0          46m
pod/helm-install-traefik-976dn                0/1     Completed   1          46m
pod/metrics-server-54fd9b65b-g5k5z            1/1     Running     0          46m
pod/svclb-traefik-ee33812d-mvwjp              2/2     Running     0          45m
pod/traefik-7d5f6474df-fjdgf                  1/1     Running     0          45m

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/kube-dns         ClusterIP      10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP       46m
service/metrics-server   ClusterIP      10.43.199.96   <none>        443/TCP                      46m
service/traefik          LoadBalancer   10.43.62.153   28.10.10.62   80:30148/TCP,443:30314/TCP   45m

Check this article to learn more about installing k3s on Ubuntu: https://www.digitalocean.com/community/tutorials/how-to-setup-k3s-kubernetes-cluster-on-ubuntu

Use a local Docker image in K3s

If you have a Docker image on your local machine and you want to use it in a Kubernetes cluster, you can follow these steps:

  1. Tag your image: Tag the Docker image with a version number. For example, if your image is named my-image and you want to tag it as v1, you can use the following command:

     docker tag my-image:latest my-image:v1
    
  2. Load the image into your nodes: If you’re using Minikube, you can use the minikube docker-env command to configure your current shell to use Minikube’s Docker daemon, then use docker save and docker load to move your image. If you’re using K3s, you can load the image directly into the node’s embedded containerd using k3s ctr:

     docker save my-image:v1 | sudo k3s ctr -n k8s.io images import -
    
  3. Use the image in Kubernetes: Now you can use your image in a Kubernetes pod. Here’s an example of a pod specification:

     apiVersion: v1
     kind: Pod
     metadata:
       name: my-pod
     spec:
       containers:
       - name: my-container
         image: my-image:v1
    

Remember to replace my-image:v1 with your actual image name and tag.
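
One detail worth noting (not part of the original steps): if the image only exists on the node and is not in any registry, it can help to set imagePullPolicy explicitly so the kubelet never falls back to pulling from a registry (for example when the tag is :latest). A minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image:v1
    # Never contact a registry; use the image already imported into containerd
    imagePullPolicy: Never

You can also confirm the import landed in containerd’s image store:

sudo k3s ctr -n k8s.io images ls | grep my-image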

Difference between Deployment and Pod

A Pod is a group of one or more containers, with shared storage/network resources, and a specification for how to run the containers. A Pod’s contents are always co-located and co-scheduled, and run in a shared context. A basic Pod definition for running a single nginx container:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:1.14.2
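
Assuming this is saved as nginx-pod.yaml (a hypothetical file name), it can be applied and inspected like this:

kubectl apply -f nginx-pod.yaml
kubectl get pod nginx-pod
kubectl describe pod nginx-pod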

A Deployment is a higher-level concept that manages ReplicaSets, and through them Pods, and provides declarative updates along with many other useful features. A basic Deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Deployments wrap the Pod definition, providing an additional management layer. Pods require manual updates and intervention for deploying and scaling; Deployments enable automatic updates, rollbacks, and scaling.
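
For example, with the nginx-deployment above, scaling and rollbacks are single commands against the Deployment instead of manual edits to individual Pods (the image tag here is just an illustrative newer version):

# Scale out to more replicas
kubectl scale deployment nginx-deployment --replicas=3

# Roll out a new image and watch the rollout
kubectl set image deployment/nginx-deployment nginx-container=nginx:1.16.1
kubectl rollout status deployment/nginx-deployment

# Roll back to the previous revision if something breaks
kubectl rollout undo deployment/nginx-deployment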

Issue: still could not pull the local image

Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  2s    default-scheduler  Successfully assigned default/rocksdb-74cc66c9d6-fhsz5 to common-testbed
  Normal   Pulling    2s    kubelet            Pulling image "docker.io/rocksdb:v1"
  Warning  Failed     0s    kubelet            Failed to pull image "docker.io/rocksdb:v1": failed to pull and unpack image "docker.io/library/rocksdb:v1": failed to resolve reference "docker.io/library/rocksdb:v1": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
  Warning  Failed     0s    kubelet            Error: ErrImagePull

Solution: push the image to Docker Hub (docker.io) and pull it from there. Here’s a basic example of how to push a Docker image to Docker Hub and then use that image in a Kubernetes deployment YAML file.

First, you need to tag your Docker image with the Docker Hub username and repository name:

docker tag local-image:tag yourusername/yourrepository:tag

Then, you can push the Docker image to Docker Hub:

docker push yourusername/yourrepository:tag

After pushing the image to Docker Hub, you can use it in a Kubernetes deployment YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-deployment
spec:
  selector:
    matchLabels:
      app: your-app
  replicas: 3
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: your-app
        image: yourusername/yourrepository:tag
        ports:
        - containerPort: 8080

In my case:

docker tag rocksdb:latest zzt1234/test-rocksdb:v2
docker push zzt1234/test-rocksdb:v2

In this YAML file, replace yourusername/yourrepository:tag with the Docker Hub username, repository name, and tag you used earlier. This will pull the image from Docker Hub and use it for the Kubernetes deployment.

Please replace yourusername, yourrepository, tag, your-deployment, your-app, and 8080 with your actual values. Also, make sure you’re logged in to Docker Hub in the environment where you’re running these commands. You can do this using the docker login command.

Remember to apply the deployment using kubectl apply -f your-deployment.yaml.

Another issue: still cannot start the container successfully. I want to check the container logs but fail to connect to it:

kubectl logs rocksdb-57c5b55686-mnwvr rocksdb
(base) ➜  docker git:(master) ✗ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Mon 2024-06-17 17:03:06 HKT; 5s ago
       Docs: https://kubernetes.io/docs/home/
    Process: 1458057 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
   Main PID: 1458057 (code=exited, status=1/FAILURE)

Not able to check the container logs.

Asked Bing Chat for solutions to this kubelet connection problem:

journalctl -u kubelet

Get this log output

May 12 02:49:40 common-testbed kubelet[3632]: E0512 02:49:40.413488    3632 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s
May 12 02:49:44 common-testbed kubelet[3632]: I0512 02:49:44.813157    3632 kubelet_node_status.go:70] "Attempting to register node" node="common-testbed"
May 12 02:49:44 common-testbed kubelet[3632]: E0512 02:49:44.816687    3632 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes \"common-te
May 12 02:49:47 common-testbed kubelet[3632]: E0512 02:49:47.110624    3632 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get
May 12 02:49:47 common-testbed kubelet[3632]: E0512 02:49:47.416828    3632 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s
May 12 02:49:48 common-testbed kubelet[3632]: E0512 02:49:48.872385    3632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameser
May 12 02:49:50 common-testbed kubelet[3632]: E0512 02:49:50.873045    3632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameser
May 12 02:49:51 common-testbed kubelet[36

So now I decided to restart the kubelet service for k3s, because this kubelet service was started for a previous k8s cluster. I am not sure if there is a domain name issue with it. I don’t know how to solve this problem properly, so I decided to reinstall the kubelet service.

Failed to execute kubectl commands after replacing the k3s.service file content. New (not working):

[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target

[Service]
ExecStart=/usr/local/bin/k3s \
    server \
    --kubelet-arg='address=0.0.0.0' \
    --kubelet-arg='anonymous-auth=false' \
    --kubelet-arg='authentication-token-webhook=true' \
    --kubelet-arg='authorization-mode=Webhook' \
    --kubelet-arg='client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt' \
    --kubelet-arg='tls-cert-file=/var/lib/rancher/k3s/server/tls/kubelet.crt' \
    --kubelet-arg='tls-private-key-file=/var/lib/rancher/k3s/server/tls/kubelet.key'
KillMode=process
Delegate=yes
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target

Old (original):

[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
After=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service 2>/dev/null'
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    server \
    --kubelet-arg='address=0.0.0.0' \
    --kubelet-arg='anonymous-auth=false' \
    --kubelet-arg='authentication-token-webhook=true' \
    --kubelet-arg='authorization-mode=Webhook' \
    --kubelet-arg='client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt' \
    --kubelet-arg='tls-cert-file=/var/lib/rancher/k3s/server/tls/kubelet.crt' \
    --kubelet-arg='tls-private-key-file=/var/lib/rancher/k3s/server/tls/kubelet.key'
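
As an aside, instead of hand-editing the generated unit file (the install script rewrites it on reinstall), k3s can also read server flags from a config file. A sketch, assuming the documented /etc/rancher/k3s/config.yaml location:

# /etc/rancher/k3s/config.yaml
write-kubeconfig-mode: "644"
kubelet-arg:
  - "address=0.0.0.0"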

Got the error that port 6443 is already in use. Tried to kill all processes using port 6443. First, I need to stop the kubelet service because it occupies port 6443:

sudo systemctl stop kubelet

Then check which process is using port 6443.
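
The same lsof check from earlier works here:

sudo lsof -i :6443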

I found that there is obsolete content in the kubelet startup command and its config file:

/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubel
sudo vim /etc/kubernetes/kubelet.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSkFNTXZ6a0h3ZWN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpFeU1qa3dPVEUxTXpGYUZ3MHpNekV5TWpZd09USXdNekZhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURiVWhXYlg1a241M0xLbHVDa2kzME96QUloR3ltVFdEdS9xZjlOUU1JcmtXdnoybUp6M2ptNHAzSFMKbllkY01RKytpL2FxVDN5MDNuSDhycXZ3ZFVRWnVsSS9xTXQreFdOTVg3V0NHR3lqOEhDY0JtdXdaT1A4WjE3dApBNVByNFRrM3VJOFNHMnJoRTBtTithMk5rQnVCZjNFRnR4NllPUnByKytneUtmMmxhQTlqV1A2cytML1plWi94CktYaFRicittclBPbTlXMXBxYnVwamJNNkJJUmROU0dYcWRiL0orcVRabDNQcStZZmtUaHJ6VjU3ZG9rOVBEWUEKMXd1MW8yZ3RMUTBISGcwb2R4V1ZyN2ZKenhNSXU0bWJ5ZGZiUTRPU2g2T2dDNzlNdFEvOGFtQ1hFTlJxK2RPaQo0TUhSQlNGZTh2K0FDMmQwY2NndjRkV1hURm1UQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSaEpoNUkvdys0c1diNjNlcXYycklqOEd2ekhEQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUR2OWU4T2dRNAoybzBJeUg5OFg2aDI0TnIrbEQ3R1o3KzBDUzdYcnZsYWtrV1hwT3RLYmRVWDI5Y3BPZ2p4WHZLV2NqMzhtK0wxCmJwWUlRK3Q0bXh0SmFlZVJlT2FzcXJzZVBQbC9POFMxRTlxN2xGd2dNWWs4amc3SVFjeHU5QzJBNG5OOGgxeXQKdU1qWi9mUXlkUmdMSkhnYm15Vkw5NGtpZndSOWxJM0RZTExwM2dlYTQ4ejBFZ0ZpOERoVXkrZ1llVDk5dzZXaQp0YmtFNHRTZ05jZFNjS2ZFMDNvTnFrUXJ4dkJXc1lOZnlOS0dPclV1YTNVTjhqN2NRcHFOa0plRlhxTUhsT0sxCmtJbi9XOVl4UC9rWGx2UkIvZXFiTC9uVlYxeTg4SC9tWGFsNHlIZ1owckNOSm1SOW9vRHJsVS9aRWtpMWRZeEIKdzdhdjFMOFU2M1BaCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://28.10.10.62:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:k8s-master
  name: system:node:k8s-master@kubernetes
current-context: system:node:k8s-master@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:k8s-master
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem

So what I need to do now is to uninstall kubelet or clean up its config files. I don’t know the exact relationship between kubelet and k3s yet.

I ran this command to remove all existing services and clean up the config files:

sudo kubeadm reset --cleanup-tmp-dir -f --cri-socket=unix:///run/containerd/containerd.sock

[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki /etc/kubernetes/tmp]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Clear the original k8s installation, including kubeadm, kubectl, and kubelet. Uninstalling Kubernetes involves removing various components including kubelet, kubeadm, kubectl, and the .kube configuration directory. Here’s a general guide on how you can do it:

  1. Drain and delete the nodes: If you have a multi-node setup, you need to drain the nodes before deleting them. Replace <node-name> with the name of your node.

     kubectl drain <node-name> --delete-local-data --force --ignore-daemonsets
     kubectl delete node <node-name>

  2. Reset kubeadm: This will revert all the changes made by kubeadm init and kubeadm join.

     sudo kubeadm reset

  3. Remove kubeadm, kubectl, and kubelet:

     sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
     sudo apt-get autoremove

  4. Delete the .kube directory:

     rm -rf ~/.kube

  5. Remove all the Docker containers and images:

     docker rm $(docker ps -a -q)
     docker rmi $(docker images -q)

Got this issue

(base) ➜  docker git:(master) ✗ kubectl describe pod
WARN[0000] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions
error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied

Solution: This error is related to the permissions of the k3s.yaml file. Here are a few ways you can resolve this issue:

  1. Change the permissions of the file:
     sudo chmod 644 /etc/rancher/k3s/k3s.yaml
    

    This command changes the permissions of the file to be readable by all users on your system.

  2. Change the KUBECONFIG environment variable:
     export KUBECONFIG=~/.kube/config
     mkdir ~/.kube 2> /dev/null
     sudo k3s kubectl config view --raw > "$KUBECONFIG"
     chmod 600 "$KUBECONFIG"
    

    This set of commands changes the KUBECONFIG environment variable to point to a different location (~/.kube/config), creates the .kube directory if it doesn’t exist, copies the current configuration into the new location, and then changes the permissions of the new configuration file.

  3. Start the k3s server with modified kube config permissions:
     sudo k3s server --write-kubeconfig-mode 644
    

    This command starts the k3s server with the --write-kubeconfig-mode flag set to 644, which changes the permissions of the kubeconfig file when it is written.
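
Equivalently, the install script can set this up front through the K3S_KUBECONFIG_MODE environment variable (the reinstall command later in these notes uses exactly this):

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -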

Still getting an error that the container is not able to start successfully:

Jun 17 22:07:14 common-testbed k3s[3214677]: I0617 22:07:14.939722 3214677 scope.go:117] "RemoveContainer" containerID="e0c6ed40764db0173c3464f2e4b387a2bc9bd13bbf5e3ab
Jun 17 22:07:14 common-testbed k3s[3214677]: E0617 22:07:14.940136 3214677 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"r
Jun 17 22:07:25 common-testbed k3s[3214677]: I0617 22:07:25.939271 3214677 scope.go:117] "RemoveContainer" containerID="e0c6ed40764db0173c3464f2e4b387a2bc9bd13bbf5e3ab
Jun 17 22:07:25 common-testbed k3s[3214677]: E0617 22:07:25.939711 3214677 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"r
Jun 17 22:07:40 common-testbed k3s[3214677]: I0617 22:07:40.939511 3214677 scope.go:117] "RemoveContainer" containerID="e0c6ed40764db0173c3464f2e4b387a2bc9bd13bbf5e3ab
Jun 17 22:07:40 common-testbed k3s[3214677]: E0617 22:07:40.939941 3214677 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"r

Solution: Remove all existing deployments and pods and restart again

# Delete all deployments in the default namespace
kubectl delete deployments --all

# Delete all pods in the default namespace
kubectl delete pods --all

Still getting the error that the rocksdb container fails to finish successfully, and I cannot check the container logs. Solution: Bing Chat tells me that it’s because of an http_proxy environment variable issue. Do this:

export NO_PROXY=$NO_PROXY,<node-ip-address>:6443
unset http_proxy
unset https_proxy
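
To see which proxy variables are currently set in the shell before changing them:

env | grep -i proxy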

Still cannot call the kubectl logs command to check the container logs.

Uninstall k3s


sudo /usr/local/bin/k3s-uninstall.sh

Then reinstall with custom options (see https://docs.k3s.io/installation/configuration):

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="server" sh -s - --flannel-backend none

Got a metrics-server "endpoint not available" error after reinstalling k3s. Solution: don’t have a solution yet (note that --flannel-backend none disables the default CNI, which may be related).

Try minikube

Follow this doc https://minikube.sigs.k8s.io/docs/start/?arch=%2Flinux%2Fx86-64%2Fstable%2Fbinary+download#Service

And then set no_proxy so that kubectl commands can reach the cluster:

 export no_proxy=$(minikube ip)

Run this command to set an alias:

alias kubectl="minikube kubectl --"

Issue: image pulling fails

  Normal   Scheduled  3m32s                default-scheduler  Successfully assigned default/rocksdb-858cd64b59-hnqtg to minikube
  Warning  Failed     3m22s                kubelet            Failed to pull image "zzt1234/test-rocksdb:v2": Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp 34.226.69.105:443: connect: no route to host

I think the reason is that image pulling does not go through the proxy. Need to find a way to let minikube use the proxy to pull images.

Read this doc to solve the image pulling issue: https://stackoverflow.com/questions/73756734/minikube-start-error-to-pull-new-external-images-you-may-need-to-configure-a-pr

Set the proxy in the Docker daemon settings file as described here: https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
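
Per that Docker doc, one way to configure the host daemon’s proxy is a systemd drop-in, roughly like this (a sketch; the proxy address is the one used later in these notes):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://28.10.10.62:8081"
Environment="HTTPS_PROXY=http://28.10.10.62:8081"
Environment="NO_PROXY=localhost,127.0.0.1"

# Then reload and restart the daemon
sudo systemctl daemon-reload
sudo systemctl restart docker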

minikube start --docker-env HTTP_PROXY=http://127.0.0.1:8081   --docker-env HTTPS_PROXY=http://127.0.0.1:8081

Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  4s    default-scheduler  Successfully assigned default/hello-minikube to minikube
  Normal   Pulling    3s    kubelet            Pulling image "gcr.io/google_containers/echoserver:1.4"
  Warning  Failed     3s    kubelet            Failed to pull image "gcr.io/google_containers/echoserver:1.4": Error response from daemon: Get "https://gcr.io/v2/": proxyconnect tcp: dial tcp 127.0.0.1:8081: connect: connection refused
  Warning  Failed     3s    kubelet            Error: ErrImagePull

Actually, the command above about setting the proxy is not right. I should set the proxy address to the real IP address of the host machine:

 minikube start --docker-env HTTP_PROXY=http://28.10.10.62:8081   --docker-env HTTPS_PROXY=http://28.10.10.62:8081

  Normal   Scheduled  4s    default-scheduler  Successfully assigned default/hello-minikube to minikube
  Normal   Pulling    3s    kubelet            Pulling image "gcr.io/google_containers/echoserver:1.4"
  Warning  Failed     2s    kubelet            Failed to pull image "gcr.io/google_containers/echoserver:1.4": [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of gcr.io/google_containers/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
  Warning  Failed     2s    kubelet            Error: ErrImagePull

Now this issue is finally solved.

So what did I learn from these problem-solving steps? minikube actually sets up a Docker "VM" (a container run by the host’s Docker daemon) to act as the node and pull images. When I run the minikube start command, it starts that Docker VM on the host machine, and when I run kubectl commands, they run against the cluster inside that VM.
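
This is visible from the host: the minikube "node" is just a container managed by the host’s Docker daemon.

docker ps --filter name=minikube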

Finally able to see logs inside the k8s container:

(base) ➜  docker git:(master) ✗ kubectl logs rocksdb-858cd64b59-vk28f rocksdb
RocksDB:    version 8.11.3
Date:       Tue Jun 18 04:21:49 2024
CPU:        40 * Intel(R) Xeon(R) Gold 6230N CPU @ 2.30GHz
CPUCache:   28160 KB
Set seed to 1718684509751930 because --seed was 0
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
Integrated BlobDB: blob cache disabled
Keys:       16 bytes each (+ 0 bytes user-defined timestamp)
Values:     100 bytes each (50 bytes after compression)
Entries:    1000000
Prefix:    0 bytes
Keys per prefix:    0
RawSize:    110.6 MB (estimated)
FileSize:   62.9 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: NoCompression
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1
------------------------------------------------
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
Integrated BlobDB: blob cache disabled
DB path: [/data/]
fillseq      :       2.114 micros/op 473062 ops/sec 2.114 seconds 1000000 operations;   52.3 MB/s

It looks like minikube uses docker-proxy to map host ports to container ports:

 ├─3420670 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
 ├─3534038 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 32778 -container-ip 192.168.58.2 -container-port 32443
 ├─3534054 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 32779 -container-ip 192.168.58.2 -container-port 8443
 ├─3534076 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 32780 -container-ip 192.168.58.2 -container-port 5000
 ├─3534090 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 32781 -container-ip 192.168.58.2 -container-port 2376
 └─3534110 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 32782 -container-ip 192.168.58.2 -container-port 22
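
The same mappings can also be listed directly from Docker (assuming the container is named minikube, the default):

docker port minikube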

kubectl delete deployment and service

# Delete deployment
kubectl delete deployment <deployment-name>

# Delete service
kubectl delete service <service-name>
# List deployments
kubectl get deployments

# List services
kubectl get services

Submit container job to k3s

After pushing the local image to Docker Hub, I am able to get a normal image pulling message:

kubectl describe pod
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/rocksdb-57c5b55686-mnwvr to common-testbed
  Normal  Pulling    7s    kubelet            Pulling image "zzt1234/test-rocksdb:v2"
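
From here the usual follow-up is to watch the Pod start and then read its logs, as done earlier (substitute the actual pod name):

kubectl get pods -w
kubectl logs <pod-name> rocksdb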


