Category: Kubernetes

  • Local K8s cluster: kind

    Use kind to spin up a new cluster

    brew install kind

    Create an HA cluster: 3 control-plane nodes, 3 workers

    $ mkdir kind-cluster
    $ cd kind-cluster
    $ bash -x ha-bootstrap.sh
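    The contents of ha-bootstrap.sh are not shown here; as a rough sketch (assuming it just wraps kind create cluster around a config file, and that the file name ha-config.yaml is made up), an HA layout could look like this:

    # ha-config.yaml - hypothetical config approximating what ha-bootstrap.sh creates;
    # multiple control-plane nodes make kind add an external load balancer automatically
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: control-plane
    - role: control-plane
    - role: worker
    - role: worker
    - role: worker

    $ kind create cluster --name ha-dev --config ha-config.yaml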

    kubectl cluster-info --context kind-ha-dev

    The cluster is not stable after Docker Desktop restarts:

    $ kubectx
    Switched to context "kind-ha-dev".
    $ kubectl get pods
    E0515 09:10:08.836178 38742 memcache.go:265] couldn't get current server API group list: Get "https://127.0.0.1:50016/api?timeout=32s": EOF

    $ kind delete cluster --name ha-dev
    Deleting cluster "dev" …
    Deleted nodes: ["dev-external-load-balancer" "dev-control-plane3" "dev-control-plane2" "dev-worker" "dev-worker2" "dev-worker3" "dev-control-plane"]

    Create 1 control-plane, 1 node cluster

    $ bash -x single-bootstrap.sh
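    single-bootstrap.sh is likewise not shown; with no config file, kind defaults to a single control-plane node, so it presumably boils down to something like:

    $ kind create cluster --name dev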

    Get clusters

    $ kind get clusters
    dev
    ha-dev

    source code: https://github.com/jpuyy/local-k8s-gitops/tree/main/kind-cluster

  • K8s lifecycle preStop for nginx pod

    Add a preStop hook so the nginx pod can finish its in-flight requests while no new connections are routed to it:

    lifecycle:
      preStop:
        exec:
          command:
          - /bin/sh
          - -c
          - curl test.xxx.com/prestop-sleep --resolve test.xxx.com:80:127.0.0.1; sleep 60; /wait-shutdown
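
    For context, a minimal sketch of where this hook sits in a pod spec (names and values here are assumptions, and the hook is simplified to just the sleep); terminationGracePeriodSeconds must be longer than the 60-second sleep, or the kubelet kills the container before the hook finishes:

    # hypothetical nginx container spec showing the preStop hook in place
    spec:
      terminationGracePeriodSeconds: 90   # must exceed the 60s sleep in the hook
      containers:
      - name: nginx
        image: nginx:1.25
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "sleep 60"]   # simplified; the real hook is shown above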
    
  • Local K8s cluster: minikube

    Learn minikube

    Intel Chips: Mac mini 2018

    Apple Chips: 13-inch, M2, 2022 MacBook Pro

    Known drivers: https://minikube.sigs.k8s.io/docs/drivers/

    I use Docker Desktop, but do not enable its Kubernetes feature.

    https://minikube.sigs.k8s.io/docs/start/

    Drivers: Docker - VM + Container (preferred)
    https://minikube.sigs.k8s.io/docs/drivers/docker/

    Turn off the Docker Desktop => Kubernetes feature.

    ```
    # remove minikube
    $ minikube delete
    💀 Removed all traces of the "minikube" cluster.
    $ minikube config set driver docker
    # check
    $ minikube config get driver
    docker
    $ minikube start
    😄 minikube v1.24.0 on Darwin 12.4
    ✨ Using the docker driver based on user configuration
    👍 Starting control plane node minikube in cluster minikube
    🚜 Pulling base image …
    > gcr.io/k8s-minikube/kicbase: 355.78 MiB / 355.78 MiB 100.00% 1.91 MiB p/
    🔥 Creating docker container (CPUs=2, Memory=7911MB) …
    🐳 Preparing Kubernetes v1.22.3 on Docker 20.10.8 …
    ▪ Generating certificates and keys …
    ▪ Booting up control plane …
    ▪ Configuring RBAC rules …
    🔎 Verifying Kubernetes components…
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    🌟 Enabled addons: storage-provisioner, default-storageclass
    🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
    $ minikube status
    ```

    Start 3 nodes:
    $ minikube start --nodes=3

    ❗ The cluster minikube already exists which means the --nodes parameter will be ignored. Use "minikube node add" to add nodes to an existing cluster.

    minikube node add
    😄 Adding node m02 to cluster minikube
    ❗ Cluster was created without any CNI, adding a node to it might cause broken networking.
    👍 Starting worker node minikube-m02 in cluster minikube
    🚜 Pulling base image …
    🔥 Creating docker container (CPUs=2, Memory=2200MB)

    kubectl get nodes
    NAME           STATUS   ROLES           AGE   VERSION
    minikube       Ready    control-plane   20d   v1.26.3
    minikube-m02   Ready    <none>          17s   v1.26.3

    https://minikube.sigs.k8s.io/docs/tutorials/multi_node/
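
    To avoid the "Cluster was created without any CNI" warning above, a fresh multi-node cluster can be created in one go (a sketch; flannel is just an example, minikube accepts other --cni values too):

    $ minikube delete
    $ minikube start --nodes=3 --cni=flannel
    $ kubectl get nodes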

    $ minikube docker-env

    ❌ Exiting due to ENV_MULTINODE_CONFLICT: The docker-env command is incompatible with multi-node clusters. Use the 'registry' add-on: https://minikube.sigs.k8s.io/docs/handbook/registry/

    Remove the extra nodes, then run minikube docker-env again:

    $ minikube docker-env
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://127.0.0.1:62214"
    export DOCKER_CERT_PATH="/Users/jpuyy/.minikube/certs"
    export MINIKUBE_ACTIVE_DOCKERD="minikube"
    # To point your shell to minikube's docker-daemon, run:
    # eval $(minikube -p minikube docker-env)
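
    A rough sketch of the registry add-on route that the error message points to (the image name myapp:dev is made up); the add-on runs a registry inside the cluster that can be reached via a port-forward:

    # enable the in-cluster registry add-on
    $ minikube addons enable registry
    # expose it locally (the kube-system/registry service listens on port 80)
    $ kubectl port-forward --namespace kube-system service/registry 5000:80
    # then, from another shell, tag and push a hypothetical image
    $ docker tag myapp:dev localhost:5000/myapp:dev
    $ docker push localhost:5000/myapp:dev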


    Dev machine: Debian
    Linux
    Drivers:
    * Docker - container-based (preferred)
    * KVM2 - VM-based (preferred)
    ❌ Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
    Workaround:

    minikube start --force --driver=docker
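
    --force works, but the cleaner fix (assuming you can create a non-root user on the box) is to run minikube as a user in the docker group:

    # create a regular user and let it talk to the Docker daemon
    useradd -m -s /bin/bash minikube-user
    usermod -aG docker minikube-user
    # run minikube as that user instead of root
    su - minikube-user -c "minikube start --driver=docker"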


    $ kubectl config get-clusters

    minikube

    $ kubectl config get-contexts

    CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
    *         minikube   minikube   minikube   default

    $ kubectl get nodes
    NAME       STATUS   ROLES                  AGE   VERSION
    minikube   Ready    control-plane,master   76m   v1.22.3

    $ kubectl get nodes -o wide   # which node is the master?
    NAME       STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
    minikube   Ready    control-plane,master   96m   v1.22.3   192.168.49.2   <none>        Ubuntu 20.04.2 LTS   5.10.104-linuxkit   docker://20.10.8
    $ docker ps

    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    1bdea8493ce7 gcr.io/k8s-minikube/kicbase:v0.0.28 “/usr/local/bin/entr…” 2 hours ago Up 2 hours 127.0.0.1:50116->22/tcp, 127.0.0.1:50112->2376/tcp, 127.0.0.1:50114->5000/tcp, 127.0.0.1:50115->8443/tcp, 127.0.0.1:50113->32443/tcp minikube

    $ minikube ip
    192.168.49.2

    Go inside the minikube docker ( docker in docker )
    $ docker exec -it minikube bash
    $ docker ps

    $ minikube docker-env
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://127.0.0.1:63230"
    export DOCKER_CERT_PATH="/Users/jpuyy/.minikube/certs"
    export MINIKUBE_ACTIVE_DOCKERD="minikube"

    # To point your shell to minikube's docker-daemon, run:
    # eval $(minikube -p minikube docker-env)
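
    Typical use of docker-env is to build images straight into minikube's Docker daemon so pods can use them without a registry (the image and pod names below are hypothetical):

    $ eval $(minikube -p minikube docker-env)
    $ docker build -t hello-local:dev .
    # reference the locally built image with imagePullPolicy Never so the kubelet does not try to pull it
    $ kubectl run hello-local --image=hello-local:dev --image-pull-policy=Never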

    minikube service prometheus -n istio-system --url
    😿 service istio-system/prometheus has no node port
    🏃 Starting tunnel for service prometheus.
    |--------------|------------|-------------|------------------------|
    | NAMESPACE    | NAME       | TARGET PORT | URL                    |
    |--------------|------------|-------------|------------------------|
    | istio-system | prometheus |             | http://127.0.0.1:64657 |
    |--------------|------------|-------------|------------------------|
    http://127.0.0.1:64657
    ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

    $ kubectl get svc -A | grep LoadBalancer

    istio-ingressgateway

    $ minikube tunnel
    ❗ The service/ingress istio-ingressgateway requires privileged ports to be exposed: [80 443]
    🔑 sudo permission will be asked for it.
    🏃 Starting tunnel for service istio-ingressgateway.
    Password:
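
    While the tunnel is running, the LoadBalancer service gets an external IP (127.0.0.1 on the Docker driver), so something like this should work in another shell (the Host header value is just an assumed gateway hostname):

    $ kubectl get svc istio-ingressgateway -n istio-system
    # EXTERNAL-IP should now show 127.0.0.1 instead of <pending>
    $ curl -H "Host: test.xxx.com" http://127.0.0.1/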

    Update:
    2023-04-11

    ▪ You are trying to run amd64 binary on M1 system. Please consider running darwin/arm64 binary instead (Download at https://github.com/kubernetes/minikube/releases/download/v1.30.1/minikube-darwin-arm64.)

    # download the arm64 binary from the URL above, then install it
    curl -Lo /tmp/minikube-darwin-arm64 https://github.com/kubernetes/minikube/releases/download/v1.30.1/minikube-darwin-arm64
    mv /tmp/minikube-darwin-arm64 /usr/local/bin/minikube
    chmod +x /usr/local/bin/minikube
    file /usr/local/bin/minikube
    /usr/local/bin/minikube: Mach-O 64-bit executable arm64
    minikube version
    minikube version: v1.30.1
    commit: 08896fd1dc362c097c925146c4a0d0dac715ace0

    Install the Apple silicon build of Docker Desktop
    https://www.docker.com/

  • letsencrypt and cert-manager in k8s

    letsencrypt

    Getting started docs
    https://letsencrypt.org/getting-started/

    ACME stands for Automatic Certificate Management Environment
    https://datatracker.ietf.org/doc/html/rfc8555

    Issuance service availability status
    https://letsencrypt.status.io/

    Certificate checking tool
    https://letsdebug.net/

    certbot manual certificate renewal: not recommended, but useful for understanding how the process works
    We don’t recommend this option because it is time-consuming and you will need to repeat it several times per year as your certificate expires.
    https://certbot.eff.org/docs/using.html#manual

    cert-manager in k8s

    kubectl get Issuer,ClusterIssuers -A                                     
    NAMESPACE   NAME                                                   READY   AGE
                clusterissuer.cert-manager.io/letsencrypt-production   True    113d
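
    For reference, the letsencrypt-production ClusterIssuer above is roughly a resource along these lines (the email address and ingress class are placeholders, not taken from the actual cluster):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-production
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: you@example.com            # placeholder
        privateKeySecretRef:
          name: letsencrypt-production
        solvers:
        - http01:
            ingress:
              class: nginx                # assumes the nginx ingress controller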
    

    Check orders

    kubectl get order -A
    

    Check challenges

    kubectl get challenge -A
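
    Orders and Challenges are created automatically when a certificate is requested, for example via an Ingress annotation (the host, service, and secret names below are made up):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: test
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-production
    spec:
      tls:
      - hosts:
        - test.xxx.com
        secretName: test-xxx-com-tls      # cert-manager stores the issued cert here
      rules:
      - host: test.xxx.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80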
    
  • kubernetes nginx ingress

    The controller is configured via a ConfigMap:
    kube-system/nginx-ingress-controller
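
    A sketch of tuning that ConfigMap (the keys are standard ingress-nginx options; the values here are just examples, not what this cluster actually uses):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-ingress-controller
      namespace: kube-system
    data:
      proxy-body-size: "50m"          # raise the default client body size limit
      use-forwarded-headers: "true"   # trust X-Forwarded-* from an upstream LB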

  • kubernetes kill pod fail

      Warning  FailedKillPod  34s (x396 over 26h)  kubelet  error killing pod: [failed to "KillContainer" for "fluentd" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded", failed to "KillPodSandbox" for "bd1bd397-b8fe-4118-8149-869e18ac4034" with KillPodSandboxError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
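
    When the runtime keeps timing out like this, the usual escalation (at the risk of leaving the container running on the node) is to force-delete the pod and then inspect the container runtime on that node; the pod and namespace names below are placeholders:

    # remove the pod object without waiting for the runtime
    $ kubectl delete pod <fluentd-pod> -n <namespace> --grace-period=0 --force
    # then check the kubelet and container runtime on the node
    $ journalctl -u kubelet --since "1 hour ago"
    $ docker ps -a | grep fluentd        # or: crictl ps -a, depending on the runtime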