Kubernetes
Installation
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
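To confirm the packages are installed and pinned (output versions will be whatever apt resolved):

# held packages should list kubelet, kubeadm, kubectl
apt-mark showhold
kubectl version --client
kubeadm version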
Command cheat sheet
- Kubernetes Crash Course for Absolute Beginners
- kubectl cheat sheet
- Docker Containers and Kubernetes Fundamentals – Full Hands-On Course
minikube start --mount-string="$HOME/go/src/github.com/nginx:/data" --mount --driver=docker
minikube ip
kubectl cluster-info

# Deployments and pods
kubectl create deployment nginx-depl --image=nginx
kubectl get pod
kubectl get replicaset
kubectl get pod <POD-ID> -o wide
kubectl get deployment nginx-depl
kubectl get deployment nginx-depl -o yaml
kubectl edit deployment nginx-depl
kubectl logs <POD-ID>
kubectl describe pod <POD-ID>
kubectl exec -it <POD-ID> -- /bin/bash
kubectl apply -f <CONFIG.YML>
kubectl get deployment nginx-deployment -o yaml

# Ingress: https://kubernetes.github.io/ingress-nginx/troubleshooting/
kubectl get ingress -n mongo-admin
kubectl describe ingress mongo-express-ingress -n mongo-admin

# Restarts and cleanup
kubectl rollout restart deployment <deployment_name> -n <namespace>
kubectl delete --all pods --namespace=kubernetes-dashboard
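As a quick smoke test of the nginx-depl deployment above, one option is to expose it and forward a local port (service type and port numbers here are illustrative):

# expose the deployment as a NodePort service
kubectl expose deployment nginx-depl --port=80 --type=NodePort
kubectl get svc nginx-depl
# forward local port 8080 to the pod's port 80 and probe it
kubectl port-forward deployment/nginx-depl 8080:80
curl http://localhost:8080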
System reset
minikube stop
minikube delete
rm -rf ~/.minikube
rm -rf ~/.kube
docker kill $(docker ps -q)
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)
docker system prune
Cluster setup
sudo su
swapoff -a
kubeadm reset
kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo systemctl start docker
sudo swapoff -a
sudo mv /etc/containerd/config.toml /etc/containerd/config222.toml
sudo kubeadm reset --force
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Check that /run/flannel/subnet.env contains:

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

Deploy Flannel and remove the taints that block scheduling on a single-node cluster:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node.kubernetes.io/disk-pressure-
kubectl taint nodes --all node.kubernetes.io/not-ready-
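To add worker nodes later, the join command can be regenerated on the control plane; then check that the node and the Flannel DaemonSet come up:

# prints a ready-to-run 'kubeadm join ...' line for workers
kubeadm token create --print-join-command
kubectl get nodes -o wide
kubectl get pods -n kube-flannel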
Translate a Docker Compose File to Kubernetes Resources
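kompose is the usual tool for this; a minimal sketch, assuming a docker-compose.yml in the current directory (the generated file names below are hypothetical, kompose derives them from the compose service names):

# convert compose services into Deployment/Service manifests
kompose convert -f docker-compose.yml
# then apply the generated files
kubectl apply -f nginx-deployment.yaml,nginx-service.yaml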
Helm
https://helm.sh/docs/intro/install/
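A minimal sketch of installing Helm via the official script and deploying a chart; the release name and the bitnami/nginx chart are illustrative:

# install Helm 3 with the official script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# add a chart repository and install a release from it
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx
helm list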
Clear space / Remove unused images
crictl rmi --prune
docker image prune -a
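To see what is actually taking space before and after pruning:

docker system df
crictl images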
Ansible
- https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
- https://habr.com/ru/companies/slurm/articles/810933/
Container Network Interface
- https://daily.dev/blog/kubernetes-cni-comparison-flannel-vs-calico-vs-canal
- https://kubevious.io/blog/post/comparing-kubernetes-container-network-interface-cni-providers
Ingress
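A minimal sketch, assuming the ingress-nginx controller is installed; the host, service name, and port below are hypothetical placeholders:

# route example.local to a ClusterIP service via ingress-nginx
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
EOF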
Complete Reset
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /etc/cni/net.d/
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /run/flannel
rm -rf /etc/cni/
ip link set cni0 down
ip link delete cni0
brctl delbr cni0
ifconfig flannel.1 down
systemctl start docker
Troubleshooting
Check status
kubectl version
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.64:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
kubectl cluster-info
Kubernetes control plane is running at https://192.168.0.64:6443
CoreDNS is running at https://192.168.0.64:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
kubectl describe nodes
kubectl get pod -A
NAMESPACE      NAME                                       READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-tt45j                      1/1     Running   5 (65m ago)   80m
kube-system    coredns-76f75df574-rzth2                   1/1     Running   2 (65m ago)   87m
kube-system    coredns-76f75df574-xvdl6                   1/1     Running   2 (65m ago)   87m
kube-system    etcd-x570-aorus-ultra                      1/1     Running   2 (65m ago)   87m
kube-system    kube-apiserver-x570-aorus-ultra            1/1     Running   2 (65m ago)   87m
kube-system    kube-controller-manager-x570-aorus-ultra   1/1     Running   9 (65m ago)   87m
kube-system    kube-proxy-lm95s                           1/1     Running   2 (65m ago)   87m
kube-system    kube-scheduler-x570-aorus-ultra            1/1     Running   9 (65m ago)   87m
CoreDNS does not run
Check that the node is Ready:
kubectl get nodes
If it is not, check its status and verify that Taints: is <none>:
kubectl describe node x570-aorus-ultra
If the taints are not <none>, edit the kubelet config:

nano /var/lib/kubelet/config.yaml

and append at the end:
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "2%"
  nodefs.inodesFree: "2%"
  imagefs.available: "2%"
Save, then restart the container runtime and kubelet:
systemctl restart containerd
systemctl restart kubelet
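Then check that kubelet is healthy and the disk-pressure taint is gone (node name as in the example above):

systemctl status kubelet
kubectl describe node x570-aorus-ultra | grep -i taints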
Flannel does not run
The Flannel pod logs show:
Failed to check br_netfilter: stat /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory
modprobe br_netfilter
echo "br_netfilter" >> /etc/modules
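Verify that the module is loaded and the sysctl the error complained about is now present:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables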