kubernetes

Naming plan

  • Decide on a ClusterName, e.g.: prod, prod1, dev, p, d, ...
  • HA domain for the apiserver: k8s<ClusterName>ce, e.g. k8sprod1ce
  • Plan a static IP for this domain
  • Control-node (cNode) hostnames: k8s<ClusterName>c<number>, e.g. k8sprod1c1 k8sprod1c2 k8sprod1c3
  • Worker-node (wNode) hostnames: k8s<ClusterName>w<letter>, e.g. k8sprod1wa k8sprod1wb k8sprod1wc
  • Kubernetes uses the machine's hostname as the node name

Setup steps

  • In DNS, map the apiserver HA domain to its planned static IP
  • cNode1 setup: k8s_init_controlnode k8sprod1ce k8sprod1c1
    • Bootstraps the first k8s control node; a dev environment can run with a single cNode
    • k8s_init_controlnode resolves the static IP of k8sprod1ce, sets the hostname, and sets up haproxy and keepalived
    • It then runs kubeadm init to bootstrap the cluster for the first time
    • It automatically runs (via echodo): kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    • It writes ~/join.txt, which records the join commands for the other nodes (cNode or wNode); see the sketch after this list
    • After it finishes, run the reload command; the shell refreshes its environment so kubectl works in the current session
    • Run kubectl get nodes to confirm the first node is Ready
  • cNode2,3 setup: k8s_init_controlnode k8sprod1ce k8sprod1c2
    • A dev environment can skip cNode2 and cNode3
    • This only prepares the hostname, haproxy, and keepalived
    • Then paste and run the cNode join command recorded in ~/join.txt on the initial node
  • wNode join
    • First run sethostname to configure the hostname correctly
    • Paste the wNode join command to join the cluster
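
For reference, the commands recorded in ~/join.txt are standard kubeadm join output; the token, hash, and certificate key below are placeholders, not real values:

# control-plane join (cNode)
kubeadm join k8sprod1ce:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>
# worker join (wNode)
kubeadm join k8sprod1ce:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>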

Notes

Terms to remember

  • taint, cordon/uncordon
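
Quick reminders of what these do (node name is a placeholder):

kubectl cordon node1                              # mark node unschedulable
kubectl drain node1 --ignore-daemonsets           # evict pods (implies cordon)
kubectl uncordon node1                            # make node schedulable again
kubectl taint nodes node1 key1=value1:NoSchedule  # add a taint
kubectl taint nodes node1 key1=value1:NoSchedule- # remove that taint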

containerd arch layout

kubernetes port usage

| port  | module   | desc                                            |
| ----- | -------- | ----------------------------------------------- |
| 10250 | kubelet  | kubelet API                                     |
| 10248 | kubelet  | health check: healthz                           |
| 10255 | kubelet  | kubelet read-only info API, usable without auth |
| 4194  | cAdvisor |                                                 |
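
A quick way to check which of these are listening on a node (assuming ss from iproute2 is available):

ss -tlnp | grep -E '(10250|10248|10255|4194)'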

kubernetes extensions

kubernetes system operations

  • check the '## memory' section in NotesLinux.md

  • container images list: kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{range .status.images[*]}{"\t"}{.names[0]}{"\n"}{end}{end}'

  • check system memory: grep -E '^(MemFree:|Cached:|Shmem:|Buffers:|SReclaimable:|Slab:)' /proc/meminfo

  • root cgroup check: grep -E '^(anon |file |inactive_file |active_file |slab )' /sys/fs/cgroup/memory.stat #cgroup v2 field names; on cgroup v1 use /sys/fs/cgroup/memory/memory.stat with fields cache/rss

  • cgroup memory check

  • alias cgcheck='function cgcheck_fn() (cd /sys/fs/cgroup/$(cat /proc/$(pgrep ${1})/cgroup | sed "s@^[^/]*/@@") && echo pid:$(pgrep ${1}) && pwd && echo "can:cat memory.stat,exit,ls,..." && bash ); cgcheck_fn'
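
  • usage, assuming a single process matches the name: cgcheck kubelet #drops into a shell inside kubelet's cgroup dir; then cat memory.stat, ls, exit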

  • cgcreate and use by docker
    • cgcreate -g memory:test-docker-memory
    • docker run --cgroup-parent=/test-docker-memory --net=none -v /root/test_mem:/test -idt --name test --privileged csighub.tencentyun.com/admin/tlinux2.2-bridge-tcloud-underlay:latest
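
  • verify the container landed under the new cgroup (cgroup v1 path, an assumption): ls /sys/fs/cgroup/memory/test-docker-memory/ #the container ID appears as a child cgroup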

  • systemd

    • systemctl == systemctl list-units
    • systemctl -t slice
    • systemctl -t service,masked
    • systemctl list-unit-files
    • systemd-cgls
    • systemd-cgtop
  • control-plane-node-communication

    • A default kubeadm install already sets up the kubelet node to trust the apiserver's client via certificates, which is the part that document says needs configuring #in newer deployments, anonymous-auth=false and client-ca-file are, as recommended, placed in /var/lib/kubelet/config.yaml
    • The following configuration makes the apiserver trust the kubelet nodes via certificates (see the openssl sketch after this list):
    • Generate a single shared /etc/kubernetes/pki/kubelet-ca.{crt,key}
    • apiserver --kubelet-certificate-authority /etc/kubernetes/pki/kubelet-ca.crt
    • On every node, replace the original self-signed /var/lib/kubelet/pki/kubelet.{crt,key} with a new crt/key issued by /etc/kubernetes/pki/kubelet-ca.{crt,key}
  • free memory caches: echo 3 > /proc/sys/vm/drop_caches && swapoff -a && swapon -a && printf '\n%s\n' 'Ram-cache and Swap Cleared'
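
A minimal sketch of issuing a kubelet serving cert from the shared kubelet-ca described above; the node name and IP SANs are placeholders:

# generate the shared CA once
openssl req -x509 -new -nodes -days 3650 -subj "/CN=kubelet-ca" \
  -keyout /etc/kubernetes/pki/kubelet-ca.key -out /etc/kubernetes/pki/kubelet-ca.crt
# on each node: key + CSR, then sign with the node's SANs (placeholder name/IP)
openssl req -new -nodes -subj "/CN=system:node:k8sprod1wa" \
  -keyout kubelet.key -out kubelet.csr
openssl x509 -req -days 365 -in kubelet.csr \
  -CA /etc/kubernetes/pki/kubelet-ca.crt -CAkey /etc/kubernetes/pki/kubelet-ca.key \
  -CAcreateserial -out kubelet.crt \
  -extfile <(echo "subjectAltName = DNS:k8sprod1wa, IP:10.0.0.11")
# then replace /var/lib/kubelet/pki/kubelet.{crt,key} and restart kubelet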

certificate how-to

kubeadm certs check-expiration
kubeadm certs renew all
#verify cert chain
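# note: -CAfile takes only the first path; bash brace expansion passes the rest as certs to verify (the CA is listed twice so it verifies itself)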
openssl verify -CAfile /etc/kubernetes/pki/etcd/{ca.crt,ca.crt,healthcheck-client.crt,peer.crt,server.crt} #etcd-ca
openssl verify -CAfile /etc/kubernetes/pki/{front-proxy-ca.crt,front-proxy-ca.crt,front-proxy-client.crt} #front-proxy-ca
openssl verify -CAfile /etc/kubernetes/pki/{ca.crt,ca.crt,apiserver.crt,apiserver-kubelet-client.crt}
openssl verify -CAfile /var/lib/kubelet/pki/kubelet.crt{,}

control plane certificates (ca.{crt,key} etcd-ca.{crt,key} front-proxy-ca.{crt,key} sa.key sa.pub)

  • uploaded to the kube secret "kubeadm-certs"
  • the upload is temporary and expires in 2 hours; it is used when adding more control-plane nodes
  • kubeadm init phase upload-certs --upload-certs --certificate-key $(kubeadm certs certificate-key)
  • kubeadm init phase upload-certs --upload-certs #upload control-plane certificates to the kubeadm-certs Secret and print the certificate-key
  • --certificate-key #user-supplied encryption key for the uploaded certs; if not given, a new one is auto-generated and printed
  • check it with: kubectl -n kube-system get secret kubeadm-certs -o yaml
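  • the printed certificate-key is then consumed when joining another control-plane node (placeholder token/hash): kubeadm join k8sprod1ce:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <key-from-upload-certs>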

#regenerate apiserver.crt with new certSANs item

#mv /etc/kubernetes/pki/apiserver.{crt,key} /tmp/
kubeadm init phase certs apiserver #this ok for most case
kubeadm init phase certs apiserver --config <(kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}'| sed 's@^apiServer:@&\n certSANs:\n - "ctrlpe.local"@')

#check certSANs in apiserver.crt
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text | grep -A 1 "Subject Alternative Name"
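
The apiserver runs as a static pod, so one common way to make it pick up the regenerated cert is to bounce its manifest (the sleep duration is an assumption):

mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/ && sleep 20 \
  && mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/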

kube-apiserver url access kubelet

* kube-apiserver --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
* --kubelet-certificate-authority=
* why the kubelet serving cert needs certSANs (a host list): signing without an extfile yields a cert with no SANs; pass one to add them
openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem
echo "subjectAltName = IP:worker_node_ip" > hostSANs_file.cnf
openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile hostSANs_file.cnf

kubernetes debug utils

* journalctl -xe --unit kubelet
* kubectl describe nodes
* kubectl get nodes
* kubectl -n kube-public get cm cluster-info -o yaml
* kubectl -n kube-system get cm kubeadm-config -o yaml
* kubectl cluster-info dump
* find IPs defined in kubernetes: grep -r '\.[0-9]\{1,3\}\/[0-9]\{1,2\}' /var/lib/kubelet/ /etc/kubernetes/
* kubectl logs podName #or read the pod log file directly: tail -f /var/log/pods/kube-system_coredns-5dd5756b68-5wp4j_456bf805-a764-4874-b168-3c832d21241b/coredns/161.log

install k8s deps

* systemd service file ref: github:kubespray: *.service.j2
* let the control plane accept pod workloads (act like a worker node): kubectl taint nodes nodeName1 node-role.kubernetes.io/control-plane:NoSchedule-

install k8s

install k8s network plan

* the pod IP range must differ from the service IP range
* --pod-network-cidr @ kubeadm init --help
* podSubnet @/etc/kubernetes/kubeadm-config.yaml
* cluster-cidr: @/etc/kubernetes/kubeadm-config.yaml
* clusterCIDR @/etc/kubernetes/kubeadm-config.yaml
* --service-cidr @ kubeadm init --help #(default "10.96.0.0/12")
* serviceSubnet @/etc/kubernetes/kubeadm-config.yaml
* serviceSubnet @kubectl -n kube-system get cm kubeadm-config -o yaml
* --service-cluster-ip-range @/etc/kubernetes/manifests/kube-apiserver.yaml
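
For example, with flannel's default pod CIDR (matching the kube-flannel manifest applied above), the options line up as:

kubeadm init --pod-network-cidr 10.244.0.0/16 --service-cidr 10.96.0.0/12

Install kubelet/kubeadm/kubectl from pkgs.k8s.io: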
mkdir -p -m 755 /etc/apt/keyrings
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
systemctl stop kubelet

config file list (ls /var/lib/kubelet/ /etc/kubernetes/manifests/ /etc/kubernetes /etc/cni/net.d/)

/etc/kubernetes:
admin.conf calico-crb.yml controller-manager.conf kubeadm-config.yaml manifests
calico-ipamconfig.yml kubeadm-images.yaml kubelet.env node-crb.yml ssl
calico-config.yml calico-node-sa.yml k8s-cluster-critical-pc.yml kubelet-config.yaml kubernetes-services-endpoint.yml pki tmp
calico-cr.yml calico-node.yml kdd-crds.yml kubelet.conf kubescheduler-config.yaml scheduler.conf

/etc/kubernetes/manifests/:
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml

/var/lib/kubelet/:
config.yaml cpu_manager_state device-plugins kubeadm-flags.env memory_manager_state pki plugins plugins_registry pod-resources pods

/etc/cni/net.d/:
10-calico.conflist calico-kubeconfig calico.conflist.template nerdctl-bridge.conflist nerdctl-dkfile_default.conflist nerdctl-prometheus-pushgateway_default.conflist nerdctl-rtorrent_default.conflist
cat /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "promiscMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{
            "subnet": "10.244.0.0/16"
          }],
          [{
            "subnet": "2001:4860:4860::/64"
          }]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
  • grep -r 'kind: .' /etc/kubernetes/ | sort | uniq
/etc/kubernetes/admin.conf:kind: Config
/etc/kubernetes/calico-config.yml:kind: ConfigMap
/etc/kubernetes/calico-cr.yml:kind: ClusterRole
/etc/kubernetes/calico-crb.yml: kind: ClusterRole
/etc/kubernetes/calico-crb.yml:- kind: ServiceAccount
/etc/kubernetes/calico-crb.yml:kind: ClusterRoleBinding
/etc/kubernetes/calico-ipamconfig.yml:kind: IPAMConfig
/etc/kubernetes/calico-node-sa.yml:kind: ServiceAccount
/etc/kubernetes/calico-node.yml:kind: DaemonSet
/etc/kubernetes/controller-manager.conf:kind: Config
/etc/kubernetes/k8s-cluster-critical-pc.yml:kind: PriorityClass
/etc/kubernetes/kdd-crds.yml: kind: ""
/etc/kubernetes/kdd-crds.yml: kind: BGPConfiguration
/etc/kubernetes/kdd-crds.yml: kind: BGPPeer
/etc/kubernetes/kdd-crds.yml: kind: BlockAffinity
/etc/kubernetes/kdd-crds.yml: kind: CalicoNodeStatus
/etc/kubernetes/kdd-crds.yml: kind: ClusterInformation
/etc/kubernetes/kdd-crds.yml: kind: FelixConfiguration
/etc/kubernetes/kdd-crds.yml: kind: GlobalNetworkPolicy
/etc/kubernetes/kdd-crds.yml: kind: GlobalNetworkSet
/etc/kubernetes/kdd-crds.yml: kind: HostEndpoint
/etc/kubernetes/kdd-crds.yml: kind: IPAMBlock
/etc/kubernetes/kdd-crds.yml: kind: IPAMConfig
/etc/kubernetes/kdd-crds.yml: kind: IPAMHandle
/etc/kubernetes/kdd-crds.yml: kind: IPPool
/etc/kubernetes/kdd-crds.yml: kind: IPReservation
/etc/kubernetes/kdd-crds.yml: kind: KubeControllersConfiguration
/etc/kubernetes/kdd-crds.yml: kind: NetworkPolicy
/etc/kubernetes/kdd-crds.yml: kind: NetworkSet
/etc/kubernetes/kdd-crds.yml:kind: CustomResourceDefinition
/etc/kubernetes/kubeadm-config.yaml:kind: ClusterConfiguration
/etc/kubernetes/kubeadm-config.yaml:kind: InitConfiguration
/etc/kubernetes/kubeadm-config.yaml:kind: KubeProxyConfiguration
/etc/kubernetes/kubeadm-config.yaml:kind: KubeletConfiguration
/etc/kubernetes/kubeadm-images.yaml:kind: ClusterConfiguration
/etc/kubernetes/kubeadm-images.yaml:kind: InitConfiguration
/etc/kubernetes/kubelet-config.yaml:kind: KubeletConfiguration
/etc/kubernetes/kubelet.conf:kind: Config
/etc/kubernetes/kubernetes-services-endpoint.yml:kind: ConfigMap
/etc/kubernetes/kubescheduler-config.yaml:kind: KubeSchedulerConfiguration
/etc/kubernetes/manifests/etcd.yaml:kind: Pod
/etc/kubernetes/manifests/kube-apiserver.yaml:kind: Pod
/etc/kubernetes/manifests/kube-controller-manager.yaml:kind: Pod
/etc/kubernetes/manifests/kube-scheduler.yaml:kind: Pod
/etc/kubernetes/node-crb.yml: kind: ClusterRole
/etc/kubernetes/node-crb.yml: kind: Group
/etc/kubernetes/node-crb.yml:kind: ClusterRoleBinding
/etc/kubernetes/scheduler.conf:kind: Config
  • kubectl get cm -n kube-system
coredns
extension-apiserver-authentication
kube-apiserver-legacy-service-account-token-tracking
kube-proxy #KubeProxyConfiguration
kube-root-ca.crt
kubeadm-config #ClusterConfiguration
kubelet-config #KubeletConfiguration

kubectl

  • kubectl [get|describe|edit|set resources] (pods|rc|services|cm|deployments|rs|runtimeclass|lease) -A

kubelet reconfig

  • kubectl edit cm -n kube-system kubelet-config
  • update /var/lib/kubelet/config.yaml on each node
  • systemctl restart kubelet
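
To confirm a node's kubelet picked up the change, query its configz endpoint through the apiserver proxy (node name is a placeholder):

kubectl get --raw "/api/v1/nodes/k8sprod1wa/proxy/configz" | python3 -m json.tool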

k8s usage

kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml

k8s config detail

  • kubeadm config (k8s_init_cfg & k8s_cluster_cfg helper)

    • kubeadm config images list #uses k8s_init_cfg to generate k8s_cluster_cfg and prints the list of images the cluster will need
    • kubeadm config images pull
    • kubeadm config print [init-defaults|join-defaults|reset-defaults]
  • kubeadm kubeconfig (kube_access_cfg helper)

  • kube_access_cfg (--kubeconfig): includes CACert, APIServer, ClientName, TokenAuth, ClientCertAuth

    • load kube_access_cfg: kubeadm & kubectl load it by default from ~/.kube/config or /etc/kubernetes/admin.conf
    • save kube_access_cfg at bootstrap: kubeadm init generates /etc/kubernetes/admin.conf in its 'kubeconfig' phase, reference kubeadm init -h
    • copying /etc/kubernetes/admin.conf to some user's ~/.kube/config gives that user full access to the k8s cluster
    • generate a kube_access_cfg (with limited access & validity period) printed to stdout: kubeadm kubeconfig user [--config k8s_cluster_cfg.yml]; see the example below
    • (kubeadm config|kubeadm upgrade|...) accept the --kubeconfig kube_access_cfg.yml option to load a kube_access_cfg
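    • example (client name and validity period are assumptions): kubeadm kubeconfig user --client-name=dev-readonly --validity-period=24h --config k8s_cluster_cfg.yml > dev-readonly.conf #then bind dev-readonly to a Role/ClusterRole via RBAC before handing it out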
  • k8s_init_cfg (--config):

    • show default k8s_init_cfg: kubeadm config print init-defaults
    • save default k8s_init_cfg: kubeadm config print init-defaults > k8s_init_cfg.yml
    • on another machine, init k8s with: kubeadm init --config k8s_init_cfg.yml
    • 'kubeadm init' == 'kubeadm init --config <(kubeadm config print init-defaults)', but you can edit the output of 'kubeadm config print init-defaults' to customize the init
  • k8s_cluster_cfg:

    • show current k8s_cluster_cfg: kubectl describe -n kube-system cm kubeadm-config
  • kubelet config:

    • [kubeadm init|join|upgrade] writes the KubeletConfiguration to /var/lib/kubelet/config.yaml and passes it to the local node's kubelet #ref
    • to update kubelet config on all nodes, ref: Update the cgroup driver on all nodes
    • kubectl drain [node-name] --ignore-daemonsets
    • kubectl uncordon [node-name]