Kubernetes (k8s): Creating Permanent (100-Year) Certificates
Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. It provides container orchestration, automatically handling tasks such as deployment, scaling, load balancing, and fault tolerance. Kubernetes builds on container technology, notably Docker, and uses containers as the basic building blocks for applications and services; with it, users can easily define and deploy containerized applications and orchestrate them across a cluster.
Kubernetes can be installed in several ways, and you can choose whichever fits your needs and situation; this article uses kubeadm to deploy the cluster. The certificates involved in a kubeadm installation have different lifetimes: the CA certificates are valid for 10 years, but every certificate the components actually use day to day is valid for only 1 year.
1. Check the current certificates
[root@node1 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 19, 2026 08:13 UTC   364d            ca                      no
apiserver                  Feb 19, 2026 08:13 UTC   364d            ca                      no
apiserver-etcd-client      Feb 19, 2026 08:13 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Feb 19, 2026 08:13 UTC   364d            ca                      no
controller-manager.conf    Feb 19, 2026 08:13 UTC   364d            ca                      no
etcd-healthcheck-client    Feb 19, 2026 08:13 UTC   364d            etcd-ca                 no
etcd-peer                  Feb 19, 2026 08:13 UTC   364d            etcd-ca                 no
etcd-server                Feb 19, 2026 08:13 UTC   364d            etcd-ca                 no
front-proxy-client         Feb 19, 2026 08:13 UTC   364d            front-proxy-ca          no
scheduler.conf             Feb 19, 2026 08:13 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 17, 2035 08:13 UTC   9y              no
etcd-ca                 Feb 17, 2035 08:13 UTC   9y              no
front-proxy-ca          Feb 17, 2035 08:13 UTC   9y              no
[root@node1 ~]#
The output above contains 3 CA certificates, 3 etcd certificates, 3 apiserver-related certificates, one kubectl (admin.conf) certificate, one controller-manager certificate, one scheduler certificate, and one front-proxy client certificate. Once these certificates expire they stop working, and they are not renewed automatically (kubeadm only renews them during a control-plane upgrade). How, then, do we check the kubelet certificate?
[root@node1 ~]# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -text |grep Validity -A2
Validity
Not Before: Feb 19 08:13:41 2025 GMT
Not After : Feb 19 08:13:46 2026 GMT
Digging further into the directory shows that this file is actually a symlink to a timestamped file in the same directory. The timestamp is the certificate's creation time, and the certificate expires exactly one year after it.
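A quick way to confirm this, assuming the default kubelet PKI directory (the timestamped file name will differ per host):
##the -current file is a symlink to the newest timestamped client certificate
ls -l /var/lib/kubelet/pki/kubelet-client-current.pem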
2. Renew the certificates
[root@node1 pki]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
[root@node1 pki]#
With the single command above we renew every certificate except the kubelet's; the CA certificates are not renewed. We can also renew an individual component by replacing the final argument with the corresponding certificate name:
kubeadm certs renew scheduler.conf
After renewal, the affected components must be restarted before the new certificates take effect.
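One common way to restart the static control-plane Pods is to move their manifests out and back, which makes the kubelet stop and recreate them; a minimal sketch, assuming the default /etc/kubernetes/manifests path:
##temporarily remove the static Pod manifests so the kubelet stops the Pods
mkdir -p /tmp/manifests-backup
mv /etc/kubernetes/manifests/*.yaml /tmp/manifests-backup/
##give the kubelet time to tear the Pods down, then restore the manifests
sleep 20
mv /tmp/manifests-backup/*.yaml /etc/kubernetes/manifests/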
But is there a way to get longer-lived certificates right when the cluster is created? Unfortunately, kubeadm does not expose such a parameter: the validity period is hard-coded in kubeadm's source, so the only option is to modify the source code.
Preparation
1. Install the Go environment on the server. Go must be newer than 1.17 or the build will fail; version 1.23 is used here.
[root@node1 ~]# cd /usr/local
[root@node1 local]# wget https://golang.google.cn/dl/go1.23.4.linux-amd64.tar.gz
--2025-02-19 16:25:02-- https://golang.google.cn/dl/go1.23.4.linux-amd64.tar.gz
Resolving golang.google.cn (golang.google.cn)... 114.250.64.34, 2401:3800:4001:14::1002
Connecting to golang.google.cn (golang.google.cn)|114.250.64.34|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://dl.google.com/go/go1.23.4.linux-amd64.tar.gz [following]
--2025-02-19 16:25:03-- https://dl.google.com/go/go1.23.4.linux-amd64.tar.gz
Resolving dl.google.com (dl.google.com)... 114.250.65.33
Connecting to dl.google.com (dl.google.com)|114.250.65.33|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 73645095 (70M) [application/x-gzip]
Saving to: ‘go1.23.4.linux-amd64.tar.gz.1’
100%[================================================================================================================================================================>] 73,645,095 38.3MB/s in 1.8s
2025-02-19 16:25:05 (38.3 MB/s) - ‘go1.23.4.linux-amd64.tar.gz.1’ saved [73645095/73645095]
[root@node1 local]# tar -zxvf go1.23.4.linux-amd64.tar.gz
##add Go to the PATH environment variable
[root@node1 local]# cat /root/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/usr/local/go/bin/
export PATH
##source the profile so the change takes effect immediately
[root@node1 local]# source /root/.bash_profile
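Verify that the toolchain is now on the PATH (this should print go1.23.4 linux/amd64):
go version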
2. Install the git command on the server
[root@node1 local]# yum -y install git
3. Modify the source code
Download the source
[root@node1 ~]# git clone --depth 1 -b v1.23.12 https://github.com/kubernetes/kubernetes.git
[root@node1 ~]# cd kubernetes-1.23.12
Notes:
The demo environment runs Red Hat Enterprise Linux Server release 7.9 (Maipo).
The el7 yum repositories only provide packages up to version 1.28.2; this demo uses version 1.23.12.
If the git clone does not go through, you can instead download the source as a zip and upload it to the server, which is the approach taken here (a GitHub zip extracts to kubernetes-1.23.12, matching the cd above, whereas git clone creates a directory named kubernetes).
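A sketch of the zip alternative, assuming GitHub's standard tag-archive URL layout (requires the unzip package):
wget https://github.com/kubernetes/kubernetes/archive/refs/tags/v1.23.12.zip
unzip v1.23.12.zip
cd kubernetes-1.23.12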
Extend the CA certificates to 100 years
[root@node1 kubernetes-1.23.12]# cd staging/src/k8s.io/client-go/util/cert/
[root@node1 cert]# vi cert.go
// NewSelfSignedCACert creates a CA certificate
58 func NewSelfSignedCACert(cfg Config, key crypto.Signer) (*x509.Certificate, error) {
59     now := time.Now()
60     tmpl := x509.Certificate{
61         SerialNumber: new(big.Int).SetInt64(0),
62         Subject: pkix.Name{
63             CommonName:   cfg.CommonName,
64             Organization: cfg.Organization,
65         },
66         DNSNames:  []string{cfg.CommonName},
67         NotBefore: now.UTC(),
68         NotAfter:  now.Add(duration365d * 100).UTC(),
69         KeyUsage:  x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
Location: line 68. Change the multiplier to duration365d * 100, so the line reads:
NotAfter: now.Add(duration365d * 100).UTC()
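A quick check that the edit is in place (the grep may match other lines too; the one inside NewSelfSignedCACert should now read duration365d * 100):
grep -n 'NotAfter' cert.go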
Extend the other generated certificates
[root@node1 kubernetes-1.23.12]# cd cmd/kubeadm/app/constants/
[root@node1 constants]# vi constants.go
49 // CertificateValidity defines the validity for all the signed certificates generated by kubeadm
50 //CertificateValidity = time.Hour * 24 * 365
51 CertificateValidity = time.Hour * 24 * 365 * 100
Location: line 50. The original is CertificateValidity = time.Hour * 24 * 365; change it to:
CertificateValidity = time.Hour * 24 * 365 * 100
Save and exit.
On a brand-new machine you also need the gcc compiler; the demo environment already has gcc installed, so that step is skipped here. A sketch for a fresh host follows.
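##minimal sketch for a fresh el7 host, using the standard package names
yum -y install gcc make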
Start the build
[root@node1 kubernetes-1.23.12]# make all WHAT=cmd/kubeadm GOFLAGS=-v
When the build finishes, grab the resulting kubeadm binary:
[root@node1 kubernetes-1.23.12]# ls -ltr _output/bin/kubeadm
-rwxr-xr-x 1 root root 44544152 Feb 19 14:14 _output/bin/kubeadm
To reinstall the cluster, replace the yum-installed kubeadm binary at /bin/kubeadm with the freshly built one, then use it to create the cluster.
[root@node1 ~]# cp -rf kubernetes-1.23.12/_output/bin/kubeadm /bin/
cp: overwrite ‘/bin/kubeadm’? y
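To confirm the replacement took effect, compare checksums of the freshly built binary and the installed one; the two sums should match (paths from the steps above):
md5sum kubernetes-1.23.12/_output/bin/kubeadm /bin/kubeadm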
Create the cluster
##tear down the existing cluster
[root@node1 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@node1 ~]# rm -rf /root/.kube/config
##create the new cluster
[root@node1 ~]# kubeadm init \
> --apiserver-advertise-address=172.16.17.50 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.23.12 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.12
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.4. Latest validated version: 20.10
[WARNING Hostname]: hostname "node1" could not be reached
[WARNING Hostname]: hostname "node1": lookup node1 on 202.102.224.68:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 172.16.17.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [172.16.17.50 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [172.16.17.50 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.003817 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 00vk10.b7uhs64iqq3v1rmo
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.17.50:6443 --token 00vk10.b7uhs64iqq3v1rmo \
--discovery-token-ca-cert-hash sha256:a8c22ef3ebb1bbfc798635dc2611fed7f3f520fbda6b9b6947ea5c5c83a5da82
[root@node1 ~]# mkdir -p $HOME/.kube
[root@node1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 NotReady control-plane,master 43s v1.23.12
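The node reports NotReady because no Pod network add-on is installed yet. A common choice matching the 10.244.0.0/16 CIDR used above is flannel; a sketch, assuming the manifest URL published by the flannel project:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml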
Check the certificates
[root@node1 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 26, 2125 08:45 UTC   99y             ca                      no
apiserver                  Jan 26, 2125 08:45 UTC   99y             ca                      no
apiserver-etcd-client      Jan 26, 2125 08:45 UTC   99y             etcd-ca                 no
apiserver-kubelet-client   Jan 26, 2125 08:45 UTC   99y             ca                      no
controller-manager.conf    Jan 26, 2125 08:45 UTC   99y             ca                      no
etcd-healthcheck-client    Jan 26, 2125 08:45 UTC   99y             etcd-ca                 no
etcd-peer                  Jan 26, 2125 08:45 UTC   99y             etcd-ca                 no
etcd-server                Jan 26, 2125 08:45 UTC   99y             etcd-ca                 no
front-proxy-client         Jan 26, 2125 08:45 UTC   99y             front-proxy-ca          no
scheduler.conf             Jan 26, 2125 08:45 UTC   99y             ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 26, 2125 08:45 UTC   99y             no
etcd-ca                 Jan 26, 2125 08:45 UTC   99y             no
front-proxy-ca          Jan 26, 2125 08:45 UTC   99y             no
[root@node1 ~]# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -text |grep Validity -A2
Validity
Not Before: Feb 19 08:45:29 2025 GMT
Not After : Jan 26 08:45:33 2125 GMT
As you can see, both the CA certificates and all the certificates in day-to-day use, including the kubelet's, are now valid for 100 years, so the cluster no longer needs to worry about certificate expiry.
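As a final cross-check, any certificate under /etc/kubernetes/pki can be inspected directly, for example:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate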