<h2><center>Kubernetes Dashboard</center></h2>

------

## I: Dashboard

Until now, every operation in Kubernetes has been performed with the command-line tool kubectl. To provide a richer user experience, Kubernetes also offers a web-based user interface, the Dashboard. With the Dashboard, users can deploy containerized applications, monitor application status, troubleshoot failures, and manage the various resources in the cluster.

### 1. Deploying the Dashboard

**Download the YAML and deploy the Dashboard**

```bash
# Download the manifest
[root@master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml

# Change the Service type of kubernetes-dashboard
[root@master ~]# vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

# Deploy
[root@master ~]# kubectl create -f recommended.yaml

# List the resources in the kubernetes-dashboard namespace
[root@master ~]# kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-799d786dbf-rqftt   1/1     Running   0          7m46s
pod/kubernetes-dashboard-fb8648fd9-wmqwv         1/1     Running   0          7m46s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.108.205.40   <none>        8000/TCP        7m46s
service/kubernetes-dashboard        NodePort    10.102.51.227   <none>        443:30001/TCP   7m46s
```

**Create an access account and obtain its token**

```bash
# Create the account
[root@master ~]# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
serviceaccount/dashboard-admin created

# Grant permissions
[root@master ~]# kubectl create clusterrolebinding dashboard-admin-rb --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin-rb created

# Get the account's token
[root@master ~]# kubectl get secrets -n kubernetes-dashboard | grep dashboard-admin
dashboard-admin-token-w49cf   kubernetes.io/service-account-token   3      13s

[root@master ~]# kubectl describe secrets dashboard-admin-token-w49cf -n kubernetes-dashboard
Name:         dashboard-admin-token-w49cf
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 97f46627-fdc0-4379-8588-cf18f697e623

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InNQU3pOVFpLVmlwVHM3em5vZUV3S2xTbGZmc3g4M3BhTlZiTGExdlRkZTgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdzQ5Y2YiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTdmNDY2MjctZmRjMC00Mzc5LTg1ODgtY2YxOGY2OTdlNjIzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.bL6DuuI9j0Hn2WUaMgrzqxrPVHJuhNXBMAD-enK-SN_--hFh7zXVQLbU63caR6wCtPh5xjpWMm6jiKPu5oAeUZcumJbgpnTk6aYiRGu3h8i_IN_mrUdCsgu4ux_AeZuoj3gu83aMoHoG9GRZi--0TOJYeRm6lGToNMad_vXPBvCnZmgV9QDRdmjBtLt4cfz816PIWdLC0-36R4QnVWNVGqyEkeTnqYYWQIlL13hmbUfGaz7ZnkxYLnFO6uvKd9DSrb7jU9hzLY_MoOST11G-BUzcGaiGutgrbGobMGBKzB2vFLuKswxwyXQcshPxkWw3ieg8x7_WLGt-V7E2G2x0tw
ca.crt:     1099 bytes
```

⚠️ **Handling the browser security warning**:

- With a self-signed certificate, the browser will block access.
- **Temporary workaround**: with the warning page focused, type `thisisunsafe` (works in Chrome/Edge).
- **Long-term fix**: import the cluster's CA certificate into the system trust store.

**Access the Dashboard UI in a browser**

Enter the token obtained above on the login page



If the following page appears, the login succeeded



### 2. Using the Dashboard

This section uses a Deployment as an example to demonstrate the Dashboard

**View**

Select the dev namespace, then click Deployments to list all Deployments in the dev namespace



**Scale**

On a Deployment, click Scale, enter the desired number of replicas, and click OK




**Edit**

On a Deployment, click Edit, modify the YAML, and click OK




**View Pods**

Click Pods to list the Pods



**Operate on Pods**

Select a Pod to view its logs, exec into it, edit it, or delete it

<h2><center>Kubernetes Service Explained</center></h2>

------

## I: Introduction to Service

In Kubernetes, a Pod is the carrier of an application, and an application can be reached through its Pod's IP. But Pod IPs are not fixed, which makes accessing a service directly through Pod IPs impractical.

To solve this, Kubernetes provides the Service resource. A Service aggregates the Pods that provide the same service and exposes a single, stable entry address; accessing the Service's entry address reaches the Pods behind it.



A Service is, in many cases, just a concept. What actually does the work is the kube-proxy process that runs on every Node. When a Service is created, the api-server writes the Service's information to etcd; kube-proxy discovers such Service changes through its watch mechanism and translates the latest Service information into the corresponding access rules.


```bash
# 10.97.97.97:80 is the access entry exposed by the Service.
# When this entry is accessed, three Pods behind it are waiting to serve the request;
# kube-proxy distributes each request to one of them using the rr (round-robin) policy.
# These rules are generated on every node in the cluster, so the entry is reachable from any node.
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0
```
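
The rr policy in the listing above can be sketched as a small shell toy (the backend list is copied from the `ipvsadm` output; this only illustrates round-robin selection and is not kube-proxy's actual implementation):

```shell
# Toy round-robin selector over the three backends shown by ipvsadm.
backends=("10.244.1.39:80" "10.244.1.40:80" "10.244.2.33:80")
i=0
pick_backend() {
  # Index wraps around the backend list, one step per call.
  echo "${backends[i % ${#backends[@]}]}"
  i=$((i + 1))
}
pick_backend   # 10.244.1.39:80
pick_backend   # 10.244.1.40:80
pick_backend   # 10.244.2.33:80
pick_backend   # wraps back to 10.244.1.39:80
```

Each node holding the same rule set is why the same sequence of backends is reachable from any node in the cluster.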

kube-proxy currently supports three working modes:

### 1. userspace mode

In userspace mode, kube-proxy opens a listening port for every Service. Requests sent to the Cluster IP are redirected by iptables rules to kube-proxy's listening port; kube-proxy then picks a serving Pod according to its load-balancing algorithm, establishes a connection to it, and forwards the request. In this mode kube-proxy acts as a layer-4 load balancer. Because kube-proxy runs in user space, forwarding incurs extra data copies between kernel and user space; the mode is fairly stable but relatively inefficient.



### 2. iptables mode

In iptables mode, kube-proxy creates iptables rules for each Pod behind a Service, redirecting requests sent to the Cluster IP directly to a Pod IP. In this mode kube-proxy does not itself act as a layer-4 load balancer; it only maintains the iptables rules. This is more efficient than userspace mode, but it cannot provide flexible load-balancing policies and cannot retry when a backend Pod is unavailable.



### 3. ipvs mode

ipvs mode is similar to iptables mode: kube-proxy watches Pod changes and creates the corresponding ipvs rules. ipvs forwards traffic more efficiently than iptables and supports more load-balancing algorithms.


```bash
# This mode requires the ipvs kernel modules to be installed; otherwise kube-proxy falls back to iptables.
# Enable ipvs:
# set mode: "ipvs" in the kube-proxy ConfigMap
[root@master ~]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
[root@master ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
pod "kube-proxy-fxv98" deleted
pod "kube-proxy-mtn5n" deleted
pod "kube-proxy-x82nf" deleted
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0
```

## II: Service Types

The Service resource manifest:

```yaml
kind: Service  # resource type
apiVersion: v1  # resource version
metadata:  # metadata
  name: service  # resource name
  namespace: dev  # namespace
spec:  # description
  selector:  # label selector; determines which Pods this Service proxies
    app: nginx
  type:  # Service type; specifies how the Service is accessed
  clusterIP:  # virtual service IP address
  sessionAffinity:  # session affinity; supports ClientIP and None
  ports:  # port information
    - protocol: TCP
      port: 3017  # service port
      targetPort: 5003  # pod port
      nodePort: 31122  # host port
```

- ClusterIP: the default; a virtual IP automatically assigned by Kubernetes, reachable only from inside the cluster
- NodePort: exposes the Service on a specified port of the Nodes, so the service can be accessed from outside the cluster
- LoadBalancer: uses an external load balancer to distribute traffic to the service; note that this mode requires support from an external cloud environment
- ExternalName: brings a service outside the cluster into the cluster so it can be used directly

## III: Using Services

### 1. Preparing the experiment environment

Before using Services, first use a Deployment to create 3 Pods; note that the Pods must carry the label app=nginx-pod

Create deployment.yaml with the following content:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
        - name: nginx
          image: nginx:1.17.1
          ports:
            - containerPort: 80
```

```bash
[root@master ~]# kubectl create -f deployment.yaml
deployment.apps/pc-deployment created

# Inspect the Pods
[root@master ~]# kubectl get pods -n dev -o wide --show-labels
NAME                             READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES   LABELS
pc-deployment-6756f95949-6slx9   1/1     Running   0          101s   10.244.1.2   node1   <none>           <none>            app=nginx-pod,pod-template-hash=6756f95949
pc-deployment-6756f95949-hjkfw   1/1     Running   0          101s   10.244.2.2   node2   <none>           <none>            app=nginx-pod,pod-template-hash=6756f95949
pc-deployment-6756f95949-jjzvd   1/1     Running   0          101s   10.244.2.3   node2   <none>           <none>            app=nginx-pod,pod-template-hash=6756f95949

# To make the following tests easier, change each nginx's index.html
# (write a different IP on each of the three Pods)
# kubectl exec -it pc-deployment-6756f95949-6slx9 -n dev /bin/sh
# echo "10.244.1.2" > /usr/share/nginx/html/index.html

# After the changes, test access
[root@master ~]# curl 10.244.1.2
10.244.1.2
[root@master ~]# curl 10.244.2.2
10.244.2.2
[root@master ~]# curl 10.244.2.3
10.244.2.3
```

### 2. ClusterIP Service

Create service-clusterip.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
```

```bash
# Create the Service
[root@master ~]# kubectl create -f service-clusterip.yaml
service/service-clusterip created

# List the Service
[root@master ~]# kubectl get svc -n dev -o wide
NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service-clusterip   ClusterIP   10.97.97.97   <none>        80/TCP    10s   app=nginx-pod

# Show the Service's details
# Note the Endpoints list: these are the backend entry points this Service can balance across
[root@master ~]# kubectl describe svc service-clusterip -n dev
Name:              service-clusterip
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.97.97.97
IPs:               10.97.97.97
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.2:80,10.244.2.2:80,10.244.2.3:80
Session Affinity:  None
Events:            <none>

# Show the ipvs mapping rules
[root@master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.2:80                Masq    1      0          0
  -> 10.244.2.2:80                Masq    1      0          0
  -> 10.244.2.3:80                Masq    1      0          0

# Access 10.97.97.97:80 and observe the result
[root@master ~]# curl 10.97.97.97:80
10.244.2.3
```

### 3. Endpoints

Endpoints is a Kubernetes resource object, stored in etcd, that records the access addresses of all the Pods behind a Service; it is generated from the selector in the Service's manifest.

A Service is backed by a group of Pods, and these Pods are exposed through the Endpoints object: Endpoints is the set of endpoints that implement the actual service. In other words, the link between a Service and its Pods is realized through Endpoints.



**Load distribution policies**

Access to a Service is distributed across the backend Pods. Kubernetes currently provides two distribution policies:

- If none is specified, kube-proxy's policy is used, e.g. random or round-robin

- Session affinity based on client address, i.e. all requests from the same client are forwarded to one fixed Pod

This mode is enabled by adding the sessionAffinity: ClientIP option to the spec
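
Concretely, enabling it is a small change to the service-clusterip.yaml used earlier (a sketch; the rest of the manifest is unchanged):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97
  type: ClusterIP
  sessionAffinity: ClientIP   # stick all requests from one client IP to one Pod
  ports:
    - port: 80
      targetPort: 80
```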

```bash
# Show the ipvs mapping rules (rr = round-robin)
[root@master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.2:80                Masq    1      0          0
  -> 10.244.2.2:80                Masq    1      0          0
  -> 10.244.2.3:80                Masq    1      0          0

# Access it in a loop
[root@master ~]# while true;do curl 10.97.97.97:80; sleep 5; done;
10.244.2.2
10.244.1.2
10.244.2.3
10.244.2.2
10.244.1.2

# Change the distribution policy to sessionAffinity: ClientIP
[root@master ~]# kubectl edit service service-clusterip -n dev
service/service-clusterip edited

# Show the ipvs rules again ("persistent" means sticky)
[root@master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr persistent 10800
  -> 10.244.1.2:80                Masq    1      0          0
  -> 10.244.2.2:80                Masq    1      0          0
  -> 10.244.2.3:80                Masq    1      0          0

# Access it in a loop again
[root@master ~]# while true;do curl 10.97.97.97; sleep 5; done;
10.244.2.3
10.244.2.3
10.244.2.3
10.244.2.3

# Delete the Service
[root@master ~]# kubectl delete -f service-clusterip.yaml
service "service-clusterip" deleted
```

### 4. Headless Service

In some scenarios developers do not want the load balancing a Service provides and prefer to control the load-balancing policy themselves. For such cases Kubernetes offers the Headless Service. A Headless Service is not assigned a Cluster IP; it can only be reached by resolving the Service's DNS name.

Create service-headliness.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-headliness
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None  # setting clusterIP to None creates a headless Service
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
```

```bash
# Create the Service
[root@master ~]# kubectl create -f service-headliness.yaml
service/service-headliness created

# Get the Service: note that no CLUSTER-IP is assigned
[root@master ~]# kubectl get svc service-headliness -n dev -o wide
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service-headliness   ClusterIP   None         <none>        80/TCP    6s    app=nginx-pod

# Show the Service's details
[root@master ~]# kubectl describe svc service-headliness -n dev
Name:              service-headliness
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                None
IPs:               None
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.2:80,10.244.2.2:80,10.244.2.3:80
Session Affinity:  None
Events:            <none>

# Check how the Service's domain name resolves
[root@master ~]# kubectl exec -it pc-deployment-6756f95949-6slx9 -n dev /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# cat /etc/resolv.conf
nameserver 10.96.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local

[root@master ~]# dig @10.96.0.10 service-headliness.dev.svc.cluster.local
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.2.3
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.1.2
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.2.2
```

### 5. NodePort Service

In the previous examples, the created Service's IP was reachable only from inside the cluster. To expose a Service to the outside, another Service type is needed: NodePort. A NodePort Service works by mapping a Service port to a port on the Nodes, after which the service can be accessed via NodeIP:NodePort.



Create service-nodeport.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: NodePort
  ports:
    - port: 80
      nodePort: 30002
      targetPort: 80
```

```bash
# Create the Service
[root@master ~]# kubectl create -f service-nodeport.yaml
service/service-nodeport created

# List the Service
[root@master ~]# kubectl get svc -n dev -o wide
NAME               TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service-nodeport   NodePort   10.104.5.98   <none>        80:30002/TCP   15s   app=nginx-pod
```

### 6. LoadBalancer Service

LoadBalancer is very similar to NodePort; both aim to expose a port to the outside. The difference is that LoadBalancer places a load-balancing device outside the cluster, which requires support from the external environment; requests that external services send to this device are balanced and then forwarded into the cluster.



### 7. ExternalName Service

An ExternalName Service is used to bring a service outside the cluster into it: the externalName field specifies the address of an external service, and accessing this Service from inside the cluster then reaches the external service.



Create service-externalname.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-externalname
  namespace: dev
spec:
  type: ExternalName
  externalName: www.baidu.com
```

```bash
# Create the Service
[root@master ~]# kubectl create -f service-externalname.yaml
service/service-externalname created

# Resolve the domain name
[root@master ~]# dig @10.96.0.10 service-externalname.dev.svc.cluster.local
www.baidu.com.          5       IN      CNAME   www.a.shifen.com.
www.a.shifen.com.       5       IN      A       39.156.70.46
www.a.shifen.com.       5       IN      A       39.156.70.239
```

## IV: Introduction to Ingress

As mentioned earlier, a Service has two main ways of exposing services outside the cluster: NodePort and LoadBalancer. Both have drawbacks:

- NodePort occupies many ports on the cluster's machines; as the number of services grows, this drawback becomes increasingly apparent
- LoadBalancer requires one LB per Service, which is wasteful and cumbersome, and needs support from devices outside Kubernetes

Given this, Kubernetes provides the Ingress resource object: a single NodePort or a single LB is enough for an Ingress to expose many Services. The mechanism works roughly as shown below:



In effect, an Ingress is a layer-7 load balancer; it is Kubernetes' abstraction over a reverse proxy. It works much like Nginx: many mapping rules are defined in Ingress objects, and an Ingress Controller watches these rules and converts them into the reverse proxy's (e.g. Nginx's) configuration, which then serves external traffic. There are two core concepts here:

- Ingress: a Kubernetes object that defines the rules for forwarding requests to Services
- Ingress controller: the program that actually implements the reverse proxying and load balancing; it parses the rules defined by Ingress objects and forwards requests according to them. Many implementations exist, e.g. Nginx, Contour, HAProxy

The working principle of Ingress (with Nginx as the example):

1. The user writes Ingress rules declaring which domain name corresponds to which Service in the cluster
2. The Ingress controller detects changes in the Ingress rules and generates a corresponding piece of Nginx reverse proxy configuration
3. The Ingress controller writes the generated configuration into a running Nginx service and updates it dynamically
4. From that point on, what is really doing the work is an Nginx instance configured with the user-defined forwarding rules



## V: Using Ingress

### 1. Setting up the ingress environment

```bash

```

### 2. Preparing Services and Pods

For convenience in the experiments that follow, build the model shown in the figure below



Create tomcat-nginx.yaml

```yaml

```

```bash

```

### 3. HTTP proxying

Create ingress-http.yaml

```yaml

```

```bash

```
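
The manifest above was left empty in the source. As a hedged sketch only (the host names and the backend Service names nginx-service / tomcat-service are assumptions, not taken from the source; adjust them to whatever Services tomcat-nginx.yaml actually creates), an HTTP Ingress for an nginx + tomcat pair might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  rules:
    - host: nginx.example.com        # hypothetical domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service  # assumed Service name
                port:
                  number: 80
    - host: tomcat.example.com       # hypothetical domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tomcat-service # assumed Service name
                port:
                  number: 8080
```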

### 4. HTTPS proxying

Create the certificate

```bash

```

Create ingress-https.yaml

```yaml

```

```bash

```

<h2><center>Kubernetes Security and Authentication</center></h2>

------

## I: Security and Authentication

### 1. Access control overview

As a management tool for distributed clusters, one of Kubernetes' important tasks is keeping the cluster secure. Security here essentially means authenticating and authorizing the various clients of Kubernetes.

**Clients**

In a Kubernetes cluster, clients usually fall into two categories:

- User Account: user accounts generally managed by services external to Kubernetes.
- Service Account: accounts managed by Kubernetes, used to provide an identity for the service processes in Pods when they access Kubernetes.



**Authentication, authorization, and admission control**

The ApiServer is the only entry point for accessing and managing resource objects. Every request to the ApiServer goes through three stages:

- Authentication: identity verification; only a valid account can pass
- Authorization: determines whether the user has permission to perform the requested action on the accessed resource
- Admission Control: complements the authorization mechanism to achieve finer-grained access control.

### 2. Authentication management

The most critical point of Kubernetes cluster security is how client identities are recognized and authenticated. Kubernetes provides three client authentication methods:

- HTTP Basic authentication: username + password

  The string "username:password" is encoded with Base64 and placed in the Authorization header of the HTTP request sent to the server. The server decodes it, extracts the username and password, and then authenticates the user.

- HTTP Token authentication: a token identifies a legitimate user

  This method uses a long, hard-to-forge string (a token) to identify the client. Each token corresponds to a username. When the client makes an API call, it puts the token in the HTTP header; the API Server compares the received token against the tokens it has stored and authenticates the user accordingly.

- HTTPS certificate authentication: mutual certificate authentication based on a CA-signed root certificate

  This is the most secure of the three methods, but also the most cumbersome to operate.



**HTTPS authentication roughly consists of 3 stages:**

1. Certificate request and issuance

   Both parties of the HTTPS communication apply to a CA for certificates; the CA issues the root certificate, the server certificate, and the private key to the applicants

2. Mutual authentication between client and server

   1. The client initiates a request, and the server sends its certificate to the client. The client verifies the certificate, obtains the server's public key from it, and uses that public key to validate the information in the certificate; if it matches, the client trusts the server.
   2. The client then sends its own certificate to the server; the server verifies it, obtains the client's public key from it, and validates the certificate information with that key to confirm the client is legitimate.

3. Communication between server and client

   After the server and client agree on an encryption scheme, the client generates a random session key, encrypts it, and sends it to the server.

   Once the server receives this key, all subsequent communication between the two parties is encrypted with this random session key

Note: Kubernetes allows multiple authentication methods to be configured simultaneously; a request is authenticated as long as any one of them succeeds

### 3. Authorization management

Authorization happens after successful authentication. Authentication establishes who the requesting user is; Kubernetes then decides, according to pre-defined authorization policies, whether the user has permission to access the resource. This process is called authorization.

Every request sent to the ApiServer carries user and resource information: the requesting user, the request path, the requested action, and so on. Authorization compares this information against the authorization policy; if it matches, authorization passes, otherwise an error is returned.

The API Server currently supports the following authorization policies:

- AlwaysDeny: rejects all requests; generally used for testing
- AlwaysAllow: accepts all requests, equivalent to running the cluster without an authorization step (the Kubernetes default)
- ABAC: attribute-based access control; matches and controls requests against user-configured authorization rules
- Webhook: authorizes users by calling an external REST service
- Node: a special-purpose mode for controlling requests issued by kubelets
- RBAC: role-based access control (the default with kubeadm installations)

RBAC (Role-Based Access Control) essentially describes one thing: which permissions are granted to which subjects

It involves the following concepts:

- Subject: User, Group, ServiceAccount
- Role: a set of actions (permissions) defined on resources
- Binding: attaches a defined role to a subject



RBAC introduces 4 top-level resource objects:

- Role, ClusterRole: roles, used to specify a set of permissions
- RoleBinding, ClusterRoleBinding: role bindings, used to grant a role (its permissions) to subjects

**Role, ClusterRole**

A role is a set of permissions, and the permissions are all additive (an allow list).
```yaml
# A Role can only grant access to resources within a namespace; the namespace must be specified
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: dev
  name: authorization-role
rules:
  - apiGroups: [""]  # API groups the rule applies to; "" (empty string) means the core API group
    resources: ["pods"]  # resource objects the rule applies to
    verbs: ["get", "watch", "list"]  # operations allowed on those resource objects
```

```yaml
# A ClusterRole can grant access to cluster-scoped resources, resources across namespaces, and non-resource endpoints
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: authorization-clusterrole
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
```

The parameters in rules deserve a detailed explanation:

- apiGroups: the API groups the rule applies to

```bash
"", "apps", "autoscaling", "batch"
```

- resources: the resource objects the rule applies to

```bash
"services", "endpoints", "pods", "secrets", "configmaps", "crontabs", "deployments", "jobs",
"nodes", "rolebindings", "clusterroles", "daemonsets", "replicasets", "statefulsets",
"horizontalpodautoscalers", "replicationcontrollers", "cronjobs"
```

- verbs: the operations allowed on the resource objects

```bash
"get", "list", "watch", "create", "update", "patch", "delete", "exec"
```
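
How a rule's verbs list gates an action can be pictured with a toy check (plain shell; an illustration only, not the apiserver's real authorizer):

```shell
# Toy check: is a requested verb present in a rule's verbs list?
verbs="get watch list"
allowed() {
  case " $verbs " in
    *" $1 "*) echo yes ;;
    *)        echo no  ;;
  esac
}
allowed get      # yes
allowed watch    # yes
allowed delete   # no: "delete" is not in the rule, so the request is denied
```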

**RoleBinding, ClusterRoleBinding**

A role binding binds a role to a target subject; the binding target can be a User, a Group, or a ServiceAccount.

```yaml
# A RoleBinding binds subjects in a namespace to a Role; those subjects then hold the permissions the Role defines
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: authorization-role-binding
  namespace: dev
subjects:
  - kind: User
    name: heima
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: authorization-role
  apiGroup: rbac.authorization.k8s.io
```

```yaml
# A ClusterRoleBinding binds the given subjects to a ClusterRole at cluster level, across all namespaces
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: authorization-clusterrole-binding
subjects:
  - kind: User
    name: heima
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: authorization-clusterrole
  apiGroup: rbac.authorization.k8s.io
```

**Granting permissions with a RoleBinding that references a ClusterRole**

A RoleBinding may reference a ClusterRole, granting subjects the permissions the ClusterRole defines on resources, but only within the binding's own namespace.

A very common practice is for cluster administrators to predefine a set of cluster-wide roles (ClusterRoles) and then reuse them across multiple namespaces. This greatly improves the efficiency of authorization management and keeps the basic authorization rules and user experience consistent across namespaces.

```yaml
# Although authorization-clusterrole is a cluster role, because a RoleBinding is used,
# heima can only read resources in the dev namespace
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: authorization-role-binding-ns
  namespace: dev
subjects:
  - kind: User
    name: heima
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: authorization-clusterrole
  apiGroup: rbac.authorization.k8s.io
```

**Hands-on: create an account that can only manage Pod resources in the dev namespace**

1. Create the account

```bash
# 1) Create a private key
[root@master ~]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077;openssl genrsa -out devman.key 2048)
Generating RSA private key, 2048 bit long modulus
...........................................................................+++
...................+++
e is 65537 (0x10001)

# 2) Sign it with the apiserver's CA
# 2-1) Create a signing request; the user is devman, the group is devgroup
[root@master pki]# openssl req -new -key devman.key -out devman.csr -subj "/CN=devman/O=devgroup"

# 2-2) Sign the certificate
[root@master pki]# openssl x509 -req -in devman.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out devman.crt -days 3650
Signature ok
subject=/CN=devman/O=devgroup
Getting CA Private Key

# 3) Configure the cluster, user, and context
[root@master pki]# kubectl config set-cluster kubernetes --embed-certs=true --certificate-authority=/etc/kubernetes/pki/ca.crt --server=https://192.168.159.130:6443
Cluster "kubernetes" set.
[root@master pki]# kubectl config set-credentials devman --embed-certs=true --client-certificate=/etc/kubernetes/pki/devman.crt --client-key=/etc/kubernetes/pki/devman.key
User "devman" set.
[root@master pki]# kubectl config set-context devman@kubernetes --cluster=kubernetes --user=devman
Context "devman@kubernetes" created.

# Switch to the devman context
[root@master pki]# kubectl config use-context devman@kubernetes
Switched to context "devman@kubernetes".

# List Pods in dev: no permission yet
[root@master pki]# kubectl get pods -n dev
Error from server (Forbidden): pods is forbidden: User "devman" cannot list resource "pods" in API group "" in the namespace "dev"

# Switch back to the admin context
[root@master pki]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
```

2. Create a Role and RoleBinding to authorize the devman user

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: dev
  name: dev-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: authorization-role-binding
  namespace: dev
subjects:
  - kind: User
    name: devman
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-role
  apiGroup: rbac.authorization.k8s.io
```

```bash
[root@master pki]# kubectl create -f dev-role.yaml
role.rbac.authorization.k8s.io/dev-role created
rolebinding.rbac.authorization.k8s.io/authorization-role-binding created
```
|
||||||
|
|
||||||
|
3. 切换账户,再次验证
|
||||||
|
|
||||||
|
```bash
# Switch to the devman account
[root@master pki]# kubectl config use-context devman@kubernetes
Switched to context "devman@kubernetes".

[root@master pki]# kubectl get pods -n dev
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-66cb59b984-8wp2k   1/1     Running   0          4d1h
nginx-deployment-66cb59b984-dc46j   1/1     Running   0          4d1h
nginx-deployment-66cb59b984-thfck   1/1     Running   0          4d1h

# Switch back to the admin account so the rest of the material is unaffected
[root@master pki]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
```

### 4. Admission control

After a request has passed authentication and authorization, it must also pass admission control before the apiserver will process it.

Admission control is a configurable list of controllers. Which admission controllers run is selected on the apiserver command line (newer versions use the `--enable-admission-plugins` flag in place of the deprecated `--admission-control`):

```bash
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,
DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
```

The apiserver executes a request only when every admission controller passes it; otherwise the request is rejected.

The admission controllers that can currently be configured include:

- AlwaysAdmit: admits all requests
- AlwaysDeny: denies all requests; generally used for testing
- AlwaysPullImages: always pulls the image before starting a container
- DenyExecOnPrivileged: intercepts all requests to execute commands in a privileged container
- ImagePolicyWebhook: delegates the admission decision to a backend webhook program
- ServiceAccount: automates ServiceAccount handling
- SecurityContextDeny: rejects any SecurityContext definitions in Pods
- ResourceQuota: for resource quota management; observes all requests to ensure the quota on a namespace is not exceeded
- LimitRanger: for resource limit management; acts on a namespace to ensure Pods have resource limits applied
- InitialResources: sets resource requests and limits for Pods that do not declare them, based on the historical usage of their images
- NamespaceLifecycle: rejects attempts to create resource objects in a namespace that does not exist; when a namespace is deleted, all objects inside it are deleted as well
- DefaultStorageClass: supports dynamic provisioning of shared storage by matching a default StorageClass for PVCs that do not specify one, minimizing the backend storage details a user must know when requesting a PVC
- DefaultTolerationSeconds: gives Pods that set no forgiveness tolerations a default toleration of 5 minutes for the notready:NoExecute and unreachable:NoExecute taints
- PodSecurityPolicy: decides, when a Pod is created or modified, whether to control the Pod's security policy based on its security context and the available PodSecurityPolicies
595
kubernetes-实战.md
Normal file
@ -0,0 +1,595 @@
<h2><center>Kubernetes in Practice</center></h2>

------

## 1: Namespace

Namespace is a very important resource in kubernetes. Its main job is to isolate resources between multiple environments or multiple tenants.

By default, all Pods in a kubernetes cluster can access one another. In practice you may not want two Pods to reach each other, and in that case you can place them in different namespaces. By assigning the cluster's internal resources to different Namespaces, kubernetes forms logical "groups" whose resources can be isolated, used, and managed separately.

Through kubernetes' authorization mechanism, different namespaces can be handed to different tenants to manage, achieving multi-tenant resource isolation. Combined with kubernetes' resource quota mechanism, the resources each tenant may consume (CPU usage, memory usage, and so on) can be capped, giving control over the resources available to each tenant.

![image-20200407100850484](Kubernetes.assets/image-20200407100850484.png)

After the cluster starts, kubernetes creates several namespaces by default:
```bash
[root@master ~]# kubectl get namespace
NAME              STATUS   AGE
default           Active   28d   # objects with no Namespace specified are placed in the default namespace
kube-flannel      Active   28d
kube-node-lease   Active   28d   # heartbeat maintenance between cluster nodes, introduced in v1.13
kube-public       Active   28d   # resources here can be accessed by everyone, including unauthenticated users
kube-system       Active   28d   # all resources created by the Kubernetes system live in this namespace
```

Now let's walk through the concrete operations on the namespace resource:

**View**

```bash
# 1 View all namespaces   command: kubectl get ns
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   28d
kube-flannel      Active   28d
kube-node-lease   Active   28d
kube-public       Active   28d
kube-system       Active   28d

# 2 View a specific namespace   command: kubectl get ns <name>
[root@master ~]# kubectl get ns default
NAME      STATUS   AGE
default   Active   28d

# 3 Choose an output format   command: kubectl get ns <name> -o <format>
# kubernetes supports many formats; wide, json, and yaml are the most common
[root@master ~]# kubectl get ns default -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2025-04-14T12:21:54Z"
  labels:
    kubernetes.io/metadata.name: default
  name: default
  resourceVersion: "191"
  uid: dd556322-ffcd-4a72-a8df-8723090e82c0
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

# 4 View namespace details   command: kubectl describe ns <name>
[root@master ~]# kubectl describe ns default
Name:         default
Labels:       kubernetes.io/metadata.name=default
Annotations:  <none>
Status:       Active

# ResourceQuota limits resources for the namespace as a whole
# LimitRange limits resources for each component inside the namespace
No resource quota.

No LimitRange resource.
```

**Create**

```bash
# Create a namespace
[root@master ~]# kubectl create ns dev
namespace/dev created
```

**Delete**

```bash
# Delete a namespace
[root@master ~]# kubectl delete ns dev
namespace "dev" deleted
```

**Configuration file approach**

First prepare a yaml file, ns-dev.yaml:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```

Then run the corresponding create and delete commands:

- create: kubectl create -f ns-dev.yaml
- delete: kubectl delete -f ns-dev.yaml

## 2: Pod

A Pod is the smallest unit kubernetes manages. A program must be deployed in a container to run, and a container must live inside a Pod.

A Pod can be thought of as a wrapper around containers; one Pod may hold one or more containers.

![image-20200407121501907](Kubernetes.assets/image-20200407121501907.png)

After the cluster starts, the kubernetes control-plane components themselves also run as Pods. You can check with:

```bash
[root@master ~]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS      AGE
coredns-64897985d-hzb7c          1/1     Running   1 (28d ago)   28d
coredns-64897985d-l2nvk          1/1     Running   1 (28d ago)   28d
etcd-master                      1/1     Running   1 (28d ago)   28d
kube-apiserver-master            1/1     Running   1 (28d ago)   28d
kube-controller-manager-master   1/1     Running   1 (28d ago)   28d
kube-proxy-fxv98                 1/1     Running   1 (28d ago)   28d
kube-proxy-mtn5n                 1/1     Running   1 (28d ago)   28d
kube-proxy-x82nf                 1/1     Running   1 (28d ago)   28d
kube-scheduler-master            1/1     Running   1 (28d ago)   28d
```

**Create and run**

In current kubernetes versions, kubectl run creates and runs a single Pod directly (older versions created it through a Deployment controller):

```bash
# Command format: kubectl run <pod name> [args]
# --image      the Pod's image
# --port       the exposed port
# --namespace  the target namespace
[root@master ~]# kubectl run nginx --image=nginx:latest --port=80 --namespace dev
pod/nginx created
```

**View Pod information**

```bash
# View basic Pod information
[root@master ~]# kubectl get pods -n dev
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          41s

# View detailed Pod information
[root@master ~]# kubectl describe pod nginx -n dev
Name:         nginx
Namespace:    dev
Priority:     0
Node:         node1/192.168.159.131
Start Time:   Tue, 13 May 2025 15:18:44 +0800
Labels:       run=nginx
Annotations:  <none>
Status:       Running
IP:           10.244.1.2
IPs:
  IP:  10.244.1.2
Containers:
  nginx:
    Container ID:   docker://e5ac12d1fc9534410f1716e7b5d67c827ae9c9a6eba2691e8dd7e6b0f31091da
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 13 May 2025 15:19:21 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4d89h (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-4d89h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  46s   default-scheduler  Successfully assigned dev/nginx to node1
  Normal  Pulling    45s   kubelet            Pulling image "nginx:latest"
  Normal  Pulled     9s    kubelet            Successfully pulled image "nginx:latest" in 35.843886421s
  Normal  Created    9s    kubelet            Created container nginx
  Normal  Started    9s    kubelet            Started container nginx
```

**Access the Pod**

```bash
# Get the Pod IP
[root@master ~]# kubectl get pods -n dev -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          75s   10.244.1.2   node1   <none>           <none>

# Create service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    run: nginx    # must match the label of the Pod created above
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
    targetPort: 80

# Create the service
[root@master ~]# kubectl create -f service-nodeport.yaml
service/service-nodeport created

# View the service
[root@master ~]# kubectl get svc -n dev -o wide
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service-nodeport   NodePort   10.103.76.162   <none>        80:30001/TCP   5s    run=nginx

# Now the pod can be reached from a browser on your workstation via port 30001 on any node IP
```

**Delete a specific Pod**

```bash
# Delete the specified Pod
[root@master ~]# kubectl delete pod nginx -n dev
pod "nginx" deleted
```

**Configuration file approach**

Create a pod-nginx.yaml with the following content:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
spec:
  containers:
  - image: nginx:latest
    name: pod
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
```

Then run the corresponding create and delete commands:

- create: kubectl create -f pod-nginx.yaml
- delete: kubectl delete -f pod-nginx.yaml


## 3: Label

Label is an important concept in kubernetes. Its purpose is to attach identifying metadata to resources so they can be distinguished and selected.

Characteristics of a Label:

- A Label is attached to objects such as Node, Pod, and Service as a key/value pair
- A resource object can define any number of Labels, and the same Label can be attached to any number of resource objects
- Labels are usually set when a resource object is defined, but they can also be added or removed dynamically after creation

Labels let you group resources along multiple dimensions, which makes resource allocation, scheduling, configuration, and deployment flexible and convenient.

Some common Label examples:

- version labels: "version":"release", "version":"stable"…
- environment labels: "environment":"dev", "environment":"test", "environment":"pro"
- tier labels: "tier":"frontend", "tier":"backend"

Once labels are defined, you also need a way to select by them. That is the job of the Label Selector:

Label attaches an identifier to a resource object

Label Selector queries and filters the resource objects that carry certain labels

There are currently two kinds of Label Selector:

- equality-based Label Selector

  name = slave: selects all objects whose Labels contain key="name" with value="slave"

  env != production: selects all objects whose Labels contain key="env" with a value not equal to "production"

- set-based Label Selector

  name in (master, slave): selects all objects whose Labels contain key="name" with value "master" or "slave"

  name notin (frontend): selects all objects whose Labels contain key="name" with a value not equal to "frontend"

Multiple selection conditions can be combined by joining several Label Selectors with a comma ",". For example:

- name=slave,env!=production
- name notin (frontend),env!=production

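The selector grammar described above (equality-based `=`/`!=`, set-based `in`/`notin`, comma-joined terms) can be sketched as a small matcher. This is a simplified illustration, not kubernetes' actual parser:

```python
# Match a resource's labels against a label selector string.
import re

def split_terms(selector):
    """Split on commas, but not on the commas inside in/notin value sets."""
    terms, depth, cur = [], 0, ""
    for ch in selector:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if ch == "," and depth == 0:
            terms.append(cur.strip())
            cur = ""
        else:
            cur += ch
    if cur.strip():
        terms.append(cur.strip())
    return terms

def matches(labels, selector):
    """True only if every comma-joined term matches (terms are AND-ed)."""
    for term in split_terms(selector):
        m = re.match(r"(\S+)\s+(notin|in)\s+\((.*)\)$", term)
        if m:  # set-based term: key in (...) / key notin (...)
            key, op = m.group(1), m.group(2)
            values = {v.strip() for v in m.group(3).split(",")}
            in_set = labels.get(key) in values
            if (op == "in") != in_set:
                return False
        elif "!=" in term:  # equality-based: key != value
            key, value = (s.strip() for s in term.split("!="))
            if labels.get(key) == value:
                return False
        else:               # equality-based: key = value
            key, value = (s.strip() for s in term.split("="))
            if labels.get(key) != value:
                return False
    return True

pod = {"name": "slave", "env": "test"}
print(matches(pod, "name = slave"))                # True
print(matches(pod, "name in (master, slave)"))     # True
print(matches(pod, "name=slave,env!=production"))  # True
print(matches(pod, "name notin (frontend)"))       # True
```

This mirrors what `kubectl get pod -l <selector>` does server-side: each object's label map is tested against the selector, and all comma-joined terms must hold.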
**Command-line approach**

```bash
# Add a label to a pod resource
[root@master ~]# kubectl label pod nginx version=1.0 -n dev
pod/nginx labeled

# View labels
[root@master ~]# kubectl get pod nginx -n dev --show-labels
NAME    READY   STATUS    RESTARTS   AGE    LABELS
nginx   1/1     Running   0          113s   version=1.0

# Update a label on a pod resource
[root@master ~]# kubectl label pod nginx version=2.0 -n dev --overwrite
pod/nginx labeled

# View labels
[root@master ~]# kubectl get pod nginx -n dev --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
nginx   1/1     Running   0          2m53s   version=2.0

# Filter by label
[root@master ~]# kubectl get pod -n dev -l version=2.0 --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
nginx   1/1     Running   0          3m17s   version=2.0
[root@master ~]# kubectl get pod -n dev -l version!=2.0 --show-labels
No resources found in dev namespace.

# Remove a label
[root@master ~]# kubectl label pod nginx -n dev version-
pod/nginx unlabeled
```

**Configuration file approach**

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
  labels:
    version: "3.0"
    env: "test"
spec:
  containers:
  - image: nginx:latest
    name: pod
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
```

Then run the corresponding update command: kubectl apply -f pod-nginx.yaml


## 4: Deployment

In kubernetes, a Pod is the smallest unit of control, but kubernetes rarely manages Pods directly; it usually works through Pod controllers. A Pod controller manages pods and ensures that pod resources match the desired state; when a pod fails, it tries to restart or recreate it.

There are many kinds of Pod controllers in kubernetes; this section introduces only one: Deployment.

![image-20200408193950807](Kubernetes.assets/image-20200408193950807.png)

**Command-line approach**

```bash
# Command format: kubectl create deployment <name> [args]
# --image      the pod's image
# --port       the exposed port
# --replicas   the number of pods to create
# --namespace  the target namespace
[root@master ~]# kubectl create deployment nginx --image=nginx:latest --replicas=3 -n dev
deployment.apps/nginx created

# View the created Pods
[root@master ~]# kubectl get pods -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7c658794b9-2w5gx   1/1     Running   0          71s
nginx-7c658794b9-4b7cs   1/1     Running   0          71s
nginx-7c658794b9-k9bh7   1/1     Running   0          71s

# View deployment information
[root@master ~]# kubectl get deploy -n dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           96s

# UP-TO-DATE: the number of replicas successfully updated
# AVAILABLE: the number of available replicas
[root@master ~]# kubectl get deploy -n dev -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES         SELECTOR
nginx   3/3     3            3           2m1s   nginx        nginx:latest   app=nginx

# View detailed deployment information
[root@master ~]# kubectl describe deploy nginx -n dev
Name:                   nginx
Namespace:              dev
CreationTimestamp:      Thu, 15 May 2025 15:48:56 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:latest
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-7c658794b9 (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  2m25s  deployment-controller  Scaled up replica set nginx-7c658794b9 to 3

# Delete
[root@master ~]# kubectl delete deploy nginx -n dev
deployment.apps "nginx" deleted
```
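
The `RollingUpdateStrategy: 25% max unavailable, 25% max surge` line above turns into concrete pod counts during a rollout. A sketch of the arithmetic, assuming the documented rounding (maxSurge rounds up, maxUnavailable rounds down):

```python
# How percentage-based rolling update bounds become pod counts.
import math

def rolling_update_bounds(replicas, max_surge_pct=25, max_unavailable_pct=25):
    max_surge = math.ceil(replicas * max_surge_pct / 100)        # rounds up
    max_unavailable = math.floor(replicas * max_unavailable_pct / 100)  # rounds down
    # both values cannot be zero, or the rollout could never make progress
    if max_surge == 0 and max_unavailable == 0:
        max_unavailable = 1
    total_max = replicas + max_surge          # most pods alive during the rollout
    ready_min = replicas - max_unavailable    # fewest ready pods during the rollout
    return max_surge, max_unavailable, total_max, ready_min

print(rolling_update_bounds(3))   # (1, 0, 4, 3)
print(rolling_update_bounds(10))  # (3, 2, 13, 8)
```

So for the 3-replica Deployment above, a rollout may briefly run 4 pods but never drops below 3 ready pods.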

**Configuration file approach**

Create a deploy-nginx.yaml with the following content:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
```

Then run the corresponding create and delete commands:

- create: kubectl create -f deploy-nginx.yaml
- delete: kubectl delete -f deploy-nginx.yaml


## 5: Service

As the previous section showed, a Deployment can create a group of Pods that provide a highly available service.

Although each Pod is assigned its own Pod IP, two problems remain:

- a Pod IP changes whenever the Pod is rebuilt
- a Pod IP is a virtual address visible only inside the cluster and cannot be reached from outside

This makes the service hard to access. kubernetes therefore provides the Service resource to solve the problem.

A Service can be seen as the external access point for a group of Pods of the same kind. With a Service, applications get service discovery and load balancing out of the box.

![image-20200408194716912](Kubernetes.assets/image-20200408194716912.png)

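In miniature, a Service is a stable name and IP fronting a changing set of pod endpoints, with traffic balanced across them. The sketch below is illustrative only — real kube-proxy programs iptables/IPVS rules rather than proxying in user space, and the endpoint addresses here are made up:

```python
# A Service in miniature: stable front, round-robin over pod endpoints.
import itertools

class MiniService:
    def __init__(self, name, endpoints):
        self.name = name
        # round-robin iterator over the backing pod endpoints
        self._rr = itertools.cycle(endpoints)

    def pick_endpoint(self):
        return next(self._rr)

svc = MiniService("svc-nginx1",
                  ["10.244.1.2:80", "10.244.1.3:80", "10.244.2.4:80"])
picks = [svc.pick_endpoint() for _ in range(4)]
print(picks)  # cycles through the three endpoints, then wraps around
```

Clients always talk to the stable front (`svc-nginx1`), so pods can come and go behind it without the callers noticing.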
**Create a Service reachable inside the cluster**

```bash
# Expose the Service
[root@master ~]# kubectl expose deploy nginx --name=svc-nginx1 --type=ClusterIP --port=80 --target-port=80 -n dev
service/svc-nginx1 exposed

# View the service
[root@master ~]# kubectl get svc svc-nginx1 -n dev -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
svc-nginx1   ClusterIP   10.99.146.46   <none>        80/TCP    46s   app=nginx

# A CLUSTER-IP was produced; this is the Service's IP, and it never changes during the Service's lifetime
# The Pods behind this service can be reached through that IP
[root@master ~]# curl 10.99.146.46:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

**Create a Service also reachable from outside the cluster**

```bash
# The Service created above has type ClusterIP, so its IP is reachable only inside the cluster
# To create a Service that is reachable externally as well, set the type to NodePort
[root@master ~]# kubectl expose deploy nginx --name=svc-nginx2 --type=NodePort --port=80 --target-port=80 -n dev
service/svc-nginx2 exposed

# A NodePort-type Service now appears, with a port pair (80:32259/TCP)
[root@master ~]# kubectl get svc svc-nginx2 -n dev -o wide
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
svc-nginx2   NodePort   10.111.36.221   <none>        80:32259/TCP   14s   app=nginx

# The service can now be reached from a host outside the cluster via <node IP>:32259
# For example, open the following address in a browser on your workstation
http://192.168.159.130:32259
```

![image-20200408201600251](Kubernetes.assets/image-20200408201600251.png)


**Delete a Service**

```bash
[root@master ~]# kubectl delete svc svc-nginx1 -n dev
service "svc-nginx1" deleted
```


**Configuration file approach**

Create an svc-nginx.yaml with the following content:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
  namespace: dev
spec:
  clusterIP: 10.109.179.231  # pin the service's internal IP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  type: ClusterIP
```

Then run the corresponding create and delete commands:

- create: kubectl create -f svc-nginx.yaml
- delete: kubectl delete -f svc-nginx.yaml


**Summary**

At this point you have the basic operations on the Namespace, Pod, Deployment, and Service resources. With them you can deploy and access a simple service in a kubernetes cluster; to use kubernetes well, though, you will need to study the details and principles of these resources in depth.