Kubernetes RBAC rules not working in a kubeadm cluster

jk9hmnmh · posted 2023-10-17 in Kubernetes

In one of our customers' Kubernetes clusters (v1.16.8, set up with kubeadm), RBAC does not work at all. We created a ServiceAccount, a read-only ClusterRole, and a ClusterRoleBinding with the YAML below, but when we log in through the dashboard or kubectl, the user can still do almost anything in the cluster. What could be causing this?

kind: ServiceAccount
apiVersion: v1
metadata:
  name: read-only-user
  namespace: permission-manager
secrets:
  - name: read-only-user-token-7cdx2
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-only-user___template-namespaced-resources___read-only___all_namespaces
  labels:
    generated_for_user: ''
subjects:
  - kind: ServiceAccount
    name: read-only-user
    namespace: permission-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
    name: template-namespaced-resources___read-only
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: template-namespaced-resources___read-only
rules:
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - '*'
    resources:
      - configmaps
      - endpoints
      - persistentvolumeclaims
      - pods
      - pods/log
      - pods/portforward
      - podtemplates
      - replicationcontrollers
      - resourcequotas
      - secrets
      - services
      - events
      - daemonsets
      - deployments
      - replicasets
      - ingresses
      - networkpolicies
      - poddisruptionbudgets
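
For reference, a quick way to check what the API server actually grants this ServiceAccount is impersonation with kubectl auth can-i (the subject name below matches the YAML above; if the binding works, only the read verbs should be allowed):

# List everything the ServiceAccount is allowed to do
kubectl auth can-i --list --as=system:serviceaccount:permission-manager:read-only-user

# Spot-check a write verb; this should print "no" if this binding is the only grant
kubectl auth can-i create pods --as=system:serviceaccount:permission-manager:read-only-user -n default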

Here is the content of the cluster's kube-apiserver.yaml:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.1.42
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.16.8
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.1.42
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
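
The --authorization-mode=Node,RBAC flag shows RBAC is enabled, so the usual suspect is the identity you actually authenticate with, not the rules themselves. A minimal sketch (assuming the default kubeconfig) to inspect that identity:

# Show the credentials the current kubeconfig context uses
kubectl config view --minify

# If it is a client certificate, inspect its subject. The admin.conf that
# kubeadm generates carries O=system:masters, a group bound to cluster-admin,
# which ignores any read-only bindings you create.
kubectl config view --minify --raw \
  -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -subject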

ufj5ltwl · #1

What you have defined only controls that ServiceAccount. RBAC is purely additive, so it does not restrict whatever other identity you actually log in as (for example the kubeadm admin kubeconfig, or the token the dashboard is running with). Here is a test spec to confirm the rules themselves work; create a YAML file:

apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: test
---
kind: ClusterRoleBinding  # <-- REMINDER: Cluster wide and not namespace specific. Use RoleBinding for namespace specific.
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-role-binding
subjects:
- kind: ServiceAccount
  name: test-sa
  namespace: test
- kind: User
  name: someone
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-cluster-role
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-cluster-role
rules:
- verbs:
  - get
  - list
  - watch
  apiGroups:
  - '*'
  resources:
  - configmaps
  - endpoints
  - persistentvolumeclaims
  - pods
  - pods/log
  - pods/portforward
  - podtemplates
  - replicationcontrollers
  - resourcequotas
  - secrets
  - services
  - events
  - daemonsets
  - deployments
  - replicasets
  - ingresses
  - networkpolicies
  - poddisruptionbudgets

Apply the spec above: kubectl apply -f <filename>.yaml
It works as expected:
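
For example, impersonating the two subjects shows reads allowed and writes denied (a minimal check; expected answers in the comments):

kubectl auth can-i list pods --as=system:serviceaccount:test:test-sa -n test     # yes
kubectl auth can-i create pods --as=system:serviceaccount:test:test-sa -n test   # no
kubectl auth can-i get configmaps --as=someone -n test                           # yes
kubectl auth can-i delete deployments --as=someone -n test                       # no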

Delete the test resources: kubectl delete -f <filename>.yaml
