Kubernetes Service selector regex

o4tp2gmn  asked on 2023-10-17  in Kubernetes

I am trying to deploy a StatefulSet (with HPA enabled). I don't want to send any traffic to pod 0 (due to certain constraints). Is there any way to use a regular expression in the Service selector, something like:

selector:
  statefulset.kubernetes.io/pod-name: 'pod-name-<regex_from_1_to_n>'

Or is there a better way to do this?
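
(For context: a Service's spec.selector only supports exact key/value matches, so a regex like the one above is not possible. A minimal sketch of one common workaround, assuming a hypothetical label artemis-traffic that is applied to every pod except pod 0, would be to select on that label instead of the pod name:)

# Sketch only: the artemis-traffic label is hypothetical. Pods 1..n would be
# labelled out of band (e.g. kubectl label pod artemis-deployment-1 artemis-traffic=yes),
# while pod 0 never gets the label, so the Service simply never selects it.
selector:
  app: artemis
  artemis-traffic: "yes"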
Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: broker-ingress
  labels:
    name: broker-ingress
spec:
  tls:
  - hosts:
    - activemq.artemis.com
  rules:
  - host: activemq.artemis.com
    http:
      paths:
      - pathType: ImplementationSpecific
        path: "/"
        backend:
          service:
            name: artemis-service
            port: 
              name: http-console

Service:

apiVersion: v1
kind: Service
metadata:
  name: artemis-service
spec:
  clusterIP: None
  ports:
  - port: 8161
    name: http-console
    protocol: TCP
    targetPort: 8161
  - port: 61616
    name: netty-connector
    protocol: TCP
    targetPort: 61616
  selector:
    app: artemis

StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: artemis-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: artemis
  serviceName: artemis-service
  template:
    metadata:
      labels:
        app: artemis
    spec:
      containers:
      - env:
          - name: ARTEMIS_HOST
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: SVC_NAME
            value: "artemis-service"
          - name: NS_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: HA_POLICY
            value: "LIVE"

        name: artemis
        image: activemq/artemis:latest
        imagePullPolicy: Never
        resources:
          requests:
            memory: "1G"
            cpu: "250m"
          limits:
            memory: "2G"
            cpu: "500m"
        ports:
          - containerPort: 8161
            name: console
            protocol: TCP
          - containerPort: 61616
            name: netty-connector
            protocol: TCP

Now, with 5 replicas, the Ingress routes traffic round-robin across pods 0-1-2-3-4. But I don't want the Ingress to forward any traffic to pod 0 of the StatefulSet (pod 0 has some parameters internally that the other pods don't have, so it should not receive any load/traffic).
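
(One way to express that, sketched with the hypothetical artemis-traffic label from above and not what the answer below does, would be to keep the headless artemis-service for StatefulSet DNS and point the Ingress at a separate Service that selects only the labelled pods:)

apiVersion: v1
kind: Service
metadata:
  name: artemis-broker-service   # hypothetical name; the Ingress backend would reference this instead
spec:
  ports:
  - port: 8161
    name: http-console
    protocol: TCP
    targetPort: 8161
  selector:
    app: artemis
    artemis-traffic: "yes"       # pod 0 never carries this label, so it receives no traffic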


t8e9dugd 1#

To achieve this, I implemented leader election in Python:

import time
import socket
import os
from kubernetes import client, config
import logging
from logging.handlers import RotatingFileHandler

# Pod identity is injected through the Downward API in the StatefulSet spec above.
POD_NAME = os.getenv("POD_NAME")
SVC_NAME = os.getenv("SVC_NAME")
NS_NAME = os.getenv("NS_NAME")
LABEL_SELECTOR = "node"  # the selector key on the Service that this script rewrites

# Runs in-cluster and authenticates with the pod's ServiceAccount.
config.load_incluster_config()
v1 = client.CoreV1Api()
pod = v1.read_namespaced_pod(name=POD_NAME, namespace=NS_NAME)
pod_uuid = pod.metadata.uid
container_names = [container.name for container in pod.spec.containers]

log_dir = '/var/log/pods/'+pod_uuid+'/'+container_names[0]
if not os.path.exists(log_dir):
    os.makedirs(log_dir)

log_file = os.path.join(log_dir, "leader_election.log")
max_log_size = 1024 * 1024  # 1 MB
backup_count = 3  # Number of backup log files to keep

# Create a rotating file handler
log_handler = RotatingFileHandler(log_file, maxBytes=max_log_size, backupCount=backup_count)
log_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
log_handler.setFormatter(log_formatter)

# Configure the root logger
root_logger = logging.getLogger()
root_logger.addHandler(log_handler)
root_logger.setLevel(logging.INFO)

def check_socket(host, port):
    """Return True if a TCP connection to (host, port) succeeds within 1 second."""
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(1)
        # connect_ex returns an error code instead of raising for connection failures
        result = sock.connect_ex((host, port))
        sock.close()
        return result == 0
    except OSError:
        return False

def perform_leader_election():

    while True:

        # A pod is considered the leader when its broker is listening on the netty connector port.
        if check_socket("localhost", 61616):
            service = v1.read_namespaced_service(SVC_NAME, NS_NAME)

            # Pod names are expected to contain either "master" or "slave".
            if "master" in POD_NAME:
                label_selector = "master"
            elif "slave" in POD_NAME:
                label_selector = "slave"
            else:
                logging.warning("Pod '{}' matches neither 'master' nor 'slave'; skipping.".format(POD_NAME))
                time.sleep(5)
                continue

            # Check the current label selector from the service
            current_label_selector = service.spec.selector.get(LABEL_SELECTOR, "")

            # Only patch the Service when the selector actually needs to change.
            if label_selector != current_label_selector:
                service.spec.selector[LABEL_SELECTOR] = label_selector
                v1.patch_namespaced_service(SVC_NAME, NS_NAME, service)
                logging.critical("Pod '{}' is the leader. Updated service selector to '{}'.".format(POD_NAME, label_selector))
        else:
            logging.info("Pod '{}' is not the leader.".format(POD_NAME))

        time.sleep(5)

if __name__ == "__main__":
    perform_leader_election()
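
Note that the script talks to the API server from inside the cluster, so the pod's ServiceAccount needs permission to read pods and patch the Service. A minimal RBAC sketch (names are hypothetical; the ServiceAccount would be set as serviceAccountName in the StatefulSet pod spec):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: artemis-leader-election        # hypothetical name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: artemis-leader-election
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: artemis-leader-election
subjects:
- kind: ServiceAccount
  name: artemis-leader-election        # hypothetical ServiceAccount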
