I deployed a Python process into a Kubernetes pod. The process sets up a simple logger:
import sys
import logging
import logging.handlers
from pathlib import Path

import coloredlogs

from app.core.config import settings

# BASE_PATH is assumed to be defined/imported elsewhere in the original module
CONFIG_PATH = Path.joinpath(BASE_PATH, '.configrc')

LOG_FORMAT = f'%(asctime)s.[{settings.APP_NAME}] %(levelname)s %(message)s'
LOG_LEVEL = settings.dict().get('LOG_LEVEL', logging.INFO)

LOGGER = logging.getLogger(__name__)
coloredlogs.install(
    level=LOG_LEVEL,
    fmt=LOG_FORMAT,
    logger=LOGGER)

LOGGER.info("Creating main objects...")
However, when I check the logs in Kubernetes, every entry is flagged as an error:
ERROR 2023-06-01T13:54:37.742688222Z [resource.labels.containerName: myapp] 2023-06-01 15:54:37.[myapp] INFO Creating main objects...
{
  insertId: "8xj2l2hwts2f45gt"
  labels: {4}
  logName: "projects/proj-iot-poc/logs/stderr"
  receiveTimestamp: "2023-06-01T13:54:38.596712564Z"
  resource: {2}
  severity: "ERROR"
  textPayload: "2023-06-01 15:54:37.[myapp] INFO Creating main objects..."
  timestamp: "2023-06-01T13:54:37.742688222Z"
}
Needless to say, I simply called LOGGER.info("Creating main objects..."), and I expect the log entry to be an INFO, not an ERROR...
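Worth noting before the manifest: the logName ".../logs/stderr" field above shows the records arrive on stderr, which is where Python's StreamHandler, and therefore coloredlogs, writes by default; GKE's logging agent marks every unstructured stderr line as ERROR. A minimal sketch of the same setup pointed at stdout instead (the stream parameter is part of coloredlogs' public API; the inlined 'myapp' name stands in for settings.APP_NAME):

import sys
import logging

import coloredlogs

LOGGER = logging.getLogger(__name__)

coloredlogs.install(
    level=logging.INFO,
    fmt='%(asctime)s.[myapp] %(levelname)s %(message)s',  # 'myapp' inlined for the sketch
    logger=LOGGER,
    stream=sys.stdout)  # default is sys.stderr, which GKE records as ERROR

LOGGER.info("Creating main objects...")  # should land in .../logs/stdout as INFO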
The manifest is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "11"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"name":"myapp"},"name":"myapp","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"myapp"}},"strategy":{"type":"Recreate"},"template":{"metadata":{"labels":{"app":"myapp","version":"1.6.1rc253"}},"spec":{"containers":[{"env":[{"name":"COMMON_CONFIG_COMMIT_ID","value":"1b2e6669140391d680ff0ca34811ddc2553f15f7"},{"name":"OWN_CONFIG_COMMIT_ID","value":"52a142ca003ade39a0fd96faffbe5334facc3463"}],"envFrom":[{"configMapRef":{"name":"myapp-config"}}],"image":"europe-west1-docker.pkg.dev/mycluster/docker-main/myapp:1.6.1rc253","lifecycle":{"preStop":{"exec":{"command":["/bin/bash","-c","sleep 5"]}}},"name":"myapp","resources":null}],"restartPolicy":"Always"}}}}
  creationTimestamp: "2023-05-26T07:49:37Z"
  generation: 13
  labels:
    name: myapp
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:name: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
              f:version: {}
          f:spec:
            f:containers:
              k:{"name":"myapp"}:
                .: {}
                f:env:
                  .: {}
                  k:{"name":"COMMON_CONFIG_COMMIT_ID"}:
                    .: {}
                    f:name: {}
                    f:value: {}
                  k:{"name":"OWN_CONFIG_COMMIT_ID"}:
                    .: {}
                    f:name: {}
                    f:value: {}
                f:envFrom: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:lifecycle:
                  .: {}
                  f:preStop:
                    .: {}
                    f:exec:
                      .: {}
                      f:command: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2023-05-26T07:49:37Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2023-06-03T06:41:24Z"
  name: myapp
  namespace: default
  resourceVersion: "537412667"
  uid: 375a536e-e39c-4001-a234-e47e812f0bee
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myapp
        version: 1.6.1rc253
    spec:
      containers:
      - env:
        - name: COMMON_CONFIG_COMMIT_ID
          value: 1b2e6669140391d680ff0ca34811ddc2553f15f7
        - name: OWN_CONFIG_COMMIT_ID
          value: 52a142ca003ade39a0fd96faffbe5334facc3463
        envFrom:
        - configMapRef:
            name: myapp-config
        image: europe-west1-docker.pkg.dev/mycluster/docker-main/myapp:1.6.1rc253
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/bash
              - -c
              - sleep 5
        name: myapp
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2023-05-26T07:49:37Z"
    lastUpdateTime: "2023-06-01T14:33:00Z"
    message: ReplicaSet "myapp-f9bbb5f6d" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2023-06-03T06:41:24Z"
    lastUpdateTime: "2023-06-03T06:41:24Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 13
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
EDIT: Maybe this is related to: GCP and Python Logging
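That question presumably points at Google's Cloud Logging client library, which bypasses stream-based severity entirely by attaching its own handler to the standard logging module. For reference, a minimal sketch of that route (assumes the google-cloud-logging package is installed and GCP credentials are available in the pod):

import logging

import google.cloud.logging  # pip install google-cloud-logging

client = google.cloud.logging.Client()
client.setup_logging()  # attaches a Cloud Logging handler to the root logger

logging.getLogger(__name__).info("Creating main objects...")  # arrives as severity INFO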
1 Answer
After digging through the web, I came to understand that GCP does not parse plain Python logging output: anything a container writes as unstructured text to stderr is recorded as an ERROR. In the end, I wrote a custom logger:
Some code optimizations have been omitted.
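The custom logger itself did not survive the scrape, so what follows is only a sketch of one common approach, not the author's actual code: emit each record as a single JSON line with a "severity" field, which GKE's logging agent parses into the entry's severity regardless of the output stream. The names here (StructuredFormatter, the payload fields) are illustrative.

import json
import logging
import sys


class StructuredFormatter(logging.Formatter):
    """Render each record as one JSON line that the GKE logging agent can parse."""

    def format(self, record):
        entry = {
            "severity": record.levelname,   # becomes the Cloud Logging severity
            "message": record.getMessage(),
            "logger": record.name,
        }
        return json.dumps(entry)


handler = logging.StreamHandler(sys.stdout)  # stdout also sidesteps the stderr->ERROR rule
handler.setFormatter(StructuredFormatter())

LOGGER = logging.getLogger(__name__)
LOGGER.addHandler(handler)
LOGGER.setLevel(logging.INFO)

LOGGER.info("Creating main objects...")      # recorded as severity INFO, not ERROR

Whether the author's version did exactly this is unknown, but any variant that prints one JSON object per line with a severity key should stop the blanket ERROR tagging.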