I have installed MinIO with NFS storage, and installed Velero on my Kubernetes cluster. I have one master node and two worker nodes on-prem. Both pods (minio and velero) are running without errors. But when I run

velero backup create testbackup --include-namespaces myns --wait

the backup fails. When I check the Velero logs for testbackup, I see this:
An error occurred: Get "http://minio.velero.svc:9000/velero/backups/test/test-logs.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=minio%2F20230203%2Fminio%2Fs3%2Faws4_request&X-Amz-Date=20230203T110817Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=2d932b7717a857918337ffc26f08add6": dial tcp: lookup minio.velero.svc on 127.0.0.53:53: no such host
The strange thing is that the host Velero is looking up is not my host.
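The error message hints at why: 127.0.0.53 is the local systemd-resolved stub on the machine where the velero CLI runs, so the presigned URL's hostname, an in-cluster Service name, is being resolved outside the cluster, where only CoreDNS knows it. A small sketch extracting the hostname from the URL in the log above (query string omitted):

```python
from urllib.parse import urlparse

# Presigned URL from the Velero error log (query string truncated)
url = "http://minio.velero.svc:9000/velero/backups/test/test-logs.gz"

# The host the CLI must resolve: an in-cluster DNS name that only
# cluster DNS (not the node's 127.0.0.53 stub resolver) can answer.
print(urlparse(url).hostname)  # minio.velero.svc
```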
I was following this tutorial but could not find a solution there. I also tried the Velero official documentation for installing MinIO and ran into the same problem.
Here is my YAML file:
apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: itom-dr-minio-pv
  namespace: velero
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /var/nfs/general
    server: IP-of-NFS
  storageClassName: cdf-default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: itom-dr-minio-pvc
  namespace: velero
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  volumeName: itom-dr-minio-pv
  storageClassName: cdf-default
---
apiVersion: v1
kind: Secret
metadata:
  name: itom-dr-secret-minio
  namespace: velero
type: Opaque
stringData:
  username: minio
  password: minio123
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: minio
  template:
    metadata:
      labels:
        component: minio
    spec:
      securityContext:
        runAsGroup: 0
        runAsUser: 0
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: itom-dr-minio-pvc
      containers:
        - name: minio
          image: minio/minio:latest
          imagePullPolicy: IfNotPresent
          args:
            - server
            - /storage
            - --config-dir=/config
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: itom-dr-secret-minio
                  key: username
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: itom-dr-secret-minio
                  key: password
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: storage
              mountPath: "/var/nfs/general"
---
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  # ClusterIP is recommended for production environments.
  # Change to NodePort if needed per documentation,
  # but only if you run Minio in a test/trial environment, for example with Minikube.
  type: ClusterIP
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    component: minio
---
apiVersion: batch/v1
kind: Job
metadata:
  namespace: velero
  name: minio-setup
  labels:
    component: minio
spec:
  template:
    metadata:
      name: minio-setup
    spec:
      restartPolicy: OnFailure
      volumes:
        - name: config
          emptyDir: {}
      containers:
        - name: mc
          image: minio/mc:latest
          imagePullPolicy: IfNotPresent
          command:
            - /bin/sh
            - -c
            - "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
          volumeMounts:
            - name: config
              mountPath: "/config"
This is how I installed Velero:
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
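For reference, the credentials-velero file passed via --secret-file uses the AWS shared-credentials format (as shown in Velero's MinIO quickstart), with the same access and secret key as in the MinIO Secret above:

```
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
```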
What I tried: I tried different Service types, and I tried passing the minio Service IP during the Velero install. What I want: I want the backup to succeed, and to find where Velero stores the backups.
1 Answer
I ran into the same problem; the issue was on the CoreDNS side.

kubectl run nginx --image=nginx:alpine --restart Never -it --rm -- curl minio.velero.svc:9000

If the pod above returns an nslookup error, add .cluster.local:

kubectl run nginx --image=nginx:alpine --restart Never -it --rm -- nslookup minio.velero.svc.cluster.local

If this pod can resolve the name, then apply the following solution: add this line to the CoreDNS ConfigMap
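The answer does not include the actual line. For orientation only, a default CoreDNS Corefile (stored in the coredns ConfigMap in the kube-system namespace, editable with kubectl -n kube-system edit configmap coredns) looks roughly like the sketch below; the kubernetes block handles cluster.local names and the forward line hands everything else to the node's resolvers, and any added directive would go inside the .:53 server block. Which line to add depends on your cluster, so this is a reference sketch, not the answer's missing content:

```
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```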