kubernetes — Why don't Velero and MinIO backup and restore work with my PGO cluster?

ev7lccsx posted on 2023-02-11 in Kubernetes

I have set up MinIO and Velero backups for my k8s cluster. Everything works well: I can take backups and see them in MinIO. I also have a PGO (Postgres Operator) cluster, hippo, running with a LoadBalancer service. When I restore a backup through Velero, everything looks fine at first: it recreates the namespace and all the deployments and pods, and they reach the Running state. But I cannot connect to my database through pgAdmin, and when I delete a pod, the replacement never starts and instead reports an unbound-PVC error. Here is the output:

Type     Reason            Age   From               Message
----     ------            ----  ----               -------
Warning  FailedScheduling  16m   default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
Warning  FailedScheduling  16m   default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get PV
error: the server doesn't have a resource type "PV"
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                               STORAGECLASS       REASON   AGE
pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-4bhf-pgdata    openebs-hostpath            16m
pvc-2dd12937-a70e-40b4-b1ad-be1c9f7b39ec   5G         RWO            Delete           Bound    default/local-hostpath-pvc                          openebs-hostpath            6d9h
pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-xvhq-pgdata    openebs-hostpath            16m
pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-p4ct-pgdata       openebs-hostpath            7m32s
pvc-968d9794-e4ba-479c-9138-8fbd85422920   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-s6fs-pgdata       openebs-hostpath            7m33s
pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-c4rt-pgdata    openebs-hostpath            16m
pvc-d4629dba-b172-47ea-ab01-12a9039be571   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-29gh-pgdata       openebs-hostpath            7m32s
pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38   5Gi        RWO            Delete           Bound    postgres-operator/hippo-repo2                       openebs-hostpath            7m30s
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pvc -n postgres-operator
NAME                             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                          AGE
hippo-instance2-29gh-pgdata      Bound     pvc-d4629dba-b172-47ea-ab01-12a9039be571   5Gi        RWO            openebs-hostpath                      7m51s
hippo-instance2-p4ct-pgdata      Bound     pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038   5Gi        RWO            openebs-hostpath                      7m51s
hippo-instance2-s6fs-pgdata      Bound     pvc-968d9794-e4ba-479c-9138-8fbd85422920   5Gi        RWO            openebs-hostpath                      7m51s
hippo-repo2                      Bound     pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38   5Gi        RWO            openebs-hostpath                      7m51s
hippo-s3-instance2-4bhf-pgdata   Bound     pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101   5Gi        RWO            openebs-hostpath                      16m
hippo-s3-instance2-c4rt-pgdata   Bound     pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad   5Gi        RWO            openebs-hostpath                      16m
hippo-s3-instance2-xvhq-pgdata   Bound     pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b   5Gi        RWO            openebs-hostpath                      16m
hippo-s3-repo1                   Pending                                                                        pgo                                   16m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pods -n postgres-operator
NAME                           READY   STATUS      RESTARTS   AGE
hippo-backup-txk9-rrk4m        0/1     Completed   0          7m43s
hippo-instance2-29gh-0         4/4     Running     0          8m5s
hippo-instance2-p4ct-0         4/4     Running     0          8m5s
hippo-instance2-s6fs-0         4/4     Running     0          8m5s
hippo-repo-host-0              2/2     Running     0          8m5s
hippo-s3-instance2-c4rt-0      3/4     Running     0          16m
hippo-s3-repo-host-0           0/2     Pending     0          16m
pgo-7c867985c-kph6l            1/1     Running     0          16m
pgo-upgrade-69b5dfdc45-6qrs8   1/1     Running     0          16m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl delete pods hippo-s3-repo-host-0 -n postgres-operator
pod "hippo-s3-repo-host-0" deleted
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pods -n postgres-operator
NAME                           READY   STATUS      RESTARTS   AGE
hippo-backup-txk9-rrk4m        0/1     Completed   0          7m57s
hippo-instance2-29gh-0         4/4     Running     0          8m19s
hippo-instance2-p4ct-0         4/4     Running     0          8m19s
hippo-instance2-s6fs-0         4/4     Running     0          8m19s
hippo-repo-host-0              2/2     Running     0          8m19s
hippo-s3-instance2-c4rt-0      3/4     Running     0          17m
hippo-s3-repo-host-0           0/2     Pending     0          2s
pgo-7c867985c-kph6l            1/1     Running     0          17m
pgo-upgrade-69b5dfdc45-6qrs8   1/1     Running     0          17m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pvc -n postgres-operator
NAME                             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                          AGE
hippo-instance2-29gh-pgdata      Bound     pvc-d4629dba-b172-47ea-ab01-12a9039be571   5Gi        RWO            openebs-hostpath                      8m45s
hippo-instance2-p4ct-pgdata      Bound     pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038   5Gi        RWO            openebs-hostpath                      8m45s
hippo-instance2-s6fs-pgdata      Bound     pvc-968d9794-e4ba-479c-9138-8fbd85422920   5Gi        RWO            openebs-hostpath                      8m45s
hippo-repo2                      Bound     pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38   5Gi        RWO            openebs-hostpath                      8m45s
hippo-s3-instance2-4bhf-pgdata   Bound     pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101   5Gi        RWO            openebs-hostpath                      17m
hippo-s3-instance2-c4rt-pgdata   Bound     pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad   5Gi        RWO            openebs-hostpath                      17m
hippo-s3-instance2-xvhq-pgdata   Bound     pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b   5Gi        RWO            openebs-hostpath                      17m
hippo-s3-repo1                   Pending                                                                        pgo                                   17m
What I expected:

I expected Velero to restore the full backup, so that I could access my database exactly as I could before the restore. It seems Velero is not capturing a complete backup.
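
For reference, this is roughly how I take the backup and restore it (the backup and restore names here are just examples):

# Back up the operator namespace, then restore it
velero backup create pgo-backup --include-namespaces postgres-operator
velero restore create pgo-restore --from-backup pgo-backup

# The restored repo-host pod stays Pending; describing its PVC shows
# why it never binds
kubectl -n postgres-operator describe pvc hippo-s3-repo1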

sd2nnvve 1#

Velero is a backup and restore solution for Kubernetes clusters and their associated persistent volumes. Velero does not currently support full, application-consistent backup and restore of databases; see its documented limitations. What it does support is snapshotting and restoring persistent volumes. That means that although you may not be able to restore a complete database directly, you can restore the persistent volumes associated with it and then use the appropriate tooling to recover the data from those snapshots. In addition, Velero's plugin architecture lets you extend it with custom plugins that add backup and restore capabilities.
For more on backup and restore, see the DigitalOcean blog post by Hanif Jetha and Jamon Camisso.
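Concretely, since the cluster in question uses openebs-hostpath volumes (which have no CSI snapshot support), one way to get the volume data into a Velero backup is its file-system backup. A minimal sketch, assuming Velero 1.10+ with the node agent installed (older releases call this restic and use --default-volumes-to-restic); the backup and restore names are illustrative:

# Back up the namespace, sending every pod volume through
# file-system backup instead of volume snapshots
velero backup create hippo-fsb \
    --include-namespaces postgres-operator \
    --default-volumes-to-fs-backup

# Restore it into the target cluster
velero restore create hippo-fsb-restore --from-backup hippo-fsb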

vxqlmq5t 2#

Based on the error you shared, your setup is missing a PV or its PVC.
If you use the AWS or GCP plugins, Velero can back up PVCs and PVs as part of its volume snapshots, and when you restore, the plugin can recreate the PVCs and PVs for you.
I have migrated an Elasticsearch database with Velero along with its PVCs and it worked well in my case. But are you using the same cloud provider and storage class in both clusters? Why is the hippo-s3-repo1 PVC Pending? Note that it requests the pgo storage class while everything else uses openebs-hostpath; that mismatch could be your cause.
Here is my article, though I used a cloud plugin and a bucket as storage: https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8
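If the two clusters do use different storage classes, Velero can remap them at restore time via its change-storage-class restore item action, configured through a labeled ConfigMap. A minimal sketch, assuming the source class was named pgo and the target cluster offers openebs-hostpath:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  # Velero discovers this ConfigMap by its labels; it must live in
  # the namespace where Velero itself is installed
  name: change-storage-class-config
  namespace: velero
  labels:
    velero.io/plugin-config: ""
    velero.io/change-storage-class: RestoreItemAction
data:
  # old storage class name -> new storage class name
  pgo: openebs-hostpath
EOF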
