I'm using Next.js, Kubernetes, ingress-nginx, and Skaffold. My Next.js project has a Dockerfile, and I have a repo on Docker Hub. Whenever I try to run skaffold dev, I keep seeing this error:
build [st3/tickethub-client] failed: could not push image "st3/tickethub-client:36d456b": tag does not exist: st3/tickethub-client:36d456b
I tried building the image manually with both the latest tag and the 36d456b tag (docker build -t st3/tickethub-client:latest and docker build -t st3/tickethub-client:36d456b), then successfully pushed both to Docker Hub. skaffold dev still failed. I then pulled the image after pushing it, and skaffold dev failed with the same error. Next I ran docker system prune -a to reset Docker to its out-of-the-box settings, but it still failed after rebuilding and pushing to Docker Hub. How do I fix this? And why does Skaffold create a throwaway tag when I specify :latest in my build?
Docker version: Docker version 23.0.0, build e92dd87
Skaffold version: v2.0.3
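(As to the throwaway tag: Skaffold's default tag policy is gitCommit, which derives a tag like 36d456b from the current Git commit, regardless of how the image itself was tagged. The question doesn't include the actual skaffold.yaml, so the file below is a hypothetical sketch of a Skaffold v2 config that pins the tag policy and, for a local cluster such as Docker Desktop's, skips pushing entirely; the artifact context and manifest paths are assumptions.)

```yaml
# Hypothetical skaffold.yaml sketch -- the real one is not shown in the question.
apiVersion: skaffold/v3
kind: Config
build:
  local:
    # With push: false, Skaffold loads images straight into the local
    # Docker daemon instead of pushing to Docker Hub (useful for
    # Docker Desktop's built-in Kubernetes).
    push: false
  # The default tagger is gitCommit, which produces tags like 36d456b.
  # sha256 tags by image digest instead.
  tagPolicy:
    sha256: {}
  artifacts:
    - image: st3/tickethub-client
      context: client   # assumed project layout
manifests:
  rawYaml:
    - ./infra/k8s/*.yaml   # assumed manifest location
```

With push: false, skaffold dev should never contact Docker Hub at all during local development, which would sidestep the "tag does not exist" push error.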
client-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickethub-client-depl
spec:
  replicas: 1
  # Add selector so the Deployment can find which Pods to manage
  selector:
    matchLabels:
      app: tickethub-client
  # Pod creation details
  template:
    metadata:
      labels:
        app: tickethub-client
    spec:
      containers:
        - name: tickethub-client
          image: st3/tickethub-client:latest
---
# K8s's complementary tickethub-client Service
apiVersion: v1
kind: Service
metadata:
  name: tickethub-client-srv
spec:
  selector:
    # Find matching Pods by label
    app: tickethub-client
  ports:
    - name: tickethub-client
      protocol: TCP
      port: 3000
      targetPort: 3000
ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: tickethub.io
      http:
        paths:
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: tickethub-client-srv
                port:
                  number: 3000
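(A side note unrelated to the push error: the kubernetes.io/ingress.class annotation has been deprecated since Kubernetes 1.18 in favor of the spec.ingressClassName field. A sketch of the same manifest using the newer field might look like this, assuming the installed IngressClass is named nginx:)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  # Replaces the deprecated kubernetes.io/ingress.class annotation
  ingressClassName: nginx
  rules:
    - host: tickethub.io
      http:
        paths:
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: tickethub-client-srv
                port:
                  number: 3000
```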
Dockerfile
# Grab base image
FROM node:alpine
# Set up working directory
WORKDIR /app
# Copy into workdir
COPY package.json .
# Install dependencies
RUN npm install
# Copy everything else from the source dir
COPY . .
# Default cmd to run when container is created from this image
CMD ["npm", "run", "dev"]
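(Since COPY . . copies the entire build context into the image, a .dockerignore keeps node_modules, already recreated by npm install inside the container, and local build output out of it. A minimal sketch, assuming a standard Next.js layout:)

```
# .dockerignore -- minimal sketch for a typical Next.js project
node_modules
.next
.git
```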
Output of docker images:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
st3/tickethub-client 36d456b d224d808efc7 12 minutes ago 533MB
st3/tickethub-client 74a97c6 d224d808efc7 12 minutes ago 533MB
st3/tickethub-client latest d224d808efc7 12 minutes ago 533MB
st3/auth latest 67b5330b204b 15 minutes ago 371MB
registry.k8s.io/ingress-nginx/controller <none> f2e1146a6d96 2 months ago 269MB
k8s.gcr.io/kube-apiserver v1.25.2 97801f839490 4 months ago 128MB
k8s.gcr.io/kube-scheduler v1.25.2 ca0ea1ee3cfd 4 months ago 50.6MB
k8s.gcr.io/kube-controller-manager v1.25.2 dbfceb93c69b 4 months ago 117MB
k8s.gcr.io/kube-proxy v1.25.2 1c7d8c51823b 4 months ago 61.7MB
registry.k8s.io/pause 3.8 4873874c08ef 7 months ago 711kB
k8s.gcr.io/etcd 3.5.4-0 a8a176a5d5d6 8 months ago 300MB
k8s.gcr.io/coredns v1.9.3 5185b96f0bec 8 months ago 48.8MB
docker/desktop-vpnkit-controller v2.0 8c2c38aa676e 21 months ago 21MB
docker/desktop-storage-provisioner v2.0 99f89471f470 21 months ago 41.9MB
At this point I'm out of ideas.
1 Answer
I finally managed to get this working. Running

docker system prune -a --volumes

to delete everything Docker stores locally, then rebuilding the image and pushing it to Docker Hub, seems to have done the trick. I'm not 100% sure this is a catch-all solution, since it deletes everything local (which I didn't mind, as this is dev work anyway).