Docker: communication between ECS images in a single task

mmvthczy · asked 2023-03-22 in Docker

I have an ECS task that runs two images in the same container. The two images are trying to communicate with each other. Locally, I can create a network, run the two images on it, and pass the IP as an ENV variable through the Dockerfile; that works perfectly, without any issues.
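For reference, the working local setup is roughly equivalent to the following sketch (illustrative only: it puts both containers on a user-defined bridge network and passes the addresses as container names via -e flags, instead of hard-coded IPs baked into the Dockerfiles):

# user-defined bridge network so the two containers can reach each other
docker network create cubestore-net

# router: advertises itself as cubestore-router:9999 and expects one worker
docker run -d --name cubestore-router --network cubestore-net \
  -e CUBESTORE_SERVER_NAME=cubestore-router:9999 \
  -e CUBESTORE_META_PORT=9999 \
  -e CUBESTORE_WORKERS=cubestore-worker1:10001 \
  cubejs/cubestore:v0.32.3

# worker: advertises itself as cubestore-worker1:10001 and points at the router
docker run -d --name cubestore-worker1 --network cubestore-net \
  -e CUBESTORE_SERVER_NAME=cubestore-worker1:10001 \
  -e CUBESTORE_WORKER_PORT=10001 \
  -e CUBESTORE_WORKERS=cubestore-worker1:10001 \
  -e CUBESTORE_META_ADDR=cubestore-router:9999 \
  cubejs/cubestore:v0.32.3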
However, when I push these images to an ECS cluster running in awsvpc mode, I came across this answer, as well as this documentation, which say that awsvpc runs the containers on the localhost interface. I tried replacing the IP addresses with 127.0.0.1, but unfortunately it does not work, for reasons unknown to me.
Here are the two Dockerfiles I am hosting on ECS.
Dockerfile 1 (the router):

FROM cubejs/cubestore:v0.32.3
# list of worker node addresses the cluster expects
ENV CUBESTORE_WORKERS=127.0.0.1:10001
# port the metastore (router) listens on
ENV CUBESTORE_META_PORT=9999
# address under which this node advertises itself
ENV CUBESTORE_SERVER_NAME=127.0.0.1:9999

Dockerfile 2 (the worker):

FROM cubejs/cubestore:v0.32.3
# list of worker node addresses the cluster expects
ENV CUBESTORE_WORKERS=127.0.0.1:10001
# address under which this worker advertises itself
ENV CUBESTORE_SERVER_NAME=127.0.0.1:10001
# port this worker listens on
ENV CUBESTORE_WORKER_PORT=10001
# address of the metastore (router) this worker connects to
ENV CUBESTORE_META_ADDR=127.0.0.1:9999

I can see that this error is specific to the library I am using, but it should still show that there is a problem establishing communication with the given server name:

ERROR [cubestore::cluster] <pid:1> Failed to get warmup partitions: Internal: Can't connect to 127.0.0.1:9999: Connection refused (os error 111)

Below is my task definition file. Note that all sensitive information has obviously been removed, so that is not an issue with the file:

{
    "taskDefinitionArn": "",
    "containerDefinitions": [
        {
            "name": "cubestore-router",
            "image": "",
            "cpu": 1,
            "portMappings": [
                {
                    "containerPort": 9999,
                    "hostPort": 9999,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [
                {
                    "name": "CUBESTORE_WORKERS",
                    "value": "127.0.0.1:10001"
                },
                {
                    "name": "CUBESTORE_SERVER_NAME",
                    "value": "127.0.0.1:9999"
                },
                {
                    "name": "CUBESTORE_META_PORT",
                    "value": "9999"
                }
            ],
            "mountPoints": [],
            "volumesFrom": [],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "/ecs/",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        },
        {
            "name": "cubestore-worker1",
            "image": "",
            "cpu": 1,
            "portMappings": [
                {
                    "containerPort": 10001,
                    "hostPort": 10001,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [
                {
                    "name": "CUBESTORE_WORKERS",
                    "value": "127.0.0.1:10001"
                },
                {
                    "name": "CUBESTORE_META_ADDR",
                    "value": "127.0.0.1:9999"
                },
                {
                    "name": "CUBESTORE_SERVER_NAME",
                    "value": "127.0.0.1:10001"
                },
                {
                    "name": "CUBESTORE_LOG_LEVEL",
                    "value": "trace"
                },
                {
                    "name": "CUBESTORE_WORKER_PORT",
                    "value": "10001"
                }
            ],
            "mountPoints": [],
            "volumesFrom": [],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "/ecs/",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],
    "family": "",
    "taskRoleArn": "",
    "executionRoleArn": "",
    "networkMode": "awsvpc",
    "revision": 1,
    "volumes": [],
    "status": "ACTIVE",
    "requiresAttributes": [
        {
            "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.24"
        },
        {
            "name": "ecs.capability.execution-role-awslogs"
        },
        {
            "name": "com.amazonaws.ecs.capability.ecr-auth"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
        },
        {
            "name": "com.amazonaws.ecs.capability.task-iam-role"
        },
        {
            "name": "ecs.capability.container-health-check"
        },
        {
            "name": "ecs.capability.execution-role-ecr-pull"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
        },
        {
            "name": "ecs.capability.task-eni"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
        }
    ],
    "placementConstraints": [],
    "compatibilities": [
        "EC2",
        "FARGATE"
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "4096",
    "memory": "8192",
    "runtimePlatform": {
        "cpuArchitecture": "X86_64",
        "operatingSystemFamily": "LINUX"
    },
    "registeredAt": "2023-03-16T19:43:23.318Z",
    "registeredBy": "",
    "tags": []
}
laawzig2 · answer #1

"I have an ECS task that runs two images in the same container."
That statement does not make sense. Each container is created from a single image. What you mean is that you have two containers running in the same ECS task.
Looking at your task definition, the cubestore-router container listens on port 9999 and the cubestore-worker1 container listens on port 10001. The cubestore-router container should be able to reach the cubestore-worker1 container at 127.0.0.1:10001, and the cubestore-worker1 container should be able to reach the cubestore-router container at 127.0.0.1:9999.
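You can verify that directly from inside the running task with ECS Exec, assuming the task was started with execute-command enabled and the image has a shell and a TCP probe such as nc available (the cluster and task IDs below are placeholders):

aws ecs execute-command \
  --cluster <cluster-name> \
  --task <task-id> \
  --container cubestore-worker1 \
  --interactive \
  --command "/bin/sh"

# then, inside the container, probe the router's meta port over loopback
nc -vz 127.0.0.1 9999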
If that is not working, then I would suspect that either the software you are running in the containers does not allow requests from 127.0.0.1 for some reason, or one container is still starting up when the other container tries to reach it.
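If it is a start-up race, you can make ECS hold the worker back until the router reports healthy, using a container healthCheck plus dependsOn. A rough sketch of the fragments to add to your task definition (the probe assumes nc exists in the cubestore image; any command that checks port 9999 will do):

In the cubestore-router container definition:

"healthCheck": {
    "command": ["CMD-SHELL", "nc -z 127.0.0.1 9999 || exit 1"],
    "interval": 10,
    "timeout": 5,
    "retries": 3,
    "startPeriod": 15
}

In the cubestore-worker1 container definition:

"dependsOn": [
    {
        "containerName": "cubestore-router",
        "condition": "HEALTHY"
    }
]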
From an ECS/Fargate point of view, the containers should absolutely be able to talk to each other on 127.0.0.1 at their respective ports.
