I am trying to deploy a cluster with self-managed node groups. No matter which configuration options I use, I always get the following error:

Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused

  with module.eks-ssp.kubernetes_config_map.aws_auth[0],
  on .terraform/modules/eks-ssp/aws-auth-configmap.tf line 19, in resource "kubernetes_config_map" "aws_auth":
  19: resource "kubernetes_config_map" "aws_auth" {
The .tf file looks like this:
module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"
# EKS CLUSTER
tenant = "DevOpsLabs2"
environment = "dev-test"
zone = ""
terraform_version = "Terraform v1.1.4"
# EKS Cluster VPC and Subnet mandatory config
vpc_id = "xxx"
private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]
# EKS CONTROL PLANE VARIABLES
create_eks = true
kubernetes_version = "1.19"
# EKS SELF MANAGED NODE GROUPS
self_managed_node_groups = {
self_mg = {
node_group_name = "DevOpsLabs2"
subnet_ids = ["xxx","xxx", "xxx", "xxx"]
create_launch_template = true
launch_template_os = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
custom_ami_id = "xxx"
public_ip = true # Enable only for public subnets
pre_userdata = <<-EOT
yum install -y amazon-ssm-agent \
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
EOT
disk_size = 20
instance_type = "t2.small"
desired_size = 2
max_size = 10
min_size = 2
capacity_type = "" # Optional Use this only for SPOT capacity as capacity_type = "spot"
k8s_labels = {
Environment = "dev-test"
Zone = ""
WorkerType = "SELF_MANAGED_ON_DEMAND"
}
additional_tags = {
ExtraTag = "t2x-on-demand"
Name = "t2x-on-demand"
subnet_type = "public"
}
create_worker_security_group = false # Creates a dedicated sec group for this Node Group
},
}
}
module "eks-ssp-kubernetes-addons" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"
eks_cluster_id = module.eks-ssp.eks_cluster_id
# EKS Addons
enable_amazon_eks_vpc_cni = true
enable_amazon_eks_coredns = true
enable_amazon_eks_kube_proxy = true
enable_amazon_eks_aws_ebs_csi_driver = true
#K8s Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server = true
enable_cluster_autoscaler = true
enable_aws_for_fluentbit = true
enable_argocd = true
enable_ingress_nginx = true
depends_on = [module.eks-ssp.self_managed_node_groups]
}
Providers:
terraform {
  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.6.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}
3 Answers

bbuxkriu1#
Based on the example provided in the GitHub repo [1], my guess is that the provider configuration blocks are missing, which is why this does not work. Looking at the code provided in the question, it seems a kubernetes provider block needs to be added; if you also need helm, I think the corresponding block from [2] needs to be added as well (see the sketch after the reference list). The provider argument references for kubernetes and helm are at [3] and [4] respectively.

[1] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-self-managed-node-groups/main.tf#L23-L47
[2] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L49-L55
[3] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference
[4] https://registry.terraform.io/providers/hashicorp/helm/latest/docs#argument-reference
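
A minimal sketch of the provider blocks this answer refers to, modeled on the linked examples [1] and [2]. The data source names ("cluster") are illustrative, and module.eks-ssp.eks_cluster_id comes from the question's own configuration:

# Fetch the cluster endpoint and an auth token for it
data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

# Point the kubernetes provider at the EKS API server; without this
# it defaults to localhost, producing the connection-refused error
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

# Same connection details for helm (only needed if helm-based
# add-ons such as argocd or ingress-nginx are enabled)
provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}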
jtw3ybtb2#
Marko E's answer above seems to solve the problem. After applying that code in a separate providers.tf file, Terraform now gets past this error. I will post later whether the deployment goes through completely. For reference, the run reached 42 of 65 resources created before hitting this error. This was using the best-practices/example configuration recommended by AWS Consulting at the top of the README: https://github.com/aws-samples/aws-eks-accelerator-for-terraform
62lalag43#
In my case, trying to deploy to a Kubernetes cluster (GKE) with Terraform, I replaced the kubeconfig path with the absolute path to the kubeconfig file, roughly as sketched below.
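An illustrative before/after, assuming the kubernetes provider was configured through config_path (the paths are placeholders):

# Before: shorthand/relative path, which Terraform may not resolve
provider "kubernetes" {
  config_path = "~/.kube/config"
}

# After: absolute path to the kubeconfig file
provider "kubernetes" {
  config_path = "/home/myuser/.kube/config"
}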