In Flink 1.10.0 we tried to use taskmanager.memory.process.size to cap the resources a taskmanager uses, so that the pods would not be killed by Kubernetes. However, with the settings below we still see many taskmanagers getting OOMKilled.
Any suggestions on how to configure Kubernetes and Flink properly so that the taskmanagers are not OOMKilled?
The Kubernetes setup is the same as described in https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/kubernetes.html.
Here is the resource configuration of the taskmanager deployment in Kubernetes:
resources:
  requests:
    cpu: 1000m
    memory: 4096Mi
  limits:
    cpu: 1000m
    memory: 4096Mi
Here are all the memory-related settings in flink-conf.yaml for 1.10.0:
jobmanager.heap.size: 820m
taskmanager.memory.jvm-metaspace.size: 128m
taskmanager.memory.process.size: 4096m
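If I understand the 1.10 memory model correctly, with the default fractions for JVM overhead, network, and managed memory (so this arithmetic may be off), the 4096m process size should break down roughly as follows:

taskmanager.memory.process.size              4096m
  jvm-overhead (fraction 0.1)                ~410m
  jvm-metaspace                               128m
  total Flink memory                        ~3558m
    managed memory (fraction 0.4)           ~1423m   (used by RocksDB in 1.10)
    network memory (fraction 0.1)            ~356m
    framework heap / framework off-heap   128m/128m
    task heap (remainder)                   ~1523m

So, as far as we can tell, the 4096Mi container limit should already account for everything Flink itself knows about, which is why the OOMKilled pods surprise us.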
We use RocksDB and do not set state.backend.rocksdb.memory.managed in flink-conf.yaml.
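In case it is useful for the discussion, this is the kind of adjustment we have been considering but have not verified; the key names are taken from the 1.10 documentation, the values are guesses:

# flink-conf.yaml (sketch, values are guesses)
# keep RocksDB inside Flink's managed memory budget (should already be the 1.10 default)
state.backend.rocksdb.memory.managed: true
# leave more headroom for native allocations outside heap and managed memory
taskmanager.memory.jvm-overhead.fraction: 0.2
taskmanager.memory.jvm-overhead.max: 1g
# reserve explicit task off-heap memory in case our dependencies allocate direct/native memory
taskmanager.memory.task.off-heap.size: 256m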
We are not sure how to check for "any substantial off-heap memory allocations in your application code or its dependencies". Does anyone have a suggestion on how to do that?
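One thing we are considering in order to answer that ourselves is enabling JVM Native Memory Tracking on the taskmanagers; this is only a sketch and assumes the JDK in our image ships jcmd and that the taskmanager JVM is PID 1 in the container:

# flink-conf.yaml: pass the NMT flag to the JVM
env.java.opts: "-XX:NativeMemoryTracking=summary"

# then, inside a running taskmanager pod (pod name is a placeholder):
kubectl exec -it <taskmanager-pod> -- jcmd 1 VM.native_memory summary

As far as we understand, this only covers memory the JVM itself tracks; native allocations made by libraries such as RocksDB would still have to be inferred from the gap between the NMT total and the container RSS.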
Here are our dependencies for reference:
val flinkVersion = "1.10.0"
libraryDependencies += "com.squareup.okhttp3" % "okhttp" % "4.2.2"
libraryDependencies += "com.typesafe" % "config" % "1.4.0"
libraryDependencies += "joda-time" % "joda-time" % "2.10.5"
libraryDependencies += "org.apache.flink" %% "flink-connector-kafka" % flinkVersion
libraryDependencies += "org.apache.flink" % "flink-metrics-dropwizard" % flinkVersion
libraryDependencies += "org.apache.flink" %% "flink-scala" % flinkVersion % "provided"
libraryDependencies += "org.apache.flink" %% "flink-statebackend-rocksdb" % flinkVersion % "provided"
libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided"
libraryDependencies += "org.json4s" %% "json4s-jackson" % "3.6.7"
libraryDependencies += "org.log4s" %% "log4s" % "1.8.2"
libraryDependencies += "org.rogach" %% "scallop" % "3.3.1"
The configuration we used with Flink 1.9.1 is listed below, and it was not OOMKilled.
Kubernetes
resources:
  requests:
    cpu: 1200m
    memory: 2G
  limits:
    cpu: 1500m
    memory: 2G
Flink 1.9.1
jobmanager.heap.size: 820m
taskmanager.heap.size: 1024m