Java K8s Spark job JAR arguments

vwhgwdsa · posted 2022-12-02 in Java

I am using the manifest below, and when I apply it I get the error shown underneath. Is this the correct way to pass arguments to the application JAR?

apiVersion: batch/v1
kind: Job
metadata:
  name: spark-on-eks
spec:
  template:
    spec:
      containers:
        - name: spark
          image: repo:buildversion
          command: [
            "/bin/sh",
            "-c",
            "/opt/spark/bin/spark-submit \
            --master k8s://EKSEndpoint \
            --deploy-mode cluster \
            --name spark-luluapp \
            --class com.ll.jsonclass \
            --conf spark.jars.ivy=/tmp/.ivy \
            --conf spark.kubernetes.container.image=repo:buildversion \
            --conf spark.kubernetes.namespace=spark-pi \
            --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-sa \
            --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
            --conf spark.kubernetes.authenticate.executor.serviceAccountName=spark-sa \
            --conf spark.kubernetes.driver.pod.name=spark-job-driver \
            --conf spark.executor.instances=4 \
            local:///opt/spark/examples/App-buildversion-SNAPSHOT.jar \
            [mks,env,reg,"dd.mm.yyyy","true","off","db-comp-results","true","XX","XXX","XXXXX","XXX",$$,###] " 
          ]
      serviceAccountName: spark-pi
      restartPolicy: Never
  backoffLimit: 4

error converting YAML to JSON: yaml: line 33: did not find expected ',' or ']'

z31licg01#

Your YAML is malformed. The whole command is a double-quoted scalar, so the first unescaped inner quote (the one opening "dd.mm.yyyy") ends the string early, and the parser then trips over the rest of the flow sequence, which is exactly the line-33 "did not find expected ',' or ']'" error you saw. Single-quoting the command string fixes the parsing. The trailing backslashes are also dropped below: a quoted YAML scalar already folds its line breaks into spaces, so a literal \ would reach the shell as a stray escaped space. Try this:

apiVersion: batch/v1
kind: Job
metadata:
  name: spark-on-eks
spec:
  template:
    spec:
      containers:
        - name: spark
          image: repo:buildversion
          command:  
            - "/bin/sh"
            - "-c"
            - '/opt/spark/bin/spark-submit
              --master k8s://EKSEndpoint
              --deploy-mode cluster
              --name spark-luluapp
              --class com.ll.jsonclass
              --conf spark.jars.ivy=/tmp/.ivy
              --conf spark.kubernetes.container.image=repo:buildversion
              --conf spark.kubernetes.namespace=spark-pi
              --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-sa
              --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
              --conf spark.kubernetes.authenticate.executor.serviceAccountName=spark-sa
              --conf spark.kubernetes.driver.pod.name=spark-job-driver
              --conf spark.executor.instances=4
              local:///opt/spark/examples/App-buildversion-SNAPSHOT.jar
              [mks,env,reg,"dd.mm.yyyy","true","off","db-comp-results","true","XX","XXX","XXXXX","XXX",$$,###]'
      serviceAccountName: spark-pi
      restartPolicy: Never
  backoffLimit: 4
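
A further note on the fix: a literal block scalar (|) sidesteps both pitfalls at once. YAML keeps the newlines, so no inner double quote can terminate the scalar, and trailing backslashes reach the shell as real line continuations. A minimal sketch of the same Job written that way, reusing the image, endpoint, and masked placeholder values from the question:

apiVersion: batch/v1
kind: Job
metadata:
  name: spark-on-eks
spec:
  template:
    spec:
      containers:
        - name: spark
          image: repo:buildversion
          command:
            - "/bin/sh"
            - "-c"
            # The | scalar preserves line breaks, so the backslash
            # continuations work exactly as they would in a shell script.
            - |
              /opt/spark/bin/spark-submit \
                --master k8s://EKSEndpoint \
                --deploy-mode cluster \
                --name spark-luluapp \
                --class com.ll.jsonclass \
                --conf spark.jars.ivy=/tmp/.ivy \
                --conf spark.kubernetes.container.image=repo:buildversion \
                --conf spark.kubernetes.namespace=spark-pi \
                --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-sa \
                --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
                --conf spark.kubernetes.authenticate.executor.serviceAccountName=spark-sa \
                --conf spark.kubernetes.driver.pod.name=spark-job-driver \
                --conf spark.executor.instances=4 \
                local:///opt/spark/examples/App-buildversion-SNAPSHOT.jar \
                '[mks,env,reg,"dd.mm.yyyy","true","off","db-comp-results","true","XX","XXX","XXXXX","XXX",$$,###]'
      serviceAccountName: spark-pi
      restartPolicy: Never
  backoffLimit: 4

Everything after the application JAR is forwarded to the main class as its String[] args, so the bracketed list above arrives as a single argument (the shell single quotes keep the inner double quotes literal and stop $$ from being expanded as the shell PID). If com.ll.jsonclass actually expects separate arguments, drop the brackets and separate the values with spaces instead.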
