Java: integrating Spark and Kafka containers - Spark job fails, KafkaBatchInputPartition not found

Asked by 5jvtdoz2 on 2022-11-20 · Java

For my university project I have been trying for about a week to integrate Spark and Kafka containers running on Docker, together with a Scala application. Everything is based on the Big Data Europe images. The integration has been problematic: the Kafka container and the Python producer script work fine, but I am struggling with the Spark job. At first I could not even submit it; I solved that by submitting from the correct sbt/spark image from Big Data Europe. Then the Spark application could not read the stream from Kafka; I worked around that by downloading some jar files locally and copying them into the container from the Dockerfile (a temporary solution). Now the job submits, connects to Kafka and starts, but as soon as I push a message to the Kafka broker it breaks while reading the stream. The whole pipeline runs fine on my local Ubuntu VM, with Kafka and Spark running locally, but it keeps failing once dockerized.
I would also appreciate any advice on deploying Kafka/Spark applications in general. My goal is a docker-compose setup that can eventually be deployed to the cloud (GCP) and run there (probably rewritten into K8s manifests), but I am also wondering whether a managed Spark offering such as GCP Dataproc would make this easier.
Thanks in advance - I am fairly new to Java/Scala and quite inexperienced with Spark and Docker.
The error from Spark:

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2454)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2403)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2402)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2402)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1160)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1160)
        at scala.Option.foreach(Option.scala:407)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1160)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2642)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2584)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2573)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:938)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:354)
        ... 40 more
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.kafka010.KafkaBatchInputPartition
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:68)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1986)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1850)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2160)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:466)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

The Scala application's Dockerfile (I have tried both --packages and --jars, and also specifying the /spark/jars prefix for them):

FROM bde2020/spark-sbt-template:3.2.1-hadoop3.2

COPY . .
COPY ./spark-sql-kafka-0-10_2.12-3.2.1.jar /spark/jars
COPY ./kafka-clients-3.2.1.jar /spark/jars
COPY ./spark-streaming-kafka-0-10-assembly_2.12-3.2.1.jar /spark/jars

ENV SPARK_APPLICATION_MAIN_CLASS StreamProcessor
ENV SPARK_APPLICATION_ARGS "--packages org.apache.spark:spark-sql-kafka-0-10_2.12-3.2.1,kafka-clients-3.2.1.jar,spark-streaming-kafka-0-10-assembly_2.12-3.2.1"
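
For reference, --packages expects Maven coordinates in group:artifact:version form, so the value above (a dash before the version, mixed with bare jar file names) cannot be resolved by Ivy, and it has to reach spark-submit itself rather than being passed as an application argument. A sketch of how that might look, assuming the bde2020 submit template honours the SPARK_SUBMIT_ARGS variable (an assumption based on the template's documentation) and that the container can reach Maven Central at submit time:

FROM bde2020/spark-sbt-template:3.2.1-hadoop3.2

COPY . .

ENV SPARK_APPLICATION_MAIN_CLASS StreamProcessor
# spark-submit options, not application arguments; Ivy pulls kafka-clients and
# commons-pool2 in transitively, so only the connector coordinate is needed
ENV SPARK_SUBMIT_ARGS "--packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.1"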

The Scala test application, "StreamProcessor":

import org.apache.spark.sql._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.streaming._
import org.apache.spark.sql.types._

object StreamProcessor {
    def main(args:Array[String]): Unit =
    {
        val spark = SparkSession
            .builder
            .master("spark://spark-master:7077")
            .appName("Stream Processor")
            .getOrCreate()

        import spark.implicits._

        val inputDF = spark
            .readStream
            .format("kafka")
            .option("kafka.bootstrap.servers","kafka:29092")
            .option("subscribe","market")
            .load()

        // cast the Kafka record value from bytes to a string
        val rawDF = inputDF.selectExpr("CAST(value AS STRING)").as[String]

        val query = rawDF
            .writeStream
            .format("console")
            .outputMode("update")
            .start()

        query.awaitTermination()
    }
}

build.sbt:

name := "StreamProcessor"

version := "1.0"

scalaVersion := "2.12.17"

libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-core" % "3.2.1" % "provided",
    "org.apache.spark" %% "spark-sql" % "3.2.1" % "provided",
    "org.apache.spark" %% "spark-sql-kafka-0-10" % "3.2.1" % "provided",
    "org.apache.spark" %% "spark-streaming" % "3.2.1" % "provided"
)
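
An alternative to copying jars into the image is to bundle the Kafka connector into the application's assembly jar: spark-sql-kafka-0-10 is not shipped with the Spark distribution, so marking it "provided" keeps it out of the fat jar and off the executors' classpath, which is what the ClassNotFoundException complains about. A sketch of the relevant build.sbt changes, assuming the sbt-assembly plugin from project/assembly.sbt below (the merge strategy is a common pattern, not verified against this exact setup):

val sparkVersion = "3.2.1"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"           % sparkVersion % "provided",
  "org.apache.spark" %% "spark-sql"            % sparkVersion % "provided",
  // compile scope: the connector (and its kafka-clients/commons-pool2 dependencies)
  // end up inside the assembly jar that is shipped to the executors
  "org.apache.spark" %% "spark-sql-kafka-0-10" % sparkVersion
)

// sbt-assembly: concatenate DataSourceRegister service files, drop other META-INF duplicates
assembly / assemblyMergeStrategy := {
  case PathList("META-INF", "services", _*) => MergeStrategy.concat
  case PathList("META-INF", _*)             => MergeStrategy.discard
  case _                                    => MergeStrategy.first
}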

project/assembly.sbt:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "1.1.0")

docker-compose.yml (finnhubproducer is a container with an application that sends messages to Kafka):

version: "3.6"

services:

  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.0
    container_name: zookeeper
    networks:
      - broker-kafka
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:6.2.0
    hostname: kafka
    container_name: kafka
    networks:
      - broker-kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    healthcheck:
      test: nc -z localhost 9092 || exit -1
      start_period: 15s
      interval: 5s
      timeout: 10s
      retries: 10
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: kafka:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT

  init-kafka:
    image: confluentinc/cp-kafka:6.2.0
    networks:
      - broker-kafka
    depends_on:
      - kafka
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
      # blocks until kafka is reachable
      kafka-topics --bootstrap-server kafka:29092 --list

      echo -e 'Creating kafka topics'
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic market --replication-factor 1 --partitions 1

      echo -e 'Successfully created the following topics:'
      kafka-topics --bootstrap-server kafka:29092 --list
      "

  kafdrop:
    image: obsidiandynamics/kafdrop:3.27.0
    networks:
      - broker-kafka
    depends_on:
      - kafka
      - zookeeper
    ports:
      - 19000:9000
    environment:
      KAFKA_BROKERCONNECT: kafka:29092
      
  finnhubproducer:
    build:
      context: ./FinnhubProducer
      dockerfile: Dockerfile
    environment:
      - KAFKA_TOPIC_NAME=market
      - KAFKA_SERVER=kafka
      - KAFKA_PORT=29092
    ports:
      - 8001:8001
    depends_on:
      kafka:
        condition: service_healthy
    networks:
      - broker-kafka

  spark-master:
    image: bde2020/spark-master:3.2.1-hadoop3.2
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
    networks:
      - broker-kafka
        
  spark-worker-1:
    image: bde2020/spark-worker:3.2.1-hadoop3.2
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
    networks:
      - broker-kafka
      
  spark-worker-2:
    image: bde2020/spark-worker:3.2.1-hadoop3.2
    container_name: spark-worker-2
    depends_on:
      - spark-master
    ports:
      - "8082:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
    networks:
      - broker-kafka

  streamprocessor:
    build:
      context: ./StreamProcessor
      dockerfile: Dockerfile
    ports:
      - "8002:8002"
    depends_on:
      kafka:
        condition: service_healthy
    networks:
      - broker-kafka

networks:
  broker-kafka:
    driver: bridge

Answer 1 (atmip9wb):

It looks like I have solved the problem. I needed to mount the .jar files as Docker volumes into every Spark container (master and workers), plus one additional jar, commons-pool2.

spark-master:
    image: bde2020/spark-master:3.2.1-hadoop3.2
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
    networks:
      - broker-kafka
    volumes:
      - ./StreamProcessor/kafka-clients-2.8.0.jar:/spark/jars/kafka-clients-2.8.0.jar
      - ./StreamProcessor/commons-pool2-2.8.0.jar:/spark/jars/commons-pool2-2.8.0.jar
      - ./StreamProcessor/spark-sql-kafka-0-10_2.12-3.2.1.jar:/spark/jars/spark-sql-kafka-0-10_2.12-3.2.1.jar
      - ./StreamProcessor/spark-streaming-kafka-0-10-assembly_2.12-3.2.1.jar:/spark/jars/spark-streaming-kafka-0-10-assembly_2.12-3.2.1.jar
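
The worker services need the same mounts, otherwise the executors still cannot load the Kafka classes when they deserialize their tasks; a sketch mirroring the master entry above:

spark-worker-1:
    image: bde2020/spark-worker:3.2.1-hadoop3.2
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
    networks:
      - broker-kafka
    volumes:
      - ./StreamProcessor/kafka-clients-2.8.0.jar:/spark/jars/kafka-clients-2.8.0.jar
      - ./StreamProcessor/commons-pool2-2.8.0.jar:/spark/jars/commons-pool2-2.8.0.jar
      - ./StreamProcessor/spark-sql-kafka-0-10_2.12-3.2.1.jar:/spark/jars/spark-sql-kafka-0-10_2.12-3.2.1.jar
      - ./StreamProcessor/spark-streaming-kafka-0-10-assembly_2.12-3.2.1.jar:/spark/jars/spark-streaming-kafka-0-10-assembly_2.12-3.2.1.jar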
