When the Spark job is launched via `spark-submit` without Docker, everything works fine. Running it in a Docker container, however, produces no output.

To check whether Kafka itself was working, I extracted the Kafka binaries into the Spark worker container and had a console consumer listen on the same host, port, and topic (kafka:9092, crypto_topic); it ran correctly and displayed output (a producer in another container continuously pushes data to the topic).
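For reference, that check looks roughly like the console-consumer invocation below (my reconstruction; the /opt/kafka path is an assumption about where the binaries were unpacked):

```bash
# Run inside the spark-consumer-worker container.
# /opt/kafka is an assumed unpack location for the Kafka binaries.
/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server kafka:9092 \
  --topic crypto_topic \
  --from-beginning
```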
Expected:
```
20/09/11 17:35:27 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.29.10:42565 with 366.3 MB RAM, BlockManagerId(driver, 192.168.29.10, 42565, None)
20/09/11 17:35:27 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.29.10, 42565, None)
20/09/11 17:35:27 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.29.10, 42565, None)
-------------------------------------------
Batch: 0
-------------------------------------------
+---------+-----------+-----------------+------+----------+------------+-----+-------------------+---------+
|name_coin|symbol_coin|number_of_markets|volume|market_cap|total_supply|price|percent_change_24hr|timestamp|
+---------+-----------+-----------------+------+----------+------------+-----+-------------------+---------+
+---------+-----------+-----------------+------+----------+------------+-----+-------------------+---------+
...
```
followed by more output
Actual:
```
20/09/11 14:49:44 INFO BlockManagerMasterEndpoint: Registering block manager d7443d94165c:46203 with 366.3 MB RAM, BlockManagerId(driver, d7443d94165c, 46203, None)
20/09/11 14:49:44 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, d7443d94165c, 46203, None)
20/09/11 14:49:44 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, d7443d94165c, 46203, None)
20/09/11 14:49:44 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
```
No more output; it stays stuck here.
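One debugging step worth noting (my addition, not from the original post) is to verify that the driver container can actually resolve and reach the broker; whether `nc` is available depends on the image:

```bash
# Check DNS resolution and TCP connectivity from the worker container
# to the Kafka broker (nc may need to be installed in the image).
docker exec -it spark-consumer-worker nc -zv kafka 9092
```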
The docker-compose.yml file:
```yaml
version: "3"

services:
  zookeeper:
    image: zookeeper:3.6.1
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - "2181:2181"
    networks:
      - crypto-network

  kafka:
    image: wurstmeister/kafka:2.13-2.6.0
    container_name: kafka
    hostname: kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_PORT=9092
      # topic-name:partitions:in-sync-replicas:cleanup-policy
      - KAFKA_CREATE_TOPICS="crypto_topic:1:1:compact"
    networks:
      - crypto-network

  kafka-producer:
    image: python:3-alpine
    container_name: kafka-producer
    command: >
      sh -c "pip install -r /usr/src/producer/requirements.txt
      && python3 /usr/src/producer/kafkaProducerService.py"
    volumes:
      - ./kafkaProducer:/usr/src/producer
    networks:
      - crypto-network

  cassandra:
    image: cassandra:3.11.8
    container_name: cassandra
    hostname: cassandra
    ports:
      - "9042:9042"
    #command:
    #  cqlsh -f /var/lib/cassandra/cql-queries.cql
    volumes:
      - ./cassandraData:/var/lib/cassandra
    networks:
      - crypto-network

  spark-master:
    image: bde2020/spark-master:2.4.5-hadoop2.7
    container_name: spark-master
    hostname: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
      - "6066:6066"
    networks:
      - crypto-network

  spark-consumer-worker:
    image: bde2020/spark-worker:2.4.5-hadoop2.7
    container_name: spark-consumer-worker
    environment:
      - SPARK_MASTER=spark://spark-master:7077
    ports:
      - "8081:8081"
    volumes:
      - ./sparkConsumer:/sparkConsumer
    networks:
      - crypto-network

networks:
  crypto-network:
    driver: bridge
```
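A quick sanity check (not in the original post) is to confirm that the topic was actually created; the script path below assumes the layout of the wurstmeister/kafka image:

```bash
# List topics registered in ZooKeeper; /opt/kafka/bin is the assumed
# location of the Kafka scripts in the wurstmeister/kafka image.
docker exec kafka /opt/kafka/bin/kafka-topics.sh \
  --zookeeper zookeeper:2181 --list
```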
`spark-submit` is run via:

```bash
docker exec -it spark-consumer-worker bash

/spark/bin/spark-submit --master $SPARK_MASTER --class processing.SparkRealTimePriceUpdates \
  --packages com.datastax.spark:spark-cassandra-connector_2.11:2.4.3,org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.5 \
  /sparkConsumer/sparkconsumer_2.11-1.0-RELEASE.jar
```
The relevant part of the Spark code:
```scala
val inputDF: DataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka:9092")
  .option("subscribe", "crypto_topic")
  .load()

...

val queryPrice: StreamingQuery = castedDF
  .writeStream
  .outputMode("update")
  .format("console")
  .option("truncate", "false")
  .start()

queryPrice.awaitTermination()
```
1 Answer
This part of the code was actually parameterized: the values passed to `kafka.bootstrap.servers` and `subscribe` were the constants KAFKA_BOOTSTRAP_SERVERS and KAFKA_TOPIC, which are read from a config file when the jar is packaged locally. For me, the best way to debug this was to make the logging more verbose.
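Reconstructed from that description (the answer's original snippet did not survive; this is a sketch, not the author's exact code), the stream was created along these lines:

```scala
// Sketch reconstructed from the answer's description: the broker list
// and topic come from constants baked in from a config file at
// package time, not from string literals.
val inputDF: DataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", KAFKA_BOOTSTRAP_SERVERS)
  .option("subscribe", KAFKA_TOPIC)
  .load()
```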
Locally, KAFKA_BOOTSTRAP_SERVERS was localhost:9092, but in the Docker container the config file there changed it to kafka:9092. That change was never picked up, because the jar had already been packaged. Setting the value to kafka:9092 when packaging locally fixed it. I'd appreciate any help on how to make the jar pick up the config dynamically; I don't want to run the sbt packaging inside the Docker container.
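One possible direction (my suggestion, not part of the original answer) is to resolve the values from environment variables at runtime, falling back to the packaged defaults, so the same jar works both locally and in the container. A minimal sketch, assuming the constants live in the consumer's own code:

```scala
import org.apache.spark.sql.DataFrame

// Hypothetical sketch: read Kafka settings from the environment at
// runtime instead of baking them in at package time. The variable
// names and defaults are assumptions for illustration; `spark` is
// the application's existing SparkSession.
val kafkaBootstrapServers: String =
  sys.env.getOrElse("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092")
val kafkaTopic: String =
  sys.env.getOrElse("KAFKA_TOPIC", "crypto_topic")

val inputDF: DataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", kafkaBootstrapServers)
  .option("subscribe", kafkaTopic)
  .load()
```

The environment variables could then be set on the spark-consumer-worker service in docker-compose.yml (under `environment:`, next to SPARK_MASTER), so the same jar runs unchanged in both environments.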