Below is my pyspark startup snippet, which has been quite reliable (I've been using it for a long time). Today I added the spark.jars.packages
option (effectively "plugging in" Kafka support). Normally that triggers a dependency download (performed by Spark automatically):
import sys, os, multiprocessing
from pyspark.sql import DataFrame, DataFrameStatFunctions, DataFrameNaFunctions
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession
from pyspark.sql import functions as sFn
from pyspark.sql.types import *
from pyspark.sql.types import Row
# ------------------------------------------
# Note: Row() in .../pyspark/sql/types.py
# isn't included in '__all__' list(), so
# we must import it by name here.
# ------------------------------------------
num_cpus = multiprocessing.cpu_count() # Number of CPUs for SPARK Local mode.
os.environ.pop('SPARK_MASTER_HOST', None) # Since we're using pip/pySpark these three ENVs
os.environ.pop('SPARK_MASTER_PORT', None) # aren't needed; and we ensure pySpark doesn't
os.environ.pop('SPARK_HOME', None) # get confused by them, should they be set.
os.environ.pop('PYTHONSTARTUP', None) # Just in case pySpark 2.x attempts to read this.
os.environ['PYSPARK_PYTHON'] = sys.executable # Make SPARK Workers use same Python as Master.
os.environ['JAVA_HOME'] = '/usr/lib/jvm/jre' # Oracle JAVA for our pip/python3/pySpark 2.4 (CDH's JRE won't work).
JARS_IVE_REPO = '/home/jdoe/SPARK.JARS.REPO.d/'
# ======================================================================
# Maven Coordinates for JARs (and their dependencies) needed to plug
# extra functionality into Spark 2.x (e.g. Kafka SQL and Streaming)
# A one-time internet connection is necessary for Spark to automatically
# download JARs specified by the coordinates (and dependencies).
# ======================================================================
spark_jars_packages = ','.join(['org.apache.spark:spark-streaming-kafka-0-10_2.11:2.4.0',
                                'org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0',])
# ======================================================================
spark_conf = SparkConf()
spark_conf.setAll([('spark.master', 'local[{}]'.format(num_cpus)),
                   ('spark.app.name', 'myApp'),
                   ('spark.submit.deployMode', 'client'),
                   ('spark.ui.showConsoleProgress', 'true'),
                   ('spark.eventLog.enabled', 'false'),
                   ('spark.logConf', 'false'),
                   ('spark.jars.repositories', 'file:/' + JARS_IVE_REPO),
                   ('spark.jars.ivy', JARS_IVE_REPO),
                   ('spark.jars.packages', spark_jars_packages), ])
spark_sesn = SparkSession.builder.config(conf = spark_conf).getOrCreate()
spark_ctxt = spark_sesn.sparkContext
spark_reader = spark_sesn.read
spark_streamReader = spark_sesn.readStream
spark_ctxt.setLogLevel("WARN")
However, when I run the snippet (e.g. ./python -i init_spark.py), the plug-ins are not downloaded and/or loaded, even though they should be.
This mechanism used to work, but at some point it stopped. What am I missing?
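A quick way to tell whether the coordinates ever reached the session actually in use is to inspect its configuration. This is a minimal diagnostic sketch, not part of the original snippet; it only assumes the spark_ctxt name defined above (the '<not set>' fallback string is illustrative):
# Diagnostic sketch: report which Maven coordinates (if any) the live
# SparkContext was actually configured with. '<not set>' means the
# session in use was built without spark.jars.packages.
live_conf = spark_ctxt.getConf()   # copy of the SparkConf backing the live context
print('spark.jars.packages =', live_conf.get('spark.jars.packages', '<not set>'))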
Thanks in advance!
1 Answer
This is one of those posts where the question is worth more than the answer, because the code above works yet cannot be found in the Spark 2.x documentation or examples.
The above is how I programmatically add functionality to Spark 2.x via Maven coordinates. I had it working, but then it stopped working. Why?
When I ran the snippet in a jupyter notebook, the notebook had already run that very same snippet behind the scenes via my PYTHONSTARTUP script. That PYTHONSTARTUP script is identical to the code above, except that it intentionally omits the Maven coordinates. Here, then, is how this subtle problem arises:
spark_sesn = SparkSession.builder.config(conf = spark_conf).getOrCreate()
Because a Spark session already existed, the statement above simply reused that existing session (.getOrCreate()), and that session had never loaded the jars/libraries (again, because my PYTHONSTARTUP script intentionally omits them). This is why putting print statements in PYTHONSTARTUP scripts is a good idea. In the end, I had simply forgotten to run:
$ unset PYTHONSTARTUP
before launching the JupyterLab / Notebook daemon. I hope this question helps others, because this is how to programmatically add functionality (Kafka, in this case) to Spark 2.x. Note that a one-time internet connection is needed for the initial download of the specified jars and their recursive dependencies from Maven Central.
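As a concrete version of the print-statement advice above, a loud banner at the top of the PYTHONSTARTUP script makes the hidden session creation impossible to miss. A minimal sketch (the wording is illustrative, not taken from the original script):
# Sketch: place at the very top of the PYTHONSTARTUP script so that any
# interactive session (or Jupyter kernel) announces that a SparkSession
# is being created here, without the spark.jars.packages coordinates.
import os
print('*** PYTHONSTARTUP script running:', os.environ.get('PYTHONSTARTUP', '<unset>'))
print('*** It creates a SparkSession WITHOUT spark.jars.packages; any later')
print('*** SparkSession.builder...getOrCreate() will silently reuse it.')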