spark-submit fails to connect to the metastore due to Kerberos (Caused by GSSException: No valid credentials provided), but works in local client mode

Asked by b1zrtrql on 2021-05-27 in Spark

It seems that, inside Docker, the pyspark shell in local client mode works and is able to connect to Hive. However, issuing a spark-submit with all its dependencies fails with the error below.

20/08/24 14:03:01 INFO storage.BlockManagerMasterEndpoint: Registering block manager test.server.com:41697 with 6.2 GB RAM, BlockManagerId(3, test.server.com, 41697, None)
20/08/24 14:03:02 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
20/08/24 14:03:02 INFO hive.metastore: Trying to connect to metastore with URI thrift://metastore.server.com:9083
20/08/24 14:03:02 ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
        at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)

Running a simple pi example through pyspark works fine with no Kerberos issues, but the Kerberos error appears when trying to access Hive.
The spark-submit command:

spark-submit --master yarn --deploy-mode cluster --files=/etc/hive/conf/hive-site.xml,/etc/hive/conf/yarn-site.xml,/etc/hive/conf/hdfs-site.xml,/etc/hive/conf/core-site.xml,/etc/hive/conf/mapred-site.xml,/etc/hive/conf/ssl-client.xml  --name fetch_hive_test --executor-memory 12g --num-executors 20 test_hive_minimal.py
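For contrast, the working pi sanity check mentioned above can be reproduced with the SparkPi example that ships with the Spark distribution (a sketch; the jar path follows the standard layout and may differ in your install):

```shell
# Client-mode sanity check: SparkPi touches no Hive metastore,
# so no SASL/GSS negotiation with thrift://...:9083 is triggered.
spark-submit --master yarn --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  "$SPARK_HOME"/examples/jars/spark-examples_*.jar 100
```

If this succeeds while the Hive job fails, the problem is specific to the metastore's Kerberos handshake rather than to YARN submission itself.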

test_hive_minimal.py is a simple pyspark script that shows the tables in the test db:

from pyspark.sql import SparkSession

appName = "test_hive_minimal"
master = "yarn"

# Create the Spark session with Hive support
spark = SparkSession.builder \
    .appName(appName) \
    .master(master) \
    .enableHiveSupport() \
    .config("spark.hadoop.hive.enforce.bucketing", "true") \
    .config("spark.hadoop.hive.support.quoted.identifiers", "none") \
    .config("hive.exec.dynamic.partition", "true") \
    .config("hive.exec.dynamic.partition.mode", "nonstrict") \
    .getOrCreate()

# Custom free-form query: list the tables in the target database
sql = "show tables in user_tables"
df_new = spark.sql(sql)
df_new.show()
spark.stop()

Can anyone tell me how to fix this? Aren't Kerberos tickets managed automatically by YARN? All other Hadoop resources are accessible.

UPDATE: The issue was fixed by sharing a volume mount on the Docker container and passing the keytab/principal along with hive-site.xml to access the metastore.

# the keytab below is mounted and kept in a Docker-local path
spark-submit --master yarn \
--deploy-mode cluster \
--jars /srv/python/ext_jars/terajdbc4.jar \
--files=/etc/hive/conf/hive-site.xml \
--keytab /home/alias/.kt/alias.keytab \
--principal alias@realm.com.org \
--name td_to_hive_test \
--driver-cores 2 \
--driver-memory 2G \
--num-executors 44 \
--executor-cores 5 \
--executor-memory 12g \
td_to_hive_test.py
bkhjykvo (answer 1):

I think your driver has a ticket but your executors do not. Add the following parameters to your spark-submit:

--principal : you can get the principal this way: `klist -k`
--keytab : path to the keytab

More info: https://spark.apache.org/docs/latest/running-on-yarn.html#yarn-specific-kerberos-configuration
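Putting the two flags together, a quick way to confirm which principal a keytab holds before wiring it into spark-submit (a sketch; the keytab path mirrors the one from the question's update and may differ in your setup):

```shell
# List the principals stored in the keytab; pass one of them as --principal
klist -k /home/alias/.kt/alias.keytab

# With --principal/--keytab, YARN can log in and renew tokens for the executors
spark-submit --master yarn --deploy-mode cluster \
  --principal alias@realm.com.org \
  --keytab /home/alias/.kt/alias.keytab \
  test_hive_minimal.py
```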

olhwl3o2 (answer 2):

When running the job on the cluster, can you try the following command-line property?

-Djavax.security.auth.useSubjectCredsOnly=false

You can add the above property to your spark-submit command.
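One way to pass that JVM flag to both the driver and the executors is through Spark's extraJavaOptions settings (a sketch; whether it helps depends on how JAAS/GSS is configured in your environment):

```shell
# Allow the GSS layer to fall back to the ticket cache / configured login
# module instead of requiring credentials on the calling Subject.
spark-submit --master yarn --deploy-mode cluster \
  --conf "spark.driver.extraJavaOptions=-Djavax.security.auth.useSubjectCredsOnly=false" \
  --conf "spark.executor.extraJavaOptions=-Djavax.security.auth.useSubjectCredsOnly=false" \
  test_hive_minimal.py
```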
