Spark submit on a DC/OS cluster fails with java.net.UnknownHostException: hdfs

smdncfj3 · Posted 2021-06-01 in Hadoop

I am running spark-submit in cluster/REST mode against a DC/OS cluster:

$ ./spark-submit \
    --deploy-mode cluster \
    --master mesos://localhost:7077 \
    --conf spark.master.rest.enabled=true \
    --conf spark.mesos.uris=http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/hdfs-site.xml,http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/core-site.xml \
    --conf spark.mesos.executor.docker.image=someregistry:5000/someimage:2.0.0-rc3 \
    --conf spark.eventLog.enabled=true \
    --conf spark.eventLog.dir=hdfs://hdfs/history \
    --conf spark.executor.extraClassPath=/opt/spark/dist/elasticsearch-spark-20_2.11-6.4.2.jar \
    --conf spark.mesos.driverEnv.SPARK_HDFS_CONFIG_URL=http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/hdfs-site.xml \
    --conf spark.executor.memory=42G \
    --conf spark.driver.memory=8G \
    --conf spark.executor.cores=8 \
    --driver-class-path /opt/spark/dist/elasticsearch-spark-20_2.11-6.4.2.jar \
    http://hostname/somescript.py

The job fails as follows:

java.lang.IllegalArgumentException: java.net.UnknownHostException: hdfs
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:668)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:604)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2598)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
    at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1853)
    at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:68)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:236)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: hdfs
    ... 26 more

I have set up a tunnel from localhost:7077 to the Mesos master, since the master's internal URI was not directly reachable from my machine.
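
For reference, that kind of local port forward is usually done with an SSH tunnel; a minimal sketch, where the gateway and internal master hostnames are placeholders, not details from the original post:

# Forward localhost:7077 to the Mesos master's internal address
# (<user>, <dcos-gateway> and <internal-mesos-master> are hypothetical placeholders)
ssh -N -L 7077:<internal-mesos-master>:7077 <user>@<dcos-gateway>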
EDIT: I think this may be related to my executors not being able to read core-site.xml, which is where the following declaration lives:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hdfs</value>
</property>
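
For context, hdfs://hdfs here is a logical HA nameservice name rather than a resolvable hostname: it only works when the client also loads an hdfs-site.xml that defines that nameservice. Without those entries, Hadoop falls back to treating "hdfs" as a DNS name and fails with exactly the UnknownHostException: hdfs shown above. A minimal sketch of the kind of entries such an hdfs-site.xml typically contains (the hostnames below are hypothetical placeholders, not taken from the DC/OS HDFS package):

<!-- Sketch: HA client config that makes the logical name "hdfs" resolvable.
     Hostnames are placeholders. -->
<property>
    <name>dfs.nameservices</name>
    <value>hdfs</value>
</property>
<property>
    <name>dfs.ha.namenodes.hdfs</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.hdfs.nn1</name>
    <value>namenode1.example.com:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.hdfs.nn2</name>
    <value>namenode2.example.com:8020</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.hdfs</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>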

Do I need to point my Spark installation at these files somehow?


xfb7svmp · answer #1

The problem is with --conf spark.eventLog.dir=hdfs://hdfs/history. You need to change it to --conf spark.eventLog.dir=hdfs://HDFS_NAME_NODE_HOSTNAME:8020/hdfs/history. Note: replace HDFS_NAME_NODE_HOSTNAME with the actual NameNode hostname.
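
One way to find the actual NameNode host and port is to read the hdfs-site.xml that the DC/OS HDFS service already publishes at the endpoint used in the submit command above; a sketch, assuming that endpoint is reachable from the machine running spark-submit:

# Fetch the published hdfs-site.xml and look for the NameNode RPC address entries
curl -s http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/hdfs-site.xml \
    | grep -A1 'dfs.namenode.rpc-address'
# Each matching <name> is followed by a <value> holding a host:port that can
# stand in for HDFS_NAME_NODE_HOSTNAME:8020 in spark.eventLog.dir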
