Running pyspark on Hadoop fails to connect to localhost/127.0.0.1:46311

Asked by 6fe3ivhb on 2021-05-27 in Hadoop

When I run pyspark on Hadoop:

HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop/ pyspark --master yarn --deploy-mode client

it fails with: Caused by: java.io.IOException: Failed to connect to localhost/127.0.0.1:46311

In [1]: sc
20/11/05 18:19:06 ERROR YarnClientSchedulerBackend: YARN application has exited unexpectedly with state FAILED! Check the YARN application logs for more details.
20/11/05 18:19:06 ERROR YarnClientSchedulerBackend: Diagnostics message: Uncaught exception: org.apache.spark.SparkException: Exception thrown in awaitResult: 
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:302)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
    at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:109)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:547)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:266)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:890)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:889)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:889)
    at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:921)
    at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
Caused by: java.io.IOException: Failed to connect to localhost/127.0.0.1:46311
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:253)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:195)
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:202)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:198)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:46311
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
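
The first ERROR line above says to check the YARN application logs for more details. For reference, one way to pull them from the command line (the application id below is a placeholder; the real id appears in the pyspark output and in `yarn application -list -appStates ALL`):

    # dump the aggregated logs of the failed application (id is a placeholder)
    yarn logs -applicationId application_XXXXXXXXXXXXX_XXXX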

/etc/hosts:

127.0.0.1   localhost
    # 127.0.1.1 oaa-virtual-machine

    192.168.32.127      ubuntu
    192.168.32.128      ubuntu001
    192.168.32.129      ubuntu002
    192.168.32.130      ubuntu003

    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
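
Since the error points at localhost/127.0.0.1, here is, for reference, a quick shell check of how these names resolve on the master and on each slave (ubuntu is the master hostname used in the Hadoop configs below); this is only a sanity check, not a fix:

    # what this machine calls itself
    hostname
    # how that name and the master's name resolve via /etc/hosts
    getent hosts $(hostname)
    getent hosts ubuntu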

/usr/local/hadoop/etc/hadoop/core-site.xml:

<configuration>
    <property>
       <name>fs.default.name</name>
       <value>hdfs://ubuntu:9000</value>
    </property>
    </configuration>

/usr/local/hadoop/etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
       <name>dfs.replication</name>
       <value>3</value>
    </property>

    <property>
       <name>dfs.namenode.name.dir</name>
       <value> file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
    </property>

    <property>
       <name>dfs.namenode.rpc-bind-host</name>
       <value>0.0.0.0</value>
    </property>

    <property>
       <name>dfs.namenode.servicerpc-bind-host</name>
       <value>0.0.0.0</value>
    </property>

    <property>
       <name>dfs.namenode.lifeline.rpc-bind-host</name>
       <value>0.0.0.0</value>
    </property>

    <property>
       <name>dfs.namenode.http-bind-host</name>
       <value>0.0.0.0</value>
    </property>

    <property>
       <name>dfs.namenode.https-bind-host</name>
       <value>0.0.0.0</value>
    </property>

    </configuration>

/usr/local/hadoop/etc/hadoop/yarn-site.xml:

<configuration>

    <!-- Site specific YARN configuration properties -->
    <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
    </property>
    <property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
       <name>yarn.resourcemanager.resource-tracker.address</name>
       <value>ubuntu:8025</value>
    </property>
    <property>
       <name>yarn.resourcemanager.scheduler.address</name>
       <value>ubuntu:8030</value>
    </property>
    <property>
       <name>yarn.resourcemanager.address</name>
       <value>ubuntu:8050</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>ubuntu</value>
    </property>

    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>5</value>
    </property>

    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>

    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>

    <!--
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>0.0.0.0:8088</value>
    </property>
    -->

    <property>
        <name>yarn.resourcemanager.bind-host</name>
        <value>0.0.0.0</value>
    </property>

    </configuration>
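
As a sanity check of this YARN configuration, one could confirm from the master that the slave NodeManagers actually registered with the ResourceManager at the addresses above (standard YARN CLI; nothing here is Spark-specific):

    # list every NodeManager known to the ResourceManager and its state
    yarn node -list -all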

/usr/local/hadoop/etc/hadoop/mapred-site.xml:

<configuration>

    <property>
       <name>mapred.job.tracker</name>
       <value>ubuntu:54311</value>
    </property>
    </configuration>

jps on the master:

20624 NameNode
    20898 SecondaryNameNode
    21836 Jps
    21404 SparkSubmit
    21086 ResourceManager

jps on a slave:

9365 DataNode
    9948 Jps
    9535 NodeManager

netstat -tpnl | grep java:

(Not all processes could be identified, non-owned process info
     will not be shown, you would have to be root to see it all.)
    tcp        0      0 0.0.0.0:9000            0.0.0.0:*               LISTEN      20624/java          
    tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      20898/java          
    tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      20624/java          
    tcp6       0      0 :::8050                 :::*                    LISTEN      21086/java          
    tcp6       0      0 127.0.0.1:43319         :::*                    LISTEN      21404/java          
    tcp6       0      0 :::8088                 :::*                    LISTEN      21086/java          
    tcp6       0      0 :::8025                 :::*                    LISTEN      21086/java          
    tcp6       0      0 :::8030                 :::*                    LISTEN      21086/java          
    tcp6       0      0 :::8033                 :::*                    LISTEN
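
Note that the SparkSubmit process (pid 21404) is listening only on 127.0.0.1 (port 43319 here; the pasted error shows 46311, presumably from a different run), which may be why the ApplicationMaster on a slave cannot connect back to the driver. As a diagnostic only, one could relaunch with the driver address pinned explicitly; this is a sketch assuming the master is reachable from the slaves as ubuntu, and spark.driver.bindAddress requires Spark 2.1 or later:

    # advertise the driver on the master hostname instead of localhost
    HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop/ pyspark --master yarn --deploy-mode client \
        --conf spark.driver.host=ubuntu \
        --conf spark.driver.bindAddress=0.0.0.0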

Do you know how I can solve my problem? Thank you!!!
