Tachyon 0.8.2 deployed with Hadoop 2.6.0, but the IPC versions do not match

Asked by 6za6bjd0 on 2021-05-29 in Hadoop

I want to deploy Tachyon 0.8.2 on Ubuntu 14.04. Hadoop and Spark are already deployed. On the master:

bd@master$ jps
11871 Jps
3388 Master
2919 NameNode
3266 ResourceManager
3123 SecondaryNameNode

On the slave:

bd@slave$ jps
4350 Jps
2778 NodeManager
2647 DataNode
2879 Worker

I edited tachyon-env.sh:

export TACHYON_MASTER_ADDRESS=${TACHYON_MASTER_ADDRESS:-master}
export TACHYON_UNDERFS_ADDRESS=${TACHYON_UNDERFS_ADDRESS:-hdfs://master:9000}
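
For the under-FS address to work, it has to point at the same address and port the NameNode actually listens on, i.e. the `fs.defaultFS` value (called `fs.default.name` on older Hadoop releases) in Hadoop's core-site.xml. A minimal fragment, assuming the NameNode is at `master:9000` as in the setting above:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```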

Then I ran bin/tachyon format and bin/tachyon-start.sh local. I cannot see a TachyonMaster process in jps:

/usr/local/bigdata/tachyon-0.8.2 [06:06:32]
bd$ bin/tachyon-start.sh local
Killed 0 processes on master
Killed 0 processes on master
Connecting to master as bd...
Killed 0 processes on master
Connection to master closed.
[sudo] password for bd:
Formatting RamFS: /mnt/ramdisk (512mb)
Starting master @ master
Starting worker @ master
/usr/local/bigdata/tachyon-0.8.2 [06:06:54]
bd$ jps
12183 TachyonWorker
3388 Master
2919 NameNode
3266 ResourceManager
3123 SecondaryNameNode
12203 Jps

Looking at the master log (master.log), it says:

2015-12-27 18:06:50,635 ERROR MASTER_LOGGER (MetricsConfig.java:loadConfigFile) - Error loading metrics configuration file.
2015-12-27 18:06:51,735 ERROR MASTER_LOGGER (HdfsUnderFileSystem.java:<init>) - Exception thrown when trying to get FileSystem for hdfs://master:9000
org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
    at org.apache.hadoop.ipc.Client.call(Client.java:1070)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
    at tachyon.underfs.hdfs.HdfsUnderFileSystem.<init>(HdfsUnderFileSystem.java:74)
    at tachyon.underfs.hdfs.HdfsUnderFileSystemFactory.create(HdfsUnderFileSystemFactory.java:30)
    at tachyon.underfs.UnderFileSystemRegistry.create(UnderFileSystemRegistry.java:116)
    at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:100)
    at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:83)
    at tachyon.master.TachyonMaster.connectToUFS(TachyonMaster.java:412)
    at tachyon.master.TachyonMaster.startMasters(TachyonMaster.java:280)
    at tachyon.master.TachyonMaster.start(TachyonMaster.java:261)
    at tachyon.master.TachyonMaster.main(TachyonMaster.java:64)
2015-12-27 18:06:51,742 ERROR MASTER_LOGGER (TachyonMaster.java:main) - Uncaught exception terminating Master
java.lang.IllegalArgumentException: All eligible Under File Systems were unable to create an instance for the given path: hdfs://master:9000
java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4

    at tachyon.underfs.UnderFileSystemRegistry.create(UnderFileSystemRegistry.java:132)
    at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:100)
    at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:83)
    at tachyon.master.TachyonMaster.connectToUFS(TachyonMaster.java:412)
    at tachyon.master.TachyonMaster.startMasters(TachyonMaster.java:280)
    at tachyon.master.TachyonMaster.start(TachyonMaster.java:261)
    at tachyon.master.TachyonMaster.main(TachyonMaster.java:64)

How can I fix this?

wz8daaqr1#

This exception is caused by a version mismatch between the Hadoop client bundled in Tachyon and the Hadoop server. "Server IPC version 9" corresponds to Hadoop 2.x, while "client version 4" corresponds to Hadoop 1.x, so your Tachyon binary was compiled against Hadoop 1.x. Check your Hadoop version, then recompile Tachyon against that version:

mvn -Dhadoop.version=your_hadoop_version clean install

For example:

mvn -Dhadoop.version=2.4.0 clean install

Now configure the Tachyon you just compiled, and it should work. See the reference link.
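
A sketch of the full sequence for the setup in the question (Hadoop 2.6.0), assuming the Tachyon 0.8.2 source tree is unpacked at /usr/local/bigdata/tachyon-0.8.2; the path and the -DskipTests flag are illustrative and may need adjusting:

```shell
# Confirm the Hadoop version the cluster is actually running
hadoop version | head -n1

# Rebuild Tachyon 0.8.2 against that exact version
# (-DskipTests only shortens the build; it does not change the artifacts)
cd /usr/local/bigdata/tachyon-0.8.2
mvn -Dhadoop.version=2.6.0 -DskipTests clean install

# Re-format and restart with the freshly built jars
bin/tachyon format
bin/tachyon-start.sh local
```

After restarting, jps on the master should show a TachyonMaster process alongside TachyonWorker.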
