"name": "hdfswriter",
"parameter": {
"defaultFS": "hdfs://cluster",
"hadoopConfig": {
"dfs.nameservices": "cluster",
"dfs.ha.namenodes.cluster": "nn1,nn2",
"dfs.namenode.rpc-address.cluster.nn1": "ha01:8020",
"dfs.namenode.rpc-address.cluster.nn2": "ha04:8020",
"dfs.client.failover.proxy.provider.cluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
},
"fieldDelimiter": "\u0001",
"fileName": "data",
"fileType": "orc",
"path": "/apps/hive/warehouse/mlg.db/ad_info_tao",
"writeMode": "truncate",
5 answers
prdp8dxp1#
Configure /etc/hosts so that the NameNode hostnames (ha01, ha04) resolve; see the sketch below.
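A minimal sketch of that fix, assuming the machine running DataX cannot resolve the hostnames used in the hadoopConfig above (the IP addresses are placeholders; substitute your cluster's real ones):

    # Map the HA NameNode hostnames from the job config to their IPs.
    # 192.168.0.11 / 192.168.0.14 are hypothetical addresses.
    echo "192.168.0.11  ha01" | sudo tee -a /etc/hosts
    echo "192.168.0.14  ha04" | sudo tee -a /etc/hosts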
wxclj1h52#
Same question here. With hdfsreader configured like the snippet above, I get the same error.
jjhzyzn03#
I've solved it: put the three files hdfs-site.xml, core-site.xml, and hive-site.xml inside the hdfswriter jar; see the sketch below.
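If you prefer the command line over an archive GUI, the JDK's jar tool can update the archive in place. A sketch, assuming the three XML files sit in the current directory and the writer plugin jar lives under a DataX install at /opt/datax with the usual snapshot name (both the install path and the jar name are assumptions, mirroring the reader path in 4#):

    # "jar uf" updates an existing archive, adding the files at its root.
    # The plugin path below is an assumed typical DataX layout.
    jar uf /opt/datax/plugin/writer/hdfswriter/hdfswriter-0.0.1-SNAPSHOT.jar \
        hdfs-site.xml core-site.xml hive-site.xml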
km0tfn4u4#
Use WinRAR to pack the three files hdfs-site.xml, core-site.xml, and hive-site.xml into datax/plugin/reader/hdfsreader/hdfsreader-0.0.1-SNAPSHOT.jar. Thanks 🙏 @lijufeng2016
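On Linux the same edit works with the zip command, since a .jar is just a ZIP archive. A sketch assuming the three XML files are in the current directory and the datax directory is reachable from it:

    # Adds the three config files at the root of the reader plugin jar.
    # zip stores entries under the names given, so run it where the files live.
    zip datax/plugin/reader/hdfsreader/hdfsreader-0.0.1-SNAPSHOT.jar \
        hdfs-site.xml core-site.xml hive-site.xml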
w51jfk4q5#
Confirmed working. hdfs-site.xml, core-site.xml, and hive-site.xml can be downloaded from Cloudera Manager as the Hive client configuration.
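For reference, that bundle can be fetched either from the CM UI (Hive service > Actions > Download Client Configuration) or via the Cloudera Manager REST API; in the sketch below the host, port, API version, credentials, and cluster/service names are all placeholders for your environment:

    # Fetches a zip containing hive-site.xml, core-site.xml, hdfs-site.xml, etc.
    # Host, credentials, API version, and cluster/service names are assumptions.
    curl -u admin:admin -o hive-clientconfig.zip \
        "http://cm-host:7180/api/v19/clusters/Cluster%201/services/hive/clientConfig"
    unzip hive-clientconfig.zip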