Running a DistCp Java job with Hadoop

pn9klfpd · posted 2021-05-31 · Hadoop

I want to copy files from HDFS to an S3 bucket using Java code. My Java implementation looks like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;
import org.apache.hadoop.tools.OptionsParser;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HdfsToS3DistCp {

    private static final Logger logger = LoggerFactory.getLogger(HdfsToS3DistCp.class);

    // Filled in from my environment; actual values omitted here
    private static String hdfsUrl;
    private static String hdfsUser;
    private static String s3AccessKey;
    private static String s3SecretKey;
    private static String s3EndPoint;
    private static String srcDir;
    private static String dstDir;

    private static void setHadoopConfiguration(Configuration conf) {
        conf.set("fs.defaultFS", hdfsUrl);
        conf.set("fs.s3a.access.key", s3AccessKey);
        conf.set("fs.s3a.secret.key", s3SecretKey);
        conf.set("fs.s3a.endpoint", s3EndPoint);
        conf.set("hadoop.job.ugi", hdfsUser);
        System.setProperty("com.amazonaws.services.s3.enableV4", "true");
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        setHadoopConfiguration(conf);
        try {
            // Parse source and destination into DistCp options, then run the copy
            DistCpOptions distCpOptions = OptionsParser.parse(new String[]{srcDir, dstDir});
            DistCp distCp = new DistCp(conf, distCpOptions);
            distCp.execute();
        } catch (Exception e) {
            logger.info("Exception occurred while copying file {}", srcDir);
            logger.error("Error ", e);
        }
    }
}

This code works, but the problem is that it does not launch the DistCp job on the YARN cluster. It starts the LocalJobRunner instead, so large file copies time out:

[2020-08-23 21:16:53.759][LocalJobRunner Map Task Executor #0][INFO][S3AFileSystem:?] Getting path status for s3a://***.distcp.tmp.attempt_local367303638_0001_m_000000_0 (***.distcp.tmp.attempt_local367303638_0001_m_000000_0)
[2020-08-23 21:16:53.922][LocalJobRunner Map Task Executor #0][INFO][S3AFileSystem:?] Delete path s3a://***.distcp.tmp.attempt_local367303638_0001_m_000000_0 - recursive false
[2020-08-23 21:16:53.922][LocalJobRunner Map Task Executor #0][INFO][S3AFileSystem:?] Getting path status for s3a://***.distcp.tmp.attempt_local367303638_0001_m_000000_0 (**.distcp.tmp.attempt_local367303638_0001_m_000000_0)
[2020-08-23 21:16:54.007][LocalJobRunner Map Task Executor #0][INFO][S3AFileSystem:?] Getting path status for s3a://****
[2020-08-23 21:16:54.118][LocalJobRunner Map Task Executor #0][ERROR][RetriableCommand:?] Failure in Retriable command: Copying hdfs://***to s3a://***
com.amazonaws.SdkClientException: Unable to execute HTTP request: Read timed out
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1189)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1135)

Please help me understand how to configure the YARN configs so that the DistCp job runs on the cluster rather than locally.
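For reference, my current understanding (which may be wrong) is that with a bare new Configuration() and none of the cluster's *-site.xml files on the classpath, mapreduce.framework.name defaults to "local", which is why the LocalJobRunner is used. The sketch below shows the kind of settings I think would make the job go to YARN instead; the ResourceManager host and config file paths are placeholders for my environment, not verified values.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class YarnSubmitConfig {

    // Point MapReduce submission at the YARN cluster instead of the local runner.
    // "resourcemanager-host" is a placeholder for the actual ResourceManager host.
    static void setYarnConfiguration(Configuration conf) {
        conf.set("mapreduce.framework.name", "yarn");                            // defaults to "local" without cluster configs
        conf.set("yarn.resourcemanager.hostname", "resourcemanager-host");
        conf.set("yarn.resourcemanager.address", "resourcemanager-host:8032");   // 8032 is the default RM port

        // Alternative: load the cluster's own site files so all YARN/MapReduce
        // settings come from the cluster config directory (path is a placeholder).
        // conf.addResource(new Path("/etc/hadoop/conf/yarn-site.xml"));
        // conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
    }
}

Calling something like setYarnConfiguration(conf) before constructing DistCp, or simply putting the cluster's core-site.xml / yarn-site.xml / mapred-site.xml on the application classpath, is what I would try; I am not sure whether anything else is needed for the job to be accepted by the cluster.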
