Hadoop distributed cache: FileNotFoundException

cuxqih21 · posted 2021-06-03 in Hadoop

I am trying to implement k-means on MapReduce. I have uploaded the initial centroid file to the distributed cache.

In the driver class:

DistributedCache.addCacheFile(new URI("GlobalCentroidFile"), conf);

In my Mapper class:

Path[] localFiles = DistributedCache.getLocalCacheFiles(job);
File file = new File(localFiles[0].getName());
System.out.println(" File read is " + localFiles[0].getName());
try {
    BufferedReader bufferedReader = new BufferedReader(new FileReader(file));
    System.out.println("Goin in while loop");
    ....
    // some code omitted
    ....
} catch (IOException e) {
    System.out.println("\n" + e);
}

The output in $HADOOP_HOME/logs/ is:

File read is localhostGlobalCentroidFile
java.io.FileNotFoundException: localhostGlobalCentroidFile (No such file or directory)

Yet when I do:

ganesh@ganesh-PC:~/Desktop$ hadoop fs -ls

Warning: $HADOOP_HOME is deprecated.

Found 4 items
-rw-r--r--   1 ganesh supergroup         26 2013-04-02 16:15 /user/ganesh/GlobalCentroidFile
-rw-r--r--   1 ganesh supergroup         18 2013-04-02 16:16 /user/ganesh/GlobalCentroidFile1
-rw-r--r--   1 ganesh supergroup        672 2013-04-02 16:15 /user/ganesh/input
drwxr-xr-x   - ganesh supergroup          0 2013-04-02 16:16 /user/ganesh/output

ganesh@ganesh-PC:~/Desktop$ hadoop fs -cat GlobalCentroidFile

Warning: $HADOOP_HOME is deprecated.

2.3    4.3
34.4    33.3
45.5    34

What could be the problem?

s5a0g9ez1#

Your code should look like the snippet below. You are trying to read a file that lives in HDFS using the regular Java file-reading constructs, and that will not work.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
int bytesRead = 0;
byte[] buffer = new byte[2048];

// Open the file through the HDFS FileSystem API, not java.io.File.
Path inFile = new Path(argv[0]);
if (fs.exists(inFile)) {
    FSDataInputStream in = fs.open(inFile);
    while ((bytesRead = in.read(buffer)) > 0) {
        // Read logic: process `bytesRead` bytes from `buffer`.
    }
    in.close();
}
