ArrayIndexOutOfBoundsException

ki1q1bka · posted 2021-05-30 in Hadoop
Follow (0) | Answers (2) | Views (190)

I am trying to run the k-means algorithm on Hadoop using Eclipse, following this program:
http://www.slideshare.net/titusdamaiyanti/hadoop-installation-k-means-clustering-mapreduce?qid=44b5881c-089d-474b-b01d-c35a2f91cc67&v=qf1&b=&from_search=1#likes-panel
The data is hard-coded in the program, so no external data file is needed. When I run it, I get an ArrayIndexOutOfBoundsException in the DistanceMeasurer method, and I don't understand why. Here is the DistanceMeasurer code:

package com.clustering.model;

public class DistanceMeasurer {
    public static final double measureDistance(ClusterCenter center, Vector v) {
        double sum = 0;
        int length = v.getVector().length;
        for (int i = 0; i < length; i++) {
            sum += Math.abs(center.getCenter().getVector()[i] - v.getVector()[i]);
        }
        return sum;
    }
}
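The failure mode can be reproduced without any of the Hadoop machinery. This is a minimal standalone sketch (hypothetical: plain double[] arrays stand in for the original ClusterCenter/Vector wrappers) showing how the loop throws as soon as the center array is shorter than the vector:

```java
// Minimal reproduction of the exception: the loop bound is taken from
// v only, so a shorter center array is indexed past its end.
public class DistanceRepro {
    static double measureDistance(double[] center, double[] v) {
        double sum = 0;
        int length = v.length; // only v's length is checked
        for (int i = 0; i < length; i++) {
            sum += Math.abs(center[i] - v[i]); // throws when center is shorter
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] center = {1.0};     // 1-dimensional center
        double[] v = {2.0, 3.0};     // 2-dimensional vector
        try {
            measureDistance(center, v);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught ArrayIndexOutOfBoundsException");
        }
    }
}
```

The "1" in the exception message is the first out-of-range index, which matches a 1-dimensional center being compared against a longer vector.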

The console output in Eclipse looks like this:

15/03/18 12:26:15 INFO input.FileInputFormat: Total input paths to process : 1

15/03/18 12:26:16 INFO mapred.JobClient: Running job: job_local1627424039_0001

15/03/18 12:26:16 INFO mapred.LocalJobRunner: Waiting for map tasks
15/03/18 12:26:16 INFO mapred.LocalJobRunner: Starting task: attempt_local1627424039_0001_m_000000_0

15/03/18 12:26:16 INFO util.ProcessTree: setsid exited with exit code 0
15/03/18 12:26:16 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@a0e0e1
15/03/18 12:26:16 INFO mapred.MapTask: Processing split: file:/home/hduser/workspace/KMeansClustering/files/clustering/import/data:0+558
15/03/18 12:26:16 INFO mapred.MapTask: io.sort.mb = 100
15/03/18 12:26:16 INFO mapred.MapTask: data buffer = 79691776/99614720
15/03/18 12:26:16 INFO mapred.MapTask: record buffer = 262144/327680
15/03/18 12:26:17 INFO compress.CodecPool: Got brand-new decompressor
15/03/18 12:26:17 INFO mapred.JobClient:  map 0% reduce 0%

15/03/18 12:26:17 INFO compress.CodecPool: Got brand-new decompressor
15/03/18 12:26:17 INFO mapred.MapTask: Starting flush of map output
15/03/18 12:26:17 INFO mapred.LocalJobRunner: Map task executor complete.
15/03/18 12:26:17 WARN mapred.LocalJobRunner: job_local1627424039_0001

java.lang.Exception: java.lang.ArrayIndexOutOfBoundsException: 1
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
    at com.clustering.model.DistanceMeasurer.measureDistance(DistanceMeasurer.java:9)
    at com.clustering.mapreduce.KMeansMapper.map(KMeansMapper.java:56)
    at com.clustering.mapreduce.KMeansMapper.map(KMeansMapper.java:1)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
15/03/18 12:26:18 INFO mapred.JobClient: Job complete: job_local1627424039_0001
15/03/18 12:26:18 INFO mapred.JobClient: Counters: 0

Please help me fix this. Thanks.

hivapdat1#

Are you sure that the center and the vector have the same dimension? Why not print out both lengths before the loop?
Also, as an aside: why are you using the L1 distance?
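Following that suggestion, here is a hedged sketch of a fail-fast variant (hypothetical: plain double[] arrays stand in for the ClusterCenter/Vector wrappers) that validates the dimensions up front and reports them in the error message, instead of throwing an opaque ArrayIndexOutOfBoundsException deep inside the loop:

```java
// Defensive variant: validate dimensions before computing the distance,
// so a mismatch produces a self-describing error instead of an
// ArrayIndexOutOfBoundsException.
public class DistanceCheck {
    static double measureDistance(double[] center, double[] v) {
        if (center.length != v.length) {
            throw new IllegalArgumentException(
                "dimension mismatch: center=" + center.length + ", v=" + v.length);
        }
        double sum = 0;
        for (int i = 0; i < v.length; i++) {
            sum += Math.abs(center[i] - v[i]); // L1 (Manhattan) distance
        }
        return sum;
    }

    public static void main(String[] args) {
        // |0-3| + |0-4| = 7.0
        System.out.println(measureDistance(new double[]{0, 0}, new double[]{3, 4}));
    }
}
```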

63lcw9qa2#

Your loop condition is wrong: it should account for the lengths of both arrays. You can AND the two length conditions together, or adjust the condition to suit your requirements:

int length = v.getVector().length;
for (int i = 0; i < length && i < center.getCenter().getVector().length; i++) {
    sum += Math.abs(center.getCenter().getVector()[i] - v.getVector()[i]);
}
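As a runnable illustration of this guarded loop (hypothetical: plain double[] arrays stand in for the ClusterCenter/Vector wrappers), the mismatched inputs from the question no longer throw:

```java
// The guarded loop iterates only over indices valid in BOTH arrays,
// so mismatched dimensions no longer throw.
public class GuardedDistance {
    static double measureDistance(double[] center, double[] v) {
        double sum = 0;
        for (int i = 0; i < v.length && i < center.length; i++) {
            sum += Math.abs(center[i] - v[i]);
        }
        return sum;
    }

    public static void main(String[] args) {
        // center has 1 component, v has 2: only index 0 is compared,
        // giving |1.0 - 2.0| = 1.0
        System.out.println(measureDistance(new double[]{1.0}, new double[]{2.0, 3.0}));
    }
}
```

Note the trade-off: the guard suppresses the exception but silently ignores the extra components. If the mismatch comes from how the hard-coded data builds the center and the vectors, fixing the data so all vectors have the same dimension is the more robust fix.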
