My source file is:
package hadoop;
import java.util.*;
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;
import javax.lang.model.util.Elements;
public class ProcessUnits
{
    // Mapper class
    public static class E_EMapper extends MapReduceBase implements
            Mapper<LongWritable, /* Input key type */
                   Text,         /* Input value type */
                   Text,         /* Output key type */
                   IntWritable>  /* Output value type */
    {
        // Map function
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output,
                        Reporter reporter) throws IOException
        {
            String line = value.toString();
            String lasttoken = null;
            StringTokenizer s = new StringTokenizer(line, "\t");
            String year = s.nextToken();
            while (s.hasMoreTokens())
            {
                lasttoken = s.nextToken();
            }
            int avgprice = Integer.parseInt(lasttoken);
            output.collect(new Text(year), new IntWritable(avgprice));
        }
    }

    // Reducer class
    public static class E_EReduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable>
    {
        // Reduce function
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output,
                           Reporter reporter) throws IOException
        {
            int maxavg = 30;
            int val = Integer.MIN_VALUE;
            while (values.hasNext())
            {
                if ((val = values.next().get()) > maxavg)
                {
                    output.collect(key, new IntWritable(val));
                }
            }
        }
    }

    // Main function
    public static void main(String args[]) throws Exception
    {
        JobConf conf = new JobConf(Eleunits.class);
        conf.setJobName("max_eletricityunits");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(E_EMapper.class);
        conf.setCombinerClass(E_EReduce.class);
        conf.setReducerClass(E_EReduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
When I compile it:
javac -classpath /home/javier/entrada/hadoop-core-1.2.1.jar -d /home/javier/units /home/javier/entrada/ProcessUnits.java
I get the following error:
javac -classpath /home/javier/entrada/hadoop-core-1.2.1.jar -d /home/javier/units /home/javier/entrada/ProcessUnits.java
/home/javier/entrada/ProcessUnits.java:72: error: cannot find symbol
JobConf conf = new JobConf(Eleunits.class);
^
symbol: class Eleunits
location: class ProcessUnits
1 error
My Hadoop version is 2.9.2 and my Java version is 1.8.0_191.
When I open the file in Eclipse and look through it, I don't find any import for Eleunits.class.
1 Answer
"My Hadoop version is 2.9.2 and my Java version is 1.8.0_191"

First of all, hadoop-core-1.2.1.jar was built long before Hadoop 2.9.2, so you need a JAR that matches the version you actually have installed.
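For example, instead of hard-coding an old JAR on the classpath, you can let the installed Hadoop supply its own classpath. This is just a sketch, assuming the hadoop launcher from your 2.9.2 installation is on the PATH; the source and output paths are the ones from the question:

    javac -classpath "$(hadoop classpath)" -d /home/javier/units /home/javier/entrada/ProcessUnits.java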
"When I open the file in Eclipse and look through it, I don't find any import for Eleunits.class"

It's not clear why you weren't using Eclipse all along! And writing Hadoop code without Maven or Gradle to pull in the correct library versions sounds frightening to me... but Eclipse probably isn't lying. The one class you've shown is not called Eleunits, and I don't know where that value came from, other than being copied from somewhere else.
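Presumably that JobConf line was meant to reference the class that actually exists in your file, i.e. something like:

    JobConf conf = new JobConf(ProcessUnits.class);

That constructor argument is only used to locate the JAR containing your job classes, so it should name a class that lives in your own JAR.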
Beyond that, the main class should be declared as extends Configured implements Tool, as you will find in other MapReduce examples.
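A minimal sketch of that driver pattern, keeping the old org.apache.hadoop.mapred API and the names from the question (E_EMapper and E_EReduce are assumed to be the same inner classes shown above):

    package hadoop;

    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class ProcessUnits extends Configured implements Tool
    {
        @Override
        public int run(String[] args) throws Exception
        {
            // Build the JobConf from the Configuration that ToolRunner has
            // already populated with any generic options (-D, -files, ...)
            JobConf conf = new JobConf(getConf(), ProcessUnits.class);
            conf.setJobName("max_eletricityunits");
            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);
            conf.setMapperClass(E_EMapper.class);
            conf.setReducerClass(E_EReduce.class);
            conf.setInputFormat(TextInputFormat.class);
            conf.setOutputFormat(TextOutputFormat.class);
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf);
            return 0;
        }

        public static void main(String[] args) throws Exception
        {
            // ToolRunner parses the generic options before calling run()
            System.exit(ToolRunner.run(new ProcessUnits(), args));
        }
    }

Passing getConf() into the JobConf means command-line settings such as -D mapred.reduce.tasks=2 are honored without any extra parsing code in main.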