TApplicationException when running a MapReduce job on an Accumulo table

ppcbkaq5 asked on 2021-05-29 in Hadoop

I am running a MapReduce job that takes data from one Accumulo table as input and stores the results in another Accumulo table. For this I am using the AccumuloInputFormat and AccumuloOutputFormat classes. Here is the code:

public int run(String[] args) throws Exception {

        Opts opts = new Opts();
        opts.parseArgs(PivotTable.class.getName(), args);

        Configuration conf = getConf();

        // pass the formula through the job configuration so mappers/reducers can read it
        conf.set("formula", opts.formula);

        Job job = Job.getInstance(conf);

        job.setJobName("Pivot Table Generation");
        job.setJarByClass(PivotTable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        job.setMapperClass(PivotTableMapper.class);
        job.setCombinerClass(PivotTableCombiber.class);
        job.setReducerClass(PivotTableReducer.class);

        // read the job input from an Accumulo table
        job.setInputFormatClass(AccumuloInputFormat.class);

        ClientConfiguration zkConfig = new ClientConfiguration().withInstance(opts.getInstance().getInstanceName()).withZkHosts(opts.getInstance().getZooKeepers());

        AccumuloInputFormat.setInputTableName(job, opts.dataTable);
        AccumuloInputFormat.setZooKeeperInstance(job, zkConfig);
        AccumuloInputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));

        // write the job output to another Accumulo table
        job.setOutputFormatClass(AccumuloOutputFormat.class);

        BatchWriterConfig bwConfig = new BatchWriterConfig();

        AccumuloOutputFormat.setBatchWriterOptions(job, bwConfig);
        AccumuloOutputFormat.setZooKeeperInstance(job, zkConfig);
        AccumuloOutputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));
        AccumuloOutputFormat.setDefaultTableName(job, opts.pivotTable);
        AccumuloOutputFormat.setCreateTables(job, true);

        return job.waitForCompletion(true) ? 0 : 1;
    }

PivotTable is the name of the class that contains the main method (this run method is in it as well). I have also written the mapper, combiner, and reducer classes. But when I try to run the job, I get this error:

Exception in thread "main" java.io.IOException: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:707)
        at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:397)
        at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:668)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
        at com.latize.ulysses.accumulo.postprocess.PivotTable.run(PivotTable.java:247)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at com.latize.ulysses.accumulo.postprocess.PivotTable.main(PivotTable.java:251)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:87)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.hasTablePermission(SecurityOperationsImpl.java:220)
        at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:692)
        ... 21 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
        at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.recv_hasTablePermission(ClientService.java:641)
        at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.hasTablePermission(ClientService.java:624)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:223)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:220)
        at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:79)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:73)

Can someone tell me what I am doing wrong? Any help would be appreciated.
EDIT: I am running Accumulo 1.7.0.

Answer #1 (by j0pj023g)

A TApplicationException indicates that the error occurred on the Accumulo tablet server, not in your client (MapReduce) code. You will need to examine the tablet server logs to get more information about the specific error behind the TApplicationException.
Table permissions are usually retrieved from ZooKeeper, so this may indicate a problem with the tserver connecting to ZooKeeper.
Unfortunately, I don't see a hostname or IP in the stack trace, so you may have to check all of the tserver logs to find it.
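
To narrow down whether the problem lies with the credentials themselves or with the server side, it can help to issue the same permission check that InputConfigurator.validatePermissions performs, but from a small standalone client outside of MapReduce. Below is a minimal sketch using the standard Accumulo 1.7 client API; the instance name, ZooKeeper hosts, user, password, and table names are placeholders you would replace with your own values:

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.security.TablePermission;

public class CheckPermissions {
    public static void main(String[] args) throws Exception {
        // placeholder connection details -- replace with your instance name, ZooKeepers, and user
        ZooKeeperInstance instance = new ZooKeeperInstance("myInstance", "zkhost1:2181,zkhost2:2181");
        Connector conn = instance.getConnector("myUser", new PasswordToken("myPassword"));

        // this issues the same hasTablePermission RPC that failed in the stack trace
        boolean canRead = conn.securityOperations()
                .hasTablePermission("myUser", "myDataTable", TablePermission.READ);
        boolean canWrite = conn.securityOperations()
                .hasTablePermission("myUser", "myPivotTable", TablePermission.WRITE);

        System.out.println("READ on input table:   " + canRead);
        System.out.println("WRITE on output table: " + canWrite);
    }
}

If this standalone check throws the same TApplicationException, the problem is on the server side (tserver or its ZooKeeper connection), as described above; if it succeeds, the issue is more likely in how the MapReduce job itself is configured.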
