Unable to run a Pig script in Pentaho

ghg1uchk posted on 2021-05-30 in Hadoop

I am running Hadoop in distributed mode and want to execute a Pig script on the Hadoop cluster from a remote machine. To do this I am using Pentaho with the Pig Script Executor job entry. I set all of the step parameters, listed below (and sketched as plain Hadoop client properties right after the list):

- HDFS hostname: the Hadoop master node's name
- HDFS port: 8020
- Job tracker hostname: the name of another (slave) machine
- Job tracker port: 8021
- Pig script path
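For reference, these step settings map onto the standard Hadoop client connection properties that Pig ultimately uses. A minimal sketch of the equivalent pig.properties entries (the hostnames here are placeholders, not my real machine names):

# sketch only -- placeholder hostnames
fs.default.name=hdfs://hadoop-master:8020
mapred.job.tracker=slave-node:8021

On a Hadoop 2.x / YARN cluster the job tracker address is superseded by yarn.resourcemanager.address (commonly port 8032, or 8050 on some distributions), which may matter given the YarnClientImpl frames in the stack trace below.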
I followed the steps in this link: enter link description here
But the Pig script fails; below is the error log:

2015/03/27 16:10:20 – RepositoriesMeta – Reading repositories XML file: C:\Users\vijay.shinde\.kettle\repositories.xml
2015/03/27 16:10:21 – Version checker – OK
2015/03/27 16:10:45 – Spoon – Connected to metastore : pentaho, added to delegating metastore
2015/03/27 16:11:03 – Spoon – Spoon
2015/03/27 16:11:28 – Spoon – Starting job…

**2015/03/27 16:11:28 – Job_pig – Start of job execution**

2015/03/27 16:11:28 – Job_pig – Starting entry [Pig Script Executor]
2015/03/27 16:11:29 – Pig Script Executor – 2015/03/27 16:11:29 – Connecting to hadoop file system at: hdfs://server_name:8020
2015/03/27 16:11:31 – Pig Script Executor – 2015/03/27 16:11:31 – Connecting to map-reduce job tracker at:job_tracker:8021
2015/03/27 16:11:32 – Pig Script Executor – 2015/03/27 16:11:32 – Pig features used in the script: GROUP_BY,FILTER
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, DuplicateForEachColumnRewrite, FilterLogicExpressionSimplifier, GroupByConstParallelSetter, ImplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – File concatenation threshold: 100 optimistic? false
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – Choosing to move algebraic foreach to combiner
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – MR plan size before optimization: 1
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – MR plan size after optimization: 1
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – Pig script settings are added to the job
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – Reduce phase detected, estimating # of required reducers.
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=110
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – Setting Parallelism to 1
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – creating jar file Job9065727596293143224.jar
2015/03/27 16:11:38 – Pig Script Executor – 2015/03/27 16:11:38 – jar file Job9065727596293143224.jar created
2015/03/27 16:11:38 – Pig Script Executor – 2015/03/27 16:11:38 – Setting up single store job
2015/03/27 16:11:38 – Pig Script Executor – 2015/03/27 16:11:38 – Key [pig.schematuple] is false, will not generate code.
2015/03/27 16:11:38 – Pig Script Executor – 2015/03/27 16:11:38 – Starting process to move generated code to distributed cache
2015/03/27 16:11:38 – Pig Script Executor – 2015/03/27 16:11:38 – Setting key [pig.schematuple.classes] with classes to deserialize []
2015/03/27 16:11:39 – Pig Script Executor – 2015/03/27 16:11:39 – 1 map-reduce job(s) waiting for submission.

**2015/03/27 16:37:31 – Pig Script Executor – 2015/03/27 16:37:31 – 0% complete**

**2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.**
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – job null has failed! Stop running all dependent jobs
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – 100% complete
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – There is no log file to write to.
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – Backend error message during job submission

N/A filtered_records,grouped_records,max_temp,records   GROUP_BY,COMBINER   Message: java.net.ConnectException: Call From server_name/ip_address to server_name:8050 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.GeneratedConstructorAccessor26.newInstance(Unknown Source)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
2015/03/27 16:37:36 – Pig Script Executor – at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client.call(Client.java:1351)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client.call(Client.java:1300)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
2015/03/27 16:37:36 – Pig Script Executor – at com.sun.proxy.$Proxy21.getNewApplication(Unknown Source)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:167)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2015/03/27 16:37:36 – Pig Script Executor – at java.lang.reflect.Method.invoke(Method.java:483)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
2015/03/27 16:37:36 – Pig Script Executor – at com.sun.proxy.$Proxy22.getNewApplication(Unknown Source)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:127)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:135)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:175)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:229)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:355)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
2015/03/27 16:37:36 – Pig Script Executor – at java.security.AccessController.doPrivileged(Native Method)
2015/03/27 16:37:36 – Pig Script Executor – at javax.security.auth.Subject.doAs(Subject.java:422)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2015/03/27 16:37:36 – Pig Script Executor – at java.lang.reflect.Method.invoke(Method.java:483)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
2015/03/27 16:37:36 – Pig Script Executor – at java.lang.Thread.run(Thread.java:745)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
2015/03/27 16:37:36 – Pig Script Executor – Caused by: java.net.ConnectException: Connection refused: no further information
2015/03/27 16:37:36 – Pig Script Executor – at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
2015/03/27 16:37:36 – Pig Script Executor – at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client.call(Client.java:1318)
2015/03/27 16:37:36 – Pig Script Executor – … 30 more
2015/03/27 16:37:36 – Pig Script Executor – /data/pig/input/test1,

Input(s):
Failed to read data from "/data/pig/input/pigtest.txt"

Output(s):
Failed to produce result in "/data/pig/input/test1"

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
null
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – Failed!
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – ERROR 2244: Job failed, hadoop does not return any error message
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – There is no log file to write to.
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:148)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.hadoop.shim.common.CommonPigShim.executeScript(CommonPigShim.java:105)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.di.job.entries.pig.JobEntryPigScriptExecutor.execute(JobEntryPigScriptExecutor.java:492)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.di.job.Job.execute(Job.java:678)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.di.job.Job.execute(Job.java:815)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.di.job.Job.execute(Job.java:500)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.di.job.Job.run(Job.java:407)
2015/03/27 16:37:36 – Pig Script Executor – Num successful jobs: 0 num failed jobs: 1
2015/03/27 16:37:36 – Job_pig – Finished job entry [Pig Script Executor] (result=[false])
2015/03/27 16:37:36 – Job_pig – Job execution finished
2015/03/27 16:37:36 – Spoon – Job has ended.

Here is my Pig script:

records = LOAD '/data/pig/input/pigtest.txt' USING PigStorage(',') AS (year:chararray, temperature:int, quality:int);
filtered_records = FILTER records BY quality == 1;
grouped_records = GROUP filtered_records BY year;
max_temp = FOREACH grouped_records GENERATE group, MAX(filtered_records.temperature);
STORE max_temp INTO '/data/pig/input/test1';
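For completeness, the same script can also be run directly with the Pig command line from a machine that has the cluster's client configuration; a minimal sketch, assuming the script is saved as max_temp.pig (a made-up name) and the input file contains comma-separated records such as 1950,22,1 matching the LOAD schema:

# sketch only -- "max_temp.pig" is a placeholder file name for the script above
pig -x mapreduce max_temp.pig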

Thanks.

No answers yet.
