I am using SWF to run a workflow that creates an EMR cluster, on which a Pig script is run. I am trying to run this script with Pig 0.12.0 and Hadoop 2.4.0, and the following exception is thrown when the script attempts to store to a MySQL database in RDS using org.apache.pig.piggybank.storage.DBStorage:
2015-05-26 14:36:47,057 [main] ERROR org.apache.pig.piggybank.storage.DBStorage - can't load DB driver:com.mysql.jdbc.Driver
java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:191)
    at org.apache.pig.piggybank.storage.DBStorage.<init>(DBStorage.java:66)
This previously worked with Pig 0.11.1 and Hadoop 1.0.3. The SWF workflow and activities are written in Java, using the AWS SDK for Java version 1.9.19. Searching the wider internet suggests that PIG_CLASSPATH needs to be modified to include the MySQL connector JAR. Currently the script includes
REGISTER $LIB_PATH/mysql-connector-java-5.1.26.jar;
where $LIB_PATH is an S3 location, but it is suggested that this is no longer sufficient for Pig 0.12.0 + Hadoop 2.4.0.
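For context, the failing store looks roughly like the following sketch (the alias, connection URL, credentials, and INSERT statement here are hypothetical placeholders, not the actual script):

REGISTER $LIB_PATH/mysql-connector-java-5.1.26.jar;
-- DBStorage calls Class.forName() on the driver name in its constructor
-- (DBStorage.java:66), which is where the ClassNotFoundException above is thrown
STORE results INTO 'ignored' USING org.apache.pig.piggybank.storage.DBStorage(
    'com.mysql.jdbc.Driver',
    'jdbc:mysql://my-rds-endpoint:3306/mydb', 'user', 'password',
    'INSERT INTO results (col1, col2) VALUES (?, ?)');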
The code that builds the request used to launch the cluster looks like this:
public final RunJobFlowRequest constructRequest(final List<String> params) {
    ConductorContext config = ContextHolder.get();

    final JobFlowInstancesConfig instances = new JobFlowInstancesConfig().withInstanceCount(config.getEmrInstanceCount())
            .withMasterInstanceType(config.getEmrMasterType()).withSlaveInstanceType(config.getEmrSlaveType())
            .withKeepJobFlowAliveWhenNoSteps(false).withHadoopVersion(config.getHadoopVersion());

    if (!StringUtils.isBlank(config.getEmrEc2SubnetId())) {
        instances.setEc2SubnetId(config.getEmrEc2SubnetId());
    }

    // Bootstrap action: runs a shell script from S3 on each node before any step executes
    final BootstrapActionConfig bootStrap = new BootstrapActionConfig().withName("Bootstrap Pig").withScriptBootstrapAction(
            new ScriptBootstrapActionConfig().withPath(config.getEmrBootstrapPath()).withArgs(config.getEmrBootstrapArgs()));

    final StepFactory stepFactory = new StepFactory();
    final List<StepConfig> steps = new LinkedList<>();

    steps.add(new StepConfig().withName("Enable Debugging").withActionOnFailure(ActionOnFailure.TERMINATE_JOB_FLOW)
            .withHadoopJarStep(stepFactory.newEnableDebuggingStep()));

    steps.add(new StepConfig().withName("Install Pig").withActionOnFailure(ActionOnFailure.TERMINATE_JOB_FLOW)
            .withHadoopJarStep(stepFactory.newInstallPigStep(config.getPigVersion())));

    // One step per configured Pig script, with the workflow params appended to each script's own params
    for (final PigScript originalScript : config.getScripts()) {
        ArrayList<String> newParams = new ArrayList<>();
        newParams.addAll(Arrays.asList(originalScript.getScriptParams()));
        newParams.addAll(params);

        final PigScript script = new PigScript(originalScript.getName(), originalScript.getScriptUrl(),
                AWSHelper.burstParameters(newParams.toArray(new String[newParams.size()])));

        steps.add(new StepConfig()
                .withName(script.getName())
                .withActionOnFailure(ActionOnFailure.CONTINUE)
                .withHadoopJarStep(
                        stepFactory.newRunPigScriptStep(script.getScriptUrl(), config.getPigVersion(), script.getScriptParams())));
    }

    final RunJobFlowRequest request = new RunJobFlowRequest().withName(makeRunJobName()).withSteps(steps).withVisibleToAllUsers(true)
            .withBootstrapActions(bootStrap).withLogUri(config.getEmrLogUrl()).withInstances(instances);
    return request;
}
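For completeness, the resulting request is then submitted through the EMR client. A minimal sketch of the call site, assuming default credential resolution with AWS SDK for Java 1.x (this surrounding code is hypothetical, not part of the actual workflow):

// Hypothetical call site: submit the job flow built by constructRequest()
final AmazonElasticMapReduce emr = new AmazonElasticMapReduceClient();
final RunJobFlowResult result = emr.runJobFlow(constructRequest(params));
System.out.println("Started job flow " + result.getJobFlowId());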
1 Answer
In my case, the solution was to modify the shell script used when bootstrapping the cluster so that the appropriate JAR was copied into place.

In summary: for Hadoop 2.4.0 and Pig 0.12.0, registering the JAR in the script is no longer enough. The JAR must be available when Pig itself is invoked, by making sure it is on $PIG_CLASSPATH.
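A minimal sketch of that kind of bootstrap change, assuming the connector JAR lives in S3 and a /home/hadoop layout (the bucket, paths, and connector version here are illustrative, not the exact script used):

#!/bin/bash
# Hypothetical bootstrap sketch: fetch the MySQL connector JAR and make it
# visible to Pig before any script step runs.
set -e
hadoop fs -copyToLocal s3://my-bucket/lib/mysql-connector-java-5.1.26.jar /home/hadoop/lib/
# Ensure every Pig invocation sees the JAR on its classpath
echo 'export PIG_CLASSPATH=/home/hadoop/lib/mysql-connector-java-5.1.26.jar:$PIG_CLASSPATH' >> /home/hadoop/.bashrc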