I'm using SWF to run a workflow that creates an EMR cluster and runs a Pig script on it. I'm trying to run this with Pig 0.12.0 and Hadoop 2.4.0, and at the point where the script stores to our MySQL database in RDS, using org.apache.pig.piggybank.storage.DBStorage, an exception is thrown:
2015-05-26 14:36:47,057 [main] ERROR org.apache.pig.piggybank.storage.DBStorage -
can't load DB driver:com.mysql.jdbc.Driver
java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:191)
at org.apache.pig.piggybank.storage.DBStorage.<init>(DBStorage.java:66)
This previously worked with Pig 0.11.1 and Hadoop 1.0.3. The SWF workflow and activities are written in Java, using Java AWS SDK version 1.9.19. Searching the wider internet suggests that PIG_CLASSPATH needs to be modified to include the MySQL connector JAR - currently the script includes
REGISTER $LIB_PATH/mysql-connector-java-5.1.26.jar;
where $LIB_PATH is an S3 location, but it has been suggested that this is no longer sufficient for Pig 0.12.0 + Hadoop 2.4.0.
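For context, the store step in the script looks roughly like this (the connection details below are placeholders, not the real values). Note from the stack trace that DBStorage calls Class.forName in its constructor (DBStorage.java:66), i.e. the driver class must already be visible to Pig's own classloader when the script is parsed, not merely REGISTERed:

```pig
REGISTER $LIB_PATH/mysql-connector-java-5.1.26.jar;

-- Placeholder connection details for illustration; DBStorage tries to load
-- com.mysql.jdbc.Driver in its constructor, before the REGISTERed JAR helps.
STORE results INTO 'ignored' USING org.apache.pig.piggybank.storage.DBStorage(
    'com.mysql.jdbc.Driver',
    'jdbc:mysql://our-rds-endpoint:3306/ourdb', 'user', 'password',
    'INSERT INTO results (id, value) VALUES (?, ?)');
```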
The code that constructs the request to launch the cluster looks like this:
public final RunJobFlowRequest constructRequest(final List<String> params) {
    ConductorContext config = ContextHolder.get();

    final JobFlowInstancesConfig instances = new JobFlowInstancesConfig().withInstanceCount(config.getEmrInstanceCount())
            .withMasterInstanceType(config.getEmrMasterType()).withSlaveInstanceType(config.getEmrSlaveType())
            .withKeepJobFlowAliveWhenNoSteps(false).withHadoopVersion(config.getHadoopVersion());

    if (!StringUtils.isBlank(config.getEmrEc2SubnetId())) {
        instances.setEc2SubnetId(config.getEmrEc2SubnetId());
    }

    final BootstrapActionConfig bootStrap = new BootstrapActionConfig().withName("Bootstrap Pig").withScriptBootstrapAction(
            new ScriptBootstrapActionConfig().withPath(config.getEmrBootstrapPath()).withArgs(config.getEmrBootstrapArgs()));

    final StepFactory stepFactory = new StepFactory();
    final List<StepConfig> steps = new LinkedList<>();

    steps.add(new StepConfig().withName("Enable Debugging").withActionOnFailure(ActionOnFailure.TERMINATE_JOB_FLOW)
            .withHadoopJarStep(stepFactory.newEnableDebuggingStep()));
    steps.add(new StepConfig().withName("Install Pig").withActionOnFailure(ActionOnFailure.TERMINATE_JOB_FLOW)
            .withHadoopJarStep(stepFactory.newInstallPigStep(config.getPigVersion())));

    for (final PigScript originalScript : config.getScripts()) {
        ArrayList<String> newParams = new ArrayList<>();
        newParams.addAll(Arrays.asList(originalScript.getScriptParams()));
        newParams.addAll(params);

        final PigScript script = new PigScript(originalScript.getName(), originalScript.getScriptUrl(),
                AWSHelper.burstParameters(newParams.toArray(new String[newParams.size()])));

        steps.add(new StepConfig()
                .withName(script.getName())
                .withActionOnFailure(ActionOnFailure.CONTINUE)
                .withHadoopJarStep(
                        stepFactory.newRunPigScriptStep(script.getScriptUrl(), config.getPigVersion(), script.getScriptParams())));
    }

    final RunJobFlowRequest request = new RunJobFlowRequest().withName(makeRunJobName()).withSteps(steps).withVisibleToAllUsers(true)
            .withBootstrapActions(bootStrap).withLogUri(config.getEmrLogUrl()).withInstances(instances);

    return request;
}
In my case, the solution was to modify the shell script used when bootstrapping the cluster so that the appropriate JAR is copied into place:
wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.34/mysql-connector-java-5.1.34.jar -O $PIG_CLASSPATH/mysql-connector-java-5.1.34.jar
So, to summarize: for Hadoop 2.4.0 and Pig 0.12.0, registering the JAR in the script is no longer enough. The JAR must be available when Pig itself is invoked, by making sure it is located in $PIG_CLASSPATH.
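To put the fix in context, here is a minimal sketch of the relevant part of a bootstrap action. The default lib directory and connector version are assumptions for illustration; adjust them for your AMI. The sketch builds the download command and echoes it rather than running it:

```shell
#!/bin/bash
# Sketch of a bootstrap-action fragment (paths and version are assumptions).
# The goal: place the MySQL connector JAR where Pig's own classloader sees it
# at startup, since DBStorage loads com.mysql.jdbc.Driver in its constructor.
CONNECTOR_VERSION="5.1.34"
CONNECTOR_JAR="mysql-connector-java-${CONNECTOR_VERSION}.jar"
PIG_LIB="${PIG_CLASSPATH:-/home/hadoop/pig/lib}"   # hypothetical default location
URL="http://central.maven.org/maven2/mysql/mysql-connector-java/${CONNECTOR_VERSION}/${CONNECTOR_JAR}"

# In the real bootstrap script this command is executed, not echoed:
echo "wget ${URL} -O ${PIG_LIB}/${CONNECTOR_JAR}"
```

On a real cluster, run the wget instead of echoing it; the key point is only that the JAR lands in a directory on $PIG_CLASSPATH before any Pig step starts.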