I'm writing an MR job to mine web server logs. The job reads its input from text files and writes its output to a MySQL database. The problem is that the job completes successfully, but nothing is written to the database. I haven't done MR programming in a while, so it's probably a bug I just can't spot. It's not the pattern matching (see below); I've unit-tested that and it works fine. What am I missing? Environment: Mac OS X, Oracle JDK 1.8.0_31, hadoop-2.6.0.
Note: exceptions are logged in the catch blocks; I've omitted that code for brevity.
SkippableLogRecord:
public class SkippableLogRecord implements WritableComparable<SkippableLogRecord> {
    // fields

    public SkippableLogRecord(Text line) {
        readLine(line.toString());
    }

    private void readLine(String line) {
        Matcher m = PATTERN.matcher(line);
        boolean isMatchFound = m.matches() && m.groupCount() >= 5;
        if (isMatchFound) {
            try {
                jvm = new Text(m.group("jvm"));
                Calendar cal = getInstance();
                cal.setTime(new SimpleDateFormat(DATE_FORMAT).parse(m.group("date")));
                day = new IntWritable(cal.get(DAY_OF_MONTH));
                month = new IntWritable(cal.get(MONTH));
                year = new IntWritable(cal.get(YEAR));
                String p = decode(m.group("path"), UTF_8.name());
                root = new Text(p.substring(1, p.indexOf(FILE_SEPARATOR, 1)));
                filename = new Text(p.substring(p.lastIndexOf(FILE_SEPARATOR) + 1));
                path = new Text(p);
                status = new IntWritable(Integer.parseInt(m.group("status")));
                size = new LongWritable(Long.parseLong(m.group("size")));
            } catch (ParseException | UnsupportedEncodingException e) {
                isMatchFound = false;
            }
        }
    }

    public boolean isSkipped() {
        return jvm == null;
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        jvm.readFields(in);
        day.readFields(in);
        // more code
    }

    @Override
    public void write(DataOutput out) throws IOException {
        jvm.write(out);
        day.write(out);
        // more code
    }

    @Override
    public int compareTo(SkippableLogRecord other) {...}

    @Override
    public boolean equals(Object obj) {...}
}
Mapper:
public class LogMapper extends
        Mapper<LongWritable, Text, SkippableLogRecord, NullWritable> {
    @Override
    protected void map(LongWritable key, Text line, Context context) {
        SkippableLogRecord rec = new SkippableLogRecord(line);
        if (!rec.isSkipped()) {
            try {
                context.write(rec, NullWritable.get());
            } catch (IOException | InterruptedException e) {...}
        }
    }
}
Reducer:
public class LogReducer extends
        Reducer<SkippableLogRecord, NullWritable, DBRecord, NullWritable> {
    @Override
    protected void reduce(SkippableLogRecord rec,
            Iterable<NullWritable> values, Context context) {
        try {
            context.write(new DBRecord(rec), NullWritable.get());
        } catch (IOException | InterruptedException e) {...}
    }
}
DBRecord:
public class DBRecord implements Writable, DBWritable {
    // fields

    public DBRecord(SkippableLogRecord logRecord) {
        jvm = logRecord.getJvm().toString();
        day = logRecord.getDay().get();
        // more code for rest of the fields
    }

    @Override
    public void readFields(ResultSet rs) throws SQLException {
        jvm = rs.getString("jvm");
        day = rs.getInt("day");
        // more code for rest of the fields
    }

    @Override
    public void write(PreparedStatement ps) throws SQLException {
        ps.setString(1, jvm);
        ps.setInt(2, day);
        // more code for rest of the fields
    }
}
Driver:
public class Driver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver", // driver
                "jdbc:mysql://localhost:3306/aac", // db url
                "***", // user name
                "***"); // password
        Job job = Job.getInstance(conf, "log-miner");
        job.setJarByClass(getClass());
        job.setMapperClass(LogMapper.class);
        job.setReducerClass(LogReducer.class);
        job.setMapOutputKeyClass(SkippableLogRecord.class);
        job.setMapOutputValueClass(NullWritable.class);
        job.setOutputKeyClass(DBRecord.class);
        job.setOutputValueClass(NullWritable.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(DBOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        DBOutputFormat.setOutput(job, "log", // table name
                new String[] { "jvm", "day", "month", "year", "root",
                        "filename", "path", "status", "size" } // table columns
        );
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        GenericOptionsParser parser = new GenericOptionsParser(
                new Configuration(), args);
        ToolRunner.run(new Driver(), parser.getRemainingArgs());
    }
}
Job execution log:
15/02/28 02:17:58 INFO mapreduce.Job: map 100% reduce 100%
15/02/28 02:17:58 INFO mapreduce.Job: Job job_local166084441_0001 completed successfully
15/02/28 02:17:58 INFO mapreduce.Job: Counters: 35
File System Counters
    FILE: Number of bytes read=37074
    FILE: Number of bytes written=805438
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=476788498
    HDFS: Number of bytes written=0
    HDFS: Number of read operations=11
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=0
Map-Reduce Framework
    Map input records=482230
    Map output records=0
    Map output bytes=0
    Map output materialized bytes=12
    Input split bytes=210
    Combine input records=0
    Combine output records=0
    Reduce input groups=0
    Reduce shuffle bytes=12
    Reduce input records=0
    Reduce output records=0
    Spilled Records=0
    Shuffled Maps =2
    Failed Shuffles=0
    Merged Map outputs=2
    GC time elapsed (ms)=150
    Total committed heap usage (bytes)=1381498880
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
File Input Format Counters
    Bytes Read=171283337
File Output Format Counters
    Bytes Written=0
To answer my own question: the problem was whitespace that made the matcher fail. The unit tests never tested with leading whitespace, but for some reason the actual logs contained it. Another problem with the code posted above is that all of the fields in the class were initialized inside the readLine method. As @AnonyMousse pointed out, this is expensive, because Hadoop data types are designed to be reused. It also caused a bigger problem with serialization and deserialization: when Hadoop tried to reconstruct the key by calling readFields, every field was null, which led to an NPE. I also made a few other small improvements using some Java 8 classes and syntax. In the end, even though I got it working, I rewrote the code using Spring Boot, Spring Data JPA, and Spring's support for asynchronous processing with @Async.
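For anyone hitting the same thing, here is a rough sketch of what the first two fixes look like. It is not the exact code I ended up with: PATTERN, DATE_FORMAT, and the named groups are the same constants as in the question, the skipped flag is just one way to track whether the line parsed, and imports plus the remaining fields are elided as above.

public class SkippableLogRecord implements WritableComparable<SkippableLogRecord> {
    // Fields are created once and reused, so readFields() never dereferences a null Writable.
    private final Text jvm = new Text();
    private final IntWritable day = new IntWritable();
    // ... remaining fields, initialized the same way

    private boolean skipped = true;

    // No-arg constructor: Hadoop instantiates the key reflectively during the
    // shuffle and then populates it via readFields().
    public SkippableLogRecord() {
    }

    public void readLine(String line) {
        skipped = true;
        // Trim first: the real logs had leading whitespace that made matches() fail.
        Matcher m = PATTERN.matcher(line.trim());
        if (m.matches() && m.groupCount() >= 5) {
            try {
                jvm.set(m.group("jvm"));
                Calendar cal = Calendar.getInstance();
                cal.setTime(new SimpleDateFormat(DATE_FORMAT).parse(m.group("date")));
                day.set(cal.get(Calendar.DAY_OF_MONTH));
                // ... parse the remaining groups into the other fields
                skipped = false;
            } catch (ParseException e) {
                skipped = true;
            }
        }
    }

    public boolean isSkipped() {
        return skipped;
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        jvm.readFields(in);
        day.readFields(in);
        // more code
    }

    @Override
    public void write(DataOutput out) throws IOException {
        jvm.write(out);
        day.write(out);
        // more code
    }

    // compareTo() and equals() unchanged from the question
}

The mapper can then reuse a single key instance instead of allocating one per input line:

public class LogMapper extends
        Mapper<LongWritable, Text, SkippableLogRecord, NullWritable> {
    // One reusable key per mapper; the framework serializes it on each write().
    private final SkippableLogRecord rec = new SkippableLogRecord();

    @Override
    protected void map(LongWritable key, Text line, Context context)
            throws IOException, InterruptedException {
        rec.readLine(line.toString());
        if (!rec.isSkipped()) {
            context.write(rec, NullWritable.get());
        }
    }
}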