How to save MapReduce reducer output without key-value pairs?



I am writing a MapReduce program to process DICOM images. The purpose of this MapReduce program is to process a DICOM image, extract the metadata from it, index it into Solr, and finally, in the reducer phase, save the original image in HDFS. I want to store the same file in HDFS as the reducer output.

I have implemented most of this, but in the reducer phase, storing the same file in HDFS does not work.

I have tested the processed DICOM file with a DICOM image viewer, and it says the file is compressed; the size of the processed DICOM file has also increased slightly. For example, the original DICOM file is 628 KB, and when the reducer saves it in HDFS its size changes to 630 KB.

I have tried the solutions from these links, but none of them gave the expected result.

Hadoop mapReduce How to store only values in HDFS

Hadoop - How to collect Text output without values

Here is the code that reads a DICOM file as a single file (without splitting it).

public class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        WholeFileRecordReader reader = new WholeFileRecordReader();
        reader.initialize(split, context);
        return reader;
    }
}

Custom record reader:

public class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {

    private FileSplit fileSplit;
    private Configuration conf;
    private BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {
        this.fileSplit = (FileSplit) split;
        this.conf = context.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (!processed) {
            byte[] contents = new byte[(int) fileSplit.getLength()];
            System.out.println("Inside nextKeyvalue");
            System.out.println(fileSplit.getLength());
            Path file = fileSplit.getPath();
            FileSystem fs = file.getFileSystem(conf);
            FSDataInputStream in = null;
            try {
                in = fs.open(file);
                IOUtils.readFully(in, contents, 0, contents.length);
                value.set(contents, 0, contents.length);
            } finally {
                IOUtils.closeStream(in);
            }
            processed = true;
            return true;
        }
        return false;
    }

    @Override
    public void close() throws IOException {
    }

    @Override
    public NullWritable getCurrentKey() throws IOException, InterruptedException {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return processed ? 1.0f : 0.0f;
    }
}

Mapper class. The mapper class works perfectly for our needs.

public class MapClass {

    public static class Map extends Mapper<NullWritable, BytesWritable, Text, BytesWritable> {

        @Override
        protected void map(NullWritable key, BytesWritable value,
                Mapper<NullWritable, BytesWritable, Text, BytesWritable>.Context context)
                throws IOException, InterruptedException {
            value.setCapacity(value.getLength());
            InputStream in = new ByteArrayInputStream(value.getBytes());
            ProcessDicom.metadata(in); // Process the DICOM image and extract metadata from it
            Text keyOut = getFileName(context);
            context.write(keyOut, value);
        }

        private Text getFileName(Mapper<NullWritable, BytesWritable, Text, BytesWritable>.Context context) {
            InputSplit spl = context.getInputSplit();
            Path filePath = ((FileSplit) spl).getPath();
            String fileName = filePath.getName();
            Text text = new Text(fileName);
            return text;
        }

        @Override
        protected void setup(Mapper<NullWritable, BytesWritable, Text, BytesWritable>.Context context)
                throws IOException, InterruptedException {
            super.setup(context);
        }
    }
}

Reducer class. This is the reducer class:

public class ReduceClass {

    public static class Reduce extends Reducer<Text, BytesWritable, BytesWritable, BytesWritable> {

        @Override
        protected void reduce(Text key, Iterable<BytesWritable> value,
                Reducer<Text, BytesWritable, BytesWritable, BytesWritable>.Context context)
                throws IOException, InterruptedException {
            Iterator<BytesWritable> itr = value.iterator();
            while (itr.hasNext()) {
                BytesWritable wr = itr.next();
                wr.setCapacity(wr.getLength());
                context.write(new BytesWritable(key.copyBytes()), itr.next());
            }
        }
    }
}

Main class:

public class DicomIndexer {

    public static void main(String[] argss) throws Exception {
        String args[] = {"file:///home/b3ds/storage/dd", "hdfs://192.168.38.68:8020/output"};
        run(args);
    }

    public static void run(String[] args) throws Exception {
        // Initialize the Hadoop job and set the jar as well as the name of the job
        Configuration conf = new Configuration();
        Job job = new Job(conf, "WordCount");
        job.setJarByClass(WordCount.class);
        // job.getConfiguration().set("mapreduce.output.basename", "hi");
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(BytesWritable.class);
        job.setOutputKeyClass(BytesWritable.class);
        job.setOutputValueClass(BytesWritable.class);
        job.setMapperClass(Map.class);
        job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(WholeFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        WholeFileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}

So I am completely at a loss about what to do. Some links say it is not possible because MapReduce works on key-value pairs, and some say to use NullWritable. So far I have tried NullWritable and SequenceFileOutputFormat, but neither worked.

Two things:

  1. By calling itr.next() twice, you are inadvertently consuming two elements at a time, which does not help.

  2. As you have identified, you are writing out a key and a value when you only want to write one. Use NullWritable for the value instead. Your reducer would then look like this:

    public static class Reduce extends Reducer<Text, BytesWritable, BytesWritable, NullWritable> {

        @Override
        protected void reduce(Text key, Iterable<BytesWritable> value,
                Reducer<Text, BytesWritable, BytesWritable, NullWritable>.Context context)
                throws IOException, InterruptedException {
            NullWritable nullWritable = NullWritable.get();
            Iterator<BytesWritable> itr = value.iterator();
            while (itr.hasNext()) {
                BytesWritable wr = itr.next();
                wr.setCapacity(wr.getLength());
                context.write(wr, nullWritable);
            }
        }
    }
    
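If you adopt this reducer, the driver needs two matching changes: the job's output value class must become NullWritable, and Reduce can no longer be used as the combiner (a combiner has to emit the map output types, Text/BytesWritable, which this reducer no longer does). Below is a minimal sketch of an adjusted run() method, assuming everything else from the question's driver stays the same (and pointing setJarByClass at DicomIndexer instead of the leftover WordCount reference):

    public static void run(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "DicomIndexer");
        job.setJarByClass(DicomIndexer.class);

        // Map output is unchanged: filename -> file bytes.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(BytesWritable.class);

        // The reducer now emits only the file bytes, with a NullWritable value.
        job.setOutputKeyClass(BytesWritable.class);
        job.setOutputValueClass(NullWritable.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        // No combiner: Reduce's output types no longer match the map output types.

        job.setInputFormatClass(WholeFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        WholeFileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }

Note that SequenceFileOutputFormat still wraps each record in its own container (file header, sync markers, and record framing), which is likely why the stored copy comes out a couple of kilobytes larger than the original 628 KB DICOM file; a byte-for-byte copy would require writing the raw bytes yourself (for example with FileSystem.create from the reducer) or using a different output format.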
