MRUnit with Avro: NullPointerException in Serialization

I'm trying to test a Hadoop .mapreduce Avro job using MRUnit. I get a NullPointerException, as shown below. I've attached a portion of the pom file and the source code. Any assistance would be appreciated.

Thanks

The error I get is:

java.lang.NullPointerException
at org.apache.hadoop.mrunit.internal.io.Serialization.copy(Serialization.java:73)
at org.apache.hadoop.mrunit.internal.io.Serialization.copy(Serialization.java:91)
at org.apache.hadoop.mrunit.internal.io.Serialization.copyWithConf(Serialization.java:104)
at org.apache.hadoop.mrunit.TestDriver.copy(TestDriver.java:608)
at org.apache.hadoop.mrunit.MapDriverBase.setInputKey(MapDriverBase.java:64)
at org.apache.hadoop.mrunit.MapDriverBase.setInput(MapDriverBase.java:104)
at org.apache.hadoop.mrunit.MapDriverBase.withInput(MapDriverBase.java:218)
at org.lab41.project.mapreduce.ParseMetadataAsTextIntoAvroTest.testMap(ParseMetadataAsTextIntoAvroTest.java:115)
.....

pom snippet:

<dependency>
    <groupId>org.apache.mrunit</groupId>
    <artifactId>mrunit</artifactId>
    <version>0.9.0-incubating</version>
    <classifier>hadoop2</classifier>
    <scope>test</scope>
</dependency>

<avro.version>1.7.4</avro.version>
<hadoop.version>2.0.0-mr1-cdh4.1.3</hadoop.version>
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro</artifactId>
    <version>${avro.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>${hadoop.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-mapred</artifactId>
    <version>${avro.version}</version>
    <classifier>hadoop2</classifier>
</dependency>

An excerpt of the test is below:

import static org.junit.Assert.*;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import org.apache.avro.mapred.AvroKey;
import org.apache.avro.hadoop.io.AvroSerialization;
import org.apache.avro.mapred.AvroValue;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.apache.hadoop.mrunit.types.Pair;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import org.lab41.project.domain.DataRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class ParseMetadataAsTextIntoAvroTest {
    Logger logger = LoggerFactory
            .getLogger(ParseMetadataAsTextIntoAvroTest.class);
    private MapDriver<LongWritable, Text, AvroKey<Long>, AvroValue<DataRecord>> mapDriver;
    @BeforeClass
    public static void setUpClass() {
    }
    @AfterClass
    public static void tearDownClass() {
    }
    @Before
    public void setUp() throws IOException {
        ParseMetadataAsTextIntoAvroMapper mapper = new ParseMetadataAsTextIntoAvroMapper();
        mapDriver = new MapDriver<LongWritable, Text, AvroKey<Long>, AvroValue<DataRecord>>();
        mapDriver.setMapper(mapper);
        mapDriver.getConfiguration().setStrings("io.serializations", new String[]{  
            AvroSerialization.class.getName()
        });
    }
    @Test
    public void testMap() throws ParseException, IOException {
        Text testInputText = new Text(test0);
        DataRecord record = new DataRecord();
       ….
        AvroKey<Long> expectedPivot = new AvroKey<Long>(1L);
        AvroValue<DataRecord> expectedRecord = new AvroValue<DataRecord>(record);
        mapDriver.withInput(new Pair<LongWritable, Text>(new LongWritable(1), testInputText));
        mapDriver.withOutput(new Pair<AvroKey<Long>, AvroValue<DataRecord>>(expectedPivot, expectedRecord));
        mapDriver.runTest();
    }
}

In order to make this work you have to add AvroSerialization to the default serializations. You also have to configure AvroSerialization:

@Before
public void setUp() throws IOException {
    ParseMetadataAsTextIntoAvroMapper mapper = new ParseMetadataAsTextIntoAvroMapper();
    mapDriver = new MapDriver<LongWritable, Text, AvroKey<Long>, AvroValue<NetworkRecord>>();
    mapDriver.setMapper(mapper);
    //Copy over the default io.serializations. If you don't do this then you will 
    //not be able to deserialize the inputs to the mapper
    String[] strings = mapDriver.getConfiguration().getStrings("io.serializations");
    String[] newStrings = new String[strings.length +1];
    System.arraycopy( strings, 0, newStrings, 0, strings.length );
    newStrings[newStrings.length-1] = AvroSerialization.class.getName();
    mapDriver.getConfiguration().setStrings("io.serializations", newStrings);
    //Now you have to configure AvroSerialization by specifying the key
    //writer schema and the value writer schema.
    mapDriver.getConfiguration().setStrings("avro.serialization.key.writer.schema", Schema.create(Schema.Type.LONG).toString(true));
    mapDriver.getConfiguration().setStrings("avro.serialization.value.writer.schema", NetworkRecord.SCHEMA$.toString(true));
}

This also solves the problem, with the advantage of shorter and clearer code:

MapDriver driver = MapDriver.newMapDriver(your mapper);
Configuration conf = driver.getConfiguration();
AvroSerialization.addToConfiguration(conf);
AvroSerialization.setKeyWriterSchema(conf, your schema);
AvroSerialization.setKeyReaderSchema(conf, your schema);
Job job = new Job(conf);
job.set... your job settings;
AvroJob.set... your avro job settings;
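
For concreteness, here is a minimal sketch of that shorter approach as a complete test skeleton. It assumes the ParseMetadataAsTextIntoAvroMapper and DataRecord classes from the question (DataRecord.SCHEMA$ being the schema field that Avro-generated classes expose); the test class name is made up for the example:

import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.hadoop.io.AvroSerialization;
import org.apache.avro.mapred.AvroKey;
import org.apache.avro.mapred.AvroValue;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Before;

public class ParseMetadataAsTextIntoAvroShortTest {
    private MapDriver<LongWritable, Text, AvroKey<Long>, AvroValue<DataRecord>> mapDriver;

    @Before
    public void setUp() throws IOException {
        mapDriver = MapDriver.newMapDriver(new ParseMetadataAsTextIntoAvroMapper());
        Configuration conf = mapDriver.getConfiguration();
        // Appends AvroSerialization to io.serializations without
        // clobbering the default Writable serializations
        AvroSerialization.addToConfiguration(conf);
        // Writer schemas matching the mapper's AvroKey<Long> / AvroValue<DataRecord> output
        AvroSerialization.setKeyWriterSchema(conf, Schema.create(Schema.Type.LONG));
        AvroSerialization.setValueWriterSchema(conf, DataRecord.SCHEMA$);
    }
}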

This may be a bug in MRUnit: it does not set io.serializations correctly. In a real job that would instead be set up by job.setInputFormatClass(AvroKeyInputFormat.class).
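
For contrast, this is roughly what the real-job path looks like (a sketch using org.apache.avro.mapreduce.AvroJob and AvroKeyInputFormat from avro-mapred, plus the question's DataRecord class). The AvroJob setters add AvroSerialization to io.serializations and record the schemas on the job's configuration, which is the step the MRUnit driver skips:

Configuration conf = new Configuration();
Job job = new Job(conf);
// Reads Avro container files as AvroKey<T> / NullWritable pairs
job.setInputFormatClass(AvroKeyInputFormat.class);
// Each setter registers AvroSerialization and the corresponding
// writer/reader schema on the job's configuration
AvroJob.setMapOutputKeySchema(job, Schema.create(Schema.Type.LONG));
AvroJob.setMapOutputValueSchema(job, DataRecord.SCHEMA$);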

You have to add AvroSerialization to the default serializations and configure AvroSerialization:

@Before
public void setUp() throws IOException {
    ParseMetadataAsTextIntoAvroMapper mapper = new ParseMetadataAsTextIntoAvroMapper();
    mapDriver = new MapDriver<LongWritable, Text, AvroKey<Long>, AvroValue<NetworkRecord>>();
    mapDriver.setMapper(mapper);
    Configuration configuration = mapDriver.getConfiguration();
    // Add AvroSerialization to the configuration
    // (copy over the default serializations for deserializing the mapper inputs)
    String[] serializations = configuration.getStrings(CommonConfigurationKeysPublic.IO_SERIALIZATIONS_KEY);
    String[] newSerializations = Arrays.copyOf(serializations, serializations.length + 1);
    newSerializations[serializations.length] = AvroSerialization.class.getName();
    configuration.setStrings(CommonConfigurationKeysPublic.IO_SERIALIZATIONS_KEY, newSerializations);
    //Configure AvroSerialization by specifying the key writer and value writer schemas
    AvroSerialization.setKeyWriterSchema(configuration, Schema.create(Schema.Type.LONG));
    AvroSerialization.setValueWriterSchema(configuration, NetworkRecord.SCHEMA$);
}
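
Note that MRUnit round-trips every input and expected output through the configured serializations (that is what the Serialization.copy calls in the stack trace are doing), so if deserialization fails you may need the reader schemas as well. A hypothetical addition to the setUp above, using the matching AvroSerialization setters:

    // Hypothetical addition: reader schemas for MRUnit's copy/deserialize step
    AvroSerialization.setKeyReaderSchema(configuration, Schema.create(Schema.Type.LONG));
    AvroSerialization.setValueReaderSchema(configuration, NetworkRecord.SCHEMA$);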

Answered here: https://issues.apache.org/jira/browse/MRUNIT-181, and more specifically here: https://cwiki.apache.org/confluence/display/MRUNIT/MRUnit+with+Avro
