Hadoop Serialization on HashMap and float[]



Currently, I am working on converting plain Java code to the Hadoop MapReduce structure.

I want to modify the Graph class as shown below, but I still don't know how to serialize and deserialize the HashMap<String, Node> and float[] types. The relevant code is below.

Code:

public class Graph implements WritableComparable<Graph>, Cloneable {
  private static long serialVersionUID = 3L;
  public static int MAX_FREQUENCY = 3;
  public static float[] freqWeight = { 1.0F, 1.6F, 2.0F };
  int nNodes;
  int nEdges;
  String strName;
  HashMap<String, Node> nodes;
  boolean isMCS;
  String taxonomyName;

@Override
public void write(DataOutput out) throws IOException {
    out.writeBoolean(isMCS);
    out.writeInt(nEdges);
    out.writeInt(nNodes);
    out.writeLong(serialVersionUID);
    out.writeInt(MAX_FREQUENCY);
    // writeUTF stores the string length so readUTF can read it back;
    // writeBytes() paired with readLine() does not round-trip reliably.
    out.writeUTF(strName);
    out.writeUTF(taxonomyName);
    //ArrayWritable a=new ArrayWritable(FloatWritable.class);
    //HashMap
    //float[]
}
@Override
public void readFields(DataInput in) throws IOException {
    isMCS=in.readBoolean();
    nEdges=in.readInt();
    nNodes=in.readInt();
    serialVersionUID=in.readLong();
    MAX_FREQUENCY=in.readInt();
    strName=in.readUTF();
    taxonomyName=in.readUTF();
    //HashMap
    //float[]
}
@Override
public int compareTo(Graph graph) {
    // TODO Auto-generated method stub
    return 0;
}
.........

Use ArrayPrimitiveWritable for the float[]:

// float[] floats
new ArrayPrimitiveWritable(floats).write(out);

ArrayPrimitiveWritable apw = new ArrayPrimitiveWritable();
apw.readFields(in);  // readFields() returns void, so read into the instance first
float[] floats = (float[]) apw.get();

Use MapWritable for the map. MapWritable is essentially a map whose key and value types are both Writable. Like ArrayPrimitiveWritable, MapWritable implements the readFields() and write() methods used for serialization.
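Alternatively, both fields can be serialized by hand with plain java.io calls: write a length/size first, then each element or entry. The sketch below uses Float map values for illustration (the question's Node class is not shown); in the real Graph class, Node would itself need to implement Writable so you could call node.write(out) and node.readFields(in) in place of writeFloat/readFloat.

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of manual length-prefixed serialization for float[] and
// a HashMap, using only java.io. Float stands in for the Node value type.
public class ManualSerialization {

    // Write the array length first, then each element.
    static void writeFloats(DataOutput out, float[] a) throws IOException {
        out.writeInt(a.length);
        for (float f : a) out.writeFloat(f);
    }

    static float[] readFloats(DataInput in) throws IOException {
        float[] a = new float[in.readInt()];
        for (int i = 0; i < a.length; i++) a[i] = in.readFloat();
        return a;
    }

    // Write the map size first, then each key/value pair.
    static void writeMap(DataOutput out, HashMap<String, Float> m) throws IOException {
        out.writeInt(m.size());
        for (Map.Entry<String, Float> e : m.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeFloat(e.getValue());  // for Node: e.getValue().write(out)
        }
    }

    static HashMap<String, Float> readMap(DataInput in) throws IOException {
        int n = in.readInt();
        HashMap<String, Float> m = new HashMap<>();
        for (int i = 0; i < n; i++) m.put(in.readUTF(), in.readFloat());
        return m;
    }

    public static void main(String[] args) throws IOException {
        // Round-trip both structures through a byte buffer.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        writeFloats(out, new float[] { 1.0F, 1.6F, 2.0F });
        HashMap<String, Float> m = new HashMap<>();
        m.put("a", 0.5F);
        writeMap(out, m);

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        float[] a = readFloats(in);
        HashMap<String, Float> m2 = readMap(in);
        System.out.println(a.length + " " + m2.get("a"));
    }
}
```

This avoids the per-entry Writable boxing that MapWritable requires, at the cost of writing the loops yourself; order of write and read calls must match exactly.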
