I have written some code that works like a SQL GroupBy.
The dataset I am working with looks like this:
250788681419,20090906,200937,200909,619,周日,周末,网内,早上,外出,语音,25078,PAY_AS_YOU_GO_PER_SECOND_PSB,成功发布服务,17,0,1,21.25,635-10-112-30455
import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MyMap extends Mapper<LongWritable, Text, Text, DoubleWritable> {
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String[] attribute = line.split(",");
        double rs = Double.parseDouble(attribute[17]);
        // Group key: fields 5, 8 and 10 concatenated
        String comb = attribute[5].concat(attribute[8].concat(attribute[10]));
        context.write(new Text(comb), new DoubleWritable(rs));
    }
}
import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MyReduce extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
    @Override
    protected void reduce(Text key, Iterable<DoubleWritable> values, Context context)
            throws IOException, InterruptedException {
        double sum = 0;
        // Sum all values for this group key
        for (DoubleWritable val : values) {
            sum += val.get();
        }
        context.write(key, new DoubleWritable(sum));
    }
}
In the mapper, field 17 is sent to the reducer as the value to be summed. Now I also want to sum field 14; how can I send it to the reducer as well?
If your data types are the same, then creating an ArrayWritable class should do the trick. The class should look something like:
public class DblArrayWritable extends ArrayWritable {
    public DblArrayWritable() {
        super(DoubleWritable.class);
    }
}
Your mapper class would then look like this:
public class MyMap extends Mapper<LongWritable, Text, Text, DblArrayWritable> {
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String[] attribute = line.split(",");

        // Wrap both fields in DoubleWritables before packing them into the array
        DoubleWritable[] values = new DoubleWritable[2];
        values[0] = new DoubleWritable(Double.parseDouble(attribute[14]));
        values[1] = new DoubleWritable(Double.parseDouble(attribute[17]));

        String comb = attribute[5].concat(attribute[8].concat(attribute[10]));

        DblArrayWritable outValue = new DblArrayWritable();
        outValue.set(values);
        context.write(new Text(comb), outValue);
    }
}
In the reducer you should now be able to iterate over the values of the DblArrayWritable.
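For example, a reducer along these lines should work (a minimal sketch, assuming the DblArrayWritable above; the class name and the comma-separated output format are just illustrative):

import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Reducer;

public class MyArrayReduce extends Reducer<Text, DblArrayWritable, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<DblArrayWritable> values, Context context)
            throws IOException, InterruptedException {
        double sum14 = 0;
        double sum17 = 0;
        for (DblArrayWritable array : values) {
            // get() returns the Writable[] set in the mapper: [field 14, field 17]
            Writable[] pair = array.get();
            sum14 += ((DoubleWritable) pair[0]).get();
            sum17 += ((DoubleWritable) pair[1]).get();
        }
        context.write(key, new Text(sum14 + "," + sum17));
    }
}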
Based on your sample data, though, the fields appear to be of different types. You might be able to implement an ObjectArrayWritable class that would do the trick, but I'm not sure about that and I can't see much that supports it. If it works, the class would be:
public class ObjArrayWritable extends ArrayWritable {
    public ObjArrayWritable() {
        // Note: ArrayWritable's constructor expects a Class<? extends Writable>,
        // so Object.class may well be rejected here.
        super(Object.class);
    }
}
You could also handle this by simply concatenating the values and passing them to the reducer as Text, and having the reducer split them apart again.
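A rough sketch of that approach (illustrative only; the mapper's output value type becomes Text, and the comma separator is an assumption):

// In the mapper (Mapper<LongWritable, Text, Text, Text>): join the two fields
String payload = attribute[14] + "," + attribute[17];
context.write(new Text(comb), new Text(payload));

// In the reducer (Reducer<Text, Text, Text, Text>): split them back and sum each part
double sum14 = 0;
double sum17 = 0;
for (Text value : values) {
    String[] parts = value.toString().split(",");
    sum14 += Double.parseDouble(parts[0]);
    sum17 += Double.parseDouble(parts[1]);
}
context.write(key, new Text(sum14 + "," + sum17));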
Another option is to implement your own Writable class. Here is an example of how that could work:
public static class PairWritable implements Writable {
    private Double myDouble;
    private String myString;

    // Hadoop serialization/Writable interface methods
    @Override
    public void readFields(DataInput in) throws IOException {
        myDouble = in.readDouble();
        myString = in.readUTF();
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeDouble(myDouble);
        out.writeUTF(myString);
    }
    // End of implementation

    // Getter and setter methods for the myDouble and myString fields
    public void set(Double d, String s) {
        myDouble = d;
        myString = s;
    }

    public Double getDouble() {
        return myDouble;
    }

    public String getString() {
        return myString;
    }
}
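For instance, a mapper could pack the two pieces of data into one of these and emit it (a hypothetical sketch; which fields go in and the variable names are illustrative):

// Inside a mapper declared as Mapper<LongWritable, Text, Text, PairWritable>:
PairWritable outValue = new PairWritable();
outValue.set(Double.parseDouble(attribute[17]), attribute[14]); // a double plus a second field kept as text
context.write(new Text(comb), outValue);

// The reducer (Reducer<Text, PairWritable, ...>) can then read both parts back
// via pair.getDouble() and pair.getString() while iterating over the values.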