Here is my MapReduce code:
DBCollection mongoCollection = MongoDAO.getCollection();

String map = "function() {"
        + "  for (var index in this.positions.positionList) {"
        + "    emit(this._id + '|' + this.headline + '|'"
        + "       + this.location.name + '|' + this.location.country.code + '|' + this.publicProfileUrl + '|'"
        + "       + this.positions.positionList[index].title + '|'"
        + "       + this.positions.positionList[index].company.name + '|'"
        + "       + this.positions.positionList[index].company.industry + '|'"
        + "       + this.positions.positionList[index].company.type + '|'"
        + "       + this.positions.positionList[index].company.size + '|'"
        + "       + this.lastName + '|' + this.firstName + '|' + this.industry + '|' + this.updatedDate + '|',"
        + "       {count: 1});"
        + "  }"
        + "}";
String reduce = "";

MapReduceCommand mapReduceCommand = new MapReduceCommand(
        mongoCollection, map, reduce, "final_result",
        MapReduceCommand.OutputType.REPLACE, null);
MapReduceOutput out = mongoCollection.mapReduce(mapReduceCommand);
I am currently processing 140,000 records, but after running the MapReduce the record count drops to 90,000. There are no duplicate records in the dataset.
Change your emit so that it emits the _id as the key and the pipe-delimited string as the value. For example:
emit(this._id, [this._id, this.a, this.b,...].join('|'))
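
A minimal sketch of that change, assuming the same document layout and driver code as in your question. Note that one document now emits the same _id once per position, so a real reduce is needed to collect the values; the semicolon separator below is just an illustrative choice:

String map = "function() {"
        + "  for (var index in this.positions.positionList) {"
        + "    var pos = this.positions.positionList[index];"
        + "    emit(this._id,"  // short, fixed-size key stays well under the _id limit
        + "         [this._id, this.headline, this.location.name,"
        + "          this.location.country.code, this.publicProfileUrl,"
        + "          pos.title, pos.company.name, pos.company.industry,"
        + "          pos.company.type, pos.company.size, this.lastName,"
        + "          this.firstName, this.industry, this.updatedDate].join('|'));"
        + "  }"
        + "}";

// Collect every pipe-delimited string emitted for the same _id.
String reduce = "function(key, values) { return values.join(';'); }";

The MapReduceCommand setup itself can stay exactly as it is; only the map and reduce strings change.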
I think what is happening is that you are building overly long strings for the key. There is a 1KB limit on _id values (as of 2.0; up from 800B before that), and the key you emit is subject to that limit because it becomes the _id of the output documents.
Also, rather than rolling your own, take a look at the pre-packaged mongodb-hadoop connector: https://github.com/mongodb/mongo-hadoop