How to count the occurrences of a specific word in a file using Hadoop MapReduce programming



I am trying to count the occurrences of a specific word in a file using Hadoop MapReduce programming in Java. Both the file and the word should be user input, so I am trying to pass the specific word as a third argument along with the input and output paths (In, Out, word). But I cannot find a way to pass the word to the map function.

I tried the following approach, but it did not work: I created a static String variable in the mapper class and assigned the value of my third argument (i.e., the word to search for) to it, then tried to use this static variable in the map function. But inside the map function the static variable's value is null, so I am unable to get the value of the third argument in the map function.

Is there a way to set the value through the JobConf object? Please help. I have pasted my code below.

public class MyWordCount {
    public static class MyWordCountMap extends Mapper<Text, Text, Text, LongWritable> {
        static String wordToSearch;
        private final static LongWritable ONE = new LongWritable(1L);
        private Text word = new Text();
        public void map(Text key, Text value, Context context)
        throws IOException, InterruptedException {
            System.out.println(wordToSearch); // Here the value is coming as Null
            if (value.toString().compareTo(wordToSearch) == 0) {
                context.write(word, ONE);
            }
        }
    }

    public static class SumReduce extends Reducer<Text, LongWritable, Text, LongWritable> {
        public void reduce(Text key, Iterator<LongWritable> values,
            Context context) throws IOException, InterruptedException {
            long sum = 0L;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            context.write(key, new LongWritable(sum));
        }
    }
    public static void main(String[] rawArgs) throws Exception {
        GenericOptionsParser parser = new GenericOptionsParser(rawArgs);
        Configuration conf = parser.getConfiguration();
        String[] args = parser.getRemainingArgs();
        Job job = new Job(conf, "wordcount");
        job.setJarByClass(MyWordCountMap.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        job.setMapperClass(MyWordCountMap.class);
        job.setReducerClass(SumReduce.class);
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        String MyWord = args[2];
        MyWordCountMap.wordToSearch = MyWord;
        job.waitForCompletion(true);
    }
}

There is a way to do this with Configuration (see the API here). For example, the following code sets "Tree" as the word to search for:

//Create a new configuration
Configuration conf = new Configuration();
//Set the work to be searched
conf.set("wordToSearch", "Tree");
//create the job
Job job = new Job(conf);
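
In the original main(), this means setting the value on conf before constructing the Job. A minimal sketch, assuming the word is still passed as the third command-line argument as in the posted code:

//Set the word from the third command-line argument before creating the job
conf.set("wordToSearch", args[2]);
Job job = new Job(conf, "wordcount");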

Then, in your mapper/reducer class, you can retrieve wordToSearch (i.e. "Tree" in this example) with:

//Create a new configuration
Configuration conf = context.getConfiguration();
//retrieve the wordToSearch variable
String wordToSearch = conf.get("wordToSearch");

See here for more details.
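
Putting the two snippets together, a common place to read the value is the mapper's setup() method, which runs once per task before any calls to map(). The following is a minimal sketch; the class and field names mirror the original code, but the body is illustrative rather than the poster's exact logic:

public static class MyWordCountMap extends Mapper<Text, Text, Text, LongWritable> {
    private final static LongWritable ONE = new LongWritable(1L);
    private String wordToSearch;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        //Read the word that the driver placed in the Configuration
        wordToSearch = context.getConfiguration().get("wordToSearch");
    }

    @Override
    public void map(Text key, Text value, Context context)
            throws IOException, InterruptedException {
        //Emit the word with a count of 1 whenever the input value matches it
        if (value.toString().equals(wordToSearch)) {
            context.write(new Text(wordToSearch), ONE);
        }
    }
}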
