MapReduce to find a string using Java



I am trying to search a text file for a particular string and count its occurrences, but after running this code I get a ClassCastException between io.LongWritable and io.Text:

Error: java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text
        at searchaString.SearchDriver$searchMap.map(SearchDriver.java:1)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)

16/04/30 02:48:17 INFO mapreduce.Job: map 0% reduce 0%
16/04/30 02:48:23 INFO mapreduce.Job: Task Id : attempt_1461630807194_0021_m_0000000_2, Status : FAILED
Error: java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text

package samples.wordcount;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
//import org.apache.hadoop.util.GenericOptionsParser;
//import org.apache.hadoop.mapred.lib.NLineInputFormat;
import java.io.IOException;
import java.util.Iterator;

public class WordCount {
    public static void main(String[] args) throws Exception {
        @SuppressWarnings("unused")
        JobClient jobC =new JobClient();
        Configuration conf = new Configuration();
        //String args[] = parser.getRemainingArgs();
        Job job = Job.getInstance(conf);
        job.setJobName("WordCount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        //job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        /*String MyWord = args[2];
        TokenizerMapper.find = MyWord;*/
        System.exit(job.waitForCompletion(true) ?  0:1);
    }
    public static class TokenizerMapper extends Mapper<Text, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        //  private Text word = new Text();
        static String find="txt was not created";
        public int i;
        public void map(Text key, Text value,OutputCollector<Text, IntWritable> output,Reporter reporter) throws IOException, InterruptedException
        {
            String cleanLine = value.toString();        
            String[] cleanL =cleanLine.split("home");
            output.collect(new Text(cleanL[1]), one);
        }
    }
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output,Reporter reporter)
                throws IOException, InterruptedException {
            int sum = 0;
            String wordText="txt was not created";
            while(values.hasNext()) {
                Boolean check = values.toString().contains("txt was not created");
                if(check)
                {
                    String[] cleanL = values.toString().split("\\.");
                    for(String w : cleanL)
                    {
                        if(w.length()>=wordText.length())
                        {
                            String wrd = w.substring(0,wordText.length()); 
                            if(wrd.equals(wordText))
                            {
                                IntWritable value=values.next();
                                sum += value.get();
                            }
                        }
                    }
                }
            }
            output.collect(key,new IntWritable(sum));
        }
    }
}

I am new to MapReduce and do not know how to accomplish this.

This is what my text file looks like:

tab/hdhd/hip/home.slkj.skjdh.dgsyququ/djkdjjjd.txt was not created

I have to search for occurrences of that specific piece of text.
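To see what the mapper's `cleanLine.split("home")` call produces on a line shaped like the sample above, here is a small standalone demo (the class name is illustrative, not from the original code):

```java
public class SplitDemo {
    public static void main(String[] args) {
        // A line shaped like the sample input above
        String line = "tab/hdhd/hip/home.slkj.skjdh.dgsyququ/djkdjjjd.txt was not created";
        // Mirrors the mapper's cleanLine.split("home") call
        String[] parts = line.split("home");
        // parts[0] is everything before "home", parts[1] everything after it;
        // the mapper then emits parts[1] as its output key
        System.out.println(parts[0]);
        System.out.println(parts[1]);
    }
}
```

Note that a line without "home" yields a single-element array, so indexing `cleanL[1]` unguarded would throw an ArrayIndexOutOfBoundsException.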

Please reply.

If you share a solution, please also briefly explain what I should change in the code.

Thanks.

You have declared the signature of your Mapper class as follows:

public static class TokenizerMapper extends Mapper<Text, Text, Text, IntWritable>

The input key that the map method receives is the byte offset of the line. For example, if the following is the content of your file:

Hello World!

the map function will be called with the byte offset of the first line as the key and "Hello World!" as the value. A byte offset is a kind of Long value.

Change the input key to LongWritable.
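To make those byte-offset keys concrete, the following plain-Java sketch mimics the (offset, line) pairs that TextInputFormat hands to map; the file content and class name here are hypothetical:

```java
public class ByteOffsetDemo {
    public static void main(String[] args) {
        // Hypothetical file content; each line's key is the byte offset
        // at which the line starts, which is what TextInputFormat produces
        String[] lines = {"Hello World!", "Second line"};
        long offset = 0;
        for (String line : lines) {
            System.out.println(offset + " -> " + line);
            offset += line.getBytes().length + 1; // +1 for the '\n' separator
        }
    }
}
```

The first line starts at offset 0; "Hello World!" is 12 bytes plus a newline, so the second line starts at offset 13. This Long-valued offset is why the map input key must be LongWritable, not Text.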

The new mapper: public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable>

Your write call is context.write(new Text(cleanL[1]), one);

If "one" is not an IntWritable, change your signature to public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, Text> and write as follows:

context.write(new Text(cleanL[1]), new Text("one"));

Otherwise keep public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> and write as follows:

context.write(new Text(cleanL[1]), new IntWritable(1));
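Putting both fixes together (a LongWritable input key plus the new-API context.write), the map step reduces to the logic below. This is a Hadoop-free sketch that uses plain Java types in place of LongWritable, Text and Context, so the class and helper names are illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

public class MapLogicSketch {
    // Stand-in for the reducer's running totals
    static Map<String, Integer> counts = new HashMap<>();

    // Simulates map(LongWritable key, Text value, Context context)
    static void map(long byteOffset, String line) {
        String[] cleanL = line.split("home");
        if (cleanL.length > 1) {   // guard: the line may not contain "home"
            // Stands in for context.write(new Text(cleanL[1]), new IntWritable(1))
            emit(cleanL[1], 1);
        }
    }

    static void emit(String key, int count) {
        counts.merge(key, count, Integer::sum); // the reducer's sum, in miniature
    }

    public static void main(String[] args) {
        map(0L, "tab/hdhd/hip/home.slkj.skjdh.dgsyququ/djkdjjjd.txt was not created");
        System.out.println(counts);
    }
}
```

In the real job, the new-API signature would be map(LongWritable key, Text value, Context context), and the reducer would receive an Iterable<IntWritable> rather than the old-API Iterator/OutputCollector pair shown in the question.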
