Reading segment results from the Nutch crawler



I am crawling with Apache Nutch 1.6. After the crawl finishes, when I try to read the contents of the crawl results with the command

 bin/nutch readseg -dump crawl/segments/* segmentAllContent

I get this error:

 Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/ubuntu/nutch/framework/apache-nutch-1.6/blogs/segments/2013062110/crawl_generate
    Input path does not exist: file:/home/ubuntu/nutch/framework/apache-nutch-1.6/blogs/segments/2013062110/crawl_fetch
    Input path does not exist: file:/home/ubuntu/nutch/framework/apache-nutch-1.6/blogs/segments/2013062110/crawl_parse
    Input path does not exist: file:/home/ubuntu/nutch/framework/apache-nutch-1.6/blogs/segments/2013062110/content
    Input path does not exist: file:/home/ubuntu/nutch/framework/apache-nutch-1.6/blogs/segments/2013062110/parse_data
    Input path does not exist: file:/home/ubuntu/nutch/framework/apache-nutch-1.6/blogs/segments/2013062110/parse_text
            at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
            at org.apache.hadoop.mapred.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:40)
            at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
            at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:989)
            at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:981)
            at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
            at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
            at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:416)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
            at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
            at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:824)
            at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1261)
            at org.apache.nutch.segment.SegmentReader.dump(SegmentReader.java:224)
            at org.apache.nutch.segment.SegmentReader.main(SegmentReader.java:572)

How can I read the crawled HTML content?

I usually merge all the segments first, then dump the single merged segment:

bin/nutch mergesegs crawl/merged crawl/segments/*

bin/nutch readseg -dump crawl/merged/* segmentAllContent
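If you don't want to merge, you can also dump each segment on its own, since `readseg -dump` takes exactly one segment directory per invocation. A minimal sketch, assuming a POSIX shell and a hypothetical output directory name `segdumps`:

```shell
# Dump every segment into its own subdirectory of segdumps/.
# readseg -dump expects a single segment dir, so we loop instead of globbing.
for seg in crawl/segments/*; do
  [ -d "$seg" ] || continue          # skip if the glob matched nothing
  bin/nutch readseg -dump "$seg" "segdumps/$(basename "$seg")"
done
```

Each `segdumps/<timestamp>/dump` file then holds the readable content of that one segment.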

Latest update