I am using the Crawler4j sample code, but I am getting an exception.
This is my exception:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/http/conn/scheme/SchemeSocketFactory
at LocalDataCollectorController.main(LocalDataCollectorController.java:24)
Caused by: java.lang.ClassNotFoundException: org.apache.http.conn.scheme.SchemeSocketFactory
This is my code:
public static void main(String[] args) throws Exception {
String rootFolder = "D:\\";
int numberOfCrawlers = 5;
System.out.println("numberOfCrawlers: " + numberOfCrawlers);
System.out.println(rootFolder);
CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder(rootFolder);
config.setMaxPagesToFetch(10);
config.setPolitenessDelay(1000);
PageFetcher pageFetcher = new PageFetcher(config);
RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);
controller.addSeed("http://www.ohloh.net/p/crawler4j");
controller.start(LocalDataCollectorCrawler.class, numberOfCrawlers);
List<Object> crawlersLocalData = controller.getCrawlersLocalData();
long totalLinks = 0;
long totalTextSize = 0;
int totalProcessedPages = 0;
for (Object localData : crawlersLocalData) {
CrawlStat stat = (CrawlStat) localData;
totalLinks += stat.getTotalLinks();
totalTextSize += stat.getTotalTextSize();
totalProcessedPages += stat.getTotalProcessedPages();
}
System.out.println("Aggregated Statistics:");
System.out.println(" Processed Pages: " + totalProcessedPages);
System.out.println(" Total Links found: " + totalLinks);
System.out.println(" Total Text Size: " + totalTextSize);
}
}
Download HttpClient and add it to your build path.
There is also a package in the download section that bundles all of crawler4j's dependencies. You should use it to avoid further problems.
The cause of a NoClassDefFoundError is always the same: a class from a dependency is missing at runtime. In other words, when you ran your example, the HttpClient JAR was not on the classpath. Put it there and the problem will go away.
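As a sketch, assuming the crawler4j JAR, the HttpClient JAR, and the other dependencies all sit in a lib/ folder next to your source file (the folder name and layout here are hypothetical), compiling and running with everything on the classpath would look like this:

```shell
# Compile with all dependency JARs on the classpath
# (lib/ is an example location; point it at wherever your JARs live)
javac -cp "lib/*" LocalDataCollectorController.java

# Run with the SAME classpath: compiling successfully is not enough,
# the JARs must also be present at runtime, otherwise you get
# NoClassDefFoundError exactly as in the stack trace above
java -cp ".:lib/*" LocalDataCollectorController
```

On Windows, the classpath separator is ; instead of :, e.g. java -cp ".;lib\*" LocalDataCollectorController. If you build with an IDE, adding the JARs to the project's build path covers compilation, but make sure the run configuration uses the same libraries.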