I need to crawl a website and periodically run some checks to know whether its URLs are available or not. For this I am using crawler4j.
My problem comes from pages that have disabled robots with <meta name="robots" content="noindex,nofollow" />, which makes sense, since they don't want that content indexed by search engines.
crawler4j is not following these links, even though the RobotstxtServer configuration is disabled with robotstxtConfig.setEnabled(false):
RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
robotstxtConfig.setUserAgentName(USER_AGENT_NAME);
robotstxtConfig.setEnabled(false); // this should turn off all robots.txt checks
RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
WebCrawlerController controller = new WebCrawlerController(config, pageFetcher, robotstxtServer);
...
But the pages described above are still not being crawled. I have read the code, and this should be enough to disable the robots directives, but it does not work as expected. Am I missing something? I tested it with versions 3.5 and 3.6-SNAPSHOT, with the same result.
I am using the newer version:
<dependency>
    <groupId>edu.uci.ics</groupId>
    <artifactId>crawler4j</artifactId>
    <version>4.1</version>
</dependency>
After setting up RobotstxtConfig like this, it worked:
RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
robotstxtConfig.setEnabled(false);
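
For completeness, here is a minimal end-to-end sketch of the 4.1 setup with robots.txt handling disabled. MyCrawler, the storage folder, and the seed URL are placeholders of mine, not something from the original question:

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class CrawlerLauncher {
    public static void main(String[] args) throws Exception {
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("/tmp/crawler4j"); // placeholder path

        PageFetcher pageFetcher = new PageFetcher(config);

        // Disable robots.txt handling entirely.
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        robotstxtConfig.setEnabled(false);
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);

        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);
        controller.addSeed("http://www.example.com/"); // placeholder seed
        controller.start(MyCrawler.class, 1);          // MyCrawler extends WebCrawler
    }
}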
The test results and the crawler4j source code confirm it:
public boolean allows(WebURL webURL) {
  if (config.isEnabled()) {
    try {
      URL url = new URL(webURL.getURL());
      String host = getHost(url);
      String path = url.getPath();

      HostDirectives directives = host2directivesCache.get(host);
      if ((directives != null) && directives.needsRefetch()) {
        synchronized (host2directivesCache) {
          host2directivesCache.remove(host);
          directives = null;
        }
      }
      if (directives == null) {
        directives = fetchDirectives(url);
      }
      return directives.allows(path);
    } catch (MalformedURLException e) {
      logger.error("Bad URL in Robots.txt: " + webURL.getURL(), e);
    }
  }
  return true;
}
When Enabled is set to false, the whole check is skipped and allows() simply returns true.
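
You can verify this directly. A minimal sketch, assuming pageFetcher is set up as above (WebURL lives in edu.uci.ics.crawler4j.url; the URL is just an example):

RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
robotstxtConfig.setEnabled(false);
RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);

WebURL webURL = new WebURL();
webURL.setURL("http://www.example.com/some/disallowed/path");

// config.isEnabled() is false, so allows() falls through to "return true"
// without ever fetching or consulting robots.txt.
System.out.println(robotstxtServer.allows(webURL)); // true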
Why not strip everything related to robots.txt out of crawler4j? I needed to crawl a site and ignore the robots directives, and this worked for me.
I changed CrawlController and WebCrawler in the .crawler package like this:
WebCrawler.java:
Delete:
private RobotstxtServer robotstxtServer;
Delete:
this.robotstxtServer = crawlController.getRobotstxtServer();
Edit:
if ((shouldVisit(webURL)) && (this.robotstxtServer.allows(webURL)))
-->
if ((shouldVisit(webURL)))
Edit:
if (((maxCrawlDepth == -1) || (curURL.getDepth() < maxCrawlDepth)) &&
(shouldVisit(webURL)) && (this.robotstxtServer.allows(webURL)))
-->
if (((maxCrawlDepth == -1) || (curURL.getDepth() < maxCrawlDepth)) &&
(shouldVisit(webURL)))
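
After those two edits, the scheduling checks in WebCrawler reduce to the shape below. This is only a sketch assembled from the snippets above, not a verbatim copy of the crawler4j source:

// Seeds / redirects: only the shouldVisit() filter remains.
if (shouldVisit(webURL)) {
    // ... schedule webURL ...
}

// Outgoing links: only the depth limit and shouldVisit() remain.
if (((maxCrawlDepth == -1) || (curURL.getDepth() < maxCrawlDepth)) &&
    (shouldVisit(webURL))) {
    // ... schedule webURL ...
}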
CrawlController.java:
Delete:
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
Delete:
protected RobotstxtServer robotstxtServer;
Edit:
public CrawlController(CrawlConfig config, PageFetcher pageFetcher, RobotstxtServer robotstxtServer) throws Exception
-->
public CrawlController(CrawlConfig config, PageFetcher pageFetcher) throws Exception
Delete:
this.robotstxtServer = robotstxtServer;
Edit:
if (!this.robotstxtServer.allows(webUrl))
{
    logger.info("Robots.txt does not allow this seed: " + pageUrl);
}
else
{
    this.frontier.schedule(webUrl);
}
-->
this.frontier.schedule(webUrl);
Delete:
public RobotstxtServer getRobotstxtServer()
{
    return this.robotstxtServer;
}

public void setRobotstxtServer(RobotstxtServer robotstxtServer)
{
    this.robotstxtServer = robotstxtServer;
}
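
With these changes in place, the controller is built without any RobotstxtServer. A minimal usage sketch against the modified two-argument constructor (the storage folder, seed URL, and MyCrawler are placeholders):

CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder("/tmp/crawler4j"); // placeholder path

PageFetcher pageFetcher = new PageFetcher(config);

// The modified constructor no longer takes a RobotstxtServer.
CrawlController controller = new CrawlController(config, pageFetcher);
controller.addSeed("http://www.example.com/"); // placeholder seed
controller.start(MyCrawler.class, 1);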
Hope this is what you are looking for.