Crawling all links of a password-protected page



I am crawling a page that requires username and password authentication. When I pass the username and password in my code, I successfully get a 200 OK response for that page from the server. But once it gets the 200 OK response it stops: after authenticating, it does not move on to crawl all the links on that page. The crawler is taken from http://code.google.com/p/crawler4j/. Here is the code where I do the authentication:

public class MyCrawler extends WebCrawler {

    Pattern filters = Pattern.compile(".*(\\.(css|js|bmp|gif|jpe?g"
            + "|png|tiff?|mid|mp2|mp3|mp4" + "|wav|avi|mov|mpeg|ram|m4v|pdf"
            + "|rm|smil|wmv|swf|wma|zip|rar|gz))$");
    List<String> exclusions;

    public MyCrawler() {
        exclusions = new ArrayList<String>();
        // Add here all your exclusions
        exclusions.add("http://www.dot.ca.gov/dist11/d11tmc/sdmap/cameras/cameras.html");
    }

    public boolean shouldVisit(WebURL url) {
        String href = url.getURL().toLowerCase();

        DefaultHttpClient client = null;
        try {
            System.out.println("----------------------------------------");
            System.out.println("WEB URL:- " + url);

            client = new DefaultHttpClient();
            client.getCredentialsProvider().setCredentials(
                    new AuthScope(AuthScope.ANY_HOST, AuthScope.ANY_PORT, AuthScope.ANY_REALM),
                    new UsernamePasswordCredentials("test", "test"));
            client.getParams().setParameter(ClientPNames.ALLOW_CIRCULAR_REDIRECTS, true);

            for (String exclusion : exclusions) {
                if (href.startsWith(exclusion)) {
                    return false;
                }
            }
            if (href.startsWith("http://") || href.startsWith("https://")) {
                return true;
            }

            HttpGet request = new HttpGet(url.toString());
            System.out.println("----------------------------------------");
            System.out.println("executing request " + request.getRequestLine());
            HttpResponse response = client.execute(request);
            HttpEntity entity = response.getEntity();

            System.out.println(response.getStatusLine());
        } catch (Exception e) {
            e.printStackTrace();
        }

        return false;
    }

    public void visit(Page page) {
        System.out.println("hello");
        int docid = page.getWebURL().getDocid();
        String url = page.getWebURL().getURL();
        System.out.println("Page:- " + url);
        String text = page.getText();
        List<WebURL> links = page.getURLs();
        int parentDocid = page.getWebURL().getParentDocid();

        System.out.println("Docid: " + docid);
        System.out.println("URL: " + url);
        System.out.println("Text length: " + text.length());
        System.out.println("Number of links: " + links.size());
        System.out.println("Docid of parent page: " + parentDocid);
    }
}

Here is the controller class:

public class Controller {
    public static void main(String[] args) throws Exception {
        CrawlController controller = new CrawlController("/data/crawl/root");

        // And I want to crawl all those links that are there in this password protected page
        controller.addSeed("http://search.somehost.com/");
        controller.start(MyCrawler.class, 20);
        controller.setPolitenessDelay(200);
        controller.setMaximumCrawlDepth(2);
    }
}
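
One detail worth noting about this controller, assuming the crawler4j behaviour where start(...) blocks until the whole crawl has finished: the setPolitenessDelay(200) and setMaximumCrawlDepth(2) calls placed after start(...) would never affect the crawl. A minimal sketch with the configuration moved before start():

public class Controller {
    public static void main(String[] args) throws Exception {
        CrawlController controller = new CrawlController("/data/crawl/root");
        controller.addSeed("http://search.somehost.com/");

        // Configure the crawl before starting it; start(...) does not return
        // until the crawl is finished, so settings applied afterwards are never used.
        controller.setPolitenessDelay(200);
        controller.setMaximumCrawlDepth(2);

        controller.start(MyCrawler.class, 20);
    }
}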

What am I doing wrong?

As described at http://code.google.com/p/crawler4j/, the shouldVisit() function should only return true or false. In your code, however, this function is also fetching the content of the page, which is wrong. The current version of crawler4j (3.0) does not support crawling password-protected pages.
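
For illustration, a shouldVisit() that does nothing but URL filtering, in the style of the crawler4j 3.x example, might look like the sketch below. The package names are those used by crawler4j 3.x, and the restriction to http://search.somehost.com/ is an assumption taken from the seed URL in the controller above.

import java.util.List;
import java.util.regex.Pattern;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {

    private static final Pattern FILTERS = Pattern.compile(
            ".*(\\.(css|js|bmp|gif|jpe?g|png|tiff?|mid|mp2|mp3|mp4"
            + "|wav|avi|mov|mpeg|ram|m4v|pdf|rm|smil|wmv|swf|wma|zip|rar|gz))$");

    // Only decide whether the URL should be crawled; no HTTP requests here.
    @Override
    public boolean shouldVisit(WebURL url) {
        String href = url.getURL().toLowerCase();
        return !FILTERS.matcher(href).matches()
                && href.startsWith("http://search.somehost.com/"); // assumed seed host
    }

    // Page content and outgoing links are handled here, after crawler4j
    // has already downloaded the page.
    @Override
    public void visit(Page page) {
        String url = page.getWebURL().getURL();
        List<WebURL> links = page.getURLs();
        System.out.println("URL: " + url);
        System.out.println("Number of links: " + links.size());
    }
}

The division of work is that crawler4j itself fetches each page that shouldVisit() approved and then calls visit() with the downloaded content, so any per-page processing belongs in visit().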
