I'm building a web crawler and have a method that checks for broken links. At one point I try to fetch the HTTP response code to decide whether a link is valid. Even when I give it a valid URL (it opens fine in a browser), it still reports the link as invalid. Here is the code:
public static boolean isBrokenLink(URL baseURL, String theHREF) {
    boolean isBroken = false;
    if (baseURL == null) {
        try {
            baseURL = new URL("HTTP", "cs.uwec.edu/~stevende/cs145testpages/", theHREF);
            System.out.println(baseURL);
        } catch (MalformedURLException e) {
            isBroken = true;
            //e.printStackTrace();
        }
    }
    try {
        URLConnection con = baseURL.openConnection();
        HttpURLConnection httpProtocol = (HttpURLConnection) con;
        System.out.println(httpProtocol.getResponseCode());
        if (httpProtocol.getResponseCode() != 200 && httpProtocol.getResponseCode() == -1) {
            isBroken = true;
        }
    } catch (IOException e) {
        isBroken = true;
        e.printStackTrace();
    }
    return isBroken;
}
That is the URL I'm passing in. isBroken is the boolean that gets returned. I pass baseURL as null and theHREF as a relative link (page2.htm). I print the URL right after constructing it from the strings. Thanks for any help! The error is:
java.net.UnknownHostException: cs.uwec.edu/~stevende/cs145testpages/
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:178)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1300)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
at edu.uwec.cs.carpenne.webcrawler.Webcrawler.isBrokenLink(Webcrawler.java:106)
at edu.uwec.cs.carpenne.webcrawler.Webcrawler.main(Webcrawler.java:181)
The exception tells us that it is using the hostname plus the local part as the (unknown) host. That suggests the URL you constructed is incorrect. Perhaps you forgot the http:// prefix, or used the wrong getter? You can debug it by calling baseURL.getHost(), baseURL.getPath() and baseURL.getProtocol() and checking whether they return cs.uwec.edu, /~steve... and http respectively.
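Here is a sketch of that debugging step, using the URLs from the question. The class name is made up for illustration; the point is that the broken three-argument call stuffs the path into the host component, while the four-argument constructor keeps host and path separate:

```java
import java.net.MalformedURLException;
import java.net.URL;

public class UrlDebug {
    public static void main(String[] args) throws MalformedURLException {
        // The call from the question: the host argument wrongly contains the path too
        URL bad = new URL("HTTP", "cs.uwec.edu/~stevende/cs145testpages/", "page2.htm");
        System.out.println(bad.getHost()); // host now includes the path, hence UnknownHostException

        // Host and path passed as separate components
        URL good = new URL("http", "cs.uwec.edu", 80, "/~stevende/cs145testpages/page2.htm");
        System.out.println(good.getProtocol()); // http
        System.out.println(good.getHost());     // cs.uwec.edu
        System.out.println(good.getPath());     // /~stevende/cs145testpages/page2.htm
    }
}
```

Constructing a URL never contacts the network, so these getters are safe to print even while the host is unreachable.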
I just noticed that you build the baseURL with new URL("HTTP", "cs.uwec.edu/~stevende/cs145testpages/", theHREF), which is wrong; you need new URL("http", "cs.uwec.edu", 80, "/~stevende/cs145testpages/#"+theHREF). However, you can usually skip the anchor/ref, since it is not transmitted to the server. You can also use the single-argument constructor new URL("http://cs.uwec.edu//~stevende/cs145testpages/").
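Since the crawler resolves relative hrefs like page2.htm against a base page, the URL(URL context, String spec) constructor does that resolution for you. A minimal reworked isBrokenLink might look like the sketch below; the HEAD request and the treat-anything-but-200-as-broken rule are my choices, not something from the question:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;

public class LinkChecker {
    public static boolean isBrokenLink(URL baseURL, String theHREF) {
        try {
            if (baseURL == null) {
                // Single-argument constructor: scheme, host and path in one string
                baseURL = new URL("http://cs.uwec.edu/~stevende/cs145testpages/");
            }
            // Resolves relative specs such as "page2.htm" against the base URL
            URL target = new URL(baseURL, theHREF);
            HttpURLConnection con = (HttpURLConnection) target.openConnection();
            con.setRequestMethod("HEAD");     // status only, skip the body
            int code = con.getResponseCode(); // call once and reuse the value
            return code != HttpURLConnection.HTTP_OK;
        } catch (MalformedURLException e) {
            return true; // unparsable URL counts as broken
        } catch (IOException e) {
            return true; // unreachable host counts as broken
        }
    }

    public static void main(String[] args) throws MalformedURLException {
        URL base = new URL("http://cs.uwec.edu/~stevende/cs145testpages/");
        // Prints http://cs.uwec.edu/~stevende/cs145testpages/page2.htm
        System.out.println(new URL(base, "page2.htm"));
    }
}
```

Note that your original test, getResponseCode() != 200 && getResponseCode() == -1, can only ever be true when the code is -1, so a plain 404 would slip through; comparing against 200 (or at least the 2xx range) once is simpler.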