Web crawler in Java



...

    public static void download() {
        try {
            URL oracle = new URL("http://api.wunderground.com/api/54f05b23fd8fd4b0/geolookup/conditions/forecast/q/US/CO/Denver.json");
            BufferedReader in = new BufferedReader(new InputStreamReader(oracle.openStream()));
            File file = new File("C:\\Users\\User\\Desktop\\test2.json");
            if (!file.exists()) {
                file.createNewFile();
            }
            FileWriter fw = new FileWriter(file.getAbsoluteFile());
            BufferedWriter bw = new BufferedWriter(fw);
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                bw.write(inputLine + "\n");
            }
            fw.close();
            in.close();
            System.out.println("Finished...");
        }
        catch (MalformedURLException e) { e.printStackTrace(); }
        catch (IOException e) { e.printStackTrace(); }
    }

I'm writing a web crawler to retrieve weather updates from Wunderground. I got it working, but it doesn't write out the whole document (the last bit gets cut off). What am I doing wrong?

You are wrapping your FileWriter in a BufferedWriter:

FileWriter fw = new FileWriter(file.getAbsoluteFile());
BufferedWriter bw = new BufferedWriter(fw);

but you only close the FileWriter:

fw.close();

Since the FileWriter has no access to, and no knowledge of, the BufferedWriter, it cannot flush whatever data is still sitting in the buffer; that buffered tail is the part missing from your output file. You can either call flush() on bw, or close bw instead of fw. Calling bw.close() also takes care of closing the wrapped FileWriter:

bw.close();
in.close();
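As a sketch of the same fix, the copy loop can be folded into a try-with-resources block, which always closes the outermost writer (and with it the wrapped FileWriter) and therefore flushes the buffer even when an exception is thrown. The `copyToFile` helper and class name below are hypothetical, and the helper takes any Reader so it can be exercised without hitting the network:

```java
import java.io.*;
import java.nio.file.Files;

public class DownloadFix {
    // Copy all lines from a Reader to a file. try-with-resources
    // guarantees bw.close() runs, which flushes the buffer and
    // closes the wrapped FileWriter; no manual close() calls needed.
    static void copyToFile(Reader source, File target) throws IOException {
        try (BufferedReader in = new BufferedReader(source);
             BufferedWriter bw = new BufferedWriter(new FileWriter(target))) {
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                bw.write(inputLine);
                bw.newLine();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate with an in-memory source instead of the live API.
        File tmp = File.createTempFile("test2", ".json");
        copyToFile(new StringReader("{\"ok\":true}"), tmp);
        System.out.println(Files.readString(tmp.toPath()));
        tmp.delete();
    }
}
```

For the crawler itself, the Reader argument would be the `new InputStreamReader(oracle.openStream())` from the question.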
