How to save a "complete webpage", not just the basic HTML, using Python

I save web pages with Python using the following code:

import urllib
import sys
from bs4 import BeautifulSoup

url = 'http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html'
# Python 2: urlretrieve downloads only the single HTML document itself
f = urllib.urlretrieve(url, 'test.html')

The problem: this code saves the HTML as basic HTML, without the JavaScript, images, etc. I want to save the webpage as complete (like we have the option in the browser).
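For reference, a minimal sketch of the same single-file download on Python 3, where the urllib module was reorganized; it has the same limitation of fetching only the one HTML document:

import urllib.request

url = 'http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html'
# Fetches only the HTML document itself; linked assets are not downloaded
urllib.request.urlretrieve(url, 'test.html')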

Update: I now use the following code to save all the js/images/css files of the webpage, so that it can be saved as a complete webpage, but my output HTML is still saved like basic HTML:

import pycurl
import StringIO  # Python 2 module

c = pycurl.Curl()
c.setopt(pycurl.URL, "http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html")
b = StringIO.StringIO()
c.setopt(pycurl.WRITEFUNCTION, b.write)  # collect the response body in memory
c.setopt(pycurl.FOLLOWLOCATION, 1)       # follow HTTP redirects
c.setopt(pycurl.MAXREDIRS, 5)
c.perform()
html = b.getvalue()

fh = open("file.html", "w")
fh.write(html)
fh.close()
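A sketch (not part of the original question) of how one might go from that HTML to the linked assets that would also need downloading, using BeautifulSoup on Python 3:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = 'http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html'
html = requests.get(url).text
soup = BeautifulSoup(html, 'html.parser')

# Collect absolute URLs of the images, stylesheets and scripts the page references
assets = []
for tag, attr in (('img', 'src'), ('link', 'href'), ('script', 'src')):
    for node in soup.find_all(tag):
        if node.get(attr):
            assets.append(urljoin(url, node[attr]))
print(assets)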

Try emulating your browser with Selenium. This script will pop up the Save As dialog of the webpage. You will still have to figure out how to emulate pressing Enter for the download to start, since the file dialog is out of Selenium's reach (how you do it is also OS-dependent); one option is sketched after the code below.

from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

br = webdriver.Firefox()
br.get('http://www.google.com/')

# Send the Ctrl+S keystroke that opens the browser's "Save As" dialog
save_me = ActionChains(br).key_down(Keys.CONTROL).key_down('s') \
                          .key_up(Keys.CONTROL).key_up('s')
save_me.perform()
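One way to press Enter in the native dialog, a sketch assuming the pyautogui package (not part of the original answer) and that the dialog has keyboard focus:

import time
import pyautogui  # drives the OS-level keyboard, which Selenium cannot reach

time.sleep(1)             # give the Save As dialog time to appear
pyautogui.press('enter')  # confirm the dialog to start the download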

Also, I figure that following @Amber's suggestion of grabbing the linked resources may be simpler, and thus a better solution. Still, I think using Selenium is a good starting point, since br.page_source will get you the entire DOM along with the dynamic content generated by JavaScript.
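A minimal sketch of that idea: save the rendered DOM instead of pressing Ctrl+S:

from selenium import webdriver

br = webdriver.Firefox()
br.get('http://www.google.com/')

# page_source holds the DOM after JavaScript has run, unlike a raw HTTP fetch
with open('page.html', 'w', encoding='utf-8') as f:
    f.write(br.page_source)
br.quit()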

You can easily do that with the simple Python library pywebcopy.

For the current version: 5.0.1

from pywebcopy import save_webpage
url = 'http://some-site.com/some-page.html'
download_folder = '/path/to/downloads/'    
kwargs = {'bypass_robots': True, 'project_name': 'recognisable-name'}
save_webpage(url, download_folder, **kwargs)

You will have the html, css and js all in your download_folder. It works completely like the original site.
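The library's README also documents a save_website helper for mirroring a whole site rather than a single page; a sketch assuming the same kwargs style works in your version (URL and paths are placeholders):

from pywebcopy import save_website

kwargs = {'bypass_robots': True, 'project_name': 'recognisable-name'}
# Crawls and saves the whole site instead of a single page
save_website(url='http://some-site.com/', project_folder='/path/to/downloads/', **kwargs)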

To get the above script by @rajatomar788 to run, I first had to install everything below.

To run pywebcopy you will need the following packages installed:

pip install pywebcopy 
pip install pyquery
pip install w3lib
pip install parse 
pip install lxml

After that it threw some errors, but I did get a folder full of the files that make up the webpage.

webpage    - INFO     - Starting save_assets Action on url: 'http://www.gatsby.ucl.ac.uk/teaching/courses/ml1-2016.html'
webpage    - Level 100 - Queueing download of <89> asset files.
Exception in thread <Element(LinkTag, file:///++resource++images/favicon2.ico)>:
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\threading.py", line 917, in _bootstrap_inner
    self.run()
  File "C:\ProgramData\Anaconda3\lib\threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pywebcopy\elements.py", line 312, in run
    super(LinkTag, self).run()
  File "C:\ProgramData\Anaconda3\lib\site-packages\pywebcopy\elements.py", line 58, in run
    self.download_file()
  File "C:\ProgramData\Anaconda3\lib\site-packages\pywebcopy\elements.py", line 107, in download_file
    req = SESSION.get(url, stream=True)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pywebcopy\configs.py", line 244, in get
    return super(AccessAwareSession, self).get(url, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 546, in get
    return self.request('GET', url, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 640, in send
    adapter = self.get_adapter(url=request.url)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 731, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for 'file:///++resource++images/favicon2.ico'
webpage    - INFO     - Starting save_html Action on url: 'http://www.gatsby.ucl.ac.uk/teaching/courses/ml1-2016.html'
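The InvalidSchema error above just means the page references an asset with a file:// URL, which requests cannot fetch. If you roll your own downloader, one way to skip such references is a scheme check, sketched here:

from urllib.parse import urlparse

def is_downloadable(asset_url):
    # requests only ships adapters for http and https, so skip anything else
    return urlparse(asset_url).scheme in ('http', 'https')

print(is_downloadable('file:///++resource++images/favicon2.ico'))  # False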

Try saveFullHtmlPage below, or adapt it.

It saves the modified *.html and, in a folder suffixed _files, the javascript, css and images, based on the tags script, link and img (the tags_inner dict keys).

import os, sys, re
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup
def saveFullHtmlPage(url, pagepath='page', session=requests.Session(), html=None):
    """Save web page html and supported contents        
        * pagepath : path-to-page   
        It will create a file  `'path-to-page'.html` and a folder `'path-to-page'_files`
    """
    def savenRename(soup, pagefolder, session, url, tag, inner):
        if not os.path.exists(pagefolder): # create only once
            os.mkdir(pagefolder)
        for res in soup.findAll(tag):   # images, css, etc..
            if res.has_attr(inner): # check inner tag (file object) MUST exists  
                try:
                    filename, ext = os.path.splitext(os.path.basename(res[inner])) # get name and extension
                    filename = re.sub(r'\W+', '', filename) + ext # clean special chars from name
                    fileurl = urljoin(url, res.get(inner))
                    filepath = os.path.join(pagefolder, filename)
                    # rename html ref so can move html and folder of files anywhere
                    res[inner] = os.path.join(os.path.basename(pagefolder), filename)
                    if not os.path.isfile(filepath): # was not downloaded
                        with open(filepath, 'wb') as file:
                            filebin = session.get(fileurl)
                            file.write(filebin.content)
                except Exception as exc:
                    print(exc, file=sys.stderr)
    if not html:
        html = session.get(url).text
    soup = BeautifulSoup(html, "html.parser")
    path, _ = os.path.splitext(pagepath)
    pagefolder = path+'_files' # page contents folder
    tags_inner = {'img': 'src', 'link': 'href', 'script': 'src'} # tag&inner tags to grab
    for tag, inner in tags_inner.items(): # saves resource files and rename refs
        savenRename(soup, pagefolder, session, url, tag, inner)
    with open(path+'.html', 'wb') as file: # saves modified html doc
        file.write(soup.prettify('utf-8'))

Example: google.com saved as google.html, with its contents in a google_files folder (in the current folder):

saveFullHtmlPage('https://www.google.com', 'google')
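Since the function accepts a requests session, you can, for example, pass one with a custom User-Agent for sites that reject the default requests header (the header value here is just an illustrative assumption):

import requests

session = requests.Session()
# Some sites block the default "python-requests" User-Agent
session.headers.update({'User-Agent': 'Mozilla/5.0'})
saveFullHtmlPage('https://www.google.com', 'google', session=session)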
