Python: extracting the .csv results after submitting data to a form with mechanize



I'm new to extracting data from the web with Python. Thanks to some other posts and this webpage, I figured out how to submit data to a form using the mechanize module.

Now I'm stuck on how to extract the results. Submitting the form produces several different outputs, but it would be perfect if I could get at the csv files. I assume you have to use the re module, but how do you actually download the results through Python?

Once the job has run, the csv files can be found under Summary => Results => Download Heavy Chain Table (you can just click "Load Example" to see how the webpage works).

import re
import mechanize
br = mechanize.Browser()
br.set_handle_robots(False)   # ignore robots
br.set_handle_refresh(False)  # can sometimes hang without this
url = 'http://circe.med.uniroma1.it/proABC/index.php'
response = br.open(url)
br.form = list(br.forms())[1]
# Controls can be found by name
control1 = br.form.find_control("light")
# Text controls can be set as a string
br["light"] = "DIQMTQSPASLSASVGETVTITCRASGNIHNYLAWYQQKQGKSPQLLVYYTTTLADGVPSRFSGSGSGTQYSLKINSLQPEDFGSYYCQHFWSTPRTFGGGTKLEIKRADAAPTVSIFPPSSEQLTSGGASVVCFLNNFYPKDINVKWKIDGSERQNGVLNSWTDQDSKDSTYSMSSTLTLTKDEYERHNSYTCEATHKTSTSPIVKSFNRNEC" 
br["heavy"] = "QVQLKESGPGLVAPSQSLSITCTVSGFSLTGYGVNWVRQPPGKGLEWLGMIWGDGNTDYNSALKSRLSISKDNSKSQVFLKMNSLHTDDTARYYCARERDYRLDYWGQGTTLTVSSASTTPPSVFPLAPGSAAQTNSMVTLGCLVKGYFPEPVTVTWNSGSLSSGVHTFPAVLQSDLYTLSSSVTVPSSPRPSETVTCNVAHPASSTKVDKKIVPRDC"
# To submit form
response = br.submit()
content = response.read()
# print content
result = re.findall(r"Prob_Heavy.csv", content)
print result

When I print content, the lines I'm interested in look like this:

<h2>Results</h2><br>
Predictions for Heavy Chain:
<a href='u17003I9f1/Prob_Heavy.csv'>Download Heavy Chain Table</a><br>
Predictions for Light Chain:
<a href='u17003I9f1/Prob_Light.csv'>Download Light Chain Table</a><br>

So the question is: how do I download/access href='u17003I9f1/Prob_Heavy.csv'?

Here is a quick and dirty example using BeautifulSoup and requests to avoid parsing the HTML with regular expressions. Run sudo pip install bs4 if you have pip installed but don't have BeautifulSoup yet.

import mechanize
from bs4 import BeautifulSoup as bs
import requests
import time

br = mechanize.Browser()
br.set_handle_robots(False)   # ignore robots
br.set_handle_refresh(False)  # can sometimes hang without this
url_base = "http://circe.med.uniroma1.it/proABC/"
url_index = url_base + "index.php"
response = br.open(url_index)
br.form = list(br.forms())[1]
# Controls can be found by name
control1 = br.form.find_control("light")
# Text controls can be set as a string
br["light"] = "DIQMTQSPASLSASVGETVTITCRASGNIHNYLAWYQQKQGKSPQLLVYYTTTLADGVPSRFSGSGSGTQYSLKINSLQPEDFGSYYCQHFWSTPRTFGGGTKLEIKRADAAPTVSIFPPSSEQLTSGGASVVCFLNNFYPKDINVKWKIDGSERQNGVLNSWTDQDSKDSTYSMSSTLTLTKDEYERHNSYTCEATHKTSTSPIVKSFNRNEC" 
br["heavy"] = "QVQLKESGPGLVAPSQSLSITCTVSGFSLTGYGVNWVRQPPGKGLEWLGMIWGDGNTDYNSALKSRLSISKDNSKSQVFLKMNSLHTDDTARYYCARERDYRLDYWGQGTTLTVSSASTTPPSVFPLAPGSAAQTNSMVTLGCLVKGYFPEPVTVTWNSGSLSSGVHTFPAVLQSDLYTLSSSVTVPSSPRPSETVTCNVAHPASSTKVDKKIVPRDC"
# To submit form
response = br.submit()
content = response.read()
# print content
soup = bs(content, "html.parser")  # name the parser explicitly to avoid bs4's warning
# default to "" so anchors without an href don't raise a TypeError
urls_csv = [x.get("href") for x in soup.findAll("a") if ".csv" in x.get("href", "")]
for file_path in urls_csv:
    status_code = 404
    retries = 0
    url_csv = url_base + file_path
    file_name = url_csv.split("/")[-1]
    while status_code == 404 and retries < 10:
        print "{} not ready yet".format(file_name)
        req = requests.get(url_csv)
        status_code = req.status_code
        retries += 1  # without this the loop would never give up on a permanent 404
        time.sleep(5)
    print "{} ready. Saving.".format(file_name)
    with open(file_name, "wb") as f:
        f.write(req.content)
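
The 404 loop is there because the server apparently answers 404 Not Found until the job has finished writing each csv; polling with a short sleep avoids having to guess the total run time up front.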

Running the script from the REPL:

Prob_Heavy.csv not ready yet
Prob_Heavy.csv not ready yet
Prob_Heavy.csv not ready yet
Prob_Heavy.csv ready. Saving.
Prob_Light.csv not ready yet
Prob_Light.csv ready. Saving.
>>> 

In Python 2, which it looks like you're using, use urllib2:

>>> import urllib2
>>> URL = "http://circe.med.uniroma1.it/proABC/u17003I9f1/Prob_Heavy.csv"
>>> urllib2.urlopen(URL).read()

Or, if you're trying to do this dynamically based on the href, you can do something like:

>>> import urllib2
>>> href='u17003I9f1/Prob_Heavy.csv'
>>> URL = 'http://circe.med.uniroma1.it/proABC/' + href
>>> urllib2.urlopen(URL).read()
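
To actually save the file rather than just read the bytes, continuing the same session (naming the file after the last path segment is my assumption, adjust as needed):

>>> file_name = href.split('/')[-1]  # e.g. Prob_Heavy.csv
>>> with open(file_name, 'wb') as f:
...     f.write(urllib2.urlopen(URL).read())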

Even though parsing HTML with a regular expression is a hack, it works if the format is always the same:

result = re.findall(r"<a href='([^']*)'>", content)
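
Given the content shown in the question, result comes back as the two relative paths. As a sketch, urlparse.urljoin (the Python 2 name of the module) can resolve them against the url variable from the question's script:

import urlparse

# the hrefs extracted from the example HTML above
result = ['u17003I9f1/Prob_Heavy.csv', 'u17003I9f1/Prob_Light.csv']
url = 'http://circe.med.uniroma1.it/proABC/index.php'
# urljoin drops index.php and appends each relative path
full_urls = [urlparse.urljoin(url, r) for r in result]
# e.g. http://circe.med.uniroma1.it/proABC/u17003I9f1/Prob_Heavy.csv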

I'm also not sure it's the best/nicest solution, but I would use the third-party wget module (pip install wget) to download the files:

import wget
for r in result:
    # compute full url
    csv_file = url.rpartition("/")[0] + "/" + r
    print("downloading {}".format(csv_file))
    # downloads and saves the .csv file in the current directory
    # "flattening" the path replacing slashes by underscores
    wget.download(csv_file,out=r.replace("/","_"))

Both of the previous answers work fine... once the page exists. But the job takes time to run (about 30 seconds), so the files are not there right away. I got it working by pausing the program with the time module:

from urllib2 import urlopen
import time
print "Job running..."
time.sleep(60)
csv_files = []
for href in result:
    URL = "http://circe.med.uniroma1.it/proABC/" + href  # href already ends in .csv
    print("downloading {}".format(URL))
    csv_files.append(urlopen(URL).read())
print "Job finished"
print csv_files

I'm not sure it's the most elegant solution, but it did the job in this case.
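
For reference, a fixed 60-second sleep can be replaced by polling until the server stops answering 404, in the spirit of the first answer. fetch_when_ready below is a hypothetical helper, not part of any library:

from urllib2 import urlopen, HTTPError
import time

def fetch_when_ready(url, retries=10, delay=5):
    # hypothetical helper: retry until the result file exists instead of sleeping blindly
    for _ in range(retries):
        try:
            return urlopen(url).read()
        except HTTPError as e:
            if e.code != 404:
                raise  # a real error, not just "not ready yet"
            time.sleep(delay)
    raise RuntimeError("file never appeared: " + url)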
