How to loop through a large list of URLs using multiprocessing



> Problem: check a list of 1000+ URLs and get each URL's return code (status_code).

The script I have works, but it is very slow.

I think there has to be a better, more Pythonic way to do this, one where I can spawn 10 or 20 threads to check the URLs and collect the responses, i.e.:

200 -> www.yahoo.com
404 -> www.badurl.com
...
---

Input file: url10.txt

www.example.com
www.yahoo.com
www.testsite.com

....

import requests
with open("url10.txt") as f:
    urls = f.read().splitlines()
print(urls)
for url in urls:
    url =  'http://'+url   #Add http:// to each url (there has to be a better way to do this)
    try:
        resp = requests.get(url, timeout=1)
        print(len(resp.content), '->', resp.status_code, '->', resp.url)
    except Exception as e:
        print("Error", url)

The challenge: speed this up with multiprocessing.

---

With multiprocessing

But it doesn't work; I get the error below. (Note: I'm not sure whether I implemented it correctly.)

AttributeError: Can't get attribute 'checkurl' on <module '__main__' (built-in)>

---

import requests
from multiprocessing import Pool
with open("url10.txt") as f:
    urls = f.read().splitlines()
 
def checkurlconnection(url):
    for url in urls:
        url =  'http://'+url
        try:
            resp = requests.get(url, timeout=1)
            print(len(resp.content), '->', resp.status_code, '->', resp.url)
        except Exception as e:
            print("Error", url)
        
if __name__ == "__main__":
    p = Pool(processes=4)
    result = p.map(checkurlconnection, urls)

In this case your task is I/O-bound, not processor-bound: it takes much longer for a website to reply than it takes your CPU to loop through the script once (excluding the TCP request). This means you won't get any speedup from doing the task in parallel processes (which is what multiprocessing does). What you want is multithreading. The way to get that is the little-documented, perhaps poorly named multiprocessing.dummy:

import requests
from multiprocessing.dummy import Pool as ThreadPool 
urls = ['https://www.python.org',
        'https://www.python.org/about/']
def get_status(url):
    r = requests.get(url)
    return r.status_code
if __name__ == "__main__":
    pool = ThreadPool(4)  # Make the Pool of workers
    results = pool.map(get_status, urls) #Open the urls in their own threads
    pool.close() #close the pool and wait for the work to finish 
    pool.join() 

See here for an example of multiprocessing vs. multithreading in Python.
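
For reference, the same thread-pool pattern is also available through the standard concurrent.futures module. A minimal sketch along those lines (the max_workers value of 20 is an arbitrary choice, matching the 10 or 20 threads mentioned in the question):

import requests
from concurrent.futures import ThreadPoolExecutor

def get_status(url):
    # Runs in a worker thread; the thread sleeps while waiting on the
    # network, so many requests can be in flight at once.
    try:
        return requests.get(url, timeout=1).status_code
    except requests.RequestException:
        return None  # unreachable or timed-out URL

if __name__ == "__main__":
    urls = ['https://www.python.org', 'https://www.python.org/about/']
    with ThreadPoolExecutor(max_workers=20) as pool:
        # map() preserves input order, so each URL pairs with its status
        for url, status in zip(urls, pool.map(get_status, urls)):
            print(status, '->', url)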

One problem is inside the checkurlconnection function: Pool.map calls it once per item of urls, handing each call a single URL, but as written the function ignores its argument and loops over the global urls variable, so every worker re-checks the entire list, which is not what you want. Drop the inner for loop and let each call handle the one URL it receives:

import requests
from multiprocessing import Pool
with open("url10.txt") as f:
    urls = f.read().splitlines()
def checkurlconnection(url):
    url = 'http://' + url   # each call receives a single URL from Pool.map
    try:
        resp = requests.get(url, timeout=1)
        print(len(resp.content), '->', resp.status_code, '->', resp.url)
    except Exception as e:
        print("Error", url)
if __name__ == "__main__":
    p = Pool(processes=4)
    result = p.map(checkurlconnection, urls)
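
To get output in the 200 -> www.yahoo.com shape the question asked for, the worker can return a (status, url) tuple instead of printing, and the main process can print the collected results. A minimal sketch, using threads as the first answer recommends (the helper name check_one is hypothetical):

import requests
from multiprocessing.dummy import Pool as ThreadPool  # threads, not processes

def check_one(url):
    # Hypothetical helper: fetch one URL, report (status_or_error, url).
    try:
        resp = requests.get('http://' + url, timeout=1)
        return resp.status_code, url
    except Exception:
        return 'Error', url

if __name__ == "__main__":
    with open("url10.txt") as f:
        urls = f.read().splitlines()
    pool = ThreadPool(20)                       # 20 worker threads
    for status, url in pool.map(check_one, urls):
        print(status, '->', url)
    pool.close()
    pool.join()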
