How can I improve the speed and efficiency of my multithreading in Python?



How can I speed up the multithreading in my code?

My code takes 130 seconds to complete 700 requests with 100 threads, which is really slow and frustrating.

My code edits the parameter values in a URL and makes a request for each variant, including the original (unedited) URL. The URLs are read from a file (urls.txt).

Let me show you an example:

Consider the following URL:

https://www.test.com/index.php?parameter=value1&parameter2=value2

The URL contains 2 parameters, so my code will make 3 requests:

1 request to the original URL:

https://www.test.com/index.php?parameter=value1&parameter2=value2

1 request with the first value modified:

https://www.test.com/index.php?parameter=replaced_value&parameter2=value2

1 request with the second value modified:

https://www.test.com/index.php?parameter=value1&parameter2=replaced_value
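As an aside, building these variants with plain string `replace` can hit the wrong parameter when two parameters share the same value. A small sketch of a safer way using the standard library's `urllib.parse` (the function name `modified_urls` and the placeholder `replaced_value` are my own, not from the original code):

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def modified_urls(url, replacement='replaced_value'):
    """Yield one URL per query parameter, with only that parameter's value replaced."""
    parts = urlsplit(url)
    params = parse_qsl(parts.query)
    for i in range(len(params)):
        new_params = list(params)
        new_params[i] = (new_params[i][0], replacement)  # swap in the test value
        yield urlunsplit(parts._replace(query=urlencode(new_params)))

for u in modified_urls('https://www.test.com/index.php?parameter=value1&parameter2=value2'):
    print(u)
```

Because each variant is rebuilt from the parsed parameter list, only the targeted parameter changes, regardless of duplicate values.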

I have already tried asyncio, but I had more success with concurrent.futures.

I even tried increasing the number of threads. At first I thought that was the problem, but if I increase the thread count significantly, the script freezes for 30-50 seconds at startup, and the speed really doesn't improve the way I expected.

I think it's a problem with how my code structures the multithreading, because I've seen others achieve incredible speeds with concurrent.futures.

import requests
import uuid
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

start = time.time()

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'}

def make_request(url2):
    try:
        if '?' in url2 and '=' in url2:
            request_1 = requests.get(url2, headers=headers, timeout=10)
            url2_modified = url2.split("?")[1]
            times = url2_modified.count("&") + 1
            for x in range(0, times):
                split1 = url2_modified.split("&")[x]
                value = split1.split("=")[1]
                parameter = split1.split("=")[0]
                url = url2.replace('=' + value, '=1')
                request_2 = requests.get(url, stream=True, headers=headers, timeout=10)
                html_1 = request_1.text
                html_2 = request_2.text
                print(request_1.status_code, '-', url2)
                print(request_2.status_code, '-', url)
    except requests.exceptions.RequestException as e:
        return e

def runner():
    threads = []
    with ThreadPoolExecutor(max_workers=100) as executor:
        file1 = open('urls.txt', 'r', errors='ignore')
        Lines = file1.readlines()
        count = 0
        for line in Lines:
            count += 1
            threads.append(executor.submit(make_request, line.strip()))

runner()
end = time.time()
print(end - start)

In the loop inside make_request you run a plain requests.get, which doesn't use a thread (or any other method) to speed things up, so each iteration has to wait for the previous request to finish before it can send the next one.

Inside make_request I use another ThreadPoolExecutor to run each requests.get (created in the loop) in a separate thread:

executor.submit(make_modified_request, modified_url) 

This gives me a time of ~1.2s.

If I use a plain call

make_modified_request(modified_url)

then it gives me a time of ~3.2s.


Minimal working example:

I use the real URL https://httpbin.org/get so everyone can simply copy and run it.

from concurrent.futures import ThreadPoolExecutor
import requests
import time
#import urllib.parse

# --- constants --- (PEP8: UPPER_CASE_NAMES)

HEADERS = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'}

# --- functions ---

def make_modified_request(url):
    """Send modified url."""
    print('send:', url)
    response = requests.get(url, stream=True, headers=HEADERS)
    print(response.status_code, '-', url)
    html = response.text   # ???
    # ... code to process HTML ...

def make_request(url):
    """Send normal url and create threads with modified urls."""
    threads = []
    with ThreadPoolExecutor(max_workers=10) as executor:
        print('send:', url)
        # send base url
        response = requests.get(url, headers=HEADERS)
        print(response.status_code, '-', url)
        html = response.text   # ???

        #parts = urllib.parse.urlparse(url)
        #print('query:', parts.query)
        #arguments = urllib.parse.parse_qs(parts.query)
        #print('arguments:', arguments)   # dict  {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['D'], 'e': ['E']}

        arguments = url.split("?")[1]
        arguments = arguments.split("&")
        arguments = [arg.split("=") for arg in arguments]
        print('arguments:', arguments)    # list [['a', 'A'], ['b', 'B'], ['c', 'C'], ['d', 'D'], ['e', 'E']]

        for name, value in arguments:
            modified_url = url.replace('='+value, '=1')
            print('modified_url:', modified_url)

            # run thread with modified url
            threads.append(executor.submit(make_modified_request, modified_url))

            # run normal function with modified url
            #make_modified_request(modified_url)

        print('[make_request] len(threads):', len(threads))

def runner():
    threads = []

    with ThreadPoolExecutor(max_workers=10) as executor:
        #fh = open('urls.txt', errors='ignore')
        fh = [
            'https://httpbin.org/get?a=A&b=B&c=C&d=D&e=E',
            'https://httpbin.org/get?f=F&g=G&h=H&i=I&j=J',
            'https://httpbin.org/get?k=K&l=L&m=M&n=N&o=O',
            'https://httpbin.org/get?a=A&b=B&c=C&d=D&e=E',
            'https://httpbin.org/get?f=F&g=G&h=H&i=I&j=J',
            'https://httpbin.org/get?k=K&l=L&m=M&n=N&o=O',
        ]
        for line in fh:
            url = line.strip()
            # create thread with url
            threads.append(executor.submit(make_request, url))
        print('[runner] len(threads):', len(threads))

# --- main ---

start = time.time()
runner()
end = time.time()
print('time:', end - start)

BTW:

I would like to create a single

executor = ThreadPoolExecutor(max_workers=10)

and later use that same executor in all the functions (maybe it would run even faster), but at the moment I don't have working code for it.
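A minimal sketch of that idea, with made-up `parent`/`child` functions standing in for the request code. One caveat I'd assume applies here: with a single shared pool, a task must not block on a sub-task's `.result()` inside the pool, or the pool can deadlock once every worker is a waiting parent, so the parents below only submit their children and return the futures:

```python
from concurrent.futures import ThreadPoolExecutor

# One module-level pool shared by every function.
executor = ThreadPoolExecutor(max_workers=10)

def child(x):
    return x + 1

def parent(x):
    # fire-and-forget: submit children to the shared pool, do NOT wait here
    return [executor.submit(child, x * 10 + i) for i in range(3)]

# collect the child futures from each parent and wait in the main thread only
child_futures = []
for f in [executor.submit(parent, x) for x in range(2)]:
    child_futures.extend(f.result())   # a parent just returns its child futures

print(sorted(cf.result() for cf in child_futures))
executor.shutdown()
```

Waiting only in the main thread (which is not a pool worker) keeps all ten workers free to make actual progress.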
