Is there any way to parse with Beautiful Soup while threading?



How can I parse links with multithreading?

Basically I am finding links and then parsing those links one by one.

It does this:

for link in links:
    scrape_for_info(link)

links contains:

https://www.xtip.co.uk/en/?r=bets/xtra&group=476641&game=312053910
https://www.xtip.co.uk/en/?r=bets/xtra&group=476381&game=312057618
...
https://www.xtip.co.uk/en/bets/xtra.html?group=477374&game=312057263

scrape_for_info(url) looks like this:

def scrape_for_info(url):
    scrape = CP_GetOdds(url)
    for x in range(scrape.GameRange()):
        sql_str = "INSERT INTO Scraped_Odds VALUES ('"
        sql_str += str(scrape.Time()) + "', '"
        sql_str += str(scrape.Text(x)) + "', '"
        sql_str += str(scrape.HomeTeam()) + "', '"
        sql_str += str(scrape.Odds1(x)) + "', '"
        sql_str += str(scrape.Odds2(x)) + "', '"
        sql_str += str(scrape.AwayTeam()) + "')"
        cursor.execute(sql_str)
        conn.commit()

I have seen threading used when scraping websites, but mostly for the scraping itself rather than for the parsing.

I hope someone can show me how to parse faster than I currently do. Since I am looking at live odds, I have to update as quickly as possible.

For multiprocessing you could consider using a queue.

Usually you would create two jobs: one that creates the URLs and one that consumes them. Let's call them creator and consumer. I will assume there is some semaphore-like flag called closing_condition (created e.g. with a Value), and that the methods you use to produce the URLs and to store them are called create_url_method and store_url respectively.

from multiprocessing import Queue, Value, Process
import queue


def creator(urls, closing_condition):
    """Parse pages and put the found urls into the given Queue."""
    while not closing_condition.value:
        created_urls = create_url_method()
        [urls.put(url) for url in created_urls]


def consumer(urls, closing_condition):
    """Consume urls from the given Queue."""
    while not closing_condition.value:
        try:
            store_url(urls.get(timeout=1))
        except queue.Empty:
            pass


urls = Queue()
semaphore = Value('d', 0)
creators_number = 2
consumers_number = 2

creators = [
    Process(target=creator, args=(urls, semaphore))
    for i in range(creators_number)
]
consumers = [
    Process(target=consumer, args=(urls, semaphore))
    for i in range(consumers_number)
]

[p.start() for p in creators + consumers]
[p.join() for p in creators + consumers]
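
Note that the workers loop until the closing condition is set, so the joins at the end will block until something flips that flag. A minimal sketch of a shutdown, assuming the semaphore Value defined above and a purely illustrative time-based stop condition:

import time

[p.start() for p in creators + consumers]

# Illustrative stop condition: let the workers run for a while, then flip the
# flag so that both the creator and consumer loops exit on their next check.
time.sleep(60)
semaphore.value = 1

[p.join() for p in creators + consumers]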

There is a good example of this in Automate the Boring Stuff with Python.

https://automatetheboringstuff.com/chapter15/

Basically, you use the threading module to create a separate thread for each URL and then wait for all of them to finish.

import threading

def scrape_for_info(url):
    scrape = CP_GetOdds(url)
    for x in range(scrape.GameRange()):
        sql_str = "INSERT INTO Scraped_Odds VALUES ('"
        sql_str += str(scrape.Time()) + "', '"
        sql_str += str(scrape.Text(x)) + "', '"
        sql_str += str(scrape.HomeTeam()) + "', '"
        sql_str += str(scrape.Odds1(x)) + "', '"
        sql_str += str(scrape.Odds2(x)) + "', '"
        sql_str += str(scrape.AwayTeam()) + "')"
        cursor.execute(sql_str)
        conn.commit()

# Create and start the Thread objects.
threads = []
for link in links:
    thread = threading.Thread(target=scrape_for_info, args=(link,))
    threads.append(thread)
    thread.start()

# Wait for all threads to end.
for thread in threads:
    thread.join()
print('Done.')
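
One caveat with this approach: all of the threads share the same cursor and conn, and most database drivers (sqlite3 in particular) do not like being used from several threads at once, so it may be safer to serialise the writes. A minimal sketch, where db_lock is my own addition rather than something from the original code:

import threading

db_lock = threading.Lock()  # guards the shared cursor/connection

def scrape_for_info(url):
    scrape = CP_GetOdds(url)
    for x in range(scrape.GameRange()):
        sql_str = "INSERT INTO Scraped_Odds VALUES ('"
        sql_str += str(scrape.Time()) + "', '"
        sql_str += str(scrape.Text(x)) + "', '"
        sql_str += str(scrape.HomeTeam()) + "', '"
        sql_str += str(scrape.Odds1(x)) + "', '"
        sql_str += str(scrape.Odds2(x)) + "', '"
        sql_str += str(scrape.AwayTeam()) + "')"
        with db_lock:  # only one thread touches the database at a time
            cursor.execute(sql_str)
            conn.commit()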

Thank you for all the answers!

The following did the trick:

from multiprocessing import Pool

with Pool(10) as p:
    p.map(scrape_for_info, links)
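
One detail worth keeping in mind with Pool: every worker runs in a separate process, so a database connection opened in the main process is not available there. A minimal sketch of giving each worker its own connection through an initializer (the sqlite3 call and the file name are placeholders, not from the original code):

from multiprocessing import Pool
import sqlite3  # placeholder driver; use whichever DB module you already use

conn = None
cursor = None

def init_worker():
    """Open a per-process connection so scrape_for_info can use cursor/conn."""
    global conn, cursor
    conn = sqlite3.connect('odds.db')  # hypothetical database file
    cursor = conn.cursor()

if __name__ == '__main__':
    with Pool(10, initializer=init_worker) as p:
        p.map(scrape_for_info, links)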
