Web scraping with multiprocessing isn't working



I am trying to scrape a large number of URLs, and I applied multiprocessing to speed it up, but I don't understand why it isn't any faster at all. Here is part of my code:

def scrape(url,output_path):
    page = urlopen(url)
    soup = BeautifulSoup(page, 'html.parser')
    item_text = soup.select('#scatter6001 script')[0].text
    table = soup.find_all('table',{'class':'noborder dark'})
    df1 = pd.read_html(str(table),header = 0)
    df1 = pd.DataFrame(df1[0])
    ...
# function for scraping the data from url
rootPath = '...' 
urlp1 = "https://www.proteinatlas.org/"
try:
    df1 = pd.read_csv(rootPath + "cancer_list1_2(1).csv", header=0);
except Exception as e:
    print("File " + f + " doesn't exist")
    print(str(e))
    sys.exit()
cancer_list = df1.as_matrix().tolist()
URLs = []
for cancer in cancer_list:
    urlp2 = "/pathology/tissue/" + cancer[1]
    f = cancer[0]
    
    try:
        df1 = pd.read_csv(rootPath + f + ".csv", header=0);
    except Exception as e:
        print("File " + f + " doesn't exist")
        print(str(e))
        sys.exit()
    ...
# list of URLs
if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=6)
    records = p.map(scrape(url,output_path))
    p.terminate()
    p.join()

I'm not sure how to speed up the scraping with multiprocessing.

You are not actually using multiprocessing. You call the scrape function once and then pass its result to p.map(). Instead, you need to give p.map() a callable that takes a single argument plus the iterable of URLs. Because the pool pickles the callable before sending it to the worker processes, a lambda cannot be used here; functools.partial works, for example:

from functools import partial
func = partial(scrape, output_path=output_path)
p.map(func, list_of_urls)
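
For completeness, here is a minimal sketch of how the whole pool block from the question could look with that change. It assumes scrape and the URLs list are defined as in the question; output_path stays a placeholder and the worker count of 6 is simply the value used above, not a recommendation:

from functools import partial
import multiprocessing

if __name__ == '__main__':
    output_path = '...'  # placeholder, as in the question
    func = partial(scrape, output_path=output_path)

    # map() takes the callable plus an iterable of single arguments;
    # each URL is handed off to one of the worker processes.
    with multiprocessing.Pool(processes=6) as pool:
        records = pool.map(func, URLs)

Using the pool as a context manager also takes care of shutting it down, replacing the explicit terminate()/join() calls, and it avoids the pool/p name mix-up in the question's snippet. Note that the work here is dominated by network I/O, so any speed-up comes from overlapping the waits on urlopen rather than from extra CPU.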
