Efficiently download images from URLs in Python



I have a list of image URLs stored in a pandas DataFrame. I want to download all of these images and store them locally.

The code I use to do this is:

import os
import requests

def load(df, output_folder):
    print("Ready to load " + str(len(df.index)) + " images.")
    for i, row in df.iterrows():
        print("Image " + str(i))
        save_image_from_url(row["image_url"], os.path.join(output_folder, row["image_name"]))

def save_image_from_url(url, output_path):
    """From a given URL, download the image and store it at the given path."""
    image = requests.get(url)
    with open(output_path, 'wb') as f:
        f.write(image.content)

The problem is that the process is very slow (from 0.5 to 4 seconds per image). Is there a way to do it faster?

The obvious approach is to download in parallel; there is a clear example of this in the `concurrent.futures` documentation.

For your case, try something like this:

import concurrent.futures
import os
import requests

def save_image_from_url(row, output_folder):
    # row is a DataFrame row with image_url and image_name fields
    image = requests.get(row.image_url)
    output_path = os.path.join(output_folder, row.image_name)
    with open(output_path, "wb") as f:
        f.write(image.content)

def load(df, output_folder):
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        future_to_row = {
            executor.submit(save_image_from_url, row, output_folder): row
            for _, row in df.iterrows()
        }
        for future in concurrent.futures.as_completed(future_to_row):
            row = future_to_row[future]
            try:
                future.result()
            except Exception as exc:
                print("%r generated an exception: %s" % (row.image_url, exc))
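Beyond threading, a further speedup usually comes from reusing a `requests.Session` so each worker keeps its TCP connection alive instead of reconnecting per image. A minimal sketch under that assumption (the `get_session` and `fetch` helpers, and the `timeout` value, are illustrative additions, not part of the original answer):

```python
import concurrent.futures
import os
import threading
import requests

# One Session per thread: Session is not documented as thread-safe,
# so each worker gets its own, stored in thread-local storage.
thread_local = threading.local()

def get_session():
    if not hasattr(thread_local, "session"):
        thread_local.session = requests.Session()
    return thread_local.session

def fetch(url, output_path):
    # A timeout keeps one stalled download from tying up a worker forever.
    response = get_session().get(url, timeout=10)
    response.raise_for_status()
    with open(output_path, "wb") as f:
        f.write(response.content)

def load(df, output_folder):
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        futures = [
            executor.submit(
                fetch,
                row.image_url,
                os.path.join(output_folder, row.image_name),
            )
            for _, row in df.iterrows()
        ]
        for future in concurrent.futures.as_completed(futures):
            future.result()  # re-raise any download error here
```

Connection reuse matters most when many images come from the same host, since the TLS handshake is often a large fraction of each 0.5-second request.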
