I'm retrieving track data for a total of 10 tracks from the Spotify API, but it takes about 2-3 seconds to run. Is there any way to speed it up using some Python library such as multiprocessing or something else?
track_url = []
track_name = []
album_image = []

for i in range(len(tracks_recommend)):
    track_id = tracks_recommend.at[i, 'id']
    # call to spotify api
    res = spotify.track(track_id=track_id)
    track_url.append(res['external_urls'])
    track_name.append(res['name'])
    album_image.append(res['album']['images'][0]['url'])
Is there a way to speed this up with some Python library such as multiprocessing?
Yes, a pool can run the API requests in parallel. This will get you started:
from multiprocessing.pool import ThreadPool as Pool

def recommend(track_id):
    return spotify.track(track_id=track_id)

track_ids = [tracks_recommend.at[i, 'id']
             for i in range(len(tracks_recommend))]

with Pool(5) as pool:
    for res in pool.map(recommend, track_ids):
        ...
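For completeness, here is one way the loop body could be filled in to rebuild the question's three lists. This is just a sketch; it assumes the spotify client, tracks_recommend DataFrame, and the recommend helper and track_ids list defined above.

track_url, track_name, album_image = [], [], []
with Pool(5) as pool:
    # pool.map preserves the order of track_ids
    for res in pool.map(recommend, track_ids):
        track_url.append(res['external_urls'])
        track_name.append(res['name'])
        album_image.append(res['album']['images'][0]['url'])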
It depends on whether Spotify tracks you and rate-limits how many requests you can make at once. If it doesn't, you can start with something like this:
from multiprocessing.pool import ThreadPool as Pool  # a thread pool is enough for I/O-bound API calls

def process_track(track_id):
    # call to spotify api
    res = spotify.track(track_id=track_id)
    return (res['external_urls'], res['name'], res['album']['images'][0]['url'])

with Pool(4) as p:  # replace 4 with whatever number you want
    track_ids = [tracks_recommend.at[i, 'id'] for i in range(len(tracks_recommend))]
    output = p.map(process_track, track_ids)

track_url, track_name, album_image = zip(*output)
This doesn't help with the latency of any single request, but it can increase throughput: with several requests in flight at once, the total wall-clock time is closer to that of the slowest call rather than the sum of all of them.
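As a variation, the same idea can be written with the standard library's concurrent.futures module. This is only an equivalent sketch under the same assumptions as above (the spotify client, the process_track helper, and the track_ids list):

from concurrent.futures import ThreadPoolExecutor

# executor.map keeps the order of track_ids, just like pool.map
with ThreadPoolExecutor(max_workers=4) as executor:
    output = list(executor.map(process_track, track_ids))

track_url, track_name, album_image = zip(*output)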