I want to download multiple files and save them in a zip file.
Here is my code:

import zipfile
import requests
import os
pics = ['url/1.jpg', 'url/2.jpg', 'url/3.jpg']
dir = '/directory'
with zipfile.ZipFile(dir + '/test.zip', 'w') as my_zip:
    for x in range(len(pics)):
        fn = dir + '/' + pics[x].split('/')[-1]
        r = requests.get(pics[x], allow_redirects=True)
        open(fn, 'wb').write(r.content)
        my_zip.write(fn, str(x + 1) + os.path.splitext(fn)[1])
        os.remove(fn)
Is there a smarter/cleaner/shorter way to do this? Downloading the image file, then putting it into the zip file, then deleting it seems unnecessarily convoluted.
I think you should use a temporary file in this case.
This should work:
import requests
import zipfile
import tempfile

pics = ['http://www.princeton.edu/~dancexp/Images/1-XPAlifun.jpg',
        'http://www.princeton.edu/~dancexp/Images/1-ExposeJess.jpg',
        'http://www.princeton.edu/~dancexp/Images/JessHsurevisedlores.jpg']

with zipfile.ZipFile("test.zip", "w") as my_zip:
    for pic in pics:
        response = requests.get(pic)
        if response.status_code == 200:
            # buffer the download in a temporary file, then copy it into the zip
            with tempfile.TemporaryFile() as tmpf:
                tmpf.write(response.content)
                tmpf.seek(0)
                my_zip.writestr(pic.split('/')[-1], tmpf.read())
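In fact, you can probably skip the temporary file as well: writestr() accepts raw bytes, so the loop body shrinks to a single call. A minimal sketch of that variant, reusing the pics list from above:

import requests
import zipfile

with zipfile.ZipFile("test.zip", "w") as my_zip:
    for pic in pics:
        response = requests.get(pic)
        if response.status_code == 200:
            # write the downloaded bytes straight into the archive
            my_zip.writestr(pic.split('/')[-1], response.content)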
I think speed matters too. Try this code:
import requests, zipfile, os
from multiprocessing.dummy import Pool  # thread pool, well suited to I/O-bound downloads

directory = 'pics/'
os.makedirs(directory, exist_ok=True)  # make sure the target directory exists

def save(url):
    filename = directory + url.split('/')[-1]
    try:
        res = requests.get(url)
        res.raise_for_status()  # treat HTTP errors (404 etc.) as failures
    except requests.RequestException:
        return {'filename': filename, 'status': 'download error'}
    try:
        with open(filename, 'wb') as f:
            f.write(res.content)
    except OSError:
        return {'filename': filename, 'status': "can't create file"}
    return {'filename': filename, 'status': 'ok'}
urls = ['https://i.stack.imgur.com/kBVja.jpg',
        'https://lh5.googleusercontent.com/-TlrV5ArUF6s/AAAAAAAAAAI/AAAAAAAAAC0/d605oPHpYgc/photo.jpg',
        'https://i.stack.imgur.com/ipDCR.png',
        'https://i.stack.imgur.com/Zpq5l.jpg']
pool_ = Pool(16)  # tune the pool size to your CPU and network bandwidth
result = pool_.map(save, urls)

with zipfile.ZipFile('test.zip', 'w') as zip_file:
    for x in result:
        if x['status'] == 'ok':
            zip_file.write(x['filename'])
            os.remove(x['filename'])
            print(x['filename'], "added to zip file")
        else:
            print(x)
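If you want the parallel downloads without the intermediate files on disk, the two answers combine naturally: fetch the bytes in a thread pool and do the archive writes sequentially, since a ZipFile is not safe for concurrent writes. A minimal sketch under that assumption, reusing the urls list above (fetch is a helper name introduced here, not from the original code):

import requests, zipfile
from multiprocessing.dummy import Pool

def fetch(url):
    # hypothetical helper: return (archive name, bytes), or None on failure
    try:
        res = requests.get(url)
        res.raise_for_status()
        return url.split('/')[-1], res.content
    except requests.RequestException:
        return None

with Pool(16) as pool:
    downloads = pool.map(fetch, urls)

with zipfile.ZipFile('test.zip', 'w') as zip_file:
    # write sequentially: only the downloads ran in parallel
    for item in downloads:
        if item:
            zip_file.writestr(item[0], item[1])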