I'm new to asyncio and am using it to try to fire off a large number of API requests faster, storing the data each request returns in a dictionary. I think I have the asyncio/aiohttp syntax down, since the data does come back, but I'm struggling to store it in the dictionary.
import asyncio
import aiohttp

# HEADERS is defined elsewhere in my script
search_ids = [1,2,3,4,5,6,7,8,9,10]
stats = {"Date": [], "Instance ID": [], "Result": [], "Display Name": []}

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = []
        for search_id in search_ids:
            task = asyncio.ensure_future(get_data(session, search_id))
            tasks.append(task)
        responses = await asyncio.gather(*tasks)
        for y in responses['entries']:
            stats['Display Name'].append(y['player']['UserInfo']['displayName'])

async def get_data(session, search_id):
    url = f'https://www.myapi.com/{search_id}'
    async with session.get(url, headers=HEADERS, ssl=False) as response:
        results = await response.json()
        return results['Response']

asyncio.run(main())
TypeError: list indices must be integers or slices, not str
This makes it look to me as if the returned data isn't iterable. However, I've inspected what comes back and it's exactly what I expected. And if I change the code to the following, it works fine:
search_ids = [1,2,3,4,5,6,7,8,9,10]
stats = {"Date": [], "Instance ID": [], "Result": [], "Display Name": []}

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = []
        for search_id in search_ids:
            task = asyncio.ensure_future(get_data(session, search_id))
            tasks.append(task)
        responses = await asyncio.gather(*tasks)
        for y in responses:
            stats['Display Name'].append(y['entries'][0]['player']['UserInfo']['displayName'])
            stats['Display Name'].append(y['entries'][1]['player']['UserInfo']['displayName'])
            stats['Display Name'].append(y['entries'][2]['player']['UserInfo']['displayName'])
            stats['Display Name'].append(y['entries'][3]['player']['UserInfo']['displayName'])
            stats['Display Name'].append(y['entries'][4]['player']['UserInfo']['displayName'])
            stats['Display Name'].append(y['entries'][5]['player']['UserInfo']['displayName'])

async def get_data(session, search_id):
    url = f'https://www.myapi.com/{search_id}'
    async with session.get(url, headers=HEADERS, ssl=False) as response:
        results = await response.json()
        return results['Response']

asyncio.run(main())
Aren't I basically doing by hand here the very thing I was trying to do with a for loop in the top snippet? I'd just use this workaround, except I plan to extract a lot more data from each response, and spelling it out manually over and over isn't practical.
Plus, frankly, it makes me question whether I actually understand async at all, if it's tripping me up on such a simple error.
Thanks for any help.
You're iterating over two different things. In the first snippet you iterate over responses['entries']; in the second you iterate over responses. responses is a list (of dicts), not a dict, so it can only be indexed with integers, not string keys.

When you ran the code synchronously, all you had to do was loop over the entries of a single response. Now that you're dealing with several responses collected in a list, you need to loop over the responses and then over the entries inside each response. That takes two nested for loops:
responses = await asyncio.gather(
    *[get_data(session, search_id) for search_id in search_ids]
)
for response in responses:
    for entry in response["entries"]:
        stats["Display Name"].append(
            entry["player"]["UserInfo"]["displayName"]
        )
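If it helps to see why the first version blows up, here is a minimal sketch with made-up data: asyncio.gather returns a plain list of the results, in the same order as the awaitables you pass it, so indexing that list with a string key raises exactly the TypeError you saw.

# hypothetical data, just to show the shape gather hands back
responses = [
    {"entries": [{"player": {"UserInfo": {"displayName": "Alice"}}}]},
    {"entries": [{"player": {"UserInfo": {"displayName": "Bob"}}}]},
]
responses[0]["entries"]   # fine: index into the list first, then use the dict keys
responses["entries"]      # TypeError: list indices must be integers or slices, not str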
This might help:
for index, y in enumerate(responses):
    # note: this appends only the index-th entry of each response
    stats['Display Name'].append(y['entries'][index]['player']['UserInfo']['displayName'])
The error comes from using a string key on the wrong data type. Based on your second snippet, try this:
search_ids = [1,2,3,4,5,6,7,8,9,10]
stats = {"Date":[],"Instance ID":[],"Result":[],"Display Name":[]}
async def main():
async with aiohttp.ClientSession() as session:
tasks = []
for search_id in search_ids:
task = asyncio.ensure_future(get_data(session, search_id))
tasks.append(task)
responses = await asyncio.gather(
*[get_data(session, search_id) for search_id in search_ids])
for response in responses:
for entry in response["entries"]:
stats["Display Name"].append(
entry["player"]["UserInfo"]["displayName"]
)
async def get_data(session, carnage_id):
url = f'https://www.myapi.com/{search_id}'
async with session.get(url, headers=HEADERS, ssl=False) as response:
results = await response.json()
return results['Response']
asyncio.run(main())
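Since you plan to pull more than the display name out of each response, the same nested loop extends to the other stats keys. The keys below (other than displayName) are placeholders I've made up, since I don't know what your API actually calls those fields; swap in the real names from your JSON:

for response in responses:
    for entry in response["entries"]:
        stats["Display Name"].append(entry["player"]["UserInfo"]["displayName"])
        # the three keys below are hypothetical placeholders, not real field names
        stats["Date"].append(entry.get("date"))
        stats["Instance ID"].append(entry.get("instanceId"))
        stats["Result"].append(entry.get("result"))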