I am trying to capture the response from Postman, and it needs to be stored in a CSV file.
Here is the code I tried, but I am not getting the expected output:
import csv
import sys

# session (a requests.Session) and data (the POST payload) are defined earlier
UsergroupURL = "https://dex.3ds.com"
UsergroupsURL = UsergroupURL + "/3drdd/resources/b1/usersgroup?select=title,description,owner,members,pending_members,creation_date,modification_date&top=100&skip=0"
skip = 0
all_results = []
while True:
    usergroupresponse = session.post(UsergroupsURL, data=data)
    if usergroupresponse.status_code != 200:
        Failmsg = "Failed to post usergroup. Status code : " + str(usergroupresponse.status_code)
        sys.exit(Failmsg)
    results = usergroupresponse.json()
    print(results)
    if len(results) == 0:
        # No more results to retrieve
        break
    # Append the results to the all_results list
    all_results += results
    # Increment skip to skip the previously retrieved results
    skip += 100
    # Update the URL with the new skip value
    UsergroupsURL = UsergroupURL + "/3drdd/resources/b1/usersgroup?select=title,description,owner,members,pending_members,creation_date,modification_date&top=100&skip=" + str(skip)

print(all_results)
# Write the accumulated results to a CSV file
with open('response.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for row in all_results:
        writer.writerow(row.values())
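One fragile spot in the CSV step above is `writer.writerow(row.values())`: it emits no header row and silently misaligns columns if any record is missing a key. A minimal sketch using `csv.DictWriter` avoids both problems; the sample records here are hypothetical stand-ins for whatever the API actually returns:

```python
import csv

# Hypothetical sample of the JSON objects the API might return;
# the real field names come from the "select" query parameter.
all_results = [
    {"title": "Group A", "description": "First group", "owner": "alice"},
    {"title": "Group B", "description": "Second group", "owner": "bob"},
]

with open("response.csv", "w", newline="", encoding="utf-8") as csvfile:
    # DictWriter writes a header row and aligns columns by key,
    # so a missing key becomes an empty cell instead of shifting values.
    writer = csv.DictWriter(csvfile, fieldnames=list(all_results[0].keys()),
                            restval="")
    writer.writeheader()
    writer.writerows(all_results)
```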
In this code, the first hit of the URL returns only 100 records, but the URL holds more data, so the skip parameter is used to retrieve all of it; the first 100 records are captured in the CSV file.
I expect that on the second hit of the URL the skip value changes to 100, so the first 100 records are skipped and the second 100 are retrieved and captured in the CSV file; on the third hit the skip value changes to 200 to retrieve the rest, and the URL should keep being hit until it returns no more data.
Here is my updated code:
Only this part needs to change:
csv_filename = f"response_{skip_value}.csv"
with open(csv_filename, 'w', newline='', encoding='utf-8', errors='ignore') as csvfile:
    writer = csv.writer(csvfile)
    header = ["uri", "title", "description", "owner", "members", "pending_members", "creation_date", "modification_date"]
    writer.writerow(header)
    for group in results['groups']:
        # .get avoids a KeyError when a group is missing one of the fields
        writer.writerow(group.get(x, "") for x in header)
if not results:
    break
# Increment the skip value by 100
skip_value += 100
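Put together, the whole pagination loop can be sketched as below. This is a self-contained, runnable sketch under stated assumptions: `fetch_page` is a hypothetical stand-in for the real `session.post(...).json()` call, and the field names are illustrative, not the API's actual schema. Each page of up to 100 records lands in its own `response_<skip>.csv` file, and the loop stops once the server returns an empty result:

```python
import csv

def fetch_page(skip, page_size=100):
    """Stand-in for the real session.post(...).json() call: returns one
    page of fake group dicts, and an empty list once data runs out."""
    total = 250  # pretend the server holds 250 groups in all
    return [
        {"title": f"group-{i}", "owner": f"user-{i}"}
        for i in range(skip, min(skip + page_size, total))
    ]

header = ["title", "owner"]
skip_value = 0
while True:
    results = fetch_page(skip_value)
    if not results:
        # Server returned no more data: stop hitting the URL
        break
    # One CSV file per page, named after the current skip value
    csv_filename = f"response_{skip_value}.csv"
    with open(csv_filename, "w", newline="", encoding="utf-8") as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow(header)
        for group in results:
            writer.writerow(group.get(x, "") for x in header)
    # Next request skips everything already retrieved
    skip_value += 100
```

The key ordering difference from the code above: the empty-results check happens immediately after the request and before any file is written, so no empty `response_<skip>.csv` is created on the final, empty page.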