I'm using Python to access the Foreman API to gather some facts about all the hosts Foreman knows about. Unfortunately, the v1 Foreman API has no get-all-hosts-facts call (or anything similar), so I have to loop over every host and fetch the information individually. Doing so leaves me with an annoying problem: each call for a given host returns a JSON object like this:
{
    "host1.com": {
        "apt_update_last_success": "1452187711",
        "architecture": "amd64",
        "augeasversion": "1.2.0",
        "bios_release_date": "06/03/2015",
        "bios_vendor": "Dell Inc."
    }
}
That on its own is fine; the problem appears when I append the next host's information. I then end up with a JSON file that looks like this:
{
    "host1.com": {
        "apt_update_last_success": "1452187711",
        "architecture": "amd64",
        "augeasversion": "1.2.0",
        "bios_release_date": "06/03/2015",
        "bios_vendor": "Dell Inc."
    }
}{
    "host2.com": {
        "apt_update_last_success": "1452703454",
        "architecture": "amd64",
        "augeasversion": "1.2.0",
        "bios_release_date": "06/03/2015",
        "bios_vendor": "Dell Inc."
    }
}
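The failure mode can be reproduced directly: `json.loads` rejects two concatenated top-level objects ("Extra data"), while merging the parsed dicts produces the single object shape wanted below. The truncated fact sets here are illustrative stand-ins for the full API responses.

```python
import json

# Two per-host responses like the ones above (fact sets truncated for brevity)
resp1 = '{"host1.com": {"architecture": "amd64"}}'
resp2 = '{"host2.com": {"architecture": "amd64"}}'

# Concatenating the raw text produces a document json.loads cannot parse:
try:
    json.loads(resp1 + resp2)
    parsed = True
except json.JSONDecodeError:
    parsed = False  # "Extra data": two top-level objects in one document

# Merging the parsed dicts instead yields one top-level object:
combined = {}
combined.update(json.loads(resp1))
combined.update(json.loads(resp2))
```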
Here is the code that produces this:
for i in hosts_data:
    log.info("Gathering host facts for host: {}".format(i['host']['name']))
    try:
        facts = requests.get(foreman_host+api+"hosts/{}/facts".format(i['host']['id']), auth=(username, password))
        if facts.status_code != 200:
            log.error("Unable to connect to Foreman! Got retcode '{}' and error message '{}'"
                      .format(facts.status_code, facts.text))
            sys.exit(1)
    except requests.exceptions.RequestException as e:
        log.error(e)
    facts_data = json.loads(facts.text)
    log.debug(facts_data)
    with open(results_file, 'a') as f:
        f.write(json.dumps(facts_data, sort_keys=True, indent=4))
This is what I'd like the file to look like:
{
    "host1.com": {
        "apt_update_last_success": "1452187711",
        "architecture": "amd64",
        "augeasversion": "1.2.0",
        "bios_release_date": "06/03/2015",
        "bios_vendor": "Dell Inc."
    },
    "host2.com": {
        "apt_update_last_success": "1452703454",
        "architecture": "amd64",
        "augeasversion": "1.2.0",
        "bios_release_date": "06/03/2015",
        "bios_vendor": "Dell Inc."
    }
}
It would be best to assemble all the data into a single dict and then write it out once, rather than writing inside the loop on every iteration:
d = {}
for i in hosts_data:
    log.info("Gathering host facts for host: {}".format(i['host']['name']))
    try:
        facts = requests.get(foreman_host+api+"hosts/{}/facts".format(i['host']['id']), auth=(username, password))
        if facts.status_code != 200:
            log.error("Unable to connect to Foreman! Got retcode '{}' and error message '{}'"
                      .format(facts.status_code, facts.text))
            sys.exit(1)
    except requests.exceptions.RequestException as e:
        log.error(e)
        continue  # skip this host; 'facts' was never assigned
    facts_data = json.loads(facts.text)
    log.debug(facts_data)
    d.update(facts_data)  # add to dict

# write everything at the end
with open(results_file, 'w') as f:
    f.write(json.dumps(d, sort_keys=True, indent=4))
Instead of writing the JSON inside the loop, insert the data into a dict with the correct structure. Then write that dict out as JSON once the loop ends.
For safety/consistency, you'd want to load the old data, modify it, then write it back. Change the current with and write to:
# If file guaranteed to exist, can use r+ and avoid initial seek
with open(results_file, 'a+') as f:
    f.seek(0)
    combined_facts = json.load(f)
    combined_facts.update(facts_data)
    f.seek(0)
    json.dump(combined_facts, f, sort_keys=True, indent=4)
    f.truncate()  # In case new JSON encoding smaller, e.g. due to replaced key
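The read-modify-write pattern above can be exercised end-to-end. In this sketch the file is seeded first, so the `r+` variant mentioned in the comment applies; the file path and the truncated `facts_data` are stand-ins for the real script's values.

```python
import json
import os
import tempfile

# Hypothetical location standing in for the script's results_file
results_file = os.path.join(tempfile.mkdtemp(), "facts.json")

# Seed the file with one host's facts, as an earlier iteration would have
with open(results_file, "w") as f:
    json.dump({"host1.com": {"architecture": "amd64"}}, f)

# A later iteration merges its facts_data via read-modify-write; since the
# file is guaranteed to exist here, 'r+' is used per the comment above
facts_data = {"host2.com": {"architecture": "amd64"}}
with open(results_file, "r+") as f:
    combined_facts = json.load(f)
    combined_facts.update(facts_data)
    f.seek(0)
    json.dump(combined_facts, f, sort_keys=True, indent=4)
    f.truncate()  # in case the new encoding is smaller

# The file now holds one merged top-level object
with open(results_file) as f:
    result = json.load(f)
```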
Note: If at all possible, you'd want to use pault's answer to minimize unnecessary I/O; this approach should only be used if the data retrieval must be done piecemeal, updating the file as soon as each entry becomes available.
FYI, the unsafe way would be to basically seek backwards to find the trailing curly brace, delete it, then write out a comma followed by the new JSON (with its own leading curly brace stripped off). It's far less I/O-intensive, but it's also far less safe: it doesn't deduplicate, doesn't sort the hosts, doesn't validate the input file at all, etc. So don't do it.
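For illustration only (the advice above is explicitly *not* to do this), the brace-splicing trick can be sketched as follows. It assumes the file ends exactly with a closing brace and performs no validation, which is precisely why it's fragile; the path is a hypothetical stand-in.

```python
import json
import os
import tempfile

# Hypothetical file seeded with one host's facts (truncated for brevity)
path = os.path.join(tempfile.mkdtemp(), "facts.json")
with open(path, "w") as f:
    json.dump({"host1.com": {"architecture": "amd64"}}, f, indent=4)

new_facts = {"host2.com": {"architecture": "amd64"}}
with open(path, "rb+") as f:
    f.seek(-1, os.SEEK_END)  # assumes the file ends exactly with '}'
    f.truncate()             # chop off the trailing brace
    fragment = json.dumps(new_facts, indent=4)
    # splice in a comma plus the new JSON minus its leading brace
    f.write(b"," + fragment[1:].encode())

# Happens to parse here, but nothing above checked that it would
with open(path) as f:
    result = json.load(f)
```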