Create a tree / deeply nested dict from an indented list in a text file



I want to iterate over a file and place each line's contents into a deeply nested dict whose structure is defined by the leading whitespace. The goal is very similar to what is documented here. I have that part working, but I am now stuck on handling repeated keys, which get overwritten instead of being cast into a list.
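For context, the overwriting comes from plain dict assignment: re-assigning a key silently replaces the previous value, both for the repeated group key and for the repeated leaf key. A minimal illustration of what my current code effectively does:

```python
# Plain dict assignment keeps only the last value written to a key.
d = {}
d["a"] = {"b": "c", "d": "e"}    # first 'a' group
d["a"] = {"b": "c2", "d": "e2"}  # second 'a' group clobbers the first
d["a"]["d"] = "wrench"           # repeated leaf 'd' clobbers 'e2'
print(d)  # {'a': {'b': 'c2', 'd': 'wrench'}}
```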

Essentially, this:

a:
    b:      c
    d:      e
a:
    b:      c2
    d:      e2
    d:      wrench

gets cast to {"a":{"b":"c2","d":"wrench"}} when it should instead be cast to

{"a":[{"b":"c","d":"e"},{"b":"c2","d":["e2","wrench"]}]}

A self-contained example:

import json
def jsonify_indented_tree(tree):
    # convert indented text into JSON
    parsedJson= {}
    parentStack = [parsedJson]
    for i, line in enumerate(tree):
        data = get_key_value(line)
        if data['key'] in parsedJson.keys(): #if parent key is repeated, then cast value as list entry
            # stuff that doesn't work
#            if isinstance(parsedJson[data['key']],list):
#                parsedJson[data['key']].append(parsedJson[data['key']])
#            else:
#                parsedJson[data['key']]=[parsedJson[data['key']]]
            print('Hey - Make a list now!')
        if data['value']: #process child by adding it to its current parent
            currentParent = parentStack[-1] #.getLastElement()
            currentParent[data['key']] = data['value']
            if i != len(tree)-1:
                #determine when to switch to next branch
                level_dif = data['level']-get_key_value(tree[i+1])['level'] #peek next line level
                if (level_dif > 0):
                    del parentStack[-level_dif:] #reached leaf, process next branch
        else:
            # group node: push it as the new parent and keep processing
            currentParent = parentStack[-1] #.getLastElement()
            currentParent[data['key']] = {}
            newParent = currentParent[data['key']]
            parentStack.append(newParent)
    return parsedJson
def get_key_value(line):
    key = line.split(":")[0].strip()
    value = line.split(":")[1].strip()
    level = len(line) - len(line.lstrip())
    return {'key':key,'value':value,'level':level}
def pp_json(json_thing, sort=True, indents=4):
    if type(json_thing) is str:
        print(json.dumps(json.loads(json_thing), sort_keys=sort, indent=indents))
    else:
        print(json.dumps(json_thing, sort_keys=sort, indent=indents))
    return None
#nested_string=['a:', '\tb:\t\tc', '\td:\t\te', 'a:', '\tb:\t\tc2', '\td:\t\te2']
#nested_string=['w:','\tgeneral:\t\tcase','a:','\tb:\t\tc','\td:\t\te','a:','\tb:\t\tc2','\td:\t\te2']
nested_string=['a:',
 '\tb:\t\tc',
 '\td:\t\te',
 'a:',
 '\tb:\t\tc2',
 '\td:\t\te2',
 '\td:\t\twrench']
pp_json(jsonify_indented_tree(nested_string))
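The commented-out block above fails for two reasons: it checks the top-level parsedJson instead of the current parent, and it appends the container to itself rather than appending the new value. The list promotion it was reaching for looks roughly like this (a hypothetical helper, not the posted code, and it only covers the repeated-leaf case, not the repeated group key a):

```python
def promote_to_list(parent, key, value):
    # Check the *current parent* (not the root dict), and append the
    # incoming value rather than the container itself.
    if key not in parent:
        parent[key] = value
    elif isinstance(parent[key], list):
        parent[key].append(value)
    else:
        parent[key] = [parent[key], value]

d = {}
for v in ("e2", "wrench"):
    promote_to_list(d, "d", v)
print(d)  # {'d': ['e2', 'wrench']}
```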

This approach is (logically) much simpler (albeit longer):

  1. Track the level, key, and value of each line in the multiline string.
  2. Store this data in a dict of level-keyed lists: {level1: [dict1, dict2]}
  3. For key-only lines, append just the string representing the key: {level1: [dict1, dict2, "nestKeyA"]}
  4. Since a key-only line means the next line is one level deeper, it gets processed at the next level: {level1: [dict1, dict2, "nestKeyA"], level2: [...]}. Whatever sits at the deeper level2 may itself be just another key-only line (the next loop would then add a new level level3, giving {level1: [dict1, dict2, "nestKeyA"], level2: ["nestKeyB"], level3: [...]}) or a new dict dict3, giving {level1: [dict1, dict2, "nestKeyA"], level2: [dict3]}.
  5. Steps 1-4 continue until the current line is indented less than the previous one (signaling a return to some earlier scope). Here is what the data structure looks like after each line iteration for my example:

    0, {0: []}
    1, {0: [{'k': 'sds'}]}
    2, {0: [{'k': 'sds'}, 'a']}
    3, {0: [{'k': 'sds'}, 'a'], 1: [{'b': 'c'}]}
    4, {0: [{'k': 'sds'}, 'a'], 1: [{'b': 'c'}, {'d': 'e'}]}
    5, {0: [{'k': 'sds'}, {'a': {'d': 'e', 'b': 'c'}}, 'a'], 1: []}
    6, {0: [{'k': 'sds'}, {'a': {'d': 'e', 'b': 'c'}}, 'a'], 1: [{'b': 'c2'}]}
    7, {0: [{'k': 'sds'}, {'a': {'d': 'e', 'b': 'c'}}, 'a'], 1: [{'b': 'c2'}, {'d': 'e2'}]}
    

    Two things then need to happen. First, the list of dicts must be checked for duplicate keys, and the values of any duplicated dicts combined into a list - this is demonstrated later. Second, as seen between iterations 4 and 5, the list of dicts from the deepest level (here 1) is combined into a single dict, which is pushed onto its parent key (replacing the pending 'a' string). Finally, to demonstrate the duplicate handling, observe:

    [7b, {0: [{'k': 'sds'}, {'a': {'d': 'e', 'b': 'c'}}, 'a'], 1: [{'b': 'c2'}, {'d': 'e2'}, {'d': 'wrench'}]}]
    [7c, {0: [{'k': 'sds'}, {'a': {'d': 'e', 'b': 'c'}}, {'a': {'d': ['wrench', 'e2'], 'b': 'c2'}}], 1: []}]
    

    where wrench and e2 are placed in a list, which itself goes into a dict keyed by their original key.

  6. Repeat steps 1-5, promoting the deeper-scope dicts onto their parent keys, until the current line's scope (level) is reached.

  7. Handle the termination condition by merging the 0th-level list of dicts into a single dict.
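The duplicate-key merge described above (collapse a sibling group, i.e. a list of single-key dicts, into one dict while gathering values of repeated keys into lists) can be condensed to a sketch like this. Note that, unlike the pop-in-reverse bookkeeping in the code below, this sketch keeps values in input order:

```python
def collapse_siblings(dicts):
    # Merge a list of single-key dicts into one dict; values of
    # repeated keys are gathered into a list instead of overwritten.
    merged = {}
    for d in dicts:
        for k, v in d.items():
            if k not in merged:
                merged[k] = v
            elif isinstance(merged[k], list):
                merged[k].append(v)
            else:
                merged[k] = [merged[k], v]
    return merged

print(collapse_siblings([{"b": "c2"}, {"d": "e2"}, {"d": "wrench"}]))
# {'b': 'c2', 'd': ['e2', 'wrench']}
```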

Here's the code:

import json
def get_kvl(line):
    key = line.split(":")[0].strip()
    value = line.split(":")[1].strip()
    level = len(line) - len(line.lstrip())
    return {'key':key,'value':value,'level':level}
def pp_json(json_thing, sort=True, indents=4):
    if type(json_thing) is str:
        print(json.dumps(json.loads(json_thing), sort_keys=sort, indent=indents))
    else:
        print(json.dumps(json_thing, sort_keys=sort, indent=indents))
    return None
def jsonify_indented_tree(tree): #convert shitty sgml header into json
    level_map= {0:[]}
    tree_length=len(tree)-1
    for i, line in enumerate(tree):
        data = get_kvl(line)
        if data['level'] not in level_map.keys():
            level_map[data['level']]=[] # initialize
        prior_level=get_kvl(tree[i-1])['level']
        level_dif = data['level']-prior_level # +: line is deeper, -: shallower, 0:same
        if data['value']:
            level_map[data['level']].append({data['key']:data['value']})
        if not data['value'] or i==tree_length:
            if i==tree_length: #end condition
                level_dif = -len(list(level_map.keys()))        
            if level_dif < 0:
                for level in reversed(range(prior_level+level_dif+1,prior_level+1)): # (end, start)
                    #check for duplicate keys in current deepest (child) sibling group,
                    # merge them into a list, put that list in a dict 
                    key_freq={} #track repeated keys
                    for n, dictionary in enumerate(level_map[level]):
                        current_key=list(dictionary.keys())[0]
                        if current_key in list(key_freq.keys()):
                            key_freq[current_key][0]+=1
                            key_freq[current_key][1].append(n)
                        else:
                            key_freq[current_key]=[1,[n]]
                    for k,v in key_freq.items():
                        if v[0]>1: #key is repeated
                            duplicates_list=[]
                            for index in reversed(v[1]): #merge value of key-repeated dicts into list
                                duplicates_list.append(list(level_map[level].pop(index).values())[0])
                            level_map[level].append({k:duplicates_list}) #push that list into a dict on the same stack it came from
                    if i==tree_length and level==0: #end condition
                        #convert list-of-dict into dict
                        parsed_nest={k:v for d in level_map[level] for k,v in d.items()}
                    else:
                        #push current deepest (child) sibling group onto parent key
                        key=level_map[level-1].pop() #string
                        #convert child list-of-dict into dict
                        level_map[level-1].append({key:{k:v for d in level_map[level] for k,v in d.items()}})
                        level_map[level]=[] #reset deeper level
            level_map[data['level']].append(data['key'])
    return parsed_nest
nested_string=['k:\t\tsds', #need a starter key,value pair otherwise this won't work... fortunately I always have one
 'a:',
 '\tb:\t\tc',
 '\td:\t\te',
 'a:',
 '\tb:\t\tc2',
 '\td:\t\te2',
 '\td:\t\twrench']
pp_json(jsonify_indented_tree(nested_string))
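As a sanity check on get_kvl: the leading whitespace must be literal tab (or space) characters for the level computation to work, since the level is simply the count of leading whitespace characters. Its parsing logic, applied to one line:

```python
# get_kvl's parsing, inlined for a single line with literal tabs.
line = "\tb:\t\tc2"
key = line.split(":")[0].strip()        # 'b'
value = line.split(":")[1].strip()      # 'c2'
level = len(line) - len(line.lstrip())  # 1 leading tab -> level 1
print(key, value, level)  # b c2 1
```

(One caveat baked into this scheme: a value containing a colon would be truncated by the split.)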

Latest update