Chaining Python generators to traverse a nested data structure: best practices



Suppose I have a nested data structure that I want to traverse. The structure consists of nodes, which in turn can provide their children via node.get_children_generator(). These children are of course also of type node and are evaluated lazily, i.e. enumerated by a generator. For simplicity, assume that if a node has no children, get_children_generator simply returns an empty list/generator (so we never have to check for emptiness manually).

To traverse such a nested structure of nodes, is it a good idea to simply chain all the generators while iterating? That creates a chain of a chain of a chain, and so on. Or does this create too much overhead?

My idea looks like this:

import itertools as it
def traverse_nodes(start_node):
    """Traverses nodes in breadth first manner.
    
    First returns the start node.
    
    For simplicity we require that 
    there are no cycles in the data structure,
    i.e. we are dealing with a simple tree.
    
    """
    node_queue = iter([start_node])
    while True:
        try:
            next_node = next(node_queue)  # Python 3: next(), not .next()
            yield next_node
            
            # Next get the children
            child_gen = next_node.get_children_generator()
            
            # The next code line is the one I am worried about
            # is it a good idea to make a chain of chains?
            node_queue = it.chain(node_queue, child_gen)
        except StopIteration:
            # There are no more nodes
            break

Is the line node_queue = it.chain(node_queue, child_gen) a good way to do the traversal? Is it a good idea at all?

So that you actually have something to run, here is a rather silly mock node class. The generator is somewhat pointless here, but assume that in a real-world example evaluating the children is expensive and really does require a generator.

class Node(object):
    """Rather silly example of a nested node.
    The children are actually stored in a list,
    so the generator is actually not needed.
    
    But simply assume that returning the children
    requires lazy evaluation.
    
    """
    counter = 0 # Counter for node identification
    
    def __init__(self):
        self.children = [] # children list
        self.node_number = Node.counter # identifies the node
        Node.counter += 1
    
    def __repr__(self):
        return 'I am node #%d' % self.node_number
    
    def get_children_generator(self):
        """Returns a generator over children"""
        return (x for x in self.children)

So the following snippet

node0 = Node()
node1 = Node()
node2 = Node()
node3 = Node()
node4 = Node()
node5 = Node()
node6 = Node()
node0.children = [node1, node2]
node1.children = [node6]
node2.children = [node3, node5]
node3.children = [node4]
for node in traverse_nodes(node0):
    print(node)

prints

I am node #0

I am node #1

I am node #2

I am node #6

I am node #3

I am node #5

I am node #4

Chaining multiple chains leads to recursive-call overhead proportional to the number of chains linked together.

First, here is a pure-Python chain implementation, so that we don't lose the stack information. The C implementation is here, and you can see that it does essentially the same thing: it calls next() on the underlying iterables.

from inspect import stack

def chain(it1, it2):
    for collection in [it1, it2]:
        try:
            for el in collection:
                yield el
        except StopIteration:
            pass

We only care about the two-iterable version of chain: first consume the first iterable, then the other.
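As a quick sanity check, this two-iterable chain behaves like itertools.chain (the snippet below restates the definition so it runs standalone):

```python
import itertools

def chain(it1, it2):
    # Pure-Python two-iterable chain, as defined above:
    # consume the first iterable, then the second.
    for collection in (it1, it2):
        for el in collection:
            yield el

print(list(chain(iter([1, 2]), iter([3, 4]))))  # [1, 2, 3, 4]
print(list(itertools.chain([1, 2], [3, 4])))    # [1, 2, 3, 4]
```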

class VerboseListIterator(object):
    def __init__(self, collection, node):
        self.collection = collection
        self.node = node
        self.idx = 0
    def __iter__(self):
        return self
    def __next__(self):
        print('Printing {}th child of "{}". Stack size: {}'.format(self.idx, self.node, len(stack())))
        if self.idx >= len(self.collection):
            raise StopIteration()
        self.idx += 1
        return self.collection[self.idx - 1]

This is our handy list iterator, which reports how many stack frames exist at the moment it returns the next element of the wrapped list.

class Node(object):
    """Rather silly example of a nested node.
    The children are actually stored in a list,
    so the generator is actually not needed.
    But simply assume that returning the children
    requires lazy evaluation.
    """
    counter = 0 # Counter for node identification
    def __init__(self):
        self.children = [] # children list
        self.node_number = Node.counter # identifies the node
        Node.counter += 1
    def __repr__(self):
        return 'I am node #%d' % self.node_number
    def get_children_generator(self):
        """Returns a generator over children"""
        return VerboseListIterator(self.children, self)

def traverse_nodes(start_node):
    """Traverses nodes in breadth first manner.
    First returns the start node.
    For simplicity we require that
    there are no cycles in the data structure,
    i.e. we are dealing with a simple tree.
    """
    node_queue = iter([start_node])
    while True:
        try:
            next_node = next(node_queue)
            yield next_node
            # Next get the children
            child_gen = next_node.get_children_generator()
            # The next code line is the one I am worried about
            # is it a good idea to make a chain of chains?
            node_queue = chain(node_queue, child_gen)
        except StopIteration:
            # There are no more nodes
            break

These are the implementations, adapted to the Python version in use here (3.4).

nodes = [Node() for _ in range(10)]
nodes[0].children = nodes[1:6]
nodes[1].children = [nodes[6]]
nodes[2].children = [nodes[7]]
nodes[3].children = [nodes[8]]
nodes[4].children = [nodes[9]]

Initialization of the node graph. The root is connected to the first five nodes, and node i (for i = 1..4) is in turn connected to node i + 5.

for node in traverse_nodes(nodes[0]):
    print(node)

The output of this run is the following:

I am node #0
Printing 0th child of "I am node #0". Stack size: 4
I am node #1
Printing 1th child of "I am node #0". Stack size: 5
I am node #2
Printing 2th child of "I am node #0". Stack size: 6
I am node #3
Printing 3th child of "I am node #0". Stack size: 7
I am node #4
Printing 4th child of "I am node #0". Stack size: 8
I am node #5
Printing 5th child of "I am node #0". Stack size: 9
Printing 0th child of "I am node #1". Stack size: 8
I am node #6
Printing 1th child of "I am node #1". Stack size: 9
Printing 0th child of "I am node #2". Stack size: 8
I am node #7
Printing 1th child of "I am node #2". Stack size: 9
Printing 0th child of "I am node #3". Stack size: 8
I am node #8
Printing 1th child of "I am node #3". Stack size: 9
Printing 0th child of "I am node #4". Stack size: 8
I am node #9
Printing 1th child of "I am node #4". Stack size: 9
Printing 0th child of "I am node #5". Stack size: 8
Printing 0th child of "I am node #6". Stack size: 7
Printing 0th child of "I am node #7". Stack size: 6
Printing 0th child of "I am node #8". Stack size: 5
Printing 0th child of "I am node #9". Stack size: 4

As you can see, the closer we get to the end of node0's child list, the bigger the stack. Why is that? Let's take a closer look at each step; for clarity, each chain call is enumerated:

  1. node_queue = [node0]
  2. next(node_queue) is called, yielding node0. node_queue = chain1([node0], [node1, node2, node3, node4, node5])
  3. next(node_queue) is called. The list [node0] is exhausted, so the second list is being consumed and yields node1. node_queue = chain2(chain1([node0], [node1, ...]), [node6])
  4. The next(node_queue) call propagates down to chain1 (from chain2), and node2 is yielded. node_queue = chain3(chain2(chain1([node0], [...]), [node6]), [node7])
  5. This pattern continues until we are about to yield node5:

    next(chain5(chain4, [node9]))
                  |
                  V
        next(chain4(chain3, [node8]))
                      |
                      V
            next(chain3(chain2, [node7]))
                          |
                          V
                next(chain2(chain1, [node6]))
                              |
                              V
                    next(chain1([node0], [node1, node2, node3, node4, node5]))
                                                                   ^
                                                                 yield
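The linear growth can also be measured directly. Here is a small sketch (the depth_with_layers helper is mine, not from the code above): every additional chain wrapper adds exactly one generator frame that next() must resume on its way down.

```python
from inspect import stack

def chain(it1, it2):
    # Pure-Python two-iterable chain, as above.
    for collection in (it1, it2):
        for el in collection:
            yield el

def depth_with_layers(n):
    """Stack depth observed when an element is pulled
    through n nested chain wrappers."""
    seen = []
    def probe():
        seen.append(len(stack()))  # record depth at the moment of the yield
        yield 'x'
    q = probe()
    for _ in range(n):
        q = chain(q, iter([]))  # wrap one more chain layer around the queue
    next(q)  # resumes every nested generator frame on the way down
    return seen[0]

print(depth_with_layers(10) - depth_with_layers(0))  # 10: one frame per layer
```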
    

Is it faster than a straightforward BFS/DFS?

Certainly not. A single next(node_queue) call actually triggers a whole cascade of recursive calls, proportional to the size a regular BFS queue would have at that point; simply put, to the maximum number of children of any node in the graph.
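For comparison, the conventional BFS keeps one explicit queue, so every yielded node costs O(1) queue operations no matter how wide the tree is. A minimal sketch against the same Node interface (the name traverse_nodes_bfs is mine):

```python
from collections import deque

def traverse_nodes_bfs(start_node):
    """Breadth-first traversal with an explicit deque.

    No chain of chains: each node's child generator is
    consumed only once that node itself has been dequeued,
    so evaluation of deeper levels stays lazy.
    """
    node_queue = deque([start_node])
    while node_queue:
        node = node_queue.popleft()
        yield node
        node_queue.extend(node.get_children_generator())
```

On the example tree from the question this yields the nodes in the same order (0, 1, 2, 6, 3, 5, 4) as the chained version, but next() never has to recurse through nested generators.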

tl;dr

Here is a gif showing the algorithm: http://i.imgur.com/hnPIVG4.gif
