How to make a generator work in Spark mapPartitions()



I am trying to use mapPartitions in Spark to process a large text corpus. Suppose we have some semi-processed data that looks like this:

    text_1 = [['A', 'B', 'C', 'D', 'E'],
    ['F', 'E', 'G', 'A', 'B'],
    ['D', 'E', 'H', 'A', 'B'],
    ['A', 'B', 'C', 'F', 'E'],
    ['A', 'B', 'C', 'J', 'E'],
    ['E', 'H', 'A', 'B', 'C'],
    ['E', 'G', 'A', 'B', 'C'],
    ['C', 'F', 'E', 'G', 'A'],
    ['C', 'D', 'E', 'H', 'A'],
    ['C', 'J', 'E', 'H', 'A'],
    ['H', 'A', 'B', 'C', 'F'],
    ['H', 'A', 'B', 'C', 'J'],
    ['B', 'C', 'F', 'E', 'G'],
    ['B', 'C', 'D', 'E', 'H'],
    ['B', 'C', 'F', 'E', 'K'],
    ['B', 'C', 'J', 'E', 'H'],
    ['G', 'A', 'B', 'C', 'F'],
    ['J', 'E', 'H', 'A', 'B']]

Each letter is a word. I also have a vocabulary:

    V = ['D','F','G','C','J','K']
    text_1RDD = sc.parallelize(text_1)

I want to run the following in Spark:

    filtered_lists = text_1RDD.mapPartitions(partitions)
    filtered_lists.collect()

I have this function:

    def partitions(list_of_lists, vc):
        # for each vocabulary word, collect the sub-lists that contain it
        for w in vc:
            iterator = []
            for sub_list in list_of_lists:
                if w in sub_list:
                    iterator.append(sub_list)
            yield (w, len(iterator))

If I run it like this:

    c = partitions(text_1,V)
    for item in c:
        print(item)

it returns the correct counts:

    ('D', 4)
    ('F', 7)
    ('G', 5)
    ('C', 15)
    ('J', 5)
    ('K', 1)

However, I have no idea how to run it in Spark:

    filtered_lists = text_1RDD.mapPartitions(partitions)
    filtered_lists.collect()

It only takes one argument, so running it in Spark throws a lot of errors...
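A lambda would presumably bind the second argument; a minimal sketch of that idea (it fixes the arity error, but the counts still come out wrong, just like the hard-coded version below):

    filtered_lists = text_1RDD.mapPartitions(lambda it: partitions(it, V))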

Even if I hard-code the vocabulary inside the partition function:

    def partitionsV(list_of_lists):
        vc = ['D', 'F', 'G', 'C', 'J', 'K']
        for w in vc:
            iterator = []
            for sub_list in list_of_lists:
                if w in sub_list:
                    iterator.append(sub_list)
            yield (w, len(iterator))

...and run:

    filtered_lists = text_1RDD.mapPartitions(partitionsV)
    filtered_lists.collect()

I get this output:

     [('D', 2),
     ('F', 0),
     ('G', 0),
     ('C', 0),
     ('J', 0),
     ('K', 0),
     ('D', 0),
     ('F', 0),
     ('G', 0),
     ('C', 0),
     ('J', 0),
     ('K', 0),
     ('D', 1),
     ('F', 0),
     ('G', 0),
     ('C', 0),
     ('J', 0),
     ('K', 0),
     ('D', 1),
     ('F', 0),
     ('G', 0),
     ('C', 0),
     ('J', 0),
     ('K', 0)]

Clearly the generator is not working as expected. I am completely stuck, and I am very new to Spark. I would really appreciate it if someone could explain what is going on here...

This is just another word-count problem, and mapPartitions is not the tool for the job:

    from operator import add
    v = set(['D','F','G','C','J','K'])
    result = text_1RDD.flatMap(v.intersection).map(lambda x: (x, 1)).reduceByKey(add)

Here flatMap(v.intersection) emits, for each line, the vocabulary words that occur in it (a word is counted at most once per line), and reduceByKey sums up the ones. The result is:

    for x in result.sortByKey().collect(): 
        print(x) 

    ('C', 15)
    ('D', 4)
    ('F', 7)
    ('G', 5)
    ('J', 5)
    ('K', 1)
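As for why your partitionsV prints those numbers: mapPartitions calls the function once per partition, so you get one batch of partial counts per partition instead of a global count, and the inner for sub_list in list_of_lists loop exhausts the partition iterator on the first vocabulary word, leaving every later word with an empty list. If you really wanted to stay with mapPartitions, a minimal sketch that works around both issues (materialize the iterator first, then merge the per-partition counts with reduceByKey):

    from operator import add

    def partitionsV(list_of_lists):  # fixed version of the function above
        vc = ['D', 'F', 'G', 'C', 'J', 'K']
        # an iterator can only be traversed once, but we need one
        # pass per vocabulary word, so materialize it into a list
        list_of_lists = list(list_of_lists)
        for w in vc:
            yield (w, sum(1 for sub_list in list_of_lists if w in sub_list))

    # each partition yields its own partial counts; reduceByKey merges them
    result = text_1RDD.mapPartitions(partitionsV).reduceByKey(add)

Note that list() pulls the whole partition into memory, which is exactly what mapPartitions is usually meant to avoid; hence the flatMap approach above.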
