Fast bigram counting (with or without multiprocessing) - Python



Given big.txt from norvig.com/big.txt, the goal is to count bigrams as quickly as possible (imagine that I have to repeat this count 100,000 times).

According to Fast/Optimize N-gram implementations in python, extracting bigrams like this is supposed to be the most optimal:

_bigrams = zip(*[text[i:] for i in range(2)])

And since I'm on Python 3, zip returns a lazy iterator, so the bigrams are not actually produced until something consumes them, e.g. list(_bigrams) or some other function.
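As a quick sanity check of that laziness, here is a minimal sketch; it also uses `zip(text, text[1:])`, which yields exactly the same character pairs as the nested-slice expression but reads a bit more simply:

```python
from collections import Counter

text = "hello world"

# Equivalent to zip(*[text[i:] for i in range(2)]): pairs of adjacent characters.
bigrams = zip(text, text[1:])

# In Python 3, zip is a lazy iterator; no pairs exist yet at this point.
# Counter consumes the iterator here and does the actual counting.
counts = Counter(bigrams)
print(counts.most_common(2))
```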

import io
from collections import Counter
import time
with io.open('big.txt', 'r', encoding='utf8') as fin:
    # Replace spaces with a private-use character so they count as tokens.
    text = fin.read().lower().replace(u' ', u'\uE000')

while True:
    _bigrams = zip(*[text[i:] for i in range(2)])
    start = time.time()
    top100 = Counter(_bigrams).most_common(100)
    print(time.time() - start)
    # Do some manipulation to text and repeat the counting.
    text = manipulate(text, top100)

But each iteration takes more than a second, which is far too long for 100,000 iterations.

I've also tried sklearn's CountVectorizer, but the time to extract, count, and get the top 100 bigrams is comparable to native Python.

Then I experimented with some multiprocessing, using a slight modification of "Python multiprocessing and a shared counter" from http://eli.thegreenplace.net/2012/01/04/shared-counter-with-pythons-multiprocessing:

from multiprocessing import Process, Manager, Lock
import time

class MultiProcCounter(object):
    def __init__(self):
        self.dictionary = Manager().dict()
        self.lock = Lock()

    def increment(self, item):
        with self.lock:
            self.dictionary[item] = self.dictionary.get(item, 0) + 1

def func(counter, item):
    counter.increment(item)

def multiproc_count(inputs):
    counter = MultiProcCounter()
    procs = [Process(target=func, args=(counter, _in)) for _in in inputs]
    for p in procs: p.start()
    for p in procs: p.join()
    return counter.dictionary

inputs = [1, 1, 1, 1, 2, 2, 3, 4, 4, 5, 2, 2, 3, 1, 2]
print(multiproc_count(inputs))

But using the MultiProcCounter for bigram counting takes even more than 1+ second per iteration. I have no idea why that is; with the dummy list-of-ints example, multiproc_count works perfectly.
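The slowdown is not surprising: this design launches one whole OS process per bigram and funnels every increment through a Manager dict behind a single lock, so process startup and inter-process round trips dominate. A common alternative is to split the text into chunks, count each chunk with a plain in-process Counter, and merge at the end. A minimal sketch of that idea (the function names `count_bigrams` and `parallel_bigram_counts` are my own, not from the question):

```python
from collections import Counter
from multiprocessing import Pool

def count_bigrams(chunk):
    # Each worker counts its own chunk with a plain, in-process Counter.
    return Counter(zip(chunk, chunk[1:]))

def parallel_bigram_counts(text, workers=4):
    # Chunks overlap by one character so no bigram straddling a boundary is
    # lost, while bigram start positions are still partitioned between chunks,
    # so nothing is counted twice.
    size = max(1, len(text) // workers)
    chunks = [text[i:i + size + 1] for i in range(0, len(text), size)]
    # On Windows/macOS, guard Pool creation with `if __name__ == "__main__":`.
    with Pool(workers) as pool:
        partials = pool.map(count_bigrams, chunks)
    total = Counter()
    for part in partials:
        total.update(part)  # merging the per-chunk Counters is the only serial step
    return total
```

Whether this beats the single-process Counter depends on the text size; for small inputs the fork and pickling overhead can still outweigh the parallel counting.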

So I've tried:

import io
from collections import Counter
import time
with io.open('big.txt', 'r', encoding='utf8') as fin:
    text = fin.read().lower().replace(u' ', u'\uE000')

while True:
    _bigrams = zip(*[text[i:] for i in range(2)])
    start = time.time()
    top100 = Counter(multiproc_count(_bigrams)).most_common(100)

Is there any way to count bigrams quickly in Python?

import os
import _thread  # low-level threading module (called "thread" in Python 2)

text = 'I really like cheese'  # just load whatever you want here, this is just an example
CORE_NUMBER = os.cpu_count()  # may return None on some platforms; substitute your core count if so
ready = []
bigrams = []

def extract_bigrams(cores):
    global ready, bigrams
    bigrams = [None] * cores
    ready = [0] * cores
    cpnt = 0  # current point
    step = len(text) // cores
    for a in range(cores - 1):
        _thread.start_new_thread(extract_bigrams2, (cpnt, cpnt + step + 1, a))  # overlap is intentional
        cpnt += step
    _thread.start_new_thread(extract_bigrams2, (cpnt, len(text), cores - 1))
    while 0 in ready:  # busy-wait until every thread has reported in
        pass

def extract_bigrams2(startpoint, endpoint, threadnum):
    global ready, bigrams
    chunk = text[startpoint:endpoint]
    bigrams[threadnum] = list(zip(chunk, chunk[1:]))  # materialize so the work happens in the thread
    ready[threadnum] = 1

extract_bigrams(CORE_NUMBER)
thebigrams = []
for a in bigrams:
    thebigrams += a
print(thebigrams)

This program has some issues, for example it doesn't filter out whitespace or punctuation, but I made it to show the kind of approach you should be aiming for. You can easily edit it to suit your needs.

The program automatically detects how many cores your computer has and creates that many threads, trying to evenly distribute the regions in which it looks for bigrams. I could only test this code in an online interpreter on a school computer, so I can't be certain it works fully. If you have any problems or questions, please leave a comment.

My suggestion:

Text = "The Project Gutenberg EBook of The Adventures of Sherlock Holmes"

# Counters: one cell per ordered pair of ASCII codes
Counts = [[0 for x in range(128)] for y in range(128)]

# Perform the counting
R = ord(Text[0])
for i in range(1, len(Text)):
    L = R; R = ord(Text[i])
    Counts[L][R] += 1

# Output the results (letter pairs only)
for i in range(ord('A'), ord('{')):
    if i < ord('[') or i >= ord('a'):
        for j in range(ord('A'), ord('{')):
            if (j < ord('[') or j >= ord('a')) and Counts[i][j] > 0:
                print(chr(i) + chr(j), Counts[i][j])

Ad 1
Bo 1
EB 1
Gu 1
Ho 1
Pr 1
Sh 1
Th 2
be 1
ck 1
ct 1
dv 1
ec 1
en 2
er 2
es 2
he 3
je 1
lm 1
lo 1
me 1
nb 1
nt 1
oc 1
of 2
oj 1
ok 1
ol 1
oo 1
re 1
rg 1
rl 1
ro 1
te 1
tu 1
ur 1
ut 1
ve 1

This version is case sensitive; it would probably be best to lowercase the entire text first.
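That case-insensitive variant can be sketched in a few lines, reusing the same 128x128 ASCII table idea:

```python
# Lowercase first, then count into the same kind of 128x128 ASCII table.
Text = "The Project Gutenberg EBook of The Adventures of Sherlock Holmes".lower()
Counts = [[0] * 128 for _ in range(128)]
for left, right in zip(Text, Text[1:]):
    Counts[ord(left)][ord(right)] += 1
print(Counts[ord('h')][ord('e')])  # 'he' now also absorbs the former 'He' pairs
```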
