Fast Hamming distance computation between binary numpy arrays



I have two numpy arrays of the same length that contain binary values:

import numpy as np
a=np.array([1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0])
b=np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1])

I want to compute the Hamming distance between them as fast as possible, since I have millions of such distance computations to make.

A simple but slow option is this (taken from Wikipedia):

%timeit sum(ch1 != ch2 for ch1, ch2 in zip(a, b))
10000 loops, best of 3: 79 us per loop

I have come up with faster options, inspired by some answers here on Stack Overflow:

%timeit np.sum(np.bitwise_xor(a,b))
100000 loops, best of 3: 6.94 us per loop
%timeit len(np.bitwise_xor(a,b).nonzero()[0])
100000 loops, best of 3: 2.43 us per loop

I am wondering if there are even faster ways of computing this, possibly using cython?

There is a ready-made numpy function that beats len((a != b).nonzero()[0]) ;)

np.count_nonzero(a!=b)
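On the sample arrays from the question, this gives the same result as the xor-based variants (a minimal check):

```python
import numpy as np

a = np.array([1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0])
b = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1])

# Count positions where the arrays disagree -- that is the Hamming distance
dist = np.count_nonzero(a != b)
print(dist)  # -> 7
```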

Compared to 1.07 µs for np.count_nonzero(a != b) on my platform, gmpy2's hamdist gets it down to about 143 ns after converting each array to an mpz (multi-precision integer):

import numpy as np
from gmpy2 import mpz, hamdist, pack
a = np.array([1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0])
b = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1])

Following a tip from @casevh, conversion from a 1D array of ones and zeros to a gmpy2 mpz object can be done reasonably efficiently with gmpy2.pack(list(reversed(list(array))),1).

# gmpy2.pack reverses bit order but that does not affect
# hamdist since both its arguments are reversed
ampz = pack(list(a),1) # takes about 4.29µs
bmpz = pack(list(b),1)
hamdist(ampz,bmpz)
Out[8]: 7
%timeit hamdist(ampz,bmpz)
10000000 loops, best of 3: 143 ns per loop

For relative comparison, on my platform:

%timeit np.count_nonzero(a!=b)
1000000 loops, best of 3: 1.07 µs per loop
%timeit len((a != b).nonzero()[0])
1000000 loops, best of 3: 1.55 µs per loop
%timeit len(np.bitwise_xor(a,b).nonzero()[0])
1000000 loops, best of 3: 1.7 µs per loop
%timeit np.sum(np.bitwise_xor(a,b))
100000 loops, best of 3: 5.8 µs per loop   

Using pythran can bring an extra benefit here:

$ cat hamm.py
#pythran export hamm(int[], int[])
from numpy import nonzero
def hamm(a,b):
    return len(nonzero(a != b)[0])

As a reference (without pythran):

$ python -m timeit -s 'import numpy as np; a = np.random.randint(0,2, 100); b = np.random.randint(0,2, 100); from hamm import hamm' 'hamm(a,b)'
100000 loops, best of 3: 4.66 usec per loop

And after pythran compilation:

$ python -m pythran.run hamm.py
$ python -m timeit -s 'import numpy as np; a = np.random.randint(0,2, 100); b = np.random.randint(0,2, 100); from hamm import hamm' 'hamm(a,b)'
1000000 loops, best of 3: 0.745 usec per loop

That's roughly a 6x speedup over the numpy implementation, as pythran skips the creation of an intermediate array when evaluating the element-wise comparison.

I also measured:

#pythran export hamm(int[], int[])
from numpy import count_nonzero
def hamm(a,b):
    return count_nonzero(a != b)

It runs at 3.11 usec per loop for the Python version and 0.427 usec per loop for the Pythran version.

Disclaimer: I'm one of the Pythran developers.

For strings, this works faster:

def Hamm(a, b):
    c = 0
    for i in range(a.shape[0]):
        if a[i] != b[i]:
            c += 1
    return c

I suggest you convert the numpy bit arrays to numpy uint8 arrays using np.packbits.
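A minimal sketch of that conversion on the question's sample arrays (np.packbits pads both arrays with the same trailing zero bits, so the distance is unaffected):

```python
import numpy as np

a = np.array([1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0])
b = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1])

# Pack 8 bits into each byte; both arrays get identical zero-padding at the end
pa = np.packbits(a)
pb = np.packbits(b)

# XOR the packed bytes, then count the set bits of the result
dist = np.count_nonzero(np.unpackbits(np.bitwise_xor(pa, pb)))
print(dist)  # -> 7
```

Working on packed bytes keeps the arrays 8x smaller, which pays off when you store millions of bit vectors.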

Have a look at scipy.spatial.distance.hamming: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.hamming.html
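One caveat worth knowing: scipy's hamming returns the *fraction* of disagreeing positions rather than the count, so multiply by the array length to recover the distance (a small illustration on the question's arrays):

```python
import numpy as np
from scipy.spatial.distance import hamming

a = np.array([1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0])
b = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1])

# hamming() returns the proportion of differing entries, in [0, 1]
frac = hamming(a, b)
dist = int(round(frac * len(a)))
print(dist)  # -> 7
```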

Otherwise, here is a small snippet that only needs numpy, inspired by "fast way of counting non-zero bits in a positive integer":

bit_counts = np.array([int(bin(x).count("1")) for x in range(256)]).astype(np.uint8)
def hamming_dist(a,b,axis=None):
    return np.sum(bit_counts[np.bitwise_xor(a,b)],axis=axis)

With axis=-1, this allows taking the Hamming distance between a single entry and a large array; e.g.:

inp = np.uint8(np.random.random((512,8))*255) #512 entries of 8 byte
hd = hamming_dist(inp, inp[123], axis=-1) #results in 512 hamming distances to entry 123
idx_best = np.argmin(hd)    # should point to identity 123
hd[123] = 255 #mask out identity
idx_nearest = np.argmin(hd)   # should point to the entry with the shortest distance to entry 123
dist_hist = np.bincount(np.uint8(hd)) # distribution of hamming distances; for me this started at 18bits to 44bits with a maximum at 31
