Speeding up Kronecker products in NumPy



So I'm trying to compute the Kronecker product of two matrices, each of arbitrary dimensions. (I'm only using square matrices of the same size for the examples.)

Initially I tried using np.kron:

import numpy as np
import time

a = np.random.random((60,60))
b = np.random.random((60,60))
start = time.time()
a = np.kron(a,b)
end = time.time()
Output: 0.160096406936645

To try to improve the speed, I used tensordot:

a = np.random.random((60,60))
b = np.random.random((60,60))
start = time.time()
a = np.tensordot(a,b,axes=0)
a = np.transpose(a,(0,2,1,3))
a = np.reshape(a,(3600,3600))
end = time.time()
Output: 0.11808371543884277

After searching the web a bit, I found (or at least this is my understanding) that NumPy makes an extra copy when it has to reshape a transposed tensor.
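A quick way to see that copy (a small check of my own for illustration, not from the original post) is to ask NumPy whether the reshaped array still shares memory with its source:

import numpy as np

a = np.random.random((60,60))
b = np.random.random((60,60))
t = np.tensordot(a,b,axes=0)        # shape (60,60,60,60), C-contiguous
tt = np.transpose(t,(0,2,1,3))      # non-contiguous view, nothing copied yet
# reshaping the contiguous array just returns a view ...
print(np.shares_memory(t, t.reshape(3600,3600)))    # True
# ... but reshaping the transposed view forces a copy of the data
print(np.shares_memory(tt, tt.reshape(3600,3600)))  # False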

So I tried the following (this code obviously does not give the Kronecker product of a and b, but I only ran it as a test):

a = np.random.random((60,60))
b = np.random.random((60,60))
start = time.time()
a = np.tensordot(a,b,axes=0)
a = np.reshape(a,(3600,3600))
end = time.time()
Output: 0.052041053771972656

My question is: how can I compute the Kronecker product without running into the problem associated with the transpose?

I'm simply looking for a fast way to speed this up, so the solution does not have to use tensordot.

Edit

I just found, on this Stack post: Speeding up numpy Kronecker products, that there is yet another way:

a = np.random.random((60,60))
b = np.random.random((60,60))
c = a
start = time.time()
a = a[:,np.newaxis,:,np.newaxis]
a = a[:,np.newaxis,:,np.newaxis]*b[np.newaxis,:,np.newaxis,:]
a.shape = (3600,3600)
end = time.time()
test = np.kron(c,b)
print(np.array_equal(a,test))
print(end-start)

Output: True
0.05503702163696289

I'm still interested in whether this computation can be sped up even further.

einsum seems to work:

>>> import numpy as np
>>> from timeit import timeit
>>> a = np.random.random((60,60))
>>> b = np.random.random((60,60))
>>> ab = np.kron(a,b)
>>> abe = np.einsum('ik,jl', a, b).reshape(3600,3600)
>>> (abe==ab).all()
True
>>> timeit(lambda: np.kron(a, b), number=10)
1.0697475590277463
>>> timeit(lambda: np.einsum('ik,jl', a, b).reshape(3600,3600), number=10)
0.42500176999601535

Simple broadcasting is even faster:

>>> abb = (a[:, None, :, None]*b[None, :, None, :]).reshape(3600,3600)
>>> (abb==ab).all()
True
>>> timeit(lambda:  (a[:, None, :, None]*b[None, :, None, :]).reshape(3600,3600), number=10)
0.28011218502069823

UPDATE: Using BLAS and Cython we can get another modest (30%) speedup. Decide for yourself whether it is worth it.

[setup.py]

from distutils.core import setup
from Cython.Build import cythonize
setup(name='kronecker',
      ext_modules=cythonize("cythkrn.pyx"))

[cythkrn.pyx]

import cython
cimport scipy.linalg.cython_blas as blas
import numpy as np
@cython.boundscheck(False)
@cython.wraparound(False)
def kron(double[:, ::1] a, double[:, ::1] b):
    cdef int i = a.shape[0]
    cdef int j = a.shape[1]
    cdef int k = b.shape[0]
    cdef int l = b.shape[1]
    cdef int onei = 1
    cdef double oned = 1
    cdef int m, n
    result = np.zeros((i*k, j*l), float)
    cdef double[:, ::1] result_v = result
    for n in range(i):
        for m in range(k):
            blas.dger(&l, &j, &oned, &b[m, 0], &onei, &a[n, 0], &onei, &result_v[m+k*n, 0], &l)
    return result

To build, run cython cythkrn.pyx and then python3 setup.py build.

>>> from timeit import timeit
>>> import cythkrn
>>> import numpy as np
>>> 
>>> a = np.random.random((60,60))
>>> b = np.random.random((60,60))
>>>
>>> np.all(cythkrn.kron(a, b)==np.kron(a, b))
True
>>> 
>>> timeit(lambda: cythkrn.kron(a, b), number=10)
0.18925874299020506
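For intuition (my own illustration, not part of the original answer): each blas.dger call above is a BLAS rank-1 update that fills one row of the result with the outer product of a row of a and a row of b. A rough pure-NumPy sketch of the same per-row idea, much slower but producing the same layout, would be:

import numpy as np

def kron_rowwise(a, b):
    # illustrative row-by-row Kronecker product, mirroring what the dger loop fills
    i, j = a.shape
    k, l = b.shape
    result = np.zeros((i*k, j*l))
    for n in range(i):
        for m in range(k):
            # kron(a,b)[n*k + m, q*l + p] == a[n,q] * b[m,p]
            result[n*k + m, :] = np.outer(a[n], b[m]).ravel()
    return result

a = np.random.random((60,60))
b = np.random.random((60,60))
print(np.allclose(kron_rowwise(a, b), np.kron(a, b)))   # True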

Speeding up memory-bound computations

  • Avoid it completely, e.g. the kron_and_sum example below
  • Do the work in blocks, combined with other computations
  • Maybe float32 instead of float64 is also sufficient
  • If this computation sits in a loop, allocate the memory only once

I get exactly the same timings with this code as with @Paul Panzer's implementation, but with both implementations I see the same odd behaviour: with preallocated memory there is absolutely no speedup if the computation is parallelized (which is to be expected), but without preallocated memory there is a significant speedup.

Code

import numba as nb
import numpy as np

@nb.njit(fastmath=True,parallel=True)
def kron(A,B):
    out=np.empty((A.shape[0],B.shape[0],A.shape[1],B.shape[1]),dtype=A.dtype)
    for i in nb.prange(A.shape[0]):
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out[i,j,k,l]=A[i,k]*B[j,l]
    return out

@nb.njit(fastmath=True,parallel=False)
def kron_preallocated(A,B,out):
    for i in nb.prange(A.shape[0]):
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out[i,j,k,l]=A[i,k]*B[j,l]
    return out

@nb.njit(fastmath=True,parallel=True)
def kron_and_sum(A,B):
    out=0.
    for i in nb.prange(A.shape[0]):
        TMP=np.float32(0.)
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out+=A[i,k]*B[j,l]
    return out

Timings

#Create some data
a_float64 = np.random.random((60,60))
b_float64 = np.random.random((60,60))
a_float32 = a_float64.astype(np.float32)
b_float32 = b_float64.astype(np.float32)
#Preallocated output buffers
out_float64=np.empty((a_float64.shape[0],b_float64.shape[0],a_float64.shape[1],b_float64.shape[1]),dtype=np.float64)
out_float32=np.empty((a_float32.shape[0],b_float32.shape[0],a_float32.shape[1],b_float32.shape[1]),dtype=np.float32)

#Reference
%timeit np.kron(a_float64,b_float64)
147 ms ± 1.22 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
#If you have to allocate memory for every calculation (float64)
%timeit B=kron(a_float64,b_float64).reshape(3600,3600)
17.6 ms ± 244 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
#If you don't have to allocate memory for every calculation (float64)
%timeit B=kron_preallocated(a_float64,b_float64,out_float64).reshape(3600,3600)
8.08 ms ± 269 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
#If you have to allocate memory for every calculation (float32)
%timeit B=kron(a_float32,b_float32).reshape(3600,3600)
9.27 ms ± 185 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
#If you don't have to allocate memory for every calculation (float32)
%timeit B=kron_preallocated(a_float32,b_float32,out_float32).reshape(3600,3600)
3.95 ms ± 155 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
#Example for a joined operation (sum of kroncker product)
#which isn't memory bottlenecked
%timeit B=kron_and_sum(a_float64,b_float64)
881 µs ± 104 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

I have improved np.kron (by up to 57%): https://github.com/numpy/numpy/pull/21232

The idea is to get rid of the concatenate, which happened to be causing a ValueError.

For what it's worth, since I stumbled upon this: @Paul Panzer seems to have an even better solution in his answer, which I would now like to add to NumPy. You can follow https://github.com/numpy/numpy/issues/21257 to track the progress. Thanks @Paul for the idea.
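For reference, a minimal sketch of the broadcast-and-reshape approach from @Paul Panzer's answer, generalized to arbitrary 2-D shapes (my own illustration, not the code proposed in the linked issue):

import numpy as np

def kron_broadcast_2d(a, b):
    # Kronecker product of two 2-D arrays via broadcasting and a single reshape
    ia, ja = a.shape
    ib, jb = b.shape
    return (a[:, None, :, None] * b[None, :, None, :]).reshape(ia*ib, ja*jb)

a = np.random.random((60,60))
b = np.random.random((30,45))
print(np.array_equal(kron_broadcast_2d(a, b), np.kron(a, b)))   # True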
