I have to solve a large number of linear matrix equations of the type Ax = B for x, where A is a sparse matrix that is mainly populated along the main diagonal and B is a vector.
My first approach was to use dense numpy arrays for this purpose with numpy.linalg.solve, which works on (N, n, n)-dimensional arrays, where N is the number of linear matrix equations and n is the dimension of the square matrices. I first used it with a for loop iterating over all equations, which was in fact rather slow. But then I realized that you can also pass the (N, n, n)-dimensional array directly to numpy.linalg.solve without any for loop (which, by the way, I did not find in the documentation I read). This already gave a good speedup in computation time (details below).
However, because my matrices are sparse, I also had a look at the scipy.sparse.linalg.spsolve function, which does something similar to the corresponding numpy function. Using a for loop over all equations, it is much faster than the numpy solution, but it seems impossible to pass the (N, n, n)-dimensional array directly to scipy's spsolve.
Here is what I have tried so far:
First, I compute some fictitious A matrices and B vectors with random numbers for testing purposes, both sparse and dense:
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
number_of_systems = 100 #corresponds to N in the text
number_of_data_points = 1000 #corresponds to n in the text
#calculation of sample matrices (dense and sparse)
A_sparse = np.empty(number_of_systems,dtype=object)
A_dense = np.empty((number_of_systems,number_of_data_points,number_of_data_points))
for ii in np.arange(number_of_systems):
    A_sparse[ii] = sparse.spdiags(np.random.random(number_of_data_points),0,number_of_data_points,number_of_data_points)
    A_dense[ii] = A_sparse[ii].todense()
#calculation of sample vectors
B = np.random.random((number_of_systems,number_of_data_points))
1) First approach: numpy.linalg.solve with a for loop:
def solve_dense_3D(A,B):
    results = np.empty((A.shape[0],A.shape[1]))
    # solve the N dense systems one at a time
    for ii in np.arange(A.shape[0]):
        results[ii] = np.linalg.solve(A[ii],B[ii])
    return results
result_dense_for = solve_dense_3D(A_dense,B)
Timing:
timeit(solve_dense_3D(A_dense,B))
1.25 s ± 27.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
2) Second approach: passing the (N, n, n)-dimensional array directly to numpy.linalg.solve:
result_dense = np.linalg.solve(A_dense,B)
Timing:
timeit(np.linalg.solve(A_dense,B))
769 ms ± 9.68 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
3) Third approach: using scipy.sparse.linalg.spsolve with a for loop:
def solve_sparse_3D(A,B):
    results = np.empty((A.shape[0],A[0].shape[0]))
    # solve the N sparse systems one at a time
    for ii in np.arange(A.shape[0]):
        results[ii] = spsolve(A[ii],B[ii])
    return results
result_sparse_for = solve_sparse_3D(A_sparse,B)
Timing:
timeit(solve_sparse_3D(A_sparse,B))
30.9 ms ± 132 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
As can be seen, it already pays off to drop the for loop when going from approach 1 to approach 2. Still, by far the fastest alternative is, as expected, approach 3 with the sparse matrices.
Because the number of equations in this example is still rather small for my purposes, and because I have to do things like this very often, I would be happy if there were a solution using scipy sparse matrices without a for loop. Does anybody know a way to achieve that? Or maybe there is another way to approach the problem differently? I would be happy about suggestions.
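As an aside: the test matrices above are strictly diagonal, so for this artificial data the whole batch can be solved by one vectorized elementwise division, with no solver and no loop at all. A minimal sketch (this assumes strictly diagonal A; real matrices that are only mainly diagonal still need a proper solver):

def solve_diagonal_3D(A_sparse,B):
    # assumption: every A[ii] is strictly diagonal, as in the test data
    # extract the N main diagonals into an (N, n) array
    diagonals = np.array([A.diagonal() for A in A_sparse])
    # elementwise division solves all N diagonal systems at once
    return B / diagonals

result_diag = solve_diagonal_3D(A_sparse,B)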
A small demo outlining the idea from my comment above:
""" YOUR CODE (only imports changed + deterministic randomness) """
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from scipy.sparse import block_diag
from time import perf_counter as pc
np.random.seed(0)
number_of_systems = 100 #corresponds to N in the text
number_of_data_points = 1000 #corresponds to n in the text
#calculation of sample matrices (dense and sparse)
A_sparse = np.empty(number_of_systems,dtype=object)
A_dense = np.empty((number_of_systems,number_of_data_points,number_of_data_points))
for ii in np.arange(number_of_systems):
    A_sparse[ii] = sparse.spdiags(np.random.random(number_of_data_points),0,number_of_data_points,number_of_data_points)
    A_dense[ii] = A_sparse[ii].todense()
#calculation of sample vectors
B = np.random.random((number_of_systems,number_of_data_points))
def solve_sparse_3D(A,B):
    results = np.empty((A.shape[0],A[0].shape[0]))
    for ii in np.arange(A.shape[0]):
        results[ii] = spsolve(A[ii],B[ii])
    return results
start = pc()
result_sparse_for = solve_sparse_3D(A_sparse,B)
end = pc()
print(result_sparse_for)
print(end - start)
""" ALTERNATIVE APPROACH """
def solve_sparse_3D_blockdiag(A,B):
    oldN = B.shape[0]
    A_ = block_diag(A)  # huge sparse block-matrix of independent problems
    B_ = B.ravel()      # flattened vector
    results = spsolve(A_, B_)
    return results.reshape(oldN, -1)  # unflatten results
start = pc()
result_sparse_for = solve_sparse_3D_blockdiag(A_sparse,B)
end = pc()
print(result_sparse_for)
print(end - start)
which outputs:
[[ 0.97529866 1.26406276 0.83348888 ... 0.99310639 3.90781207
0.16724226]
[ 1.23398934 28.82088739 1.6955886 ... 1.85011686 0.23386882
1.17208753]
[ 0.92864777 0.22248781 0.09445412 ... 2.5080376 0.91701228
0.97266564]
...
[ 0.33087093 0.89034736 1.7523883 ... 0.2171746 4.89236164
0.31546549]
[ 1.2163625 3.0100941 0.87216264 ... 1.62105596 0.33211353
2.07929302]
[ 5.35677404 1.23830776 0.16073721 ... 0.26492506 0.53676822
3.73192617]]
0.08764066299999995
###
[[ 0.97529866 1.26406276 0.83348888 ... 0.99310639 3.90781207
0.16724226]
[ 1.23398934 28.82088739 1.6955886 ... 1.85011686 0.23386882
1.17208753]
[ 0.92864777 0.22248781 0.09445412 ... 2.5080376 0.91701228
0.97266564]
...
[ 0.33087093 0.89034736 1.7523883 ... 0.2171746 4.89236164
0.31546549]
[ 1.2163625 3.0100941 0.87216264 ... 1.62105596 0.33211353
2.07929302]
[ 5.35677404 1.23830776 0.16073721 ... 0.26492506 0.53676822
3.73192617]]
0.07241856000000013
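For reference: block_diag stacks the N independent n-by-n systems into one (N*n)-by-(N*n) block-diagonal sparse matrix, so a single spsolve call replaces the Python-level loop. A quick sanity check that both routes agree (reusing the objects from the demo above):

print(np.allclose(solve_sparse_3D(A_sparse,B), solve_sparse_3D_blockdiag(A_sparse,B)))  # expected: True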
There are some things left to do:
- use your original, more sane benchmarking
- build block_diag in the correct sparse format to get rid of some potential warnings and slowdown: see the docs (a sketch of this and the next point follows after the list)
- tune spsolve's parameter permc_spec
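A minimal sketch of those last two points (block_diag's format argument and spsolve's permc_spec argument exist in scipy, but whether 'NATURAL' actually helps for this block-diagonal structure is an assumption worth benchmarking):

def solve_sparse_3D_blockdiag_tuned(A,B):
    oldN = B.shape[0]
    # build directly in CSC, the format spsolve works on natively,
    # to avoid a conversion warning and its cost
    A_ = block_diag(A, format='csc')
    B_ = B.ravel()
    # 'NATURAL' keeps the natural column ordering; the matrix is
    # already block-diagonal, so reordering may be unnecessary work
    results = spsolve(A_, B_, permc_spec='NATURAL')
    return results.reshape(oldN, -1)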