I am simulating a signal with numpy, and I want to simulate different sampling frequencies. The code is as follows:
import numpy as np
import matplotlib.pyplot as plt

frequencies = [10, 20, 40, 49, 100, 101, 102, 103, 104, 301, 526, 1222]  # different sampling frequencies
T = 1  # amount of time for which to simulate the signal, in seconds
f = 50  # frequency of the signal

for i in frequencies:
    fig, ax1 = plt.subplots(1)
    N = T * i  # number of samples needed at this sampling frequency for the duration of the signal
    linear = np.linspace(0, T, N)  # create the points in time at which to evaluate the sine
    y = np.sin(2 * np.pi * f * linear)  # evaluate the sine at the sample points
    # create the graph
    ax1.scatter(linear, y)
    ax1.set(ylabel='Amplitude', xlabel='Time in s')
    fig.savefig('Freq{}.png'.format(i))
    plt.close(fig)
The problem shows up at a sampling frequency of 101 Hz: there the values of the sine function are tiny, around 10^(-14), while for all the other frequencies the plots come out normally. Yet if I evaluate the sine function by hand on the printed values of the linear array for 101 Hz, I get normal values. Does anyone know what the problem is? Maybe an approximation issue in np.sin? Or is the float64 dtype somehow broken, because those values look a bit sus?
When i = 101, np.linspace(0, T, N) includes both endpoints, so the spacing between samples is T / (N - 1) = 1/100; with f = 50 this makes f * linear = [0.0, 0.5, 1.0, 1.5, ...]. The expression np.sin(2 * np.pi * f * linear) is therefore sampling the sin function at multiples of pi. Computed with infinite precision, these values would all be 0. However, because of ordinary floating-point imprecision, the sample points are not exact multiples of pi, so the expression returns values that are close to 0 but not necessarily exactly 0.
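A minimal sketch of this floating-point effect, using the same T = 1, f = 50, and N = 101 as in your loop (the exact magnitudes printed may differ slightly from run to run):

import numpy as np

T = 1
f = 50
N = 101  # sampling frequency of 101 Hz for 1 second

linear = np.linspace(0, T, N)       # step is T/(N-1) = 0.01, so f*linear steps by 0.5
print(f * linear[:5])               # approximately [0.  0.5 1.  1.5 2. ]

y = np.sin(2 * np.pi * f * linear)  # sin evaluated at (near-)multiples of pi
print(np.abs(y).max())              # tiny, on the order of 1e-14, rather than exactly 0

# 2*np.pi*f*linear is not an exact multiple of pi in float64,
# so np.sin returns a tiny nonzero value instead of exactly 0.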