"takes 1 positional argument but 2 were given",同时找到最少的功能



I want to use fmin to find the minimum of a function, but I get the following error:

TypeError: <lambda>() takes 1 positional argument but 2 were given

The offending code is:

import numpy as np
from scipy.optimize import fmin
g = lambda alpha: np.sum(np.square(np.subtract(D, (avec[0]-alpha*grad)*f((avec[1]-alpha*grad),y))))
b = fmin(g,0.0)

Can you tell me how to fix it?

The full code is here:

from scipy.optimize import fmin
from scipy import interpolate
import numpy as np

Emax = 10;
bins = 200;
x = np.linspace(1, Emax, num=Emax, dtype=int)        # create grid of indexes
y = np.linspace(1, bins, num=bins, dtype=int)
z = np.random.rand(bins, Emax)                       # response matrix   
f = interpolate.interp2d(x, y, z, kind='cubic')      # make the matrix continuous
D = np.zeros(bins)
D = 1*f(1.5, y) + 3*f(2.5, y)   # signal
iterations = 1000
step = 1e-5
avec = np.array([1.0,2.0])   # chosen starting parameters 
grad = np.array([0.0,0.0])
chix_current = np.arange(iterations, dtype=float)
#gradient unfolding
for i in range(0, iterations):
    fx = avec[0]*f(avec[1], y)                                   # evaluation in every layer
    chi = np.square(np.subtract(D, fx))                          # chi function
    chi_a = np.square(np.subtract(D, (avec[0]+step)*f(avec[1], y)))
    chi_b = np.square(np.subtract(D, avec[0]*f((avec[1]+step), y)))
    chisquared = np.sum(chi)
    chisquared_a = np.sum(chi_a)
    chisquared_b = np.sum(chi_b)
    grad[0] = np.divide(np.subtract(chisquared_a, chisquared), step)
    grad[1] = np.divide(np.subtract(chisquared_b, chisquared), step)

    g = lambda alpha: np.sum(np.square(np.subtract(D, (avec[0]-alpha*grad)*f((avec[1]-alpha*grad), y))))
    b = fmin(g, 0.0)

    avec = np.subtract(avec, 1e5*grad)

In the end, all I need is the value of alpha at which the function g reaches its minimum, so that I can use it in the last line instead of 1e5.
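For reference, a minimal sketch of what that last step could look like, assuming the variables from the listing above (D, f, y, avec, grad) and assuming the component gradients grad[0] and grad[1] are what is meant where the listing multiplies by the whole grad vector; fmin returns the minimizer as a one-element array, so its first element is the scalar alpha:

# Line-search objective: mismatch between the signal D and the model after a step of
# size alpha along the gradient. fmin passes alpha as a length-1 ndarray.
def g(alpha):
    a = np.asarray(alpha).item()                  # scalar step size
    model = (avec[0] - a*grad[0]) * f(avec[1] - a*grad[1], y)
    return np.sum(np.square(D - model))

alpha_opt = fmin(g, 0.0, disp=False)[0]           # alpha at the minimum of g
avec = avec - alpha_opt*grad                      # replaces the hard-coded 1e5 step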

Your code is not very clear; some of the methods and variables are ambiguous.

From what you have given, the lambda function f() is supposed to take one argument, but inside f() you recursively call f() again, and that inner f() takes two arguments; I think this is a typo.
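That name clash can be reproduced in isolation; a minimal sketch with purely illustrative names (the two-argument function here stands in for the interp2d object):

import numpy as np

def f(x, y):                       # a two-argument callable, like the interp2d object f
    return x + y

y = np.arange(5)

# Rebinding the name f to a one-argument lambda shadows the two-argument callable.
# Inside the lambda body, f now refers to the lambda itself, so f(alpha, y) calls
# the lambda with two arguments:
f = lambda alpha: np.sum(f(alpha, y))

f(0.5)   # TypeError: <lambda>() takes 1 positional argument but 2 were given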

Try renaming the lambda to something else that does not conflict, for example g:

g = lambda alpha: np.sum(np.square(np.subtract(D, (avec[0]-alpha*grad)*f((avec[1]-alpha*grad),y))))
b = fmin(g,0.0)

Latest update