1. How do I handle 0 when taking np.log? 2. scipy.optimize.fmin_tnc produces a shape error even after transposing



I have two questions about a Python 3 implementation of logistic regression (Andrew Ng's course):

  1. When I take alpha = 0.01, I get two errors:

    a. A zero value encountered when taking the log

    b. A matrix multiplication error

I know the sigmoid function should only return values in (0, 1), but when I printed the hypothesis while running gradient descent, I noticed that some values were being rounded to 1 (making 1 - hyp = 0 and hence producing the error); a quick check of this is shown right after this list. So I thought raising the precision of theta to np.float128 would help, but it did not!

However, taking alpha = 0.001 produces no errors, but I have to increase the number of iterations to 1,000,000 to bring the cost down from 0.693 to 0.224.

  2. I also tried using scipy's optimizer to get the optimal values of theta. However, it gives the error that I have attached along with the code. Even after passing theta.T, I get the same error.
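
For example, the rounding is easy to reproduce; z = 40 below is just an illustrative input, but anything that large saturates float64:

import numpy as np

z = 40.0
hyp = 1 / (1 + np.exp(-z))   # np.exp(-40) is about 4e-18, far below float64 eps (~2.2e-16)
print(hyp == 1.0)            # True: the hypothesis is rounded to exactly 1.0
print(np.log(1 - hyp))       # -inf, with a warning about taking the log of zero

My full code and the traceback are below:
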
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# data_set is assumed to be a pandas DataFrame loaded earlier
# (two feature columns followed by a label column)
data_set.insert(0, 'Ones', 1)        # add an intercept column of ones
X = data_set.iloc[:, 0:3]
Y = data_set.iloc[:, 3]
# convert X and Y to numpy matrices
X = np.matrix(X.values)
Y = np.matrix(Y.values)
# initialize theta
theta = np.float128(np.zeros([1, 3]))
theta = np.matrix(theta)
Y = Y.T
# now let's define our cost function
def costfunction(theta, X, Y):
    m = len(Y)
    hypothesis = sigmoid(np.dot(X, theta.T))
    error = np.multiply(-Y, np.log(hypothesis)) - np.multiply((1 - Y), np.log(1 - hypothesis))
    return 1 / m * np.sum(error)
# let's define our gradient descent function now
def gradientdescent(X, Y, theta, alpha, iters):
    parameters = 3
    temp = np.matrix(np.zeros(theta.shape))
    cost = np.zeros(iters)
    m = len(Y)

    for i in range(iters):
        error = sigmoid(X * theta.T) - Y
        for j in range(parameters):
            term = np.multiply(error, X[:, j])
            temp[0, j] = theta[0, j] - ((alpha / m) * np.sum(term))

        theta = temp
        cost[i] = costfunction(theta, X, Y)

    return theta, cost

alpha = 0.001
iters = 1000000
param, cost = gradientdescent(X, Y, theta, alpha, iters)
# We can also get the optimum values for theta using scipy's optimize function
# so, let's define a gradient function now
def gradient(theta, X, Y):
    parameters = 3
    grad = np.zeros(parameters)
    m = len(Y)

    for i in range(parameters):
        error = sigmoid((X * theta.T)) - Y
        term = np.multiply(error, X[:, i])
        grad[i] = np.sum(term) / m

    return grad

# now let's use scipy
import scipy.optimize as opt
result = opt.fmin_tnc(func=costfunction, x0=theta, fprime=gradient, args=(X, Y))
costfunction(result[0], X, Y)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-226-ac3f2f801635> in <module>
1 #now let's use scipy
2 import scipy.optimize as opt
----> 3 result= opt.fmin_tnc(func=costfunction,x0=theta, fprime= gradient, args=(X,Y))
4 costfunction(result[0],X,Y)
~/anaconda3/lib/python3.7/site-packages/scipy/optimize/tnc.py in fmin_tnc(func, x0, fprime, args, approx_grad, bounds, epsilon, scale, offset, messages, maxCGit, maxfun, eta, stepmx, accuracy, fmin, ftol, xtol, pgtol, rescale, disp, callback)
273             'disp': False}
274 
--> 275     res = _minimize_tnc(fun, x0, args, jac, bounds, callback=callback, **opts)
276 
277     return res['x'], res['nfev'], res['status']
~/anaconda3/lib/python3.7/site-packages/scipy/optimize/tnc.py in _minimize_tnc(fun, x0, args, jac, bounds, eps, scale, offset, mesg_num, maxCGit, maxiter, eta, stepmx, accuracy, minfev, ftol, xtol, gtol, rescale, disp, callback, **unknown_options)
407                                         offset, messages, maxCGit, maxfun,
408                                         eta, stepmx, accuracy, fmin, ftol,
--> 409                                         xtol, pgtol, rescale, callback)
410 
411     funv, jacv = func_and_grad(x)
~/anaconda3/lib/python3.7/site-packages/scipy/optimize/tnc.py in func_and_grad(x)
370         def func_and_grad(x):
371             f = fun(x, *args)
--> 372             g = jac(x, *args)
373             return f, g
374 
<ipython-input-225-ad5800c8116a> in gradient(theta, X, Y)
7 
8     for i in range(parameters):
----> 9         error= sigmoid((X*theta.T)) -Y
10         term= np.multiply(error,X[:,i])
11         grad[i]= np.sum(term)/m
~/anaconda3/lib/python3.7/site-packages/numpy/matrixlib/defmatrix.py in __mul__(self, other)
218         if isinstance(other, (N.ndarray, list, tuple)) :
219             # This promotes 1-D vectors to row vectors
--> 220             return N.dot(self, asmatrix(other))
221         if isscalar(other) or not hasattr(other, '__rmul__') :
222             return N.dot(self, other)
<__array_function__ internals> in dot(*args, **kwargs)
ValueError: shapes (100,3) and (1,3) not aligned: 3 (dim 1) != 1 (dim 0)

I'm no expert on scipy, but if you want the sigmoid function to never return exactly 0 or 1, you can use numpy's minimum and maximum:

def sigmoid(z):
    sig = 1 / (1 + np.exp(-z))     # Define sigmoid function
    sig = np.minimum(sig, 0.9999)  # Set upper bound
    sig = np.maximum(sig, 0.0001)  # Set lower bound
    return sig

However, the real problem is not the rounding in the cost computation (even Octave/MATLAB returns nan for some of the data your code generates). Your real problem is that your gradient descent implementation diverges unless the learning rate is very small. Using gradient descent instead of a more advanced optimization algorithm (like "fminunc" in Octave/MATLAB) forces you to choose a small learning rate and run many iterations. If you have not already done some kind of feature normalization/standardization, that might help.
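
As a minimal sketch of that standardization idea (assuming, as in your code above, that column 0 of X is the 'Ones' intercept and columns 1-2 are the raw features; the helper name, the alpha of 0.01 and the 10000 iterations are only illustrative choices):

import numpy as np

def standardize_features(X):
    X = np.asarray(X, dtype=float).copy()   # plain float array, leaves the original untouched
    mu = X[:, 1:].mean(axis=0)              # per-feature mean (skip the intercept column)
    sigma = X[:, 1:].std(axis=0)            # per-feature standard deviation
    X[:, 1:] = (X[:, 1:] - mu) / sigma
    return X, mu, sigma

X_norm, mu, sigma = standardize_features(X)
param, cost = gradientdescent(np.matrix(X_norm), Y, theta, 0.01, 10000)

With the features on a comparable scale the gradient steps are better conditioned, so a larger learning rate usually converges in far fewer iterations; just remember to apply the same mu and sigma to any new inputs before making predictions.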