Implementing a backtracking line search algorithm for an unconstrained optimization problem



I can't figure out how to implement the backtracking line search algorithm in Python. The algorithm itself is: here

An alternative form of the algorithm is: here

In theory, they are exactly the same.
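As far as I can tell, both reduce to the same test when the search direction is the negative gradient: starting from t = 1, keep shrinking t ← β·t while f(x − t·∇f(x)) > f(x) − α·t·‖∇f(x)‖².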

I am trying to implement this in Python to solve an unconstrained optimization problem with a given starting point. Here is what I have tried so far:

import numpy as np

def func(x):
    return  # my function with inputs x1, x2

def grad_func(x):
    df1 = ...  # derivative with respect to x1
    df2 = ...  # derivative with respect to x2
    return np.array([df1, df2])

def backtrack(x, gradient, t, a, b):
    '''
    x: the initial values given
    gradient: the initial gradient direction for the given initial value
    t: t is initialized at t=1
    a: alpha value between (0, .5). I set it to .3
    b: beta value between (0, 1). I set it to .8
    '''
    return t

# Define the initial point, step size, and alpha/beta constants
x0, t0, alpha, beta = [x1, x2], 1, .3, .8

# Find the gradient of the initial value to determine the initial slope
direction = grad_func(x0)

t = backtrack(x0, direction, t0, alpha, beta)

Can anyone offer any guidance on how best to implement the backtracking algorithm? I feel like I have all the information I need, but I just don't understand the implementation in code.
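For reference, here is a minimal sketch of how the backtrack stub above could be filled in, assuming the search direction is the negative gradient; the objective and gradient below are only stand-ins (the same f(x) = x1^2 + 3*x1*x2 + 12 used in the code further down):

import numpy as np

def func(x):
    # example objective, the same one used in the code further down
    return x[0]**2 + 3*x[0]*x[1] + 12

def grad_func(x):
    # gradient of the example objective: [df/dx1, df/dx2]
    return np.array([2*x[0] + 3*x[1], 3*x[0]])

def backtrack(x, gradient, t, a, b):
    # Shrink t by the factor b until the sufficient-decrease (Armijo) test
    #   f(x - t*gradient) <= f(x) - a * t * ||gradient||^2
    # holds for the steepest-descent direction -gradient.
    while func(x - t * gradient) > func(x) - a * t * np.dot(gradient, gradient):
        t *= b
    return t

x0, t0, alpha, beta = np.array([2.0, 3.0]), 1, .3, .8
direction = grad_func(x0)
t = backtrack(x0, direction, t0, alpha, beta)
print(t)   # 0.32768... for this starting point

The returned t is then used to take the step x0 - t * direction.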

import numpy as np

alpha = 0.3
beta = 0.8
f = lambda x: (x[0]**2 + 3*x[1]*x[0] + 12)
dfx1 = lambda x: (2*x[0] + 3*x[1])   # partial derivative with respect to x1
dfx2 = lambda x: (3*x[0])            # partial derivative with respect to x2
t = 1
count = 1
x0 = np.array([2, 3])
dx0 = np.array([.1, 0.05])           # (unused in this snippet)

def backtrack(x0, dfx1, dfx2, t, alpha, beta, count):
    # Shrink t while the sufficient-decrease condition is violated, i.e. while
    # f(x0) - (f(x0 - t*grad) + alpha*t*||grad||^2) < 0
    while (f(x0) - (f(x0 - t*np.array([dfx1(x0), dfx2(x0)])) + alpha * t * np.dot(np.array([dfx1(x0), dfx2(x0)]), np.array([dfx1(x0), dfx2(x0)])))) < 0:
        t *= beta
        print("""
########################
###   iteration {}   ###
########################
""".format(count))
        print("Inequality: ",  f(x0) - (f(x0 - t*np.array([dfx1(x0), dfx2(x0)])) + alpha * t * np.dot(np.array([dfx1(x0), dfx2(x0)]), np.array([dfx1(x0), dfx2(x0)]))))
        count += 1
    return t

t = backtrack(x0, dfx1, dfx2, t, alpha, beta, count)
print("\nfinal step size :", t)

Output:

########################
###   iteration 1   ###
########################
Inequality:  -143.12

########################
###   iteration 2   ###
########################
Inequality:  -73.22880000000006

########################
###   iteration 3   ###
########################
Inequality:  -32.172032000000044

########################
###   iteration 4   ###
########################
Inequality:  -8.834580480000021

########################
###   iteration 5   ###
########################
Inequality:  3.7502844927999845
final step size : 0.32768000000000014
[Finished in 0.257s]
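A usage note: backtrack only chooses the step size at the current point, so in a full solver it is re-run at every iterate. Below is a rough sketch of that outer loop; gradient_descent, tol, and max_iter are made-up names and choices, not part of the code above, and func/grad_func stand for any objective and its gradient:

import numpy as np

def gradient_descent(func, grad_func, x0, alpha=0.3, beta=0.8, tol=1e-6, max_iter=100):
    # Repeat: compute the gradient, pick a step size by backtracking,
    # then move in the negative-gradient direction.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_func(x)
        if np.linalg.norm(g) < tol:   # stop once the gradient is (nearly) zero
            break
        t = 1.0
        # backtracking line search: shrink t until sufficient decrease holds
        while func(x - t * g) > func(x) - alpha * t * np.dot(g, g):
            t *= beta
        x = x - t * g
    return x

Keep in mind that the example f(x) = x1^2 + 3*x1*x2 + 12 is not bounded below (take x1 = s, x2 = -s), so it is fine for testing the line search itself, but a full minimization run on it will not converge.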

I did this, but in MATLAB; here is the code:

syms params
f = @(params) % your function ;
gradient_f = [diff(f, param1); diff(f, param2); diff(f, param3); ...];
x0 = % initial value ;
norm_gradient_zero = % norm of gradient_f(x0) ;
ov = % value to optimize (the step size) ;
a = % alpha ;
b = % beta ;
while f(x0 - ov*gradient_f) - (f(x0) - ov*b*norm_gradient_zero^2) > 0
    ov = a*ov;
end
disp(ov)
