I want to implement a k-nearest-neighbors algorithm on the S&P 500 index to predict future prices, and to build a quantitative algorithmic trading model in Python with the scikit-learn library. I have a basic understanding of the kNN algorithm, but I am a complete beginner at writing machine-learning code in Python, so I would be glad if someone could help me.
Here is my simulation logic:

- Asset: S&P 500 index monthly prices (investable via an ETF)
- Logic: at the end of each month, predict next month's price direction (up or down) with the kNN algorithm ----> predicted up: buy the S&P 500 index; predicted down: hold cash (assumed to return 3% a year)
- Training data set: a rolling window of the most recent 12 months (the training set keeps moving forward over time, as with a moving average)
- Independent variables: the trailing 3-, 6-, 9- and 12-month returns, and the rolling standard deviation of the last 12 monthly returns
- Dependent variable: next month's return, expressed as positive or negative
Here is my code. I can build the basic data set, but I don't know how to write the main algorithm and the simulation logic. Could someone complete this code?
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pandas_datareader.data as web

def price(stock, start):
    # Download adjusted close prices, rebase to 1.0 and resample to month-end
    price = web.DataReader(name=stock, data_source='yahoo', start=start)['Adj Close']
    return price.div(price.iat[0]).resample('M').last().to_frame('price')

a = price('SPY', '2000-01-01')
# Cash leg: 3% annual return compounded monthly
a['cash'] = [(1.03 ** (1 / 12)) ** x for x in range(len(a.index))]
# Trailing 3-, 6-, 9- and 12-month price ratios (momentum features)
a['R3'] = a.price / a.price.shift(3)
a['R6'] = a.price / a.price.shift(6)
a['R9'] = a.price / a.price.shift(9)
a['R12'] = a.price / a.price.shift(12)
# Rolling standard deviation of the last 12 monthly returns
a['rollingstd'] = a.price.pct_change().rolling(12).std()
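For the missing piece, here is a minimal sketch of one way to wire up the walk-forward kNN simulation described in the logic above. It continues from the data frame a built in the snippet; the label construction, the 12-month refit loop and the equity-curve bookkeeping are assumptions about how the rules could be coded, not a tested or definitive implementation.

from sklearn import neighbors

# Label: 1 if next month's price closes higher than this month's, else 0
a['label'] = np.where(a.price.shift(-1) > a.price, 1, 0)
a = a.dropna()  # drop the warm-up rows where R12 / rollingstd are still NaN

features = ['R3', 'R6', 'R9', 'R12', 'rollingstd']
a['predicted'] = np.nan

# Walk forward: at each month-end, fit kNN on the previous 12 months of
# features/labels and predict the direction of the coming month
for i in range(12, len(a.index)):
    clf = neighbors.KNeighborsClassifier(n_neighbors=3)
    clf.fit(a[features].iloc[i - 12:i], a['label'].iloc[i - 12:i])
    a.loc[a.index[i], 'predicted'] = clf.predict(a[features].iloc[[i]])[0]

# Equity curve: hold the index in the month after an "up" prediction,
# otherwise sit in cash at roughly 3% a year
monthly_cash = 1.03 ** (1 / 12)
index_ret = a.price / a.price.shift(1)
a['strategy'] = np.where(a.predicted.shift(1) == 1, index_ret, monthly_cash)
a['strategy'] = a['strategy'].cumprod()

a[['price', 'strategy']].plot()
plt.show()

The n_neighbors=3 choice and the 12-month window are arbitrary here and would be worth tuning.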
I got it working. This is another version of the strategy that uses a fractal momentum score instead, but it may still help:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pandas_datareader.data as web
from sklearn import neighbors, svm
from sklearn.ensemble import RandomForestClassifier

def price(stock, start):
    # Download adjusted close prices, rebase to 1.0 and resample to month-end
    price = web.DataReader(name=stock, data_source='yahoo', start=start)['Adj Close']
    return price.div(price.iat[0]).resample('M').last().to_frame('price')

def fractal(a, p):
    # Fractal momentum score for look-back windows of 1..p months:
    # signed net move divided by the sum of absolute monthly moves
    df = pd.DataFrame()
    for count in range(1, p + 1):
        a['direction'] = np.where(a['price'].diff(count) > 0, 1, 0)
        a['abs'] = a['price'].diff(count).abs()
        a['volatility'] = a.price.diff().abs().rolling(count).sum()
        a['fractal'] = a['abs'] / a['volatility'] * a['direction']
        df = pd.concat([df, a['fractal']], axis=1)
    return df

def meanfractal(a, l=12):
    # Average fractal score over l windows (defined for reference, not used below)
    a['meanfractal'] = pd.DataFrame(fractal(a, l)).sum(axis=1, skipna=False) / l

a = price('^KS11', '2000-01-01')
# Cash leg: 3% annual return compounded monthly
a['cash'] = [(1.03 ** (1 / 12)) ** x for x in range(len(a.index))]
# Features: mean fractal momentum score and trailing 12-month volatility
a['meanfractal'] = pd.DataFrame(fractal(a, 12)).sum(axis=1, skipna=False) / 12
a['rollingstd'] = a.price.pct_change().shift(1).rolling(12).std()
# Label: 1 if this month's price is above last month's, else 0
a['result'] = np.where(a.price > a.price.shift(1), 1, 0)
a = a.dropna()
print(a)

clf = neighbors.KNeighborsClassifier(n_neighbors=3)
clf1 = svm.SVC()
clf3 = RandomForestClassifier(n_estimators=5)  # only clf3 is used below

# Walk forward: refit each month on the previous 12 months and take the
# prediction for the last row of the training window
a['predicted'] = np.nan
for i in range(12, len(a.index)):
    x = a[['meanfractal', 'rollingstd']].iloc[i - 12:i]
    y = a['result'].iloc[i - 12:i]
    clf3.fit(x, y)
    a.loc[a.index[i], 'predicted'] = clf3.predict(x)[-1]

a = a.dropna()
a.price = a.price.div(a.price.iloc[0])
print(a)

# In-sample accuracy of the last fitted model over the whole sample
accuracy = clf3.score(a[['meanfractal', 'rollingstd']], a['result'])

# '결과' (result): hold the index after an "up" prediction, otherwise stay flat
a['결과'] = np.where(a.predicted.shift(1) == 1, a.price / a.price.shift(1), 1).cumprod()
# 'result': 50/50 index/cash blend after an "up" prediction, cash otherwise
a['result'] = np.where(a.predicted.shift(1) == 1, (a.price / a.price.shift(1) + 1.0026) / 2, 1.0026).cumprod()
# '동일비중' (equal weight): always hold the 50/50 index/cash blend
a['동일비중'] = ((a.price / a.price.shift(1) + 1.0026) / 2).cumprod()

a[['result', 'price', '결과']].plot()
plt.show()
print("Predicted model accuracy: " + str(accuracy))