SVM visualization in Python (2D)



I have an assignment, shown below. I have completed the first five tasks and am stuck on the last one: plotting the result. Please explain how to do it. Thanks in advance.

*(I only started learning SVM and ML a few days ago, so please bear that in mind.)

**(As far as I understand, the workflow should be the same for every kernel type. If you show it for one of the kernels, that would be great; I will try to adapt your code for the others.)

Procedure to follow:

  1. Take random samples (n = 100) from this map and feed them into SVC in Python. The dataset contains Easting, Northing and Rock information.

  2. Using those 100 randomly selected samples, split them randomly again into training and test datasets.

  3. Try to run SVC with the linear, polynomial, radial basis function and sigmoid (hyperbolic tangent) kernels.

  4. For each kernel, find the values of "C" (and "gamma", if you are using e.g. the radial basis function) that give the best accuracy score.

  5. Once you have the fitted model and have computed its accuracy score (obtained on the test dataset), feed the whole dataset into that fitted model and predict the output for all 90,000 sample points we have in reference.csv.

  6. Show me the resulting map, together with the accuracy score obtained from each fitted model.
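For reference, steps 1–5 can be sketched compactly with a scikit-learn `Pipeline`, which keeps the scaler and the SVC together so the full dataset is transformed exactly like the training data. This is only a sketch under assumptions: the synthetic `df` below stands in for reference.csv, and only the RBF kernel is searched.

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score

# Synthetic stand-in for reference.csv (Easting, Northing, Rock)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'Easting': rng.uniform(0, 300, 90000),
    'Northing': rng.uniform(0, 300, 90000),
})
df['Rock'] = (df['Easting'] + df['Northing'] > 300).astype(int)

# Step 1: draw 100 random samples
df_model = df.sample(n=100, random_state=0)
X = df_model[['Easting', 'Northing']]
y = df_model['Rock']

# Step 2: train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Steps 3-4: grid-search one kernel (RBF shown); scaling lives inside the pipeline
pipe = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
params = {'svc__C': (0.1, 1, 10, 100), 'svc__gamma': (0.01, 0.1, 1)}
grid = GridSearchCV(pipe, params, cv=3, scoring='accuracy')
grid.fit(X_train, y_train)

# Step 5: test accuracy, then predict all 90,000 points with the fitted pipeline
test_acc = accuracy_score(y_test, grid.predict(X_test))
y_pred_all = grid.predict(df[['Easting', 'Northing']])
```

Because the scaler is part of the pipeline, `grid.predict` on the whole dataset automatically applies the same standardization that was fitted on the training data.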

The dataset looks like:

[screenshot of the dataset: Easting, Northing and Rock columns]

The same for all 90,000 points.

Here is the code:

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
### Importing Info
df = pd.read_csv("C:/Users/Admin/Desktop/RA/step 1/reference.csv", header=0)
df_model = df.sample(n = 100)
df_model.shape
## X-y split
X = df_model.loc[:,df_model.columns!="Rock"]
y = df_model["Rock"]
y_initial = df["Rock"]
### for whole dataset
X_wd = df.loc[:, df.columns != "Rock"]
y_wd = df["Rock"]
## Test-train split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
## Standardizing the Data
from sklearn.preprocessing import StandardScaler
sc = StandardScaler().fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
## Linear
### Grid Search
from sklearn.model_selection import GridSearchCV
from sklearn import svm
from sklearn.metrics import accuracy_score, confusion_matrix
params_linear = {'C' : (0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 500,1000)}
clf_svm_l = svm.SVC(kernel = 'linear')
svm_grid_linear = GridSearchCV(clf_svm_l, params_linear, n_jobs=-1,
                               cv=3, verbose=1, scoring='accuracy')
svm_grid_linear.fit(X_train_std, y_train)
svm_grid_linear.best_params_
linsvm_clf = svm_grid_linear.best_estimator_
accuracy_score(y_test, linsvm_clf.predict(X_test_std))
### training svm
clf_svm_l = svm.SVC(kernel = 'linear', C = 0.1)
clf_svm_l.fit(X_train_std, y_train)
### predicting model
y_train_pred_linear = clf_svm_l.predict(X_train_std)
y_test_pred_linear = clf_svm_l.predict(X_test_std)
y_test_pred_linear
clf_svm_l.n_support_
### whole dataset (apply the same fitted scaler before predicting)
y_pred_linear_wd = clf_svm_l.predict(sc.transform(X_wd))
### map


## Poly
### grid search for poly
params_poly = {'C': (0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 500, 1000),
               'degree': (1, 2, 3, 4, 5, 6)}
clf_svm_poly = svm.SVC(kernel='poly')
svm_grid_poly = GridSearchCV(clf_svm_poly, params_poly, n_jobs=-1,
                             cv=3, verbose=1, scoring='accuracy')
svm_grid_poly.fit(X_train_std, y_train)
svm_grid_poly.best_params_
polysvm_clf = svm_grid_poly.best_estimator_
accuracy_score(y_test, polysvm_clf.predict(X_test_std))
### training svm
clf_svm_poly = svm.SVC(kernel = 'poly', C = 50, degree = 2)
clf_svm_poly.fit(X_train_std, y_train)
### predicting model
y_train_pred_poly = clf_svm_poly.predict(X_train_std)
y_test_pred_poly = clf_svm_poly.predict(X_test_std)
clf_svm_poly.n_support_
### whole dataset (apply the same fitted scaler before predicting)
y_pred_poly_wd = clf_svm_poly.predict(sc.transform(X_wd))
### map            

## RBF
### grid search rbf
params_rbf = {'C': (0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 500, 1000),
              'gamma': (0.001, 0.01, 0.1, 0.5, 1)}
clf_svm_r = svm.SVC(kernel='rbf')
svm_grid_r = GridSearchCV(clf_svm_r, params_rbf, n_jobs=-1,
                          cv=10, verbose=1, scoring='accuracy')
svm_grid_r.fit(X_train_std, y_train)
svm_grid_r.best_params_
rsvm_clf = svm_grid_r.best_estimator_
accuracy_score(y_test, rsvm_clf.predict(X_test_std))
### training svm
clf_svm_r = svm.SVC(kernel = 'rbf', C = 500, gamma = 0.5)
clf_svm_r.fit(X_train_std, y_train)
### predicting model
y_train_pred_r = clf_svm_r.predict(X_train_std)
y_test_pred_r = clf_svm_r.predict(X_test_std)
### whole dataset (apply the same fitted scaler before predicting)
y_pred_r_wd = clf_svm_r.predict(sc.transform(X_wd))
### map            

## Tangent
### grid search
params_tangent = {'C': (0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50),
                  'gamma': (0.001, 0.01, 0.1, 0.5, 1)}
clf_svm_tangent = svm.SVC(kernel='sigmoid')
svm_grid_tangent = GridSearchCV(clf_svm_tangent, params_tangent, n_jobs=-1,
                                cv=10, verbose=1, scoring='accuracy')
svm_grid_tangent.fit(X_train_std, y_train)
svm_grid_tangent.best_params_
tangentsvm_clf = svm_grid_tangent.best_estimator_
accuracy_score(y_test, tangentsvm_clf.predict(X_test_std))
### training svm
clf_svm_tangent = svm.SVC(kernel = 'sigmoid', C = 1, gamma = 0.1)
clf_svm_tangent.fit(X_train_std, y_train)
### predicting model
y_train_pred_tangent = clf_svm_tangent.predict(X_train_std)
y_test_pred_tangent = clf_svm_tangent.predict(X_test_std)
### whole dataset (apply the same fitted scaler before predicting)
y_pred_tangent_wd = clf_svm_tangent.predict(sc.transform(X_wd))
### map

Judging from your sample data, you appear to be working with regularly spaced data, where the rows/columns are iterated in a monotonically increasing fashion. Here is one way to reshape this dataset into a 2D array (one row per northing value) and plot it accordingly:

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# create sample data
data = {
    'Easting': [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3],
    'Northing': [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
    'Rocks': [0, 0, 1, 0, 0, 2, 0, 0, 0, 1, 0, 0],
}
df = pd.DataFrame(data)
# reshape data into a 2d matrix (assuming easting/northing run in steps from 0 to their max values)
n_cols = np.max(df['Easting']) + 1  # one column per easting step
img_data = np.reshape(df['Rocks'].to_numpy(), (-1, n_cols))
# plot as image
plt.imshow(img_data)
plt.show()

If you are dealing with irregularly spaced data, i.e. not every easting/northing combination has a value, you may want to look into plotting irregularly spaced data instead.
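For that irregular case, a scatter plot colored by class (or `plt.tripcolor` for a filled, triangulated surface) avoids the reshape entirely. A minimal sketch with made-up coordinates:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up irregularly spaced points (no grid structure)
rng = np.random.default_rng(1)
easting = rng.uniform(0, 10, 200)
northing = rng.uniform(0, 10, 200)
rocks = (easting * northing > 25).astype(int)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
# Left: plain scatter colored by class
ax1.scatter(easting, northing, c=rocks, cmap='viridis', s=15)
ax1.set_title('scatter')
# Right: filled surface interpolated between the points
ax2.tripcolor(easting, northing, rocks)
ax2.set_title('tripcolor')
plt.show()
```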

Here is the answer for plotting the linear-kernel visualization, for anyone who runs into the same problem as me. It is easy to adapt this code to the other kernels.

# Visualising the Training set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_train_std, y_train
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, clf_svm_l.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('darkblue', 'yellow')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('blue', 'gold'))(i), label=j)
plt.title('SVM (Training set)')
plt.xlabel('Easting')
plt.ylabel('Northing')
plt.legend()
plt.show()
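Since the only thing that changes between kernels is the fitted classifier, the plot above can be wrapped in a small helper and called once per model. This is a sketch: `plot_decision_regions` is a hypothetical helper name, and `X_demo`/`y_demo` are synthetic stand-ins for the standardized training data from the question.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.svm import SVC

def plot_decision_regions(clf, X, y, title):
    """Contour-fill the classifier's decision regions and overlay the points."""
    x1, x2 = np.meshgrid(
        np.arange(X[:, 0].min() - 1, X[:, 0].max() + 1, 0.02),
        np.arange(X[:, 1].min() - 1, X[:, 1].max() + 1, 0.02))
    z = clf.predict(np.c_[x1.ravel(), x2.ravel()]).reshape(x1.shape)
    plt.contourf(x1, x2, z, alpha=0.75, cmap=ListedColormap(('darkblue', 'yellow')))
    for i, cls in enumerate(np.unique(y)):
        plt.scatter(X[y == cls, 0], X[y == cls, 1],
                    c=('blue', 'gold')[i], label=cls)
    plt.title(title)
    plt.xlabel('Easting')
    plt.ylabel('Northing')
    plt.legend()
    plt.show()

# Toy standardized data; swap in X_train_std / y_train and your fitted models
rng = np.random.default_rng(0)
X_demo = rng.standard_normal((80, 2))
y_demo = (X_demo.sum(axis=1) > 0).astype(int)
for kernel in ('linear', 'poly', 'rbf', 'sigmoid'):
    clf = SVC(kernel=kernel).fit(X_demo, y_demo)
    plot_decision_regions(clf, X_demo, y_demo, f'SVM ({kernel})')
```

With the fitted models from the question, this becomes four one-liners, e.g. `plot_decision_regions(clf_svm_poly, X_train_std, y_train, 'SVM poly (Training set)')`.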
