Dataset throws a KeyError when looping through a list of variables



I have a list of variables whose values are encoded in a way that trips pandas up. For example, I have a column called "Alley" whose set of values includes NA, which stands for "no alley". However, pandas interprets this as NaN. To get around this, I encoded all the NaN values with an arbitrary symbol such as XX. These variables don't actually have any null/missing values; they are simply variables whose values pandas misinterprets. I gathered them in a list:

na_data = ['Alley', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2', 'FireplaceQu',
           'GarageType', 'GarageFinish', 'GarageQual', 'GarageCond', 'PoolQC', 'Fence', 'MiscFeature']

and replaced every NaN reading with XX:

for i in na_data:
    df[i] = df[i].fillna('XX')

Here is the old error I got:

Traceback (most recent call last):
  File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\pandas\core\indexes\base.py", line 2657, in get_loc
    return self._engine.get_loc(key)
  File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
  File "pandas\_libs\index.pyx", line 129, in pandas._libs.index.IndexEngine.get_loc
  File "pandas\_libs\index_class_helper.pxi", line 91, in pandas._libs.index.Int64Engine._check_type
KeyError: 'Alley'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/security/Downloads/AP/Boston-Kaggle/Model.py", line 67, in <module>
    print(feature_encoding(train, categorical_columns))
  File "C:/Users/security/Downloads/AP/Boston-Kaggle/Model.py", line 50, in feature_encoding
    df[i] = df[i].fillna('XX')
  File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\pandas\core\frame.py", line 2927, in __getitem__
    indexer = self.columns.get_loc(key)
  File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\pandas\core\indexes\base.py", line 2659, in get_loc
    return self._engine.get_loc(self._maybe_cast_indexer(key))
  File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
  File "pandas\_libs\index.pyx", line 129, in pandas._libs.index.IndexEngine.get_loc
  File "pandas\_libs\index_class_helper.pxi", line 91, in pandas._libs.index.Int64Engine._check_type
KeyError: 'Alley'

The variable Alley definitely exists in the dataset! I copy/pasted the name from the dataset just for good measure.

Here is my entire code (updated):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split

train = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/train.csv")
test = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/test.csv")

categorical_columns = ['MSSubClass', 'MSZoning', 'LotShape', 'LandContour', 'LotConfig', 'Neighborhood', 'Condition1',
                       'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd',
                       'Foundation', 'Heating', 'Electrical', 'Functional', 'GarageType', 'PavedDrive', 'Fence',
                       'MiscFeature', 'SaleType', 'SaleCondition', 'Street', 'CentralAir', 'Utilities', 'ExterQual',
                       'LandSlope', 'ExterCond', 'HeatingQC', 'KitchenQual']

ranked_columns = ['Utilities', 'LandSlope', 'ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond', 'BsmtExposure',
                  'BsmtFinType1', 'BsmtFinType2', 'HeatingQC', 'KitchenQual', 'FireplaceQu', 'GarageQual', 'GarageCond',
                  'PoolQC', 'OverallQual', 'OverallCond']

numerical_columns = ['LotArea', 'LotFrontage', 'YearBuilt', 'YearRemodAdd', 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2',
                     'BsmtUnfSF', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSf', 'LowQualFinSF', 'GrLivArea', 'BsmtFullBath',
                     'BsmtHalfBath', 'FullBath', 'HalfBath', 'Bedroom', 'Kitchen', 'TotRmsAbvGrd', 'Fireplaces',
                     'GarageYrBlt', 'GarageCars', 'GarageArea', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch',
                     '3SsnPorch', 'ScreenPorch', 'PoolArea', 'MiscVal', 'MoSold', 'YrSold']

na_data = ['Alley', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2', 'FireplaceQu',
           'GarageType', 'GarageFinish', 'GarageQual', 'GarageCond', 'PoolQC', 'Fence', 'MiscFeature']

for i in na_data:
    train[i] = train[i].fillna('XX')

# Replaced the NaN values of LotFrontage and MasVnrArea with the mean of their column
train['LotFrontage'] = train['LotFrontage'].fillna(train['LotFrontage'].mean())
train['MasVnrArea'] = train['MasVnrArea'].fillna(train['MasVnrArea'].mean())

concatenated_list = categorical_columns + na_data

# take one-hot encoding
OHE_sdf = pd.get_dummies(train[concatenated_list])

# drop the old categorical column from original df
train.drop(columns = categorical_columns, axis = 1, inplace = True)

# attach one-hot encoded columns to original data frame
train = pd.concat([train, OHE_sdf], axis = 1, ignore_index = False)

x_train, x_test, y_train, y_test = train_test_split(train, train['SalePrice'], test_size = 0.3, random_state = 42)

sel = SelectFromModel(RandomForestClassifier(n_estimators = 100), threshold = 300 * "mean")
sel.fit(x_train, y_train)
sel.get_support()

selected_feat = x_train.columns[sel.get_support()]
print(selected_feat())

Here is the new error:

Traceback (most recent call last):
  File "/home/onur/Documents/Boston-Kaggle/Model.py", line 49, in <module>
    sel.fit(x_train, y_train)
  File "/opt/anaconda/envs/lib/python3.7/site-packages/sklearn/feature_selection/from_model.py", line 196, in fit
    self.estimator_.fit(X, y, **fit_params)
  File "/opt/anaconda/envs/lib/python3.7/site-packages/sklearn/ensemble/forest.py", line 249, in fit
    X = check_array(X, accept_sparse="csc", dtype=DTYPE)
  File "/opt/anaconda/envs/lib/python3.7/site-packages/sklearn/utils/validation.py", line 496, in check_array
    array = np.asarray(array, dtype=dtype, order=order)
  File "/opt/anaconda/envs/lib/python3.7/site-packages/numpy/core/_asarray.py", line 85, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: could not convert string to float: 'XX'

You are concatenating the data on the wrong axis:

df = pd.concat([df, OHE_sdf], axis = 1, ignore_index = True)
# Should be
df = pd.concat([df, OHE_sdf], axis = 0, ignore_index = True)

However, this will then lead to another error: you are one-hot encoding some of the columns listed in na_data. For example, GarageType has already been encoded into multiple columns, one for each potential value, so it no longer exists as a single column and its NaN values can no longer be replaced.
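
To illustrate why, here is a minimal sketch with made-up values (the GarageType levels are purely illustrative):

import pandas as pd
import numpy as np

# A toy frame with one categorical column that contains NaN
df = pd.DataFrame({'GarageType': ['Attchd', 'Detchd', np.nan]})

# get_dummies replaces the column with one indicator column per value,
# e.g. 'GarageType_Attchd' and 'GarageType_Detchd'
encoded = pd.get_dummies(df[['GarageType']])
print(encoded.columns.tolist())

# The original column name is gone, so this would raise KeyError: 'GarageType'
# encoded['GarageType'].fillna('XX')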

Edit:

I have updated several parts of the code in the question so that it runs end to end.

First, we need to import all the libraries we will be using; note the addition of numpy:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
import numpy as np

Second, we need to get the data from the source:

train = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/train.csv")
test = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/test.csv")

Now we remove all the NaN's from the dataset:

# Create a series of how many NaN's are in each column
nanCounts = train.isna().sum()

# Find the total number of NaN's and print it (used to check that this bit is doing something)
nanTotal = train.isna().sum().sum()
print("NaN's found: ", nanTotal)

# Create a template list
nanCols = []

# Iterate over the series and check if the value is more than 0 (i.e. there are some NaN's present)
for i in range(0, len(nanCounts)):
    if nanCounts[i] > 0:
        # If it is, append the current column to the list of columns that contain NaN's
        nanCols.append(train.columns[i])

# Iterate through all the columns which are known to have NaN's
for i in nanCols:
    if train[nanCols][i].dtypes == 'float64':
        # If the column is of the data type float64 (a floating point number), replace it with the mean of the column
        train[i] = train[i].fillna(train[i].mean())
    elif train[nanCols][i].dtypes == 'object':
        # If it's of the data type object (a text string), replace it with XX
        train[i] = train[i].fillna('XX')

# Reprint the total number of NaN's
nanTotal = train.isna().sum().sum()
print("NaN's after removal: ", nanTotal)
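
As a side note, the same per-dtype fill can be written more compactly with pandas' select_dtypes; a rough sketch that should be equivalent in effect to the loop above:

# Fill numeric NaN's with the column mean and text NaN's with 'XX'
float_cols = train.select_dtypes(include='float64').columns
object_cols = train.select_dtypes(include='object').columns

train[float_cols] = train[float_cols].fillna(train[float_cols].mean())
train[object_cols] = train[object_cols].fillna('XX')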

Now that there are no NaN's in the dataset, the list of categorical columns can be assembled:

# Create a template list
categorical = []

# Iterate across all the columns, checking if they're of the object datatype and, if they are, appending them to the categorical list
for i in range(0, len(train.dtypes)):
    if train.dtypes[i] == 'object':
        categorical.append(train.columns[i])

# Print out the list of categorical features
print('Categorical columns are: \n', categorical)

Now the code is very similar to the original, with a few small changes to account for the variable changes:

# take one-hot encoding
OHE_sdf = pd.get_dummies(train[categorical])

# drop the old categorical columns from the original df
train.drop(columns = categorical, axis = 1, inplace = True)

# attach one-hot encoded columns to the original data frame
train = pd.concat([train, OHE_sdf], axis = 1, ignore_index = False)

print('splitting dataset')
x_train, x_test, y_train, y_test = train_test_split(train, train['SalePrice'], test_size = 0.3, random_state = 42)

print('Selecting features')
# Note that here I changed the threshold so that it would actually show some features to use
sel = SelectFromModel(RandomForestClassifier(n_estimators = 100), threshold = '1.25*mean')
sel.fit(x_train, y_train)

# Also just save the boolean array directly; it's quicker and I prefer the formatting this way
selected = sel.get_support()

# Print the boolean array of selected features
print(selected)

# Print the finally selected features
print(train.columns[selected])
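
The threshold string '1.25*mean' is one of the forms scikit-learn accepts for SelectFromModel (alongside e.g. 'mean' and 'median'). If you also want the reduced feature matrix rather than just the column names, the fitted selector exposes transform; a short optional sketch:

# Optional: reduce the splits to just the selected features
x_train_selected = sel.transform(x_train)
x_test_selected = sel.transform(x_test)
print(x_train_selected.shape, x_test_selected.shape)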

Altogether it looks like this:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
import numpy as np

train = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/train.csv")
test = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/test.csv")

nanCounts = train.isna().sum()
nanTotal = train.isna().sum().sum()
print("NaN's found: ", nanTotal)

nanCols = []
for i in range(0, len(nanCounts)):
    if nanCounts[i] > 0:
        nanCols.append(train.columns[i])

for i in nanCols:
    if train[nanCols][i].dtypes == 'float64':
        train[i] = train[i].fillna(train[i].mean())
    elif train[nanCols][i].dtypes == 'object':
        train[i] = train[i].fillna('XX')

nanTotal = train.isna().sum().sum()
print("NaN's after removal: ", nanTotal)

categorical = []
for i in range(0, len(train.dtypes)):
    if train.dtypes[i] == 'object':
        categorical.append(train.columns[i])
print('Categorical columns are: \n', categorical)

# take one-hot encoding
OHE_sdf = pd.get_dummies(train[categorical])

# drop the old categorical columns from the original df
train.drop(columns = categorical, axis = 1, inplace = True)

# attach one-hot encoded columns to the original data frame
train = pd.concat([train, OHE_sdf], axis = 1, ignore_index = False)

print('splitting dataset')
x_train, x_test, y_train, y_test = train_test_split(train, train['SalePrice'], test_size = 0.3, random_state = 42)

print('Selecting features')
sel = SelectFromModel(RandomForestClassifier(n_estimators = 100), threshold = '1.25*mean')
sel.fit(x_train, y_train)

selected = sel.get_support()
print(selected)
print(train.columns[selected])

Your code works for me.

import pandas as pd
import numpy as np

df = pd.DataFrame({'x1': [np.nan, 2, 3, 4, 5], 'x2': [6, 7, np.nan, 9, 10], 'x3': range(10, 15)})
list = ['x1', 'x2']

for i in list:
    df[i] = df[i].fillna('XX')
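
As a quick sanity check, counting the NaN's left after the loop should give zero. Note, though, that filling numeric columns with the string 'XX' turns them into object columns, which is exactly what later makes RandomForestClassifier.fit fail with "could not convert string to float: 'XX'".

print(df.isna().sum().sum())   # expected: 0
print(df.dtypes)               # x1 and x2 are now object columns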
