Fixing NaN values / NaN loss in a TensorFlow neural network



I am running a customer churn model with TensorFlow and getting a NaN loss. After some reading, I found that my data probably contains NaN values, and print(np.any(np.isnan(X_test))) confirmed it.
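To narrow down where the problem comes from rather than just confirming that it exists, it can help to count the NaNs and infs per column. A small sketch along those lines, reusing X_test from the code below (these print statements are not part of the original code):

print(np.isnan(X_test).sum(axis=0))               # NaN count per column
print(np.isinf(X_test).sum(axis=0))               # +/-inf count per column
print(np.where(np.isnan(X_test).any(axis=0))[0])  # indices of the affected columns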

I tried using

def standardize(train, test):
    # use the training-set statistics for both sets; the small epsilon avoids division by zero
    mean = np.mean(train, axis=0)
    std = np.std(train, axis=0) + 0.000001
    X_train = (train - mean) / std
    X_test = (test - mean) / std
    return X_train, X_test

but it still produced NaN values.
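Standardizing on its own cannot remove NaNs, because NaN propagates through arithmetic: np.mean over a column that contains a NaN is itself NaN, and (train - mean) / std stays NaN wherever the input was NaN. A minimal sketch to illustrate this (the values are made up):

col = np.array([1.0, 2.0, np.nan, 4.0])

print(np.mean(col))                        # nan -- one missing value contaminates the mean
print((col - np.mean(col)) / np.std(col))  # [nan nan nan nan] -- standardizing propagates it
print(np.nanmean(col), np.nanstd(col))     # NaN-aware reductions skip the missing entries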

In case it helps, here is the full code:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf

# Load the data and split into features / target
dataset = pd.read_excel('CHURN DATA.xlsx')
X = dataset.iloc[:, 2:45].values
y = dataset.iloc[:, 45].values

# Label-encode one categorical column, one-hot encode another
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
X[:, 1] = le.fit_transform(X[:, 1])
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [0])], remainder='passthrough')
X = np.array(ct.fit_transform(X))

# Train/test split and feature scaling
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Build and train the ANN
ann = tf.keras.models.Sequential()
ann.add(tf.keras.layers.Dense(units=43, activation='relu'))
ann.add(tf.keras.layers.Dense(units=43, activation='relu'))
ann.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
ann.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
ann.fit(X_train, y_train, batch_size=256, epochs=50)

You haven't actually replaced the NaN values yet. Your data may also contain some inf / -inf values. You can replace both of them with 0 at the same time.

For a DataFrame:

X.replace([np.inf, -np.inf], np.nan, inplace=True)
X = X.fillna(0)
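In the posted code, X is already a NumPy array once ct.fit_transform has run, so the DataFrame version would be applied to the original data before .values is taken. A minimal sketch of where it could go, reusing the file name and column slicing from the question:

import numpy as np
import pandas as pd

dataset = pd.read_excel('CHURN DATA.xlsx')
features = dataset.iloc[:, 2:45]                                  # still a DataFrame here
features = features.replace([np.inf, -np.inf], np.nan).fillna(0)  # clean before .values
X = features.values
y = dataset.iloc[:, 45].values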

Or, if your data is in a NumPy array:

X[np.isnan(X)] = 0
X[X == np.inf] = 0 
X[X == -np.inf] = 0
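If the array has a float dtype, the same cleanup can also be done in a single call with np.nan_to_num; the zeros below just mirror the replacements above:

X = np.nan_to_num(X, nan=0.0, posinf=0.0, neginf=0.0)

In the question's pipeline, this (or the three assignments above) would be applied before the data reaches ann.fit, e.g. on X_train and X_test after scaling, so that no NaN or inf ever enters the loss computation.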
