In the script below, I'm finding that the jobs launched by GridSearchCV appear to hang.
import json
import pandas as pd
import numpy as np
import unicodedata
import re
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import SGDClassifier
import sklearn.cross_validation as CV
from sklearn.grid_search import GridSearchCV
from nltk.stem import WordNetLemmatizer
# Seed for randomization. Set to some definite integer for debugging and set to None for production
seed = None
### Text processing functions ###
def normalize(string):  # Remove diacritics and whatevs
    return "".join(ch.lower() for ch in unicodedata.normalize('NFD', string) if not unicodedata.combining(ch))
wnl = WordNetLemmatizer()
def tokenize(string):  # Ignores special characters and punct
    return [wnl.lemmatize(token) for token in re.compile(r'\w\w+').findall(string)]
def ngrammer(tokens):  # Gets all grams in each ingredient
    max_n = 2
    return [":".join(tokens[idx:idx+n]) for n in np.arange(1, 1 + min(max_n, len(tokens))) for idx in range(len(tokens) + 1 - n)]
print("Importing training data...")
with open('/Users/josh/dev/kaggle/whats-cooking/data/train.json','rt') as file:
    recipes_train_json = json.load(file)
# Build the grams for the training data
print('\nBuilding n-grams from input data...')
for recipe in recipes_train_json:
    recipe['grams'] = [term for ingredient in recipe['ingredients'] for term in ngrammer(tokenize(normalize(ingredient)))]
# Build vocabulary from training data grams.
vocabulary = list({gram for recipe in recipes_train_json for gram in recipe['grams']})
# Stuff everything into a dataframe.
ids_index = pd.Index([recipe['id'] for recipe in recipes_train_json],name='id')
recipes_train = pd.DataFrame([{'cuisine': recipe['cuisine'], 'ingredients': " ".join(recipe['grams'])} for recipe in recipes_train_json],columns=['cuisine','ingredients'], index=ids_index)
# Extract data for fitting
fit_data = recipes_train['ingredients'].values
fit_target = recipes_train['cuisine'].values
# extracting numerical features from the ingredient text
feature_ext = Pipeline([('vect', CountVectorizer(vocabulary=vocabulary)),
('tfidf', TfidfTransformer(use_idf=True)),
('svd', TruncatedSVD(n_components=1000))
])
lsa_fit_data = feature_ext.fit_transform(fit_data)
# Build SGD Classifier
clf = SGDClassifier(random_state=seed)
# Hyperparameter grid for GridSearchCV.
parameters = {
'alpha': np.logspace(-6,-2,5),
}
# Init GridSearchCV with k-fold CV object
cv = CV.KFold(lsa_fit_data.shape[0], n_folds=3, shuffle=True, random_state=seed)
gs_clf = GridSearchCV(
estimator=clf,
param_grid=parameters,
n_jobs=-1,
cv=cv,
scoring='accuracy',
verbose=2
)
# Fit on training data
print("nPerforming grid search over hyperparameters...")
gs_clf.fit(lsa_fit_data, fit_target)
The console output is:
Importing training data...
Building n-grams from input data...
Performing grid search over hyperparameters...
Fitting 3 folds for each of 5 candidates, totalling 15 fits
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=0.0001 ....................................................
[CV] alpha=0.0001 ....................................................
Then it just hangs. If I set n_jobs=1 in GridSearchCV, the script completes as expected, with output:
Importing training data...
Building n-grams from input data...
Performing grid search over hyperparameters...
Fitting 3 folds for each of 5 candidates, totalling 15 fits
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 - 6.5s
[Parallel(n_jobs=1)]: Done 1 jobs | elapsed: 6.6s
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 - 6.6s
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 - 6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 - 6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 - 6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 - 6.6s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 - 6.6s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 - 6.7s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 - 6.7s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 - 7.0s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 - 6.8s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 - 6.6s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 - 6.7s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 - 7.3s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 - 7.1s
[Parallel(n_jobs=1)]: Done 15 out of 15 | elapsed: 1.7min finished
The single-threaded execution finishes quite quickly, so I'm confident I'm giving the parallel-job case plenty of time to do the same computation.
Environment specs: MacBook Pro (15-inch, Mid 2010), 2.4 GHz Intel Core i5, 8 GB 1067 MHz DDR3, OS X 10.10.5, Python 3.4.3, IPython 3.2.0, numpy v1.9.3, scipy 0.16.0, scikit-learn v0.16.1 (Python and all packages from the Anaconda distribution).
Some additional notes:
I use n_jobs=-1 with GridSearchCV on this machine all the time without problems, so it's not a lack of platform support for the feature. It usually has 4 jobs going at a time, since I have 4 cores on this machine (2 physical, but 4 "virtual cores" thanks to Mac hyperthreading). But unless I'm misreading the console output, in this case it dispatches 8 jobs without any of them returning. Watching CPU usage in real time in Activity Monitor, 4 jobs launch, do a little work, then finish (or die?), followed by 4 more that launch, do a little work, and then sit completely idle but stick around.
I don't see any obvious memory pressure. The main process tops out at about 1 GB of memory and the child processes at around 600 MB. By the time they hang, their real memory is negligible.
If I remove the TruncatedSVD step from the feature-extraction pipeline, the script works fine with multiple jobs. Note, however, that this pipeline runs before the grid search and is not part of the GridSearchCV jobs.
This script is for the Kaggle competition What's Cooking?, so if you want to run it on the same data I'm using, you can grab it from there. The data comes as an array of JSON objects. Each object represents a recipe and contains, among other fields, a list of text snippets which are the ingredients. Since each example is a collection of documents rather than a single document, I ended up having to write some of my own n-gramming and tokenization logic, because I couldn't figure out how to get scikit-learn's built-in transformers to do exactly what I wanted. I doubt any of this matters, but just FYI.
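To illustrate that custom logic on a single hypothetical ingredient string (reusing the normalize/tokenize/ngrammer definitions from the script above):
ingredient = "Plum Tomatoes"              # hypothetical example string
tokens = tokenize(normalize(ingredient))  # -> ['plum', 'tomato'] (lowercased, lemmatized)
grams = ngrammer(tokens)                  # -> ['plum', 'tomato', 'plum:tomato']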
I usually run the script with %run at an IPython prompt, but I get the same behavior running it directly with python (3.4.3) from an OS X bash terminal.
If n_jobs > 1, this may be a problem with the multiprocessing that GridSearchCV uses. So instead of multiprocessing, try switching to multithreading and see if it works well.
from sklearn.externals.joblib import parallel_backend
clf = GridSearchCV(...)
with parallel_backend('threading'):
    clf.fit(x_train, y_train)
I ran into the same problem with a GridSearchCV estimator with n_jobs > 1, and using this it works well across n_jobs values.
PS:我不确定"线程"是否与"多处理"具有相同的优势。但从理论上讲,如果你的估计器受GIL限制,"线程"不是一个很好的选择,但如果估计器是基于cython/numpy的,它将比"多处理"更好。
System tested on:
MAC OS: 10.12.6
Python: 3.6
numpy==1.13.3
pandas==0.21.0
scikit-learn==0.19.1
I believe I had a similar problem, and the culprit was a sudden spike in memory usage. The process would try to allocate memory and die instantly because there was not enough available.
If you have access to a machine with more available memory (like 128-256 GB), it's worth checking whether it hangs there with the same or a smaller number of jobs (n_jobs=4). This is how I solved the problem: I just moved my script to a huge server.
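If a bigger machine isn't available, one more knob that may help (my suggestion, not something verified on this exact script) is GridSearchCV's pre_dispatch parameter, which caps how many jobs, and therefore how many copies of the data, are queued up at once. A sketch reusing clf, parameters, and cv from the question's script:
gs_clf = GridSearchCV(
    estimator=clf,
    param_grid=parameters,
    n_jobs=2,               # fewer workers -> fewer simultaneous copies of the data
    pre_dispatch='n_jobs',  # dispatch only as many jobs as there are workers
    cv=cv,
    scoring='accuracy',
    verbose=2
)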