MLlib error triggered when saving a model



I get the error below. I am following the linear regression example from the documentation page, with Spark 1.6.1 and Python 3.5.1. What changes should I make?

from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD, LinearRegressionModel
# Load and parse the data
def parsePoint(line):
    values = [float(x) for x in line.replace(',', ' ').split(' ')]
    return LabeledPoint(values[0], values[1:])
data = sc.textFile("data/mllib/ridge-data/lpsa.data")
parsedData = data.map(parsePoint)
# Build the model
model = LinearRegressionWithSGD.train(parsedData, iterations=100, step=0.00000001)
# Evaluate the model on training data
valuesAndPreds = parsedData.map(lambda p: (p.label, model.predict(p.features)))
MSE = valuesAndPreds.map(lambda v: (v[0] - v[1])**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()
print("Mean Squared Error = " + str(MSE))
# Save and load model
model.save(sc, "myModelPath")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\spark-1.6.1-bin-hadoop2.6\spark-1.6.1-bin-hadoop2.6\python\pyspark\mllib\regression.py", line 185, in save
    java_model.save(sc._jsc.sc(), path)
  File "c:\spark-1.6.1-bin-hadoop2.6\spark-1.6.1-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\java_gateway.py", line 813, in __call__
  File "c:\spark-1.6.1-bin-hadoop2.6\spark-1.6.1-bin-hadoop2.6\python\pyspark\sql\utils.py", line 45, in deco
    return f(*a, **kw)

Note that I edited the line that computes the MSE as follows:

MSE = valuesAndPreds.map(lambda v: (v[0] - v[1])**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()
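This edit is needed because Python 3 removed tuple parameter unpacking in lambdas (PEP 3113), so the older docs' `lambda (v, p): (v - p)**2` form is a SyntaxError under Python 3.5; the tuple must be indexed instead. A minimal illustration with plain Python pairs (the sample values are made up and stand in for `valuesAndPreds`, no Spark required):

```python
from functools import reduce

# Hypothetical (label, prediction) pairs standing in for valuesAndPreds
pairs = [(1.0, 0.5), (2.0, 2.5), (3.0, 3.0)]

# Python 3 style: index the tuple as v[0], v[1] instead of the
# removed `lambda (v, p):` tuple-unpacking syntax
mse = reduce(lambda x, y: x + y,
             map(lambda v: (v[0] - v[1]) ** 2, pairs)) / len(pairs)
```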

Just make sure that no model/directory with the same name already exists. The code above works if you rename myModelPath to some other value, or delete the myModelPath folder.
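The cleanup step can be scripted so a stale output directory is removed before each save. A minimal sketch, assuming the model directory lives on the local filesystem (the helper name is hypothetical; `model.save` itself needs the live SparkContext from the question, so those calls are shown commented):

```python
import os
import shutil

def ensure_path_free(path):
    """Delete a leftover model directory so model.save() can write to it."""
    if os.path.isdir(path):
        shutil.rmtree(path)

# Usage with the question's session (model and sc already defined):
# ensure_path_free("myModelPath")
# model.save(sc, "myModelPath")
```

Note this only covers local paths; if the save target is on HDFS, the directory has to be removed with the Hadoop filesystem tools instead.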
