Loading a persisted CrossValidatorModel throws "Param numTrees does not exist" error

I am using Spark 2.0 to build a random forest classifier for a multiclass classification problem. I can train the model successfully and save the trained model to an S3 bucket using the Model.save() method. However, when loading the model back with load(), I get the following error.

Exception in thread "main" java.util.NoSuchElementException: Param numTrees does not exist.
    at org.apache.spark.ml.param.Params$$anonfun$getParam$2.apply(params.scala:609)
    at org.apache.spark.ml.param.Params$$anonfun$getParam$2.apply(params.scala:609)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.ml.param.Params$class.getParam(params.scala:608)
    at org.apache.spark.ml.PipelineStage.getParam(Pipeline.scala:42)
    at org.apache.spark.ml.util.DefaultParamsReader$$anonfun$getAndSetParams$1.apply(ReadWrite.scala:430)
    at org.apache.spark.ml.util.DefaultParamsReader$$anonfun$getAndSetParams$1.apply(ReadWrite.scala:429)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.ml.util.DefaultParamsReader$.getAndSetParams(ReadWrite.scala:429)
    at org.apache.spark.ml.classification.RandomForestClassificationModel$RandomForestClassificationModelReader.load(RandomForestClassifier.scala:310)
    at org.apache.spark.ml.classification.RandomForestClassificationModel$RandomForestClassificationModelReader.load(RandomForestClassifier.scala:284)
    at org.apache.spark.ml.util.DefaultParamsReader$.loadParamsInstance(ReadWrite.scala:447)
    at org.apache.spark.ml.Pipeline$SharedReadWrite$$anonfun$4.apply(Pipeline.scala:267)
    at org.apache.spark.ml.Pipeline$SharedReadWrite$$anonfun$4.apply(Pipeline.scala:265)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
    at org.apache.spark.ml.Pipeline$SharedReadWrite$.load(Pipeline.scala:265)
    at org.apache.spark.ml.PipelineModel$PipelineModelReader.load(Pipeline.scala:341)
    at org.apache.spark.ml.PipelineModel$PipelineModelReader.load(Pipeline.scala:335)
    at org.apache.spark.ml.util.DefaultParamsReader$.loadParamsInstance(ReadWrite.scala:447)
    at org.apache.spark.ml.tuning.CrossValidatorModel$CrossValidatorModelReader.load(CrossValidator.scala:269)
    at org.apache.spark.ml.tuning.CrossValidatorModel$CrossValidatorModelReader.load(CrossValidator.scala:256)
    at org.apache.spark.ml.util.MLReadable$class.load(ReadWrite.scala:227)
    at org.apache.spark.ml.tuning.CrossValidatorModel$.load(CrossValidator.scala:240)
    at org.apache.spark.ml.tuning.CrossValidatorModel.load(CrossValidator.scala)

Here is the code snippet I use to train and save the model:

import java.util.ArrayList

import org.apache.spark.ml.{Pipeline, PipelineStage}
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.ml.feature.{IndexToString, VectorAssembler}
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

// inputColumnNames, df, labelIndexer, and categoricalInputModels are defined earlier.

// Assemble the input feature columns into a single vector column
val assembler = new VectorAssembler()
assembler.setInputCols(inputColumnNames)
assembler.setOutputCol("Inputs_Indexed")

// Split 70:30 into training and test data
val Array(trainingData, testData) = df.randomSplit(Array(0.7, 0.3))

// Train using a RandomForest model
val rf = new RandomForestClassifier()
  .setLabelCol("Facing_Indexed")
  .setFeaturesCol("Inputs_Indexed")
  .setNumTrees(500)

// Convert indexed predictions back to the original string labels
val labelConverter = new IndexToString()
  .setInputCol("prediction")
  .setOutputCol("predictedLabel")
  .setLabels(labelIndexer.labels)

// Assemble the pipeline stages: categorical indexers, label indexer,
// feature assembler, classifier, and label converter
val stageList = new ArrayList[PipelineStage]
stageList.addAll(categoricalInputModels)
stageList.add(labelIndexer)
stageList.add(assembler)
stageList.add(rf)
stageList.add(labelConverter)

// Convert the stages list to an array
val stages = new Array[PipelineStage](stageList.size)
stageList.toArray(stages)

val pipeline = new Pipeline().setStages(stages)

// Cross-validate over the tree depth
val paramGrid = new ParamGridBuilder()
  .addGrid(rf.maxDepth, Array(3, 5, 8))
  .build()

val evaluator = new MulticlassClassificationEvaluator()
  .setLabelCol("Facing_Indexed")
  .setPredictionCol("prediction")
  .setMetricName("accuracy")

val cv = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(evaluator)
  .setEstimatorParamMaps(paramGrid)

val model = cv.fit(trainingData)

val predictions = model.transform(testData)
predictions.select("predictedLabel", "Facing", "Inputs_Indexed").show(5)

val accuracy = evaluator.evaluate(predictions)
println("Test Error = " + (1.0 - accuracy))

// Persist the fitted CrossValidatorModel to S3
model.save("s3n://xyz_path/au.model")

After the trained model is saved, I load it in a separate Java program using CrossValidatorModel.load("s3n://xyz_path/au.model"), which throws the error above. In my S3 bucket I can see the serialized model that was saved. I am not sure where it is going wrong. Any help with this error would be appreciated.
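
For reference, the loading side looks roughly like this (a minimal Scala sketch; the actual loader is a Java program, but the MLlib API is the same, and `newData` is a hypothetical DataFrame with the same input columns used in training):

import org.apache.spark.ml.tuning.CrossValidatorModel

// Load the persisted CrossValidatorModel from the same S3 path it was saved to
val loaded = CrossValidatorModel.load("s3n://xyz_path/au.model")

// `newData` stands in for a DataFrame with the same schema as the training input
val scored = loaded.transform(newData)
scored.select("predictedLabel").show(5)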

I figured out what the problem was. The AWS EMR cluster on which I was training the model and saving it to the S3 bucket was running Spark 2.1.0. However, in my Java program I was referencing version 2.0.0 of Spark MLlib. I found that there was a breaking change involving the "numTrees" param of RandomForestClassificationModel, reported in the "From 2.0 to 2.1" section of the MLlib migration guide.
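
One quick sanity check that would have surfaced the mismatch (my own suggestion, not part of the original fix): print the Spark version each program actually runs against and compare it with the cluster's.

// The Spark/MLlib version on the current classpath, via the active
// SparkSession; it must be able to read what the training cluster wrote.
println("Spark version: " + spark.version)  // 2.0.0 here vs. 2.1.0 on EMR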

So finally I updated the Spark MLlib Maven dependency in my Java project to point to version 2.1.0:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
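
A related caution (my assumption, not stated above): any other Spark artifacts the project depends on, such as spark-core or spark-sql, should be bumped to 2.1.0 at the same time so the classpath does not mix Spark versions, e.g.:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.1.0</version>
</dependency>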

It then complained about an additional missing class:

java.lang.NoClassDefFoundError: org/codehaus/commons/compiler/UncheckedCompileException

This was fixed when I added the commons-compiler dependency:
<dependency>
    <groupId>org.codehaus.janino</groupId>
    <artifactId>commons-compiler</artifactId>
    <version>2.7.8</version>
</dependency>

And that is how my persisted model was finally loaded successfully!
