Reloaded Spark model does not seem to work



I am training a model from a CSV file and saving it. Everything works fine in this first step. After saving the model, I try to load it and run it on new data, but it does not work: the transformed DataFrame comes back empty.

What is the problem?

Java training file

    SparkConf sconf = new SparkConf()
            .setMaster("local[*]")
            .setAppName("Test")
            .set("spark.sql.warehouse.dir", "D:/Temp/wh");
    SparkSession spark = SparkSession.builder().appName("Java Spark").config(sconf).getOrCreate();

    // Parse the CSV into beans: column 0 = productName, column 1 = className.
    JavaRDD<Cobj> cRDD = spark.read().textFile("file:///C:/Temp/classifications1.csv").javaRDD()
            .map(new Function<String, Cobj>() {
                @Override
                public Cobj call(String line) throws Exception {
                    String[] parts = line.split(",");
                    Cobj c = new Cobj();
                    c.setClassName(parts[1].trim());
                    c.setProductName(parts[0].trim());
                    return c;
                }
            });
    Dataset<Row> mainDataset = spark.createDataFrame(cRDD, Cobj.class);

    // StringIndexer: map class names to numeric labels ("skip" drops unseen values).
    StringIndexer classIndexer = new StringIndexer()
            .setHandleInvalid("skip")
            .setInputCol("className")
            .setOutputCol("label");
    StringIndexerModel classIndexerModel = classIndexer.fit(mainDataset);

    // Tokenizer
    Tokenizer tokenizer = new Tokenizer()
            .setInputCol("productName")
            .setOutputCol("words");

    // HashingTF
    HashingTF hashingTF = new HashingTF()
            .setInputCol(tokenizer.getOutputCol())
            .setOutputCol("features");

    DecisionTreeClassifier decisionClassifier = new DecisionTreeClassifier()
            .setLabelCol("label")
            .setFeaturesCol("features");

    Pipeline pipeline = new Pipeline()
            .setStages(new PipelineStage[] {classIndexer, tokenizer, hashingTF, decisionClassifier});

    Dataset<Row>[] splits = mainDataset.randomSplit(new double[] {0.8, 0.2});
    Dataset<Row> train = splits[0];
    Dataset<Row> test = splits[1];

    PipelineModel pipelineModel = pipeline.fit(train);
    Dataset<Row> result = pipelineModel.transform(test);
    pipelineModel.write().overwrite().save(savePath + "DecisionTreeClassificationModel");

    // Convert predicted indices back to class names for display.
    IndexToString labelConverter = new IndexToString()
            .setInputCol("prediction")
            .setOutputCol("PredictedClassName")
            .setLabels(classIndexerModel.labels());
    result = labelConverter.transform(result);
    result.show(100, false);

    Dataset<Row> predictionAndLabels = result.select("prediction", "label");
    MulticlassClassificationEvaluator evaluator = new MulticlassClassificationEvaluator()
            .setMetricName("accuracy");
    System.out.println("Accuracy = " + evaluator.evaluate(predictionAndLabels));
Output:

+--------------------------+---------------------------------------------+-----+------------------------------------------------------+-------------------------------------------------------------------------------------------------+---------------------+---------------------+----------+--------------------------+
|className                 |productName                                  |label|words                                                 |features                                                                                         |rawPrediction        |probability          |prediction|PredictedClassName        |
+--------------------------+---------------------------------------------+-----+------------------------------------------------------+-------------------------------------------------------------------------------------------------+---------------------+---------------------+----------+--------------------------+
|Apple iPhone 6S 16GB      |Apple IPHONE 6S 16GB SGAY Telefon            |2.0  |[apple, iphone, 6s, 16gb, sgay, telefon]              |(262144,[27536,56559,169565,200223,210029,242621],[1.0,1.0,1.0,1.0,1.0,1.0])                     |[0.0,0.0,6.0,0.0,0.0]|[0.0,0.0,1.0,0.0,0.0]|2.0       |Apple iPhone 6S Plus 64GB |
|Apple iPhone 6S 16GB      |Apple iPhone 6S 16 GB Space Gray MKQJ2TU/A   |2.0  |[apple, iphone, 6s, 16, gb, space, gray, mkqj2tu/a]   |(262144,[10879,56559,95900,139131,175329,175778,200223,210029],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])|[0.0,0.0,6.0,0.0,0.0]|[0.0,0.0,1.0,0.0,0.0]|2.0       |Apple iPhone 6S Plus 64GB |
|Apple iPhone 6S 16GB      |iPhone 6s 16GB                               |2.0  |[iphone, 6s, 16gb]                                    |(262144,[27536,56559,210029],[1.0,1.0,1.0])                                                      |[0.0,0.0,6.0,0.0,0.0]|[0.0,0.0,1.0,0.0,0.0]|2.0       |Apple iPhone 6S Plus 64GB |
|Apple iPhone 6S Plus 128GB|Apple IPHONE 6S PLUS 128GB SG Telefon        |4.0  |[apple, iphone, 6s, plus, 128gb, sg, telefon]         |(262144,[56559,99916,137263,175839,200223,210029,242621],[1.0,1.0,1.0,1.0,1.0,1.0,1.0])          |[0.0,0.0,0.0,0.0,2.0]|[0.0,0.0,0.0,0.0,1.0]|4.0       |Apple iPhone 6S Plus 128GB|
|Apple iPhone 6S Plus 16GB |Iphone 6S Plus 16GB SpaceGray - Apple Türkiye|1.0  |[iphone, 6s, plus, 16gb, spacegray, -, apple, türkiye]|(262144,[27536,45531,46750,56559,59104,99916,200223,210029],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])   |[0.0,5.0,0.0,0.0,0.0]|[0.0,1.0,0.0,0.0,0.0]|1.0       |Apple iPhone 6S Plus 16GB |
+--------------------------+---------------------------------------------+-----+------------------------------------------------------+-------------------------------------------------------------------------------------------------+---------------------+---------------------+----------+--------------------------+
Accuracy = 1.0

Java loading file

    SparkConf sconf = new SparkConf()
            .setMaster("local[*]")
            .setAppName("Test")
            .set("spark.sql.warehouse.dir", "D:/Temp/wh");
    SparkSession spark = SparkSession.builder().appName("Java Spark").config(sconf).getOrCreate();

    // New, unlabeled data: the class is unknown, so className is set to "?".
    JavaRDD<Cobj> cRDD = spark.read().textFile("file:///C:/Temp/classificationsTest.csv").javaRDD()
            .map(new Function<String, Cobj>() {
                @Override
                public Cobj call(String line) throws Exception {
                    String[] parts = line.split(",");
                    Cobj c = new Cobj();
                    c.setClassName("?");
                    c.setProductName(parts[0].trim());
                    return c;
                }
            });
    Dataset<Row> mainDataset = spark.createDataFrame(cRDD, Cobj.class);
    mainDataset.show(100, false);

    PipelineModel pipelineModel = PipelineModel.load(savePath + "DecisionTreeClassificationModel");
    Dataset<Row> result = pipelineModel.transform(mainDataset);
    result.show(100, false);
Output:

+---------+-----------+-----+-----+--------+-------------+-----------+----------+
|className|productName|label|words|features|rawPrediction|probability|prediction|
+---------+-----------+-----+-----+--------+-------------+-----------+----------+
+---------+-----------+-----+-----+--------+-------------+-----------+----------+
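The empty table is consistent with how `StringIndexer` behaves with `setHandleInvalid("skip")`: because the loading code sets every `className` to `"?"`, a value never seen when the indexer was fitted, the saved indexer stage drops every row during `transform`. A plain-Java sketch of that "skip" semantics (no Spark dependency; `SkipDemo`, `fit`, and `transformSkip` are illustrative names, not Spark API):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SkipDemo {
    // Stand-in for a fitted StringIndexerModel: labels seen at fit time -> indices.
    public static Map<String, Integer> fit(List<String> values) {
        Map<String, Integer> index = new LinkedHashMap<>();
        for (String v : values) {
            index.putIfAbsent(v, index.size());
        }
        return index;
    }

    // handleInvalid = "skip": rows whose value was not seen at fit time are dropped.
    public static List<Integer> transformSkip(Map<String, Integer> model, List<String> values) {
        return values.stream()
                .filter(model::containsKey)
                .map(model::get)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Integer> model = fit(Arrays.asList(
                "Apple iPhone 6S 16GB", "Apple iPhone 6S Plus 128GB"));
        // Every className in the test file is "?", which was never seen at fit
        // time, so every row is filtered out and the result is empty.
        List<Integer> out = transformSkip(model, Arrays.asList("?", "?", "?"));
        System.out.println(out.size());
    }
}
```

Since Spark 2.2, `setHandleInvalid("keep")` would keep such rows instead of dropping them, but the cleaner approach is to keep the label indexer out of the transform path at prediction time, since prediction data has no labels to index.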

Update: I removed the StringIndexer from the pipeline and saved it separately as "StringIndexer". In the second file, after loading the pipeline, I also load the StringIndexer and use it to convert the predictions back to label names.
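That workaround can be sketched as follows, assuming the same column names and `savePath` as in the code above (this is an outline under those assumptions, not a verified drop-in; it omits the surrounding class and imports):

```java
// Training side: fit and save the label indexer on its own, and build the
// pipeline without it, so transform() at prediction time never indexes labels.
StringIndexerModel classIndexerModel = classIndexer.fit(mainDataset);
classIndexerModel.write().overwrite().save(savePath + "StringIndexer");

Pipeline pipeline = new Pipeline()
        .setStages(new PipelineStage[] {tokenizer, hashingTF, decisionClassifier});
// The classifier still needs the "label" column, so index the training split first.
PipelineModel pipelineModel = pipeline.fit(classIndexerModel.transform(train));
pipelineModel.write().overwrite().save(savePath + "DecisionTreeClassificationModel");

// Prediction side: load both models, transform the unlabeled data, then map
// predicted indices back to class names with the saved indexer's labels.
PipelineModel loaded = PipelineModel.load(savePath + "DecisionTreeClassificationModel");
StringIndexerModel labels = StringIndexerModel.load(savePath + "StringIndexer");
Dataset<Row> result = loaded.transform(mainDataset);
result = new IndexToString()
        .setInputCol("prediction")
        .setOutputCol("PredictedClassName")
        .setLabels(labels.labels())
        .transform(result);
result.show(100, false);
```

Because the indexer is no longer a pipeline stage, nothing filters out the `"?"` rows at prediction time, and the result set is no longer empty.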
