How can I integrate ALS into my Spark pipeline to achieve non-negative matrix factorization?



I am using Spark MLlib to train a Naive Bayes classifier. I built a pipeline that indexes my string features, normalizes them, applies PCA for dimensionality reduction, and then trains the Naive Bayes model. When I run the pipeline, I get negative values in the PCA component vectors. From searching around, I found that I would have to apply NMF (non-negative matrix factorization) to obtain non-negative vectors, and that ALS implements NMF via the method .setNonnegative(true), but I don't know how to integrate ALS into my pipeline after PCA. Any help is appreciated. Thanks.

Here is the code:

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.classification.NaiveBayes;
import org.apache.spark.ml.feature.IndexToString;
import org.apache.spark.ml.feature.Normalizer;
import org.apache.spark.ml.feature.PCA;
import org.apache.spark.ml.feature.StringIndexer;
import org.apache.spark.ml.feature.StringIndexerModel;
import org.apache.spark.ml.feature.VectorAssembler;
import org.apache.spark.ml.recommendation.ALS;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class NBTrainPCA {
    public static void main(String args[]) {
        try {
            SparkConf conf = new SparkConf().setAppName("NBTrain");
            SparkContext scc = new SparkContext(conf);
            scc.setLogLevel("ERROR");
            JavaSparkContext sc = new JavaSparkContext(scc);
            SQLContext sqlc = new SQLContext(scc);
            DataFrame traindata = sqlc.read().format("parquet").load(args[0])
                .filter("user_email!='NA' and user_email!='00' and user_email!='0ed709b5bec77b6bff96ea5b5e334a8e5' and user_email is not null and ip is not null and region_code is not null and city is not null and browser_name is not null and os_name is not null");
            traindata.registerTempTable("master");
            //DataFrame data = sqlc.sql("select user_email,user_device,ip,country_code,region_code,city,zip_code,time_zone,browser_name,browser_manf,os_name,os_manf from master where user_email!='NA' and user_email is not null and user_device is not null and ip is not null and country_code is not null and region_code is not null and city is not null and browser_name is not null and browser_manf is not null and zip_code is not null and time_zone is not null and os_name is not null and os_manf is not null");

            // Index the label column; fitted here so its labels() can be reused by IndexToString below.
            StringIndexerModel emailIndexer = new StringIndexer()
                .setInputCol("user_email")
                .setOutputCol("email_index")
                .setHandleInvalid("skip")
                .fit(traindata);
            // Index each categorical string feature.
            StringIndexer udevIndexer = new StringIndexer()
                .setInputCol("user_device")
                .setOutputCol("udev_index")
                .setHandleInvalid("skip");
            StringIndexer ipIndexer = new StringIndexer()
                .setInputCol("ip")
                .setOutputCol("ip_index")
                .setHandleInvalid("skip");
            StringIndexer ccodeIndexer = new StringIndexer()
                .setInputCol("country_code")
                .setOutputCol("ccode_index")
                .setHandleInvalid("skip");
            StringIndexer rcodeIndexer = new StringIndexer()
                .setInputCol("region_code")
                .setOutputCol("rcode_index")
                .setHandleInvalid("skip");
            StringIndexer cyIndexer = new StringIndexer()
                .setInputCol("city")
                .setOutputCol("cy_index")
                .setHandleInvalid("skip");
            StringIndexer zpIndexer = new StringIndexer()
                .setInputCol("zip_code")
                .setOutputCol("zp_index")
                .setHandleInvalid("skip");
            StringIndexer tzIndexer = new StringIndexer()
                .setInputCol("time_zone")
                .setOutputCol("tz_index")
                .setHandleInvalid("skip");
            StringIndexer bnIndexer = new StringIndexer()
                .setInputCol("browser_name")
                .setOutputCol("bn_index")
                .setHandleInvalid("skip");
            StringIndexer bmIndexer = new StringIndexer()
                .setInputCol("browser_manf")
                .setOutputCol("bm_index")
                .setHandleInvalid("skip");
            StringIndexer bvIndexer = new StringIndexer()
                .setInputCol("browser_version")
                .setOutputCol("bv_index")
                .setHandleInvalid("skip");
            StringIndexer onIndexer = new StringIndexer()
                .setInputCol("os_name")
                .setOutputCol("on_index")
                .setHandleInvalid("skip");
            StringIndexer omIndexer = new StringIndexer()
                .setInputCol("os_manf")
                .setOutputCol("om_index")
                .setHandleInvalid("skip");

            // Assemble the indexed features into a single vector column.
            VectorAssembler assembler = new VectorAssembler()
                .setInputCols(new String[]{"udev_index", "ip_index", "ccode_index", "rcode_index", "cy_index", "zp_index", "tz_index", "bn_index", "bm_index", "bv_index", "on_index", "om_index"})
                .setOutputCol("ffeatures");
            // L1-normalize, then project onto the top 5 principal components.
            Normalizer normalizer = new Normalizer()
                .setInputCol("ffeatures")
                .setOutputCol("sfeatures")
                .setP(1.0);
            PCA pca = new PCA()
                .setInputCol("sfeatures")
                .setOutputCol("pcafeatures")
                .setK(5);
            // Train Naive Bayes on the PCA output.
            NaiveBayes nbcl = new NaiveBayes()
                .setFeaturesCol("pcafeatures")
                .setLabelCol("email_index")
                .setSmoothing(1.0);
            // Map predicted label indices back to the original email strings.
            IndexToString is = new IndexToString()
                .setInputCol("prediction")
                .setOutputCol("op")
                .setLabels(emailIndexer.labels());

            Pipeline pipeline = new Pipeline()
                .setStages(new PipelineStage[]{emailIndexer, udevIndexer, ipIndexer, ccodeIndexer, rcodeIndexer, cyIndexer, zpIndexer, tzIndexer, bnIndexer, bmIndexer, bvIndexer, onIndexer, omIndexer, assembler, normalizer, pca, nbcl, is});
            PipelineModel model = pipeline.fit(traindata);
            //DataFrame chidata = model.transform(data);
            //chidata.write().format("com.databricks.spark.csv").save(args[1]);
            model.write().overwrite().save(args[1]);
            sc.close();
        }
        catch (Exception e) {
            e.printStackTrace(); // don't silently swallow failures
        }
    }
}

I suggest you read up a bit on PCA so you can better understand what it is doing. Here are some links:

https://stats.stackexchange.com/questions/26352/interpreting-positive-and-negative-signs-of-the-elements-of-pca-eigenvectors

https://stats.stackexchange.com/questions/2691/making-sense-of-principal-component-analysis-eigenvectors-eigenvalues

Regarding integrating ALS into the pipeline, it looks like you just want to plug one stage in after the other. It is worth understanding what each of them does and is used for: ALS and PCA are completely different things. ALS performs matrix factorization, using alternating least squares to minimize the reconstruction error; it does not find any principal components to transform the data or reduce its dimensionality.
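To make that concrete, here is a minimal sketch of how ALS is typically used on its own: it is an Estimator that factorizes a (user, item, rating) matrix, not a feature transformer that can be dropped into the pipeline after PCA. The DataFrame and column names ("ratings", "userId", "itemId", "rating") and the parameter values below are purely illustrative assumptions, not taken from your data:

import org.apache.spark.ml.recommendation.ALS;
import org.apache.spark.ml.recommendation.ALSModel;

// Hypothetical input: a DataFrame "ratings" with integer userId/itemId columns and a numeric rating column.
ALS als = new ALS()
        .setUserCol("userId")       // "row" index of the matrix being factorized
        .setItemCol("itemId")       // "column" index of the matrix
        .setRatingCol("rating")     // observed entries to reconstruct
        .setRank(10)                // number of latent factors
        .setMaxIter(10)
        .setRegParam(0.1)
        .setNonnegative(true);      // constrain the factor matrices to be non-negative (NMF-style)
ALSModel alsModel = als.fit(ratings);                  // learns user and item factor matrices
DataFrame predictions = alsModel.transform(ratings);   // adds a "prediction" column with reconstructed entries

So the output of ALS is a pair of low-rank factor matrices (and reconstructed ratings), not a transformed feature column that Naive Bayes could consume, which is why it does not slot in after PCA the way your other stages do.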

By the way: I don't think there is anything wrong with getting negative values in the PCA component vectors. You can check this in the links above. You are applying a linear transformation to your data, so the new vectors are simply the result of that transformation. I hope this helps.
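Schematically (the standard PCA projection, stated in general terms rather than from the Spark source):

$$ z = W^{\top} x $$

where the columns of $W$ are the top-$k$ unit eigenvectors of the data covariance matrix. Each eigenvector $v$ is only defined up to sign ($v$ and $-v$ span the same direction), so the coordinates of $z$ can legitimately be positive or negative.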