I am trying to build a Naive Bayes classifier, loading data from a database into a DataFrame of (label, text). Here is a sample of the data (multinomial labels):
+-----+--------------------+
|label|             feature|
+-----+--------------------+
|    1|combusting prepar...|
|    1|adhesives for ind...|
|    1|                    |
|    1| salt for preserving|
|    1|auxiliary fluids ...|
+-----+--------------------+
I tokenize, remove stop words, build n-grams, and apply hashing TF with the following transformations:

val selectedData = df.select("label", "feature")
// Tokenize the feature column
val tokenizer = new Tokenizer().setInputCol("feature").setOutputCol("words")
val regexTokenizer = new RegexTokenizer().setInputCol("feature").setOutputCol("words").setPattern("\\W")
val tokenized = tokenizer.transform(selectedData)
tokenized.select("words", "label").take(3).foreach(println)
// Removing stop words
val remover = new StopWordsRemover().setInputCol("words").setOutputCol("filtered")
val parsedData = remover.transform(tokenized)
// N-gram
val ngram = new NGram().setInputCol("filtered").setOutputCol("ngrams")
val ngramDataFrame = ngram.transform(parsedData)
ngramDataFrame.take(3).map(_.getAs[Seq[String]]("ngrams").toList).foreach(println)
// Hashing TF
val hashingTF = new HashingTF().setInputCol("ngrams").setOutputCol("hash").setNumFeatures(1000)
val featurizedData = hashingTF.transform(ngramDataFrame)
Output of the transformations:
+-----+--------------------+--------------------+--------------------+--------------------+--------------------+
|label|             feature|               words|            filtered|              ngrams|                hash|
+-----+--------------------+--------------------+--------------------+--------------------+--------------------+
|    1|combusting prepar...|[combusting, prep...|[combusting, prep...|[combusting prepa...|(1000,[124,161,69...|
|    1|adhesives for ind...|[adhesives, for, ...|[adhesives, indus...|[adhesives indust...|(1000,[451,604],[...|
|    1|                    |                  []|                  []|                  []|        (1000,[],[])|
|    1| salt for preserving|[salt, for, prese...|  [salt, preserving]|   [salt preserving]|  (1000,[675],[1.0])|
|    1|auxiliary fluids ...|[auxiliary, fluid...|[auxiliary, fluid...|[auxiliary fluids...|(1000,[661,696,89...|
+-----+--------------------+--------------------+--------------------+--------------------+--------------------+
To build the Naive Bayes model, I need to convert the label and the features into LabeledPoint. Following that approach, I tried converting the DataFrame into an RDD and creating LabeledPoints:
// First attempt: treat each Row as a string and split on commas
val rddData = featurizedData.select("label", "hash").rdd
val trainData = rddData.map { line =>
  val parts = line.split(',')
  LabeledPoint(parts(0), parts(1))
}

// Second attempt: extract a (Double, SparseVector) tuple, then split it
val rddData = featurizedData.select("label", "hash").rdd.map(r => (Try(r(0).asInstanceOf[Integer]).get.toDouble, Try(r(1).asInstanceOf[org.apache.spark.mllib.linalg.SparseVector]).get))
val trainData = rddData.map { line =>
  val parts = line.split(',')
  LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(',').map(_.toDouble)))
}
I get the following error:
scala> val trainData = rddData.map { line =>
| val parts = line.split(',')
| LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(',').map(_.toDouble)))
| }
<console>:67: error: value split is not a member of (Double, org.apache.spark.mllib.linalg.SparseVector)
val parts = line.split(',')
^
<console>:68: error: not found: value Vectors
LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(',').map(_.toDouble)))
Edit 1:
Following the suggestion below, I created the LabeledPoints and trained the model.
val trainData = featurizedData.select("label","features")
val trainLabel = trainData.map(line => LabeledPoint(Try(line(0).asInstanceOf[Integer]).get.toDouble, Try(line(1).asInstanceOf[org.apache.spark.mllib.linalg.SparseVector]).get))
val splits = trainLabel.randomSplit(Array(0.8, 0.2), seed = 11L)
val training = splits(0)
val test = splits(1)
val model = NaiveBayes.train(training, lambda = 1.0, modelType = "multinomial")
val predictionAndLabels = test.map { point =>
  val score = model.predict(point.features)
  (score, point.label)
}
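For reference, the accuracy mentioned below can be read directly off predictionAndLabels; a minimal sketch:

// Sketch: overall accuracy from the (prediction, label) pairs above.
val accuracy = predictionAndLabels.filter { case (p, l) => p == l }.count.toDouble / test.count()
println(s"Accuracy = $accuracy")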
With and without n-grams, and with different numbers of hashing features, I get a low accuracy of around 40%. My dataset contains 5000 rows and 45 multinomial labels. Is there any way to improve the model's performance? Thanks in advance.
You don't need to transform your featurizedData into an RDD, because Apache Spark has two libraries, ML and MLlib: the first one works with DataFrames, whereas MLlib works with RDDs. Therefore, since you already have a DataFrame, you can use ML.

To achieve this, you just need to rename your columns to (label, features) and fit your model, as shown in the NaiveBayes example below.
from pyspark.sql import Row
from pyspark.ml.linalg import Vectors  # on Spark 1.x use pyspark.mllib.linalg
from pyspark.ml.classification import NaiveBayes

df = sqlContext.createDataFrame([
    Row(label=0.0, features=Vectors.dense([0.0, 0.0])),
    Row(label=0.0, features=Vectors.dense([0.0, 1.0])),
    Row(label=1.0, features=Vectors.dense([1.0, 0.0]))])
nb = NaiveBayes(smoothing=1.0, modelType="multinomial")
model = nb.fit(df)
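Since the question's pipeline is in Scala, the same idea there would look roughly like the sketch below (assumptions: Spark's ML NaiveBayes, the hashed column renamed to features, and the label cast to double):

import org.apache.spark.ml.classification.NaiveBayes
import org.apache.spark.sql.functions.col

// Sketch: fit on the DataFrame from the question directly with the ML API.
val mlData = featurizedData
  .withColumnRenamed("hash", "features")
  .withColumn("label", col("label").cast("double"))
val nb = new NaiveBayes().setSmoothing(1.0).setModelType("multinomial")
val nbModel = nb.fit(mlData)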
Regarding the error you got, it happens because you already have a SparseVector, and that class doesn't have a split method. Thinking it through, your RDD almost has the structure you actually need; you just have to convert the Tuple into a LabeledPoint.
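Concretely, for the (Double, SparseVector) RDD built in the question, the conversion is a one-line map; a minimal sketch:

import org.apache.spark.mllib.regression.LabeledPoint

// Sketch: each tuple already holds (label, vector); just wrap it.
val trainData = rddData.map { case (label, vector) => LabeledPoint(label, vector) }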
There are some techniques to improve performance. The first that comes to mind is removing stop words (e.g. the, a, an, to, although, etc.). The second is counting the number of distinct words in your texts and building the vectors from those counts instead of hashing them: if the number of hash buckets is low, different words can end up with the same hash and therefore share a feature, which hurts performance.
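In Spark this count-based approach is available out of the box as CountVectorizer, which builds a vocabulary from the data so distinct n-grams never collide; a minimal sketch as a drop-in replacement for the HashingTF step, reusing the column names from the question:

import org.apache.spark.ml.feature.CountVectorizer

// Sketch: replace HashingTF with CountVectorizer to avoid hash collisions.
val countVectorizer = new CountVectorizer()
  .setInputCol("ngrams")
  .setOutputCol("features")
  .setVocabSize(10000) // assumption: an upper bound above the real vocabulary size
val cvModel = countVectorizer.fit(ngramDataFrame)
val countFeaturizedData = cvModel.transform(ngramDataFrame)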