Computing associations with FPGrowth in PySpark and Scala



Using:

http://spark.apache.org/docs/1.6.1/mllib-frequent-pattern-mining.html

Python code:

from pyspark.mllib.fpm import FPGrowth
model = FPGrowth.train(dataframe, minSupport=0.01, numPartitions=10)
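
A fuller PySpark sketch of the same frequent-itemset step, mirroring the Scala example below (the file path comes from the linked docs page; note that train() only exposes minSupport and numPartitions, not minConfidence):

from pyspark import SparkContext
from pyspark.mllib.fpm import FPGrowth

sc = SparkContext.getOrCreate()
# each line of the sample file is one space-separated transaction
data = sc.textFile("data/mllib/sample_fpgrowth.txt")
transactions = data.map(lambda line: line.strip().split(' '))

model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)
for itemset in model.freqItemsets().collect():
    print(itemset.items, itemset.freq)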

Scala:

import org.apache.spark.mllib.fpm.FPGrowth
import org.apache.spark.rdd.RDD
val data = sc.textFile("data/mllib/sample_fpgrowth.txt")
val transactions: RDD[Array[String]] = data.map(s => s.trim.split(' '))
val fpg = new FPGrowth()
  .setMinSupport(0.2)
  .setNumPartitions(10)
val model = fpg.run(transactions)
model.freqItemsets.collect().foreach { itemset =>
  println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq)
}
val minConfidence = 0.8
model.generateAssociationRules(minConfidence).collect().foreach { rule =>
  println(
    rule.antecedent.mkString("[", ",", "]")
      + " => " + rule.consequent .mkString("[", ",", "]")
      + ", " + rule.confidence)
}

From the wrapper code below (the Scala method that backs the PySpark API), we can see that it does not take a minimum confidence parameter:

def trainFPGrowthModel(
      data: JavaRDD[java.lang.Iterable[Any]],
      minSupport: Double,
      numPartitions: Int): FPGrowthModel[Any] = {
    val fpg = new FPGrowth()
      .setMinSupport(minSupport)
      .setNumPartitions(numPartitions)
    val model = fpg.run(data.rdd.map(_.asScala.toArray))
    new FPGrowthModelWrapper(model)
  }

In the case of PySpark, how can minConfidence be added in order to generate association rules? We can see that Scala has this example, but Python does not.

Spark >= 2.2

There is a DataFrame-based ml API which provides associationRules:

from pyspark.ml.fpm import FPGrowth
data = ...
fpm = FPGrowth(minSupport=0.3, minConfidence=0.9).fit(data)
associationRules = fpm.associationRules
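
A minimal, self-contained sketch of what data could look like (the toy transactions and ids here are made up purely for illustration; FPGrowth in pyspark.ml expects an array column, named "items" by default):

from pyspark.sql import SparkSession
from pyspark.ml.fpm import FPGrowth

spark = SparkSession.builder.getOrCreate()
# hypothetical toy transactions; itemsCol defaults to "items"
data = spark.createDataFrame(
    [(0, ["a", "b", "c"]), (1, ["a", "b"]), (2, ["a", "c"])],
    ["id", "items"])

fpm = FPGrowth(minSupport=0.3, minConfidence=0.9).fit(data)
fpm.freqItemsets.show()
fpm.associationRules.show()  # antecedent, consequent, confidence columns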

Spark < 2.2

So far, PySpark does not support extracting association rules (a DataFrame-based FPGrowth API with Python support is work in progress, tracked as SPARK-1450), but we can work around this fairly easily.

First, you have to install SBT (just go to the download page) and follow the instructions for your operating system.

Next, you will have to create a simple Scala project with only two files:

.
├── AssociationRulesExtractor.scala
└── build.sbt

You can adjust it later to follow an established directory structure.

Next, add the following to build.sbt (adjust the Scala and Spark versions to match the ones you use):

name := "fpm"
version := "1.0"
scalaVersion := "2.10.6"
val sparkVersion = "1.6.2"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-mllib" % sparkVersion
)

and the following AssociationRulesExtractor.scala:

package com.example.fpm
import org.apache.spark.mllib.fpm.AssociationRules.Rule
import org.apache.spark.rdd.RDD
object AssociationRulesExtractor {
  def apply(rdd: RDD[Rule[String]]) = {
    rdd.map(rule => Array(
      rule.confidence, rule.javaAntecedent, rule.javaConsequent
    ))
  }
}

Open a terminal emulator of your choice, go to the root directory of the project and run:

sbt package

It will generate a jar file in the target directory. For example, with Scala 2.10 it will be:

target/scala-2.10/fpm_2.10-1.0.jar

Start the PySpark shell or use spark-submit, and pass the path to the generated jar file to --driver-class-path:

bin/pyspark --driver-class-path /path/to/fpm_2.10-1.0.jar

In non-local mode:

bin/pyspark --driver-class-path /path/to/fpm_2.10-1.0.jar --jars /path/to/fpm_2.10-1.0.jar

In cluster mode, the jar should be present on all nodes.

Add some convenience wrappers:

from pyspark import SparkContext
from pyspark.mllib.fpm import FPGrowthModel
from pyspark.mllib.common import _java2py
from collections import namedtuple

rule = namedtuple("Rule", ["confidence", "antecedent", "consequent"])
def generateAssociationRules(model, minConfidence):
    # Get active context
    sc = SparkContext.getOrCreate()
    # Retrieve extractor object
    extractor = sc._gateway.jvm.com.example.fpm.AssociationRulesExtractor
    # Compute rules
    java_rules = model._java_model.generateAssociationRules(minConfidence)
    # Convert rules to Python RDD
    return _java2py(sc, extractor.apply(java_rules)).map(lambda x: rule(*x))

Finally, you can use these helpers as a function:

generateAssociationRules(model, 0.9)

or as a method:

FPGrowthModel.generateAssociationRules = generateAssociationRules
model.generateAssociationRules(0.9)
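
The result is an RDD of the Rule namedtuples defined above, so it can be inspected like any other RDD, for example:

rules = model.generateAssociationRules(0.9)
for r in rules.take(5):
    print(r.antecedent, "=>", r.consequent, r.confidence)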

This solution depends on internal PySpark methods, so it is not guaranteed to be portable across versions.

With Spark < 2.2 you can use a bit of py4j code:

# model was produced by FPGrowth.train() method
rules = sorted(model._java_model.generateAssociationRules(0.9).collect(), 
    key=lambda x: x.confidence(), reverse=True)
for rule in rules[:200]:
    # rule variable has confidence(), consequent() and antecedent() 
    # methods for individual value access.
    print(rule)
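
If plain Python values are more convenient than the Java rule objects, each rule can be unpacked through the same accessors (a sketch; it assumes the antecedent() and consequent() arrays come back as py4j wrappers that list() can convert):

python_rules = [
    (list(r.antecedent()), list(r.consequent()), r.confidence())
    for r in rules
]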
