How to do label encoding of categorical values in Apache Spark



My dataset contains string columns. How do I encode the string-based columns, the way scikit-learn's LabelEncoder does?

StringIndexer is what you need: https://spark.apache.org/docs/1.5.1/ml-features.html#stringindexer

from pyspark.ml.feature import StringIndexer

# Sample DataFrame with a string column to encode
df = sqlContext.createDataFrame(
            [(0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"), (5, "c")],
            ["id", "category"])

# Map each distinct string to a numeric index in a new column
indexer = StringIndexer(inputCol="category", outputCol="categoryIndex")
indexed = indexer.fit(df).transform(df)
indexed.show()
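Note that StringIndexer's default ordering differs from scikit-learn's LabelEncoder: labels are indexed by descending frequency (the most frequent label gets 0.0, with ties broken alphabetically), rather than alphabetically. A minimal pure-Python sketch of that ordering, so you can see what indices to expect without running a cluster (the helper name `string_index` is hypothetical, not part of any Spark API):

```python
from collections import Counter

def string_index(values):
    """Mimic StringIndexer's default ordering: labels sorted by
    descending frequency, ties broken alphabetically."""
    counts = Counter(values)
    ordered = sorted(counts, key=lambda label: (-counts[label], label))
    mapping = {label: float(i) for i, label in enumerate(ordered)}
    return [mapping[v] for v in values]

categories = ["a", "b", "c", "a", "a", "c"]
print(string_index(categories))  # → [0.0, 2.0, 1.0, 0.0, 0.0, 1.0]
```

Here "a" occurs three times and gets index 0.0, "c" twice and gets 1.0, "b" once and gets 2.0, which matches the `categoryIndex` column produced by the Spark snippet above.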

We are developing sparkit-learn, which aims to provide scikit-learn functionality and its API on top of PySpark. You can use SparkLabelEncoder like this:

$ pip install sparkit-learn
>>> from splearn.preprocessing import SparkLabelEncoder
>>> from splearn import BlockRDD
>>>
>>> data = ["paris", "paris", "tokyo", "amsterdam"]
>>> y = BlockRDD(sc.parallelize(data))
>>>
>>> le = SparkLabelEncoder()
>>> le.fit(y)
>>> le.classes_
array(['amsterdam', 'paris', 'tokyo'],
      dtype='|S9')
>>>
>>> test = ["tokyo", "tokyo", "paris"]
>>> y_test = BlockRDD(sc.parallelize(test))
>>>
>>> le.transform(y_test).toarray()
array([2, 2, 1])
>>>
>>> test = [2, 2, 1]
>>> y_test = BlockRDD(sc.parallelize(test))
>>>
>>> le.inverse_transform(y_test).toarray()
array(['tokyo', 'tokyo', 'paris'],
      dtype='|S9')
