PySpark - split a column and take the n-th element



I want to take a column and split its string on a character. As usual, I understood that the method split would return a list, but when coding I found that the returned object only has the methods getItem or getField, with the following descriptions from the API:

@since(1.3)   
def getItem(self, key):
    """
    An expression that gets an item at position ``ordinal`` out of a list,
    or gets an item by key out of a dict.

@since(1.3)
def getField(self, name):
    """
    An expression that gets a field by name in a StructField.

Clearly, this does not do what I want. For example, for a column with the text "A_B_C_D", I would like to get "A_B_C" and "D" in two different columns.

This is the code I am using:

from pyspark.sql.functions import regexp_extract, col, split

df_test = spark.sql("SELECT * FROM db_test.table_test")
# Applying the transformations to the data
split_col = split(df_test['Full_text'], '_')
df_split = df_test.withColumn('Last_Item', split_col.getItem(3))

Here is a reproducible example:

from pyspark.sql import Row
from pyspark.sql.functions import regexp_extract, col, split

l = [("Item1_Item2_ItemN"), ("FirstItem_SecondItem_LastItem"), ("ThisShouldBeInTheFirstColumn_ThisShouldBeInTheLastColumn")]
rdd = sc.parallelize(l)
datax = rdd.map(lambda x: Row(fullString=x))
df = sqlContext.createDataFrame(datax)
split_col = split(df['fullString'], '_')
df = df.withColumn('LastItemOfSplit', split_col.getItem(2))

The result:

fullString                                                LastItemOfSplit
Item1_Item2_ItemN                                            ItemN
FirstItem_SecondItem_LastItem                                LastItem
ThisShouldBeInTheFirstColumn_ThisShouldBeInTheLastColumn     null

My expected result would always have the last item:

fullString                                                LastItemOfSplit
Item1_Item2_ItemN                                            ItemN
FirstItem_SecondItem_LastItem                                LastItem
ThisShouldBeInTheFirstColumn_ThisShouldBeInTheLastColumn  ThisShouldBeInTheLastColumn
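The null in the last row can be reproduced without Spark. Below is a minimal plain-Python sketch (an assumption for illustration: `str.split` stands in for pyspark's `split`, and `get_item` is a hypothetical helper mimicking how `Column.getItem` yields null when the index does not exist):

```python
def get_item(s, i, sep="_"):
    # Mimic split(col, sep).getItem(i): Spark yields null (here: None)
    # when the array has no element at index i.
    parts = s.split(sep)
    return parts[i] if i < len(parts) else None

rows = [
    "Item1_Item2_ItemN",
    "FirstItem_SecondItem_LastItem",
    "ThisShouldBeInTheFirstColumn_ThisShouldBeInTheLastColumn",
]

third_items = [get_item(s, 2) for s in rows]
print(third_items)  # ['ItemN', 'LastItem', None] -- the hard-coded index misses the last row
```

A fixed index like getItem(2) only works when every row has at least three parts; the general fix is to index relative to the length of the array.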

You can use getItem(size - 1) to get the last item from the array:

Example:

df = spark.createDataFrame([[['A', 'B', 'C', 'D']], [['E', 'F']]], ['split'])
df.show()
+------------+
|       split|
+------------+
|[A, B, C, D]|
|      [E, F]|
+------------+
import pyspark.sql.functions as F
df.withColumn('lastItem', df.split.getItem(F.size(df.split) - 1)).show()
+------------+--------+
|       split|lastItem|
+------------+--------+
|[A, B, C, D]|       D|
|      [E, F]|       F|
+------------+--------+

For your case:

from pyspark.sql.functions import regexp_extract, col, split, size

df_test = spark.sql("SELECT * FROM db_test.table_test")
# Applying the transformations to the data
split_col = split(df_test['Full_text'], '_')
df_split = df_test.withColumn('Last_Item', split_col.getItem(size(split_col) - 1))
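The size-minus-one indexing can be sanity-checked in plain Python (a sketch under the assumption that `str.split` and `len` play the roles of the `split` and `size` functions):

```python
def last_item(s, sep="_"):
    # parts[len(parts) - 1] mirrors split_col.getItem(size(split_col) - 1)
    parts = s.split(sep)
    return parts[len(parts) - 1]

print(last_item("Item1_Item2_ItemN"))  # ItemN
print(last_item("A_B_C_D"))            # D
```

On Spark 2.4+, element_at(split_col, -1) is another option: it takes the last element directly via a negative index.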

You can pass a regular expression pattern to split.

The following works for your example:

from pyspark.sql.functions import split

split_col = split(df['fullString'], r"_(?=[^_]+$)")
df = df.withColumn('LastItemOfSplit', split_col.getItem(1))
df.show(truncate=False)
#+--------------------------------------------------------+---------------------------+
#|fullString                                              |LastItemOfSplit            |
#+--------------------------------------------------------+---------------------------+
#|Item1_Item2_ItemN                                       |ItemN                      |
#|FirstItem_SecondItem_LastItem                           |LastItem                   |
#|ThisShouldBeInTheFirstColumn_ThisShouldBeInTheLastColumn|ThisShouldBeInTheLastColumn|
#+--------------------------------------------------------+---------------------------+

The pattern means the following:

  • _ a literal underscore
  • (?=[^_]+$) a positive lookahead for one or more non-underscore characters up to the end of the string $

This splits the string only on the last underscore, since only that underscore is followed exclusively by non-underscore characters. Then calling .getItem(1) gets the item at index 1 of the resulting list.
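Python's re module supports the same lookahead syntax as the Java regex engine Spark uses, so the split points can be checked locally (a standalone sketch using a lookahead that keeps only the final underscore as a split point):

```python
import re

# Only an underscore followed exclusively by non-underscores up to the
# end of the string is a split point, i.e. the last underscore.
pattern = r"_(?=[^_]+$)"

print(re.split(pattern, "Item1_Item2_ItemN"))
# ['Item1_Item2', 'ItemN']
print(re.split(pattern, "ThisShouldBeInTheFirstColumn_ThisShouldBeInTheLastColumn"))
# ['ThisShouldBeInTheFirstColumn', 'ThisShouldBeInTheLastColumn']
```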
