I am performing ltrim and rtrim on a multi-column DataFrame, but right now I can only do it one column at a time, like this:
# selected_colums = selected_colums.withColumn("last_name", ltrim(selected_colums.last_name))
# selected_colums = selected_colums.withColumn("last_name", rtrim(selected_colums.last_name))
# selected_colums = selected_colums.withColumn("email", ltrim(selected_colums.email))
# selected_colums = selected_colums.withColumn("email", rtrim(selected_colums.email))
# selected_colums = selected_colums.withColumn("phone_number", ltrim(selected_colums.phone_number))
# selected_colums = selected_colums.withColumn("phone_number", rtrim(selected_colums.phone_number))
But I want to do it in a loop, like below:
sdk = ['first_name','last_name','email','phone_number','email_alt','phone_number_alt']
for x in sdk:
selected_colums = selected_colums.withColumn(x, ltrim(selected_colums.last_name))
It gives me a syntax error. Please help me optimize this code so that, for any number of columns, I can apply ltrim or rtrim via a list.
Check the code below.

Import the required functions (ltrim and rtrim need to be imported as well, not just col):

>>> from pyspark.sql.functions import col, ltrim, rtrim

Build expressions that apply ltrim and rtrim to every column:

>>> columnExprs = map(lambda c: rtrim(ltrim(col(c))).alias(c), df.columns)

Apply columnExprs in select:

>>> df.select(*columnExprs).show()
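If a Spark cluster is not at hand to experiment with, the same "map one trim expression over a list of column names" pattern can be sketched in plain Python. This is only an analogy, not PySpark: the `rows` and `cols` names are made up for illustration, and `str.strip()` stands in for `rtrim(ltrim(col))`, trimming whitespace on both sides of each value.

```python
# Hypothetical sample data standing in for a Spark DataFrame.
rows = [
    {"first_name": "  Ada ", "email": " ada@example.com  "},
    {"first_name": "Grace  ", "email": "  grace@example.com"},
]
cols = ["first_name", "email"]

# Apply the trim to every listed column of every row,
# mirroring map(lambda c: rtrim(ltrim(col(c))).alias(c), df.columns).
trimmed = [{c: row[c].strip() for c in cols} for row in rows]
print(trimmed[0]["first_name"])  # -> "Ada"
```

The point is the same in both versions: instead of one `withColumn` per column, build the per-column transformation once and map it over the list of column names.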