Suppose I have the following table:
+--------------------+--------------------+------+------------+--------------------+
| host| path|status|content_size| time|
+--------------------+--------------------+------+------------+--------------------+
|js002.cc.utsunomi...|/shuttle/resource...| 404| 0|1995-08-01 00:07:...|
| tia1.eskimo.com |/pub/winvn/releas...| 404| 0|1995-08-01 00:28:...|
|grimnet23.idirect...|/www/software/win...| 404| 0|1995-08-01 00:50:...|
|miriworld.its.uni...|/history/history.htm| 404| 0|1995-08-01 01:04:...|
| ras38.srv.net |/elv/DELTA/uncons...| 404| 0|1995-08-01 01:05:...|
| cs1-06.leh.ptd.net | | 404| 0|1995-08-01 01:17:...|
|dialip-24.athenet...|/history/apollo/a...| 404| 0|1995-08-01 01:33:...|
| h96-158.ccnet.com |/history/apollo/a...| 404| 0|1995-08-01 01:35:...|
| h96-158.ccnet.com |/history/apollo/a...| 404| 0|1995-08-01 01:36:...|
| h96-158.ccnet.com |/history/apollo/a...| 404| 0|1995-08-01 01:36:...|
| h96-158.ccnet.com |/history/apollo/a...| 404| 0|1995-08-01 01:36:...|
| h96-158.ccnet.com |/history/apollo/a...| 404| 0|1995-08-01 01:36:...|
| h96-158.ccnet.com |/history/apollo/a...| 404| 0|1995-08-01 01:36:...|
| h96-158.ccnet.com |/history/apollo/a...| 404| 0|1995-08-01 01:36:...|
| h96-158.ccnet.com |/history/apollo/a...| 404| 0|1995-08-01 01:37:...|
| h96-158.ccnet.com |/history/apollo/a...| 404| 0|1995-08-01 01:37:...|
| h96-158.ccnet.com |/history/apollo/a...| 404| 0|1995-08-01 01:37:...|
|hsccs_gatorbox07....|/pub/winvn/releas...| 404| 0|1995-08-01 01:44:...|
|www-b2.proxy.aol....|/pub/winvn/readme...| 404| 0|1995-08-01 01:48:...|
|www-b2.proxy.aol....|/pub/winvn/releas...| 404| 0|1995-08-01 01:48:...|
+--------------------+--------------------+------+------------+--------------------+
How can I filter this table in PySpark so that only rows with distinct paths remain, while still keeping all of the columns?
If you want to keep only the rows whose values in a particular column are distinct, you have to call the dropDuplicates method on the DataFrame. In my case it looks like this:
dataFrame = ...
dataFrame.dropDuplicates(['path'])
where path is the column name.
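For concreteness, here is a minimal runnable sketch. The SparkSession setup and the sample rows are made up to mirror the table's schema (the time column is omitted for brevity); they are not taken from the original log.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical rows with the same shape as the table above.
df = spark.createDataFrame(
    [("h96-158.ccnet.com", "/history/apollo/a.htm", 404, 0),
     ("h96-158.ccnet.com", "/history/apollo/a.htm", 404, 0),
     ("tia1.eskimo.com", "/pub/winvn/release.txt", 404, 0)],
    ["host", "path", "status", "content_size"])

# One (arbitrary) row survives per distinct path; every column is kept.
df.dropDuplicates(["path"]).show()

Note that dropDuplicates makes no promise about which row of each duplicate group is retained, which is what the window-based approach below addresses.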
As for controlling which records get kept and which get discarded: if you can express your criterion in a Window expression, you can use something like the following. This is Scala (more or less), but I imagine you can do it in PySpark as well.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number
import spark.implicits._ // enables the 'symbol column syntax

// Number the rows in each partition, keep the first one, drop the helper column.
val window = Window.partitionBy('columns, 'to, 'make, 'unique).orderBy('conditionToPutRowToKeepFirst)
dataframe.withColumn("row_number", row_number().over(window)).where('row_number === 1).drop("row_number")
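In PySpark the same pattern looks roughly like this. This is a sketch, assuming the table above is bound to dataFrame and that "keep the earliest request per path" (ordering by time) is the desired tie-breaking rule; swap in whatever condition fits your data.

from pyspark.sql import Window
from pyspark.sql.functions import col, row_number

# One partition per path; the earliest request sorts first and is kept.
window = Window.partitionBy("path").orderBy(col("time").asc())

deduped = (dataFrame
           .withColumn("row_number", row_number().over(window))
           .where(col("row_number") == 1)
           .drop("row_number"))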