Python: Is there a way to unpack a tuple inside filter?



I have the following code:

def important_predicate(tuple_):
    n1, n2 = tuple_
    return ...

list_of_tuples = [(i, some_function(i)) for i in range(..)]
important_group = list(filter(important_predicate, list_of_tuples))

The fact that I have to unpack the tuple into n1 and n2 just to use the two numbers is rather annoying. I don't want to use tuple_[0] and tuple_[1], because n1 and n2 can carry meaningful names. Is there anything that can be done to get rid of this? Can Python "unpacking" be applied here? My rough idea is:

def some_function(n1, n2):
    return -1

tuple_ = (1, -1)
some_function(*tuple_)
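One way to combine that rough idea with the built-in filter is a tiny adapter that turns a multi-argument predicate into a one-argument callable. This is just a sketch; the helper name star and the stand-in some_function below are illustrative, not anything from the question:

def star(predicate):
    # Wrap a predicate taking (n1, n2) so filter can pass it a single tuple.
    return lambda tuple_: predicate(*tuple_)

def important_predicate(n1, n2):
    return n1 == n2

def some_function(i):
    return 2  # hypothetical stand-in for the real some_function

list_of_tuples = [(i, some_function(i)) for i in range(4)]
important_group = list(filter(star(important_predicate), list_of_tuples))
print(important_group)  # [(2, 2)]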

But applied to Python's built-in filter. Note that the suggestion to move some_function inside important_predicate does not help me, because keeping list_of_tuples around for other functions is important. So this is not what I want:

def important_predicate(n1):
    n2 = some_function(n1)
    return ...

list_of_tuples = [(i, some_function(i)) for i in range(..)]
important_group = list(filter(important_predicate, list_of_tuples))

A different solution, since I like compress. And starmap does exactly that unpacking.

from itertools import starmap, compress

def important_predicate(n1, n2):
    return n1 == n2

list_of_tuples = [(1, 2), (2, 2), (3, 2)]
important_group = list(compress(list_of_tuples,
                                starmap(important_predicate, list_of_tuples)))
print(important_group)

Output:

[(2, 2)]
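A quick sketch (not part of the answer itself) of what the two itertools pieces produce individually may make the mechanism clearer: starmap unpacks each tuple into the predicate's arguments and yields booleans, and compress then keeps only the tuples whose boolean is true.

from itertools import starmap, compress

def important_predicate(n1, n2):
    return n1 == n2

list_of_tuples = [(1, 2), (2, 2), (3, 2)]

# starmap calls important_predicate(1, 2), important_predicate(2, 2), ...
selectors = list(starmap(important_predicate, list_of_tuples))
print(selectors)                                  # [False, True, False]

# compress keeps the tuples whose selector is truthy
print(list(compress(list_of_tuples, selectors)))  # [(2, 2)]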

This seems to be faster than the solution in the other answer, and also faster than a list comprehension:

0.46 s  compress_starmap
0.93 s  filter_lambda
0.65 s  list_comprehension
0.46 s  compress_starmap
0.92 s  filter_lambda
0.65 s  list_comprehension
0.46 s  compress_starmap
0.91 s  filter_lambda
0.73 s  list_comprehension

Benchmark code:

from timeit import repeat
from itertools import starmap, compress

def important_predicate(n1, n2):
    return n1 == n2

def compress_starmap():
    return list(compress(list_of_tuples,
                         starmap(important_predicate, list_of_tuples)))

def filter_lambda():
    return list(filter(lambda tuple_: important_predicate(*tuple_), list_of_tuples))

def list_comprehension():
    return [tuple_ for tuple_ in list_of_tuples if important_predicate(*tuple_)]

list_of_tuples = [(1, 2), (2, 2), (3, 2)] * 1000

for _ in range(3):
    for func in compress_starmap, filter_lambda, list_comprehension:
        t = min(repeat(func, number=1000))
        print('%.2f s ' % t, func.__name__)
    print()

You can do the unpacking inside a lambda expression, like this:

important_group = list(filter(lambda tuple_: important_predicate(*tuple_), list_of_tuples))
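Applied to the question's original setup, this keeps list_of_tuples intact while the predicate still uses named parameters. The some_function below is a hypothetical stand-in, used only so the sketch runs end to end:

def important_predicate(n1, n2):
    return n1 == n2

def some_function(i):
    # hypothetical stand-in for the real some_function
    return i if i % 2 == 0 else 0

list_of_tuples = [(i, some_function(i)) for i in range(5)]
important_group = list(filter(lambda tuple_: important_predicate(*tuple_),
                              list_of_tuples))
print(important_group)  # [(0, 0), (2, 2), (4, 4)]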
