What is the correct way to add a column of random numbers to a Dask dataframe? Obviously I could use map_partitions
to add the column to each partition, but I'm not sure how the random state is handled when Dask computes in parallel. (That is, will every worker use the same random state and therefore generate identical random numbers?)
There seems to be relevant functionality in dask.array.random
(https://docs.dask.org/en/latest/_modules/dask/array/random.html), but I couldn't find an example of how to use it together with a Dask dataframe.
According to this discussion (https://github.com/dask/distributed/issues/2558), no effort is made to set/track numpy
seeds, and the recommended approach is to use dask.array
(as mentioned in the question). So perhaps the best route to reproducible randomness is to create a dask.array
and convert it to a dask.dataframe
:
import dask.array as da

# this is not reproducible: each call draws from a fresh, unseeded state
for _ in range(3):
    x = da.random.random((10, 1), chunks=(2, 2))
    print(x.sum().compute())

# this is reproducible: seeding the RandomState fixes the sequence
for _ in range(3):
    state = da.random.RandomState(1234)
    y = state.random_sample(size=(10, 1), chunks=(2, 2))
    print(y.sum().compute())
# convert to ddf
import pandas as pd
import dask.dataframe as dd

ddf = dd.from_dask_array(y, columns=['A'])

# if there's another existing dataframe ddf2
ddf2 = dd.from_pandas(pd.DataFrame(range(10), columns=['B']), npartitions=2)

# then simple column assignment will work even if partitions are not aligned
ddf2['A'] = ddf['A']
print((ddf.compute() == ddf2[['A']].compute()).sum() == len(ddf))
# of course it will be more efficient if the partitions align
# you can inspect the DAG with ddf2.visualize() to see why
# also note carefully that the lengths of ddf and ddf2 must match,
# otherwise there may be unexpected results downstream
# to see why, try changing the size of `y` above and then compare
# ddf and ddf2