Memory consumption explodes when training an FL model with a varying number of participants per round



I am running a federated learning algorithm following the image classification tutorial.
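The snippets below use data_train without showing how it is loaded; a minimal sketch, assuming the tutorial's federated EMNIST dataset:

import tensorflow_federated as tff

# Load the tutorial's federated EMNIST data; data_train is a ClientData
# object holding one tf.data.Dataset per client.
data_train, data_test = tff.simulation.datasets.emnist.load_data()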

The number of participants in each round varies according to a predefined list of participant counts.
number_of_participants_each_round = [
    108, 113, 93, 92, 114, 101, 94, 93, 107, 99, 118, 101, 114, 111, 88,
    101, 86, 96, 110, 80, 118, 84, 91, 120, 110, 109, 113, 96, 112, 107,
    119, 91, 97, 99, 97, 104, 103, 120, 89, 100, 104, 104, 103, 88, 108]
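(For illustration only: a schedule like this could be generated by drawing uniformly from [80, 120]; the hardcoded values above are the ones actually used.)

import numpy as np

# Hypothetical way to generate such a schedule: 45 rounds with 80-120
# participants each. Not part of the original run.
rng = np.random.default_rng(seed=0)
schedule = rng.integers(low=80, high=121, size=45).tolist()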

The federated data is preprocessed and batched before training starts.


import collections
import tensorflow as tf

NUM_EPOCHS = 5
BATCH_SIZE = 20
SHUFFLE_BUFFER = 418
PREFETCH_BUFFER = 10

def preprocess(dataset):
  def batch_format_fn(element):
    # Flatten each 28x28 image into a 784-dim vector; labels become [n, 1].
    return collections.OrderedDict(
        x=tf.reshape(element['pixels'], [-1, 784]),
        y=tf.reshape(element['label'], [-1, 1]))
  return dataset.repeat(NUM_EPOCHS).shuffle(SHUFFLE_BUFFER).batch(
      BATCH_SIZE).map(batch_format_fn).prefetch(PREFETCH_BUFFER)
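
As a quick sanity check (illustrative; assumes data_train from above), the preprocessed element spec should be an OrderedDict of batched features and labels:

example_dataset = data_train.create_tf_dataset_for_client(data_train.client_ids[0])
print(preprocess(example_dataset).element_spec)
# Expect x with shape (None, 784) and y with shape (None, 1).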

def make_federated_data(client_data, client_ids):
  return [preprocess(client_data.create_tf_dataset_for_client(x))
          for x in client_ids]

federated_train_data = make_federated_data(data_train, data_train.client_ids)
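
The training loops below also use iterative_process and state without showing their construction; a minimal sketch following the tutorial (the model here is an assumption, the actual model_fn may differ):

import tensorflow as tf
import tensorflow_federated as tff

def model_fn():
  # The tutorial's simple softmax classifier over flattened pixels.
  keras_model = tf.keras.models.Sequential([
      tf.keras.layers.InputLayer(input_shape=(784,)),
      tf.keras.layers.Dense(10, kernel_initializer='zeros'),
      tf.keras.layers.Softmax(),
  ])
  return tff.learning.from_keras_model(
      keras_model,
      input_spec=federated_train_data[0].element_spec,
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))
state = iterative_process.initialize()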

Each round, participants are sampled at random from federated_train_data[0:expected_total_clients] according to number_of_participants_each_round, and iterative_process.next is run for 45 rounds:

import numpy as np

expected_total_clients = 500
round_nums = 45

for round_num in range(round_nums):
  sampled_clients = np.random.choice(
      a=federated_train_data[0:expected_total_clients],
      size=number_of_participants_each_round[round_num],
      replace=False)
  state, metrics = iterative_process.next(state, list(sampled_clients))
  print('round {:2d}, metrics={}'.format(round_num + 1, metrics))

The problem is that VRAM usage explodes after just a few rounds: it reaches 5.5 GB around round 6~7 and keeps growing at roughly 0.8 GB per round, until training finally crashes around round 25~26, by which point VRAM has reached 17 GB and more than 4000 Python threads have been created.
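For reference, the per-round memory and thread growth can be logged with psutil (a hypothetical diagnostic, not part of the original run; assumes psutil is installed):

import os
import psutil

process = psutil.Process(os.getpid())

# Call once per round, e.g. right after iterative_process.next().
def log_resource_usage(round_num):
  rss_gb = process.memory_info().rss / 1e9
  print('round {:2d}: rss={:.2f} GB, threads={}'.format(
      round_num + 1, rss_gb, process.num_threads()))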

Training fails with the error message below; on macOS, error code 35 is EAGAIN, i.e. the process has hit its thread limit:
F tensorflow/core/platform/default/env.cc:72] Check failed: ret == 0 (35 vs. 0)Thread creation via pthread_create() failed.

### Troubleshooting ###

Reducing number_of_participants_each_round to 20 for every round lets training finish, but memory consumption is still high and still growing.

Running the same code with a fixed number of participants per round, memory consumption stays at roughly 1.5~2.0 GB of VRAM throughout training:

expected_total_clients = 500
fixed_client_size_per_round = 100
round_nums = 45

for round_num in range(round_nums):
  sampled_clients = np.random.choice(
      a=federated_train_data[0:expected_total_clients],
      size=fixed_client_size_per_round,
      replace=False)
  state, metrics = iterative_process.next(state, list(sampled_clients))
  print('round {:2d}, metrics={}'.format(round_num + 1, metrics))

Additional details:

OS: MacOS Mojave, 10.14.6
python -V: Python 3.8.5 then downgraded to Python 3.7.9
TF version: 2.4.1
TFF version: 0.18.0
Keras version: 2.4.3

Is this normal memory behavior or a bug? Are there any refactorings or tips to optimize memory consumption?

This turned out to be a bug in the executor stack of the TFF runtime: the runtime constructs a separate executor stack for each distinct number of clients, and with varying participation these stacks accumulate instead of being cleaned up. That is consistent with the troubleshooting above, where a fixed client count per round reuses one stack and memory stays flat.

Full details and the bug fix are tracked at https://github.com/tensorflow/federated/issues/1215; see the latest updates there.
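
Until a release with the fix is available, one possible mitigation is to rebuild the local execution context between rounds so executor stacks built for earlier cardinalities can be garbage-collected. This is a sketch only, assuming the cached-executor explanation above and the TFF 0.18 native backend API; whether it fully releases memory depends on the release:

import tensorflow_federated as tff

for round_num in range(round_nums):
  sampled_clients = np.random.choice(
      a=federated_train_data[0:expected_total_clients],
      size=number_of_participants_each_round[round_num],
      replace=False)
  state, metrics = iterative_process.next(state, list(sampled_clients))
  print('round {:2d}, metrics={}'.format(round_num + 1, metrics))
  # Reset the default execution context; executors cached for previous
  # client cardinalities are dropped and can be freed.
  tff.backends.native.set_local_execution_context()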