I trained a ViT model in TensorFlow for keypoint estimation, based on https://github.com/yangsenius/TransPose, and I want to reproduce the per-keypoint attention maps shown here: https://raw.githubusercontent.com/yangsenius/TransPose/main/attention_map_image_dependency_transposeh_thres_0.00075.jpg

I found PyTorch code that does this, but I didn't know how to reproduce it in TensorFlow: https://github.com/yangsenius/TransPose/blob/dab9007b6f61c9c8dce04d61669a04922bbcd148/visualize.py (L128)

I solved it by taking the output of the layer just before each multi-head attention layer and feeding it back through that attention layer with `return_attention_scores=True`:
```python
from tensorflow.keras.models import Model

def getLayerIndexByName(model, name):
    # Index of the layer called `name` in model.layers.
    return next(i for i, l in enumerate(model.layers) if l.name == name)

# One sub-model per encoder block, exposing the tensor that feeds the
# block's multi-head attention layer 'encoded_i'.
atten_maps_hooks = [Model(inputs=model.input,
                          outputs=model.layers[getLayerIndexByName(model, 'encoded_%d' % i) - 1].output)
                    for i in range(6)]

enc_atten_maps_hwhw = []
for i in range(len(atten_maps_hooks)):
    # Recompute the attention layer's input, then call the trained layer
    # again so it also returns the attention scores.
    temp = atten_maps_hooks[i].predict(input)
    mha, scores = model.get_layer('encoded_' + str(i))(temp, temp,
                                                       return_attention_scores=True)
    # scores: (batch, heads, tokens, tokens); with a single attention head
    # this unfolds each token axis into the (h, w) grid, where shape == (h, w).
    enc_atten_maps_hwhw.append(scores.numpy()[0].reshape(shape + shape))
```
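Re-calling the layer works because `model.get_layer(...)` returns the trained layer object, so its weights are reused; only the attention computation is repeated.

To turn one of these (h, w, h, w) maps into an overlay like the TransPose figure, you can pick the query token nearest a keypoint and upsample its attention row over the image. This is just a minimal sketch, not the TransPose code itself: `img`, `kp_x`, `kp_y`, and `show_keypoint_attention` are hypothetical names, and TransPose additionally thresholds the attention weights (the figure's filename suggests thres=0.00075):

```python
import cv2
import matplotlib.pyplot as plt
import numpy as np

def show_keypoint_attention(att_hwhw, img, kp_xy):
    # att_hwhw: one (h, w, h, w) entry of enc_atten_maps_hwhw
    # img:      original image as an (H, W, 3) uint8 array
    # kp_xy:    keypoint (x, y) in image coordinates
    h, w = att_hwhw.shape[:2]
    H, W = img.shape[:2]
    # Map the keypoint from image coordinates to the token grid.
    qy = min(int(kp_xy[1] * h / H), h - 1)
    qx = min(int(kp_xy[0] * w / W), w - 1)
    att = att_hwhw[qy, qx]  # (h, w): where this query token attends
    att = cv2.resize(att.astype(np.float32), (W, H))  # upsample to image size
    plt.imshow(img)
    plt.imshow(att, cmap='jet', alpha=0.5)  # heatmap overlay
    plt.scatter([kp_xy[0]], [kp_xy[1]], c='white', s=20)
    plt.axis('off')
    plt.show()

# e.g. the last encoder layer's attention at one predicted keypoint
show_keypoint_attention(enc_atten_maps_hwhw[-1], img, (kp_x, kp_y))
```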