I have a dataframe with roughly 15k audio file paths on which I want to perform an operation (artificially adding noise). In general the whole pipeline works, but it takes a long time even with very few records. The problem is not the execution time of the function itself, but the time that passes before the function actually starts running.
start = time.time()
data_augmented = data_augmented.swifter.progress_bar(True, desc="Merge Sounds") \
    .apply(merge_sounds(**settings), axis=1)
print(f"{time.time() - start} - Map Timer")
Merge Sounds: 100%|█████████████████████████████| 16/16 [00:07<00:00, 2.09it/s]
26.973325729370117 - Map Timer
As you can see here, the initialization takes almost four times as long as the actual runtime of the lambda function (merge_sounds). By initialization time I mean elapsed_time_measured_by_myself - elapsed_time_measured_by_tqdm, so in this case 26.97 - 7 = 19.97 seconds.
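A minimal sketch of how the two phases could be timed separately (same data_augmented and settings as above), to see whether the ~20 s of overhead is spent in the merge_sounds factory call or in swifter's setup before the first row is processed:
import time

t0 = time.time()
row_fn = merge_sounds(**settings)   # factory call: reads settings, creates target_path
t1 = time.time()
data_augmented = data_augmented.swifter.progress_bar(True, desc="Merge Sounds") \
    .apply(row_fn, axis=1)          # swifter setup + per-row work
t2 = time.time()
print(f"factory: {t1 - t0:.2f} s, swifter apply: {t2 - t1:.2f} s")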
start = time.time()
lambda_fn = merge_sounds(**settings)  # doesn't work if I put it directly in the call below
data_augmented = process_map(lambda_fn, data_augmented, max_workers=threads,
                             desc=f"Merge_sounds [{threads} Threads]")
print(f"{time.time() - start} - Map Timer")
This gets stuck at:
Merge_sounds [16 Threads]: 0%| | 0/16 [00:00<?, ?it/s]
with Pool(processes=16) as pool:
    data_augmented = pool.map(merge_sounds(**settings),
                              tqdm(data_augmented, desc=f"Merge Sounds: {16} Threads"))
This gets stuck at:
Merge Sounds: 16 Threads: 38%|██████ | 6/16 [00:00<00:00, 4697.75it/s]
I know that parallelization makes little sense for small datasets; I am just confused why I can parallelize everything else in my code so easily but cannot make any headway here. Later I will run this code on a much larger amount of data, so I would be very happy if the parallel version worked.
The function used in the map is:
def merge_sounds(**settings):
    # factory: validates the settings and returns a per-row callable
    _range = settings.get("snr_range", (0.15, 0.65))
    assert len(_range), "snr_range -> e.g. (0.15, 0.75)"
    target_sample_rate = settings.get("target_sample_rate", "16000")
    if "target_path" not in settings.keys():
        raise Exception("please Specify target_path in Settings-Dict")
    target_path = Path(settings["target_path"])
    target_path.mkdir(parents=True, exist_ok=True)

    def __call__(item):
        # item is a single dataframe row
        _target_path = item["path_augmented"]
        snr = round(uniform(_range[0], _range[1]), 4)
        pad_idx = item.name
        yp, _ = IO.load(item["path"], sample_rate=target_sample_rate)
        yn, _ = IO.load(item["path_noise"], sample_rate=target_sample_rate)
        item["snr"] = snr
        y_augmented = Effect.add_noise(yp, yn, snr=snr, pad_idx=pad_idx)
        IO.save_wav(y_augmented, _target_path, target_sample_rate)
        return item

    return __call__
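For context, a minimal usage sketch of the factory and the closure it returns, applied to a single row; the settings values here are assumptions (only target_path is mandatory):
settings = {
    "target_path": "augmented/",        # assumed output directory
    "snr_range": (0.15, 0.65),
    "target_sample_rate": 16000,
}
row_fn = merge_sounds(**settings)       # runs once: validates settings, creates target_path
first_row = data_augmented.iloc[0]      # a row with "path", "path_noise" and "path_augmented"
augmented_row = row_fn(first_row)       # loads both files, mixes them at a random SNR, writes the wav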
Am I forgetting something about the parallel map functions (this pattern seems to work everywhere else in my code)?
Thanks in advance.
Fixed this issue by bypassing TQDM. I zipped the columns I need, like this:
_paths_in = _df["path_input"]
_paths_out = _df["path_output"]
_path_noise = _df["path_noise"]
job = zip(_paths_in, _path_noise, _paths_out, _filter_jobs)
and then passed it to the multiprocessing function:
jobs = list(enumerate(zip_jobs(df)))
with Pool(processes=_threads) as pool:
    data_augmented = pool.map(execute_job,
                              tqdm(jobs, desc=f"Audio-Augmentation: {_threads} Threads"))
merge_sounds and the new execute_job are very similar; only the parameters the function expects have changed.
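For illustration, a hypothetical sketch of what such an execute_job could look like, given the (index, (path_in, path_noise, path_out, filter_job)) tuples built above; the exact signature, SNR range, and sample rate are assumptions, and it reuses the same IO / Effect / uniform helpers as merge_sounds. One plausible reason this form cooperates with pool.map where the closure did not is that a plain module-level function pickles cleanly:
def execute_job(job):
    # Hypothetical sketch: job comes from list(enumerate(zip_jobs(df)))
    # -> (index, (path_in, path_noise, path_out, filter_job))
    idx, (path_in, path_noise, path_out, filter_job) = job
    snr = round(uniform(0.15, 0.65), 4)            # assumed fixed SNR range
    yp, _ = IO.load(path_in, sample_rate=16000)    # assumed sample rate
    yn, _ = IO.load(path_noise, sample_rate=16000)
    y_augmented = Effect.add_noise(yp, yn, snr=snr, pad_idx=idx)
    IO.save_wav(y_augmented, path_out, 16000)
    return path_out, snr                           # filter_job handling omitted here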