subprocess.run() fails on the second iteration



I want to actively (online) learn a statistical model.

This means I have an initial training dataset (x-y pairs) that is known up front, at "compile time".

However, because of the active (online) nature, more data arrives at runtime from a third-party program (a cpp simulation code).

I do this in Python with GPyTorch, and I call the third-party program via Python's subprocess module.

My problem is a programming one rather than a GPyTorch or statistics one, hence my question here.

The workflow is as follows: Python specifies the input parameters for the .cpp run, creates a new folder named after those parameters, enters that folder, runs the .cpp code, collects the data that appears in the folder, updates the statistical model, and then the whole cycle repeats (e.g. 100 times).

In a WSL1 terminal I normally run the .cpp code with: $ mpirun -n 1 smilei namelist.py, where the command is executed inside a folder that contains the smilei executable and a .py file called namelist.py.
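For reference, here is roughly how that shell command maps onto a subprocess.run() argument list (a minimal sketch, not my exact call; run_dir is a placeholder path, and passing cwd= is simply an alternative to calling os.chdir() first):

# Minimal sketch (placeholder path): the shell command
#   $ mpirun -n 1 smilei namelist.py
# expressed as an argument list for subprocess.run(); cwd= makes the child
# process start inside the folder that holds smilei and namelist.py.
import subprocess

run_dir = "/path/to/run_folder"  # placeholder, not my real path
cp = subprocess.run(["mpirun", "-n", "1", "smilei", "namelist.py"], cwd=run_dir)
print(cp.returncode)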

The Python workflow returns exit code 0 (and the necessary data) on the first iteration of my active-learning loop, but fails with exit code 1 on the second iteration. It basically does its job on the first iteration and then fails on the second.

I tried subprocess.run() and os.system() (see the code below; all my previous attempts are commented out), passing inside the parentheses the command I normally run in a BASH Windows Subsystem for Linux 1 terminal to launch the third-party cpp program.

I cannot work out why it fails the second time.

I tried printing the subprocess's stdout and stderr: when queried on the second iteration of the active-learning loop they both come back as empty lines, nothing shows up at all (no stdout and no stderr).
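This is the kind of pattern I used when trying to capture the output (a minimal sketch mirroring the call in the code below; capture_output=True requires Python 3.7+, and text=True decodes the captured bytes to str):

# Minimal sketch of capturing the child's output for inspection.
import subprocess

cp = subprocess.run(["mpirun", "-n", "1", "../smilei", particular_namelist_name],
                    capture_output=True, text=True)
print("return code:", cp.returncode)
print("stdout:", cp.stdout)
print("stderr:", cp.stderr)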

I know the code below may look complicated, but it really isn't. It just follows the workflow I described above.

def SMILEI(I):
    os.chdir(top_folder_path)
    # create a new folder called a0_942.782348987103 (example value)
    a0 = "%.13f" % a0_from_IntensityWcm2(I)
    dirname = "a0_%.13f" % a0_from_IntensityWcm2(I)
    os.mkdir(dirname)
    # enter the created folder
    os.chdir(top_folder_path + "/" + dirname)
    print("We change the directory and entered the newly created one!")
    # copy general namelist into this newly created folder
    shutil.copy(top_folder_path + "/" + general_namelist_name, ".")
    print("We copied the general namelist!")
    # add the a0 value to the general namelist, i.e. add a line a0 = 942.782348987103 , at row 8 (empty row) in the general namelist.
    with open(general_namelist_name, 'r+') as fd:
        contents = fd.readlines()
        contents.insert(8, "a0 = {}".format(a0))  # new_string should end in a newline
        fd.seek(0)  # readlines consumes the iterator, so we need to start over
        fd.writelines(contents)  # No need to truncate as we are increasing filesize
    print("We modified the general namelist to contain the line a0 = ..., at line 8")
    # rename the modified namelist
    os.rename(general_namelist_name, particular_namelist_name)
    print("We renamed the general namelist to namelist_Xe_GPtrial_noOAM_a0included.py")
    # run the simulation
    print("We'll be now running the SMILEI command inside the folder: ")
    print(os.getcwd())
    print("The smilei executable's absolute path as dictated by os is: ")
    print(os.path.abspath("../smilei"))
    cp = subprocess.run(["mpirun", "-n", "1", os.path.abspath("../smilei"), particular_namelist_name],
                        # stdin=subprocess.DEVNULL, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                        # stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                        # capture_output=True)
                        )
    print("The return code is: ")
    print(cp.returncode)
    # os.system("mpirun -n 1 ../smilei {}".format(particular_namelist_name))
    # subprocess.run("mpirun -n 1 ../smilei {}".format(particular_namelist_name), shell=True)
    # print(cp.stdout) # Y
    # print(cp.stderr)
    # print(cp.returncode)
    # get the results of the simulation
    # os.chdir(top_folder_path + "/" + dirname)
    # print("We changed the directory again and entered again the newly created one!")
    S = happi.Open(".")
    pbb = S.ParticleBinning(0).get()
    results_dict = dict()
    for z in range(len(pbb['data'][-1])):
        results_dict['c_%d' % z] = pbb['data'][-1][z]
    return np.asarray(list(results_dict.values()))

if __name__ == '__main__':
    # Initial Train Dataset:
    x_train = torch.from_numpy(np.array([0.1, 0.3, 0.5, 0.6, 0.8]))
    y_train = torch.from_numpy(np.array([0.1, 0.2, 0.3, 0.4, 0.5]))
    # initialize likelihood and model
    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = ExactGPModel(x_train, y_train, likelihood)
    model.train()
    likelihood.train()
    # "Loss" for GPs - the marginal log likelihood
    mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
    training_iters = 10
    for i in range(training_iters):
        optimizer.zero_grad()
        output = model(x_train)
        loss   = - mll(output, y_train)
        loss.backward()
        print('Iter %d/%d' % (i+1, training_iters))
        optimizer.step()
    Xn = x_train
    Yn = y_train
    ######################################################################################
    # The Active-Learning (AL) loop:
    budget_value = 100
    for i in range(budget_value):
        OldValues = lhs(1, samples=100)
        Xref = range_transform(OldValues, 10.0**20, 10.0**25)
        x_nplus1 = xnp1search(model, Xn, Xref)  # x_nplus1 is Intensity in W/cm2 at which to run SMILEI next for Active-Learning the GP fit
        y_nplus1 = SMILEI(x_nplus1.detach().numpy())[53]  # SMILEI(x_nplus1.detach().numpy()) returns an ndarray of shape (55,)
        Xn = torch.cat(   ( Xn, torch.reshape(x_nplus1, (1,)) )   )
        Yn = torch.cat(   ( Yn, torch.reshape(torch.from_numpy(np.reshape(y_nplus1, (1,))), (1,)) )   )
        model.set_train_data(Xn, Yn, strict=False)
        for j in range(training_iters):
            optimizer.zero_grad()
            output = model(Xn)
            loss = -mll(output, Yn)
            loss.backward()
            print('Iter %d/%d' % (j+1, training_iters) + 'inside AL step number %d/%d' % (i+1, budget_value))
            optimizer.step()

Why does it fail the second time?

I just can't see it. I cannot debug it, I get no error message or anything; it simply does not run the simulation inside the second created folder, which at the end of the Python script contains only namelist_Xe_GPtrial_noOAM_a0included.py with the a0 value in it (as it should).

Thank you!

I can think of two options. One is to wrap the subprocess call in try: ... except subprocess.CalledProcessError as e: print(e). That way you get the error. The other option is to print out the cmd and run it on the command line to see any error. It may be that a variable is missing the second time the code executes.
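Something along these lines for the first option (a sketch reusing the command from the question; note that subprocess.run() only raises CalledProcessError if you pass check=True):

# Sketch: make run() raise on a non-zero exit code and print what the child wrote.
import subprocess

try:
    cp = subprocess.run(["mpirun", "-n", "1", "../smilei", particular_namelist_name],
                        check=True, capture_output=True, text=True)
except subprocess.CalledProcessError as e:
    print("exit code:", e.returncode)
    print("stdout:", e.stdout)
    print("stderr:", e.stderr)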

I solved this myself with the plumbum module.

My code stayed exactly the same; all of it was fine.

However, I replaced the subprocess.run() command (and the many variants of it I had tried) with smi = local.cmd.mpirun followed by smi("-n", "1", "../smilei", particular_namelist_name), and now it runs on every iteration of the loop!
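For completeness, a minimal sketch of that plumbum variant (assuming plumbum is installed; calling the command object runs it, returns its stdout, and raises plumbum.ProcessExecutionError on a non-zero exit code):

# Minimal sketch of the plumbum-based call that replaced subprocess.run().
from plumbum import local

smi = local.cmd.mpirun  # looks up the mpirun executable on PATH
stdout = smi("-n", "1", "../smilei", particular_namelist_name)  # runs and returns stdout
print(stdout)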
