I'm relatively new to R and am trying to figure out how to use the `cpquery` function from the bnlearn package on all of the edges of a DAG.
First, I create a bn object, a fitted network, and a table with all the arc strengths.
library(bnlearn)
library(dplyr)  # needed for %>% and mutate() below
data(learning.test)
baynet = hc(learning.test)
fit = bn.fit(baynet, learning.test)
sttbl = arc.strength(x = baynet, data = learning.test)
Then I try to create a new variable in the sttbl dataset, which holds the result of the `cpquery` function.
sttbl = sttbl %>% mutate(prob = NA) %>% arrange(strength)
sttbl[1,4] = cpquery(fit, `A` == 1, `D` == 1)
It looks pretty good (especially on bigger data), but when I try to automate this process somehow, I keep running into errors like:

Error in sampling(fitted = fitted, event = event, evidence = evidence, : logical vector for evidence is of length 1 instead of 10000.

Ideally, I need a function that fills the prob variable of the sttbl dataset no matter how large it is. I tried to do that with a for loop, but stumbled over the error above again and again. Unfortunately, I deleted my failed attempts, but they looked something like this:
for (i in 1:nrow(sttbl)) {
j = sttbl[i,1]
k = sttbl[i,2]
sttbl[i,4]=cpquery(fit, fit$j %in% sttbl[i,1]==1, fit$k %in% sttbl[i,2]==1)
}
or this:
for (i in 1:nrow(sttbl)) {
sttbl[i,4]=cpquery(fit, sttbl[i,1] == 1, sttbl[i,2] == 1)
}
By now I think I'm misunderstanding something in R or in the bnlearn package.
Could you tell me how to accomplish this task by filling a column with the results of multiple `cpquery` calls? It would help my research a lot!
`cpquery` is difficult to work with programmatically. If you look at the examples in the help page, you will see that the author uses `eval(parse(...))` to build the queries. I have added two approaches below: one using the method from the help page, and another using `cpdist` to draw samples and reweight them to obtain the probabilities.

Your example:
library(bnlearn); library(dplyr)
data(learning.test)
baynet = hc(learning.test)
fit = bn.fit(baynet, learning.test)
sttbl = arc.strength(x = baynet, data = learning.test)
sttbl = sttbl %>% mutate(prob = NA) %>% arrange(strength)
This uses `cpquery` and the much-maligned `eval(parse(...))`; it is the approach the bnlearn author takes to do this programmatically in the `?cpquery` examples. Anyway,
# You want the evidence and event to be the same; in your question it is `1`
# but for example using learning.test data we use 'a'
state = "'a'" # note if the states are character then these need to be quoted
event = paste(sttbl$from, "==", state)
evidence = paste(sttbl$to, "==", state)
# loop through using code similar to that found in `cpquery`
set.seed(1) # to make sampling reproducible
for(i in 1:nrow(sttbl)) {
qtxt = paste("cpquery(fit, ", event[i], ", ", evidence[i], ",n=1e6", ")")
sttbl$prob[i] = eval(parse(text=qtxt))
}
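To see what the loop is actually evaluating, it can help to print one of the constructed query strings before it is parsed. This sketch just inspects the first assembled query (the `A -> D` arc after sorting by strength); it introduces no new functions:

```r
# Peek at the query text built for row 1 before eval(parse(...)) runs.
# With state = "'a'" this produces something like:
#   cpquery(fit,  A == 'a' ,  D == 'a' ,n=1e6 )
qtxt = paste("cpquery(fit, ", event[1], ", ", evidence[1], ",n=1e6", ")")
cat(qtxt, "\n")
```

Printing the string first is a useful sanity check: if a node name or state is quoted incorrectly, you will see it here instead of getting an opaque sampling error.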
I find it preferable to use `cpdist`, which generates random samples conditional on some evidence. These samples can then be used to build up the queries. If you use likelihood weighting (`method="lw"`), it is slightly easier to do this programmatically (and without the evil `eval(parse(...))`). The evidence is passed as a named list, i.e. `list(A = 'a')`.
# The following just gives a quick way to assign the same
# evidence state to all the evidence nodes.
evidence = setNames(replicate(nrow(sttbl), "a", simplify = FALSE), sttbl$to)
# Now loop though the queries
# As we are using likelihood weighting we need to reweight to get the probabilities
# (cpquery does this under the hood)
# Also note with this method that you could simulate from more than
# one variable (event) at a time if the evidence was the same.
for(i in 1:nrow(sttbl)) {
temp = cpdist(fit, sttbl$from[i], evidence[i], method="lw")
w = attr(temp, "weights")
sttbl$prob2[i] = sum(w[temp=='a'])/ sum(w)
}
sttbl
# from to strength prob prob2
# 1 A D -1938.9499 0.6186238 0.6233387
# 2 A B -1153.8796 0.6050552 0.6133448
# 3 C D -823.7605 0.7027782 0.7067417
# 4 B E -720.8266 0.7332107 0.7328657
# 5 F E -549.2300 0.5850828 0.5895373
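Since the question asked for a function that works for an arc table of any size, the `cpdist` loop above can be wrapped up as follows. This is only a sketch: the function name `arc_probs` is mine, not part of bnlearn, and it assumes every node uses the same evidence state:

```r
# Sketch of a reusable helper around the likelihood-weighting loop.
# arc_probs() is a hypothetical name, not a bnlearn function.
arc_probs = function(fit, sttbl, state = "a") {
  probs = numeric(nrow(sttbl))
  for (i in seq_len(nrow(sttbl))) {
    # Build the evidence list for this arc, e.g. list(D = "a")
    ev = setNames(list(state), sttbl$to[i])
    temp = cpdist(fit, sttbl$from[i], ev, method = "lw")
    # Reweight the likelihood-weighted samples to get the probability
    w = attr(temp, "weights")
    probs[i] = sum(w[temp == state]) / sum(w)
  }
  sttbl$prob = probs
  sttbl
}

sttbl = arc_probs(fit, sttbl)
```

The same pattern extends to per-arc states: replace the single `state` argument with a column of states in `sttbl` if different arcs need different evidence values.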