Relevance Reasoning with pyAgrum
Relevance reasoning is the analysis of the influence of evidence on a Bayesian network.
In this notebook, we explain what relevance reasoning is and how to use it with pyAgrum.
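As a first intuition, before turning to the alarm network, here is a minimal sketch on a hypothetical chain A→B→C (not part of this notebook): once B receives hard evidence, C is d-separated from A, so any evidence on A is irrelevant for the posterior of C and a relevance-aware engine can simply discard it.

```python
import pyagrum as gum

# hypothetical toy chain, only to illustrate what "relevance" means
bn = gum.fastBN("A->B->C")

# with hard evidence on B, evidence on A cannot change the posterior of C
p_b_only = gum.getPosterior(bn, evs={"B": 0}, target="C")
p_b_and_a = gum.getPosterior(bn, evs={"B": 0, "A": 1}, target="C")

print(p_b_only)
print(p_b_and_a)  # identical to p_b_only: A is d-separated from C given B
```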
import pyagrum as gum
import pyagrum.lib.notebook as gnb
import time

%matplotlib inline
from pylab import *

Multiple inference
In the well-known 'alarm' BN, how can we analyze the influence on 'VENTALV' of a soft evidence on 'MINVOLSET'?
bn = gum.loadBN("res/alarm.dsl")
gnb.showBN(bn, size="6")

We propose to plot the posterior of 'VENTALV' for the evidence on 'MINVOLSET' being [0, x, 0.5], with x ranging over [0, 1]. To do so, we perform a large number of inferences and plot the posteriors.
K = 1000
r = range(0, K)
xs = [x / K for x in r]
def getPlot(xs, ys, K, duration):
    p = plot(xs, ys)
    legend(p, [bn["VENTALV"].label(i) for i in range(bn["VENTALV"].domainSize())], loc=7)
    title("VENTALV ({} inferences in {:5.3} s)".format(K, duration))
    ylabel("posterior Probability")
    xlabel("Evidence on MINVOLSET : [0,x,0.5]")

First try : classical lazy propagation
tf = time.time()
ys = []
for x in r:
    ie = gum.LazyPropagation(bn)
    ie.setNumberOfThreads(1)  # to be fair, we avoid multithreaded inference
    ie.addEvidence("MINVOLSET", [0, x / K, 0.5])
    ie.makeInference()
    ys.append(ie.posterior("VENTALV").tolist())
delta1 = time.time() - tf
getPlot(xs, ys, K, delta1)

The title of the figure above gives the time needed for those 1000 inferences.
Second try : classical variable elimination
One can note that we only need a single posterior. This is a case where VariableElimination should give better results: by default, LazyPropagation prepares the posteriors of every node, whereas VariableElimination computes only the requested one.
tf = time.time()
ys = []
for x in r:
    ie = gum.VariableElimination(bn)
    ie.addEvidence("MINVOLSET", [0, x / K, 0.5])
    ie.makeInference()
    ys.append(ie.posterior("VENTALV").tolist())
delta2 = time.time() - tf
getPlot(xs, ys, K, delta2)

pyAgrum provides the function gum.getPosterior to do this same job more easily.
tf = time.time()
ys = [gum.getPosterior(bn, evs={"MINVOLSET": [0, x / K, 0.5]}, target="VENTALV").tolist() for x in r]
getPlot(xs, ys, K, time.time() - tf)

Last try : optimized Lazy propagation with relevance reasoning and incremental inference
Optimized inference in aGrUM can use the targets and the evidence to prune the computations. This is called relevance reasoning.
Moreover, if the values of the evidence change but not the structure of the query (same target nodes, same hard-evidence nodes, same soft-evidence nodes), inference in aGrUM may reuse some computations from one query to the next. This is called incremental inference.
tf = time.time()
ie = gum.LazyPropagation(bn)
ie.setNumberOfThreads(1)  # to be fair, we avoid multithreaded inference
ie.addEvidence("MINVOLSET", [1, 1, 1])
ie.addTarget("VENTALV")
ys = []
for x in r:
    ie.chgEvidence("MINVOLSET", [0, x / K, 0.5])
    ie.makeInference()
    ys.append(ie.posterior("VENTALV").tolist())
delta3 = time.time() - tf
getPlot(xs, ys, K, delta3)

print("Mean duration of a lazy propagation : {:5.3f}ms".format(1000 * delta1 / K))
print("Mean duration of a variable elimination : {:5.3f}ms".format(1000 * delta2 / K))
print("Mean duration of an optimized lazy propagation : {:5.3f}ms".format(1000 * delta3 / K))

Mean duration of a lazy propagation : 17.086ms
Mean duration of a variable elimination : 1.514ms
Mean duration of an optimized lazy propagation : 1.458ms

How it works
bn = gum.fastBN("Y->X->T1;Z2->X;Z1->X;Z1->T1;Z1->Z3->T2")
ie = gum.LazyPropagation(bn)
gnb.flow.row(
    bn,
    bn.cpt("X"),
    gnb.getJunctionTree(bn),
    gnb.getJunctionTreeMap(bn, size="3!"),
    captions=["BN", "potential", "Junction Tree", "The map"],
)

(Output: the BN, the CPT of X, the junction tree and its map, displayed side by side.)
aGrUM/pyAgrum uses relevance reasoning techniques as much as possible to reduce the complexity of the inference.
ie.setEvidence({"X": 0})
gnb.sideBySide(
    ie,
    gnb.getDot(ie.joinTree().toDotWithNames(bn)),
    ie.joinTree().map(),
    captions=["", "Join tree optimized for hard evidence on X", "the map"],
)

ie.updateEvidence({"X": [0.1, 0.9]})
gnb.sideBySide(
    ie,
    gnb.getDot(ie.joinTree().toDotWithNames(bn)),
    ie.joinTree().map(),
    captions=["", "Join tree optimized for soft evidence on X", "the map"],
)

ie.updateEvidence({"Y": 0, "X": 0, 3: [0.1, 0.9], "Z1": [0.4, 0.6]})  # node id 3 is "Z2"
gnb.sideBySide(
    ie,
    gnb.getDot(ie.joinTree().toDotWithNames(bn)),
    ie.joinTree().map(),
    captions=["", "Join tree optimized for hard evidence on X and Y, soft on Z2 and Z1", "the map"],
)

ie.setEvidence({"X": 0})
ie.setTargets({"T1", "Z1"})
gnb.sideBySide(
    ie,
    gnb.getDot(ie.joinTree().toDotWithNames(bn)),
    ie.joinTree().map(),
    captions=["", "Join tree optimized for hard evidence on X and targets T1,Z1", "the map"],
)

ie.updateEvidence({"Y": 0, "X": 0, 3: [0.1, 0.9], "Z1": [0.4, 0.6]})
ie.addJointTarget({"Z2", "Z1", "T1"})
gnb.sideBySide(
    ie,
    gnb.getDot(ie.joinTree().toDotWithNames(bn)),
    ie.joinTree().map(),
    captions=["", "Join tree optimized for hard evidence on X and Y, soft on Z2 and Z1, with joint target {Z2,Z1,T1}", "the map"],
)

ie.makeInference()
ie.jointPosterior({"Z2", "Z1", "T1"})
(Output: the joint posterior table over Z2, Z1 and T1.)
ie.jointPosterior({"Z2", "Z1"})

(Output: the joint posterior table over Z2 and Z1.)
# this will not work
try:
    ie.jointPosterior({"Z3", "Z1"})
except gum.UndefinedElement:
    print("Indeed, there is no joint target which contains {4,5} !")

Indeed, there is no joint target which contains {4,5} !

ie.addJointTarget({"Z2", "Z1"})
gnb.sideBySide(ie, gnb.getDot(ie.joinTree().toDotWithNames(bn)), captions=["", "JoinTree"])
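The exception above is expected: jointPosterior can only answer a query whose set of nodes is included in one of the declared joint targets, and {Z3, Z1} (nodes 4 and 5) is included in none of them. Here is a minimal sketch of the fix, simply declaring the missing joint target first so that the join tree is rebuilt accordingly:

```python
# sketch: once {Z3, Z1} is declared as a joint target, the same query succeeds
ie.addJointTarget({"Z3", "Z1"})
ie.makeInference()
print(ie.jointPosterior({"Z3", "Z1"}))
```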
