# Kullback-Leibler for Bayesian networks
```python
%matplotlib inline
from pylab import *
```

## import pyagrum and pyagrum.lib.notebook (for … notebooks :-) )

```python
import pyagrum as gum
import pyagrum.lib.notebook as gnb
```

## Create a first BN : bn
```python
bn = gum.loadBN("res/asia.bif")
# randomly re-generate parameters for every Conditional Probability Table
bn.generateCPTs()
bn
```

## Create a second BN : bn2
```python
bn2 = gum.loadBN("res/asia.bif")
bn2.generateCPTs()
bn2
```

## bn vs bn2 : different parameters
```python
gnb.flow.row(bn.cpt(3), bn2.cpt(3), captions=["a CPT in bn", "same CPT in bn2 (with different parameters)"])
```

| a CPT in bn | |
|---|---|
| 0.6700 | 0.3300 |
| 0.4316 | 0.5684 |

| same CPT in bn2 (with different parameters) | |
|---|---|
| 0.2943 | 0.7057 |
| 0.4315 | 0.5685 |
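Beyond eyeballing the two tables, the raw parameters can be compared directly. A minimal sketch, assuming (as in recent pyAgrum versions) that full-slice indexing `cpt[:]` returns the parameters as a numpy array:

```python
import numpy as np

# both BNs share the asia structure, so the two CPTs have the same shape
diff = np.abs(bn.cpt(3)[:] - bn2.cpt(3)[:])
print("largest parameter difference:", diff.max())
```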
## Exact and (Gibbs) approximated KL-divergence

In order to compute a KL-divergence, we just have to make sure that the two distributions are defined on the same domain (same variables, etc.).
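As a reminder, for two Bayesian networks $P$ and $Q$ over the same joint domain, the two directed divergences reported below (the `klPQ` and `klQP` keys in the results) are, up to the base of the logarithm:

$$\mathrm{KL}(P\,\|\,Q)=\sum_{x}P(x)\log\frac{P(x)}{Q(x)},\qquad \mathrm{KL}(Q\,\|\,P)=\sum_{x}Q(x)\log\frac{Q(x)}{P(x)},$$

where $x$ ranges over all joint configurations of the shared variables. KL is not symmetric, which is why both directions are reported, together with the Hellinger, Bhattacharya and Jensen-Shannon distances.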
### Exact KL
```python
g1 = gum.ExactBNdistance(bn, bn2)
print(g1.compute())
```

```
{'klPQ': 2.476584381649645, 'errorPQ': 0, 'klQP': 2.244520928404808, 'errorQP': 0, 'hellinger': 0.813592705605187, 'bhattacharya': 0.4019212130892461, 'jensen-shannon': 0.4136335100698562}
```
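`compute()` returns a plain Python dict, so any single indicator can be picked out directly, e.g.:

```python
results = g1.compute()
# read individual indicators from the returned dict
print("KL(P||Q)       = {0:.4f}".format(results["klPQ"]))
print("Jensen-Shannon = {0:.4f}".format(results["jensen-shannon"]))
```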
If the models are not defined on the same domain:

```python
bn_different_domain = gum.loadBN("res/alarm.dsl")

# g = gum.BruteForceKL(bn, bn_different_domain)  # a KL-divergence between asia and alarm ... :(
#
# would cause:
#
# ---------------------------------------------------------------------------
# OperationNotAllowed                       Traceback (most recent call last)
#
# OperationNotAllowed: this operation is not allowed : KL : the 2 BNs are not compatible (not the same vars : visit_to_Asia?)
```
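One way to avoid the exception is to check the domains beforehand. This is only a sketch (the helper `same_domain` is hypothetical, not part of pyAgrum), comparing the sets of variable names of the two networks:

```python
def same_domain(b1, b2):
    # hypothetical helper: here, two BNs are considered comparable
    # when they carry exactly the same set of variable names
    names1 = {b1.variable(n).name() for n in b1.nodes()}
    names2 = {b2.variable(n).name() for n in b2.nodes()}
    return names1 == names2

if same_domain(bn, bn_different_domain):
    print(gum.ExactBNdistance(bn, bn_different_domain).compute())
else:
    print("incompatible domains : no KL computation")
```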
### Gibbs-approximated KL

```python
g = gum.GibbsBNdistance(bn, bn2)
g.setVerbosity(True)
g.setMaxTime(120)
g.setBurnIn(5000)
g.setEpsilon(1e-7)
g.setPeriodSize(500)
print(g.compute())
print("Computed in {0} s".format(g.currentTime()))
```

```
{'klPQ': 2.475361213496724, 'errorPQ': 0, 'klQP': 2.1957241806099814, 'errorQP': 0, 'hellinger': 0.8105538873770256, 'bhattacharya': 0.3989042244816927, 'jensen-shannon': 0.411228001161747}
Computed in 1.338172 s
```
```python
print("--")
print(g.messageApproximationScheme())
print("--")
print("Computation time : {0}".format(g.currentTime()))
print("Number of iterations : {0}".format(g.nbrIterations()))
```

```
--
stopped with epsilon=1e-07
--
Computation time : 1.338172
Number of iterations : 380500
```

```python
p = plot(g.history(), "g")
```
## Animation of Gibbs KL

Since it may be difficult to know what happens during an approximation algorithm, pyAgrum allows you to follow the iterations using an animated matplotlib figure.
```python
g = gum.GibbsBNdistance(bn, bn2)
g.setMaxTime(60)
g.setBurnIn(500)
g.setEpsilon(1e-7)
g.setPeriodSize(5000)
gnb.animApproximationScheme(g)  # logarithmic scale for Y
g.compute()
```

```
{'klPQ': 2.469542051480538, 'errorPQ': 0, 'klQP': 2.1146518638711105, 'errorQP': 0, 'hellinger': 0.8035301805274394, 'bhattacharya': 0.404240337375901, 'jensen-shannon': 0.40444581031110494}
```
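As a sanity check, the Gibbs estimates can be put side by side with the exact values; a sketch reusing `g1` from the exact computation above (note that `compute()` triggers a fresh Gibbs run on `g`):

```python
exact = g1.compute()   # exact values (tractable on a small BN such as asia)
approx = g.compute()   # Gibbs estimates with the settings above
for key in ("klPQ", "klQP", "hellinger", "jensen-shannon"):
    print("{0}: exact={1:.4f}  Gibbs={2:.4f}".format(key, exact[key], approx[key]))
```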
