
Conditional Linear Gaussian models

import pyagrum as gum
import pyagrum.lib.notebook as gnb
import pyagrum.lib.bn_vs_bn as gcm
import pyagrum.clg as gclg
import pyagrum.clg.notebook as gclgnb

Suppose we want to build a CLG with these specifications: $A={\cal N}(5,1)$, $B={\cal N}(4,3)$ and $C=2\cdot A+3\cdot B+{\cal N}(3,2)$.

model = gclg.CLG()
model.add(gclg.GaussianVariable("A", 5, 1))
model.add(gclg.GaussianVariable("C", 3, 2))
model.add(gclg.GaussianVariable("B", 4, 3))
model.addArc("A", "C", 2)
model.addArc("B", "C", 3)
model
[CLG graph: A (μ=5.000, σ=1.000) and B (μ=4.000, σ=3.000) are parents of C (μ=3.000, σ=2.000), with arc coefficients A→C: 2.00 and B→C: 3.00]

We can create a Conditional Linear Gaussian Bayesian network (CLG model) using a SEM-like syntax.

A = 4.5 [0.3] means that the mean of the distribution for the Gaussian random variable A is 4.5 and its standard deviation is 0.3.

B = 3 + 0.8F [0.3] means that the conditional mean of the Gaussian random variable B is 3 + 0.8F (a linear dependence on its parent F) and that its standard deviation is 0.3.

pyagrum.CLG.SEM is a set of static methods to manipulate this kind of SEM.

sem2 = """
A=4.5 [0.3] # comments are allowed
F=7 [0.5]
B=3 + 1.2F [0.3]
C=9 + 2A + 1.5B [0.6]
D=9 + C + F[0.7]
E=9 + D [0.9]
"""
model2 = gclg.SEM.toclg(sem2)
gnb.show(model2)

[graph of model2]

One can of course build the SEM from a CLG using pyagrum.CLG.SEM.tosem:

gnb.flow.row(
model,
"<pre><div align='left'>" + gclg.SEM.tosem(model) + "</div></pre>",
captions=["the first CLG model", "the SEM from the CLG"],
)
B=4[3]
A=5[1]
C=3+2A+3B[2]

the SEM from the CLG

This SEM syntax also provides an input/output file format for CLG models.

gclg.SEM.saveCLG(model2, "out/model2.sem")
print("=== file content ===")
with open("out/model2.sem", "r") as file:
for line in file.readlines():
print(line, end="")
print("====================")
=== file content ===
F=7.0[0.5]
B=3.0+1.2F[0.3]
A=4.5[0.3]
C=9.0+2.0A+1.5B[0.6]
D=9.0+F+C[0.7]
E=9.0+D[0.9]
====================
model3 = gclg.SEM.loadCLG("out/model2.sem")
gnb.sideBySide(model2, model3, captions=["saved model", "loaded model"])
import pickle
with open("out/testCLG.pkl", "bw") as f:
pickle.dump(model3, f)
model3
[CLG graph of model3: F (μ=7.0, σ=0.5), A (μ=4.5, σ=0.3), B (μ=3.0, σ=0.3), C (μ=9.0, σ=0.6), D (μ=9.0, σ=0.7), E (μ=9.0, σ=0.9); arcs F→B 1.20, F→D 1.00, A→C 2.00, B→C 1.50, C→D 1.00, D→E 1.00]
with open("out/testCLG.pkl", "br") as f:
copyModel3 = pickle.load(f)
copyModel3
[CLG graph of copyModel3, identical to model3]
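
A quick way to check that the unpickled copy matches the original is to compare their SEM serializations (a small sketch reusing the tosem method shown above):

print(gclg.SEM.tosem(model3) == gclg.SEM.tosem(copyModel3))  # True if the round-trip preserved the model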

Compute some posteriors using exact inference.

ie = gclg.CLGVariableElimination(model2)
ie.updateEvidence({"D": 3})
print(ie.posterior("A"))
print(ie.posterior("B"))
print(ie.posterior("C"))
print(ie.posterior("D"))
print(ie.posterior("E"))
print(ie.posterior("F"))
v = ie.posterior("E")
print(v)
print(f" - mean(E|D=3)={v.mu()}")
print(f" - stdev(E|D=3)={v.sigma()}")
A:1.9327650111193468[0.28353638852446156]
B:-2.5058561897702[0.41002992170553515]
C:3.9722757598220895[0.5657771474513671]
D:3[0]
E:12.0[0.9]
F:-2.9836916234247597[0.32358490464094586]
E:12.0[0.9]
- mean(E|D=3)=12.0
- stdev(E|D=3)=0.9
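
These values can be checked by hand from the SEM above: E = 9 + D [0.9], so with the evidence D=3 the posterior of E is simply N(9 + 3, 0.9), exactly what the inference engine returns. A tiny check in plain Python (not a pyAgrum call):

v = ie.posterior("E")
print(v.mu(), 9 + 3)   # both 12.0
print(v.sigma(), 0.9)  # both 0.9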
gnb.sideBySide(
model2,
gclgnb.getInference(model2, evs={"D": 3}, size="3!"),
gclgnb.getInference(model2, evs={"D": 3, "F": 1}),
captions=["The CLG", "First inference", "Second inference"],
)

Approximate inference: Monte Carlo sampling


When the model is too complex for exact inference, we can use forward sampling to generate 5000 samples from the original CLG model.

fs = gclg.ForwardSampling(model2)
fs.makeSample(5000).tocsv("./out/model2.csv")
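
Before learning, we can take a quick look at the generated database (a quick sketch assuming pandas is available, which the topandas() export used later also relies on):

import pandas as pd

samples = pd.read_csv("./out/model2.csv")
print(samples.head())      # first rows of the generated database
print(samples.describe())  # empirical means and standard deviations per variable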

We will use the generated database for learning. But before that, we can also compute the posteriors without evidence and compare them with the sample statistics:

ie = gclg.CLGVariableElimination(model2)
print("| 'Exact' inference | Results from sampling |")
print("|------------------------------------------|------------------------------------------|")
for i in model2.names():
    print(f"| {str(ie.posterior(i)):40} | {str(gclg.GaussianVariable(i, fs.mean_sample(i), fs.stddev_sample(i))):40} |")
| 'Exact' inference | Results from sampling |
|------------------------------------------|------------------------------------------|
| A:4.499999999999998[0.3] | A:4.500061006891443[0.29760719227927485] |
| F:7.000000000000008[0.5000000000000002] | F:6.998387385505476[0.5059458347122918] |
| B:11.399999999999999[0.6708203932499367] | B:11.392812613702848[0.677718501416994] |
| C:35.099999999999994[1.3162446581088183] | C:35.09569409219343[1.3276896051160967] |
| D:51.10000000000002[1.8364367672206963] | D:51.08723020101847[1.8594821539393176] |
| E:60.100000000000016[2.0451161336217565] | E:60.10490625818644[2.05927154997816] |

Now with the generated database and the original model, we can calculate the log-likelihood of the model.

print("log-likelihood w.r.t orignal model : ", model2.logLikelihood("./out/model2.csv"))
log-likelihood w.r.t orignal model : -22241.484436882653

Use the generated database to do RAveL learning. This part needs some time to run.

## RAveL learning
learner = gclg.CLGLearner("./out/model2.csv")

We can get the learned model with the function learnCLG(), which performs both structure learning and parameter estimation.

learned_clg = learner.learnCLG()
gnb.sideBySide(model2, learned_clg, captions=["original CLG", "learned CLG"])

Compare the learned model’s structure with that of the original model.

cmp = gcm.GraphicalBNComparator(model2, learned_clg)
print(f"F-score(original_clg,learned_clg) : {cmp.scores()['fscore']}")
F-score(original_clg,learned_clg) : 0.6666666666666666
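
Beyond the structural F-score, one could also compare how well both models fit the generated database by reusing the logLikelihood method shown earlier (a sketch; the learned model is expected to score close to the original one):

print("log-likelihood of the original model :", model2.logLikelihood("./out/model2.csv"))
print("log-likelihood of the learned model  :", learned_clg.logLikelihood("./out/model2.csv"))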

Get the learned model’s parameters and compare them with the original model’s parameters using the SEM syntax.

gnb.flow.row(
"<pre><div align='left'>" + gclg.SEM.tosem(model2) + "</div></pre>",
"<pre><div align='left'>" + gclg.SEM.tosem(learned_clg) + "</div></pre>",
captions=["original sem", "learned sem"],
)
F=7.0[0.5] B=3.0+1.2F[0.3] A=4.5[0.3] C=9.0+2.0A+1.5B[0.6] D=9.0+F+C[0.7] E=9.0+D[0.9]

original sem
E=60.105[2.059] F=6.998[0.506] B=3.009+1.2F[0.303] D=5.351+1.03F+0.64E[0.712] A=4.5[0.298] C=2.212+1.19A+0.64B+0.4D[0.463]

learned sem

We can also do parameter estimation only, with the function fitParameters(), if we already have the structure of the model.

## We can copy the original CLG
copy_original = gclg.CLG(model2)
## RAveL learning again
RAveL_l = gclg.CLGLearner("./out/model2.csv")
## Fit the parameters of the copy clg
RAveL_l.fitParameters(copy_original)
copy_original
[CLG graph of copy_original after fitting: A (μ=4.500, σ=0.298), B (μ=3.009, σ=0.303), C (μ=9.073, σ=0.597), D (μ=8.725, σ=0.703), E (μ=9.159, σ=0.896), F (μ=6.998, σ=0.506); arcs A→C 1.99, F→B 1.20, F→D 0.97, B→C 1.50, C→D 1.01, D→E 1.00]
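
To compare the re-fitted parameters with the original ones, we can again display both models in SEM syntax (same pattern as above):

gnb.flow.row(
    "<pre><div align='left'>" + gclg.SEM.tosem(model2) + "</div></pre>",
    "<pre><div align='left'>" + gclg.SEM.tosem(copy_original) + "</div></pre>",
    captions=["original sem", "re-fitted sem"],
)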

We first create two CLGs from two SEMs.

## TWO DIFFERENT CLGs
## FIRST CLG
clg1 = gclg.SEM.toclg("""
## hyper parameters
A=4[1]
B=3[5]
C=-2[5]
#equations
D=A[.2] # D is a noisy version of A
E=1+D+2B [2]
F=E+C+B+E [0.001]
""")
## SECOND CLG
clg2 = gclg.SEM.toclg("""
## hyper parameters
A=4[1]
B=3+A[5]
C=-2+2B+A[5]
#equations
D=A[.2] # D is a noisy version of A
E=1+D+2B [2]
F=E+C [0.001]
""")

This cell shows how to get a quick view of the differences between the two structures.

gnb.flow.row(clg1, clg2, gcm.graphDiff(clg1, clg2), gcm.graphDiffLegend(), gcm.graphDiff(clg2, clg1))
G A A B B A->B C C A->C D D A->D B->C E E B->E F F B->F C->F D->E E->F
G a->b overflow c->d Missing e->f reversed g->h Correct
G A A B B A->B C C A->C D D A->D B->C E E B->E F F B->F C->F D->E E->F

We compare the CLG models.

## We use the F-score to compare the two CLGs
cmp = gcm.GraphicalBNComparator(clg1, clg1)
print(f"F-score(clg1,clg1) : {cmp.scores()['fscore']}")
cmp = gcm.GraphicalBNComparator(clg1, clg2)
print(f"F-score(clg1,clg2) : {cmp.scores()['fscore']}")
F-score(clg1,clg1) : 1.0
F-score(clg1,clg2) : 0.7142857142857143
## The complete list of structural scores is :
print("score(clg1,clg2) :")
for score, val in cmp.scores().items():
    print(f" - {score} : {val}")
score(clg1,clg2) :
- count : {'tp': 5, 'tn': 21, 'fp': 3, 'fn': 1}
- recall : 0.8333333333333334
- precision : 0.625
- fscore : 0.7142857142857143
- dist2opt : 0.41036907507483766
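
The F-score can be recomputed by hand from the counts above (a small sanity check in plain Python, not a pyAgrum call):

tp, fp, fn = 5, 3, 1
precision = tp / (tp + fp)  # 0.625
recall = tp / (tp + fn)     # 0.8333...
fscore = 2 * precision * recall / (precision + recall)
print(precision, recall, fscore)  # matches the scores printed above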
## We create a simple CLG with 3 variables
clg = gclg.CLG()
## in SEM syntax: A=2[1], B=1+1.5A[2], C=2+0.75B[3]
A = gclg.GaussianVariable(mu=2, sigma=1, name="A")
B = gclg.GaussianVariable(mu=1, sigma=2, name="B")
C = gclg.GaussianVariable(mu=2, sigma=3, name="C")
idA = clg.add(A)
idB = clg.add(B)
idC = clg.add(C)
clg.addArc(idA, idB, 1.5)
clg.addArc(idB, idC, 0.75)
## We can show it as a graph
original_clg = gclgnb.CLG2dot(clg)
original_clg
[CLG graph: A (μ=2.000, σ=1.000) →1.50→ B (μ=1.000, σ=2.000) →0.75→ C (μ=2.000, σ=3.000)]
fs = gclg.ForwardSampling(clg)
fs.makeSample(10)

<pyagrum.clg.forwardSampling.ForwardSampling at 0x1144f4910>

print("A's sample_variance: ", fs.variance_sample(0))
print("B's sample_variance: ", fs.variance_sample("B"))
print("C's sample_variance: ", fs.variance_sample(2))
A's sample_variance: 1.767518150796382
B's sample_variance: 15.720292450916617
C's sample_variance: 11.009308355467242
print("A's sample_mean: ", fs.mean_sample("A"))
print("B's sample_mean: ", fs.mean_sample("B"))
print("C's sample_mean: ", fs.mean_sample("C"))
A's sample_mean: 1.9617397815271684
B's sample_mean: 4.205992166798607
C's sample_mean: 6.1828962749955565
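
With only 10 samples these estimates are quite noisy. For reference, the theoretical moments of this small CLG can be computed by hand (plain Python, not a pyAgrum call):

# A ~ N(2,1), B = 1 + 1.5*A + N(0,2), C = 2 + 0.75*B + N(0,3)
mA, vA = 2, 1**2
mB, vB = 1 + 1.5 * mA, 1.5**2 * vA + 2**2
mC, vC = 2 + 0.75 * mB, 0.75**2 * vB + 3**2
print("theoretical means     :", mA, mB, mC)  # 2, 4.0, 5.0
print("theoretical variances :", vA, vB, vC)  # 1, 6.25, 12.515625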
fs.toarray()
array([[ 1.43147261, 2.86500725, 5.67586603],
[ 3.44913656, 7.51704535, 10.87158975],
[-1.18072728, -1.69433061, 4.98651031],
[ 2.83735922, 8.57355338, 9.43734626],
[ 1.69363696, 3.81134442, 9.68760137],
[ 3.3130614 , 10.77153712, 4.78463062],
[ 3.29632283, 5.23879156, 6.96693693],
[ 1.0975627 , -1.53683074, -1.56379019],
[ 2.04500819, 0.8650565 , 5.42658875],
[ 1.63456462, 5.64874744, 5.55568292]])
## export to dataframe
fs.topandas()
A B C
0 1.431473 2.865007 5.675866
1 3.449137 7.517045 10.871590
2 -1.180727 -1.694331 4.986510
3 2.837359 8.573553 9.437346
4 1.693637 3.811344 9.687601
5 3.313061 10.771537 4.784631
6 3.296323 5.238792 6.966937
7 1.097563 -1.536831 -1.563790
8 2.045008 0.865057 5.426589
9 1.634565 5.648747 5.555683
## export to csv
fs.makeSample(10000)
fs.tocsv("./out/samples.csv")

The module also allows us to investigate the learning algorithm more deeply.

We first create a random CLG model with 5 variables.

## Create a new random CLG
clg = gclg.randomCLG(nb_variables=5, names="ABCDE")
## Display the CLG
print(clg)
CLG{nodes: 5, arcs: 5, parameters: 15}
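
Before learning, it may help to list the variables and arcs of this random CLG (the skeleton learned below is expressed with node ids rather than names):

print(clg.names())
print(clg.arcs())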

We then run forward sampling and create the CLGLearner.

n = 20  # n is the number of Monte Carlo samples (the n in n-MCERA)
K = 10000  # K is the number of samples to generate
Delta = 0.05 # Delta is the FWER we want to control
## Sample generation
fs = gclg.ForwardSampling(clg)
fs.makeSample(K).tocsv("./out/clg.csv")
## Learning
RAveL_l = gclg.CLGLearner("./out/clg.csv", n_sample=n, fwer_delta=Delta)

We use the PC algorithm to learn the structure of the model.

## Use the PC algorithm to get the skeleton
C = RAveL_l.PC_algorithm(order=clg.nodes(), verbose=False)
print("The final skeleton is:\n", C)
The final skeleton is:
{0: set(), 1: set(), 2: set(), 3: {0}, 4: {1, 2}}
## Create a Mixedgraph to display the skeleton
RAveL_MixGraph = gum.MixedGraph()
## Add variables
for i in range(len(clg.names())):
    RAveL_MixGraph.addNodeWithId(i)
## Add arcs and edges
for father, kids in C.items():
    for kid in kids:
        if father in C[kid]:
            RAveL_MixGraph.addEdge(father, kid)
        else:
            RAveL_MixGraph.addArc(father, kid)
RAveL_MixGraph
[learned skeleton as a mixed graph: arcs 3→0, 4→1, 4→2]
## Create a BN with the same structure as the CLG
bn = gum.BayesNet()
## add variables
for name in clg.names():
    new_variable = gum.LabelizedVariable(name, "a labelized variable", 2)
    bn.add(new_variable)
## add arcs
for arc in clg.arcs():
    bn.addArc(arc[0], arc[1])
## Compare the result above with the EssentialGraph
Real_EssentialGraph = gum.EssentialGraph(bn)
Real_EssentialGraph
[essential graph of the original CLG (0=E, 1=C, 2=B, 3=D, 4=A): arcs 0→3, 1→3, 1→4, 2→4, 3→4]
## create a CLG from the skeleton of PC algorithm
clg_PC = gclg.CLG()
for node in clg.nodes():
    clg_PC.add(clg.variable(node))
for father, kids in C.items():
    for kid in kids:
        clg_PC.addArc(father, kid)
## Compare the structure of the created CLG and the original CLG
print(f"F-score : {clg.CompareStructure(clg_PC)}")
F-score : 0.7499999999999999

We can also do the parameter learning.

id2mu, id2sigma, arc2coef = RAveL_l.estimate_parameters(C)
for node in clg.nodes():
    print(f"Real Value: node {node} : mu = {clg.variable(node)._mu}, sigma = {clg.variable(node)._sigma}")
    print(f"Estimation: node {node} : mu = {id2mu[node]}, sigma = {id2sigma[node]}")
for arc in clg.arcs():
    print(f"Real Value: arc {arc} : coef = {clg.coefArc(*arc)}")
    print(f"Estimation: arc {arc} : coef = {(arc2coef[arc] if arc in arc2coef else '-')}")
Real Value: node 0 : mu = 3.8673906007462264, sigma = 3.759637155856595
Estimation: node 0 : mu = 3.838858787162053, sigma = 3.7799229188544223
Real Value: node 1 : mu = 0.9763700868618352, sigma = 8.987256621242013
Estimation: node 1 : mu = 0.9683953084133385, sigma = 8.86811447800987
Real Value: node 2 : mu = 2.390879095590969, sigma = 3.629936283361723
Estimation: node 2 : mu = 2.404110746071634, sigma = 3.625734028771289
Real Value: node 3 : mu = 2.6411742880562707, sigma = 3.3254135590230267
Estimation: node 3 : mu = 107.67780728636514, sigma = 618.3674167149949
Real Value: node 4 : mu = 1.6519050991583937, sigma = 8.787358173374741
Estimation: node 4 : mu = 1.6277432768895228, sigma = 8.714182473883996
Real Value: arc (4, 1) : coef = -8.100863091875366
Estimation: arc (4, 1) : coef = -8.0927867803821
Real Value: arc (4, 3) : coef = -8.117063574390897
Estimation: arc (4, 3) : coef = -
Real Value: arc (4, 2) : coef = 7.548214206386582
Estimation: arc (4, 2) : coef = 7.554236445469352
Real Value: arc (3, 0) : coef = 8.427712337400717
Estimation: arc (3, 0) : coef = 8.427729170784708
Real Value: arc (1, 3) : coef = -9.686637014482704
Estimation: arc (1, 3) : coef = -