In the naive mean field method, we approximate the distribution by a simpler UGM in which the nodes are independent. Since the nodes are independent in this simpler UGM, the approximate edge marginals are given by the product of the relevant node marginals, while the Gibbs mean field free energy gives a lower bound on the normalizing constant. To use the naive mean field method for approximate inference with UGM, you can use:

[nodeBelMF,edgeBelMF,logZMF] = UGM_Infer_MeanField(nodePot,edgePot,edgeStruct);

An example of the marginals computed by the mean field method for a sample of the noisy X problem is:
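To make the fixed-point iteration concrete, here is a minimal sketch of the coordinate updates that naive mean field performs, on a hypothetical 2-node, 2-state toy model (this is an illustration only, not the internals of UGM_Infer_MeanField, which handles general graphs):

```matlab
% Hypothetical toy model: 2 nodes, 2 states, one edge between them.
nodePot = [1 2; 3 1];        % nodePot(n,s): potential of node n in state s
edgePot = [2 1; 1 2];        % edgePot(s1,s2): potential on the edge (1,2)
q = ones(2,2)/2;             % q(n,:): current belief for node n, initialized uniform
for iter = 1:50
    for n = 1:2
        other = 3 - n;
        if n == 1
            E = log(edgePot)';   % E(t,s) = log edgePot(s,t)
        else
            E = log(edgePot);    % E(t,s) = log edgePot(t,s)
        end
        % Coordinate-ascent update: q_n(s) is proportional to
        %   nodePot(n,s) * exp( sum_t q_other(t) * log edgePot for pairing (s,t) )
        logb = log(nodePot(n,:)) + q(other,:)*E;
        b = exp(logb - max(logb));   % subtract max for numerical stability
        q(n,:) = b / sum(b);
    end
end
% q now holds the approximate node marginals; the approximate edge
% marginal is the outer product q(1,:)'*q(2,:).
```

Each update holds all other node beliefs fixed and sets the current node's belief to the distribution that maximizes the mean field lower bound, so the bound never decreases across iterations.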

We can also consider taking the maximum of the marginals as an approximate decoding:

maxOfMarginalsMFdecode = UGM_Decode_MaxOfMarginals(nodePot,edgePot,edgeStruct,@UGM_Infer_MeanField);

A reference describing both the mean field method and loopy belief propagation, and their implementations as local message-passing algorithms, is:

- Y. Weiss.
**Comparing the mean field method and belief propagation for approximate inference in MRFs**. In: Saad and Opper (ed), Advanced Mean Field Methods, 2001.

To use loopy belief propagation for approximate inference with UGM, you can use:

[nodeBelLBP,edgeBelLBP,logZLBP] = UGM_Infer_LBP(nodePot,edgePot,edgeStruct);

We can obtain the maximum of the marginals using:

maxOfMarginalsLBPdecode = UGM_Decode_MaxOfMarginals(nodePot,edgePot,edgeStruct,@UGM_Infer_LBP);

In the case of loopy belief propagation, we can also consider applying the messages used in the decoding version of belief propagation as an approximate decoding method (sometimes called 'max-product' loopy belief propagation, instead of 'sum-product' loopy belief propagation). We can do this with UGM using:

decodeLBP = UGM_Decode_LBP(nodePot,edgePot,edgeStruct);

In the tree-reweighted belief propagation (TRBP) method, the log partition function is upper bounded by a convex combination of tree-structured distributions. This bound may involve a very large number of tree-structured distributions, but the most interesting aspect of the method is that we never need to store the individual trees. As long as we know the proportion of trees in which each edge appears (and we are careful that each edge appears in at least one tree), we can optimize the bound using an algorithm, very similar to loopy belief propagation, that incorporates these proportions.
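The "proportion of trees in which each edge appears" can be estimated by sampling spanning trees and counting edge frequencies. Below is a hedged sketch on a hypothetical 4-node toy graph (an illustration of the concept, not how UGM chooses its default weights; it assumes MATLAB's graph and minspantree functions, available since R2015b). Drawing minimum spanning trees under random edge weights defines a valid distribution over spanning trees, so the empirical frequencies are valid edge appearance probabilities up to sampling error:

```matlab
% Hypothetical toy graph: 4 nodes, 5 edges (a 4-cycle plus one diagonal).
edgeEnds = [1 2; 2 3; 3 4; 4 1; 1 3];
nEdges = size(edgeEnds,1);
nSamples = 1000;
counts = zeros(nEdges,1);
for s = 1:nSamples
    w = rand(nEdges,1);                              % random edge weights
    G = graph(edgeEnds(:,1), edgeEnds(:,2), w);
    T = minspantree(G);                              % random spanning tree
    for e = 1:nEdges
        if findedge(T, edgeEnds(e,1), edgeEnds(e,2)) > 0
            counts(e) = counts(e) + 1;               % edge e appears in this tree
        end
    end
end
mu = counts / nSamples;   % estimated edge appearance probabilities
```

Every spanning tree contains all 4 nodes, so each tree uses 3 of the 5 edges and the entries of mu sum to roughly 3; edges that appear in more trees get weights closer to 1.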

The method is described in detail in:

- M.J. Wainwright, T.S. Jaakkola, A.S. Willsky.
**A new class of upper bounds on the log partition function**. Uncertainty in Artificial Intelligence, 2002.

To use tree-reweighted belief propagation for approximate inference with UGM, you can use:

[nodeBelTRBP,edgeBelTRBP,logZTRBP] = UGM_Infer_TRBP(nodePot,edgePot,edgeStruct);

- N. de Freitas, P. Hojen-Sorensen, M. Jordan, and S. Russell.
**Variational MCMC**. Uncertainty in Artificial Intelligence, 2001.

In the variational MCMC method, the Markov chain mixes two kinds of transitions: with probability variationalProportion it proposes a sample from a variational (mean field) approximation and accepts or rejects it with a Metropolis-Hastings step, and otherwise it performs a Gibbs sampling update. We can draw samples with this method in UGM using:

burnIn = 100;
edgeStruct.maxIter = 100;
variationalProportion = .25;
samplesVarMCMC = UGM_Sample_VarMCMC(nodePot,edgePot,edgeStruct,burnIn,variationalProportion);
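Given the samples, Monte Carlo estimates of the node marginals can be formed by counting state frequencies. A minimal sketch, assuming each column of samplesVarMCMC is one joint sample and states are labeled 1 through nStates (check the orientation of the sample matrix in your UGM version):

```matlab
% Hypothetical post-processing: empirical node marginals from samples.
[nNodes,nSamples] = size(samplesVarMCMC);
nStates = max(samplesVarMCMC(:));
nodeBelMC = zeros(nNodes,nStates);
for s = 1:nSamples
    for n = 1:nNodes
        state = samplesVarMCMC(n,s);
        nodeBelMC(n,state) = nodeBelMC(n,state) + 1;   % count occurrences
    end
end
nodeBelMC = nodeBelMC / nSamples;   % each row sums to 1
```

These estimates converge to the true marginals as nSamples grows, which makes them a useful check against the biased but deterministic mean field and belief propagation approximations above.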

- M.J. Wainwright and M. Jordan.
**Graphical models, exponential families, and variational inference**. Foundations and Trends in Machine Learning, 2008.
