Contents

  Description
  Usage with known observation and transition probability matrices: Viterbi
  Usage with unknown parameters: Baum-Welch (EM) algorithm
Description
Demonstrates parameter learning and inference on Hidden Markov Models using the classic problem setup of a casino that occasionally switches out a fair die for a loaded die.
close all
clear all

% ----- Generate Sequence Data --------------------------------------------
% "Occasionally dishonest casino" example
O = [1/6 , 1/6 , 1/6 , 1/6 , 1/6 , 1/6 ; ...  % fair die
     1/10, 1/10, 1/10, 1/10, 1/10, 5/10];     % loaded die
T  = [1/2, 1/2; 1/4, 3/4];
pi = [1/2, 1/2];

len    = 25;
numSeq = 1;
seq    = ml_sampleMC(T, pi, numSeq, len);
X      = zeros(numSeq, len);

% generate observation states from the distribution at each hidden state
for k = 1:len
    for p = 1:numSeq
        X(p, k) = ml_randSampleDiscrete(O(seq(p, k), :));
    end
end
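ml_sampleMC and ml_randSampleDiscrete are helper functions from this repository. Assuming ml_randSampleDiscrete simply draws one index from a discrete probability vector, a minimal sketch of such a sampler (sampleDiscreteSketch is a hypothetical name; the repository version may differ) is:

function x = sampleDiscreteSketch(p)
% Draw one index in 1..numel(p), where index k has probability p(k).
c = cumsum(p);
c(end) = 1;                 % guard against floating-point round-off
x = find(rand() <= c, 1);   % first bin whose CDF covers the draw
end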
Usage with known observation and transition probability matrices: Viterbi
options_hmm = [];
options_hmm.observed   = O;
options_hmm.transition = T;
options_hmm.pi         = pi;

model_hmm  = ml_unsupervised_HMM(X, options_hmm);
viterbiSeq = model_hmm.Viterbi(model_hmm);

hmmviterbi(X, T, O)   % Statistics Toolbox implementation, for comparison

individuallyBestSeq = model_hmm.individuallyMostLikely(model_hmm);

disp('Individually most likely latent variable states:')
disp(individuallyBestSeq);
disp('True latent variable states:')
disp(seq)
disp('Viterbi most likely latent variable sequence:')
disp(viterbiSeq)
disp('Misclassification rate for Viterbi:')
disp(sum(viterbiSeq ~= seq) / length(seq))
disp('Misclassification rate for Most Likely Individual States:')
disp(sum(individuallyBestSeq ~= seq) / length(seq))
ans =

  Columns 1 through 13

     1     1     1     2     2     2     2     2     2     2     2     2     2

  Columns 14 through 25

     2     2     2     2     2     2     2     2     2     2     2     2

Individually most likely latent variable states:
  Columns 1 through 13

     1     1     1     2     2     2     2     2     1     1     2     2     2

  Columns 14 through 25

     2     2     2     1     1     2     2     2     2     2     2     2

True latent variable states:
  Columns 1 through 13

     1     1     2     2     2     2     1     1     2     1     2     2     1

  Columns 14 through 25

     2     1     1     1     2     2     2     2     2     2     2     2

Viterbi most likely latent variable sequence:
  Columns 1 through 13

     1     1     1     2     2     2     2     2     2     2     2     2     2

  Columns 14 through 25

     2     2     2     2     2     2     2     2     2     2     2     2

Misclassification rate for Viterbi:
    0.3200

Misclassification rate for Most Likely Individual States:
    0.3200
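The Viterbi path and the sequence of individually most likely states optimize different criteria and can disagree (they happen to score identically here). For reference, the dynamic program behind a Viterbi decoder can be sketched in log space as follows; this is a minimal sketch under standard conventions, not the repository's implementation (viterbiSketch and prior are hypothetical names):

function path = viterbiSketch(x, T, O, prior)
% x: 1-by-len observation indices; T(i,j) = P(state j | state i);
% O(k,m) = P(symbol m | state k); prior: initial state distribution
[K, ~] = size(O);
len  = numel(x);
logd = -inf(K, len);   % best log-prob of any path ending in state k at time t
back = zeros(K, len);  % backpointers
logd(:, 1) = log(prior(:)) + log(O(:, x(1)));
for t = 2:len
    for j = 1:K
        [best, back(j, t)] = max(logd(:, t-1) + log(T(:, j)));
        logd(j, t) = best + log(O(j, x(t)));
    end
end
path = zeros(1, len);
[~, path(len)] = max(logd(:, len));
for t = len-1:-1:1     % trace the backpointers
    path(t) = back(path(t+1), t+1);
end
end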
Usage with unknown parameters: Baum-Welch (EM) algorithm
Learn the parameters of the occasionally dishonest casino based only on the observations and an assumed model structure.
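Internally, Baum-Welch alternates an E-step (forward-backward recursions, yielding expected state occupancies and transition counts) with an M-step (re-normalizing those counts). Below is a minimal single-sequence sketch of one iteration; baumWelchStepSketch and prior are hypothetical names, and the repository's BaumWelch presumably handles multiple sequences and iterates to convergence:

function [That, Ohat] = baumWelchStepSketch(x, T, O, prior)
% One EM iteration on a single observation sequence x (1-by-len),
% using the scaled forward-backward recursions.
[K, M] = size(O);
len = numel(x);

% E-step: scaled alpha (forward) and beta (backward)
a = zeros(K, len); b = zeros(K, len); c = zeros(1, len);
a(:, 1) = prior(:) .* O(:, x(1));
c(1) = sum(a(:, 1));  a(:, 1) = a(:, 1) / c(1);
for t = 2:len
    a(:, t) = (T' * a(:, t-1)) .* O(:, x(t));
    c(t) = sum(a(:, t));  a(:, t) = a(:, t) / c(t);
end
b(:, len) = 1;
for t = len-1:-1:1
    b(:, t) = T * (b(:, t+1) .* O(:, x(t+1))) / c(t+1);
end
g = a .* b;                       % gamma(k,t) = P(z_t = k | x)

xiSum = zeros(K, K);              % expected transition counts
for t = 1:len-1
    xiSum = xiSum + (a(:, t) * (b(:, t+1) .* O(:, x(t+1)))') .* T / c(t+1);
end

% M-step: re-normalize the expected counts
That = xiSum ./ sum(xiSum, 2);
Ohat = zeros(K, M);
for m = 1:M
    Ohat(:, m) = sum(g(:, x == m), 2);
end
Ohat = Ohat ./ sum(Ohat, 2);
end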
O = [1/6 , 1/6 , 1/6 , 1/6 , 1/6 , 1/6 ; ...  % fair die
     1/10, 1/10, 1/10, 1/10, 1/10, 5/10];     % loaded die
T = [0.50, 0.50; 0.25, 0.75];

len    = 100;   % if len is too short, some symbols may never be observed
numSeq = 100;
seq    = ml_sampleMC(T, pi, numSeq, len);

% generate observation states from the distribution at each hidden state
X = zeros(numSeq, len);
for k = 1:len
    for p = 1:numSeq
        X(p, k) = ml_randSampleDiscrete(O(seq(p, k), :));
    end
end

options_hmm_unknown = [];
options_hmm_unknown.nHiddenStatesGuess     = 2;
options_hmm_unknown.nObservableStatesGuess = 6;

model_hmm_unknown = ml_unsupervised_HMM(X, options_hmm_unknown);
[Ohat, That]      = model_hmm_unknown.BaumWelch(model_hmm_unknown);

disp('True emission probabilities:')
disp(O);
disp('Learned emission probabilities:')
disp(Ohat);
disp('True transition probabilities:')
disp(T);
disp('Learned transition probabilities:')
disp(That);
True emission probabilities:
    0.1667    0.1667    0.1667    0.1667    0.1667    0.1667
    0.1000    0.1000    0.1000    0.1000    0.1000    0.5000

Learned emission probabilities:
    0.1209    0.0911    0.2109    0.0637    0.1644    0.3489
    0.1533    0.1888    0.0206    0.2192    0.0869    0.3312

True transition probabilities:
    0.5000    0.5000
    0.2500    0.7500

Learned transition probabilities:
    0.4942    0.5058
    0.6758    0.3242
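Two caveats when reading these results: EM only converges to a local optimum of the likelihood (random restarts help), and the hidden-state labels are arbitrary, so the learned rows may come back permuted relative to the truth. A post-hoc alignment for the two-state case might look like this (a sketch, not part of the repository code):

% Hypothetical post-hoc alignment: swap the learned states if the
% swapped labeling matches the true emissions more closely.
errId   = sum(sum(abs(Ohat - O)));
errSwap = sum(sum(abs(Ohat([2 1], :) - O)));
if errSwap < errId
    Ohat = Ohat([2 1], :);          % permute emission rows
    That = That([2 1], [2 1]);      % permute transition rows and columns
end

Even after alignment, the fit above is rough; more sequences, longer sequences, or multiple restarts would typically tighten the estimates.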