score gives you the log probability of the observation sequence, marginalizing out all possible hidden state sequences. decode, on the other hand, gives you the most likely sequence of hidden states for an observation sequence, as well as its log probability, using (by default) the Viterbi algorithm.

The documentation for decode:

decode(obs, algorithm='viterbi')
    Find the most likely state sequence corresponding to obs. Uses the selected algorithm for decoding.

    Parameters:
        obs : array_like, shape (n, n_features)
            Sequence of n_features-dimensional data points. Each row corresponds to a single data point.
        algorithm : string
            One of the decoder_algorithms.

    Returns:
        logprob : float
            Log probability of the maximum likelihood path through the HMM.
        state_sequence : array_like, shape (n,)
            Index of the most likely states for each observation.

score(obs), by contrast, computes the log probability of obs under the model.

Both methods belong to GaussianHMM, a hidden Markov model with Gaussian emissions. The class is a representation of a hidden Markov model probability distribution and allows for easy evaluation of, sampling from, and maximum-likelihood estimation of the parameters of an HMM. The code merges the easy-to-use API of scikit-learn with the modularity of probabilistic modeling, including general mixture and hidden Markov models.
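To make the difference concrete, here is a minimal pure-NumPy sketch of what the two methods compute for a 1-D Gaussian HMM: decode corresponds to the Viterbi algorithm (best single path and its log probability), while score corresponds to the forward algorithm (log probability summed over all paths). This is an illustration of the underlying math, not hmmlearn's actual implementation; all function and parameter names here are my own.

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    """Log-density of a 1-D Gaussian, used as the emission log probability."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def decode_viterbi(obs, log_start, log_trans, means, variances):
    """Most likely state path and its log probability (what decode returns)."""
    n, k = len(obs), len(log_start)
    log_b = np.array([[gaussian_logpdf(o, means[j], variances[j]) for j in range(k)]
                      for o in obs])
    delta = log_start + log_b[0]           # best log-prob of any path ending in each state
    psi = np.zeros((n, k), dtype=int)      # backpointers
    for t in range(1, n):
        scores = delta[:, None] + log_trans    # scores[i, j]: best path into i, then i -> j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_b[t]
    path = np.empty(n, dtype=int)
    path[-1] = int(delta.argmax())
    for t in range(n - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return float(delta.max()), path

def score_forward(obs, log_start, log_trans, means, variances):
    """Log P(obs), summing over all hidden state paths (what score returns)."""
    k = len(log_start)
    alpha = log_start + np.array([gaussian_logpdf(obs[0], means[j], variances[j])
                                  for j in range(k)])
    for o in obs[1:]:
        log_b = np.array([gaussian_logpdf(o, means[j], variances[j]) for j in range(k)])
        alpha = np.logaddexp.reduce(alpha[:, None] + log_trans, axis=0) + log_b
    return float(np.logaddexp.reduce(alpha))

# A hypothetical 2-state model with well-separated emission means.
log_start = np.log([0.5, 0.5])
log_trans = np.log([[0.9, 0.1], [0.1, 0.9]])
means, variances = np.array([0.0, 5.0]), np.array([1.0, 1.0])
obs = np.array([0.1, -0.2, 5.1, 4.9])

best_logprob, path = decode_viterbi(obs, log_start, log_trans, means, variances)
total_logprob = score_forward(obs, log_start, log_trans, means, variances)
```

Because score sums over every possible path while decode keeps only the single best one, total_logprob is always at least best_logprob; the observations above are clearly split between the two states, so the decoded path assigns the first two to state 0 and the last two to state 1.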