Contents
List of abbreviations
List of symbols
Introduction
  Problem setting
  Aim of the thesis
  Structure of the thesis
  Contributions of the thesis
Mathematical preliminaries of time series modelling
  Theory
    Dynamical systems
    Linear systems
    Nonlinear systems and chaos
    Tools for time series analysis
  Practical considerations
    Delay coordinates in practice
    Prediction algorithms
Bayesian methods for data analysis
  Bayesian statistics
    Constructing probabilistic models
    Hierarchical models
    Conjugate priors
  Posterior approximations
    Model selection
    Stochastic approximations
    Laplace approximation
    EM algorithm
    Ensemble learning
    Information theoretic approach
Building blocks of the model
  Hidden Markov models
    Markov chains
    Hidden states
    Continuous observations
    Learning algorithms
  Nonlinear state-space models
    Linear models
    Extension from linear to nonlinear
    Multilayer perceptrons
    Nonlinear factor analysis
    Learning algorithms
  Previous hybrid models
    Switching state-space models
    Other hidden Markov model hybrids
The model
  Bayesian continuous density hidden Markov model
    The model
    The approximating posterior distribution
  Bayesian nonlinear state-space model
    The generative model
    The probabilistic model
    The approximating posterior distribution
  Combining the two models
    The structure of the model
    The approximating posterior distribution
The algorithm
  Learning algorithm for the continuous density hidden Markov model
    Evaluating the cost function
    Optimising the cost function
  Learning algorithm for the nonlinear state-space model
    Evaluating the cost function
    Optimising the cost function
    Learning procedure
    Continuing learning with new data
  Learning algorithm for the switching model
    Evaluating and optimising the cost function
    Learning procedure
    Learning with known state sequence
Experimental results
  Speech data
    Preprocessing
    Properties of the data set
  Comparison with other models
    The experimental setting
    The results
  Segmentation of annotated data
    The training procedure
    The results
Discussion
Standard probability distributions
  Normal distribution
  Dirichlet distribution
Probabilistic computations for MLP networks
Bibliography