Contributions on Context Adaptive Training with Factorized Decision Trees for HMM-Based Speech Synthesis

From statwiki
Revision as of 11:47, 7 November 2011 by Tameem (talk | contribs) (MLLR based approach)

Speech synthesis vs. speech recognition

As mentioned in the original paper, speech synthesis requires a much larger and more complex set of contexts in order to achieve high quality synthesised speech. Examples of such contexts are the following:

  • Identity of the phones neighbouring the central phone. The two phones to the left and the two to the right of the central phone are usually considered as phonetic neighbouring contexts
  • Position of phones, syllables, words and phrases w.r.t. higher level units
  • Number of phones, syllables, words and phrases within higher level units
  • Syllable stress and accent status
  • Linguistic role, e.g. part-of-speech tag
  • Emotion and emphasis

Notes

There are many factors that could affect the acoustic realisation of phones. Prior knowledge of such factors forms the questions used in the decision tree based state clustering procedure. Some questions are highly correlated, e.g. the phonetic broad class questions and the syllable questions. Others are not, such as the example mentioned in the paper (phonetic broad class questions and emphasis questions).

MLLR based approach

Let us rewrite the first equation in (4) of the original paper as:


[math] \begin{matrix} \hat \mu_{r_{c}} = \hat \mu_m = A_{r_{e}(m)}\mu_{r_{p}(m)} + b_{r_{e}(m)} = W_{r_{e}(m)}\xi_{r_{p}(m)}\\ \hat \Sigma_{r_{c}} = \hat \Sigma_{m} = \Sigma_{r_{p}(m)} \end{matrix}[/math]

Let m be used instead of [math]r_c[/math] to denote the index of the atomic state cluster, and let [math]W_{r_{e}} = [A_{r_{e}}\ b_{r_{e}}][/math] be the extended transform associated with leaf node [math]r_e[/math]; all other parameters are as previously defined. From the above equation, the parameters of the combined leaf node cannot be directly estimated. Instead, they are constructed from two sets of parameters with different state clustering structures. The detailed procedure is as follows:
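As a concrete illustration of the construction above, the following NumPy sketch (with hypothetical toy dimensions and random values) applies the extended transform [math]W_{r_{e}} = [A_{r_{e}}\ b_{r_{e}}][/math] of an emphasis leaf to the extended mean vector [math]\xi_{r_{p}} = [\mu_{r_{p}}^T\ 1]^T[/math] of a normal-context leaf to obtain the adapted mean of an atomic cluster:

```python
import numpy as np

d = 3                               # feature dimension (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((d, d))     # linear part A_{r_e} of the emphasis transform
b = rng.standard_normal(d)          # bias part b_{r_e}
mu_rp = rng.standard_normal(d)      # mean mu_{r_p(m)} of the normal-context leaf

W = np.hstack([A, b[:, None]])      # extended transform [A b], shape (d, d+1)
xi = np.append(mu_rp, 1.0)          # extended mean vector [mu; 1]

mu_hat = W @ xi                     # adapted mean of atomic cluster m
# By construction, W @ xi equals A @ mu + b:
assert np.allclose(mu_hat, A @ mu_rp + b)
```

The extended-vector form is simply a compact way of writing the affine transform [math]A\mu + b[/math] as a single matrix product, which is what makes the row-wise least-squares estimation in step 3 tractable.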

1. Construct factorized decision trees for normal contexts [math](r_p)[/math] and emphasis contexts [math](r_e)[/math]. Let [math] m = r_e(m) \cap r_p(m)[/math] be the atomic state cluster (atomic Gaussian in the single Gaussian case)

2. Get initial parameters of the atomic Gaussians from state clustering using the normal decision tree, and let [math]\hat \mu_m = \mu_{r_{p}(m)} [/math]

3. Estimate [math]W_{r_{e}}[/math] given the current model parameters [math]\mu_{r_{p}(m)}[/math] and [math]\Sigma_{r_{p}(m)}[/math]. The [math]d^{th}[/math] row of [math]W_{r_{e}}[/math], [math]w_{r_{e},d}^T[/math], is estimated as

[math] \begin{matrix} w_{r_{e},d} = G_{r_{e},d}^{-1}k_{r_{e},d} \end{matrix}[/math]

where the sufficient statistics for the [math]d^{th}[/math] row are given by

[math] \begin{matrix} G_{r_{e},d} = \sum_t\sum_{m\in{r_e}}\frac {\gamma_m(t)}{\sigma_{dd}^{r_{p}(m)}}\xi_{r_{p}(m)}\xi_{r_{p}(m)}^T\\ k_{r_{e},d} = \sum_t\sum_{m\in{r_e}}\frac {\gamma_m(t)o_{t,d}}{\sigma_{dd}^{r_{p}(m)}}\xi_{r_{p}(m)} \end{matrix}[/math]

where [math]o_{t,d}[/math] is the [math]d^{th}[/math] element of the observation vector [math]o_t[/math], and [math]\sigma_{dd}^{r_{p}(m)}[/math] is the [math]d^{th}[/math] diagonal element of [math]\Sigma_{r_{p}(m)}[/math]. [math]r_{p}(m)[/math] is the leaf node of the normal decision tree to which Gaussian component m belongs, and [math]\gamma_m(t)[/math] is the posterior for Gaussian component m at time t, calculated using the forward-backward algorithm with the parameters from the first equation above.
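The row-wise estimation in step 3 can be sketched in NumPy as follows. All dimensions, posteriors and statistics are toy placeholders (in a real system [math]\gamma_m(t)[/math] would come from the forward-backward algorithm); the sketch only shows the shape of the accumulation of [math]G_{r_{e},d}[/math] and [math]k_{r_{e},d}[/math] and the per-row solve for a single emphasis leaf [math]r_e[/math]:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3                                  # feature dimension (illustrative)
T, M = 50, 4                           # frames and atomic clusters in r_e (toy sizes)

obs = rng.standard_normal((T, d))      # observations o_t
gamma = rng.random((T, M))             # posteriors gamma_m(t) (placeholder values)
mu_rp = rng.standard_normal((M, d))    # normal-tree leaf means mu_{r_p(m)}
var_rp = rng.random((M, d)) + 0.5      # diagonal variances sigma_dd^{r_p(m)}

# Extended mean vectors xi_{r_p(m)} = [mu; 1], shape (M, d+1)
xi = np.hstack([mu_rp, np.ones((M, 1))])

W = np.zeros((d, d + 1))               # extended transform, one row per dimension
for dim in range(d):
    G = np.zeros((d + 1, d + 1))
    k = np.zeros(d + 1)
    for m in range(M):
        g = gamma[:, m].sum()                      # sum_t gamma_m(t)
        go = gamma[:, m] @ obs[:, dim]             # sum_t gamma_m(t) o_{t,d}
        G += (g / var_rp[m, dim]) * np.outer(xi[m], xi[m])
        k += (go / var_rp[m, dim]) * xi[m]
    W[dim] = np.linalg.solve(G, k)                 # w_{r_e,d} = G_{r_e,d}^{-1} k_{r_e,d}
```

Note that because the variances are diagonal, each row of [math]W_{r_{e}}[/math] decouples and can be solved independently, which is what makes the row-wise formulation attractive.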
