Context Adaptive Training with Factorized Decision Trees for HMM-Based Speech Synthesis
Speech synthesis vs. speech recognition
As mentioned in the original paper, speech synthesis requires a much larger and more complex set of contexts to achieve high-quality synthesised speech. Examples of such contexts include the following:
- Identity of the phones neighbouring the centre phone. Two phones to the left and to the right of the centre phone are usually considered as the phonetic neighbouring contexts
- Position of phones, syllables, words and phrases within higher-level units
- Number of phones, syllables, words and phrases in higher-level units
- Syllable stress and accent status
- Linguistic role, e.g. part-of-speech tag
- Emotion and emphasis
Notes
There are many factors that can affect the acoustic realisation of phones. Prior knowledge of such factors forms the questions used in the decision-tree-based state clustering procedure. Some questions are highly correlated, e.g. the phonetic broad-class questions and the syllable questions. Others are not, such as the example mentioned in the paper (phonetic broad-class questions and emphasis questions).
MLLR-based approach
Let us rewrite the first equation in (4) of the original paper as:
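A plausible form of this equation, assuming the standard MLLR-mean composition implied by the definitions that follow, is
<center><math> \begin{matrix}
\mu_m = A_{r_e(m)}\mu_{r_p(m)} + b_{r_e(m)} = W_{r_e(m)}\xi_{r_p(m)}
\end{matrix}</math></center>
where [math]\displaystyle{ \xi_{r_p(m)} = [\mu_{r_p(m)}^T\ 1]^T }[/math] denotes the extended mean vector.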
Let [math]\displaystyle{ m }[/math] be used instead of [math]\displaystyle{ r_c }[/math] to denote the index of the atomic state cluster, while [math]\displaystyle{ W_{r_{e}} = [A_{r_{e}}\ b_{r_{e}}] }[/math] is the extended transform associated with leaf node [math]\displaystyle{ r_e }[/math], and all other parameters are as previously defined. From the above equation, the parameters of the combined leaf node cannot be directly estimated. Instead, they are constructed from two sets of parameters with different state clustering structures. The detailed procedure is as follows:
1. Construct factorized decision trees for normal contexts [math]\displaystyle{ (r_p) }[/math] and emphasis contexts [math]\displaystyle{ (r_e) }[/math]. Let [math]\displaystyle{ m = r_e(m) \cap r_p(m) }[/math] be the atomic state cluster (atomic Gaussian in the single Gaussian case)
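As an illustration (the sizes here are made up), a normal-context tree with 1000 leaves and an emphasis tree with 2 leaves define up to 2000 atomic Gaussians, yet only 1000 canonical means and 2 emphasis transforms have to be estimated; this sharing is the parameter saving the factorization provides.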
2. Get initial parameters of the atomic Gaussians from state clustering using the normal decision tree, and let [math]\displaystyle{ \hat \mu_m = \mu_{r_{p}(m)} }[/math]
3. Estimate [math]\displaystyle{ W_{r_{e}} }[/math] given the current model parameters [math]\displaystyle{ \mu_{r_{p}(m)} }[/math] and [math]\displaystyle{ \Sigma_{r_{p}(m)} }[/math]. The [math]\displaystyle{ d^{th} }[/math] row of [math]\displaystyle{ W_{r_{e}} }[/math], [math]\displaystyle{ w_{r_{e},d}^T }[/math], is estimated as
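Assuming the standard row-wise MLLR mean-transform solution, this estimate would take the form
<center><math> \begin{matrix}
w_{r_e,d} = G_{r_e,d}^{-1}\,k_{r_e,d}
\end{matrix}</math></center>
with the statistics [math]\displaystyle{ G_{r_e,d} }[/math] and [math]\displaystyle{ k_{r_e,d} }[/math] defined below.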
where the sufficient statistics for the [math]\displaystyle{ d^{th} }[/math] row are given by
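Under the same assumption, and writing [math]\displaystyle{ \xi_{r_p(m)} = [\mu_{r_p(m)}^T\ 1]^T }[/math] for the extended mean vector, these statistics would be
<center><math> \begin{matrix}
G_{r_e,d} = \sum_t\sum_{m\in{r_e}}\gamma_m(t)\frac{1}{\sigma_{dd}^{r_p(m)}}\,\xi_{r_p(m)}\xi_{r_p(m)}^T\\
k_{r_e,d} = \sum_t\sum_{m\in{r_e}}\gamma_m(t)\frac{1}{\sigma_{dd}^{r_p(m)}}\,o_{t,d}\,\xi_{r_p(m)}
\end{matrix}</math></center>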
where [math]\displaystyle{ o_{t,d} }[/math] is the [math]\displaystyle{ d^{th} }[/math] element of the observation vector [math]\displaystyle{ o_t }[/math], and [math]\displaystyle{ \sigma_{dd}^{r_{p}(m)} }[/math] is the [math]\displaystyle{ d^{th} }[/math] diagonal element of [math]\displaystyle{ \Sigma_{r_p(m)} }[/math]. [math]\displaystyle{ r_{p}(m) }[/math] is the leaf node of the normal decision tree to which Gaussian component [math]\displaystyle{ m }[/math] belongs. [math]\displaystyle{ \gamma_m(t) }[/math] is the posterior for Gaussian component [math]\displaystyle{ m }[/math] at time [math]\displaystyle{ t }[/math], calculated using the forward-backward algorithm with the parameters obtained from the first equation above.
4. Estimate [math]\displaystyle{ \mu_{r_p} }[/math] given the emphasis transform parameters [math]\displaystyle{ W_{r_e} }[/math]. Given the sufficient statistics
<center><math> \begin{matrix}
G_{r_p} = \sum_t\sum_{m\in{r_p}}{\gamma_m(t)}A_{r_e(m)}^T \Sigma_m^{-1}A_{r_e(m)}\\
K_{r_p} = \sum_t\sum_{m\in{r_p}}{\gamma_m(t)}A_{r_e(m)}^T \Sigma_m^{-1}(o_t - b_{r_e(m)})
\end{matrix}</math></center>
the new mean is then estimated by
<center><math> \begin{matrix}
\mu_{r_p} = G_{r_p}^{-1}K_{r_p}
\end{matrix}</math></center>
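To make the interaction of steps 3 and 4 concrete, the following is a minimal numpy sketch of the two interleaved updates on toy data. The toy dimensions, the random statistics, and the fixed posteriors gamma (which in the real procedure would come from the forward-backward pass) are illustrative assumptions only, not the setup of the original paper.
<syntaxhighlight lang="python">
import numpy as np

# Toy setup (all sizes and values are illustrative): D-dimensional features,
# M atomic Gaussians, each mapped to a normal-context leaf r_p(m) and an
# emphasis leaf r_e(m).
D = 3
rng = np.random.default_rng(0)
r_p_of = np.array([0, 1, 2, 3, 0, 1, 2, 3])   # normal-context leaf of each atomic Gaussian
r_e_of = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # emphasis leaf of each atomic Gaussian
M, P, E = len(r_p_of), r_p_of.max() + 1, r_e_of.max() + 1

mu = rng.normal(size=(P, D))                  # canonical means mu_{r_p}
var = np.ones((P, D))                         # diagonal of Sigma_{r_p}
W = np.tile(np.hstack([np.eye(D), np.zeros((D, 1))]), (E, 1, 1))  # W_{r_e} = [A b]

T = 200
obs = rng.normal(size=(T, D))                 # observations o_t
gamma = rng.dirichlet(np.ones(M), size=T)     # fixed toy posteriors gamma_m(t)

def estimate_W(mu, var, W):
    """Step 3: row-wise MLLR-style re-estimation of each emphasis transform."""
    W_new = W.copy()
    for e in range(E):
        for d in range(D):
            G = np.zeros((D + 1, D + 1))
            k = np.zeros(D + 1)
            for m in np.where(r_e_of == e)[0]:
                p = r_p_of[m]
                xi = np.append(mu[p], 1.0)            # extended mean [mu^T 1]^T
                occ = gamma[:, m].sum()               # sum_t gamma_m(t)
                G += occ / var[p, d] * np.outer(xi, xi)
                k += (gamma[:, m] @ obs[:, d]) / var[p, d] * xi
            W_new[e, d] = np.linalg.solve(G, k)       # w_{r_e,d} = G^{-1} k
    return W_new

def estimate_mu(var, W):
    """Step 4: re-estimation of mu_{r_p} given the emphasis transforms."""
    mu_new = np.zeros((P, D))
    for p in range(P):
        G = np.zeros((D, D))
        K = np.zeros(D)
        Sig_inv = np.diag(1.0 / var[p])
        for m in np.where(r_p_of == p)[0]:
            A, b = W[r_e_of[m], :, :D], W[r_e_of[m], :, D]
            occ = gamma[:, m].sum()
            G += occ * A.T @ Sig_inv @ A
            K += A.T @ Sig_inv @ (gamma[:, m] @ obs - occ * b)
        mu_new[p] = np.linalg.solve(G, K)             # mu_{r_p} = G^{-1} K
    return mu_new

# Interleave the two updates; a real system would refresh gamma between passes.
for _ in range(5):
    W = estimate_W(mu, var, W)
    mu = estimate_mu(var, W)
</syntaxhighlight>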