Contributions on Context Adaptive Training with Factorized Decision Trees for HMM-Based Speech Synthesis

==Speech synthesis==
Speech synthesis is the process of producing human-like speech artificially. As mentioned in the original paper, speech synthesis requires a much larger and more complex set of contexts than speech recognition in order to achieve high-quality synthesised speech. Examples of such contexts are the following:


* Identity of neighbouring phones to the central phone. Two phones to the left and the right of the centre phone are usually considered as phonetic neighbouring contexts
* Position of phones, syllables, words and phrases w.r.t. higher level units
* Number of phones, syllables, words and phrases w.r.t. higher level units
* Syllable stress and accent status
* Linguistic role, e.g. part-of-speech tag
* Emotion and emphasis
The traditional approach to handling these different kinds of contexts is to use a distinct Hidden Markov Model for every possible combination of contexts. This method would clearly require a huge training data set that covers all possible context combinations.
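
To see why this quickly becomes infeasible, a toy count of full-context combinations is sketched below; the inventory sizes are illustrative assumptions, not figures from the paper.

<syntaxhighlight lang="python">
# Toy illustration (assumed inventory sizes, not from the paper): the number of
# distinct full-context models grows multiplicatively with each context factor.
quinphone_identities = 40 ** 4        # two left + two right neighbouring phones, ~40 phones each
positional_contexts  = 10 * 10 * 5    # assumed position counts within syllable, word and phrase
stress_accent        = 2 * 2          # stressed/unstressed, accented/unaccented
pos_tags             = 10             # coarse part-of-speech classes
emphasis             = 2              # emphasised vs. neutral

total_contexts = (quinphone_identities * positional_contexts *
                  stress_accent * pos_tags * emphasis)
print(f"{total_contexts:,} distinct full-context models")  # on the order of 10^11
</syntaxhighlight>

No realistic corpus covers more than a tiny fraction of these combinations, which is why parameter sharing through decision tree clustering is needed.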


==Notes==
There are many factors that could affect the acoustic realisation of phones. Prior knowledge of such factors forms the questions used in the decision tree based state clustering procedure. Some questions are highly correlated, e.g. the phonetic broad class questions and the syllable questions. Others are not, such as the example mentioned in the paper (phonetic broad class questions and emphasis questions).
==MLLR based approach==


Let's rewrite the first equation in (4) of the original paper as:
 
 
<center><math> \begin{matrix}
\hat \mu_{r_{c}} = \hat \mu_m = A_{r_{e}(m)}\mu_{r_{p}(m)} +  b_{r_{e}(m)} = W_{r_{e}(m)}\xi_{r_{p}(m)}\\
\hat \Sigma_{r_{c}} = \hat \Sigma_{m} = \Sigma_{r_{p}(m)}
\end{matrix}</math></center>
 
Let <math>m</math> be used instead of <math>r_c</math> to denote the index of the atomic state cluster, let <math>W_{r_{e}} = [A_{r_{e}}\ b_{r_{e}}]</math> be the extended transform associated with leaf node <math>r_e</math>, and let <math>\xi_{r_{p}(m)} = [\mu_{r_{p}(m)}^T\ 1]^T</math> be the extended mean vector; all other parameters are as previously defined. From the above equation, the parameters of the combined leaf node cannot be estimated directly. Instead, they are constructed using two sets of parameters with different state clustering structures. The detailed procedure is as follows:
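
As a minimal sketch of the adaptation equation above (with an assumed three-dimensional feature space and arbitrary example values), the extended transform and extended mean vector can be combined as follows:

<syntaxhighlight lang="python">
import numpy as np

D = 3                                   # feature dimension (illustrative)
mu_rp = np.array([1.0, -0.5, 2.0])      # canonical mean of the normal-context leaf r_p(m)
A_re  = np.eye(D) * 1.1                 # linear part of the emphasis transform of leaf r_e(m)
b_re  = np.array([0.2, 0.0, -0.1])      # bias part of the emphasis transform

# Extended transform W = [A b] and extended mean xi = [mu^T 1]^T (standard MLLR convention)
W_re  = np.hstack([A_re, b_re[:, None]])   # D x (D+1)
xi_rp = np.append(mu_rp, 1.0)              # (D+1,)

mu_hat_m = W_re @ xi_rp                    # adapted mean of atomic cluster m
assert np.allclose(mu_hat_m, A_re @ mu_rp + b_re)
</syntaxhighlight>

Stacking the bias into <math>W_{r_{e}} = [A_{r_{e}}\ b_{r_{e}}]</math> lets the adaptation be written as a single matrix-vector product, which is what the row-wise estimation formulas below exploit.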
 
1. Construct factorized decision trees for normal contexts <math>(r_p)</math> and emphasis contexts <math>(r_e)</math>. Each atomic state cluster <math> m = r_e(m) \cap r_p(m)</math> is the intersection of a normal-tree leaf and an emphasis-tree leaf (an atomic Gaussian in the single Gaussian case).
 
2. Get initial parameters of the atomic Gaussians from state clustering using the normal decision tree, and set <math>\hat \mu_m = \mu_{r_{p}(m)} </math>.
 
3. Estimate <math>W_{r_{e}}</math> given the current model parameters <math>\mu_{r_{p}(m)}</math> and <math>\Sigma_{r_{p}(m)}</math>. The <math>d^{th}</math> row of <math>W_{r_{e}}</math>, <math>w_{r_{e},d}^T</math>, is estimated as
 
<center><math> \begin{matrix}
w_{r_{e},d} = G_{r_{e},d}^{-1}k_{r_{e},d}
\end{matrix}</math></center>
 
where the sufficient statistics for the <math>d^{th}</math> row are given by
 
<center><math> \begin{matrix}
G_{r_{e},d} = \sum_t\sum_{m\in{r_e}}\frac {\gamma_m(t)}{\sigma_{dd}^{r_{p}(m)}}\xi_{r_{p}(m)}\xi_{r_{p}(m)}^T\\
k_{r_{e},d} = \sum_t\sum_{m\in{r_e}}\frac {\gamma_m(t)o_{t,d}}{\sigma_{dd}^{r_{p}(m)}}\xi_{r_{p}(m)}
\end{matrix}</math></center>
 
where <math>o_{t,d}</math> is the <math>d^{th}</math> element of the observation vector <math>o_t</math>, and <math>\sigma_{dd}^{r_{p}(m)}</math> is the <math>d^{th}</math> diagonal element of <math>\Sigma_{r_p(m)}</math>. <math>r_{p}(m)</math> is the leaf node of the normal decision tree to which Gaussian component <math>m</math> belongs. <math>\gamma_m(t)</math> is the posterior of Gaussian component <math>m</math> at time <math>t</math>, calculated using the forward-backward algorithm with the parameters obtained from the first equation above.
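
These row-wise statistics can be accumulated directly. The following is a minimal NumPy sketch, assuming single-Gaussian states, diagonal covariances, posteriors <math>\gamma_m(t)</math> already computed by the forward-backward pass, and a mapping from each atomic cluster to its normal-tree leaf; the function and argument names are illustrative, not taken from any toolkit.

<syntaxhighlight lang="python">
import numpy as np

def estimate_emphasis_transform(obs, gamma, members, leaf_of_m, mu_rp, var_rp):
    """Estimate W_{r_e} = [A b] for one emphasis leaf from the row-wise statistics
    G_{r_e,d} and k_{r_e,d}, given the current normal-context Gaussians.

    obs      : (T, D) observations
    gamma    : (T, M) posteriors of the atomic clusters (from forward-backward)
    members  : indices of the atomic clusters m that belong to this emphasis leaf r_e
    leaf_of_m: length-M array, normal-tree leaf index r_p(m) for each cluster
    mu_rp    : (P, D) canonical means of the normal-tree leaves
    var_rp   : (P, D) diagonal covariances of the normal-tree leaves
    """
    T, D = obs.shape
    W = np.zeros((D, D + 1))
    for d in range(D):
        G = np.zeros((D + 1, D + 1))
        k = np.zeros(D + 1)
        for m in members:
            p = leaf_of_m[m]
            xi = np.append(mu_rp[p], 1.0)          # extended mean vector [mu^T 1]^T
            occ = gamma[:, m].sum()                # sum_t gamma_m(t)
            wocc = gamma[:, m] @ obs[:, d]         # sum_t gamma_m(t) o_{t,d}
            G += occ / var_rp[p, d] * np.outer(xi, xi)
            k += wocc / var_rp[p, d] * xi
        W[d] = np.linalg.solve(G, k)               # w_{r_e,d} = G_{r_e,d}^{-1} k_{r_e,d}
    return W
</syntaxhighlight>

A separate transform <math>W_{r_e}</math> is estimated in this way for each leaf of the emphasis tree.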




4. Estimate <math>\mu_{r_p}</math> given the emphasis transform parameters <math>W_{r_e}</math>, using the sufficient statistics
<center><math> \begin{matrix}
G_{r_p} = \sum_t\sum_{m\in{r_p}}{\gamma_m(t)}A_{r_e(m)}^T \Sigma_m^{-1}A_{r_e(m)}\\
k_{r_p} = \sum_t\sum_{m\in{r_p}}{\gamma_m(t)}A_{r_e(m)}^T \Sigma_m^{-1}(o_t - b_{r_e(m)})
\end{matrix}</math></center>
and the new mean is then estimated by
<center><math> \begin{matrix}
\mu_{r_p} = G_{r_p}^{-1}k_{r_p}
\end{matrix}</math></center>
5. Given the updated mean <math>\mu_{r_p}</math> and the transform <math>W_{r_e}</math>, perform context adaptation to get <math>\hat \mu_m</math> using the first equation above.
6. The re-estimation of <math>\Sigma_{r_p}</math> is then performed using the standard covariance update formula with the adapted <math>\hat \mu_m</math>. Here, the statistics are accumulated for each leaf node <math>r_p</math> rather than for each individual component <math>m</math>:
<center><math> \begin{matrix}
\Sigma_{r_p} = \mathrm{diag}\left(\frac{\sum_{t,m\in {r_p}}\gamma_m(t)(o_t-\hat \mu_m)(o_t-\hat \mu_m)^T}{\sum_{t,m\in {r_p}}\gamma_m(t)}\right)
\end{matrix}</math></center>
where <math>\gamma_m(t)</math> is calculated using <math>\hat \mu_m</math> constructed from the new estimates of <math>\mu_{r_p}</math> and <math>W_{r_e}</math>.
7. Go back to step (3) and repeat until convergence.
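
For completeness, the mean and covariance updates of steps 4 and 6 can be sketched in the same style as the transform update above, again assuming single Gaussians, diagonal covariances and precomputed posteriors; all names and array layouts are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

def reestimate_leaf_mean(obs, gamma, members, A_of_m, b_of_m, var_rp_p):
    """Step 4: re-estimate the canonical mean mu_{r_p} of one normal-tree leaf,
    given the emphasis transforms (A, b) of the atomic clusters m in that leaf.

    obs      : (T, D) observations
    gamma    : (T, M) posteriors of the atomic clusters
    members  : indices of the atomic clusters m belonging to this leaf r_p
    A_of_m   : mapping m -> (D, D) linear part of the transform of r_e(m)
    b_of_m   : mapping m -> (D,) bias part of the transform of r_e(m)
    var_rp_p : (D,) current diagonal covariance of the leaf (Sigma_m for all m in r_p)
    """
    T, D = obs.shape
    G = np.zeros((D, D))
    k = np.zeros(D)
    inv_var = 1.0 / var_rp_p
    for m in members:
        A, b = A_of_m[m], b_of_m[m]
        occ = gamma[:, m].sum()                  # sum_t gamma_m(t)
        wobs = gamma[:, m] @ obs                 # sum_t gamma_m(t) o_t
        G += occ * (A.T * inv_var) @ A           # sum_t gamma_m(t) A^T Sigma_m^{-1} A
        k += A.T @ (inv_var * (wobs - occ * b))  # sum_t gamma_m(t) A^T Sigma_m^{-1} (o_t - b)
    return np.linalg.solve(G, k)                 # mu_{r_p} = G_{r_p}^{-1} k_{r_p}

def reestimate_leaf_cov(obs, gamma, members, mu_hat_m):
    """Step 6: diagonal covariance update for one leaf, pooling statistics over all
    atomic clusters m in the leaf and using the adapted means mu_hat_m from step 5.

    mu_hat_m : mapping m -> (D,) adapted mean A_{r_e(m)} mu_{r_p} + b_{r_e(m)}
    """
    num = np.zeros(obs.shape[1])
    den = 0.0
    for m in members:
        diff = obs - mu_hat_m[m]                 # (T, D)
        num += gamma[:, m] @ (diff * diff)       # sum_t gamma_m(t) (o_t - mu_hat_m)^2
        den += gamma[:, m].sum()
    return num / den                             # diagonal of Sigma_{r_p}
</syntaxhighlight>

Iterating these updates together with the transform update from step 3 corresponds to steps 3 to 7 of the procedure.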
==State clustering==
The idea of decision tree based state clustering is to use a binary decision tree, in which a question is attached to each non-leaf node, to assign the state distribution of every possible full-context HMM to a state cluster. When a single Gaussian is used as the state output distribution, and the Gaussian parameters <math>\mu(\Theta)</math> and <math>\Sigma(\Theta)</math> are ML estimates, the log likelihood of a set of states <math>\Theta</math> can be written as
<center><math> \begin{matrix}
l(\Theta) = \sum_t\sum_{\theta\in\Theta}\gamma_\theta(o_t)\log N(o_t;\mu(\Theta), \Sigma(\Theta))\\
= -\frac{\gamma(\Theta)}{2}(\log |\Sigma(\Theta)|+ D \log(2\pi) + D)
\end{matrix}</math></center>
where <math>D</math> is the data dimension, and <math>\gamma(\Theta)</math> and <math>\Sigma(\Theta)</math> are the total occupancy and the covariance matrix of the pooled state, respectively:
<center><math> \begin{matrix}
\gamma(\Theta) = \sum_{\theta\in\Theta}\left(\sum_t\gamma_\theta(o_t)\right)\\
\mu(\Theta) = \frac{1}{\gamma(\Theta)}\sum_{\theta\in\Theta}\left(\sum_t\gamma_\theta(o_t)\right)\mu_\theta\\
\Sigma(\Theta) = \frac{1}{\gamma(\Theta)}\sum_{\theta\in\Theta}\left(\sum_t\gamma_\theta(o_t)\right)\left(\mu_\theta\mu_\theta^T + \Sigma_\theta\right) - \mu(\Theta)\mu(\Theta)^T
\end{matrix}</math></center>
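
In decision tree construction these quantities are typically used to score a candidate question: the gain of a split is the log likelihood of the two child pools minus that of the parent pool. Below is a minimal sketch under the single-Gaussian, diagonal-covariance assumption, with per-state occupancies, means and covariances assumed to be precomputed.

<syntaxhighlight lang="python">
import numpy as np

def pooled_loglik(occ, means, covs):
    """Log likelihood l(Theta) of a pool of single-Gaussian states with diagonal
    covariances, computed from per-state occupancies, means and covariances."""
    occ = np.asarray(occ, dtype=float)       # (S,) occupancy of each state in the pool
    means = np.asarray(means, dtype=float)   # (S, D) state means
    covs = np.asarray(covs, dtype=float)     # (S, D) diagonal state covariances
    g = occ.sum()                            # gamma(Theta)
    mu = occ @ means / g                     # pooled mean mu(Theta)
    sigma = occ @ (covs + means ** 2) / g - mu ** 2   # pooled diagonal covariance
    D = means.shape[1]
    return -0.5 * g * (np.sum(np.log(sigma)) + D * np.log(2 * np.pi) + D)

def split_gain(parent, yes, no):
    """Gain of a candidate question: likelihood of the child pools minus the parent's.
    Each argument is a tuple (occ, means, covs) describing one pool of states."""
    return pooled_loglik(*yes) + pooled_loglik(*no) - pooled_loglik(*parent)
</syntaxhighlight>

At each node the question with the largest gain is chosen, and splitting stops when the gain falls below a threshold or the occupancy of a child pool becomes too small.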
When using a structured context adaptive training representation, there are two sets of parameters to be clustered: transform and Gaussian parameters, resulting in two or more decision trees. There are three ways to build such trees:
* Independent construction: assumes that the factorized decision trees are independent of each other, and therefore builds them separately. This approximation results in a factorization that depends purely on the different sets of context questions used during decision tree construction
* Dependent construction: builds the factorized decision trees one by one. Each tree is built assuming that the remaining parameter sets, along with the sharing structure, are fixed. An iterative process is used, with all parameters being re-estimated after every split
* Simultaneous construction: builds all factorized decision trees at once. At each split, all trees are optimized interdependently until the stopping criterion is met.
In this paper, independent construction is employed.
In HMM-based speech synthesis, decision tree based clustering is usually performed twice to obtain a better clustering structure. The general procedure is as follows:
1. Train mono-phone HMMs and construct untied full context dependent HMMs
2. Perform one EM re-estimation of the untied full context dependent HMMs.
3. Perform state clustering given the parameters of the untied model in step (2)
4. Perform several iterations of EM re-estimation of the clustered HMMs
5. Untie the clustered HMMs and perform one more EM re-estimation to get updated parameters of the untied full context dependent HMMs
6. Perform state clustering given the parameters of the untied model in step (5)


7. Perform several iterations of EM re-estimation of the clustered HMMs
