Contributions on Context Adaptive Training with Factorized Decision Trees for HMM-Based Speech Synthesis

Latest revision as of 09:45, 30 August 2017
Speech synthesis
Speech synthesis is the process of artificially producing human-like speech. As mentioned in the original paper, speech synthesis requires a much larger and more complex set of contexts in order to achieve high-quality synthesised speech. Examples of such contexts are the following:
 Identity of the phones neighbouring the central phone. The two phones to the left and to the right of the centre phone are usually considered as phonetic neighbouring contexts
 Position of phones, syllables, words and phrases w.r.t. higher-level units
 Number of phones, syllables, words and phrases w.r.t. higher-level units
 Syllable stress and accent status
 Linguistic role, e.g. part-of-speech tag
 Emotion and emphasis
The traditional approach to handling these different kinds of contexts is to use a distinct Hidden Markov Model for every possible combination of contexts. This method would clearly require a huge training data set covering all possible context combinations.
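The combinatorial explosion mentioned above can be made concrete with a short calculation. The context factors and cardinalities below are hypothetical round numbers chosen for illustration, not values from the paper:

```python
# Illustrative sketch: the number of distinct full-context models grows
# multiplicatively with the number of context factors. The cardinalities
# below are made-up round numbers, not figures from the paper.
context_factors = {
    "quinphone identity": 50 ** 4,   # ~50 phones, 2 left + 2 right neighbours
    "syllable stress":    2,
    "accent status":      2,
    "position in word":   10,
    "part-of-speech":     30,
}

total_models = 1
for name, cardinality in context_factors.items():
    total_models *= cardinality

print(f"possible full-context combinations: {total_models:,}")
# Far more combinations than any realistic corpus covers, which is why
# decision tree based state clustering is needed to tie parameters.
```

Even with these modest cardinalities, the count runs into the billions, so almost every full-context model would be unseen in training.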
Notes
There are many factors that can affect the acoustic realisation of phones. Prior knowledge of such factors forms the questions used in the decision tree based state clustering procedure. Some questions are highly correlated, e.g. the phonetic broad class questions and the syllable questions; others are not, such as the example mentioned in the paper (phonetic broad class questions and emphasis questions).
MLLR-based approach
Let us rewrite the first equation in (4) of the original paper as:
[math]\hat \mu_m = A_{r_e} \mu_{r_p(m)} + b_{r_e} = W_{r_e} \xi_{r_p(m)}[/math]
where m is used instead of [math]r_c[/math] to denote the index of the atomic state cluster, [math]\xi_{r_p(m)} = [\mu_{r_p(m)}^T \, 1]^T[/math] is the extended mean vector, [math]W_{r_{e}} = [A_{r_{e}} \, b_{r_{e}}][/math] is the extended transform associated with leaf node [math]r_e[/math], and all other parameters are as previously defined. From the above equation, the parameters of the combined leaf node cannot be estimated directly. Instead, they are constructed from two sets of parameters with different state clustering structures. The detailed procedure is as follows:
1. Construct factorized decision trees for normal contexts [math](r_p)[/math] and emphasis contexts [math](r_e)[/math]. Let [math] m = r_e(m) \cap r_p(m)[/math] be the atomic state cluster (atomic Gaussian in the single Gaussian case)
2. Get initial parameters of the atomic Gaussians from state clustering using the normal decision tree, and let [math]\hat \mu_m = \mu_{r_{p}(m)}[/math]
3. Estimate [math]W_{r_{e}}[/math] given the current model parameters [math]\mu_{r_{p}(m)}[/math] and [math]\Sigma_{r_{p}(m)}[/math]. The [math]d^{th}[/math] row of [math]W_{r_{e}}[/math], [math]w_{r_{e},d}^T[/math], is estimated as
[math]w_{r_e,d} = G_d^{(r_e)^{-1}} k_d^{(r_e)}[/math]
where the sufficient statistics for the [math]d^{th}[/math] row are given by
[math]G_d^{(r_e)} = \sum_{m \in r_e} \sum_t \gamma_m(t) \frac{1}{\sigma_{dd}^{r_p(m)}} \xi_{r_p(m)} \xi_{r_p(m)}^T[/math]
[math]k_d^{(r_e)} = \sum_{m \in r_e} \sum_t \gamma_m(t) \frac{o_{t,d}}{\sigma_{dd}^{r_p(m)}} \xi_{r_p(m)}[/math]
where [math]o_{t,d}[/math] is the [math]d^{th}[/math] element of the observation vector [math]o_t[/math], [math]\sigma_{dd}^{r_p(m)}[/math] is the [math]d^{th}[/math] diagonal element of [math]\Sigma_{r_p(m)}[/math], [math]\xi_{r_p(m)} = [\mu_{r_p(m)}^T \, 1]^T[/math] is the extended mean vector, and [math]r_{p}(m)[/math] is the leaf node of the normal decision tree to which Gaussian component m belongs. [math]\gamma_m(t)[/math] is the posterior of Gaussian component m at time t, calculated using the forward-backward algorithm with the parameters obtained from the first equation above.
4. Estimate [math]\mu_{r_p}[/math] given the emphasis transform parameters [math]W_{r_e}[/math]. Given the sufficient statistics
[math]G^{(r_p)} = \sum_{m \in r_p} \sum_t \gamma_m(t) A_{r_e(m)}^T \Sigma_{r_p}^{-1} A_{r_e(m)}[/math]
[math]k^{(r_p)} = \sum_{m \in r_p} \sum_t \gamma_m(t) A_{r_e(m)}^T \Sigma_{r_p}^{-1} \left( o_t - b_{r_e(m)} \right)[/math]
the new mean is then estimated by
[math]\mu_{r_p} = G^{(r_p)^{-1}} k^{(r_p)}[/math]
5. Given the updated mean [math]\mu_{r_p}[/math] and transform [math]W_{r_e}[/math], perform context adaptation to get [math]\hat \mu_m[/math] using the first equation above
6. The reestimation of [math]\Sigma_{r_p}[/math] is then performed using the standard covariance update formula with the adapted [math]\hat \mu_m[/math]. Here, the statistics are accumulated for each leaf node [math]r_p[/math] rather than for each individual component [math]m[/math]:
[math]\Sigma_{r_p} = \frac{\sum_{m \in r_p} \sum_t \gamma_m(t) \left( o_t - \hat \mu_m \right) \left( o_t - \hat \mu_m \right)^T}{\sum_{m \in r_p} \sum_t \gamma_m(t)}[/math]
where [math]\gamma_m(t)[/math] is calculated using [math]\hat \mu_m[/math] constructed from the new estimates of [math]\mu_{r_p}[/math] and [math]W_{r_e}[/math]
7. Go to step (3) until convergence
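As an illustration, the per-row transform estimation in step (3) can be sketched in NumPy. This is a toy sketch, assuming single-Gaussian states with diagonal covariances; the statistics are randomly generated rather than accumulated from real speech data, and all dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 3            # feature dimension (illustrative)
T = 40           # frames assigned to this emphasis leaf r_e
M = 4            # atomic Gaussian components under leaf r_e

# Canonical (normal-tree) parameters for each component's leaf r_p(m)
mu = rng.normal(size=(M, D))               # means mu_{r_p(m)}
var = rng.uniform(0.5, 2.0, size=(M, D))   # diagonal covariances sigma_dd
obs = rng.normal(size=(T, D))              # observations o_t
gamma = rng.dirichlet(np.ones(M), size=T)  # posteriors gamma_m(t), shape (T, M)

# Extended mean vectors xi = [mu^T 1]^T
xi = np.hstack([mu, np.ones((M, 1))])      # shape (M, D+1)

# Estimate each row w_d of the extended transform W_{r_e} = [A b]
W = np.zeros((D, D + 1))
for d in range(D):
    G_d = np.zeros((D + 1, D + 1))
    k_d = np.zeros(D + 1)
    for m in range(M):
        occ = gamma[:, m].sum()                 # sum_t gamma_m(t)
        wtd_obs = gamma[:, m] @ obs[:, d]       # sum_t gamma_m(t) o_{t,d}
        G_d += occ / var[m, d] * np.outer(xi[m], xi[m])
        k_d += wtd_obs / var[m, d] * xi[m]
    # w_d = G_d^{-1} k_d (tiny ridge added for numerical safety in this toy)
    W[d] = np.linalg.solve(G_d + 1e-8 * np.eye(D + 1), k_d)

# Context adaptation (step 5): mu_hat_m = W xi_m for every component m
mu_hat = xi @ W.T   # shape (M, D)
```

In a full implementation the posteriors would come from the forward-backward algorithm and the loop would alternate with the mean and covariance updates of steps (4) and (6) until convergence.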
State clustering
The idea of decision tree based state clustering is to use a binary decision tree, in which a question is attached to each non-leaf node, to assign the state distribution of every possible full context HMM to a state cluster. When a single Gaussian is used as the state output distribution, and considering that the Gaussian parameters [math]\mu(\Theta)[/math] and [math]\Sigma(\Theta)[/math] are ML estimates, the log likelihood of a set of states [math]\Theta[/math] can be represented as
[math]\mathcal{L}(\Theta) = -\frac{1}{2} \gamma(\Theta) \left( D \log (2\pi) + \log \left| \Sigma(\Theta) \right| + D \right)[/math]
where [math]D[/math] is the data dimension, and [math]\gamma(\Theta)[/math] and [math]\Sigma(\Theta)[/math] are the total occupancy and the covariance matrix of the pooled state respectively:
[math]\gamma(\Theta) = \sum_{s \in \Theta} \sum_t \gamma_s(t)[/math]
[math]\Sigma(\Theta) = \frac{\sum_{s \in \Theta} \sum_t \gamma_s(t) \left( o_t - \mu(\Theta) \right) \left( o_t - \mu(\Theta) \right)^T}{\gamma(\Theta)}[/math]
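The clustering criterion can be sketched as a pair of small Python helpers: one scoring the log likelihood of a pooled cluster from its occupancy and ML covariance, and a hypothetical helper scoring a candidate split by its likelihood gain (the pooled statistics are assumed to be available from an accumulation pass):

```python
import numpy as np

def cluster_log_likelihood(occupancy, pooled_cov):
    """Log likelihood of a pooled state cluster under a single Gaussian with
    ML-estimated parameters: -0.5 * gamma * (D*log(2*pi) + log|Sigma| + D)."""
    D = pooled_cov.shape[0]
    sign, logdet = np.linalg.slogdet(pooled_cov)
    return -0.5 * occupancy * (D * np.log(2 * np.pi) + logdet + D)

def split_gain(parent, left, right):
    """Likelihood gain of splitting a node with a question; each argument is
    a (occupancy, pooled_cov) pair. During tree growing, the question with
    the largest positive gain is chosen at each node."""
    return (cluster_log_likelihood(*left)
            + cluster_log_likelihood(*right)
            - cluster_log_likelihood(*parent))
```

A split whose children have tighter pooled covariances than the parent yields a positive gain; growing stops when no question's gain exceeds a threshold.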
When using a structured context adaptive training representation, there are two sets of parameters to be clustered: transform and Gaussian parameters, resulting in two or more decision trees. There are three ways to build such trees:
 Independent construction: assumes that the factorized decision trees are independent of each other, so each tree is built separately. This approximation yields a factorization that depends purely on the different sets of context questions used during decision tree construction
 Dependent construction: builds the factorized decision trees one by one. Each tree is built assuming that the remaining parameter sets, along with the sharing structure, are fixed. An iterative process is used, with all parameters being reestimated after every split
 Simultaneous construction: builds all factorized decision trees at once. At each split, all trees are optimized interdependently until the stopping criterion is met.
In this paper, independent construction is employed.
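Independent construction amounts to running the standard greedy tree-growing loop once per context factor, each run using only that factor's question set. A minimal sketch follows, assuming one-dimensional single-Gaussian state statistics and representing each question as a yes/no predicate over states; all names and data layouts here are illustrative, not from the paper:

```python
import numpy as np

def node_ll(states):
    """Log likelihood of pooling a set of 1-D single-Gaussian states.
    Each state is (occupancy, mean, variance, tag); the pooled ML variance
    combines within-state variance and between-state mean spread."""
    occ = sum(s[0] for s in states)
    mean = sum(s[0] * s[1] for s in states) / occ
    var = sum(s[0] * (s[2] + (s[1] - mean) ** 2) for s in states) / occ
    return -0.5 * occ * (np.log(2 * np.pi) + np.log(var) + 1.0)

def grow_tree(states, questions, min_gain=1.0):
    """Greedily split `states` with the question giving the largest
    likelihood gain; recurse until no split exceeds `min_gain`.
    Leaves are lists of states (the state clusters)."""
    best = None
    for q in questions:
        yes = [s for s in states if q(s)]
        no = [s for s in states if not q(s)]
        if not yes or not no:
            continue
        gain = node_ll(yes) + node_ll(no) - node_ll(states)
        if best is None or gain > best[0]:
            best = (gain, q, yes, no)
    if best is None or best[0] < min_gain:
        return states
    _, q, yes, no = best
    return (q, grow_tree(yes, questions, min_gain),
               grow_tree(no, questions, min_gain))
```

Calling `grow_tree` once with the normal-context question set and once with the emphasis question set would give the two factorized trees of the independent construction.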
In speech synthesis techniques that use HMMs, decision tree based clustering is usually performed twice to obtain a better clustering structure. The general procedure is as follows:
1. Train monophone HMMs and construct untied full context dependent HMMs
2. Perform one EM reestimation of the untied full context dependent HMMs.
3. Perform state clustering given the parameters of the untied model in step (2)
4. Perform several iterations of EM reestimation of the clustered HMMs
5. Untie the clustered HMMs and perform one more EM reestimation to get updated parameters of the untied full context dependent HMMs
6. Perform state clustering given the parameters of the untied model in step (5)
7. Perform several iterations of EM reestimation of the clustered HMMs
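The two-pass recipe can be outlined as a runnable sketch. The helper below is a stub that only records the stage ordering; in a real system each call would invoke HMM toolkit operations (training, clustering, untying), which are beyond the scope of this outline:

```python
# Outline of the two-pass clustering recipe. Each call is a placeholder for
# a real toolkit operation; only the ordering of the steps is taken from the
# procedure above, everything else is illustrative.
stages = []

def stage(description):
    stages.append(description)

stage("1: train monophone HMMs; construct untied full context HMMs")
stage("2: one EM reestimation of the untied models")
stage("3: state clustering given the untied parameters from step 2")
stage("4: several EM reestimations of the clustered HMMs")
stage("5: untie; one more EM reestimation of the untied models")
stage("6: state clustering given the untied parameters from step 5")
stage("7: several EM reestimations of the clustered HMMs")

print(len(stages))  # 7
```

The second clustering pass (steps 5-6) reuses the better-trained untied parameters, which generally yields a better tree than the first pass alone.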