Title: An efficient speech generative model based on deterministic/stochastic separation of spectral envelopes
Azarov, E. S.
Likhachov, D. S.
Petrovsky, A. A.
Keywords: Доклады БГУИР; speech generative model; harmonic plus noise model; speech analysis; speech coding
Citation: An efficient speech generative model based on deterministic/stochastic separation of spectral envelopes / E. S. Azarov [et al.] // Доклады БГУИР. – 2020. – № 18 (2). – P. 23–29. – DOI: http://dx.doi.org/10.35596/1729-7648-2020-18-2-23-29.
Abstract: The paper presents a speech generative model that provides an efficient way of generating a speech waveform from its amplitude spectral envelopes. The model is based on a hybrid speech representation that includes deterministic (harmonic) and stochastic (noise) components. The main idea behind the approach originates from the fact that the speech signal has a determined spectral structure that is statistically bound to the deterministic/stochastic energy distribution in the spectrum. The performance of the model is evaluated using an experimental low-bitrate wide-band speech coder. The quality of the reconstructed speech is evaluated using objective and subjective methods. Two objective quality characteristics were calculated: Modified Bark Spectral Distortion (MBSD) and Perceptual Evaluation of Speech Quality (PESQ). The narrow-band and wide-band versions of the proposed solution were compared with the MELP (Mixed Excitation Linear Prediction) speech coder and the AMR (Adaptive Multi-Rate) speech coder, respectively. A speech base of two female and two male speakers was used for testing. The performed tests show that the overall performance of the proposed approach is speaker-dependent and is better for male voices. Presumably, this difference indicates the influence of pitch height on separation accuracy. Using the proposed approach in an experimental speech compression system provides decent MBSD values and PESQ values comparable with the AMR speech coder at 6.6 kbit/s. Additional subjective listening tests demonstrate that the implemented coding system retains the phonetic content and the speaker's identity, which confirms the consistency of the proposed approach.
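To illustrate the hybrid (harmonic plus noise) representation the abstract refers to, the following is a minimal sketch of one synthesis frame: the deterministic part is a sum of sinusoids at multiples of the pitch, and the stochastic part is amplitude-scaled white noise. The function name, parameters, and envelope handling are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def harmonic_plus_noise(f0, harmonic_amps, noise_gain, sr=16000, duration=0.02):
    """Sketch of harmonic-plus-noise synthesis for a single frame.

    f0             -- pitch (fundamental frequency) in Hz (assumed constant)
    harmonic_amps  -- amplitudes of harmonics 1..K (deterministic envelope)
    noise_gain     -- scale of the white-noise component (stochastic part)
    """
    t = np.arange(int(sr * duration)) / sr
    # Deterministic component: sum of sinusoids at harmonics of f0
    harmonic = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
                   for k, a in enumerate(harmonic_amps))
    # Stochastic component: scaled white noise
    noise = noise_gain * np.random.randn(t.size)
    return harmonic + noise
```

In the full model the two components would be shaped by separate spectral envelopes per frame; here a single gain stands in for the noise envelope to keep the sketch short.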
Appears in Collections: № 18(2)