Speech Recognition

Speech is the vocalized form of human communication. It is based upon the syntactic combination of lexicals and names that are drawn from very large vocabularies (usually about 10,000 different words). Each spoken word is created out of the phonetic combination of a limited set of vowel and consonant speech sound units. These vocabularies, the syntax which structures them, and their sets of speech sound units differ, creating many thousands of mutually unintelligible human languages [1].

Speech is the quintessential form of human communication and is what has driven the human race this far, so speech technology is also an important subject to study. In 1874, experiments conducted by Alexander Graham Bell proved that the frequency harmonics of an electrical signal can be separated; this was the foundation that later led to the digitalization of speech and the beginning of the speech recognition era.

Harvey Fletcher and Homer Dudley firmly established the importance of the signal spectrum for reliable identification of the phonetic nature of a speech sound. Following the convention established by these two outstanding scientists, most modern systems and algorithms for speech recognition are based on the concept of measuring the (time-varying) speech power spectrum (or its variants, such as the cepstrum), in part because measuring the power spectrum of a signal is relatively easy to accomplish with modern digital signal processing techniques. Thanks to the increased processing power of modern CPUs, this task becomes easier every day, allowing work to concentrate on interpreting the speech and responding to actions derived from it, rather than on waiting for the speech patterns to be processed.

The problem with Automatic Speech Recognition (ASR) lies in writing computer programs that can take in a sound wave and reproduce the same sequence of words that a person would hear when listening to the same sound; that is, defining an association between the acoustic features of sounds and the words people perceive.
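As a rough illustration of the spectral measurements mentioned above, the sketch below computes the power spectrum and the real cepstrum of one short analysis frame with NumPy. The frame length, sample rate, and the 200 Hz test tone are illustrative assumptions, not values taken from any particular recognizer.

```python
import numpy as np

def power_spectrum(frame: np.ndarray) -> np.ndarray:
    """Squared magnitude of the DFT of a windowed frame."""
    windowed = frame * np.hamming(len(frame))
    spectrum = np.fft.rfft(windowed)
    return np.abs(spectrum) ** 2

def real_cepstrum(frame: np.ndarray) -> np.ndarray:
    """Inverse DFT of the log magnitude spectrum (a common spectrum 'variant')."""
    windowed = frame * np.hamming(len(frame))
    log_mag = np.log(np.abs(np.fft.rfft(windowed)) + 1e-12)
    return np.fft.irfft(log_mag)

# One 32 ms frame at an assumed 8 kHz sample rate, holding a 200 Hz tone.
sample_rate = 8000
t = np.arange(256) / sample_rate
frame = np.sin(2 * np.pi * 200 * t)

spec = power_spectrum(frame)
peak_hz = np.argmax(spec) * sample_rate / 256
print(peak_hz)  # the strongest bin lies near the 200 Hz tone
```

Measuring a sequence of such frames over time yields the time-varying spectrum that most recognizers operate on.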

Speech Recognizers

The first attempts to design systems for automatic speech recognition were mostly guided by the theory of acoustic phonetics, a subfield of phonetics which deals with the acoustic aspects of speech sounds. Acoustic phonetics investigates properties like the mean squared amplitude of a waveform, its duration, its fundamental frequency, and other properties of its frequency spectrum, as well as the relationship of these properties to other branches of phonetics and to abstract linguistic concepts like phones, phrases, or utterances [3].

Another important term in the process of speech recognition is the formant (or formants), which in speech science and phonetics is used to mean an acoustic resonance of the human vocal tract. It is often measured as an amplitude peak in the frequency spectrum of the sound, using a spectrogram or a spectrum analyzer, though in vowels spoken with a high fundamental frequency, as in a female or child voice, the frequency of the resonance may lie between the widely spaced harmonics and hence no peak is visible.

In 1952, Davis, Biddulph, and Balashek of Bell Laboratories built a system for isolated digit recognition for a single speaker, using the formant frequencies measured (or estimated) during vowel regions of each digit [2]. This system worked with the formant trajectories along the dimensions of the first and second formant frequencies for each of the ten digits, one to nine and zero. These trajectories served as the "reference pattern" for determining the identity of an unknown digit utterance as the best matching digit.

In another early recognition system, Fry and Denes, at University College in England, built a phoneme recognizer to recognize 4 vowels and 9 consonants. By incorporating statistical information about allowable phoneme sequences in English, they increased the overall phoneme recognition accuracy for words consisting of two or more phonemes; this was done by supplying the system with previous entries, in essence training the system to know the vowels and the consonants by repetition, as we do now with neural networks.

This work marked the first use of statistical syntax (at the phoneme level) in automatic speech recognition [2].

An alternative to the use of a speech segmenter was the concept of adopting a non-uniform time scale for aligning speech patterns. This concept started to gain acceptance in the 1960s through the work "Speech Recognition by Feature Abstraction Techniques" by Tom Martin at RCA Laboratories, in which he recognized the need to deal with the temporal non-uniformity in repeated speech events and suggested a range of solutions, including detection of utterance endpoints, which greatly enhanced the reliability of recognizer performance, and through "Speech Discrimination by Dynamic Programming" by Vintsyuk in the Soviet Union, which proposed the use of dynamic programming for time alignment between two utterances in order to derive a meaningful assessment of their similarity. Others proposed different methods, such as dynamic time warping, for speech pattern matching.

Since the late 1970s, mainly due to the publication by Sakoe and Chiba, dynamic programming, in numerous variant forms such as the Viterbi algorithm, has become an indispensable technique in automatic speech recognition. The Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states – called the Viterbi path – that results in a sequence of observed events, especially in the context of Markov information sources and hidden Markov models. In speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal [4].

4. Hidden Markov Model

The widespread popularity of the HMM framework can be attributed to its simple algorithmic structure, which is straightforward to implement, and to its clear performance superiority over alternative recognition structures. As part of this, a speech-recognition task is often taxonomized according to its requirements in handling specific or nonspecific talkers (speaker-dependent vs. speaker-independent) and in accepting only isolated utterances or fluent speech (isolated word vs. connected word). Systems based on HMMs have been demonstrated to achieve 96% word accuracy.
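The Viterbi decoding described above can be sketched in a few lines. The two-state toy model below (states, transition, and emission probabilities) is an illustrative assumption, not a model from the paper; the recursion itself is the standard dynamic program.

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Return (best_path, best_log_prob): the most likely hidden-state
    sequence (the Viterbi path) for a discrete observation sequence."""
    n_states = len(start_p)
    T = len(obs)
    log_delta = np.full((T, n_states), -np.inf)  # best log-prob ending in each state
    back = np.zeros((T, n_states), dtype=int)    # backpointers

    log_delta[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = log_delta[t - 1] + np.log(trans_p[:, j])
            back[t, j] = np.argmax(scores)
            log_delta[t, j] = scores[back[t, j]] + np.log(emit_p[j, obs[t]])

    # Trace the backpointers from the best final state.
    path = [int(np.argmax(log_delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    path.reverse()
    return path, float(log_delta[-1].max())

# Toy model: state 0 mostly emits symbol 0, state 1 mostly emits symbol 1.
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit = np.array([[0.9, 0.1], [0.2, 0.8]])

path, logp = viterbi([0, 0, 1, 1], start, trans, emit)
print(path)  # most likely hidden-state sequence
```

In a recognizer the hidden states would be phones or sub-phone units and the observations acoustic feature vectors, but the recursion is the same.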

These results sometimes rival human performance and thus, of course, affirm the potential usefulness of an automatic speech-recognition system in designated applications [7]. We also have to take into consideration, when talking about the Hidden Markov Model, that it is one of the simplest Dynamic Bayesian Networks, so by using a more complex DBN we can achieve better results, simply because of the greater expressive power of those networks.
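Viewing the HMM as the simplest dynamic Bayesian network, exact inference over its single hidden chain is the forward recursion below, which sums over all hidden-state paths to score an observation sequence. The toy parameters are illustrative assumptions.

```python
import numpy as np

def forward_likelihood(obs, start_p, trans_p, emit_p):
    """P(observations) under a discrete HMM, by the forward recursion."""
    alpha = start_p * emit_p[:, obs[0]]          # joint prob of state and first symbol
    for o in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, o]  # propagate one time slice
    return float(alpha.sum())

start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit = np.array([[0.9, 0.1], [0.2, 0.8]])

print(forward_likelihood([0, 1], start, trans, emit))
```

A richer DBN would add more hidden variables per time slice, with inference generalizing this slice-by-slice propagation.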

Dynamic Bayesian Networks

Over the last twenty years, probabilistic models emerged as the method of choice for large-scale speech recognition tasks in two dominant forms: hidden Markov models (Rabiner & Juang 1993) and neural networks with explicitly probabilistic interpretations (Bourlard & Morgan 1994; Robinson & Fallside 1991) [6]. This change is mainly due to the fact that Dynamic Bayesian Networks are able to model the correlations among multiple acoustic features at a single point in time in a way that has not previously been exploited in discrete-observation Hidden Markov Models.

6. Conclusion

The DBN and the HMM are the main approaches to Automatic Speech Recognition; they are the precursors of the neural networks that nowadays are replacing the old systems, which are nonetheless still fairly accurate.
