Analysis of Speech in Human Communication

Mr. Basavaraj N Hiremath, Ms. Malini M Patil

Abstract


Speech is a vital mode of human communication, produced by the voice together with knowledge of a language. Each person's voice is distinct because of individual-specific vocal cord anatomy and the geometry of the vocal, oral, and nasal cavities. Speech forms a basic building block for a wealth of analyses such as lexical analytics, natural language processing, text mining, and sentiment and satire detection. Beyond linguistic analysis, the physics of the voice allows a speaker to be uniquely recognized from the signal. This paper aims at understanding 'Praat', a computer program for analyzing and synthesizing phonetics. The work carried out in the paper presents a comparative analysis, in Praat, of a real-time voice data sample against a benchmark voice data sample across all the voice-analysis parameters. The results are promising and pave the way for decision-making solutions in voice pattern analysis, recognition, and reproduction processes built on these analytics.
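Praat itself is a standalone GUI and scripting tool, but the core of its pitch analysis belongs to the autocorrelation family of fundamental-frequency (F0) estimators. As a rough, self-contained illustration of that idea (not the paper's actual method, and with all names and values chosen here for the example), the sketch below estimates the F0 of a synthetic 200 Hz tone:

```python
import math

# Illustrative sketch: estimate fundamental frequency (F0) of a synthetic
# tone by finding the autocorrelation peak, the same family of method
# Praat uses for pitch analysis. All constants here are made up for demo.

SAMPLE_RATE = 16000      # samples per second
F0_TRUE = 200.0          # frequency of the synthetic test tone, in Hz

# 50 ms of a pure sine tone standing in for a recorded voiced segment
samples = [math.sin(2 * math.pi * F0_TRUE * n / SAMPLE_RATE)
           for n in range(int(0.05 * SAMPLE_RATE))]

def estimate_f0(signal, rate, f_min=75.0, f_max=500.0):
    """Return rate/lag for the lag that maximizes the autocorrelation,
    searching only lags inside the plausible pitch range."""
    lag_min = int(rate / f_max)   # shortest candidate period, in samples
    lag_max = int(rate / f_min)   # longest candidate period, in samples
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        corr = sum(signal[i] * signal[i - lag]
                   for i in range(lag, len(signal)))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return rate / best_lag

print(round(estimate_f0(samples, SAMPLE_RATE)))  # prints 200
```

In practice one would hand real recordings to Praat (or a binding such as Parselmouth) rather than re-implement the estimator, since Praat adds windowing, interpolation around the peak, and voiced/unvoiced decisions that this sketch omits.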




