Please use this identifier to cite or link to this item:
https://dspace.ctu.edu.vn/jspui/handle/123456789/84964
Title: | IDENTIFICATION OF EMOTIONS FROM SPEECH USING DEEP LEARNING |
Authors: | Thái, Minh Tuấn; Trương, Hoàng Thuận |
Keywords: | INFORMATION TECHNOLOGY - HIGH QUALITY PROGRAM |
Issue Date: | 2022 |
Publisher: | Trường Đại Học Cần Thơ |
Abstract: | In genuine two-way human communication, emotions color language and are a crucial component of connection. As listeners, we respond to the emotional state of the speaker and adjust our behaviour according to the feelings the speaker conveys. For instance, speech produced while fearful, angry, or joyful becomes loud and fast, with a higher and wider pitch range, whereas emotions such as melancholy or exhaustion produce slow, low-pitched speech. Analysing speech and voice patterns can therefore be used to identify human emotions, which has a variety of applications, including improving human-machine interaction. In particular, this study presents classification models for emotions conveyed in speech using Convolutional Neural Networks (CNNs), a Support Vector Machine (SVM), and a Multilayer Perceptron (MLP), based on acoustic features such as Mel Frequency Cepstral Coefficients (MFCCs). The models are trained to categorize the emotions neutral, calm, happy, sad, angry, fearful, disgust, and surprise. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) and the Toronto Emotional Speech Set (TESS) are the two datasets used in the evaluation, which shows that the proposed approach achieves accuracy values of 86%, 84%, and 82% using CNN, MLP, and SVM, respectively, for 7 emotions. (An illustrative sketch of such an MFCC-based pipeline is given below this record.) |
Description: | 43 pages |
URI: | https://dspace.ctu.edu.vn/jspui/handle/123456789/84964 |
Appears in Collections: | Trường Công nghệ Thông tin & Truyền thông |
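The abstract describes extracting MFCC features from speech and feeding them to CNN, MLP, and SVM classifiers. The sketch below is a minimal illustration of that kind of pipeline, not the thesis code: it extracts time-averaged MFCCs with librosa and trains MLP and SVM baselines with scikit-learn (the CNN is omitted). The `ravdess/` directory path, the RAVDESS filename-to-label parsing, and all hyperparameters are assumptions made for the example.

```python
# Hedged sketch: MFCC-based speech emotion classification with MLP and SVM.
# Not the thesis implementation; paths, label parsing, and hyperparameters
# are illustrative assumptions.
import glob
import os

import librosa
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def extract_mfcc(path, n_mfcc=40):
    """Load a wav file and return its time-averaged MFCC vector."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # collapse the time axis to a fixed-length vector


# RAVDESS file names encode the emotion as the third dash-separated field,
# e.g. "03-01-05-01-02-01-12.wav" -> code "05" (angry).
EMOTIONS = {"01": "neutral", "02": "calm", "03": "happy", "04": "sad",
            "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised"}

X, y = [], []
for path in glob.glob("ravdess/**/*.wav", recursive=True):  # assumed data dir
    code = os.path.basename(path).split("-")[2]
    X.append(extract_mfcc(path))
    y.append(EMOTIONS[code])
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Two of the three classifiers mentioned in the abstract.
mlp = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500, random_state=0)
svm = SVC(kernel="rbf", C=10.0)

for name, clf in [("MLP", mlp), ("SVM", svm)]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name} accuracy: {acc:.3f}")
```

The time-averaged MFCC vector discards temporal dynamics, which keeps the example short; a CNN variant as in the abstract would instead operate on the full MFCC matrix (coefficients over time).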
Files in This Item:
File | Description | Size | Format
---|---|---|---
Restricted Access | | 1.19 MB | Adobe PDF