Please use this identifier to cite or link to this item:
https://dspace.ctu.edu.vn/jspui/handle/123456789/100267
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Nguyen, Thi Thanh Tam | - |
dc.contributor.author | Nguyen, Thi Linh | - |
dc.date.accessioned | 2024-05-08T07:05:33Z | - |
dc.date.available | 2024-05-08T07:05:33Z | - |
dc.date.issued | 2021 | - |
dc.identifier.issn | 2525-2224 | - |
dc.identifier.uri | https://dspace.ctu.edu.vn/jspui/handle/123456789/100267 | - |
dc.description.abstract | Facial emotion recognition (FER) is meaningful for human-machine interaction, with applications such as clinical practice, gaming, and behavioral description. FER has been an active research area over the past few decades, yet it remains challenging due to high intra-class variation, the heterogeneity of human faces, and image variations such as different facial poses and lighting conditions. Recently, deep learning models have shown great potential for FER, and visual attention techniques have further improved deep learning networks. In this paper, we present a visual attention-based VGG19 network for FER. The proposed method slightly outperforms state-of-the-art methods on the FER-2013 dataset (an illustrative sketch of the idea follows this metadata table). | vi_VN |
dc.language.iso | en | vi_VN |
dc.relation.ispartofseries | Tạp chí Khoa học Công nghệ Thông tin và Truyền thông; No. 04 (CS.01), pp. 137-143 | - |
dc.subject | Facial expression recognition | vi_VN |
dc.subject | Deep learning | vi_VN |
dc.subject | VGGnet | vi_VN |
dc.subject | Attention | vi_VN |
dc.title | A visual attention based VGG19 network for facial expression recognition | vi_VN |
dc.type | Article | vi_VN |
Appears in Collections: | Khoa học Công nghệ Thông tin và Truyền thông |
Files in This Item:
File | Description | Size | Format
---|---|---|---
Restricted Access | | 1.86 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.