MSDLF-K: A Multimodal Feature Learning Approach for Sentiment Analysis in Korean Incorporating Text and Speech
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Tae-Young | - |
dc.contributor.author | Yang, Jufeng | - |
dc.contributor.author | Park, Eunil | - |
dc.date.accessioned | 2025-01-14T00:30:17Z | - |
dc.date.available | 2025-01-14T00:30:17Z | - |
dc.date.issued | 2025-01 | - |
dc.identifier.issn | 1520-9210 | - |
dc.identifier.issn | 1941-0077 | - |
dc.identifier.uri | https://scholarx.skku.edu/handle/2021.sw.skku/119644 | - |
dc.description.abstract | Recently, sentiment analysis research has made significant advances in addressing sentiment and subjectivity within textual content. The advent of multimodal deep learning techniques has further broadened this scope, enabling the integration of diverse modalities such as voice and image features alongside text. Despite these advancements, however, analysis of the Korean language, which has primarily been examined at the sentence level, remains challenging due to its inherently agglutinative nature and linguistic ambiguity. To address this challenge, we propose a novel Multimodal Sentimental Deep Learning Framework for Korean (MSDLF-K), which can examine not only Korean text but also its associated speech. MSDLF-K integrates spectrograms and waveforms from Korean voice data with embedding vectors derived from script sentences, creating a unified multimodal representation. This approach facilitates the identification of both shared and unique features within the latent space, thereby offering insight into their respective impacts on sentiment analysis performance. To validate the efficacy of MSDLF-K, we conducted a set of experiments on an emotion speech synthesis dataset. Our findings demonstrate that MSDLF-K achieves an accuracy of 79.0% for valence and 81.7% for arousal in emotion classification, metrics previously unexplored in the literature. Furthermore, empirical analysis reveals the significant influence of multimodal representations, encompassing both text and voice, on emotion analysis performance. In summary, our study not only presents a pioneering solution for sentiment analysis in Korean but also underscores the importance of multimodal approaches for more comprehensive and accurate sentiment analysis across diverse linguistic contexts. © 1999-2012 IEEE. | - |
dc.format.extent | 11 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | MSDLF-K: A Multimodal Feature Learning Approach for Sentiment Analysis in Korean Incorporating Text and Speech | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/TMM.2024.3521707 | - |
dc.identifier.scopusid | 2-s2.0-105001087964 | - |
dc.identifier.wosid | 001442981700014 | - |
dc.identifier.bibliographicCitation | IEEE Transactions on Multimedia, v.27, pp 1266 - 1276 | - |
dc.citation.title | IEEE Transactions on Multimedia | - |
dc.citation.volume | 27 | - |
dc.citation.startPage | 1266 | - |
dc.citation.endPage | 1276 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordPlus | CLASSIFICATION | - |
dc.subject.keywordPlus | FUSION | - |
dc.subject.keywordPlus | MODEL | - |
dc.subject.keywordAuthor | multimodal deep learning | - |
dc.subject.keywordAuthor | multimodal representation | - |
dc.subject.keywordAuthor | Natural language processing | - |
dc.subject.keywordAuthor | sentiment analysis | - |
dc.subject.keywordAuthor | speech recognition | - |
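The abstract describes fusing spectrogram/waveform features from speech with sentence-embedding vectors into a unified representation that separates shared and modality-unique components in the latent space. The paper itself is not open access here, so the sketch below is only a generic illustration of that kind of fusion, not the authors' MSDLF-K implementation; all dimensions and the projection weights are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: a 128-d pooled spectrogram feature vector for the
# speech and a 256-d embedding of the corresponding script sentence.
audio_feat = rng.standard_normal(128)
text_feat = rng.standard_normal(256)

# Stand-ins for learned encoder weights: project each modality into a
# shared 64-d latent space.
W_audio = rng.standard_normal((64, 128)) / np.sqrt(128)
W_text = rng.standard_normal((64, 256)) / np.sqrt(256)

shared_audio = W_audio @ audio_feat
shared_text = W_text @ text_feat

# Unified multimodal representation: an estimate of the shared component
# (mean of the two projections) concatenated with each modality's residual,
# i.e. its unique deviation from the shared part.
shared = (shared_audio + shared_text) / 2
fused = np.concatenate([shared, shared_audio - shared, shared_text - shared])

print(fused.shape)  # (192,)
```

In a trained model the projections would be encoder outputs and the shared/unique split would be learned (e.g. via reconstruction or alignment losses) rather than computed as a simple mean and residual.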
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.