RFFR-Net: Robust feature fusion and reconstruction network for clothing-change person re-identification
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xiong, Mingfu | - |
dc.contributor.author | Yang, Xinxin | - |
dc.contributor.author | Sun, Zhihong | - |
dc.contributor.author | Hu, Xinrong | - |
dc.contributor.author | Alzahrani, Ahmed Ibrahim | - |
dc.contributor.author | Muhammad, Khan | - |
dc.date.accessioned | 2025-01-23T07:30:15Z | - |
dc.date.available | 2025-01-23T07:30:15Z | - |
dc.date.issued | 2025-06 | - |
dc.identifier.issn | 1566-2535 | - |
dc.identifier.issn | 1872-6305 | - |
dc.identifier.uri | https://scholarx.skku.edu/handle/2021.sw.skku/119963 | - |
dc.description.abstract | In person re-identification (ReID), and especially in clothing-change scenarios (CC-ReID), traditional approaches rely on clothing features that are inherently unstable, so recognition accuracy drops sharply when clothing varies. To address this problem, this study proposes the Robust Feature Fusion and Reconstruction Network for Clothing-Change Person ReID (RFFR-Net), which improves the model's ability to exploit non-clothing features (e.g., face, body shape) by incorporating a Feature Attention Module (FAM) and an Advanced Attention Module (AAM). In addition, the generative component of RFFR-Net is restructured around a Refined Feature Reconstruction Module (RFRM), which strengthens feature extraction and processing and thereby improves both the quality of image reconstruction and the fidelity of detail representation. Experiments on three CC-ReID datasets show that the proposed method improves mAP and CMC by approximately 1.5% over recent methods, and it ranks within the top three in most evaluations. These results confirm the applicability of RFFR-Net to person re-identification and demonstrate its robustness and efficiency under clothing changes. © 2025 Elsevier B.V. | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Elsevier B.V. | - |
dc.title | RFFR-Net: Robust feature fusion and reconstruction network for clothing-change person re-identification | - |
dc.type | Article | - |
dc.publisher.location | Netherlands | - |
dc.identifier.doi | 10.1016/j.inffus.2024.102885 | - |
dc.identifier.scopusid | 2-s2.0-85214958542 | - |
dc.identifier.wosid | 001401454200001 | - |
dc.identifier.bibliographicCitation | Information Fusion, v.118 | - |
dc.citation.title | Information Fusion | - |
dc.citation.volume | 118 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
dc.subject.keywordAuthor | Advanced attention module | - |
dc.subject.keywordAuthor | Clothing-change | - |
dc.subject.keywordAuthor | Feature fusion | - |
dc.subject.keywordAuthor | Non-clothing features | - |
dc.subject.keywordAuthor | Person re-identification | - |
dc.subject.keywordAuthor | Refined feature reconstruction module | - |
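The abstract describes attention-driven fusion of non-clothing features (FAM/AAM) followed by refined reconstruction (RFRM), but this record does not include the modules' internals. The following is a minimal, hypothetical PyTorch sketch of a generic channel-attention fusion block of that general kind; the class names `ChannelAttention` and `FusionBlock`, the tensor shapes, and the reduction ratio are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: this record does not publish FAM/AAM internals,
# so this is a generic squeeze-and-excitation-style fusion block, NOT the
# authors' code. All names, shapes, and hyperparameters are assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel gate: reweights feature-map channels by global context."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # emphasize informative channels, suppress the rest


class FusionBlock(nn.Module):
    """Fuses two feature streams (e.g., appearance + body shape) with
    per-stream channel attention, then a 1x1 conv merge. A hypothetical
    stand-in for the attention-based fusion sketched in the abstract."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn_a = ChannelAttention(channels)
        self.attn_b = ChannelAttention(channels)
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.attn_a(feat_a), self.attn_b(feat_b)], dim=1)
        return self.merge(fused)


if __name__ == "__main__":
    block = FusionBlock(channels=256)
    a = torch.randn(2, 256, 24, 12)  # e.g., backbone appearance features
    b = torch.randn(2, 256, 24, 12)  # e.g., clothing-invariant shape features
    print(block(a, b).shape)  # torch.Size([2, 256, 24, 12])
```

A design note on the sketch: gating each stream before concatenation lets the network learn, per channel, how much to trust clothing-sensitive versus clothing-invariant cues, which is one plausible way to realize the "robust feature fusion" the abstract claims.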