Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

RFFR-Net: Robust feature fusion and reconstruction network for clothing-change person re-identification

Full metadata record
DC Field: Value

dc.contributor.author: Xiong, Mingfu
dc.contributor.author: Yang, Xinxin
dc.contributor.author: Sun, Zhihong
dc.contributor.author: Hu, Xinrong
dc.contributor.author: Alzahrani, Ahmed Ibrahim
dc.contributor.author: Muhammad, Khan
dc.date.accessioned: 2025-01-23T07:30:15Z
dc.date.available: 2025-01-23T07:30:15Z
dc.date.issued: 2025-06
dc.identifier.issn: 1566-2535
dc.identifier.issn: 1872-6305
dc.identifier.uri: https://scholarx.skku.edu/handle/2021.sw.skku/119963
dc.description.abstract: In the research field of person re-identification (ReID), especially in clothing-change scenarios (CC-ReID), traditional approaches are hindered by their reliance on clothing features, which are inherently unstable, leading to a significant decline in recognition accuracy when confronted with variations in clothing. To address these problems, this study proposes an innovative framework, the Robust Feature Fusion and Reconstruction Network for Clothing-Change Person ReID (RFFR-Net), which significantly improves the model's capability to process non-clothing features (e.g., face, body shape) by incorporating the Feature Attention Module (FAM) and the Advanced Attention Module (AAM). In addition, the structure of the generative model of RFFR-Net is optimized by introducing the Refined Feature Reconstruction Module (RFRM), which effectively enhances feature extraction and processing, thus significantly improving the quality of the image reconstruction and the accuracy of the detailed representation. Experiments on three CC-ReID datasets show that the proposed method achieves an improvement of approximately 1.5% in mAP and CMC over the latest methods. In most cases, the method ranks within the top three across these evaluations. The results confirm the potential application of RFFR-Net in person re-identification and demonstrate its robustness and efficiency in the face of clothing changes. © 2025 Elsevier B.V.
dc.language: English
dc.language.iso: ENG
dc.publisher: Elsevier B.V.
dc.title: RFFR-Net: Robust feature fusion and reconstruction network for clothing-change person re-identification
dc.type: Article
dc.publisher.location: Netherlands
dc.identifier.doi: 10.1016/j.inffus.2024.102885
dc.identifier.scopusid: 2-s2.0-85214958542
dc.identifier.wosid: 001401454200001
dc.identifier.bibliographicCitation: Information Fusion, v.118
dc.citation.title: Information Fusion
dc.citation.volume: 118
dc.type.docType: Article
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Computer Science, Theory & Methods
dc.subject.keywordAuthor: Advanced attention module
dc.subject.keywordAuthor: Clothing-change
dc.subject.keywordAuthor: Feature fusion
dc.subject.keywordAuthor: Non-clothing features
dc.subject.keywordAuthor: Person re-identification
dc.subject.keywordAuthor: Refined feature reconstruction module
Files in This Item
There are no files associated with this item.
Appears in Collections
Computing and Informatics > Convergence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

MUHAMMAD, KHAN
Computing and Informatics (Convergence)
