MultiPragEval: Multilingual Pragmatic Evaluation of Large Language Models
- Authors
- Park, Dojun; Lee, Jiwoo; Jeong, Hyeyun; Park, Seohyun; Koo, Youngeun; Hwang, Soonha; Park, Seonwoo; Lee, Sungeun
- Issue Date
- 2024
- Publisher
- Association for Computational Linguistics (ACL)
- Citation
- GenBench 2024 - GenBench: 2nd Workshop on Generalisation (Benchmarking) in NLP, Proceedings of the Workshop, pp. 96-119
- Pages
- 24
- Indexed
- SCOPUS
- Journal Title
- GenBench 2024 - GenBench: 2nd Workshop on Generalisation (Benchmarking) in NLP, Proceedings of the Workshop
- Start Page
- 96
- End Page
- 119
- URI
- https://scholarx.skku.edu/handle/2021.sw.skku/120559
- Abstract
- As the capabilities of Large Language Models (LLMs) expand, it becomes increasingly important to evaluate them beyond basic knowledge assessment, focusing on higher-level language understanding. This study introduces MultiPragEval, the first multilingual pragmatic evaluation of LLMs, designed for English, German, Korean, and Chinese. Comprising 1200 question units categorized according to Grice’s Cooperative Principle and its four conversational maxims, MultiPragEval enables an in-depth assessment of LLMs’ contextual awareness and their ability to infer implied meanings. Our findings demonstrate that Claude3-Opus significantly outperforms other models in all tested languages, establishing the state of the art in the field. Among open-source models, Solar-10.7B and Qwen1.5-14B emerge as strong competitors. By analyzing pragmatic inference, we provide valuable insights into the capabilities essential for advanced language comprehension in AI systems. The test suite is publicly available on our GitHub repository at https://github.com/DojunPark/MultiPragEval. © 2024 Association for Computational Linguistics.
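- Example (illustrative only)
- The abstract describes a benchmark of 1200 question units, each tied to one of Grice's four conversational maxims and one of four languages. The sketch below shows one possible in-memory representation of such a unit and a per-maxim accuracy tally; the field names, maxim labels, language codes, and scoring scheme are assumptions made for illustration and are not taken from the MultiPragEval repository.

```python
# Hypothetical sketch of a MultiPragEval-style question unit and per-maxim
# scoring. All field names and labels are assumptions, not the actual format.
from dataclasses import dataclass
from collections import defaultdict

MAXIMS = ("quantity", "quality", "relation", "manner")  # Grice's four maxims
LANGUAGES = ("en", "de", "ko", "zh")                     # English, German, Korean, Chinese

@dataclass
class QuestionUnit:
    language: str        # one of LANGUAGES
    maxim: str           # one of MAXIMS
    context: str         # short dialogue whose implied meaning must be inferred
    question: str        # question about the speaker's intended meaning
    options: list[str]   # candidate interpretations
    answer_index: int    # index of the pragmatically correct interpretation

def accuracy_by_maxim(units: list[QuestionUnit], predictions: list[int]) -> dict[str, float]:
    """Return per-maxim accuracy given one predicted option index per unit."""
    correct, total = defaultdict(int), defaultdict(int)
    for unit, pred in zip(units, predictions):
        total[unit.maxim] += 1
        if pred == unit.answer_index:
            correct[unit.maxim] += 1
    return {m: correct[m] / total[m] for m in total}
```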
- Appears in Collections
- Computing and Informatics > Convergence > 1. Journal Articles
